The rapid expansion of computer science (CS) instruction in primary and secondary education has highlighted the shortage of teachers qualified to teach the subject. A key strategy for building CS teaching capacity has been preparing teachers of other subjects (e.g., math, technology applications, business) to teach introductory CS through short-term professional development (PD) workshops or online training modules. However, these professional learning experiences tend to be brief and focused on short-term adoption of specific CS curricula or technology-enhanced tools, at the expense of developing teachers’ conceptual understanding of CS standards and of pedagogy for monitoring and supporting students’ progress toward those standards. This white paper highlights some of the challenges faced by current CS teachers and presents a call to action for states and school districts to support CS teacher capacity building through standards-aligned, sustained, scalable, and reusable teacher PD that focuses on promoting teachers’ CS formative assessment literacy as a way to improve their ability to teach CS effectively. The corresponding practice guide provides concrete steps to systematically develop or select formative assessment tasks and use them to inform instruction.
Practice Guide: Applying a Principled Approach to Develop and Use K–12 Computer Science Formative Assessments
Formative assessment can be a powerful tool to support effective K-12 computer science (CS) instruction, increasing student engagement and improving learning outcomes. In this practice guide, we show how to apply the five-step process outlined in the corresponding white paper to systematically develop or select formative assessment tasks and use them to inform instruction. This guide illustrates formative assessment literacy in practice and can be used in CS professional development workshops, teacher communities of practice, policy guidelines, and other avenues. The guide is not based on any specific curriculum; it can be used by anyone tasked with teaching CS. It is designed to help teachers understand CS standards, select and implement appropriate formative assessment tasks, and modify instruction to address student challenges identified through those assessments. This process can increase teachers’ knowledge of CS content and how to teach it, as well as improve student engagement and learning.
In today’s increasingly digital world, it is critical that all students learn to think computationally from an early age. Assessments of Computational Thinking (CT) are essential for capturing information about student learning and challenges. Several existing K-12 CT assessments focus on concepts like variables, iteration, and conditionals without emphasizing practices like algorithmic thinking, reusing and remixing, and debugging. In this paper, we discuss the development of and results from a validated CT Practices assessment for 4th-6th grade students. The assessment tasks are multilingual and focus on CT practices, making the assessment useful for students using different CS curricula and different programming languages. Results from an implementation of the assessment with approximately 15,000 upper elementary students in Hong Kong indicate challenges with comparing algorithms under given constraints, deciding when code can be reused, and choosing debugging test cases. These results point to the utility of our assessment as a curricular tool and to the need to emphasize CT practices in future curricular initiatives and teacher professional development.
Computational thinking is a core skill in computer science that has become a focus of instruction in primary and secondary education worldwide. Since 2010, researchers have leveraged Evidence-Centered Design (ECD) methods to develop measures of students’ Computational Thinking (CT) practices. This article describes how ECD was used to develop CT assessments for primary students in Hong Kong and secondary students in the United States. We demonstrate how leveraging ECD yields a principled design for developing assessments of hard-to-assess constructs and, as part of the process, creates reusable artifacts—design patterns and task templates—that inform the design of other, related assessments. Leveraging ECD, as described in this article, represents a principled approach to measuring students’ computational thinking practices, and situates the approach in emerging computational thinking curricula and programs to emphasize the links between curricula and assessment design.
Getting Ready to Learn describes how educational media have played, and continue to play, a role in meeting the learning needs of children, parents, and teachers. Based on years of meaningful data from the CPB-PBS Ready To Learn Initiative, chapters explore how to develop engaging, playful, and developmentally appropriate content. From Emmy-Award-winning series to randomized controlled trials, this book covers the media production, scholarly research, and technological advances surrounding some of the country’s most beloved programming.
In the US, the new K–12 CS Framework and the aligned CSTA standards, together with the Common Core State Standards and the Next Generation Science Standards, all include guidance related to computational thinking practices.
This reflects an orientation not just toward an internal, individual “thinking” but toward “ways of being and doing” that students should demonstrate when learning and exhibiting computer science knowledge, skills, and attitudes.
It represents the application of CS content knowledge via problem solving and inquiry-based methods.
Computational thinking practices: Analyzing and modeling a critical domain in computer science education
SRI Authors: Daisy Wise Rutstein
“Telling” can be an effective tool in helping students engage in intellectually demanding argumentation and productive behavior.
This research examines issues of model estimation and robustness in the use of Bayesian Inference Networks (BINs) for measuring Learning Progressions (LPs). It provides background information on LPs and how they might be used in practice. Two simulation studies were performed, along with real-data examples. The first study examined the case of using a BIN to measure one LP, while the items in the second study were designed to measure two LPs. For each study, data were generated under four alternative models, and each model was fit to the data. The results were compared in terms of fit, parameter recovery, and classification accuracy for individuals. In the one-LP case, two models provided high correct-classification rates. When two LPs were measured, the classification rates were not high, although an unconstrained model with freely estimated conditional probabilities had slightly higher rates than a constrained model in which the conditional probabilities were given by lower-dimensional functions. Overall, while BINs show promise in modeling LPs, further research is needed to determine the conditions under which this modeling approach is appropriate.
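To illustrate the core idea behind using a BIN to measure a single LP, the sketch below shows a minimal latent-class formulation: a student's unobserved LP level has a prior distribution, each item has a conditional probability of a correct response given the level, and Bayes' rule yields a posterior over levels from observed responses. All priors and conditional probability values here are hypothetical placeholders, not parameters from the study.

```python
# Minimal sketch of Bayesian inference over one learning progression (LP).
# The latent LP level takes values 0, 1, 2; items are scored correct (1) or
# incorrect (0). All numbers below are illustrative, not from the research.

PRIOR = {0: 0.3, 1: 0.4, 2: 0.3}  # hypothetical prior over LP levels

# P(correct | LP level) for each item: higher levels succeed more often.
ITEM_CPTS = [
    {0: 0.2, 1: 0.6, 2: 0.9},  # item 1
    {0: 0.1, 1: 0.4, 2: 0.8},  # item 2
    {0: 0.3, 1: 0.5, 2: 0.7},  # item 3
]

def posterior_lp(responses):
    """Posterior over LP levels given binary item responses (1 = correct)."""
    unnorm = {}
    for level, prior_p in PRIOR.items():
        likelihood = prior_p
        for cpt, r in zip(ITEM_CPTS, responses):
            likelihood *= cpt[level] if r == 1 else 1 - cpt[level]
        unnorm[level] = likelihood
    z = sum(unnorm.values())  # normalize so the posterior sums to 1
    return {level: v / z for level, v in unnorm.items()}

post = posterior_lp([1, 1, 0])
print(max(post, key=post.get))  # → 2 (most probable level for this pattern)
```

The "constrained" models discussed in the abstract would replace the freely specified entries of `ITEM_CPTS` with values generated by a lower-dimensional function of the LP level, which is the trade-off the simulation studies compare.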