Measuring Student Learning about Computing

President Obama’s recent initiative, Computer Science for All (CS4All), emphasizes the need to teach computer science (CS) as part of the regular K-12 curriculum. An important part of teaching is measuring learning: teachers need to measure learning so that they can better help students learn, and students and parents want schools to document what students know about computational thinking (CT) and what they can do with that knowledge. Whereas tests exist for conventional school subjects like math, almost no relevant assessments are available for K-12 CS learning, and that gap could put the brakes on the growth of Computer Science for All.

SRI researchers are addressing this pressing need by designing effective assessments of difficult-to-measure CS concepts, in ways that are appropriate for K-12 settings. Evidence-based assessment of student learning has been at the heart of the work of SRI Education’s Center for Technology in Learning for decades. Building on this foundation, we’re leading the application of new methods and technologies to the hardest measurement problems in CS education.

At the upcoming National Council on Measurement in Education (NCME) conference, SRI Education researcher Daisy Rutstein will highlight SRI’s work on developing new tools for scoring assessments related to the Exploring Computer Science (ECS) curriculum, an academically rigorous and engaging high school CS course that teaches problem-solving skills and CT practices along with computing basics. The assessments measure students’ ability to engage in CT practices through a series of short constructed response items. These items require students to describe aspects of computational artifacts, explain their reasoning, and engage in processes related to creating their own computational artifacts. Because there may be more than one right answer, the responses can be time-consuming to score and hard to score reliably.

To aid in scoring these assessments, SRI has been evaluating an automated text scoring engine (ATSE) for short constructed response items that measure CT practices. The ATSE was previously developed at SRI to score responses to essay questions along six aspects. Results from that earlier application showed that, given enough human-scored responses for training, the engine could score the essays with reliability similar to that of multiple human raters. In this study, the ATSE was applied to the short constructed response items. Items that did not perform well under the ATSE (those whose reliability was lower than that of human scorers) were examined for common characteristics in order to guide modifications to the engine.
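To make the reliability check concrete, here is a minimal sketch in Python of the general workflow: train a scoring model on human-scored responses, then compare its predictions against held-out human scores. The TF-IDF-plus-logistic-regression pipeline, the toy responses, and the 0-2 rubric are illustrative assumptions; SRI’s ATSE is not described here in enough detail to reproduce.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy human-scored responses (hypothetical rubric scores 0-2); real training
# sets need many scored responses per item to get reliable engine scores.
responses = [
    "The loop repeats the move block four times to draw the square.",
    "It uses a loop so the same steps are not written out four times.",
    "The program draws a shape.",
    "It makes the sprite do stuff.",
    "I don't know.",
    "no",
    "The repeat block runs the turn and move blocks once per side.",
    "Something happens when you click it.",
]
human_scores = [2, 2, 1, 1, 0, 0, 2, 1]

train_x, test_x, train_y, test_y = train_test_split(
    responses, human_scores, test_size=0.25, random_state=0)

# Generic pipeline standing in for the engine: TF-IDF features + classifier.
engine = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
engine.fit(train_x, train_y)

# Quadratic weighted kappa is a standard agreement statistic for ordinal
# rubric scores; "reliability similar to human raters" means engine-vs-human
# kappa approaches the kappa between two human raters on the same items.
kappa = cohen_kappa_score(test_y, engine.predict(test_x), weights="quadratic")
print(f"engine-vs-human quadratic weighted kappa: {kappa:.2f}")
```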

The benefit of automated scoring is that feedback on student performance arrives in a timely manner. In particular, as more assessments move online, the utility of an automated scoring engine grows. Instead of waiting days or months for human scorers, the engine can be run directly after the assessment is administered, and results can be provided to teachers and other stakeholders. This convenience could encourage wider use of constructed response items, which can elicit deeper conceptual knowledge and practices: student learning outcomes that have typically been harder to measure.

At the American Educational Research Association (AERA) conference, Shuchi Grover, senior research scientist at SRI Education’s Center for Technology in Learning (CTL), along with Marie Bienkowski, CTL deputy director, will discuss another cutting-edge approach to measurement in computer science that SRI researchers are developing. In most K-12 classrooms, students are introduced to programming through block-based programming environments. These environments are engaging and fun for students; however, teachers are hampered by a lack of visibility into the programming process, which would show where students encountered difficulties or used unproductive problem-solving strategies. How one develops programs is just as important as what one produces.

Measuring CT skills in programming has relied mostly on looking for evidence of students’ understanding in finished programs. This is insufficient: research suggests that the presence of constructs is not always an accurate indicator of student understanding; what students struggle with most is not the use of computational constructs, but rather the strategies and the process of composing a computational solution; and students’ processes of constructing programs can be better indicators of their understanding than finished products. We therefore need ways to measure how students are learning problem solving and employing CT practices such as problem decomposition, debugging, and iterative refinement, in order to scaffold the learning and help students develop more expert practices.

The AERA presentation describes studies conducted by SRI Education, in collaboration with SRI’s Computer Science Lab and the AI Center, with middle school students working on Blockly Games puzzles. The researchers are exploring a hybrid analysis approach that combines traditional data-driven learning analytics techniques with hypothesis-driven ones, in which a priori patterns are informed by SRI’s evidence-based assessment approach (the sketch below illustrates the idea). Early results from this work will also be presented at the ACM Conference on Learning at Scale in Edinburgh later this year.
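As an illustration of what such a hybrid analysis might look like, the sketch below layers a hypothesis-driven detector for one a priori pattern (rapid trial-and-error “tinkering”) on top of a data-driven clustering of the same clickstream features. The event format, the features, and the thresholds are assumptions for illustration, not the patterns SRI actually uses.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical clickstream: (student_id, seconds_elapsed, event) records.
events = [
    ("s1", 0, "edit"), ("s1", 4, "run"), ("s1", 7, "run"),
    ("s1", 9, "run"), ("s1", 30, "edit"), ("s1", 95, "run"),
    ("s2", 0, "edit"), ("s2", 60, "edit"), ("s2", 140, "run"),
    ("s2", 200, "run"),
]

def features(student_events):
    """Per-student features: run count, edit count, median gap between runs."""
    run_times = [t for t, e in student_events if e == "run"]
    gaps = np.diff(run_times) if len(run_times) > 1 else np.array([0.0])
    runs = sum(1 for _, e in student_events if e == "run")
    edits = len(student_events) - runs
    return [runs, edits, float(np.median(gaps))]

by_student = {}
for sid, t, e in events:
    by_student.setdefault(sid, []).append((t, e))

# Hypothesis-driven pass: flag rapid trial-and-error "tinkering" as three or
# more runs with a median gap under 10 seconds (an assumed threshold).
for sid, evs in by_student.items():
    runs, _, med_gap = features(evs)
    if runs >= 3 and med_gap < 10:
        print(f"{sid}: possible trial-and-error tinkering")

# Data-driven pass: cluster the same features to surface strategy groups
# that no a priori hypothesis anticipated.
X = np.array([features(evs) for evs in by_student.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(by_student, labels)))
```

The two passes are complementary: the rule-based detector encodes what the assessment framework predicts students will do, while the clustering can surface unanticipated strategies worth turning into new hypotheses.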

Under funding from the NSF’s Cyberlearning and Future Learning Technologies program, Shuchi Grover is also leading a project in which SRI Education is collaborating with Carnegie Mellon University to develop and apply the hybrid analytics framework described above, using clickstream data to understand students’ programming process as they work on problems in the Alice programming environment. Computational models analyze specific practices in the logs from defined programming tasks, helping to assess the process of computational problem solving. The goal is automated measurement and formative scaffolding of the problem-solving process, so that programming environments and CS teachers can eventually intervene at the points where students encounter difficulties and support deeper learning of more complex computational problem-solving practices.
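A toy sketch of such formative scaffolding, under the assumption that one useful signal is the same failure recurring across consecutive runs: scan the run log and emit a hint at the point where the streak crosses a threshold. The log format, the trigger rule, and the hint text are all hypothetical, not the project’s actual computational models.

```python
# Hypothetical run log for one student: each program run and its outcome.
run_log = [
    {"t": 12, "outcome": "error: method defined but never called"},
    {"t": 31, "outcome": "error: method defined but never called"},
    {"t": 55, "outcome": "error: method defined but never called"},
    {"t": 90, "outcome": "ok"},
]

def scaffold_points(log, repeat_threshold=3):
    """Yield (time, hint) when the same failure repeats repeat_threshold
    times in a row: a crude stand-in for a model of where students get stuck."""
    streak, last = 0, None
    for run in log:
        outcome = run["outcome"]
        streak = streak + 1 if outcome == last and outcome.startswith("error") else 1
        last = outcome
        if streak == repeat_threshold:
            yield run["t"], f"Repeated failure '{outcome}'; suggest decomposing the task."

for t, hint in scaffold_points(run_log):
    print(f"t={t}s: {hint}")
```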

There is a clear need for CS education in our nation’s K-12 classrooms to be designed in ways that help all learners achieve deeper learning of programming and computational problem solving. All students should be given the opportunity to acquire the CT skills that will empower them to compete and thrive in the digital economy. At SRI, we’re building the scaffolds that enable CS educators to prepare students through innovative forms of formative and summative assessment in computer science education.

