Balancing Widespread Use and Positive Learning Impacts of Educational Technology | SRI International

Today, technology developers can get their innovations into the hands of many users quickly, gathering user feedback for product improvement. Educational technology product design is no exception, as illustrated by Khan Academy, which grew from a few YouTube videos to a million users in less than four years and now features more than 5,000 online learning resources.

In theory, the rapid scaling, massive amount of user data, and continuous iteration that are part of the Silicon Valley Way result in better products, which in turn lead to growth in market share. But is this really true for educational technology products? Within schools and colleges, a pleasing experience using technology is not enough. The purpose of introducing new technologies is to improve learning outcomes. There is an argument to be made that the products easiest to adopt, precisely because they require no change to normal practice, are unlikely to improve learning outcomes significantly.

To understand whether widespread use correlates to product effectiveness in terms of positive learning impacts for students, SRI Education conducted an evaluation using scaling and learning outcome data from 22 “Next Generation” digital learning projects. Our findings, which we will present at the upcoming AERA conference, revealed that the features associated with widespread scale were quite different from those associated with positive impacts.

The factors associated with faster scaling for a digital learning innovation were: absence of a requirement for face-to-face instructor training, availability of the product at low or no cost, and little requirement for changing instructional processes or organizational structures.

On the other hand, the factors associated with positive impacts on student learning included a requirement for whole-course redesign including a change in instruction, as well as a deep level of integration and comprehensiveness of the digital learning innovation.

These findings suggest that developers of educational technology products face a dilemma: design products that fit into existing courses and classrooms with minimal requirements for teacher training or other adjustments, and are therefore conducive to rapid adoption but have marginal impact; or design products that can transform classroom learning but require fundamental changes in pedagogy, making them harder to scale and limiting their growth rate.

This tension between scaling and producing consistently positive outcomes also has significant implications for learning technology adopters, policymakers, and funders who must weigh digital learning innovations for education systems. For fundamental change to occur, engaging with multiple levels of the education system is imperative: training teachers so that they understand the principles behind the technology and how to implement it with students; ensuring the technology infrastructure is in place to support the innovation; making any necessary adjustments to the use of time and space; and, often, changing how student progress is measured.

If the goal of the learning technology is to bring about a transformative impact within education, then measurement of scale should go beyond the number of downloads or views of the product. It’s important to look at how users are implementing the technology and whether their engagement with it is really deep enough to affect learning.

Ultimately, while the Internet enables rapid dissemination of technology, the requirements for deep implementation of a digital learning innovation in a way that yields consistent positive impacts within an education system have not changed: they continue to require buy-in and support from multiple levels of the education system. Funders, investors, and adopters should demand measures of real implementation, not just user numbers, when thinking about scale, and technology developers should couple their attention to adoption rates with attention to learning impacts.