An edtech impact measurements expert shares key considerations.
GUEST COLUMN | by Olli Vallo
Adaptive Learning Means More Work Without Benefits?
Last year I met a young edtech entrepreneur who was building an adaptive math learning platform for high-school students. The idea behind the product was simple: provide timely, personalized assistance while a student solves an equation. His prototype was being tested in schools, and it worked well. But he soon realized that even though it improved the efficacy of math learning, it didn't benefit students in any practical way.
He noted that students' goal in school is not to learn; it is to earn a grade good enough to pass to the next level. And because everyone takes the same test at the same time, studying and progressing at your own pace only complicates things.
He came to the conclusion that adaptive learning works in theory but not in practice, and closed down the business.
Edtech and The Learning Crisis
For the past ten years, I’ve worked with edtech impact measurements. I’ve been developing edtech quality standards for the Asian Development Bank, Finland’s National Agency for Education, and Education Alliance Finland, a company I co-founded in 2015.
During that time, edtech has been on the rise because there is an urgent need for an instrument that can dramatically improve education. From a global perspective, the current school system is unable to provide high-quality learning at scale and at an affordable cost. Millions of students graduate every year without foundational literacy and numeracy skills.
The situation is alarming, but there is a growing consensus among decision-makers that edtech could help solve this learning crisis if its full potential for improving learning can be unleashed.
Seeking the Recipe for Edtech Success
Over the years we've seen many failures in the development of edtech companies and in the wider-scale implementation of edtech products. There have been big investments in ineffective solutions, which has bred mistrust toward edtech.
To avoid such failures, many (myself included) keep suggesting an evidence-based approach to edtech development and procurement. Large organizations such as UNICEF and the Jacobs Foundation are pushing this trend on the front line.
Looking at other industries, it's easy to agree that empirical academic research is the most trustworthy form of efficacy evidence. Following this conclusion, edtech companies are now expected to conduct empirical, academic-level studies of their products.
The edtech industry has also been compared to the medical industry in terms of how it could be regulated: if it's not allowed to sell ineffective medicine, why should we allow the selling of ineffective learning tools?
Generally, I think it's great to have "research-based" as an edtech conference buzzword. And it's magnificent to see companies conducting empirical studies to measure the efficacy of their products, as Pearson, Newsela, and Imagine Learning are doing. But this trend also comes with major problems.
Can You Work with the System, Yet Innovate the System?
While I'm excited about this hype around efficacy, I keep thinking of the entrepreneur who built the adaptive math platform. He failed because the product was not impactful within the constraints of the current school system.
We know that these constraints need to be removed to release the full potential of truly transformational edtech. Yet we are demanding companies align with these constraints and prove effective. Can you work with the system, yet innovate the system?
When we expect edtech companies to maximize efficacy within the current school setup, we push them toward building only supplemental solutions, because that is currently the surest path to demonstrable efficacy.
Empirical Research is Exclusive and Complex
Another problem with empirical research is that it makes the edtech field very exclusive. Only a handful of startups can afford to hire researchers for months or years of data collection, and to freeze their product's code for the duration of the study.
It is also questionable how useful it is to measure the efficacy of a given tech product with randomized controlled trials when so much depends on how it is used. The quality of infrastructure and governance, and the competencies of teachers and students, matter so much that the generalizability of the findings is often weak.
An Efficacy Portfolio Provides a More Holistic Solution
Having been so critical of efficacy research, I should highlight that I still think it's crucial to continuously improve our understanding of what makes an edtech product work.
A great approach to collecting impact evidence is suggested by Molly B. Zielezinski and Jennifer Carolan. They propose that edtech companies start building an efficacy portfolio from day one. The work begins by defining a rationale, grounded in existing research, for why the product should work. Drawing on existing research is an agile, low-cost way to support the development work. Along the way, the company can expand its impact measurement activities into empirical studies.
Keeping the Focus on the Transformation
While we debate standardized efficacy measurement practices, I suggest that all decisions should address the need for education transformation. That transformation happens bottom-up, and edtech providers play a key role in it. We should encourage companies to innovate to the maximum.
Let’s be evidence-driven, yet keep our minds open to reimagining learning.
For the past ten years, Olli Vallo has worked with edtech impact measurements. Olli has been developing edtech quality standards for the Asian Development Bank, Finland’s National Agency for Education, and Education Alliance Finland, a company he co-founded in 2015. Connect with Olli through LinkedIn.