Amid fascination and controversy, addressing a disconnect in AI’s capabilities for education.
GUEST COLUMN | by Natalia Kucirkova

In the realm of edtech innovation, few developments have sparked as much fascination and controversy as generative artificial intelligence (AI). Generative AI can create art unlike any human-made art, generate bedtime stories for children, and power AI tutors with warm, empathetic voices. AI’s potential for education has captivated imaginations but also stoked fears of bias and misuse.
As we develop nuanced understanding of AI’s capabilities for education, we need to consider the current disconnect between the science of learning and educational practice. This disconnect exists on top of AI’s biases and limitations, and it must be addressed before generative AI enters schools in the form of personalized AI tutors and other innovations.
‘As we develop nuanced understanding of AI’s capabilities for education, we need to consider the current disconnect between the science of learning and educational practice.’
Easier Said Than Done
VCs and governments play a crucial role in investing in collaborative infrastructure to make sure that AI for schools is based on learning sciences and evidence of what works. This is easier said than done.
Even Bill Gates admitted in his 2013 ASU keynote that “achieving large-scale impact in education has been much more difficult than in public health.” The discrepancy between scientific evidence for “what works” and what is actually used in schools is especially pronounced in edtech developed to support K-12 core subjects. With generative AI, there is a risk that edtech will accelerate learning based on business metrics and on experiences that are engaging but of little educational value. This risk is very real because advancements in AI generate not only new edtech but also new science of learning.
Consider the case of personalized AI tutors. As Sal Khan, founder of Khan Academy, which is used by 135 million students worldwide, explained in his TED Talk, AI tutors are being rapidly embedded into edtech designed to support students’ reading and math. With an AI tutor, students’ responses are scaffolded with tailored prompts designed to incrementally advance their understanding of core subject areas.
Such AI tutors can also be trained to assess what students have learned: a suite of assessment tools is embedded in the platform and linked directly to the teachers’ dashboard. The data visualize each student’s progress but also further train the AI algorithms. This means the AI tutor can be trained to think and work like a scientist: at the end of the machine-learning chain is an algorithm that generates new science of learning.
Underlying Problems
There is currently no globally agreed certification or benchmark for edtech evidence. Calls have been made to certify edtech just as the Food and Drug Administration certifies drugs and pharmaceuticals. In the U.S., an “FDA for Edtech” has been floated for some time but abandoned over fears of normalizing technology in schools. With AI, the calls for regulation have grown louder, but the underlying problems with implementing such certification remain unresolved.
For example, current evidence standards are not edtech-specific. The U.S. government released new guidance for evaluating edtech against the ESSA standards of evidence. Yet many edtech products are highly personalized, which makes evaluating them against standards of generalizability difficult.
Another issue is the lack of accountability in the edtech content-production pipeline. The U.S. government has summoned the major platforms, Google, Microsoft and OpenAI, to publicly agree to allow their language models to be transparently evaluated. This is important, but it will not solve the problem of verifying the educational quality of individual edtech products: just as Google today does not verify whether any of the thousands of apps advertised as “educational” in its app store are truly good for students’ education, it is unlikely that Google will verify whether any of its AI apps are based on human- or AI-generated research showing a positive impact on learning.
In a Time of Turmoil, Innovation
Put crudely, AI arrives at a time when public education faces major turmoil: the aftermath of COVID-19, teacher shortages, and the lack of educational standards for what works. In such a chaotic landscape, it is particularly important to listen to all the voices vital to edtech innovation: teachers, researchers and developers.
In particular, there is a need for parallel development of edtech evidence standards and support for their implementation across the entire pipeline, from edtech design to evaluation of impact. It follows that public funding should prioritize the development and maintenance of shared arenas where key stakeholders (teachers, researchers and developers) can together create learning environments that capitalize on AI’s potential to accelerate learning.
—
Natalia Kucirkova is Professor of Early Childhood Education and Development at the University of Stavanger, Norway and Professor of Reading and Children’s Development at The Open University, UK. She is the founder of the university spin-out Wikit, AS, which integrates science with the children’s edtech industry. Connect with her on LinkedIn.