Teaching Students to Embrace AI Responsibly

A network of staff, teachers, business leaders, and entrepreneurs helps educators better understand the benefits, challenges, and learning opportunities surrounding AI.

GUEST COLUMN | by Caroline Jenner, Gus Schmedlen, Laura Turkington, and Bhakti Vithalani


Artificial intelligence (AI) is here to stay. No longer a distant concept, AI is transforming creative, legal, technical, educational, language, and medical sectors . . . and this is just the beginning. As young people grow up in an AI-driven world, educators, mentors, and youth-serving organizations are uniquely positioned to equip the next generation of leaders with the skillset and mindset to use and build AI responsibly.

‘As young people grow up in an AI-driven world, educators, mentors, and youth-serving organizations are uniquely positioned to equip the next generation of leaders with the skillset and mindset to use and build AI responsibly.’

As one of the largest youth-serving NGOs in the world—delivering over 17 million student learning experiences last year alone—JA Worldwide is working with private sector organizations to prepare youth in 115 countries for entrepreneurship, the job market, and financial health. These partnerships enable educators to tap into leading expertise and resources, ensuring that young learners gain insights into the latest innovations driving change in the world.

We’ve been embracing innovation for over 100 years in order to serve the educational and career needs of seven generations of students. This means that AI—with its potential to transform many aspects of our lives—offers yet another opportunity for JA to help students harness innovation and technology with clarity, compassion, and creativity.

In an effort to create a set of practical tips and guidance for educators and mentors, we’ve been canvassing our network of staff, teachers, business leaders, and entrepreneurs to better understand the benefits, challenges, and learning opportunities surrounding AI.

Is AI Good for Young People?

AI presents tremendous opportunities for young people, including the following:


  • Enhanced availability of assistive technologies to meet the needs of each student, including students who may not have fully participated in education due to geographic, political, technological, or personal constraints. AI gives students greater access to adaptive technologies, which empower young people with disabilities, language barriers, or other challenges through speech recognition, text-to-speech options, the ability to set their own pace, and more. In addition, just as the internet has opened borders and deepened opportunities for global fellowship, AI promises to enable international collaboration unhampered by language, cultural, and geographic differences.


  • More opportunities within entrepreneurship education, ranging from AI-driven business-planning tools and business-challenge simulations to AI-powered options for product design, app and game development, global collaboration, and marketing innovation. 


  • Enhanced management of the school-to-work transition and career readiness, especially given rising demand for AI specialists, data scientists, and related roles across industries. 


Yet, in spite of these AI-driven opportunities, AI may also present obstacles for students. For example: 

  • Inequitable access and educational disparities: When access to AI technologies and resources is not equitable, young people in marginalized communities may be cut off from the benefits of AI, creating disparities in opportunities and outcomes. In the same way, AI can exacerbate educational inequalities, with students who lack access to AI-driven educational tools or personalized learning experiences falling behind peers who have them. AI tools that do not offer assistive technologies or simpler user interfaces—or that do offer those technologies, like voice activation, but don’t recognize languages and accents from some parts of the world—may keep students from having equal access to AI. 


  • Privacy issues: Because AI systems collect and analyze vast datasets, the use of AI in educational technology will lead to the collection, storage, and processing of personally identifiable information (PII) for hundreds of millions of students. Not only can these stored data be breached, but they can also be used to exploit students’ vulnerabilities and preferences through data marketing, location tracking, and surveillance via facial recognition. Young people may also be targets of AI-driven cyberattacks, identity theft, misinformation, and manipulation, especially if they lack digital-literacy skills. 


Roy Saurabh, a digital transformation advisor for UNESCO who specializes in AI ethics, reminds us that “overcoming bias means having tons of information, and tons of information is at odds with privacy.” The flip side, however, is that teachers can also use AI to spot-grade students’ work to confirm the teacher’s own objectivity and consistency, ensuring fair grading and more opportunities for students who may have faced barriers in the classroom due to gender, ethnicity, class, and so on.

Educators and mentors—along with youth-serving organizations like ours and in the private sector—can mitigate the negative effects of AI, harness its potential for positive change, and empower young people to shape the future of AI responsibly, ethically, and critically. 

‘Educators and mentors—along with youth-serving organizations like ours and in the private sector—can mitigate the negative effects of AI, harness its potential for positive change, and empower young people to shape the future of AI responsibly, ethically, and critically.’

Organizations are increasingly using AI to exponentially ramp up social impact in education. “AI can accelerate accessibility to quality education for all. By harnessing AI responsibly, education can transcend its traditional barriers, empowering and expanding the reach of educators everywhere,” explains Gillian Hinde, EY Global Corporate Responsibility Leader. “We are also supporting a faster uptake of AI in education by creating training resources and AI-centered curriculum for students.”

Michael Trucano, Visiting Fellow in the Center for Universal Education at the Brookings Institution, explores issues related to effective and ethical uses of new technologies in education, including AI. “Students are using AI,” he says, “and they’re going to use it in ways we can’t expect or predict—some beneficial, some not.” Michael also stresses that educators will need to push themselves into facilitator roles in order to fully discuss these ideas with students. “Teachers don’t need to be experts,” he says, “they need to discuss, support, and provoke. That’s a teacher’s job now, and it works really well in the AI field.” In particular, Michael believes teachers need to focus on helping students use AI to inform decisions rather than make decisions, keeping the young person in the driver’s seat of whatever outcome they create with AI. 

Roy Saurabh believes that developing ethical decision-making frameworks is the starting point for discussion about the ethics of AI. “AI is not inherently bad,” he says, “but needs to incentivize ethical behavior by teaching youth to develop an ethical decision-making framework and use it again and again.” (An ethical framework is a step-by-step process for exposing distorted or missing information, identifying motivations and influence, looking for competing values, and more.) 

How Can We Help Students Use and Build AI Responsibly?

By incorporating AI literacy into their teaching, educators can empower young people to approach AI with a strong ethical foundation, preparing them to be thoughtful, ethical, and empathetic leaders. Some examples include the following:

  • Critical thinking and problem-solving: Emphasize critical thinking, a fundamental skill for assessing AI technologies. Educators can create an environment in which students feel comfortable asking questions about AI and its societal impact, and then engage students in discussions and activities that analyze the outputs of AI systems, identify patterns, and connect outputs to inputs.


  • Media and information literacy: In an age of AI-generated content, misinformation, and deep fakes, educators can enhance media and information literacy skills, including teaching youth to recognize AI-generated content, approach content with healthy skepticism, evaluate the credibility and reliability of content, think critically about the ethical implications of their AI choices, consider potential bias or manipulation, and encourage responsible sharing of information and content generated by AI.


  • Ethical discussions and conflict resolution: Provide a foundation for respectful and open dialogue, creating opportunities for educators to facilitate open and reflective discussions about AI ethics, explore ethical frameworks to guide responsible AI use and development, and find common ground when faced with disagreements about AI ethics.


  • Self-awareness: Help students develop self-awareness about their values, biases, and beliefs. This creates an opening for lessons in which students are encouraged to reflect on their own biases, consider how those biases might influence AI datasets, and become aware of how AI algorithms may tailor content or recommendations to their preferences and beliefs, reinforcing and amplifying those biases.


  • Social responsibility: Emphasize social responsibility and civic engagement. This can be extended to discussions about the responsible development and use of AI, encouraging young people to advocate for ethical AI practices and policies.

Does Including AI in the Classroom Build Additional Student Skillsets?

Incorporating AI into classroom learning experiences not only fosters AI literacy but also enhances other essential skills:

  • Communication skills: To use AI successfully, students need to frame inquiries effectively, listen actively to responses, and build on previous interactions to reach the desired outcome—skills that parallel communication in the human world. In this way, incorporating AI into educational settings can foster the development of communication skills, especially when educators emphasize how to transfer these skills to students’ real-world interactions. 


  • Creativity: Encourage students to use AI as a tool to push the boundaries of their creativity and imagination. Students can, for example, use AI tools to inspire creative writing, music composition, filmmaking, visual art and graphic design, and data analysis (identifying trends and patterns), or to help spur ideas during a creative block. Soon, students will be able to use AI as a critic and tutor—in a way that used to require a creative workshop or personal coach—receiving feedback on creative endeavors and honing their skills with each new iteration. 


  • Empathy and perspective-taking: Foster empathy and the ability to understand the feelings and perspectives of others. Students can consider how AI technologies may impact different individuals and communities and empathize with those who may be negatively affected by AI biases, discrimination, or unfair practices.


  • Hands-on activities and projects: Hands-on learning has been at the core of JA for over 100 years; for AI education, this may mean that we encourage students to develop AI projects or businesses with ethical considerations in mind, incorporate real-world AI-related ethical dilemmas and case studies into lessons, and engage in practical experiments in which students identify bias in AI datasets or outputs. Among the millions of JA alumni building businesses around the world, many are using their AI expertise to build companies. Here are two examples, both using AI for sight-impaired individuals:
    • Christian Erfurt, CEO of Be My Eyes, a service that connects those needing help with everyday tasks with sighted volunteers in their communities, just launched Be My AI, a service that enables its sight-impaired users to take a photo of an item or of written text and hear verbal descriptions of and answers about the item or text through the power of AI. 
    • Cornel Amariei, another JA alumnus, founded .lumen to produce a headset that uses AI to mimic the features of guide dogs, which are in too short supply for all those who are sight-impaired. The headset processes the environment around it and guides its wearer along an unobstructed path, just as a guide dog does. 

What’s the Most Important Aspect of AI That Students Need to Learn?

The responsible use of AI hinges on the recognition and mitigation of bias that may be built into AI data sets. Teaching young people to recognize such bias is crucial for fostering critical thinking and promoting ethical AI awareness . . . and for their role in building an AI-driven future that is open, equitable, and fair. Here are some strategies and approaches to help young people understand and identify bias in AI datasets:

  • Start simply: Begin by explaining what bias means in the context of AI and data. Use relatable examples to illustrate how bias can affect AI systems and their outcomes.


  • Explore real-life examples and bring in experts: Show examples of real-world cases where bias in AI datasets has had significant consequences. Discuss how these biases can perpetuate stereotypes, discrimination, or unfair treatment. If possible, invite guest speakers or experts in the field of AI ethics and bias to provide insights and share their experiences with students.


  • Plan lessons on data-collection methods: Teach young people about the methods used to collect data for AI systems. Explain how the data collection process can introduce bias, especially if the data is not representative or if there are flaws in the sampling method. Use diverse datasets as examples to highlight the importance of collecting data from a wide range of sources and perspectives to reduce bias. Discuss the benefits of diversity in data.


  • Analyze sources of bias: Help students identify potential sources of bias, such as underrepresentation of certain groups in the data, data collection biases, and human biases in labeling or categorizing data.


Amelia Kelly, CTO of SoapBox Labs, is using AI for language and speech learning, an area ripe for built-in bias. “If the training set is biased,” she says, “for example, if we record only the speech patterns of white boys, the result will not work the same,” and students will suffer. So, SoapBox is recording voices from around the world, doing all sorts of tasks and activities, with all sorts of background noises. “Unless you’re doing that,” Amelia continues, “you can build an incredibly biased system. Training sets must be incredibly diverse, tested rigorously and transparently, and enable overriding of the system” by a human who can see that the AI is not responding to the user. Amelia sees great promise in educating young people about bias in AI, whether as developers or users. Young people should ask, “Does this work?” and “How do you know that it works?” They can follow up by asking, “What was the size of the training set?” and “Who was part of the testing?” And so on. 

Michael Trucano agrees. “Students shouldn’t be passively accepting the guardrails from other people; they should be inquiring, proactively asking questions,” he says. “Young entrepreneurs and innovators may not understand the tools they’re using, so they need to be aware of what they don’t know, especially when they build onto another platform, where the AI may be invisible.” Students, he says, should ask a lot of questions about where the data came from, and whether AI is informing decisions or making them. “The latter,” he says, “can be really dangerous.”

  • Examine algorithmic outputs: Show how bias can manifest in AI outputs, especially how training data can result in biased recommendations or targeting. For example, last year, BBC reporter Marianna Spring created five social-media profiles based on Pew Research Center typologies built from demographics, age, interests, and opinions. Spring then tracked how quickly these profiles were targeted with disinformation across social platforms. 


  • Explore impacts on communities and engage in ethical discussions: Discuss how biased AI systems can negatively affect different communities and individuals, including issues related to fairness, justice, and equal opportunity. Engage students in discussions about the ethical implications of bias in AI. Encourage them to consider the broader societal consequences and moral responsibilities associated with addressing bias.


  • Use AI tools and visualizations: Introduce AI tools and visualizations that can help highlight bias in data. Tools like word cloud generators or sentiment analysis tools can demonstrate how data-driven systems reflect the biases in their inputs.


  • Assign hands-on activities, research, and projects: Provide interactive activities that involve evaluating and identifying bias in datasets. Encourage students to examine data and assess whether it is representative and fair. Encourage students to explore bias in AI through research projects. They can investigate specific cases, develop bias detection algorithms, or propose ways to mitigate bias in datasets.


  • Keep building your own knowledge base: Keep your curriculum up to date with emerging research on AI bias to ensure students are exposed to the latest information. 
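For a hands-on exercise along these lines, students with a little programming experience can check a dataset for underrepresentation and uneven outcomes themselves. The sketch below is purely illustrative: the dataset, group labels, and outcome values are invented for the example, not drawn from any real system.

```python
from collections import Counter

# Hypothetical toy dataset: each record pairs a speaker group with whether a
# speech-recognition model handled that speaker correctly. All values invented.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_a", True), ("group_b", False), ("group_b", True),
]

# Step 1 — representation: is any group only a small slice of the training data?
counts = Counter(group for group, _ in records)
total = len(records)
for group, n in sorted(counts.items()):
    print(f"{group}: {n}/{total} records ({n / total:.0%} of the data)")

# Step 2 — outcomes: does the system work equally well for every group?
for group in sorted(counts):
    outcomes = [ok for g, ok in records if g == group]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: accuracy {accuracy:.0%}")
```

Even this tiny example surfaces both warning signs the strategies above describe: one group dominates the training data, and the underrepresented group fares worse — a concrete starting point for discussing why diverse, rigorously tested training sets matter.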

Where Else Can Educators Find Resources?

Recently, a number of resources have emerged to help educators introduce and manage AI in the classroom:

  • TeachAI supports a global community of AI and education leaders with real-world examples and practical resources.
  • Microsoft offers an AI classroom toolkit, a blog and discussion group for educators, resources for students, and more.
  • Code.org shares curriculum offerings, professional learning courses, and video series designed for teachers and students.


Teachers and mentors have a clear opportunity to foster AI literacy and enhance skills by utilizing AI in the classroom. This will give students the understanding and competencies to use AI responsibly, recognize biases built into AI datasets, and begin developing the skillsets that enable them to build future AI products and services.

This article was co-authored by Caroline Jenner, Chief Operating Officer, JA Worldwide; Gus Schmedlen, Chief Revenue Officer, Texthelp Group, and Co-Chair of the JA Worldwide Board of Governors Learning Experiences Committee; Laura Turkington, Commercial and Innovation Leader, Global Corporate Responsibility, EY; Bhakti Vithalani, Founder & CEO, BigSpring, and Co-Chair of the JA Worldwide Board of Governors Learning Experiences Committee.

