AI Adoption in Schools Faces Scrutiny After Controversies in Los Angeles and San Diego

Los Angeles, CA – In March, with the enthusiasm of a startup founder, Alberto Carvalho, superintendent of the Los Angeles Unified School District, introduced "Ed," an AI chatbot touted as a groundbreaking tool for personalizing education. Speaking to parents and students, Carvalho claimed it had unparalleled potential to transform the educational experience on a global scale.

“No other technology can deliver real-time on this promise,” he declared. “We know it will succeed.”

However, just three months and nearly $3 million later, the district shelved Ed following significant layoffs at AllHere, the startup behind the chatbot. District spokesperson Britt Vaughan declined to comment on Ed’s performance or its usage statistics before its shutdown.

In June, San Diego faced its own AI controversy when it emerged that school board members were unaware of a tool, purchased as part of a larger contract with Houghton Mifflin, that automatically suggested grades for writing assignments. The issue came to light after Point Loma High School teacher Jen Roberts highlighted both the tool’s time-saving benefits and its occasional grading inaccuracies, raising concerns about the lack of board awareness and oversight of AI tools in the district.

Both cases underscore the growing pressure on educators to adopt AI and the need to vet such technologies thoroughly and critically. Experts say education leaders must ask tougher questions and bring in outside advisors to avoid similar pitfalls.

Following OpenAI’s launch of ChatGPT nearly two years ago, the California Department of Education encouraged the adoption of AI, describing it as an “AI revolution.” Many educators fear missing out on advancements that could help students learn and prepare them for future job markets.

Grading AI Tools and Risk Management

Hannah Quay-de la Vallee, a senior technologist at the Center for Democracy and Technology, emphasizes the importance of critical analysis before implementing AI in classrooms. The level of scrutiny, she says, should correspond to the risk posed by the AI tool. High-risk applications, such as grading or predicting student dropout rates, require more rigorous evaluation.

The European Union’s risk-based regulation of AI and the National Institute of Standards and Technology’s framework for AI risk management in the U.S. offer models for how AI in education could be better managed.

California’s state schools superintendent, Tony Thurmond, has not commented on measures to prevent future AI-related issues. However, legislation proposed by State Senator Josh Becker, supported by Thurmond, aims to create a working group to provide guidelines on the safe and effective use of AI in education.

Lessons from Missteps

Michael Matsuda, superintendent of the Anaheim Union High School District, advocates a collaborative approach to AI adoption. He stresses the importance of vetting AI products and learning from the recent missteps in Los Angeles and San Diego. His district, which uses AI to personalize learning materials, is considering high-risk labels for certain applications, such as grading.

Stephen Aguilar, co-lead of the Center for Generative AI and Society at the University of Southern California, advises district officials to view AI vendors’ claims critically. He also notes that AI models require continuous evaluation, since different versions can yield different results.

Alix Gallagher, head of strategic partnerships at Stanford University’s Policy Analysis for California Education, points to the market pressure on educators to adopt AI. While AI can help ease teacher burnout, she says, smaller districts in particular struggle to keep pace with the technology, and trusted nonprofits or state officials should help them evaluate AI tools.

As AI is built into more educational products, school districts must remain vigilant. The controversies in Los Angeles and San Diego highlight the urgent need for careful vetting and robust oversight in AI adoption, so that these tools truly benefit students and educators.