Opportunity Knocks #48 - My comments at the Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights on AI and Education
Every Monday, I share reflections, ideas, questions, and content suggestions focused on championing, building, and accelerating opportunity for children.
I am taking some time off next week, so I am sharing this earlier than usual. This morning, I provided thoughts to the Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights on AI and Education. Thanks to all who provided feedback, thoughts, and suggestions. What I said is below. Happy Easter to all celebrating.
Good morning. Chairman Irwin, members of the Advisory Committee, I am grateful for the opportunity to be here today to discuss the challenges and opportunities in K-12 education emerging from generative Artificial Intelligence, a computational technique that creates new, open-ended, creative assets from statistical patterns in enormous datasets, which for simplicity I will refer to as AI for the remainder of my comments.
I am here because I am eager to see AI fulfill its potential to improve pedagogical practice and student performance. But for that to happen, we need a more deliberate, considered, and systematic approach to its adoption in K-12.
My name is Andrew Buher. I am the Founder and Managing Director of Opportunity Labs, a national nonprofit organization focused on thriving youth. I am also a lecturer at Princeton University’s School of Public and International Affairs, where I teach education policy and policy implementation courses.
Of particular relevance to this conversation, I have been working on policy formulation and implementation challenges associated with the digital divide and technology adoption in K-12 education for the past 15 years across the federal, state, and local levels.
Drawing on my experience, my comments today will focus on ideas to smooth the integration of AI-powered products and tools in K-12.
These ideas center on mitigating potential risks associated with AI adoption while paving the way for its potential benefits.
I want to reflect briefly on the mindset we must adopt to move toward the promise of AI to improve schooling, teaching, and learning.
We must operate from a place of healthy skepticism.
We should minimize panic; fearing AI, or the change it represents, is a disservice to children. We should also minimize hype. It is of paramount importance to rigorously examine and critically evaluate the role and value of AI within K-12.
Engaging in philosophical inquiry about how technology has changed the nature of childhood, by altering human relationships and development, is imperative to building a future that harmonizes technological advancement with the healthy, holistic development of our youth.
Skepticism is the best way to build trust.
Developments in AI make addressing persistent inequities in K-12 both more challenging and more achievable.
AI has the potential to dramatically improve K-12 education, driving equity, efficiency, and advances in teaching and learning.
However, there is not yet enough evidence of efficacy, even for the most promising use cases. And the impacts on student learning and on cognitive and social development need to be understood with more nuance.
The potential benefits are real, but generally, they are far off and speculative. If anyone tells you otherwise, they are probably trying to sell you something.
Again, I want AI to succeed in improving teaching and learning, for instance by leveraging the technology to help students build agency. But for that to happen we absolutely have to mitigate risks. Here is a short example of why this is the case.
Imagine a middle school English classroom. Students are developing a deeper awareness of who they are, what they are capable of, what they think, and what they feel.
Amid this critical developmental phase, an AI-powered teaching assistant tool is introduced to supplement the classroom teacher by providing timely, constructive feedback.
Initially, the tool proves to be effective, offering students tailored learning resources, which, theoretically, could eliminate achievement gaps. However, as the class increasingly relies on this AI tool, several risks begin to surface.
The emphasis on personalized AI-driven learning reduces meaningful interaction between students and between students and their teacher. Cooperative group learning and associated social skill development may suffer as students spend more time interacting with AI than their peers—a dynamic we see now with teens and social media.
The teacher deprioritizes providing students with feedback and, as a result, misses opportunities to grasp what drives students, to understand their worries, to establish common ground based on real-world experiences, and to build a thriving classroom community, all of which are critical components of academic success.
Students rely heavily on the AI to make decisions for them, diminishing their problem-solving and critical-thinking skills, their creativity, and their creative confidence. They become accustomed to receiving immediate solutions and stop putting effort into understanding underlying concepts.
The AI tool makes recommendations that are not age-appropriate, not contextually suitable for all students, or just false, exposing students to content that is harmful, biased, or factually inaccurate.
The need to continuously vet AI-generated content stretches the teacher, eliminating any time savings offered by AI-based productivity tools. Existing inequities in staffing are exacerbated as affluent districts contract technologists to assist in implementing, managing, and monitoring AI tools.
Reliance on expensive AI tools widens the digital divide. Even when every student has access to the technology, students from more affluent backgrounds also have access to advanced, personalized learning from family members and private tutors: living, caring adults.
Since the AI tool is designed to create the impression of thought and intelligence, students begin to place undue trust in the technology, leading to an overreliance on the tool.
Of course, these examples are illustrative and exaggerated. Still, they underscore the potential risks of AI technologies to cognitive development, social connection, mental health, academic achievement, safety, and privacy. These risks are minimally understood and deserve significantly more attention from the research community.
Because of local control, many of the 13,000 or so school districts nationwide will be forced to make decisions about AI adoption unilaterally. Guidance released thus far from eight state departments of education supporting AI adoption isn’t all that helpful.
In large part, the guidance reinforces things that superintendents and school leaders already know how to do.
If we’re to get AI right in K-12, state guidance for districts and schools will have to go further than what we have seen to date.
Districts and schools need direction from state policymakers to make difficult adoption decisions focused on mitigating the most significant risks while creating space to attain the potential benefits.
Which brings me to policy ideas.
Until proven otherwise, it's reasonable to assume that AI technologies are not being developed with children's needs as the focal point. They probably aren't being designed by expert educators from diverse backgrounds, or with data representative of the students who will use the tools. Thus, safety, privacy, bias, and efficacy will remain issues unless we can shift the incentives for technology companies to develop products designed for kids, with a deep understanding of children's needs.
One of the most critical levers school districts and schools have to change incentives is procurement. Districts and schools purchase nearly everything: buses, supplies, and staff services, as well as furniture, food, and materials. The most recent available data showed that $837 billion was spent on public elementary and secondary education in a single school year. That's an average of $17,015 per student. Leveraging this vast sum may be an effective way for K-12 policymakers to shift the incentives.
State guidance for districts and schools could include AI-specific safety, privacy, efficacy, data security, and equity benchmarks for companies to meet to be eligible to sell to schools.
For example, benchmarks might include documentation confirming that products have been designed to be accessible and suited to all learners, regardless of cognitive or physical needs, or that companies are providing basic protections and more equitable access for good-faith AI safety and trustworthiness research.
Of course, not all districts will have the capacity to assess adherence to such benchmarks, so states could provide resources for districts to contract with third-party technology auditors to assess whether vendors meet the established benchmarks.
Districts could codify in contracts a right to pursue legal action if the company fails to continue meeting established benchmarks. For example, if a child is exposed to obscene content on a provider's platform, or a student's data is hacked or misused, the provider should be subject to significant monetary damages and suspension of service.
Small districts have little bargaining power when purchasing from large technology companies. So, regional education AI purchasing consortia could be established to negotiate the lowest costs and enforce acceptable procurement terms. These regional consortia could also be structured to increase equitable access to AI expertise more generally.
Finally, on procurement: to further guarantee that historically underserved communities see the benefits of AI without shouldering more risk, there is a need for central, state-level repositories of procurement artifacts and vendor performance reviews. These repositories could be accessible to all schools as they vet AI providers and their compliance with established benchmarks.
Second, state guidance could create guardrails to prevent inappropriate use of or overreliance on AI while continuing to center human development and relationships in students' classroom experiences.
The simplest way to do this is to strategically limit AI-powered products and tools in classrooms until there is empirical evidence of their effectiveness.
For example, states could provide detailed frameworks for schools to create pilots that rigorously test new AI-powered products and tools.
A framework for a pilot might include methods for valid demonstrations of efficacy aligned with the Every Student Succeeds Act, valid data collection and analysis tools, partnership structures for engaging proximate researchers, and an AI literacy training plan for students and staff prior to the pilot so they can be more critical evaluators of the tools they are using.
Finally, given the financial difficulties that almost all school districts will soon face due to decreasing state revenues, changing student enrollment patterns, and the end of federal COVID-19 relief funds, states could financially support schools and districts in adopting AI.
As mentioned, in my estimation, there is simply not yet enough evidence of effectiveness for districts to pay for AI-powered products and tools at scale, particularly at the expense of personnel or high-quality instructional materials, mental and socio-emotional health services, high-dosage tutoring, and out-of-school time programming with demonstrated evidence of impact.
If states believe in the promise of AI, they should invest in professional training and development, ensuring educators have the time to reflect, experiment, and refine their practice with AI-powered products and tools.