Ana smiles as she writes about her abuela.
She describes the long hours her grandmother spends working at the neighborhood bodega — greeting customers in Spanish, keeping the shelves stocked, and making sure Ana always has what she needs after school. It’s a story rooted in love, sacrifice, and culture.
When Ana submits her writing to an artificial intelligence (AI) tool her class has been asked to use, the feedback comes back quickly.
It suggests simplifying her language.
It flags her use of Spanish words.
It recommends making the story “more universally relatable.”
But Ana isn’t trying to be universal.
She’s trying to be understood.
And in that moment, the technology doesn’t expand her voice — it stifles it.
Artificial intelligence is rapidly expanding into classrooms across the country. AI technology can produce real learning gains, but it also carries real risks. Unfortunately, many of the funders helping integrate the technology into schools are not partnering with families and communities to discuss how to mitigate those risks and maximize the rewards.
The lack of partnership is especially troubling for students of color and students from low-income backgrounds, because it means their communities' perspectives are silenced. Without the voices of families, educators, students, and other community members, the benefits of AI in classrooms shrink while the threats multiply.
We’re already seeing how AI systems can misinterpret student work, reinforce inequities, and embed biases. Focus groups conducted by EdTrust in Massachusetts surface these community concerns, including fears that AI systems will penalize multilingual learners or deliver feedback that discourages rather than supports learning.
But there’s another truth: families want their students prepared for an AI-driven economy. Parents see the potential—if the tools are designed ethically, tested for bias, and implemented with strong guardrails.
In fact, we published a blog post on navigating the promises and perils of AI for students of color. One key takeaway from that work is that we have to start listening to and elevating community voices, especially if we want AI to advance equity and learning for underserved students. This means collaborating with communities to:
- Craft guardrails before tools are procured or adopted to make sure AI technologies are effective and do not discriminate or embed biases
- Design educator training, AI literacy curriculum, and processes and criteria that ensure AI technologies in classrooms have a clear purpose and foster inclusive, engaging, and rigorous learning experiences
- Evaluate and improve AI technologies in use to ensure they are used in ways that further learning opportunities for all students instead of becoming a replacement for student engagement, relationship-building, and learning
Funders and policymakers should view community engagement not as a “nice to have,” but as a prerequisite for responsible AI in education. The future belongs to students, so we must co-design their education with them, their families, and their communities.
Let’s choose partnership, equity, and listening. Students deserve nothing less.