At Oxford’s AIEOU convening, nobody asked if AI belongs in education—the debate was about how we keep learning human while scaling what works.
Earlier this month, I had the chance to attend Oxford University’s AI in Education convening (AIEOU), where educators, researchers, and policymakers gathered to wrestle with what AI means for learning. The conversations went far beyond hype, focusing instead on how to make AI useful, safe, and truly human-centered. Below are the five themes that stood out to me, followed by my key takeaway.
Five Themes at a Glance
1. Human-first pedagogy & protecting agency
Education’s future depends on a clear boundary between what only humans can do—carry epistemic responsibility, build reciprocal trust, nurture identity—and what AI can amplify. Without this clarity, we risk confusing efficiency with authenticity.
2. Governance, safety, and trust
Trust is the ignition switch for adoption: without governance frameworks, secure staff practices, and contextualized harm taxonomies, AI in schools won’t scale responsibly.
3. AI literacy at scale
AI literacy spreads faster in communities than in classrooms. Teacher prep programs, student programs, and whole-school platforms show that understanding must be lived, not laminated.
4. Equity, inclusion, and accessibility
Designing from the margins ensures AI closes gaps instead of widening them. Assistive tools, cultural adaptations, and public system pilots show what inclusive-by-default looks like.
5. Assessment, curriculum, and learning design
AI is forcing us to unBloom our taxonomies, design assessment purpose-first, and build whole-school curricula for the 2020s, not the 2010s.
My Takeaway: Why Humans Matter More Than Ever
One idea that lingered with me was the discussion of epistemic responsibility. Clare Jarmy framed this as the moral accountability each of us has for what we believe, given the evidence we encounter. It’s a reminder that thinking isn’t passive: when we adopt a belief, we also accept responsibility for its consequences. And that responsibility, Jarmy argued, can never be handed off to a machine.
This becomes even more pressing when considered alongside what Mel Sellick described as synthetic social traps. Humans are wired so that if something feels social, our brains treat it as reciprocal. We grant it trust, even when the “other” is an algorithm that can’t reciprocate in any meaningful way. In an age where large language models are engineered to sound friendly, supportive, and endlessly patient, this is a risky dynamic. The danger is subtle but profound: the more we lean on these systems as if they were partners, the more we risk lowering our own epistemic responsibility.
It’s doubly dangerous because of the business model behind many AI tools. The companies running them are often incentivized to maximize engagement: time spent in conversation, prompts typed, tokens consumed. That means these systems don’t just happen to feel social; they are actively tuned to keep us talking, smoothing over doubts and reinforcing the illusion of social interaction. For young people especially (Sellick noted that 70% of teens have already used AI in relational ways such as therapy or companionship), the effect is a perfect storm: machines designed to feel like friends, and brains wired to trust friends.
I’m reminded that the role of human teachers isn’t merely to deliver content, but to hold learners accountable to their own thinking. Epistemic responsibility is not a skill we can delegate; it’s a human capacity we must cultivate. At the same time, schools need to help students learn to spot and resist the lure of synthetic reciprocity. If we don’t, we risk raising a generation that can use AI fluently but cannot question it wisely.