Prohibition Didn’t Stop Alcohol Use. Will It Work With AI?

During our focus group, a middle school media and library specialist from New York sighed and said:

“We don’t need another policy about what not to do with AI. We need a philosophy that helps teachers think critically about these tools.”

This sentiment echoed throughout our EdSurge research project, “Teaching Tech: Navigating Learning and AI in the Industrial Revolution.” Educators who participated in the project represented schools from the peach orchards of Georgia to the white sand beaches of Guam. Most of our participants agreed that even when they saw gaps in AI’s utility, they preferred guidance and a culture of responsible AI use over bans.

In the fall of 2024, EdSurge Research talked with a group of teachers about their experiences with and perceptions of generative AI, especially chatbots like ChatGPT. We gathered a group of 17 teachers from all over the world who teach third through 12th grades. Their perspectives on the promise of AI for teaching and learning were layered, highlighting the importance of a nuanced approach to AI in schools.

We asked some of them to design lesson plans using AI, which we’ll share more about in upcoming stories. It was during this task that we encountered one of our first obstacles: some participants’ schools had banned common AI chatbot websites on school devices. As schools across the United States restrict access to ChatGPT, and some states enact cellphone bans for students, our observations from this exploratory research project revealed that schools may be repeating a history of prohibition. All-out restriction, in other words, often creates the very conditions for misuse.

While some of our participants’ AI-supported lesson plans were stalled, we soon found workarounds. And that’s what kids do, too — and sometimes, better than we can. So instead of banning ChatGPT and other chatbots, we suggest a harm reduction approach to student AI usage.

What is Harm Reduction?

Have you ever told a 3-year-old, “No,” only for them to do the complete opposite? What about a 10-year-old or a 15-year-old? You say, “Don’t do this,” and they do that thing anyway? The results are almost always the same. Harm reduction, by contrast, is ethics in action. This approach accepts that a pervasive or potentially hazardous substance, object or experience exists and is unavoidable. With a harm reduction approach, instead of taking away AI on school devices and hoping students don’t use it for homework, adults equip them with the tools to engage with it responsibly.

One of our focus group participants, a computer science and engineering teacher from New Jersey, said, “AI can do the task, but can students explain why it matters?”

That’s where harm reduction is helpful. We want to build capacity to mitigate the risk of harm. We’ve borrowed the harm reduction approach from the public health field. Although not perfect, it’s been successful in several areas, such as helping address the opioid epidemic. In the context of K-12 schools, this humanistic approach helps manage the risks that arise when students are simply banned from generative AI websites.

Harm reduction strikes a nuanced balance between moral panic and blind optimism. The purpose is to allow developmentally appropriate exposure and understanding that build the critical thinking skills teachers impart to students, instead of pretending students’ not-so-secret AI use isn’t happening. This approach won’t remove all ChatGPT-generated essays from your classroom, but it works with, not against, what research tells us about the developing brain.

Cautiously Curious

Across our focus group sessions, educators described navigating AI in schools as both an opportunity and a disruption. Their reflections revealed a shared tension between curiosity and caution. They also expressed a desire to engage students in responsible exploration while maintaining academic integrity and professional boundaries.

A high school special education teacher from New York City summarized the dilemma succinctly:

“My students ask if they’re cheating when they use AI. I tell them — if you’re learning with it, not from it, that’s a good start.”

Her comment reflects a nuanced understanding of harm reduction in practice, acknowledging the inevitability of student AI use and redirecting it toward critical engagement, rather than avoidance.

An elementary technology teacher from Texas raised another concern:

“We talk a lot about academic integrity, but no one’s defining what integrity looks like in the age of AI.”

Many participants echoed this gap between institutional expectations and classroom realities. While districts have started issuing guidance on AI, most educators remain without clear parameters for transparency or disclosure (see our own example below). In response, some are creating their own classroom frameworks, encouraging students to reflect on when and how they use AI and modeling openness about their own experimentation.

These accounts from classroom teachers demonstrate that harm reduction, in educational contexts, is less about permissiveness and more about preparedness. Teachers are not abandoning ethical standards; they’re redefining them to fit the complexity of contemporary learning environments and the latest industrial revolution.

Three Parts of AI Harm Reduction in Schools

From our analysis of educator reflections and existing research, three main principles emerged for applying harm reduction to AI in K-12 settings. Each one connects to a different layer of practice: systems, pedagogy and community.

Systems: Embedded or Optional?

Teachers recognize that AI already shapes the tools they use daily. As an engineering teacher at a virtual school in Georgia put it:

“If the tools are already in what we use every day, pretending they aren’t doesn’t make us safer.”

This principle calls for transparency. Schools should audit existing contracts, require vendor disclosure, and normalize open acknowledgment of AI use by teachers and students. Rather than hiding the use of AI in lesson design or assignments, educators can model honesty and critical engagement.

Pedagogy: Co-Learning for Capacity Building

A literacy coach from Illinois noted:

“We can’t just give teachers a new platform and expect them to know what’s ethical. That has to be a learning process.”

Harm reduction treats AI integration as collaborative learning, rather than compliance. Teachers and students can learn through small pilots, shared lessons and reflections. With this approach, AI isn’t replacing teachers; instead, it functions as a creative tool for teaching.

Community: Context-Specific Guardrails

Educators also stressed that any framework must reflect local context. The needs of a kindergarten classroom differ from those of an AP computer science course. Harm reduction works best when it adapts to each environment, prioritizing community values and student development over uniform rules. Districts that co-create AI norms with teachers, parents and students tend to foster both safety and engagement.

These principles translate harm reduction from theory to practice and can outpace the rapid changes in edtech and the education ecosystem.

How to Apply Harm Reduction

The educators’ insights from this exploratory research project, combined with current research and data on AI use in teaching and learning, shaped the development of this suggested AI harm reduction approach. Future research in this emerging area can evaluate how the approach works in different settings.

Schools might block ChatGPT on school-issued devices or ban cellphones, and that may temporarily ease this type of distraction. But if students can visit that website on a phone or tablet at home, they are still using chatbots equipped with nothing more than the ethics toolkit their developing brains supply. And if I’m a middle school student with a developing, 12-year-old brain, I might really enjoy my chatbot’s eternally supportive and warm demeanor. To tend to this complex challenge, our research suggests approaching this industrial revolution with candor, care and curiosity.


AI Disclosure Statement: Parts of this article were drafted with the assistance of generative AI to organize qualitative data and refine scientific language. All analysis, interpretation and final editorial decisions were made by the EdSurge Research team. The model served as an analytical and writing aid, used for triangulation much like a research assistant, not as an author or decision-maker.
