Educators Must Adapt to AI, but They Need Help

I recently had the opportunity to be part of an OpenAI faculty roundtable. I was one of about a dozen professors, joined by several staff members from OpenAI’s recently created “Education Team.” We talked about our best practices for teaching with AI and our worries about its impact on student engagement, motivation, and academic integrity. The Education Team listened, asked questions, and presented their own vision of an “AI Native Institution.”

I hate to admit this, but I left the event feeling really depressed.

Our conversations were all about isolated and idiosyncratic (and, sure, exemplary) pedagogical practices, but completely lacking in big-picture vision—as if all we had to do was better integrate some whiz-bang gadget one student, one faculty member, one institution at a time. Yes, I liked how Jeffrey Bussgang created custom GPTs for his entrepreneurship class at the Harvard Business School. And, yes, I thought Stefano Puntoni’s work at Wharton on integrating AI into his students’ writing was interesting. (OpenAI used these examples as “proof of concept.”) But to be fair, most of us sitting around the table have made similar or even better adaptations, and I don’t think any of us feels like part of the solution. Rather, we’re all barely keeping our heads above water as we navigate what Ethan Mollick terms a “post-apocalyptic education.”

This is why I believe AI has precipitated a fundamental crisis of purpose in higher education, and I am far from alone in this perspective. So, I expected more from a $300 billion company on the cutting edge of disrupting the world.

This is what OpenAI should have done.

First and foremost, they should have named the correct problem. Everyone thinks the issue with AI is that just about every student is cheating their way through college. Yes and no. It’s true that most students have little intrinsic motivation to learn and find the easiest way through the checklist of courses in order to earn their credential.

But the real story is that AI has broken the transmission model of education, in which professors teach and then grade students on how much they learned. A passing grade used to mean students had learned enough of what the professor had “transmitted.” No longer. These past two years, faculty have handed out A’s left and right to students who don’t understand (much less read) the assignments they just submitted. I cannot overstate this: AI has decoupled students’ performance (what they submit to us) from students’ knowledge.

This is not all bad news; a massive crisis is also a massive opportunity. The second thing OpenAI should have done is tease out the implications of and solutions to this disruption they have wrought. This doesn’t mean reactive and on-the-margins interventions—a return to blue books, watermarking AI output, process tracking, honor code updates—that may temporarily mitigate the problem.
