My 12-year-old twins can prompt ChatGPT with alarming fluency. They’ve generated AI music, transformed family photos into wispy Van Gogh-style portraits, and built a chatbot that mimics their favorite anime characters. As their mother, I’d love to say it’s because they’re brilliant, and of course they are, but the truth is less flattering and far more important.
My children are AI literate because of a weighted mix of luck and privilege. My husband and I have graduate degrees and jobs that require computer fluency. Their Pennsylvania school district, Haverford, consistently places among the top districts in our state. Their middle school benefits from stable funding, high-quality teachers, and a strong IT department, all leading to discussions about AI in their sixth-grade classrooms.
It’s a 20-minute drive from their school to Delaware County Community College, where I’ve been teaching for more than a decade. Many of our students come from underperforming high schools, and my classrooms are filled with recent graduates who have been taught that AI is little more than a contentious cheating machine. One of my returning adult learners told me she’d heard of AI but had no idea what it was. After class, I gave her a quick demonstration of ChatGPT on our overhead projector. She sighed and said, “Well, now I know why my daughter’s suddenly getting through her homework so fast.”
This knowledge gap? It’s not just technological. It’s generational, socioeconomic, and institutional. And it’s growing wider by the day. If we, as first-year writing professors at community colleges, don’t meet this moment with intention, we will leave our most vulnerable students behind.
I felt this realization as a call to action, and I didn’t just dive in; I cannonballed. Over the past six months, I’ve clocked more than 150 hours building my fluency across multiple large language models. I studied the terminology, immersed myself in the ethics and mechanics of generative tools, and leaned on the IT minds in my family. I read books, listened to podcasts, and had long conversations with colleagues about what equitable, ethical AI should look like in our courses.
In May, I received a grant to provide my fall Composition I students with ChatGPT subscriptions. These students will meet in a computer lab, giving us space to explore these tools in a collaborative setting. With OpenAI access, students will benefit from faster responses, voice-to-text, custom learning tools, and Sora, OpenAI’s image and video generator, features that can deepen engagement with our readings. Throughout the semester, I’ll collect data and administer surveys to gauge how this access shapes their learning and digital literacy.
And I’ve used grant funding to integrate the AI-detection tool Pangram into my Composition II course this summer. Rather than leaving me to play Sherlock Holmes, scrutinizing student prose for malfeasance, Pangram offers transparency to both student and instructor. Unlike detectors I’ve used before, it identifies even subtly humanized AI-generated writing, removing the familiar crutch many students have reached for to avoid the messier process of developing as writers.
The most effective tool I’ve employed is the AI Transparency Journal, a shared Google Doc where students track every AI interaction throughout the semester. They log each prompt, how the AI responded, what surprised them, and where they struggled, creating a record of process, experimentation, and growth.
In my current summer Composition II course, I started with an experiment: students uploaded our syllabus to ChatGPT, introduced themselves using a custom prompt about their background, goals and past experiences with writing, and asked the AI to identify what they might enjoy, what could challenge them and how the course might help them grow.
The results were eye-opening. Students reported feeling more prepared and reflective before reading a single assigned text. Even those initially skeptical of AI were struck by how personalized and insightful the responses felt. Several students shared reflections that stayed with me:
- “The response felt like it understood both the good and the hard stuff about me. It even helped me connect my love for reading the Quran to the diverse literature we’ll be exploring.”
- “I never expected AI to suggest keeping a personal phrase list to help with my vocabulary. That idea alone changed how I’m approaching this class.”
- “Honestly, it was like having my horoscope read — but more useful. The AI’s clarity helped me understand the syllabus better than just reading it on my own.”
Even students who didn’t feel the AI’s response captured their learning style appreciated that it offered a game plan for tackling our accelerated course. Most importantly, it inspired metacognition, reflection, and writing before we even cracked our first literary text.
I’m writing this as I grade posts from the halfway mark of our six-week course: our poetry unit. My students selected their favorite passage from either Langston Hughes’ “Let America Be America Again” or Dunya Mikhail’s “The War Works Hard” and used a free AI image generator to create a picture capturing its themes. They then posted their images and evaluated how well they felt each one captured what they held in their imaginations.
Many students are enthralled by the generated pictures, and their journal responses are averaging twice the required length. The few who were disappointed were eager to explain why. For the second part of the assignment, I asked them to respond to at least one other image; most opted to respond to two or three.
After we passed the halfway point, I paused to compare my current students’ progress with that of students in the same ENG 112 course one year ago, before I had integrated Pangram or any formal AI tools. This summer, I began with 37 students, and 29 are still actively submitting work. Of those, 24 are earning A’s or B’s and consistently completing their assignments. Last summer, by contrast, I started with 38 students, but by week four only 21 were still engaged, and just 17 finished the course with a C or higher, the threshold for transfer eligibility.
That said, there have been struggles with my wide-scale AI integration. I’ve had more Zoom calls with students than in any previous semester as I walk my less technically inclined students through the many steps required to navigate AI interfaces.
But no one has complained. One student in her 50s shared that she’d done little more with her computer than email and Facebook. After one of our longer video calls, she emailed me: “Dr. Ray, Thank you for your time today. I’m so glad you’re showing us all this. I never understood what all this AI stuff was before. I never thought I’d learn how to do this in an English class!”
And beneath all our trial and error, something else is emerging: engagement, community, and a newfound energy, an indescribable undercurrent that runs through a positively charged learning space, even a virtual one.
So I leave you with this: our students need guidance in navigating these new technologies, and if we fail to teach them how to engage with AI ethically and intelligently, we won’t just widen the skills gap; we’ll reinforce the equity gap, one many of us have spent our careers trying to dismantle.
It’s time to shift the conversation from fear to responsibility. Our students are ready. We need to meet them here.