How about a Pause on the Race to Embed AI in Schools?

I haven’t written much about AI and education, for several reasons.

First, there are already many people writing compellingly and with considerable expertise about the uses and misuses of AI in the classroom. Some of those people will show up in this blog. Follow them. Read what they write.

Also, some years ago I developed a reputation for being a cranky Luddite. I wrote pieces about the downside of the ubiquitous online gradebook, accessible to parents 24/7, and other uses of computer programs that added to teachers’ workloads and didn’t fit with the important content and skills I was teaching students (lots of students) the old-fashioned way. The real costs of “free” programs and apps, no matter how glittery and hip, seemed obvious to me. Why didn’t other educators see this?

This came to a head when I was invited to be part of an online panel on ed technologies. Presenters sent me the language they planned to use to introduce me—did I approve? I confirmed, and then they messaged back: the bio had been created by ChatGPT. Ha-ha.

Finally, I haven’t written much about AI because I just find it hard to conceptualize how it could be useful in the classroom. In other fields, perhaps—with a lot of caveats, oversight and suspicion—but it runs contrary to the essential purpose of teaching and learning. Doesn’t it?

It’s never seemed right to let machines do the ‘thinking’ or ‘creating’ that is better done, or at least attempted, daily, by children. In short, I don’t get it. Maybe that’s because I haven’t been enlightened? So—shut up already?

I think many, if not most, practicing educators are in the same boat: unclear about what AI actually is, and what use could be made of AI tools in their vital mission of making children independent thinkers, evaluators and creators.

For starters, who’s cool with Big Data collecting info on our public school kiddos’ engagement with their products? From an NEPC report on digital platforms:

While educators may see platforms as neutral tools, they are in fact shaped by competing interests and hidden imperatives. Teachers, students, and administrators are only one market. The other market involves data on performance, usage patterns and engagement—data flowing to advertisers, data brokers and investors, often without users’ knowledge or consent.

A pretty good synopsis of what AI is, from Josh Marshall, Talking Points Memo:

“AI is being built, even more than most of us realize, by consuming everyone else’s creative work with no compensation. It’s less ‘thought’ than more and more refined statistical associations between different words and word patterns.” He goes on to make the salient point that the AI “products” being produced will be “privately owned and sold to us.”

Doesn’t sound like something that schools need to quickly embrace, what with all our other problems, like teaching kids to read, rising absence rates and budgets stripped of the funds to feed children a nutritious breakfast and lunch.

Add the environmental concerns and rampant intellectual property theft to teachers’ uncertainty about dumping more new, unvetted toys into an already-crammed curriculum. So I was thoroughly surprised to see the AFT get on the “AI in the classroom!!” bandwagon.

Why not take a pause—let’s call it a shutdown—on the race to embed AI in our schools? Why not sort through those competing interests and hidden imperatives? We’ve been bamboozled by climbing on attractive but ultimately damaging educational bandwagons before. Just who wants us on this one?

Well, scammers. And the folks who turned DEI into something to be avoided. Clueless TikToking middle schoolers could up their game with AI. And right-wing edu-site The 74 says educators can save six hours a week by using AI to make worksheets, tests and exit tickets. Really? That’s an awful lot of worksheets.

Wouldn’t it make more sense to approach this transformative technology with great caution, holding fast to the evergreen principle of teaching and learning being a social endeavor? To look at the available research before being bedazzled by something new?

‘Participants, mostly undergraduate and graduate students, who constructed essays with the assistance of ChatGPT exhibited less brain activity during the task than those participants who were asked to write on their own. The AI-users were much less likely to be able to recall what they had written and felt less ownership over their work. Independent evaluators who reviewed the essays found the AI-supported ones to be lacking in individuality and creativity.’

If you want to read better pieces on AI, many are hyperlinked in this blog. But here are a few folks whose words and thoughts come from places of deep knowledge and experience:

Audrey Watters, the best Ed-Tech thinker on the planet, for my money.
Pete Buttigieg, who thinks ahead of trends. Stop worrying about when he’s going to run for President and start absorbing his ideas on politics and relevant policy. Including AI.

Lucian Truscott, who writes about many things and made me understand why AI may ultimately fail: The men who run the big AI companies would do well to think through what they are doing with all those big buildings and all that electricity they consume. The “answer,” such as it is, to what they are seeking to accomplish may not exist, or it may be simpler than they think.

Educator Alfie Kohn, who points out that those most receptive to this technology are the people who know the least about it. This piece made my skin crawl.

My friend Peter Greene does a better job of debunking AI crapola than anyone I know. I credit this to his decades of classroom experience, during which he Paid Attention to Things—things more important than launching new products and making the big bucks.

So why should anyone pay attention to what a tech skeptic writes about AI in schools?

Because we’ll all be lured into making photos come to life, or relying on a questionable AI answer to an important question, or laughing at Russ Vought as Grim Reaper. Sticky and fun, but ultimately shallow and inconsequential. Not what school-based learning should be.

Earlier this year, on a day when I made a (delicious) strawberry pie, I clicked on a song-writing app. Give us some lyrics, and a musical style, and we’ll write a song for you.

Here is my song: Strawberry Pie. Sticky and fun, but not much effort on my part.

3 Comments

  1. AI, AI, Oh!!

     Kathy Kosobud, 10/3/2025

    From a social media (FB) conversation:

    Me: Doing research and gathering data has been made worse by the AI interface on Google. Between that and government agencies scrubbing databases from their websites, it is becoming increasingly difficult to fact-check and to assemble data to support arguments for particular policy actions.

    Friend: agreed, I’ve been noticing this as well

    Me: The podcast, “How to Do Everything,” suggests that you can get around the AI by adding a swear word to your search. Worth a try! https://open.spotify.com/episode/116T82pKiw593sfPX3azmt…

    Friend: oh, good…I’ll just type how I talk then.

    Me: My 3 teenaged grandchildren tried it out…It works!

    My smartypants granddaughter was complaining about her new high school and how many of her classmates use AI to do their class assignments. I was a bit shaken by that. I have always thought that AI could be useful as a proofreading tool, but never as a substitute for thinking.

    My thoughts: 

    I’ve been thinking about doing some scholarly-mode writing to prepare for conversations with candidates and policy-pushers, and to exercise the scholarly muscles that I set aside when I retired from education and nonprofit endeavors. I was dismayed to discover that our government agency websites are being scoured of various verboten content. I don’t have time right now to identify all of the specific items that have either been scrubbed or revised to reflect the beliefs of the current administration. In general, though, webmasters have been directed to delete references to guidance, documents, policy statements, and statistical data that do not conform to the current administration’s dogma.

    That led me to wonder a bit more about some of the AI-generated material that has appeared on these websites, including some egregious claims, unbacked by scholarly research, that appeared on the HHS website. Ever skeptical, I want to delve into this some more, just to see what kinds of things are being eliminated or obfuscated. But this has to be placed on the back burner, so that I can complete more pressing tasks of a more personal nature.

    Earlier this year, when I was more hopeful about hitting the old laptop and beginning to write opinion pieces on disability, and gender, I read a small research report comparing student use of AI  in report writing at three different stages of the writing process.  (Not sure, maybe Edutopia or NEPC).  Anyway, the report asked students to give feedback after they had completed the writing process. Students in one group were asked to do their first draft without the aid of AI tools. Then they were able to perform a first edit, using AI suggestions, and finally they revised their writing, incorporating whatever suggestions they found useful into their final draft. Another group generated their first draft by entering criteria for the paper into an AI tool. Their revision used both. AI and originally generated ideas to create a final draft.  Students self-reported on the value that they got out of the use of AI tools at different stages in the writing process. The conclusion, as I recall, was that students who did an original first draft tended to report better mystery of the content. My conclusion is that thinking happens at a different level than reviewing an AI-generated report. 

    I wonder what new teachers are being told about the use of AI as a writing tool.  Do they learn about the cognitive processes of learning?   What values are held by teachers as far as judging their students’ writing products? Do they care about plagiarism? Is the use of AI plagiarism, cheating, or just stretching a different cognitive muscle?  Has the field of education been changed by the ubiquity of AI? Is it even possible or efficient to check student work for originality?

    (to be continued)


    1. Kathleen said: “I wonder what new teachers are being told about the use of AI as a writing tool. Do they learn about the cognitive processes of learning? What values are held by teachers as far as judging their students’ writing products? Do they care about plagiarism? Is the use of AI plagiarism, cheating, or just stretching a different cognitive muscle? Has the field of education been changed by the ubiquity of AI? Is it even possible or efficient to check student work for originality?”

      I wonder about that, too. Of course I based all my pedagogy on the idea of “writing as a way of knowing”– you can’t master a topic without writing about it, with evaluation being less about the mechanics of writing (that came later, in the edits) than the ideas and questions generated.

      I read several glowing pieces about how AI was going to assist teachers, but they were all bullshit things–AI will create exit tickets!–and reassurances that using AI wasn’t cheating anymore. I was most concerned about the idea that AI would grade students’ work. How will teachers understand what students have actually absorbed–and plan subsequent lessons? What will become of accomplished teaching? It gives me a headache.


  2. My apologies–I have arthritis in both hands, so I often dictate what I’m writing. I also use Grammarly to catch spelling and punctuation errors in my drafts. I noticed a few AI-generated typos:

    6 lines up from the bottom of the 2nd paragraph:  “Their revision used both. AI and originally generated ideas to create a final draft. ” (delete period between “both” and “AI”)

    4 lines up from the bottom of the 2nd paragraph: “The conclusion, as I recall, was that students who did an original first draft tended to report better mystery of the content.” (this one is kind of funny, I meant “mastery”, but Grammarly chose “mystery”. Learning is supposed to de-mystify, but apparently not in this case!)

    K.Kosobud

