I am sensing a definite backlash against AI. There are moments when I realize I've eliminated AI components from the digital tools I use. Consider the following:
- I rely on markdown editors (e.g. Joplin Notes) that do not include AI features
- I avoid Google Gemini, which seems embedded everywhere (along with ads, ugh) in Google Workspace
- I have eradicated Microsoft 365 from my devices, except for MS Edge, which I rely on for a consistent experience at work
The AI tools that I do rely on include ChatGPT Custom GPTs I've created, as well as Perplexity Pro. Oh, and I shouldn't leave out Canva's suite of AI-powered editing tools known as Magic Studio, which removes backgrounds, grabs text and makes it editable, translates content, and erases portions of images.
I use these tools daily. But there is a bigger fight on the horizon than whether people should use AI for enhancing productivity or as a thought partner.
AI as Tool of Destruction
We know Big Tech wants us to spend public school funding on AI tools, whether they are ready or not. That's why they've made AI too cheap to avoid. Even AI itself can describe the lock-in model in use.
Audrey Watters, Marc Watkins, and now John Warner, among many others, are sounding alarm bells. But perhaps of more consequence than these educators speaking out is who supports the use of AI and what they envision.
After watching Elon Musk destroy the lives of American workers in government, in the same way he destroyed Twitter, I can't help but wonder: "Is supporting AI use actually supporting the destruction of education, making it easier to dump a teacher workforce that has been demonized as being 'woke' (we need more of that) and as supporting diversity, equity, and inclusion?"
Leverage AI for Good
Unlike John Warner, I do think AI is unavoidable, and as an ed tech user, I want to leverage it for my work and life. But I don't want that at the expense of people, the environment, and people's jobs. Is there a path that doesn't force one to fall for a sucker's choice, that doesn't frame AI use as purely good (it's going to improve humanity) or purely bad (it's brain-deadening, environmentally unfriendly, digital evil perpetrated by oligarchs)?
John Warner says:
> I am persuaded by Marc Watkins's framing of "AI is unavoidable, not inevitable" for no other reason than the tech companies will not allow us to avoid their generative AI offerings. We can't get away from this stuff if we want to, and boy, do I really want to. But just because it is unavoidable and must be acknowledged and, in its way, dealt with, does not mean we are required to use or experiment with it.
What's a path that takes us between the two extremes?
AI with Students
I have real concerns about using AI with students before they have built the capacity for skeptical thinking, but I see that there is a way forward. That way forward, though, may presuppose that students lack agency, the ability to make their own decisions about whether they want to learn to think or not. It forces educators, because who else will do it, to coach their students in how to use, or not use, AI responsibly and appropriately.
I wonder if we could do what I do when the fever to do or buy something comes over me: put it on hold for a few days. If the urge to move forward with the purchase or course of action fades, then I know I was simply caught up in the excitement of the moment.
Let’s table AI in education for five years. By then, the Big Tech folks can put up or shut up about AGI, and all that. And, we can spend that money on professional learning for educators, upgrading school buildings and infrastructure, free food for students, parent outreach programs, print books, notebooks, and pencils/pens for all.
But, I don’t think that’s going to happen.