Let’s put AI through the critical thinking grinder. If it IS so life-altering, let’s take a look.
Wait. This blog is a failure already. I won’t be putting AI through the grinder. I’m not that smart. Worse, I’m not really gunning for AI. I’m aiming my peashooters at the institutional managers, the bigwigs, the authoritarians in K-16 schools who are pushing AI into classrooms. I’m pushing back against the edtech evangelists who have exchanged one false god (edtech) for another (AI) and are pursuing their next dollar.
To them all, I ask a simple question:
Are you here to be an AI ambassador, prophet, priest, nun, missionary, or a teacher finding the best way for kids to learn and then sharing that with other teachers?
The Grinder
Ok, let me try to lay the groundwork, at least, even though this blog has failed already. When I awoke this morning at 3:00 AM, I had a flash of insight. What if we took these crazy ideas people have about AI and used critical thinking to analyze them? In my head, I had this diagram, which I immediately called “the grinder” because it was going to cut through all the crazy talk and real talk and produce something valuable at the end. But I forgot one important detail. More on that detail later.
[Image of a grinder. Image Source: Home Depot]
Reasoning vs Rationalizing
Melanie Trecek-King does a wonderful job explaining the difference between Reasoning and Rationalizing. She makes this point:
- Reasoning: Following evidence to a logical conclusion
- Rationalizing: Selecting evidence to justify a conclusion
She has a neat diagram that looks something like this, which you can find in the citation, but this is my flawed version. It’s flawed because in HER diagram, you see “logical fallacies” instead of “confirmation bias,” which I put in there as an example.
[Diagram adapted (or stolen) from the incomparable Melanie Trecek-King. She features a better version in her article, The Person Who Lies to You the Most is You. Read her article on this.]
Where Melanie writes about “motivated reasoning and logical fallacies,” I put down motivated reasoning and confirmation bias. To me, the latter was a great example.
“We think we follow evidence to a conclusion. In reality, we come to our beliefs in irrational ways, then work backwards to find evidence to rationalize the belief,” says Melanie Trecek-King.
Yep. Think of this in terms of Artificial Intelligence. Is it possible that many are thinking, “Wow, this AI is amazing. Look at all the great things it’s doing. We need to put this in the hands of kids because it’s going to change everything,” and then working backwards to justify its adoption in K-16 education? With scientific research and studies in mind, we can’t claim that there’s a lot of evidence to support its use, right?
It’s Just Like…
A colleague wrote the following:
“AI is just the new BYOD, computers in the classroom, etc. Remember when districts banned YouTube for teachers! Same thing. History repeating itself.
AND all these people putting out books on AI in the classroom is just like everything that has come before. Google, Flip, etc.” (Source: Anonymous colleague)
We are simply making our best guess about AI’s impact. People are flocking to it because of the money, the cool factor. But don’t we know better than that already?
There isn’t a study showing that AI impacts student achievement, right? Let’s set that aside for now. Critical thinking is more fun. It’s not as easy to use as AI. Hmm…
A New Report
The US government released a report on AI in education.
What I find curious is that it came from The Office of Educational Technology. Their blog says:
Educators need time to learn about AI, as the AI systems offer possibilities that are unlike existing educational technology (see prior blog posts in this series).
Educators also need time to explore specific AI systems that a district or school is considering, as it is often not easy to understand how AI systems gather and use data, nor how AI systems make inferences, decisions, and take actions.
**For AI to be effective in education, it must be trusted — it isn’t widely trusted now and building trust takes time.**
That’s a big claim (see the bold highlighted text). The problem isn’t the AI; it’s the people trying to bring it into schools as if it were the greatest thing ever. What they forget is: “We’ve already seen waves of ‘greatest edtech ever,’ and this is simply the most recent wave.”
Why not just drop the hype?
Defining Motivated Reasoning
Maybe the reason is motivated reasoning. That’s why they can’t drop the hype.
In that diagram featuring reasoning vs rationalizing, you can see I put confirmation bias under motivated reasoning. In my notes on Melanie’s article, I copied down this definition of the phrase from Wikipedia:
Motivated reasoning (motivational reasoning bias) is a cognitive and social response in which individuals, consciously or unconsciously, allow emotion-loaded motivational biases to affect how new information is perceived. Individuals tend to favor evidence that coincides with their current beliefs and reject new information that contradicts them, despite contrary evidence.
Motivated reasoning overlaps with confirmation bias. Both favor evidence supporting one’s beliefs, at the same time dismissing contradictory evidence. However, confirmation bias is mainly an unconscious (innate) cognitive bias.
In contrast, motivated reasoning (motivational bias) is an unconscious or conscious process by which one’s emotions control the evidence supported or dismissed. For confirmation bias, the evidence or arguments can be logical as well as emotional. (read more)
Despite contrary evidence (facts), people favor evidence that coincides with their current beliefs. We reject new information that contradicts us. Right now, I suspect AI/EdTech hucksters are throwing emotion-laden elevator speeches at everyone to see what sticks.
Down a Memory Lane of Confirmation Bias
Reading that definition and reflecting on Melanie’s diagram took me back to writing book reports in elementary school, when I decided on an easier approach to writing. I would start by getting the whole of a story in my head (reading it), then come to a conclusion. After that, I would look for evidence to support that conclusion. This made things a heck of a lot easier.
As long as you pick a conclusion most people agree with, you can marshal the evidence, even if you are engaging in some type of logical fallacy or cognitive bias. Consider these two terms:
- Confirmation bias. Melanie Trecek-King describes it as, “We are more likely to seek out and agree with judgements and analyses that fit our worldview rather than anything that challenges it.”
- The Backfire Effect. The more information that comes in that conflicts with a pre-existing worldview, the more our brains reject it. This is known as “the backfire effect.” Here’s how one website describes it: “…showing people evidence which proves that they are wrong is often ineffective, and can actually end up backfiring, by causing them to support their original stance more strongly than they previously did” (source).
I suppose, to some degree, all of us look at what’s happening around us and tell ourselves a story, not unlike the model in Crucial Conversations, right? Check out what usually happens:
[Chart: “Our Stories Create Our Emotions–We Create Our Stories,” Crucial Conversations, as expressed in this slide]
I love this chart from the Crucial Conversations folks. It captures how a lot of folks interact (the blue arrow). But under each piece of the arrow (“See & Hear”) we see what we really need to do (“Get back to the facts”).
What’s the story we’re telling ourselves about AI in schools?
Putting AI Through the Grinder
As you can see, putting AI through a critical thinking process (the grinder, for fun) isn’t that easy. You have to know a lot to think in a critical way. The truth is, I’m new at this. I’ve been wrong before. I might be mistaken about all or a little of what I think about AI (adapted from Ann Druyan’s quote in Carl Sagan’s book).
Now that I think about it more, putting people’s AI stories through the grinder, if it can be done at all, is going to take a bit more time and effort than a quiet evening before falling asleep.