Dangerous Thinking with #AI

Later this month, I have the opportunity to share a little about AI with faculty and staff, perhaps even students, at a Texas university. Looking over my presentation, I feel like I’m turning a fire hose on them, a torrent of ideas and information on and about AI, while also trying to model some specific use cases. A part of me wants to break the presentation up into several components that look like this:

  • An Environmental Scan of the State of AI and its use in K-16 schools that addresses ethical and privacy concerns, as well as human and environmental impacts (there are so many examples, I could spend my entire hour preso on these topics without getting to the other two items, which are potentially what they really wanted)
  • My Favorite AI Tools to Enhance Productivity, which covers how I use AI with tools like Napkin AI, Canva, Gamma, etc., and ends with an intro to custom GPTs/Spaces/Projects/Gems to get things done, and more
  • Knowledge Stacks as Digital Backpacks for custom GPTs

The truth is, I’ve packed WAY too much into a Lunch and Learn presentation, but it really feels like I can’t let any piece go. I’ll share the preso and resources on April 21st without a password, but for now, I’m still fine-tuning it.

Worse, new reports show up every day (e.g., the RAND report, Marc Zao-Sanders’ 2025 version of his GenAI actual-usage report, Efrat Furst’s insightful take on GenAI in higher ed, pieces on how AI is transforming higher ed). It really feels like drinking from the fire hose, or watching a fire ant nest that’s been kicked…

Given the topics, I prompted Gemini and then, using that info, made a checklist. It’s a bit slimmed down from the long table at the end of this blog post, but it’s an easy handout to get the conversation started.

A checklist made by the author in Canva with Google Gemini 2.0 support. Read below to learn more about the background. CC BY-SA
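(For the curious: if you’d rather script that Gemini step than work in the chat interface, here’s a rough sketch using Google’s google-generativeai Python SDK. Treat the prompt wording and model name as illustrative, not the exact ones behind the checklist above.)

```python
# Rough sketch: asking Gemini for checklist items programmatically.
# Requires: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # substitute your own API key

model = genai.GenerativeModel("gemini-2.0-flash")
prompt = (
    "Create a one-page checklist for K-16 educators evaluating an AI tool. "
    "Cover ethical, privacy, human, and environmental concerns. "
    "Keep each item to one short sentence."
)
response = model.generate_content(prompt)
print(response.text)  # paste the items into Canva to lay out the handout
```

Either way, the heavy lifting is the same: Gemini drafts the items, and Canva handles the layout.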

Then there are the fun ways to use AI tools. It never ends, right?

The Dynamics of AI Power and Control

Of course, “never ending” may be the point. The constant turmoil, excitement, and new releases mean the news cycle gets dominated by whatever is newest, and little time is spent examining the underpinnings. I’m looking for a lens through which to analyze all of this, to understand it, and to gain some measure of control. To that end, I read something that reminded me, once again, to look under the hood.

It’s this quote, which came to me via Doug Belshaw’s Thought Shrapnel, from James O’Hagan’s piece on how Teaching AI Without Talking About Power Misses the Point:

We need AI literacy that makes students dangerous thinkers, not docile users.

Let students ask who funds the tools. Who sets the limits. Who benefits. Let them critique the platforms that shape their school day. Let them design alternatives rooted in their experiences. And let us stop pretending that integration is progress if the terms are dictated from the outside.

We can — and must — teach the technical. But we should not stop there. We need to lift the hood, yes. But we also need to ask why the engine was built in the first place, who it leaves behind, and where it refuses to go.

What does it mean to become a dangerous thinker? It sounds a bit edgy, right? And it’s probably what K-16 schools will NOT want to do. It borders on social justice, and everything that comes with that.

Defining Dangerous

Does dangerous mean…

  • Applying critical thinking to AI outputs, staying aware of their biases, and questioning everything? This makes me think of AI outputs as claims we can apply skeptical thinking to, using tools like FLOATER, SIFT, and CRITIC.
  • Knowing that AI reflects the needs and goals of the people who made it, figuring out what those needs/goals are, and following the money trail? I suspect students may not, for the most part, care unless this is baked into the school assignment…and would teachers and schools get in trouble for digging too deep, for being a little TOO critical about those goals and motivations?
  • Taking ownership of their own use of AI, deciding when it’s appropriate or not, and having the discipline/willpower to resist using it?
  • Figuring out new and novel ways to use AI to solve real-life problems, not only using it in ways others decide are best?

In schools, IT usually means instructional technology rather than information technology. This is so because schools are organized to support and control instruction, and instruction is the defining characteristic of the work of teachers. Therefore, new technologies are almost always examined in terms of their potential for supporting and improving the work of teachers rather than in terms of their capacity to support the work of students. (Source: Philip C. Schlechty, Leading for Learning)

What might AI have to say if prompted to come up with a table for a fifth-grade-level reader?

Domesticated Learner vs Dangerous Thinker

Okay, here is an AI-generated table comparing the “Domesticated Learner” and the “Dangerous Thinker” using simple words, aimed at a 5th-grade level.

| Feature | Domesticated Learner | Dangerous Thinker | Example Scenario (What the student does) |
| --- | --- | --- | --- |
| Getting Answers | Asks AI for the answer. Takes it. | Asks AI. Then asks, “Is this right?” | Homework: DL asks AI for the math answer. DT asks AI, then checks the answer themselves. |
| Trusting AI | Believes what AI says. | Asks, “Is this true?” Checks facts. | Fun Fact: AI says, “Pigs fly!” DL tells friends. DT says, “Hmm, let me look that up.” |
| How AI Works | Thinks AI is like magic. | Knows AI learned from lots of info. | Drawing: AI draws a cat with six legs. DL accepts it. DT thinks, “AI saw weird pictures.” |
| Fairness | Does not think if AI treats all fair. | Asks, “Is this fair to everyone?” | Story AI: AI only makes boys heroes. DL reads it. DT asks, “Why no girl heroes?” |
| Who Made AI? | Does not ask who made the AI or why. | Asks, “Who made this? What do they want?” | Game App: DL just plays. DT wonders, “Who runs this game? Do they watch what I do?” |
| Using AI Tools | Uses AI just like the rules say. | Tries to test AI. Uses it in new ways. | Chatbot: DL asks school questions. DT tries to teach the AI silly things or find mistakes. |

Here’s an “older” student version…do you think this might be too simplistic?

Okay, here is the information presented in a table, with “Docile User” changed to “Domesticated Learner”.

| Feature | Domesticated Learner | Dangerous Thinker |
| --- | --- | --- |
| Prompting Skills | Learns how to write prompts to get desired outputs. | Learns why certain prompts work, how AI interprets language, its limitations (hallucinations, biases), and how to test its boundaries and reliability. They analyze why the AI gave a specific answer. |
| Critical Evaluation | Accepts AI output as factual or useful. | Is taught to fact-check AI outputs rigorously, compare them against other sources, identify potential biases (based on training data), and understand the concept of AI “confidence” vs. actual accuracy. |
| ‘Under the Hood’ Concepts | Sees AI as a black box. | Gains a conceptual understanding (even if not deeply technical) of how AI models are trained, the role of data, the existence of algorithms, and why this process can lead to flaws, biases, or unexpected results. Understands the environmental cost of training large models. |
| Ethics & Societal Impact | Uses AI without considering the consequences. | Engages in discussions and case studies about: bias (racial, gender, cultural) in AI systems; job displacement and economic shifts; surveillance and privacy implications; the spread of misinformation and deepfakes; issues of copyright and intellectual property; autonomous systems (e.g., weapons) and accountability. |
| Power Analysis | Unaware of the forces behind AI. | Explores questions like: Who funds AI development (Big Tech, governments, military)? What are their goals? Who owns the vast datasets used for training? Who benefits most from AI advancements? Who is potentially harmed or excluded? How does AI concentrate or redistribute power? How does access to AI tools (or lack thereof) create new digital divides? How are regulations (or lack thereof) shaped, and by whom? |
| Creative Use & Experimentation | Follows instructions and templates. | Is encouraged to “break” the AI, test its limits, use it for unintended purposes (e.g., generating critiques of systems, creating resistant art), and understand its vulnerabilities (“red teaming”). |
| Metacognition & Agency | Becomes dependent on AI. | Is prompted to reflect on their own use of AI. When is it genuinely helpful? When is it a crutch? When might it be detrimental or unethical to use? They learn to make conscious choices about if and how to integrate AI into their workflow and thinking, retaining human judgment and responsibility. |

