Is #AI Worth the Trouble and Cost?

Is Gen AI worth the trouble and cost in K-16? That’s the question that seems to be bugging lots of people:

Gina Parnaby, a 12th-grade English teacher at Atlanta’s Marist School, told Axios that she has seen students using AI “as a way to outsource their thinking” and “flat-out cheat.” Parnaby is resisting the use of AI in her classes. For students learning “how to write and how to think … you’re not at a level where you need to outsource that,” she said.
“AI can improve efficiency, it may also reduce critical engagement … raising concerns about long-term reliance and diminished independent problem-solving,” the Carnegie Mellon University study noted, as cited in Axios.

But others are asking a more important question, one that many already know the answer to yet find themselves caught between a rock and a hard place on. It’s not a question of whether you can use AI, but whether you SHOULD. And maybe AGI, if it arrives in 2026, will replace junior-level workers, pushing out “average workers” because it can do average work at a much lower cost.

More Reading

Here are some articles I’ve saved to my list of what I’m reading. You can add to the top of that pile The Broken Copier’s The Sentence I’m Very Tired of Hearing as a Teacher.

You can hear the frustration in The Broken Copier, not to mention in all the other voices here. Educators are facing a relentless push from Big Tech to adopt AI. Many say we should not.

The sentence?

“This can save you time as a teacher.”

Regardless of whether this sentence arrives from someone genuinely trying to help teachers or someone trying to promote the latest AI product to hit the market, it feels like all the algorithms have somehow barricaded me into a loop of being told repeatedly that there are all these magical solutions of efficiency that will avail me endless time to do “the things that really matter.” (Source: The Broken Copier)

As someone with an ed tech background, I must admit to being more curious about getting the technology to do what I want at the least possible cost. And I DO see benefits to using AI… and time-saving isn’t always the main one. When I figure out how to do something at a deeper level after having a conversation, I’m thrilled.

Today, I used AI to organize, in less than 5 minutes, information that would have taken me 30-40 minutes. Then I used that time to work on something that really needed more of my attention. Every day, I find a “use case” for AI in my work.

Is AI Use Ethical?

The question is, “Is AI use ethical, given all the other issues?” That, to me, is the toughest one. It reminds me a bit of the story of Jesus and the Rich Young Man. You know, a rich guy says, “Hey, I’ve done all you’ve said.” Then Jesus says, “OK, wonderful, time to give up all your wealth.” The rich man, disappointed, walks away.

For those of us who have already begun to use AI and see its benefits, walking away remains difficult, even knowing it is bad for people, the environment, and critical thinking, and that it makes the wrong people money. Tressie McMillan Cottom tries to make this a little easier.

AI on Fumes?

In The Tech Fantasy That Powers A.I. Is Running on Fumes, Tressie McMillan Cottom makes some interesting points. She challenges the idea that AI is really going to save us time, ending her piece with a political statement (isn’t it all about politics these days?). The thing is, it IS about politics. And money. Corporate greed.

The Anti-Labor Hammer

Cottom makes this point in regard to education:

“The political problem with A.I.’s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.”

She seeds this perspective, then drops the argument that AI (“mid tech”) is an anti-labor hammer in the hands of those with power (Musk being but one of them), citing DOGE’s efforts as an example.

It seems to me that Big Tech wants public school funding, but EVEN MORE IMPORTANTLY, it wants fresh data, even if it has to pull it out of children’s brains at the source.

Picture a popsicle frozen into a lake: the lake is children’s minds, the popsicle is their fresh ideas or brain patterns, and Big Tech wants them to train its AI models.

In schools, we might throw out ALL technology and embrace research-based practices a la Visible Learning (not that educators agree on that, either). But at least there is some consensus on what works for human learning, and it does NOT require technology.

Mediocre is Good Enough

“Good enough” isn’t, not anymore. Cottom says:

We are using A.I. to make mediocre improvements, such as emailing more. Even the most enthusiastic papers about A.I.’s power to augment white-collar work have struggled to come up with something more exciting than “A brief that once took two days to write will now take two hours!”

Perhaps the point Cottom is missing is the argument that ZDnet’s Lester Mapp is making:

The days of being average and just getting by are officially over. 😖

Once upon a time, a team was structured like this:

  • 1x senior level
  • 4x mid level
  • 8x junior level

But that’s rapidly shifting to something more like this:

  • 1x senior level
  • 4x mid level
  • AI agents

AI hasn’t taken the junior-level jobs directly. Instead, what’s really happening is that mid-level employees have become dramatically more productive by eliminating mundane, repetitive, or tedious tasks. Thanks to AI, mid-level workers get 13 hours of productivity out of a standard 8-hour day. 

Is it the death of the average worker?

The Average Worker Faces A Coming Sunset

As Paul Roetzer points out in this podcast, Episode 141: The AI Timeline is Accelerating, the workplace is made up of a lot of average workers. Businesses and teams (with A, B, and C players) need to ask, “Are AI models at the B-player, or average human, level?” Roetzer, like Mapp above, makes the point that people doing “simply” average work can be replaced by Competent AI… we don’t need to get to “expert” or “Ph.D.-level” AI.

He drops some quotes from Demis Hassabis (probably this one, with Hannah Fry interviewing Demis about AGI), then cites the levels of AGI (roughly sketched in code after the list):

  • Level 0: No AI
    • Narrow Non-AI: Calculator software; compiler
    • General Non-AI: Human in the loop computing, e.g. Amazon Mechanical Turk
  • Level 1: Emerging – equal to or somewhat better than an unskilled human
    • Emerging Narrow AI: Simple rule-based systems
    • Emerging AGI: ChatGPT, Llama, Gemini
  • Level 2: Competent – at least 50th percentile of skilled adults
    • Competent Narrow AI: toxicity detectors; smart speakers such as Siri, Alexa, Google Assistant; Watson; short essay writing; simple coding
    • Competent AGI: not yet achieved
  • Level 3: Expert – at least 90th percentile of skilled adults
    • Expert Narrow AI: spelling and grammar checkers such as Grammarly; Imagen
    • Expert AGI: not yet achieved
  • Level 4: Virtuoso – at least 99th percentile of skilled adults
    • Virtuoso Narrow AI: Deep Blue, AlphaGo
    • Virtuoso AGI: not yet achieved
  • Level 5: Superhuman – outperforms 100% of humans
    • Superhuman Narrow AI: AlphaFold, AlphaZero, Stockfish
    • Artificial Superintelligence (ASI): not yet achieved
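
For readers who like a taxonomy made concrete, here is a minimal Python sketch of those levels as a lookup table. The level names and percentile thresholds come straight from the list above; the `level_for_percentile` helper and its cutoffs are my own simplification for illustration, not anything from the DeepMind paper or Roetzer’s podcast.

```python
# Minimal sketch: the "Levels of AGI" taxonomy above as a lookup table.
# Level names and thresholds come from the list; the helper function is
# my own simplification for illustration only.

AGI_LEVELS = {
    0: ("No AI", "calculator software; human-in-the-loop computing"),
    1: ("Emerging", "equal to or somewhat better than an unskilled human"),
    2: ("Competent", "at least 50th percentile of skilled adults"),
    3: ("Expert", "at least 90th percentile of skilled adults"),
    4: ("Virtuoso", "at least 99th percentile of skilled adults"),
    5: ("Superhuman", "outperforms 100% of humans"),
}

def level_for_percentile(percentile: float) -> str:
    """Map a rough 'percentile of skilled adults' score to a level name."""
    if percentile >= 100:
        return "Superhuman"
    if percentile >= 99:
        return "Virtuoso"
    if percentile >= 90:
        return "Expert"
    if percentile >= 50:
        return "Competent"
    return "Emerging"  # glosses over Level 0 (No AI) for simplicity

print(level_for_percentile(50))  # -> Competent, the "average worker" tier
```

Roetzer’s point, put in those terms, is that a great deal of everyday work sits right at that 50th-percentile “Competent” line.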

Moving at the speed of Competence may be the same as mediocrity these days. When average workers can be replaced by AI, should they be?

Wait, What About AI in Education?

I remain divided on this question about AI in education. As a knowledge worker, a writer, someone who tries to think straight by writing and who’s often been the fastest wordslinger/wordsmith in the room, I welcome AI assistance.

Sometimes it works, sometimes it doesn’t (like the image below).

A bald dude (hey, I’m bald!) with a beard using an atlatl, a spear thrower. This is meant as a comparison to an educator using AI to augment the work they can do. But it’s not perfect, just as this AI-generated image misses the mark.

AI remains legal in the United States. There’s no legal reason why K-16 educators shouldn’t use AI to make their work easier. But should schools be encouraging the use of AI models whose creators trained them on copyrighted material (such as Meta reportedly using the LibGen shadow library to train its AI)? And what about all the environmental issues, the data centers consuming water like crazy, and so on?

If it’s legal, why not use it in schools? And if we can’t talk about the real history of America, we can just shove AI’s environmental and human costs in there with the Indigenous Peoples, the enslaved, and everything else we don’t discuss, no?

What I Think Right Now

We are all moving quite fast toward AGI or post-AGI. Until we achieve it, it will be full speed ahead, no matter the consequences. I don’t see someone saying to schools, “No, no, wait. Focus on teaching kids to think; we’ve set up an impermeable wall that prevents people from using AI in all aspects of their life. Carry on with chalk, chalkboards, paper and pen.” We do have to manage the firehose of AI hype.

We do need to keep in mind (though no one has asked educators, have they?) that there isn’t any research showing AI contributes to long-term information retention for children, critical thinking, or anything else. So where is all that money going? And if we’re using free AI, what’s happening to the “sensitive data”?

Ok, those are my thoughts for now. Incomplete, imperfect. I’d spend more time refining this but hey, I work and this is me at the end of a long day.

