I was only writing last week about my worry that AI agents — if widely adopted and then depended upon — will kill our own agency.
If we become reliant on AI models to do our critical thinking, handle our decision-making and automate our ability to make sense of the world around us, we lose a very fundamental aspect of being human. Well, a new paper published by Microsoft (yes, the bastions of making good technology and business decisions) shows there is growing evidence that this is a very real concern.
To summarise the findings: the more workers rely on generative AI, the less critical thinking they use in their jobs. The study highlighted that AI users fall into two categories:
Those who use it, and then trust themselves to examine the results and make their own judgments.
Those who lack — or are beginning to lack, thanks to relying more on the AI models — the self-confidence to express their agency, and blindly accept the results of using generative AI.
Interesting, but nothing new here. We know that generative AI use is already split into never, lightly, and for everything. However, the study highlights a change in how we are using our brains. As it reads, "GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship." In other words, we're starting to do most of our "critical" thinking when we're looking at generated sludge and trying to decipher how shit it is or isn't, rather than creating stuff from scratch. Sure, it's maximizing productivity — but at what cost?
I think the real impacts of this so-called cognitive atrophy won't be seen in any current generation. There's a real difference between gradually integrating technology into your life versus being born into it. I'm of the age where my childhood was pre-smartphone, and I have a very grounded understanding of what it's like to live with and without the influence of technology. Kids today are born less with a silver spoon in hand and more with a giant aluminum tablet, and it shows.
The same applies to AI. We're currently experiencing its early days, and while it's moving too quickly, we still have the space to figure out how it can complement or integrate with our creative outlets, work and life. We've still got a clear understanding of what life was like pre-AI, and each individual can judge to what extent they want to bring AI into their lives or to what level they trust the output.
But that's very different from having AI be part of your life from the days when you were wearing diapers and trying to figure out that the square shape will never fit into the circle hole.
I like the way Erik Hoel frames it:
"I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of "iPad kids." I don't think we want to worry about a generation of brain-drained "meat puppets" next."
Generation Meat Puppet. It has quite the dystopian ring to it.
So, what does it mean for us all if we experience a collective cognitive atrophy?
In the short term, it's exactly what Big Business wants. The sooner it can outsource jobs to AI and have a few skilled humans oversee this network of LLMs, the sooner it can then replace those humans with lower-paid workers from developing countries, cutting down spend and making shareholders even more money. Real “for the benefit of humanity” stuff.
Beyond that, we may come to a point, sooner than we thought, where critical thinking itself becomes a sought-after commodity. The ability to put together a thought, an opinion or an argument without the help of an AI assistant will put you above those who switched their brains off. You know, the thing writers do, the very creators our AI overlords seemed more than happy to get rid of as collateral damage.
It made me think of the news about Anthropic's job applications, in which (get this) they asked applicants not to use any form of AI when applying. Why? Because the company wants to "understand your personal interest in Anthropic without AI mediation and evaluate your non-AI communication skills." The irony is delicious: an AI company realizing that relying on AI and chatbots makes your brain melt, and that people who do so make for pretty terrible job candidates.
Or perhaps future generations are going to have access to untapped knowledge and computing power, and yet be too dumb to process any of it.
A whole world that could use the most advanced technology ever seen by humanity, if only they could remember how to think outside of the AI chatbot box.
I think about this quite a bit; I'm a retired school librarian and witnessed plenty of hesitation about the introduction of new tech. Librarians have been early adopters and, simultaneously, early critics of new technology.
I think technology is irrevocable, and every advance in communication technology has been met with skepticism. Plato made much the same argument as your article when he criticized writing for ruining memory. Homer allegedly recited his stories from memory, being blind and incapable of reading, and many oral cultures had individuals who specialized in memorizing sacred literature and family or clan histories. Visual artists complained about the introduction of photography, saying it would destroy artistic expression — then they stumbled onto Impressionism. These are just two examples, but I could go on.
My fear is more about the tendency of entrepreneurs to seize power as they accumulate wealth, and the persistence of get-rich-quick schemes. Do we really need this new tech, or is it just a way for the rich to accumulate more wealth by providing enticing conveniences that generate new capital and come to seem necessary? We certainly don't need AI for survival, unlike technology for food production or wellness. The biggest strength, in my mind, is that interacting with ChatGPT has been a boon for learning. Learning requires critical thinking skills, and those skills need to be applied to any interaction with an LLM, just as has been true of using Google to search for information. Librarians have always advised cross-referencing.
…meat puppets…up on the sun…great record…