I've been reading a lot about the idea that technology is going to become so advanced—of course, I'm mainly talking about AI here—that, instead of sitting dormant waiting for a human to interact with it, it will take the lead and interact with us.
Basically, it's the beginnings of technology doing your thinking for you.
It might sound fun on paper.
But in reality, if we stop thinking about even the most basic things, we're doomed to become drooling zombies, aimlessly living until our overlords decide to send us our next prompt.
We've been on this path since the dawn of the algorithm. When social media first turned up, it was a simple concept: see the stuff your family and friends post. Remember those good old days? Then it got a little more advanced, showing us stuff from friends of friends (of friends of friends). Before we knew it, content from friends was relegated to the algorithmic abyss, hidden in feeds that were no longer chronological, replaced by content from "people the completely non-transparent algorithm thinks you'll be interested in," based on the thousands of data points they stole from us before we realized what was really going on.
Put another way, we stopped thinking about what we wanted to see and started seeing what we were given. There was no longer any need for agency, exploration, or thought; it was now swipe, swipe, and swipe, or, as one writer terms it, "intellectual poison." It's been wildly successful, too, in the sense that it has increased the time we spend on platforms and devices. It's also been successful in destroying what little enjoyment ever existed in using these platforms. The algorithm is to blame for the steady decline of social media as both a platform and a utility. That's where we're at now, and most of us hate it (even if many won't admit it).
But the current wave of AI hype, pushed by Sam Altman and now by Mark Zuckerberg as he desperately pivots away from the Metaverse, suggests we're moving into a new phase of this brain rot, one that is even more cynical, depressing, and scary.
As Zuckerberg put it in a recent interview,
"I think the next logical jump is like, "Okay, we're showing you content from your friends and creators that you're following and creators that you're not following that are generating interesting things. And you just add on to that, a layer of, "Okay, and we're also going to show you content that's generated by an AI system that might be something that you're interested in."
Yuck. So we're entering a future where we won't seek out things we enjoy. Hell, we won't even surrender ourselves to doomscrolling through algorithmically produced feeds. No, we're going to have AI generate what it thinks we want to see and push it on us without prompting.
It seems we'll soon have AI prompt us with suggestions of what to text friends, what picture we should take, what location we should walk to, randomly update us on something, or start a conversation with us. Is this helpful? Of course not. But it will drive engagement, even in places where engagement should not be measured (like a messaging application). And engagement means an opportunity to steal more data and monetize.
It's a little ironic. One of the main defenses AI pushers give is that "it's a tool." I think this potential development — one where, in a sense, it gains some form of agency to decide when it functions — flies in the face of that. Tools are there to be used when needed. A hammer is used to drive a nail into a bit of wood; it doesn't randomly float around your house, controlled by lines of code, deciding when and what it should smash next.
It is also a bit scary. We're doomed as a society if we come to depend on AI to assist with the most basic of interactions and thought processes. It's a grim picture of where we're headed: a future where we can't manage the most normal, basic actions, like sending a loved one a message, or understanding our own preferences, tastes, and interests and spending time seeking out things that satisfy them (you know, one of the great things about living).
We’ve seen with devices and platforms that once they’re intertwined with society, it’s hard to separate from them. It’s not a huge leap to suggest that if enough AI agents or programs start to do tasks for us or think for us, bit by bit, more and more, they’ll integrate with our lives in a way that we can’t detach from.
I hope this is just another false dawn for Big Tech. I hope it's another narrative they're pushing in desperation to keep the momentum building and the VC dollars pouring in. I hope the general public realizes it's absolutely fucking insane to let an AI think for you, to make decisions for you, to have any agency over your life at all. I hope we push back against feeds of AI content generated to suck us deeper into our device addiction under the guise of being "what we want." I hope we push back against messaging apps telling us what to write and to whom to send it. I hope we see that they want to rot our brains and make us depend on their products and services so that they can drain every last penny and data point from us.
I hope we wake up and stop the brain rot.
This is why I plan to disappear into the forest and create a nice, cozy hermitage.
The problem is the mindless adopters, who think that any technological "advancement" is automatically superior to everything that came before it and must therefore be used. These people will adopt anything and everything and lose their brains completely, but they'll still try to convince you that their way is better and that you're just a silly Luddite who doesn't get it.