In the last decade or so — the algorithm era — Big Tech companies have meticulously designed systems that try to control what we see, hear and engage with.
It's been a wild success. I call it the Vanilla Internet, a world where most people with Internet connectivity don't have tastes or preferences. Well, they do, but it's whatever the rest of the herd is into, too. They tell themselves that they like or engage with stuff because it's good. They are not willing to admit it's more likely because an algorithm made it popular, or because it's infiltrated their day-to-day lives through a calculated barrage across platforms and media. This is the way Big Tech companies want it to be — they don't want consumers to have individuality because that makes it harder to capture attention. With controlled, widespread taste, they can engineer paint-by-numbers content that drives whatever engagement metric makes the company look best for their next shareholder update. And the more we eat it up, the more they double down on serving us second helpings.
It's gotten to the stage where Netflix makes content designed to be played in the background. Like, what the fuck? We let a company make stuff it admits it doesn't care about, then watch it promote it across our feeds or put it in our 'up next' lists, and then we actually engage with it?
The problem is that, as algorithms have become more finely tuned (based on our data), more targeted, and more interwoven into our services and platforms — the majority of consumers have given up their agency. It made me think of WALL·E and the humans who had completely surrendered to their AI-controlled world. It seems fun for a while until you realize it's not the dream you were sold.
Now, I see signs that this is repeating, though not just with relatively trivial things like one's favorite artist or TV series — we're seeing the early signs of people giving up agency over their work and their lives. Generative AI continues to stutter, seemingly struggling to scale or progress much beyond where it is now (which is, at best, slightly useful), while still burning through billions of dollars and resources as it stagnates (despite us now knowing, thanks to DeepSeek, that it doesn't need that much money). And yet we keep hearing about the next phase, the one that will truly show us why AI is here to stay.
That future is AI agents. Without boring you to death, AI agents are systems or programs that are capable of autonomously performing tasks on behalf of a user. Right now, they are being pitched as ways to handle monotonous things so we can better maximize our productivity.
But I can't help but feel another what the fuck coming on.
Because it isn't just about mundane tasks; that's the softball they've thrown to warm us up. It's about everything. It's about slowly but surely letting AI take control of things in our lives, whether it's emails, calendars, conversations with friends, the content we watch, or our thoughts and interactions. Each iteration takes more of our control, more of our freedom of choice, and more of our agency. We're doomed as a society if we come to depend on AI to assist with the most basic of interactions and thought processes. If we can't do the most normal human things, like sustaining ourselves, sending a loved one a message, knowing how to write an email, constructing our thoughts and putting them down on paper, or understanding our own differences, tastes, and interests and spending time seeking out things that satisfy them (you know, one of the great things about living), it's a grim portrayal of where we are headed.
Again, this is what Big Tech wants. Once devices and platforms are intertwined with society, it's hard to separate from them. It's not a huge leap to suggest that if enough AI agents start to do tasks for us or think for us, bit by bit, more and more, they'll integrate with our lives in a way that we can't detach from. We'll become dependent on them to live and operate, and we'll soon enough be fully wired into systems that are controlled and owned by monopolies that don't have our best interests at heart. What happened to AI being for the good of humanity again?
AI agents may just be the next narrative being pushed in desperation to keep the momentum building around AI and the VC dollars pouring into the industry. I hope so because I don't see the point in living if we lose interest in learning things, achieving things, taking pride in our work and ourselves, and being functioning, respectable human beings. If we give that over to AI, what do we have left? Creative outlets? Oh yeah, it's already taking that too.
The general public needs to realize it's absolutely fucking insane to let an AI have any agency over your life at all. In the last few years, we've woken up to the insidious practices of our tech overlords and now understand — albeit too late — that they exert far too much control and influence over our lives. We've realized that we need to take back our agency and stop letting technology own us.
As the next wave of AI descends on us, I hope we don't forget that.
Ever see one of those '80s movies, the greed-is-good-era business ones? The trope of the "busy businessman" who gets his secretary to buy anniversary gifts for his wife and neglects his kid exists for a reason: we all instinctively know what a dick he is, and how awful a way that is to live your life. He isn't a hero for it, and even then we didn't admire him for it.
AI is going to provide new ways to disengage and automate parts of your life. It could take over talking to friends and engaging with your kids, but it won't, because we won't want to use it that way, any more than we wanted Clippy's help in the '90s. Just because Big Tech hasn't worked out how to make it appealing yet, and is rolling out ridiculous claims showing how little it understands people, doesn't mean it's time to panic.
Constant hand-wringing about what will happen to us, as if we already possess no agency to choose, is no different from the ridiculous claims that Big Tech is making about AI. The reality will be, as always, somewhere in the middle.
That's a bit overwrought. As long as agents are servants, not masters, what's the problem? They will be great for repetitive tasks. I would love to have one to apply for jobs, for example.