Since the boom of generative AI, there has been an unanswered question: What is the point of all this?
Is it to improve society, cure diseases, enable people to live lives free of monotonous work, or is it to generate videos of Sam Altman as Skibidi Toilet?
We’ve swung wildly between “AI will enslave us” and “AI is nothing more than a sometimes-useful tool” so often, I should sue for whiplash. Even Chief Bullshit Officer Sam Altman switches weekly between comparing ChatGPT to the Manhattan Project and insisting that, oh, don’t worry, we’re only creating tools for the funzies. There is absolutely no correlation between this and when the company needs to raise capital.
The diehard enthusiasts insist that the slop era is a stepping stone to something greater. By training LLMs to “make” art, writing, or video, we are priming them with the capabilities to achieve greater things. The justification for the slop machines is that Generative AI is the crawl before AI technology can walk towards a new dawn.
But lately, I think we’re seeing that this may not be true.
Right now, Generative AI is the main offering.
Perhaps, the only offering.
So far, the technology has failed to deliver the workplace revolution it promised. At the enterprise level, 95% of AI pilots are failing to improve productivity. Across the SaaS industry, products are ruining user experience and functionality with unnecessary AI features, then bumping up prices to pay for them. Professionals across the board are using AI tools more because they can than because the tools actually improve work processes (looking at you, therapists), which is hardly sustainable. The main issue is that the fundamental problems remain. Years into the GenAI era, there is still no trust in the intentions behind it, no reliability in its output, no transparency in its data use, no path to financial viability, and no resolution to the copyright infringements that continue to occur. With concerns ramping up by the day that the bubble is close to bursting, there is serious pressure on the entire sector to find new hype, new use cases, and new growth.
Which brings us back to slop machines.
The release of Sora 2, essentially TikTok for AI-generated content, is a damning reflection of the desperation at play. It’s given us some of the answers — it is, and always was, about money. And the money isn’t coming from anything other than subscription fees to sludge generators. It’s all they have, and so everyone is doubling down.
The AI field has some of the smartest minds in the world working in it (and some of the highest paid), and it’s been reduced to creating more chatbots and TikTok clones. In OpenAI’s case, the culmination of all this effort, knowledge, and billions of dollars of investment is apps and social networks that are highly addictive, hyper-personalized content farms designed to eventually sell adverts? Erotic conversation and role-playing with chatbots? A feature that has chatbots prompting the user each morning? The company — once nonprofit and for humanity — is now chasing the same growth and engagement metrics as every other tech giant.
The result is a bunch of smart people making stupid products.
As if we needed more of that shit.
We know we’re addicted to devices and platforms. We know the creators of them know that. We know they spend time and money figuring out how to make them more addictive, and how to get them into younger hands and more malleable minds. And yet, we still consume. We completely lost the battle. Take one look at the average screen time — it’s frankly disgusting. But thank you, Mr Altman, for your solution: More apps, but this time with a twist: you can create anything in your wildest dreams with a few clicks, hundreds of times a day, and doom-scroll feeds of content that will keep you hooked in ways we’ve never seen before!
There’s an even darker side: deep fakes with realistic looks and voices, created in a matter of seconds, held together by very lightweight safeguards (a watermark and some copyright blocks) that will be easily side-stepped by those who want to. And people want to. Read any comment thread on Reddit and it will tell you that users only like Generative AI when they can do whatever they want with it. Users will demand platforms with fewer and fewer guardrails, and companies will race to provide them, despite the obvious implications, because money.
That’s a scary prospect. We’re creating hyper-realistic deep fake machines that will be accessible to anyone with a few bucks and a smartphone, and then being so kind as to provide the social infrastructure to let users share this content with a wide audience. It’s incentivised deep fakes. What could go wrong? Is this what they mean when they speak about “democratising creativity”? I’ve written many times about why that’s not a good idea, and Sora 2 will prove me right.
At least we can relax in the reassurance that we’ve got consent and regulations to keep users safe. Oh, wait.
The good news is that the platform is not showing any signs of staying power. It’s only a week old (!) and it’s already reduced what free users can generate. It’s also been rocked by copyright infringement complaints left, right and centre, and users are now complaining that the platform is boring as it’s forced to control what people can generate. Sam Altman is caught in a lose/lose scenario — the app is burning his company’s money while rights holders are angry at him for violating copyright, and users are angry at him for not violating copyright hard enough.
(The app, which shot to number 1 in the US charts on the day it was released, now has a 2.9/5 rating.)
I also think it’s still targeting a smaller market than people assume — i.e., those too lazy to be creative and those happy to consume the end product of that laziness. The movement against AI slop is gathering pace. As I wrote a few weeks ago, for me, it all comes down to intention. Creating garbage for the sake of it is a sign of bad intentions, and it shows in the outcome. As we continue to push back against mindless content generation, what possible future can platforms entirely dedicated to promoting it have?
The entire AI industry is caught in a vicious cycle right now: the looming threat of collapse can only be outrun with more capital and more hype, and that’s having a knock-on effect on the products and services being released. Chasing money leads to products no one asked for and pulls these companies away from doing good work. And when the smartest minds on the planet are involved, they are more than capable of good work.
The sooner these smart people start making smart products, the sooner humanity will benefit from this technology.