Well, it happened. AI-generated videos—created directly from prompts—are now at a level that would pass for run-of-the-mill Netflix slop.
I don't know if that's a compliment to generative AI's progress or a dig at Netflix and its paint-by-algorithm content mill. It's probably both.
I'll admit this is the first time in a while that an AI release has caught me off guard. Most of the time, I hear, "This is the most powerful model yet," and yawn. The demos waffle on about a big nothing burger, like a user performing a task they could do much quicker without AI. It's always more of the same: slightly faster, slightly more capable, slightly less wrong, thanks to being trained on even more of our data, art, creativity and writing in return for zero compensation, all while burning through more of the planet's resources in the process.
But these latest video updates have me a little spooked.
This one was produced using Google's Veo 3, which was released during a keynote the other day. Yes, it has some of the usual jank, that uncanny-valley quality that throws you off a little. Yes, you can still tell that it's AI, but let's be honest: for how long?
There are some pretty crazy ones in this thread. The gap is closing by the day, and it's clear we're only a few years away from AI-generated video being visually indistinguishable from human-made output.
As we hurtle towards this future of blurred lines, the usual questions linger.
Do we need this?
Who is it for?
How does it make money?
The answers to those are pretty straightforward.
We don't really need it. It's essentially still a big tech demo to show what AI could be capable of, which drums up hype and keeps investors frothing at the mouth, desperate to throw their money into the ring. Remember, that’s the key aim of almost all AI companies — more venture money equals higher valuations equals bigger dick to swing around.
It'll be sold as a way to "democratize creativity" (more on that next week), or, as Sam Altman likes to put it, for the benefit of humanity, but really, it will only serve to consolidate wealth at the top. Once tools like these reach even 80% of the level of human output, it means one thing: they replace human capital. No wonder actors went on strike; as the streaming wars continue to be a zero-sum game, there's no doubt the suits see this technology as a way to produce more for less.
It doesn't really make money, aside from some monthly subscriptions, which don't even cover the computing costs of prompting, let alone the training costs and the astronomical salaries these companies are paying. That's a big problem. Right now, AI companies (OpenAI in particular) are having to raise capital to stay afloat, staying true to that "fake it till you make it" ethos of Silicon Valley. If you are anti-AI, consider this the one potential saving grace. If the technology fails to make money or to justify why businesses should invest in it (say, through Enterprise packages costing thousands per month), then I can't see how any of these companies survive in the long term.
But there's a more worrying question that exists, and it always seems to get skipped over.
Where are the safeguards?
It's not too extreme to say that the ability to generate videos of anything and anyone from a simple prompt has very dangerous downsides. While we used to laugh at attempted deepfakes (remember Will Smith eating the spaghetti?), we ain't going to be laughing for much longer. We are entering the deepfake era. It's a world where we'll have to try to distinguish between what's real and what isn't, where almost anything can be dismissed as AI, have its legitimacy called into question, or be mistaken for the real thing.
Worse, it's going to be a world where real people have their lives turned upside down or changed forever, all thanks to videos of them generated in a matter of minutes.
Let's not kid ourselves: as these tools continue to progress and get copied by other companies less beholden to shareholders and public perception than Google is, we're going to see an explosion of deepfake porn, fake news, defamation, character assassination and more. They are going to be so lifelike that it will become nearly impossible to rule out that they could be real. We've all seen the cesspit the Internet has morphed into over the last several years, and this strikes me as the ultimate fuel source for that fire.
I don't see how that can't be scary.
I hear the argument already: you could already do this! Sure, if you were an expert in the relevant tools, software and programs. To use the phrase again, this is "democratizing" creativity: reducing the need to learn anything or gain any mastery or proficiency, bypassing the inputs and going straight to the outputs. It's letting anyone with an Internet connection get 75% of the result for 0.1% of the effort. The trolls are rejoicing.
So, how are we going to protect ourselves? Where are the guardrails? Why are we seemingly happy, as always, to let a handful of tech overlords do whatever the fuck they want under the guise of "move fast and break things," when that thing could be society itself?
OpenAI once touted itself as wanting to create the "good" AI before others created the "bad" AI. In its early days, it pointed to Google and DeepMind as potential bad guys. It was going to be the company that would help keep AI in check and always have humanity at the forefront.
It was a noble cause for the whole five minutes it lasted.
Then, venture capital entered the space, and all the AI companies became the same. They are all chasing the same goals, working for the big tech giants who have already done so much harm, desperate for first-mover advantage so they can consolidate power and money and fatten the wallets of their shareholders. It's an AI arms race, and we are all caught up in the middle of it.
Make no mistake—humanity has slipped down a place or two on their list of priorities. This is evident in the lack of thought, care, or concern for how these tools can be misused. AI is fast opening the door to a deepfake world where we lose the very concept of what is real and what isn't, and those leading the charge don't seem to care.
All they care about is being the first to ship it.
It's incredible how short the golden age of "video evidence" will prove to be when someone can generate "evidence" on a whim.
To think we were getting closer to a place where we could all objectively see the truth in real time, and now we have to question everything we see. It's flabbergasting.
I believe all these technologies will ultimately push those looking for a true and meaningful life to unplug and embrace raw, disconnected living again.