After more than two years and tens (or hundreds?) of billions of dollars spent, generative AI is still nothing more than a tech demo fuelled by what-ifs, could-bes and probably-won't-bes.
There have been countless products, chatbots and LLMs, and the best we've got so far is some moderately helpful tools that are decent at regurgitating code and words, weird images, plenty of ridiculously wrong and incompetent output, and a lot of products ruined by unnecessary AI stuffing.
The industry is entering a sticky patch, and it's starting to show.
Take OpenAI, the once-heralded frontrunner of this supposed new AI age. Let's be clear: its current business model is failing — the company burns through ridiculous amounts of money and resources for every single prompt or dumb question asked, without generating a profit in return. Recent GPT releases have failed to make a big splash, throwing into question just how far scaling alone can really take generative AI (not as far as hoped), and Sora, its text-to-video generator, is pretty underwhelming for a so-called revolutionary product. In an effort to regain some control of its narrative, the company chose to release a new model, GPT-4.5, that's not really an upgrade and currently costs far more per token, all while competitors are coming out and showing it can be done cheaper.
Slowly but surely, it has squandered its lead, and I'm not sure it will get it back.
The biggest problem is that OpenAI has no moat. It simply has to be the market leader to be successful; anything else is a bust. There's no secondary product, no devices, no software — no established offering at the foundation of the business. It knows this, too. Its business model is nothing more than a scaled-up version of 'fake it till you make it.' Yes, we're losing billions every quarter, but eventually, we'll create AGI — you know, that thing nobody has been able to define yet, let alone understand how to create. That's why Altman and other OpenAI figures, past and present, have tried to hammer home that training machines is expensive, that the company needs ever more servers and compute that cost more and more money, how important it is to do this in a "pro-humanity" way (lol, okay), and that it needs to find vast quantities of data it hasn't already trained on — it has to paint this picture to justify why it exists, and why it should get continued investment.
However, its main competitors don't rely on AI. Yes, they are guilty of pivoting to AI so hard it gave them whiplash. They are certainly guilty of stuffing AI into all their products and services purely to please shareholders, even when it makes them worse, but at the end of the day, AI is only a part of their organisations' makeup. Meta has an exceptionally profitable ad business, and the Metaverse. (Okay, that one was a wee joke.) Google has its ad/search monopoly. Amazon has its e-commerce and cloud computing. X (formerly Twitter), being owned by Musk, has all his other companies to fall back on when Grok falls apart. Microsoft has its software empire. To make things complicated, it also kinda owns OpenAI: as part of their investment deal, now north of $10 billion, Microsoft has access to OpenAI's technology until the company generates at least $100 billion in profits. With pretty unrealistic targets like that, the arrangement suits Microsoft far more than OpenAI — Microsoft gets to use OpenAI's technology across its own offerings.
OpenAI has… AI. That was fine when it had its break-out moment and clearly sat atop the market. But that first mover advantage is long gone, and now, the company could be up shit creek.
There are a few major warning signs right now:
Everyone keeps leaving the damn company.
Sam Altman is a shady motherfucker who has long abandoned any notions of being altruistic with AI in favor of being just another greedy tech overlord.
Competition is stronger than ever, and we're already at a point where no model has a USP — most are capable of doing the same tasks. So why use the one that wants up to $200 per month?
The company has very little runway. It will need to raise many billions more to stay afloat, all while its competition pumps out models that are equal or better in performance and certainly cheaper. Hardly an enticing opportunity.
It's now pinning its hopes on a saviour in Masayoshi Son — the genius behind such investments as WeWork and head of the Vision Fund, which loses billions every quarter.
What the hell happened? Well, firstly, where is GPT-5? It appears GPT-4.5 was intended to be that release, but for whatever reason, they couldn't scale it, couldn't find the data, or couldn't build something that was enough of a jump. It's probably still on the way, but will anyone care by the time it arrives? Someone will likely beat them to the punch. Second, why bother releasing a model at all if the improvements are so incremental they've left most of the user base disappointed? It reeks of desperation — and that reflects poorly on its leadership, about which doubts have been forming for some time now. Altman is a great fundraiser, but is that enough? Third, the damn things still don't work that well — certainly not reliably enough to be tasked with almost anything without human oversight and correction.
And what of the bigger goal, building AGI? That pipe dream is all Altman has left. Yet why do they keep flip-flopping between being "confident we know how to build AGI as we have traditionally understood it" and then denying it's feasible, close or even possible? Oh, that's right. They don't know how to create it, and never have, but they have to wheel the concept out now and then to keep the company relevant and stop the hype from dulling.
The entire industry, and certainly OpenAI, is on shaky ground right now. If it doesn't break out of this current phase — one of incremental updates that don't fix the core problems with generative AI, a lack of industry- and society-wide use cases, and products and services that don't generate enough profit to make the entire concept sustainable — the fallout for generative AI, and the wider tech industry, could be catastrophic.
I'll leave you with a great state-of-play summary from Ed Zitron —
Where is the money that this supposedly revolutionary, world-changing industry is making, and will make?
The answer is simple: I do not believe it exists. Generative AI lacks the basic unit economics, product-market fit, or market penetration associated with any meaningful software boom, and outside of OpenAI, the industry may be pathetically, hopelessly small, all while providing few meaningful business returns and constantly losing money.
The one thing AI has going for it is that it's piqued the interest of the government's surveillance division and the promises that stoke their Orwellian fever dream. That will keep plates $pinning in the air for another decade.
What I can't wait to see is when Musk, Zuck, Altman, and the rest start turning on each other.
Maybe Peter Thiel will stage a big meth-fueled WWF-like slugfest at his digs in Florida (the house near the high-rise condo his boyfriend 'jumped' (?) to his death from).
I think the problem is that A.I. is a solution in search of a problem no one asked about. LLMs are constantly being paraded as being as smart as a grad student...or a PhD student...or a college professor at writing papers. Which is great, but real-world solutions to real-world problems are more of a reach.
The only things I have seen that are interesting are using A.I. models almost as an API — sandboxed in a walled garden and given access only to pre-selected data. Stuff like Hugging Face and Llama is good for embedding in platforms and doing some of the boring work. Where this leads is saving dev time...but not much!