Whap. Whap. Whap.
Mr Altman is on the press tour again, and the bullshit-o-meter is off the charts.
When Sam Altman recently spoke about the GPT-5 launch, he compared it to working on the Manhattan Project, saying, "There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: 'What have we done?'
"Maybe it's great, maybe it's bad, but what have we done?"
Well, with the release of GPT-5 last week, it's clear that OpenAI has certainly dropped a bomb, just not the kind they were hoping for (or fearing?). The reality is that, once again, this update is kinda better, which means it still does some of the stuff, some of the time, switching between levels of output that range from "holy shit, that's amazing" to "what the fuck are you doing?!" The revolutionary leap forward that was promised is nowhere to be found, despite what the cherry-picked "benchmarks" and wonky bar charts tell you.
Altman said that using this model made him feel "useless," and hyped up a new reality where interacting with AI models will be "like talking to an expert in any topic, like a PhD-level expert."
It's so PhD-level that it can't draw and label a map of the USA, correctly label a bike, or solve challenging math problems, and it still gets caught out regularly. When it does, it still defaults to making stuff up. There are some great examples here. This is why AI models can't be trusted to run without extensive human oversight, and why I remain unconvinced that agentic AI will become a reality. Big businesses, you know, the ones that will pay the enterprise-priced subscription fees, will not risk catastrophic mistakes and errors for the sake of cutting a few employees off the books.
OpenAI has also had to confront a strange new phenomenon, which I assume is called model attachment, but perhaps I've made that up. Users are forming bonds and connections with certain models. Some like the way it butters them up and tells them that they're god's gift to mankind. Others interpret the model's language as personal (reminder: it isn't, it's data and calculation) and have become dependent on it. I'm not judging, but this is very unhealthy behavior. So when OpenAI released GPT-5 and removed the previous models, there was an outcry. Users took to Reddit and socials to berate the company and demand their soulmate back, and there was even a petition to have 4o restored. (It is now, but it's behind the paywall.)
All this to say, this is the week that reality hit home.
There's no doubt that GPT-5 is a useful model, but the company wasn't promising useful. Useful doesn't give your CEO an existential crisis akin to creating a bomb that went on to kill hundreds of thousands of people. No, the company promised that this was the one, the big leap forward. It isn't. All it's done is prove that we're still no closer to the much-heralded AGI. In fact, this release has shown that we're much further away than our tech overlords tell us. Data is running out, scaling and training are no longer leading to huge jumps between models, the companies still don't turn a profit, and they still need to invest billions more in infrastructure, servers, and training, all while paying salaries that are starting to tick north of $100 million in a cutthroat battle for talent. But acknowledging this would halt the hype.
And so, Sam Altman and his company are on the defensive, and that means he's defaulting to his true talent: spinning the bullshit.
Just four days after the flat launch of his flagship product, Altman was spewing out some pure tech yogababble, a line that might go down as an all-timer:
I mean, that would only require the colonization of the planet, commercialized space travel, the creation of a new job market, and the capital to fund all of this, in a short ten-year span. The chances of this are a solid 0/10.
I think it's time more people opened their eyes to this: Altman is a snake oil salesman. His skill is getting investment, and he knows how to drive hype. His company is somehow valued at $500 billion despite never turning a profit! That's all the evidence you need. He's good at talking the talk.
But is that really enough? What happens if we, or for once the media, stop eating the bullshit he and others feed us? Is having products that do stuff to a semi-reliable level while burning through buckets of cash enough to sustain these crazy valuations and the huge VC investments that are essentially propping up the tech stock bubble, and likely the wider economy?
That's the worry. We've seen time and time again that the bullshit only goes so far. You can only successfully 'fake it till you make it' if you actually make it. The problem for AI is that it's becoming interlinked with everything: software, finance, data, relationships, and more. The mess that awaits if this all turns out to be nothing but a flash in the pan is pretty scary.
Still, maybe it doesn't matter. We'll soon all be working in space anyway. Set your clocks and get your spacesuit ready; ten years will be here in no time.
"Altman said that using this model made him feel "useless,""
. . . I don't see why he should need an AI for THAT.