After over two years and tens (or hundreds?) of billions of dollars spent, generative AI is still nothing more than a tech demo fuelled by what-ifs, what-could-bes, and what-probably-won't-bes.
The one thing AI has going for it is that it's piqued the interest of the government's surveillance division, with promises that stoke their Orwellian fever dream. That will keep plates $pinning in the air for another decade.
What I can't wait to see is when Musk, Zuck, Altman, and the rest start turning on each other.
Maybe Peter Thiel will stage a big meth-fueled, WWWF-like slugfest at his digs in Florida (the house near the high-rise condo his boyfriend 'jumped' (?) to his death from).
I guess in a way they've already turned on each other, but if it led to their collective downfall… pass me the popcorn.
I think the problem is that A.I. is a solution to a question no one asked. LLMs are constantly paraded as being as smart as a grad student... or a PhD student... or a college professor at writing papers. Which is great, but real-world solutions to real-world problems are more of a reach.
The only things I have seen that are interesting are cases where A.I. models are used almost as an API, or like a Google LLM, sandboxed in a walled garden and given access only to pre-selected data. Stuff like Hugging Face and Llama is good for embedding in platforms and doing some of the boring work. Where this leads is saving dev time... but not much!
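To make that "walled garden" idea concrete, here's a minimal sketch using the Hugging Face transformers text-generation pipeline: the model is handed only pre-selected documents as context and asked to answer from that text alone. The model checkpoint, documents, and question are all illustrative placeholders (the Llama checkpoint shown is gated on the Hub and can be swapped for any small instruction-tuned model), not a recommendation of this particular setup.

```python
# Minimal sketch: constrain a model to pre-selected data ("walled garden").
# Checkpoint and documents are placeholders for illustration only.
from transformers import pipeline

# Pre-selected data the platform controls; the model sees nothing else.
documents = [
    "Ticket #4521 was resolved by restarting the ingest worker.",
    "The ingest worker runs on host worker-03 and is restarted via systemd.",
]

# Any small instruction-tuned model will do; this one may require Hub access.
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

question = "How was ticket #4521 resolved?"
prompt = (
    "Answer using only the context below.\n\n"
    "Context:\n" + "\n".join(documents) + "\n\n"
    f"Question: {question}\nAnswer:"
)

# Greedy decoding keeps the output short and repeatable.
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```

That's roughly the "API in a sandbox" pattern: the dev time saved is in the glue code around retrieval and prompting, not in the model doing anything novel.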
I've found some use in Perplexity, but honestly that's because Google has fucked its own product so badly by chasing AI. Kinda ironic.
At base, LLMs are not intelligent, and that appears to be a fundamental flaw for scaling their usefulness.