'GenAI' is De-Levelling the Market
Its biggest success has been to amplify the inherent laziness found in wannabe creators, bad actors and boardroom execs
For most of my writing career (over 8 years), I’ve done bucketloads of editorial work. I’ve run publications, edited articles from CEOs and “thought leaders” — is that term finally dead yet? — and worked on books by high-flying business execs.
I’ve edited some great, industry-recognized words.
That’s the fun side of the gig.
Recently, and not out of choice, I’ve started to experience an emerging less-fun side of the gig: editing AI-generated sludge.
I’ll get right to it. On the whole, the stuff I’ve seen sucks.
I don’t care what the Twitter/X gurus tell you when they try to sell you their AI prompt-writing course, or what the so-called “prompt engineers” like to kid themselves into thinking: AI-generated writing is, at best, serviceable. It’s mid-level, vanilla, by-the-book writing, lacking all the emotion, vulnerability, opinion and quirks that make writing such an impactful medium. It’s the fast food of writing; it fills the hole, but you could have made something much better yourself.

In my editorial work, my task is usually to make it sound “more human,” but therein lies the problem. The writing is 100% un-human. It leaves me scratching my head as I read through it, because most of what it spits out doesn’t read like any human speaks: strange turns of phrase, word choices that try too hard to be smart, paragraphs that say the same thing twice and take double the word count only to land the point poorly. Worst of all, it likes to make stuff up, so I spend too much time fact-checking.
I’ll admit that when AI burst into the mainstream in late 2022 with the release of ChatGPT, I was worried about the potential implications for all of us who use the written word to make a living. Yes, the early output it produced was sucky, but at the exponential speed it seemed to be improving, it was only a matter of time before it wrote better than humans. Or, at the very least, it would figure out how to replicate us to a point where it was impossible to tell the difference.
That was a little bit premature.
AI-generated writing has stagnated. It still produces output that takes so long to edit that a human may as well have written it in the first place. It hasn’t reached a level where it could replace even a semi-skilled writer, especially in long-form content. All it has really achieved is allowing people to push out more quantity, with less thought, less craft, and a huge sacrifice in quality.
I’ve said before that I think generative AI is killing creativity because it side-steps the craft needed to do something like write well. As a result, its biggest success has been to amplify the inherent laziness found in wannabe creators and bad actors and to give boardroom execs a way to cut out the “cost” of creativity — the humans who produce it.
Just look at the reasons people give for why it’s “revolutionary.” People tell me how they use it to pump out their month of social content in an instant, or to write an email for them, or to summarize a book they can’t be bothered to read, or to give them a list of things to write about. How revolutionary. The only improvement, if we can call it that, is the speed of output. It achieves that by removing the element of thinking. We all know that thinking is what unlocks the truly great pieces of work, hence my belief that it kills creativity.
Now, the result of this is becoming clear.
Generative AI is de-levelling the market.
What I mean by that is that it’s lowering the accepted standard of work and driving down the cost needed to achieve it. The people paying for creative work generated by AI are settling for less because it’s cheaper, which is terrible for everyone. I’ve seen it with my own eyes in the writing world. What people are willing to pay for writing doesn’t match what it costs to get a real, skilled writer to do it. So they’ve turned to the cheapest means available. The more companies, execs and brands do this, the more it drags down the level of acceptable work. With budgets tightening across the media and content landscape, it’s becoming all too tempting to turn to the generated sludge to save a buck. Before we know it, we’ll be trapped in a race to the bottom, ever-cheapening the quality and cost of the craft.
There might be some good news ahead, though. (Good news, on Trend Mill?! Must be a first).
As I mentioned, I assumed GenAI would soon be at a level to replace certain forms of human-produced writing. It may still get there. But there’s a wider thought forming that we could potentially be reaching “peak AI.”
Why? Because it’s possible that the LLMs and other artificial intelligence models are running out of data to learn from. They have almost cannibalized every form of data available, leaving very few legal options on the table. (Note: Coincidentally, as I was writing this, the Financial Times announced it had agreed a deal with OpenAI to use its data for training. Perhaps the media content floodgates will soon open?) There is talk of transcribing YouTube videos to give OpenClosedAI more data. Rumors are swirling that companies like Meta would love to hoover up your messages and DMs for yet more data. Then there is the small issue of copyright, and many media sites and social platforms are making it harder for AI bots to train on the data within them (at least for free). Basically, without more data, the technology could be stuck somewhere around this level. Unless it turns to the content it has already generated — something called model collapse — in which case the quality of output will get even worse.
All this should be worrying for AI pushers everywhere. Not only because, at its current level of capabilities, it’s just not that good, but also because there are signs that public interest is waning and usage of ChatGPT is declining. It seems the consumer is tired of being told which technology will change their lives (think Metaverse, headsets, NFTs) and is wary of accepting AI. There has been continued backlash from creative types, with people pushing back against AI imagery and music. I’ve noticed more and more creators label themselves AI or LLM-free as an almost badge of honor. Companies are not yet seeing the profits promised by incorporating AI. There has been a lack of breakout companies and products since OpenAI, which has shrunk the hype distortion field. The current hardware projects have been underwhelming, or worse, absolutely broken on launch. The technology is still struggling for real use cases beyond “make things possible to do without any creative talent or thought.” And biggest of all, the computers doing the training require an outrageous amount of energy that will come at a huge financial cost, and likely an environmental one too.
What does this mean for the future? My hope is that the technology continues to stagnate and that more people wake up to the fact that the content produced, whether writing, imagery, or video, is not that good or beneficial and is only serving to de-level the creative markets. The more we realize this, the more the hype will slowly fade away, and tech overlords like Sam Altman — I recommend this read on him and other non-tech founders who are ruining tech businesses — will fade with it.
If you work in creative fields, don’t lose hope. It’s a little sucky out there, but the tide shows signs of turning against GenAI.
Keep doing you.
Keep being human.
On the Trend Mill this week
Stuck in the rabbit hole — Another week, another underwhelming AI product launch. This week, the Rabbit r1 was released into the wild, and, surprise, surprise, it’s an unfinished device that can only do a few things and fails to deliver on much of its promise. It’s the anti-app device that should have been an app. I enjoyed this 20-second destruction of the device. Remember, folks: buy the product, not the promise.
Devoid of reality — Apple has reportedly cut shipments of its Vision Pro headset due to underwhelming sales. Who would have thought it? A device that most consumers don’t want, made by a company that didn’t want to make it, released into a market that was already turning its back on the idea of headsets after wasting billions on the Metaverse with little to show for it. The big question is whether the cheaper version, which is said to be pushed beyond 2025, is now buried.
Well said — and I find it strange how people who lack the interest or discipline to actually build creative skills seem so excited about this. I was at a conference where a speaker excitedly exclaimed, "Now, everyone can be a novelist!" No, that's not how it works.
The story’s the same with AI-generated music. It sucks. Many producers are touting the incredible things it can do, but it’s just nifty. It doesn’t make anything good. It could someday replace stock music, but that’s a long way off. It will never replace real music, though, because real music is art, and art is innovative and rule-breaking. I just hope in the meantime our race to the bottom doesn’t last too long or cause too much permanent damage.