There is a fine line between using AI as part of your process and using it as your process.
I’m not really here to say where that line is. It’s different for each industry and each art form, and it’s up to each individual — and perhaps their audience — to decide how much they relinquish their creative process to a machine (remember, creativity is the fun part, and I’d suggest doing as much of that on your own as possible).
I have softened my stance on AI a little over the past year. I do use it, mainly as a search engine, and also as a poorly paid assistant, giving it the prompt equivalent of “hey, this task is mind-numbingly boring, can you do it for me?” But my stance has also eased because recent releases have continued to underwhelm, and we’re seeing signs that its current form is not far off its ceiling. You might even say a bubble? Scaling is only going to take the technology so far, and there is a shrinking pool of data left for it to train on. Yes, there are still fears about its future impact, but I’m also starting to feel optimism again, as it becomes clear that AI is too unreliable to fully replace us.
So, for the foreseeable future, generative AI models will remain what they are now: relatively useful tools that don’t live up to the hype, don’t justify the ridiculous sums of money funding them, and whose usefulness doesn’t negate the resources they burn through.
It’s a tool, nothing more.
That distinction is important, given the current conversation around AI. With all tools, the intention behind why you use one is everything. Using ChatGPT instead of Google search because search engines suck now? Cool, though be sure to check those sources. Using Claude to take all of your research, the detailed outline you created, and the style guide you spent hours writing to produce a first draft for you? Fine by me. I know some writers who use AI in their process, and they still take hours to write and edit an article.
Part of the process, not the process.
If the intention is for the project to be human-led and the output to be majority human-made, that’s using AI with good intentions. Past that line, things get murky. Generate an entire article from a single sentence prompt? Produce a “piece of art” from a few words in an image generator? Make a full series of YouTube videos in an hour with the typing of a few words?
Here, the scales have shifted to a machine-led process.
And that’s where Big Tech and corporations see the dollar signs. They want to use the technology to pump out more AI content sludge, at a scale never seen before, for the sake of cutting costs and selling advertisements, all in service of their pockets.
In other words, they plan on using the tool with bad intentions.
This is where the problem lies, now and in the future. AI is enabling output at a frightening scale and speed, which means companies can overwhelm feeds and channels with content. It’s already resulting in mass content farms, created for the sole purpose of selling advertisements (or driving clicks from AI-generated bots). Again, it’s all in the intention. This is a bad use case for AI. These people don’t like books. They don’t like podcasts. They don’t like us. If they did, they wouldn’t be able to sleep at night as they fronted their bullshit companies and shouted about the good they are doing.
These are actual words that came from the CEO’s mouth — “We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life.”
Why do we want half the people on the planet to be AI? Why on earth would we need 3,000 podcasts a week? What the fuck are we doing here? A quote from their co-founder and chief production officer is even worse — “I think it [AI podcasts] exists alongside it, and it can delve into areas where human hosts might not want to go that deep.”
First, we need to stop pretending that LLMs are “deep.” In terms of emotion, depth, vulnerability and relatability, they have nothing on humans. Not a single iota. Second, we need to stop swallowing the narrative that AI content is a separate entity that can live alongside human content. That’s not part of the process, that is the process. It’s a means to cheap labour and cheap content, and claiming to serve a “need” is a cheap disguise to hide behind. They are creating nothing of value here, i.e., creating with bad intentions.
I saw someone describe people who have allowed their brains to melt due to their reliance on AI as ‘skin bags of bones and organs.’
It’s so fitting. And it’s where we’re headed as a society if we don’t push back against this nonsense.
Let’s be clear. You have no integrity if you waste brain energy on this slop. If you are happy to listen to podcasts produced by a company that doesn’t give a single fuck about podcasts, there is no saving you. If you read a book by an author who cared so much about it that they spent a whole 12 minutes creating it, you’re a chump. Your brain is already melting, and you deserve the tidal wave of sludge that’s coming our way. Hell, no doubt you’ll enjoy it. Just remember there are millions of creators out there — many who use AI as part of their process — who painstakingly craft their content. You could choose to support them, but instead, you’ll engage with paint-by-numbers, farmed garbage.
The point here is this: I don’t think the question should be whether people use AI or not. It should be how they use it.
Don’t become a husk of a human who can’t think for themselves, who can’t (or won’t) learn new skills, who can’t exercise taste and form their own opinions, who can’t live life without an algorithm feeding them content, and who can’t appreciate being creative, in whatever way speaks to them, as an art form, as a hobby, and as a way to enjoy being human. Creativity is a truly human endeavour, and we should not hand it over to machines, or support those who do.
We’ll always be skin bags full of bones and organs, but we don’t have to lose our brain and our heart.