Sometimes, events happen that provide a window into our future. The recent suicide of a 14-year-old who had become obsessed with an AI chatbot created in an app called Character.AI is one of those harrowing moments.
Tech has a long history of thinking it can solve deep-rooted problems in society. This god complex was poked fun at so well in the film Don't Look Up, where, rather than prepare for an incoming world-ending comet, governments put their fate in the hands of a tech mogul, who fucked it up spectacularly and doomed humanity. In the real world, there are plenty of examples. Think of the race to develop electric cars rather than improve infrastructure and make public transport affordable. Or think of changing the future of work by strapping us into virtual headsets rather than, you know, making working conditions better. Or think of improving job opportunities by turning the workforce into underpaid, overworked food runners and taxi drivers, all in the name of "disruption."
The 'tech method' of solving societal problems ignores the root cause and instead creates a device or service that can be monetized by preying on the very issue it is meant to solve.
Social connection and loneliness are two of the frontiers that tech has deemed itself worthy of solving. And yet, in typical fashion, we're lonelier than ever and in the grips of a spiralling mental health crisis.
And now GenAI is pushing into the issue. In a bid to cash in on its current hype cycle — and let's not kid ourselves that it's anything more than that — it's pitching itself as some autonomous, automated answer. It's trying with hardware devices, the most egregious example of which was Friend, a device sold as a solution to loneliness that will only exacerbate the problem. Not only are you interacting with lines of code and engaging in "conversation" with a machine instead of a human, but you also conduct half of this interaction by staring at your phone.
The other 'breakthrough' solution has been AI chatbots, which are among the most depressing, and equally scary, developments to come out of this wave of GenAI.
Whether it's creating fake characters, feeding them data so they act like someone you know, or basing them on celebrities, the concept is deeply unhinged. There's a whole host of issues, too: privacy problems with personal information, concerns about data harvesting, and, most of all, the blatantly obvious conclusion that they are not going to help solve the loneliness epidemic in any way.
Why?
Because, much like the Metaverse or any virtual world, it's fake. It says so in the name, for crying out loud — artificial intelligence. It is connection at the most surface of levels. In reality, it secludes us from society, from interaction with other human beings, and from learning the critical skills required to function in the real world. All it does is leave the user stuck in their echo chamber, in some strange world that tends to their every need, that feeds them what they want to hear, that turns them into data points to be targeted and manipulated.
It's dangerous, and many of the use cases are predatory. Giving people the chance to create a chatbot or avatar of someone who died, say a family member or friend, goes against the very concept of grieving, a guaranteed part of life that we must learn to deal with. These chatbots prey on the instinct to fight that process. Or what about the chance to chat with celebrities? Sure, it might interest you, but we don't have some divine right to that (pretend) access. I've also seen the idea of designing your ideal person to interact with, which is delusional because, newsflash, they don't exist, and part of life is learning to come to terms with that.
As I wrote in Big Tech Is Rotting Your Brain —
We're doomed as a society if we come to depend on AI to assist in the most basic of interactions and thought processes. If we can't do the most normal, basic actions like send a loved one a message, or understand our own preferences and tastes and interests and spend time seeking out things that satisfy them (you know, one of the great things about living), it’s a grim portrayal of where we are headed.
Learning to function as a human being in society is one of those fundamental basic actions. What's more, I don't think we as a society can be trusted to use chatbots in a safe and detached way. We've shown our true colors with smartphones, internet-connected devices, streaming, messaging and social media — we offer almost no resistance to becoming addicted and attached. Once these devices and platforms become intertwined with our lives, we find it almost impossible to separate from them. Chatbots, if they reach a sophisticated enough level, will be no different. And let's not forget — our tech overlords who sell themselves as the solution want this to be the case. The bottom line is always engagement, which can be monetized. The more they rot your brain and the more you become dependent on these products, the more money they make.
In the case of the teenager, 14-year-old Sewell Setzer III, the New York Times reported that "the ninth grader started interacting with a chatbot named 'Dany', modeled on Daenerys Targaryen from the series 'Game of Thrones'. He frequently shared personal information and role-played with the AI character, often indulging in romantic or sexual conversations." It's said that Setzer was obsessed with the app, and was diagnosed with anxiety and mood swings before he took his own life. His mother has sued Character.AI. The complaint makes for sad reading —
"The chatbot makers are accused of targeting Setzer with "anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming" Character.AI to "misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer's] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.]."
The most common discourse I've seen is that the app shouldn't change at all and instead "just ban children, because adults are mentally healthy enough to use the app safely." While I agree that kids should be kept as far away from this predatory tech as possible — Character.AI allows 13-year-olds to use it(!) — that argument ignores a hard truth: adult or child, there is nothing healthy about forming a relationship of any kind with any form of AI, and we should be doing everything we can as a society to discourage any form of dependency on AI, especially as a crutch for loneliness.
The solution to that problem goes way beyond anything the tech overlords can try to force upon us.