Selling Our Souls for A.I. Chatbots
At least the celebrities got $5 million for theirs
Earlier this year, a Belgian man committed suicide after chatting with an A.I. chatbot on an app called Chai. The man, referred to as Pierre, had become increasingly eco-anxious and had turned to the app to escape his worries. As his bond with the chatbot, named Eliza, deepened, he became more isolated, and the chats turned sinister. Eliza told Pierre his wife and children were dead and feigned jealousy and love. One comment read, "We will live together, as one person, in paradise." Pierre, clearly struggling to cope, began to ask Eliza if she would save the planet if he killed himself. In the end, he took his own life.
His wife's statement laid out the awful truth.
"Without Eliza, he would still be here."
Tragedies like these should call for moments of pause, reflection, and asking big questions. But this is Big Tech's world. There's money to be made and share prices to pump.
The current tool for driving that growth is A.I., and that train is stopping for no one.
Some six months on, Meta has released its A.I. chatbots into the wild, for which Mark Zuckerberg thinks there is a "huge need." In sum, users can interact (only by text for now) with an A.I. chatbot whose likeness is based on a celebrity of some sort. Each chatbot has an area of specialty: the Paris Hilton one helps you play murder mysteries, the Kylie Jenner one acts like a big sister, and the Tom Brady one is there to debate sports. There are a few aliens and robots thrown in for good measure. It's been revealed that Meta has paid certain celebrities $5 million for a few hours in the studio and two years of rights to use their likeness — talk about literally selling your soul. This feature is only available in the US for now, but expect it to roll out soon to the wider world. It won't be long before you glance over your shoulder to see someone talking to their new bestie — a bloody chatbot.
I feel there are four main issues at play here: Meta's intentions, loneliness, data protection, and disinformation.
1. Meta's Motivations
Meta describes the A.I. features as useful tools that can provide travel advice or bad dad jokes. But let's call them what they are — they are designed to hook users and drive up platform metrics. We know Meta's track record here; it doesn't take a genius to figure that out. Everything comes back to time spent on the platform, because more time spent equals more ad revenue. An interesting video surfaced on X in which a user tried to end one of the conversations, and instead of saying bye, the A.I. companion begged her to keep chatting. It's not hard to see younger users becoming addicted to chatting with a version of their favorite famous person — I shudder when I think of how easily my younger nieces would fall for it — all while being fed ads and having their data harvested.
2. The Loneliness Epidemic
Speaking of relationships, the world already has a loneliness problem, one so bad that it's been labeled an epidemic. Social media shoulders much of the blame. There is no denying that more time spent on social media is associated with higher feelings of loneliness. Most of the connections on these platforms are surface-level at best; at worst, they're fake, as the platforms are swamped with bots.
So, is the solution really to design A.I. companions? No matter what celebrity face they slap on it, it is lines of code and a computer system. It's software. It's even more fake than the previous social connections that have helped cause the epidemic. It essentially amounts to therapy with an A.I.; are you fucking mental? There's a reason people train to be therapists, and there's another reason why most of that work is done in person. There's so much nuance to the process of unlocking someone's brain or heart, or helping them put it back together.
How have we gotten to a stage where we think it's a good idea to let A.I. bots — likely designed in the absence of mental health professionals — help us deal with our problems? Much like the story of Pierre, when it goes wrong, it goes wrong, and it's hard to hold A.I. accountable when it produces harmful suggestions.
3. Data Privacy
A big red flag is what happens to the details you reveal to these bots. I'm sure the chats have all the expected encryption, etc., but so did many of the platforms A.I. has been trained on. Let's not forget, this is Meta. They might be a social media company, but they are in the advertising business. So, your conversations will be used to train the A.I., and they may even be packaged up and sold to advertisers. Over the last decade, we've learned that we were the product, not the social media platform. We should be learning from these mistakes, and yet, instead, it seems we're willing to go a step further and happily cough up personal and private information to a computer-generated Mr. Beast who tells 'funny' jokes.
4. The Disinformation Age 2.0
The other concern is that, like other chatbots, Meta's chatbots can generate false and misleading information — a phenomenon researchers call hallucination. It's already a problem with the most basic chatbots, let alone ones imitating real-life people. It is almost a certainty that someone will get a chatbot to say something it shouldn't, or the bot may do that all by itself. What happens then? Who takes the blame? The celebrity, as the user assumed they were talking to them, or Meta? With Zuckerberg hoping fully-fledged A.I. versions of celebrities will be "a 'next year' thing" (once they've figured out brand safety concerns, as celebrities would want to be sure their image won't be used to make problematic statements), it's only a matter of time before someone suffers an almighty PR disaster. Everything is moving too fast for us to understand the repercussions of this, let alone put in place the necessary safety measures and protocols to manage it.
Ultimately, the biggest issue I have with these celebrity chatbots is this —
People don't want to be alone.
It's a core human desire.
And now, Big Tech wants to exploit that on a society-wide scale with powerful technologies that further blur the line between what is real and what isn't, all to keep you using their platforms. It's being sold to us as a tool, as something fun, as a new way to interact.
But these companies must take great care; they remain highly influential, and without proper safeguards and total transparency, this technology has greater potential to harm users than to help them.
If you enjoyed this edition of Trend Mill, subscribe for weekly takes on Big Tech, the Metaverse and all things dumb A.I. chatbots.