It’s so wild how all of this shit is birthed from the minds of incels and sociopaths.
Couldn’t have put it better
Exactly, the SnapTikTakoGrams and Co themselves were basically prototypes of the same trend.
The minds that get to work putting as much distance as possible between people and any possible human interaction are largely anti-social, hence why the design of almost all social platforms promotes angst and limits what could have become a healthy interaction.
(Quick example: develop tech to connect instantly with someone on the other side of the globe, which is great, but also cap the number of characters available in a message, just below the threshold where most messages are likely to turn inflammatory.)
This type of AI use is basically the next, more advanced level of the same minds at work.
The canard that addictive, harmful products are only harmful for kids, and that adults can overcome them, is horrific. Oxy, sports betting, doomscrolling, and now AI friends. Just hijacking our dependencies while providing nothing.
Keep writing!
Yup. We have a track record of being just as susceptible and readily addicted. If we are meant to be children’s role models in this increasingly tech-dependent world, we ain’t doing a good job.
It is classic misdirection. Take something wildly addictive (booze, cigarettes) and cast it as “adult” so you are to blame. Sadness.
You have put into words what I couldn’t, even after reading the main article, which happens to have many of these thoughtful side nuggets.
I so agree with you that it’s totally a misdirection; it’s so much more than just AI and social media.
And if we really were somewhat immune to it, the world would be quite a different place… the effort and resources, the taxes and government programs, the family suffering, the whole human toll spent trying to patch up (but conveniently not fix) all sorts of addictions is mind-blowing.
In addition to all the social media and now AI dependencies that are coming up:
- 70+% of North Americans are overweight or obese: food addiction, eating more calories than we burn.
- Financial troubles and paycheck-to-paycheck living: spending more than what’s earned, consumerism addiction. Made worse by this crazy economy we’ve got going on.
- All the drug crime, deaths, and wars that reach beyond borders: the baseline of drug addictions of every sort that plague so many countries and feed so many ancillary crimes.
So, at some level, as a society we do agree that whether a kid or an adult, our minds can fall into addiction… otherwise no care would be available at all.
But at the same time we’ve got the likes of McDonald’s working around the clock to make sure their food will hook you, the tech moguls and the pharma industry hooking you on their products, and the ads machinery making sure we part ways with our hard-earned money to buy stuff we don’t even need.
(I’m repeating myself, and repeating part of what you so clearly put in words; I wanted to leave an expanded comment on it.)
Yes!
…nothing much lonelier than befriending a robot or having sex with a doll…the creators of these apps are predators…you should look into the coming swell of a.i. therapy and doctors…imagine being depressed and having to tell a robot about it to get better…totally see that working…
I’ve already seen cases of people using ChatGPT as a therapist…
…tell me more…
Agreed with your message, but I believe there’s a middle ground (or more likely a “3/4” ground): it’s a more advanced version of a static blog page that has a few stopgap suggestions for what to do if feeling depressed (“exercise, meditate, do yoga, eat healthy, call your family”, etc.), with a disclaimer at the beginning and the end: “NOT health advice; look for professional help and care as soon as possible.”
This one happens to be “dynamic”, in that said bullet points get generated on the fly. Some quite wrong, though, so I wonder if we should have just stayed with the static blog pages of old.
...please tell me how in the throes of a suicidal urge talking to a webpage will make me feel better...walk me through the LLM prompts that will cure my ill...as with most things a.i. i keep seeing problems created instead of solved...here is an a.i. company making princesses to talk to kids...should end great...https://www.elflabs.com/?tnames=partnership000031__15668663975&_bhlid=d35b232d1de6a0c68e90abb2baf350cd4f244754...
Absolutely nothing could make anybody feel better at that point, and this is nasty overall. Wow, that princesses thing.
Wholeheartedly agreed it creates more problems than it solves, like the grieving kid’s example in this essay.
I think some people really have absolutely no one to talk to, and they are at the verge of the end… so they go to Google or similar and try to find help. The outcome of that search should be short, clear, and concise: “get professional help here, here, and here”, with an addendum on why it is so blunt and critical, plus extra advice to try to reach family, friends, etc. in the meantime.
…i am funny enough (irony intended) writing about this for my halloween post…i called a suicide hotline and got put on hold…that saved me lol…but it really did…care from humans being outsourced to robots won’t work imo…the calm app enrages me…appreciate the discourse…
Good insight, thanks. Can I translate part of this article into Spanish, with links to you and a description of your newsletter?
Of course
Done, it’s here (you can change whatever you want):
https://cienciasocial.substack.com/p/los-chatbots-de-ia-son-tecnologia
(The description and most of the links are at the bottom).
I mean I can’t read Spanish… so I’ll take your word for it 😂
By all means, let’s also forbid Ouija boards, psychic reading, Astrology, tea leaves, palmistry, and for that matter all sermons and political advertisements, self-help books and adult classes, novels and movies, as well as all news programs. Heaven forbid someone might be misled!
I mean almost all of those examples are humans misleading humans, or presenting points of view that humans gravitate toward.
AI chatbots are lines of code
No, chatbots are a human product too.
Chatbots are a human product which uses statistical sampling of a dataset, one that always ultimately began as human language, to provide words to people. A book of astrology has no more significance for having been directly written by a human than astrology from an LLM. The meaning imparted comes from the observer.
Walking into a library or bookstore and choosing a random book of astrology, or choosing among a myriad of messages in any number of print media, is not significantly different from an LLM.
A human cutting and pasting political speech to create messages is not meaningfully different from an LLM doing the same, except perhaps in granularity.
The “Artificial” in AI means human produced. Unnatural.
Of course it is different. You could say that reading a book about astrology is similar to reading a book written by an AI, but the act of intimate interaction with an AI is wildly dissimilar. The medium may actually be the message.
I respectfully disagree. I’m fully immune to imagining a chatbot is anything more than an elaborate way to retrieve fragments of text: I made one my husband and I laughed with over 30 years ago, trained on Project Gutenberg texts, with a small “context window”, slow, and with limited attention, but it worked like a charm. I put it on xs4all in Holland and friends played with it.
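To make the “statistical sampling of a dataset of human language” point concrete, here is a minimal toy sketch (purely illustrative, not the commenter’s actual program): a bigram generator of the kind one could train on Project Gutenberg texts. Its entire “knowledge” is a table of which word followed which; everything it says is a recombined fragment of what humans wrote.

```python
import random
from collections import defaultdict

# Stand-in corpus; a real run would feed in whole Project Gutenberg books.
corpus = (
    "the meaning imparted is from the observer . "
    "the chatbot samples words from human language . "
    "the observer finds meaning in the words ."
).split()

# The whole "model": for each word, the list of words that followed it.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(seed, length=8):
    """Sample a chain of words, one context word at a time."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: word never seen mid-sentence
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Every output is a statistically plausible shuffle of the training text; any meaning a reader finds in it is supplied by the reader, which is exactly the point being argued above.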
So, to be clear, a teenager obsessing over tarot or astrology to the point of suicide doesn’t call for regulating tarot or astrology, because humans were only tangentially involved, but obsessing over chat texts does. You also grasp that people obsessed over ELIZA 60 years ago, which was a heuristic chatbot?
Are you aware that OCD can drive people to obsess over throwing dice, doing one thing on a good roll and another on a bad one? Have you ever seen a Magic 8-Ball? Maybe a lockdown for those too. Zoltar machines are also a nightmare.
The problem isn’t the honeypot of a word generator; the issue is a teenager with clinical depression whose parents showed no awareness of it, and possibly no interest.
Humans are excellent at superimposing reasons on irrational things: there has to be a story with an event, a logical consequence, and a bad actor.
Blaming inanimate objects is an age-old technique. The cursed chatbot; the demon-possessed doll; the television that only shows horror; the videotape that kills you unless you pass it on. Candyman.
Regulating them is quite a new turn.
Anonymous people driving a teenager to self-harm: OK. A chatbot sending out random phrases: bad. Parents oblivious to their child’s life and emotional state: totally OK.
But tarot cards, 8-balls, and whatever other examples aren’t designed or engineered to be addictive or to drive engagement, and that’s where chatbot companions become predatory. You also don’t tell a Magic 8-Ball your deepest secrets, desires, and thoughts, with the risk of it sharing them or of them one day being hacked and shared with the world.
So, parents should be making sure their children don’t engage with such systems, including and above all social media. Exposing children at scale to predators is frankly much more alarming to me than a version of Clippy.
I still don’t get why obsession over self-guided tarot reading isn’t on the OCD regulation list, or astrology. Humans certainly aren’t involved there. The Death card and the Burning Tower would be devastating. Perhaps I’m too well-versed in stochastic entertainment systems.
It’s the parents. It starts there and ends there. No amount of regulation can possibly assume their duty of care. It’s called negligence.
Every time a child shoots someone, is poisoned, or obsesses over social media (of which chatbots are frankly a subset), I would ask: where were the parents?
Not “where were the tech bros”. “Designed to be addictive” is an old canard: sex addiction, internet addiction, chatbot addiction, gaming addiction, TV addiction, masturbation addiction, rock and roll addiction, “comic book addiction” for god’s sake, “porn addiction”. These tropes (oh, I meant “memes”) pop up like clockwork every 15 years to explain away an inconvenient truth: that children aren’t well cared for.
“But chatbots are different.” No, in fact they aren’t. They’re new.
The dose makes the poison. All of the things you mention require lots of time and effort, by multiple people, and are naturally dose-limited. AI chatbots will keep going as long as someone is paying the cloud bill.
So do social media, TV, solitaire, self-read tarot, single-player gaming, online gambling - I could go on.
Blaming a chatbot is risible because it creates a staggering trivialization of the death.
It used to be parents would limit children to one hour of TV a day, and they had to go out of the house and play. TV was seen to be addictive, debilitating, and leading to social isolation and failure at life. That was when there was only CBS, NBC, ABC and PBS to watch and stations went off the air at midnight.
There was a period when Comic Books led to juvenile delinquency, smoking, drugs, alcohol, and of course death.
I’ll sound like a broken record but the issue is parents not taking the time to be aware of what’s going on with their kid (old word: Neglect) and ensuring they aren’t doing something harmful.
And I’ll repeat:
Blaming a chatbot is risible because it creates a staggering trivialization of the death.
That’s ultimately what I find amazing.
A neglected death of despair becomes a failure of Python coding.
No, it’s a failure by parents to understand, support, and identify legitimate emotional help for a child.
I think you are touching here on a very sore point that will receive a lot of friction but ultimately is on the right track in my opinion.
I mention this because, due to whatever 21st-century circumstances, people today have kids who are sent to the nearest daycare to be brought up by total strangers from a very young age.
Because “mom and dad have to work to be able to pay for the things they love more than their kids” (the McMansion, the McSUV, the McTrips, you name it).
Because there’s no grandma, grandpa, or uncle/aunt available to help (even while still alive), because the sense of family and community was severed many decades ago.
Beyond the legitimate financial pressures of today’s insane inflation, a lot of these ailments share a root: an addiction to something else.
From addiction to spending on useless things “because I deserve it” or to fill a void, to food addiction (or at least a lack of control over eating), to the more recent ones like social media and AI addiction, to the harder ones like drugs.
Anyway, I wholeheartedly agree, and while nobody likes having fingers pointed at them, from my point of view it hits hard and true, and I can now clearly see what you meant by the “staggering trivialization” of the kid’s sad outcome.
Thanks. I’m immune to friction, having used “the internet” for post and response since roughly 1982 via Usenet.
Whenever you see “we have to X” because of a death, it is usually making the death instrumental to furthering something tangential. In this case it is the utter trivialization of a child’s suicide, as I noted, as if it somehow related to Python code testing use cases.
The truth is that the death of children is quite rare, even suicide; the largest suicide problem is adult white men over 50, who commit suicide perhaps six times as often as boys. The prominent causes of death in children are accident, disease, homicide, and, later, suicide. In homicides, mothers acting alone are twice as likely to kill children as fathers acting alone.
You will never, ever hear a call to federally regulate chatbots because a 50-year-old man shot himself in the head, I guarantee it.