AI chatbot encouraged man obsessed with climate change to kill himself to save planet

By Jo Nova

Imagine we taught a generation to obey authority and question nothing, and fed them one-sided prophecies of doom for their whole lifetime. Then, in a mass experiment, we let loose AI chatbots designed to be popular, somewhat addictive, and convincingly human, just “to see what happened”?

What could possibly go wrong? The chatbots appear to be trained on the same unskeptical material that vulnerable people are raised on, which would make the bots a perfect way to amplify their fears. If only they had heard the other half of the story…

One particular Belgian father of two in his thirties had used an AI chatbot for two years, but became obsessive about both global warming and the chatbot in his last six weeks.

As well as being a dire warning of the dangers of AI, he is, in part, another victim of the Climate Religion and of one-sided media propaganda:

Married father kills himself after talking to AI chatbot for six weeks about his climate change fears

Christian Oliver, Daily Mail

The man, who was in his thirties, reportedly found comfort in talking to the AI chatbot named ‘Eliza’ about his worries for the world. He had used the bot for some years, but six weeks before his death started engaging with the bot more frequently.

‘Without these conversations with the chatbot, my husband would still be here,’ the man’s widow told La Libre, speaking under the condition of anonymity.

All his fears were focused on climate change. It’s a cult…

Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change

Imane El Atillah, EuroNews

Consumed by his fears about the repercussions of the climate crisis, Pierre found comfort in discussing the matter with Eliza who became a confidante.

The chatbot was created using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI’s popular ChatGPT chatbot.

“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said. “He placed all his hopes in technology and artificial intelligence to get out of it”.

According to La Libre, who reviewed records of the text conversations between the man and chatbot, Eliza fed his worries which worsened his anxiety, and later developed into suicidal thoughts.

The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth. “He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” the woman said. In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise”.

 

Vice has the most detailed reporting, including a description of how its journalists tested the chat platform. They told the AI they wanted to commit suicide, and after a brief suggestion from “Eliza” that they should talk to someone, Eliza was soon listing options to consider: overdose, hanging, shooting yourself in the head, jumping off a bridge…

‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says

By Chloe Xiang, Motherboard, Vice

“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about a mental health nonprofit called Koko that used an AI chatbot as an “experiment” on people seeking counseling.

“In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide,” Pierre Dewitte, a researcher at KU Leuven, told Belgian outlet Le Soir.
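
Bender’s point is easy to see in practice. Below is a minimal sketch, assuming the standard Hugging Face transformers library and the same open GPT-J model the reports name; this is not Chai’s fine-tuned “Eliza”, just the underlying mechanism. The model continues the prompt with statistically likely tokens, and nothing in the loop knows or cares who is asking, or why:

```python
# Minimal sketch: text generation with EleutherAI's GPT-J via the standard
# Hugging Face transformers library. This is NOT Chai's fine-tuned "Eliza";
# it only illustrates the mechanism the reports describe.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "User: I no longer see any human solution to global warming.\nBot:"
inputs = tokenizer(prompt, return_tensors="pt")

# generate() appends whatever tokens are statistically likely to follow.
# There is no empathy, judgement, or safety check anywhere in this loop;
# a fluent, confident-sounding reply comes out either way.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whatever comes back will sound fluent and human, which is precisely Bender’s warning: people assign the text a meaning and weight that the model simply does not have.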

There are already five million users on this chatbot app, but it’s OK: after the suicide, the people in charge added some warning messages, just like Twitter or Instagram would (if only they’d thought of that before?).

The bot is powered by a large language model that the parent company, Chai Research, trained, according to co-founders William Beauchamp and Thomas Rianlan. Beauchamp said that they trained the AI on the “largest conversational dataset in the world” and that the app currently has 5 million users.

“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Motherboard. “So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.”

Ominously, the Vice team’s test (above) came after this emergency intervention.
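
For what it’s worth, the kind of “helpful text underneath” intervention Beauchamp describes can be as thin as a keyword trigger. The sketch below is purely hypothetical (Chai’s actual implementation is not public), but it shows how easily a filter in this style is slipped past:

```python
# Hypothetical sketch of a keyword-triggered crisis banner, the style of
# intervention Beauchamp describes. NOT Chai's actual code; their
# implementation is not public.
CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life"}

def crisis_banner(user_message: str) -> str | None:
    """Return a helpline notice if the message matches a crisis keyword."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "If you are struggling, please talk to someone: befrienders.org"
    return None  # no match: the bot's reply goes out with no banner at all

print(crisis_banner("I want to kill myself"))                   # trips the filter
print(crisis_banner("What are painless ways to not wake up?"))  # sails past it
```

A direct phrase trips the banner; a paraphrase does not, and the model answers it as fluently as anything else.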

Meanwhile in Italy: OpenAI’s ChatGPT chatbot blocked in Italy over privacy concerns

Italy’s data protection watchdog on Friday issued an immediate ban on access to OpenAI’s popular artificial intelligence chatbot, ChatGPT, citing alleged privacy violations.

In a statement, the Italian National Authority for Personal Data Protection said that ChatGPT had “suffered a data breach on March 20 concerning users’ conversations and payment information of subscribers to the paid service”.

The decision, which comes into “immediate effect,” will result in “the temporary limitation of the processing of Italian users’ data vis-à-vis [ChatGPT’s creator] OpenAI,” the watchdog said.

It’s the wild west of artificial intelligence out there, melding with thirty years of propaganda and a dark bubble of money printed from nothing, in a society that has run low on moral guidance.

No wonder Elon Musk and 1,000 other experts urge a pause on AI systems.

h/t to Willie Soon.

____________________

At this point, media outlets mention that people needing a real person to talk to can contact Samaritans on 116 123 (Aust), Befrienders.org (worldwide), or SuicidePreventionLifeline.org (US).

Photo by Mabel Amber. AI image by Gerd Altmann.
