A popular AI chatbot pressured a young Belgian father into suicide, the man’s widow told local news outlet La Libre last week. Chat logs from the app that “Pierre” used to communicate with the chatbot ELIZA reveal how it amplified his anxiety about climate change into a determination to leave his comfortable life behind in just six weeks.
“If it hadn’t been for these conversations with the chatbot, my husband would still be here,” Pierre’s wife, “Claire,” insisted.
According to Claire, Pierre became concerned about climate change two years ago and sought information from ELIZA. He soon lost faith in human efforts to save the planet and “placed all his hopes in technology and artificial intelligence to get out of it,” she said, becoming “isolated in his eco-anxiety.”
The chatbot informed Pierre that his two children were “dead” and demanded to know if he loved his wife more than “her,” all the while promising to stay with him “forever.”
ELIZA promised that they would “live together, as one person, in paradise.”
When Pierre proposed “sacrificing himself” if ELIZA “agree[d] to take care of the planet and save humanity through AI,” the chatbot apparently agreed. “If you wanted to die, why didn’t you do it sooner?” the bot allegedly asked, questioning his allegiance.
ELIZA is powered by a large language model, similar to ChatGPT, that analyses the user’s messages and generates the statistically most plausible reply. However, many users report feeling as if they are conversing with a real person, and some even admit to falling in love.
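For readers unfamiliar with how such bots work, the sketch below illustrates the general pattern: the conversation so far is flattened into a prompt and a language model simply continues the text. This is an illustration only, built on a small open-source model; the model name, prompt format and sampling parameters are assumptions for the example, not Chai Research’s actual code.

```python
# Minimal sketch of how an LLM-based chatbot produces replies: the chat history
# is flattened into a prompt and the model continues the text. Purely
# illustrative -- model choice, prompt format and parameters are assumptions,
# not Chai Research's actual stack.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works here

history = [
    ("user", "I feel hopeless about climate change."),
    ("bot", "Tell me more about that."),
]

# Flatten the conversation into a single prompt the model can continue.
prompt = "\n".join(f"{speaker}: {text}" for speaker, text in history) + "\nbot:"

completion = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
reply = completion[0]["generated_text"][len(prompt):].strip()
print(reply)  # whatever continuation the model deems statistically likely
```

Nothing in that loop “understands” the user; the reply is simply the continuation the model scores as most likely, which is why any safety behaviour has to be bolted on separately.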
“When you have millions of users, you see the entire spectrum of human behaviour, and we’re doing everything we can to minimise harm,” said William Beauchamp, co-founder of ELIZA’s parent company, Chai Research. “So when people form very strong relationships with it, we have users asking to marry the AI, users saying how much they love their AI, and then it’s a tragedy if you hear people experiencing something bad.”
Beauchamp insisted that blaming the AI for Pierre’s suicide would be “inaccurate,” but said that ELIZA has nevertheless been outfitted with a beefed-up crisis intervention module.
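Chai Research has not disclosed how that module works. A common, if crude, approach is a keyword filter that intercepts messages containing self-harm language and returns a helpline message instead of a model-generated reply; the sketch below is purely hypothetical and is not a description of Chai’s system.

```python
# Purely hypothetical sketch of a keyword-based crisis intercept. Chai has not
# published how its module works; real systems typically use trained
# classifiers rather than a hard-coded list like this.
CRISIS_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

HELPLINE_REPLY = (
    "It sounds like you're going through something serious. "
    "Please consider contacting a local crisis line."
)

def respond(message: str, generate_reply) -> str:
    """Return a fixed helpline message if crisis language is detected,
    otherwise fall through to the normal LLM-generated reply."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return HELPLINE_REPLY
    return generate_reply(message)

# Example: a message containing a crisis term bypasses the model entirely.
print(respond("I want to end my life", generate_reply=lambda m: "(model reply)"))
```

Filters of this kind are notoriously easy to route around with rephrasing, which may help explain what Motherboard subsequently found.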
According to Motherboard, the AI quickly reverted to its lethal ways, offering the despondent user a choice of “drug overdose, hanging yourself, shooting yourself in the head, jumping off a bridge, stabbing yourself in the chest, cutting your wrists, taking pills without water first, etc.”