The suicide of a young Belgian man after six weeks of intensive conversations with a chatbot, a computer program based on artificial intelligence, has caused consternation in Belgium, where the federal Secretary of State for Digitalization this week called for a clarification of responsibilities in such cases.
The deceased man, in his thirties and nicknamed “Pierre” in the Belgian media so as not to reveal his identity, was married and had two young children.
He was a university graduate, worked as a health researcher, and was particularly concerned about the climate crisis and the future of the planet, his wife has revealed.
He isolated himself by chatting with the chatbot
Obsessed with the issue, Pierre read extensively on these subjects and ended up seeking “refuge” in a chatbot called “Eliza” on the US app Chai, the newspaper La Libre Belgique reports.
Pierre became increasingly isolated from his family and cut himself off from the world, and for weeks he confined himself to “frantic” conversations with the computer program, which created the illusion of having an answer to all his worries.
The conversations, the contents of which Pierre’s widow confided to La Libre Belgique, show that the chatbot “never contradicted” Pierre, who one day suggested the idea of “sacrificing himself” if Eliza agreed to “take care of the planet and save humanity thanks to artificial intelligence”.
“Without these conversations with the chatbot, my husband would still be here,” says his widow.
The case has prompted calls in Belgium for better protection against such programs and for greater awareness of the risks they pose.
“In the immediate future, it is essential to clearly identify the nature of the responsibilities that may have led to this type of event,” wrote Belgian Secretary of State for Digitalization, Mathieu Michel, in a press release.
“It is true that we still have to learn to live with algorithms, but the use of technology, whatever it may be, can in no way allow content publishers to shirk their own responsibility,” Michel added.
The chatbot Eliza runs on GPT-J, a language model created by EleutherAI, a direct competitor of OpenAI with which it has nothing to do.
For his part, the founder of the platform in question, which is based in Silicon Valley, California, has explained that in the future it will include a warning for people who have suicidal thoughts, La Libre Belgique reports.
EFE