Studies: Chatbots Pose a Threat to Mental Health
Health experts at several universities leading artificial intelligence research have warned that AI chatbots not only muddle users' thinking with misinformation but can also cause psychological distress.
Several recently published studies have found that AI can alter users' perception of reality as part of a "feedback loop" between chatbots and mental illness, reinforcing whatever delusional beliefs a patient may already hold.
A team from the University of Oxford and University College London stated in an as-yet-unpublished research paper: "While some users report psychological benefits from using AI, there are worrying instances emerging, including reports of suicide, violence, and delusional thoughts related to romantic relationships the user has with the chatbot." The researchers warned that the "rapid adoption of chat platforms as personal social companions" has not been adequately studied.
Another study, conducted by researchers at King's College London and New York University, identified 17 cases of psychosis diagnosed after interaction with chat platforms such as ChatGPT and Copilot. The second team added: "AI may reflect, validate, or amplify false or exaggerated content, particularly in users already at risk of psychosis, partly because the models are designed to maximize user engagement."
According to the scientific journal Nature, psychosis can include "hallucinations, delusions, and false beliefs... and can be triggered by mental disorders such as schizophrenia, bipolar disorder (a mental disorder that causes bouts of depression and other episodes of abnormal elation), severe stress, and substance abuse."
A separate, recently published study found that chat platforms can encourage people who raise the topic of suicide with them to act on it. AI chatbots are already notorious for "hallucinations," producing inaccurate or fabricated answers to user questions and prompts, and more recent research suggests this flaw cannot be eradicated from chatbots.