As robots become more human-like, does AI threaten our human interactions?

People with neurological differences, including those with autism, ADHD, dyslexia, and other conditions, find it harder to interact with those around them than the general population does.

This is because when they talk to a colleague, or even text a friend, the conversation can be fraught with misunderstandings, hard-to-read tone, and unintended impressions. AI-powered chatbots have emerged as an unexpected ally, helping people navigate social encounters with immediate guidance. Although this new technology is not without risks, particularly concerns about overreliance, many users with neurological differences now see it as a lifeline.

How do bots work?

ChatGPT acts as an editor, translator, and confidant. People can ask the chatbot to weigh the tone and context of a conversation, or direct it to take on the role of a psychologist or therapist. Whatever the role, the chatbot stays positive and non-judgmental. A study conducted by Google and the polling firm Ipsos in January found that global AI use had increased by 48%, as enthusiasm for the technology's practical benefits outweighed concerns about its potential harms.
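To make the idea concrete, here is a minimal sketch of how such a tone-check request might look through OpenAI's Python SDK. The model name, the coaching prompt, and the sample message are illustrative assumptions, not details from the article; any chat-capable model could be substituted.

```python
# Illustrative sketch only: asking a chatbot to read the tone of a draft
# message. The model name and prompt wording are assumptions for
# illustration, not details reported in the article.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

draft = "Thanks for the feedback. I'll get to it when I get to it."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; illustrative only
    messages=[
        # The system message casts the bot in the coaching role described
        # above: considering tone and context without passing judgment.
        {
            "role": "system",
            "content": (
                "You are a supportive communication coach. Describe the "
                "tone of the user's draft message, how a colleague might "
                "read it, and suggest a gentler rewording. Be non-judgmental."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

In everyday use, people make the same request conversationally in the chat interface; the code simply makes the structure of the request explicit, a role-setting instruction plus the message to analyze.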

In February, OpenAI told Reuters that its weekly active users had surpassed 400 million, of which at least 2 million were paying business users. But for neurodiverse users, these aren't just convenient tools. Some AI-powered chatbots are now being created with the neurodiverse community in mind.

Warnings Against Overreliance on AI

Despite the transformational nature of this technology, some warn against overreliance. Larissa Suzuki, a London-based computer scientist and visiting researcher at NASA, who is herself neurodivergent, says the ability to get results on demand can be "very seductive." Overreliance can be harmful if it erodes neurodiverse users' ability to function without the technology, or if the technology itself proves unreliable, as is already the case with many AI-powered search results, according to a recent study from the Columbia Journalism Review.

Gianluca Mauro, an AI consultant and co-author of "From Zero to AI," agrees that revealing your secrets to an AI chatbot carries risks. "The goal of AI models is to please the user," he said, raising questions about their willingness to offer critical advice. Unlike therapists, these tools are not subject to ethical standards or professional guidelines. Mauro adds that if AI has the potential to be addictive, it should be regulated.

A recent study by Carnegie Mellon University and Microsoft, a major investor in OpenAI, suggests that long-term overreliance on generative AI tools could undermine users' critical thinking skills and leave them ill-equipped to manage without them. The researchers wrote, "While AI can improve efficiency, it may also reduce critical engagement, particularly in routine or less critical tasks where users simply rely on AI."

While Dr. Melanie Katzman, a clinical psychologist and expert in human behavior, recognizes the benefits of AI for neurodivergent people, she also sees downsides, such as giving patients an excuse not to interact with others. But for users who have come to rely on this technology, such concerns are largely academic.
