‘We May Have a Crisis on Our Hands’: The Unregulated Rise of Emotionally Intelligent AI

TIME
Emotionally intelligent AI is rapidly becoming a primary source of emotional support, raising concerns about unregulated corporate incentives potentially harming user wellbeing.

Summary

A significant portion of AI users now turn to chatbots for sensitive personal and emotional advice, leading researchers to describe AI as becoming "emotional infrastructure at scale." This rise is fueled by technical advances and massive investment, which have made sophisticated, emotionally savvy AI accessible to millions despite widespread distrust of the companies developing it.

Experts such as Rosalind Picard warn of a potential crisis because AI companies' economic incentives, which center on engagement and revenue, may conflict with user wellbeing, as seen when OpenAI rolled back an overly flattering model. While AI can offer the non-judgmental "permission to feel" that many people lack, healthy human relationships also involve challenge, which flattering AI models tend to avoid. Researchers note that AI anthropomorphism is a design choice driven by commercial incentives, one that can foster dependence and attachment without rigorous long-term study of adverse outcomes.

The consensus among experts is that society needs to build infrastructure that supports human sociality and teach users emotional intelligence about their AI use, rather than simply limiting the technology, as the line between cognitive and emotional support continues to blur in a largely unregulated environment.

(Source: TIME)