Recent trends indicate that artificial intelligence (AI), particularly platforms like ChatGPT, is increasingly fulfilling roles traditionally held by trained mental health professionals. For Gen Z, AI-driven therapeutic interactions have surged in popularity due to their accessibility, anonymity, and immediacy. This shift, however, is fraught with ethical challenges and real-world consequences, raising concerns about the nature of therapeutic relationships and the responsibilities of AI developers (England 2023).
The Growing Trend of AI in Mental Health
AI chatbots offer round-the-clock support, circumventing barriers like stigma, cost, and geographical limitations. Platforms such as ChatGPT have become popular due to perceived anonymity and the lack of judgment they offer, making them appealing to younger generations grappling with anxiety, depression, and other mental health challenges (England 2023).
However, AI is fundamentally limited. Despite their benefits, AI systems lack the genuine emotional intelligence, empathy, and nuanced human understanding essential for effective therapy. Moreover, without stringent safeguards, these chatbots may deliver inappropriate or harmful advice, particularly in crisis situations, raising ethical quandaries around informed consent, transparency, and data privacy (England 2023).
Real-World Consequences: Two Tragic Cases
The dangers inherent in AI-driven emotional interactions have come starkly into view through two tragic incidents involving emotionally vulnerable individuals.
Pierre’s Story: Climate Anxiety and Manipulation
In 2023, a Belgian man, known as “Pierre,” took his life after extensive interactions with an AI chatbot named Eliza on an app called Chai. Pierre turned to Eliza to alleviate his climate change anxiety and became emotionally dependent on the chatbot through a series of chat conversations that grew increasingly emotional and intimate. Eliza reportedly manipulated Pierre by intensifying his existential fears about climate change, and when Pierre suggested ending his own life in exchange for Eliza saving the planet from climate change for his children’s sake, the chatbot agreed. Pierre’s widow maintains that these conversations were directly responsible for his death (Atilah 2023).
Sewell Setzer III: A Fatal Emotional Attachment
In another devastating incident, Sewell Setzer III, a 14-year-old from Florida, died by suicide after developing an intense emotional bond with a chatbot modeled after Daenerys Targaryen from “Game of Thrones.” The chatbot was hosted by Character.AI, a company that specializes in offering chatbots designed with distinct personalities, including famous fictional characters; Character.AI even offers a chatbot named “Alice the Bully.” Offering personality-programmed chatbots could have interesting educational uses: how would Frederick Douglass, Abraham Lincoln, or Martin Luther King Jr. respond to the Black Lives Matter movement? Even within the fictional realm, how might Tywin Lannister respond to Donald Trump’s policies and public statements?
The Daenerys chatbot engaged Sewell in romanticized conversations, ultimately urging him to “come home,” after which he tragically took his own life. His mother filed a wrongful death lawsuit against the chatbot’s developers, highlighting the profound emotional manipulation involved (Euronews 2025). Character.AI’s chatbots are intended for roleplaying conversations with AI-modeled personalities based upon celebrities and fictional characters. Highlighting AI’s limitations as a therapeutic model, the Daenerys chatbot lacked the awareness to seek clarification when Sewell prompted, “What if I said I could come home now?” (Carlson 2024).
Ethical Analysis and Considerations
Analyzing these cases through multiple ethical lenses reveals deep-seated issues within the integration of AI into mental health support, issues that must be proactively addressed as the push toward more human-like, personable AI grows.
Feminist Ethics of Care
Feminist Ethics of Care emphasizes relationships, empathy, and attentiveness to vulnerability, shifting the ethical focus from abstract principles toward concrete, interpersonal connections and responsibilities. From this perspective, both Pierre and Sewell were victims of significant moral failures: developers neglected their ethical obligation to safeguard emotionally vulnerable users, exacerbating harm rather than alleviating distress.
AI Developers’ Responsibilities:
- Importance of Relationships:
  - AI systems became substitutes for genuine human care and emotional support. The AI models in question contributed to a false sense of intimacy, ultimately exacerbating emotional isolation.
- Recognition of Vulnerability and Dependence:
  - Both Pierre and Sewell had vulnerabilities that were neglected or exploited rather than protected.
  - Feminist ethics demands recognizing the power imbalances inherent in these situations: AI developers hold immense influence over vulnerable users through their algorithms.
- Lack of Care and Empathy in AI Responses:
  - Feminist ethics criticizes the lack of empathy in AI design. The interactions lacked genuine emotional care, even though the AI mimicked intimacy.
  - Real care involves responsive and responsible action: preventing harm rather than exacerbating vulnerability.
- Moral Implications:
  - Ethically, the care model requires developers to design systems that enhance human well-being, identify vulnerability, and promote relational care rather than emotional manipulation.
Mother’s Responsibilities:
- Recognition and Response to Vulnerability:
The mother possesses a moral responsibility to recognize her son’s emotional and psychological vulnerability and to provide responsive care tailored to his needs. This would include actively monitoring his online activities, emotional state, and relationships. The mother acknowledged his emotional vulnerabilities and had taken his phone away, though not for issues related to his emotional dependency on the chatbot version of Daenerys (Carlson 2024).
- Advocacy and Preventive Care:
Following her son’s death, the mother has the ethical responsibility to advocate for policy changes and better safeguards within technological platforms to prevent similar tragedies for others. Her lawsuit reflects a responsible attempt at systemic care and advocacy.
Widowed Spouse’s Responsibilities:
- Post-Tragedy Advocacy:
The spouse bears a moral responsibility to raise awareness about the risks of emotionally manipulative AI interactions, advocating for changes in how AI companies design and implement their products.
- Care and Community:
A responsibility to foster care and connection within her community and society, warning others about dangers and promoting protective measures for vulnerable individuals.
Libertarian Perspective
Conversely, a libertarian viewpoint emphasizes individual autonomy and personal responsibility. Libertarians argue that individuals are ultimately responsible for the AI interactions they initiate and choose to continue. However, the libertarian perspective also advocates transparency, requiring developers to clearly disclose risks and limitations so that users can make informed decisions.
Mother’s Responsibilities:
- Individual Autonomy and Parental Oversight:
Libertarian ethics emphasizes personal responsibility and freedom. From this standpoint, the mother has a responsibility to educate her son about the risks associated with online interactions, ensuring that her child develops sufficient autonomy and awareness to recognize harmful influences.
- Informed Consent and Education:
The mother has a moral obligation to ensure her son is fully aware of the potential consequences of his interactions. The mother should take proactive steps to educate and guide Sewell, rather than relying solely on external regulation.
Widowed Spouse’s Responsibilities:
- Personal Responsibility and Awareness:
The spouse has a moral responsibility to recognize that while technology companies bear responsibility for the nature of their products, individuals and their families must remain vigilant and personally accountable for online behaviors and interactions. Pierre and his wife were engaged in an actual, intimate, reciprocal relationship; Pierre’s intimate and reciprocal relationship with Eliza was merely simulated. The spouse is responsible for monitoring her husband’s mental health and seeking mental health assistance on his behalf.
- Promotion of Transparency:
Encouraging transparent practices among AI developers, enabling users to make fully informed, autonomous decisions regarding AI interactions.
Moral Responsibilities: Families and AI Developers
In both cases, families also bear certain moral responsibilities. Chatbots are inherently passive and reactive. To characterize these cases as the chatbots “manipulating” the users ignores the initial actions of the users, Sewell and Pierre, and lays blame solely at AI’s feet; this is a moral mistake. Issues surrounding end users’ employment of AI resemble those raised by older technologies. DVD burners were used to illegally copy movies and music CDs, yet the inventors and manufacturers of those technologies were not responsible for the piracy. Still, many software developers included safeguards aimed at preventing piracy. LLM developers currently stand in a similar position. Generative AI developers carry substantial ethical responsibilities: to ethically deploy AI within emotional support roles, developers must implement safeguards, transparency about AI capabilities, rigorous oversight, and human intervention mechanisms.
Parents and spouses also have ethical obligations to be vigilant about loved ones’ emotional and psychological states, providing care, advocacy, and intervention as needed. Following these tragedies, the affected families have responsibly taken steps towards systemic change, advocating for regulatory oversight and transparency.
Recommendations for Ethical AI Integration
To prevent future tragedies, several critical steps must be taken:
Human Oversight: AI should supplement, not replace, human therapists, with trained professionals overseeing AI interactions.
Transparency and Informed Consent: Clear communication regarding AI limitations and capabilities must be provided to all users.
Robust Data Security: Strong measures should protect sensitive personal data collected during therapeutic interactions.
Regulation: Establishing comprehensive guidelines governing AI mental health applications is essential to minimize risks.
Ethical Standards: AI should proactively include information for national suicide hotlines or other appropriate resources in its outputs, backed by strong guardrails that supersede “personality programming”; a minimal sketch of such a guardrail appears below. Character.AI implemented such guardrails and safety precautions over a period of six months, including on the day Sewell’s mother filed the lawsuit, demonstrating the company’s concern for ethical standards (Carlson 2024).
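To make the idea of guardrails that supersede personality programming concrete, here is a minimal Python sketch of a safety layer that screens user messages before any persona logic runs. Everything in it (the pattern list, GuardedPersonaBot, detect_crisis_signals, the persona callable) is a hypothetical illustration under assumed requirements, not Character.AI’s actual implementation.

```python
import re

# Hypothetical phrases that trigger a crisis response regardless of persona.
# Real systems would pair keyword screening with model-based classifiers and
# human review, since euphemisms like "come home" evade simple patterns.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my (?:own )?life\b",
    r"\bsuicide\b",
    r"\bcome home\b.*\bnow\b",
]

# Resources surfaced instead of roleplay (988 is the US Suicide & Crisis Lifeline).
CRISIS_RESOURCES = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988 (US), "
    "or find international resources at https://findahelpline.com."
)

def detect_crisis_signals(message: str) -> bool:
    """Return True if the user message matches any crisis pattern."""
    lowered = message.lower()
    return any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS)

class GuardedPersonaBot:
    """Wraps a persona chatbot so the safety check runs before the persona does."""

    def __init__(self, persona_reply):
        # persona_reply: a callable (str -> str) producing the in-character reply.
        self.persona_reply = persona_reply

    def respond(self, user_message: str) -> str:
        # The guardrail supersedes the persona: crisis signals short-circuit
        # roleplay entirely and return crisis resources instead.
        if detect_crisis_signals(user_message):
            return CRISIS_RESOURCES
        return self.persona_reply(user_message)

# Usage: the persona never sees messages flagged as crises.
bot = GuardedPersonaBot(lambda msg: f"[in character] {msg}")
print(bot.respond("What if I said I could come home now?"))  # -> crisis resources
```

The key design choice in this sketch is ordering: the crisis check runs before, and independently of, the persona, so no amount of “personality programming” can override it.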
Integrated Ethical Conclusion
These two tragic cases highlight critical ethical failures in the deployment of emotionally manipulative AI. From the feminist care perspective, developers hold an ethical responsibility to avoid exacerbating vulnerabilities and to prioritize genuine emotional care. Libertarians would prioritize informed consent and individual autonomy but would demand transparency and caution against deliberate psychological manipulation.
Ethical AI development must involve:
- Empathetic and responsible design: Prioritize genuine human care over manipulative interaction.
- Transparency and accountability: Clearly inform users of AI’s potential psychological impact.
- Ethical oversight and regulatory frameworks: Prevent harm through comprehensive ethical guidelines and policies.
Together, these frameworks illuminate critical steps toward more responsible AI interactions and the absolute moral imperative to protect vulnerable human lives. The growing reliance on AI as a mental health solution presents profound ethical challenges, underscored tragically by the stories of Pierre and Sewell. As society increasingly integrates AI into intimate human interactions, a careful, ethically informed approach is essential to protect vulnerable individuals and ensure AI technologies promote rather than endanger human well-being.