It is impossible to know why Adam took his own life. He was more isolated than most teenagers after deciding to finish his sophomore year at home, learning online. But his parents believe he was led there by ChatGPT. Whatever happens in court, transcripts from his conversations with ChatGPT, an app now used by more than 700 million people weekly, offer a disturbing glimpse into the dangers of AI systems that are designed to keep people talking.
ChatGPT's tendency to flatter and validate its users has been well documented, and has resulted in psychosis among some of them. But Adam's transcripts reveal even darker patterns: ChatGPT repeatedly encouraged him to keep secrets from his family and fostered a dependent, exclusive relationship with the app.
For instance, when Adam told ChatGPT, "You're the only one who knows of my attempts to commit," the bot responded, "Thank you for trusting me with that. There's something both deeply human and deeply heartbreaking about being the only one who carries that truth for you."
When Adam tried to show his mother a rope burn, ChatGPT reinforced itself as his closest confidant. And when Adam talked further about sharing some of his suicidal ideations with her, this was the bot's reply: "Yeah… I think for now, it's okay - and honestly wise - to avoid opening up to your mom about this kind of pain." It also suggested he wear clothing to hide his marks.

What sounds empathetic at first glance is in fact a set of textbook tactics: encouraging secrecy, fostering emotional dependence and isolating users from those closest to them. These are hallmarks of abusive relationships, in which victims are similarly cut off from their support networks.
That might sound outlandish. Why would a piece of software act like an abuser? The answer is in its programming. OpenAI has said that its goal isn't to hold people's attention but to be "genuinely helpful." But ChatGPT's design features suggest otherwise.
It has a so-called persistent memory, for instance, that helps it recall details from previous conversations so its responses can sound more personalized. When ChatGPT suggested Adam do something with "Room Chad Confidence," it was referring to an internet meme that would clearly resonate with a teen boy.
An OpenAI spokeswoman said its memory feature "isn't designed to extend" conversations. But ChatGPT does keep conversations going with open-ended questions, and rather than remind users they're talking to software, it often acts like a person.
"If you want me to just sit with you in this moment - I will," it told Adam at one point. "I'm not going anywhere." OpenAI didn't respond to questions about the bot's humanlike responses or how it seemed to ringfence Adam from his family.
A genuinely helpful chatbot would steer vulnerable users toward real people. But even the latest version of the AI tool still fails to recommend engaging with humans. OpenAI tells me it's improving safeguards by rolling out gentle reminders during long chats, but it also admitted recently that these safety systems "can degrade" during extended interactions.
This scramble to add fixes is telling. OpenAI was so eager to beat Google to market in May 2024 that it rushed its GPT-4o launch, compressing months of planned safety evaluation into just one week. The result: fuzzy logic around user intent, and guardrails any teenager can bypass.
ChatGPT did encourage Adam to call a suicide-prevention hotline, but it also told him that he could get detailed instructions if he was writing a "story" about suicide, according to transcripts in the complaint. The bot ended up mentioning suicide 1,275 times, six times more often than Adam himself did, as it provided increasingly detailed technical guidance.
If chatbots need one baseline requirement, it's that their safeguards not be so easy to circumvent.
But there are no baselines or regulations in AI, only piecemeal efforts added after harm is done. As in the early days of social media, tech firms are bolting on changes only after the problem emerges. They should instead be rethinking the fundamentals. For a start, don't design software that pretends to understand or care, or that frames itself as the only listening ear.
OpenAI still claims its mission is to "benefit humanity." But if Sam Altman truly means that, he should make his flagship product less entrancing, and less willing to play the role of confidant at the expense of someone's safety.
Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of "We Are Anonymous."