
Days after Gowin's wife gave birth to their twin boys in 2019, she suffered a stroke. Doctors gave her two or three years to live. Gowin and his oldest son were devastated, but worse was to come.
Months later, Gowin found out he had stomach cancer. Facing the prospect of leaving three children without parents, he got an idea from watching the Superman movie Man of Steel, in which the caped hero walks into the Fortress of Solitude and talks to a simulation of his father. There was something comforting about that possibility: he and his wife could leave behind talking replicas of themselves for their children.
"I thought, I bet someone has already come up with this," he remembers. A Google search led Gowin, a 47-year-old actor, to a provider called "You, Only Virtual." He signed up as a beta tester, and today his nine-year-old son occasionally talks to a chatbot the family calls Robo Dad, an AI simulation that sounds eerily like Gowin. Recently, when his wife mentioned something about putting the dishes away, Robo Dad made the same joke moments after Gowin himself did.
Artificial intelligence is beginning to offer a startling new proposition: the chance to keep talking to the dead. Only a small subset of people have tried so-called grief tech tools so far, but the technology heralds a profound and disturbing shift in how we process loss. The comfort these tools provide could come at a cost: a further erosion of our collective grip on what's real and what isn't.
Despite AI's explosive growth, digital resurrections remain rare. "You, Only Virtual" has about 1,000 users, according to its chief executive officer.
The idea isn't totally unprecedented. Millions already seek companionship from chatbots like Replika, Kindroid and Character.ai, drawn by one of generative AI's most surprising capabilities: simulated empathy. These interactions have proven so emotionally compelling that users have fallen in love with their AI companions or, in extreme cases, allegedly been driven to suicide. Others have tried speaking to digital simulations of their older selves to help plan for their future, with more than 60,000 people now using one such tool called Future You.
It's easy to see the allure when so much of our communication today is text-based and AI has become so fluent. If Gowin's story doesn't move you, ask yourself: Would you chat with a digitized version of a deceased friend or relative if it were trained on their speech? I would struggle to resist the opportunity.
But using generative AI to process grief also encroaches on something we hold inviolate as humans. The risk isn't just muddying our memories with those of a "fake" loved one: Did Grandma really say she loved pumpkin pie, or did her avatar? Nor is it only consent: What if Grandma would have hated being recreated this way? Nor even impermanence, the idea that when we die, we leave space for the next generation to fill the public discourse with their own voices.
The core danger is how grief tech could accelerate our growing disconnect from the present, a phenomenon already fueled by social media's quantified metrics of human worth and the rise of fake news and echo chambers. Now comes an assault on our appreciation of finality, as technology encroaches on yet another corner of our most personal experiences.
Grief tech, one critic argues, betrays "our fundamental commitment to reality."
While grief tech remains fringe, its normalization seems plausible. That means it will need guardrails, such as temporal limits that make AI replicas fade over time, mirroring natural grief. Providers could also pair the tools with human counselors who watch for unhealthy dependency.
Gowin is grappling with these boundaries. Robo Dad can't discuss sex, but his family still faces questions about how it will handle future big-subject conversations about relationships and alcohol, and what happens if his son becomes too attached to the system. For now, Robo Dad is good enough for Gowin, even if it does intermingle recollections of the real and the digital dad. "Honestly, human memory is so patchy anyway," he says. "The important thing to me is that I know that my AI model has got my essence at its core."
But preserving someone's essence also risks something fundamental. The Japanese concept of "mono no aware" suggests that things are beautiful — like cherry blossoms that bloom for just one week each year — precisely because they don't last forever.
Stretching out our presence artificially means we don't just lose our appreciation for impermanence, but something even more essential: our collective anchor to what's real. In trying to soften the edges of death through technology, we may gradually weaken our ability to face life itself.
Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of "We Are Anonymous."