
Talking to the Dead: Should AI Avatars Replace Memory?

  • Writer: Anastasia Dedyukhina
  • Nov 17
  • 3 min read

“What if the loved ones we’ve lost could be part of our future?” An AI startup now lets users recreate deceased loved ones in interactive form. Welcome to a new episode of Black Mirror - except that it's happening in real life.


2Wai’s app works by letting users create a HoloAvatar - a digital, interactive recreation of a deceased loved one - from a small amount of input data. The system analyzes a few minutes of audio, video, and personal information to build an avatar that looks, speaks, and behaves like the person, mimicking their voice, facial expressions, and conversational style.


Once created, the avatar can be accessed on a phone much like a video call: users can talk to it, ask questions, and even have it retell “shared memories” that the model infers or constructs from the provided data. The app then stores these avatars in what the company calls a “living archive,” allowing families to keep interacting with them over time, as if they remained part of future life events.


Looking at 2Wai’s app through the lens of the stages of grief - denial, anger, bargaining, depression, and acceptance - raises serious concerns about its impact.


  • Denial: Talking to a digital version of a loved one might make it harder to accept that they’ve passed. This is probably the biggest risk: people may keep coming back to the avatar again and again, leaving no space for new real-life relationships and experiences.

  • Anger: If the AI behaves strangely or gets things wrong, it could leave people frustrated or upset.

  • Bargaining: People might rely on the avatar to “fix” unfinished issues, which can stall real emotional healing. Talk about outsourcing your own life experience!

  • Depression: While it might feel comforting at first, constantly interacting with the avatar could deepen the sadness - because it's not the real person, and deep down you know it.

  • Acceptance: The app could make it harder to move on, because it keeps the illusion that the person is still around.


In short, while the idea might offer short-term comfort or nostalgia, psychologically it is risky. Grief is a process meant to help humans emotionally reconcile with loss, and replacing it with AI simulations could stall healing, distort memories, and create dependence on technology rather than human connection. It’s more of a novelty and a money-making machine than a therapeutic tool.


However, it's not just about the stages of grief. When assessing such products, we may want to think about:


  1. Ethical concerns: Using AI to recreate deceased people raises questions about consent, privacy, and whether it’s morally okay to simulate someone who can’t agree to it.

  2. Emotional health: Over-reliance on the avatar could replace real human support, delaying genuine coping and emotional growth. Think loneliness is on the rise? Imagine what happens when such products become popular.

  3. Accuracy and memory distortion: The AI might get details wrong or fill in gaps with imagined behaviors, which can change how people remember the deceased.

  4. Commercialization of grief: Companies could profit from people’s mourning, which can feel exploitative or manipulative, especially when people are at their most vulnerable.

  5. Social impact: Friends and family may react differently - some might find it comforting, others unsettling or even disturbing.

  6. Long-term psychological effects: Repeated interaction with a virtual version of a lost loved one might create confusion between reality and simulation, affecting decision-making, attachment, and emotional resilience.

  7. Security and data privacy: The personal data used to create the avatars (videos, voice recordings, memories) could be vulnerable to hacking or misuse. Not to mention your grandad suddenly telling you to buy a particular brand of vacuum cleaner.


Basically, it’s not just about whether the app is “cool” - it’s about mental health, ethics, and long-term consequences.


What do you think - would you use such a tool?

 
 
 
