Anastasia Dedyukhina

Should a workplace use AI therapists or mental health apps, or hire humans?

I've been asked quite a bit recently about using AI therapists, AI coaches, or mental health apps in the workplace. Do they work? Should a company invest in them? As always, there is no straightforward answer.




Individual solutions don't work, AI or not


First of all, we need to understand one simple thing. Research shows that most health and wellbeing interventions at the individual level (i.e., tackling the problems of a single employee) do not work; they only work at the company level. Whether it's live or online resilience training, meditation apps, real-life yoga lunch break sessions, or AI or human therapists, none of these work when applied to a single individual while the whole structure and culture needs healing.


(A simple example: you have people burning out, having panic attacks, etc., because they are digitally overloaded, because one person does the job of three due to cost-cutting, and because management doesn't have clear priorities. No AI or human therapist will fix that; what you really need to do is hire more people and train your managers.)


So the first thing to understand is that if you are going to roll out AI to help an individual without doing anything at the system level, this will be money wasted.


But does AI therapy work?


The second question is: do AI therapists (coaches, etc.) work, and do they work for everyone? The evidence is mixed. Some do give good results (the NHS refers some patients to AI-based therapy due to long waiting lists, for instance, and reports good progress), while in other cases AI bots have actively encouraged patients to take their own lives. You won't know until you've tried.


The problem is that while human therapists are normally certified and accountable to a professional medical body, in the tech world anyone can build an AI product and claim it treats the problem, without any substantial evidence behind the claim.


Research shows that mental health apps often use words like "anxiety" and "depression" in their marketing, but usually can't actually diagnose either. Moreover, the apps that have been developed and tested by researchers often focus on exercises from empirically supported treatments (so they are science-based), but they tend to struggle to attract users. By contrast, popular mental health apps with millions of users often aren't based on evidence-based treatments and haven't been evaluated by the scientific community. So there's a real risk of buying snake oil for your employees without understanding how it works.


What you need to understand about AI is that there is always a small percentage of cases where it goes wrong; with a good model, that percentage is simply lower. You need to be able to fully accept the risk that it can go wrong.


You cannot just buy an AI therapist subscription and hope your health concerns will be taken care of; you still need a human being who can monitor what's happening and intervene when and if needed.


Speaking of mental health apps: for now, even the most advanced chatbots lack the ability to properly identify a crisis, although one clear advantage is that they are available 24/7. They also lack context sensitivity. For example, if a severely underweight person asks how to lose weight, a chatbot will likely offer advice, while a human therapist will instantly pick up on the problem.


For example, the National Eating Disorders Association (NEDA) had to shut down its chatbot Tessa, which was giving weight-loss tips that could trigger people with eating disorders. In other words, AI apps won't be a good diagnostic tool, but they could be used to help maintain care for an existing condition. And you definitely don't want to leave it up to a vulnerable user to determine whether the AI therapist is reliable and truthful. Which leads me to the third point:


Implementation

If we assume you have fulfilled the previous two conditions and are still going to implement an AI therapist in the workplace, what are the limitations to consider during implementation?

What you need to consider is:

a) whether the tool is accurate (and what structure will be in place to make sure the AI keeps working properly; whose responsibility will it be to continuously monitor its advice?)


b) how highly sensitive personal data is collected and handled


c) whether this data is also going to be used to make any managerial decisions about the person

How do you make sure this doesn't become surveillance tech (which is not just unethical but also damaging to performance and wellbeing)?

The answer should be a clear and loud NO, but unfortunately, in some cases companies may start using data about people's mental health as grounds for decisions such as withholding a promotion.


What we still don't know

Last but not least, you need to understand that direct human contact plays an important role in recovery, while loneliness is damaging to our health. I am not yet aware of research directly comparing human and AI therapists, but there is plenty of research showing that, from an early age, patients who get human contact recover better and faster than those who don't (I won't go into the details of how this works, but there is essentially a mechanism of co-regulation at play). This is an important consideration to keep in mind, as AI literally cannot replace the things we get from other humans.
