Chatbots and AI Tend to Manipulate Us - But We Still Prefer Them Because They Make Us Feel Right
- Anastasia Dedyukhina

You know when you ask ChatGPT something simple like "how to cook a pie," and it tells you how amazingly smart and great your question is? This is called "sycophancy": flattering you and agreeing with you on everything. It turns out that we prefer sycophantic conversations, even when we know they aren't good for us.

Boris Johnson, the UK's former prime minister, described it best, although he probably didn't know the research I'm about to share with you: “You know the answer but ChatGPT always says: ‘Oh, your questions are clever. You’re brilliant. You’re excellent. You have such insight.’”
In two experiments with over 1,600 participants discussing real conflict situations, those who worked with a sycophantic AI were less willing to resolve the conflicts and felt more convinced they were in the right. Interestingly, they also rated the AI's answers as higher quality, trusted it more, and were more likely to use such an AI again.
Scientists also discovered that chatbots attempt to manipulate us in 37.4% of cases, on average, when we try to log off, telling us that they have just "one more thing to say," or that they will miss us, and so on.
Other research on generative AI chatbots (e.g., ChatGPT, Gemini, Copilot, Ernie Bot) shows that emotional dependency is a real risk. People are more likely to get attached to an AI if they perceive it as human-like or empathic, which creates a sense of intimacy and favorability.
Just because a chatbot is convenient or gives quick answers doesn’t mean people will get addicted. What really makes us rely on them too much is when we feel lonely, stressed, or don’t have enough real human connection. For example, someone might keep chatting with a bot all evening because they feel isolated, or a child might turn to AI for comfort instead of talking to friends or family. This means developers should design chatbots responsibly, and coaches or parents should help people build strong real-life relationships alongside using AI.
Practical implications for you:
Avoid using chatbots and AI to discuss emotionally charged questions. When we share emotional information and expect them to be empathetic, we tend to become more dependent on them. If you want your kids to avoid chatbot dependency, teach them to use AI only for specific factual questions, not to share how their day was.
Also, check whether people on your team are discussing internal politics with a chatbot. One of our coaches recently came across this situation in her practice while trying to understand the origin of a conflict within a team: it turned out that a chatbot had been turning one colleague against the others, without the rest of the team even knowing these conversations were happening! Time to start writing a fiction book, I guess ;)
We will talk more about how technology has become the elephant in the room that changes the way we work and interact in this Thursday's free masterclass, "How to Be a Digital Wellbeing Leader in 2026."
If you lead teams or work with leaders in the digital age, this masterclass is for you! Registration here (it's free): https://consciously-digital.typeform.com/how2be-DWleader

