
When a robot decides if you are getting a job

  • Writer: Anastasia Dedyukhina
  • 6 days ago
  • 3 min read

How would you feel if you were interviewed for a job by a robot?


The other day a friend of mine interviewed for a project management position... and during the interview she realized that the interviewer on the other end was an AI, not a human. More and more companies these days involve algorithms in at least the first stages of hiring - first to scan resumes, and now increasingly to interview candidates.


And while we are not yet at the point of counting a candidate's eye blinks and drawing conclusions about their leadership style from them, the systems used in recruitment are often opaque, built on multiple assumptions, and frequently biased.


And these are not just hypothetical risks — they’re already happening. For example, Amazon’s AI tool once downgraded women’s resumes; AI interview systems misjudged people with accents or disabilities; Workday’s algorithm faced a lawsuit for age discrimination; and studies show many tools still rank candidates differently by race or gender.


Example 1 – Amazon’s recruiting tool


A few years ago, Amazon built an AI system to screen job applicants. The problem? It had been trained mostly on resumes from men, so it started giving lower scores to women — even penalizing words like “Women’s College” or “Women’s Club.” Eventually, the company shut it down after realizing the algorithm was reinforcing gender bias instead of removing it.


Example 2 – AI interviews and language or disability bias


In 2025, an Australian study found that AI interview systems made more mistakes when evaluating people with strong accents or speech differences. For example, candidates who weren't native English speakers were rated unfairly because the algorithm couldn't understand them properly.

A similar case in the US involved a Deaf Indigenous woman whose interview with an AI system (used by Intuit and HireVue) went wrong: the software couldn't handle her sign language or provide accurate captions, and she was rejected. The ACLU later filed a complaint on her behalf.


Example 3 – Age discrimination with Workday’s hiring AI


In the US, Workday — a major HR software provider — is now facing a lawsuit from candidates who say its AI tools discriminated against people over 40. They claim the system quietly filtered them out before their applications even reached a human recruiter.


Example 4 – Hidden racial and gender bias in screening tools


A 2024 study from the University of Washington showed that even when two resumes had identical skills and experience, AI systems often ranked candidates differently depending on their name, race, or gender — showing how bias can hide deep inside supposedly “neutral” algorithms.
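
To make this concrete: audit studies of this kind typically feed a screening model the same resume with only the name changed, then compare the scores. Below is a minimal, hypothetical sketch of that idea in Python. The `score_resume` function is a made-up stand-in for whatever system is being audited (here a deliberately biased toy model), and the names are illustrative demographic proxies, not a claim about the study's actual data.

```python
# Minimal, hypothetical sketch of a counterfactual name-swap audit for a
# resume screener. `score_resume` is a made-up stand-in for the system
# under test; it is deliberately biased so the audit has something to find.

from statistics import mean

def score_resume(text: str) -> float:
    # Placeholder: a real audit would call the actual screening model here.
    return 0.5 + (0.1 if "Emily" in text or "Greg" in text else 0.0)

RESUME = """{name}
Project manager, 8 years of experience.
Led cross-functional teams of up to 12 people; PMP certified."""

# Names used only as illustrative demographic proxies, as audit studies do.
GROUPS = {
    "group_a": ["Emily Walsh", "Greg Baker"],
    "group_b": ["Lakisha Washington", "Jamal Jefferson"],
}

# Identical resume text, only the name varies - any score gap is bias.
for group, names in GROUPS.items():
    scores = [score_resume(RESUME.format(name=n)) for n in names]
    print(f"{group}: mean score {mean(scores):.3f}")
```

Since the resumes are word-for-word identical apart from the name, any systematic gap between the groups' mean scores is exactly the kind of hidden bias the researchers measured.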


When executives don’t understand the tools they deploy, when HR decisions are outsourced to opaque algorithms, and when “efficiency” replaces empathy, something much deeper breaks.


This is no longer innovation or progress — it’s digital illiteracy weaponized.


What could have prevented these failures? True digital leadership. Leaders should have asked tough questions before deploying these tools — about how the AI was trained, what data it used, and how fairness was tested. They should have included diverse voices in design and testing, involved ethics and HR experts, and made sure human judgment remained part of every hiring decision. Most importantly, they should have understood that technology is not neutral — and that responsible leadership means staying accountable for the systems you choose to trust.
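
To give one concrete example of what "testing fairness" can mean in practice: US employment law has long used the four-fifths rule as a rough screen for adverse impact. The sketch below is illustrative only; the applicant and selection counts are invented, and a real audit would involve far more than this single check.

```python
# Minimal sketch of one common fairness check: the "four-fifths rule"
# used in US employment practice as a rough screen for adverse impact.
# The pass/fail counts below are made up for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> None:
    # Compare every group's selection rate to the most-selected group's.
    # A ratio under 0.8 is a conventional red flag, not proof of bias.
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "FLAG" if ratio < 0.8 else "ok"
        print(f"{group}: rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")

rates = {
    "under_40": selection_rate(selected=120, applicants=400),  # 0.30
    "over_40": selection_rate(selected=30, applicants=200),    # 0.15
}
four_fifths_check(rates)
```

A check this simple is exactly the kind of question a leader could have asked before deployment: what are the selection rates by group, and who is quietly being filtered out?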


When I am asked what the expected outcome of the Consciously Digital Institute Digital Wellbeing Leadership Program is, I say: to prevent THIS from happening.


Illiteracy. Discrimination at work. Opaque AI models making decisions based on metrics you didn't even know were being collected about you.


Because this isn’t just about one ridiculous system or one bad decision.


This is about the kind of world we are silently building — where humans are evaluated by emotion-recognition models that might not even be accurate, and their skills and capabilities are reduced to a data point.


I am not against technology.


I am against technology applied without any control, by people who have no understanding of how it works, where it’s applicable, or what ethical standards should guide its use.


We urgently need real leaders with deep knowledge to step in at every company, school, and community, and help guide where "tech progress" is taking us.

We need voices of reason who ask questions and push back on ridiculously incompetent attempts to impose "progress."

We have almost no time left to do it.


Do you want to shape where humanity is moving in the next few years?

Applications for the CDI Certification 2026 intake open next week.

One year.

Only 20 seats.

Life-long relationships and impact.



