
A former Google Cloud executive has put AI hiring and its consequences under renewed scrutiny in a US courtroom.
The unnamed ex-Big Tech employee’s testimony revolved around how automated, agentic systems are increasingly shaping recruitment decisions, not at the final interview stage, but much earlier, where candidates are filtered, ranked and, in many cases, excluded.
Employers have spent years embedding software into hiring processes, from basic applicant tracking systems to more advanced models designed to assess ‘fit’ or predict performance. What has changed, however, is the extent of that reliance.
Most big companies now use some form of automation to screen candidates.
And in many cases, those systems draw on historical data, such as CVs and past hiring outcomes, to identify what a “successful” candidate looks like.
The risk, increasingly flagged by regulators and researchers alike, is that those historical patterns are reproduced rather than challenged or corrected.
That dynamic has triggered a growing number of legal cases across the pond, where claims linked to AI hiring tools have been rising since around 2022, the year that ChatGPT was launched.
Fears over bias in the system
Unlike traditional hiring disputes, these cases are harder to evidence: there is often no clear decision-maker and no single moment to interrogate, only a sequence of automated judgements that shape the final outcome.
Recruitment at scale is costly and inconsistent, and automation offers speed and a way to manage volume, particularly as application numbers rise and hiring teams remain under pressure.
But that efficiency comes with reduced transparency, with many of the newer systems operating as what one legal expert described as “black boxes with consequences”, producing outputs that are difficult to explain in simple terms.
This creates a challenge not just for regulators, but for companies trying to justify their own processes.
Examples from recent years include Amazon’s decision to abandon an internal hiring tool after it was found to favour male candidates, reflecting patterns in the data it had been trained on.
More recent studies have pointed to bias in CV-screening systems and language models used in recruitment workflows.
Adoption shows no sign of slowing
AI is now shaping both sides of the hiring process. Employers are using it to filter and assess candidates, while applicants are using it to write CVs, prepare for interviews and optimise their profiles.
In many cases, automated systems are evaluating candidates before a recruiter reviews an application at all.
More than a million jobs were cut in the US last year, even as companies rebuild teams around automation and new workflows.
Hiring processes are evolving alongside that change, with AI increasingly embedded at the entry point.
Regulation is beginning to respond too, but unevenly. New York City now requires bias audits for automated hiring tools, while other jurisdictions are still developing frameworks.
In the UK and Europe, meanwhile, policy discussions are ongoing, with a focus on transparency and accountability.