AI in hiring cannot replace human judgment

Sebastian Scott, CEO and co-founder at Clera, discusses how AI hiring is scaling bias, requiring a shift toward transparent, human-centred matching.

Artificial intelligence (AI) now sits at the center of modern recruitment. It was first introduced with the promise of speed, precision, and reduced bias. But in reality, it has delivered something far more uneven. Across the industry, AI has introduced opacity, weakened trust, and in too many cases, scaled the very inequities it was meant to eliminate.

The problem is not adoption. It is filtering.

Most hiring systems today are built to process volume, not to understand people. They reward speed over judgment and automation over accountability. In doing so, they strip hiring of the very elements that make it effective: context, discernment, and human connection. Left unchecked, AI does not fix broken systems. It hardens them.

This matters because adoption is now effectively universal. With 99% of hiring managers relying on AI in some capacity, these systems are no longer experimental. They are infrastructure. When infrastructure is flawed, the consequences are not isolated. They are systemic, shaping who gets seen, who gets hired, and who gets left behind.

Speed has increased; finding the right person has not

Hiring today has become an exercise in optimisation and convenience. Job descriptions define a narrow profile, and AI systems are tasked with filtering out anyone who does not perfectly fit it.

But while AI has made hiring faster, it has not made it better.

Organisations can now screen thousands of candidates in seconds, yet still struggle to identify the right talent. The signal is lost in the noise. Efficiency has improved, but genuine discovery has not kept pace.

For candidates, the breakdown is even more visible. Hiring has become a black box. Decisions arrive without explanation. Feedback is rare. Processes feel transactional and impersonal. Qualified individuals are filtered out without context, and many disengage entirely. Not because they lack interest, but because the system fails to recognise them.

Speed without insight is not innovation. It is acceleration without direction.

At near-universal adoption, this is no longer a user experience issue. It is a credibility issue for the entire function of hiring.

Bias did not disappear; it scaled

From the outset, AI was expected to reduce bias. In many cases, it has done the opposite.

Algorithms trained on historical data inherit historical decisions. They replicate patterns that favor familiarity over potential, diminishing opportunity under the appearance of objectivity. Bias is no longer a human flaw alone. It is now embedded in code and applied at scale.

At the same time, relevance has deteriorated. Candidates are pushed toward roles that do not reflect their ambitions. Employers receive pipelines that do not reflect their needs. This is not a failure of matching technology. It is a failure to understand intent, context, and potential.

The result is a system that moves quickly but connects poorly. High-quality talent gets overlooked because mass filtering cannot recognise it. Employers miss the right candidates. Trust erodes on both sides.

At its core, recruiting has lost its human center. And without that foundation, no amount of automation will fix what is fundamentally a relationship-driven process.

Stop filtering, start matching

AI in recruitment does not need more oversight layered onto broken systems. It needs a different kind of mandate.

For the past decade, employers have adopted AI tools to filter faster, reduce inconsistency, and streamline traditional hiring practices. That approach seemed proactive at first, but it no longer works.

The opportunity now is to redesign recruitment around direct alignment, real matching, and human involvement. Not AI to eliminate relationships, but AI to put two people in the right conversation.

That shift starts with transparency. Decision-making cannot remain hidden inside opaque algorithms. Recruiters must be able to interrogate, understand, and challenge the outputs of the systems they rely on. Without that visibility, there is no accountability.

It also requires a renewed commitment to oversight. The most effective organisations will not be those that automate the quickest. They will be those that combine technology with expertise, where AI handles scale while people provide context, nuance, and final judgment.

Equally important is a shift in how individual potential is defined. Too many systems optimise for similarity, reinforcing narrow definitions of success. The future of hiring depends on recognising adaptability, transferable skills, and diverse pathways. That requires systems designed to expand access, not constrain it.

Finally, the candidate experience must be treated as a strategic priority. Clear communication, meaningful feedback, and honesty are not enhancements. They are requirements for a system that intends to be fair.

When nearly every hiring decision is touched by AI, there is no margin for complacency. These systems must be continuously audited, challenged, and improved to ensure they align with both business outcomes and human values.

AI’s role in recruitment is not to make better decisions on its own. It is to bring people together, and to restore what has been missing for far too long: the employer-candidate relationship.

The organisations that lead in the next decade will not be those that adopt AI the fastest. They will be those that use it to connect the right people, building systems that expand access, surface alignment, and restore trust along the way.

Sebastian Scott is CEO and co-founder of Clera
