Dr Lisa Williams

Hiring inclusively with AI: The dangers of screening out neurodiverse talent

Dr Lisa Williams of The Autism Service discusses how AI hiring tools can unintentionally exclude neurodiverse talent.

Artificial intelligence (AI) has become a key pillar in recruitment, with nearly half of all recruitment agencies adopting some form of AI to streamline their recruitment processes.

In the U.S. alone, four in 10 companies use AI ‘interviewers’ instead of face-to-face interviews with a human, while 72% of UK hiring professionals are opting for skill assessments to evaluate a candidate’s suitability.

AI is often hailed as an effective way to reduce human bias and match the best candidates to roles more efficiently. However, it is also making the application process harder, and outcomes worse, for neurodivergent applicants.

New hiring processes can create invisible barriers that prevent capable and qualified applicants from progressing past the initial application stage, before they’ve even had a chance to speak to a human.

Here’s how AI software can hamper inclusive hiring by screening out neurodiverse talent before they have the chance to prove their suitability for a role.

AI software adoption for neurodiverse screening

Recruiters are increasingly adopting AI-driven tools and software like applicant tracking systems (ATS), automated video interviews (AVI), predictive analytics platforms and online assessment performance trackers to assess candidate suitability and performance.

The average time it takes to fill a vacancy in the UK is 4.6 weeks, and making quick decisions is necessary for optimal business performance. So, it’s no surprise that 70% of companies now rely on ATS to streamline their hiring processes.

These technologies are designed to create a shortlist of candidates who best match the job description and are considered worthy of an in-person interview. 96% of recruiters are also concerned about unconscious bias, a worry that is likely driving AI adoption as a perceived remedy.

However, without human oversight, AI tools and software risk not only screening out viable candidates but also misinterpreting behaviour and communication styles that deviate from neurotypical norms.

For example, AVI software may score a candidate lower, or give more negative feedback, because of limited eye contact, their tone of voice, or taking longer to respond. AI may flag these behaviours as inappropriate for a role, but they are common traits among neurodivergent candidates, including those with autism or ADHD.

In these situations, AI is not assessing the candidate’s ability to perform the role; it is measuring how closely they match a learned definition of looking engaged and confident on camera. This can remove talented applicants from the hiring pool before a hiring team has a chance to assess them in context.
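To make this concrete, here is a minimal, hypothetical sketch of the kind of weighted scoring an AVI tool might apply. The feature names and weights are invented for illustration, since vendors do not publish their models; the point is structural, not a claim about any real product.

```python
# Hypothetical AVI-style scoring sketch. Feature names and weights are
# invented for illustration. Note that every input measures presentation
# style on camera; none measures competence for the job.

def avi_score(features: dict) -> float:
    weights = {
        "eye_contact_ratio": 0.4,    # proportion of time looking at camera
        "speech_pace_fit": 0.3,      # closeness to a learned "ideal" pace
        "response_latency_fit": 0.3, # penalises long pauses before answering
    }
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

# Two equally competent candidates; only their presentation differs.
neurotypical = {"eye_contact_ratio": 0.9, "speech_pace_fit": 0.8,
                "response_latency_fit": 0.9}
autistic_candidate = {"eye_contact_ratio": 0.2, "speech_pace_fit": 0.8,
                      "response_latency_fit": 0.4}

print(round(avi_score(neurotypical), 2))        # 0.87
print(round(avi_score(autistic_candidate), 2))  # 0.44
```

The two candidates diverge purely on presentation features such as eye contact and pause length, which is exactly the failure mode at issue: the lower score says nothing about ability to do the job.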

Challenges of bias in AI training data

The most significant problem with how AI recruitment tools operate is their training methodology. AI models learn from historical hiring data; in most industries, however, that data reflects past decisions shaped by neurotypical preferences.

Therefore, while hiring teams have expressed concern about human hiring bias, AI is failing to solve the problem: algorithms remain biased towards specific traits and communication styles, making them far more likely to shortlist neurotypical applicants while automatically rejecting neurodivergent ones who don’t fit the mould.

Over time, this creates a self-reinforcing loop: neurodivergent candidates remain underrepresented in the training data because they are screened out at every cycle. The problem is intensified by the rigid assessment models that AI hiring software favours.
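The loop can be illustrated with a toy simulation, with all selection rates invented: if each new model is trained only on previously shortlisted candidates, any group selected at a lower rate shrinks in the training pool every cycle.

```python
# Toy simulation of the training-data feedback loop. All rates are invented.
# A screener shortlists neurodivergent applicants at a lower rate; the next
# model is trained only on shortlisted candidates, so representation decays.

def next_share(share: float, nd_rate: float = 0.2, nt_rate: float = 0.6) -> float:
    """Neurodivergent share of the training pool after one hiring cycle."""
    nd_selected = share * nd_rate          # neurodivergent candidates kept
    nt_selected = (1 - share) * nt_rate    # neurotypical candidates kept
    return nd_selected / (nd_selected + nt_selected)

share = 0.15  # neurodivergent share of applicants at cycle 0
for cycle in range(1, 4):
    share = next_share(share)
    print(f"cycle {cycle}: {share:.1%}")
```

With these invented rates, the share collapses from 15% towards under 1% in a few cycles: each model inherits a pool that looks even less like the candidates it already excluded.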

Behavioural, logic, and problem-solving tests can disadvantage applicants who reason more methodically. Some recruitment tasks also place a heavy emphasis on teamwork or involve rapid sensory changes that can be overwhelming or distracting for candidates with sensory sensitivities.

The result? Capable neurodiverse applicants are being rejected by systems that fail to recognise their potential. 

Not only is this an ethical issue, but it’s also a strategic concern. Neurodiverse employees can possess numerous desirable skills and perspectives that can enhance workplace performance. 

Neurodivergent people can possess excellent pattern recognition skills, heightened creativity, the ability to focus intensely, high attention to detail, and logical problem-solving skills, among other notable strengths. 

These skills can significantly impact overall performance: research by Deloitte suggests that neurodivergent professionals can be up to 30% more productive than their neurotypical counterparts. Organisations risk undermining their own market competitiveness and productivity by missing out on highly skilled employees.

How to reduce the risk of screening out neurodiverse talent

The risks posed by AI recruitment tools are not fixed, and with intentional effort to rectify their pitfalls, hiring teams can create processes that harness the efficiency of AI without compromising inclusivity.

AI software should never be the sole decision-maker in candidate screening. Human oversight should be built into every stage of the hiring process: reviewers should look at applications flagged as a poor fit, in case capable candidates have been screened out.

AI tools themselves must be audited for evidence of bias, and recruiters should request transparency from providers about how their tools are tested and adapted to account for a diverse range of applicants.
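One concrete audit hiring teams can run themselves, assuming they hold voluntarily disclosed neurodivergence data, is the “four-fifths” adverse-impact check from the US Uniform Guidelines on Employee Selection Procedures: compare selection rates between groups at each screening stage and flag any stage where the ratio falls below 0.8. A minimal sketch (the applicant numbers are invented):

```python
# Minimal adverse-impact check, based on the "four-fifths rule" from the
# US Uniform Guidelines on Employee Selection Procedures.
# The applicant counts below are invented for illustration.

def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented example: 12 of 40 disclosed-neurodivergent applicants shortlisted
# by the screening stage, versus 90 of 160 other applicants.
ratio = adverse_impact_ratio(12, 40, 90, 160)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag: review this screening stage for adverse impact")
```

A ratio well under 0.8, as in this invented example, would not prove bias on its own, but it identifies exactly which stage of the pipeline deserves the human review described above.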

Recruiters should also consider making assessment methods more flexible and accommodating of various needs, so that candidates can effectively demonstrate how they fit the requirements of a role. For example, some candidates may thrive in opportunities where they can show portfolios or practical work samples instead of completing a behavioural assessment.

Interview formats must also be adaptable and, where possible, in person. Avoid using AI video analysis software to conduct interviews with applicants who may struggle to display certain mannerisms, such as sustained eye contact. Consider offering one-on-one meetings or phone interviews so candidates can perform at their best.

Think about how you communicate with applicants as well. Neurodiverse candidates may thrive when expectations are clearly communicated and precise job descriptions are provided.  

Training your hiring teams on how to identify the potential in neurodiverse talent that may be overlooked by AI screening is critical, as this can help teams understand when candidates may be misrepresented by AI algorithms. 

Technology alone cannot offer fairness in the hiring process, especially when it comes to neurodiverse applicants. AI use in hiring needs active intervention to avoid reinforcing the biases it’s supposed to remove.

Dr Lisa Williams is lead clinical psychologist and director of The Autism Service
