Rebecca Napier, IT business partner at Gi Group UK, has warned that deepfake scams pose serious risks for recruitment and HR teams.
This comes after the European Commission and Ofcom launched an investigation into Elon Musk’s X platform following reports that its artificial intelligence (AI) feature, Grok, could be used to generate inappropriate deepfakes of real people.
Napier said: “The recent headlines around Grok AI being used to generate inappropriate deepfakes should set off alarm bells for those in recruitment and HR teams.
“We’re now in an era where we can no longer assume that what we see online is real.
“HR teams and recruiters need to be alert to this when it comes to matters of the workplace and codes of conduct, as often these issues are much more complex and sensitive than first meets the eye.”
Research by the Office of the Police Chief Scientific Adviser and Crest Advisory found 67% of people believed they had seen or may have seen a deepfake online.
The UK Government projected around eight million deepfakes would be created and shared in 2025, up from 500,000 in 2023.
Napier added: “The growing accessibility of deepfake technology is particularly worrying and from an IT perspective, the potential impact on recruitment and HR processes is huge.
“Deepfakes can be used to impersonate candidates during virtual interviews, fake employee misconduct, or even manipulate internal communications.
“As AI capabilities continue to evolve, the opportunities for reputational damage and fraud are only going to increase.”
She said: “During the recruitment process, it may be that HR personnel, recruiters or even team heads and potential line managers review a candidate’s social media to get a deeper sense of their authentic thoughts and feelings, as well as the language they use or how they engage with others.
“But with deepfakes now in such widespread use, vigilance is essential.”
Napier continued: “When assessing video content, it’s important to look for common warning signs such as unnatural eye movement, blurred facial features, poor lip-syncing or inconsistent lighting.
“Beyond this, organisations should consider introducing stronger identity verification measures during online interviews and internal calls to reduce the risk of impersonation.”
She added: “There are also deepfake detection tools available, which can add an extra layer of protection.
“As incidents like those linked to Grok show, deepfakes are a very real issue and businesses need to get policies in place now for how to approach potential deepfake incidents.”