More than half of businesses have no formal policy in place to govern the use of artificial intelligence (AI) tools in the workplace, leaving them vulnerable to data risks, compliance issues and uncertainty over accountability.
A poll of over 500 employers and HR professionals conducted by WorkNest revealed that 54% of organisations had no AI policy at all, while almost a quarter (24%) were still in the process of developing one.
Only 13% said they currently have clear, documented rules in place.
The findings suggest businesses are struggling to keep up with rapid AI adoption, with many employees already using tools such as ChatGPT despite companies not having agreed on boundaries or responsibilities.
When asked about the biggest concerns surrounding AI use, 41% of respondents cited data protection and privacy risks as the number one issue, followed by misinformation or inaccurate outputs (30%) and legal or compliance challenges (16%).
Around one in 10 (11%) were worried about overreliance on AI, whilst just 3% said they had no major concerns about AI in the workplace.
The survey also highlighted a lack of clarity over who should take ownership of AI governance.
Almost half of respondents (47%) said senior leadership should set the rules, but over one in five (23%) admitted that no one in their organisation has specific responsibility for setting guidance on AI use.
Experts cautioned that a lack of clear AI policies can expose organisations to significant risks, not only around data protection, compliance and reputation but also under employment law.
Alice Brackenridge, employment law advisor at WorkNest, said: “Employers remain fully responsible for all workplace decisions influenced by AI, even when using third-party tools.
“This means that, should AI-driven decisions result in discrimination or other breaches, intentional or not, the business, not the technology provider, could face employment tribunal claims and substantial financial consequences.
“Without robust processes to monitor, regulate, and review AI outputs, which include conducting regular equality and bias assessments, organisations may inadvertently expose themselves to avoidable and costly legal challenges.”
She added: “Businesses cannot wait until something goes wrong. Proactive steps need to be taken by a combination of senior leadership, HR, IT and legal teams in order to set boundaries, establish policies and provide training.
“AI can deliver huge benefits, but only if it is managed responsibly and transparently.”