The Employment Rights Act changes what AI needs to do in the workplace

Dr Lisa Turner, founder of CETfreedom, discusses how the Employment Rights Act fundamentally changes what AI needs to do in the workplace.

From October 2026, employers will be required to take “all reasonable steps” to prevent workplace harassment, including harassment by clients and contractors. The Employment Rights Act 2025 has shifted the legal centre of gravity from responding to harm to preventing it. That distinction matters more than it sounds.

Most of the tools organisations currently use to understand their culture detect harm too late. Engagement surveys, pulse questionnaires, whistleblowing lines and exit interviews all share the same architecture: they ask people to recognise and report what has happened to them. That works for some things. It works very badly for the patterns the Act is most concerned with.

Coercive control, manipulation, boundary erosion and the slow normalisation of toxic dynamics by their nature evade exactly this kind of detection. They operate below the threshold at which people can easily name them. By the time someone can articulate what happened clearly enough to put it in a survey response, the damage is already done, and the organisation has only learned about it retrospectively. That is detection used to accuse, not detection used to prevent. It is also, in legal terms, a tribunal-grade evidence trail in waiting.

The question worth asking is not whether to detect, but where in the process detection should happen. The right kind of artificial intelligence (AI) tool can serve three quite different people, at three different points in time.

The first is anyone about to send a message. Employees, managers and team leads can run their own communications through an analysis tool before sending. Think Grammarly, but instead of flagging a misplaced comma, it flags patterns associated with bias, coercion or boundary violation.

People often don’t know which turn of phrase will land badly. A tool that shows them, in private, before they send, gives them the ability to self-correct. The communication that comes out the other side is not just safer. It is clearer. Constraint, in this case, produces capability.
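To make the idea concrete, here is a minimal sketch of such a pre-send check, assuming a purely illustrative rule-based design; the marker phrases, categories and function names below are hypothetical, not those of any real product:

```python
import re

# Hypothetical marker patterns. A real tool would use tested,
# framework-derived markers, not this illustrative handful.
MARKERS = {
    "pressure": [r"\bright now\b", r"\bor else\b", r"\blast chance\b"],
    "minimisation": [r"\bjust a joke\b", r"\byou're overreacting\b"],
    "boundary": [r"\bdon't tell anyone\b", r"\bkeep this between us\b"],
}

def check_message(text: str) -> dict[str, list[str]]:
    """Return the marker categories a draft message matches."""
    hits: dict[str, list[str]] = {}
    for category, patterns in MARKERS.items():
        matched = [p for p in patterns if re.search(p, text, re.IGNORECASE)]
        if matched:
            hits[category] = matched
    return hits

draft = "It was just a joke, don't tell anyone, okay?"
for category, patterns in check_message(draft).items():
    print(f"{category}: {patterns}")
```

The point of the sketch is the placement, not the sophistication: the check runs on the sender's side, before the message leaves, so the feedback is private and correction costs nothing.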

The second is anyone who has been accused. Running the communication in question through the tool gives them a way to test whether the accusation holds up, where they may genuinely need to take responsibility, and where they may be facing something unjust. False accusations are real, and people accused of harm deserve a means of seeing their own communication clearly, too.

The third is anyone who senses something is wrong but cannot yet name it, or who wonders whether they are being too sensitive. Running interactions through the same tool gives them language for what they have experienced, in a domain where their own perception has often been deliberately destabilised. This is the use case the current system serves worst, and the one where detection at the right point matters most to the person involved.

A key caveat sits underneath all of this: it simply does not work with generic AI. Generic models are trained to be agreeable and to give the most likely response; their analysis is superficial rather than forensic.

Without the frameworks for coercive control, sexual harassment or workplace bias, they cannot identify the specific linguistic markers that distinguish a clumsy message from a coercive one, and they are not calibrated to identify when a pattern crosses a threshold.

For this kind of analysis to do what the Act requires, the frameworks have to be built in, the markers defined and the thresholds tested. Anything less produces confident, fluent, useless output, and worse, output that confirms whatever the user already believed. If two people in conflict each run the same exchange through their own ChatGPT account, both will most likely be told they were right and the other was wrong.
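As a rough illustration of what "markers defined and thresholds tested" might mean in practice, consider the sketch below. Every name, weight and threshold in it is an assumed placeholder, not the author's framework; the point is that a flag should fire on a calibrated co-occurrence of markers, not on any single phrase:

```python
from dataclasses import dataclass

# Hypothetical markers and weights. In a real system these would be
# derived from a framework and validated against labelled examples.
@dataclass(frozen=True)
class Marker:
    name: str
    weight: float

MARKERS = [
    Marker("isolation_language", 0.5),
    Marker("perception_undermining", 0.4),
    Marker("conditional_threat", 0.6),
]

FLAG_THRESHOLD = 0.8  # assumed value; must be empirically tested, not guessed

def score(found: set[str]) -> float:
    return sum(m.weight for m in MARKERS if m.name in found)

def crosses_threshold(found: set[str]) -> bool:
    # A single clumsy phrase scores below the line; it is the
    # co-occurring pattern of markers that pushes a message over it.
    return score(found) >= FLAG_THRESHOLD

print(crosses_threshold({"conditional_threat"}))  # False: one marker alone
print(crosses_threshold({"conditional_threat", "isolation_language"}))  # True: a pattern
```

A generic model has neither the marker definitions nor the threshold, so it has nothing to calibrate; it can only produce a fluent opinion, which is how two opposing users each get told they are right.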

Tribunals will assess “all reasonable steps” by looking at what employers did to prevent harassment. The shift the Act invites is not away from detection. It is towards detection at the point where it can still do some good.

Dr Lisa Turner is founder of CETfreedom
