The City of London Corporation has brought in a new standard operating procedure (SOP) for how staff, contractors, and vendors use generative artificial intelligence (GenAI).
The policy lays out what is expected from anyone using these tools at work.
The policy requires anyone planning to use GenAI to declare it to the corporation’s information management board, setting out what data is being put in, what is produced, and who it is shared with.
It also requires staff to follow UK data protection laws and intellectual property rules at all times.
Additionally, any content made with GenAI must be clearly marked as AI-generated and checked for accuracy, and users will be held responsible for anything they produce.
It also covers security, with the corporation calling for strict checks on any AI tools, especially if data is stored overseas.
Any plugins need to be tested for moderation and reliability, and a risk assessment is needed for projects that involve personal data.
The new rules are based on AI principles set by the Digital Services Committee and permanent staff, and focus on ensuring AI is used lawfully, transparently, and ethically.
James Tumbridge, chairman of digital services for the City of London Corporation, said: “This policy reflects our commitment to innovation with integrity.
“By setting clear standards for the use of generative AI, we aim to harness its potential while safeguarding public confidence, data security, and ethical governance.
“We want people to use AI with thought, and that is what this policy is all about.”