
An interdisciplinary research team has won funding for a project examining how local authorities in England can effectively govern the integration of ethical and safety principles into the deployment of AI systems, which are increasingly used to inform urban-scale decisions.
The project, titled ‘Guiding the boots on the ground: Advancing ethically informed socio-technical safety of AI systems in the public sector’, will be led by the University of Cambridge and Anglia Ruskin University.
Funded by the EPSRC and the AI Security Institute (AISI) – a research organisation within the UK Government’s Department for Science, Innovation and Technology – the project is one of 20 successful applications, out of more than 300 submissions, to receive a Systemic AI Safety Grant.
The project team comprises Dr Kwadwo Oti-Sarpong and Dr Viviana Bastidas from the Cambridge Centre for Smart Infrastructure and Construction (CSIC); Dr Brian Sheil from the Laing O'Rourke Centre for Construction Engineering and Technology; Dr Maya Ganesh from the Leverhulme Centre for the Future of Intelligence and the Institute of Continuing Education; and Professor Jennifer Schooling from Anglia Ruskin University.
“Deploying AI systems brings significant risks, including algorithmic failures, privacy invasion, lack of transparency and discrimination,” said Dr Oti-Sarpong. “These risks can translate into serious public safety concerns.”
To address such risks and protect society, the UK Government published around 50 guidelines, frameworks and policy documents between 2018 and 2024, detailing principles (e.g. transparency, accountability, explainability) for the ethical and responsible development and use of AI in the public sector.
Dr Oti-Sarpong said: “These documents, while useful in principle, lack implementation guidelines and neglect the socio-technical dimensions of creating safe AI systems for use in urban and public contexts.
“With AI evolving rapidly, and its use expanding to local authorities, there is a growing need to study how these critical public sector actors nationwide navigate this fast-moving field and translate ethical concepts into actionable approaches for safe AI deployment.
“This project will analyse existing government publications on the safe, responsible and ethical public sector use of AI; evaluate local authorities’ awareness of these documents and their perceived relevance to urban planning; and examine case studies of how local authorities apply Generative AI (GenAI) and Large Language Model (LLM) systems for urban planning. Together, these will inform an ethics-first socio-technical governance framework.”
This project complements and builds on ongoing research into ethically rooted AI in the public sector, funded by ai@cam under the ‘AI-deas’ challenge – a new University prize supporting ambitious ideas for how AI can address critical societal issues.