UK government sets out 10 principles for use of generative AI

Government says staff need to understand what generative AI is, its limitations, and how to deploy the technology lawfully, ethically and securely.

AI ethics
Large language models
Monitoring
Public policy
Risk
Author

Brian Tarran

Published

January 22, 2024

The UK government has published a framework for the use of generative AI, setting out 10 principles for departments and staff to think about if using, or planning to use, this technology.

It covers the need to understand what generative AI is and its limitations, the lawful, ethical and secure use of the technology, and a requirement for “meaningful human control.”

The focus is on large language models (LLMs) as, according to the framework, these have “the greatest level of immediate application in government.”

It lists a number of promising use cases for LLMs, including the synthesis of complex data, software development, and summaries of text and audio. However, the document cautions against using generative AI for fully automated decision-making or in contexts where data is limited or explainability of decision-making is required. For example, it warns that:

“although LLMs can give the appearance of reasoning, they are simply predicting the next most plausible word in their output, and may produce inaccurate or poorly-reasoned conclusions.”
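The framework itself contains no code, but a minimal sketch may help illustrate what "predicting the next most plausible word" means in practice. The toy vocabulary and scores below are invented for illustration only; a real LLM computes such scores with a neural network over a vocabulary of tens of thousands of tokens.

```python
import math

# Toy illustration (not from the framework): each candidate next word
# gets a raw "plausibility" score. These numbers are made up.
vocab_logits = {
    "increased": 2.1,
    "decreased": 1.8,
    "exploded": 0.3,
    "banana": -4.0,
}

def softmax(logits):
    """Turn raw scores into a probability distribution over candidate words."""
    exps = {word: math.exp(score) for word, score in logits.items()}
    total = sum(exps.values())
    return {word: value / total for word, value in exps.items()}

probs = softmax(vocab_logits)
# Greedy decoding: emit the single most plausible word, whether or not it is true.
next_word = max(probs, key=probs.get)
print(f"Prompt: 'Last quarter, the unemployment rate ...' -> '{next_word}'")
```

The point of the sketch is that "most plausible" is a statistical judgement, not a factual one, which is why the framework warns that outputs may be inaccurate or poorly reasoned.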

And on the issue of explainability, it says that:

“generative AI is based on neural networks, which are so-called ‘black boxes’. This makes it difficult or impossible to explain the inner workings of the model which has potential implications if in the future you are challenged to justify decisioning or guidance based on the model.”

The framework goes on to discuss some of the practicalities of building generative AI solutions. It talks specifically about the value a multi-disciplinary team can bring to such projects, and emphasises the role of data scientists:

“data scientists … understand the relevant data, how to use it effectively, and how to build/train and test models.”

It also speaks to the need to “understand how to monitor and mitigate generative AI drift, bias and hallucinations” and to have “a robust testing and monitoring process in place to catch these problems.”
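The framework does not prescribe how such monitoring should be implemented, but one common approach is a prompt-based regression suite that re-runs fixed prompts with known expected content and flags mismatches. The sketch below is an illustrative assumption, not government guidance: query_model is a hypothetical stand-in for whatever LLM client a team actually uses, and its canned reply exists only so the example runs.

```python
# Illustrative only: TEST_CASES, query_model and the canned reply are assumptions.
TEST_CASES = [
    {"prompt": "In which year was the Office for National Statistics created?",
     "must_contain": "1996"},
    {"prompt": "Summarise: 'The committee met twice in March and once in April.'",
     "must_contain": "twice"},
]

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; returns a canned reply for illustration."""
    return "The Office for National Statistics was created in 1996."

def run_monitoring_suite() -> list:
    """Flag responses missing expected facts (possible drift or hallucination)."""
    failures = []
    for case in TEST_CASES:
        answer = query_model(case["prompt"])
        if case["must_contain"].lower() not in answer.lower():
            failures.append(f"Check output for prompt: {case['prompt']!r}")
    return failures

if __name__ == "__main__":
    for warning in run_monitoring_suite():
        print(warning)
```

Run regularly against a deployed model, checks like these give the "robust testing and monitoring process" the framework asks for a concrete, automatable form.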

What do you make of the Generative AI Framework for His Majesty’s Government? What does it get right, and what needs more work?

And in case you missed it…

New York State issued a policy on the Acceptable Use of Artificial Intelligence Technologies earlier this month. Similar to the UK government framework, it references the need for human oversight of AI models and rules out use of “automated final decision systems.” There is also discussion of fairness, equity and explainability, and AI risk assessment and management.


Copyright and licence
© 2024 Royal Statistical Society

This article is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) International licence. Thumbnail photo by Massimiliano Morosinotto on Unsplash.

How to cite
Tarran, Brian. 2024. “UK government sets out 10 principles for use of generative AI.” Real World Data Science, January 22, 2024. URL