What a difference a year makes! That was the general tone of the conversation coming out of techUK’s Digital Ethics Summit this week. At last year’s event, ChatGPT was but a few days old. An exciting, enticing prospect, sure – but not yet the phenomenon it would soon become. My notes from last year include only two mentions of the AI chatbot: Andrew Strait of the Ada Lovelace Institute expressing concern about the way ChatGPT had been released straight to the public, and Jack Stilgoe of UCL warning of the threat such technology poses to the social contract – public data trains it, while private firms profit.
A lot has happened since last December, as many of the speakers at Wednesday’s summit pointed out. UNESCO’s Gabriela Ramos commented on how the UK’s AI Safety Summit, US President Joe Biden’s executive order on AI, and other international initiatives had brought about “a change in the conversation” on AI risk, safety, and assurance. Simon Staffell of Microsoft spoke of “a huge amount of progress” being made, building from principles into voluntary actions that companies and countries can take.
Luciano Floridi of Yale University described 2023 as a “remarkable, eventful year which we didn’t quite expect,” with various international efforts helping to build consensus on what needs to be done, and what needs to be regulated, to ensure the benefits of AI can be realised while harms are minimised. Camille Ford of the Centre for European Policy Studies noted that while attempts at global governance of AI make for a “crowded space” – with more than 200 documents in circulation – there are at least principles in common across the various initiatives, focusing on aspects such as transparency, reliability and trustworthiness, safety, privacy, and accountability and liability.
However, in some respects, we’ve not come as far as we could or should have over the past 12 months. Ford, for instance, called for more conversation on AI safety, and a frank discussion about who gets to set the terms on which AI safety is defined. Not only are there the risks and harms of AI outputs to consider, but also environmental harms, exploitative labour practices, and more besides. Echoing the Royal Statistical Society’s recent AI debate, Ford said we need to focus on the risks we face now, rather than being consumed by discussions about the existential and catastrophic risks of AI – which, for many, are still firmly in the realm of science fiction.
There also remains “a big mismatch” between the AI knowledge and skills that reside within tech companies and those of other communities, said Zeynep Engin of Data for Policy. And many speakers were clear that the global south needs a more prominent voice in the AI debate.
Regulatory approaches
The UK government’s AI Safety Summit has been criticised for focusing too much on the hypothetical existential risks of AI. But, on regulation at least, there was broad agreement that the UK’s principles- and sector-based approach, outlined in a March 2023 white paper, is the right one. That’s not to say it’s perfect: there was debate about whether regulatory bodies would be adequately funded to regulate the use of AI in their sectors, while Hetan Shah of the British Academy wondered “where was the golden thread” linking the AI white paper to the AI Safety Summit and its various pronouncements, including plans for an AI Safety Institute. (On the Safety Institute in particular, Lord Tim Clement-Jones was sceptical of yet another body being drafted in to debate these issues – a point also made by panellists at the RSS’s recent AI debate.)
Delegates also got to hear from the UK’s Information Commissioner directly. John Edwards delivered a keynote address in which he acknowledged the huge excitement surrounding the benefits AI promises to bring, while cautioning that deployment and use of AI must be done in accordance with existing rules on data protection and privacy. The technology may be new, he said, but the same old data rules apply: “Our legislation is founded on technology-neutral principles of general application. They are capable of adapting to numerous new technologies, as they have over the last 30 years and will continue to do.”
He warned that non-compliance with data protection rules and regulations “will not be profitable,” and that persistent misuse of AI and personal data for competitive advantage would be punished. Edwards concluded by saying that AI is built on the data of human individuals and should therefore be used to improve their lives, not to put them or their personal data at risk.
Elections in an era of generative AI
One major looming risk is the use of generative AI to create mis- and disinformation during election campaigns. Hans-Petter Dalen of IBM suggested that next year is perhaps the biggest year for elections in the history of mankind, with votes due in the UK, US, and India, to name but a few. Generative AI represents not a new threat, he said, but an “amplified” one – a point further developed by Henry Parker of Logically.ai. Parker spoke of the risk of large-scale breakdown in trust due to mis- or disinformation campaigns. Thanks to AI tools, he said, we are now seeing the “democratisation of disinformation.” What once might have cost millions of dollars and required a team of hundreds of people can now be done much more cheaply and with fewer human resources. As the Royal Society’s Areeq Chowdhury said, the challenge of disinformation has only become harder.
Asked how to counter this, Dalen said that if he were a politician, “I would certainly get my own blockchain and all my content would have been digitally watermarked from source – that’s what the blockchain does.” But digital watermarking is only part of the answer, added Parker. Identifying mis- and disinformation is both a question of provenance and of dissemination. Logically.ai is using AI as a tool to analyse behaviours around the circulation of mis- and disinformation, Parker said – positioning AI as but one solution to a problem it has helped exacerbate.
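To make the provenance side of that concrete, the sketch below shows the basic mechanism that watermarking-from-source and blockchain-anchored provenance schemes build on: a publisher signs content with a private key, so anyone holding the matching public key can check that a piece of content really came from the claimed source and has not been altered. This is a minimal illustration only, not any particular product’s approach – it uses Ed25519 signatures from the Python `cryptography` package, and the keys and content shown are hypothetical.

```python
# Minimal sketch of content provenance via digital signatures.
# Assumes the `cryptography` package (pip install cryptography);
# the campaign, keys, and statements here are illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g. a campaign) holds a private key; the public
# key is distributed so anyone can verify what the publisher signed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = b"Official campaign statement, released 2024."
signature = private_key.sign(statement)

# A verifier checks that the content matches what was signed at source.
try:
    public_key.verify(signature, statement)
    print("Signature valid: content is unaltered and from this source.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this source.")

# Any tampering with the content breaks verification.
try:
    public_key.verify(signature, b"Doctored campaign statement.")
except InvalidSignature:
    print("Tampered content correctly rejected.")
```

In a blockchain-based scheme of the kind Dalen described, a record of the signed content would additionally be anchored to a public ledger, making it hard to back-date or quietly revise. But, as Parker’s point suggests, provenance only establishes where content came from; it does nothing about how mis- and disinformation spreads, which is why analysing dissemination behaviour remains a separate problem.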
Copyright and licence
© 2023 Royal Statistical Society. This article is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) International licence. Thumbnail photo by Kajetan Sumila on Unsplash.

How to cite
Tarran, Brian. 2023. “AI and digital ethics in 2023: a ‘remarkable, eventful year.’” Real World Data Science, December 8, 2023. URL