How do people feel about AI? Well, it’s complicated

A new survey from the Ada Lovelace Institute and the Alan Turing Institute finds broadly positive views towards AI use cases in healthcare and border security, among others, but people are also concerned about the dangers of overreliance on the technologies and a lack of clarity around accountability.

Tags: AI, Accountability, Regulation, Public opinion

Author: Brian Tarran

Published: June 13, 2023

How do people feel about AI? That was a question recently explored in a survey of 4,000 British residents. The answer is that, well, it depends.

Researchers at the Ada Lovelace Institute and the Alan Turing Institute designed the survey to ask about specific AI use cases, rather than the concept of AI more broadly. Use cases included face recognition for policing, border control and security, targeted advertising for political campaigns and consumer products, virtual assistants, driverless cars, and so on.

Roshni Modhvadia, a researcher at the Ada Lovelace Institute and member of the survey team, reported that respondents overall were broadly positive towards most of the use cases they were asked about. Healthcare applications (using AI to assess the risk of cancer, for example) and face recognition for border security were each seen as very or somewhat beneficial by more than 80% of those surveyed. More than half of respondents thought that other applications, such as virtual reality in education, climate research simulations, and robotic care assistants, were very or somewhat beneficial.

Views were less positive towards applications including driverless cars, autonomous weapons and targeted advertising. These were the applications that respondents expressed most concern about, and for each of these use cases perceived risks were felt to outweigh perceived benefits.

And yet, even for applications seen as overwhelmingly beneficial – assessing cancer risk and face recognition for border control – respondents still expressed concern about the potential for overreliance on the technologies, the issue of who is accountable for mistakes, and the impact the technologies might have on jobs and employment opportunities.

Three-fifths (62%) of respondents said laws and regulations would make them more comfortable with AI technologies being used. This is an important finding given where the national AI conversation is at the moment, said Professor Helen Margetts, director of the public policy programme at The Alan Turing Institute.

The current national conversation has been fuelled by the success of ChatGPT and the growing adoption of generative AI tools. The Lovelace/Turing survey, fielded in November 2022, did not ask about ChatGPT et al., but the results do at least provide a baseline against which to measure any shifts in attitudes brought on by what Professor Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute at the University of Edinburgh, described as “this latest round of AI hype and confusion”.

Modhvadia, Margetts and Vallor were speaking at an online event last week to mark the launch of the survey report. The full report is available from the Ada Lovelace Institute website.

How do people feel about AI in statistics and data science education?

A new paper in the Journal of Statistics and Data Science Education considers the potential for using ChatGPT in statistics and data science classrooms. Authors Amanda R. Ellis and Emily Slade of the University of Kentucky suggest using ChatGPT to generate course content – lecture notes, new material such as practice quizzes or exam questions, or pseudocode for introducing students to statistical programming. It could also serve as a code debugging tool or be integrated into set tasks – e.g., have students prompt ChatGPT to write code, then run the code themselves and assess whether it works as intended (a sketch of such a task follows below).
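To make the "write, run, assess" idea concrete, here is a minimal sketch in Python of what such an exercise might look like. To be clear, this example is ours, not the authors': the chatgpt_t_test function is a hypothetical stand-in for code a student might get back from ChatGPT, and the comparison against scipy.stats.ttest_ind is the assessment step.

# Hypothetical classroom exercise (illustrative only, not from the paper):
# students prompt ChatGPT for a two-sample t-test, run the result, and
# check it against a trusted implementation.

import numpy as np
from scipy import stats

def chatgpt_t_test(x, y):
    """Stand-in for code a student might get back from ChatGPT when asked
    to 'write a two-sample t-test assuming equal variances'."""
    nx, ny = len(x), len(y)
    # Pooled variance across the two samples
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))
    # Two-sided p-value from the t distribution
    p = 2 * stats.t.sf(abs(t), df=nx + ny - 2)
    return t, p

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 1.0, size=30)

t_gpt, p_gpt = chatgpt_t_test(x, y)
t_ref, p_ref = stats.ttest_ind(x, y, equal_var=True)

# The assessment step: do the two implementations agree?
print(f"ChatGPT-style code: t = {t_gpt:.4f}, p = {p_gpt:.4f}")
print(f"scipy reference:    t = {t_ref:.4f}, p = {p_ref:.4f}")
assert np.isclose(t_gpt, t_ref) and np.isclose(p_gpt, p_ref)

An exercise like this rewards scepticism: the generated code may be subtly wrong (a one-sided p-value, the wrong degrees of freedom), and the students' job is to catch it.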

“We recognize that educators have valid concerns regarding the implementation and integration of AI tools in the classroom,” write the authors, later adding: “We encourage readers to consider other technologies, such as the calculator, WolframAlpha, and Wikipedia, all of which were met with initial wariness but are now commonly used as learning tools. As statistics and data science educators, we can actively shape and guide the incorporation of AI tools within our classrooms.”

Read the paper: A new era of learning: Considerations for ChatGPT as a tool to enhance statistics and data science education

OK, but how do people feel about AI-generated music?

A new demo on Hugging Face, showcasing Meta AI’s MusicGen model, allows users to generate short samples of music from text descriptions. Users can also “condition on a melody” by uploading audio files. The results are… interesting, as I discovered while playing around with the demo yesterday.
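For readers who would rather script the model than click around the demo, here is a rough sketch using Meta's open-source audiocraft library, which underpins the demo. The checkpoint name and API details are assumptions based on the project's public release, not anything from the demo page itself, so check the audiocraft documentation for the current interface.

# A rough sketch of text-to-music generation with Meta's audiocraft
# library. Model names and API details are assumptions based on the
# project's public release; consult the documentation before relying
# on them.

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint ('small' is the lightest;
# larger variants trade generation speed for audio quality).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per clip

# Generate one clip per text description.
descriptions = ['lo-fi hip hop with a mellow piano loop']
wavs = model.generate(descriptions)

# Save each clip as a loudness-normalised WAV file.
for i, wav in enumerate(wavs):
    audio_write(f'clip_{i}', wav.cpu(), model.sample_rate, strategy='loudness')

# Melody conditioning (the demo's 'condition on a melody' option) uses the
# melody-capable checkpoint and, per the project README, a method such as
# model.generate_with_chroma(descriptions, melody_waveform, sample_rate).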

Read the paper: Simple and controllable music generation


Copyright and licence
© 2023 Royal Statistical Society

This article is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) International licence. Thumbnail image by Andy Kelly on Unsplash.

How to cite
Tarran, Brian. 2023. “How do people feel about AI? Well, it’s complicated.” Real World Data Science, June 13, 2023. URL