Beyond Quantification: Interview with Professor Sylvie Delacroix on Navigating Uncertainty with AI

Professor Sylvie Delacroix discusses why AI systems must move beyond quantification to support professional judgment in high-stakes contexts.
Categories: Interviews, AI, Ethics, Uncertainty

Author: Annie Flynn

Published: January 29, 2026

We recently published a Data Science Bite breaking down the first position paper of the newly launched journal, RSS: Data Science and Artificial Intelligence. The paper, Beyond Quantification: Navigating Uncertainty in Professional AI Systems, argues that if AI is truly to support professional decision-making in high-stakes fields, we must move beyond probabilistic measures and use participatory approaches that allow experts to collectively express and navigate non-quantifiable forms of uncertainty.

Real World Data Science recently had the opportunity to speak to the paper’s lead author, Professor Sylvie Delacroix, about how AI can better support human judgment, why it is crucial to recognise forms of uncertainty that can’t be reduced to numbers, and how participatory design can make AI a true partner, rather than a replacement, for professionals.

Watch the full interview below and scroll down for key takeaways and some analysis.


Interview: Beyond Quantification and Uncertainty in AI


Key Takeaways at a Glance

Not all uncertainty is measurable

AI often focuses on quantifiable uncertainty, like probabilities or confidence scores, but ethical and contextual uncertainties are equally important in professions like healthcare, education, and justice.

“The problem is that if we design these systems in a way that means they’re only capable of communicating these quantifiable types of uncertainty, we risk systematically undermining the significance and importance of non-quantifiable types of uncertainty… which are fundamentally ethical and contextual.”

Participatory AI matters

Systems should let professionals shape how uncertainty is expressed, supporting collaboration and collective judgment rather than replacing human decision-making.

“The intervention that we want is ideally one that means the systems are mouldable by the users over time… that’s what we mean by participatory interfaces.”

The goal is to support and foster human intelligence, not replace it

The most valuable AI tools help professionals reflect, reason, and intuitively navigate complex situations, rather than just process more data faster.

Real-world AI is already in use

GPs, teachers, and other professionals are using AI in sensitive ways, sometimes for informal “sense-making” conversations that influence moral judgments.

Small refinements have big impact

Features that allow systems to express incompleteness, ethical uncertainty, or alternative perspectives can significantly strengthen professional agency when developed with participatory input.

“You could imagine a GP flagging an output and saying… it turns out the output could have been very dangerous because it didn’t include key diagnostic tools… and you could then imagine an interesting conversation with other GPs to figure out together how incompleteness should be expressed.”

Efficiency should not undermine judgment

AI can save time, but to remain effective in the long term, systems must preserve the dynamic, normative nature of the professional practices within which they are deployed.

The time to act is now

Professionals, designers, and regulators need to collectively shape AI tools before design choices are frozen, ensuring they support human-centred, ethical practice.

“If professionals just wait for regulation to intervene, there’s a risk that regulation will arrive only when design choices are frozen… we all have agency in this; we can’t afford to be passive.”


Join the conversation

RSS: Data Science and Artificial Intelligence has an open call for submissions responding to the paper.

Sylvie Delacroix’s work is a call to action for data scientists, designers, and professionals alike. We have a window of opportunity to shape AI systems that encourage humans to keep re-articulating the values they care about.

We want to hear from you. As AI tools become more integrated into high-stakes professions, how can we ensure that systems support human judgment in all its facets rather than simply optimising for efficiency?

Read the full paper here, or our accessible digest here, and join the conversation about building AI tools that truly serve people, not just processes.


About the speaker:
Professor Sylvie Delacroix is the Inaugural Jeff Price Chair in Digital Law at King's College London. She is also the director of the Centre for Data Futures and a visiting professor at Tohoku University. Her research focuses on the role played by habit within ethical agency, the social sustainability of the data ecosystem that makes generative AI possible, and bottom-up data empowerment.