Real World Data Science recently had the opportunity to sit down with [Professor Neil Lawrence](https://www.cst.cam.ac.uk/people/ndl21), Editor-in-Chief of the Royal Statistical Society’s new journal, RSS: Data Science and Artificial Intelligence. Neil, who is the DeepMind Professor of Machine Learning at the University of Cambridge, a Senior AI Fellow at the Alan Turing Institute, and a Visiting Professor at the University of Sheffield, is a leading voice in machine learning and AI. He previously served as Director of Machine Learning at Amazon, and his research interests span probabilistic models and real-world applications in health and developing economies. He is also passionate about public engagement—he co-hosts the Talking Machines podcast and is the author of The Atomic Human.
We recently published a Data Science Bite breaking down the first position paper of the newly launched journal, and spoke to its lead author, Professor Sylvie Delacroix, about its themes: how AI can better support human judgment, why it is crucial to recognise forms of uncertainty that can’t be reduced to numbers, and how participatory design can make AI a true partner, rather than a replacement, for professionals.
In this conversation, Neil discusses the paper and how it aligns with the journal’s vision, plus the importance of bridging machine learning and related fields to keep the human element at the heart of AI systems.
Watch the full interview below and scroll down for key takeaways and some analysis.
Interview
Key Takeaways at a Glance
1. The journal aims to convene, not conclude
The first paper is intentionally a position paper: an invitation to discussion rather than a definitive answer. Lawrence emphasises that solutions to these challenges are distributed across the community. Progress depends on creating spaces—like the RSS journal—for thoughtful, cross-disciplinary exchange grounded in real-world practice.
2. Data scientists must reassess habits, not just adopt new tools
While AI can dramatically increase technical efficiency, Lawrence warns against using that efficiency simply to “do more of the same.” Instead, practitioners should reinvest the time saved in understanding the broader human, societal, and institutional implications of their work.
3. Overconfidence and lack of accountability in AI systems pose real risks
As the journal’s position paper highlights, AI systems, unlike human stakeholders, do not carry social or reputational stakes. This can lead to overconfident outputs without accountability—particularly dangerous in high-stakes domains like healthcare, law, and education. Without better interfaces for uncertainty, professionals risk being distanced from the information they need to make sound judgments.
4. “Conversational uncertainty” is now central to real-world AI use
In many professional settings, decisions are not made through formal statistical outputs alone, but through dialogue—between clinicians, experts, or increasingly, humans and machines. Understanding how uncertainty is communicated and interpreted in these conversational settings is critical, especially as large language models become more influential.
5. Bridging qualitative and quantitative thinking is essential
A recurring theme is the need to close the long-standing divide between quantitative methods and qualitative insight. Many real-world decisions are inherently qualitative, yet current AI systems—and much of data science—are optimised for quantification. Failing to integrate these perspectives risks repeating past mistakes where “the numbers” were treated as unquestionable truth.
6. Participatory approaches lead to better long-term decisions
Although slower upfront, participatory and deliberative processes—bringing together diverse expertise and perspectives—can prevent costly mistakes and misaligned systems. In the long run, they prove more effective than purely efficiency-driven approaches.
Join the conversation
This conversation touches on a theme we often explore here at Real World Data Science: the idea that the future of data science and AI will not be defined by technical capability alone, but by how well we integrate human judgment, context, and responsibility into our systems. The position paper—and RSS: Data Science and AI more broadly—is an open invitation to engage with these questions. Whether through research, case studies, or reflections from practice, there is a clear call for contributions that connect technical work with real-world impact.
As Neil suggests, the answers are unlikely to come from any single discipline or organisation. They will emerge from a broader conversation across the data science community.
Now is the time to be part of that conversation: answer RSS: Data Science and AI’s call for submissions.