We’re about a year late coming to Stephanie Hare’s book, Technology is Not Neutral: A Short Guide to Technology Ethics. But, as discussed in our interview below, time has only made the text more relevant. The book was written pre-ChatGPT, but Hare’s explorations of ethical questions in the context of facial recognition technology and Covid-19 exposure tracking apps feel both pointed and urgent at this moment, when researchers, regulators, and regular people are weighing the opportunities and potential harms of large language models and generative AI tools.
“We’re having some sort of moment with technology ethics – AI ethics being just a branch of that,” says Hare. Reflecting on her career, spanning 25 years, she says: “The stuff that we’re talking about today that dominates the headlines – that is dominating the discussion in the tech sector – was not discussed at all at the turn of the century, other than by maybe people in the science and technology studies domain or academics. But it wasn’t filtering into boardrooms. It wasn’t on the front pages of newspapers, and it wasn’t being covered in the national news. So, it’s amazing. A whole field has sprung up.”
However, as Hare makes clear in our interview, we still have a long way to go to build a culture of technology ethics throughout society. Check out the full conversation below or on YouTube.
Timestamps
- ChatGPT: just another “flavour of the month” in the tech industry? (1:18)
- Has concern about large language models helped put technology ethics on the map? (3:17)
- What will it take to build a culture of technology ethics – in society, in academia, in industry? (9:16)
- Drawing lessons from history (12:15)
- Why technology ethics is a “wicked problem” (24:57)
- Checklists and changing mindsets (29:49)
Quotes
“The European Union has the AI Act coming down the pike. It doesn’t cover stuff like ChatGPT specifically, but then I don’t know if you want good regulation to cover the technology itself, or how technology is used. I talked about this in my book: do you want to regulate forks – a tool – or do you want to regulate use cases for forks? We’ve regulated the use case, if you will, of murder, or of injury with a fork – or, frankly, any other tool. So it’s the use case we focus on. We don’t really regulate forks. [But] we do regulate some technologies, like biomedical technologies, human genetic stuff, anything nuclear. So we just need to think about where does AI fit with that?” (5:42)
“Move fast and break things was the mantra for this culture [in the technology industry] for a really long time, at least out of the US. And it made a lot of people a lot of money, and they got worshipped by the media. And, you know, they have a whole audience of ‘bros’ who are fans of them. And they’ve never really, any of them, been held to account for what they’ve built.” (11:56)
“Another generation or two, when we’re older, might look at some of what technology we’ve built or our behaviour on climate change, our track record – did we do what we could have done to slow global warming, to improve biodiversity? – and they might hold us to account, saying, ‘You could have stopped this and you didn’t, right? It’s not just what you did. It’s what you did not do.’ So we have to be super careful when we think about ethics, because ethics change, values change over time. And what seems okay today may not be okay in 10, 20, 30 years time. That is on my mind all the time. It’s not very relaxing.” (18:30)
“[Laws and regulations] are important, they’re necessary, but they’re insufficient. You can act a lot faster if you can get people preventing stuff from being built in the first place, and that means you need to have a culture of people working in technology, both within the organisations – whether that’s research labs, government, companies, universities, whatever – and on the outside – journalists, academics, thinkers, etc., or just the public, an informed public – who can see something and sound the alarm and go, ‘Wait a minute, hang on. That’s not okay.’” (33:02)
Transcript
This transcript has been produced using speech-to-text transcription software. It has been only lightly edited to correct mistranscriptions and remove some repetitions.
Brian Tarran
Hello, and welcome to another Real World Data Science interview. I’m Brian Tarran. And today I’m joined by Stephanie Hare, a researcher and broadcaster and author of the book Technology is Not Neutral: A Short Guide to Technology Ethics, which is the focus of our conversation today. Welcome, Stephanie.
Stephanie Hare
Thank you so much for having me here.
Brian Tarran
I feel I’m a bit late to the party with the book. The Financial Times picked it up as one of the best books of summer 2022. But I’ve only just got around to reading it.
Stephanie Hare
I mean, I only just got around to reading War and Peace last year. So there’s no rush with these things.
Brian Tarran
Okay. Well, I mean, I can definitely say it’s one of the best books I’ve read in spring 2023, if that helps, and the only other one I read was Lord of the Rings. So–
Stephanie Hare
Wow. I mean, that’s some pretty august company – you couldn’t thrill me more.
Brian Tarran
Excellent, excellent. Well, I actually thought coming late to the book might have been of benefit to me as a reader because, you know, you’re talking about technology ethics quite broadly. But then you focus in on a couple of use cases, specifically around facial recognition technology, Covid-19 exposure tracking apps and things like that. But, you know, obviously, since the book was published, the whole discussion around technology ethics has kind of been dominated maybe, or taken on a new dimension, following the launch and adoption of ChatGPT. I wonder, you know, when the technology was launched, people started using it, adoption rates went through the roof, what was your kind of initial reaction to all that, and what have you made of the kinds of conversations and criticisms that have followed?
Stephanie Hare
Well, I mean, like everybody I was curious and fascinated and wanted to play around with it a bit. I don’t think I’m of the school of thought that seems to be circulating that this will either, you know, destroy mankind as we know it or take everybody’s jobs or potentially upend civilization. There have been some quite extreme views put across in the media in the past few months since this was widely released to the public. I don’t know, for some reason, I didn’t drink the Kool-Aid when I started working in technology. So I always take these things with a grain of salt. And I guess my cautionary note to anybody listening to this is, you know, at this time last year, all of my clients were wanting presentations and analysis about web3 and NFTs and cryptocurrency, and before that, it was blockchain. There’s always a sort of flavour of the month. AI, for people who’ve worked in this field, is known to have winters, summers, springs, autumns, you know, these seasons of when it’s coming on and really exciting, or not. I’m more excited by DeepMind’s use of artificial intelligence – I think they’re actually working on interesting problems, right, around protein discovery, like real science, as opposed to, like, oh, look, I can have a new sci-fi avatar or do a deep fake. You know, people can do deep fakes already. We’re just doing them now in even more disturbing ways. So I guess it’s that. I’m intrigued by it. But I don’t feel the need to sort of freak out either way, positively or negatively. I have a much more sort of detached, satellite-level view, I think probably just because I’m older, seeing these trends come and go. And it’s like, let’s just wait this out and see how it goes.
Brian Tarran
From your perspective as someone who’s interested in and researching in the area of technology ethics, right, do you see almost a benefit, in that the conversation around this has put technology ethics on the map? Or do you worry that we’re kind of obsessing over this one technology and this one application, and not looking at the field more broadly?
Stephanie Hare
Well, there’s a few things to say on this. So, first of all, a couple of weeks ago, a bunch of people working in AI, about 1000-plus people – including some fictitious people, by the way – signed this letter calling for a moratorium on AI research for six months, which was unenforceable, clearly not going to happen, and was signed by Elon Musk, who then very quickly announced he was developing his own rival to OpenAI, the company that invented ChatGPT. So you take all of that with a grain of salt. But again, if you’re an historian, or if you just have a long memory, you’ll remember that there have been several letters like this. There’s always somebody, you know, very big-wig people. It’s not that we want to dismiss it. But Stephen Hawking and Elon Musk were warning, you know, around 10 years ago that AI was going to kill humanity if we didn’t put guardrails on it. Professor Stuart Russell talked about this in his Reith Lectures a couple of years ago, which are still online and you can listen to them. And you know, he’s not an alarmist. He’s a serious person and a serious thinker. So we want to listen to him. But I guess what I’m just saying is, you know, every time some sort of new technology or new use case for technology comes up, there’s a group of people who come out and freak out and they get lots of op-eds. It’s usually men, I must say. There are a lot of women doing some really interesting scholarship in this area who don’t get the op-eds or quite the publicity. So there’s that. Then it is interesting because it makes people think about technology ethics, usually, again, from a place of either fear, right – Are they going to kill us? Are they going to take our jobs? Are they going to remove human agency? – or money – How are people going to make a huge amount of money? Who’s going to make the money? And by displacing whom, right? So, we have two levers: incredible doom or incredible opportunity. And that leaves the rest of us, I think, probably somewhere in the middle, scratching our heads and going, is this going to actually change my life? And if so, how, and do I really care, given that I’ve got, you know, a cost of living crisis, and recovering from the Covid pandemic for the past few years? Like, if you’re not in this world, it can seem like a lot of shouting. There’s also the question of, do we need new laws? So we know that the European Union has the AI Act coming down the pike. That’s supposed to be passed this year, and there’ll be a two-year implementation grace period. So that’s interesting. It doesn’t really cover stuff like ChatGPT specifically, but then I don’t know if you want good regulation to cover the technology itself, or how technology is used. And I talked about this in my book: do you want to regulate forks – a tool – or do you want to regulate use cases for forks? So if I stab you, or kill you with a fork, which is totally possible, that is something that we’ve regulated; we’ve regulated the use case, if you will, of murder, or of injury with a fork or, frankly, any other tool. So it’s the use case we focus on. We don’t really regulate forks. We do regulate some technologies like bioethics technologies, or biomedical technologies, excuse me, sort of human genetic stuff, anything nuclear. Those technologies we do specifically regulate. So we just need to think about where does AI fit with that? And also, do we need new regulations for everything, or can we use existing ones?
And what’s becoming really interesting is that in the US, where I’m from, the main regulator, the FTC, seems to think that it can use a lot of existing laws already. So they’ve been like, if your AI is claiming to do stuff that it can’t, we’re gonna come after you under, sort of, false advertising, if you will, misrepresenting yourself. They might come after some of the big AI companies based on anti-competition law, right? So no new laws needed for that. And then with the music industry, they’ve been going after all the people who are like, oh, let’s remix a Drake song, and saying, well, actually, you can’t do that, because you’re violating copyright law, take it down. Right. So again, I don’t want to be too calm about it. Like, we do need to look at some of the use cases that are really problematic and hurting people. But we might actually have a lot more in our arsenal to combat this than we’re currently using. And I think what’s going to happen is, unfortunately, the pace of crafting legislation is slow, and then regulators never fully enforce regulations – look at the GDPR: no company’s ever been given the full fine. And here in the UK, the ICO is famous for letting companies off the hook, giving them less than half of the original fine. It’s ridiculous. So if you’re going to pin your hopes on regulation, I’m not sure that’s great. I’m weirdly more optimistic about landmark legal cases. So we’re seeing an Australian mayor who was totally defamed by ChatGPT; he’s going to be taking, or is taking, OpenAI to court. And then we might see some of these copyright issues taken to court, right. And that’s where people, I think, will get more action and, frankly, more respect, because these companies are really happy to pay a lot of money to lobby our lawmakers and water stuff down. And they always say, oh, my God, it’s going to constrain innovation. And if they really get desperate, they’ll be like, China! If we’re not allowed to do everything we want, China will win! That is just a game that takes everybody nowhere, whereas the lawsuit angle, that’s interesting, because you’re demonstrating responsibility, you’re discussing liability, you’re having to demonstrate harm. And in the process of discovery, right, you might be able to actually get some of these companies to open up their datasets, how their algorithms work. I’m much more intrigued to see where that’s gonna go.
Brian Tarran
Yeah, but I think, I mean, having read your book, I would have thought you might perceive all of this sort of stuff as kind of sticking plasters to put over the injuries that might be caused by these technologies, right? Your argument seems to be that we have an issue whereby we don’t have a culture of technology ethics. So when we’re thinking about building these tools, or when we’re starting off down the path of creating something like this, we’re not already thinking about, you know, the use cases, the harms that might arise from that and things like that. What does it take to build a culture of technology ethics, do you think, in our society, in our academic institutions, in our companies?
Stephanie Hare
Honestly, I think this whole accountability piece is going to be what it takes. Because you see, like, Alphabet CEO Sundar Pichai gave an interview recently to CBS 60 Minutes in the US where he was like, yeah, there’s a risk that this technology could get out of control, like dot dot dot, this would be terrible for mankind. And you see him kind of be like, hope somebody does something about that. And it’s like, I know somebody that might do something about that, Mr Pichai – you! But clearly he feels, and you can see his point, he feels that right now, if it’s not illegal, then it’s permissible. And he has to win market share. If he doesn’t do it, he’s going to lose. And companies have this all the time. If they wait too long, they lose their first-mover advantage, and they get destroyed. We can go through countless examples of that in business, particularly in technology. So I get it. But what he isn’t understanding is that if his company is the one that puts out the technology that leads to terrible harm – you know, physically killing people, harming them, destroying the national security infrastructure, something like that – right now, I don’t think he’s thinking about how that’s going to affect him. And that’s because we don’t really penalise executives very often. The worst that might happen is they might leave with a huge golden parachute, and go off and sort of retire in Hawaii with their millions, right? Like nothing really happens to them. So how do you have a culture of technology ethics, where the people who are creating technology and have the power to stop, right, to maybe back off on stuff, they aren’t really thinking about how will I personally be held responsible if this goes south? So, like, Sam Altman and OpenAI, same thing: he gave an interview where he’s like, I’m really scared about this technology I’m building. It’s like, okay, you could slow down or back off, you could make your datasets open, you could make your algorithms open. You’re called OpenAI, that was supposed to be your whole mission, why you were created was to benefit humanity – like, what are you doing? So it’s weird. And I think it comes from the fact that, you know, move fast and break things was the mantra for this culture for a really long time, at least out of the US. And it made a lot of people a lot of money, and they got worshipped by the media. And, you know, they have a whole audience of bros who are fans of them. And they’ve never really, any of them, been held to account for what they’ve built.
Brian Tarran
So your interest in technology ethics clearly predates, you know, all the noise at the moment around large language models and generative AI and things like that. What was it that got you interested in this subject? Was it a particular application, something that caused some concern? Or– How did it come about?
Stephanie Hare
This is gonna sound completely weird, but it didn’t come from my experience in tech, really, at all. I have had two paths in my adult life: one has been working in these technology companies, with a brief but happy foray in political risk, which is now sort of part of the skill set for tech. But I trained as an historian, and I interviewed someone who was a French civil servant, who at the end of his life was put on trial for crimes against humanity for his actions as a young civil servant during the Second World War. So he collaborated, as so many French civil servants did. And in the course of that collaboration, over a period of many years, went from just, you know, signing documents and kind of doing what he was told to do, to deporting people and sending them to Auschwitz. So I was very young when I interviewed him, and that marked me, as I would hope it would mark anybody. I talked with him on and off for about three years, until he died. And that was the subject of my PhD. And then I did a fellowship at St Antony’s College, Oxford, and spent years looking at it further. And that’s actually going to be my next book. I needed a long time to sit with that material and to read around it, but I had to get the interview while he was still alive. He was so old – he was 93 when I started talking to him – and so it was important to get that down for posterity’s sake first, and then circle back and do some analysis later when I was a bit older. When you talk to somebody who, in his case, was, you know, not antisemitic, not of the far right politically, was actually centre-left, had lots of Jewish friends, etc., was top of his class, you know, came from a milieu and a background and a formation that I think many of us would read and be like, okay, that seems pretty reasonable – you ask yourself, how on earth did that person, in the young period of his late 20s, early 30s, end up being involved and actively participating in what ends up being mass murder? It’s probably the most extreme case study of ethics, or one of the most extreme case studies of ethics, that I could have stumbled upon. And it stayed with me and, to be honest, it shapes a lot of my work and how I think about human rights and civil liberties and the freedoms that we so often take for granted, because I’ve studied, as an historian looking at France and Germany, how quickly those things can be taken away – very quickly, terrifyingly so, in fact. So that lens is always with me. And then I was working in technology, and seeing some of the things that could be done with these tools and watching this lack of accountability, down to the point of gross negligence in some cases. And also, as a young technologist, not being given any training – we were given no training at all in ethics, in, like, discussing data protection – it was basically: this is the law, just obey the law; that’s the box that you have to play in. Other than that, like, go for it. And when I look back on that now it’s like, oh, my God, that’s the equivalent of putting your family in the car and everybody going off without wearing their seatbelts – you know, all this sort of safety design that we take for granted in cars now, it’s just mad when you think about it, or the way we used to fly.
We’re in this phase, it’s really interesting, just over the course of my career – 25 years – where the stuff that we’re talking about today that dominates the headlines, right, that is dominating the discussion in the tech sector, was not discussed at all at the turn of the century, other than by maybe people in the science and technology studies domain or academics. But it wasn’t filtering into boardrooms. It wasn’t on the front pages of newspapers, and it wasn’t being covered in the national news, whereas like now that is all I’m doing. So it’s amazing. A whole field has sprung up.
Brian Tarran
I think that kind of origin story, if you like, explains, perhaps, some of your belief in the importance of exploring this accountability question when it comes to technology ethics?
Stephanie Hare
Yeah, because I watched it, and I think what was so fascinating– So, as I say, I was in my early 20s. In fact, I was 20 when this man was put on trial, and I had just moved to France. It was the longest trial in French legal history – it was a big deal, you could not not watch it. So I was reading this and seeing it in the press every day, and I watched the French people discussing it around me – you know, it was really divisive; this stuff does not go away. And his view was: I was just following orders, I was doing what I was told to do. Which, you know, you hear a lot from engineers or people who are like, this is the design spec I’ve been given, or this is what my boss has told me to do, or this is what our investors want, etc. Or people feel they don’t have the power to stand up because, you know what, they’ve got a mortgage, they’ve got kids, and employers know that – they know that and they use it as leverage against people to silence them. Or they’ve signed an NDA, because we get made to sign these NDAs when we work in tech, and then we get made to sign another NDA when we leave, right, so we can’t disparage our employer, and maybe we’re given some money so we don’t talk about the things we’ve seen. You know, it’s gross – it’s a gross little world, and you have to be very, very solid and take good care of yourself to work in it, I reckon. To try and keep your ethical and moral compass. It’s hard. So I think because I saw that. And I saw that someone who – whether we believe him or not, this is what he claimed – in his 30s was just doing kind of what everybody around him was doing under a situation of crisis. He was let off the hook. I mean, he wasn’t just not prosecuted in 1945, he was actually promoted. And then he became France’s top civil servant, and then he became an MP, and then he became budget minister. I mean, this guy’s career was not hurt in any way by what he did. On the contrary, right, he advanced. And yet, by the end of his life, French values had changed, so a new generation wanted to hold him to account. And I think about that a lot for all of us, right, who are sort of walking around in our 30s or 40s. Another generation or two, when we’re older, might look at some of what technology we’ve built or our behaviour on climate change, our track record – did we do what we could have done to slow global warming, to improve biodiversity? – and they might hold us to account, saying, you could have stopped this and you didn’t, right? It’s not just what you did. It’s what you did not do. Right. So we have to be super careful when we think about ethics, because ethics change, values change over time. And what seems okay today may not be okay in 10, 20, 30 years time, and we might be the 80- or 90-year-olds who are put on trial. That is on my mind all the time, right. It’s not very relaxing.
Brian Tarran
No, and I guess it makes me think – well, I mean, this is getting into the hypotheticals, right – but is it– if we can’t necessarily predict or plan out how values might evolve over time, is it enough to be able to just say, or to document, that we asked the right questions at the time, rather than just doing things blindly? Is that where we need to kind of almost formalise our process of writing down, setting out, you know, we want to do this, we’ve considered these potential harms, we’ve considered these potential benefits, and we kind of document that so at least, you know, future generations can say, well, ‘They thought about it. They might not have thought about it in the right way, but they tried’?
Stephanie Hare
Absolutely. I think, you know, show your work and be like, these were our, you know, our sort of first principles of where we were starting from; this is the context in which we were making this decision. Because, again, I don’t necessarily fear the judgement of history in terms of if I get something wrong. People get stuff wrong all the time. That’s just being human. It’s, did I not care? You know, was I like, well, sorry, little boys and girls who are babies now, like, I need to do my stuff, and I don’t care about you, right? That attitude is tough. Or, I decided I just really, you know, really needed to buy a flat, so I decided to work for some dodgy company, or a dodgy company that’s owned by a foreign government, but I knew it was going to be fine, and they’re offering me a tonne of money, and now I can go on nicer holidays. I’ve had these conversations with people about this literally this past week – these are live issues for people. There’s a cost of living crisis; ethics can feel like a luxury for some people rather than a necessity. And human beings are very bad, all of us, at thinking about, you know, future selves, right? Like we kind of optimise for how we’re feeling now, and we’ll deal with 20 years from now later, if we even get there. So I think there’s that. There’s also– this really inspired the writing of the book, Technology is Not Neutral. I had this weird sense – I had just gone independent, so I had left working for these companies, I was not under any NDAs anymore, which right there gives you a clue; I could say what I wanted – but I also knew there was a chance that I was going to have to go back either into industry, or maybe work for a government; I don’t know what I’m going to need to do in the future or who I’m going to want to work with or what reasons I might even have for that. But I knew I had this window of being an independent researcher and broadcaster, that I could say whatever I wanted, and I had that thing of like, okay, if you’ve had a window of, say, five years, for example, what would you say if you were not afraid? If you were not scared? If you were like, you know, screw the money, screw the corporate pressure, screw the government, whatever – what do you want to talk to the public about? And my view was, I really wanted to talk to them about facial recognition, because I feel people just fundamentally do not understand how dangerous that technology is and how it can be used. I wanted to talk about the pandemic technologies, because, you know, I was writing it during the pandemic, and I thought, well, if a pandemic ever happens again, let’s have a nice little tidy case study for potentially future historians or future medical personnel, public health officials to pull out, because when the pandemic hit, we all had to go back and look at stuff from the Spanish flu. You know, there’s a lot of discussion of, like, why has this come as such a surprise? Are we going to use these technologies again or not? Right. Like, you know, is it worth it? Is the return on investment worth it in all senses – ethically, as well as medically, all of those things? So I thought I would lay down a couple of markers that I hoped would stand the test of time.
But the big thing I wanted to do – because I was always thinking I might have to go and sign another NDA and go to work, because I too must earn my living – was to write something for anyone who cares about technology, is working in it, is investing in it, right – it’s not just people who code, it’s people who fund the people who code. Buying technology – procurement is massive; you’re a really powerful person if you’re in charge of procurement. But also just consumers, and citizens and parents, and teachers and kids. If I could write up everything that I had learned in my 25 years, as succinctly as possible, right – as short as possible, because people are tired, they’re busy – I could pass that baton on, so that if I ever have to stop going on television and radio, and I’m no longer allowed to write in newspapers and warn people about the stuff I’m seeing and the abuses of power, and showing them examples from history of how this can go so terribly wrong, maybe it will have, like, lit somebody else. And I’m delighted to report – I mean, we’ll see; time will tell, it’s only been out a year – the number of people who have brought me in to train their staff, to talk to their board. I’ve talked to children. I’ve talked to university students, I’ve taught classes all over the world, because we can now do online teaching. I’ve taken a lot of it on television and radio and in the newspapers. People wanted this, and I’m not the only person working on it, of course – there’s been a whole flowering of people, scholars, etc., putting out amazing books and documentaries. It’s really– we’re having some sort of moment with technology ethics – AI ethics being just a branch of that. So that’s really encouraging. So I sort of feel like, you know, again, if I’m gonna have to account for myself at the age of 93, I would like to be able to point to that and go, I tried. I tried. And I have no idea if it will succeed or not, but I stood up to the plate and I swung the bat and, you know, I aimed for the bleachers.
Brian Tarran
One of the things I thought was really interesting about the book – it comes towards the end, when you’re kind of summing up – is you talk about how your thinking about, almost, the approach or solution to the technology ethics issue changed over the course of writing the book. You had, like, a list of potential proposals, proposed actions, that you wanted to analyse, but then you realised that actually technology ethics is a “wicked problem”. I wonder if you could explain what that term means for people who might not be familiar with it, and why you think of it that way?
Stephanie Hare
Yeah, I’m so grateful to have learned about the term “wicked problem”. My friend Jason Crabtree, who wrote an amazing book about electricity grids – like smart electricity grids – for Cambridge University Press, had asked me to read his manuscript maybe 10 years ago, and I read it, and one thing I took away that just absolutely blew my mind was this concept. So I shall gift it to you, for those of you who have not heard it. Because then suddenly, you’re like, God, it makes so much sense. There are certain problems – I would say the climate crisis and biodiversity loss would be good examples of this – that have many causes, so there isn’t going to be one solution to fix them. So people constantly ask me, oh, is this the magic bullet? No – there are, like, certain problems for which there is no magic bullet; the pandemic is probably another, actually. Then, if you do try to solve these wicked problems, the mere act of solving them can introduce a whole new set of problems. So, like, it becomes even more of a head– You know, I’m trying not to swear, but a messing-with-your-head moment. And it’s exhausting, you know, and it gives you forehead wrinkles and makes you just sort of want to bang your forehead onto the nearest wall. And yet, you also can’t opt out and be like, well, it’s just too hard, it’s a wicked problem, there’s no solution, there’s nothing to be done, you know, throw up your hands – because the problem is, if we don’t do anything, like, literally people are dying; literally, climates are becoming uninhabitable; we’re going to have massive climate migration, and that’s going to cause all sorts of problems, water scarcity, we can have wars over this – like, we have to do something. So you have to still act on a wicked problem, all while knowing that it’s not going to be solved in a binary sense of, like, zero, one, black and white, I can point to it and measure it. And for people who like metrics, that’s a real pain, because they’re like, I want to know what good looks like and I want to know how we’ll know when we get there, you know, what’s the percentage, what’s the number? And you kind of have to be like, well, with a wicked problem, you might never solve it, or you won’t solve it just once. Because again, with something like climate change, or pandemics, these are things you’re probably gonna have to solve again, and again, and again, because it’s dynamic. And it’s constant, you know – we’re always going to be managing our relationship with the climate, with the environment. So, you know, we can pick a certain temperature, or a certain percentage of landmass that, you know, has trees or whatever, and, like, come up with a little metric for our metric-oriented friends. But that’s still not very meaningful. So it’s more that when you think of a wicked problem, like facial recognition technologies – like, we need to be able to identify people in certain situations, and we want that, right. Like, we want to be able to catch criminals, and we want to be able to catch terrorists when they’ve managed to pull off a terrible act of harm to people. But at the same time, do you want to turn your society into a sort of permanent dragnet? Do we not care about privacy? If so, like, when do we care about privacy? And when are we okay with maybe sacrificing that for the greater good, and who decides that?
It’s really problematic if you live in London, as I do, and your police force, which is using this technology, has admitted themselves to being institutionally misogynist, homophobic, racist, right? And then we’re gonna give them a technology that doesn’t work very well on people with certain skin tones. It doesn’t identify people of a certain age as well. It’s got all sorts of problems. So you’re kind of like, hmm? Facial recognition is covered a bit under the EU AI Act. But even then there are so many loopholes. And the thing is, if you cite national security, it usually gets waved through, because no one wants to be the person who said, I said we couldn’t use this technology, and then something bad happened. Right? So you err on the side of, like, the precautionary principle. The default is, let them use it. We must trust them. Except what do you do if your police force has given you quite ample evidence not to trust them? Or companies. You know, this is not to bash on the police, by the way – companies are some of the worst offenders in this area. So that’s what I mean about it being a wicked problem: it’s out there. It’s installed. It’s all over the UK, it’s definitely all over the US as well. And we don’t really have a good framework for it.
Brian Tarran
But then this is where we loop back around to the culture question, right – creating a culture of technology ethics? You know, we can’t just put a checklist in place once, do it, tick it off – yeah, we’ve done that for facial recognition technology, we’re good to go. Because there are always new potential use cases for it, new applications, new integrations with different systems, so we always need to be thinking, every time, is this the right thing to do? Or–
Stephanie Hare
I mean, I’m still a big fan of checklists. So it’s not that I’m anti-checklist. And I’m not saying that you said that, by the way – I’m more thinking aloud. Checklists can still be useful, right? Like, whenever I’m in a really bad mood, I’m like, okay, hang on, have I slept? Do I just need a glass of water? Am I hungry? You know, what I think is anger may just be that I skipped lunch. You can kind of go through those things, and anybody who’s had to troubleshoot why a baby or a small child is unhappy will also have a checklist and know what’s on it – ah, they missed naptime, where’s their bear? That sort of thing. Companies need checklists, because you’re trying to get loads of people singing from the same hymn sheet – I get that. But what I wanted to get away from was this idea that, like, one person or one team gets this checklist and maybe does that once a year, once a quarter, pick your cadence – and everybody else gets a pass. Ethics doesn’t work that way, because again, ethics is, I think, kind of in the wicked problem scenario of, like, how do we decide what our values are and how to live them? And how do we know, where do we draw the line? And then how do you decide if you’ve gone over the line or not? And all of that – who decides who decides? Those are really complex questions that mean that really you can’t abdicate. In a company’s case, it’s the CEO, it’s the board, all the way on down – it has to be baked into every single employee’s, and also investors’, mindset. And I was thinking about it in terms of cybersecurity: I once had a colleague who gave me an analogy that I think is helpful, so I’ll share it for what it’s worth. When you go on to, say, an oil rig in the North Sea – a highly dangerous environment – you might be the most junior person there, and you’re there for your very first day of work. But if you spot something on that rig that is a health and safety risk, you have to speak up. You’re not going to go, like, oh, my boss might say something, whatever, because, like, everybody’s life on that rig is depending on everyone having that culture of ‘careful, that’s not okay’. And that really put me in mind where it was like, oh, wow, we’re going to have to inculcate an entire new mindset. And we think about technology ethics a lot because of technology being the word, and we think it must mean hardware or software – it’s always about coding, and it’s often guys in hoodies coding. But my preferred method of hacking is culture. Right. So, again, if we tried to just solve everything through regulation and laws, that takes, you know – if you look at the average time it takes to pass a law and then for regulators to enforce it – ages; we’re talking years, like it’s too late. These technologies will have moved on. Ditto, calls for international treaties. Do it, by all means. Have a look at how long it takes most international treaties to get passed and then ratified – and then, P.S., what happens when people break them? Really, right. So, like, they’re important, they’re necessary, but they’re insufficient.
You can act a lot faster if you can get people preventing stuff from being built in the first place, and that means you need to have a culture of people working in technology, both within the organisations – whether that’s research labs, government, companies, universities, whatever – and on the outside – journalists, academics, thinkers, etc, just the public, an informed public – who can see something and do what I just described on the oil rig, like, sound the alarm and go, Wait a minute, hang on. That’s not okay. That is, that to me feels faster. And I’m way more into prevention than cure, for all sorts of reasons. So I think like, yes, to laws and regulations, yes, to treaties; this will be faster. And I think it will be more resilient.
Brian Tarran
Yeah, I agree. And I have to say, wrapping up, that I think Technology is Not Neutral is a great place to start to inculcate that mind shift, that mindset change. So, Stephanie, thank you very much for joining us today.
Stephanie Hare
Thank you for having me.
Brian Tarran
You said you’re working on a new book. Have you got a timeline for that? Or a title?
Stephanie Hare
No. I am the slowest thinker and writer – I’m like the opposite of move fast and break things, I’m like move slowly and think it over maybe several times. So I’m just getting started out. I’ll sort of go five years. It’s gonna be a history book. Alright. So this is different – I’m having to take my classes in French and German right now to get kind of match fit in those languages again, and then, you know, I’ll be off and writing. But, yeah, I hope to have another book out, you know, in five years.
Brian Tarran
Well, if the year it took me to read Technology is Not Neutral is any indication, in three, four or five years time it will still be relevant. So–
Stephanie Hare
That’s the thing with history, it always stands the test of time.
Brian Tarran
Well, thank you, thank you again for joining us on Real World Data Science. It’s been a pleasure talking to you.
Copyright and licence
© 2023 Royal Statistical Society
This interview is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) International licence. Photo of Stephanie Hare is not covered by this licence. Photo is by Mitzi de Margary, supplied by Stephanie Hare and used with permission.
How to cite
Tarran, Brian. 2023. “‘I’m way more into prevention than cure’: Stephanie Hare on why we need a culture of technology ethics.” Real World Data Science, April 28, 2023. URL