
Stephanie Hare on the 'wicked problem' of technology ethics

And why thinking about ethics earlier could help startups make more money.

Hello and welcome to Oversharing, a newsletter about the proverbial sharing economy. If you’re returning from last time, thanks! If you’re new, nice to have you! (Over)share the love and tell your friends to sign up here.

I’m back from the Micromobility Europe conference and an extra week off. Lots of good micromobility stuff to come this week, but in the meantime, I was delighted to speak with my friend Dr. Stephanie Hare about her new book, Technology Is Not Neutral: A Short Guide to Technology Ethics. Hare is a researcher and broadcaster with a background in technology and political risk. Her book surveys pressing issues like facial recognition technology and biometrics and examines the role of a technology ethicist in confronting some of these ‘wicked’ problems. We chatted about how she became interested in technology ethics, when startups should bake ethics into their business models, and the use of facial recognition technology to monitor low-wage workers in the gig economy.

Full interviews like this are typically reserved for paid subscribers, but from time to time I like to make them available to everyone. To read more interviews like this, as well as to gain exclusive access to comments, additional posts, and the community of Oversharing readers, upgrade your subscription.


The transcript of our conversation has been condensed and edited for clarity.


Oversharing: I loved your book. I learned a ton, especially now that I live in London, because so much of it was about the frightening state of surveillance in London. And so I thought we could talk about what got you interested in ethics and technology, and also why you chose to take a stance in the title of your book.

Stephanie Hare: Yes. The easy one is why I chose to take a stance in the title of the book, which was simply that it drives me nuts to sit in on conversations where you tackle a really tough topic and at the end the writer just goes, it's really complicated.

And you're like, yeah, I know, but some of us have to make decisions based on your insights. I want actual insights. What do I need to do? Do I build this tool or not? Do I invest in it or not? Should I use it or not? Or buy it or not as a consumer? So I wanted to push myself to get off the fence about the question of whether technology is neutral, and take a stand.

And I also just thought that if anyone disagreed with me, they would still have to state my argument in their review every time they critiqued the book, just by naming the title. So it was a bit of an immature writerly joke on my part, but it was also genuinely a way to force myself to say: if you had 30 seconds to make the argument of your book before you disappeared forever, this is it.

And then why did I get into it? That's more complicated. Mainly because I'm so old: I've worked in technology for a really long time, starting in 2000, and also worked around it, by which I mean working on political risk, sovereign debt crises, and cybersecurity issues. So I was approaching it from a really different perspective than many people who work in technology: securing data, yes, but also questions of national security and the like, which interest me a lot.

And obviously, being American, you can't help it if you've grown up during the dot-com era and then watched the rise of the big technology giants. That's part of a national story for us as Americans and for my generation. We watched those companies go from being little startups to being behemoths that everybody loves to criticize. So that story that was happening as a sort of national story and a sector story was also happening as my career was developing.

So I watched a lot of that and it interested me. And what I can report from that experience is that we never talked about technology ethics at all when I started out. It just was not on the radar. It wasn't in any of our training. I went to the University of Illinois Urbana-Champaign, which has a superb computer engineering and computer science program. And I'm sure it was being discussed in the science and technology studies courses or the digital humanities courses, but I never had to take those. And none of my friends did who all were computer science graduates. I was the sort of freakish person studying French, but I hung out with lots of engineers.

Yeah, a lot of our education's very siloed.

Hare: Exactly. You have to study math and science to quite an advanced level in the U.S. even if you are doing a humanities degree, unlike here in the United Kingdom, where if you wanted to study French at uni, we wouldn't be having this conversation because you would never study it. But I did, and it wasn't there. And it wasn't at any of the companies I worked in, including all the way up until my last employer that I was working for before I went independent, which was in 2017 to 2018.

This was why I left and decided to go out on my own. I think part of the problem when you are working on technology ethics in a company, or as a civil servant if you're doing this on behalf of your government, is that you are either signing NDAs, so you're not legally allowed to discuss the stuff you're seeing, or you've got a civil service code that, again, forbids you from discussing it. You'd be taking a massive risk; you'd have to blow the whistle, effectively. By being an independent researcher and broadcaster, I was able to do a lot more of the work I wanted to do: raising public awareness and helping to build the public's knowledge of things that were happening. And that's what the book sort of came out of.

I think this is a good opportunity for you to introduce yourself properly because I called you a very smart person, which is true, but you do have a proper title.

Hare: I'm an independent researcher and broadcaster, and now I'm the author of this book: Technology Is Not Neutral: A Short Guide to Technology Ethics. The research side is something I've been doing for a really long time. I trained as an academic. I did my master's and PhD here at the London School of Economics, and I was a fellow at St. Antony's College, Oxford, after that. And then in parallel, when I wasn't doing a degree or teaching or doing my fieldwork, I was having this career in technology, or doing political risk analysis, and then back in technology again. So I've worked mainly for American companies, but always in Europe, which has led to a different perspective.

I would say that Western Europe is the region I am most expert in, along with the United States, obviously, but always looking at how those two regions interact between themselves and then with the rest of the world. And that's the thing about technology: it's such a global force that there will be no shortage of problems to work on. So as you noted, the book has a lot of British case studies. I really wanted to highlight that because, again, I sometimes feel that we hear the same stories over and over again. It's very easy to focus on stuff in the United States. If somebody wants to get really exotic, they might mention China and the U.S. tech cold war, but it's often parsed through a U.S. lens.

And I'm really interested in what's happening here in Europe with the European Union, and specifically here in the United Kingdom now that Brexit has happened. And what we can learn from the UK, which is part of the Five Eyes intelligence-sharing partnership. It's part of NATO, it used to be part of the EU. It's got its own Commonwealth thing going on. It might lose Scotland again if Scotland votes for independence. There's just a lot happening here. And then what's really interesting is that it's a liberal democracy, but as you noted, it's one of the most heavily surveilled countries in the world. So that's weird. That felt like it had to be a topic in its own right.

In the book, you talk about the concept of a ‘wicked problem’ and why tech ethics is a wicked problem. Could you briefly describe that?

Hare: There are linear, easy, straightforward problems, like 1 + 1, or how do I make brownies? I can get some ingredients and follow a set of steps, and 40 minutes later I have brownies. We have solved the brownie problem.

By contrast, wicked problems would be something like the pandemic, or loss of biodiversity and climate change. They have many causes, so you can't isolate one cause and tackle that. And even in the act of trying to solve the problems of the pandemic or climate change, you will end up creating even more problems. They're problems that can't be solved by traditional, conventional, linear methods. And they're never really solved, only made better. That doesn't translate well to binary thinking, zeros and ones, which is what we think of with technology. Wicked problems are going to be more like: is the pandemic ebbing, is it becoming endemic, are we able to live with it? We might have high cases, but we have fewer deaths and hospitalizations. And then we have to look at long covid. So it's, how do we get to where eventually having covid is like having a common cold? That would be the goal, that's how you would define solving it. Eradication is very rare in medicine.

So it's a bit like that, really. Technology ethics is like, well, what is the problem we're even trying to solve? Whose ethics, who defines what is ethical in technology? Maybe China thinks what it's doing in terms of surveilling its population and locking people up in camps in the west of the country is totally fine by its ethical system.

Well, and I think part of the problem, as you get to in the book, is that there isn't really one person or one organization or one system set up to deal with these questions.

Hare: Exactly. So who's your ethical authority? Even if you step away from technology and just go, who's your authority on ethics in a given society: Is it God, is it Congress, your democratically elected representatives? Is it your parents or teachers, or is it the individual or is it the collective?

People have been reckoning with ethics for thousands of years. Then merge technology with it and it's like, oh my God, it's a whole other wicked problem: technology already poses all sorts of problems requiring solutions, and ethics is the same. Merge them together into a Venn diagram and you're going to have a migraine, but you're also going to have a really rewarding conversation and career, because these problems are never going away.

In the book, you talk about how ethics or lack thereof get created through the venture capital ecosystem. You break down the stages these products and services go through from idea to creation, and you make the good point that at each stage, there's a chance for various people and stakeholders to weigh in and give feedback on ethical concerns.

You write that technology ethics “should be on the risk radar of anyone who funds the creation and deployment of technology.” But I think what we often see in practice is that ethics either aren't a priority, especially for early-stage startups which are hustling and trying to scale and grow, or when they do come up, they're often in conflict with the chance to make a lot of money.

Hare: That's the interesting thing: where does ethics enter the system between idea, execution, and rollout of a product or service? So first of all, it could be baked in at any point. And I think the argument would be that you'd want to bring it in as early as you can. It'll save you headaches in the long run if you can use ethical thinking as a way of mitigating risk. But second, it could actually become a selling point. If you are able to do technology ethics early and bake that into your strategy, how you're talking to investors, how you're hiring people, and how you tell your story to the market, there is such an appetite for that, because I think people have really matured and grown up. I would say really since 2016, when we had the Edward Snowden revelations, as well as the Cambridge Analytica Facebook story, The Age of Surveillance Capitalism, the book by Shoshana Zuboff, and many, many more, where people suddenly understood: my God, I am someone who generates data, there's a data trail all about me. There's a shadow profile about me and other people are making money from it or weaponizing it against me or against my society.

So anyone who can meet that pain point and take the pain away and be like, let me offer you some reassurance, or, you will always be in control of your data, you have rights over your data. That becomes really sexy. And we saw that when Apple, of course, changed its default settings on the App Store. Suddenly most people were choosing privacy when given the choice. Very few people were choosing to be surveilled by our friends at Facebook and Google. So it just showed you that when people were saying people don't care about privacy, it wasn't necessarily true; it was just that they hadn't had the chance to exercise that choice.

We make privacy very hard to access, from terms and conditions that you have to accept that are too long for anyone to plausibly read, to GDPR rules here. Everyone knows you go to a website and you get the pop-up saying, do you want the trackers, but it's so annoying. It's a question of, is it the right solution to the problem, because it's just not feasible even for someone who is concerned about privacy to always be going through and disabling all the trackers every time.

Hare: I mean, who has the time or the energy? The consent model is broken. We should just have certain things that are not allowed. And I think what's really frustrating for a lot of people who work in technology ethics is that it isn't a question of, here are these problems, what should we do about them? Loads of people have named the problems and what needs to be done. And then it falls down, because it's kind of like, well, who is going to take out the third-party data broker model, which everyone agrees is an absolute dumpster fire for all sorts of reasons? I'm not saying anything particularly revolutionary here; even Mark Zuckerberg and Tim Cook have said we need to get rid of third-party data brokers. But who's the 'we'? Is that Congress and the European Parliament? Why are they not doing it?

And obviously, protections for kids, which again is a huge market in terms of technology ethics. If you can actually build something that lets kids be online safely, and that reassures parents and teachers, you're going to laugh all the way to the bank. But right now nobody trusts anyone who's doing anything with kids, because we constantly hear about abuses.

So I think it's that. It's understanding that the earlier you can bring this kind of thinking into your product or service development, the more of an advantage it's going to be for you. It will only help you. It's never a case of, hm, we can either do ethics or make more money. Doing ethics could be part of making more money.

One place we’ve seen surveillance and facial recognition crop up is in the monitoring of low-wage workers. In your book, you give the example of Uber storing face biometrics on drivers and asking them to take selfies at the start of their work shifts to verify their identities. Amazon uses a similar selfie-based login for delivery workers, as well as more intrusive monitoring tactics like AI-powered cameras in its delivery vehicles. There was a report earlier this year that facial recognition firm Clearview AI was marketing its products specifically to gig companies.

What should we make of these aggressive and invasive methods of monitoring low-wage workers, especially when these are people in the gig economy who are purportedly ‘independent’ in their work life?

Hare: It drives me insane. I think we have to completely rethink workers' rights and trade unions, which seem to be very silent on a lot of this. And being an independent worker, not having union protections, what does that mean as well? Our lawmakers don't seem at all aware of this and the unfairness of it. And if anything, I would love to see more monitoring of people in government and in positions of quite senior power, CEOs, senior executives, and the like, because when they abuse their power and privileges, they can take out entire industries. That's just me talking from having lived through the financial crisis, or from watching the way VW's engineers were gaming their emissions tests. That's stuff that can have life or death consequences for people, to say nothing of fiduciary and criminal liability.

To simplify it even further, a question I had after reading that section of your book was: Does it matter if you can set your own hours if your data privacy, your face, your biometrics are no longer your own?

Hare: Arguably no, and you're right to raise that point. I sometimes feel like the way that companies use technology is to pick off workers group by group. So again, if you're in a company or a professional situation where this isn't affecting you yet, you probably don't care. And then if it is affecting you, maybe you're having to work so hard and hustle right now that standing up for your privacy, human rights, and civil liberties in the workplace is unfortunately No. 2,000 on your list of things to do today. So it's the whole question of whose job it is to be looking after this and thinking about it.

On this question of independence and who's being surveilled: it's not equal-opportunity surveillance. Police use it on the rest of society. Employers are using it on some workers, increasingly more. Obviously what you're doing at home or in your car can be surveilled in a different way than if you're on the physical premises of an office or a shop or a hospital. So it's that whole question of which spaces are safe from surveillance and which are not. And it's very, very blurry, and no one seems to be really looking at it yet. So I think it's a massive future piece of work for regulators, lawmakers, and the like.

It goes back again to what you were saying about choice and false choice. Because if the option is provide a selfie, work, and get paid; or not provide a selfie, not work and not get paid—that's not really a choice, right?

Hare: No, it's not a choice at all. And leaving it to the good people at the ACLU in the U.S., or Liberty and Big Brother Watch here in the United Kingdom, to fight those good fights just makes you wonder again: what are our elected lawmakers working on? Because this is affecting millions of people; it's not some niche area of interest. It's important, it's only going to grow, and we don't seem to have clarity about it. So do workers even know what their rights are? This technology is often presented as a voluntary thing first, where you can choose to do this, or you can use old-school options to prove your ID or clock in or whatever. And at a certain point a line gets crossed and it's no longer voluntary, it's mandatory. And now it's not a choice anymore.

I want to talk a bit more about Clearview AI, which has reportedly harvested more than 20 billion facial images worldwide. Its methods are controversial. It's unclear whether it's always had permission—a lot of people think it hasn't—to take these images and put them in its database. The UK data privacy regulator, the ICO, recently fined the company £7.5 million for illegally storing facial images, which on the one hand seems good, but on the other is a lot less than the £17 million fine the ICO had initially threatened.

In response to the ICO, Clearview’s CEO said: “I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and intentions… I am disheartened by the misinterpretation of Clearview AI's technology to society.” I think this hits squarely at the problem of technology ethics. How else are we supposed to interpret Clearview’s ‘intentions’?

Hare: Well, it's also weird because the way that Clearview has built up its database of facial images is by scraping the internet. So it's taken probably the face of everybody who's listening, because I’m going to guess that most of us are online in at least some ways. So from LinkedIn to Twitter to Facebook to any online photo albums you have. And you might not have put those photos up yourself, you might just be in someone else's photo. Tagged or untagged, it will find you.

This is the controversial bit.

Hare: Exactly. They've done that without your knowledge, without your consent. You're not getting a piece of that sweet action in terms of money. You probably didn't even know about it. And then suppose you're uncomfortable, how do you go to Clearview AI and say, 'I'd like to find out if I'm in your database, and if I am, I would like to be removed'? At which point it's actually irrelevant, because they've built the tool; it's too late. You're locking the stable door after the horse has bolted. So even if they were to remove some of our data at this point, at someone's request, the tool can still be used. And it is indeed being used, largely by U.S. law enforcement at the federal, state, and municipal levels. Clearview has also given its tool for free to Ukraine, which has been using it to identify dead and living Russian soldiers, and then posting pictures of those dead soldiers onto social media so that Russian families would find out about it. They say that's to establish ground truth and fight misinformation. But the fact of the matter is Clearview AI has taken everybody's faces, and its tool is now being used as a weapon of war and law enforcement. And if you didn't consent to that, or if you have a problem with that in any way, shape, or form, the answer is kind of, well, too bad.

So lots of regulators around the world—the UK just issued its fine notice, but many others have too: Canada, Australia, lots of them in the European Union—have said we're not only fining you, but you have to delete all of the data. So again, great. But it's still being used in the U.S., where it has been challenged quite a lot, and now it's only supposed to be used by law enforcement; it was being used by private companies. I happen to come from the state of Illinois, which has, and I'm delighted to share this good news with you and fellow Illinoisans, the strongest biometric protections of anywhere in the nation.

Wow. Amazing.

Hare: The Biometric Information Privacy Act was a groundbreaking piece of legislation and it makes companies like Facebook and Google shake in their boots. It's empowered the ACLU to file lawsuits and Clearview AI just sort of ran away from the state of Illinois because it's just not worth it.

If BIPA is so great, why haven't we seen other forward-thinking states copy that model?

Hare: A couple have been doing a little bit of work on biometric protections. Texas is certainly one of them, and there are a few others that will cover the private sector only, but not law enforcement; or children only, but not adults, that sort of thing. So again, because it's the United States, it's a massive hodgepodge, a melting pot of different protections or lack thereof.

Honestly, I think a lot of people were just asleep at the wheel. If you look back to 2006, we were only a few years after 9/11, and biometrics technology was being used by the U.S. military and other NATO powers in Iraq and Afghanistan to great effect, from the U.S. and NATO perspective. And then we went into the financial crisis in 2008. It's been basically a dumpster fire ever since. We've had a pandemic, we're in another war, we've had crazy presidential elections and the like. People were really worried about data online and not thinking about body data: face, voice, fingerprints, all of that. And most people's experience with it is just Face ID or a fingerprint to unlock their phone. They don't realize that they can be surveilled in real time.

What happens then is people say: nothing to hide, nothing to fear. Most people put all their stuff online anyway, or your phone is tracking you 24/7 so we don't have privacy anyhow. I've heard all of these arguments. So I think that is why a lot of people just didn't get it. And I think they will start to get it more and more. Clearview AI has really freaked a lot of people out because its investor deck was leaked to the Washington Post in February, saying that by the end of this year it would be able to identify every single person on earth. To go back to the original point and sit with this CEO's remarks, where he just feels really misunderstood: well, his story changes all the time. Sometimes he wants to help the military and the police fight crime and keep the world safe. Other times he wants to revolutionize private sector work. I think he'll sell to whoever he can, which is totally his right. But the point is he's built his tool off the back of other people's data. And it's their body data. So this is more than just, do you like coffee versus tea, on Facebook.

You say in your book, you can leave your phone at home, but you can't leave your face.

Hare: Also, it's not just about you; it even gets to your DNA, which starts to be about your family. So the world of biometrics technology is fascinating. It's one of my research passions, but it absolutely has to be put in the same category as AI warfare technology or anything to do with genetics and genomic engineering. This is experimental, emerging technology that could have very serious impacts on people. No one is thinking this through, except obviously the good people of Illinois, whose lawmakers, shockingly, drafted an amazing law which I think everybody else is now looking at.

How much of the problem with regulating these issues is that our elected officials—or the people with the power to deal with this—just don’t understand the tech enough? We've all seen the footage of Congressional debates where you have lawmakers who don't know what Twitter is. So how do you expect them to understand fast-changing facial recognition technology and big data and all these things?

Hare: I am available if our friends in Congress would like somebody to come in and do a tutorial. And indeed, as you know from the book, there are several scholars in the U.S. who have explained how facial recognition works. The ignorance excuse I think was more valid around 2016, when people just didn't seem to get it. Now what's encouraging is it really is on the front page of the New York Times and the Washington Post and on your television, podcast, and radio. And it's in increasing numbers of books. Lawmakers really should know. I'll be presenting to an all-party parliamentary group on the 4th of July here in the United Kingdom, and it's open to the public. I will be talking about this to UK lawmakers, who, to be fair, have had many people discuss this with them before.

So it's not an ignorance thing. It's a priorities thing. And also, as we just discussed with the Clearview decision here in the UK, the regulator could have fined Clearview the full amount of £17 million. Instead it gave the company a nearly £10 million discount and only fined it £7.5 million. And it's like, why? The regulators should be hitting these companies with the full force of the law. Why are they pulling their punches? And they do this all the time. And then these companies price that in. They're happy to be fined.

Well, it's just a cost of doing business, right? We see this with antitrust also. When you make money at the rate that a Google or an Amazon does, it doesn't matter how big the fine is. You make the fine up in a couple of days and then it just facilitates your business.

Hare: Somebody needs to write something beautiful on how our regulation models are from the 1950s, or maybe the 1980s tops. They don't seem fit for purpose, and certainly not for technology; they were designed for companies and banks. They don't make any sense for this type of technology and for these companies we're seeing that are global money-minting factories. And they're happy to pay those fines. In which case, why wouldn't you, if you were them? You would take data and build whatever the hell you want with it and just pay the fine and kind of be like, who cares?

I don't know how the regulators feel about this, and I don't know how the lawmakers feel about this. And I think the public is just kind of exhausted and frankly has so many other things on its plate that there's an exhaustion factor; nobody really knows where to start to solve this problem. But I think it's about looking at: how do you have a regulator that's fit for purpose? How do you have laws that are fit for purpose? And how do you do enforcement? You can have all the great laws you want. GDPR is a great example of that. It's not enforced enough. So that then creates bad faith among people.

To bring it back to where we started, what role could a technology ethicist have on a systematic level to start to address some of these issues?

Hare: So there's internal and external. I think if you are a technology ethicist working within a company, or likewise in government if you're working in the civil service (there are a lot of technology ethicists working in government now, which is very exciting), or advising lawmakers, advising regulators, that sort of thing, you can actually make a huge difference.

And you might not even call yourself a technology ethicist, by the way; you might be a data scientist or a software developer or a strategist. You won't necessarily be working only in the legal and compliance teams of these places. You could be the person who is scoping out problems and deciding what we are going to work on and how we are going to do it, what's in scope and out of scope, and all of that. As a journalist, you have an incredibly important role. If you are a reporter or an editor, you're deciding: are we going to focus on biometrics and alert the public or not? Are we going to help people understand how to protect themselves and make better consumer choices, or really hold certain companies' feet to the fire, or not?

That's massive because it's almost like public health: you have to build people's base of knowledge first so that they are even receptive to the arguments and choices that will then be put in front of them. But if you don't build that knowledge base, they don't even know where to get started. Unless you're working in the sector, you don't know, because none of us can know everything. So I reckon you cannot help but make a difference as a technology ethicist. Don't worry about that part. Just pick a problem and go at it hammer and tongs. There are so many problems to solve. Just go.
