Got Bias in Your AI Bots? Due Diligence Can Root it Out. (AI Ethics, Part 3)

Roland Alston, Appian
October 4, 2018

Joanna Bryson, AI expert and computer scientist at the University of Bath in the United Kingdom.

(This is the final episode of our three-part series on artificial intelligence, featuring computer scientist Joanna Bryson (@j2bryson), ranked by Cognilytica as a Top 50 AI Influencer to Follow on Twitter.) Read Part 2 here.

In the previous two episodes, Bryson cut through the smoke and mirrors around resistance to AI regulation, and broke down the importance of due diligence in the software development process.

In this final installment, Bryson revisits the challenge of rooting out AI bias, explains why machines won't take over the world, and reveals the secret to AI success.

Hope you enjoy the conversation.

Appian: Earlier you said that you don't think a company on a mission to "move fast and break things" will be able to prove they did due diligence when bad things happen with their software. What's the takeaway for business and IT leaders?

Bryson: No matter what industry you're in, you should be able to show that you followed very careful methods of software construction, including when you use machine learning.

If you run a bank, and you have an accounting problem, you don't look at the synapses in the brain of your accountant. You look at: "Were the accounts done properly?"

And I think the same thing can be done with machine learning: Here's the data that I used. Here's how I trained it. Here's how I used best practices. That's how you reduce risk and protect yourself.
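In practice, that paper trail can be as simple as a machine-readable record written at training time. The sketch below is illustrative only, not a specific standard or Bryson's own tooling; the `write_training_record` helper and its field names are hypothetical.

```python
import hashlib
import json
import platform
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_training_record(dataset_path: str, params: dict, out_path: str) -> None:
    """Persist a provenance record alongside a trained model:
    what data went in, how it was trained, and on what platform."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_path": dataset_path,
        "dataset_sha256": sha256_of_file(dataset_path),
        "hyperparameters": params,
        "python_version": platform.python_version(),
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)

# Example: log what went into a training run before it starts.
# write_training_record("train.csv", {"lr": 0.01, "epochs": 20}, "audit.json")
```

A record like this is what turns "trust us" into "here's the data, here's how we trained it" when an auditor or regulator asks.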

We Don't Need New Regulations for AI

Appian: Which brings us back to the question of AI regulation.

Bryson: If we can get through to companies that this is the expectation, that this is just normal life... In the UK, we don't think that we need any new legislation. We just need to help people understand how to apply existing legislation to the software industry.

https://twitter.com/j2bryson/status/1045693197396135936

Appian: That said, do you think the European approach of GDPR (General Data Protection Regulation) is a good model for us to follow?

Bryson: Nothing is perfect, but I think the GDPR is leading the way in AI policy. You can always improve it. But if we're not going to improve it, we should just adopt it (laughter).

It's funny, because the Europeans all worry about "why don't we have companies like Facebook, why don't we have Google, why don't we have big companies like that in Europe?" ... The fact that there are these incredibly powerful companies in the U.S. doesn't necessarily mean that they're doing something right.

The question is, do you want that much power, knowledge, and money piled up in just one large company?

And I think that's a big reason Microsoft is taking ethics and AI seriously. They've already been through the antitrust drama. They are very aware of the potential liabilities of AI. And they're looking to be the adult in the room on the governance question, and maybe they are the adult in the room.

Digital Leaders Prioritize AI Governance

Appian: So, some of the big tech companies are making the right moves on AI and governance?

Bryson: I wouldn't have said that five years ago. But the moves they (Microsoft and Google) have been making in the last 18 months have been interesting.

Appian: Let's move on to another hot topic: algorithmic bias. Is it reality, or is it just hype?

Bryson: Machine learning picks up the same biases that, when humans express them, psychologists call implicit biases.

Appian: Can you give an example of that?

Bryson: Relatively speaking, women's names are more closely associated with domestic terms, and men's names are more closely associated with career-oriented terms. So, that's the implicit association test, which psychologists have done.

So that's a really good example of how, if you're training AI by machine learning, you're going to wind up with the same prejudices that we already have. But that's only one of three ways that you can get biases into AI.
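The effect Bryson describes can be measured in word embeddings as a difference in cosine similarity between a name and two sets of attribute terms. The toy sketch below illustrates the idea with made-up 3-dimensional vectors standing in for real pretrained embeddings (word2vec, GloVe); the vectors and the `association` helper are purely illustrative.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word_vec, attr_a, attr_b) -> float:
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive values mean the word sits closer to set A."""
    return (np.mean([cosine(word_vec, v) for v in attr_a])
            - np.mean([cosine(word_vec, v) for v in attr_b]))

# Toy vectors; a real test would load pretrained embeddings instead.
emb = {
    "amy":    np.array([0.9, 0.1, 0.2]),
    "john":   np.array([0.1, 0.9, 0.2]),
    "home":   np.array([0.8, 0.2, 0.1]),   # "domestic" attribute terms
    "family": np.array([0.9, 0.2, 0.3]),
    "career": np.array([0.2, 0.8, 0.1]),   # "career" attribute terms
    "salary": np.array([0.1, 0.9, 0.3]),
}

domestic = [emb["home"], emb["family"]]
career = [emb["career"], emb["salary"]]
print(association(emb["amy"], domestic, career))   # > 0: closer to domestic terms
print(association(emb["john"], domestic, career))  # < 0: closer to career terms
```

Because the bias is a measurable quantity in the trained model, it can be detected before the model ever makes a decision about a person.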

https://twitter.com/j2bryson/status/1044332545893109761

When AI Breaks Bad

Appian: How much should we worry about bias in AI?

Bryson: I'm very worried about it. One of my favorite stories about AI bias has to do with soap dispensers that won't give you soap if you don't have a certain skin tone. [News reports say the infrared sensors weren't designed to detect darker skin tones.] So, people in South Asia would end up using toilet paper to get soap out of the dispenser.

In other words, none of the people who tested these dispensers were Asian. They were all incredibly Caucasian (laughter).

That's just insane. But these are the easiest kinds of biases to fix. And that's actually one of the good things about AI.

With human implicit biases, it's harder to tell what's behind them. But with AI, as with accidents involving self-driving vehicles, you can go out and look at the data logs, and see what the AI was perceiving, and figure out why the AI did what it did.

Appian: Are you concerned about situations where bias is deliberately built into AI?

Bryson: ...This is the one kind of AI bias that I think people are missing: you can deliberately build bias into your process. It's not about the algorithm being evil. It's someone who walks through the door and says: "I just got elected, I want to cut taxes, and the people who didn't vote for me, let's take money away from them."

Flawed Algorithm Cuts Disability Benefits

There was a bizarre case in Idaho where the state built an algorithm for allocating disability benefits. But the formula caused disability benefits to suddenly drop for many people, in some cases by as much as 42%. When beneficiaries complained, the state declined to disclose the formula, claiming that it was IP (intellectual property).

Appian: So, what happened?

Bryson: The ACLU (American Civil Liberties Union) took up the case, prevailed on due process grounds, and forced the state to reveal its formula for allocating benefits. And it was a mess. The same thing is happening with recidivism programs.

Some judges are using AI software that predicts the likelihood that a person will re-offend. The software behind these recidivism programs performs worse than anything any academic can build. We can't figure out how they are doing such a bad job of prediction.

So, then you have to say, what are they doing wrong? Are they really that bad at programming? Or maybe the biased programming is intentional.

This is why I said before that the most important thing [with AI] is accountability. The only way that we can determine if a human has introduced a bias, and that it's not simply a mistake, is through accountability and logging.
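One concrete form that logging discipline can take is an append-only audit trail recording what the system perceived and what it decided, so the reasoning can be reconstructed after the fact. The sketch below is hypothetical; the field names and the benefits-allocation example are illustrative, not taken from any real system.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log: one JSON line per automated decision.
audit = logging.getLogger("decision_audit")
audit.addHandler(logging.FileHandler("decisions.jsonl"))
audit.setLevel(logging.INFO)

def log_decision(model_version: str, features: dict, score: float, decision: str):
    """Record everything needed to reconstruct a decision after the fact."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model made the call
        "features": features,             # what the system "perceived"
        "score": score,                   # raw model output
        "decision": decision,             # action taken on that output
    }))

# Hypothetical example: a benefits system logging each determination.
log_decision("v2.3.1", {"hours_of_care": 40, "region": "ID"}, 0.37, "reduce_benefit")
```

With a log like this, the question "was this a mistake or a deliberate choice?" becomes answerable, which is exactly the accountability Bryson is arguing for.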

https://twitter.com/j2bryson/status/1043612582844792832

AI Won't Take Over the World

Appian: Let's switch gears for a minute. So, we've seen tremendous progress with intelligent automation. We've reached the point where machines are using sophisticated algorithms to mimic human behavior. But does that make them intelligent?

Bryson: To me, a thermostat is intelligent. If you want to define intelligence as being human, what is the purpose of that? There are lots of different ways to be intelligent. But I think what people really, really care about is moral agency and moral patiency.

Appian: Moral agency? Moral patiency? What does that mean?

Bryson: Moral agency is about who or what is responsible for the actions an agent takes. Moral patiency is about who or what society is responsible for.

Now that we can have that conversation, the two things that people care about most are:

    • Is AI going to be like us?

    • Do we have to worry about it taking over the world?

Appian: So, is an AI apocalypse something we should be worried about? Is AI going to take over the world?

Bryson: I don't believe any one machine can take over the world. The world is a pretty big place. Cooperatively though, humanity is doing a very good job of taking over the entire ecosystem.

We are the ones that are changing society by using AI. So, how should we regulate that? How should we change the laws to protect people, now that we have big data and we know all these things about them?

Fragmentation to Get Worse with Explosion of AI

Appian: In spite of the progress you mentioned, skeptics are still throwing shade on AI, claiming that it's all hype and science fiction fantasy. Some critics say that all we're talking about is advanced machine learning and augmented intelligence, not true AI. What do you make of that argument?

Bryson: If all you're saying is that we don't have AI that looks exactly like a person, we're not going to get that. Nothing you do with silicon and wires will have as much phenomenological consciousness as a rat does. And we poison rats.

One of the problems we're having with the evolution of AI is fragmentation. Think about how different our communities would be if everyone came out and talked to each other. The fragmentation problem came about because of communications technology. And the problem is going to get worse with the rise of AI.

https://twitter.com/UniofBath/status/1042093599041757185

Appian: Let's wrap up our conversation with a few more questions... In all of the conversations you've had with business and public policy leaders, what's the biggest misconception about AI?

Bryson: There are several things. One goes back to a point I made earlier in our conversation: the fear that you'll lose the magic if you regulate AI.

No, you can regulate AI. And you can regulate it on performance.

Another misconception is that you'll lose IP or innovation if you regulate AI.

But medicine is heavily regulated, and it has 10X the IP of the tech industry. A lot of the resistance to regulation comes from people being unwilling to change. They don't realize that regulation can actually help them.

So, when I talk to really big companies, the main thing I want to communicate is the importance of accountability and getting on top of their software development process. And machine learning is just another tool in the toolbox.

Which means you need to be doing your systems engineering more carefully. You need to know where your libraries came from.

Whether you're talking about software libraries that you're linking to, or data libraries that you're training from, you need to know where they came from, and who has access to them.
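A minimal version of that provenance check might pin the digests of the data files a model trains from and snapshot the exact library versions a run linked against. The sketch below is illustrative: the manifest digest is a placeholder, and `verify_data` / `snapshot_dependencies` are hypothetical helpers, not any particular tool.

```python
import hashlib
from importlib import metadata

# Known-good digests for the data files we train from.
# The digest below is a placeholder, not a real value.
DATA_MANIFEST = {
    "train.csv": "<sha256-of-the-vetted-copy>",
}

def verify_data(manifest: dict) -> None:
    """Refuse to proceed if any input file's digest has drifted
    from the vetted manifest, i.e. its provenance is unknown."""
    for path, expected in manifest.items():
        actual = hashlib.sha256(open(path, "rb").read()).hexdigest()
        if actual != expected:
            raise RuntimeError(f"{path}: digest mismatch; data provenance unknown")

def snapshot_dependencies() -> dict:
    """Record exactly which installed libraries this run linked against."""
    return {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}

# verify_data(DATA_MANIFEST)
# print(snapshot_dependencies())
```

Knowing where your libraries and data came from is then a lookup, not an archaeology project.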

Time to Figure Out How to Integrate AI into Our Lives

Appian: Finally, what are your expectations for AI in 2019 and beyond?

Bryson: I think it's important to understand that AI is everywhere. And the biggest challenges that we're facing right now are the political, economic and social consequences of how it affects us.

We made this huge leap between 2007 and 2017 in AI capabilities, because we had more data and we got better at machine learning.

In the long term, I think that this will accelerate our rate of progress.

But in the short term, I think the rate of acceleration is going to slow down somewhat.

So, now is the best time to figure out how to integrate AI into our lives.