
AI Expert Joanna Bryson Dishes on Due Diligence and Rooting Out AI Bias, Part 1

Roland Alston, Appian
February 18, 2021

There's a lot of hype out there about artificial intelligence (AI) and how it's revolutionizing this or transforming that. But in this timely remix of a previously published post, AI expert Joanna Bryson (@j2bryson) calls out AI hyperbole and helps us cut through the smoke and mirrors on:

    • Rooting out bias in AI.

    • Prioritizing due diligence in software development.

    • Regulating AI without taking the magic out of AI innovation.

Note: Bryson is Professor of Ethics and Technology at the Hertie School of Governance in Berlin, where she educates future technologists and policymakers on digital inclusion and AI governance. She's a leading voice in the movement to improve the governance and ethics of digital technology. And she was recently recognized as a top digital influencer by the European Digital Development Alliance.

If you scroll through tech trend news, you can see that AI continues to accelerate from fringe to mainstream at remarkable speed. Big tech is betting everything on it. Government agencies are pouring billions into it. And calls to prioritize ethical considerations in deploying AI are reverberating through C-suites and boards everywhere.

So, Where Do We Go From Here?

It seems like we're at a crossroads in what Harvard Business Review calls the age of deployed AI. Where we go from here, says Bryson, is more about the governance we choose to guide the evolution of AI than anything else. And so there's a growing sense of urgency among businesses, lawmakers, and technologists to do a better job of due diligence on use cases involving AI-based risk-scoring systems in areas such as criminal justice, social services, healthcare, and the like.

Accountability for AI ethics may ultimately rest with boards and C-level execs. But the big question is: How can we ethically scale the best AI use cases?

"What's important, says Bryson, is for us to understand how AI makes decisions, how it determines what we see and what we don't, and who's accountable when it breaks bad."

Check out the following remix of our 2018, but still timely, conversation with AI thought leader Joanna Bryson.

Appian: Are you an optimist or pessimist about the impact of AI on society?

Bryson: Some people call me an optimist. But others think I'm a complete technophobic pessimist.

Appian: When you think about the amazing evolution of AI, what worries you about the future? What keeps you up at night?

Bryson: A future where we cannot be ourselves, where our entire background is online and we can be penalized for it. I'm seriously worried about that right now. So, I'm very aware of the downside of the "surveillance state." And I don't really see a way to get around that. People are going to know who we are, what we do, what we've done, and what we're inclined to do.

Appian: Some experts say that AI reflects the values of our culture and that we can't get AI right if we're unwilling to get right with ourselves. What's the key to getting AI right?

Bryson:

"I think that what we have to do, and what's becoming the most important topic, is how do we manage our governance of AI. How do we coordinate our actions through governance to protect individuals."

Should Algorithms Be Regulated?

Appian: So, in the context of regulation and governance, what advice would you give business leaders about ethically developing AI?

Bryson: The most important thing I'm pushing right now with businesses and regulators is that we need more accountability with AI, and this is doable.

Appian: What about fears that regulating AI would stifle innovation? What do you make of that?

Bryson: There has been a lot of smoke and mirrors around AI. We recently had high-profile engineers going out there and saying that if you regulate us, you're going to lose deep learning, which is the magic juice of AI innovation. But now you have major tech companies saying that they do believe in regulation for AI.

I think it's important to recognize that we can track and document whether or not due diligence was followed in AI development. The process is no different than in any other industry. It's just that AI has been flying under the radar.

Appian: What's the big deal about transparency in AI? Why does it matter?

"Right now, a lot of people are releasing code, and they don't even know which [code] libraries they're linked to, where they got these libraries from, and whether they've been compromised or have back doors."

So, we just need to be more careful. It's like the (Morandi) bridge that collapsed in Italy in 2018. When you don't know how good the materials were that went into a construction project, or whether shortcuts were taken, then you can't really say how strong your bridge is.

Today, we have laws about that with respect to bridges and buildings. But if you go back a few centuries, anyone rich enough could construct a building anywhere they wanted to.

Now, if you want to put up a building, you've got to go before the planning commission. Your architects have to be licensed. And all that stuff happens because buildings really matter. If they aren't constructed well, they fall down.
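Bryson's point about knowing your libraries maps directly to everyday engineering practice. As a minimal, hypothetical sketch (the tooling choice, record format, and function names below are our illustration, not something Bryson prescribes), a team could inventory every library a Python deployment actually links against, with a fingerprint of the files on disk, so the questions "where did this come from?" and "has it changed?" can be answered later:

```python
# Sketch: inventory the libraries a Python deployment actually depends on.
# The record format and function names are illustrative assumptions.
import hashlib
import json
from importlib import metadata

def build_inventory():
    """Return name/version/content-hash records for each installed distribution."""
    records = []
    for dist in metadata.distributions():
        digest = hashlib.sha256()
        for f in sorted(dist.files or [], key=str):
            try:
                digest.update(f.read_binary())  # hash the actual on-disk contents
            except OSError:
                continue  # skip entries that are not regular readable files
        records.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            "sha256": digest.hexdigest(),
        })
    return records

if __name__ == "__main__":
    # Comparing today's output against a stored baseline flags any library
    # that was swapped out or tampered with since the last audit.
    print(json.dumps(build_inventory(), indent=2))
```

Like the inspection records for a bridge, the value is less in any one report than in having a documented baseline to check against.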

https://open.spotify.com/episode/29wGMEgijFq6cfLzgSmAF0?si=vETGXR2ZRGS_-61Rjw7X7A

Can You Stand Behind Your Software?

Appian: So, you think we need to get more serious about due diligence in software development?

Bryson: We should go through procedures to make sure that the innovations we make are sustainable. We should be able to prove that we can stand behind our software. And we should be held accountable for what our software does.

Appian: That sounds reasonable. But, practically speaking, how do you do that in the real world?

Bryson: So, I heard a really great story along these lines. Almost every car on the road has some level of AI in it. And one of the things that AI does is help you stay in your lane.

But a man in Germany had a stroke while driving. And it was the kind of stroke that left his hands on the wheel and his eyes open. The AI looks to see if you're falling asleep at the wheel. But in this case, the car thought the driver was okay. So, the AI maintained his lane and kept the car going straight. And it ended up in a horrible accident.

Prosecutors looked at the car company to find out what they had done wrong. But the company was able to show that they had followed best practices and convinced the prosecutors that there was no case to bring.

Appian: What's the takeaway for business leaders?

Bryson: No matter what industry you're in, you should be able to show that you followed very careful methods of software construction, including when you use machine learning.

If you run a bank, and you have an accounting problem, you don't look at the synapses in the brain of your accountant. You look at whether the accounts were done properly.

And I think the same thing can be done with machine learning: "Here's the data that I used. Here's how I trained it. Here's how I used best practices." That's how you reduce risk.
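That audit trail can be made concrete. As a minimal sketch (the manifest fields, file names, and hyperparameters below are hypothetical illustrations, not Bryson's prescription), a training run could write a provenance manifest alongside the model, fingerprinting the exact data used and recording how the model was trained:

```python
# Sketch: write an audit manifest next to a trained model.
# File names, fields, and hyperparameter values are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path):
    """Fingerprint the exact training data that was used ("here's the data")."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_path, hyperparams, out_path):
    """Record data, settings, and process checks ("here's how I trained it")."""
    manifest = {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "dataset": {"path": str(data_path), "sha256": sha256_of(data_path)},
        "hyperparameters": hyperparams,
        "process_checks": ["held-out evaluation", "bias audit"],  # best practices followed
    }
    Path(out_path).write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    write_manifest(Path("train.csv"),
                   {"epochs": 10, "learning_rate": 1e-3},
                   "model_manifest.json")
```

The particular fields matter less than the fact that the record exists: like a bank's accounts, the process can then be inspected after the fact.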

(PS: Watch this space for the final episode of this two-part conversation with leading AI expert Joanna Bryson. Meanwhile, for information about AI and low-code development, check out this report by Cognilytica Research.)