
Will AI Make Humans Obsolete? Not in the Short Run (Part 2)

Roland Alston, Appian
February 7, 2019

(This is the final episode of our two-part series on artificial intelligence (AI) with AI expert and Duke University Computer Science Professor Vincent Conitzer (@conitzer). Read Part 1 here.)

The math is staggering: By 2020, there will be more than 50 billion connected devices on the internet.

But the bigger takeaway is that AI is already embedded in more than a billion of them.

Just ask Siri and Alexa.

Which brings us to the final installment of our conversation with AI expert and Duke University Computer Science Professor Vincent Conitzer.

Conitzer agrees that we are on the cusp of an astonishing revolution in intelligent automation. But he also argues that being human still has its advantages.

Should we worry about AI taking over the planet? This may be an important question in the long run. But Conitzer says that today's AI is still far too limited for that.

On the other hand, he highlights controversial trends that are on the horizon right now: autonomous warfare, technological unemployment, and bias in algorithms. He warns against hasty regulation of AI, and debunks some popular misconceptions about AI as well.

Hope you enjoy the conversation.

Appian: There's a lively debate going on around AI and ethics. What's your take on the ethics question? What do you see as the biggest ethical challenges facing AI?

Conitzer: One of the things I see in AI these days is that as we deploy AI in real-life settings, the objectives we pursue with it really start to matter. In the past, this wasn't a big concern, because AI was still in the laboratory... Take the example of reinforcement learning. This is a subtopic of AI where the system learns how to take actions to optimize an objective.

For example, there's a problem where you have a cart moving along a track, and a pole standing on the cart that's connected to a hinge. So, there's a risk of the pole falling over one way or the other. What the system is supposed to do is move the cart back and forth in such a way that the pole stays upright.

Appian: That's not an easy problem to solve.

Conitzer: No, it's not. And it's a good benchmark for algorithms. Specifying the objective is quite easy: that the pole doesn't fall over.

But let's face it: there aren't many people in the world who have to balance poles on carts. As we move (laboratory) objectives into the real world, though, they start to matter.
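For readers who want to see the benchmark concretely, here is a minimal sketch of the cart-pole problem Conitzer describes. The open-source Gymnasium library and the naive hand-coded policy are illustrative choices on our part, not something from the interview; the point is how simple the objective is to state (+1 for every step the pole stays up).

```python
# Minimal sketch of the cart-pole benchmark (illustrative, using Gymnasium).
# The objective is easy to specify: +1 reward for every step the pole stays upright.
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    # Naive hand-coded policy: push the cart in the direction the pole is leaning.
    pole_angle = observation[2]          # observation = [cart pos, cart vel, pole angle, pole ang. vel]
    action = 1 if pole_angle > 0 else 0  # 1 = push right, 0 = push left
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Pole stayed upright for {total_reward:.0f} steps")
env.close()
```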

Going to school on machine learning

Appian: You've also talked about something called supervised classification. What exactly is that, and how does it relate to AI?

Conitzer: That's a typical problem in machine learning. This is where we have lots of input data. For some of this data, we also have a label. For example, we may have lots of pictures, and some are labeled with who's in them. And we may want to give this data to an AI system and train it to determine, on its own, who is in each picture.
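As a concrete illustration of supervised classification, here is a minimal sketch: train on labeled examples, then predict labels for inputs the model has never seen. The scikit-learn library and its bundled digits data set stand in for the labeled photos Conitzer mentions; both are illustrative assumptions, not part of the interview.

```python
# Supervised classification sketch: learn from labeled data, predict labels for unseen data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)            # flattened images plus their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                     # learn only from the labeled training portion

print("Accuracy on unseen images:", model.score(X_test, y_test))
```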

Appian: Is that the same process that's used in speech recognition systems?

Conitzer: Yes. So, you might have a speech recognition system that you try to train on a data set of speech recordings. The goal is to be able to transcribe speech that wasn't in the data set. And you hope that the AI will learn how to do that in the laboratory.

Appian: So, how do you know when it's ready for the real world?

Conitzer: Well, one way is to track the percentage of decisions the system is getting right. But when you deploy a system like this in the world, there can be other issues.

Maybe you do really well on the majority dialect in the population. But you do poorly on a minority dialect. So, you could end up placing some people at a severe disadvantage.

And this is the kind of thing that really does happen. I remember when my kids were small, they tried to speak to Siri on my wife's iPhone. And it was amazing how badly it misinterpreted what they were asking. There probably weren't many children in the data set, so Siri didn't do well with my kids. In that case it wasn't a big deal. But you can think of other cases where this would be a serious problem.
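To make the dialect problem concrete, here is a small sketch of how an aggregate accuracy number can hide a gap between groups. The transcripts and dialect labels below are invented for illustration; in practice these records would come from a labeled evaluation set tagged with each speaker's dialect.

```python
# Sketch: overall accuracy can look fine while one dialect group does poorly.
from collections import defaultdict

results = [
    # (dialect, reference transcript, system transcript) -- hypothetical examples
    ("majority", "turn on the lights", "turn on the lights"),
    ("majority", "set a timer",        "set a timer"),
    ("minority", "turn on the lights", "turn of the light"),
    ("minority", "set a timer",        "set the time"),
]

correct = defaultdict(int)
total = defaultdict(int)
for dialect, reference, hypothesis in results:
    total[dialect] += 1
    correct[dialect] += int(reference == hypothesis)

overall = sum(correct.values()) / sum(total.values())
print(f"Overall accuracy: {overall:.0%}")
for dialect in total:
    print(f"  {dialect}: {correct[dialect] / total[dialect]:.0%}")
```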

https://twitter.com/AIESConf/status/1092172899845312512

Appian: Can you give some examples of that?

Conitzer: For example, tech companies like Google and Facebook want people to sign up for accounts, but they also want to detect accounts that don't correspond to real people. So, they ask for a person's name and other information to authenticate the account, and they use that to figure out which accounts aren't genuine.

So one of the factors you take into account when trying to determine if a person is real is their name. And it turns out that many Native American names share features with accounts that tend not to be real.

Appian: Can you give an example of that?

Conitzer: The names may have more words in them, which could be indicative of a fake account. So, Native American users' accounts were being classified as not real significantly more often than other people's accounts.

Simple objectives don't always produce good outcomes

Appian: So, the AI didn't spot the misclassification?

Conitzer: No, the AI system didn't understand any of the broader context of this. So, it unfairly disadvantaged an entire community of consumers.

So this is the kind of thing that we've got to be careful with, because simple objectives don't always give you good outcomes.

Appian: So, how do we guard against that kind of unintentional bias? How do we prevent it?

Conitzer: It's a difficult question. It's even hard to determine what we mean by a system that doesn't have any bias. There are different definitions that aren't always equivalent. So, there's that problem in the first place. But sometimes it's just obvious that something has gone wrong. I don't think there's a single method to eliminate AI bias.

I think it's good to have a diverse group of people involved in the creation of the software and inspecting the software.

The software should be tested for bias before it's deployed, so it doesn't create some of the bad outcomes we've been talking about.
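One hedged sketch of what "testing for bias before it's deployed" might look like in practice: compare how often the system flags accounts across groups and surface any large disparity for human review. The data, group names, and 1.25x threshold below are hypothetical; real audits use richer criteria and statistical tests.

```python
# Pre-deployment bias check sketch: compare flag rates across groups.
def flag_rate(predictions):
    return sum(predictions) / len(predictions)

# 1 = flagged as "not a real account", 0 = accepted, grouped by (hypothetical) community
flags_by_group = {
    "group_a": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
    "group_b": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
}

rates = {group: flag_rate(preds) for group, preds in flags_by_group.items()}
baseline = min(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline if baseline else float("inf")
    status = "OK" if ratio <= 1.25 else "REVIEW: disparate flag rate"
    print(f"{group}: flagged {rate:.0%} of accounts ({status})")
```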

It's essential to understand what you're regulating

Appian: On the topic of ethics, transparency and accountability, where do you come down on the debate around regulation and AI? Should it be regulated? Or will regulation stifle innovation in the evolution of AI?

Conitzer: I'm not opposed to regulation. I think you have to look at it on a case-by-case basis. When you use AI to predict which ads somebody sees, that's different from using AI to decide whether or not somebody gets out on bail, or to recommend sentences for people convicted of crimes.

When you regulate AI, you should have a good understanding of what the systems actually do, and what you're trying to achieve with the regulation.

The problem with hasty regulation, where people don't understand what they're regulating and why, is that it's not likely to be beneficial.

But there are cases where regulation is appropriate. When you're talking about law enforcement, and you're trying to decide whether somebody gets out on bail, these AI systems should be transparent and accountable.

Appian: Speaking of accountability, there's a strong debate going on around AI and accountability. What do you make of that debate?

Conitzer: Think about self-driving cars. When an accident happens, who is responsible? That can be a difficult question. We have clear traffic rules, and we know what's expected. When AI systems take over, and something bad happens, it's sometimes more difficult to decide exactly what went wrong and where to assign responsibility.

Is the original programmer at fault, or perhaps it's the fault of the person who provided the data that the system was trained on? These are tricky questions that people in the legal world are thinking about very hard.

The biggest myths about AI

Appian: You've heard the hype around AI. Of all the things that you've read about, what's the biggest misconception about the capabilities of AI?

Conitzer: I think, on the one hand, there's been tremendous advancement in the evolution of AI. So, the fact that progress has been made isn't a misconception. That said, I think people tend to extrapolate a little too far. One of the tricky things in AI has always been that the things we perceive as really requiring intelligence aren't always the things that are hard for AI systems to do.

Appian: Can you give us some examples of this AI myth?

Conitzer: Before AI, in the early days of computer science, we may have thought that the game of chess was reflective of the highest form of human intelligence. We assumed that someone who plays chess well is really an intelligent person. But later on, we found out that playing chess might be easier than playing soccer. There are people working on AI soccer. You should look at the work they're doing. It can be very entertaining.

The point is that our own concept of what makes human intelligence distinctive has changed as a result of AI research.

Sometimes this frustrates AI researchers, because when they solve a problem that's deemed to be a benchmark for AI, the goal posts simply get moved.

Appian: So our own concept of what is unique about being human is always shifting, as a result of AI research.

Fear and loathing of general AI: Is it justified?

Conitzer: Yes, but will it always be that way? I don't know. There are people who are genuinely concerned about AI becoming broadly more intelligent than humans. Not just on narrow tasks, but AI that is equally flexible and as broad in understanding as humans. There are all kinds of disaster scenarios that can unfold from that.

Appian: What do you make of that fear of general AI?

Conitzer: It's been difficult for the AI community to approach that question.

The AI community used to make bold predictions. But many of them didn't come true, because solving the problems was harder than people thought.

So, the community has pulled back from making bold predictions.

Appian: But people outside of the community have started to raise those concerns.

Conitzer: Yes, but the issue of timing is important.

Some concerns are really on the horizon right now, like autonomous weapons, technological unemployment, and bias in algorithms.

These things are happening right now. And we need to be concerned about them. But AI taking over the world? That's more futuristic.

Appian: So, AI taking over the world is not something we need to worry about today?

Conitzer: Today's algorithms can't achieve that. It's not crazy to think about those things. So, I'm supportive of the people who do. But it's important to keep in mind that we're talking about different time scales and different levels of uncertainty.

Beyond pattern recognition: What comes next?

Appian: Speaking of the future, as you think about 2019 and beyond, what do you expect to see in terms of AI trends, especially as it relates to ethics and accountability?

Conitzer: Near term, we'll see a lot of successes from machine learning and pattern-recognition techniques. I think we'll start to see them deployed in the real world in many different places. As that happens, it will also generate new problems that people didn't anticipate.

Appian: Can you give us an example?

Conitzer: Yes, one will be that just recognizing patterns may not be enough. We're already seeing this in self-driving cars.

The AI systems in our cars don't just detect patterns, but they're also capable of taking corrective action, based on what they perceive. We're going to see much more of this kind of AI in the near term.

Appian: And how does ethics fit into that narrative?

Conitzer: Generally, these systems will require you to specify some high-level objective to pursue. Any C-level executive knows that the objective you give someone to pursue should be specified in the right way, or you won't get the results that you expect.

The same is true of AI systems: If you specify the wrong objective, you may be surprised by the outcome that you get.
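As a toy illustration of that last point, here is a small sketch in which a system faithfully maximizes the proxy objective it was given, while the option you actually wanted scores best on the true value you care about. The options and numbers are invented for illustration.

```python
# Toy sketch: the system optimizes the objective you wrote down, not the one you meant.
# "proxy_score" is what the system is told to maximize; "true_value" is what you actually care about.
options = {
    "option_a": {"proxy_score": 0.90, "true_value": 0.40},  # gamed the metric
    "option_b": {"proxy_score": 0.70, "true_value": 0.85},
    "option_c": {"proxy_score": 0.60, "true_value": 0.80},
}

best_by_proxy = max(options, key=lambda k: options[k]["proxy_score"])
best_by_value = max(options, key=lambda k: options[k]["true_value"])

print("System chooses:", best_by_proxy)  # option_a
print("You wanted:    ", best_by_value)  # option_b
```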