Artificial intelligence

We need to design distrust into AI systems to make them safer

Ayanna Howard, an acclaimed roboticist and educator, says our excessive faith in automated systems can lead us into dangerous situations.

May 13, 2021
Photo of Ayanna Howard, courtesy of ACM

Ayanna Howard has always sought to use robots and AI to help people. Over her nearly 30-year career, she has built countless robots: for exploring Mars, for cleaning hazardous waste, and for assisting children with special needs. In the process, she’s developed an impressive array of techniques in robotic manipulation, autonomous navigation, and computer vision. And she’s led the field in studying a common mistake humans make: we place too much trust in automated systems.

On May 12, the Association for Computing Machinery granted Howard this year’s Athena Lecturer Award, which recognizes women who have made fundamental contributions in computer science. The organization honored not only Howard’s impressive list of scientific accomplishments but also her passion and commitment to giving back to her community. For as long as she has been a celebrated technologist, she has also created and led many programs designed to increase the participation and retention of young women and underrepresented minorities in the field.

In March, after 16 years as a professor at the Georgia Institute of Technology, she began a new position as dean of the college of engineering at Ohio State University. She is the first woman to hold the position. On the day she received the ACM award, I spoke to Howard about her career and her latest research.

The following has been edited for length and clarity.

I’ve noticed that you use the term “humanized intelligence” to describe your research, instead of “artificial intelligence.” Why is that?

Yeah, I started using that in a paper in 2004. I was thinking about why we work on intelligence for robotics and AI systems. It isn’t that we want to create these intelligent features outside of our interactions with people. We are motivated by the human experience, the human data, the human inputs. “Artificial intelligence” implies that it’s a different type of intelligence, whereas “humanized intelligence” says it’s intelligent but motivated by the human construct. And that means when we create these systems, we’re also ensuring that they have some of our societal values.

How did you get into this work?

It was primarily motivated by my PhD research. At that time, I was working on training a robot manipulator to remove hazards in a hospital. This was back in the days when you didn’t have those nice safe places to put needles. Needles were put into the same trash as everything else, and there were cases where hospital workers got sick. So I was thinking about: How do you design robots for helping in that environment?

So very early on, it was about building robots that are useful for people. And it was acknowledging that we didn’t know how to build robots to do some of these tasks very well. But people do them all the time, so let’s mimic how people do it. That’s how it started.

Then I was working with NASA and trying to think about future Mars rover navigation. And again, it was like: Scientists can do this really, really well. So I would have scientists tele-operate these rovers and look at what they were seeing on the cameras of these rovers, then try to correlate how they drive based on that. That was always the theme: Why don’t I just go to the human experts, code up what they’re doing in an algorithm, and then get the robot to understand it?

Were other people thinking and talking about AI and robotics in this human-centered way back then? Or were you a weird outlier?

Oh, I was a total weird outlier. I looked at things differently than everyone else. And back then there was no guide for how to do this kind of research. In fact, when I look back now at how I did the research, I would totally do it differently. There’s all this experience and knowledge that has since come out in the field.

At what point did you shift from thinking about building robots that help humans to thinking more about the relationship between robots and humans?

It was largely motivated by this study we did on emergency evacuation and robot trust. What we wanted to see was when humans are in a high-risk, time-critical situation, will they trust the guidance of a robot? So we brought people into an abandoned office building off campus, and they were let in by a tour guide robot. We made up a story about the robot and how they had to take a survey—that kind of thing. While they were in there, we filled the building with smoke and set off the fire alarm.

So we wanted to see, as they navigated out, would they head to the front door, would they head to the exit sign, or would they follow the guidance of the robot leading them in a different direction?

We thought people would head to the front door because that was the way they came in, and prior research has said that when people are in an emergency situation, they tend to go where they’re familiar. Or we thought they would follow the exit signs, because that’s a trained behavior. But the participants did not do this. They actually followed the guidance of the robot.

Then we introduced some mistakes. We had the robot break down, we had it go in circles, we had it lead participants in a direction that required them to move furniture. We thought at some point people would say, “Let me go to the front door, or let me go to the exit sign right there.” It took until the very end before people stopped following the guidance of the robot.

That was the first time that our hypotheses were totally wrong. It was like, I can’t believe people are trusting the system like this. This is interesting and fascinating, and it’s a problem.

Since that experiment, have you seen this phenomenon replicated in the real world?

Every time I see a Tesla accident. Especially the earlier ones. I was like, “Yep, there it is.” People are trusting these systems too much. And I remember after the very first one, what did they do? They were like, now you’re required to hold the steering wheel for something like five-second increments. If you don’t have your hand on the wheel, the system will deactivate.

But, you know, they never came and talked to me or my group, because that’s not going to work. And why that doesn’t work is because it’s very easy to game the system. If you’re looking at your cell phone and then you hear the beep, you just put your hand up, right? It’s subconscious. You’re still not paying attention. And it’s because you think the system’s okay and that you can still do whatever it was you were doing—reading a book, watching TV, or looking at your phone. So it doesn’t work because they did not increase the level of risk or uncertainty, or disbelief, or mistrust. They didn’t increase that enough for someone to re-engage.

It’s interesting that you’re talking about how, in these kinds of scenarios, you have to actively design distrust into the system to make it safer.

Yes, that’s what you have to do. We’re actually trying an experiment right now around the idea of denial of service. We don’t have results yet, and we’re wrestling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why sometimes you may not want to give AI the ability to deny a service either. How do you remove service if someone really needs it?

But here’s an example with the Tesla distrust thing. Denial of service would be: I create a profile of your trust, which I can do based on how many times you deactivated or disengaged from holding the wheel. Given those profiles of disengagement, I can then model at what point you are fully in this trust state. We have done this, not with Tesla data, but our own data. And at a certain point, the next time you come into the car, you’d get a denial of service. You do not have access to the system for X time period.

It’s almost like when you punish a teenager by taking away their phone. You know that teenagers will not do whatever it is that you didn’t want them to do if you link it to their communication modality.
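What follows is a minimal, hypothetical sketch of the kind of trust profiling and denial of service Howard describes. None of this comes from Tesla or from Howard’s group; the disengagement metric, the thresholds, and the lockout rule are invented purely for illustration.

```python
from dataclasses import dataclass, field
from time import time


@dataclass
class TrustProfile:
    """Toy record of how often a driver disengages (hands off the wheel
    until the system nags). All thresholds here are invented."""
    disengagement_times: list = field(default_factory=list)

    def record_disengagement(self) -> None:
        # Log the moment the driver stopped paying attention.
        self.disengagement_times.append(time())

    def disengagements_per_hour(self, window_s: float = 3600.0) -> float:
        # Simple overtrust metric: recent disengagements per hour.
        now = time()
        recent = [t for t in self.disengagement_times if now - t <= window_s]
        return len(recent) / (window_s / 3600.0)


def allow_driver_assist(profile: TrustProfile, max_per_hour: float = 5.0) -> bool:
    """Denial of service: refuse to enable the assist feature for this
    trip if the profile suggests the driver is overtrusting it."""
    return profile.disengagements_per_hour() <= max_per_hour


# Usage: six hands-off warnings in the last hour -> feature locked out
# for the next trip.
profile = TrustProfile()
for _ in range(6):
    profile.record_disengagement()
print(allow_driver_assist(profile))  # False
```

The point of the sketch is the loop Howard describes: repeated overtrusting behavior feeds a profile, and the profile feeds back into whether the automation is offered at all.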

What are some other mechanisms that you’ve explored to enhance distrust in systems?

The other methodology we’ve explored is roughly called explainable AI, where the system provides an explanation with respect to some of its risks or uncertainties. Because all of these systems have uncertainty—none of them are 100%. And a system knows when it’s uncertain. So it could provide that as information in a way a human can understand, so people will change their behavior.

As an example, say I’m a self-driving car, and I have all my map information, and I know certain intersections are more accident prone than others. As we get close to one of them, I would say, “We’re approaching an intersection where 10 people died last year.” You explain it in a way where it makes someone go, “Oh, wait, maybe I should be more aware.”
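Below is a minimal, hypothetical sketch of that kind of risk-aware explanation. The intersection data, the threshold, and the function names are invented; it only illustrates turning map-level risk statistics into a plain-language prompt that nudges the human to re-engage.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Intersection:
    name: str
    fatalities_last_year: int


# Invented map data, for illustration only.
ACCIDENT_PRONE = {
    "5th & Main": Intersection("5th & Main", 10),
    "Oak & 12th": Intersection("Oak & 12th", 1),
}


def risk_explanation(upcoming: str, threshold: int = 3) -> Optional[str]:
    """Return a plain-language warning if the upcoming intersection is
    riskier than the threshold; otherwise say nothing."""
    info = ACCIDENT_PRONE.get(upcoming)
    if info and info.fatalities_last_year >= threshold:
        return (f"We're approaching {info.name}, where "
                f"{info.fatalities_last_year} people died last year. "
                "Please stay alert.")
    return None


print(risk_explanation("5th & Main"))
```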

We’ve already talked about some of your concerns around our tendency to overtrust these systems. What are others? On the flip side, are there also benefits?

The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system has differences between women versus men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it.

The positives are that automated systems are better than people in general. I think they can be even better, but I personally would rather interact with an AI system in some situations than with certain humans in others. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you have a novice person. It’s a better outcome. It just might be that the outcome isn’t equal.

In addition to your robotics and AI research, you’ve been a huge proponent of increasing diversity in the field throughout your career. You started a program to mentor at-risk junior high girls 20 years ago, which is well before many people were thinking about this issue. Why is that important to you, and why is it also important for the field?

It’s important to me because I can identify times in my life where someone basically provided me access to engineering and computer science. I didn’t even know it was a thing. And that’s really why later on, I never had a problem with knowing that I could do it. And so I always felt that it was just my responsibility to do the same thing for others that someone had done for me. As I got older as well, I noticed that there were a lot of people that didn’t look like me in the room. So I realized: Wait, there’s definitely a problem here, because people just don’t have the role models, they don’t have access, they don’t even know this is a thing.

And why it’s important to the field is because everyone has a difference of experience. Just like I’d been thinking about human-robot interaction before it was even a thing. It wasn’t because I was brilliant. It was because I looked at the problem in a different way. And when I’m talking to someone who has a different viewpoint, it’s like, “Oh, let’s try to combine and figure out the best of both worlds.”

Airbags kill more women and kids. Why is that? Well, I’m going to say that it’s because someone wasn’t in the room to say, “Hey, why don’t we test this on women in the front seat?” There’s a bunch of problems that have killed or been hazardous to certain groups of people. And I would claim that if you go back, it’s because you didn’t have enough people who could say “Hey, have you thought about this?” because they’re talking from their own experience and from their environment and their community.

How do you hope AI and robotics research will evolve over time? What is your vision for the field?

If you think about coding and programming, pretty much everyone can do it. There are so many organizations now like Code.org. The resources and tools are there. I would love to have a conversation with a student one day where I ask, “Do you know about AI and machine learning?” and they say, “Dr. H, I’ve been doing that since the third grade!” I want to be shocked like that, because that would be wonderful. Of course, then I’d have to think about what is my next job, but that’s a whole other story.

But I think when you have the tools with coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solution. That would be my dream.
