
FLI Podcast: Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Published
January 25, 2019

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.

Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings—most Americans, for example, don’t trust Facebook—were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University's political science department and a research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.

In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:

  • Demographic differences in perceptions of AI
  • Discrepancies between expert and public opinions
  • Public trust (or lack thereof) in AI developers
  • The effect of information on public perceptions of scientific issues


Transcript

Ariel: Hi there. I'm Ariel Conn with the Future of Life Institute. Today, I am doing a special podcast, which I hope will be just the first in a continuing series, in which I talk to researchers about the work that they've just published. Last week, a report came out called Artificial Intelligence: American Attitudes and Trends, which is a survey that looks at what Americans think about AI. I was very excited when the lead author of this report agreed to come join me and talk about her work on it, and I am actually now going to just pass this over to her, and let her introduce herself, and just explain a little bit about what this report is and what prompted the research.

Baobao: My name is Baobao Zhang. I'm a PhD candidate in Yale University's political science department, and I'm also a research affiliate with the Center for the Governance of AI at the University of Oxford. We conducted a survey of 2,000 American adults in June 2018 to look at what Americans think about artificial intelligence. We did so because we believe that AI will impact all aspects of society, and therefore the public is a key stakeholder. We feel that we should study what Americans think about this technology that will impact them. In this survey, we covered a lot of ground. In the past, surveys about AI have tended to have a very specific focus, for instance on automation and the future of work. What we tried to do here is cover a wide range of topics, including the future of work, but also lethal autonomous weapons, how AI might impact privacy, and trust in various actors to develop AI.

So one of the things we found is Americans believe that AI is a technology that should be carefully managed. In fact, 82% of Americans feel this way. Overall, Americans express mixed support for developing AI. 41% somewhat support or strongly support the development of AI, while there's a smaller minority, 22%, that somewhat or strongly opposes it. And in terms of the AI governance challenges that we asked—we asked about 13 of them—Americans think all of them are quite important, although they prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake news online, preventing AI cyber attacks, and protecting data privacy.

Ariel: Can you talk a little bit about the difference between concerns about AI governance and concerns about AI development, more on the research side?

Baobao: In terms of the support for developing AI, we saw that as a general question in terms of support—we didn't get into the specifics of what developing AI might look like. But in terms of the governance challenges, we gave quite detailed, concrete examples of governance challenges, and these tend to be more specific.

Ariel: Would it be fair to say that this report looks specifically at governance challenges as opposed to development?

Baobao: It's a bit of both. I think we ask both about the R&D side, for instance we ask about support for developing AI and which actors the public trusts to develop AI. On the other hand, we also ask about the governance challenges. Among the 13 AI governance challenges that we presented to respondents, Americans tend to think all of them are quite important.

Ariel: What were some of the results that you expected, that were consistent with what you went into this survey thinking people thought, and what were some of the results that surprised you?

Baobao: Some of the results that surprised us concern how soon the public thinks high-level machine intelligence will be developed. We find that they think it will happen a lot sooner than experts predict, although some past research suggests similar results. What didn't surprise me, in terms of the AI governance challenge question, is how concerned people are about data privacy and digital manipulation. I think these topics have been in the news a lot recently, given all the stories about hacking or digital manipulation on Facebook.

Ariel: So going back real quick to your point about respondents expecting high-level AI to happen sooner: how soon do they expect it?

Baobao: In our survey, we asked respondents about high-level machine intelligence, and we defined it as when machines are able to perform almost all tasks that are economically relevant today better than the median human today at each task. My co-author, Allan Dafoe, and some of my other team members, we've done a survey asking AI researchers—this was back in 2016—a similar question, and there we had a different definition of high-level machine intelligence that required a higher bar, so to speak. So that might have caused some difference. We're trying to ask this question again to AI researchers this year. We're doing continuing research, so hopefully the results will be more comparable. Even so, I think the difference is quite large.

I guess one more caveat, which we note in a footnote: in a pilot survey of the American public, we did use the same definition we gave AI experts in 2016, and we also found that the public thinks high-level machine intelligence will happen sooner than experts predict. So the difference might not just be driven by the definition itself; the public and experts have different assessments. But to answer your question, the median respondent in our American public sample predicts that there's a 54% probability of high-level machine intelligence being developed within the next 10 years, which is quite a high probability.

Ariel: I'm hesitant to ask this, because I don't know if it's a very fair question, but do you have thoughts on why the general public thinks that high-level AI will happen sooner? Do you think it's just a case of people referencing different definitions, or do you think that they're perceiving the technology differently?

Baobao: I think that's a good question, and we're doing more research to investigate these results and to probe at it. One thing is that the public might have a different perception of what AI is compared to experts. In future surveys, we definitely want to investigate that. Another potential explanation is that the public lacks understanding of what goes into AI R&D.

Ariel: Have there been surveys that are as comprehensive as this in the past?

Baobao: I'm hesitant to say that there are surveys that are as comprehensive as this. We certainly relied on a lot of past survey research when building our surveys. The Eurobarometer had a couple of good surveys on AI in the past, but I think we cover both sort of the long-term and the short-term AI governance challenges, and that's something that this survey really does well.

Ariel: Okay. The reason I ask is that I wonder how much people's perceptions or misperceptions of how fast AI is advancing would be influenced by the significant advancements of just the last couple of years, which I don't think were nearly as prominent when previous surveys were presented to people.

Baobao: Yes, that certainly makes sense. One part of our survey tries to track responses over time, so I was able to dig up some surveys going all the way back to the 1980s that were conducted by the National Science Foundation on the question of automation—whether automation will create more jobs or eliminate more jobs. And we find that compared with the historical data, the percentage of people who think that automation will create more jobs than it eliminates—that percentage has decreased, so this result could be driven by people reading in the news about all these advances in AI and thinking, "Oh, AI is getting really good these days at doing tasks normally done by humans," but again, you would need much more data to sort of track these historical trends. So we hope to do that. We just recently received a grant from the Ethics and Governance of AI Fund, to continue this research in the future, so hopefully we will have a lot more data, and then we can really map out these historical trends.

Ariel: Okay. We've looked at those 13 governance challenges that you mentioned. I want to ask the same two-part question more broadly: looking at the survey in its entirety, what results were most expected and what results were most surprising?

Baobao: In terms of the AI governance challenge question, I think we had expected some of the results. We'd done some pilot surveys in the past, so we were able to have a little bit of a forecast, in terms of the governance challenges that people prioritize, such as data privacy, cyber attacks, surveillance, and digital manipulation. These were also things that respondents in the pilot surveys had prioritized. I think some of the governance challenges that people still think of as important, but don't view as likely to impact large numbers of people in the next 10 years, such as critical AI systems failure—these questions are sort of harder to ask in some ways. I know that AI experts think about it a lot more than, say, the general public.

Another thing that surprised me is how important people think value alignment is—which is a fairly abstract concept—and that they also see it as likely to impact large numbers of people within the next 10 years. It's up there with the safety of autonomous vehicles or biased hiring algorithms, so that was somewhat surprising.

Ariel: That is interesting. So if you're asking people about value alignment, were respondents already familiar with the concept, or was this something that was explained to them and they just had time to consider it as they were looking at the survey?

Baobao: We explained to them what it meant, and we said that it means to make sure that AI systems are safe, trustworthy, and aligned with human values. Then we gave a brief paragraph definition. We think that maybe people haven't heard of this term before, or it could be quite abstract, so therefore we gave a definition.

Ariel: I would be surprised if it was a commonly known term. Then looking more broadly at the survey as a whole, you looked at lots of different demographics. You asked other questions too, about things like global risks and the potential for global risks, about perceptions of AI in general and whether it is good or bad, about whether advanced AI would be good or bad, and things like that. So looking at the whole survey, what surprised you the most? Was it still answers within the governance challenges, or did anything else jump out at you as unexpected?

Baobao: Another thing that jumped out at me is that respondents who have computer science or engineering degrees tend to think that the AI governance challenges are less important, across the board, than people who don't have those degrees. People with computer science or engineering degrees are also more supportive of developing AI. That result is not totally unexpected, but in the news there is a sense that the people concerned about AI safety, or AI governance challenges, tend to be those with a technical computer background. In reality, what we see is that people who don't have a tech background are concerned about AI. For instance, women, those with low levels of education, or those who are low-income tend to be the least supportive of developing AI. That's something that we want to investigate in the future.

Ariel: There's an interesting graph in here where you're showing the extent to which the various groups consider an issue to be important, and as you said, people with computer science or engineering degrees typically don't consider a lot of these issues very important. I'm going to list the issues real quickly. There's data privacy, cyber attacks, autonomous weapons, surveillance, autonomous vehicles, value alignment, hiring bias, criminal justice bias, digital manipulation, US-China arms race, disease diagnosis, technological unemployment, and critical AI systems failure. So as you pointed out, the people with the CS and engineering degrees just don't seem to consider those issues nearly as important, but you also have a category here of people with computer science or programming experience, and they have very different results. They do seem to be more concerned. Now, I'm sort of curious what the difference was between someone who has experience with computer science and someone who has a degree in computer science.

Baobao: I don't have a very good explanation for the difference between the two, except to say that having computer science or programming experience is a lower bar, so there are more people in the sample who meet it—735 of them, compared to 195 people who have computer science or engineering undergraduate or graduate degrees. Going forward, in future surveys, we want to probe at this a bit more. We might look at what industries various people are working in, or how much experience they have either using AI or developing AI.

Ariel: And then I'm also sort of curious—I know you still have more work that you want to do—about what you know now about how American perspectives are different from or similar to those of people in other countries.

Baobao: The most direct comparison that we can make is with respondents in the EU, because we have a lot of data based on the Eurobarometer surveys, and we find that Americans share similar concerns with Europeans about AI. So as I mentioned earlier, 82% of Americans think that AI is a technology that should be carefully managed, and that percentage is similar to what the EU respondents have expressed. Also, we find similar demographic trends, in that women and those with lower levels of income or education tend to be less supportive of developing AI.

Ariel: I went through this list, and one of the things that was on it is the potential for a US-China arms race. Can you talk a little bit about the results that you got from questions surrounding that? Do Americans seem to be concerned about a US-China arms race?

Baobao: One of the interesting findings from our survey is that Americans don't necessarily think the US or China is the best at AI R&D, which is surprising, given that these two countries are probably the best. That's a curious fact that I think we need to be cognizant of.

Ariel: I want to interject there, and then we can come back to my other questions, because I was really curious about that. Is that a case of the way you asked it—it was just, you know, "Is the US in the lead? Is China in the lead?"—as opposed to saying, "Do you think the US or China are in the lead?" Did respondents seem possibly confused by the way the question was asked, or do they actually think there's some other country where even more research is happening?

Baobao: We asked this question the way Pew Research Center has asked about general scientific achievements: it's a survey experiment where half of the respondents were randomly assigned to consider the US and half were randomly assigned to consider China. We wanted to ask the question in this manner so that we get a more specific distribution of responses. When you just ask who is in the lead, respondents are only allowed to put down one answer, whereas we give them a number of choices, so a country can be rated best in the world, above average, et cetera.

In terms of people underestimating US R&D, I think this is reflective of the public underestimating US scientific achievements in general. Pew had a similar question in a 2015 survey, and while 45% of the scientists they interviewed think that scientific achievements in the US are the best in the world, only 15% of Americans expressed the same opinion. So this could just be reflecting that general trend.
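To make the split-sample design Zhang describes concrete, here is a minimal illustrative sketch in Python. It is hypothetical: the country labels, the response scale, and the sample size of 2,000 follow the discussion above, but the function names and structure are invented for illustration and are not taken from the survey's actual instrument.

```python
# Illustrative sketch of a split-sample survey experiment (hypothetical code,
# not the report's instrument): each respondent is randomly assigned to rate
# one country's AI R&D on a multi-point scale, rather than being forced to
# name a single leader.
import random

COUNTRIES = ["United States", "China"]
SCALE = ["Best in the world", "Above average", "Average", "Below average", "Don't know"]

def assign_country(rng: random.Random) -> str:
    """Randomly assign a respondent to evaluate one of the two countries."""
    return rng.choice(COUNTRIES)

rng = random.Random(2018)
assignments = [assign_country(rng) for _ in range(2000)]  # 2,000 respondents, as in the survey
print({c: assignments.count(c) for c in COUNTRIES})       # roughly even split across conditions
```

Because each respondent rates only one randomly assigned country on the same scale, differences between the two conditions can be attributed to the country being evaluated rather than to which respondents chose to answer which question.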

Ariel: I want to go back to my questions about the US-China arms race, and I guess it does make sense, first, to just define what you are asking about with a US-China arms race. Is that focused more on R&D, or were you also asking about a weapons race?

Baobao: This is actually a survey experiment, where we present different messages to respondents about a potential US-China arms race. We asked both about investment in AI military capabilities and about developing AI in a more peaceful manner, including cooperation between the US and China on general R&D. We found that Americans seem to support the US investing more in AI military capabilities to make sure that it doesn't fall behind China's, even though that would exacerbate an AI military arms race. On the other hand, they also support the US working hard with China to cooperate to avoid the dangers of an AI arms race, and they don't seem to recognize that there's a trade-off between the two.

I think this result is important for policymakers who are trying not to exacerbate an arms race, or to prevent one: when communicating with the public, they need to communicate these trade-offs. We do find that messages explaining the risks of an arms race tend to decrease respondents' support for the US investing more in AI military capabilities, but the other information treatments don't seem to change public perceptions.

Ariel: Do you think it's a misunderstanding of the trade-offs, or maybe just hopeful thinking that there's some way to maintain military might while still cooperating?

Baobao: I think this is a question that requires further investigation. I apologize that I keep saying this.

Ariel: That's the downside to these surveys. I end up with far more questions than get resolved.

Baobao: Yes, and we're one of the first groups who are asking these questions, so we're just at the beginning stages of probing this very important policy question.

Ariel: With a project like this, do you expect to get more answers or more questions?

Baobao: I think in the beginning stages, we might get more questions than answers, although we are certainly getting some important answers—for instance, that the American public is quite concerned about the societal impacts of AI. With that result, we can then probe and hopefully get more detailed answers. What are they concerned about? What can policymakers do to alleviate these concerns?

Ariel: Let's get into some of the results that you had regarding trust. Maybe you could just talk a little bit about what you asked the respondents first, and what some of their responses were.

Baobao: Sure. We asked two questions regarding trust. We asked about trust in various actors to develop AI, and we also asked about trust in various actors to manage the development and deployment of AI. These actors include parts of the US government, international organizations, companies, and other groups such as universities or nonprofits. We found that the actors most trusted to develop AI include university researchers and the US military.

Ariel: That was a rather interesting combination, I thought.

Baobao: I would like to give it some context. In general, trust in institutions is low among the American public. In particular, there's a lot of distrust in the government, and university researchers and the US military are the most trusted institutions across the board when you ask about trust in other contexts.

Ariel: I would sort of wonder if there are political divides, with people on one side more likely to trust universities and researchers and people on the other more likely to trust the military. Did respondents on either side of the political aisle trust both across the board, or were there political demographics involved in that?

Baobao: That's something that we can certainly look into with our existing data. I would need to check and get back to you.

Ariel: The other thing that I thought was interesting with that—and we can get into the actors that people don't trust in a minute—is that I hear a lot of concern that Americans don't trust scientists. As someone who does a lot of science communication, I think that concern is overblown. I think there is actually a significant amount of trust in scientists; there are just certain areas where it's lower. I was wondering what you've seen in terms of trust in science, and whether the results of this survey have impacted that at all.

Baobao: I would like to add that among the actors we asked about who are currently building AI or planning to build AI, trust is relatively low across all of these groups.

Ariel: Okay.

Baobao: So, even with university scientists: 50% of respondents say that they have a great amount or a fair amount of confidence in university researchers developing AI in the interest of the public. That's better than some of these other organizations, but it's not very high, and that is a bit concerning. In terms of trust in science in general—I used to work in the climate policy space before I moved into AI policy, and there, trust in expertise with regard to climate change is a question we struggle with. I found in my past research that communicating the scientific consensus on climate change is actually an effective messaging tool, so your point that concerns about distrust in science are overblown could be true. Going forward, in terms of effective scientific communication, having AI researchers deliver an effective message could be important in bringing the public to trust AI more.

Ariel: As someone in science communication, I would definitely be all for that, but I'm also all for more research to understand that better. I also want to go into the organizations that Americans don't trust.

Baobao: I think in terms of tech companies, they're not perceived as untrustworthy across the board. Trust is still relatively high for tech companies, besides Facebook. People really don't trust Facebook, and that could be because of all the recent coverage of Facebook violating data privacy, the Cambridge Analytica scandal, digital manipulation on Facebook, et cetera. We conducted this survey a few months after the Cambridge Analytica Facebook scandal had been in the news, but we had also run some pilot surveys before all that press coverage broke, and we also found that people distrust Facebook. So it might be something particular to the company, although it's a cautionary tale for other tech companies, that they should work hard to make sure that the public trusts their products.

Ariel: So I'm looking at this list, and under the tech companies, you asked about Microsoft, Google, Facebook, Apple, and Amazon. And I guess one question that I have: the trust in the other four (Microsoft, Google, Apple, and Amazon) appears to be roughly on par, and then there's very limited trust in Facebook. But I wonder—since you're saying that Facebook also wasn't terribly trusted beforehand—do you think that has to do with the fact that we have to give so much more personal information to Facebook? I don't think people are aware of giving as much data even to Google, or Microsoft, or Apple, or Amazon.

Baobao: That could be part of it. So, I think going forward, we might want to ask more detailed questions about how people use certain platforms, or whether they're aware that they're giving data to particular companies.

Ariel: Are there any other reasons that you think could be driving people to not trust Facebook more than the other companies, especially as you said, with the questions and testing that you'd done before the Cambridge Analytica scandal broke?

Baobao: Before the Cambridge Analytica Facebook scandal, there was a lot of news coverage around the 2016 elections of vast digital manipulation on Facebook and on social media, so that could be driving the results.

Ariel: Okay. Just to be consistent and ask you the same question over and over again, with this, what did you find surprising and what was on par with your expectations?

Baobao: I suppose I don't find the Facebook results that surprising, given the company's negative press coverage and also our pilot results. What I did find surprising is the high level of trust in the US military to develop AI, because I think some of us in the AI policy community are concerned about military applications of AI, such as lethal autonomous weapons. But on the other hand, Americans seem to place a high general level of trust in the US military.

Ariel: Yeah, that was an interesting result. So if you were going to move forward, what are some questions that you would ask to try to get a better feel for why the trust is there?

Baobao: I think I would like to ask some questions about particular uses or applications of AI these various actors are developing. Sometimes people aren't aware that the US military is perhaps investing in this application of AI that they might find problematic, or that some tech companies are working on some other applications. I think going forward, we might do more of these survey experiments, where we give information to people and see if that increases or decreases trust in the various actors.

Ariel: What did Americans think of high-level machine intelligence and AI?

Baobao: What we found is that the public thinks, on balance, it will be more bad than good. We have 15% of respondents who think it will be extremely bad, possibly leading to human extinction, and that's a concern. On the other hand, only 5% think it will be extremely good. There's a lot of uncertainty. To be fair, it is a technology that a lot of people don't understand, so 18% said, "I don't know."

Ariel: What do we take away from that?

Baobao: I think this also reflects our previous findings that I talked about, where Americans expressed concern about where AI is headed: there are people with serious reservations about AI's impact on society. Certainly, AI researchers and policymakers should take these concerns seriously and invest a lot more in research on how to prevent bad outcomes and how to make sure that AI can be beneficial to everyone.

Ariel: Were there groups who surprised you by being more supportive of high-level AI, or groups who surprised you by being less supportive?

Baobao: I think the results for support for developing high-level machine intelligence and support for developing AI are quite similar. The correlation is quite high, so I suppose nothing is entirely surprising. Again, we find that people with CS or engineering degrees tend to have higher levels of support.

Ariel: I find it interesting that people who have higher incomes seem to be more supportive as well.

Baobao: Yes. That's another result that's pretty consistent across the two questions. We also performed analysis looking at these different levels of support for developing high-level machine intelligence, controlling for support of developing AI, and what we find there is that those with CS or programming experience have greater support of developing high-level machine intelligence, even controlling for support of developing AI. So there, it seems to be another tech optimism story, although we need to investigate further.

Ariel: And can you explain what you mean when you say that you're analyzing support for developing high-level machine intelligence with respect to support for AI? What distinction are you making there?

Baobao: Sure. So we use a multiple linear regression model, where we're trying to predict support for developing high-level machine intelligence using all these demographic characteristics, but also including respondents' support for developing AI, to see if something is driving support for developing high-level machine intelligence even after controlling for general support for developing AI. And we find that, controlling for support for developing AI, having CS or programming experience is still correlated with support for developing high-level machine intelligence. I hope that makes sense.
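For readers who want to see what that kind of analysis looks like, here is a minimal sketch of such a regression in Python. It is purely illustrative: the data file, column names, and exact set of covariates are invented for illustration and are not taken from the report's actual analysis.

```python
# Illustrative sketch (hypothetical, not the report's analysis code): an OLS
# regression predicting support for developing high-level machine intelligence
# (HLMI) from demographics while controlling for general support for developing AI.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data; column names are made up for illustration.
df = pd.read_csv("survey_responses.csv")

model = smf.ols(
    "support_hlmi ~ support_ai + cs_programming_experience + cs_engineering_degree"
    " + income + education + age + female",
    data=df,
).fit()

# Because support_ai is included as a control, a positive, significant coefficient
# on cs_programming_experience indicates extra support for HLMI associated with
# programming experience beyond general support for developing AI.
print(model.summary())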

Ariel: For the purposes of the survey, how do you distinguish between AI and high-level machine intelligence?

Baobao: We defined AI as computer systems that perform tasks or make decisions that usually require human intelligence. So that's a more general definition, versus high-level machine intelligence, which is defined in such a way that the AI is doing most economically relevant tasks at the level of the median human.

Ariel: Were there inconsistencies between those two questions, where you were surprised to find support for one and not support for the other?

Baobao: We can probe it further, to see if there are people who answered differently for those two questions. We haven't looked into it yet, but certainly that's something we can do with our existing data.

Ariel: Were there any other results that you think researchers specifically should be made aware of, that could potentially impact the work that they're doing in terms of developing AI?

Baobao: I guess here are some general recommendations. I think it's important for researchers, or people working in an adjacent space, to do a lot more scientific communication to explain to the public what they're doing—particularly AI safety researchers, because I think there's a lot of hype about AI in the news, about either how scary it is or how great it will be, and I think some more nuanced narratives would help people understand the technology.

Ariel: I'm more than happy to do what I can to try to help there. So for you, what are your next steps?

Baobao: Currently, we're working on two projects. We're hoping to run a similar survey in China this year, so we're currently translating the questions into Chinese and changing the questions to have more local context. So then we can compare our results—the US results with the survey results from China—which will be really exciting. We're also working on surveying AI researchers about various aspects of AI, both looking at their predictions for AI development timelines, but also their views on some of these AI governance challenge questions.

Ariel: Excellent. Well, I am very interested in the results of those as well, so I hope you'll keep us posted when those come out.

Baobao: Yes, definitely. I will share them with you.

Ariel: Awesome. Is there anything else you wanted to mention?

Baobao: I think that's it.

Ariel: Thank you so much for joining us.

Baobao: Thank you. It's a pleasure talking to you.
