
Op-ed: Poll Shows Strong Support for AI Regulation Though Respondents Admit Limited Knowledge of AI

Published: April 13, 2017
Author: Matt Scherer

On April 11, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues. In the poll, 2,200 Americans answered 39 questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents support national regulation of AI (71%) and international regulation (67%). Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international regulation, those numbers were 35% and 5%, respectively). However, nearly half of respondents also indicated they had very limited knowledge of what AI actually is.

Perhaps even more strikingly, the proportion of respondents who support regulation was very consistent across political and socioeconomic lines.  A full 74% of Republicans, 73% of Democrats, and 62% of independents support national regulations, as do 69% of people making less than $50k/yr, 73% making $50k-$100k, and 65% of those who make more than $100k.  Education likewise matters little: 70% of people without a college degree support national regulation, along with 74% of college grads and 70% of respondents with post-graduate degrees.  Women (75%) were slightly more likely to support such regulations than men (67%).

(Interestingly, the biggest “outlier” demographic group in terms of supporting regulation was…Jewish people.  Only 56% of Jewish respondents support national regulations for AI, by far the smallest proportion of any group.  The difference is largely attributable to the fact that more than a quarter of Jewish respondents weren’t sure if they supported regulation or not (compared to 15% of respondents as a whole).  The most pro-regulation groups were Republican women (80%) and those with blue-collar jobs (76%).)

Support for international regulations was only slightly lower: 67% of respondents overall, with a similar level of consistency among different demographic groups.

The poll’s AI regulation results are interesting, to be sure, but the responses to a number of other questions in the poll are also worth highlighting.

  • How much have you seen, read, or heard about A.I.?: A solid 21% of respondents said that they had heard “nothing at all” and 27% answered “not much.” This jibes with my impressions of public consciousness on the issue, and it suggests that a good many people support regulating AI despite not knowing much about it. Morning Consult did not publish cross-tabs between different poll questions, so there is no way to tell whether support for regulation rises or falls with how much people know about AI (a sketch of what such a cross-tab would look like appears after this list), but my gut tells me that higher familiarity with the technology correlates with lower support for regulation.
  • As you may know, A.I. is the science and engineering of making intelligent machines that can perform computational tasks which normally require human intelligence. Do you think we should increase or decrease our reliance on A.I.?: Equal proportions of people answered “increase” and “decrease,” with 39% each.  Incidentally, Morning Consult stole the circular definition of “artificial intelligence” that I used in my (shameless plug alert!) Regulating AI paper.  I ain’t mad, though.  Circular definitions are the only ones that work for AI.
    • Respondents were also about equally split on whether AI was safe (41%) or unsafe (38%).
  • 57% of respondents said that their lives were already being affected by AI; just 20% said their lives had not yet been affected.
  • A long series of questions focused on whether respondents would “feel comfortable or uncomfortable delegating the following tasks to a computer with artificial intelligence.” Unsurprisingly, people were more comfortable delegating mundane tasks than tasks affecting their safety or personal lives. Some of the more interesting responses:
    • Driving a car: 28% comfortable, 67% uncomfortable
    • Flying an airplane: 23% comfortable, 70% uncomfortable (including 53% “very uncomfortable”)
      • It was especially interesting to see that this drew some of the strongest negative responses, given how long commercial planes have used autopilot systems.
    • Medical diagnosis: 27% comfortable, 65% uncomfortable
    • Performing surgery: 22% comfortable, 69% uncomfortable (including 51% “very uncomfortable”)
    • Picking your romantic partner: 23% comfortable, 68% uncomfortable
    • Cleaning your house: 61% comfortable, 31% uncomfortable
    • Cooking meals: 45% comfortable, 47% uncomfortable
  • Another series of questions focused on “whether each statement makes you more or less likely to support further A.I. research.”
    • A.I. can replace human beings in many labor intensive tasks: 40% more likely, 41% less likely
    • Robots can cause mass unemployment: 31% more likely (??), 51% less likely
    • Machines may become smart enough to control humans: 22% more likely, 57% less likely
  • Do you agree or disagree that A.I. is humanity’s greatest existential threat?: 50% agree, 31% disagree
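
For the cross-tab point flagged in the first bullet above, here is a minimal sketch, in Python with pandas, of what such a table would look like. The respondent-level records are invented for illustration, since Morning Consult released only topline numbers and demographic splits; the column names and answer codings are my assumptions, not the poll’s actual data format.

```python
# Hypothetical sketch: cross-tabulating familiarity with AI against support
# for national regulation. The records below are made up for illustration;
# the real poll did not release respondent-level data.
import pandas as pd

respondents = pd.DataFrame({
    "familiarity": ["a lot", "some", "not much", "nothing at all",
                    "some", "not much", "a lot", "nothing at all"],
    "regulation":  ["support", "support", "support", "not sure",
                    "oppose", "support", "oppose", "support"],
})

# Row-normalized cross-tab: within each familiarity level, the share of
# respondents who support, oppose, or are unsure about regulation.
crosstab = pd.crosstab(respondents["familiarity"],
                       respondents["regulation"],
                       normalize="index")
print(crosstab.round(2))
```

With real data, a table like this would answer directly whether support for regulation rises or falls with familiarity, instead of leaving it to gut instinct.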

So what’s the takeaway from all this?  Well, certainly from a law-and-AI perspective, the strong support for regulation is the most interesting result.  Support for regulation is certainly broad, but it does not appear to be especially deep.  Just over a third of respondents strongly support regulation–nothing to sniff at, but not enough to make this a campaign issue anytime soon.

But given that nearly half of respondents knew little-to-nothing about AI, that number could be highly volatile. Support could rise or fall quite quickly if AI’s encroachment into the human world continues apace.  Which direction it goes will depend on whether AI is mainly seen as something that makes our lives easier or puts our lives (or our livelihoods) at risk.

Given US Treasury Secretary Steve Mnuchin’s recent comments dismissing the potential impact of AI on the labor market, it seems unlikely that AI regulation is coming to the US for at least the next four years. The EU has shown some interest in AI-related issues, but it seems to have plenty else on its plate at the moment, and I doubt that AI regulation will become a European priority. The same can be said of Australia, Japan, South Korea, and China (although China’s state-driven economic model makes it something of a special case).

That means that despite the broad support for AI regulation, we’re unlikely to see any actual regulations coming down national or international government pipelines over the next few years.  Private sector and industry groups seem to have a window of at least a few years to establish their own system(s) of ethics and self-regulation before they need to worry about the government getting involved.

But that window could close in a hurry. A major book, documentary, or news story can turn fence-sitters into strong proponents of regulation. It was no coincidence that Congress passed the National Traffic and Motor Vehicle Safety Act just one year after Ralph Nader published Unsafe at Any Speed. The American auto industry quickly went from being almost completely free of national regulation to being one of the most heavily regulated industries in the world. The same thing could happen to the burgeoning AI industry if it ignores safety concerns just because they don’t seem to pose a business problem right now. So if Silicon Valley companies want to avoid the fate that befell Detroit, the AI world will need to figure out a way to police itself effectively.


This post originally appeared on Law and AI.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

This content was first published at futureoflife.org on April 13, 2017.

