
Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society

Published: May 6, 2016
Author: a guest blogger


The following summary was written by Samantha Bates:

The second “Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society” workshop took place on February 19, 2016, at Harvard Law School. Marin Soljačić, Max Tegmark, Bruce Schneier, and Jonathan Zittrain convened this informal workshop to discuss recent advancements in artificial intelligence research. Participants represented a wide range of expertise and perspectives and discussed four main topics during the day-long event: the impact of artificial intelligence on labor and economics; algorithmic decision-making, particularly in law; autonomous weapons; and the risks of emergent human-level artificial intelligence. Each session opened with a brief overview of the existing literature on the topic from a designated participant, followed by remarks from two or three provocateurs. The session leader then moderated a discussion with the larger group. At the conclusion of each session, participants agreed upon a list of research questions that require further investigation by the community. A summary of each discussion, as well as the group’s recommendations for additional areas of study, is included below.

Session #1: Labor and Economics

Summary

The labor and economics session focused on two main topics: the impact of artificial intelligence on labor economics and the role of artificial intelligence in financial markets.  Participants observed that capital will eventually displace labor and that the gradual bifurcation of the market into the financial sector and the real economy will accelerate and exacerbate the impact of AI on capital concentration.  While the group seemed to be persuaded that current unemployment rates suggest that artificial intelligence is not yet replacing human workers to a great extent, participants discussed how artificial intelligence may cause greater social inequality as well as decrease the quality of work in the gig economy specifically.  The group also discussed how recent events in the financial market, such as the flash crashes in 2010 and 2014, exemplify how the interaction between technology, specifically high frequency trading, and human behavior can lead to greater instability in the market.[1]

On the impact of AI on labor economics, the group agreed that social inequality, particularly when it prevents social mobility, is concerning. For example, the growth of technology has expanded student access to information regarding college applications and scholarships, which increases competition among students. Consequently, given the growing size of the applicant pool, colleges and scholarship boards may overlook candidates who have the necessary skill set to succeed. While some degree of inequality may be necessary to motivate students to work harder, increased competition can also take opportunities away from students who do not have the financial means to pay for college on their own. Participants expressed concern that, rather than creating opportunities for everyone and helping society identify talent, technology may at times reinforce existing socioeconomic structures.

Similar to the college application example, the growth of technology has started to reorganize labor markets by creating more “talent markets,” or jobs that require a very specific skill set. Workers who possess these skills are highly sought after and well paid, but the current system overlooks many of these candidates, which can lead to fewer opportunities for individuals from the working classes to climb the social ladder. Participants did not agree about the best way to solve the problem of social immobility. Several participants emphasized the need for more online education and certification programs, or a restructuring of the education system that would require workers to return to school periodically to learn about advancements in technology. Others pointed out that we may be moving towards a society in which private companies hoard information, in which case online education programs and/or restructuring the education system will not make a difference because information will no longer be widely available.

During the discussion about high frequency trading and flash crashes, the group struggled to determine what we should be most concerned about. We do not fully understand why flash crashes occur, but past examples suggest that they result from a “perfect financial storm” in which technical flaws and malicious actors trying to manipulate the system converge at the same time.[2] Several participants expressed surprise that flash crashes do not occur more frequently and that the market is able to recover so quickly. In response, it was pointed out that while the market appears to recover quickly, the impact of these short drops can prove devastating for individual investors. For example, we heard about a librarian who lost half of her retirement savings in three minutes.

Ultimately, participants agreed that policy regulation will play a major role in managing the impact of artificial intelligence on labor and the financial market. For example, it was suggested that time limits on high frequency trading could prevent rapid-fire transactions and reduce the risk of causing a flash crash. Other areas identified for further research included investigating whether there are “flash trips,” rather than crashes, that do not cause as much damage and are therefore overlooked, and whether we have a model to predict how frequently the system will trip. Participants also agreed that the financial system is an important area for further research because it serves as a predictor of problems that may arise in other sectors. There was also discussion of the pros and cons of different approaches to reducing income inequality through redistribution, ranging from guaranteed income and a negative income tax to the provision of subsidized education, healthcare, and other social services. Lastly, participants agreed that we should define what the optimal education system would look like and how it could help solve the social immobility problem.
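As a purely illustrative aside (not from the workshop discussion), the “time limits” idea can be pictured as a minimum resting time rule enforced at the exchange: an order cannot be cancelled until it has rested for some minimum interval, which throttles rapid-fire submit-and-cancel cycles. The sketch below is hypothetical; the rule, its parameter, and the data structures are invented for illustration and do not describe any real exchange.

```python
# Hypothetical sketch of a minimum resting time rule: cancellations are rejected
# until an order has rested for MIN_RESTING_SECONDS. Purely illustrative.
from dataclasses import dataclass, field

MIN_RESTING_SECONDS = 0.5  # hypothetical policy parameter


@dataclass
class Order:
    order_id: str
    placed_at: float  # seconds since some reference time


@dataclass
class OrderBook:
    orders: dict = field(default_factory=dict)

    def place(self, order_id: str, now: float) -> None:
        self.orders[order_id] = Order(order_id, now)

    def cancel(self, order_id: str, now: float) -> bool:
        """Allow cancellation only after the order has rested long enough."""
        order = self.orders.get(order_id)
        if order is None:
            return False
        if now - order.placed_at < MIN_RESTING_SECONDS:
            return False  # rejected: minimum resting time not yet satisfied
        del self.orders[order_id]
        return True


book = OrderBook()
book.place("A1", now=0.0)
print(book.cancel("A1", now=0.1))  # False: too soon, rapid-fire cancel blocked
print(book.cancel("A1", now=0.6))  # True: resting time satisfied
```

Whether a rule of this kind would actually reduce flash-crash risk, and at what cost to market liquidity, is exactly the sort of open question flagged in the questions below.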

Questions and Conclusions

  • The group concluded that the financial system is a good measure of what the major AI challenges will be in other sectors because it changes quickly and frequently and has a large impact on people’s lives.
  • The group discussed whether we should be concerned that flash crashes are largely unexplained. One research area to explore is whether there are flash trips rather than crashes, which do not cause as much damage and are therefore more difficult to detect.
    • Do we have any models to predict how frequently the system will trip aside from models of complex systems?
  • Since 2011, there has been no progress in mitigating the risks of high frequency trading. Maybe we should call attention to this problem, especially since we have no explanation for why flash crashes occur.
    • Why have we not solved the flash crash problem by setting time limitations that prevent rapid-fire transactions? This may be a policy issue rather than a tech problem.
    • Given that the algorithms are not perfect, should the financial market incorporate more human oversight?
  • What education model best prepares people for a society in which AI is very advanced and rapidly, even exponentially, improving: is it a liberal-arts education? Is our educational model (~20 years of education/specialization followed by ~40 years of specialized work) outdated?
  • Does AI affect “financial democracy”? Will everyone have access to investment and participate in the appreciation of financial assets (e.g., the stock or bond market), or will only those individuals and institutions with the most powerful finance-AI capture all the profits, leaving retirement funds with no growth prospects?
  • Many developing countries relied on export-based growth models, exploiting their cheap manufacturing, to pull themselves out of poverty. If robots and AI in developed countries replace cheap manufacturing from developing countries, how will poor countries ever pull themselves out of poverty?
  • When and in what order should we expect various jobs to become automated, and how will this impact income inequality?
  • What policies could help increasingly automated societies flourish? How can AI-generated wealth best support underemployed populations? What are the pros and cons of interventions such as educational reform, apprenticeship programs, labor-demanding infrastructure projects, income redistribution, and the social safety net?
  • What are potential solutions to the social immobility problem?
    • Can we design an education system that solves social immobility problems? What is the optimal education model?  Is education the best way to solve this problem?
    • Maybe we should investigate where we should invest our greatest resources. Where should we direct our brightest minds and greatest efforts?
    • How do we encourage the private sector to share knowledge rather than holding onto trade secrets?

Session #2: Algorithms and Law

Summary

The discussion mainly focused on the use of algorithms in the criminal justice system. More specifically, participants considered the use of algorithms to determine parole for individual prisoners, to predict where the greatest number of crimes will be committed and which individuals are most likely to commit a crime or become a victim of one, and to set bail.

Algorithmic bias was one major concern expressed by participants. Bias is caused by the use of certain data types, such as race, criminal history, gender, and socioeconomic status, to inform an algorithmic output. However, excluding these data types can make the bias problem worse because it means that outputs are determined by less data. Additionally, because certain data types are linked and can be used to infer other characteristics about an individual, eliminating specific data types from the dataset may not be sufficient to overcome algorithmic bias. Eliminating data types, criminal history in particular, may also counteract some of the basic aims of the criminal justice system, such as discouraging offenders from committing additional crimes in the future. Judges consider whether prisoners are repeat offenders when determining sentences; how should the algorithm simulate this aspect of judicial decision-making if criminal history is eliminated from the dataset? Several members of the group pointed out that even if we successfully created an unbiased algorithm, we would need to require the entire justice system to use it in order to make the system fair. Without universal adoption, certain communities may be treated less fairly than others. For example, what if wealthier communities refused to use the algorithm and only disadvantaged groups were evaluated by it? Other participants countered that human judges can be unfair and biased, so an algorithm that at least ensures that judicial opinions are consistent may benefit everyone. An alternative suggestion was to introduce the algorithm as an assist tool for judges, similar to a pilot assist system, to act as a check or red flag on judicial outcomes.
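To make the point about linked data types concrete, here is a small, hypothetical sketch (not part of the workshop materials): even when a protected attribute is excluded from a dataset, a correlated “proxy” feature, such as a neighborhood or zip code in a segregated city, can let a simple model reconstruct it. The data are synthetic and scikit-learn is used purely for illustration.

```python
# Hypothetical illustration: dropping a protected attribute does not remove its
# influence when a correlated proxy feature remains. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic protected attribute (e.g., a demographic group), never given to the model.
group = rng.integers(0, 2, size=n)

# A proxy feature strongly correlated with the protected attribute (e.g., neighborhood),
# plus an unrelated feature. The feature matrix X excludes the protected attribute itself.
proxy = group + rng.normal(0, 0.3, size=n)
unrelated = rng.normal(0, 1, size=n)
X = np.column_stack([proxy, unrelated])

# Even without the protected attribute, a simple model recovers it from the proxy.
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_train, g_train)
accuracy = accuracy_score(g_test, clf.predict(X_test))
print(f"Accuracy predicting the protected attribute from 'neutral' features: {accuracy:.2f}")
# Prints a value well above 0.5, i.e., far better than chance.
```

In a case like this, deleting the protected column does not make the system blind to the attribute; the remaining features would also need to be audited for proxies.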

Another set of concerns expressed by participants involved human perceptions of justice, accountability, and legitimacy. The conversation touched on the human need for retribution and for feeling that justice is served. Similarly, the group talked about ways in which the use of algorithms to make or assist with judicial decisions may shift accountability away from the judge. If the judge does not have any other source of information with which to challenge the output of the algorithm, he or she may rely on the algorithm to make the decision. Furthermore, if the algorithmic output carries an official seal of approval, the judge will not as readily be held accountable for his or her decisions. Another concern was the possibility that, by trusting the technology to determine the correct outcome of a given case, the judge will become less personally invested in the court system, which will ultimately undermine the legitimacy of the courts.

The other major topic discussed was the use of algorithms to inform crime prevention programs. The Chicago Police Department uses algorithms to identify individuals likely to commit a crime or become a victim of a crime and proactively offers these individuals support from social services.[3]  A similar program in Richmond, CA provides monetary incentives to encourage good behavior among troubled youth in the community.[4]  Unlike the Chicago Police Department, this program relies on human research rather than an algorithm to identify individuals at the greatest risk.  The group discussed how predictive algorithms in this case could automate the identification process.  However, the group also considered whether the very premise of these programs is fair as, in both examples, the state offers help only to a specific group of people.  Again, bias might influence which individuals are targeted by these programs.

The group concluded that algorithms can be most useful if built to leverage human abilities and act as a “tool” rather than a “friend.” More specifically, participants agreed that there needs to be greater transparency about how a given algorithm works and what types of data it uses to make determinations. Aside from the legitimacy issue and the possibility that an assist program would substitute for human decisions, participants seemed to feel more comfortable with judges using algorithms as an assist tool if there is transparency about how the process works and if the algorithm is required to explain its reasoning. Furthermore, several participants made the point that transparency about the decision-making process may discourage future crime.
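As a hypothetical illustration of the “explain its reasoning” requirement, an assist tool could report not just a score but the contribution of each factor to that score, giving a judge something concrete to accept or challenge. The factors, weights, and function below are invented for illustration and are not drawn from any real risk-assessment tool.

```python
# Hypothetical sketch: a toy risk score that reports per-factor contributions,
# so its "reasoning" can be inspected. Factors and weights are invented.
from typing import Dict, Tuple

WEIGHTS: Dict[str, float] = {
    "prior_failures_to_appear": 1.5,
    "pending_charges": 1.0,
    "years_since_last_offense": -0.4,
}


def risk_score_with_explanation(case: Dict[str, float]) -> Tuple[float, Dict[str, float]]:
    """Return a toy risk score and the contribution of each factor to it."""
    contributions = {name: weight * case.get(name, 0.0) for name, weight in WEIGHTS.items()}
    return sum(contributions.values()), contributions


score, reasons = risk_score_with_explanation(
    {"prior_failures_to_appear": 2, "pending_charges": 1, "years_since_last_offense": 3}
)
print(f"score = {score:.1f}")
for factor, contribution in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {contribution:+.1f}")
```

Even a simple breakdown of this kind speaks to the transparency and accountability concerns raised above, though which factors belong in such a tool at all remains a contested policy question.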

Questions and Conclusions

  • The group concluded that algorithms can be most useful if built to leverage human abilities and act as a “tool” rather than a “friend.”
    • Participants seemed to feel more comfortable with judges using algorithms as an assist tool if there is transparency about how the process works and if the algorithm is required to explain its reasoning.
    • How do we make AI systems more transparent?
  • What data should we use to design algorithms? How can we eliminate bias in algorithms without compromising the effectiveness of the technology?
  • How can we design algorithms that incorporate human elements of decision-making such as accountability and a sense of justice?
  • Should we invest in designing algorithms that evaluate evidence? How should the evidence data available be applied to a given case?

Session #3: Autonomous Weapons

Summary

The group defined autonomous weapons systems (AWS) as “a weapon system that is capable of independently selecting and engaging targets.” However, determining which weapons should be considered autonomous led to some disagreement within the group, especially since the military currently uses weapons that some would have considered autonomous in the recent past but no longer do: while drones do not independently select and engage targets, aspects of drones, as well as certain types of missiles (e.g., heat-seeking missiles), are autonomous.

There was discussion about how best to regulate AWS or whether we should ban the development and use of this technology in the United States or internationally.  The overarching question that emerged was whether we are concerned about AWS because they are “too smart” and we worry about their capabilities in the future or if we are concerned because the technology is “too stupid” to comply with the laws of war.  For example, some experts in the field have questioned whether existing legal requirements that outline the process for determining the proportionality and necessity of an attack can be automated.  A poll of the room demonstrated that while almost all participants wanted to encourage governments to negotiate an international treaty banning the use of lethal autonomous weapons, there was one participant who thought that there should not be any restrictions on AI development, and one participant who thought that the U.S. should not pursue AWS research regardless of whether other governments agreed to negotiate international regulations.  An additional participant was conflicted about whether the U.S. should ban AWS research altogether or should negotiate a set of international regulations, so his or her vote was counted in both categories. Several participants reasoned that a ban would not be effective because AWS is likely to benefit non-state actors. Others argued that the absence of an international ban would trigger an AWS arms race that would ultimately strengthen non-state actors and weaken state actors, because mass-produced AWS would become cheap and readily available on the black market.  Those in favor of a ban also highlighted the moral, legal, and accountability concerns raised by delegating targeting determinations to machines.

One person argued that, from a defensive perspective, the U.S. has no choice but to develop autonomous weapons because other actors (state and non-state) will continue to develop this technology. Therefore, it was suggested that the U.S. restrict research to defensive AWS only. However, even if the U.S. develops only defensive AWS, there may still be undesirable second-order effects that will be impossible to anticipate. In addition, as the market leader, the U.S. may be able to encourage other nations to abandon this area of AI research by refusing to continue AWS development itself.

Despite differing opinions on how best to regulate AWS, the group agreed that enforcement of these regulations may be difficult. For example, it may be impossible to enforce restrictions when only the deployer knows whether the weapon is autonomous.  Furthermore, would one hold the individual deployer or the nation state responsible for the use of autonomous weapons? If we choose to hold nation states accountable, it will be difficult to regulate the development and use of AWS because we may not be able to trust that other countries will comply with regulations.  It was pointed out that the regulation would also need to take into account that technology developed for civilian uses could potentially be re-purposed for military uses.  Last, several participants argued that regulatory questions are not a large concern because past examples of advanced weaponry that may have previously been defined as autonomous demonstrate that the U.S. has historically addressed regulatory concerns on a case-by-case basis.

The costs and benefits of using AWS in the military also generated a discussion that incorporated multiple perspectives. Several participants argued that AWS, which may be more efficient at killing, could decrease the number of deaths in war; how do we justify banning AWS research when so many lives could be preserved? On the other side, participants reasoned that the loss of human life discourages nations from entering into future wars. In addition, participants considered the democratic aspect of war. AWS may create an uneven power dynamic in democracies by transferring the power to wage war from the general public to politicians and social elites. There was also a discussion about whether the use of AWS would encourage more killing because it eliminates the messy and personal aspects of the act of killing. Some participants felt that historical events, such as the genocide in Rwanda, suggest that the personal aspects of killing do not discourage violence, and that the use of AWS would therefore not necessarily decrease or increase the number of deaths.

Lingering research questions included whether AWS research and use should be banned in the U.S. or internationally and if not, whether research should be limited to defensive AWS.  Additionally, if there was an international treaty or body in charge of determining global regulations for the development and use of AWS, what would those regulations look like and how would they be enforced?  Furthermore, should regulations contain stipulations about incorporating certain AWS features, such as kill switches and watermarks, in order to address some of the concerns about accountability and enforcement?  And regardless of whether AWS are banned, how should we account for the possibility that AI systems intended for civilian uses may be repurposed for military uses?

Questions and Conclusions

  • Most participants (but not all) agreed that the world powers should draft some form of international regulations to guide AWS research and use.
    • What should the terms of the agreement be? What would a ban or regulations look like? How would the decision-makers reach consensus?
    • How would regulations be enforced, especially since non-state actors are likely to acquire and use AWS?
    • How do we hold nation states accountable for AWS research and development? Should states be held strictly liable, for example?
  • Alternatively, do we already have current regulations that account for some of the concerns regarding AWS research? Should the U.S. continue to evaluate weapon development on a case-by-case basis?
  • Should certain types of AWS development and use be banned? Only offensive? Only lethal weapons? Would a ban be effective?
    • Should we require AWS to have certain safety/defensive features, such as kill switches?
  • Even if the U.S. limits research to defensive AWS, how do we account for unanticipated outcomes that may be undesirable? For example, how do we define whether a weapon is defensive?  What if a “defensive” weapon is sent into another state’s territory and fires back when it is attacked? Is it considered an offensive or defensive weapon?
  • How do we prevent military re-purposing of civilian AI?
  • We need to incorporate more diverse viewpoints in this conversation, including more input from lawyers, philosophers, and economists, to solve these problems.

Session #4: The Risks of Emergent Human-Level AI

Summary

The final session featured a presentation about ongoing research funded by the Future of Life Institute, which focuses on ensuring that any future AI with super-human intelligence will be beneficial. Group members expressed a variety of views about when, if ever, we will successfully develop human-level AI. It was argued that many issues will have to be resolved before current AI technology, which is limited to narrow AI, can be brought together into an advanced, general AI system. Conversely, several participants argued that while the timeline remains uncertain, human-level AI is achievable. They emphasized the need for further research in order to prepare for the social, economic, and legal consequences of this technology.

The second major point debated was the potential dangers of human-level AI. Participants disagreed about the likelihood that advanced AI systems will have the ability to act beyond the control of their human creators. Some group members explained that an advanced, general AI system may be harder to control because it is designed to learn new tasks, rather than follow a specific set of instructions like narrow AI systems, and may therefore act out of self-preservation or out of a need for resources to complete a task. Given the difference between general AI and narrow AI, it will be important to consider the social and emotional aspects of general AI systems when designing the technology. For example, there was some discussion about the best way to build a system that allows the technology to learn and respect human values. At the same time, it was pointed out that current AI systems that do not possess human-level intelligence could also pose a threat to humans if released “into the wild.” Much like animals, AI systems whose code can be manipulated by malicious actors or that have unforeseeable and unintended consequences may prove equally dangerous to humans. It was suggested that some of the recommendations for overseeing and maintaining control of human-level AI be applied to existing AI systems.

In conclusion, the group identified several areas requiring additional research, among them the need to define robotic and AI terminology, including vocabulary related to the ethics of safety design. For example, should we define a robot as a creature with human-like characteristics or as an object or assist tool for humans? Experts from different disciplines, such as law, philosophy, and computer science, will require a shared vocabulary in order to work together to devise solutions to the variety of challenges that may be created by advanced AI. Additionally, as human-level AI can quickly become super-human AI, several participants emphasized the need for more research in this field related to safety, control, and the alignment of AI values with human values. It was also suggested that we investigate whether our understanding of sociability is changing, as it may affect how we think about the social and emotional elements of AI. Lastly, lawyers should investigate liability and tort law in order to determine whether there are guiding ethical principles that should be incorporated into future AI systems.

Questions and Conclusions

  • The group displayed the full spectrum of perspectives about the timeline and the potential outcomes of human-level AI. A large number of questions still need to be considered.
  • Group identified the need to define robotic and AI terminology, including vocabulary related to the ethics of safety design. For example, should we define a robot as a creature or as an object?  Experts from different disciplines such as law, philosophy and computer science will require a shared vocabulary in order to work together to devise solutions to a variety of challenges that may be created by advanced AI.
  • The group considered the social and emotional aspects of AI systems. What is the nature of a collaborative relationship between a human and a machine? We should investigate whether our understanding of sociability is changing as it may impact how we think about the social and emotional elements of AI.
  • How should we design legal frameworks to oversee research and use of human-level intelligence AI?
  • Lawyers should investigate liability and tort law in order to determine whether there are guiding ethical principles that should be incorporated into future AI systems.
    • How can we build the technology in a way that can adapt as human values, laws, and social norms change over time?

Participants:

  • Ryan Adams – Assistant Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Kenneth Anderson – Professor of Law, American University, and visiting Professor of Law, Harvard Law School (spring 2016).
  • Peter Asaro – Assistant Professor, School of Media Studies, The New School.
  • David Autor – Professor and Associate Department Head, Department of Economics, MIT.
  • Cynthia Breazeal – Associate Professor of Media Arts and Sciences, MIT.
  • Rebecca Crootof – Ph.D. in Law candidate at Yale Law School and a Resident Fellow with the Yale Information Society Project, Yale Law School.
  • Kate Darling – Research Specialist at the MIT Media Lab and a Fellow at the Berkman Center for Internet & Society, Harvard University.
  • Bonnie Docherty – Lecturer on Law and Senior Clinical Instructor, International Human Rights Clinic, Harvard Law School.
  • Peter Galison – Joseph Pellegrino University Professor, Department of the History of Science, Harvard University.
  • Viktoriya Krakovna – Doctoral Researcher at Harvard University, and a co-founder of the Future of Life Institute.
  • Andrew W. Lo – Charles E. and Susan T. Harris Professor, Director, MIT Laboratory for Financial Engineering, MIT.
  • Richard Mallah – Director of AI Projects at the Future of Life Institute.
  • David C. Parkes – Harvard College Professor, George F. Colony Professor of Computer Science and Area Dean for Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Steven Pinker – Johnstone Family Professor, Department of Psychology, Harvard University.
  • Lisa Randall – Frank B. Baird, Jr., Professor of Science, Department of Physics, Harvard University.
  • Susanna Rinard – Assistant Professor of Philosophy, Department of Philosophy, Harvard University.
  • Cynthia Rudin – Associate Professor of Statistics, MIT Computer Science and Artificial Intelligence Lab and Sloan School of Management, MIT.
  • Bruce Schneier – Security Technologist, Fellow at the Berkman Center for Internet & Society, Harvard University, and Chief Technology Officer of Resilient Systems, Inc.
  • Stuart Shieber – James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
  • Marin Soljačić – Professor of Physics, Department of Physics, MIT.
  • Max Tegmark – Professor of Physics, Department of Physics, MIT.
  • Jonathan Zittrain – George Bemis Professor of International Law, Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science, Harvard School of Engineering and Applied Sciences, Vice Dean, Library and Information Resources, Harvard Law School, and Faculty Director of the Berkman Center for Internet & Society.

[1] “One big, bad trade,” The Economist Online, October 1, 2010, http://www.economist.com/blogs/newsbook/2010/10/what_caused_flash_crash. See also Pam Martens and Russ Martens, “Treasury Flash Crash of October 15, 2014 Still Has Wall Street in a Sweat,” Wall Street On Parade, April 9, 2015, http://wallstreetonparade.com/2015/04/treasury-flash-crash-of-october-15-2014-still-has-wall-street-in-a-sweat/.

[2] In 2015, authorities arrested a trader named Navinder Singh Sarao, who contributed to the flash crash that occurred on May 6, 2010. See Nathaniel Popper and Jenny Anderson, “Trader Arrested in Manipulation That Contributed to 2010 ‘Flash Crash’,” The New York Times, April 21, 2015, http://www.nytimes.com/2015/04/22/business/dealbook/trader-in-britain-arrested-on-charges-of-manipulation-that-led-to-2010-flash-crash.html.

[3] Matt Stroud, “The minority report: Chicago’s new police computer predicts crimes, but is it racist?” The Verge, February 19, 2014,  http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.

[4] Tim Murphy, “Did This City Bring Down Its Murder Rate by Paying People Not to Kill?” Mother Jones, July/August 2014, http://www.motherjones.com/politics/2014/06/richmond-california-murder-rate-gun-death.

This content was first published at futureoflife.org on May 6, 2016.
