AI Policy Challenges
This page is intended as an introduction to the major challenges that society faces when attempting to govern Artificial Intelligence (AI). FLI acknowledges that this list is not comprehensive, but rather a sample of the issues we believe to be consequential.
AI systems have enormous potential to serve and benefit the world. In the long term, these systems could well enable discoveries in medicine, basic and applied science, managing complex systems, and creating currently-unimagined products and services. At present, AI already helps people in increasingly diverse ways. This includes breakthroughs in acquiring new skills and training, democratising mental health services, speeding up product design and delivery, providing real-time environmental monitoring for pollution, enhancing cybersecurity defences, reducing healthcare inefficiencies, creating new kinds of enjoyable experiences, and improving real-time translation services to connect people. Overall, AI can foreseeably help manage some of the world’s hardest problems and improve countless lives.
Alongside AI’s many advantages, there are important challenges to address. Below are ten areas of particular concern for the safe and beneficial development of AI in the near and long term. These should be prioritised by policymakers seeking to prepare for and mitigate the risks of AI, as well as harness its benefits.
1. Global Governance and International Cooperation
The adoption and development of stronger AI systems will severely test and likely shift existing power dynamics. Discussion of an “AI race” between great powers has become commonplace, and many countries have outlined national strategies that describe efforts to attain or retain a competitive advantage in this field. However, there are important examples of international cooperation that will be increasingly critical in guiding the safe and beneficial development of AI, while reducing race dynamics and global security threats.
2. Maximising Beneficial AI Research and Development
The challenges associated with Research and Development (R&D) programs revolve around ensuring AI is not only competent and useful, but also beneficial to humans. To this end, researchers aim to make high quality and standardised datasets more accessible and convince teams to implement risk analyses and mitigation practices in their programs. Similarly, R&D programs can prioritise ‘AI Safety’ by improving their systems’ robustness, benefits and technical design, incorporating core safety mechanisms to mitigate the “control problem,” and avoiding accidents and unwanted side-effects. Additionally, AI safety focuses on the consideration of value alignment between systems and humans. More of these efforts can be found in FLI’s AI safety research landscape.
3. Impact on the Workforce
There are two dimensions to the effects of AI on the workforce. First, there is the technology’s ability to enable greater automation. This could impact many industries and worsen economic disparities by generating wealth for a smaller number of people than previous technological revolutions. As a result, society could face significant job losses, necessitating improved retraining programs as well as updated social security measures. Some popular proposals to address this challenge include redistributive economic policies like universal basic income and a “robot tax” to offset some of the increases in economic inequality.
The second dimension centres on the supply of labour. As this technology becomes the cornerstone of the economy, it will become harder to hire people with the right combination of skills to build reliable, high-quality products. Limits on immigration and work visas could further exacerbate the shortage of qualified individuals. These constraints might force governments to update educational programs to include training in building safe and beneficial AI systems.
4. Accountability, Transparency, and Explainability
Holding an AI system or its designers accountable for its decision-making poses several challenges. The lack of transparency and explainability associated with machine learning means that it can be hard or impossible to know why an algorithm made a particular choice. There is also the question of who has access to key algorithms and how understandable they are, a problem exacerbated by the use of proprietary information. As decision-making is ceded to AI systems, there are few clear guidelines about who should be held accountable for undesirable effects. FLI recently published a position paper providing feedback on the European Commission’s proposal for an AI Liability Directive, suggesting ways it can better protect consumers from AI-related harms.
5. Surveillance, Privacy, and Civil Liberties
AI expands surveillance possibilities because it enables real-time monitoring and analysis of video and other data streams, including facial recognition. These uses raise questions about privacy, justice, and civil liberties, particularly in the law enforcement context. Police forces in the US are already experimenting with the use of AI for enhanced predictive policing. There is also increasing pressure on AI companies and institutions to be more transparent about their data and privacy policies. The EU GDPR is one prominent example of a recent data privacy regulation that has profound implications for AI development given its requirements for data collection and management as well as the “right to explanation.” The California Consumer Privacy Act of 2018 is another important privacy regulation that gives consumers greater rights over their personal information.
6. Fairness, Ethics, and Human Rights
The field of AI ethics is growing rapidly to address multiple challenges. One is the relative homogeneity of the computer science and AI fields, which lack gender, racial, and other kinds of diversity; this can lead to skewed product design, blind spots, and false assumptions. Another is the potential for algorithms to reproduce and magnify social biases and discrimination because they are trained on datasets that mirror existing biases in society or misrepresent reality. AI ethics also encompasses the issues of value systems and goals encoded into machines, design ethics, and systemic impacts and their effects on social, political, and economic structures. As a result, some have called for justice and ethics to be a more explicit goal of fair, accountable, and transparent (or “FAT”) AI development.
7. Manipulation and Disinformation
AI can enable and scale micro-targeting practices that are particularly persuasive and can manipulate behaviour and emotions. People could arguably lose autonomy if AI systems nudge their behaviour and even alter their perception of the world. As society cedes control to machines in various areas of our lives, a proportion of individuals might experience an increasing psychological dependency on these systems. Importantly, it is unclear what kinds of relationships people will form with AI systems once they are as capable as humans, or how this will impact human relationships.
AI systems are also capable of amplifying information wars, enabling the rise of highly personalised and targeted computational propaganda. Fake news and social media bots can be used to tailor messages for political ends. Improvements in the creation of fake videos are making this challenge even greater. Many worry that using AI to manipulate the information people see, and thereby compromise their ability to make informed decisions, could undermine democracy itself.
8. Implications for Health
AI is capable of interpreting massive amounts of biomedical data that can assist diagnostics, patient treatment, and drug development. This can yield positive advances in precision medicine, yet it also raises issues of care access, data control, and opposing beliefs about human health choices. Some people want to use AI to augment human ability through “smart drugs,” nanobots and devices implanted in our bodies, or by directly linking our brains to computer interfaces. Such uses raise safety and ethical challenges, including the possibility of exacerbating inequalities between people.
9. National Security
AI impacts national and global security by generating new modes of information warfare, expanding the threat landscape, and contributing to destabilisation. Moreover, increasingly powerful AI systems are used to carry out cyberattacks that amplify existing threats and introduce novel ones, even from unsophisticated actors.
These systems also have myriad vulnerabilities: their software can be hacked and the data they rely upon can be manipulated. Adversarial machine learning, in which crafted data inputs are used to confuse a system and cause a mistake, is also a threat. As AI is increasingly featured in a variety of bots and interfaces with which we form connections, there will also be novel security risks relating to the abuse of human trust and reliance.
The question of how much autonomy is acceptable in weapon systems is another ongoing international debate. Many civil society organisations support international and national bans on autonomous weapon systems that target humans. Arguments against these systems include the fact that they violate international humanitarian law by “removing a human from the loop,” that it is morally wrong to let a machine determine whom to kill, and that we need to avoid an AI arms race, which could lower the threshold of war or alter the speed, scale, and scope of its effects. After many years of unsuccessful discussions under the UN Convention on Certain Conventional Weapons, states are now looking to new fora to reach a treaty on these systems. You can read about FLI’s position and work on this particular issue here.
10. Artificial General Intelligence and Superintelligence
The notion of a machine with intelligence equal to humans in most or all domains is called strong AI or artificial general intelligence (AGI). Many AI experts agree that AGI is possible, disagreeing only about the timelines and qualifications. AGI technology would encounter all of the challenges of narrow AI, but would additionally pose its own risks, such as containment. Key strategists, AI researchers, and business leaders believe that this advanced AI poses one of the greatest threats to human survival, and an extinction-level risk to life in the long term. On top of that, the combination of AI with cyber, nuclear, robotic, drone, or biological weapons introduces numerous other devastating possibilities.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.