Artificial Intelligence

Artificial intelligence, which encompasses everything from recommender algorithms to self-driving cars, is racing forward. Today's 'narrow AI' systems perform isolated tasks, yet they already pose major risks, such as the erosion of democratic processes, financial flash crashes, and an arms race in autonomous weapons.
Looking ahead, many researchers are pursuing artificial general intelligence (AGI): AI that can perform as well as or better than humans at a wide range of cognitive tasks. Once AI systems can themselves design smarter systems, we may hit an 'intelligence explosion' that very quickly leaves humanity behind. This could eradicate poverty or war; it could also eradicate us.
That risk comes not from AI's potential malevolence or consciousness, but from its competence: not from how it feels, but from what it does. Humans could, for instance, lose control of a high-performing system programmed to do something destructive, with devastating impact. And even an AI programmed to do something beneficial could develop a destructive method of achieving that goal.
AI doesn't need consciousness to pursue its goals, any more than a heat-seeking missile does. Equally, the danger comes not from robots per se, but from intelligence itself, which requires nothing more than an internet connection to do us incalculable harm.
Misconceptions on these points still loom large in public discourse. However, thanks to experts speaking out on these issues, and to machine learning reaching certain milestones far earlier than expected, informed recognition of AI safety as a major concern has blossomed in recent years.
Superintelligence is neither inevitable nor impossible: it might be right around the corner, or it might never happen. Either way, civilisation flourishes only as long as we win the race between the growing power of technology and the wisdom with which we design and manage it. With AI, the best way to win that race is not to impede the former, but to accelerate the latter by supporting AI safety research and risk governance.
Since this research may take decades to complete, it is prudent to start now. AI safety research prepares us for the future by pre-emptively making AI beneficial to society and reducing its risks.
Meanwhile, policy cannot possibly form and reform at the same pace as AI risks emerge; it too must be pre-emptive, addressing dangers both present and forthcoming.

Recommended reading
Benefits & Risks of Artificial Intelligence
Featured resources

Introductory Resources on AI Risks

Global AI Policy

AI Value Alignment Research Landscape
Featured posts

Characterizing AI Policy using Natural Language Processing

Superintelligence survey

A Principled AI Discussion in Asilomar

When AI Journalism Goes Bad

Introductory Resources on AI Safety Research

Hawking Reddit AMA on AI

AI FAQ

Open letter on AI weapons
Featured open letters
Pause Giant AI Experiments: An Open Letter
Foresight in AI Regulation Open Letter
Autonomous Weapons Open Letter: Global Health Community
Lethal Autonomous Weapons Pledge
Autonomous Weapons Open Letter: AI & Robotics Researchers
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter