About Artificial Intelligence
Most benefits of civilization stem from intelligence, so how can we enhance these benefits with artificial intelligence without being replaced on the job market, and perhaps altogether? Future computer technology can bring great benefits but also new risks, as described in the resources below.
Videos
- Stuart Russell – The Long-Term Future of (Artificial) Intelligence
- Humans Need Not Apply
- Nick Bostrom on Artificial Intelligence and Existential Risk
- Stuart Russell interview on the long-term future of AI
- Value Alignment – Stuart Russell: Berkeley IdeasLab Debate Presentation at the World Economic Forum
- Social Technology and AI: World Economic Forum Annual Meeting 2015
- Stuart Russell, Eric Horvitz, Max Tegmark – The Future of Artificial Intelligence
Media Articles
- Concerns of an Artificial Intelligence Pioneer
- Transcending Complacency on Superintelligent Machines
- Why We Should Think About the Threat of Artificial Intelligence
- Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity
- Artificial Intelligence could kill us all. Meet the man who takes that risk seriously
- Artificial Intelligence Poses ‘Extinction Risk’ To Humanity Says Oxford University’s Stuart Armstrong
- What Happens When Artificial Intelligence Turns On Us?
- Can we build an artificial superintelligence that won’t kill us?
- Artificial intelligence: Our final invention?
- Artificial intelligence: Can we keep it in the box?
- Science Friday: Christof Koch and Stuart Russell on Machine Intelligence (transcript)
- Transcendence: An AI Researcher Enjoys Watching His Own Execution
- Science Goes to the Movies: ‘Transcendence’
- Our Fear of Artificial Intelligence
Articles by AI Researchers
- Stuart Russell: What do you Think About Machines that Think?
- Stuart Russell: Of Myths and Moonshine
- Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
Research Papers
- Aligning Superintelligence with Human Interests: A Technical Research Agenda (MIRI)
- Intelligence Explosion: Evidence and Import (MIRI)
- Intelligence Explosion and Machine Ethics (Luke Muehlhauser, MIRI)
- Artificial Intelligence as a Positive and Negative Factor in Global Risk (MIRI)
- MIRI research collection
- Bruce Schneier – Resources on Existential Risk, p. 110
- Racing to the Precipice: a Model of Artificial Intelligence Development
- The Ethics of Artificial Intelligence
- The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- AI Risk and Opportunity: A Strategic Analysis
- Where We’re At – Progress of AI and Related Technologies: An introduction to the progress of research institutions developing new AI technologies.
Case Studies
- The Asilomar Conference: A Case Study in Risk Mitigation (Katja Grace, MIRI)
- Pre-Competitive Collaboration in Pharma Industry (Eric Gastfriend and Bryan Lee, FLI): A case study of pre-competitive collaboration on safety in industry.
Books
- Our Final Invention: Artificial Intelligence and the End of the Human Era
- Facing the Intelligence Explosion
- Superintelligence: Paths, Dangers, Strategies
Organizations
- Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
- Centre for the Study of Existential Risk (CSER): A multidisciplinary research center dedicated to the study and mitigation of risks that could lead to human extinction.
- Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
- Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
- Cal Poly Ethics + Emerging Sciences Group: A non-partisan organization focused on the risk, ethical, and social impact of emerging sciences and technologies.
- Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
Many of the organizations listed on this page, along with their descriptions, are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the effort they have put into compiling it. All of the organizations above work on computer technology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefiting life and away from extreme large-scale risks. Find out more about our mission or explore our work.