FLI October 2017 Newsletter
The Inaugural Future of Life Award Presented to Vasili Arkhipov Family
London, UK – On October 27, 1962, a soft-spoken naval officer named Vasili Arkhipov single-handedly prevented nuclear war during the height of the Cuban Missile Crisis. Arkhipov’s submarine captain, thinking their sub was under attack by American forces, wanted to launch a nuclear torpedo at the ships above. Arkhipov, who held the power of veto, said no, thus averting nuclear war.
Now, 55 years after his courageous actions, the Future of Life Institute has presented the Arkhipov family with the inaugural Future of Life Award to honor humanity’s late hero.
Arkhipov’s surviving family members, represented by his daughter Elena and grandson Sergei, flew into London for the ceremony, which was held at the Institution of Engineering and Technology. After explaining Arkhipov’s heroics to the audience, Max Tegmark, president of FLI, presented the Arkhipov family with their award and $50,000. Elena and Sergei were both honored by the gesture and by the overall message of the award.
Elena explained that her father “always thought that he did what he had to do and never considered his actions as heroism. … Our family is grateful for the prize and considers it as a recognition of his work and heroism. He did his part for the future so that everyone can live on our planet.”
This event was covered by The Times, The Guardian, The Independent, and The Atlantic.
ICAN Wins Nobel Peace Prize
By Ariel Conn
We at FLI offer our enthusiastic congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year’s winner of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons.
More than 70 years have passed since atomic bombs were dropped on Hiroshima and Nagasaki, but finally, on July 7 of this year, 122 countries came together at the United Nations to adopt a treaty outlawing nuclear weapons. Behind the effort was the small, dedicated team at ICAN, led by Beatrice Fihn.
FLI welcomes Jessica Cussins to the team!
All of us at FLI are excited to welcome Jessica Cussins to the core team! Jessica joins FLI as an AI policy specialist and will develop programs, strategies, and research on both short- and long-term AI policy. Her first two projects include helping different stakeholders implement the Asilomar AI Principles and developing the campaign against lethal autonomous weapon systems. She will also support FLI’s other focus areas, with particular attention to biotechnology.
Check us out on SoundCloud and iTunes!
Podcast: AI Ethics, the Trolley Problem, and a Twitter Ghost Story
with Joshua Greene and Iyad Rahwan
As technically challenging as it may be to develop safe and beneficial AI, the effort also raises thorny questions of ethics and morality that are just as important to address before AI becomes too advanced. How do we teach machines to be moral when people can’t even agree on what moral behavior is? And how do we help people deal with, and benefit from, the tremendous disruptive change that we anticipate from AI?
To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab.
In this episode, we discuss the trolley problem with autonomous cars, how automation will affect rural areas more than cities, how we can address potential inequality issues AI may bring about, and a new way to write ghost stories.
ICYMI: This Month’s Most Popular Articles
Unlike AlphaGo, AlphaGo Zero learned entirely from playing against itself, with no prior knowledge of the game. In a DeepMind blog post about the announcement, the authors write, “This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.”
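For readers curious what “learning from self-play” looks like in practice, here is a minimal, purely illustrative sketch: a tabular value learner that improves at tic-tac-toe by playing against itself. This is not AlphaGo Zero’s actual algorithm (which combines Monte Carlo tree search with a deep neural network); every name and parameter below is invented for illustration, and the sketch only shows the core loop of an agent generating all of its own training data, with no human games involved.

```python
# Illustrative self-play sketch (not DeepMind's code): a tabular value
# learner for tic-tac-toe that trains only on games it plays against itself.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}                # state -> value from the perspective of the player who just moved
ALPHA, EPSILON = 0.2, 0.1  # learning rate and exploration rate

def value(board):
    return values.get("".join(board), 0.0)

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:      # occasionally explore a random move
        return random.choice(moves)
    def score(m):                      # greedy: pick the move leading to the best-valued state
        nxt = board[:]
        nxt[m] = player
        return value(nxt)
    return max(moves, key=score)

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    w = None
    while True:
        move = choose_move(board, player)
        board[move] = player
        history.append(("".join(board), player))
        w = winner(board)
        if w or "." not in board:
            break
        player = "O" if player == "X" else "X"
    # Back up the final outcome (+1 win, -1 loss, 0 draw) through every visited state.
    for state, mover in history:
        target = 0.0 if w is None else (1.0 if mover == w else -1.0)
        old = values.get(state, 0.0)
        values[state] = old + ALPHA * (target - old)

for _ in range(50_000):    # the agent plays both sides; no human data is used
    self_play_game()
print(f"learned value estimates for {len(values)} positions")
```

AlphaGo Zero replaces the lookup table with a deep network and guides each move with Monte Carlo tree search, but the underlying idea of generating all training data from the program’s own games is the same.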
Artificial general intelligence (AGI) is something of a holy grail for many artificial intelligence researchers. How close are we to developing AGI? How can we ensure that the power of AGI will benefit the world, and not just the group that develops it first? FLI volunteers met with Dr. Hiroshi Yamakawa, one of the leading AGI researchers in Japan, to discuss these questions.
By Viktoriya Krakovna
It was inspiring to take part in an increasingly global conversation about AI impacts, and interesting to see how the Japanese AI community thinks about these issues. Overall, Japanese researchers seemed more open to discussing controversial topics like human-level AI and consciousness than their Western counterparts. Most attendees were primarily interested in near-term AI ethics concerns but were also curious about long-term problems.
News from Partner Organizations
The Threat of Nuclear Terrorism: Dr. William J. Perry
Former U.S. Secretary of Defense William Perry launched a MOOC on the changing nuclear threat. With a team of international experts, Perry explores what can be done about the threat of nuclear terrorism.
The course will answer questions like: Is the threat of nuclear terrorism real? What would happen if a terror group was able to detonate a nuclear weapon? And, what can be done to lower the risk of a nuclear catastrophe? This course is free, and you can enroll here.
The Centre for the Study of Existential Risk (CSER) has launched its new website. CSER is “an interdisciplinary research centre within CRASSH at the University of Cambridge dedicated to the study and mitigation of existential risks.” Visit the new site to learn more about CSER’s research and efforts to keep humanity safe!
What We’ve Been Up to This Month
Ariel Conn attended a special showing of The Bomb this month in San Francisco. The event was hosted by N Square, an organization that pushes the boundaries of nuclear communication, and included a panel session with Eric Schlosser, author of Command and Control, and Smriti Keshari, the film’s director, along with many other experts in the nuclear field.
Viktoriya Krakovna attended the AI & Society Symposium in Tokyo, Japan, where speakers from industry and academia discussed the future of AI technologies and their societal impacts. Speakers focused on AI safety and ranged from machine learning experts to ethicists and legal scholars. Topics included artificial general intelligence (AGI), artificial consciousness, and business applications for AI. Vika also participated in the accompanying Beneficial AI Tokyo workshop, which specifically discussed the Asilomar Principles.
Viktoriya Krakovna also spoke at the Workshop for Reliable AI (WRAI) in Zurich, a one-day workshop on the technical aspects of building robust and safe agents. Topics included safe reinforcement learning, safe control and exploration, value learning, and the formal analysis of agent behavior.
FLI in the News
FLI
THE GUARDIAN: Soviet submarine officer who averted nuclear war honoured with prize
“Vasili Arkhipov, who prevented escalation of the cold war by refusing to launch a nuclear torpedo against US forces, is to be awarded new ‘Future of Life’ prize.”
THE ATLANTIC: When the World Lucked Out of a Nuclear War
“The escalating crisis with North Korea coincides with the 55th anniversary of a Russian naval captain’s fateful decision to prevent a torpedo launch at the height of the Cuban Missile Crisis.”
THE TIMES: Soviet officer Vasili Arkhipov, who averted nuclear war, is honoured
“Fifty-five years ago today, at the zenith of the Cuban missile crisis, the US navy hunted down a nuclear-armed Soviet submarine off the coast of Cuba.”
THE INDEPENDENT: Soviet officer Vasili Arkhipov who prevented nuclear war 50 years ago honoured in London
“Bravery recognised at time when ‘risk of nuclear war is on rise.'”
BUSINESS INSIDER: Avrio AI, Inc. Names Richard Mallah as Head of Artificial Intelligence
“Avrio AI, Inc. is thrilled to announce that Richard Mallah has joined our team as Head of Artificial Intelligence. His proven ability to transform businesses through the implementation of machine learning, deep learning, advanced analytics, knowledge representation, and computational linguistics makes him a perfect fit for Avrio AI.”
PARTNERSHIP ON AI: Partnership on AI Announces Executive Director Terah Lyons and Welcomes New Partners
“Today we also welcome 21 new Partners to the Partnership on AI, who join our existing 32 Partners, some of whom joined in May. The latest partners hail from three continents and represent a broad cross-section of industry, nonprofit, and academic organizations.”
Life 3.0
THE TIMES: Why Elon Musk thinks Max Tegmark is the geek who will save the world
“The Swedish scientist Max Tegmark’s job includes saving humanity from robotic Armageddon. Most people are in denial, he says.”
NAUTILUS: The Last Invention of Man: How AI might take over the world
“This excerpt from Life 3.0 details how artificial intelligence and the companies that develop it might seek to rule the world.”
WASHINGTON POST: Think humans are superior to AI? Don’t be a ‘carbon chauvinist’
“Within the lifetime of most who are reading this column, software will develop the ability to complete complex tasks without human intercession. And it will do it faster and better. And that is a very disquieting thought.”
NPR: How To Make AI The Best Thing To Happen To Us
“I’m optimistic that we can thrive with advanced AI as long as we win the race between the growing power of our technology and the wisdom with which we manage it. But this requires ditching our outdated strategy of learning from mistakes.”
MOTHERBOARD: The Divide Between People Who Hate and Love Artificial Intelligence Is Not Real
“It might seem like there are two competing schools of thought: the AI pessimists and the AI optimists. But this dichotomy is misleading.”
BIGTHINK: What It Will Take for AI to Surpass Human Intelligence
“Chances also are that the piece of machinery that you’re looking at right now has the capability to outsmart you many times over in ways that you can barely comprehend. That’s the beauty and the danger of AI — it’s becoming smarter and smarter at a rate that we can’t keep up with.”
Get Involved
FHI: AI Policy and Governance Internship
The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the areas of AI policy, AI governance, and AI strategy. Our work in this area touches on a range of topics and areas of expertise, including international relations, international institutions and global cooperation, international law, international political economy, game theory and mathematical modelling, and survey design and statistical analysis. Previous interns at FHI have worked on issues of public opinion, technology race modelling, the bridge between short-term and long-term AI policy, the development of AI and AI policy in China, case studies in comparisons with related technologies, and many other topics.
If you or anyone you know is interested in this position, please follow this link.
FHI: AI Safety and Reinforcement Learning Internship
The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Cooperative Inverse Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents. The internship will give the opportunity to work on a specific project. The ideal candidate will have a background in machine learning, computer science, statistics, mathematics, or another related field. This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.
If you or anyone you know is interested in this position, please follow this link.
FHI: AI Safety Postdoctoral Research Scientist
You will advance the field of AI safety by conducting technical research. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will publish technical work at major conferences, raise research funds, manage your research budget, and potentially hire and supervise additional researchers.
If you or anyone you know is interested in this position, please follow this link.
FHI: AI Safety Research Scientist
You will be responsible for conducting technical research in AI safety. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will co-publish technical work at major conferences, own the research budget for your project, and contribute to the recruitment and supervision of additional researchers.
If you or anyone you know is interested in this position, please follow this link.
The Fundraising Manager will play a key role in securing funding to meet our aim: increasing the preparedness, resources, and ability (knowledge and technology) of governments, corporations, humanitarian organizations, NGOs, and individuals to feed everyone in the event of a global catastrophe through the recovery of food systems. To reach this ambitious goal, the Fundraising Manager will develop a strategic fundraising plan and interface with the funder community, including foundations, major individual donors, and other non-corporate institutions.
If you or anyone you know is interested in this position, please follow this link.
To learn more about job openings at our other partner organizations, please visit our Get Involved page.