
FLI Special Newsletter: Future of Life Award 2021

This picture shows Joe Farman (left), Susan Solomon (centre) and Stephen O. Andersen (right), recipients of the 2021 Future of Life Award.

Three Heroes Who Helped Save the Ozone Layer

What happens when the technologies that we can’t live without become technologies that we can’t live with?

The Future of Life Institute is thrilled to present Joe Farman, Susan Solomon, and Stephen O. Andersen with the 2021 Future of Life Award for their important contributions to the passage and success of the Montreal Protocol. The Protocol banned the production and use of ozone-depleting chlorofluorocarbon gases (CFCs) and as a result the Antarctic ozone hole is now closing.

Had the world not acted, the global ozone layer would have collapsed by 2050. By 2070, the UV index would have reached 30 – anything over 11 is considered extreme – causing roughly 2.8 million excess skin cancer deaths and 45 million cataracts. It’s estimated that the world would also have been 4.5 degrees Fahrenheit warmer – a level most climatologists agree is disastrously high – prompting the collapse of entire ecosystems and agriculture.

In 1985, geophysicist Joe Farman and his team from the British Antarctic Survey discovered the ozone hole above Antarctica. Their measurements indicated an alarming rate of ozone depletion and effectively shocked the scientific community, as well as governments and wider society, into action.

Atmospheric chemist Susan Solomon determined the cause of the hole: stratospheric clouds that form only above Antarctica were catalyzing additional ozone-depleting reactions when lit by the sun during spring. Her research drove momentum on the road to regulation, and she herself acted as an important bridge between the scientific community and policymakers, showing the importance of interdisciplinary communication in creating a united front against global challenges.

From medical to military uses, 240 industrial sectors would need to be reorganized to prevent global catastrophe. Stephen O. Andersen, Deputy Director for Stratospheric Ozone Protection at the US Environmental Protection Agency during the Reagan administration, took on the challenge of transforming this industrial juggernaut. He founded and, from 1988 to 2012, co-chaired the Technology and Economic Assessment Panel, working with industry to develop hundreds of innovative solutions for phasing out CFCs. His work was critical to the Protocol’s success.

The Future of Life Award is given annually to individuals who, without having received much recognition at the time, have helped make today dramatically better than it may otherwise have been. You can find out more about the Award here, and please explore the educational materials we have produced about this year’s winners below!

MinuteEarth’s “How to Solve Every Crisis” Video

To celebrate the fifth anniversary of the Future of Life Award, FLI collaborated with popular YouTube channel MinuteEarth to produce a video drawing together lessons from the stories of the Montreal Protocol, the focus of this year’s award, and the eradication of smallpox, the focus of last year’s award, for managing global catastrophic threats — from ecological devastation to the spread of global pandemics and beyond.

Watch the video here.

Special Podcast Episodes

Photo of Susan Solomon and Stephen O. Andersen, podcast guests and winners of the 2021 Future of Life Award.

FLI Podcast Special: Susan Solomon and Stephen O. Andersen on Saving the Ozone Layer

In this special episode of the FLI Podcast, Lucas Perry speaks with our 2021 Future of Life Award winners about what Stephen Andersen describes as “science and politics at its best” — the scientific research that revealed ozone depletion and the work that went into the Montreal Protocol, which steered humanity away from the chemical compounds that caused it.

Among other topics, Susan Solomon discusses the inquiries and discoveries that led her to study the atmosphere above the Antarctic, and Stephen describes how together science and public pressure moved industry faster than the speed of politics. To wrap up, the two apply lessons learnt to today’s looming global threats, including climate change.

Cosmic Queries in the O-Zone: Saving the World with Susan Solomon & Stephen Andersen


What happens to our planet without ozone? How did entire industries move to new, safer chemicals? How does the public’s interest in environmental issues create the possibility for meaningful action — and could we do it all again in today’s divided world?

Astrophysicist Neil deGrasse Tyson and comedian Chuck Nice speak with Susan and Stephen on the popular podcast StarTalk about what they did to save the planet — and what’s left to be done.

Further (Non-FLI) Resources

The Hole: A Short Film on the Montreal Protocol, narrated by Sir David Attenborough

A short film produced by the United Nations Ozone Secretariat explaining the scientific and policy demands that drove the Montreal Protocol, a global, cooperative ban on CFCs to save the ozone layer. The Protocol was the first treaty in United Nations history to achieve universal ratification, and NASA estimates that, had we not taken action, the ozone hole could have been 10 times worse.

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.

FLI Summer 2021 Newsletter

FLI’s Take on the EU AI Act

How you understand risk may differ from how your neighbors understand it. But when threats appear, it’s critical for everyone to agree — and act. That’s what’s driving our work on the European Union’s AI Act, described as “one of the first major policy initiatives worldwide focused on protecting people from harmful AI” in a recent article in Wired magazine.

The article references our work and priorities in the EU: With the very definition of “High Risk” under negotiation, we’re making the case that the threshold for what counts as “subliminal manipulation” should be lowered — and should include addictive adtech, which contributes to misinformation, extremism and, arguably, poor mental health.

The European Commission is the first major regulator in the world to propose a law on AI and will ultimately set policy for the EU’s 27 member states. FLI has submitted its feedback on this landmark act, which you can read here. Our top recommendations include:

  • Ban any and all AI manipulation that adversely impacts fundamental rights or seriously distorts human decision-making.
  • Ensure AI providers consider the social impact of their systems — because applications that do not violate individual rights may nonetheless have broader societal consequences.
  • Require a complete risk assessment of AI systems, rather than classifying entire systems by a single use. The current proposal, for example, would regulate an AI that assesses students’ performance, but would have nothing to say when that same AI offers biased recommendations in educational materials.


Taken together, there are 10 recommendations that build on FLI’s foundational Asilomar Principles for AI governance.

Policy & Outreach Efforts

How do you prove that you’ve been harmed by an AI when you can’t access the data or algorithm that caused it? If a self-learning AI causes harm 11 years after the product was put on the market, should its producer be allowed to disavow liability? And can a car manufacturer shift liability for an autonomous vehicle simply by burying a legal clause in lengthy terms and conditions?

FLI explored these and other questions in our response to the EU’s new consultation on AI liability. We argued that new rules are necessary to protect the rights of consumers and to encourage AI developers to make their products safer. You can download our full response here.

“Lethal Autonomous Weapons Exist; They Must Be Banned”

Following a recent UN report stating that autonomous weapons were deployed to kill Libyan National Army forces in 2020, Stuart Russell and FLI’s Max Tegmark, Emilia Javorsky and Anthony Aguirre co-authored an article in IEEE Spectrum calling for an immediate moratorium on the development, deployment, and use of lethal autonomous weapons.

Future of Life Institute set to launch $25 million grant program for Existential Risk Reduction


FLI intends to launch its $25M grants program in the coming weeks! This program will focus on reducing existential risks, events that could cause the permanent collapse of civilization or even human extinction.

Watch this space!

New Podcast Episodes

Michael Klare on the Pentagon’s view of Climate Change and the Risk of State Collapse

The US Military views climate change as a leading threat to national security, says Michael Klare. On this episode of the FLI Podcast, Klare, the Five College Professor of Peace & World Security Studies, discussed the Pentagon’s strategy for adapting to this emergent threat.

In the interview, Klare notes that climate change has already done “tremendous damage” to US military bases across the Gulf of Mexico. Later, he discusses how global warming is driving new humanitarian crises that the military must respond to. Also of interest: the military’s view of climate change as a “threat multiplier,” a complicating factor in the complex web of social, economic, and diplomatic tensions that could heighten the probability of armed conflict.

Avi Loeb on Oumuamua, Aliens, Space Archeology, Great Filters and Superstructures

Oumuamua, an object with seemingly unnatural properties, appeared from beyond our solar system in 2017. Its appearance raised questions – and controversial theories – about where it came from. In this episode of the FLI Podcast, Avi Loeb, Professor of Science at Harvard University, shared theories of Oumuamua’s origins — and why science sometimes struggles to explore extraordinary events.

Loeb describes the common properties of space debris – “bricks left over from the construction project of the solar system” – and what he finds so unique about Oumuamua among these celestial objects. He shares why many mainstream theories don’t satisfy him, and recounts the history of scientists investigating challenging questions, dating back to the days of Copernicus.


A new preliminary report from the US Office of the Director of National Intelligence reported 144 cases of what it called “unidentified aerial phenomena” — a new phrase for UFOs. In this bonus episode, Lucas continues his conversation with Avi Loeb to discuss the importance of this report and what it means for science and the search for extraterrestrial intelligence.

News & Reading

The Centre for the Study of Existential Risk is an interdisciplinary research centre at the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse.

They are seeking a Senior Research Associate / Academic Programme Manager to play a central role in the operation and delivery of research programmes, including the management of major research projects, line management of postdoctoral researchers, strategic planning, and fundraising.

For consideration, apply by 20 September.

How the U.S. Military can Fight the ‘Existential Threat’ of Climate Change

After the US Secretary of Defense called climate change “a profoundly destabilizing force for our world,” our recent podcast guest Michael Klare penned an op-ed in the LA Times. Klare, the Five College Professor of Peace & World Security Studies, calls on the Pentagon to outline specific actions that would lead to “far greater reductions in fossil fuel use and greenhouse gas emissions,” including allocating research funds to green technologies.

Rain Observed at the Summit of Greenland Ice Sheet for the First Time

Rain was reported in an area that has only seen temperatures above freezing three times in recorded history. Rain on the ice sheet, which is 10,551 feet above sea level, is warmer than the ice, creating conditions for meltwater to run off or re-freeze.

A recent UN report has suggested that sustained global temperatures beyond 2 degrees Celsius would lead to the total collapse of the ice sheet. The presence of rain could accelerate a melt-off already underway, eventually elevating sea levels by as much as 23 feet.


FLI June 2021 Newsletter

The Future of Life Institute is delighted to announce a $25M multi-year grant program aimed at reducing existential risk. Existential risks are events that could cause human extinction or permanently and drastically curtail humanity’s potential, and currently efforts to mitigate these risks receive remarkably little funding and attention relative to their importance. This program is made possible by the generosity of cryptocurrency pioneer Vitalik Buterin and the Shiba Inu community.

Specifically, the program will support interventions designed to directly reduce existential risk; prevent politically destabilising events that compromise international cooperation; actively improve international cooperation; and develop positive visions for the long-term future that incentivise both international cooperation and the development of beneficial technologies. The emphasis on collaboration stems from our conviction that technology is not a zero-sum game, and that in all likelihood it will cause humanity either to flourish or to flounder.

Shiba Inu Grants will support projects, particularly research. Vitalik Buterin Fellowships will bolster the pipeline through which talent flows towards our areas of focus; this may include funding for high school summer programs, college summer internships, graduate fellowships and postdoctoral fellowships.

To read more about the program, click here.

New Podcast Episodes

Nicolas Berggruen on the Dynamics of Power, Wisdom and Ideas in the Age of AI

In this episode of the Future of Life Institute Podcast, Lucas is joined by investor and philanthropist Nicolas Berggruen to discuss the nature of wisdom, why it lags behind technological growth and the power that comes with technology, and the role ideas play in the value alignment of technology.

Later in the episode, the conversation turns to the increasing concentration of power and wealth in society, universal basic income and a proposal for universal basic capital.

To listen, click here.

Reading & Resources

The Centre for the Study of Existential Risk is hiring for a Deputy Director!

The Centre for the Study of Existential Risk, University of Cambridge, is looking for a new Deputy Director. This role will involve taking full operational responsibility for the day-to-day activities of the Centre, including people and financial management, and contributing to strategic planning for the centre.

CSER is looking for someone with strong experience in operations and strategy, with the interest and intellectual versatility to engage with and communicate the Centre’s research.

The deadline for applications is Sunday 4 July. More details on both the role and person profile are available in the further particulars, here.

The Leverhulme Centre for the Future of Intelligence (CFI) and CSER are also hiring for a Centre Administrator to lead the Department’s professional services support team. Further details can be found here.

The Global Catastrophic Risk Institute is looking for collaborators and advisees!

The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking their advice and/or collaborating with them. These inquiries can concern any aspect of global catastrophic risk but GCRI is particularly interested to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks and improving China-West relations.

Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!

FLI May 2021 Newsletter

The outreach team is now recruiting Spanish and Portuguese speakers for translation work!

The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet.

We ask for availability of up to two hours per week, though we do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term.

For more details and to apply, please fill out this form. We are also registering other languages for future opportunities so those with fluency in other languages may fill out this form as well.

New Podcast Episodes

Bart Selman on the Promises and Perils of Artificial Intelligence

In this new podcast episode, Lucas is joined by Bart Selman, Professor of Computer Science at Cornell University, to discuss all things artificial intelligence.

Highlights of the interview include Bart talking about what superintelligence could consist in, whether superintelligent systems might solve problems like income inequality and whether they could teach us anything about moral philosophy. He also discusses the possibility of AI consciousness, the grave threat of lethal autonomous weapons and whether the global race to advanced artificial intelligence may negatively affect our chances of successfully solving the alignment problem. Enjoy!

Reading & Resources

The Centre for the Study of Existential Risk is hiring for a Deputy Director!

The Centre for the Study of Existential Risk, University of Cambridge, is looking for a new Deputy Director. This role will involve taking full operational responsibility for the day-to-day activities of the Centre, including people and financial management, and contributing to strategic planning for the centre.

CSER is looking for someone with strong experience in operations and strategy, with the interest and intellectual versatility to engage with and communicate the Centre’s research.

The deadline for applications is Sunday 4 July. More details on both the role and person profile are available in the further particulars, here.

The Leverhulme Centre for the Future of Intelligence (CFI) and CSER are also hiring for a Centre Administrator to lead the Department’s professional services support team. Further details can be found here.

The Global Catastrophic Risk Institute is looking for collaborators and advisees!

The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking their advice and/or collaborating with them. These inquiries can concern any aspect of global catastrophic risk but GCRI is particularly interested to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks and improving China-West relations.

Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!

This article in the New York Times details how scientific breakthroughs together with advocacy efforts caused the average lifespan to double between 1920 and 2020. We were particularly pleased to see last year’s Future of Life Award winner Bill Foege mentioned for his crucial role in the eradication of smallpox.

“The story of our extra life span almost never appears on the front page of our actual daily newspapers, because the drama and heroism that have given us those additional years are far more evident in hindsight than they are in the moment. That is, the story of our extra life is a story of progress in its usual form: brilliant ideas and collaborations unfolding far from the spotlight of public attention, setting in motion incremental improvements that take decades to display their true magnitude.”

The International Committee of the Red Cross (ICRC) recently released its official position on autonomous weapons: “Unpredictable autonomous weapon systems should be expressly ruled out…This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained.”

FLI April 2021 Newsletter

Exciting Updates to the FLI Website!

Thanks to the tireless efforts of Anna Yelizarova and Meia Chita-Tegmark, there have been some exciting updates to our website! We have a new and improved homepage as well as new landing pages for each of our four areas of focus: AI, biotechnology, nuclear weapons and climate change. Our hope is that these changes will make the site easier to navigate and the educational resources easier to access, for returning and new visitors alike.

European Commission releases its proposal for a comprehensive regulation of AI systems

The European Commission has published its long-awaited proposal for a comprehensive regulation of AI systems. It recommends that systems considered a clear threat to the safety, livelihoods and rights of EU citizens be banned, including systems or applications that manipulate human behaviour, and that other “high risk” systems be subject to strict safety requirements. If adopted by the European Parliament, these regulations would apply across all the member states of the European Union.

Having actively participated in the Commission’s debate about future AI governance, our policy team is looking forward to reviewing and providing feedback on the proposal at the earliest opportunity.

FLI’s Ongoing Policy Efforts in the U.S.

The U.S. Congress has introduced a number of bills that would dramatically reform U.S. government funding for research and development. We continue to support policymakers as they evaluate how to advance innovation in emerging technologies while being attuned to safety and ethical concerns. This builds on the work FLI did to support the National AI Initiative Act that passed last December.

New Podcast Episodes

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

In this episode of the Future of Life Institute Podcast, Lucas Perry is joined by Jaan Tallinn, an investor, philanthropist, founding engineer of Skype and co-founder of the Future of Life Institute and the Centre for the Study of Existential Risk.

“AI is the only meta-technology such that if you get AI right, you can fix the other technologies.”

Jaan explains why he believes we should prioritise the mitigation of risks from artificial intelligence and synthetic biology ahead of those from climate change and nuclear weapons, why it’s productive to think about AI adoption as a delegation process and why, despite his concern about the possibility of unaligned artificial general intelligence, he continues to invest heavily in AI research. He also discusses generational forgetfulness and his current strategies for maximising philanthropic impact, including funding the development of promising software.

Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

Joscha Bach, cognitive scientist and AI researcher, and Anthony Aguirre, UCSC Professor of Physics and FLI co-founder, come together to explore the world through the lens of computation and discuss the difficulties we face on the way to beneficial futures.

In this mind-blowing episode, Joscha and Anthony discuss digital physics, the idea that all quantities in nature are finite and discrete, making all physical processes intrinsically computational, and the nature of knowledge and human consciousness. In addition, they consider bottlenecks to beneficial futures, the role mortality plays in preventing poorly aligned incentives within institutions and whether competition between multiple AGIs could produce positive outcomes.

Reading & Resources

Malaria vaccine hailed as potential breakthrough

The Jenner Institute at the University of Oxford has announced that a newly developed malaria vaccine proved to be 77% effective when trialled in 450 children in Burkina Faso.

If these findings hold up in larger trials, this will be the first malaria vaccine to reach the World Health Organisation’s goal of at least 75% efficacy, with the most effective malaria vaccine to date having only shown 55% efficacy.

FLI March 2021 Newsletter

The Future of Life Institute is hiring for a Director of European Policy, Policy Advocate, and Policy Researcher.

The Director of European Policy will be responsible for leading and managing FLI’s European-based policy and advocacy efforts on both lethal autonomous weapon systems and on artificial intelligence.

The Policy Advocate will be responsible for supporting FLI’s ongoing policy work and advocacy in the U.S. government, especially (but not exclusively) at a federal level. They will be focused primarily on influencing near-term policymaking on artificial intelligence to maximise the societal benefits of increasingly powerful AI systems. Additional policy areas of interest may include synthetic biology, nuclear weapons policy, and the general management of global catastrophic and existential risk.

The Policy Researcher will be responsible for supporting FLI’s ongoing policy work across a wide array of governance fora through the production of thoughtful, practical policy research. This position will focus primarily on researching near-term policymaking on artificial intelligence to maximise the societal benefits of increasingly powerful AI systems. Additional policy areas of interest may include lethal autonomous weapon systems, synthetic biology, nuclear weapons policy, and the general management of global catastrophic and existential risk.

The positions are remote, though from varying locations, and pay is negotiable, competitive, and commensurate with experience.

Applications will be accepted on a rolling basis until the positions are filled.

For further information about the roles and how to apply, click here.

FLI Relaunches autonomousweapons.org

We are pleased to announce that, thanks to the brilliant efforts of Emilia Javorsky and Anna Yelizarova, we have now relaunched autonomousweapons.org. This site is intended as a comprehensive educational resource where anyone can go to learn about lethal autonomous weapon systems: weapons that can identify, select and target individuals without human intervention.

Lethal autonomous weapons are not the stuff of science fiction, nor do they look anything like the Terminator; they are already here in the form of unmanned aerial vehicles, vessels, and tanks. As the United States, United Kingdom, Russia, China, Israel and South Korea all race to develop and deploy them en masse, the need for international regulation to maintain meaningful human control over the use of lethal force has become ever more pressing.

Using autonomousweapons.org, you can read up on the global debate surrounding these emerging systems; the risks they pose, from potential violations of international humanitarian law and algorithmic bias in facial recognition technologies to their being the ideal weapon for terror and assassination; the policy options; and how you can get involved.

Nominate an Unsung Hero for the 2021 Future of Life Award!

We’re excited to share that we’re accepting nominations for the 2021 Future of Life Award!

The Future of Life Award is given to an individual who, without having received much recognition at the time, has helped make today dramatically better than it may otherwise have been.

The first two recipients, Vasili Arkhipov and Stanislav Petrov, made judgements that likely prevented a full-scale nuclear war between the U.S. and U.S.S.R. In 1962, amid the Cuban Missile Crisis, Arkhipov, stationed aboard a Soviet submarine headed for Cuba, refused to give his consent for the launch of a nuclear torpedo when the captain became convinced that war had broken out. In 1983, Petrov decided not to act on an early-warning detection system that had erroneously indicated five incoming US nuclear missiles. We know today that a global nuclear war would cause a nuclear winter, possibly bringing about the permanent collapse of civilisation, if not human extinction.

The third recipient, Matthew Meselson, was the driving force behind the 1972 Biological Weapons Convention. Having been ratified by 183 countries, the treaty is credited with preventing biological weapons from ever entering into mainstream use.

The 2020 winners, William Foege and Viktor Zhdanov, made critical contributions towards the eradication of smallpox. Foege pioneered the public health strategy of ‘ring vaccination’ and surveillance, while Zhdanov, the Deputy Minister of Health for the Soviet Union at the time, convinced the WHO to launch and fund a global eradication programme. Smallpox is thought to have killed 500 million people in its last century, and its eradication in 1980 is estimated to have saved 200 million lives so far.

The Award is intended not only to celebrate humanity’s unsung heroes, but to foster a dialogue about the existential risks we face. We also hope that by raising the profile of individuals worth emulating, the Award will contribute to the development of desirable behavioural norms.

If you know of someone who has performed an incredible act of service to humanity but been overlooked by history, nominate them for the 2021 Award. This person may have made a critical contribution to a piece of groundbreaking research, set an important legal precedent, or perhaps alerted the world to a looming crisis; we’re open to suggestions! If your nominee wins, you’ll receive $3,000 from FLI as a token of our gratitude.

 

New Podcast Episodes

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

In this episode of the AI Alignment Podcast, Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Among other topics, Roman discusses the need for impossibility results within computer science, the halting problem, and his research findings on AI explainability, comprehensibility, and controllability, as well as how these facets relate to each other and to AI alignment.

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

In this episode of the Future of Life Institute Podcast, we are joined by Stuart Russell, Professor of Computer Science at the University of California, Berkeley, and Zachary Kallenborn, self-described “analyst in horrible ways people kill each other” and drone swarms expert, to discuss the highest-risk aspects of lethal autonomous weapons.

Stuart and Zachary cover a wide range of topics, including the potential for drone swarms to become weapons of mass destruction, as well as how they could be used to deploy biological, chemical and radiological weapons, the risks of rapid escalation of conflict, unpredictability and proliferation, and how the regulation of lethal autonomous weapons could set a precedent for future AI governance.

To learn more about lethal autonomous weapons, visit autonomousweapons.org.

Reading & Resources

Max Tegmark on the INTO THE IMPOSSIBLE Podcast

Max Tegmark joined Dr. Brian Keating on the INTO THE IMPOSSIBLE podcast to discuss questions such as whether we can grow our prosperity through automation without leaving people lacking income or purpose, how we can make future artificial intelligence systems more robust such that they do what we want without crashing, malfunctioning or getting hacked, and whether we should fear an arms race in lethal autonomous weapons.

How easy would it be to snuff out humanity?

“If you play Russian roulette with one or two bullets in the cylinder, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to be a wise gamble.”

Read this fantastic overview of the existential and global catastrophic risks humanity currently faces by Lord Martin Rees, Astronomer Royal and Co-founder of the Centre for the Study of Existential Risk, University of Cambridge.

Disease outbreaks more likely in deforestation areas, study finds

“Diseases are filtered and blocked by a range of predators and habitats in a healthy, biodiverse forest. When this is replaced by a palm oil plantation, soy fields or blocks of eucalyptus, the specialist species die off, leaving generalists such as rats and mosquitoes to thrive and spread pathogens across human and non-human habitats.”

A new study suggests that epidemics are likely to increase as a result of environmental destruction, in particular, deforestation and monoculture plantations.

 

Boris Johnson is playing a dangerous nuclear game

“By deciding to increase the cap, the UK – the world’s third country to develop its own nuclear capability – is sending the wrong signal: rearm. Instead, the world should be headed to the negotiating table to breathe new life into the arms control talks…The UK could play an important role in stopping the new nuclear arms race, instead of restarting it.”

A useful analysis by Serhii Plokhy, Professor of History at Harvard University, on how Prime Minister Boris Johnson may fuel a nuclear arms race by increasing the United Kingdom’s nuclear stockpile by 40%.