
FLI November, 2019 Newsletter

Published
11 December, 2019
Author
Revathi Kumar


Existential Risk, AI Governance, Climate Change & Giving Tuesday


Reminder: Giving Tuesday is tomorrow! Double your impact by donating to our Facebook fundraiser right at 8 am EST. Find more detailed instructions here.





FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that’s stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at University of Oxford’s Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. Listen here.





AI Alignment Podcast: Machine Ethics and AI Governance with Wendell Wallach

Wendell Wallach has been at the forefront of contemporary emerging technology issues for decades. As an interdisciplinary thinker, he has engaged at the intersections of ethics, governance, AI, bioethics, robotics, and philosophy since the earliest formulations of what we now know as AI alignment were being codified. Wendell began with a broad interest in the ethics of emerging technology and has since focused on machine ethics and AI governance. On this month’s AI Alignment Podcast, Wendell explores his intellectual journey and participation in these fields. Listen here.


You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

Not Cool: A Climate Podcast





We wrapped up our climate change podcast series, Not Cool with Ariel Conn, this past Tuesday. Over the course of 26 interviews, Ariel spoke to 31 scientists, policy experts, journalists, activists, and other field leaders about the nature of the threat we face and the solutions currently available to us. Episodes covered everything from the basic science behind global warming to the steps we should be taking to mitigate and adapt to the crisis we’ve created. Our final episode features Naomi Oreskes, author and Harvard professor, on why we should trust climate science. You can also listen to a brief epilogue, in which Ariel reflects on what she’s learned during the making of Not Cool, the questions she’s left with, and the actions she’ll be taking going forward.



  • Episode 20: Deborah Lawrence on deforestation
  • Episode 21: Libby Jewett on ocean acidification
  • Episode 22: Cullen Hendrix on climate change and armed conflict
  • Episode 23: Brian Toon on nuclear winter: the other climate change
  • Episode 24: Ellen Quigley and Natalie Jones on defunding the fossil fuel industry
  • Episode 25: Mario Molina on climate action
  • Episode 26: Naomi Oreskes on trusting climate science

Giving Tuesday


Tomorrow is Giving Tuesday!

This year, Facebook will be matching up to $7 million in donations, so you can double your impact by contributing to our Facebook fundraiser. Last year, Facebook’s matching funds ran out in 15 seconds, so EA Giving is recommending that donors try to contribute within the first second after the match begins, on December 3rd at 8 am EST. They’ve provided more detailed instructions for those interested.

But why should you donate to us? Take a look below at what we’ve accomplished with the help of donors like you.

This year, FLI has:
  • Hosted three well-received podcasts that regularly rank among the top 100 in the technology category on Apple Podcasts
  • Advocated for, and helped inform, the development of sensible US governmental policies on emerging technology, especially related to artificial intelligence. Some examples include: advising on numerous AI legislative efforts in California and in the U.S. Congress; formal regulatory comments to NIST and HUD; numerous consultations with and recommendations to the Defense Innovation Board on AI Principles for Defense; and much more.
  • Supported growing international efforts to establish norms and governance for artificial intelligence, including at the OECD, the United Nations, and by the European Union.
  • Represented scientists who signed our letters against the development and use of lethal autonomous weapons at the United Nations Convention on Conventional Weapons
  • Produced a video on lethal autonomous weapons that aired at the UN and has received ~20,000 views
  • Held a workshop to produce a roadmap for first steps on prohibiting lethal autonomous weapons, including a 5-year use moratorium and strategies for verification, non-proliferation and de-escalation
  • Held a second Puerto Rico conference on beneficial artificial general intelligence, bringing together a group of AI researchers from academia and industry, along with thought leaders in economics, law, policy, ethics, and philosophy for five days
  • Co-organized the Augmented Intelligence Summit, which brought together a group of policy, research, and business leaders to imagine and interact with a simulated model of a positive future
  • Gave the annual $50,000 Future of Life Award to Dr. Matthew Meselson, who was a driving force behind the 1972 Biological Weapons Convention
  • Organized Women for the Future, a Women’s History Month campaign celebrating women in the field of existential risk
  • Maintained a website that receives on average 72,000 visitors a month

FLI in the News


VICE: This Guy Studies the ‘Global Systems Death Spiral’ That Might End Humanity

SAPIENS: Your Body as Part Machine
