
FLI December 2018 Newsletter

Published
January 8, 2019
Author
Revathi Kumar


Wishing you Happy Holidays and a wonderful New Year from everyone at FLI!


It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: we address these risks because we’re confident that if we can overcome them, we can achieve a future greater than any of us can imagine!

As we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

In order to achieve a better future, though, we, as a society, must first determine what that collective future should be. At FLI, we’re looking forward to working with global partners and thought leaders as we consider what “better futures” might look like and how we can work together to build them.

As FLI President Max Tegmark says, “There’s been so much focus on just making our tech powerful right now, because that makes money, and it’s cool, that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: help bring back focus on the steering of our technology and the destination.”

But establishing a positive, collective future for “just” the 7.7 billion people who are alive now is no small feat, and it’s a challenge that gets trickier as we look further into a future with many more billions of people, all of whom will likely have very different values from ours today. Fortunately, there are already quite a few people who have begun considering how we can all unite under a shared vision of the future — and how we might unite under a vision of the future that includes room for many different definitions of utopia.

For the existential hope podcast, Ariel spoke with FLI co-founders Max Tegmark and Anthony Aguirre, as well as existentialhope.com founder Allison Duettmann, futurist and researcher Anders Sandberg, tech enthusiast and entrepreneur Gaia Dempsey, and Josh Clark, host of The End of the World with Josh Clark.

You can read some of the highlights from the podcast here, or just listen to the whole podcast here.

And while we’re excited to dive into existential hope more in the coming year, we’re also happy to report some of our accomplishments in 2018:





LAWS Pledge

Over 240 AI-related companies and organizations from 36 countries, and nearly 3,200 individuals from 90 countries, signed FLI’s pledge not to develop lethal autonomous weapons systems (LAWS).





AGI Safety Grants

FLI launched our AGI Safety grants competition, awarding $2 million to 10 teams for research that anticipates artificial general intelligence (AGI) and explores how it can be designed beneficially.





CA State Legislation Supporting 23 Asilomar Principles

On August 30th, California adopted state legislation supporting FLI’s 23 Asilomar Principles. Assemblyman Kevin Kiley championed the new law, which passed with unanimous support.





Stanislav Petrov Receives 2nd Future of Life Award

To celebrate that September 26th, 2018 was not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between the Soviet Union and the U.S. on September 26, 1983, was honored in New York with the $50,000 Future of Life Award.





Max Tegmark’s TED Talk on AI

In this talk, Max separates the real opportunities and threats of artificial intelligence from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best — rather than worst — thing to ever happen to humanity.





AI Alignment Podcast

The AI Alignment Podcast was launched in April 2018 by FLI’s Lucas Perry, and guests to date have included Peter Singer, William MacAskill, and David Pearce. The newest episode features Rohin Shah and is titled “Inverse Reinforcement Learning and the State of AI Alignment.”





Global AI Policy Resource

FLI’s Jessica Cussins published a Global AI Policy resource: a comprehensive collection of research publications, information on national AI strategies, and policy recommendations for AI.




Benefits & Risks of Biotechnology

This resource covers everything you need to know about biotech, including a brief history of the field, the latest breakthroughs and mishaps, the risks of unintended consequences and of weaponizing biology, the ethics of biotechnology, and biotech’s four main tools: DNA sequencing, recombinant DNA, DNA synthesis, and genome editing.






Get Your Money Out of Nuclear Weapons

In March, FLI released a resource on nuclear divestment, dedicated to helping individuals, governments, and institutions reduce the risk from nuclear weapons by getting their money out of nuclear weapons production. Learn what you can do to help.





The FLI Podcast

Ariel Conn continued to host FLI’s monthly podcast throughout 2018, discussing existential risks and existential hope regarding AI, nuclear weapons, biotechnology, and climate change with various experts. This year’s most popular episodes were Six Experts Explain the Killer Robots Debate and Martin Rees on the Prospects for Humanity.

Listen, like & follow:
You can find all FLI podcast episodes, including the AI Alignment Podcast, on SoundCloud, iTunes, Google Play, and Stitcher.

FLI’s Volunteer of the Year





Lina has been volunteering with the Future of Life Institute since June 2016. Her helpfulness and effectiveness have led to her involvement in a number of writing and research projects during her time here. Most notably, she has been instrumental in setting up the Chinese translations now available on our website. In addition to writing translations herself, Lina has helped build a small team of volunteers within FLI dedicated to this undertaking. Under her mentorship, this team has contributed over 30 Chinese translations to the site, and Chinese now ranks as the fourth most common language among visitors to the site.

FLI’s 2018 News Highlights


VOX: 35 years ago today, one man saved us from world-ending nuclear war

NEW YORKER: How Frightened Should We Be of A.I.?

CNN: Scientists call for boycott of South Korean university over killer robot fears

IEEE SPECTRUM: Debating Slaughterbots and the Future of Autonomous Weapons

THE ATLANTIC: Why Earth’s History Appears So Miraculous

FORBES: Let’s Talk About AI Ethics; We’re On A Deadline

VICE: The Dawn of Killer Robots

INSIDE BIG DATA: A Chat on AI with MIT Professor Max Tegmark and Cheetah Mobile CEO Fu Sheng

What We Were Up to in 2018


Anthony Aguirre was featured in an Atlantic article titled “Why Earth’s History Appears So Miraculous,” where he discussed existential risks and our lucky evolutionary path. Anthony attended Effective Altruism Global in San Francisco this June.


Jessica Cussins participated in the Global Governance of AI Roundtable in Dubai this February as part of the World Government Summit. In March, she gave a talk about preventing lethal autonomous weapons to a delegation of European policymakers. Jessica participated in the Partnership on AI’s Fair, Transparent, and Accountable AI working group in May, August and November. Also in May, she attended the “Cognitive Revolution Symposium” organized by Swissnex in San Francisco. In June, she attended the Foresight Institute’s “AI Coordination: Great Powers” strategy meeting in San Francisco; the “US-China AI Technology Summit” organized by The AI Alliance in Half Moon Bay; an “AI and Human Decisionmaking” workshop led by Technology for Global Security in Palo Alto; and “Effective Altruism Global” in San Francisco. In August, Jessica gave testimony on behalf of FLI at a State Senate Committee Hearing in Sacramento, California. In December, she spoke on an AI panel at the Foresight Institute’s Vision Weekend, “Toward Futures of Existential Hope”.


Ariel Conn kicked off 2018 speaking on a panel at a side event at the World Economic Forum in Davos, Switzerland, as part of the Global Challenges Foundation’s (GCF) effort to consider new forms of global governance to address catastrophic risks. She later participated in GCF’s New Shape Forum in Stockholm on the same subject. Ariel was also a keynote speaker talking about AI’s impact on society at the GEN Summit in Lisbon. This summer, she gave a statement on the UN floor on behalf of FLI regarding lethal autonomous weapons systems (LAWS), and she spoke at a UN-CCW side event hosted by the Campaign to Stop Killer Robots. Her comments at the UN were later incorporated into a resolution passed by the European Parliament in support of a ban on LAWS. She also organized and helped moderate a two-day meeting in Pretoria, South Africa for local and regional UN delegates, regarding the nuclear ban treaty and LAWS.


Victoria Krakovna kicked off 2018 with an AI strategy retreat co-run by the Future of Humanity Institute and the Center for Effective Altruism. Later in the spring, Victoria attended a workshop on AI safety at the Center for Human-Compatible AI (CHAI) at UC Berkeley. In the fall she ran a session at EA Global London on the machine learning approach to AI safety with Jan Leike, where they explored different research agendas.


Richard Mallah participated in Effective Altruism Global in Boston earlier this year, co-hosting a brief workshop on x-risk related skills. He also participated in two of the Partnership on AI’s working groups: one on AI, Labor, and the Economy, and another on Safety-Critical AI. Richard attended the Foresight Institute’s “AI Coordination: Great Powers” strategy meeting in San Francisco, and was interviewed by George Gantz and the Long Now Boston audience on FLI and the future of AI alignment with Lucas Perry. Richard gave a talk on technical AI safety at the Partnership on AI’s “All Partners Meeting” in November.


Lucas Perry gave a talk at the Cambridge “Don’t Bank on the Bomb” forum in January, where he presented opportunities and strategies for individuals and institutions to divest from nuclear weapons-producing companies. In February, he participated in an Individual Outreach Forum with the Center for Effective Altruism, and in April he participated in Effective Altruism Global in Boston, co-hosting a workshop on x-risk related skills with Richard Mallah. In October, Lucas was interviewed by George Gantz and the Long Now Boston audience on FLI and the future of AI Alignment with Richard Mallah.


Max Tegmark started off 2018 giving talks on AI safety and his book, Life 3.0. Max spoke at the American Museum of Natural History, gave the 2018 Beyond Annual Lecture at Arizona State University, and spoke with Sam Harris and Rebecca Goldstein on the Waking Up podcast about the foundations of human knowledge and morality. Max spoke at Effective Altruism Global about AGI and nuclear disarmament, he debated the future of AI with Andrew Ng and Jim Breyer at the Microsoft CEO Summit, and he gave an invited talk titled “Beneficial Intelligence & Intelligible Intelligence” at the IJCAI/ECAI AI conference in Stockholm, Sweden. Lastly, Max spent much of the fall on a book tour through East Asia, giving talks on AI safety in South Korea, Japan, and China.


As a nonprofit, FLI relies on donations from people like you to keep doing the awesome things we get to do. If you’d like to help out, please check out our Get Involved page.
