
FLI March 2018 Newsletter

Published
April 4, 2018
Author
Revathi Kumar


What Can You Do About Nuclear Weapons?


FLI Resources: Nuclear Divestment Made Easy


This month, FLI launched a new page dedicated to helping individuals, governments, and institutions reduce the risk from nuclear weapons by getting their money out of nuclear weapons production.

The risk that nuclear weapons will be used seems to increase daily, and over a trillion dollars are slated for more lethal, easier-to-use, and more destabilizing nuclear weapons. Most of that money will come from taxes, but it will be paid to companies that produce a variety of products. In fact, for many of these companies, the nuclear weapons and components they create are only a small fraction of total production.

Stigmatizing these companies through actions like divestment and boycotts can make it financially beneficial for them to pull out of nuclear weapons production, as occurred with land mines and cluster munitions, and can also draw attention to this risk-increasing use of our taxes.

Learn more about divestment on this page.




FLI continues to work with local groups to spread awareness about nuclear risk and what we can do about it. On April 7th, Lucas Perry will represent FLI at a conference and workshop co-organized by Massachusetts Peace Action and Radius MIT. The event is free for students, and speakers will include former Representative John Tierney, Harvard professor Elaine Scarry, and Peace Action’s Cole Harrison.

Learn more about the event here, and sign up here.

Check us out on SoundCloud and iTunes!

Podcast: The Malicious Use of Artificial Intelligence
with Shahar Avin and Victoria Krakovna


Is it possible to create AI that isn’t used maliciously? If the history of technological progress has taught us anything, it’s that every “beneficial” technological breakthrough can be used to cause harm. How can we keep AI technology away from bad actors and dangerous governments?

On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Centre for the Study of Existential Risk (CSER). They discuss CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts underway to ensure safe and beneficial AI.

Topics discussed in this episode include:

    • the Facebook Cambridge Analytica scandal,
    • Goodhart’s Law with AI systems,
    • spear phishing with machine learning algorithms,
    • why it’s so easy to fool ML systems,
    • and why developing AI is still worth it in the end.

You can listen to this podcast here.





Future of Humanity Institute Job Opening: Senior Administrator
Applications close April 20th, 2018

FHI is entering a period of rapid expansion, and they’re excited to invite applications for a full-time Senior Administrator to oversee the effective and efficient day-to-day non-academic management and administration of FHI and the Global Priorities Institute (GPI). Candidates should apply via this link.


ICYMI: This Month’s Most Popular Articles





Stephen Hawking in Memoriam
By Max Tegmark

As we mourn the loss of Stephen Hawking, we should remember that his legacy goes far beyond science. Yes, of course he was one of the greatest scientists of the past century, but he also had a remarkable legacy as a social activist, who looked far beyond the next election cycle and used his powerful voice to bring out the best in us all.






How AI Handles Uncertainty: An Interview With Brian Ziebart
By Tucker Davey

Ziebart is conducting research to improve AI systems’ ability to operate amidst the inherent uncertainty around them. The physical world is messy and unpredictable, and if we are to trust our AI systems, they must be able to safely handle it.





Many scientists see the 1.5 degree target as an impossible goal, but a study by Richard Millar et al. concludes that the 1.5 degree limit is still physically feasible, if only narrowly. It also provides an updated “carbon budget”: a projection of how much more carbon dioxide we can emit without exceeding the 1.5 degree limit.


What We’ve Been Up to This Month


Max Tegmark spoke with Sam Harris and Rebecca Goldstein on Harris’s Waking Up podcast this month.


Ariel Conn participated in the second session of the N Square Innovators Network to develop new ideas and means for communicating nuclear risks and threats.


Jessica Cussins gave a talk about efforts to prevent lethal autonomous weapons to a delegation of European policymakers organized by the German Marshall Fund in San Francisco on March 15th.

FLI in the News




Why Earth’s History Appears So Miraculous
An interview with Anthony Aguirre, published in The Atlantic

In this article, Peter Brannen of The Atlantic interviews FLI co-founder and UC Santa Cruz physics professor Anthony Aguirre, along with Anders Sandberg from the Future of Humanity Institute. They discuss existential risks, why we’ve never had a nuclear war, and how lucky and strange our evolutionary path has been.



WAKING UP PODCAST: #120 – What Is and What Matters
“In this episode of the Waking Up podcast, Sam Harris speaks with Rebecca Goldstein and Max Tegmark about the foundations of human knowledge and morality.”

True artificial intelligence is on its way, and we aren’t ready for it. Just as our forefathers had trouble visualizing everything from the modern car to the birth of the computer, it’s difficult for most people to imagine how much truly intelligent technology could change our lives as soon as the next decade — and how much we stand to lose if AI goes out of our control.

CGTN AMERICA: Metaculus website hosts predictions about scientific and technological issues
“I think Elon Musk’s real worry is that somebody will develop an artificial intelligence that is really really superior to what everybody else in the world has. And if one organization, whether it’s a company, country or an individual in their basement has a system like this, it grants a huge amount of power,” said Anthony Aguirre, physics professor at the University of California, Santa Cruz.


If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.
