FLI March 2018 Newsletter
What Can You Do About Nuclear Weapons?
FLI Resources: Nuclear Divestment Made Easy
This month, FLI launched our new page dedicated to helping individuals, governments, and institutions reduce the risk from nuclear weapons by getting their money out of nuclear weapons production.
Stigmatizing companies that produce nuclear weapons through actions like divestment and boycotts can make it financially beneficial for them to pull out of nuclear weapons production, as occurred with land mines and cluster munitions, while also drawing attention to the risk-increasing ways our taxes are used.
Learn more about divestment on this page.
FLI continues to work with local groups to spread awareness about nuclear risk and what we can do about it. On April 7th, Lucas Perry will represent FLI at a conference and workshop co-organized by Massachusetts Peace Action and Radius MIT. The event is free for students, and speakers will include former Representative John Tierney, Harvard professor Elaine Scarry, and Peace Action’s Cole Harrison.
Learn more about the event here, and sign up here.
Check us out on SoundCloud and iTunes!
Podcast: The Malicious Use of Artificial Intelligence
with Shahar Avin and Victoria Krakovna
Is it possible to create AI that isn’t used maliciously? If the history of technological progress has taught us anything, it’s that every “beneficial” technological breakthrough can be used to cause harm. How can we keep AI technology away from bad actors and dangerous governments?
On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Centre for the Study of Existential Risk (CSER). They talk about CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.
Topics discussed in this episode include:
- the Facebook Cambridge Analytica scandal,
- Goodhart’s Law with AI systems,
- spear phishing with machine learning algorithms,
- why it’s so easy to fool ML systems,
- and why developing AI is still worth it in the end.
You can listen to this podcast here.
Future of Humanity Institute Job Opening: Senior Administrator
Applications close April 20th, 2018
FHI is entering a period of rapid expansion, and they’re excited to invite applications for a full-time Senior Administrator to oversee the effective and efficient day-to-day non-academic management and administration of FHI and the Global Priorities Institute (GPI). Candidates should apply via this link.
ICYMI: This Month’s Most Popular Articles
Stephen Hawking in Memoriam
By Max Tegmark
As we mourn the loss of Stephen Hawking, we should remember that his legacy goes far beyond science. Yes, of course he was one of the greatest scientists of the past century, but he also had a remarkable legacy as a social activist who looked far beyond the next election cycle and used his powerful voice to bring out the best in us all.
How AI Handles Uncertainty: An Interview With Brian Ziebart
By Tucker Davey
Ziebart is conducting research to improve AI systems’ ability to operate amidst the inherent uncertainty around them. The physical world is messy and unpredictable, and if we are to trust our AI systems, they must be able to safely handle it.
By Kirsten Gronlund
Many scientists see the 1.5 degree target as an impossible goal, but a study by Richard Millar et al. concludes that the 1.5 degree limit is still physically feasible, if only narrowly. The study also provides an updated “carbon budget”: a projection of how much more carbon dioxide we can emit without exceeding the 1.5 degree limit.
What We’ve Been Up to This Month
Max Tegmark spoke with Sam Harris and Rebecca Goldstein on Harris’s Waking Up podcast this month.
Ariel Conn participated in the second session of the N Square Innovators Network to develop new ideas and means for communicating nuclear risks and threats.
Jessica Cussins gave a talk about efforts to prevent lethal autonomous weapons to a delegation of European policymakers organized by the German Marshall Fund in San Francisco on March 15th.
FLI in the News
An interview with Anthony Aguirre, published in The Atlantic
In this article, Peter Brannen of The Atlantic interviews FLI co-founder Anthony Aguirre, a physics professor at UC Santa Cruz, and Anders Sandberg of the Future of Humanity Institute. They discuss existential risks, why we’ve never had a nuclear war, and how lucky and strange our evolutionary path has been.
Waking Up Podcast: #120 – What Is and What Matters
“In this episode of the Waking Up podcast, Sam Harris speaks with Rebecca Goldstein and Max Tegmark about the foundations of human knowledge and morality.”
“I think Elon Musk’s real worry is that somebody will develop an artificial intelligence that is really really superior to what everybody else in the world has. And if one organization, whether it’s a company, country or an individual in their basement has a system like this, it grants a huge amount of power,” Anthony Aguirre, University of California, Santa Cruz Physics Professor, said.
If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.