This month, FLI launched our new page dedicated to helping individuals, governments, and institutions reduce nuclear risk by getting their money out of nuclear weapons production.
Stigmatizing these companies through actions like divestment and boycotts can make it financially advantageous for them to exit nuclear weapons production, as happened with land mines and cluster munitions, while also drawing attention to how our tax dollars fund risk-increasing activities.
Learn more about divestment on this page.
with Shahar Avin and Victoria Krakovna
Is it possible to create AI that isn’t used maliciously? If the history of technological progress has taught us anything, it’s that every “beneficial” technological breakthrough can be used to cause harm. How can we keep AI technology away from bad actors and dangerous governments?
On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Center for the Study of Existential Risk (CSER). They discuss CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts underway to ensure safe and beneficial AI.
Topics discussed in this episode include:
- the Facebook Cambridge Analytica scandal,
- Goodhart’s Law with AI systems,
- spear phishing with machine learning algorithms,
- why it’s so easy to fool ML systems, and
- why developing AI is still worth it in the end.
You can listen to this podcast here.
ICYMI: This Month’s Most Popular Articles
What We’ve Been Up to This Month
Max Tegmark spoke with Sam Harris and Rebecca Goldstein on Harris’s Waking Up podcast this month.
Ariel Conn participated in the second session of the N Square Innovators Network to develop new ideas and means for communicating nuclear risks and threats.
Jessica Cussins gave a talk about efforts to prevent lethal autonomous weapons to a delegation of European policymakers organized by the German Marshall Fund in San Francisco on March 15th.
FLI in the News
WAKING UP PODCAST: #120 – WHAT IS AND WHAT MATTERS
“In this episode of the Waking Up podcast, Sam Harris speaks with Rebecca Goldstein and Max Tegmark about the foundations of human knowledge and morality.”
“I think Elon Musk’s real worry is that somebody will develop an artificial intelligence that is really really superior to what everybody else in the world has. And if one organization, whether it’s a company, country or an individual in their basement has a system like this, it grants a huge amount of power,” Anthony Aguirre, University of California, Santa Cruz Physics Professor, said.
If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.