
FLI September, 2016 Newsletter

Published
14 October 2016
Author
admin



Highlighting the Work of AI Safety Researchers


In the spring of 2015, FLI launched our AI Safety Research program, funded primarily by a generous donation from Elon Musk. By the fall of that year, 37 researchers and institutions had received over $2 million in funding to begin projects that will help ensure artificial intelligence remains safe and beneficial. Now, with research and publications in full swing, we want to highlight what the AI safety researchers have accomplished so far: 45 scientific publications and a host of conference events.

We’ve put together a complete list of all the researchers, their projects, and publications to date. Check out all the great work they’ve done so far!





Training Artificial Intelligence to Compromise
By Ariel Conn

David Parkes summarized his work for FLI, saying, “The work that I’m doing as part of the FLI grant program is all about aligning incentives so that when autonomous AIs decide how to act, they act in a way that’s not only good for the AI system, but also good for society more broadly.”



Join former Secretary of Defense William Perry for his MOOC about the real risks of nuclear weapons facing the world today.





“I have dedicated the balance of my life to educating the public about the dangers of nuclear weapons because I believe they pose one of the greatest existential threats to humanity we have ever faced. Today, I am honored to announce that I have been joined by an outstanding and uniquely qualified group of educators, scientists and non-proliferation experts who share my concerns, in creating the first free, online course devoted to the history and dangers of nuclear weapons. The 10-week course is hosted by Stanford University and will begin October 4, 2016. I believe this course can be an important tool in our shared struggle to reduce the dangers of nuclear weapons, educating a new generation as well as serving as an ongoing resource for all levels of expertise.”
– William J. Perry, former Secretary of Defense



FLI’s Latest Podcasts
Don’t forget to follow us on SoundCloud!





Dr. Robin Hanson talks about The Age of Em, the future and evolution of humanity, and his research for his next book.





Nuclear Risk in the 21st Century: Interview with Lucas Perry

Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe.


ICYMI: This Month’s Most Popular Articles




The Biggest Companies in AI Partner to Keep AI Safe
By Ariel Conn

Industry leaders in the world of artificial intelligence just announced the Partnership on AI. This exciting new partnership was “established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”




Elon Musk’s Plan to Colonize Mars
By Tucker Davey

In an announcement at the International Astronautical Congress on Tuesday, Musk unveiled his Interplanetary Transport System (ITS). His goal: enable humans to establish a city on Mars within the next 50 to 100 years.



Book Review: The Age of Em
By Ariel Conn

Can we study the future? Robin Hanson talks about his attempts to do just that with his book The Age of Em. And while many reviews of his book consider how accurate the details are, FLI’s Ariel Conn asks: Can we use futuristic studies like this one to shape the future we want, rather than sit idly by and just let the future unfold?









The Federal Government Updates Biotech Regulations
By Wakanene Kamau

As researchers and companies scramble to apply the latest advances in synthetic biology, like the gene-editing technique CRISPR, the public has grown increasingly wary of embracing technology that they perceive as a threat to their health or the health of the environment. How, and to what degree, can the drive to develop and deploy new biotechnologies be reconciled with the need to keep the public safe and informed?





Taking Down the Internet
By Ariel Conn

Imagine the world without the Internet. Not what the world was like before the Internet existed, but what would happen in today’s world if the Internet suddenly went down.

How many systems today rely on the Internet to run smoothly? If the Internet were to go down, it would disrupt work, government, financial transactions, communications, shipments, travel, and entertainment – nearly every aspect of modern life could be brought to a halt. If someone were able to intentionally take down the Internet, how much damage could they cause?


Note From FLI:

The FLI website often includes op-eds. Among our objectives for the website is to inspire discussion and the sharing of ideas, and as such, we post opinion pieces that we believe will help spur discussion within our community. These op-eds do not necessarily represent FLI’s opinions or views.
