MIRI
MIRI, one of our partner organizations, has just sent out their November newsletter, put together by Rob Bensinger. Check out the links below to learn more about the great work they do!
Research updates
- A new paper: Leó Szilárd and the Danger of Nuclear Weapons
- New at IAFF: Subsequence Induction
- A shortened version of the Reflective Oracles paper has been published in the LORI 2015 conference proceedings.
General updates
- Castify has released professionally recorded audio versions of Eliezer Yudkowsky’s Rationality: From AI to Zombies: Part 1, Part 2, Part 3.
- I’ve put together a list of excerpts from the many responses to the 2015 Edge.org question, “What Do You Think About Machines That Think?”
News and links
- Nick Bostrom speaks on AI risk at the United Nations. (Further information.)
- Bostrom gives a half-hour BBC interview. (UK-only video.)
- Elon Musk and Sam Altman discuss futurism and technology with Vanity Fair.
- From the Open Philanthropy Project: What do we know about AI timelines?
- From the Global Priorities Project: Three areas of research on the superintelligence control problem.
- Paul Christiano writes on inverse reinforcement learning and value of information.
- The Centre for the Study of Existential Risk is looking to hire four post-docs to study technological risk. The application deadline is November 12th.
Best,
Rob Bensinger
Machine Intelligence Research Institute
rob@intelligence.org
Machine Intelligence Research Institute
2030 Addison Street #300
Berkeley, CA 94704