Open letter on AI weapons

At a press conference at the IJCAI AI conference in Buenos Aires today, Stuart Russell and Toby Walsh announced an open letter on autonomous weapons that we’ve helped organize. We’re delighted that it’s been signed by over 1,700 AI/robotics researchers and over 11,000 others. Signatories include 14 current and past presidents of AI/robotics organizations (AAAI, IEEE-RAS, IJCAI, ECCAI, etc.).

If you support the letter, please sign it here.

GCRI News Summary June 2015

Here is the June 2015 global catastrophic risk news summary, written by Robert de Neufville of the Global Catastrophic Risk Institute. The news summaries provide overviews of global catastrophic risk news from around the world. This summary includes Pope Francis’s encyclical on the global environment, tensions between NATO and Russia, a joint NASA-NNSA program for asteroid and comet protection, and more.

AI safety research on NPR

I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.

Are we heading into a second Cold War?

US-Russia tensions are at their highest since the end of the Cold War, and some analysts warn that the probability of nuclear war is growing. Their risk estimates are comparable to some estimates of the background risk of accidental nuclear war.

ITIF panel on superintelligence with Russell and Soares

The Information Technology and Innovation Foundation held a panel discussion on June 30, “Are Superintelligent Computers Really A Threat to Humanity?”. The panelists were Stuart Russell (FLI board member and grant recipient), Nate Soares (MIRI executive director), Manuela Veloso (AI researcher and FLI grant recipient), Ronald Arkin (AI researcher), and Robert Atkinson (ITIF President).

The event was a spirited discussion of advances in AI, the nature of the risks, how policymakers should respond, the importance of value alignment, and other interesting questions. Watch the video here!

And the winners are…

After a grueling expert review of almost 300 grant proposals from around the world, we are delighted to announce the 37 research teams that have been recommended for funding to help keep AI beneficial. We plan to award these teams a total of about $7M from Elon Musk and the Open Philanthropy Project over the next three years, with most of the research projects starting by September 2015. The winning teams will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.