AI Economics Open Letter

Inspired by our Puerto Rico AI conference and open letter, a team of economists and business leaders has now launched its own open letter specifically on how to make AI’s impact on the economy beneficial rather than detrimental. It includes a list of specific policy suggestions.

CBS takes on AI

CBS News interviewed me for this morning’s segment on the future of AI, which avoided the tired old “robots-will-turn-evil” message and reported on the latest DARPA challenge.

Wait But Why: ‘The AI Revolution’

Tim Urban of Wait But Why has an engaging two-part series on the development of superintelligent AI and the dramatic consequences it would have for humanity. Equal parts exciting and sobering, it is a perfect primer for the layperson, and thorough enough to be worth reading even for those already familiar with the topic.

Part 1: The Road to Superintelligence

Part 2: Our Immortality or Extinction

AI Ethics in Nature

Nature just published four interesting perspectives on AI ethics, including an article and a podcast on lethal autonomous weapons by Stuart Russell.

Sam Altman Investing in ‘AI Safety Research’

(Image: Matt Weinberger, Business Insider)

Sam Altman, head of Y Combinator, was interviewed by Mike Curtis at Airbnb’s Open Air 2015 conference and brought up (among other issues) his concerns about AI value alignment. He didn’t pull any punches:

(from the Business Insider article)

On the growing artificial-intelligence market: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

On what Altman would do if he were President Obama: “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Altman shared that he recently invested in a company doing “AI safety research” to investigate the potential risks of artificial intelligence.

$100 billion would be four orders of magnitude larger than FLI’s research grant program. If FLI were a PAC, it might be time for us to run Altman-for-president ads…

Stuart Russell on the long-term future of AI

Professor Stuart Russell recently gave a public lecture on The Long-Term Future of (Artificial) Intelligence, hosted by the Center for the Study of Existential Risk in Cambridge, UK. In this talk, he discusses key research problems in keeping future AI beneficial, such as containment and value alignment, and addresses many common misconceptions about the risks from AI.

“The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.”