Entries by Viktoriya Krakovna

Machine Learning Security at ICLR 2017

The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)

AI Safety Highlights from NIPS 2016

This year’s Neural Information Processing Systems (NIPS) conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among […]

OpenAI Unconference on Machine Learning

The following post originally appeared here. Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The […]

New AI Safety Research Agenda From Google Brain

Google Brain just released an inspiring research agenda, Concrete Problems in AI Safety, co-authored by researchers from OpenAI, Berkeley and Stanford. This document is a milestone in setting concrete research objectives for keeping reinforcement learning agents and other AI systems robust and beneficial. The problems studied are relevant both to near-term and long-term AI safety, […]

Introductory Resources on AI Safety Research

The resources are selected for relevance and/or brevity, and the list is not meant to be comprehensive. Motivation For a popular audience: FLI: AI risk background and FAQ. At the bottom of the background page, there is a more extensive list of resources on AI safety. Tim Urban, Wait But Why: The AI Revolution. An accessible […]

Risks From General Artificial Intelligence Without an Intelligence Explosion

“An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” – Computer scientist I. J. Good, 1965 Artificial intelligence systems we have today can be referred to as narrow AI – they perform well at specific tasks, like playing […]

ITIF panel on superintelligence with Russell and Soares

The Information Technology and Innovation Foundation held a panel discussion on June 30, “Are Superintelligent Computers Really A Threat to Humanity?“. The panelists were Stuart Russell (FLI board member and grant recipient), Nate Soares (MIRI executive director), Manuela Veloso (AI researcher and FLI grant recipient), Ronald Arkin (AI researcher), and Robert Atkinson (ITIF President). The […]

Stuart Russell on the long-term future of AI

Professor Stuart Russell recently gave a public lecture on The Long-Term Future of (Artificial) Intelligence, hosted by the Center for the Study of Existential Risk in Cambridge, UK. In this talk, he discusses key research problems in keeping future AI beneficial, such as containment and value alignment, and addresses many common misconceptions about the risks […]

What AI Researchers Say About Risks from AI

As the media relentlessly focuses on the concerns of public figures like Elon Musk, Stephen Hawking and Bill Gates, you may wonder – what do AI researchers think about the risks from AI? In his informative article, Scott Alexander does a comprehensive review of the opinions of prominent AI researchers on these risks. He selected […]

MIRI’s New Executive Director

Big news from our friends at MIRI: Nate Soares is stepping up as the new Executive Director, and Luke Muehlhauser has accepted a research position at GiveWell. Luke has done an awesome job leading MIRI for the past three years, and it’s been a pleasure for us at FLI to collaborate with him. We wish […]

Jaan Tallinn on existential risks

An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org: “The reasons why I’m engaged in trying to lower the existential risks has to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield […]

Recent AI discussions

1. Brookings Institution post on Understanding Artificial Intelligence, discussing technological unemployment, regulation, and other issues. 2. A recap of the Science Friday episode with Stuart Russell, Erik Horvitz and Max Tegmark. 3. Ryan Calo on What Ex Machina’s Alex Garland Gets Wrong About Artificial Intelligence.