2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.


All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace.

Less than a month later, we published another open letter, this time advocating for a global ban on the development of offensive autonomous weapons. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah gathered more support and signatories by engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute) and six past presidents of the AAAI, as well as by over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, MIT Effective Altruism, and the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).


Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay Area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings that aimed to bring researchers and FLI grant awardees together to strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London, where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety with the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) questions about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events. The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 


Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

Max, along with our Scientific Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story of nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition we had a few extra special articles on our new website:

Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!

happy_new_year_2016


The Top A.I. Breakthroughs of 2015

Progress in artificial intelligence and machine learning has been impressive this year. Those in the field acknowledge that progress is accelerating year by year, though it is still at a manageable pace. The vast majority of work in the field these days builds on work done by other teams earlier the same year, in contrast to most other fields, where references span decades.

Creating a summary of a wide range of developments in this field will almost invariably lead to descriptions that sound heavily anthropomorphic, and this summary is no exception. Such metaphors, however, are only convenient shorthand for talking about these functionalities. It’s important to remember that even though many of these capabilities sound very thought-like, they’re usually not very similar to how human cognition works. The systems are all functional and mechanistic, and, though increasingly less so, each is still quite narrow in what it does. Be warned, though: in reading this article, these functionalities may seem to go from fanciful to prosaic.

The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I’ll highlight a small number of important threads within each that have brought the field forward this year.

 

Abstracting Across Environments

A long-term goal of the field of AI is to achieve artificial general intelligence: a single learning program that can learn and act in completely different domains at the same time, able to transfer some of the skills and knowledge it learned in, e.g., making cookies and apply them to making brownies even better than it would have otherwise. A significant stride forward in this realm of generality was provided by Parisotto, Ba, and Salakhutdinov. They built on DeepMind’s seminal DQN, published earlier this year in Nature, which learns to play many different Atari games well.


Instead of using a fresh network for each game, this team combined deep multitask reinforcement learning with deep-transfer learning to be able to use the same deep neural network across different types of games. This leads not only to a single instance that can succeed in multiple different games, but to one that also learns new games better and faster because of what it remembers about those other games. For example, it can learn a new tennis video game faster because it already gets the concept — the meaningful abstraction of hitting a ball with a paddle — from when it was playing Pong. This is not yet general intelligence, but it erodes one of the hurdles to get there.
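The shared-network idea can be sketched in a few lines. This is a toy illustration, not the published architecture: the weights are random stand-ins, no training loop is shown, and the game names are hypothetical. The point is simply that one shared feature extractor feeds separate per-game output heads, so representations built for one game are reused by another.

```python
import numpy as np

# Toy sketch of a shared "torso" with per-game action heads.
rng = np.random.default_rng(0)
W_shared = rng.standard_normal((4, 8))           # shared feature extractor
heads = {"pong": rng.standard_normal((8, 3)),    # per-game output heads
         "tennis": rng.standard_normal((8, 3))}

def act(game, observation):
    features = np.tanh(observation @ W_shared)   # shared representation
    scores = features @ heads[game]              # game-specific scoring
    return int(np.argmax(scores))                # greedy action choice
```

In a real system the torso and heads would be deep networks trained by reinforcement learning; here the structure alone shows why skills learned for Pong can transfer to a tennis game.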

Reasoning across different modalities has been another bright spot this year. The Allen Institute for AI and University of Washington have been working on test-taking AIs, over the years working up from 4th grade level tests to 8th grade level tests, and this year announced a system that addresses the geometry portion of the SAT. Such geometry tests contain combinations of diagrams, supplemental information, and word problems. In more narrow AI, these different modalities would typically be analyzed separately, essentially as different environments. This system combines computer vision and natural language processing, grounding both in the same structured formalism, and then applies a geometric reasoner to answer the multiple-choice questions, matching the performance of the average American 11th grade student.

 

Intuitive Concept Understanding

A more general method of multimodal concept grounding has come about from deep learning in the past few years: subsymbolic knowledge and reasoning are implicitly understood by a system rather than being explicitly programmed in or even explicitly represented. Decent progress has been made this year in the subsymbolic understanding of concepts that we as humans can relate to. This progress helps with the age-old symbol grounding problem, i.e., how symbols or words get their meaning. The increasingly popular way to achieve this grounding these days is by joint embeddings: deep distributed representations where different modalities or perspectives on the same concept are placed very close together in a high-dimensional vector space.
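The geometry behind joint embeddings can be shown with a toy example (all vectors below are made up for illustration): an image embedding and a word embedding of the same concept sit close together in the shared space, so the word nearest to an image grounds what the image depicts.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction in embedding space.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

image_dog = np.array([0.9, 0.1, 0.0])   # toy embedding of a dog photo
word_dog  = np.array([0.8, 0.2, 0.1])   # toy embedding of the word "dog"
word_car  = np.array([0.0, 0.1, 0.9])   # toy embedding of the word "car"

# Grounding: the image's nearest word in the shared space is "dog".
sims = {"dog": cosine(image_dog, word_dog), "car": cosine(image_dog, word_car)}
print(max(sims, key=sims.get))  # prints "dog"
```

Real joint embeddings are learned, hundreds of dimensions wide, and trained so that paired modalities land near each other; the nearest-neighbor lookup works the same way.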

Last year, this technique helped power abilities like automated image caption writing, and this year a team from Stanford and Tel Aviv University extended this basic idea to jointly embed images and 3D shapes, bridging computer vision and graphics. Rajendran et al. then extended joint embeddings to support the confluence of multiple meaningfully related mappings at once, across different modalities and different languages. As these embeddings get more sophisticated and detailed, they can become workhorses for more elaborate AI techniques. Ramanathan et al. have leveraged them to create a system that learns a meaningful schema of relationships between different types of actions from a set of photographs and a dictionary.

As single systems increasingly do multiple things, and since learned features are what deep learning is predicated on, the lines between the features of the data and the learned concepts will blur away. Another demonstration of this deep feature grounding, by a team from Cornell and WUStL, uses a dimensionality reduction of a deep net’s weights to form a surface of convolutional features that can simply be slid along to meaningfully, automatically, and photorealistically alter particular aspects of photographs, e.g., changing people’s facial expressions or their ages, or colorizing photos.
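The "slide along a feature direction" idea reduces to simple vector arithmetic. In this toy sketch (made-up two-dimensional vectors standing in for real network features), averaging embeddings of two groups defines an attribute axis, and nudging an embedding along that axis alters just that attribute:

```python
import numpy as np

# Toy feature embeddings for two groups of face images.
smiling = np.array([[2.0, 1.0], [2.2, 0.8]])   # features of smiling faces
neutral = np.array([[1.0, 1.1], [0.8, 0.9]])   # features of neutral faces

# The difference of the group means defines a "smile" direction.
direction = smiling.mean(axis=0) - neutral.mean(axis=0)

face = np.array([1.0, 1.0])             # a neutral face's features
more_smiling = face + 0.5 * direction   # slide partway along the axis
```

In the actual work the sliding happens in a learned feature surface and the edited features are decoded back into a photorealistic image; the arithmetic of moving along a direction is the shared core.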

 

One hurdle in deep learning techniques is that they require a lot of training data to produce good results. Humans, on the other hand, are often able to learn from just a single example. Salakhutdinov, Tenenbaum, and Lake have overcome this disparity with a technique for human-level concept learning through Bayesian program induction from a single example. This system is then able to, for instance, draw variations on symbols in a way indistinguishable from those drawn by humans.

 

Creative Abstract Thought

Beyond understanding simple concepts lies grasping aspects of causal structure — understanding how ideas tie together to make things happen or tell a story in time — and to be able to create things based on those understandings. Building on the basic ideas from both DeepMind’s neural Turing machine and Facebook’s memory networks, combinations of deep learning and novel memory architectures have shown great promise in this direction this year. These architectures provide each node in a deep neural network with a simple interface to memory.

Kumar and Socher’s dynamic memory networks improved on memory networks with better support for attention and sequence understanding. Like the original, this system could read stories and answer questions about them, implicitly learning 20 kinds of reasoning, like deduction, induction, temporal reasoning, and path finding. It was never programmed with any of those kinds of reasoning. Weston et al.’s more recent end-to-end memory networks then added the ability to perform multiple computational hops per output symbol, expanding modeling capacity and expressivity to be able to capture things like out-of-order access, long-term dependencies, and unordered sets, further improving accuracy on such tasks.
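The multi-hop idea can be sketched in a few lines. This is a toy illustration with random, untrained embeddings, not the published models: each hop matches a query against every memory slot via soft attention, reads out a weighted sum, and folds the result back into the query for the next hop.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy memory: each row is an (untrained, random) embedding of one story sentence.
n_sentences, dim = 5, 16
memory_keys = rng.normal(size=(n_sentences, dim))    # used for matching the query
memory_values = rng.normal(size=(n_sentences, dim))  # used to build the answer

query = rng.normal(size=dim)  # embedding of the question

# Each "hop" is a round of soft attention over the memory: score every
# sentence against the current state, read a weighted sum of values,
# and update the state for the next hop.
n_hops = 3
u = query
for _ in range(n_hops):
    attention = softmax(memory_keys @ u)  # how relevant is each sentence?
    output = attention @ memory_values    # weighted read from memory
    u = u + output                        # updated internal state

print(u.shape)  # (16,)
```

Because every step is differentiable, the embeddings and the attention behavior can be trained end to end from question-answer pairs alone, which is what removes the need for per-layer supervision.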

Programs themselves are of course also data, and they certainly make use of complex causal, structural, grammatical, sequence-like properties, so programming is ripe for this approach. Last year, neural Turing machines proved deep learning of programs to be possible. This year, Grefenstette et al. showed how programs can be transduced, or generatively figured out from sample output, much more efficiently than with neural Turing machines, by using a new type of memory-based recurrent neural network (RNN) whose nodes simply access differentiable versions of data structures such as stacks and queues. Reed and de Freitas of DeepMind have also recently shown how their neural programmer-interpreter can learn to represent and compose lower-level programs into higher-level and domain-specific functionality.

Another example of proficiency in understanding time in context, and applying that to create new artifacts, is a rudimentary but creative video summarization capability developed this year. Park and Kim from Seoul National University developed a novel architecture called a coherent recurrent convolutional network, applying it to create novel, fluid textual stories from sequences of images.

Another important modality that includes causal understanding, hypotheticals, and creativity in abstract thought is scientific hypothesizing. A team at Tufts combined genetic algorithms and genetic pathway simulation to create a system that arrived at the first significant new AI-discovered scientific theory of how exactly flatworms are able to regenerate body parts so readily. In a couple of days it had discovered what had eluded scientists for a century. This should provide a resounding answer to those who question why we would ever want to make AIs curious in the first place.

 

Dreaming Up Visions

AI did not stop at writing programs, travelogues, and scientific theories this year. There are now AIs able to imagine, or, to use the technical term, hallucinate, meaningful new imagery as well. Deep learning isn’t only good at pattern recognition, but indeed pattern understanding and therefore also pattern creation.

A team from MIT and Microsoft Research has created a deep convolutional inverse graphics network, which, among other things, uses a special training technique to get neurons in its graphics code layer to encode meaningful transformations of an image. In so doing, they are deep-learning a graphics engine, able to understand the 3D shapes in novel 2D images it receives, and able to photorealistically imagine what it would be like to change things like camera angle and lighting.

A team from NYU and Facebook devised a way to generate realistic new images from meaningful and plausible combinations of elements it has seen in other images. Using a pyramid of adversarial networks — with some trying to produce realistic images and others critically judging how real the images look — their system is able to get better and better at imagining new photographs. Though the examples online are quite low-res, offline I’ve seen some impressive related high-res results.

Also significant in ’15 is the ability to deeply imagine entirely new imagery based on short English descriptions of the desired picture. While scene renderers taking symbolic, restricted vocabularies have been around a while, this year has seen the advent of a purely neural system doing this in a way that’s not explicitly programmed. This University of Toronto team applies attention mechanisms to generate images incrementally, based on the meaning of each component of the description, in any of a number of ways per request. So androids can now dream of electric sheep.

There has even been impressive progress in computational imagination of new animated video clips this year. A team from the University of Michigan created a deep analogy system that recognizes complex implicit relationships in exemplars and is able to apply that relationship as a generative transformation of query examples. They’ve applied this in a number of synthetic applications, but most impressive is the demo (from the 10:10-11:00 mark of the video embedded below), where an entirely new short video clip of an animated character is generated based on a single still image of the never-before-seen target character, along with a comparable video clip of a different character at a different angle.

While the generation of imagery was used in these for ease of demonstration, their techniques for computational imagination are applicable across a wide variety of domains and modalities. Picture these applied to voices, or music, for instance.

 

Agile and Dexterous Fine Motor Skills

This year’s progress in AI hasn’t been confined to computer screens.

Earlier in the year, a German primatology team recorded the hand motions of primates in tandem with corresponding neural activity, and can now predict, based on brain activity, what fine motions are going on. They’ve also been able to teach those same fine motor skills to robotic hands, aiming at neurally enhanced prostheses.

In the middle of the year, a team at U.C. Berkeley announced a much more general and easier way to teach robots fine motor skills. They applied deep reinforcement learning-based guided policy search to get robots to screw caps on bottles, to use the back of a hammer to remove a nail from wood, and to perform other seemingly everyday actions. These are the kinds of actions that are typically trivial for people but very difficult for machines, and this team’s system matches human dexterity and speed at these tasks. It learns these actions by attempting them with hand-eye coordination and practicing, refining its technique after just a few tries.

 

Watch This Space

This is by no means a comprehensive list of the impressive feats in AI and machine learning (ML) for the year. There are also many more foundational discoveries and developments that have occurred this year, including some that I fully expect to be more revolutionary than any of the above. But those are in early days and so out of the scope of these top picks.

This year has certainly provided some impressive progress. But we expect to see even more in 2016. Coming up next year, I expect to see some more radical deep architectures, better integration of the symbolic and subsymbolic, some impressive dialogue systems, an AI finally dominating the game of Go, deep learning being used for more elaborate robotic planning and motor control, high-quality video summarization, and more creative and higher-resolution dreaming, which should all be quite a sight. What’s even more exciting are the developments we don’t expect.

Who’s In Control?

The Washington Post just asked one of the most important questions in the field of artificial intelligence: “Are we fully in control of our technology?”

There are plenty of other questions about artificial intelligence that are currently attracting media attention, such as: Is superintelligence imminent and will it kill us all? As necessary as it is to consider those questions now, there are others that are equally relevant and timely, but often overlooked by the press:

How much destruction could be caused today — or within the next few years — by something as simple as an error in the algorithm?

Is the development of autonomous weapons worth the risk of an AI arms race and all the other risks it creates?

And…

How much better could life get if we design the right artificial intelligence?

Joel Achenbach, the author of the Washington Post article, considers these questions and more, as he writes about his interviews with people like Nick Bostrom, Stuart Russell, Marvin Minsky, our very own Max Tegmark, and many other leading AI researchers. Achenbach provides a balanced look at artificial intelligence, as he talks about Bostrom’s hopes and concerns, the current state of AI research, what the future might hold, and the many accomplishments of FLI this year.

Read the full story here.

 

 

Highlights and impressions from NIPS conference on machine learning

This year’s NIPS was an epicenter of the current enthusiasm about AI and deep learning – there was a visceral sense of how quickly the field of machine learning is progressing, and two new AI startups were announced. Attendance has almost doubled compared to the 2014 conference (I hope they make it multi-track next year), and several popular workshops were standing room only. Given that there were only 400 accepted papers and almost 4000 people attending, most people were there to learn and socialize. The conference was a socially intense experience that reminded me a bit of Burning Man – the overall sense of excitement, the high density of spontaneous interesting conversations, the number of parallel events at any given time, and of course the accumulating exhaustion.

Some interesting talks and posters

Sergey Levine’s robotics demo at the crowded Deep Reinforcement Learning workshop (we showed up half an hour early to claim spots on the floor). This was one of the talks that gave me a sense of fast progress in the field. The presentation started with videos from this summer’s DARPA robotics challenge, where the robots kept falling down while trying to walk or open a door. Levine proceeded to outline his recent work on guided policy search, alternating between trajectory optimization and supervised training of the neural network, and granularizing complex tasks. He showed demos of robots successfully performing various high-dexterity tasks, like opening a door, screwing on a bottle cap, or putting a coat hanger on a rack. Impressive!

Generative image models using a pyramid of adversarial networks by Denton & Chintala. Generating realistic-looking images using one neural net as a generator and another as an evaluator – the generator tries to fool the evaluator by making the image indistinguishable from a real one, while the evaluator tries to tell real and generated images apart. Starting from a coarse image, successively finer images are generated using the adversarial networks from the coarser images at the previous level of the pyramid. The resulting images were mistaken for real images 40% of the time in the experiment, and around 80% of them looked realistic to me when staring at the poster.
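The adversarial game at the heart of this can be sketched in one dimension. This is a toy illustration, not the pyramid model from the paper: a generator shifts noise toward the real data distribution while a logistic-regression discriminator tries to tell real from fake, with all gradients worked out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data ~ N(3, 1); generator maps noise z ~ N(0, 1) through a*z + c;
# discriminator is logistic regression D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0   # generator parameters
w, b = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(3000):
    x_real = rng.normal(3.0, 1.0)
    z = rng.normal()
    x_fake = a * z + c

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e., try to fool D.
    d_fake = sigmoid(w * x_fake + b)
    grad_x = (1 - d_fake) * w  # d log D / d x_fake
    a += lr * grad_x * z
    c += lr * grad_x

# The generator's output distribution should have drifted toward the
# real data's mean of 3.
fake_mean = np.mean([a * rng.normal() + c for _ in range(1000)])
print(f"generator mean after training: {fake_mean:.2f}")
```

The paper's contribution is scaling this game to images by stacking such generator/evaluator pairs in a coarse-to-fine Laplacian pyramid, but the two-player objective is exactly this one.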

Path-SGD by Salakhutdinov et al, a scale-invariant version of the stochastic gradient descent algorithm. Standard SGD uses the L2 norm as the measure of distance in the parameter space, and rescaling the weights can have large effects on optimization speed. Path-SGD instead regularizes the maximum norm of incoming weights into any unit, minimizing the max-norm over all rescalings of the weights. The resulting norm (called a “path regularizer”) is shown to be invariant to weight rescaling. Overall a principled approach with good empirical results.
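The rescaling issue is easy to verify numerically. The sketch below (a toy two-layer ReLU network, illustrating the invariance property rather than the Path-SGD update itself) multiplies one hidden unit's incoming weights by a constant and divides its outgoing weights by the same constant: the network's function and the path norm are unchanged, while the L2 norm that plain SGD implicitly measures distance with is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network: x -> relu(x @ W1) @ W2.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2

def path_norm(W1, W2):
    # Sum over all input->hidden->output paths of the product of squared weights.
    return ((W1 ** 2) @ (W2 ** 2)).sum()

x = rng.normal(size=(5, 4))

# Rescale hidden unit 0: scale its incoming weights up, its outgoing weights
# down. ReLU is positively homogeneous, so the overall function is unchanged.
alpha = 10.0
W1s, W2s = W1.copy(), W2.copy()
W1s[:, 0] *= alpha
W2s[0, :] /= alpha

print(np.allclose(forward(x, W1, W2), forward(x, W1s, W2s)))  # True
print(np.allclose(path_norm(W1, W2), path_norm(W1s, W2s)))    # True
print(np.isclose((W1**2).sum() + (W2**2).sum(),
                 (W1s**2).sum() + (W2s**2).sum()))            # False
```

Since the two networks compute identical functions but have very different L2 geometry, an optimizer whose steps depend on the path norm rather than the L2 norm behaves the same on both, which is the invariance Path-SGD is built around.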

End-to-end memory networks by Sukhbaatar et al (video), an extension of memory networks – neural networks that learn to read and write to a memory component. Unlike traditional memory networks, the end-to-end version eliminates the need for supervision at each layer. This makes the method applicable to a wider variety of domains – it is competitive both with memory networks for question answering and with LSTMs for language modeling. It was fun to see the model perform basic inductive reasoning about locations, colors and sizes of objects.

Neural GPUs (video), Deep visual analogy-making (video), On-the-job learning, and many others.

Algorithms Among Us symposium (videos)

A highlight of the conference was the Algorithms Among Us symposium on the societal impacts of machine learning, which I helped organize along with others from FLI. The symposium consisted of 3 panels and accompanying talks – on near-term AI impacts, timelines to general AI, and research priorities for beneficial AI. The symposium organizers (Adrian Weller, Michael Osborne and Murray Shanahan) gathered an impressive array of AI luminaries with a variety of views on the subject, including Cynthia Dwork from Microsoft, Yann LeCun from Facebook, Andrew Ng from Baidu, and Shane Legg from DeepMind. All three panel topics generated lively debate among the participants.

Andrew Ng took his famous statement that “worrying about general AI is like worrying about overpopulation on Mars” to the next level, namely “overpopulation on Alpha Centauri” (is Mars too realistic these days?). But he also endorsed long-term AI safety research, saying that it’s not his cup of tea but someone should be working on it. Ng’s main argument was that even superforecasters can’t predict anything 5 years into the future, so any predictions on longer time horizons are useless. However, as Murray pointed out, having complete uncertainty past a 5-year horizon means that you can’t rule out reaching general AI in 20 years either.

With regards to roadmapping the remaining milestones to general AI, Yann LeCun gave an apt analogy of traveling through mountains in the fog – there are some you can see, and an unknown number hiding in the fog. He also argued that advanced AI is unlikely to be human-like, and cautioned against anthropomorphizing it.

In the research priorities panel, Shane Legg gave some specific recommendations – goal-system stability, interruptibility, sandboxing / containment, and formalization of various thought experiments (e.g. in Superintelligence). He pointed out that AI safety is both overblown and underemphasized – while the risks from advanced AI are not imminent the way they are usually portrayed in the media, more thought and resources need to be devoted to the challenging research problems involved.

One question that came up during the symposium is the importance of interpretability for AI systems, which is actually the topic of my current research project. There was some disagreement about the tradeoff between effectiveness and interpretability. LeCun thought that the main advantage of interpretability is increased robustness, and improvements to transfer learning should produce that anyway, without decreases in effectiveness. Percy Liang argued that transparency is needed to explain to the rest of the world what machine learning systems are doing, which is increasingly important in many applications. LeCun also pointed out that machine learning systems that are usually considered transparent, such as decision trees, aren’t necessarily so. There was also disagreement about what interpretability means in the first place – as Cynthia Dwork said, we need a clearer definition before making any conclusions. It seems that more work is needed both on defining interpretability and on figuring out how to achieve it without sacrificing effectiveness.

Overall, the symposium was super interesting and gave a lot of food for thought (here’s a more detailed summary by Ariel from FLI). Thanks to Adrian, Michael and Murray for their hard work in putting it together.

AI startups

It was exciting to see two new AI startups announced at NIPS – OpenAI, led by Ilya Sutskever and backed by Musk, Altman and others, and Geometric Intelligence, led by Zoubin Ghahramani and Gary Marcus.

OpenAI is a non-profit with a mission to democratize AI research and keep it beneficial for humanity, and a whopping $1Bn in funding pledged. They believe that it’s safer to have AI breakthroughs happening in a non-profit, unaffected by financial interests, rather than monopolized by for-profit corporations. The intent to open-source the research seems clearly good in the short and medium term, but raises some concerns in the long run when getting closer to general AI. As an OpenAI researcher emphasized in an interview, “we are not obligated to share everything – in that sense the name of the company is a misnomer”, and decisions to open-source the research would in fact be made on a case-by-case basis.

While OpenAI plans to focus on deep learning in their first few years, Geometric Intelligence is developing an alternative approach to deep learning that can learn more effectively from less data. Gary Marcus argues that we need to learn more from how human minds acquire knowledge in order to build advanced AI (an inspiration for the venture was observing his toddler learn about the world). I’m looking forward to what comes out of the variety of approaches taken by these new companies and other research teams.

(Thanks to Janos Kramar for his help with editing this post.)

Think-tank dismisses leading AI researchers as Luddites

By Stuart Russell and Max Tegmark

2015 has seen a major growth in funding, research and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, ITIF think-tank president Robert Atkinson misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This contrasts with the filmed expert testimony from a panel that he himself chaired last summer. The ITIF report makes three main points regarding AI:

1) The people promoting this beneficial-AI agenda are Luddites and “AI detractors.”

This is a rather bizarre assertion given that the agenda has been endorsed by thousands of AI researchers, including many of the world’s leading experts in industry and academia, in two open letters supporting beneficial AI and opposing offensive autonomous weapons. ITIF even calls out Bill Gates and Elon Musk by name, despite them being widely celebrated as drivers of innovation, and despite Musk having landed a rocket just days earlier. By implication, ITIF also labels as Luddites two of the twentieth century’s most iconic technology pioneers – Alan Turing, the father of computer science, and Norbert Wiener, the father of control theory – both of whom pointed out that super-human AI systems could be problematic for humanity. If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.

Contrary to ITIF’s assertion, the goal of the beneficial-AI movement is not to slow down AI research, but to ensure its continuation by guaranteeing that AI remains beneficial. This goal is supported by the recent $10M investment from Musk in such research and the subsequent $15M investment by the Leverhulme Foundation.

2) An arms race in offensive autonomous weapons beyond meaningful human control is nothing to worry about, and attempting to stop it would harm the AI field and national security.

The thousands of AI researchers who disagree with ITIF’s assessment in their open letter are in a situation similar to that of the biologists and chemists who supported the successful bans on biological and chemical weapons. These bans did not prevent the fields of biology and chemistry from flourishing, nor did they harm US national security – as President Richard Nixon emphasized when he proposed the Biological Weapons Convention. As in this summer’s panel discussion, Atkinson once again appears to suggest that AI researchers should hide potential risks to humanity rather than incur any risk of reduced funding.

3) Studying how AI can be kept safe in the long term is counterproductive: it is unnecessary and may reduce AI funding.

Although ITIF claims that such research is unnecessary, it never gives a supporting argument, merely providing a brief misrepresentation of what Nick Bostrom has written about the advent of super-human AI (raising, in particular, the red herring of self-awareness) and baldly stating that “What should not be debatable is that this possible future is a long, long way off.” Scientific questions should by definition be debatable, and recent surveys of AI researchers indicate a healthy debate with a broad range of arrival estimates, ranging from never to not very far off. Research on how to keep AI beneficial is worthwhile today even if it will only be needed many decades from now: the toughest and most crucial questions may take decades to answer, so it is prudent to start tackling them now to ensure that we have the answers by the time we need them. In the absence of such answers, AI research may indeed be slowed down in the future in the event of localized control failures – like the so-called “Flash Crash” on the stock market – that dent public confidence in AI systems.

ITIF argues that the AI researchers behind these open letters have unfounded worries. The truly unfounded worries are those that ITIF harbors about AI funding being jeopardized: since the beneficial-AI debate heated up during the past two years, the AI field has enjoyed more investment than ever before, including OpenAI’s billion-dollar investment in beneficial AI research – arguably the largest AI funding initiative in history, with a large share invested by one of ITIF’s alleged Luddites.

Under Robert Atkinson’s leadership, the Information Technology Innovation Foundation has a distinguished record of arguing against misguided policies arising from ignorance of technology. We hope ITIF returns to this tradition and refrains from further attacks on expert scientists and engineers who make reasoned technical arguments about the importance of managing the impacts of increasingly powerful technologies. This is not Luddism, but common sense.

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach”

Max Tegmark, MIT, Professor of Physics, President of Future of Life Institute

Were the Paris Climate Talks a Success?

An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute:

Can the Paris Climate Agreement Succeed Where Other Agreements Have Failed?

On Friday, December 18, I talked with Seth Baum, the Executive Director of the Global Catastrophic Risk Institute, about the realistic impact of the Paris Climate Agreement.

The Paris Climate talks ended December 12th, and there’s been a lot of fanfare in the media about how successful the talks were because 195 countries came together with an agreement. That so many leaders of so many countries could come together on the issue of climate change is a huge success.

As Baum said after the interview, “The Paris Agreement is a good example of the international community, as a whole, coming together to take action that makes the world a safe place. It’s pretty amazing!”

But as amazing as global cooperation is, reading some of that agreement was less than inspiring. There was a lot of suggesting and urging and advising, but no demanding or requiring or committing.

The countries have all agreed to try to keep the increase in global temperatures below 2 degrees Celsius above pre-industrial levels, and they’re aiming for 1.5 degrees Celsius as the maximum. This is a nice, lofty goal, but is it possible?

The agreement calls for countries to basically check in every five years, but with the rate at which the temperatures are increasing and climate change is affecting us, is this going to be sufficient to accomplish much? This meeting was called the COP21 because this group has now convened every year for the last 21 years. Why should we expect this agreement to produce greater results than what we’ve seen in the past?

As Baum explains, this agreement is “probably about as good as we’re going to get.” It focused on goals that each of the leaders can try to reach using whatever means is best suited for their respective countries. However, there is no penalty if the countries don’t comply. According to Baum, one of the major reasons the agreement is so vague is that the American Senate is unlikely to get the 67 votes necessary to ratify an official treaty on climate change.

Baum also points out that “the difference between 1.9 degrees and 2.1 is pretty trivial.” The goal is to aim for limiting the increase of global temperatures, and whatever improvements can be made toward that objective can at least be considered small successes.

There’s also been some debate about whether climate change and terrorism might be connected, but we also considered another issue that doesn’t get brought up as often: if we reduce our dependency on fossil fuels, will that lead to further destabilization in the Middle East? Baum suspects the answer is yes.

Listen to the full interview for more insight into the Paris Climate Agreement, including how successful it might be under future leadership, as well as how climate change is no longer a catastrophic risk, but rather, a known cause of catastrophes.

What’s so exciting about AI? Conversations at the Nobel Week Dialogue

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, the executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a fragmented healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”

Inside OpenAI: An Interview by SingularityHUB

The following interview was conducted and written by Shelly Fan for SingularityHUB.

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral grounds.

By not having to answer to industry or academia, OpenAI hopes not just to develop digital intelligence, but also to guide research along an ethical route that, according to their inaugural blog post, “benefits humanity as a whole.”

OpenAI began with the big picture in mind: in 100 years, what will AI be able to achieve, and should we be worried? If left in the hands of giant, for-profit tech companies such as Google, Facebook and Apple, all of whom have readily invested in developing their own AI systems in the last few years, could AI — and future superintelligent systems — hit a breaking point and spiral out of control? Could AI be commandeered by governments to monitor and control their citizens? Could it, as Elon Musk warned earlier this year, ultimately destroy humankind?

Since its initial conception earlier this year, OpenAI has surgically snipped the cream of the crop in the field of deep learning to assemble its team. Among its top young talent is Andrej Karpathy, a PhD candidate at Stanford whose resume includes internships at Google and DeepMind, the secretive London-based AI company that Google bought in 2014.

Last Tuesday, I sat down with Andrej to chat about OpenAI’s ethos and vision, its initial steps and focus, as well as the future of AI and superintelligence. The interview has been condensed and edited for clarity.


How did OpenAI come about?

Earlier this year, Greg [Brockman], who used to be the CTO of Stripe, left the company looking to do something a bit different. He has a long-lasting interest in AI so he was asking around, toying with the idea of a research-focused AI startup. He reached out to the field and got the names of people who’re doing good work and ended up rounding us up.

At the same time, Sam [Altman] from YC became extremely interested in this as well. One way that YC is encouraging innovation is as a startup accelerator; another is through research labs. So, Sam recently opened YC Research, which is an umbrella research organization, and OpenAI is, or will become, one of the labs.

As for Elon — obviously he has had concerns over AI for a while, and after many conversations, he jumped onboard OpenAI in hopes to help AI develop in a beneficial and safe way.

How much influence will the funders have on how OpenAI does its research?

We’re still at very early stages so I’m not sure how this will work out. Elon said he’d like to work with us roughly once a week. My impression is that he doesn’t intend to come in and tell us what to do — our first interactions were more along the lines of “let me know in what way I can be helpful.” I felt a similar attitude from Sam and others.

AI has been making leaps recently, with contributions from academia, big tech companies and clever startups. What can OpenAI hope to achieve by putting you guys together in the same room that you can’t do now as a distributed network?

I’m a huge believer in putting people physically together in the same spot and having them talk. The concept of a network of people collaborating across institutions would be much less efficient, especially if they all have slightly different incentives and goals.

More abstractly, in terms of advancing AI as a technology, what can OpenAI do that current research institutions, companies or deep learning as a field can’t?

A lot of it comes from OpenAI being a non-profit. What’s happening now in AI is that a very limited number of research labs and large companies, such as Google, are hiring a lot of the researchers doing groundbreaking work. Now suppose AI could one day become — for lack of a better word — dangerous, or be used dangerously by people. It’s not clear that you would want a big for-profit company to have a huge lead, or even a monopoly, over the research. It is primarily an issue of incentives, and the fact that they are not necessarily aligned with what is good for humanity. We are baking that into our DNA from the start.

Also, there are some benefits of being a non-profit that I didn’t really appreciate until now. People are actually reaching out and saying “we want to help”; you don’t get this in companies; it’s unthinkable. We’re getting emails from dozens of places — people offering to help, offering their services, to collaborate, offering GPU power. People are very willing to engage with you, and in the end, it will propel our research forward, as well as AI as a field.

OpenAI seems to be built on the big picture: how AI will benefit humanity, and how it may eventually destroy us all. Elon has repeatedly warned against unmonitored AI development. In your opinion, is AI a threat?

When Elon talks about the future, he talks about scales of tens or hundreds of years from now, not 5 or 10 years that most people think about. I don’t see AI as a threat over the next 5 or 10 years, other than those you might expect from more reliance on automation; but if we’re looking at humanity already populating Mars (that far in the future), then I have much more uncertainty, and sure, AI might develop in ways that could pose serious challenges.


One thing we do see is that a lot of progress is happening very fast. For example, computer vision has undergone a complete transformation — papers from more than three years ago now look foreign in the face of recent approaches. So when we zoom out over decades, I have a fairly wide distribution over where we could be. Say there is a 1% chance of something crazy and groundbreaking happening; when you multiply that by the stakes of a few for-profit companies having a monopoly over this tech, then yes, that starts to sound scary.

Do you think we should put restraints on AI research to assure safety?

No, not top-down, at least not right now. In general I think it’s a safer route to have more AI experts with a shared awareness of the work in the field. Opening up research, as OpenAI wants to do, rather than having commercial entities hold a monopoly over results for intellectual property purposes, is perhaps a good way to go.

True, but recently for-profit companies have been releasing their technology as well — I’m thinking of Google’s TensorFlow and Facebook’s Torch. In this sense, how does OpenAI differ in its “open research” approach?

When you say “releasing,” there are a few things that need clarification. First, Facebook did not release Torch; Torch is a library that has been around for several years. Facebook has committed to Torch and is improving on it. So has DeepMind.

But TensorFlow and Torch are just tiny specks of their research — they are tools that can help others do research well, but they’re not actual results that others can build upon.

It is true that many of these industrial labs have recently established a good track record of publishing research results, partly because a large number of people on the inside come from academia. Still, a veil of secrecy surrounds a large portion of the work, and not everything makes it out. In the end, companies don’t have very strong incentives to share.

OpenAI, on the other hand, encourages us to publish, to engage the public and academia, to tweet, to blog. I’ve gotten into trouble in the past for sharing a bit too much from inside companies, so I personally really, really enjoy the freedom.

What if OpenAI comes up with a potentially game-changing algorithm that could lead to superintelligence? Wouldn’t a fully open ecosystem increase the risk of abusing the technology?

In a sense it’s kind of like CRISPR. CRISPR is a huge leap for genome editing that’s been around for only a few years, but has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society.

If something like that happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.

In the end, if there is a small chance of something crazy happening in AI research, everything else being equal, do you want these advances to be made inside a commercial company, especially one that has monopoly on the research, or do you want this to happen within a non-profit?

We have this philosophy embedded in our DNA from the start that we are mindful of how AI develops, rather than just [a focus on] maximizing profit.

In that case, is OpenAI comfortable being the gatekeeper, so to speak? You’re heavily influencing how the field is going to develop and where it’s going.

It’s a lot of responsibility. It’s a “lesser evil” argument; I think it’s still bad. But we’re not the only ones “controlling” the field — because of our open nature, we welcome and encourage others to join the discussion. Also, what’s the alternative? In a way, a non-profit with sharing and safety in its DNA is the best option for the field and for the good it can do.

Also, AI is not the only field to worry about — I think bio is a far more pressing domain in terms of destroying the world [laugh]!

In terms of hiring — OpenAI is competing against giant tech companies in Silicon Valley. How is the company planning on attracting top AI researchers?

We have perks [laugh].

But in all seriousness, I think the company’s mission and team members are enough. We’re currently actively hiring people, and so far have no trouble getting people excited about joining us. In several ways OpenAI combines the best of academia and the startup world, and being a non-profit we have the moral high ground, which is nice [laugh].

The team, especially, is a super strong, super tight team and that is a large part of the draw.

Take some rising superstars in the field — myself not included — put them together and you get OpenAI. I joined mainly because I heard about who else is on the team. In a way, that’s the most shocking part; a friend of mine described it as “storming the temple.” Greg came in from nowhere and scooped up the top people to do something great and make something new.

Now that OpenAI has a rockstar team of scientists, what’s your strategy for developing AI? Are you getting vast amounts of data from Elon? What problems are you tackling first?

So we’re really still trying to figure a lot of this out. We are approaching it with a combination of bottom-up and top-down thinking: bottom-up are the various papers and ideas we might want to work on; top-down is doing so in a way that adds up. We’re currently in the process of thinking this through.

For example, I just submitted one vision research proposal draft today, actually [laugh]. We’re putting a few of them together. Also it’s worth pointing out that we’re not currently actively working on AI safety. A lot of the research we currently have in mind looks conventional. In terms of general vision and philosophy I think we’re most similar to DeepMind.

We might be able to take advantage of data from Elon or YC companies at some point, but for now we think we can go quite far by making our own datasets, or by working with existing public datasets in sync with the rest of academia.

Would OpenAI ever consider going into hardware, since sensors are a main way of interacting with the environment?

So, yes, we are interested, but hardware has a lot of issues. For us, roughly speaking, there are two worlds: the world of bits and the world of atoms. I am personally inclined to stay in the world of bits for now — in other words, software. You can run things in the cloud, and it’s much faster. The world of atoms — robots, for example — breaks too often and usually has a much slower iteration cycle. This is a very active discussion we’re having in the company right now.

Do you think we can actually get to generalized AI?

I think to get to superintelligence we might currently be missing differences of a “kind,” in the sense that we won’t get there by just making our current systems better. But fundamentally there’s nothing preventing us getting to human-like intelligence and beyond.

To me, it’s mostly a question of “when,” rather than “if.”

I don’t think we need to simulate the human brain to get to human-like intelligence; we can zoom out and approximate how it works. I think there’s a more straightforward path. For example, some recent work shows that ConvNet* activations are very similar to activations in the IT area of the human visual cortex, without mimicking how neurons actually work.

[*SF: ConvNet, or convolutional network, is a type of artificial neural network tailored to visual tasks, first developed by Yann LeCun in the 1990s. IT is the inferior temporal cortex, which processes complex object features.]
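To make the footnote concrete, here is a purely illustrative sketch (not code from the interview or from OpenAI) of the basic operation a ConvNet layer performs: slide a small filter over an image and apply a nonlinearity. The toy image and edge-detecting kernel values below are made up for the example:

```python
def conv2d_relu(image, kernel):
    """Slide a small filter over an image (valid cross-correlation),
    then apply a ReLU nonlinearity -- the core op of a ConvNet layer."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0.0))  # ReLU: keep only positive responses
        out.append(row)
    return out

# Toy 4x4 image: dark left half, bright right half.
image = [[0, 0, 1, 1]] * 4
# 2x2 vertical-edge detector: responds where brightness jumps left-to-right.
kernel = [[-1, 1], [-1, 1]]

activation = conv2d_relu(image, kernel)
print(activation)  # strongest response in the middle column, where the edge is
```

A real ConvNet stacks many such filtered-and-rectified maps, learning the kernel values from data rather than hand-coding them; the learned responses in deeper layers are what the recent work compares to IT-area activity.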

So it seems to me that with ConvNets we’ve almost checked off large parts of the visual cortex, which is somewhere around 30% of the cortex, and the rest of the cortex maybe doesn’t look all that different. So I don’t see why we couldn’t make good progress on checking off the rest over a timescale of several decades.

Another point is that we don’t necessarily have to be worried about human-level AI. I consider chimp-level AI to be equally scary, because going from chimp to humans took nature only a blink of an eye on evolutionary time scales, and I suspect that might be the case in our own work as well. Similarly, my feeling is that once we get to that level it will be easy to overshoot and get to superintelligence.

On a positive note though, what gives me solace is that when you look at our field historically, the image of AI research progressing with a series of unexpected “eureka” breakthroughs is wrong. There is no historical precedent for such moments; instead we’re seeing a lot of fast and accelerating, but still incremental progress. So let’s put this wonderful technology to good use in our society while also keeping a watchful eye on how it all develops.

Image Credit: Shutterstock.com

See the original post here.

Santa, Mistakes, and Nuclear War

Written by: David Wright, physicist & co-director, Global Security | December 14, 2015, 9:34 am EST

On December 1, the U.S. military started its annual tracking of Santa’s flight from the North Pole.

Really.

NORAD—the North American Aerospace Defense Command—is not known for its sense of humor. Its mission is deadly serious: to alert authorities about an aircraft or missile attack on North America. In the event of a nuclear missile attack, NORAD’s job is to detect it, analyze it, and provide the information the president needs to decide whether to launch U.S. nuclear weapons in response.

So what is it doing tracking Santa?

This off-mission public service stems from a series of mistakes and coincidences so unlikely they read like fiction.

The original Sears ad. Note that it says “Kiddies Be Sure and Dial the Correct Number” (Source: NORAD)

It started innocently enough: A 1955 Sears Christmas ad in a Colorado Springs newspaper featured Santa telling kids to call him “any time day or night” and gave a number for his “private phone.”

But due to a typo in the phone number, calls were routed to a top secret red phone at nearby Ent Air Force Base, home of the warning center that became NORAD.

Maybe two people in the world had this phone number—until then. The supervisor on duty that night, Col. Harry Shoup, was not amused when the red phone began to ring off the hook. A no-nonsense military officer, he took his job seriously. And so his men were shocked when, after learning what had happened, Shoup began answering the phone with “ho-ho-ho” and inquiring about the caller’s behavior over the previous 12 months—and then tasked his men to answer the phone the same way.

That Christmas Eve, Shoup shocked his staff yet again. He realized that NORAD’s specialty was, in fact, tracking objects flying toward the United States. So he picked up the phone and called a local radio station to tell them that the world’s finest warning sensors had just picked up a sleigh flying in from the North Pole. A tradition was born.

This uplifting occasion was not the only time things have gone awry at NORAD, but other incidents have been more heart-stopping than heartwarming.

False Warning of Nuclear Attack

For example, in 1979, NORAD’s computer screens lit up showing an all-out Soviet nuclear attack bearing down on the United States. The missiles would take less than 25 minutes to reach their targets.

The military immediately began preparing to launch a retaliatory attack. Nuclear bomber crews were dispatched to their planes. And the crews manning U.S. missiles were ready: The missiles were on 24/7 hair-trigger alert so they could be launched within minutes.

NORAD officers knew they would have only minutes to sort out what was happening, giving the president about 10 minutes to make a launch decision.

Fortunately, it was a time of reduced U.S.-Soviet tensions, so the officers were skeptical about the warning. They also failed to get confirmation from U.S. radar sites that there was a missile attack. They soon discovered that a technician had mistakenly inserted a training tape simulating a large Soviet attack into a NORAD computer. U.S. nuclear forces stood down, averting a nuclear war.

But things could have gone much differently. Within months, tensions between the two superpowers spiked when the Soviets invaded Afghanistan and relations continued to sour through the first Reagan term. Had communication systems been down or U.S. radars detected unrelated missile launches, the situation could have been much more serious.

President Obama: End Hair-Trigger Alert

Since 1979 there have been additional hair-raising incidents and false warnings due to a variety of technical and human errors in both the United States and Russia. Regardless, both countries still keep hundreds of missiles on hair-trigger alert to give their presidents the option of launching them quickly on warning of an attack, increasing the risk that a false alarm could lead to an accidental war. And that risk is significant. Indeed, some retired high-level military officers say an accident or a mistake would be the most likely cause of a nuclear war today.

President Obama understands this risk. Early in his presidency he called for taking U.S. missiles off hair-trigger alert. He has the authority to do so, but has apparently deferred to Cold War holdouts in the Pentagon.

Growing tensions between the United States and Russia now make taking missiles off hair-trigger alert even more urgent. It is during times of crisis when miscalculations and misunderstandings are most likely to occur.

As Col. Shoup and other NORAD officers learned repeatedly, unexpected things happen. They shouldn’t lead to nuclear war.

The best Christmas present President Obama could give to the country this year would be to take U.S. missiles off hair-trigger alert.

Co-written by David Wright and Lisbeth Gronlund. Featured Photo by Bart Fields.

The original version of this article can be found here.

Should AI Be Open?

POSTED ON DECEMBER 17, 2015 BY SCOTT ALEXANDER

I.

H.G. Wells’ 1914 sci-fi book The World Set Free did a pretty good job predicting nuclear weapons:

They did not see it until the atomic bombs burst in their fumbling hands…before the last war began it was a matter of common knowledge that a man could carry about in a handbag an amount of latent energy sufficient to wreck half a city

Wells’ thesis was that the coming atomic bombs would be so deadly that we would inevitably create a utopian one-world government to prevent them from ever being used. Sorry, Wells. It was a nice thought.

But imagine that in the 1910s and 1920s, the period’s intellectual and financial elites had started thinking really seriously along Wellsian lines. Imagine what might happen when the first nation – let’s say America – got the Bomb. It would be totally unstoppable in battle and could take over the entire world and be arbitrarily dictatorial. Such a situation would be the end of human freedom and progress.

So in 1920 they all pool their resources to create their own version of the Manhattan Project. Over the next decade their efforts bear fruit, and they learn a lot about nuclear fission. In particular, they learn that uranium is a necessary resource, and that the world’s uranium sources are few enough that a single nation or coalition of nations could obtain a monopoly upon them. The specter of atomic despotism is more worrying than ever.

They get their physicists working overtime, and they discover a variety of nuke that requires no uranium at all. In fact, once you understand the principles you can build one out of parts from a Model T engine. The only downside to this new kind of nuke is that if you don’t build it exactly right, its usual failure mode is to detonate on the workbench in an uncontrolled hyper-reaction that blows the entire hemisphere to smithereens. But it definitely doesn’t require any kind of easily controlled resource.

And so the intellectual and financial elites declare victory – no one country can monopolize atomic weapons now – and send step-by-step guides to building a Model T nuke to every household in the world. Within a week, both hemispheres are blown to very predictable smithereens.

II.

Some of the top names in Silicon Valley have just announced a new organization, OpenAI, dedicated to “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole…as broadly and evenly distributed as possible.” Co-chairs Elon Musk and Sam Altman talk to Steven Levy:

Levy: How did this come about? […]

Musk: Philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good. […]

Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity.

Levy: Couldn’t your stuff in OpenAI surpass human intelligence?

Altman: I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone.

Levy: If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

Both sides here keep talking about who is going to “use” the superhuman intelligence a billion times more powerful than humanity, as if it were a microwave or something. Far be it from me to claim to know more than Sam Altman about anything, but I propose that the correct answer to “what would you do if Dr. Evil used superintelligent AI” is “cry tears of joy and declare victory”, because anybody at all having a usable level of control over the first superintelligence is so much more than we have any right to expect that I’m prepared to accept the presence of a medical degree and ominous surname.

A more Bostromian view would forget about Dr. Evil, and model AI progress as a race between Dr. Good and Dr. Amoral. Dr. Good is anyone who understands that improperly-designed AI could get out of control and destroy the human race – and who is willing to test and fine-tune his AI however long it takes to be truly confident in its safety. Dr. Amoral is anybody who doesn’t worry about that and who just wants to go forward as quickly as possible in order to be the first one with a finished project. If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future.

Dr. Amoral has a clear advantage in this race: building an AI without worrying about its behavior beforehand is faster and easier than building an AI and spending years testing it and making sure its behavior is stable and beneficial. He will win any fair fight. The hope has always been that the fight won’t be fair, because all the smartest AI researchers will realize the stakes and join Dr. Good’s team.

Open-source AI crushes that hope. Suppose Dr. Good and her team discover all the basic principles of AI but wisely hold off on actually instantiating a superintelligence until they can do the necessary testing and safety work. But suppose they also release what they’ve got on the Internet. Dr. Amoral downloads the plans, sticks them in his supercomputer, flips the switch, and then – as Dr. Good himself put it back in 1963 – “the human race has become redundant.”

The decision to make AI findings open source is a tradeoff between risks and benefits. The risk is letting the most careless person in the world determine the speed of AI research – because everyone will always have the option to exploit the full power of existing AI designs, and the most careless person in the world will always be the first one to take it. The benefit is that in a world where intelligence progresses very slowly and AIs are easily controlled, nobody will be able to use their sole possession of the only existing AI to garner too much power.

Unfortunately, I think we live in a different world – one where AIs progress from infrahuman to superhuman intelligence very quickly, very dangerously, and in a way very difficult to control unless you’ve prepared beforehand.

III.

If AI saunters lazily from infrahuman to human to superhuman, then we’ll probably end up with a lot of more-or-less equally advanced AIs that we can tweak and fine-tune until they cooperate well with us. In this situation, we have to worry about who controls those AIs, and it is here that OpenAI’s model makes the most sense.

But many thinkers in this field including Nick Bostrom and Eliezer Yudkowsky worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a huge subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, one which can identify certain objects in pictures and navigate a complex environment, and on February 1 it’s solved P = NP and started building a ring around the sun, that was a hard takeoff.

(I won’t have enough space here to really do these arguments justice, so I once again suggest reading Bostrom’s Superintelligence if you haven’t already. For more on what AI researchers themselves think of these ideas, see AI Researchers On AI Risk.)

Why should we expect this to happen? Multiple reasons. The first is that it happened before. It took evolution twenty million years to go from cows with sharp horns to hominids with sharp spears; it took only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons. Almost all of the practically interesting differences in intelligence occur within a tiny window that you could blink and miss.

If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.

I think there’s a certain level of hard engineering/design work that needs to be done for intelligence, a level way below humans, and after that the limits on intelligence are less about novel discoveries and more about tradeoffs like “how much brain can you cram into a head big enough to fit out a birth canal?” or “wouldn’t having faster-growing neurons increase your cancer risk?” Computers are not known for having to fit through birth canals or getting cancer, so it may be that AI researchers only have to develop a few basic principles – let’s say enough to make cow-level intelligence – and after that the road to human intelligence runs through adding the line NumberOfNeuronsSimulated = 100000000000 to the code, and the road to superintelligence runs through adding another zero after that.

(Remember, it took all of human history from Mesopotamia to 19th-century Britain to invent a vehicle that could go as fast as a human. But after that it only took another four years to build one that could go twice as fast as a human.)

If there’s a hard takeoff, OpenAI’s strategy looks a lot more problematic. There’s no point in ensuring that everyone has their own AIs, because there’s not much time between the first useful AI and the point at which things get too confusing to model and nobody “has” the AIs at all.

IV.

OpenAI also skips over a second aspect of AI risk: the control problem.

All of this talk of “will big corporations use AI” or “will Dr. Evil use AI” or “will AI be used for the good of all” presupposes that you can use an AI. You can certainly use an AI like the ones in chess-playing computers, but nobody’s very scared of the AIs in chess-playing computers either. The real question is whether AIs powerful enough to be scary are still usable.

Most of the people thinking about AI risk believe there’s a very good chance they won’t be, especially if we get a Dr. Amoral who doesn’t work too hard to make them so. Remember the classic programmers’ complaint: computers always do what you tell them to do instead of what you meant for them to do. The average computer program doesn’t do what you meant for it to do the first time you test it. Google Maps has a relatively simple task (plot routes between Point A and Point B), has been perfected over the course of years by the finest engineers at Google, has been ‘playtested’ by tens of millions of people day after day, and still occasionally does awful things like suggesting you drive over the edge of a deadly canyon, or telling you to walk across an ocean and back for no reason on your way to the corner store.

Humans have a really robust neural architecture, to the point where you can logically prove that what they’re doing is suboptimal, and they’ll shrug and say they don’t see any flaw in your proof but they’re going to keep doing what they’re doing anyway because it feels right. Computers are not like this unless we make them that way, a task which would itself require a lot of trial-and-error and no shortage of genius insights. Computers are naturally fragile and oriented toward specific goals. An AI that ended up with a drive as perverse as Google Maps’ occasional tendency to hurl you off cliffs would not be self-correcting unless we gave it a self-correction mechanism, which would be hard. A smart AI might be able to figure out that humans didn’t mean for it to have the drive it did. But that wouldn’t cause it to change its drive, any more than you can convert a gay person to heterosexuality by patiently explaining to them that evolution probably didn’t mean for them to be gay. Your drives are your drives, whether they are intentional or not.

When Google Maps tells people to drive off cliffs, Google quietly patches the program. AIs that are more powerful than us may not need to accept our patches, and may actively take action to prevent us from patching them. If an alien species showed up in their UFOs, said that they’d created us but made a mistake and actually we were supposed to eat our children, and asked us to line up so they could insert the functioning child-eating gene in us, we would probably go all Independence Day on them; because of computers’ more goal-directed architecture, they would if anything be more willing to fight such changes.

If it really is a quick path from cow-level AI to superhuman-level AI, it would be really hard to test the cow-level AI for stability and expect it to remain stable all the way up to superhuman-level – superhumans have a lot more options available to them than cows do. That means a serious risk of superhuman AIs that want to do the equivalent of hurl us off cliffs, and which are very resistant to us removing that desire from them. We may be able to deal with this, but it would require a lot of deep thought and a lot of careful testing and prodding at the cow-level AIs to make sure they are as prepared as possible for the transition to superhumanity.

And we lose that option by making the AI open source. Make such a program universally available, and while Dr. Good is busy testing and prodding, Dr. Amoral has already downloaded the program, flipped the switch, and away we go.

V.

Once again: The decision to make AI findings open source is a tradeoff between risks and benefits. The risk is that assuming hard takeoff and control problems – two assumptions that a lot of people think make sense – it leads to buggy superhuman AIs that doom everybody. The benefit is that in a world where intelligence progresses very slowly and AIs are easily controlled, nobody will be able to use their sole possession of the only existing AI to garner too much power.

But the benefits just aren’t clear enough to justify that level of risk. I’m still not really sure exactly how the OpenAI founders visualize the future they’re trying to prevent. Are AIs fast and dangerous? Are they slow and easily-controlled? Does just one company have them? Several companies? All rich people? Are they a moderate advantage? A huge advantage? None of those possibilities seem worrisome enough to justify OpenAI’s tradeoff against safety.

Suppose that AIs progress slowly and are easy to control, remaining dominated by one company but gradually becoming necessary for almost every computer user. Microsoft Windows is dominated by one company and became necessary for almost every computer user. For a while people were genuinely terrified that Microsoft would exploit its advantage to become a monopolistic giant that took over the Internet and something something something. Instead, they were caught flat-footed and outcompeted by Apple and Google, plus if you really want you can use something open-source like Linux instead. And anyhow, whenever a new version of Windows comes out, hackers have it up on the Pirate Bay in a matter of days.

Or are we worried that AIs will somehow help the rich get richer and the poor get poorer? This is a weird concern to have about a piece of software which can be replicated pretty much for free. Windows and Google Search are both fantastically complex products of millions of man-hours of research, and both are provided free of charge. In fact, people have gone through the trouble of creating fantastically complex competitors to both and providing those free of charge, to the point where multiple groups are competing to offer people fantastically complex software for free. While it’s possible that rich people will be able to afford premium AIs, it is hard for me to weigh “rich people get premium versions of things” on the same scale as “human race likely destroyed”. Like, imagine the sort of dystopian world where rich people had more power than the rest of us! It’s too horrifying even to talk about!

Or are we worried that AI will progress really quickly and allow someone to have completely ridiculous amounts of power? Remember, there is still a government and they tend to look askance on other people becoming powerful enough to compete with them. If some company is monopolizing AI to become too big and powerful, the government will break it up, the same way they kept threatening to break up Microsoft when it was becoming too big and powerful. If someone tries to use AI to exploit others, the government can pass a complicated regulation against that. You can say a lot of things about the United States government, but you can never say that they don’t realize they can pass complicated regulations forbidding people from doing things.

Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it’ll probably take action. So an AI user isn’t likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we’re back in the intelligence explosion/fast takeoff world where OpenAI’s assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn’t be giving him all our research.

Or are we worried that some big corporation will make an AI more powerful than the US government in secret? I guess this is sort of scary, but you can forgive me for not quite quaking in my boots. So Google takes over the world? Fine. Do you think Larry Page would be a better or worse ruler than one of these people? What if he had a superintelligent AI helping him, and also everything was post-scarcity? Yeah, I guess all in all I’d prefer constitutional limited government, but this is another supposed horror scenario which doesn’t even weigh on the same scale as “human race likely destroyed”.

If OpenAI wants to trade off the safety of the human race from rogue AIs in order to get better safety against people trying to exploit control over AIs, they need to make a much stronger case than anything I’ve seen so far for why the latter is such a terrible risk.

There was a time when the United States was the only country with nukes. While it did nuke two cities during that period, it otherwise mostly failed to press its advantage, bumbled its way into letting the Russians steal the schematics, and now everyone from Israel to North Korea has nuclear weapons and things are pretty okay. If we’d been so afraid of letting the US government have its brief tactical advantage that we’d given the plans for extremely unstable super-nukes to every library in the country, we probably wouldn’t even be around to regret our skewed priorities.

Elon Musk famously said that “AIs are more dangerous than nukes”. He’s right – so AI probably shouldn’t be open source any more than nukes should.

VI.

And yet Elon Musk is involved in this project. So are Sam Altman and Peter Thiel. So are a bunch of other people who I know have read Bostrom, are deeply concerned about AI risk, and are pretty clued-in.

My biggest hope is that as usual they are smarter than I am and know something I don’t. My second biggest hope is that they are making a simple and uncharacteristic error, because these people don’t let errors go uncorrected for long and if it’s just an error they can change their minds.

But I worry it’s worse than either of those two things. I got a chance to talk to some people involved in the field, and the impression I got was one of a competition that was heating up. Various teams led by various Dr. Amorals are rushing forward more quickly and determinedly than anyone expected at this stage, so much so that it’s unclear how any Dr. Good could expect both to match their pace and to remain as careful as the situation demands. There was always a lurking fear that this would happen. I guess I hoped that everyone involved was smart enough to be good cooperators. I guess I was wrong. Instead we’ve reverted to type and ended up in the classic situation of such intense competition for speed that we need to throw every other value under the bus just to avoid being overtaken.

In this context, the OpenAI project seems more like an act of desperation. Like Dr. Good needing some kind of high-risk, high-reward strategy to push himself ahead and allow at least some amount of safety research to take place. Maybe getting the cooperation of the academic and open-source community will do that. I won’t question the decisions of people smarter and better informed than I am if that’s how their strategy talks worked out. I guess I just have to hope that the OpenAI leaders know what they’re doing, don’t skimp on safety research, and have a process for deciding which results not to share too quickly. But I’m terrified that it’s come to this. It suggests that we really and truly do not have what it takes, that we’re just going to blunder our way into extinction because cooperation problems are too hard for us.

I am reminded of what Malcolm Muggeridge wrote as he watched World War II begin:

All this likewise indubitably belonged to history, and would have to be historically assessed; like the Murder of the Innocents, or the Black Death, or the Battle of Paschendaele. But there was something else; a monumental death-wish, an immense destructive force loosed in the world which was going to sweep over everything and everyone, laying them flat, burning, killing, obliterating, until nothing was left…Nor have I from that time ever had the faintest expectation that, in earthly terms, anything could be salvaged; that any earthly battle could be won or earthly solution found. It has all just been sleep-walking to the end of the night.

The AI Wars: The Battle of the Human Minds to Keep Artificial Intelligence Safe

For all the media fear mongering about the rise of artificial intelligence in the future and the potential for malevolent machines, the first battle in the AI war has already begun. But this one is being waged by some of the most impressive minds within the realm of human intelligence today.

At the start of 2015, few AI researchers were worried about AI safety, but that all changed quickly. Throughout the year, Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, grew increasingly popular. The Future of Life Institute held its AI safety conference in Puerto Rico. Two open letters regarding artificial intelligence and autonomous weapons were released. Countless articles came out, quoting AI concerns from the likes of Elon Musk, Stephen Hawking, Bill Gates, Steve Wozniak, and other luminaries of science and technology. Musk donated $10 million in funding to AI safety research through FLI. Fifteen million dollars was granted for the creation of the Leverhulme Centre for the Future of Intelligence. And most recently, the nonprofit AI research company, OpenAI, was launched to the tune of $1 billion, which will allow some of the top minds in the AI field to address safety-related problems as they come up.

In all, it’s been a big year for AI safety research. Many in science and industry have joined the AI-safety-research-is-needed camp, but there are still some stragglers of equally staggering intellect. So just what does the debate still entail?

OpenAI was the big news of the past week, and its launch coincided (probably not coincidentally) with the Neural Information Processing Systems conference, which attracts some of the best-of-the-best in machine learning. Among the attractions at the conference was the symposium, Algorithms Among Us: The Societal Impacts of Machine Learning, where some of the most influential people in AI research and industry debated their thoughts and concerns about the future of artificial intelligence.

[Author’s note: The following are symposium highlights grouped together by topic to inform about arguments in the world of AI research. The discussions did not necessarily occur in the order below.]
 

From session 2 of the Algorithms Among Us symposium: Murray Shanahan, Shane Legg, Andrew Ng, Yann LeCun, Tom Dietterich, and Gary Marcus

What is AGI and should we be worried about it?

Artificial general intelligence (AGI) is the term given to artificial intelligence that would be, in some sense, equivalent to human intelligence. It wouldn’t solve just a narrow, specific task, as AI does today, but would instead solve a variety of problems and perform a variety of tasks, with or without being programmed to do so. That said, it’s not the most well-defined term. As the director of Facebook’s AI research group, Yann LeCun stated, “I don’t want to talk about human-level intelligence because I don’t know what that means really.”

If defining AGI is difficult, predicting if or when it will exist is nearly impossible. Some of the speakers, like LeCun and Andrew Ng, didn’t want to waste time considering the possibility of AGI since they consider it to be so distant. Both referenced the likelihood of another AI winter, in which, after all this progress, scientists will hit a research wall that will take some unknown number of years or decades to overcome. Ng, a Stanford professor and Chief Scientist of Baidu, compared concerns about the future of human-level AI to far-fetched worries about the difficulties surrounding travel to the star system Alpha Centauri.

LeCun pointed out that we don’t really know what a superintelligent AI would look like. “Will AI look like human intelligence? I think not. Not at all,” he said. He then went on to explain why human intelligence isn’t nearly as general as we like to believe. “We’re driven by basic instincts […] They (AI) won’t have the drives that make humans do bad things to each other.” He added that there would be no reason he can think of to build preservation instincts or curiosity into machines.

However, many of the participants disagreed with LeCun and Ng, emphasizing the need to be prepared in advance of problems, rather than trying to deal with them as they arise.

Shane Legg, co-founder of Google’s DeepMind, argued that the benefit of starting safety research now is that it will help us develop a framework that will allow researchers to move in a positive direction toward the development of smarter AI. “In terms of AI safety, I think it’s both overblown and underemphasized,” he said, commenting on how profound – both positively and negatively – the societal impact of advanced AI could be. “If we are approaching a transition of this magnitude, I think it’s only responsible that we start to consider, to whatever extent that we can in advance, the technical aspects and the societal aspects and the legal aspects and whatever else […] Being prepared ahead of time is better than trying to be prepared after you already need some good answers.”

Gary Marcus, Director of the NYU Center for Language and Music, added, “In terms of being prepared, we don’t just need to prepare for AGI, we need to prepare for better AI […] Already, issues of security and risk have come forth.”

Even Ng agreed that AI safety research certainly wasn’t a bad thing, saying, “I’m actually glad there are other parts of society studying ethical parts of AI. I think this is a great thing to do.” Though he also admitted it wasn’t something he wanted to spend his own time on.
 

It’s the economy…

Among all of the AI issues debated by researchers, the one agreed upon by almost everyone who took the stage at the symposium was the detrimental impact AI could have on the job market. Erik Brynjolfsson, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, set the tone for the discussion with his presentation, which highlighted some of the effects that artificial intelligence will have on the economy. He explained that we’re in the midst of incredible technological advances, which could be highly beneficial, but our skills, organizations and institutions aren’t keeping up. Because of the huge gap in pace, business as usual won’t work.

As unconcerned about the future of AGI as Ng was, he quickly became the strongest advocate for tackling the economic issues that will pop up in the near future. “I think the biggest challenge is the challenge of unemployment,” Ng said.

The issue of unemployment is one that is already starting to appear, even with the very narrow AI that exists today. Around the world, low- and middle-skilled workers are getting displaced by robots or software, and that trend is expected to continue at rapid rates.

LeCun argued that the world also overcame the massive job loss that resulted from the new technologies of the steam-engine era, but both Brynjolfsson and Ng disagreed with that argument, citing the much more rapid pace of technology today. “Technology has always been destroying jobs, and it’s always been creating jobs,” Brynjolfsson admitted, but he also explained how difficult it is to predict which technologies will impact us the most and when they’ll kick in. The current exponential rate of technological progress is unlike anything we’ve ever experienced before in history.

Bostrom mentioned that the rise of thinking machines will be more analogous to the rise of the human species than to the steam engine or the industrial revolution. He reminded the audience that if a superintelligent AI is developed, it will be the last invention we ever have to make.

A big concern with the economy is that the job market is changing so quickly that most people can’t develop new skills fast enough to keep up. The possibility of a basic income and paying people to go back to school were both mentioned. However, the psychological toll of being unemployed may not be overcome even with a basic income, and the effect that mass unemployment might have on people drew concern from the panelists.

Bostrom became an unexpected voice of optimism, pointing out that there have always been groups who were unemployed, such as aristocrats, children and retirees. Each of these groups managed to enjoy their unemployed time by filling it with other hobbies and activities.

However, solutions like basic income and leisure time will only work if political leaders begin to take the initiative soon to address the unemployment issues that near-future artificial intelligence will trigger.
 

From session 2 of the Algorithms Among Us symposium: Michael Osborne, Finale Doshi-Velez, Neil Lawrence, Cynthia Dwork, Tom Dietterich, Erik Brynjolfsson, and Ian Kerr

Closing arguments

Ideally, technology is just a tool that is not inherently good or bad. Whether it helps humanity or hurts us should depend on how we use the tool. Except if AI develops the capacity to think, this argument isn’t quite accurate. At that point, the AI isn’t a person, but it isn’t just an instrument either.

Ian Kerr, the Research Chair of Ethics, Law, and Technology at the University of Ottawa, spoke early in the symposium about the legal ramifications (or lack thereof) of artificial intelligence. The overarching question for an AI gone wrong is: who’s to blame? Who will be held responsible when something goes wrong? Or, on the flip side, who is to blame if a human chooses to ignore the advice of an AI that’s had inconsistent results, but which later turns out to have been the correct advice?

If anything, one of the most impressive results from this debate was how often the participants agreed with each other. At the start of the year, few AI researchers were worried about safety. Now, though many still aren’t worried, most acknowledge that we’re all better off if we consider safety and other issues sooner rather than later. Most of the disagreement was over when we should start working on AI safety, not whether it should happen. The panelists also all agreed that regardless of how smart AI might become, it will happen incrementally, rather than as the “event” that is implied in so many media stories. We already have machines that are smarter and better at some tasks than humans, and that trend will continue.

For now, as Harvard Professor Finale Doshi-Velez pointed out, we can control what we get out of the machine: if we don’t like or understand the results, we can reprogram it.

But how much longer will that be a viable solution?
 

Coming soon…

The article above highlights some of the discussion that occurred between AI researchers about whether or not we need to focus on AI safety research. Because so many AI researchers do support safety research, there was also much more discussion during the symposium about which areas pose the most risk and have the most potential. We’ll be starting a new series in the new year that goes into greater detail about different fields of study that AI researchers are most worried about and most excited about.

 

Pentagon Seeks $12-$15 Billion for AI Weapons Research

The news this month is full of stories about money pouring into AI research. First we got the news about the $15 million granted to the new Leverhulme Centre for the Future of Intelligence. Then Elon Musk and friends dropped the news about launching OpenAI to the tune of $1 billion, promising that this would be a not-for-profit company committed to safe AI and improving the world. But that all pales in comparison to the $12-$15 billion that the Pentagon is requesting for the development of AI weapons.

According to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” The military is looking to develop more advanced weapons technologies that will include autonomous weapons and deep learning machines.

While the research itself would be strictly classified, the military wants to ensure that countries like China and Russia know this advanced weapons research is taking place.

“I want our competitors to wonder what’s behind the black curtain,” Deputy Defense Secretary Robert Work said.

The United States will continue to try to develop positive relations with Russia and China, but Work believes AI weapons R&D will help strengthen deterrence.

Read the full Reuters article here.

 

 

GCRI December News Summary

The following is the December news summary for the Global Catastrophic Risk Institute, written by .
It was originally published at the Global Catastrophic Risk Institute. Please sign up for the GCRI newsletter.

Chinese power plant image courtesy of Tobias Brox under a Creative Commons Attribution-ShareAlike 3.0 Unported license (the image has been cropped)

Turkish F-16s shot down a Russian Su-24 fighter-bomber near the border between Turkey and Syria. Some reports indicate that the Russian plane’s pilots were shot and possibly killed as they parachuted from their damaged plane. It was the first time a NATO member shot down a Russian military plane since the end of the Cold War. Turkey claimed the Russian plane violated its airspace for five minutes and was shot down only after ignoring ten warnings. Russia said that its plane never entered Turkish airspace. The US military confirmed Turkey’s claim that the Russian plane did receive ten warnings without apparently responding. But Der Spiegel said that both analysis of the flight path of the plane provided by the Turkish military and NATO sources indicate the plane was inside Turkish airspace for only a few seconds. Turkey had already accused Russian planes of violating its airspace on a number of different occasions. Russia formally apologized to Turkey in October when another Russian plane crossed into Turkish airspace. Turkey has also complained about Russian attacks on Syrian villages inhabited by Turkmen, an ethnic group with cultural connections with Turkey. Russian President Vladimir Putin called the incident a “stab in the back”. US President Barack Obama said Turkey had the right to defend itself but called for all sides to de-escalate the situation. Turkey called for an emergency NATO meeting to discuss the incident. Max Fisher noted in Vox that an incident between NATO and Russian forces near the Syrian border like this probably would not escalate, because neither side is likely to mistake it as the beginning of a real attack; if a Russian plane were shot down in the Baltics it would probably be much more dangerous.

Russia launched a military satellite thought to be the first part of a new system designed to provide early warning of ballistic missile launches. Russia’s last early-warning satellite failed in 2014. President Putin announced earlier in the month that Russia planned to deploy weapons that are “capable of penetrating any missile defenses”. Russia has objected to the US missile defense system, which Russia says is intended to render Russia’s nuclear deterrent ineffective. The US says that its missile defenses are designed to protect against limited attacks from countries like Iran and North Korea and could not protect the US or its allies against a Russian strike. At a meeting on the state of the Russian defense industry, Russian cameras captured what appeared to be a page in a briefing book outlining the development of an underwater drone called “Status-6” designed to deliver a nuclear weapon to port cities. The system would ostensibly be intended to maximize nuclear fallout in order to inflict “unacceptable damage to a country’s territory by creating areas of wide radioactive contamination that would be unsuitable for military, economic, or other activity for long periods of time”. Steven Pifer argued that Russia deliberately leaked the weapon design for domestic political purposes, but that it would actually be of limited strategic value and might not be something Russia actually intends to build.

According to revised official data, China has been underestimating its coal use since 2000. The new data show that China has been emitting as much as 17% more greenhouse gases from coal than was previously disclosed. China was already the largest emitter of greenhouse gases before the revision. China’s emissions were revised upward by an amount equivalent to the emissions of the entire German economy. The new data will not change scientists’ estimates of the amount of carbon dioxide in the atmosphere, which is measured directly. But it may force scientists to revise their estimate of how much carbon is being absorbed by “carbon sinks” like forests and oceans. “We have known for some time that China was underreporting coal consumption,” the Sierra Club’s Nicole Ghio told ThinkProgress. “The fact that the Chinese government is now revising the numbers to more accurately reflect the real consumption is a good thing.”

A new World Bank report found that climate change could cause more than 100 million people to fall into poverty by 2030 by increasing the spread of diseases and interfering with agriculture. The report cited studies showing that climate change could cause crop yields to fall by 5% and increase the number of people at risk for malaria by 150 million by 2030. The report calls for proactive climate adaptation measures, like building dikes and drainage systems to manage flooding and the cultivation of climate-resistant crops and livestock. But the report said that ultimately only efforts to reduce global emissions will protect the world’s poor from the impact of climate change.

Three new cases of Ebola were confirmed in a suburb of Monrovia, more than a month after Liberia was declared free of the disease for the second time. Investigators have not determined how the index patient, a 15-year-old boy, contracted the disease. The fact that the boy’s mother tested positive for high levels of Ebola antibodies raises the possibility that the disease could be spreading through undocumented or mildly symptomatic cases. Studies show that the virus can remain in bodily fluids for as many as nine months in survivors. World Health Organization (WHO) Special Representative for the Ebola Response Bruce Aylward said that flare-ups of Ebola after countries have been declared free of the disease should be treated as rare but inevitable. Dan Kelly, a doctor who advises Partners in Health in Sierra Leone, said that “if we have learned anything in this epidemic, it’s that 42 days is not adequate to declare the end of human-to-human transmission.”

An independent panel of researchers at Harvard and the London School of Hygiene & Tropical Medicine looking into the response to the Ebola outbreak called for independent oversight of WHO. In September, an AP investigation found that senior WHO officials were reluctant to declare Ebola a health emergency for political and economic reasons. The panel’s report called for the creation of a dedicated outbreak response center and a “politically protected” committee with the authority to declare public health emergencies. Suerie Moon, a public health researcher at Harvard who worked on the report, said that “The WHO is too important to fail”.

Researchers created a hybrid version of a bat coronavirus related to the virus that causes severe acute respiratory syndrome (SARS) that is capable of infecting human airway cells. The virus is not the first bat coronavirus known to be capable of binding to key receptors on human airway cells, but the results suggest that bat coronaviruses may be more of a danger to humans than previously believed. In 2014, the US asked researchers to suspend “gain-of-function” research making certain viruses more deadly or transmissible while the National Science Advisory Board for Biosecurity and the National Research Council assess the risks. The bat coronavirus research had already started and was allowed to continue when the moratorium was called. Critics of gain-of-function research worry that what we learn from it does not justify the risk of creating dangerous new viruses. Rutgers molecular biologist Richard Ebright told Nature that in his opinion “the only impact of this work is the creation, in a lab, of a new, non-natural risk”.

A study in The Journal of Volcanology and Geothermal Research finds that supervolcanoes may erupt only when they are triggered by something external like an earthquake or faults in the structure of the surrounding rock. Another recent paper in Science argued that the increase in volcanic activity in the Deccan Traps that may have contributed to the Cretaceous-Paleogene extinction event 65 million years ago could have been triggered by the asteroid or comet that hit the Earth around that time. The Journal of Volcanology and Geothermal Research paper goes against the prevailing theory that supervolcanoes erupt when the internal pressure within a magma chamber builds to the point that it causes an explosion. But the study’s model suggests that the buoyancy of the magma may not actually put much pressure on the magma chamber. Lead author Patricia Gregg said there is also not much evidence of pressure build up at supervolcano sites. “If we want to monitor supervolcanoes to determine if one is progressing toward eruption, we need better understanding of what triggers a supereruption,” Gregg said. “It’s very likely that supereruptions must be triggered by an external mechanism and not an internal mechanism, which makes them very different from the typical, smaller volcanoes that we monitor.”

A Nature Communications paper argued that the Earth was hit by massive solar storms at least twice in the first millennium A.D. Scientists suspect that the spike in the concentration of carbon-14 in the atmosphere around the years 774 and 993 was due to some kind of surge of extraterrestrial radiation. The recent paper argues that new measurements of the concentration of isotopes of beryllium and chlorine from Arctic and Antarctic ice cores indicate that the radiation that hit the Earth was from solar storms at least five times larger than any solar storm scientists have measured. These storms would have to have been even stronger than the 1859 “Carrington Event”, which interfered with telegraph systems and created auroras over large parts of the planet, but did not notably increase the concentration of carbon-14 in the atmosphere. A solar storm that large could blow out transformers and damage electrical systems around the world.

This news summary was put together in collaboration with Anthropocene. Thanks to Tony Barrett, Seth Baum, Kaitlin Butler, and Grant Wilson for help compiling the news.

OpenAI Announced

Press release from OpenAI:
Introducing OpenAI

by Greg Brockman, Ilya Sutskever, and the OpenAI team
December 11, 2015
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

Background

Artificial intelligence has always been a surprising field. In the early days, people thought that solving certain tasks (such as chess) would lead us to discover human-level intelligence algorithms. However, the solution to each task turned out to be much less general than people were hoping (such as doing a search over a huge number of moves).

The past few years have held another flavor of surprise. An AI technique explored for decades, deep learning, started achieving state-of-the-art results in a wide variety of problem domains. In deep learning, rather than hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them.

This approach has yielded outstanding results on pattern recognition problems, such as recognizing objects in images, machine translation, and speech recognition. But we’ve also started to see what it might be like for computers to be creative, to dream, and to experience the world.

Looking forward

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

OpenAI

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

You can follow us on Twitter at @open_ai or email us at info@openai.com.

MIRI’s December Newsletter Is Live!

The latest edition covers MIRI’s research updates, general updates, and news and links.

The Future of Humanity Institute Is Hiring!

Exciting news from FHI:

The Future of Humanity Institute at the University of Oxford invites applications for four postdoctoral research positions. We seek outstanding applicants with backgrounds that could include computer science, mathematics, economics, technology policy, and/or philosophy.

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

1. Research Fellow – AI – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121242). We are seeking expertise in the technical aspects of AI safety, including a solid understanding of present-day academic and industrial research frontiers, machine learning development, and knowledge of academic and industry stakeholders and groups. The fellow is expected to have the knowledge and skills to advance the state of the art in proposed solutions to the “control problem.” This person should have a technical background, for example, in computer science, mathematics, or statistics. Candidates with a very strong machine learning or mathematics background are encouraged to apply even if they do not have experience with AI safety topics, assuming they are willing to switch to this subfield. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1M11RbY.

2. Research Fellow – AI Policy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121241). We are looking for someone with expertise relevant to assessing the socio-economic and strategic impacts of future technologies, identifying key issues and potential risks, and rigorously analysing policy options for responding to these challenges. This person might have an economics, political science, social science, or risk analysis background. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1OfWd7Q.

3. Research Fellow – AI Strategy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121168). We are looking for someone with a multidisciplinary science, technology, or philosophy background and with outstanding analytical ability. The post holder will investigate, understand, and analyse the capabilities and plausibility of theoretically feasible but not yet fully developed technologies that could impact AI development, and relate such analysis to broader strategic and systemic issues. The academic background of the post holder is unspecified, but could involve, for example, computer science or economics. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1jM5Pic.

4. Research Fellow – ERC UnPrEDICT Programme, Future of Humanity Institute (Vacancy ID# 121313). The holder of this Research Fellowship will work on the new European Research Council-funded UnPrEDICT (Uncertainty and Precaution: Ethical Decisions Involving Catastrophic Threats) programme, hosted by the Future of Humanity Institute at the University of Oxford. This is a research position for a strong generalist, and will focus on topics related to existential risk, model uncertainty, the precautionary principle, and other principles for handling technological progress. In particular, this research fellow will help to develop decision procedures for navigating empirical uncertainties related to existential risk, including information hazards and situations where model or structural uncertainty is the dominant form of uncertainty. The research could take a decision-theoretic approach, although this is not strictly necessary. We also expect the candidate to engage with research on specific existential risks, possibly including developing a framework to evaluate uncertain risks in the context of nuclear weapons, climate risks, dual-use biotechnology, and/or the development of future artificial intelligence. The successful candidate must demonstrate evidence of, or the potential for producing, outstanding research in areas relevant to the project, the ability to integrate interdisciplinary research in philosophy, mathematics, and/or economics, and familiarity with both normative and empirical issues surrounding existential risk. Applications are due by Noon 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1HSCKgP.

Alternatively, please visit http://www.fhi.ox.ac.uk/vacancies/ or https://www.recruit.ox.ac.uk/ and search using the above vacancy IDs for more details.

Guest Blog: Paris, Nuclear Weapons, and Suicide Bombing

The following post was written by Dr. Alan Robock, a Distinguished Professor of Climate Science at Rutgers University.

France’s 300 nuclear weapons were useless to protect it from the horrendous suicide bomb attacks in Paris on Nov. 13, 2015. And if France ever uses those weapons to attack another country’s cities and industrial areas, France itself will become a suicide bomber. Mutually assured destruction gave way to self-assured destruction years ago, when we discovered that, even if a country launches a successful nuclear strike against its enemy, the resulting nuclear winter could kill billions more around the world, including the attacking country’s own citizens. The climate effects of the smoke generated by fires from those attacks would last for more than a decade, plunging our planet into such cold temperatures that agricultural production would be halted or severely reduced, producing famine in France and the rest of the world.

It is imperative for France and the rest of the world to get rid of their nuclear arsenals. They cannot be used without endangering the attacker. The threat of their use by any nation is ludicrous and cannot be taken seriously. They do not provide a deterrent. Not only do nuclear weapons not deter terrorists, they do not deter nations from attacking. Just think of Argentina’s attack on the UK (the Falkland Islands War), the attacks on Israel (the Six Day War), and the invasion of Eastern Europe after World War II.

The chance of the use of nuclear weapons by mistake, in a panic after an international incident, by a computer hacker, or by a rogue leader of a nuclear nation can only be removed by the removal of the weapons themselves.

As the important climate negotiations at the 21st Conference of the Parties in Paris in December 2015 continue, we have to keep in mind that the greatest threat to our planet from human actions is not global warming, as important as this threat is, but from the accidental or intentional use of nuclear weapons. We need to ban nuclear weapons now, so we have the luxury of addressing the global warming problem.

This article was also featured on the Huffington Post.

The World Has Lost 33% of Its Farmable Land

During the Paris climate talks last week, researchers from the University of Sheffield’s Grantham Center revealed that in the last 40 years, the world has lost nearly 33% of its farmable land.

The loss is attributed to erosion and pollution, and the effects are expected to be exacerbated by climate change. Meanwhile, global food production will need to grow by 60% over the next 35 years to meet projected demand.
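As a rough back-of-the-envelope check (my own arithmetic, not a figure from the Grantham Center), the headline numbers above can be converted into compound annual rates:

```python
def annual_rate(total_factor: float, years: float) -> float:
    """Constant annual rate r such that (1 + r) ** years == total_factor."""
    return total_factor ** (1.0 / years) - 1.0

# 33% of farmable land lost over the last 40 years
loss_rate = annual_rate(1 - 0.33, 40)
# 60% more food production needed over the next 35 years
growth_rate = annual_rate(1 + 0.60, 35)

print(f"implied annual farmland loss:     {-loss_rate:.2%}")   # roughly 1% per year
print(f"implied annual production growth: {growth_rate:.2%}")  # roughly 1.4% per year
```

Even under this simplified constant-rate assumption, the two trends point in opposite directions: the land base shrinks by about one percent a year while required output rises slightly faster, which is the tension the Grantham researchers highlight.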

Researchers at the Grantham Center argue that the current intensive agriculture system is unsustainable. Modern agriculture requires heavy use of fertilizers, which “consume 5% of the world’s natural gas production and 2% of the world’s annual energy supply.” This use of fertilizers also allows “nutrients to wash out and pollute fresh coastal waters, causing algal blooms and lethal oxygen depletion,” along with a host of other problems. As fertilizers weaken the soil, heavily ploughed fields can face erosion rates that are “10-100 times greater than [the] rates of soil formation.”

Organic farming typically involves better soil management practices, but its crop yields would not be sufficient to feed the growing global population.

In response to these concerns, Grantham Center researchers have called for a sustainable model for intensive agriculture that will incorporate lessons both from history and modern biotechnology. The scientists suggest the following three principles for improved farming practices:

  1. “Managing soil by direct manure application, rotating annual and cover crops, and practicing no-till agriculture.”
  2. “Using biotechnology to wean crops off the artificial world we have created for them, enabling plants to initiate and sustain symbioses with soil microbes.”
  3. “Recycling nutrients from sewage in a modern example of circular economy. Inorganic fertilizers could be manufactured from human sewage in biorefineries operating at industrial or local scales.”

The Grantham researchers recognize that the task of improving our farming situation can’t just fall on farmers’ shoulders. They expect policymakers will also need to get involved.

Speaking to the Guardian, Duncan Cameron, one of the scientists involved in this study, said, “We can’t blame the farmers in this. We need to provide the capitalisation to help them rather than say, ‘Here’s a new policy, go and do it.’ We have the technology. We just need the political will to give us a fighting chance of solving this problem.”

Read the complete Grantham Center briefing note here.