The Top A.I. Breakthroughs of 2015

Progress in artificial intelligence and machine learning has been impressive this year. Researchers in the field acknowledge that progress is accelerating year by year, though the pace still feels manageable. Much of the work in the field now builds directly on results published by other teams earlier the same year, in contrast to most other fields, where references routinely span decades.

Summarizing a wide range of developments in this field almost inevitably leads to descriptions that sound heavily anthropomorphic, and this summary is no exception. Such metaphors, however, are only convenient shorthand for the underlying functionality. It’s important to remember that even though many of these capabilities sound very thought-like, they are usually not very similar to how human cognition works. The systems are all functional and mechanistic and, though less so than before, each is still quite narrow in what it does. Be warned, though: by the time you finish this article, capabilities that at first sound fanciful may start to seem prosaic.

The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I’ll highlight a small number of important threads within each that have brought the field forward this year.

 

Abstracting Across Environments

A long-term goal of the field of AI is to achieve artificial general intelligence: a single learning program that can learn and act in completely different domains at the same time, able to transfer skills and knowledge learned in, say, making cookies and apply them to making brownies even better than it otherwise would have. A significant stride toward this kind of generality came from Parisotto, Ba, and Salakhutdinov. They built on DeepMind’s seminal DQN, published earlier this year in Nature, which learns to play many different Atari games well.


Instead of using a fresh network for each game, this team combined deep multitask reinforcement learning with deep transfer learning so that the same deep neural network can be used across different types of games. This leads not only to a single instance that can succeed in multiple games, but to one that also learns new games better and faster because of what it remembers about the other games. For example, it can learn a new tennis video game faster because it already gets the concept — the meaningful abstraction of hitting a ball with a paddle — from when it was playing Pong. This is not yet general intelligence, but it erodes one of the hurdles on the way there.
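To make the weight-sharing idea concrete, here is a minimal sketch (not the authors’ actual model) of a multi-game Q-network in which a shared trunk feeds one small output head per game, so most of what is learned carries over when a new game is added. The layer sizes follow the standard DQN setup; everything else is a generic placeholder.

```python
# Illustrative sketch only: a generic multitask Q-network with a shared trunk
# and one output head per game. This is not the model from the paper cited
# above; it just shows the idea of reusing most weights across tasks so that
# a new game benefits from what was learned on earlier ones.
import torch
import torch.nn as nn

class MultiGameQNet(nn.Module):
    def __init__(self, num_actions_per_game):
        super().__init__()
        # Shared convolutional trunk over stacks of four 84x84 grayscale frames.
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
        )
        # One small Q-value head per game; everything else is shared.
        self.heads = nn.ModuleList(
            [nn.Linear(512, n_actions) for n_actions in num_actions_per_game]
        )

    def forward(self, frames, game_id):
        features = self.trunk(frames)          # shared representation
        return self.heads[game_id](features)   # game-specific Q-values


# Hypothetical usage: Q-values for a batch of frames from game 0.
net = MultiGameQNet(num_actions_per_game=[4, 6, 18])
frames = torch.zeros(8, 4, 84, 84)
q_values = net(frames, game_id=0)              # shape: (8, 4)
```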

Reasoning across different modalities has been another bright spot this year. The Allen Institute for AI and the University of Washington have been working on test-taking AIs, over the years working up from 4th-grade-level tests to 8th-grade-level tests, and this year announced a system that addresses the geometry portion of the SAT. Such geometry tests combine diagrams, supplemental information, and word problems. In narrower AI systems, these different modalities would typically be analyzed separately, essentially as different environments. This system combines computer vision and natural language processing, grounds both in the same structured formalism, and then applies a geometric reasoner to answer the multiple-choice questions, matching the performance of the average American 11th-grade student.

 

Intuitive Concept Understanding

A more general method of multimodal concept grounding has come about from deep learning in the past few years: Subsymbolic knowledge and reasoning are implicitly understood by a system rather than being explicitly programmed in or even explicitly represented. Decent progress has been made this year in the subsymbolic understanding of concepts that we as humans can relate to. This progress helps with the age-old symbol grounding problem — how symbols or words get their meaning. The increasingly popular way to achieve this grounding these days is by joint embeddings — deep distributed representations where different modalities or perspectives on the same concept are placed very close together in a high-dimensional vector space.
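To make the idea concrete, here is a minimal sketch of a joint embedding: one encoder per modality is trained so that matching pairs land close together in a shared vector space while mismatched pairs are pushed at least a margin apart. The encoders, dimensions, and loss below are generic placeholders rather than any particular paper’s model.

```python
# Minimal sketch of a joint embedding across two modalities (e.g. an image
# feature vector and a text feature vector). Matching pairs are pulled
# together and mismatched pairs pushed apart with a margin-based loss.
# All dimensions and data here are placeholders, not from any specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Linear(2048, 256)   # e.g. pretrained CNN features -> shared space
text_encoder = nn.Linear(300, 256)     # e.g. averaged word vectors -> shared space

def joint_embedding_loss(image_feats, text_feats, margin=0.2):
    # L2-normalize so similarity is just a dot product (cosine similarity).
    img = F.normalize(image_encoder(image_feats), dim=1)
    txt = F.normalize(text_encoder(text_feats), dim=1)
    sim = img @ txt.t()                       # pairwise similarities
    positive = sim.diag().unsqueeze(1)        # similarity of true image/text pairs
    # Hinge loss: every mismatched pair should be at least `margin` less
    # similar than the true pair, in both directions.
    cost_txt = (margin + sim - positive).clamp(min=0)
    cost_img = (margin + sim - positive.t()).clamp(min=0)
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    return cost_txt.masked_fill(mask, 0).mean() + cost_img.masked_fill(mask, 0).mean()

# Hypothetical batch of 32 paired examples.
loss = joint_embedding_loss(torch.randn(32, 2048), torch.randn(32, 300))
loss.backward()
```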

Last year, this technique helped power abilities like automated image captioning, and this year a team from Stanford and Tel Aviv University extended the basic idea to jointly embed images and 3D shapes, bridging computer vision and graphics. Rajendran et al. then extended joint embeddings to support the confluence of multiple meaningfully related mappings at once, across different modalities and different languages. As these embeddings become more sophisticated and detailed, they can serve as workhorses for more elaborate AI techniques. Ramanathan et al. have leveraged them to create a system that learns a meaningful schema of relationships between different types of actions from a set of photographs and a dictionary.

As single systems increasingly do multiple things, and as deep learning blurs the line between features of the data and learned concepts, the distinction between the two will fade away. Another demonstration of this deep feature grounding, by a team from Cornell and WUStL, uses a dimensionality reduction of a deep net’s weights to form a surface of convolutional features that can simply be slid along to meaningfully, automatically, and photorealistically alter particular aspects of photographs, e.g., changing people’s facial expressions or their ages, or colorizing photos.
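A drastically simplified cousin of that capability can be sketched in a few lines: estimate a direction in deep-feature space that corresponds to an attribute, then slide a photo’s features along it. The cited work uses a far more sophisticated manifold construction and maps the result back to a photorealistic image; the toy sketch below only illustrates the sliding step, with made-up dimensions.

```python
# Toy illustration of editing in deep-feature space: represent photos by deep
# features, estimate a direction that corresponds to an attribute (e.g.
# "smiling" vs. "not smiling"), and slide a new photo's features along it.
# The actual work cited above is considerably more sophisticated and also
# reconstructs a photorealistic image from the edited features.
import numpy as np

def attribute_direction(features_with, features_without):
    """Each argument: (num_images, feature_dim) matrix of deep features."""
    direction = features_with.mean(axis=0) - features_without.mean(axis=0)
    return direction / np.linalg.norm(direction)

def slide_along(features, direction, amount):
    # amount > 0 adds the attribute, amount < 0 removes it.
    return features + amount * direction

# Hypothetical 512-dimensional features extracted from some deep network.
rng = np.random.default_rng(1)
smiling, neutral = rng.normal(size=(100, 512)), rng.normal(size=(100, 512))
direction = attribute_direction(smiling, neutral)
edited = slide_along(rng.normal(size=512), direction, amount=2.0)
```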

 

One hurdle for deep learning techniques is that they require a lot of training data to produce good results. Humans, on the other hand, are often able to learn from just a single example. Lake, Salakhutdinov, and Tenenbaum bridged this gap with a technique for human-level concept learning through Bayesian program induction from a single example. This system is then able, for instance, to draw variations on symbols in a way indistinguishable from variations drawn by humans.

 

Creative Abstract Thought

Beyond understanding simple concepts lies grasping aspects of causal structure — understanding how ideas tie together to make things happen or tell a story in time — and to be able to create things based on those understandings. Building on the basic ideas from both DeepMind’s neural Turing machine and Facebook’s memory networks, combinations of deep learning and novel memory architectures have shown great promise in this direction this year. These architectures provide each node in a deep neural network with a simple interface to memory.

Kumar and Socher’s dynamic memory networks improved on memory networks with better support for attention and sequence understanding. Like the original, this system could read stories and answer questions about them, implicitly learning 20 kinds of reasoning, including deduction, induction, temporal reasoning, and path finding, without ever being explicitly programmed with any of them. Sukhbaatar, Weston, and colleagues’ more recent end-to-end memory networks then added the ability to perform multiple computational hops per output symbol, expanding modeling capacity and expressivity to capture things like out-of-order access, long-term dependencies, and unordered sets, further improving accuracy on such tasks.
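Underneath both systems is a simple core mechanic: a soft attention read over stored memories, repeated for several “hops.” Here is a bare-bones sketch of that read step, stripped of the embeddings, gating, and training machinery of the actual papers.

```python
# Bare-bones sketch of the attention-based memory read used (in much more
# elaborate form) by memory networks and dynamic memory networks: score every
# stored memory against the query, softmax the scores, and return a weighted
# sum. Multiple "hops" repeat the read with an updated query.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(memories, query):
    """memories: (num_memories, dim) embeddings; query: (dim,) embedding."""
    scores = memories @ query            # dot-product relevance of each memory
    weights = softmax(scores)            # soft attention over memories
    return weights @ memories, weights   # weighted sum = retrieved information

def multi_hop_read(memories, query, hops=3):
    # Each hop folds what was just read back into the query, which is what
    # lets a model chain facts together (e.g. "where is the milk?" ->
    # "John picked up the milk" -> "John went to the kitchen").
    for _ in range(hops):
        read, _ = memory_read(memories, query)
        query = query + read
    return query

# Hypothetical toy example: 5 memorized sentences embedded in 16 dimensions.
rng = np.random.default_rng(0)
memories = rng.normal(size=(5, 16))
answer_state = multi_hop_read(memories, query=rng.normal(size=16))
```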

Programs themselves are, of course, also data, and they certainly exhibit complex causal, structural, grammatical, sequence-like properties, so programming is ripe for this approach. Last year, neural Turing machines showed that deep learning of programs is possible. This year, Grefenstette et al. showed how programs can be transduced, or generatively figured out from sample inputs and outputs, much more efficiently than with neural Turing machines, by using a new type of memory-based recurrent neural network (RNN) whose nodes simply access differentiable versions of data structures such as stacks and queues. Reed and de Freitas of DeepMind have also recently shown how their neural programmer-interpreter can learn to represent and execute programs, with higher-level programs invoking lower-level and domain-specific ones.
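To make “differentiable versions of data structures” concrete, here is a small sketch of a continuous stack in the spirit of Grefenstette et al.: pushes and pops happen with real-valued strengths rather than as all-or-nothing operations, and a read returns a blend of whatever is near the top. The details follow my reading of the paper and should be taken as illustrative rather than as a faithful reimplementation.

```python
# Sketch of a continuous ("differentiable") stack in the spirit of
# Grefenstette et al. (2015): each stored vector carries a real-valued
# strength, pops remove strength from the top down, and reads return a
# strength-weighted blend of the top of the stack. Illustrative only.
import numpy as np

class ContinuousStack:
    def __init__(self, dim):
        self.values = np.zeros((0, dim))   # stored vectors, oldest first
        self.strengths = np.zeros(0)       # how much of each vector remains

    def step(self, value, push_strength, pop_strength):
        # Pop: remove `pop_strength` worth of material from the top down.
        new_strengths = self.strengths.copy()
        remaining_pop = pop_strength
        for i in reversed(range(len(new_strengths))):
            removed = min(new_strengths[i], remaining_pop)
            new_strengths[i] -= removed
            remaining_pop -= removed
        # Push: append the new vector with the given strength.
        self.values = np.vstack([self.values, value[None, :]])
        self.strengths = np.append(new_strengths, push_strength)
        return self.read()

    def read(self):
        # Read the top "1.0 unit" of the stack as a weighted sum.
        read_vec = np.zeros(self.values.shape[1])
        budget = 1.0
        for i in reversed(range(len(self.strengths))):
            weight = min(self.strengths[i], budget)
            read_vec += weight * self.values[i]
            budget -= weight
            if budget <= 0:
                break
        return read_vec

# Hypothetical usage: push two vectors, then mostly pop the second one off.
stack = ContinuousStack(dim=4)
stack.step(np.array([1.0, 0, 0, 0]), push_strength=1.0, pop_strength=0.0)
stack.step(np.array([0, 1.0, 0, 0]), push_strength=1.0, pop_strength=0.0)
print(stack.step(np.array([0, 0, 1.0, 0]), push_strength=0.2, pop_strength=0.9))
```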

Another example of proficiency in understanding time in context, and applying that understanding to create new artifacts, is a rudimentary but creative video-summarization capability developed this year. Park and Kim of Seoul National University developed a novel architecture, called a coherent recurrent convolutional network, and applied it to creating novel and fluid textual stories from sequences of images.

Another important modality that involves causal understanding, hypotheticals, and creativity in abstract thought is scientific hypothesizing. A team at Tufts combined genetic algorithms with genetic-pathway simulation to create a system that arrived at the first significant AI-discovered scientific theory of how exactly flatworms are able to regenerate body parts so readily. In a couple of days it discovered what had eluded scientists for a century. This should provide a resounding answer to those who question why we would ever want to make AIs curious in the first place.

 

Dreaming Up Visions

AI did not stop at writing programs, travelogues, and scientific theories this year. There are now AIs able to imagine, or, to use the technical term, hallucinate, meaningful new imagery as well. Deep learning isn’t only good at pattern recognition; it is also capable of pattern understanding and therefore pattern creation.

A team from MIT and Microsoft Research created a deep convolutional inverse graphics network which, among other things, uses a special training technique to get neurons in its graphics code layer to correspond to meaningful transformations of an image. In doing so, they are deep-learning a graphics engine, one able to understand the 3D shapes in novel 2D images it receives and to photorealistically imagine what it would look like to change things like camera angle and lighting.

A team from NYU and Facebook devised a way to generate realistic new images from meaningful and plausible combinations of elements it has seen in other images. Using a pyramid of adversarial networks — with some trying to produce realistic images and others critically judging how real the images look — their system is able to get better and better at imagining new photographs. Though the examples online are quite low-res, offline I’ve seen some impressive related high-res results.
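The adversarial setup itself is easy to write down: a generator maps noise to an image, a discriminator scores images as real or generated, and the two are trained against each other. Below is a minimal single-level sketch with made-up sizes; the actual system stacks one such generator/discriminator pair per level of an image pyramid.

```python
# Minimal sketch of the adversarial training loop behind these systems: a
# generator maps noise to a (tiny, toy) "image" and a discriminator tries to
# tell generated samples from real ones. Sizes and data are placeholders.
import torch
import torch.nn as nn

noise_dim, image_dim = 16, 64                  # toy sizes, not from the paper
generator = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                          nn.Linear(128, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=32):
    return torch.rand(n, image_dim) * 2 - 1    # stand-in for real image patches

for step in range(200):
    # 1) Train the discriminator to label real samples 1 and generated ones 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), noise_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator (label its output 1).
    fake = generator(torch.randn(32, noise_dim))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```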

Also significant in ’15 is the ability to deeply imagine entirely new imagery based on short English descriptions of the desired picture. While scene renderers that take symbolic, restricted vocabularies have been around for a while, this year saw the advent of a purely neural system that does this without being explicitly programmed to. This University of Toronto team applies attention mechanisms to generate images incrementally, based on the meaning of each component of the description, producing any of a number of variations per request. So androids can now dream of electric sheep.

There has even been impressive progress in computational imagination of entirely new animated video clips this year. A team from the University of Michigan created a deep analogy system that recognizes complex implicit relationships in exemplars and is able to apply that relationship as a generative transformation of query examples. They’ve applied this in a number of synthetic settings, but most impressive is a demo in which an entirely new short video clip of an animated character is generated from a single still image of a never-before-seen target character, together with a comparable video clip of a different character at a different angle.

While the generation of imagery was used in these for ease of demonstration, their techniques for computational imagination are applicable across a wide variety of domains and modalities. Picture these applied to voices, or music, for instance.

 

Agile and Dexterous Fine Motor Skills

This year’s progress in AI hasn’t been confined to computer screens.

Earlier in the year, a German primatology team recorded the hand motions of primates in tandem with the corresponding neural activity, and they are able to predict, based on brain activity, what fine motions are being made. They have also been able to teach those same fine motor skills to robotic hands, aiming at neurally enhanced prostheses.

In the middle of the year, a team at U.C. Berkeley announced a much more general and easier way to teach robots fine motor skills. They applied deep-reinforcement-learning-based guided policy search to get robots to screw caps onto bottles, use the back of a hammer to remove a nail from wood, and perform other seemingly everyday actions. These are the kinds of actions that are typically trivial for people but very difficult for machines, and this team’s system matches human dexterity and speed at these tasks. It learns to do these actions by actually attempting them using hand-eye coordination, and by practicing, refining its technique after just a few tries.

 

Watch This Space

This is by no means a comprehensive list of the impressive feats in AI and machine learning (ML) for the year. There are also many more foundational discoveries and developments that have occurred this year, including some that I fully expect to be more revolutionary than any of the above. But those are in early days and so out of the scope of these top picks.

This year has certainly provided some impressive progress. But we expect to see even more in 2016. Coming up next year, I expect to see some more radical deep architectures, better integration of the symbolic and subsymbolic, some impressive dialogue systems, an AI finally dominating the game of Go, deep learning being used for more elaborate robotic planning and motor control, high-quality video summarization, and more creative and higher-resolution dreaming, which should all be quite a sight. What’s even more exciting are the developments we don’t expect.

Highlights and impressions from the NIPS conference on machine learning

This year’s NIPS was an epicenter of the current enthusiasm about AI and deep learning – there was a visceral sense of how quickly the field of machine learning is progressing, and two new AI startups were announced. Attendance almost doubled compared to the 2014 conference (I hope they make it multi-track next year), and several popular workshops were standing room only. Given that there were only about 400 accepted papers and almost 4,000 people attending, most people were there to learn and to socialize. The conference was a socially intense experience that reminded me a bit of Burning Man – the overall sense of excitement, the high density of spontaneous interesting conversations, the number of parallel events at any given time, and of course the accumulating exhaustion.

Some interesting talks and posters

Sergey Levine’s robotics demo at the crowded Deep Reinforcement Learning workshop (we showed up half an hour early to claim spots on the floor). This was one of the talks that gave me a sense of fast progress in the field. The presentation started with videos from this summer’s DARPA robotics challenge, where the robots kept falling down while trying to walk or open a door. Levine proceeded to outline his recent work on guided policy search, alternating between trajectory optimization and supervised training of the neural network, and granularizing complex tasks. He showed demos of robots successfully performing various high-dexterity tasks, like opening a door, screwing on a bottle cap, or putting a coat hanger on a rack. Impressive!
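The heart of guided policy search is that alternation between trajectory optimization and supervised learning of a policy. The toy sketch below shows only the alternation, on a trivial one-dimensional task; the real algorithm additionally keeps the optimized trajectories close to the current policy and works from camera input on torque-controlled robots.

```python
# Toy illustration of the alternation at the heart of guided policy search,
# on a trivial 1-D "drive the state to zero" task: (1) a trajectory optimizer
# produces good state/action sequences from several start states, and (2) a
# policy is fit to imitate them with supervised learning. The real algorithm
# also constrains the optimized trajectories to stay near the current policy;
# that part is omitted here.
import numpy as np

HORIZON = 20

def optimize_trajectory(x0):
    """Stand-in 'trajectory optimizer': actions that steadily shrink x."""
    states, actions, x = [], [], x0
    for _ in range(HORIZON):
        a = -0.5 * x                 # a good action for these toy dynamics
        states.append(x)
        actions.append(a)
        x = x + a                    # trivial dynamics: next state = x + a
    return np.array(states), np.array(actions)

def fit_policy(states, actions):
    """Supervised step: least-squares fit of a linear policy a = w * x."""
    return np.dot(states, actions) / np.dot(states, states)

# One round of the alternation: optimize from a few start states, then
# distill all of the resulting (state, action) pairs into a single policy.
all_states, all_actions = [], []
for x0 in (-2.0, -1.0, 1.0, 2.0):
    s, a = optimize_trajectory(x0)
    all_states.append(s)
    all_actions.append(a)
w = fit_policy(np.concatenate(all_states), np.concatenate(all_actions))
print("learned policy gain:", w)     # recovers the optimizer's gain of -0.5
```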

Generative image models using a pyramid of adversarial networks by Denton & Chintala. Generating realistic-looking images using one neural net as a generator and another as an evaluator – the generator tries to fool the evaluator by making the image indistinguishable from a real one, while the evaluator tries to tell real and generated images apart. Starting from a coarse image, successively finer images are generated using the adversarial networks from the coarser images at the previous level of the pyramid. The resulting images were mistaken for real images 40% of the time in the experiment, and around 80% of them looked realistic to me when staring at the poster.

Path-SGD by Neyshabur, Salakhutdinov, and Srebro, a scale-invariant version of the stochastic gradient descent algorithm. Standard SGD uses the L2 norm as its measure of distance in parameter space, so rescaling the weights can have large effects on optimization speed. Path-SGD instead regularizes the maximum norm of the incoming weights into any unit, minimized over all rescalings of the weights. The resulting norm (called a “path regularizer”) is shown to be invariant to weight rescaling. Overall a principled approach with good empirical results.
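For reference, the path regularizer can be written as a norm over all input-to-output paths in the network (a sketch of the definition; the notation is mine):

```latex
% Path regularizer: sum over all directed paths from an input unit to an
% output unit of the product of the magnitudes of the weights along the path.
\gamma_p(w) = \left( \sum_{v_{\mathrm{in}} \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_d} v_{\mathrm{out}}} \ \prod_{k=1}^{d} \lvert w_{e_k} \rvert^{p} \right)^{1/p}
```

Multiplying a hidden unit’s incoming weights by a constant and dividing its outgoing weights by the same constant leaves every path product unchanged, which is why this norm, and hence Path-SGD’s steepest-descent step, is invariant to such rescalings.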

End-to-end memory networks by Sukhbaatar et al (video), an extension of memory networks – neural networks that learn to read and write to a memory component. Unlike traditional memory networks, the end-to-end version eliminates the need for supervision at each layer. This makes the method applicable to a wider variety of domains – it is competitive both with memory networks for question answering and with LSTMs for language modeling. It was fun to see the model perform basic inductive reasoning about locations, colors and sizes of objects.

Neural GPUs (video), Deep visual analogy-making (video), On-the-job learning, and many others.

Algorithms Among Us symposium (videos)

A highlight of the conference was the Algorithms Among Us symposium on the societal impacts of machine learning, which I helped organize along with others from FLI. The symposium consisted of 3 panels and accompanying talks – on near-term AI impacts, timelines to general AI, and research priorities for beneficial AI. The symposium organizers (Adrian Weller, Michael Osborne and Murray Shanahan) gathered an impressive array of AI luminaries with a variety of views on the subject, including Cynthia Dwork from Microsoft, Yann LeCun from Facebook, Andrew Ng from Baidu, and Shane Legg from DeepMind. All three panel topics generated lively debate among the participants.

Andrew Ng took his famous statement that “worrying about general AI is like worrying about overpopulation on Mars” to the next level, namely “overpopulation on Alpha Centauri” (is Mars too realistic these days?). But he also endorsed long-term AI safety research, saying that it’s not his cup of tea but someone should be working on it. Ng’s main argument was that even superforecasters can’t predict anything 5 years into the future, so any predictions on longer time horizons are useless. However, as Murray pointed out, having complete uncertainty past a 5-year horizon means that you can’t rule out reaching general AI in 20 years either.

With regards to roadmapping the remaining milestones to general AI, Yann LeCun gave an apt analogy of traveling through mountains in the fog – there are some you can see, and an unknown number hiding in the fog. He also argued that advanced AI is unlikely to be human-like, and cautioned against anthropomorphizing it.

In the research priorities panel, Shane Legg gave some specific recommendations – goal-system stability, interruptibility, sandboxing / containment, and formalization of various thought experiments (e.g. in Superintelligence). He pointed out that AI safety is both overblown and underemphasized – while the risks from advanced AI are not imminent the way they are usually portrayed in the media, more thought and resources need to be devoted to the challenging research problems involved.

One question that came up during the symposium is the importance of interpretability for AI systems, which is actually the topic of my current research project. There was some disagreement about the tradeoff between effectiveness and interpretability. LeCun thought that the main advantage of interpretability is increased robustness, and improvements to transfer learning should produce that anyway, without decreases in effectiveness. Percy Liang argued that transparency is needed to explain to the rest of the world what machine learning systems are doing, which is increasingly important in many applications. LeCun also pointed out that machine learning systems that are usually considered transparent, such as decision trees, aren’t necessarily so. There was also disagreement about what interpretability means in the first place – as Cynthia Dwork said, we need a clearer definition before making any conclusions. It seems that more work is needed both on defining interpretability and on figuring out how to achieve it without sacrificing effectiveness.

Overall, the symposium was super interesting and gave a lot of food for thought (here’s a more detailed summary by Ariel from FLI). Thanks to Adrian, Michael and Murray for their hard work in putting it together.

AI startups

It was exciting to see two new AI startups announced at NIPS – OpenAI, led by Ilya Sutskever and backed by Musk, Altman and others, and Geometric Intelligence, led by Zoubin Ghahramani and Gary Marcus.

OpenAI is a non-profit with a mission to democratize AI research and keep it beneficial for humanity, and a whopping $1Bn in funding pledged. They believe that it’s safer to have AI breakthroughs happening in a non-profit, unaffected by financial interests, rather than monopolized by for-profit corporations. The intent to open-source the research seems clearly good in the short and medium term, but raises some concerns in the long run when getting closer to general AI. As an OpenAI researcher emphasized in an interview, “we are not obligated to share everything – in that sense the name of the company is a misnomer”, and decisions to open-source the research would in fact be made on a case-by-case basis.

While OpenAI plans to focus on deep learning in their first few years, Geometric Intelligence is developing an alternative approach to deep learning that can learn more effectively from less data. Gary Marcus argues that we need to learn more from how human minds acquire knowledge in order to build advanced AI (an inspiration for the venture was observing his toddler learn about the world). I’m looking forward to what comes out of the variety of approaches taken by these new companies and other research teams.

(Thanks to Janos Kramar for his help with editing this post.)

Think-tank dismisses leading AI researchers as Luddites

By Stuart Russell and Max Tegmark

2015 has seen major growth in funding, research, and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, ITIF think-tank president Robert Atkinson misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This contrasts with the filmed expert testimony from a panel that he himself chaired last summer. The ITIF report makes three main points regarding AI:

1) The people promoting this beneficial-AI agenda are Luddites and “AI detractors.”

This is a rather bizarre assertion given that the agenda has been endorsed by thousands of AI researchers, including many of the world’s leading experts in industry and academia, in two open letters supporting beneficial AI and opposing offensive autonomous weapons. ITIF even calls out Bill Gates and Elon Musk by name, despite them being widely celebrated as drivers of innovation, and despite Musk having landed a rocket just days earlier. By implication, ITIF also labels as Luddites two of the twentieth century’s most iconic technology pioneers – Alan Turing, the father of computer science, and Norbert Wiener, the father of control theory – both of whom pointed out that super-human AI systems could be problematic for humanity. If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.

Contrary to ITIF’s assertion, the goal of the beneficial-AI movement is not to slow down AI research, but to ensure its continuation by guaranteeing that AI remains beneficial. This goal is supported by Musk’s recent $10M investment in such research and the subsequent $15M investment by the Leverhulme Trust.

2) An arms race in offensive autonomous weapons beyond meaningful human control is nothing to worry about, and attempting to stop it would harm the AI field and national security.

The thousands of AI researchers who disagree with ITIF’s assessment in their open letter are in a situation similar to that of the biologists and chemists who supported the successful bans on biological and chemical weapons. These bans did not prevent the fields of biology and chemistry from flourishing, nor did they harm US national security – as President Richard Nixon emphasized when he proposed the Biological Weapons Convention. As in this summer’s panel discussion, Atkinson once again appears to suggest that AI researchers should hide potential risks to humanity rather than incur any risk of reduced funding.

3) Studying how AI can be kept safe in the long term is counterproductive: it is unnecessary and may reduce AI funding.

Although ITIF claims that such research is unnecessary, Atkinson never gives a supporting argument, merely providing a brief misrepresentation of what Nick Bostrom has written about the advent of super-human AI (raising, in particular, the red herring of self-awareness) and baldly stating that “what should not be debatable is that this possible future is a long, long way off.” Scientific questions should by definition be debatable, and recent surveys of AI researchers indicate a healthy debate, with a broad range of arrival estimates ranging from never to not very far off. Research on how to keep AI beneficial is worthwhile today even if it will only be needed many decades from now: the toughest and most crucial questions may take decades to answer, so it is prudent to start tackling them now to ensure that we have the answers by the time we need them. In the absence of such answers, AI research may indeed be slowed down in the future in the event of localized control failures – like the so-called “Flash Crash” on the stock market – that dent public confidence in AI systems.

ITIF argues that the AI researchers behind these open letters have unfounded worries. The truly unfounded worries are those that ITIF harbors about AI funding being jeopardized: since the beneficial-AI debate heated up during the past two years, the AI field has enjoyed more investment than ever before, including OpenAI’s billion-dollar investment in beneficial AI research – arguably the largest AI funding initiative in history, with a large share invested by one of ITIF’s alleged Luddites.

Under Robert Atkinson’s leadership, the Information Technology and Innovation Foundation has a distinguished record of arguing against misguided policies arising from ignorance of technology. We hope ITIF returns to this tradition and refrains from further attacks on expert scientists and engineers who make reasoned technical arguments about the importance of managing the impacts of increasingly powerful technologies. This is not Luddism, but common sense.

Stuart Russell, UC Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Max Tegmark, MIT, Professor of Physics, President of Future of Life Institute

Were the Paris Climate Talks a Success?

An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute:

Can the Paris Climate Agreement Succeed Where Other Agreements Have Failed?

On Friday, December 18, I talked with Seth Baum, the Executive Director of the Global Catastrophic Risk Institute, about the realistic impact of the Paris Climate Agreement.

The Paris climate talks ended December 12th, and there’s been a lot of fanfare in the media about how successful they were because 195 countries came together on an agreement. That so many leaders of so many countries could come together on the issue of climate change is indeed a huge success.

As Baum said after the interview, “The Paris Agreement is a good example of the international community, as a whole, coming together to take action that makes the world a safe place. It’s pretty amazing!”

But as amazing as global cooperation is, reading some of that agreement was less than inspiring. There was a lot of suggesting and urging and advising, but no demanding or requiring or committing.

The countries have all agreed to try to keep the increase in global temperatures to no more than 2 degrees Celsius above pre-industrial levels, and they’re aiming for 1.5 degrees Celsius as the maximum. This is a nice, lofty goal, but is it achievable?

The agreement calls for countries to essentially check in every five years, but given the rate at which temperatures are rising and climate change is affecting us, will this be sufficient to accomplish much? The meeting was called COP21 because this group has now convened every year for the last 21 years. Why should we expect this agreement to produce greater results than what we’ve seen in the past?

As Baum explains, this agreement is “probably about as good as we’re going to get.” It focused on goals that each of the leaders can try to reach using whatever means is best suited for their respective countries. However, there is no penalty if the countries don’t comply. According to Baum, one of the major reasons the agreement is so vague is that the American Senate is unlikely to get the 67 votes necessary to ratify an official treaty on climate change.

Baum also points out that “the difference between 1.9 degrees and 2.1 is pretty trivial.” The goal is to aim for limiting the increase of global temperatures, and whatever improvements can be made toward that objective can at least be considered small successes.

There’s also been some debate about whether climate change and terrorism might be connected, but we also considered another issue that doesn’t get brought up as often: if we reduce our dependency on fossil fuels, will that lead to further destabilization in the Middle East? Baum suspects the answer is yes.

Listen to the full interview for more insight into the Paris Climate Agreement, including how successful it might be under future leadership, as well as how climate change is no longer a catastrophic risk, but rather, a known cause of catastrophes.

Inside OpenAI: An Interview by SingularityHUB

The following interview was conducted and written by Shelly Fan for SingularityHUB.

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes to focus not just on developing digital intelligence, but also to guide research along an ethical route that, according to their inaugural blog post, “benefits humanity as a whole.”

OpenAI began with the big picture in mind: in 100 years, what will AI be able to achieve, and should we be worried? If left in the hands of giant, for-profit tech companies such as Google, Facebook and Apple, all of which have readily invested in developing their own AI systems in the last few years, could AI — and future superintelligent systems — hit a breaking point and spiral out of control? Could AI be commandeered by governments to monitor and control their citizens? Could it, as Elon Musk warned earlier this year, ultimately destroy humankind?

Since its initial conception earlier this year, OpenAI has surgically snipped the cream of the crop in the field of deep learning to assemble its team. Among its top young talent is Andrej Karpathy, a PhD candidate at Stanford whose resume includes internships at Google and DeepMind, the secretive London-based AI company that Google bought in 2014.

Last Tuesday, I sat down with Andrej to chat about OpenAI’s ethos and vision, its initial steps and focus, as well as the future of AI and superintelligence. The interview has been condensed and edited for clarity.


How did OpenAI come about?

Earlier this year, Greg [Brockman], who used to be the CTO of Stripe, left the company looking to do something a bit different. He has a long-standing interest in AI, so he was asking around, toying with the idea of a research-focused AI startup. He reached out to the field, got the names of people who are doing good work, and ended up rounding us up.

At the same time, Sam [Altman] from YC became extremely interested in this as well. One way that YC is encouraging innovation is as a startup accelerator; another is through research labs. So, Sam recently opened YC Research, which is an umbrella research organization, and OpenAI is, or will become, one of the labs.

As for Elon — obviously he has had concerns about AI for a while, and after many conversations, he jumped on board OpenAI in the hope of helping AI develop in a beneficial and safe way.

How much influence will the funders have on how OpenAI does its research?

We’re still at very early stages so I’m not sure how this will work out. Elon said he’d like to work with us roughly once a week. My impression is that he doesn’t intend to come in and tell us what to do — our first interactions were more along the lines of “let me know in what way I can be helpful.” I felt a similar attitude from Sam and others.

AI has been making leaps recently, with contributions from academia, big tech companies and clever startups. What can OpenAI hope to achieve by putting you guys together in the same room that you can’t do now as a distributed network?

I’m a huge believer in putting people physically together in the same spot and having them talk. The concept of a network of people collaborating across institutions would be much less efficient, especially if they all have slightly different incentives and goals.

More abstractly, in terms of advancing AI as a technology, what can OpenAI do that current research institutions, companies or deep learning as a field can’t?

A lot of it comes from OpenAI as a non-profit. What’s happening now in AI is that you have a very limited number of research labs and large companies, such as Google, which are hiring a lot of researchers doing groundbreaking work. Now suppose AI could one day become — for lack of a better word — dangerous, or used dangerously by people. It’s not clear that you would want a big for-profit company to have a huge lead, or even a monopoly over the research. It is primarily an issue of incentives, and the fact that they are not necessarily aligned with what is good for humanity. We are baking that into our DNA from the start.

Also, there are some benefits of being a non-profit that I didn’t really appreciate until now. People are actually reaching out and saying “we want to help”; you don’t get this in companies; it’s unthinkable. We’re getting emails from dozens of places — people offering to help, offering their services, to collaborate, offering GPU power. People are very willing to engage with you, and in the end, it will propel our research forward, as well as AI as a field.

OpenAI seems to be built on the big picture: how AI will benefit humanity, and how it may eventually destroy us all. Elon has repeatedly warned against unmonitored AI development. In your opinion, is AI a threat?

When Elon talks about the future, he talks about scales of tens or hundreds of years from now, not 5 or 10 years that most people think about. I don’t see AI as a threat over the next 5 or 10 years, other than those you might expect from more reliance on automation; but if we’re looking at humanity already populating Mars (that far in the future), then I have much more uncertainty, and sure, AI might develop in ways that could pose serious challenges.


One thing we do see is that a lot of progress is happening very fast. For example, computer vision has undergone a complete transformation — papers from more than three years ago now look foreign in the face of recent approaches. So when we zoom out further, over decades, I think I have a fairly wide distribution over where we could be. So say there is a 1% chance of something crazy and groundbreaking happening. When you additionally multiply that by the utility of a few for-profit companies having a monopoly over this tech, then yes, that starts to sound scary.

Do you think we should put restraints on AI research to assure safety?

No, not top-down, at least right now. In general I think it’s a safer route to have more AI experts who have a shared awareness of the work in the field. Opening up research like what OpenAI wants to do, rather than having commercial entities having monopoly over results for intellectual property purposes, is perhaps a good way to go.

True, but recently for-profit companies have been releasing their technology as well: I’m thinking of Google’s TensorFlow and Facebook’s Torch. In this sense, how does OpenAI differ in its “open research” approach?

So when you say “releasing” there are a few things that need clarification. First Facebook did not release Torch; Torch is a library that’s been around for several years now. Facebook has committed to Torch and is improving on it. So has DeepMind.

But TensorFlow and Torch are just tiny specks of their research — they are tools that can help others do research well, but they’re not actual results that others can build upon.

Still, it is true that many of these industrial labs have recently established a good track record of publishing research results, partly because a large number of people on the inside come from academia. Even so, there is a veil of secrecy surrounding a large portion of the work, and not everything makes it out. In the end, companies don’t really have very strong incentives to share.

OpenAI, on the other hand, encourages us to publish, to engage the public and academia, to Tweet, to blog. I’ve gotten into trouble in the past for sharing a bit too much from inside companies, so I personally really, really enjoy the freedom.

What if OpenAI comes up with a potentially game-changing algorithm that could lead to superintelligence? Wouldn’t a fully open ecosystem increase the risk of abusing the technology?

In a sense it’s kind of like CRISPR. CRISPR is a huge leap for genome editing that’s been around for only a few years, but has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society.

If something like that happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.

In the end, if there is a small chance of something crazy happening in AI research, everything else being equal, do you want these advances to be made inside a commercial company, especially one that has monopoly on the research, or do you want this to happen within a non-profit?

We have this philosophy embedded in our DNA from the start that we are mindful of how AI develops, rather than just [a focus on] maximizing profit.

In that case, is OpenAI comfortable being the gatekeeper, so to speak? You’re heavily influencing how the field is going to go and where it’s going.

It’s a lot of responsibility. It’s a “lesser evil” argument; I think it’s still bad. But we’re not the only ones “controlling” the field — because of our open nature we welcome and encourage others to join in on the discussion. Also, what’s the alternative? In a way a non-profit, with sharing and safety in its DNA, is the best option for the field and the utility of the field.

Also, AI is not the only field to worry about — I think bio is a far more pressing domain in terms of destroying the world [laugh]!

In terms of hiring — OpenAI is competing against giant tech companies in Silicon Valley. How is the company planning to attract top AI researchers?

We have perks [laugh].

But in all seriousness, I think the company’s mission and team members are enough. We’re currently actively hiring people, and so far have no trouble getting people excited about joining us. In several ways OpenAI combines the best of academia and the startup world, and being a non-profit we have the moral high ground, which is nice [laugh].

The team, especially, is a super strong, super tight team and that is a large part of the draw.

Take some rising superstars in the field — myself not included — put them together and you get OpenAI. I joined mainly because I heard about who else is on the team. In a way, that’s the most shocking part; a friend of mine described it as “storming the temple.” Greg came in from nowhere and scooped up the top people to do something great and make something new.

Now that OpenAI has a rockstar team of scientists, what’s your strategy for developing AI? Are you getting vast amounts of data from Elon? What problems are you tackling first?

So we’re really still trying to figure a lot of this out. We are trying to approach this with a combination of bottom up and top down thinking. Bottom up are the various papers and ideas we might want to work on. Top down is doing so in a way that adds up. We’re currently in the process of thinking this through.

For example, I just submitted one vision research proposal draft today, actually [laugh]. We’re putting a few of them together. Also it’s worth pointing out that we’re not currently actively working on AI safety. A lot of the research we currently have in mind looks conventional. In terms of general vision and philosophy I think we’re most similar to DeepMind.

We might be able to at some point take advantage of data from Elon or YC companies, but for now we also think we can go quite far making our own datasets, or working with existing public datasets that we can work on in sync with the rest of academia.

Would OpenAI ever consider going into hardware, since sensors are a main way of interacting with the environment?

So, yes we are interested, but hardware has a lot of issues. For us, roughly speaking there are two worlds: the world of bits and the world of atoms. I am personally inclined to stay in the world of bits for now, in other words, software. You can run things in the cloud, it’s much faster. The world of atoms — such as robots — breaks too often and usually has a much slower iteration cycle. This is a very active discussion that we’re having in the company right now.

Do you think we can actually get to generalized AI?

I think to get to superintelligence we might currently be missing differences of a “kind,” in the sense that we won’t get there by just making our current systems better. But fundamentally there’s nothing preventing us getting to human-like intelligence and beyond.

To me, it’s mostly a question of “when,” rather than “if.”

I don’t think we need to simulate the human brain to get to human-like intelligence; we can zoom out and approximate how it works. I think there’s a more straightforward path. For example, some recent work shows that ConvNet* activations are very similar to the human visual cortex’s IT area activation, without mimicking how neurons actually work.

[*SF: ConvNet, or convolutional network, is a type of artificial neural network topology tailored to visual tasks first developed by Yann LeCun in the 1990s. IT is the inferior temporal cortex, which processes complex object features.]

So it seems to me that with ConvNets we’ve almost checked off large parts of the visual cortex, which is somewhere around 30% of the cortex, and the rest of the cortex maybe doesn’t look all that different. So I don’t see how over a timescale of several decades we can’t make good progress on checking off the rest.

Another point is that we don’t necessarily have to be worried about human-level AI. I consider chimp-level AI to be equally scary, because going from chimp to humans took nature only a blink of an eye on evolutionary time scales, and I suspect that might be the case in our own work as well. Similarly, my feeling is that once we get to that level it will be easy to overshoot and get to superintelligence.

On a positive note though, what gives me solace is that when you look at our field historically, the image of AI research progressing with a series of unexpected “eureka” breakthroughs is wrong. There is no historical precedent for such moments; instead we’re seeing a lot of fast and accelerating, but still incremental progress. So let’s put this wonderful technology to good use in our society while also keeping a watchful eye on how it all develops.


See the original post here.

The AI Wars: The Battle of the Human Minds to Keep Artificial Intelligence Safe

For all the media fear-mongering about the rise of artificial intelligence and the potential for malevolent machines, the first battle of the AI wars has already begun. But this one is being waged by some of the most impressive minds within the realm of human intelligence today.

At the start of 2015, few AI researchers were worried about AI safety, but that all changed quickly. Throughout the year, Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, grew increasingly popular. The Future of Life Institute held its AI safety conference in Puerto Rico. Two open letters regarding artificial intelligence and autonomous weapons were released. Countless articles came out, quoting AI concerns from the likes of Elon Musk, Stephen Hawking, Bill Gates, Steve Wozniak, and other luminaries of science and technology. Musk donated $10 million in funding to AI safety research through FLI. Fifteen million dollars was granted to the creation of the Leverhulme Centre for the Future of Intelligence. And most recently, the nonprofit AI research company, OpenAI, was launched to the tune of $1 billion, which will allow some of the top minds in the AI field to address safety-related problems as they come up.

In all, it’s been a big year for AI safety research. Many in science and industry have joined the AI-safety-research-is-needed camp, but there are still some stragglers of equally staggering intellect. So just what does the debate still entail?

OpenAI was the big news of the past week, and its launch coincided (probably not coincidentally) with the Neural Information Processing Systems conference, which attracts some of the best-of-the-best in machine learning. Among the attractions at the conference was the symposium, Algorithms Among Us: The Societal Impacts of Machine Learning, where some of the most influential people in AI research and industry debated their thoughts and concerns about the future of artificial intelligence.

[Author’s note: The following are symposium highlights grouped together by topic to inform about arguments in the world of AI research. The discussions did not necessarily occur in the order below.]
 

From session 2 of the Algorithms Among Us symposium: Murray Shanahan, Shane Legg, Andrew Ng, Yann LeCun, Tom Dietterich, and Gary Marcus

What is AGI and should we be worried about it?

Artificial general intelligence (AGI) is the term given to artificial intelligence that would be, in some sense, equivalent to human intelligence. It wouldn’t solve just one narrow, specific task, as AI does today, but would instead solve a variety of problems and perform a variety of tasks, with or without being programmed to do so. That said, it’s not the most well-defined term. As Yann LeCun, the director of Facebook’s AI research group, stated, “I don’t want to talk about human-level intelligence because I don’t know what that means really.”

If defining AGI is difficult, predicting if or when it will exist is nearly impossible. Some of the speakers, like LeCun and Andrew Ng, didn’t want to waste time considering the possibility of AGI since they consider it to be so distant. Both referenced the likelihood of another AI winter, in which, after all this progress, scientists will hit a research wall that will take some unknown number of years or decades to overcome. Ng, a Stanford professor and Chief Scientist of Baidu, compared concerns about the future of human-level AI to far-fetched worries about the difficulties surrounding travel to the star system Alpha Centauri.

LeCun pointed out that we don’t really know what a superintelligent AI would look like. “Will AI look like human intelligence? I think not. Not at all,” he said. He then went on to explain why human intelligence isn’t nearly as general as we like to believe. “We’re driven by basic instincts […] They (AI) won’t have the drives that make humans do bad things to each other.” He added that there would be no reason he can think of to build preservation instincts or curiosity into machines.

However, many of the participants disagreed with LeCun and Ng, emphasizing the need to be prepared in advance of problems, rather than trying to deal with them as they arise.

Shane Legg, co-founder of Google’s DeepMind, argued that the benefit of starting safety research now is that it will help us develop a framework that will allow researchers to move in a positive direction toward the development of smarter AI. “In terms of AI safety, I think it’s both overblown and underemphasized,” he said, commenting on how profound – both positively and negatively – the societal impact of advanced AI could be. “If we are approaching a transition of this magnitude, I think it’s only responsible that we start to consider, to whatever extent that we can in advance, the technical aspects and the societal aspects and the legal aspects and whatever else […] Being prepared ahead of time is better than trying to be prepared after you already need some good answers.”

Gary Marcus, Director of the NYU Center for Language and Music, added, “In terms of being prepared, we don’t just need to prepare for AGI, we need to prepare for better AI […] Already, issues of security and risk have come forth.”

Even Ng agreed that AI safety research certainly wasn’t a bad thing, saying, “I’m actually glad there are other parts of society studying ethical parts of AI. I think this is a great thing to do.” Though he also admitted it wasn’t something he wanted to spend his own time on.
 

It’s the economy…

Among all of the AI issues debated by researchers, the one agreed upon by almost everyone who took the stage at the symposium was the detrimental impact AI could have on the job market. Erik Brynjolfsson, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, set the tone for the discussion with his presentation which highlighted some of the issues that artificial intelligence will have on the economy. He explained that we’re in the midst of incredible technological advances, which could be highly beneficial, but our skills, organizations and institutions aren’t keeping up. Because of the huge gap in pace, business as usual won’t work.

As unconcerned about the future of AGI as Ng was, he quickly became the strongest advocate for tackling the economics issue that will pop up in the near future. “I think the biggest challenge is the challenge of unemployment,” Ng said.

The issue of unemployment is one that is already starting to appear, even with the very narrow AI that exists today. Around the world, low- and middle-skilled workers are getting displaced by robots or software, and that trend is expected to continue at rapid rates.

LeCun argued that the world overcame the massive job loss that resulted from the new technologies associated with the steam engine too, but both Brynjolfsson and Ng disagreed with that argument, citing the much more rapid pace of technology today. “Technology has always been destroying jobs, and it’s always been creating jobs,” Brynjolfsson admitted, but he also explained how difficult it is to predict which technologies will impact us the most and when they’ll kick in. The current exponential rate of technological progress is unlike anything we’ve ever experienced before in history.

Bostrom mentioned that the rise of thinking machines will be more analogous to the rise of the human species than to the steam engine or the industrial revolution. He reminded the audience that if a superintelligent AI is developed, it will be the last invention we ever have to make.

A big concern with the economy is that the job market is changing so quickly that most people can’t develop new skills fast enough to keep up. The possibility of a basic income and paying people to go back to school were both mentioned. However, the psychological toll of being unemployed is one that can’t be overcome even with a basic income, and the effect that mass unemployment might have on people drew concern from the panelists.

Bostrom became an unexpected voice of optimism, pointing out that there have always been groups who were unemployed, such as aristocrats, children and retirees. Each of these groups managed to enjoy their unemployed time by filling it with other hobbies and activities.

However, solutions like basic income and leisure time will only work if political leaders begin to take the initiative soon to address the unemployment issues that near-future artificial intelligence will trigger.
 

From session 2 of the Algorithms Among Us symposium: Michael Osborne, Finale Doshi-Velez, Neil Lawrence, Cynthia Dwork, Tom Dietterich, Erik Brynjolfsson, and Ian Kerr

Closing arguments

Ideally, technology is just a tool that is neither inherently good nor bad; whether it helps humanity or hurts us should depend on how we use it. But if AI develops the capacity to think, this argument isn’t quite accurate. At that point, the AI isn’t a person, but it isn’t just an instrument either.

Ian Kerr, the Research Chair of Ethics, Law, and Technology at the University of Ottawa, spoke early in the symposium about the legal ramifications (or lack thereof) of artificial intelligence. The overarching question for an AI gone wrong is: who’s to blame? Who will be held responsible when something goes wrong? Or, on the flip side, who is to blame if a human chooses to ignore the advice of an AI that’s had inconsistent results, but which later turns out to have been the correct advice?

If anything, one of the most impressive results from this debate was how often the participants agreed with each other. At the start of the year, few AI researchers were worried about safety. Now, though many still aren’t worried, most acknowledge that we’re all better off if we consider safety and other issues sooner rather than later. The most disagreement was over when we should start working on AI safety, not if it should happen. The panelists also all agreed that regardless of how smart AI might become, it will happen incrementally, rather than as the “event” that is implied in so many media stories. We already have machines that are smarter and better at some tasks than humans, and that trend will continue.

For now, as Harvard Professor Finale Doshi-Velez pointed out, we can control what we get out of the machine: if we don’t like or understand the results, we can reprogram it.

But how much longer will that be a viable solution?
 

Coming soon…

The article above highlights some of the discussion that occurred between AI researchers about whether or not we need to focus on AI safety research. Because so many AI researchers do support safety research, there was also much more discussion during the symposium about which areas pose the most risk and have the most potential. We’ll be starting a new series in the new year that goes into greater detail about different fields of study that AI researchers are most worried about and most excited about.

 

Pentagon Seeks $12-$15 Billion for AI Weapons Research

This month has been full of stories about money pouring into AI research. First came news of the $15 million granted to the new Leverhulme Centre for the Future of Intelligence. Then Elon Musk and friends announced the launch of OpenAI to the tune of $1 billion, promising a non-profit committed to safe AI and improving the world. But all of that pales in comparison to the $12-$15 billion the Pentagon is requesting for the development of AI weapons.

According to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” The military is looking to develop more advanced weapons technologies that will include autonomous weapons and deep learning machines.

While the research itself would be strictly classified, the military wants to ensure that countries like China and Russia know this advanced weapons research is taking place.

“I want our competitors to wonder what’s behind the black curtain,” Deputy Defense Secretary Robert Work said.

The United States will continue to try to develop positive relations with Russia and China, but Work believes AI weapons R&D will help strengthen deterrence.

Read the full Reuters article here.

 

 

OpenAI Announced

Press release from OpenAI:
Introducing OpenAI

by Greg Brockman, Ilya Sutskever, and the OpenAI team
December 11, 2015
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

Background

Artificial intelligence has always been a surprising field. In the early days, people thought that solving certain tasks (such as chess) would lead us to discover human-level intelligence algorithms. However, the solution to each task turned out to be much less general than people were hoping (such as doing a search over a huge number of moves).

The past few years have held another flavor of surprise. An AI technique explored for decades, deep learning, started achieving state-of-the-art results in a wide variety of problem domains. In deep learning, rather than hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them.

This approach has yielded outstanding results on pattern recognition problems, such as recognizing objects in images, machine translation, and speech recognition. But we’ve also started to see what it might be like for computers to be creative, to dream, and to experience the world.
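As a minimal illustration of that idea (an informal sketch in plain NumPy, with toy problems chosen only for brevity), the code below trains the same tiny two-layer network on two different tasks, XOR and AND. The network ends up computing a different function in each case purely because of the data it was fed; nothing task-specific is hand-coded.

    # One generic architecture, two different problems: the data decides what it becomes.
    import numpy as np

    def train_mlp(X, y, hidden=8, lr=0.5, steps=5000, seed=0):
        """Train a tiny 2-layer network (tanh hidden layer, sigmoid output) with gradient descent."""
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 1.0, (hidden, 1))
        b2 = np.zeros(1)
        for _ in range(steps):
            h = np.tanh(X @ W1 + b1)                     # hidden layer
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output, predicted probability
            d_out = (p - y) / len(X)                     # cross-entropy gradient w.r.t. output logits
            dW2 = h.T @ d_out
            db2 = d_out.sum(axis=0)
            d_h = (d_out @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
            dW1 = X.T @ d_h
            db1 = d_h.sum(axis=0)
            W1 -= lr * dW1
            b1 -= lr * db1
            W2 -= lr * dW2
            b2 -= lr * db2
        # Return a predictor that uses the trained weights
        return lambda Z: 1.0 / (1.0 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2))) > 0.5

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    xor_net = train_mlp(X, np.array([[0.], [1.], [1.], [0.]]))   # same architecture, XOR data
    and_net = train_mlp(X, np.array([[0.], [0.], [0.], [1.]]))   # same architecture, AND data
    print(xor_net(X).astype(int).ravel())   # expected: [0 1 1 0]
    print(and_net(X).astype(int).ravel())   # expected: [0 0 0 1]

Scaled up by many orders of magnitude in parameters and data, the same principle is roughly what lets one deep-learning recipe span image recognition, translation, and speech.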

Looking forward

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

OpenAI

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

You can follow us on Twitter at @open_ai or email us at info@openai.com.

The World Has Lost 33% of Its Farmable Land

During the Paris climate talks last week, researchers from the University of Sheffield’s Grantham Center revealed that in the last 40 years, the world has lost nearly 33% of its farmable land.

The loss is attributed to erosion and pollution, but the effects are expected to be exacerbated by climate change. Meanwhile, global food production is expected to grow by 60% in the next 35 years.

Researchers at the Grantham Center argue that the current intensive agriculture system is unsustainable. Modern agriculture requires heavy use of fertilizers, which “consume 5% of the world’s natural gas production and 2% of the world’s annual energy supply.” This use of fertilizers also allows “nutrients to wash out and pollute fresh coastal waters, causing algal blooms and lethal oxygen depletion,” along with a host of other problems. As fertilizers weaken the soil, heavily ploughed fields can face erosion rates that are “10-100 times greater than [the] rates of soil formation.”

Organic farming typically involves better soil management practices, but its crop yields alone would not be sufficient to feed the growing global population.

In response to these concerns, Grantham Center researchers have called for a sustainable model for intensive agriculture that will incorporate lessons both from history and modern biotechnology. The scientists suggest the following three principles for improved farming practices:

  1. “Managing soil by direct manure application, rotating annual and cover crops, and practicing no-till agriculture.”
  2. “Using biotechnology to wean crops off the artificial world we have created for them, enabling plants to initiate and sustain symbioses with soil microbes.”
  3. “Recycling nutrients from sewage in a modern example of circular economy. Inorganic fertilizers could be manufactured from human sewage in biorefineries operating at industrial or local scales.”

The Grantham researchers recognize that the task of improving our farming situation can’t just fall on farmers’ shoulders. They expect policymakers will also need to get involved.

Speaking to the Guardian, Duncan Cameron, one of the scientists involved in this study, said, “We can’t blame the farmers in this. We need to provide the capitalisation to help them rather than say, ‘Here’s a new policy, go and do it.’ We have the technology. We just need the political will to give us a fighting chance of solving this problem.”

Read the complete Grantham Center briefing note here.

$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University

The University of Cambridge has received a grant of just over $15 million USD from the Leverhulme Trust to establish a 10-year research centre focused on the opportunities and challenges posed by AI in the long term. They provided FLI with the following news release:

About the New Centre

Hot on the heels of 80K’s excellent AI risk research career profile, we’re delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence (CFI), to be led by Cambridge (Huw Price and Zoubin Ghahramani), with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed at CSER, but will be a stand-alone centre, albeit collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration recently funded by the Future of Life Institute’s AI safety grants program). We also hope for extensive collaboration with the Future of Life Institute.

Building on the “Puerto Rico Agenda” from the Future of Life Institute’s landmark January 2015 conference, it will have the long-term safe and beneficial development of AI at its core, but with a broader remit than CSER’s focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

CFI builds on the pioneering work of FHI, FLI and others, along with the generous support of Elon Musk, who helped massively boost this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will do is take a big step toward making this global area of research a long-term one in which the best talent can expect lasting careers: the Centre is funded for a full 10 years, and we aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions will be opening up in this space across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring for AI safety researchers, CSER will be hiring for an AI policy postdoc in the spring, and MIRI is also hiring. A number of the key researchers in the AI safety community are also organizing a high-level symposium on the impacts and future of AI at the Neural Information Processing Systems conference next week.

 

CFI and the Future of AI Safety Research

Human-level intelligence is familiar in biological ‘hardware’ — it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million (~$15 million USD) grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks — from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

A version of this news release can also be found on the Cambridge University website and on EurekAlert!