Think-tank dismisses leading AI researchers as Luddites

By Stuart Russell and Max Tegmark

2015 has seen major growth in funding, research and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, ITIF think-tank president Robert Atkinson misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This contrasts with the filmed expert testimony from a panel that he himself chaired last summer. The ITIF report makes three main points regarding AI:

1) The people promoting this beneficial-AI agenda are Luddites and “AI detractors.”

This is a rather bizarre assertion given that the agenda has been endorsed by thousands of AI researchers, including many of the world’s leading experts in industry and academia, in two open letters supporting beneficial AI and opposing offensive autonomous weapons. ITIF even calls out Bill Gates and Elon Musk by name, despite them being widely celebrated as drivers of innovation, and despite Musk having landed a rocket just days earlier. By implication, ITIF also labels as Luddites two of the twentieth century’s most iconic technology pioneers – Alan Turing, the father of computer science, and Norbert Wiener, the father of control theory – both of whom pointed out that super-human AI systems could be problematic for humanity. If Alan Turing, Norbert Wiener, Bill Gates, and Elon Musk are Luddites, then the word has lost its meaning.

Contrary to ITIF’s assertion, the goal of the beneficial-AI movement is not to slow down AI research, but to ensure its continuation by guaranteeing that AI remains beneficial. This goal is supported by the recent $10M investment from Musk in such research and the subsequent $15M investment by the Leverhulme Trust.

2) An arms race in offensive autonomous weapons beyond meaningful human control is nothing to worry about, and attempting to stop it would harm the AI field and national security.

The thousands of AI researchers who disagree with ITIF’s assessment in their open letter are in a situation similar to that of the biologists and chemists who supported the successful bans on biological and chemical weapons. These bans did not prevent the fields of biology and chemistry from flourishing, nor did they harm US national security – as President Richard Nixon emphasized when he proposed the Biological Weapons Convention. As in this summer’s panel discussion, Atkinson once again appears to suggest that AI researchers should hide potential risks to humanity rather than incur any risk of reduced funding.

3) Studying how AI can be kept safe in the long term is counterproductive: it is unnecessary and may reduce AI funding.

Although ITIF claims that such research is unnecessary, the report never gives a supporting argument, merely providing a brief misrepresentation of what Nick Bostrom has written about the advent of super-human AI (raising, in particular, the red herring of self-awareness) and baldly stating that “What should not be debatable is that this possible future is a long, long way off.” Scientific questions should by definition be debatable, and recent surveys of AI researchers indicate a healthy debate, with a broad range of arrival estimates ranging from never to not very far off. Research on how to keep AI beneficial is worthwhile today even if it will only be needed many decades from now: the toughest and most crucial questions may take decades to answer, so it is prudent to start tackling them now to ensure that we have the answers by the time we need them. In the absence of such answers, AI research may indeed be slowed down in the future in the event of localized control failures – like the so-called “Flash Crash” on the stock market – that dent public confidence in AI systems.

ITIF argues that the AI researchers behind these open letters have unfounded worries. The truly unfounded worries are those that ITIF harbors about AI funding being jeopardized: since the beneficial-AI debate heated up during the past two years, the AI field has enjoyed more investment than ever before, including OpenAI’s billion-dollar investment in beneficial AI research – arguably the largest AI funding initiative in history, with a large share invested by one of ITIF’s alleged Luddites.

Under Robert Atkinson’s leadership, the Information Technology and Innovation Foundation has a distinguished record of arguing against misguided policies arising from ignorance of technology. We hope ITIF returns to this tradition and refrains from further attacks on expert scientists and engineers who make reasoned technical arguments about the importance of managing the impacts of increasingly powerful technologies. This is not Luddism, but common sense.

Stuart Russell, Berkeley, Professor of Computer Science, Director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Max Tegmark, MIT, Professor of Physics, President of Future of Life Institute

Were the Paris Climate Talks a Success?

An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute:

Can the Paris Climate Agreement Succeed Where Other Agreements Have Failed?

On Friday, December 18, I talked with Seth Baum, the Executive Director of the Global Catastrophic Risk Institute, about the realistic impact of the Paris Climate Agreement.

The Paris Climate talks ended December 12th, and there’s been a lot of fanfare in the media about how successful they were because 195 countries came together on an agreement. That so many leaders of so many countries could come together on the issue of climate change is a huge success.

As Baum said after the interview, “The Paris Agreement is a good example of the international community, as a whole, coming together to take action that makes the world a safe place. It’s pretty amazing!”

But as amazing as global cooperation is, reading some of that agreement was less than inspiring. There was a lot of suggesting and urging and advising, but no demanding or requiring or committing.

The countries have all agreed to try not to let global temperatures increase more than 2 degrees Celsius above pre-industrial temperatures, and they’re aiming for 1.5 degrees Celsius as the maximum. This is a nice, lofty goal, but is it possible?

The agreement calls for countries to basically check in every five years, but with the rate at which temperatures are rising and climate change is affecting us, is this going to be sufficient to accomplish much? This meeting was called COP21 because it was the 21st session of the Conference of the Parties, which has now convened every year for the last 21 years. Why should we expect this agreement to produce greater results than what we’ve seen in the past?

As Baum explains, this agreement is “probably about as good as we’re going to get.” It focused on goals that each of the leaders can try to reach using whatever means are best suited to their respective countries. However, there is no penalty if the countries don’t comply. According to Baum, one of the major reasons the agreement is so vague is that the U.S. Senate is unlikely to produce the 67 votes necessary to ratify an official treaty on climate change.

Baum also points out that “the difference between 1.9 degrees and 2.1 is pretty trivial.” The point is to limit the increase in global temperatures, and whatever improvements can be made toward that objective can at least be considered small successes.

There’s also been some debate about whether climate change and terrorism might be connected, but we also considered another issue that doesn’t get brought up as often: if we reduce our dependency on fossil fuels, will that lead to further destabilization in the Middle East? Baum suspects the answer is yes.

Listen to the full interview for more insight into the Paris Climate Agreement, including how successful it might be under future leadership, as well as how climate change is no longer just a looming catastrophic risk but, rather, a known cause of catastrophes.


Inside OpenAI: An Interview by SingularityHUB

The following interview was conducted and written by Shelly Fan for SingularityHUB.

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes to focus not just on developing digital intelligence, but also on guiding research along an ethical route that, according to their inaugural blog post, “benefits humanity as a whole.”

OpenAI began with the big picture in mind: in 100 years, what will AI be able to achieve, and should we be worried? If left in the hands of giant, for-profit tech companies such as Google, Facebook and Apple, all of whom have readily invested in developing their own AI systems in the last few years, could AI — and future superintelligent systems— hit a breaking point and spiral out of control? Could AI be commandeered by governments to monitor and control their citizens? Could it, as Elon Musk warned earlier this year, ultimately destroy humankind?

Since its initial conception earlier this year, OpenAI has surgically snipped the cream of the crop in the field of deep learning to assemble its team. Among its top young talent is Andrej Karpathy, a PhD candidate at Stanford whose resume includes internships at Google and DeepMind, the secretive London-based AI company that Google bought in 2014.

Last Tuesday, I sat down with Andrej to chat about OpenAI’s ethos and vision, its initial steps and focus, as well as the future of AI and superintelligence. The interview has been condensed and edited for clarity.


How did OpenAI come about?

Earlier this year, Greg [Brockman], who used to be the CTO of Stripe, left the company looking to do something a bit different. He has a long-standing interest in AI, so he was asking around, toying with the idea of a research-focused AI startup. He reached out to the field and got the names of people who’re doing good work, and ended up rounding us up.

At the same time, Sam [Altman] from YC became extremely interested in this as well. One way that YC is encouraging innovation is as a startup accelerator; another is through research labs. So, Sam recently opened YC Research, which is an umbrella research organization, and OpenAI is, or will become, one of the labs.

As for Elon — obviously he has had concerns over AI for a while, and after many conversations, he jumped onboard OpenAI in hopes to help AI develop in a beneficial and safe way.

How much influence will the funders have on how OpenAI does its research?

We’re still at very early stages so I’m not sure how this will work out. Elon said he’d like to work with us roughly once a week. My impression is that he doesn’t intend to come in and tell us what to do — our first interactions were more along the lines of “let me know in what way I can be helpful.” I felt a similar attitude from Sam and others.

AI has been making leaps recently, with contributions from academia, big tech companies and clever startups. What can OpenAI hope to achieve by putting you guys together in the same room that you can’t do now as a distributed network?

I’m a huge believer in putting people physically together in the same spot and having them talk. The concept of a network of people collaborating across institutions would be much less efficient, especially if they all have slightly different incentives and goals.

More abstractly, in terms of advancing AI as a technology, what can OpenAI do that current research institutions, companies or deep learning as a field can’t?

A lot of it comes from OpenAI as a non-profit. What’s happening now in AI is that you have a very limited number of research labs and large companies, such as Google, which are hiring a lot of researchers doing groundbreaking work. Now suppose AI could one day become — for lack of a better word — dangerous, or used dangerously by people. It’s not clear that you would want a big for-profit company to have a huge lead, or even a monopoly over the research. It is primarily an issue of incentives, and the fact that they are not necessarily aligned with what is good for humanity. We are baking that into our DNA from the start.

Also, there are some benefits of being a non-profit that I didn’t really appreciate until now. People are actually reaching out and saying “we want to help”; you don’t get this in companies; it’s unthinkable. We’re getting emails from dozens of places — people offering to help, offering their services, to collaborate, offering GPU power. People are very willing to engage with you, and in the end, it will propel our research forward, as well as AI as a field.

OpenAI seems to be built on the big picture: how AI will benefit humanity, and how it may eventually destroy us all. Elon has repeatedly warned against unmonitored AI development. In your opinion, is AI a threat?

When Elon talks about the future, he talks about scales of tens or hundreds of years from now, not 5 or 10 years that most people think about. I don’t see AI as a threat over the next 5 or 10 years, other than those you might expect from more reliance on automation; but if we’re looking at humanity already populating Mars (that far in the future), then I have much more uncertainty, and sure, AI might develop in ways that could pose serious challenges.


One thing we do see is that a lot of progress is happening very fast. For example, computer vision has undergone a complete transformation — papers from more than three years ago now look foreign in the face of recent approaches. So when we zoom out further, over decades, I think I have a fairly wide distribution over where we could be. Say there is a 1% chance of something crazy and groundbreaking happening. When you additionally multiply that by the utility of a few for-profit companies having a monopoly over this tech, then yes, that starts to sound scary.

Do you think we should put restraints on AI research to assure safety?

No, not top-down, at least right now. In general I think it’s a safer route to have more AI experts who have a shared awareness of the work in the field. Opening up research, as OpenAI wants to do, rather than having commercial entities hold a monopoly over results for intellectual property purposes, is perhaps a good way to go.

True, but recently for-profit companies have been releasing their technology as well; I’m thinking of Google’s TensorFlow and Facebook’s Torch. In this sense, how does OpenAI differ in its “open research” approach?

So when you say “releasing,” there are a few things that need clarification. First, Facebook did not release Torch; Torch is a library that’s been around for several years now. Facebook has committed to Torch and is improving on it. So has DeepMind.

But TensorFlow and Torch are just tiny specks of their research — they are tools that can help others do research well, but they’re not actual results that others can build upon.

Still, it is true that many of these industrial labs have recently established a good track record of publishing research results, partly because a large number of people on the inside are from academia. Even so, there is a veil of secrecy surrounding a large portion of the work, and not everything makes it out. In the end, companies don’t really have very strong incentives to share.

OpenAI, on the other hand, encourages us to publish, to engage the public and academia, to Tweet, to blog. I’ve gotten into trouble in the past for sharing a bit too much from inside companies, so I personally really, really enjoy the freedom.

What if OpenAI comes up with a potentially game-changing algorithm that could lead to superintelligence? Wouldn’t a fully open ecosystem increase the risk of abusing the technology?

In a sense it’s kind of like CRISPR. CRISPR is a huge leap for genome editing that’s been around for only a few years, but has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society.

If something like that happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.

In the end, if there is a small chance of something crazy happening in AI research, everything else being equal, do you want these advances to be made inside a commercial company, especially one that has a monopoly on the research, or do you want this to happen within a non-profit?

We have this philosophy embedded in our DNA from the start that we are mindful of how AI develops, rather than just [a focus on] maximizing profit.

In that case, is OpenAI comfortable being the gatekeeper, so to speak? You’re heavily influencing how the field is going to go and where it’s going.

It’s a lot of responsibility. It’s a “lesser evil” argument; I think it’s still bad. But we’re not the only ones “controlling” the field — because of our open nature we welcome and encourage others to join in on the discussion. Also, what’s the alternative? In a way a non-profit, with sharing and safety in its DNA, is the best option for the field and the utility of the field.

Also, AI is not the only field to worry about — I think bio is a far more pressing domain in terms of destroying the world [laugh]!

In terms of hiring — OpenAI is competing against giant tech companies in Silicon Valley. How is the company planning on attracting top AI researchers?

We have perks [laugh].

But in all seriousness, I think the company’s mission and team members are enough. We’re currently actively hiring people, and so far have no trouble getting people excited about joining us. In several ways OpenAI combines the best of academia and the startup world, and being a non-profit we have the moral high ground, which is nice [laugh].

The team, especially, is a super strong, super tight team and that is a large part of the draw.

Take some rising superstars in the field — myself not included — put them together and you get OpenAI. I joined mainly because I heard about who else is on the team. In a way, that’s the most shocking part; a friend of mine described it as “storming the temple.” Greg came in from nowhere and scooped up the top people to do something great and make something new.

Now that OpenAI has a rockstar team of scientists, what’s your strategy for developing AI? Are you getting vast amounts of data from Elon? What problems are you tackling first?

So we’re really still trying to figure a lot of this out. We are trying to approach this with a combination of bottom up and top down thinking. Bottom up are the various papers and ideas we might want to work on. Top down is doing so in a way that adds up. We’re currently in the process of thinking this through.

For example, I just submitted one vision research proposal draft today, actually [laugh]. We’re putting a few of them together. Also it’s worth pointing out that we’re not currently actively working on AI safety. A lot of the research we currently have in mind looks conventional. In terms of general vision and philosophy I think we’re most similar to DeepMind.

We might be able to at some point take advantage of data from Elon or YC companies, but for now we also think we can go quite far making our own datasets, or working with existing public datasets that we can work on in sync with the rest of academia.

Would OpenAI ever consider going into hardware, since sensors are a main way of interacting with the environment?

So, yes we are interested, but hardware has a lot of issues. For us, roughly speaking there are two worlds: the world of bits and the world of atoms. I am personally inclined to stay in the world of bits for now, in other words, software. You can run things in the cloud, it’s much faster. The world of atoms — such as robots — breaks too often and usually has a much slower iteration cycle. This is a very active discussion that we’re having in the company right now.

Do you think we can actually get to generalized AI?

I think to get to superintelligence we might currently be missing differences of a “kind,” in the sense that we won’t get there by just making our current systems better. But fundamentally there’s nothing preventing us getting to human-like intelligence and beyond.

To me, it’s mostly a question of “when,” rather than “if.”

I don’t think we need to simulate the human brain to get to human-like intelligence; we can zoom out and approximate how it works. I think there’s a more straightforward path. For example, some recent work shows that ConvNet* activations are very similar to the human visual cortex’s IT area activation, without mimicking how neurons actually work.

[*SF: ConvNet, or convolutional network, is a type of artificial neural network topology tailored to visual tasks first developed by Yann LeCun in the 1990s. IT is the inferior temporal cortex, which processes complex object features.]
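
[A minimal, illustrative sketch of what such a convolutional network looks like in code; this is not code from OpenAI or from the studies Karpathy mentions. It assumes the PyTorch library, and the layer sizes are arbitrary choices for a 32x32 RGB image:]

import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """A small convolutional network for 32x32 RGB images (illustrative only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolutional layers scan the image with small learned filters,
        # loosely analogous to the local receptive fields of visual neurons.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        # A final linear layer maps the pooled feature maps to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage: a batch of four random 32x32 RGB images -> four vectors of 10 class scores.
scores = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])

Real vision models stack many more such convolution-and-pooling layers and are trained on large image datasets, but the topology follows the same pattern shown here.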

So it seems to me that with ConvNets we’ve almost checked off large parts of the visual cortex, which is somewhere around 30% of the cortex, and the rest of the cortex maybe doesn’t look all that different. So I don’t see why, over a timescale of several decades, we can’t make good progress on checking off the rest.

Another point is that we don’t necessarily have to be worried about human-level AI. I consider chimp-level AI to be equally scary, because going from chimp to humans took nature only a blink of an eye on evolutionary time scales, and I suspect that might be the case in our own work as well. Similarly, my feeling is that once we get to that level it will be easy to overshoot and get to superintelligence.

On a positive note though, what gives me solace is that when you look at our field historically, the image of AI research progressing with a series of unexpected “eureka” breakthroughs is wrong. There is no historical precedent for such moments; instead we’re seeing a lot of fast and accelerating, but still incremental progress. So let’s put this wonderful technology to good use in our society while also keeping a watchful eye on how it all develops.

Image Credit: Shutterstock.com

See the original post here.

The AI Wars: The Battle of the Human Minds to Keep Artificial Intelligence Safe

For all the media fear-mongering about the rise of artificial intelligence in the future and the potential for malevolent machines, a battle in the AI wars has already begun. But this one is being waged by some of the most impressive minds within the realm of human intelligence today.

At the start of 2015, few AI researchers were worried about AI safety, but that all changed quickly. Throughout the year, Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, grew increasingly popular. The Future of Life Institute held its AI safety conference in Puerto Rico. Two open letters regarding artificial intelligence and autonomous weapons were released. Countless articles came out, quoting AI concerns from the likes of Elon Musk, Stephen Hawking, Bill Gates, Steve Wozniak, and other luminaries of science and technology. Musk donated $10 million in funding to AI safety research through FLI. Fifteen million dollars was granted to the creation of the Leverhulme Centre for the Future of Intelligence. And most recently, the nonprofit AI research company, OpenAI, was launched to the tune of $1 billion, which will allow some of the top minds in the AI field to address safety-related problems as they come up.

In all, it’s been a big year for AI safety research. Many in science and industry have joined the AI-safety-research-is-needed camp, but there are still some stragglers of equally staggering intellect. So just what does the debate still entail?

OpenAI was the big news of the past week, and its launch coincided (probably not coincidentally) with the Neural Information Processing Systems conference, which attracts some of the best-of-the-best in machine learning. Among the attractions at the conference was the symposium, Algorithms Among Us: The Societal Impacts of Machine Learning, where some of the most influential people in AI research and industry debated their thoughts and concerns about the future of artificial intelligence.

[Author’s note: The following are symposium highlights grouped together by topic to inform about arguments in the world of AI research. The discussions did not necessarily occur in the order below.]
 

From session 2 of the Algorithms Among Us symposium: Murray Shanahan, Shane Legg, Andrew Ng, Yann LeCun, Tom Dietterich, and Gary Marcus

What is AGI and should we be worried about it?

Artificial general intelligence (AGI) is the term given to artificial intelligence that would be, in some sense, equivalent to human intelligence. It wouldn’t solve just a narrow, specific task, as AI does today, but would instead solve a variety of problems and perform a variety of tasks, with or without being programmed to do so. That said, it’s not the most well-defined term. As Yann LeCun, the director of Facebook’s AI research group, stated, “I don’t want to talk about human-level intelligence because I don’t know what that means really.”

If defining AGI is difficult, predicting if or when it will exist is nearly impossible. Some of the speakers, like LeCun and Andrew Ng, didn’t want to waste time considering the possibility of AGI since they consider it to be so distant. Both referenced the likelihood of another AI winter, in which, after all this progress, scientists will hit a research wall that will take some unknown number of years or decades to overcome. Ng, a Stanford professor and Chief Scientist of Baidu, compared concerns about the future of human-level AI to far-fetched worries about the difficulties surrounding travel to the star system Alpha Centauri.

LeCun pointed out that we don’t really know what a superintelligent AI would look like. “Will AI look like human intelligence? I think not. Not at all,” he said. He then went on to explain why human intelligence isn’t nearly as general as we like to believe. “We’re driven by basic instincts […] They (AI) won’t have the drives that make humans do bad things to each other.” He added that there would be no reason he can think of to build preservation instincts or curiosity into machines.

However, many of the participants disagreed with LeCun and Ng, emphasizing the need to be prepared in advance of problems, rather than trying to deal with them as they arise.

Shane Legg, co-founder of Google’s DeepMind, argued that the benefit of starting safety research now is that it will help us develop a framework that will allow researchers to move in a positive direction toward the development of smarter AI. “In terms of AI safety, I think it’s both overblown and underemphasized,” he said, commenting on how profound – both positively and negatively – the societal impact of advanced AI could be. “If we are approaching a transition of this magnitude, I think it’s only responsible that we start to consider, to whatever extent that we can in advance, the technical aspects and the societal aspects and the legal aspects and whatever else […] Being prepared ahead of time is better than trying to be prepared after you already need some good answers.”

Gary Marcus, Director of the NYU Center for Language and Music, added, “In terms of being prepared, we don’t just need to prepare for AGI, we need to prepare for better AI […] Already, issues of security and risk have come forth.”

Even Ng agreed that AI safety research certainly wasn’t a bad thing, saying, “I’m actually glad there are other parts of society studying ethical parts of AI. I think this is a great thing to do.” Though he also admitted it wasn’t something he wanted to spend his own time on.
 

It’s the economy…

Among all of the AI issues debated by researchers, the one agreed upon by almost everyone who took the stage at the symposium was the detrimental impact AI could have on the job market. Erik Brynjolfsson, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, set the tone for the discussion with his presentation, which highlighted some of the effects that artificial intelligence will have on the economy. He explained that we’re in the midst of incredible technological advances, which could be highly beneficial, but our skills, organizations and institutions aren’t keeping up. Because of this huge gap in pace, business as usual won’t work.

As unconcerned as Ng was about the future of AGI, he quickly became the strongest advocate for tackling the economic issues that will pop up in the near future. “I think the biggest challenge is the challenge of unemployment,” Ng said.

The issue of unemployment is one that is already starting to appear, even with the very narrow AI that exists today. Around the world, low- and middle-skilled workers are getting displaced by robots or software, and that trend is expected to continue at rapid rates.

LeCun argued that the world also overcame the massive job losses that resulted from the new technologies associated with the steam engine, but both Brynjolfsson and Ng disagreed with that argument, citing the much more rapid pace of technology today. “Technology has always been destroying jobs, and it’s always been creating jobs,” Brynjolfsson admitted, but he also explained how difficult it is to predict which technologies will impact us the most and when they’ll kick in. The current exponential rate of technological progress is unlike anything we’ve experienced before in history.

Bostrom mentioned that the rise of thinking machines will be more analogous to the rise of the human species than to the steam engine or the industrial revolution. He reminded the audience that if a superintelligent AI is developed, it will be the last invention we ever have to make.

A big concern with the economy is that the job market is changing so quickly that most people can’t develop new skills fast enough to keep up. A basic income and paying people to go back to school were both mentioned as possible responses. However, a basic income alone cannot address the psychological toll of being unemployed, and the effect that mass unemployment might have on people drew concern from the panelists.

Bostrom became an unexpected voice of optimism, pointing out that there have always been groups who were unemployed, such as aristocrats, children and retirees. Each of these groups managed to enjoy their unemployed time by filling it with other hobbies and activities.

However, solutions like basic income and leisure time will only work if political leaders begin to take the initiative soon to address the unemployment issues that near-future artificial intelligence will trigger.
 

From session 2 of the Algorithms Among Us symposium: Michael Osborne, Finale Doshi-Velez, Neil Lawrence, Cynthia Dwork, Tom Dietterich, Erik Brynjolfsson, and Ian Kerr

Closing arguments

Ideally, technology is just a tool that is not inherently good or bad; whether it helps humanity or hurts us depends on how we use it. But if AI develops the capacity to think, this argument no longer quite holds. At that point, the AI isn’t a person, but it isn’t just an instrument either.

Ian Kerr, the Research Chair of Ethics, Law, and Technology at the University of Ottawa, spoke early in the symposium about the legal ramifications (or lack thereof) of artificial intelligence. The overarching question for an AI gone wrong is: who’s to blame? Who will be held responsible when something goes wrong? Or, on the flip side, who is to blame if a human chooses to ignore the advice of an AI that’s had inconsistent results, but which later turns out to have been the correct advice?

If anything, one of the most striking outcomes of this debate was how often the participants agreed with each other. At the start of the year, few AI researchers were worried about safety. Now, though many still aren’t worried, most acknowledge that we’re all better off if we consider safety and other issues sooner rather than later. The main disagreement was over when we should start working on AI safety, not whether it should happen at all. The panelists also all agreed that however smart AI might become, it will arrive incrementally, rather than as the single “event” implied in so many media stories. We already have machines that are smarter and better at some tasks than humans, and that trend will continue.

For now, as Harvard Professor Finale Doshi-Velez pointed out, we can control what we get out of the machine: if we don’t like or understand the results, we can reprogram it.

But how much longer will that be a viable solution?
 

Coming soon…

The article above highlights some of the discussion that occurred between AI researchers about whether or not we need to focus on AI safety research. Because so many AI researchers do support safety research, there was also much more discussion during the symposium about which areas pose the most risk and have the most potential. We’ll be starting a new series in the new year that goes into greater detail about different fields of study that AI researchers are most worried about and most excited about.

 

Pentagon Seeks $12-$15 Billion for AI Weapons Research

The news this month is full of stories about money pouring into AI research. First we got the news about the $15 million granted to the new Leverhulme Center for the Future of Intelligence. Then Elon Musk and friends dropped the news about launching OpenAI to the tune of $1 billion, promising that this would be a not-for-profit company committed to safe AI and improving the world. But that all pales in comparison to the $12-$15 billion that the Pentagon is requesting for the development of AI weapons.

According to Reuters, “The Pentagon’s fiscal 2017 budget request will include $12 billion to $15 billion to fund war gaming, experimentation and the demonstration of new technologies aimed at ensuring a continued military edge over China and Russia.” The military is looking to develop more advanced weapons technologies that will include autonomous weapons and deep learning machines.

While the research itself would be strictly classified, the military wants to ensure that countries like China and Russia know this advanced weapons research is taking place.

“I want our competitors to wonder what’s behind the black curtain,” Deputy Defense Secretary Robert Work said.

The United States will continue to try to develop positive relations with Russia and China, but Work believes AI weapons R&D will help strengthen deterrence.

Read the full Reuters article here.


OpenAI Announced

Press release from OpenAI:
Introducing OpenAI

by Greg Brockman, Ilya Sutskever, and the OpenAI team
December 11, 2015
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

Background

Artificial intelligence has always been a surprising field. In the early days, people thought that solving certain tasks (such as chess) would lead us to discover human-level intelligence algorithms. However, the solution to each task turned out to be much less general than people were hoping (such as doing a search over a huge number of moves).

The past few years have held another flavor of surprise. An AI technique explored for decades, deep learning, started achieving state-of-the-art results in a wide variety of problem domains. In deep learning, rather than hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them.

This approach has yielded outstanding results on pattern recognition problems, such as recognizing objects in images, machine translation, and speech recognition. But we’ve also started to see what it might be like for computers to be creative, to dream, and to experience the world.
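
[A minimal, illustrative sketch of the point above about architectures and data, added for concreteness; it is not part of OpenAI’s announcement and assumes the PyTorch library. The same small architecture, fed different data, ends up implementing different functions:]

import torch
import torch.nn as nn

def train(inputs, targets):
    # One generic architecture: a tiny fully connected network. Nothing about
    # the task is hand-coded; the weights are shaped entirely by the data.
    model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
    for _ in range(2000):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()
    return model

x = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
xor_net = train(x, torch.tensor([[0.0], [1.0], [1.0], [0.0]]))  # learns XOR
and_net = train(x, torch.tensor([[0.0], [0.0], [0.0], [1.0]]))  # learns AND
print(xor_net(x).round().squeeze().tolist())  # approximately [0, 1, 1, 0]
print(and_net(x).round().squeeze().tolist())  # approximately [0, 0, 0, 1]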

Looking forward

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

OpenAI

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

You can follow us on Twitter at @open_ai or email us at info@openai.com.

The World Has Lost 33% of Its Farmable Land

During the Paris climate talks last week, researchers from the University of Sheffield’s Grantham Center revealed that in the last 40 years, the world has lost nearly 33% of its farmable land.

The loss is attributed to erosion and pollution, but the effects are expected to be exacerbated by climate change. Meanwhile, global food production is expected to grow by 60% in the next 35 years.

Researchers at the Grantham Center argue that the current intensive agriculture system is unsustainable. Modern agriculture requires heavy use of fertilizers, which “consume 5% of the world’s natural gas production and 2% of the world’s annual energy supply.” This use of fertilizers also allows “nutrients to wash out and pollute fresh coastal waters, causing algal blooms and lethal oxygen depletion,” along with a host of other problems. As fertilizers weaken the soil, heavily ploughed fields can face erosion rates that are “10-100 times greater than [the] rates of soil formation.”

Organic farming typically includes better soil management practices, but organic crop yields alone will not be sufficient to feed the growing global population.

In response to these concerns, Grantham Center researchers have called for a sustainable model for intensive agriculture that will incorporate lessons both from history and modern biotechnology. The scientists suggest the following three principles for improved farming practices:

  1. “Managing soil by direct manure application, rotating annual and cover crops, and practicing no-till agriculture.”
  2. “Using biotechnology to wean crops off the artificial world we have created for them, enabling plants to initiate and sustain symbioses with soil microbes.”
  3. “Recycling nutrients from sewage in a modern example of circular economy. Inorganic fertilizers could be manufactured from human sewage in biorefineries operating at industrial or local scales.”

The Grantham researchers recognize that the task of improving our farming situation can’t just fall on farmers’ shoulders. They expect policymakers will also need to get involved.

Speaking to the Guardian, Duncan Cameron, one of the scientists involved in this study, said, “We can’t blame the farmers in this. We need to provide the capitalisation to help them rather than say, ‘Here’s a new policy, go and do it.’ We have the technology. We just need the political will to give us a fighting chance of solving this problem.”

Read the complete Grantham Center briefing note here.

$15 Million Granted by Leverhulme to New AI Research Center at Cambridge University

The University of Cambridge has received a grant of just over $15 million USD from the Leverhulme Trust to establish a 10-year Centre focused on the opportunities and challenges posed by AI in the long term. They provided FLI with the following news release:

About the New Center

Hot on the heels of 80K’s excellent AI risk research career profile, we’re delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence (CFI), to be led by Cambridge (Huw Price and Zoubin Ghahramani), with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed at CSER, but will be a stand-alone centre, albeit collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration recently funded by the Future of Life Institute’s AI safety grants program). We also hope for extensive collaboration with the Future of Life Institute.

Building on the “Puerto Rico Agenda” from the Future of Life Institute’s landmark January 2015 conference, it will have the long-term safe and beneficial development of AI at its core, but with a broader remit than CSER’s focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

CFI builds on the pioneering work of FHI, FLI and others, along with the generous support of Elon Musk, who helped massively boost this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will achieve is in taking a big step towards making this global area of research a long-term one in which the best talents can be expected to have lasting careers – the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions will be opening up in this space across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring for AI safety researchers, CSER will be hiring for an AI policy postdoc in the spring, and MIRI is also hiring. A number of the key researchers in the AI safety community are also organizing a high-level symposium on the impacts and future of AI at the Neural Information Processing Systems conference next week.

 

CFI and the Future of AI Safety Research

Human-level intelligence is familiar in biological ‘hardware’ — it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million (~$15 million USD) grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks — from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

A version of this news release can also be found on the Cambridge University website and at Eureka Alert.