What happens when our computers get smarter than we are?

Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, gave a talk at TED 2015 about artificial superintelligence (ASI):

“What happens when our computers get smarter than we are?”

From his polls of researchers, he posits that we will hit ASI by 2040 or 2050, and that this may be a dramatic change. To quote Nick:

“Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.”

For those who are unfamiliar with the subject, he breaks the problem down in a very straightforward fashion. He lays out many of the concerns about an ASI's interests diverging from those of the human race, and the limitations in our ability to control an ASI once it has been developed. His final prescription in the talk is an optimistic one, but given recent human advances in weaponizing code, it may fall short of assuaging concerns. It is definitely worth a look, over at ted.com.

Happy Birthday, FLI!

Today we are celebrating one year since our launch event. It’s been an amazing year, full of wonderful accomplishments, and we would like to express our gratitude to all those who supported us with their advice, hard work and resources. Thank you – and let’s make this year even better!

Here’s a video with some of the highlights of our first year. You’ll find many familiar faces here, perhaps including your own!

What AI Researchers Say About Risks from AI

As the media relentlessly focuses on the concerns of public figures like Elon Musk, Stephen Hawking, and Bill Gates, you may wonder – what do AI researchers think about the risks from AI? In his informative article, Scott Alexander comprehensively reviews the opinions of prominent AI researchers on these risks. He selected the researchers to profile in his article as follows:

The criteria for my list: I’m only mentioning the most prestigious researchers, either full professors at good schools with lots of highly-cited papers, or else very-well respected scientists in industry working at big companies with good track records. They have to be involved in AI and machine learning. They have to have multiple strong statements supporting some kind of view about a near-term singularity and/or extreme risk from superintelligent AI. Some will have written papers or books about it; others will have just gone on the record saying they think it’s important and worthy of further study.

Scott’s review turns up some interesting parallels between the views of the concerned researchers and the skeptics:

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research then you are an idiot.

I agree whole-heartedly with both points. So do the leaders of the AI risk movement.

[…]

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

It’s encouraging that there is less controversy than one might expect – in a nutshell, AI researchers agree that hype is bad and research into potential risks is good.

Hawking's AI Speech

Stephen Hawking, who serves on our FLI Scientific Advisory Board, just gave an inspiring and thought-provoking talk that I think of as "A Brief History of Intelligence". He spoke of the opportunities and challenges related to future artificial intelligence at a Google conference outside London, and you can watch it here.

MIRI’s New Executive Director

Big news from our friends at MIRI: Nate Soares is stepping up as the new Executive Director, and Luke Muehlhauser has accepted a research position at GiveWell.

Luke has done an awesome job leading MIRI for the past three years, and it’s been a pleasure for us at FLI to collaborate with him. We wish him the best of success in his research work at GiveWell.

Nate has contributed greatly to MIRI’s mission over the past year. We are excited about his appointment as MIRI’s new leader, and looking forward to the course he sets for the organization.

Congratulations to Nate and Luke from the FLI team!

Chinese Scientists Report Unsuccessful Attempt to Selectively Edit Disease Gene in Human Embryos

Researchers from Sun Yat-sen University in Guangzhou failed to selectively modify a single gene in unicellular human embryos using CRISPR/Cas9 technology, noting many off-target mutations. The study received a lot of media and public attention (NYT, Nature, TIME), primarily because of previously expressed ethical concerns about human genetic modification.

The authors seem to have ignored the opinion of many scientists in the US, including the original developers of the CRISPR/Cas9 technology, who called for a pause in all human germline gene editing studies until the risks and benefits can be assessed by the public and the research community. This shows that the international research community currently lacks the power to discourage potentially dangerous or ethically questionable research if national governments choose to support it. However, it is notable that the paper was rejected by Nature and Science (and, possibly, other journals) in part due to ethical considerations and had to be published in the much less prestigious Chinese journal Protein & Cell. This is a reason for optimism: if Science, Nature, and other high-impact journals can coordinate on this, they may be able to cooperate in other cases of research of concern, such as gain-of-function studies.

While some think the study shows that CRISPR gene editing has a long way to go before it is ready for use in humans, this seems unlikely to me. Previous studies in mice and, more importantly, in monkeys were very successful (in the case of monkeys, "no off-target mutagenesis was detected"). It seems more likely that the failure of the Chinese study was caused by defective embryos – in an attempt to mitigate ethical concerns, the researchers used tripronuclear zygotes, which cannot develop normally. It may turn out that normal human embryos are much easier to modify, and, given that, according to Nature, there are at least four other Chinese groups working on similar problems, we may find this out sooner than we might want.

Dubai to Employ “Fully Intelligent” Robot Police

I don’t know how seriously to take this, but Dubai is developing robo-cops to roam public areas like malls:

“‘The robots will interact directly with people and tourists,’ [Colonel Khalid Nasser Alrazooqi] said. ‘They will include an interactive screen and microphone connected to the Dubai Police call centres. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots.’

In four or five years, however, Alrazooqi said that Dubai Police will be able to field autonomous robots that require no input from human controllers.

“‘These will be fully intelligent robots that can interact with people, with no human intervention at all,’ he said. ‘This is still under research and development, but we are planning on it.’”

I don’t know what he means by ‘fully intelligent’ robots, but I would be surprised if anything fitting my conception of full intelligence were around in five years.

Interestingly, this sounds similar to the Knightscope K5 already in beta in the United States – a rolling, autonomous robot whose sensors try to detect suspicious activity or recognize faces and license plates of wanted criminals, and then alert authorities.

While the Knightscope version focuses on mass surveillance and data collection, Dubai is proposing that their robo-cops be able to interact with the public.

I would expect people to be more comfortable with that – especially if there’s a smooth transition from being controlled and voiced by humans to being fully autonomous.