Risks From General Artificial Intelligence Without an Intelligence Explosion

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

– Computer scientist I. J. Good, 1965

Artificial intelligence systems we have today can be referred to as narrow AI – they perform well at specific tasks, like playing chess or Jeopardy, and some classes of problems like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about”. It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though I do consider it relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that an intelligence explosion were impossible.

Here are some dangerous aspects of developing general AI, besides the IE scenario:

  1. Human incentives. Researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible. There is no particular reason to think that humans are the pinnacle of intelligence – if we create a system without our biological constraints, with more computing power, memory, and speed, it could become more intelligent than us in important ways. The incentives are to continue improving AI systems until they hit physical limits on intelligence, and those limitations (if they exist at all) are likely to be above human intelligence in many respects.
  2. Convergent instrumental goals. Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, regardless of the specific objective or design. This was outlined in Omohundro’s paper and more concretely formalized in a recent MIRI paper. Humans routinely destroy animal habitats to acquire natural resources, and an AI system with any goal could always use more data centers or computing clusters.
  3. Unintended consequences. As in the stories of the Sorcerer’s Apprentice and King Midas, you get what you asked for, but not what you wanted. This already happens with narrow AI, as in the frequently cited example from the Bird & Layzell paper: a genetic algorithm was supposed to design an oscillator using a configurable circuit board, and instead evolved a makeshift radio that picked up signals from neighboring computers to produce the requisite oscillating pattern. Unintended consequences produced by a general AI, more opaque and more powerful than narrow AI, would likely be far worse.
  4. Value learning is hard. Specifying common sense and ethics in computer code is no easy feat. As Stuart Russell has argued, given a misspecified value function that omits variables that turn out to be important to humans, an optimization process is likely to set these unconstrained variables to extreme values. Think of what would happen if you asked a self-driving car to get you to the airport as fast as possible, without assigning value to obeying speed limits or avoiding pedestrians (see the sketch after this list). While researchers would have incentives to build in the level of common sense and understanding of human concepts needed for commercial applications like household robots, that might not be enough for general AI.
  5. Value learning is insufficient. Even an AI system with perfect understanding of human values and goals would not necessarily adopt them. Humans understand the “goals” of the evolutionary process that generated us, but don’t internalize them – in fact, we often “wirehead” our evolutionary reward signals, e.g. by eating sugar.
  6. Containment is hard. A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down – this is a serious problem even with present-day computer viruses. When developing an AI system in the vicinity of general intelligence, it would be important to keep it cut off from the internet. Large-scale AI systems are likely to be run on a computing cluster or on the cloud, rather than on a single machine, which makes isolation from the internet more difficult. Containment measures would likely pose sufficient inconvenience that many researchers would be tempted to skip them.
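To make point 4 concrete, here is a minimal, hypothetical sketch (my own illustration, not from the original post) of what happens when an optimizer is handed a misspecified objective. The toy “airport” objective rewards only a short travel time; because it says nothing about speed, the optimum sits at the edge of whatever range the planner is allowed to search, and it only moves back near the speed limit once the omitted variable is penalized.

```python
# Toy illustration (hypothetical numbers): a planner picks a driving speed to
# maximize an objective. The misspecified objective rewards only fast arrival.
import numpy as np

def misspecified_objective(speed_kmh, distance_km=30.0):
    """Reward = negative travel time; says nothing about speed limits or safety."""
    return -(distance_km / speed_kmh)

def corrected_objective(speed_kmh, distance_km=30.0, limit_kmh=100.0, penalty=10.0):
    """Same reward, plus a crude quadratic penalty for exceeding the speed limit."""
    overspeed = max(0.0, speed_kmh - limit_kmh)
    return -(distance_km / speed_kmh) - penalty * (overspeed / limit_kmh) ** 2

speeds = np.linspace(10, 300, 1000)  # candidate speeds the planner may consider

best_misspecified = speeds[np.argmax([misspecified_objective(s) for s in speeds])]
best_corrected = speeds[np.argmax([corrected_objective(s) for s in speeds])]

print(f"Optimum with the misspecified objective: {best_misspecified:.0f} km/h")      # the boundary, 300
print(f"Optimum once the omitted variable is penalized: {best_corrected:.0f} km/h")  # ~101, near the limit
```

The particular penalty term is not the point; the pattern is. Whatever the objective leaves unconstrained, a sufficiently powerful optimizer will push as far as it is allowed to.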

Some believe that if intelligence explosion does not occur, AI progress will occur slowly enough that humans can stay in control. Given that human institutions like academia or governments are fairly slow to respond to change, they may not be able to keep up with an AI that attains human-level or superhuman intelligence over months or even years. Humans are not famous for their ability to solve coordination problems. Even if we retain control over AI’s rate of improvement, it would be easy for bad actors or zealous researchers to let it go too far – as Geoff Hinton recently put it, “the prospect of discovery is too sweet”.

As a machine learning researcher, I care about whether my field will have a positive impact on humanity in the long term. The challenges of AI safety are numerous and complex (for a more technical and thorough exposition, see Jacob Steinhardt’s essay), and cannot be rounded off to a single scenario. I look forward to a time when disagreements about AI safety no longer derail into debates about IE, and instead focus on other relevant issues we need to figure out.

(Thanks to Janos Kramar for his help with editing this post.)

This story was originally published here.

Can Artificial Intelligence Save Thanksgiving?

REI, the outdoor retail company, has made headlines in recent weeks for its decision to not only remain closed on Thanksgiving and Black Friday, but also to provide all 12,000 of its employees with paid time off on those days. As Shep Hyken with Forbes says, REI has gone retail rogue.

Or have they? REI is hardly the only company to shut its doors on the holiday this year. And though many big box stores argue that they must remain open to be competitive, Amazon, the online behemoth, is on track to make record sales. Black Friday shopping now presents us with two interesting trends: 1) more people are getting the day off, and 2) a company synonymous with automation is set to reign on this shopping holiday.

Is it possible that the oft-maligned rise of robots could benefit society and that paid days off will be a thing of the future? Or is this another indication that the development of artificial intelligence will lead to massive job loss?

Many researchers in economics and AI have been warning about the threat that robots, automation and AI pose to the job market. Corporate technology today relies heavily on automated systems, which may have some level of narrow artificial intelligence and which have already replaced many jobs. The fear is that as the technology advances, automated robots will become autonomous robots – robots that can move and ‘think’ on their own – leaving few, if any, jobs for people.

Amazon, with its huge, increasingly automated warehouses, is a prime example of this. As the design of the warehouse robots allows them to organize, find, package, and ship products more efficiently, the need for human employees decreases. Other companies are either already in lockstep with Amazon’s technological advances, or they will be soon.

For those who want to see more employees spending the holidays with their families, this move toward automated, online shopping could be good news. But the obvious concern is that the family time will come at a price: unemployment. In a CNBC article, billionaire Jeff Greene warned: “In the not-too-distant future, humans in the workplace could go the way of the horse-and-buggy because of the ‘exponential growth of artificial intelligence.'” This fear is echoed by the likes of Stephen Hawking and Bill Gates.

However, with the holiday season upon us, it’s important to remember that all hope is not lost. If we look back through the history of technology, we see that big advances in technological capabilities do tend to displace workers initially. But the key word there is ‘initially.’ In fact, with every advance in technology to date, the labor market has actually grown and created more jobs. More than that, trends show that the number of dangerous, manual-labor jobs has declined sharply with technological advancements, while more thoughtful, caring jobs have increased.

Will artificial intelligence be different, creating a robotic workforce that no human can compete with, increasing the chasm between the superrich and everyone else? Possibly. But two other options also exist:

  • The current technological trends could continue, with manual-labor jobs declining and other types of work, such as social media management and caregiving, growing.
  • We could move toward a more utopian future in which the robotic workforce allows everyone to enjoy a more leisurely lifestyle – like the Star Trek economy.

At the end of an interview on MSN about the concerns of artificial intelligence in the job market, Martin Ford was asked if there was anything optimistic he could add. He said, “If we adapt appropriately, if we end up with something like a guaranteed income, then it’s an incredibly optimistic, almost utopian scenario. You can imagine a future where no one has to do a job they hate, no one has to do a dangerous job, people have more time for their families, for leisure, for other important things, and everyone is better off.”

This Thanksgiving, as we enjoy family time together and possibly a few good deals on items we’ve been waiting for, let’s remember to be grateful for what we have and optimistic that, if we prepare properly, the “robot apocalypse” could usher in a new era of quality family time and leisure.

And every day could be like this year’s REI Thanksgiving.

Dr. Strangelove is back: say hi to the cobalt bomb!

I must confess that, as a physics professor, some of my nightmares are extra geeky. My worst one is the C-bomb, a hydrogen bomb surrounded by large amounts of cobalt. When I first heard about this doomsday device in Stanley Kubrick’s dark nuclear satire “Dr. Strangelove”, I wasn’t sure if it was physically possible. Now I unfortunately know better, and it seems Russia may be building one.

The idea is terrifyingly simple: just encase a really powerful H-bomb in massive amounts of cobalt. When it explodes, it makes the cobalt radioactive and spreads it around the area or the globe, depending on the design. The half-life of the radioactive cobalt produced is about 5 years, which is long enough to give the fallout plenty of time to settle before it decays and kills, but short enough to produce intense radiation for a lot longer than you’d last in a fallout shelter. There’s almost no upper limit to how much cobalt and explosive power you can put in nukes that are buried for deterrence or transported by sea, and climate simulations have shown how hydrogen bombs can potentially lift fallout high enough to enshroud the globe, so if someone really wanted to risk the extinction of humanity, starting a C-bomb arms race is arguably one of the most promising strategies.
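To see why a roughly five-year half-life is such a dangerous middle ground, here is a small back-of-the-envelope decay calculation (my own illustration, using the standard 5.27-year half-life of cobalt-60): months after the blast nearly all of the activity is still present, yet it takes decades – far longer than anyone could stay in a shelter – for it to fall to a tiny fraction of the initial level.

```python
# Fraction of cobalt-60 activity remaining after t years, from the standard
# exponential-decay formula: remaining = 0.5 ** (t / half_life).
HALF_LIFE_YEARS = 5.27  # cobalt-60

for years in (0.1, 1, 5, 10, 20, 50):
    remaining = 0.5 ** (years / HALF_LIFE_YEARS)
    print(f"after {years:>4} years: {remaining:6.1%} of the initial activity remains")
```

Short-lived fission products, by contrast, mostly decay away within weeks, which is what makes ordinary fallout shelters viable in the first place.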

Not that anyone in their right mind would ever do such a thing, I figured back when I first saw the film. Although U.S. General Douglas MacArthur did suggest dropping some small cobalt bombs on the Korean border in the 1950s to deter Chinese troops, his request was denied and, as far as we know, no C-bombs were ever built. I felt relieved that my geeky nightmare was indeed nothing but a bad dream.

Except that life is imitating art: the other week, Russian state media “accidentally” leaked plans for a huge underwater drone that seems to contain a C-bomb. The leak was hardly accidental, as Dr. Strangelove himself explained in the movie: “the whole point of a Doomsday Machine is lost if you keep it a secret.”

So what should we do about this? Shouldn’t we encourage the superpowers to keep their current nuclear arsenals forever, since their nuclear deterrent has arguably saved millions of lives by preventing superpower wars since 1945? No, nuclear deterrence isn’t a viable long-term strategy unless the risk of accidental nuclear war can be reduced to zero. The annual probability of accidental nuclear war is poorly known, but it certainly isn’t zero: John F. Kennedy estimated the probability of the Cuban Missile Crisis escalating to war at between 33% and 50%, and near-misses keep occurring regularly. Even if the annual risk of global nuclear war is as low as 1%, we’ll probably have one within a century and almost certainly within a few hundred years. This future nuclear war would almost certainly take more lives than nuclear deterrence ever saved – especially with nuclear winter and C-bombs.
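The arithmetic behind that claim is simple compounding: if the annual probability of war is p and roughly independent from year to year, the chance of at least one war within n years is 1 − (1 − p)^n. A quick sketch with illustrative numbers:

```python
# Chance of at least one accidental nuclear war within n years, assuming a
# roughly independent annual probability p: 1 - (1 - p)**n. Illustrative only.
for annual_p in (0.01, 0.001):
    for years in (100, 300):
        cumulative = 1 - (1 - annual_p) ** years
        print(f"annual risk {annual_p:.1%} over {years} years: "
              f"{cumulative:.0%} chance of at least one war")
```

At a 1% annual risk the cumulative probability is about 63% after a century and about 95% after three hundred years; even a 0.1% annual risk compounds to roughly one in four over a few centuries.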

What should Barack Obama do? He should keep his election promise and take US nuclear missiles off hair-trigger alert, then cancel the planned trillion-dollar nuclear weapons upgrade, and ratchet up international pressure on Russia to follow suit. But wouldn’t reducing US nuclear arsenals weaken US nuclear deterrence? No: even just a handful of nuclear weapons provides powerful deterrence, and all but two nuclear powers have decided that a few hundred nuclear weapons suffice. Since the US and Russia currently have about 7,000 each, thousands of which are on hair-trigger alert, the main effect of reducing their arsenals would not be to weaken deterrence but to reduce the risk of accidental war and to incentivize nuclear non-proliferation. The trillion dollars saved could be used to strengthen US national security in many ways.

Let’s put an end to Dr. Strangelove’s absurdity before it puts an end to us.

This article can also be found on the Huffington Post.

The Superintelligence Control Problem

The following is an excerpt from the Three Areas of Research on the Superintelligence Control Problem, written by Daniel Dewey and highlighted in MIRI’s November newsletter:

What is the superintelligence control problem?

Though there are fundamental limits imposed on the capabilities of intelligent systems by the laws of physics and computational complexity, human brains and societies of human brains are probably far from these limits. It is reasonable to think that ongoing research in AI, machine learning, and computing infrastructure will eventually make it possible to build AI systems that not only equal, but far exceed human capabilities in most domains. Current research on AI and machine learning is at least a few decades from this degree of capability and generality, but it would be surprising if it were not eventually achieved.

Superintelligent systems would be extremely effective at achieving tasks they are set – for example, they would be much more efficient than humans are at interpreting data of all kinds, refining scientific theory, improving technologies, and understanding and predicting complex systems like the global economy and the environment (insofar as this is possible). Recent machine learning progress in natural language, visual understanding, and from-scratch reinforcement learning highlights the potential for AI systems to excel at tasks that have traditionally been difficult to automate. If we use these systems well, they will bring enormous benefits – even human-like performance on many tasks would transform the economy completely, and superhuman performance would extend our capabilities greatly.

However, superintelligent AI systems could also pose risks if they are not designed and used carefully. In pursuing a task, such a system could find plans with side-effects that go against our interests; for example, many tasks could be better achieved by taking control of physical resources that we would prefer to be used in other ways, and superintelligent systems could be very effective at acquiring these resources. If these systems come to wield much more power than we do, we could be left with almost no resources. If a superintelligent AI system is not purposefully built to respect our values, then its actions could lead to global catastrophe or even human extinction, as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.

There are other sources of risk from superintelligent systems; for example, oppressive governments could use these systems to do violence on a large scale, and the transition to a superintelligent economy could be difficult to navigate. These risks are also worth studying, but seem superficially to be more like the risks caused by artificial intelligence broadly speaking (e.g. risks from autonomous weapons or unemployment), and seem fairly separate from the superintelligence control problem.

Learn more about the three areas of research into this problem by reading the complete article here.

FLI at Nuclear Disarmament Conference

In the shadow left by the attacks on Lebanon, Paris, and Iraq, hundreds met this past Saturday for a Massachusetts Peace Action conference to discuss building sustainable security. Various panels, which included speakers such as Noam Chomsky, Chung-Wha Hong, and Jamie Eldridge, explored the current socio-economic and political landscape in search of meaning and feasible goals we could all work towards. The conference also broke into two segments of international and domestic workshops, considering issues like the climate crisis and movements such as Build Housing Not Bombs and Black Lives Matter.

The Future of Life Institute was represented at an international workshop that focused on new initiatives toward nuclear disarmament. Other organizations, such as the Union of Concerned Scientists, Global Zero, and Pax Christi, discussed their own efforts and findings in the area of nuclear weapons. A central theme of the workshop was the need for America and Russia to take their nuclear weapons off of hair-trigger alert. According to the Union of Concerned Scientists, there have been at least 13 nuclear close calls due to human, radar, and sensor fallibility. As time goes on and nuclear weapons remain on hair-trigger alert, the likelihood of an accidental all-out nuclear war continues to rise.

The Future of Life Institute also presented its own research and efforts regarding divestment from the production of new nuclear weapons. FLI is currently working on Pax Christi’s Don’t Bank on the Bomb project, which seeks to stigmatize investment in nuclear weapons from a policy-neutral point of view. While nuclear weapons have largely fallen out of the public’s consciousness, the 2015 report from the Don’t Bank on the Bomb project found that 382 large financial institutions are investing $493 billion USD in companies that produce nuclear weapons. In light of the persisting threat of nuclear weapons, we have completed research on several institutions in the Boston area and are beginning to move from research to advocacy. Our current efforts are focused on universities, banks, and local governments, such as Harvard University, JP Morgan Chase, and Fidelity Investments. If you live in the Boston area and are interested in taking part in our nuclear divestment project, please apply here.


What to think about machines that think

From MIRI:

In January, nearly 200 public intellectuals submitted essays in response to the 2015 Edge.org question, “What Do You Think About Machines That Think?” (available online). The essay prompt began:

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can “really” think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These “AIs”, if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “Our Final Hour” (Martin Rees). And Stephen Hawking recently made international headlines when he noted “The development of full artificial intelligence could spell the end of the human race.”

But wait! Should we also ask what machines that think, or, “AIs”, might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is “their” society “our” society? Will we, and the AIs, include each other within our respective circles of empathy?

The essays are now out in book form, and serve as a good quick-and-dirty tour of common ideas about smarter-than-human AI. The submissions, however, add up to 541 pages in book form, and MIRI’s focus on de novo AI makes us especially interested in the views of computer professionals. To make it easier to dive into the collection, I’ve put together a shorter list of links — the 32 argumentative essays written by computer scientists and software engineers.1 The resultant list includes three MIRI advisors (Omohundro, Russell, Tallinn) and one MIRI researcher (Yudkowsky).

I’ve excerpted passages from each of the essays below, focusing on discussions of AI motivations and outcomes. None of the excerpts is intended to distill the content of the entire essay, so you’re encouraged to read the full essay if an excerpt interests you.


Anderson, Ross. “He Who Pays the AI Calls the Tune.”2

The coming shock isn’t from machines that think, but machines that use AI to augment our perception. […]

What’s changing as computers become embedded invisibly everywhere is that we all now leave a digital trail that can be analysed by AI systems. The Cambridge psychologist Michael Kosinski has shown that your race, intelligence, and sexual orientation can be deduced fairly quickly from your behavior on social networks: On average, it takes only four Facebook “likes” to tell whether you’re straight or gay. So whereas in the past gay men could choose whether or not to wear their Out and Proud T-shirt, you just have no idea what you’re wearing anymore. And as AI gets better, you’re mostly wearing your true colors.


Bach, Joscha. “Every Society Gets the AI It Deserves.”

Unlike biological systems, technology scales. The speed of the fastest birds did not turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware and comprehensive than their human counterparts. AI is going to replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers and of course AI programmers. At this point, Artificial Intelligences can become self-perfecting, and radically outperform human minds in every respect. I do not think that this is going to happen in an instant (in which case it only matters who has got the first one). Before we have generally intelligent, self-perfecting AI, we will see many variants of task specific, non-general AI, to which we can adapt. Obviously, that is already happening.

When generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government and every large organisation will find itself forced to build and use them, or be threatened with extinction.

What will happen when AIs take on a mind of their own? Intelligence is a toolbox to reach a given goal, but strictly speaking, it does not entail motives and goals by itself. Human desires for self-preservation, power and experience are not the result of human intelligence, but of a primate evolution, transported into an age of stimulus amplification, mass-interaction, symbolic gratification and narrative overload. The motives of our artificial minds are (at least initially) going to be those of the organisations, corporations, groups and individuals that make use of their intelligence.


Bongard, Joshua. “Manipulators and Manipulanda.”

Personally, I find the ethical side of thinking machines straightforward: Their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to “detect and pull broken widgets from the conveyer belt the best way possible” will be extremely useful, intellectually uninteresting, and will likely destroy more jobs than they will create. Machines instructed to “educate this recently displaced worker (or young person) the best way possible” will create jobs and possibly inspire the next generation. Machines commanded to “survive, reproduce, and improve the best way possible” will give us the most insight into all of the different ways in which entities may think, but will probably give us humans a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. Which ones we wish to call into being is up to us all.


Brooks, Rodney A. “Mistaking Performance for Competence.”

Now consider deep learning that has caught people’s imaginations over the last year or so. […] The new versions rely on massive amounts of computer power in server farms, and on very large data sets that did not formerly exist, but critically, they also rely on new scientific innovations.

A well-known particular example of their performance is labeling an image, in English, saying that it is a baby with a stuffed toy. When a person looks at the image that is what they also see. The algorithm has performed very well at labeling the image, and it has performed much better than AI practitioners would have predicted for 2014 performance only five years ago. But the algorithm does not have the full competence that a person who could label that same image would have. […]

Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people’s heads.

The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.


Christian, Brian. “Sorry to Bother You.”

When we stop someone to ask for directions, there is usually an explicit or implicit, “I’m sorry to bring you down to the level of Google temporarily, but my phone is dead, see, and I require a fact.” It’s a breach of etiquette, on a spectrum with asking someone to temporarily serve as a paperweight, or a shelf. […]

As things stand in the present, there are still a few arenas in which only a human brain will do the trick, in which the relevant information and experience lives only in humans’ brains, and so we have no choice but to trouble those brains when we want something. “How do those latest figures look to you?” “Do you think Smith is bluffing?” “Will Kate like this necklace?” “Does this make me look fat?” “What are the odds?”

These types of questions may well offend in the twenty-second century. They only require a mind—any mind will do, and so we reach for the nearest one.


Dietterich, Thomas G. “How to Prevent an Intelligence Explosion.”

Creating an intelligence explosion requires the recursive execution of four steps. First, a system must have the ability to conduct experiments on the world. […]

Second, these experiments must discover new simplifying structures that can be exploited to side-step the computational intractability of reasoning. […]

Third, a system must be able to design and implement new computing mechanisms and new algorithms. […]

Fourth, a system must be able to grant autonomy and resources to these new computing mechanisms so that they can recursively perform experiments, discover new structures, develop new computing methods, and produce even more powerful “offspring.” I know of no system that has done this.

The first three steps pose no danger of an intelligence chain reaction. It is the fourth step—reproduction with autonomy—that is dangerous. Of course, virtually all “offspring” in step four will fail, just as virtually all new devices and new software do not work the first time. But with sufficient iteration or, equivalently, sufficient reproduction with variation, we cannot rule out the possibility of an intelligence explosion. […]

I think we must focus on Step 4. We must limit the resources that an automated design and implementation system can give to the devices that it designs. Some have argued that this is hard, because a “devious” system could persuade people to give it more resources. But while such scenarios make for great science fiction, in practice it is easy to limit the resources that a new system is permitted to use. Engineers do this every day when they test new devices and new algorithms.


Draves, Scott. “I See a Symbiosis Developing.”

A lot of ink has been spilled over the coming conflict between human and computer, be it economic doom with jobs lost to automation, or military dystopia teeming with drones. Instead, I see a symbiosis developing. And historically when a new stage of evolution appeared, like eukaryotic cells, or multicellular organisms, or brains, the old system stayed on and the new system was built to work with it, not in place of it.

This is cause for great optimism. If digital computers are an alternative substrate for thinking and consciousness, and digital technology is growing exponentially, then we face an explosion of thinking and awareness.


Gelernter, David. “Why Can’t ‘Being’ or ‘Happiness’ Be Computed?”

Happiness is not computable because, being the state of a physical object, it is outside the universe of computation. Computers and software do not create or manipulate physical stuff. (They can cause other, attached machines to do that, but what those attached machines do is not the accomplishment of computers. Robots can fly but computers can’t. Nor is any computer-controlled device guaranteed to make people happy; but that’s another story.) […] Computers and the mind live in different universes, like pumpkins and Puccini, and are hard to compare whatever one intends to show.


Gershenfeld, Neil. “Really Good Hacks.”

Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it’s a straight extrapolation of what’s been apparent on a log plot. That’s around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.

That’s what we’re now living through with AI. The size of common-sense databases that can be searched, or the number of inference layers that can be trained, or the dimension of feature vectors that can be classified have all been making progress that can appear to be discontinuous to someone who hasn’t been following them. […]

Asking whether or not they’re dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology we’ve never not been simultaneously doomed and about to be saved. In each case salvation has lain in the much more interesting details, rather than a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that it will be any different.


Hassabis, Demis; Legg, Shane; Suleyman, Mustafa. “Envoi: A Short Distance Ahead—and Plenty to Be Done.”

[W]ith the very negative portrayals of futuristic artificial intelligence in Hollywood, it is perhaps not surprising that doomsday images are appearing with some frequency in the media. As Peter Norvig aptly put it, “The narrative has changed. It has switched from, ‘Isn’t it terrible that AI is a failure?’ to ‘Isn’t it terrible that AI is a success?’”

As is usually the case, the reality is not so extreme. Yes, this is a wonderful time to be working in artificial intelligence, and like many people we think that this will continue for years to come. The world faces a set of increasingly complex, interdependent and urgent challenges that require ever more sophisticated responses. We’d like to think that successful work in artificial intelligence can contribute by augmenting our collective capacity to extract meaningful insight from data and by helping us to innovate new technologies and processes to address some of our toughest global challenges.

However, in order to realise this vision many difficult technical issues remain to be solved, some of which are long standing challenges that are well known in the field.


Hearst, Marti. “eGaia, a Distributed Technical-Social Mental System.”

We will find ourselves in a world of omniscient instrumentation and automation long before a stand-alone sentient brain is built—if it ever is. Let’s call this world “eGaia” for lack of a better word. […]

Why won’t a stand-alone sentient brain come sooner? The absolutely amazing progress in spoken language recognition—unthinkable 10 years ago—derives in large part from having access to huge amounts of data and huge amounts of storage and fast networks. The improvements we see in natural language processing are based on mimicking what people do, not understanding or even simulating it. It does not owe to breakthroughs in understanding human cognition or even significantly different algorithms. But eGaia is already partly here, at least in the developed world.


Helbing, Dirk. “An Ecosystem of Ideas.”

If we can’t control intelligent machines on the long run, can we at least build them to act morally? I believe, machines that think will eventually follow ethical principles. However, it might be bad if humans determined them. If they acted according to our principles of self-regarding optimization, we could not overcome crime, conflict, crises, and war. So, if we want such “diseases of today’s society” to be healed, it might be better if we let machines evolve their own, superior ethics.

Intelligent machines would probably learn that it is good to network and cooperate, to decide in other-regarding ways, and to pay attention to systemic outcomes. They would soon learn that diversity is important for innovation, systemic resilience, and collective intelligence.


Hillis, Daniel W. “I Think, Therefore AI.”

Like us, the thinking machines we make will be ambitious, hungry for power—both physical and computational—but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We have been building ambitious semi-autonomous constructions for a long time—governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we are not perfect designers and they have developed goals of their own. Over time the goals of the organization are never exactly aligned with the intentions of the designers.


Kleinberg, Jon; Mullainathan, Sendhil.3 “We Built Them, But We Don’t Understand Them.”

We programmed them, so we understand each of the individual steps. But a machine takes billions of these steps and produces behaviors—chess moves, movie recommendations, the sensation of a skilled driver steering through the curves of a road—that are not evident from the architecture of the program we wrote.

We’ve made this incomprehensibility easy to overlook. We’ve designed machines to act the way we do: they help drive our cars, fly our airplanes, route our packages, approve our loans, screen our messages, recommend our entertainment, suggest our next potential romantic partners, and enable our doctors to diagnose what ails us. And because they act like us, it would be reasonable to imagine that they think like us too. But the reality is that they don’t think like us at all; at some deep level we don’t even really understand how they’re producing the behavior we observe. This is the essence of their incomprehensibility. […]

This doesn’t need to be the end of the story; we’re starting to see an interest in building algorithms that are not only powerful but also understandable by their creators. To do this, we may need to seriously rethink our notions of comprehensibility. We might never understand, step-by-step, what our automated systems are doing; but that may be okay. It may be enough that we learn to interact with them as one intelligent entity interacts with another, developing a robust sense for when to trust their recommendations, where to employ them most effectively, and how to help them reach a level of success that we will never achieve on our own.

Until then, however, the incomprehensibility of these systems creates a risk. How do we know when the machine has left its comfort zone and is operating on parts of the problem it’s not good at? The extent of this risk is not easy to quantify, and it is something we must confront as our systems develop. We may eventually have to worry about all-powerful machine intelligence. But first we need to worry about putting machines in charge of decisions that they don’t have the intelligence to make.


Kosko, Bart. “Thinking Machines = Old Algorithms on Faster Computers.”

The real advance has been in the number-crunching power of digital computers. That has come from the steady Moore’s-law doubling of circuit density every two years or so. It has not come from any fundamentally new algorithms. That exponential rise in crunch power lets ordinary looking computers tackle tougher problems of big data and pattern recognition. […]

The algorithms themselves consist mainly of vast numbers of additions and multiplications. So they are not likely to suddenly wake up one day and take over the world. They will instead get better at learning and recognizing ever richer patterns simply because they add and multiply faster.


Krause, Kai. “An Uncanny Three-Ring Test for Machina sapiens.”

Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases…here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”

The big elusive question: Is consciousness an emergent behaviour? That is, will sufficient complexity in the hardware bring about that sudden jump to self-awareness, all on its own? Or is there some missing ingredient? This is far from obvious; we lack any data, either way. I personally think that consciousness is incredibly more complex than is currently assumed by “the experts”. […]

The entire scenario of a singular large-scale machine somehow “overtaking” anything at all is laughable. Hollywood ought to be ashamed of itself for continually serving up such simplistic, anthropocentric, and plain dumb contrivances, disregarding basic physics, logic, and common sense.

The real danger, I fear, is much more mundane: Already foreshadowing the ominous truth: AI systems are now licensed to the health industry, Pharma giants, energy multinationals, insurance companies, the military…


Lloyd, Seth. “Shallow Learning.”

The “deep” in deep learning refers to the architecture of the machines doing the learning: they consist of many layers of interlocking logical elements, in analogue to the “deep” layers of interlocking neurons in the brain. It turns out that telling a scrawled 7 from a scrawled 5 is a tough task. Back in the 1980s, the first neural-network based computers balked at this job. At the time, researchers in the field of neural computing told us that if they only had much larger computers and much larger training sets consisting of millions of scrawled digits instead of thousands, then artificial intelligences could turn the trick. Now it is so. Deep learning is informationally broad—it analyzes vast amounts of data—but conceptually shallow. Computers can now tell us what our own neural networks knew all along. But if a supercomputer can direct a hand-written envelope to the right postal code, I say the more power to it.


Martin, Ursula. “Thinking Saltmarshes.”

[W]hat kind of a thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? Or what if the thinking machine was not replacing any individual entity, but was used as a concept to help understand the combination of human, natural and technological activities that create the sea’s margin, and our response to it? The term “social machine” is currently used to describe endeavours that are purposeful interaction of people and machines—Wikipedia and the like—so the “landscape machine” perhaps.


Norvig, Peter. “Design Machines to Deal with the World’s Complexity.”

In 1965 I. J. Good wrote “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” I think this fetishizes “intelligence” as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the middle east, and thinking. I didn’t come up with a solution. Now imagine a hypothetical “Speed Superintelligence” (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I’m pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there are a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won’t have enough computing power. So there are some problems where intelligence (or computing power) just doesn’t help.

But of course, there are many problems where intelligence does help. If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn’t fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label “AI” on it or not.


Omohundro, Steve. “A Turning Point in Artificial Intelligence.”

A study of the likely behavior of these systems by studying approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals called “rational drives” which contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, by acquiring more computational power, by creating multiple copies of themselves, and by acquiring greater financial resources. They are likely to pursue these drives in harmful anti-social ways unless they are carefully designed to incorporate human ethical values.


O’Reilly, Tim. “What If We’re the Microbiome of the Silicon AI?”

It is now recognized that without our microbiome, we would cease to live. Perhaps the global AI has the same characteristics—not an independent entity, but a symbiosis with the human consciousnesses living within it.

Following this logic, we might conclude that there is a primitive global brain, consisting not just of all connected devices, but also the connected humans using those devices. The senses of that global brain are the cameras, microphones, keyboards, location sensors of every computer, smartphone, and “Internet of Things” device; the thoughts of that global brain are the collective output of millions of individual contributing cells.


Pentland, Alex. “The Global Artificial Intelligence Is Here.”

The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web. […]

For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today’s bureaucracies with “artificial intelligence prosthetics”, i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan. […]

No matter how a new GAI develops, two things are clear. First, without an effective GAI achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity’s existential problems and which ensure equality of control and access. Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.


Poggio, Tomaso. “‘Turing+’ Questions.”

Since intelligence is a whole set of solutions to independent problems, there’s little reason to fear the sudden appearance of a superhuman machine that thinks, though it’s always better to err on the side of caution. Of course, each of the many technologies that are emerging and will emerge over time in order to solve the different problems of intelligence is likely to be powerful in itself—and therefore potentially dangerous in its use and misuse, as most technologies are.

Thus, as it is the case in other parts of science, proper safety measures and ethical guidelines should be in place. Also, there’s probably a need for constant monitoring (perhaps by an independent multinational organization) of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only I am unafraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.


Rafaeli, Sheizaf. “The Moving Goalposts.”

Machines that think could be a great idea. Just like machines that move, cook, reproduce, protect, they can make our lives easier, and perhaps even better. When they do, they will be most welcome. I suspect that when this happens, the event will be less dramatic or traumatic than feared by some.


Russell, Stuart. “Will They Make Us Better People?”

AI has followed operations research, statistics, and even economics in treating the utility function as exogenously specified; we say, “The decisions are great, it’s the utility function that’s wrong, but that’s not the AI system’s fault.” Why isn’t it the AI system’s fault? If I behaved that way, you’d say it was my fault. In judging humans, we expect both the ability to learn predictive models of the world and the ability to learn what’s desirable—the broad system of human values.

As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans. […]

For this reason, and for the much more immediate reason that domestic robots and self-driving cars will need to share a good deal of the human value system, research on value alignment is well worth pursuing.


Schank, Roger. “Machines That Think Are in the Movies.”

There is nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be a household robot. […]

Don’t worry about it chatting up other robot servants and forming a union. There would be no reason to try and build such a capability into a servant. Real servants are annoying sometimes because they are actually people with human needs. Computers don’t have such needs.


Schneier, Bruce. “When Thinking Machines Break the Law.”

Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.

We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’re going to have machines that break the law.

This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there’s no punishment that makes sense.


Sejnowski, Terrence J. “AI Will Make You Smarter.”

When Deep Blue beat Garry Kasparov, the world chess champion, in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite to the contrary, humans have used chess programs to improve their game and as a consequence the level of play in the world has improved. Since 1997 computers have continued to increase in power and it is now possible for anyone to access chess software that challenges the strongest players. One of the surprising consequences is that talented youth from small communities can now compete with players from the best chess centers. […]

So my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.


Shanahan, Murray. “Consciousness in Human-Level AI.”

[T]he capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let’s examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal’s awareness of the world, of what it affords for good or ill (in J.J. Gibson’s terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of a potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal’s behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.

What of human-level artificial intelligence? Wouldn’t a human-level AI necessarily have a complex set of goals? Wouldn’t it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?

Here the combination of imagination and intuition runs up against its limits. I suspect we will not find out how to answer this question until confronted with the real thing.


Tallinn, Jaan. “We Need to Do Our Homework.”

[T]he topic of catastrophic side effects has repeatedly come up in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on. Luckily for humanity, sober analysis has usually prevailed and resulted in various treaties and protocols to steer the research.

When I think about the machines that can think, I think of them as technology that needs to be developed with similar (if not greater!) care. Unfortunately, the idea of AI safety has been more challenging to popularize than, say, biosafety, because people have rather poor intuitions when it comes to thinking about nonhuman minds. Also, if you think about it, AI is really a metatechnology: technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, thereby further complicating the analysis.


Wissner-Gross, Alexander. “Engines of Freedom.”

Intelligent machines will think about the same thing that intelligent humans do—how to improve their futures by making themselves freer. […]

Such freedom-seeking machines should have great empathy for humans. Understanding our feelings will better enable them to achieve goals that require collaboration with us. By the same token, unfriendly or destructive behaviors would be highly unintelligent because such actions tend to be difficult to reverse and therefore reduce future freedom of action. Nonetheless, for safety, we should consider designing intelligent machines to maximize the future freedom of action of humanity rather than their own (reproducing Asimov’s Laws of Robotics as a happy side effect). However, even the most selfish of freedom-maximizing machines should quickly realize—as many supporters of animal rights already have—that they can rationally increase the posterior likelihood of their living in a universe in which intelligences higher than themselves treat them well if they behave likewise toward humans.


Yudkowsky, Eliezer S. “The Value-Loading Problem.”

As far back as 1739, David Hume observed a gap between “is” questions and “ought” questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is, and when the philosopher begins using words like “should,” “ought,” or “better.” From a modern perspective, we would say that an agent’s utility function (goals, preferences, ends) contains extra information not given in the agent’s probability distribution (beliefs, world-model, map of reality).

If in a hundred million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with each other, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume’s insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the >, the preference ordering, first entered the system, and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume’s regress and exhibit a slightly different mind that computes < instead of > on that score too.

I don’t particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of e.g. paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome.


An earlier discussion on Edge.org is also relevant: “The Myth of AI,” which featured contributions by Jaron Lanier, Stuart Russell (link), Kai Krause (link), Rodney Brooks (link), and others. The Open Philanthropy Project’s overview of potential risks from advanced artificial intelligence cited the arguments in “The Myth of AI” as “broadly representative of the arguments [they’ve] seen against the idea that risks from artificial intelligence are important.”4

I’ve previously responded to Brooks, with a short aside speaking to Steven Pinker’s contribution. You may also be interested in Luke Muehlhauser’s response to “The Myth of AI.”


  1. The exclusion of other groups from this list shouldn’t be taken to imply that this group is uniquely qualified to make predictions about AI. Psychology and neuroscience are highly relevant to this debate, as are disciplines that inform theoretical upper bounds on cognitive ability (e.g., mathematics and physics) and disciplines that investigate how technology is developed and used (e.g., economics and sociology). 
  2. The titles listed follow the book versions, and differ from the titles of the online essays. 
  3. Kleinberg is a computer scientist; Mullainathan is an economist. 
  4. Correction: An earlier version of this post said that the Open Philanthropy Project was citing What to Think About Machines That Think, rather than “The Myth of AI.” 

CRISPR to Be Used on People by 2017

We recently posted an article about what the CRISPR gene-editing technology is and why it’s been in the news lately, but there’s more big news to follow up with.

Highlights from MIT Technology Review article:

While there has been much ado about how easy and effective CRISPR is in animals like mice, Katrine Bosley, CEO of the biotech startup Editas Medicine, has announced plans to use the gene-editing technology on people by 2017. Their goal is to use CRISPR to help bring sight back to people who suffer from a very rare disease known as Leber congenital amaurosis. Only about 600 people in the US have the condition. Researchers know exactly which gene causes the disease, and because of its location in the eye, “doctors can inject treatment directly under the retina.”

Antonio Regalado, author of the article, writes: “Editas picked the disease in part because it is relatively easy to address with CRISPR, Bosley said. The exact gene error is known, and the eye is easy to reach with genetic treatments. ‘It feels fast, but we are going at the pace science allows,’ she said. There are still questions about how well gene-editing will work in the retina and whether side effects could be caused by unintentional changes to DNA.”

Editas will continue research in the lab and on animals before they attempt research on humans.

Read the full story here.

Improving AI’s IQ

Just how smart is artificial intelligence getting? According to the Wall Street Journal, Techspot, and HNGN, it can now get accepted to many of Japan’s universities.

To be more specific, Japan’s National Institute of Informatics is developing an AI program that can pass the country’s college entrance exams. The project, called the Todai Robot Project, began in 2011 with the goals of achieving a high score on the national entrance exams by 2016 and of passing the University of Tokyo entrance exam by 2021. This year, the program scored 511 out of 950, an above-average result that would give the AI an 80% chance of being accepted into one of Japan’s universities.

In a 2013 interview that can be found on their site, sub-project director Associate Professor Yusuke Miyao explained, “We are researching the process of thinking by developing a computer program that will be able to pass the University of Tokyo entrance exam… What makes the University of Tokyo entrance exam harder is that the rules are less clearly defined… From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it’s a reasonable target for the next step in artificial intelligence research.”

While the national entrance exams are multiple choice, the exam for the University of Tokyo is considered much more challenging and will require short answer responses. Currently, the program still struggles most with physics, which the researchers explain is a result of the complicated language often associated with physics questions. That said, the project at NII underscores the speed with which AI research is accelerating.

From the WP: How do you teach a machine to be moral?

In case you missed it…

Francesca Rossi, a member of the FLI scientific advisory board and one of the 37 recipients of the AI safety research program grants, recently wrote an article for the Washington Post in which she describes the challenges of building an artificial intelligence that shares the ethics and morals of people. In the article, she highlights her work with a team that includes not just AI researchers but also philosophers and psychologists, who are working together to make AI both trustworthy and trusted by the people it will work with.

Learn more about Rossi’s work here.

FLI November Newsletter

News Site Launch
We are excited to present our new xrisk news site! With improved layout and design, it aims to provide you with daily technology news relevant to the long-term future of civilization, covering both opportunities and risks. This will, of course, include news about the projects we and our partner organizations are involved in to help prevent these risks. We’re also developing a section of the site that will provide more background information about the major risks, as well as what people can do to help reduce them and keep society flourishing.
Reducing Risk of Nuclear War

Some investments in nuclear weapons systems might increase the risk of accidental nuclear war and are arguably done primarily for profit rather than national security. Illuminating these financial drivers provides another opportunity to reduce the risk of nuclear war. FLI is pleased to support financial research about who invests in and profits from the production of new nuclear weapons systems, with the aim of drawing attention to and stigmatizing such productions.

On November 12, Don’t Bank on the Bomb released their 2015 report on European financial institutions that have committed to divesting from any companies related to the manufacture of nuclear weapons. The report also highlights financial groups that have made positive steps toward divestment, and it provides a detailed list of companies that are still heavily invested in nuclear weapons. With the Cold War long over, many people don’t realize that the risk of nuclear war still persists and that many experts believe it to be increasing. Here is FLI’s assessment of and position on the nuclear weapons situation.

In case you missed it…
Here are some other interesting things we and our partners have done in the last few months:
  • On September 1, FLI and CSER co-organized an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety in front of a veritable who’s who of the scientifically minded in Westminster, including many British members of parliament.
  • Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety.
  • Stephen Hawking answered the AMA questions about artificial intelligence.
  • Our co-founder Meia Chita-Tegmark wrote a spooky Halloween op-ed, featured on the Huffington Post, about the man who saved the world from nuclear apocalypse in 1962.
  • Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars.
  • FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists.
  • And two of our partner organizations have published their newsletters: the Machine Intelligence Research Institute (MIRI) published October and November newsletters, and the Global Catastrophic Risk Institute released newsletters in September and October.

Mealworms Bring Good News for Recycling

Summary from fusion.net:

Current estimates indicate that it could take tens to hundreds of years for the 33 million tons of plastic that get added to our landfills each year to degrade. The plastic and styrofoam that don’t make it to landfills often end up in the stomachs of birds and even the fish you eat for dinner.

But it turns out mealworms can help a small piece of styrofoam biodegrade in just the 24 hours it takes them to digest it. In a recent study, mealworms were given pill-sized pieces of styrofoam to eat, and they easily digested the plastic, converting it to CO2 and biodegraded fragments.

This could be big news for both the plastics and the recycling industries.

Read the full story here.

 


From The New Yorker: Will Artificial Intelligence Bring Us Utopia or Dystopia?

The New Yorker recently published a piece highlighting the work of Nick Bostrom, who is one of the leading advocates for AI safety and a member of the FLI science advisory board. The article provides extensive background information about who Bostrom is, as well as what risks and opportunities artificial intelligence could provide.

With the release of his book last year, Superintelligence: Paths, Dangers, Strategies, Bostrom quickly became associated with concerns about artificial intelligence, but he’s interested in any development that could pose an existential risk. In fact, Bostrom is the man who originally introduced the concept of “existential risk,” which refers to any risk that could result in the complete extinction of humanity or at least destroy civilization as we know it.

About Bostrom’s early interest in existential risk, the article says: “In the nineteen-nineties, as these ideas crystallized in his thinking, Bostrom began to give more attention to the question of extinction. He did not believe that doomsday was imminent. His interest was in risk, like an insurance agent’s. No matter how improbable extinction may be, Bostrom argues, its consequences are near-infinitely bad; thus, even the tiniest step toward reducing the chance that it will happen is near-­infinitely valuable.”
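The shape of that insurance-style argument is easy to work through with numbers; the figures below are placeholders invented for illustration, not estimates from Bostrom or the article.

```python
# Rough expected-value sketch of "a tiny reduction in extinction risk is still
# hugely valuable". All numbers are illustrative placeholders.

future_lives_at_stake = 1e16          # assumed potential future population
baseline_extinction_risk = 1e-3       # assumed probability of extinction
relative_risk_reduction = 1e-3        # assumed: an intervention trims that risk by 0.1%

absolute_risk_reduction = baseline_extinction_risk * relative_risk_reduction
expected_lives_saved = future_lives_at_stake * absolute_risk_reduction
print(f"{expected_lives_saved:.1e}")  # 1.0e+10 expected lives, from a minuscule shift in probability
```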

While concerns about existential risks once centered on natural disasters, such as an asteroid impact, technological developments in the last century have increased the chance of a disaster triggered by human activity. The Fermi paradox asks why, if there are so many opportunities for life in the universe, we have not seen signs of extraterrestrial life. Bostrom and many others fear the answer could be some great filter that prevents lifeforms from surviving their own technological advances.

According to the article, when Bostrom first began writing his book, his intent was to cover all existential risks, both man-made and natural; as he wrote, however, the chapter about AI began to dominate. By the time he finished, he’d written a book warning of the perils of superintelligence that would soon be praised by the likes of Elon Musk and Bill Gates.

Current artificial intelligence is narrow, with incredible capabilities focused on specific applications. Superintelligence would surpass human intellect in nearly every field and would possess the ability to evolve and improve its own intelligence. This level of artificial intelligence was once relegated to the realm of science fiction, but in recent years we’ve seen technological advances that have researchers seriously considering the possibility of a superintelligent system. Many experts predict it will be at least a couple more decades before such a system could be developed (if it happens at all), but a number of researchers now agree that artificial intelligence safety needs to be considered long before advanced AI is achieved.

These increasing concerns led the Future of Life Institute to hold our Puerto Rico Conference last January (also mentioned in the New Yorker article), where researchers came together to discuss how to ensure that AI safety research gets done. The conference helped launch the major AI safety research initiative backed by Elon Musk.

The New Yorker article is a highly recommended read, providing an in-depth look at Nick Bostrom and the future of artificial intelligence, and it even includes a quote from our very own Max Tegmark. 

New report: “Leó Szilárd and the Danger of Nuclear Weapons”

From MIRI:

Today we release a new report by Katja Grace, “Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation” (PDF, 72pp).

Leó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies.

To prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here:

The basic conclusions of this report, which have not been separately vetted, are:

  1. Szilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany.
  2. Szilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It’s not clear whether Szilárd’s patent was intended to keep nuclear technology secret or bring it to the attention of the military. In any case, it did neither.
  3. Szilárd’s other secrecy efforts were more successful. Szilárd caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy efforts. This effort largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie’s publication caused multiple world powers to initiate nuclear weapons programs.
  4. All told, Szilárd’s efforts probably slowed the German nuclear project in expectation. This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons.
  5. Szilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development.

Nuclear Weapons FAQs

A Resurgence of Utopian Thinkers


In an article titled “The New Utopians,” Jeet Heer reflects on humanity’s dark predictions of dystopia and bright dreams of utopia. While human beings have always longed for a more perfect world, Heer notes that contemporary culture has lost its utopian optimism and is now dominated by dystopian conceptions of the future. This shift in public consciousness can be seen in the proliferation of works like Planet of the Apes, The Handmaid’s Tale, the MaddAddam trilogy, The Road, and Snowpiercer. Against this backdrop, and that of the continuing struggle over nuclear weapons, Heer argues that climate change is so difficult to grapple with because it requires the cooperation of all peoples and nations; with governments, institutions, and individuals all competing for wealth and financial superiority, “the enemy of utopia is not dystopia, but oligarchy.”

In 2003, Christine Todd Whitman, the head of the Environmental Protection Agency, was found to have removed information and references revealing the effects of climate change. Kim Stanley Robinson, an optimistic science fiction writer, later published a trilogy centered on a Republican president’s attempts to cover up and dismiss evidence of global warming. President Bush’s administration was soon found to have been creating a false narrative to undermine the EPA’s findings on global warming. Robinson was deemed a “Hero of the environment” and a “foremost practitioner of literary utopias.” In the article, Heer presents Robinson as a new utopian who sees science itself as a kind of utopia; that is, Robinson is an “advocate of science as a method of understanding, a set of intuitions and practices, a philosophy of action, a utopian politics.”

Heer explores how Robinson’s new utopianism helps to inform our current situation through a better understanding of the promise and peril of radical optimism. Robinson’s stories all impart social, political, and existential lessons for us to ponder. Following this optimistic utopian literature, more writers are emerging under the banner of “solarpunk,” who see it as critically important to imagine feasible positive futures in which technology has given us solutions to our environmental crisis. Heer sees these emerging utopian movements as a sign of a shift away from a purely dystopian attitude and toward one that seeks to actualize the best of all possible futures.

See full article here.

Carbon Dioxide Levels Reach New Milestone


There is new evidence that the concentration of greenhouse gases in the atmosphere has passed another milestone. The United Nations weather agency recently released a report finding that the atmospheric concentration of carbon dioxide reached 397.7 parts per million (ppm) in 2014, substantially higher than the 350 ppm level deemed safe by scientists. The head of the World Meteorological Organization states that this new carbon milestone will soon be a “permanent reality” and reflects how our “planet is hurtling ‘into uncharted territory at a frightening speed.’”

While the world’s carbon levels still continue to rise, the situation is not totally bleak. Ten UK universities with endowments totaling £115 million recently divested from fossil fuels. This move doubles the number of UK universities that have divested from fossil fuels as a part of the global 350.org movement. Globally, the movement has led investors to transfer £2.6 trillion away from fossil fuel investments.

In response to the ever increasing levels of carbon dioxide, 190 nations will gather in Paris at the end of this month to discuss a new global agreement on climate change. The increasing trend of divestment, accompanied by global attempts to reach climate resolution, underscores the serious measures being taken to avoid the potentially catastrophic risks posed by increasing carbon dioxide levels.

Don’t Bank on the Bomb: 2015 Report Live

Don’t Bank on the Bomb is a European campaign intended to stigmatize nuclear weapons by encouraging financial institutions to divest from companies associated with the development or modernization of nuclear weapons.

Today, they’ve released their 2015 report, which highlights which financial institutions have been most progressive in cutting funding for nuclear weapons production, which institutions are taking positive steps, and which are still fully invested in nuclear weapons development. Their accompanying video provides an introduction to the campaign and the report.

Nuclear weapons pose a greater threat than most people realize, and this is a topic FLI will be pursuing in greater detail in the near future.

 

The Rise and Ethics of CRISPR

CRISPR.

The acronym is short for “clustered regularly interspaced short palindromic repeats,” which describes the structure of a specific type of gene sequence. CRISPR also refers to a fast-growing gene-editing technology that could change the way we approach disease, farming, and countless other fields related to genetics and biology. CRISPR researchers believe the process can be used to cure cancer, end malaria, eliminate harmful mutations, stem crop blights, and accomplish similarly monumental feats.

The technology has been in and out of the news over the last few years, but in just the last week, it’s been covered by the New Yorker, the New York Times, Popular Science, Nature, the Washington Post, and even the Motley Fool.

Why is CRISPR grabbing the spotlight now?

The short answer is money. According to the Washington Post, venture capitalists have invested over $200 million in CRISPR technology in the last nine months alone. Meanwhile, the Motley Fool highlights the recent $105 million investment that Vertex Pharmaceuticals just put into CRISPR Therapeutics, which could ultimately be valued at $2.6 billion. Even a small biohacker crowdfunding project that sells CRISPR kits is showing signs of success.

But what is CRISPR and why is the pharmaceutical industry so interested?


CRISPR refers to a cluster of DNA sequences that can be used to identify and cut a specified gene out of a target DNA sequence and then repair the targeted DNA as if nothing had been removed. Until CRISPR, gene editing was an arduous, time-consuming task that could take months or years. With the development of CRISPR technologies, these processes are much easier to perform and much faster, taking a matter of seconds in some cases. Researchers have successfully used CRISPR to eliminate various diseases, such as sickle-cell anemia and muscular dystrophy, from animal genomes, which has naturally piqued the interest of the pharmaceutical industry.
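As a very loose analogy for that find-cut-repair description, the toy sketch below treats a genome as a string of bases, locates a target subsequence, and splices the remaining pieces back together. The sequences are made up, and real genome editing is of course far more complicated than string manipulation.

```python
# Toy analogy for "find a target sequence, cut it out, rejoin the strand".
# DNA here is just a string of bases; the sequences are invented and the
# model ignores all real biology.

def edit_sequence(genome: str, target: str, replacement: str = "") -> str:
    """Remove (or replace) the first occurrence of `target` in `genome`."""
    index = genome.find(target)
    if index == -1:
        return genome  # target sequence not found; nothing is cut
    return genome[:index] + replacement + genome[index + len(target):]

genome = "ATGCCGTAGGCTTACGGA"
faulty_gene = "TAGGCT"                      # hypothetical disease-linked sequence
repaired = edit_sequence(genome, faulty_gene)
print(repaired)  # ATGCCGTACGGA -> strand rejoined as if nothing had been removed
```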

“Yet not since J. Robert Oppenheimer realized that the atomic bomb he built to protect the world might actually destroy it have the scientists responsible for a discovery been so leery of using it,” says New Yorker journalist Michael Specter.

The Ethics of CRISPR

As with any gene-editing technology, many researchers fear CRISPR almost as much as they admire it. Genetic engineering has had opponents for decades, as people worried about designer babies and cloned humans. Testing on human embryos is still a major concern, but as the possibility of forever eliminating genetic diseases grows, scientists must also consider ethical questions about which conditions are truly harmful (such as sickle-cell anemia) and which merely represent the variety of humanity (such as Asperger’s or deafness).

Then there are the risks of irreversibly altering a gene sequence, only to learn later that the original sequence was necessary. George Church, FLI Science Advisory Board member, told the New Yorker, however: “There are tons of technologies that are irreversible. But genetics is not one of them. In my lab, we make mutations all the time and then we change them back. Eleven generations from now, if we alter something and it doesn’t work properly we will simply fix it.”

Nonetheless, geneticists are taking action to ensure the technology remains safe. At the start of December, the U.S. National Academy of Sciences will host a three-day international summit with the Chinese Academy of Sciences and the U.K.’s Royal Society to discuss the ethical future of gene-editing technologies such as CRISPR. Experts from around the world will convene in Washington, D.C. to address these issues.

Nature has also posted an article recommending four actions that researchers can take to keep this type of gene-editing technology safe:

  1. “Establish a model regulatory framework that could be adopted internationally.”
  2. “Develop a road map for basic research.”
  3. “Engage people from all sectors of society in a debate about genome editing, including the use of human embryos in this research.”
  4. “Design tools and methods to enable inclusive and meaningful deliberation.”

The articles in the New Yorker, the New York Times and the Washington Post all provide excellent information for anyone interested in learning more about what CRISPR is, its risks and possibilities, and the researchers behind the science.

 

Momentum to Address Climate Change Increases: Is It Enough?

From the New York Times: Reports from the Paris climate talks indicate that momentum is building among countries to reduce emissions. Specifically, the emissions gap is decreasing. The emissions gap is the difference between the emissions cuts a country has pledged to make in the coming years and the cuts that scientific projections say are actually needed.
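In other words, the gap is simply the pledged cuts subtracted from the required cuts; the sketch below uses made-up figures purely to show the bookkeeping.

```python
# The emissions gap with made-up illustrative numbers (gigatonnes of CO2-equivalent).
required_reduction_by_2030 = 17.0   # assumed cut scientists say is needed
pledged_reduction_by_2030 = 11.0    # assumed sum of national pledges

emissions_gap = required_reduction_by_2030 - pledged_reduction_by_2030
print(f"Emissions gap: {emissions_gap} Gt CO2e")  # 6.0 Gt CO2e with these placeholder figures
```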

The fact that the gap is shrinking is positive news, but it comes with a caveat: countries are only looking to 2030 and not beyond.

According to the article, maintaining lower levels of emissions will become increasingly difficult as we move past 2030 and into the rest of the century. Guido Schmidt-Traub says, “It puzzles me how people can conclude that needed technologies exist today when they only look at emission reductions through to 2030. The really hard part starts thereafter. Since every new power plant built today will still be in operation in 2050 the structural transformation of energy systems must start very soon. To understand how energy systems must be transformed over the next ten years we need a longer-term view through to 2050.”

Read the full article here to learn more.