Why 2016 Was Actually a Year of Hope

Just about everyone found something to dislike about 2016, from wars to politics and celebrity deaths. But hidden within this year’s news feeds were some really exciting news stories. And some of them can even give us hope for the future.

Artificial Intelligence

Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.

AI and Health

Some of the most promising, and possibly most immediate, breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like and how it could improve the experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to design new drug compounds more quickly and effectively, with applications across a range of health needs.

More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.

AI and Society

Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with girls in STEM at Stanford to bridge the gender gap. Stanford researchers also published research suggesting that artificial intelligence could help us use satellite data to combat global poverty.

It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.

Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.

AI in Action

And of course there was AlphaGo. In January, Google’s DeepMind published a paper announcing that the company had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and indicate that artificial intelligence may be progressing more rapidly than many in the field realized.

And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news that came out about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advanced warning about days of especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manipulate things like its cooling systems.

And speaking of addressing climate change…

Climate Change

With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.

Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities, and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible global progress.

But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.

Biotechnology

Biotechnology, with its fears of designer babies and manmade pandemics, is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help save millions of people.

CRISPR

In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test CRISPR cancer treatment in patients. And CRISPR was also used to partially restore sight to blind animals.

Gene Drive

Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases such as malaria, dengue, and Lyme disease. Eliminating these diseases could save over a million lives every year.

Other Biotech News

The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.

Nuclear Weapons

If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest its $1 billion pension fund from any companies connected with nuclear weapons, which earned it an official commendation from the U.S. Conference of Mayors. Divestment may prove a useful tool for the general public to express displeasure with nuclear policy, and growing awareness of the nuclear weapons situation could help stigmatize the new nuclear arms race.

In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.

As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.

The United Nations Takes on Weapons

But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.

The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.

Looking Ahead

And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send a fleet of tiny space probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make all of these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.

Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.

AI Safety Highlights from NIPS 2016

This year’s Neural Information Processing Systems (NIPS) conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.

I was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:

Inverse Reinforcement Learning

Cooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russell, Abbeel, and Dragan (main conference). This paper addresses the value alignment problem by teaching the artificial agent about the human’s reward function, using instructive demonstrations rather than optimal demonstrations like in classical IRL (e.g. showing the robot how to make coffee vs having it observe coffee being made). (3-minute video)
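
For readers who want a slightly more formal picture, here is a rough sketch of the CIRL setup; the notation is mine and some details are simplified from the paper.

```latex
\[
\mathcal{M} \;=\; \big\langle\, S,\ \{A^H, A^R\},\ T,\ \Theta,\ R,\ P_0,\ \gamma \,\big\rangle ,
\qquad
\text{both players maximize } \ \mathbb{E}\Big[\textstyle\sum_{t}\gamma^{t}\, R\big(s_t, a^H_t, a^R_t;\ \theta\big)\Big],
\]
```

where $S$ is the set of world states, $A^H$ and $A^R$ are the human’s and robot’s actions, $T$ is the transition dynamics, and the reward parameter $\theta \in \Theta$ is observed by the human but not by the robot. Because the reward is shared, the human has an incentive to behave instructively and the robot has an incentive to infer $\theta$ from the human’s behavior.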

Generalizing Skills with Semi-Supervised Reinforcement Learning by Finn, Yu, Fu, Abbeel, and Levine (Deep RL workshop). This work addresses the scalable oversight problem by proposing the first tractable algorithm for semi-supervised RL. This allows artificial agents to robustly learn reward functions from limited human feedback. The algorithm uses an IRL-like approach to infer the reward function, using the agent’s own prior experiences in the supervised setting as an expert demonstration.
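
As a very rough illustration of the semi-supervised setting (this is not the authors’ algorithm, and the function and variable names below are hypothetical), one can think of it as fitting a reward model on the limited “labeled” experience where the true reward is available, then using that model to supply reward estimates during the much larger “unlabeled” portion of experience:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy sketch of semi-supervised RL reward inference (illustrative only).
# "Labeled" transitions come from the limited setting where the true reward
# is observable (e.g. with human feedback); "unlabeled" transitions do not.

def fit_reward_model(labeled_transitions):
    """Fit a model r_hat(s, a) from the small labeled dataset of (s, a, r)."""
    X = np.array([np.concatenate([s, a]) for s, a, _ in labeled_transitions])
    y = np.array([r for _, _, r in labeled_transitions])
    return RandomForestRegressor(n_estimators=50).fit(X, y)

def estimated_reward(model, state, action):
    """Use the learned model in place of the unobservable true reward."""
    return float(model.predict(np.concatenate([state, action])[None, :])[0])

# In the unlabeled phase, the RL agent would be trained against
# estimated_reward(...) instead of the environment's true reward signal.
```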

Towards Interactive Inverse Reinforcement Learning by Armstrong and Leike (Reliable ML workshop). This paper studies the incentives of an agent that is trying to learn about the reward function while simultaneously maximizing the reward. The authors discuss some ways to reduce the agent’s incentive to manipulate the reward learning process.

Should Robots Have Off Switches? by Milli, Hadfield-Menell, and Russell (Reliable ML workshop). This poster examines some adverse effects of incentivizing artificial agents to be compliant in the off-switch game (a variant of CIRL).

Safe Exploration

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes by Turchetta, Berkenkamp, and Krause (main conference). This paper develops a reinforcement learning algorithm called Safe MDP that can explore an unknown environment without getting into irreversible situations, unlike classical RL approaches.
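
A drastically simplified sketch of the general safe-exploration idea follows (this is not the SafeMDP algorithm itself, and all names are hypothetical): only expand exploration to states whose safety value, under a pessimistic confidence bound, clears a threshold, and from which the agent can still return to the already-verified safe region.

```python
# Highly simplified sketch of safe exploration (illustrative only).
# safety_lcb(s) is a pessimistic (lower confidence bound) estimate of a safety
# function at state s, e.g. from a Gaussian process fit to past observations.

def expand_safe_set(safe_set, candidates, safety_lcb, can_return, threshold):
    """Add candidate states that are (a) safe with high confidence and
    (b) allow the agent to get back to the existing safe set."""
    newly_safe = set()
    for s in candidates:
        if safety_lcb(s) >= threshold and can_return(s, safe_set):
            newly_safe.add(s)
    return safe_set | newly_safe

# Exploration then only ever visits states inside the (growing) safe set,
# so the agent avoids actions that could lead to irreversible situations.
```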

Combating Reinforcement Learning’s Sisyphean Curse with Intrinsic Fear by Lipton, Gao, Li, Chen, and Deng (Reliable ML workshop). This work addresses the ‘Sisyphean curse’ of DQN algorithms forgetting past experiences, as they become increasingly unlikely under a new policy, and therefore eventually repeating catastrophic mistakes. The paper introduces an approach called ‘intrinsic fear’, which maintains a model for how likely different states are to lead to a catastrophe within some number of steps.
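
The shaping mechanism can be sketched in a few lines; this is a paraphrase of the idea described above, with hypothetical names, not the authors’ code:

```python
# Sketch of "intrinsic fear" reward shaping (illustrative only).
# fear_model(state) estimates the probability that the state leads to a
# catastrophe within k steps; it would be trained as a classifier on states
# preceding observed catastrophes versus ordinary states.

def shaped_reward(reward, next_state, fear_model, fear_weight):
    """Penalize states the fear model considers dangerous."""
    return reward - fear_weight * fear_model(next_state)

# A DQN-style agent then learns from shaped_reward(...) instead of the raw
# reward, steering it away from near-catastrophe states even after the
# original catastrophic experiences have faded from its replay buffer.
```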

Most of these papers were related to inverse reinforcement learning – while IRL is a promising approach, it would be great to see more varied safety material at the next NIPS. There were some more safety papers on other topics at UAI this summer: Safely Interruptible Agents (formalizing what it means to incentivize an agent to obey shutdown signals) and A Formal Solution to the Grain of Truth Problem (providing a broad theoretical framework for multiple agents learning to predict each other in arbitrary computable games).

These highlights were originally posted here and cross-posted to Approximately Correct. Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.

MIRI December 2016 Newsletter

We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up why he’s donating to MIRI this year. (Donation page.)

Research updates

General updates

  • We teamed up with a number of AI safety researchers to help compile a list of recommended AI safety readings for the Center for Human-Compatible AI. See this page if you would like to get involved with CHCAI’s research.
  • Investment analyst Ben Hoskin reviews MIRI and other organizations involved in AI safety.

News and links

  • “The Off-Switch Game”: Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell show that an AI agent’s corrigibility is closely tied to the uncertainty it has about its utility function.
  • Russell and Allan Dafoe critique an inaccurate summary by Oren Etzioni of a new survey of AI experts on superintelligence.
  • Sam Harris interviews Russell on the basics of AI risk (video). See also Russell’s new Q&A on the future of AI.
  • Future of Life Institute co-founder Victoria Krakovna and FHI researcher Jan Leike join Google DeepMind’s safety team.
  • GoodAI sponsors a challenge to “accelerate the search for general artificial intelligence”.
  • OpenAI releases Universe, “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games”. Meanwhile, DeepMind has open-sourced their own platform for general AI research, DeepMind Lab.
  • Staff at GiveWell and the Centre for Effective Altruism, along with others in the effective altruism community, explain where they’re donating this year.
  • FHI is seeking AI safety interns, researchers, and admins: jobs page.

This newsletter was originally posted here.

Silo Busting in AI Research

Artificial intelligence may seem like a computer science project, but if it’s going to successfully integrate with society, then social scientists must be more involved.

Developing an intelligent machine is not merely a problem of modifying algorithms in a lab. These machines must be aligned with human values, and this requires a deep understanding of ethics and the social consequences of deploying intelligent machines.

Getting people with a variety of backgrounds together seems logical enough in theory, but in practice, what happens when computer scientists, AI developers, economists, philosophers, and psychologists try to discuss AI issues? Do any of them even speak the same language?

Social scientists and computer scientists will come at AI problems from very different directions. And if they collaborate, everybody wins. Social scientists can learn about the complex tools and algorithms used in computer science labs, and computer scientists can become more attuned to the social and ethical implications of advanced AI.

Through transdisciplinary learning, both fields will be better equipped to handle the challenges of developing AI, and society as a whole will be safer.

Silo Busting

Too often, researchers focus on their narrow area of expertise, rarely reaching out to experts in other fields to solve common problems. AI is no different, with thick walls – sometimes literally – separating the social sciences from the computer sciences. Breaking down these walls between research fields is often called silo-busting.

If AI researchers largely operate in silos, they may lose opportunities to learn from other perspectives and collaborate with potential colleagues. Scientists might miss gaps in their research or reproduce work already completed by others, because they were secluded away in their silo. This can significantly hamper the development of value-aligned AI.

To bust these silos, Wendell Wallach organized workshops to facilitate knowledge-sharing among leading computer and social scientists. Wallach, a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics, holds these workshops at The Hastings Center, where he is a senior advisor.

With co-chairs Gary Marchant, Stuart Russell, and Bart Selman, Wallach held the first workshop in April 2016. “The first workshop was very much about exposing people to what experts in all of these different fields were thinking about,” Wallach explains. “My intention was just to put all of these people in a room and hopefully they’d see that they weren’t all reinventing the wheel, and recognize that there were other people who were engaged in similar projects.”

The workshop intentionally brought together experts from a variety of viewpoints, including engineering ethics, philosophy, and resilience engineering, as well as participants from the Institute of Electrical and Electronics Engineers (IEEE), the Office of Naval Research, and the World Economic Forum (WEF). Wallach recounts, “some were very interested in how you implement sensitivity to moral considerations in AI computationally, and others were more interested in how AI changes the societal context.”

Other participants studied how the engineers of these systems may be susceptible to harmful cognitive biases and conflicts of interest, while still others focused on governance issues surrounding AI. Each of these viewpoints is necessary for developing beneficial AI, and The Hastings Center’s workshop gave participants the opportunity to learn from and teach each other.

But silo-busting is not easy. Wallach explains, “everybody has their own goals, their own projects, their own intentions, and it’s hard to hear someone say, ‘maybe you’re being a little naïve about this.’” When researchers operate exclusively in silos, “it’s almost impossible to understand how people outside of those silos did what they did,” he adds.

The intention of the first workshop was not to develop concrete strategies or proposals, but rather to open researchers’ minds to the broad challenges of developing AI with human values. “My suspicion is, the most valuable things that came out of this workshop would be hard to quantify,” Wallach clarifies. “It’s more like people’s minds were being stretched and opened. That was, for me, what this was primarily about.”

The workshop did yield some tangible results. For example, Marchant and Wallach introduced a pilot project for the international governance of AI, and nearly everyone at the workshop agreed to work on it. Since then, the IEEE, the International Committee of the Red Cross, the UN, the World Economic Forum, and other institutions have agreed to become active partners with The Hastings Center in building global infrastructure to ensure that AI and Robotics are beneficial.

This transdisciplinary cooperation is a promising sign that Wallach’s efforts are succeeding in strengthening the global response to AI challenges.

Value Alignment

Wallach and his co-chairs held a second workshop at the end of October. The participants were mostly scientists, but also included social theorists, a legal scholar, philosophers, and ethicists. The overall goal remained – to bust AI silos and facilitate transdisciplinary cooperation – but this workshop had a narrower focus.

“We made it more about value alignment and machine ethics,” he explains. “The tension in the room was between those who thought the problem [of value alignment] was imminently solvable and those who were deeply skeptical about solving the problem at all.”

In general, Wallach observed that “the social scientists and philosophers tend to overplay the difficulties [of creating AI with full value alignment] and computer scientists tend to underplay the difficulties.”

Wallach believes that while computer scientists will build the algorithms and utility functions for AI, they will need input from social scientists to ensure value alignment. “If a utility function represents 100,000 inputs, social theorists will help the AI researchers understand what those 100,000 inputs are,” he explains. “The AI researchers might be able to come up with 50,000-60,000 on their own, but they’re suddenly going to realize that people who have thought much more deeply about applied ethics are perhaps sensitive to things that they never considered.”

“I’m hoping that enough of [these researchers] learn each other’s language and how to communicate with each other, that they’ll recognize the value they can get from collaborating together,” he says. “I think I see evidence of that beginning to take place.”

Moving Forward

Developing value-aligned AI is a monumental task with existential risks. Experts from various perspectives must be willing to learn from each other and adapt their understanding of the issue.

In this spirit, The Hastings Center is leading the charge to bring the various AI silos together. After two successful events that resulted in promising partnerships, Wallach and his co-chairs will hold their third workshop in Spring 2018. And while these workshops are a small effort to facilitate transdisciplinary cooperation on AI, Wallach is hopeful.

“It’s a small group,” he admits, “but it’s people who are leaders in these various fields, so hopefully that permeates through the whole field, on both sides.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Artificial Intelligence and the King Midas Problem

Value alignment. It’s a phrase that often pops up in discussions about the safety and ethics of artificial intelligence. How can scientists create AI with goals and values that align with those of the people it interacts with?

Very simple robots with very constrained tasks do not need goals or values at all. Although the Roomba’s designers know you want a clean floor, Roomba doesn’t: it simply executes a procedure that the Roomba’s designers predict will work—most of the time. If your kitten leaves a messy pile on the carpet, Roomba will dutifully smear it all over the living room. If we keep programming smarter and smarter robots, then by the late 2020s, you may be able to ask your wonderful domestic robot to cook a tasty, high-protein dinner. But if you forgot to buy any meat, you may come home to a hot meal but find the aforementioned cat has mysteriously vanished. The robot, designed for chores, doesn’t understand that the sentimental value of the cat exceeds its nutritional value.

AI and King Midas

Stuart Russell, a renowned AI researcher, compares the challenge of defining a robot’s objective to the King Midas myth. “The robot,” says Russell, “has some objective and pursues it brilliantly to the destruction of mankind. And it’s because it’s the wrong objective. It’s the old King Midas problem.”

This is one of the big problems in AI safety that Russell is trying to solve. “We’ve got to get the right objective,” he explains, “and since we don’t seem to know how to program it, the right answer seems to be that the robot should learn – from interacting with and watching humans – what it is humans care about.”

Russell works from the assumption that the robot will solve whatever formal problem we define. Rather than assuming that the robot should optimize a given objective, Russell defines the problem as a two-player game (“game” as used by economists, meaning a decision problem with multiple agents) called cooperative inverse reinforcement learning (CIRL).

A CIRL game includes a person and a robot: the robot’s only purpose is to make the person happy, but it doesn’t know what the person wants. Fortunately, it can learn more about what the person wants by observing her behavior. For example, if a robot observed the human’s morning routine, it should discover how important coffee is—not to itself, of course (we don’t want robots drinking coffee), but to the human. Then, it will make coffee for the person without being asked.
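
As a toy illustration of the underlying idea (this is generic Bayesian preference inference, not the CIRL paper’s algorithm, and the names, numbers, and noisy-choice model are assumptions made for the sketch), the robot can maintain a distribution over candidate reward functions and update it as it watches the human act, assuming the human is more likely to choose higher-reward actions:

```python
import numpy as np

# Toy Bayesian preference inference (illustrative only).
# Candidate "reward functions" are labels theta; the human is modeled as
# noisily rational: P(action | theta) is proportional to exp(beta * reward).

def update_belief(belief, thetas, observed_action, actions, reward_fn, beta=5.0):
    """One Bayesian update after observing the human pick observed_action."""
    new_belief = np.array(belief, dtype=float)
    for i, theta in enumerate(thetas):
        utilities = np.array([reward_fn(a, theta) for a in actions])
        probs = np.exp(beta * utilities)
        probs /= probs.sum()
        new_belief[i] *= probs[actions.index(observed_action)]
    return new_belief / new_belief.sum()

# Example: does the human care about coffee?
actions = ["make_coffee", "skip_coffee"]
thetas = ["likes_coffee", "indifferent"]
reward_fn = lambda a, t: 1.0 if (t == "likes_coffee" and a == "make_coffee") else 0.0
belief = np.array([0.5, 0.5])
for _ in range(3):                    # watch three mornings of coffee-making
    belief = update_belief(belief, thetas, "make_coffee", actions, reward_fn)
print(dict(zip(thetas, belief.round(3))))   # belief shifts toward "likes_coffee"
```

After a few observed mornings, the robot’s belief concentrates on the hypothesis that the human values coffee, and it can act on that inference without ever being told explicitly.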

What about the example of King Midas, who thought he wanted as much gold as possible? How does the approach deal with someone stating their wishes incorrectly?

“We don’t have to know what we want. That’s the whole point. It’s hard to know what we want, and what we ask for is evidence for what we really want, but it’s not ground truth,” says Russell.

The robot should learn what people want by watching their actions. Then, if someone asks for something that’s technically incorrect, the robot will understand their underlying wish. Russell adds, “The robot—if it had been there—would have told Midas that he didn’t really want everything turned to gold, maybe just a few choice objects that he might point at from time to time and say the magic word.”

AI Off-Switch

Russell and his Berkeley colleagues also recently announced further progress toward ensuring safe AI, with a paper on ensuring an AI’s off-switch is always accessible. An off-switch is often suggested to guarantee a robot doesn’t harm humans – if the robot goes bad, just turn it off! But what if the AI anticipates that and disables its off switch?

Russell and his team propose solving this by building a level of uncertainty into the robot’s objective. They created another CIRL game in which the robot chooses either to announce its intentions, giving the human a chance to switch it off, or to plunge ahead with its plan, bypassing human oversight.

They showed that a robot with a fixed objective will take the second choice and disable its off-switch. But if there is uncertainty about the objective, the robot will reason that a human decision to switch the robot off would imply the announced intention is undesirable. Because its goal is to make the human happy, the robot accepts being switched off. The robot has a positive incentive not to disable the off-switch, and that incentive is directly tied to the robot’s uncertainty about the human’s true objective.
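
The core of that argument can be written as a one-line expected-utility comparison (a simplification of the paper’s analysis, with my own notation). Let $u$ be the value of the robot’s plan to the human, unknown to the robot, and assume a rational human switches the robot off exactly when $u < 0$:

```latex
\[
\underbrace{\mathbb{E}\big[\max(u,\,0)\big]}_{\text{announce and defer to the human}}
\;\;\ge\;\;
\max\big(\underbrace{\mathbb{E}[u]}_{\text{act, bypassing the human}},\ \underbrace{0}_{\text{switch itself off}}\big).
\]
```

Deferring lets the human filter out the bad outcomes, so it is never worse than acting unilaterally or shutting down, and it is strictly better whenever the robot is genuinely unsure whether $u$ is positive. If the robot is certain about $u$, the two sides are equal and the incentive to preserve the off-switch disappears.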

Ensuring AI Safety

In addition to his research, Russell is also one of the most vocal and active AI safety researchers concerned with ensuring a stronger public understanding of the potential issues surrounding AI development.

He recently co-authored a rebuttal to an article in the MIT Technology Review, which claimed that real AI scientists weren’t worried about the existential threat of AI. Russell and his co-author summed up why it’s better to be cautious and careful than just assume all will turn out for the best:

“Our experience with Chernobyl suggests it may be unwise to claim that a powerful technology entails no risks. It may also be unwise to claim that a powerful technology will never come to fruition. On September 11, 1933, Lord Rutherford, perhaps the world’s most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but “moonshine.” Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later. Surely it is better to anticipate human ingenuity than to underestimate it, better to acknowledge the risks than to deny them. … [T]he risk [of AI] arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives.”

This summer, Russell received a grant of over $5.5 million from the Open Philanthropy Project for a new research center, the Center for Human-Compatible Artificial Intelligence, in Berkeley. Among the primary objectives of the Center will be to study this problem of value alignment, to continue his efforts toward provably beneficial AI, and to ensure we don’t make the same mistakes as King Midas.

“Look,” he says, “if you were King Midas, would you want your robot to say, ‘Everything turns to gold? OK, boss, you got it.’ No! You’d want it to say, ‘Are you sure? Including your food, drink, and relatives? I’m pretty sure you wouldn’t like that. How about this: you point to something and say ‘Abracadabra Aurificio’ or something, and then I’ll turn it to gold, OK?’”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Effective Altruism and Existential Risks: a talk with Lucas Perry

What are the greatest problems of our time? And how can we best address them?

FLI’s Lucas Perry recently spoke at Duke University and Boston College to address these questions. Perry presented two major ideas in these talks – effective altruism and existential risk – and explained how they work together.

As Perry explained to his audiences, effective altruism is a movement in philanthropy that seeks to use evidence, analysis, and reason to take actions that will do the greatest good in the world. Since each person has limited resources, effective altruists argue it is essential to focus resources where they can do the most good. As such, effective altruists tend to focus on neglected, large-scale problems where their efforts can yield the greatest positive change.

Effective altruists focus on issues including poverty alleviation, animal suffering, and global health through various organizations. Nonprofits such as 80,000 Hours help people find jobs within effective altruism, and charity evaluators such as GiveWell investigate and rank the most effective ways to donate money. These groups and many others are all dedicated to using evidence to address neglected problems that cause, or threaten to cause, immense suffering.

Some of these neglected problems happen to be existential risks – they represent threats that could permanently and drastically harm intelligent life on Earth. Since existential risks, by definition, put our very existence at risk, and have the potential to create immense suffering, effective altruists consider these risks extremely important to address.

Perry explained to his audiences that the greatest existential risks arise due to humans’ ability to manipulate the world through technology. These risks include artificial intelligence, nuclear war, and synthetic biology. But Perry also cautioned that some of the greatest existential threats might remain unknown. As such, he and effective altruists believe the topic deserves more attention.

Perry learned about these issues while he was in college, which helped redirect his own career goals, and he wants to share this opportunity with other students. He explains, “In order for effective altruism to spread and the study of existential risks to be taken seriously, it’s critical that the next generation of thought leaders are in touch with their importance.”

College students often want to do more to address humanity’s greatest threats, but many students are unsure where to go. Perry hopes that learning about effective altruism and existential risks might give them direction. Realizing the urgency of existential risks and how underfunded they are – academics spend more time on the dung fly than on existential risks – can motivate students to use their education where it can make a difference.

As such, Perry’s talks are a small effort to open the field to students who want to help the world and also crave a sense of purpose. He provided concrete strategies to show students where they can be most effective, whether they choose to donate money, directly work with issues, do research, or advocate.

By understanding the intersection between effective altruism and existential risks, these students can do their part to ensure that humanity continues to prosper in the face of our greatest threats yet.

As Perry explains, “When we consider what existential risks represent for the future of intelligent life, it becomes clear that working to mitigate them is an essential part of being an effective altruist.”

Westworld Op-Ed: Are Conscious AI Dangerous?

“These violent delights have violent ends.”

With the help of Shakespeare and Michael Crichton, HBO’s Westworld has brought to light some of the concerns about creating advanced artificial intelligence.

If you haven’t seen it already, Westworld is a show in which human-like AI populate a park designed to look like America’s Wild West. Visitors spend huge amounts of money to visit the park and live out old west adventures, in which they can fight, rape, and kill the AI. Each time one of the robots “dies,” its body is cleaned up, its memory is wiped, and it starts a new iteration of its script.

The show’s season finale aired Sunday evening, and it certainly went out with a bang – but not to worry, there are no spoilers in this article.

AI Safety Issues in Westworld

Westworld was inspired by an old Crichton movie of the same name, and leave it to him — the writer of Jurassic Park — to create a storyline that would have us questioning the level of control we’ll be able to maintain over advanced scientific endeavors. But unlike the original movie, in which the robot is the bad guy, in the TV show, the robots are depicted as the most sympathetic and even the most human characters.

Not surprisingly, concerns about the safety of the park show up almost immediately. The park is overseen by one man who can make whatever program updates he wants without running them by anyone for a safety check. The robots show signs of remembering their mistreatment. One of the characters mentions that only one line of code keeps the robots from being able to harm humans.

These issues are just some of the problems the show touches on that present real AI safety concerns: A single “bad agent” who uses advanced AI to intentionally cause harm to people; small glitches in the software that turn deadly; and a lack of redundancy and robustness in the code to keep people safe.

But to really get your brain working, many of the safety and ethics issues that crop up during the show hinge on whether or not the robots are conscious. In fact, the show whole-heartedly delves into one of the hardest questions of all: what is consciousness? On top of that, can humans create a conscious being? If so, can we control it? Do we want to find out?

To consider these questions, I turned to Georgia Tech AI researcher Mark Riedl, whose research focuses on creating creative AI, and NYU philosopher David Chalmers, who’s most famous for his formulation of the “hard problem of consciousness.”

Can AI Feel Pain?

I spoke with Riedl first, asking him about the extent to which a robot would feel pain if it was so programmed. “First,” he said, “I do not condone violence against humans, animals, or anthropomorphized robots or AI.” He then explained that humans and animals feel pain as a warning signal to “avoid a particular stimulus.”

For robots, however, “the closest analogy might be what happens in reinforcement learning agents, which engage in trial-and-error learning.” The AI would receive a positive or negative reward for some action and it would adjust its future behavior accordingly. Rather than feeling like pain, Riedl suggests that the negative reward would be more “akin to losing points in a computer game.”
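
For concreteness, here is a minimal, textbook-style sketch of the trial-and-error mechanism Riedl is describing, tabular Q-learning with positive and negative rewards; it is a generic illustration, not a claim about how any particular robot is built:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent adjusts its future behavior
# based on positive or negative rewards, the "closest analogy" to pain above.

Q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    if random.random() < epsilon:                      # occasional exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # otherwise act greedily

def learn(state, action, reward, next_state, next_actions):
    """A negative reward lowers Q for that action, making it less likely later."""
    best_next = max(Q[(next_state, a)] for a in next_actions) if next_actions else 0.0
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```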

“Robots and AI can be programmed to ‘express’ pain in a human-like fashion,” says Riedl, “but it would be an illusion. There is one reason for creating this illusion: for the robot to communicate its internal state to humans in a way that is instantly understandable and invokes empathy.”

Riedl isn’t worried that the AI would feel real pain, and if the robot’s memory is completely erased each night, then he suggests it would be as though nothing happened. However, he does see one possible safety issue here. For reinforcement learning to work properly, the AI needs to take actions that optimize for the positive reward. If the robot’s memory isn’t completely erased — if the robot starts to remember the bad things that happened to it — then it could try to avoid those actions or people that trigger the negative reward.

“In theory,” says Riedl, “these agents can learn to plan ahead to reduce the possibility of receiving negative reward in the most cost-effective way possible. … If robots don’t understand the implications of their actions in terms other than reward gain or loss, this can also mean acting in advance to stop humans from harming them.”

Riedl points out, though, that for the foreseeable future, we do not have robots with sufficient capabilities to pose an immediate concern. But assuming these robots do arrive, problems with negative rewards could be potentially dangerous for the humans. (Possibly even more dangerous, as the show depicts, is if the robots do understand the implications of their actions against humans who have been mistreating them for decades.)

Can AI Be Conscious?

Chalmers sees things a bit differently. “The way I think about consciousness,” says Chalmers, “the way most people think about consciousness – there just doesn’t seem to be any question that these beings are conscious. … They’re presented as having fairly rich emotional lives – that’s presented as feeling pain and thinking thoughts. … They’re not just exhibiting reflexive behavior. They’re thinking about their situations. They’re reasoning.”

“Obviously, they’re sentient,” he adds.

Chalmers suggests that rather than trying to define what about the robots makes them conscious, we should consider what it is they’re lacking. Most notably, says Chalmers, they lack free will and memory. Yet many of us live in routines that we’re unable to break out of. And there have been numerous cases of people with extreme memory problems, but no one thinks that makes it okay to rape or kill them.

“If it is regarded as okay to mistreat the AIs on this show, is it because of some deficit they have or because of something else?” Chalmers asks.

The specific scenarios portrayed in Westworld may not be realistic, because Chalmers doesn’t believe the bicameral-mind theory is likely to lead to consciousness, even for robots. “I think it’s hopeless as a theory,” he says, “even of robot consciousness — or of robot self-consciousness, which seems more what’s intended. It would be so much easier just to program the robots to monitor their own thoughts directly.”

But this still presents risks. “If you had a situation that was as complex and as brain-like as these, would it also be so easily controllable?” asks Chalmers.

In any case, treating robots badly could easily pose a risk to human safety. We risk creating unconscious robots that learn the wrong lessons from negative feedback, or we risk inadvertently (or intentionally, as in the case of Westworld) creating conscious entities who will eventually fight back against their abuse and oppression.

When a host in episode two is asked if she’s “real,” she responds, “If you can’t tell, does it matter?”

These seem like the safest words to live by.

The Problem of Defining Autonomous Weapons

What, exactly, is an autonomous weapon? For the general public, the phrase is often used synonymously with killer robots and triggers images of the Terminator. But for the military, the definition of an autonomous weapons system, or AWS, is deceivingly simple.

The United States Department of Defense defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.  This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

Basically, it is a weapon that can be used in any domain — land, air, sea, space, cyber, or any combination thereof — and encompasses significantly more than just the platform that fires the munition. This means that there are various capabilities the system possesses, such as identifying targets, tracking, and firing, all of which may have varying levels of human interaction and input.

Heather Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, suggests that even the basic terminology of the DoD’s definition is unclear.

“This definition is problematic because we don’t really know what ‘select’ means here.  Is it ‘detect’ or ‘select’?” she asks. Roff also notes another definitional problem arises because, in many instances, the difference between an autonomous weapon (acting independently) and an automated weapon (pre-programmed to act automatically) is not clear.

A Database of Weapons Systems

State parties to the UN’s Convention on Conventional Weapons (CCW) also grapple with what constitutes an autonomous — rather than merely automated — weapon. During the last three years of discussion at Informal Meetings of Experts at the CCW, participants typically referred to only two or three presently deployed weapons systems that appear to be AWS, such as the Israeli Harpy or the United States’ Counter Rocket, Artillery, and Mortar system.

To address this, the International Committee of the Red Cross requested more data on presently deployed systems. It wanted to know which weapons systems states currently use and what projects are under development. Roff took up the call to action. She pored over publicly available data from a variety of sources and compiled a database of 284 weapons systems. She wanted to know what capacities already existed on presently deployed systems and whether these were or were not “autonomous.”

“The dataset looks at the top five weapons exporting countries, so that’s Russia, China, the United States, France and Germany,” says Roff. “I’m looking at major sales and major defense industry manufacturers from each country. And then I look at all the systems that are presently deployed by those countries that are manufactured by those top manufacturers, and I code them along a series of about 20 different variables.”

These variables include capabilities like navigation, homing, target identification, firing, etc., and for each variable, Roff coded a weapon as either having the capacity or not. Roff then created a series of three indices to bundle the various capabilities: self-mobility, self-direction, and self-determination. Self-mobility capabilities allow a system to move by itself, self-direction relates to target identification, and self-determination indexes the abilities that a system may possess in relation to goal setting, planning, and communication. Most “smart” weapons have high self-direction and self-mobility, but few, if any, have self-determination capabilities.
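
To make the coding scheme concrete, here is a hypothetical illustration of how a system might be represented (the specific variables, names, and example below are invented for illustration and are not Roff’s actual codebook): each system is coded along binary capability variables, which are then bundled into the three indices.

```python
from dataclasses import dataclass

# Hypothetical illustration of the coding scheme (not Roff's actual codebook).

@dataclass
class WeaponSystem:
    name: str
    navigation: bool = False
    homing: bool = False
    target_identification: bool = False
    target_prioritization: bool = False
    firing: bool = False
    planning: bool = False
    communication: bool = False

    @property
    def self_mobility(self) -> int:
        return sum([self.navigation, self.homing])

    @property
    def self_direction(self) -> int:
        return sum([self.target_identification, self.target_prioritization, self.firing])

    @property
    def self_determination(self) -> int:
        return sum([self.planning, self.communication])

# A "smart" munition might score high on mobility and direction but zero on
# determination, matching the pattern described above.
loitering_munition = WeaponSystem("hypothetical loitering munition",
                                  navigation=True, homing=True,
                                  target_identification=True, firing=True)
```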

As Roff explains in a recent Foreign Policy post, the data shows that “the emerging trend in autonomy has less to do with the hardware and more on the areas of communications and target identification. What we see is a push for better target identification capabilities, identification friend or foe (IFF), as well as learning.  Systems need to be able to adapt, to learn, and to change or update plans while deployed. In short, the systems need to be tasked with more things and vaguer tasks.” Thus newer systems will need greater self-determination capabilities.

The Human in the Loop

But understanding what the weapons systems can do is only one part of the equation. In most systems, humans still maintain varying levels of control, and the military often claims that a human will always be “in the loop.” That is, a human will always have some element of meaningful control over the system. But this leads to another definitional problem: just what is meaningful human control?

Roff argues that this idea of keeping a human “in the loop” isn’t just “unhelpful,” but that it may be “hindering our ability to think about what’s wrong with autonomous systems.” She references what the UK Ministry of Defence calls the Empty Hangar Problem: no one expects to walk into a military airplane hangar and discover that an autonomous plane has spontaneously decided, on its own, to go to war.

“That’s just not going to happen,” Roff says, “These systems are always going to be used by humans, and humans are going to decide to use them.” But thinking about humans in some loop, she contends, means that any difficulties with autonomy get pushed aside.

Earlier this year, Roff worked with Article 36, which coined the phrase “meaningful human control,” to establish a more clear-cut definition of the term. They published a concept paper, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, which offered guidelines for delegates at the 2016 CCW Meeting of Experts on Lethal Autonomous Weapons Systems.

In the paper, Roff and Richard Moyes outlined key elements – such as predictable, reliable, and transparent technology; accurate user information; a capacity for timely human action and intervention; and human control during attacks – for determining whether an AWS allows for meaningful human control.

“You can’t offload your moral obligation to a non-moral agent,” says Roff. “So that’s where I think our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.” The weapon system cannot do it for the human.

Researchers and the international community are only beginning to tackle the ethical issues that arise from AWSs. Clearly defining the weapons systems and the role humans will continue to play is one small part of a very big problem. Roff will continue to work with the international community to establish better-defined goals and guidelines.

“I’m hoping that the doctrine and the discussions that are developing internationally and through like-minded states will actually guide normative generation of how to use or not use such systems,” she says.

Heather Roff also spoke about this work on an FLI podcast.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

2300 Scientists from All Fifty States Pen Open Letter to Incoming Trump Administration

The following press release comes from the Union of Concerned Scientists.

Unfettered Science Essential to Decision Making; the Science Community Will Be Watching

WASHINGTON (November 30, 2016)—More than 2300 scientists from all fifty states, including 22 Nobel Prize recipients, released an open letter urging the Trump administration and Congress to set a high bar for integrity, transparency and independence in using science to inform federal policies. Some notable signers have advised Republican and Democratic presidents, from Richard Nixon to Barack Obama.

“Americans recognize that science is critical to improving our quality of life, and when science is ignored or politically corrupted, it’s the American people who suffer,” said physicist Lewis Branscomb, professor at the University of California, San Diego School of Global Policy and Strategy, who served as vice president and chief scientist at IBM and as director of the National Bureau of Standards under President Nixon. “Respect for science in policymaking should be a prerequisite for any cabinet position.”

The letter lays out several expectations from the science community for the Trump administration, including the appointment of a cabinet with a track record of supporting independent science and diversity, independence for federal science advisors, and sufficient funding for scientific data collection. It also outlines basic standards to ensure that federal policy is fully informed by the best available science.

For example, federal scientists should be able to: conduct their work without political or private-sector interference; freely communicate their findings to Congress, the public and their scientific peers; and expose and challenge misrepresentation, censorship or other abuses of science without fear of retaliation.

“A thriving federal scientific enterprise has enormous benefits to the public,” said Nobel Laureate Carol Greider, director of molecular biology and genetics at Johns Hopkins University. “Experts at federal agencies prevent the spread of diseases, ensure the safety of our food and water, protect consumers from harmful medical devices, and so much more. The new administration must ensure that federal agencies can continue to use science to serve the public interest.”

The letter also calls on the Trump administration and Congress to resist attempts to weaken the scientific foundation of laws such as the Clean Air Act and Endangered Species Act. Congress is expected to reintroduce several harmful legislative proposals—such as the REINS Act and the Secret Science Reform Act—that would increase political control over the ability of federal agency experts to use science to protect public health and the environment.

The signers encouraged their fellow scientists to engage with the executive and legislative branches, but also to monitor the activities of the White House and Congress closely. “Scientists will pay close attention to how the Trump administration governs, and are prepared to fight any attempts to undermine the role of science in protecting public health and the environment,” said James McCarthy, professor of biological oceanography at Harvard University and former president of the American Association for the Advancement of Science. “We will hold them to a high standard from day one.”

Complex AI Systems Explain Their Actions

In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings.

As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language rather than a string of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.

This human-robot interaction is what Manuela M. Veloso’s research is all about. Veloso, a professor at Carnegie Mellon University, has focused her research on CoBots, autonomous indoor mobile service robots that transport items, guide visitors to building locations, and traverse the halls and elevators. The CoBots have been navigating autonomously for several years now and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction.

“If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans,” Veloso says.

Communicating With CoBots

Veloso’s CoBots are capable of autonomous localization and navigation in the Gates-Hillman Center using WiFi, LIDAR, and/or a Kinect sensor (yes, the same type used for video games).

The robots navigate by detecting walls as planes, which they match to the known maps of the building. Other objects, including people, are detected as obstacles, so navigation is safe and robust. Overall, the CoBots are good navigators and are quite consistent in their motion. In fact, the team noticed the robots could wear down the carpet as they traveled the same path numerous times.
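
As a toy illustration of the flavor of this kind of map-based localization (a deliberate simplification, not CMU’s actual implementation; every name here is invented), one can score candidate poses by how well the observed wall points, transformed by that pose, line up with the walls of the known map:

```python
import math

# Toy 2D localization sketch (illustrative only): score a candidate pose by how
# closely observed wall points, transformed into map coordinates, fall onto the
# known walls of the building map.

def transform(point, pose):
    """Rotate and translate an observed (x, y) point by pose = (px, py, theta)."""
    x, y = point
    px, py, theta = pose
    return (px + x * math.cos(theta) - y * math.sin(theta),
            py + x * math.sin(theta) + y * math.cos(theta))

def distance_to_wall(point, wall):
    """Distance from a point to a wall segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = wall
    px, py = point
    dx, dy = x2 - x1, y2 - y1
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def pose_score(pose, observed_points, map_walls):
    """Lower is better: average distance from transformed points to the nearest wall."""
    dists = [min(distance_to_wall(transform(p, pose), w) for w in map_walls)
             for p in observed_points]
    return sum(dists) / len(dists)
```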

Because the robots are autonomous, and therefore capable of making their own decisions, they are out of sight for large amounts of time while they navigate the multi-floor buildings.

The research team began to wonder about this unaccounted time. How were the robots perceiving the environment and reaching their goals? How was the trip? What did they plan to do next?

“In the future, I think that incrementally we may want to query these systems on why they made some choices or why they are making some recommendations,” explains Veloso.

The research team is currently working on the question of why the CoBots took the route they did while autonomous. The team wanted to give the robots the ability to record their experiences and then transform the data about their routes into natural language. In this way, the bots could communicate with humans and reveal their choices and hopefully the rationale behind their decisions.

Levels of Explanation

The “internals” underlying the functions of any autonomous robot are based entirely on numerical computations, not natural language. The CoBot robots, for example, compute distances to walls and assign velocities to their motors to drive the motion to specific map coordinates.

Asking an autonomous robot for a non-numerical explanation is complex, says Veloso. Furthermore, the answer can be provided in many potential levels of detail.

“We define what we call the ‘verbalization space’ in which this translation into language can happen with different levels of detail, with different levels of locality, with different levels of specificity.”

For example, if a developer asks a robot to detail its journey, they might expect a lengthy retelling, with details that include battery levels. But a random visitor might just want to know how long it takes to get from one office to another.

Therefore, the research is not just about the translation from data to language, but also the acknowledgment that the robots need to explain things with more or less detail. If a human asks for more detail, the request triggers the CoBot to “move” to a more detailed point in the verbalization space.
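
A hypothetical sketch of what moving through such a verbalization space might look like in code follows; the levels, fields, and example route below are invented for illustration and are not Veloso’s implementation.

```python
# Hypothetical sketch of a "verbalization space": the same route log is
# summarized at different levels of detail for different audiences.

def verbalize(route, detail="summary"):
    start, goal = route["start"], route["goal"]
    if detail == "summary":                      # what a visitor might want
        return f"I went from {start} to {goal} in about {route['minutes']} minutes."
    if detail == "landmarks":                    # an intermediate level
        via = ", ".join(route["waypoints"])
        return f"I went from {start} to {goal} via {via}."
    if detail == "debug":                        # what a developer might want
        return (f"Route {start} -> {goal}: {route['distance_m']} m, "
                f"{route['minutes']} min, battery at {route['battery_pct']}%.")
    raise ValueError(f"unknown detail level: {detail}")

route = {"start": "office 7002", "goal": "the kitchen", "minutes": 3,
         "waypoints": ["the 7th-floor elevator", "the bridge"],
         "distance_m": 210, "battery_pct": 82}
print(verbalize(route, "summary"))
print(verbalize(route, "debug"))
```

A request for more detail then simply selects a different point in this space, which is the behavior described above.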

“We are trying to understand how to empower the robots to be more trustable through these explanations, as they attend to what the humans want to know,” says Veloso. The ability to generate explanations, in particular at multiple levels of detail, will be especially important in the future, as the AI systems will work with more complex decisions. Humans could have a more difficult time inferring the AI’s reasoning. Therefore, the bot will need to be more transparent.

For example, if you go to a doctor’s office and the AI there makes a recommendation about your health, you may want to know why it came to this decision, or why it recommended one medication over another.

Currently, Veloso’s research focuses on getting the robots to generate these explanations in plain language. The next step will be to have the robots incorporate natural language when humans provide them with feedback. “[The CoBot] could say, ‘I came from that way,’ and you could say, ‘well next time, please come through the other way,’” explains Veloso.

These sorts of corrections could be programmed into the code, but Veloso believes that “trustability” in AI systems will benefit from our ability to dialogue with, query, and correct their autonomy. She and her team aim to contribute to a multi-robot, multi-human symbiotic relationship, in which robots and humans coordinate and cooperate according to their limitations and strengths.

“What we’re working on is to really empower people – a random person who meets a robot – to still be able to ask things about the robot in natural language,” she says.

In the future, when we have more and more AI systems that are able to perceive the world, make decisions, and support human decision-making, the ability to engage in these types of conversations will be essential.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

MIRI’S November 2016 Newsletter

Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.

Since we don’t have an official end-of-the-year fundraiser planned this time around, we’ll be relying more on word-of-mouth to reach new donors. To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!

Research updates

General updates

News and links

Insight From the Dalai Lama Applied to AI Ethics

One of the primary objectives — if not the primary objective — of artificial intelligence is to improve life for all people. But an equally powerful motivator to create AI is to improve profits. These two goals can occasionally be at odds with each other.

Currently, with AI becoming smarter and automation becoming more efficient, many in AI and government are worried about mass unemployment. But the results of mass unemployment may be even worse than most people suspect. A study released last year found that 1 in 5 people who committed suicide were unemployed. Another study found significant increases in suicide rates during recessions and the Great Depression.

A commonly suggested solution to mass unemployment is a universal basic income (UBI), which would ensure that everyone receives at least some baseline income. However, a UBI would not address the non-financial downsides of unemployment.

A recent op-ed co-authored by the Dalai Lama for the New York Times suggests that he doesn’t believe money alone would lift the spirits of the unemployed.

He explains, “Americans who prioritize doing good for others are almost twice as likely to say they are very happy about their lives. In Germany, people who seek to serve society are five times likelier to say they are very happy than those who do not view service as important. … The more we are one with the rest of humanity, the better we feel.”

But, he continues, “In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. This speaks to a broader human truth: We all need to be needed.”

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

“Leaders need to recognize that a compassionate society must create a wealth of opportunities for meaningful work, so that everyone who is capable of contributing can do so,” says the Dalai Lama.

Yet, presumably, the senior citizens mentioned above were retired, and some of them still felt needed. Perhaps those who thrived in retirement volunteered their time, or perhaps they focused on relationships and social interactions. Maybe they achieved that feeling of being needed through some other means altogether.

More research is needed, but understanding how people without jobs find meaning in their lives will likely be necessary for a successful transition toward beneficial AI.

The Dalai Lama remains hopeful, suggesting that recognizing and addressing the need to be needed could have great benefits for society:

“[Society’s] refusal to be content with physical and material security actually reveals something beautiful: a universal human hunger to be needed. Let us work together to build a society that feeds this hunger.”

The Historic UN Vote On Banning Nuclear Weapons

By Joe Cirincione

History was made at the United Nations today. For the first time in its 71 years, the global body voted to begin negotiations on a treaty to ban nuclear weapons.

Eight nations with nuclear arms (the United States, Russia, China, France, the United Kingdom, India, Pakistan, and Israel) opposed or abstained from the resolution, while North Korea voted yes. However, with a vote of 123 for, 38 against, and 16 abstaining, the General Assembly’s First Committee decided “to convene in 2017 a United Nations conference to negotiate a legally binding instrument to prohibit nuclear weapons, leading towards their total elimination.”

The resolution effort, led by Mexico, Austria, Brazil, Ireland, Nigeria, and South Africa, was joined by scores of others.

“There comes a time when choices have to be made and this is one of those times,” said Helena Nolan, Ireland’s director of Disarmament and Non-Proliferation. “Given the clear risks associated with the continued existence of nuclear weapons, this is now a choice between responsibility and irresponsibility. Governance requires accountability and governance requires leadership.”

The Obama Administration was in fierce opposition, lobbying all nations, particularly its allies, to vote no. “How can a state that relies on nuclear weapons for its security possibly join a negotiation meant to stigmatize and eliminate them?” argued Ambassador Robert Wood, the U.S. special representative to the UN Conference on Disarmament in Geneva. “The ban treaty runs the risk of undermining regional security.”

The U.S. opposition is a profound mistake. Ambassador Wood is a career foreign service officer and a good man who has worked hard for our country. But this position is indefensible.

Every president since Harry Truman has sought the elimination of nuclear weapons. Ronald Reagan famously said in his 1984 State of the Union:

“A nuclear war cannot be won and must never be fought. The only value in our two nations possessing nuclear weapons is to make sure they will never be used. But then would it not be better to do away with them entirely?”

In case there was any doubt as to his intentions, he affirmed in his second inaugural address that, “We seek the total elimination one day of nuclear weapons from the face of the Earth.”

President Barack Obama himself stigmatized these weapons, most recently in his speech in Hiroshima this May:

“The memory of the morning of Aug. 6, 1945, must never fade. That memory allows us to fight complacency. It fuels our moral imagination. It allows us to change,” he said. “We may not be able to eliminate man’s capacity to do evil, so nations and the alliances that we form must possess the means to defend ourselves. But among those nations like my own that hold nuclear stockpiles, we must have the courage to escape the logic of fear and pursue a world without them.”

The idea of a treaty to ban nuclear weapons is inspired by similar, successful treaties to ban biological weapons, chemical weapons, and landmines. All started with grave doubts. Many in the United States opposed these treaties. But when President Richard Nixon began the process to ban biological weapons and President George H.W. Bush began talks to ban chemical weapons, other nations rallied to their leadership. These agreements have not yet entirely eliminated these deadly arsenals (indeed, the United States is still not a party to the landmine treaty) but they stigmatized them, hugely increased the taboo against their use or possession, and convinced the majority of countries to destroy their stockpiles.

I am engaged in real, honest debates among nuclear security experts on the pros and cons of this ban treaty. Does it really matter if a hundred-plus countries sign a treaty to ban nuclear weapons but none of the countries with nuclear weapons join? Will this be a serious distraction from the hard work of stopping new, dangerous weapons systems, cutting nuclear budgets, or ratifying the nuclear test ban treaty?

The ban treaty idea did not originate in the United States, nor was it championed by many U.S. groups, nor is it within U.S. power to control the process. Indeed, this last point seems to be one of the major reasons the administration opposes the talks.

But this movement is gaining strength. Two years ago, I covered the last of the three conferences held on the humanitarian impact of nuclear weapons for Defense One. Whatever experts and officials thought about the goals of the effort, I said, “the Vienna conference signals the maturing of a new, significant current in the nuclear policy debate. Government policy makers would be wise to take this new factor into account.”

What began as sincere concerns about the horrendous humanitarian consequences of using nuclear weapons has now become a diplomatic process driving towards a new global accord. It is fueled less by ideology than by fear.

The movement reflects widespread fears that the world is moving closer to a nuclear catastrophe — and that the nuclear-armed powers are not serious about reducing these risks or their arsenals. If anything, these states are increasing the danger by pouring hundreds of billions of dollars into new Cold War nuclear weapons programs.

The fear in the United States that, if elected, Donald Trump would have unfettered control of thousands of nuclear weapons has rippled out from the domestic political debate and heightened these concerns abroad. Rising U.S.-Russian tensions, new NATO military deployments on the Russian border, a Russian aircraft carrier cruising through the Strait of Gibraltar, the shock of the Trump candidacy, and the realization (exposed by Trump’s loose talk of using nuclear weapons) that any U.S. leader can unleash a nuclear war with one command, without debate, deliberation, or restraint, have combined to convince many nations that dramatic action is needed before it is too late.

As journalist Bill Press said as we discussed these developments on his show, “He scared the hell out of them.”

There is still time for the United States to shift gears. We should not squander the opportunity to join a process already in motion and to help guide it to a productive outcome. It is a Washington trope that you cannot defeat something with nothing. Right now, the US has nothing positive to offer. The disarmament process is dead and this lack of progress undermines global support for the Non-Proliferation Treaty and broader efforts to stop the spread of nuclear weapons.

The new presidential administration must make a determined effort to mount new initiatives that reduce these weapons and the risks they pose. It should also support the ban treaty process as a powerful way to build global support for a long-standing American national security goal. We must, as President John F. Kennedy said, eliminate these weapons before they eliminate us.

This article was originally posted on the Huffington Post.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

The Ethical Questions Behind Artificial Intelligence

What do philosophers and ethicists worry about when they consider the long-term future of artificial intelligence? Well, to start, though most people involved in the field of artificial intelligence are excited about its development, many worry that without proper planning an advanced AI could destroy all of humanity.

And no, this does not mean they’re worried about Skynet.

At a recent NYU conference, the Ethics of Artificial Intelligence, Eliezer Yudkowsky from the Machine Intelligence Research Institute explained that AI run amok was less likely to look like the Terminator and more likely to resemble the overeager broom that Mickey Mouse brings to life in the “Sorcerer’s Apprentice” segment of Fantasia. The broom has a single goal, and not only does it stay relentlessly focused regardless of what Mickey does, it multiplies itself and becomes ever more efficient. Concerns about a poorly designed AI are similar — except with artificial intelligence, there will be no sorcerer to stop the mayhem at the end.

To help visualize how an overly competent advanced AI could go wrong, Oxford philosopher Nick Bostrom came up with a thought experiment about a deadly paper-clip-making machine. If you are in the business of selling paper clips then making a paper-clip-maximizing artificial intelligence seems harmless enough. However, with this as its only goal, an intelligent AI might keep making paper clips at the expense of everything else you care about. When it runs out of materials, it will figure out how to break everything around it down to molecular components and reassemble the molecules into paper clips. Soon it will have destroyed life on earth, the earth itself, the solar system, and possibly even the universe — all in an unstoppable quest to build more and more paper clips.
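As a purely illustrative toy, not drawn from Bostrom's own writing, the sketch below shows the core of the worry: an optimizer whose score counts only paper clips has no reason to spare anything its objective doesn't mention. The resource names and conversion rates are invented.

```python
# A single-minded "maximizer": it converts every available resource into clips
# because nothing in its objective assigns value to anything else.

world = {"wire": 100, "office furniture": 40, "power grid": 60, "oceans": 1000}
CLIPS_PER_UNIT = {"wire": 10, "office furniture": 3, "power grid": 5, "oceans": 1}

def maximize_paperclips(resources):
    clips = 0
    # Greedily consume the most clip-dense resources first; nothing is off-limits,
    # because the objective never says "leave the oceans alone".
    for name in sorted(resources, key=lambda r: -CLIPS_PER_UNIT[r]):
        clips += resources[name] * CLIPS_PER_UNIT[name]
        resources[name] = 0
    return clips

print(maximize_paperclips(world))  # a large pile of clips, and an emptied world
```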

This might seem like a silly concern, but who hasn’t had some weird experience with a computer or other technology on the fritz? Consider the number of times you’ve sent bizarre messages thanks to autocorrect, or, more seriously, recall the Flash Crash of 2010. Now imagine how such a naively designed, yet very complex, program could be exponentially worse if that system were to manage the power grid or oversee weapons systems.

Even now, with only very narrow AI systems, researchers are discovering that simple biases lead to increases in racism and sexism in the tech world; that cyberattacks are growing in strength and numbers; and that a military AI arms race may be underway.

At the conference, Bostrom explained that there are two types of problems that AI development could encounter: the mistakes that can be fixed later on, and the mistakes that will only be made once. He’s worried about the latter. Yudkowsky also summarized this concern when he said, “AI … is difficult like space probes are difficult: Once you’ve launched it, it’s out there.”

AI researcher and philosopher Wendell Wallach added, “We are building technology that we can’t effectively test.”

As artificial intelligence gets closer to human-level intelligence, how can AI designers ensure their creations are ethical and behave appropriately from the start? It turns out this question only begets more questions.

What does beneficial AI look like? Will AI benefit all people or only some? Will it increase income inequality? What are the ethics behind creating an AI that can feel pain? Can a conscious AI be developed without a concrete definition of consciousness? What is the ultimate goal of artificial intelligence? Will it help businesses? Will it help people? Will AI make us happy?

“If we have no clue what we want, we’re less likely to get it,” said MIT physicist Max Tegmark.

Stephen Peterson, a philosopher from Niagara University, summed up the gist of all of the questions when he encouraged the audience to wonder not only what the “final goal” for artificial intelligence is, but also how to get there. Scrooge, whom Peterson used as an example, always wanted happiness: the ghosts of Christmases past, present, and future just helped him realize that friends and family would help him achieve this goal more than money would.

Facebook’s Director of AI Research, Yann LeCun, believes that such advanced artificial intelligence is still a very long way off. He compared the current state of AI development to a chocolate cake. “We know how to make the icing and the cherry,” he said, “but we have no idea how to make the cake.”

But if AI development is like baking a cake, it seems AI ethics will require the delicate balance and attention of perfecting a soufflé. And most participants at the two-day event agreed that the only way to ensure permanent AI mistakes aren’t made, regardless of when advanced AI is finally developed, is to start addressing ethical and safety concerns now.

This is not to say that the participants of the conference aren’t also excited about artificial intelligence. As mentioned above, they are. The number of lives that could be saved and improved as humans and artificial intelligence work together is tremendous. The key is to understand what problems could arise and what questions need to be answered so that AI is developed beneficially.

“When it comes to AI,” said University of Connecticut philosopher, Susan Schneider, “philosophy is a matter of life and death.”

OpenAI Unconference on Machine Learning

The following post originally appeared here.

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, which has several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and on transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversations, but also more auditory overload, at SOCML than at other conferences.

To my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.


AI safety discussion group (image courtesy of Been Kim)

One value alignment question was how to incorporate a diverse set of values that represents all of humanity in the AI’s objective function. We pointed out that there are two complementary problems: 1) getting the AI’s values to be in the small part of values-space that’s human-compatible, and 2) averaging over that space in a representative way. People generally focus on the ways in which human values differ from each other, which leads them to underestimate the difficulty of the first problem and overestimate the difficulty of the second. We also agreed on the importance of allowing for moral progress by not locking in the values of AI systems.

Nate mentioned some alternatives to goal-optimizing agents – quantilizers and approval-directed agents. We also discussed the limitations of using blacklisting/whitelisting in the AI’s objective function: blacklisting is vulnerable to unforeseen shortcuts and usually doesn’t work from a security perspective, while whitelisting hampers the system’s ability to come up with creative solutions (e.g. the controversial move 37 by AlphaGo in the second game against Lee Sedol).

Been Kim brought up the recent EU regulation on the right to explanation for algorithmic decisions. This seems easy to game due to the lack of good metrics for explanations. One proposed metric was that a human should be able to predict future model outputs from the explanation. This might fail for better-than-human systems by penalizing creative solutions if applied globally, but it seems promising as a local heuristic.
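One way to make that metric concrete is a "simulatability" score: how often a reader, given only the explanation, can predict what the model will output on new inputs. The sketch below is a hypothetical framing of that idea, with `predict_from_explanation` standing in for the human (or a proxy) making those guesses; none of these names come from the discussion itself.

```python
# Score an explanation by how well it lets a reader anticipate the model.

def simulatability_score(model, explanation, test_inputs, predict_from_explanation):
    """Fraction of test inputs on which the reader's guess matches the model's output."""
    hits = 0
    for x in test_inputs:
        predicted = predict_from_explanation(explanation, x)  # human or proxy guess
        actual = model(x)
        hits += int(predicted == actual)
    return hits / len(test_inputs)
```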

Ian Goodfellow mentioned the difficulties posed by adversarial examples: an imperceptible adversarial perturbation to an image can make a convolutional network misclassify it with very high confidence. There might be some kind of No Free Lunch theorem where making a system more resistant to adversarial examples would trade off with performance on non-adversarial data.
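Goodfellow's fast gradient sign method (FGSM) is the canonical recipe for constructing such perturbations. The PyTorch sketch below shows the basic idea, assuming a generic classifier `model`, a single input `image` of shape (1, C, H, W) with pixel values in [0, 1], and its true `label`; these names are placeholders, not code from the session.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` using FGSM.

    `model` maps an image batch to class logits; `label` is a LongTensor
    holding the true class index for the single image in the batch.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp back to the
    # valid pixel range so the change stays imperceptible for small epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```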

We also talked about dual-use AI technologies, e.g. advances in deep reinforcement learning for robotics that could end up being used for military purposes. It was unclear whether corporations or governments are more trustworthy with using these technologies ethically: corporations have a profit motive, while governments are more likely to weaponize the technology.


More detailed notes by Janos coming soon! For a detailed overview of technical AI safety research areas, I highly recommend reading Concrete Problems in AI Safety.

MIRI October 2016 Newsletter

The following newsletter was originally posted on MIRI’s website.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI’s 2016 fundraiser is also live, and runs through the end of October.

Research updates

General updates

  • We wrote up a more detailed fundraiser post for the Effective Altruism Forum, outlining our research methodology and the basic case for MIRI.
  • We’ll be running an “Ask MIRI Anything” on the EA Forum this Wednesday, Oct. 12.
  • The Open Philanthropy Project has awarded MIRI a one-year $500,000 grant to expand our research program. See also Holden Karnofsky’s account of how his views on EA and AI have changed.

News and links

Sam Harris TED Talk: Can We Build AI Without Losing Control Over It?

The threat of uncontrolled artificial intelligence, Sam Harris argues in a recently released TED Talk, is one of the most pressing issues of our time. Yet most people “seem unable to marshal an appropriate emotional response to the dangers that lie ahead.”

Harris, a neuroscientist, philosopher, and best-selling author, has thought a lot about this issue. In the talk, he clarifies that it’s not likely armies of malicious robots will wreak havoc on civilization like many movies and caricatures portray. He likens this machine-human relationship to the way humans treat ants. “We don’t hate [ants],” he explains, “but whenever their presence seriously conflicts with one of our goals … we annihilate them without a qualm. The concern is that we will one day build machines that, whether they are conscious or not, could treat us with similar disregard.”

Harris explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:

  1. Intelligence is a product of information processing in physical systems.
  2. We will continue to improve our intelligent machines.
  3. We do not stand on the peak of intelligence or anywhere near it.

Humans have already created systems with narrow intelligence that exceeds human intelligence (such as computers). And since mere matter can give rise to general intelligence (as in the human brain), there is nothing, in principle, preventing advanced general intelligence in machines, which are also made of matter.

But Harris says the third assumption is “the crucial insight” that “makes our situation so precarious.” If machines surpass human intelligence and can improve themselves, they will be more capable than even the smartest humans—in unimaginable ways.

Even if a machine is no smarter than a team of researchers at MIT, “electronic circuits function about a million times faster than biochemical ones,” Harris explains. “So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.”
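As a quick arithmetic aside (not part of the talk itself), the figure checks out:

$$1\ \text{week} \times 10^{6} = 10^{6}\ \text{weeks} \approx \frac{10^{6}}{52}\ \text{years} \approx 19{,}000\ \text{years},$$

which rounds to the 20,000 years Harris cites.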

Harris wonders, “how could we even understand, much less constrain, a mind making this sort of progress?”

Harris also worries that the power of superintelligent AI will be abused, furthering wealth inequality and increasing the risk of war. “This is a winner-take-all scenario,” he explains. Given the speed at which these machines can process information, “to be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.”

If governments and companies perceive themselves to be in an arms race against one another, they could develop strong incentives to create superintelligent AI first—or attack whoever is on the brink of creating it.

Though some researchers argue that superintelligent AI will not be created for another 50-100 years, Harris points out, “Fifty years is not that much time to meet one of the greatest challenges our species will ever face.”

Harris warns that if his three basic assumptions are correct, “then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.”

 

 Photo credit to Bret Hartman from TED. Illustration credit to Paul Lachine. You can see more of Paul’s illustrations at http://www.paullachine.com/index.php.

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants?

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that fossil fuel use was unsustainable. And like many of today’s environmentalists, he turned to nature for clues on developing renewable energy solutions, studying the chemistry of plants and their use of solar energy. He admired their unparalleled mastery of photochemical synthesis—the way they use light to synthesize energy from the most fundamental of substances—and how “they reverse the ordinary process of combustion.”

In photosynthesis, Ciamician realized, lay an entirely renewable process of energy conversion. When sunlight reaches the surface of a green leaf, it sets off a reaction inside the leaf. Chloroplasts, energized by the light, trigger the production of chemical products—essentially sugars—which store the energy so that the plant can access it later for its biological needs. The process is fully renewable: the plant harvests the immense and constant supply of solar energy, absorbs carbon dioxide and water, and releases oxygen. There is no other waste.

If scientists could learn to imitate photosynthesis by providing concentrated carbon dioxide and suitable catalyzers, they could create fuels from solar energy. Ciamician was taken by the seeming simplicity of this solution. Inspired by small successes in chemical manipulation of plants, he wondered, “does it not seem that, with well-adapted systems of cultivation and timely intervention, we may succeed in causing plants to produce, in quantities much larger than the normal ones, the substances which are useful to our modern life?”

In 1912, Ciamician sounded the alarm about the unsustainable use of fossil fuels and exhorted the scientific community to explore ways to artificially recreate photosynthesis. But little was done. A century later, however, in the midst of a climate crisis, and armed with improved technology and growing scientific knowledge, his vision finally saw a major breakthrough.

After more than ten years of research and experimentation, Peidong Yang, a chemist at UC Berkeley, successfully created the first photosynthetic biohybrid system (PBS) in April 2015. This first-generation PBS uses semiconductors and live bacteria to do the photosynthetic work that real leaves do—absorb solar energy and create a chemical product using water and carbon dioxide, while releasing oxygen—but it creates liquid fuels. The process is called artificial photosynthesis, and if the technology continues to improve, it may become the future of energy.

How Does This System Work?

Yang’s PBS can be thought of as a synthetic leaf. It is a one-square-inch tray that contains silicon semiconductors and living bacteria, an arrangement Yang calls a semiconductor-bacteria interface.

In order to initiate the process of artificial photosynthesis, Yang dips the tray of materials into water, pumps carbon dioxide into the water, and shines a solar light on it. As the semiconductors harvest solar energy, they generate charges to carry out reactions within the solution. The bacteria take electrons from the semiconductors and use them to transform, or reduce, carbon dioxide molecules and create liquid fuels. In the meantime, water is oxidized on the surface of another semiconductor to release oxygen. After several hours or several days of this process, the chemists can collect the product.
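In broad strokes, and simplifying the actual biochemistry, the two halves of the process can be written as standard electrochemical half-reactions; the acetate-producing step shown here is just one representative product pathway:

$$2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad \text{(water oxidation at the semiconductor)}$$

$$2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \rightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O} \quad \text{(carbon dioxide reduction by the bacteria)}$$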

With this first-generation system, Yang successfully produced butanol, acetate, polymers, and pharmaceutical precursors, fulfilling Ciamician’s once-far-fetched vision of imitating plants to create the fuels that we need. This PBS achieved a solar-to-chemical conversion efficiency of 0.38%, which is comparable to the conversion efficiency in a natural, green leaf.
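For readers curious what that 0.38% measures, here is a minimal sketch of the solar-to-chemical efficiency calculation in Python. The production rate, illumination, and tray area below are illustrative placeholders chosen to give a number of roughly that order, not measurements from Yang's lab.

```python
# Solar-to-chemical efficiency: fraction of incident solar power that ends up
# stored as chemical energy in the product.

def solar_to_chemical_efficiency(product_mol_per_s, energy_per_mol_j,
                                 irradiance_w_per_m2, area_m2):
    chemical_power_w = product_mol_per_s * energy_per_mol_j
    incident_power_w = irradiance_w_per_m2 * area_m2
    return chemical_power_w / incident_power_w

# Hypothetical example: a tiny butanol production rate on a one-square-inch
# (~6.45e-4 m^2) tray under ~1000 W/m^2 illumination, using butanol's heat of
# combustion (~2.67e6 J/mol) as the stored chemical energy.
eta = solar_to_chemical_efficiency(9.2e-10, 2.67e6, 1000.0, 6.45e-4)
print(f"{eta:.2%}")  # about 0.38% with these illustrative numbers
```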


A diagram of the first-generation artificial photosynthesis, with its four main steps.

Describing his research, Yang says, “Our system has the potential to fundamentally change the chemical and oil industry in that we can produce chemicals and fuels in a totally renewable way, rather than extracting them from deep below the ground.”

If Yang’s system can be successfully scaled up, businesses could build artificial forests that produce the fuel for our cars, planes, and power plants by following the same laws and processes that natural forests follow. Since artificial photosynthesis would absorb and reduce carbon dioxide in order to create fuels, we could continue to use liquid fuel without destroying the environment or warming the planet.

However, in order to ensure that artificial photosynthesis can reliably produce our fuels in the future, it has to be better than nature, as Ciamician foresaw. Our need for renewable energy is urgent, and Yang’s model must be able to provide energy on a global scale if it is to eventually replace fossil fuels.

Recent Developments in Yang’s Artificial Photosynthesis

Since the major breakthrough in April 2015, Yang has continued to improve his system in hopes of eventually producing fuels that are commercially viable, efficient, and durable.

In August 2015, Yang and his team tested his system with a different type of bacteria. The method is the same, except instead of electrons, the bacteria use molecular hydrogen from water molecules to reduce carbon dioxide and create methane, the primary component of natural gas. This process is projected to have an impressive conversion efficiency of 10%, which is much higher than the conversion efficiency in natural leaves.
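The overall conversion carried out by these methane-producing bacteria is, in simplified form, the familiar reaction in which hydrogen reduces carbon dioxide to methane:

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}$$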

A conversion efficiency of 10% could potentially be commercially viable, but since methane is a gas, it is more difficult to handle than liquid fuels such as butanol, which can be transferred through pipes. A new generation of the PBS still needs to be designed and assembled in order to achieve a solar-to-liquid-fuel efficiency above 10%.


A diagram of this second-generation PBS that produces methane.

In December 2015, Yang advanced his system further by making the remarkable discovery that certain bacteria could grow the semiconductors by themselves. This discovery collapsed the two-step process of growing the semiconductor nanowires and then culturing the bacteria among them into a single step. The improved semiconductor-bacteria interface could potentially be more efficient in producing acetate, as well as other chemicals and fuels, according to Yang. And in terms of scaling up, it has the greatest potential.


A diagram of this third-generation PBS that produces acetate.

In the past few weeks, Yang made yet another important breakthrough by elucidating the electron transfer mechanism at the semiconductor-bacteria interface. This sort of fundamental understanding of charge transfer at the interface will provide critical insights for designing the next-generation PBS with better efficiency and durability. He will be releasing the details of this breakthrough shortly.

Even as these breakthroughs and modifications continue, Yang clarifies, “the physics of the semiconductor-bacteria interface for the solar driven carbon dioxide reduction is now established.” As long as he has an effective semiconductor that absorbs solar energy and feeds electrons to the bacteria, the photosynthetic function will initiate, and the remarkable process of artificial photosynthesis will continue to produce liquid fuels.

Why This Solar Power Is Unique

Peter Forbes, a science writer and the author of Nanoscience: Giants of the Infinitesimal, admires Yang’s work in creating this system. He writes, “It’s a brilliant synthesis: semiconductors are the most efficient light harvesters, and biological systems are the best scavengers of CO2.”

Yang’s artificial photosynthesis relies only on solar energy, but it creates a more usable source of energy than solar panels, which are currently the most popular and commercially viable form of solar power. While the semiconductors in solar panels absorb solar energy and convert it into electricity, in artificial photosynthesis the semiconductors absorb solar energy and store it in “the carbon-carbon bond or the carbon-hydrogen bond of liquid fuels like methane or butanol.”

This difference is crucial. The electricity generated from solar panels simply cannot meet our diverse energy needs, but these renewable liquid fuels and natural gases can. Unlike solar panels, Yang’s PBS absorbs and breaks down carbon dioxide, releases oxygen, and creates a renewable fuel that can be collected and used. With artificial photosynthesis creating our fuels, driving cars and operating machinery becomes much less harmful. As Katherine Bourzac nicely puts it, “This is one of the best attempts yet to realize the simple equation: sun + water + carbon dioxide = sustainable fuel.”

The Future of Artificial Photosynthesis

Yang’s PBS has been advancing rapidly, but he still has work to do before the technology can be considered commercially viable. Despite encouraging conversion efficiencies, especially with methane, the PBS is not durable enough or cost-effective enough to be marketable.

In order to improve this system, Yang and his team are working to figure out how to replace bacteria with synthetic catalysts. So far, bacteria have proven to be the most efficient catalysts, and they also have high selectivity—that is, they can create a variety of useful compounds such as butanol, acetate, polymers and methane. But since bacteria live and die, they are less durable than a synthetic catalyst and less reliable if this technology is scaled up.

Yang has been testing PBSs with live bacteria and with synthetic catalysts in parallel systems in order to discover which type works best. “From the point of view of efficiency and selectivity of the final product, the bacteria approach is winning,” Yang says, “but if down the road we can find a synthetic catalyst that can produce methane and butanol with similar selectivity, then that is the ultimate solution.” Such a system would give us the ideal fuels and the most durable semiconductor-catalyst interface that can be reliably scaled up.

Another concern is that, unlike natural photosynthesis, artificial photosynthesis requires concentrated carbon dioxide to function. This is easy to do in the lab, but if artificial photosynthesis is scaled up, Yang will have to find a feasible way of supplying concentrated carbon dioxide to the PBS. Peter Forbes argues that Yang’s artificial photosynthesis could be “coupled with carbon-capture technology to pull CO2 from smokestack emissions and convert it into fuel.” If this could be done, artificial photosynthesis would contribute to a carbon-neutral future by consuming our carbon emissions and releasing oxygen. This is not the focus of Yang’s research, but it is an integral piece of the puzzle that other scientists must provide if artificial photosynthesis is to supply the fuels we need on a large scale.

When Giacomo Ciamician considered the future of artificial photosynthesis, he imagined a future of abundant energy where humans could master the “photochemical processes that hitherto have been the guarded secret of the plants…to make them bear even more abundant fruit than nature, for nature is not in a hurry and mankind is.” And while the rush was not apparent to scientists in 1912, it is clear now, in 2016.

Peidong Yang has already created a system of artificial photosynthesis that out-produces nature. If he continues to increase the efficiency and durability of his PBS, artificial photosynthesis could revolutionize our energy use and serve as a sustainable model for generations to come. As long as the sun shines, artificial photosynthesis can produce fuels and consume waste. And in this future of artificial photosynthesis, the world would be able to grow and use fuels freely, knowing that the same natural process that created them would recycle the carbon at the other end.

Yang shares this hope for the future. He explains, “Our vision of a cyborgian evolution—biology augmented with inorganic materials—may bring the PBS concept to full fruition, selectively combining the best of both worlds, and providing society with a renewable solution to solve the energy problem and mitigate climate change.”

If you would like to learn more about Peidong Yang’s research, please visit his website at http://nanowires.berkeley.edu/.

Elon Musk’s Plan to Colonize Mars

In an announcement to the International Astronautical Congress on Tuesday, Elon Musk unveiled his Interplanetary Transport System (ITS). His goal: enable humans to colonize Mars, building a city there within the next 50 to 100 years.

Speaking to an energetic crowd in Guadalajara, Mexico, Musk explained that the alternative to staying on Earth, which is at risk of a “doomsday event,” is to “become a spacefaring civilization and a multi-planet species.” As he told Aeon magazine in 2014, “I think there is a strong humanitarian argument for making life multi-planetary in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” Colonizing Mars, he believes, is one of our best options.

In his speech, Musk discussed the details of his transport system. The ITS, developed by SpaceX, would use the most powerful rocket ever built, and at 400 feet tall, it would also be the largest spaceflight system ever created. The spaceship would carry 100-200 people and would feature movie theaters, lecture halls, restaurants, and other amenities to make the approximately three-month journey enjoyable. “You’ll have a great time,” said Musk.

Musk explained four key issues that must be addressed to make colonization of Mars possible: the rockets need to be fully reusable, they need to be able to refuel in orbit, there must be a way to harness energy on Mars, and we must figure out more efficient ways of traveling. If SpaceX succeeds in meeting these requirements, the rockets could travel to Mars and return to Earth to pick up more colonists for the journey. Musk explained that the same rockets could be used up to a dozen times, bringing more and more people to colonize the Red Planet.

Despite his enthusiasm for the ITS, Musk was careful to acknowledge that there are still many difficulties and obstacles in reaching this goal. Currently, getting to Mars would require an investment of roughly $10 billion per person, far out of reach for almost everyone today. However, Musk thinks that reusable rocket technology could significantly decrease this cost. “If we can get the cost of moving to Mars to the cost of a median house price in the U.S., which is around $200,000, then I think the probability of establishing a self-sustaining civilization is very high,” Musk noted.

But this viability requires significant investment from both the government and the private sector. Musk explained, “I know there’s a lot of people in the private sector who are interested in helping fund a base on Mars and then perhaps there will be interest on the government sector side to also do that. Ultimately, this is going to be a huge public-private partnership.” This speech, and the attention it has garnered, could help make such investment and cooperation possible.

Many questions remain about how to sustain human life on Mars and whether or not SpaceX can make this technology viable, as even Musk admits. He explained, “This is a huge amount of risk, will cost a lot, and there’s a good chance we don’t succeed. But we’re going to try and do our best. […] What I really want to do here is to make Mars seem possible — make it seem as though it’s something that we could do in our lifetimes, and that you can go.”

Musk’s full speech can be found here.