Planning for Existential Hope

It may seem like we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine!

As we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

But first, a very quick look back…

We had a great year, and we’re pleased with all we were able to accomplish. Some of our bigger projects and successes include: the Lethal Autonomous Weapons Pledge; a new round of AI safety grants focusing on the beneficial development of AGI; the California State Legislature’s resolution in support of the Asilomar AI Principles; and our second Future of Life Award, which was presented posthumously to Stanislav Petrov and his family.

As we now look ahead and strive to work toward a better future, we, as a society, must first determine what that collective future should be. At FLI, we’re looking forward to working with global partners and thought leaders as we consider what “better futures” might look like and how we can work together to build them.

As FLI President Max Tegmark says, “There’s been so much focus on just making our tech powerful right now, because that makes money, and it’s cool, that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: help bring back focus [to] the steering of our technology and the destination.”

A recent Gizmodo article on why we need more utopian fiction also summed up the argument nicely: “Now, as we face a future filled with corruption, yet more conflict, and the looming doom of global warming, imagining our happy ending may be the first step to achieving it.”

Fortunately, there are already quite a few people who have begun considering how a conflicted world of 7.7 billion can unite to create a future that works for all of us. And for the FLI podcast in December, we spoke with six of them to talk about how we can start moving toward that better future.

The existential hope podcast includes interviews with FLI co-founders Max Tegmark and Anthony Aguirre, as well as existentialhope.com founder Allison Duettmann, Josh Clark, who hosts The End of the World with Josh Clark, futurist and researcher Anders Sandberg, and tech enthusiast and entrepreneur Gaia Dempsey. You can listen to the full podcast here, but we also wanted to call attention to some of their comments that most spoke to the idea of steering toward a better future:

Max Tegmark on the far future and the near future:

When I look really far into the future, I also look really far into space and I see this vast cosmos, which is 13.8 billion years old. And most of it is, despite what the UFO enthusiasts say, actually looking pretty dead and [like] wasted opportunities. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.

For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth.

Gaia Dempsey on how we can use a technique called world building to help envision a better future for everyone and get more voices involved in the discussion:

Worldbuilding is a really fascinating set of techniques. It’s a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series. And in more contemporary times, some spectacularly advanced worldbuilding is occurring now in the gaming industry. So [there are] these huge connected systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, world builders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It’s a collaborative design practice.

Ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future.

One of the things where I think worldbuilding is really good is that the practice itself does not impose a single monolithic narrative. It actually encourages a multiplicity of narratives and perspectives that can coexist.

Anthony Aguirre on how we can use technology to find solutions:

I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will into fruition. So, sort of by definition, when we have goals that we want to do — problems that we want to solve — technology should in principle be part of the solution.

So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them.

Allison Duettmann on why she created the website existentialhope.com:

I do think that it’s up to everyone, really, to try to engage with the fact that we may not be doomed, and what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there [are] a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on [how] to influence that. So I think we should try to map the space of both peril and promise which lie before us, [and] we should really try to aim for that. This knowledge can empower each and every one of us to navigate toward the grand future.

Josh Clark on the impact of learning about existential risks for his podcast series, The End of the World with Josh Clark:

As I was creating the series, I underwent this transition [regarding] how I saw existential risks, and then ultimately how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it’s like information, information, information. And then now, that you have bought into this, here’s how we do something about it.

I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about [them].

Anders Sandberg on a grand version of existential hope:

The thing is, my hope for the future is we get this enormous open ended future. It’s going to contain strange and frightening things but I also believe that most of it is going to be fantastic. It’s going to be roaring on the world far, far, far into the long term future of the universe probably changing a lot of the aspects of the universe.

When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it too much smaller than it could be. Existential hope to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen and I think we are going to be surprised by many things in future and some of them are going to be wonderful surprises. That is the real existential hope.

Right now, this sounds totally utopian, would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would [have sounded] totally absurd. The future is big, we have a lot of centuries ahead of us, hopefully.

From everyone at FLI, we wish you a happy holiday season and a wonderful New Year full of hope!

Updates From the COP24 Climate Change Meeting

For the first two weeks in December, the parties to the United Nations Framework Convention on Climate Change (UNFCCC) gathered in Katowice, Poland, for the 24th annual Conference of the Parties (COP24).

The UNFCCC defines its ultimate goal as “preventing ‘dangerous’ human interference with the climate system,” and its objective for COP24 was to design an “implementation package” for the 2015 Paris Climate Agreement. This package, known as the Katowice Rules, is intended to bolster the Paris Agreement by intensifying the mitigation goals of each of its member countries and, in so doing, ensure the full implementation of the Paris Agreement.

The significance of this package is clearly articulated in the COP24 presidency’s vision — “there is no Paris Agreement without Katowice.”

And the tone of the event was, fittingly, one of urgency. Negotiations took place in the wake of the latest IPCC report, which made clear in its findings that the original terms of the Paris Agreement are insufficient. If we are to keep to the preferred warming target of 1.5°C this century, the report notes that we must strengthen the global response to climate change.

The need for increased action was reiterated throughout the event. During the first week of talks, the Global Carbon Project released new data showing a 2.7% increase in carbon emissions in 2018 and projecting further emissions growth in 2019. And the second week began with a statement from global investors who “strongly urge all governments to implement the actions that are needed to achieve the goals of the [Paris] Agreement, with the utmost urgency.” The investors warned that, without drastic changes, the economic fallout from climate change would likely be several times worse than the 2008 financial crisis.

Against this grim backdrop, negotiations crawled along.

Progress was impeded early on by a disagreement over the wording used in the Conference’s acknowledgment of the IPCC report. Four nations — the U.S., Russia, Saudi Arabia, and Kuwait — took issue with a draft that said the parties “welcome” the report, preferring to say they “took note” of it. A statement from the U.S. State Department explained: “The United States was willing to note the report and express appreciation to the scientists who developed it, but not to welcome it, as that would denote endorsement of the report.”

There was also tension between the U.S. and China surrounding the treatment of developed vs. developing countries. The U.S. wants one universal set of rules to govern emissions reporting, while China has advocated for looser standards for itself and other developing nations.

Initially scheduled to wrap on Friday, talks continued into the weekend, as a resolution was delayed in the final hours by Brazil’s opposition to a proposal that would change rules surrounding carbon trading markets. Unable to strike a compromise, negotiators ultimately tabled the proposal until next year, and a deal was finally struck on Saturday, following negotiations that carried on through the night.

The final text of the Katowice Rules welcomes the “timely completion” of the IPCC report and lays out universal requirements for updating and fulfilling national climate pledges. It holds developed and developing countries to the same reporting standard, but it offers flexibility for “those developing country parties that need it in the light of their capacities.” Developing countries will be left to self-determine whether or not they need flexibility.

The rules also require that countries report any climate financing, and developed countries are called on to increase their financial contributions to climate efforts in developing countries.

How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer

Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized  how we navigate, how we track and ship goods, and how we regulate traffic.

However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.

This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.

Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point, regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, one of the most impactful uses of artificial intelligence encompasses technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.

Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era in safety, reducing crashes by 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.

This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.

What Does it Mean to Have a Safe System?

Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that CPS are making reliable and safe decisions.

To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes that we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario that it confronts. In this sense, “CPS are at their best if they’re not just running very simple [control systems], but if they’re running much more sophisticated and advanced systems,” Platzer notes. However, when CPS utilize advanced autonomy, because they are so complex, it becomes far more difficult to prove that they are making systematically sound choices.

In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”

The ultimate goal, then, is to find equilibrium between the flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”

How to Make a System Safe

The first step in navigating this issue is to determine how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by this central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they have the conviction that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?

The answer is an automated theorem prover, which is a computer program that assists with the development of rigorous mathematical correctness proofs.

When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
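
To give a flavor of the kind of property such a proof establishes, here is a deliberately simplified sketch of a braking controller’s safety guard, written as plain Python. The names and numbers are hypothetical, and this is not Platzer’s formalism or tooling; it only illustrates what a prover must guarantee for every possible state rather than for a handful of test runs.

```python
# Illustrative sketch only: the kind of safety condition a correctness proof
# for a simple braking controller would establish. Plain Python, hypothetical
# names and numbers; this is not Platzer's formalism or the KeYmaera X tool.

def stopping_distance(v, b):
    """Distance needed to brake to a stop from speed v (m/s) at deceleration b (m/s^2)."""
    return v * v / (2 * b)

def safe_to_accelerate(d, v, a, b, dt):
    """Accelerate only if, after one more control cycle at acceleration a,
    the car could still brake to a stop within the remaining distance d."""
    v_next = v + a * dt                   # speed at the end of the cycle
    d_cycle = v * dt + 0.5 * a * dt ** 2  # distance covered during the cycle
    return d_cycle + stopping_distance(v_next, b) <= d

def controller(d, v, a_max=2.0, b=4.0, dt=0.1):
    """Choose an acceleration; fall back to braking whenever the guard fails."""
    return a_max if safe_to_accelerate(d, v, a_max, b, dt) else -b

# A theorem prover's job is to show symbolically that, for *every* state the
# system can reach (including the continuous motion between decisions), this
# guard preserves the invariant stopping_distance(v, b) <= d, rather than
# checking it only for the finitely many cases a simulation happens to try.
```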

After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.

Eventually, Platzer hopes that this will culminate in technology that allows CPS to recover from situations where the expected outcome didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.

The more seamless such transitions are, the more complex they are to implement. But they are the ultimate amalgamation of safety and flexibility or, in other words, the ultimate combination of AI and safety proof technology.

Creating the Tech of Tomorrow

To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond in rigor than what anyone else is doing for the analysis of cyber-physical systems.”

The KeYmaera X prover, which was created by Platzer and his team, is a tool that allows users to easily and reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.

More technically, KeYmaera X is a hybrid systems theorem prover that analyzes the control program and the physical behavior of the controlled system together, in order to provide both efficient computation and the necessary support for sophisticated safety proof techniques. Ultimately, this work builds off of a previous iteration of the technology known as KeYmaera. However, Platzer states that, in order to optimize the tool and make it as simple as possible, the team essentially “started from scratch.”

Emphasizing just how dramatic these most recent changes are, Platzer notes that, in the previous prover, the correctness of the statements was dependent on some 66,000 lines of code. Notably, each of these 66,000 lines was critical to the correctness of the verdict. According to Platzer, this poses a problem, as it’s exceedingly difficult to ensure that all of the lines are implemented correctly. Although the latest iteration of KeYmaera is ultimately just as large as the previous version, in KeYmaera X, the part of the prover that is responsible for verifying the correctness is a mere 2,000 lines of code.

This allows the team to evaluate the safety of cyber-physical systems more reliably than ever before. “We identified this microkernel, this really minuscule part of the system that was responsible for the correctness of the answers, so now we have a much better chance of making sure that we haven’t accidentally snuck any mistakes into the reasoning engines,” Platzer said. Simultaneously, he notes that it enables users to do much more aggressive automation in their analysis. Platzer explains, “If you have a small part of the system that’s responsible for the correctness, then you can do much more liberal automation. It can be much more courageous because there’s an entire safety net underneath it.”
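
The “small trusted core” design Platzer describes is a long-standing idea in theorem proving, and a rough schematic of it, in generic Python rather than KeYmaera X’s actual architecture, looks like this: a handful of audited rules are the only way to manufacture a checked fact, and arbitrarily aggressive automation is layered on top of them.

```python
# Schematic sketch of the "small trusted core" idea in general terms, not
# KeYmaera X's actual architecture. Only the tiny kernel below can mint checked
# Facts, so a bug in the (much larger) untrusted automation can at worst fail
# to find a proof; it cannot certify a false statement.

_KERNEL_TOKEN = object()

class Fact:
    """A statement the kernel has checked. Only the kernel constructs these."""
    def __init__(self, statement, _token=None):
        if _token is not _KERNEL_TOKEN:
            raise ValueError("Facts can only be created by the kernel's rules")
        self.statement = statement

def axiom_reflexivity(term):
    # One of a handful of hand-audited rules: t = t always holds.
    return Fact(("=", term, term), _token=_KERNEL_TOKEN)

def rule_conjunction(fact_a, fact_b):
    # Another audited rule: from facts A and B, conclude "A and B".
    return Fact(("and", fact_a.statement, fact_b.statement), _token=_KERNEL_TOKEN)

# Everything below this line is untrusted automation. It can search as
# aggressively as it likes, because it can only assemble kernel-checked Facts.
def aggressive_tactic(terms):
    facts = [axiom_reflexivity(t) for t in terms]
    result = facts[0]
    for fact in facts[1:]:
        result = rule_conjunction(result, fact)
    return result
```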

For the next stage of his research, Platzer is going to begin integrating multiple mathematical models that could potentially describe reality into a CPS. To explain these next steps, Platzer returns once more to self-driving cars: “If you’re following another driver, you can’t know if the driver is currently looking for a parking spot, trying to get somewhere quickly, or about to change lanes. So, in principle, under those circumstances, it’s a good idea to have multiple possible models and comply with the ones that may be the best possible explanation of reality.”

Ultimately, the goal is to allow the CPS to increase their flexibility and complexity by switching between these multiple models as they become more or less likely explanations of reality. “The world is a complicated place,” Platzer explains, “so the safety analysis of the world will also have to be a complicated one.”
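
As a rough illustration of what juggling multiple models might look like (a simplified sketch, not Platzer’s actual method; the model names, rates, and noise scale are made up), a monitor could keep several candidate models of the car ahead and continually re-weight them by how well each one predicts the speeds it actually observes:

```python
# Illustrative sketch (not Platzer's method): track several candidate models
# of the car ahead and re-weight them by how well they predict observations.
# Model names, rates, and the noise scale are hypothetical.

MODELS = {
    "cruising":     lambda v, dt: v,             # holds its speed
    "braking":      lambda v, dt: v - 3.0 * dt,  # slows at 3 m/s^2
    "accelerating": lambda v, dt: v + 2.0 * dt,  # speeds up at 2 m/s^2
}

def update_weights(weights, v_prev, v_observed, dt, noise=0.5):
    """Shift belief toward the models whose predictions match what we saw."""
    scored = {}
    for name, model in MODELS.items():
        error = abs(model(v_prev, dt) - v_observed)
        scored[name] = weights[name] * max(1e-6, 1.0 - error / noise)
    total = sum(scored.values())
    return {name: w / total for name, w in scored.items()}

weights = {name: 1.0 / len(MODELS) for name in MODELS}
v_prev = 20.0
for v_observed in (19.7, 19.4, 19.1):   # the other car is in fact slowing down
    weights = update_weights(weights, v_prev, v_observed, dt=0.1)
    v_prev = v_observed

print(max(weights, key=weights.get))    # "braking" becomes the best explanation
```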

FLI Signs Safe Face Pledge

FLI is pleased to announce that we’ve signed the Safe Face Pledge, an effort to ensure facial analysis technologies are not used as weapons or in other situations that can lead to abuse or bias. The pledge was initiated and led by Joy Buolamwini, an AI researcher at MIT and founder of the Algorithmic Justice League.  

Facial analysis technology isn’t just used by our smartphones and on social media. It’s also found in drones and other military weapons, and it’s used by law enforcement, airports and airlines, public surveillance cameras, schools, businesses, and more. Yet the technology is known to be flawed and biased, often miscategorizing anyone who isn’t a white male. And the bias is especially strong against dark-skinned women.

“Research shows facial analysis technology is susceptible to bias and even if accurate can be used in ways that breach civil liberties. Without bans on harmful use cases, regulation, and public oversight, this technology can be readily weaponized, employed in secret government surveillance, and abused in law enforcement,” warns Buolamwini.

By signing the pledge, companies that develop, sell or buy facial recognition and analysis technology promise that they will “prohibit lethal use of the technology, lawless police use, and require transparency in any government use.”

FLI does not develop or use these technologies, but we signed because we support these efforts, and we hope all companies will take necessary steps to ensure their technologies are used for good, rather than as weapons or other means of harm.

Companies that signed the pledge at launch include Simprints, Yoti, and Robbie AI. Other early signatories of the pledge include prominent AI researchers Noel Sharkey, Subbarao Kambhampati, Toby Walsh, Stuart Russell, and Raja Chatila, as well as tech authors Cathy O’Neil and Meredith Broussard, and many more.

The SAFE Face Pledge commits signatories to:

Show Value for Human Life, Dignity, and Rights

  • Do not contribute to applications that risk human life
  • Do not facilitate secret and discriminatory government surveillance
  • Mitigate law enforcement abuse
  • Ensure your rules are being followed

Address Harmful Bias

  • Implement internal bias evaluation processes and support independent evaluation
  • Submit models on the market for benchmark evaluation where available

Facilitate Transparency

  • Increase public awareness of facial analysis technology use
  • Enable external analysis of facial analysis technology on the market

Embed Safe Face Pledge into Business Practices

  • Modify legal documents to reflect value for human life, dignity, and rights
  • Engage with stakeholders
  • Provide details of Safe Face Pledge implementation

Organizers of the pledge say, “Among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” And the first statement of the pledge calls on signatories to ensure their facial analysis tools are not used “to locate or identify targets in operations where lethal force may be used or is contemplated.”

Anthony Aguirre, cofounder of FLI, said, “A great majority of AI researchers agree that designers and builders of AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.  That is, in fact, the 9th Asilomar AI principle. The Safe Face Pledge asks those involved with the development of facial recognition technologies, which are dramatically increasing in power through the use of advanced machine learning, to take this belief seriously and to act on it.  As new technologies are developed and poised for widespread implementation and use, it is imperative for our society to consider their interplay with the rights and privileges of the people they affect — and new rights and responsibilities may have to be considered as well, where technologies are currently in a legal or regulatory grey area.  FLI applauds the multiple initiatives, including this pledge, aimed at ensuring that facial recognition technologies — as with other AI technologies — are implemented only in a way that benefits both individuals and society while taking utmost care to respect individuals’ rights and human dignity.”

You can support the Safe Face Pledge by signing here.

 

Highlights From NeurIPS 2018

The Top Takeaway from Google’s Attempt to Remove Racial Biases From AI

By Jolene Creighton

Algorithms don’t just decide what posts you see in your Facebook newsfeed. They make millions of life-altering decisions every day. They help decide who moves to the next stage of a job interview, who can take out a loan, and even who’s granted parole.

When one stops to consider the well-known biases that exist in these algorithms, the role that they play in our decision-making processes becomes somewhat concerning.

Ultimately, bias is a problem that stems from the unrepresentative datasets that our systems are trained on. For example, when it comes to images, most of the training data is Western-centric — it depicts Caucasian individuals taking part in traditionally Western activities. Consequently, as Google research previously revealed, if we give an AI system an image of a Caucasian bride in a Western dress, it correctly labels the image as “wedding,” “bride,” and “women.” If, however, we present the same AI system with an image of a bride of Asian descent, it produces results like “clothing,” “event,” and “performance art.”

Of course, this problem is not exclusively a Western one. In 2011, a study found that AI systems developed in East Asia have more difficulty distinguishing between Caucasian faces than between Asian faces.

That’s why, in September of 2018, Google partnered with the NeurIPS conference to launch the Inclusive Images Competition, an event that was created to help encourage the development of less biased AI image classification models.

For the competition, individuals were asked to use Open Images, an image dataset collected from North America and Europe, to train a system that can be evaluated on images collected from a different geographic region.

At this week’s NeurIPS conference, Pallavi Baljekar, a Google Brain researcher, spoke about the success of the project. Notably, the competition was only marginally successful. Although the leading models maintained relatively high accuracy in the first stages of the competition, four out of five top models didn’t predict the “bride” label when applied to the original two bride images.

However, that’s not to say that progress wasn’t made. Baljekar noted that the competition proved that, even with a small and diverse set of data, “we can improve performance on unseen target distributions.”

And in an interview, Pavel Ostyakov, a Deep Learning Engineer at Samsung AI Center and the researcher who took first place in the competition, added that demanding an entirely unbiased AI may be asking for a bit too much. Ultimately, AI systems need to be able to “stereotype” to some degree in order to make their classifications. “The problem was not solved yet, but I believe that it is impossible for neural networks to make unbiased predictions,” he said. The idea that some bias must be retained is a sentiment that has been echoed by other AI researchers before.

Consequently, it seems that making unbiased AI systems is going to be a process that requires continuous improvement and tweaking. Yet, despite the fact that we can’t make entirely unbiased AI, we can do a lot more to make these systems less biased.

With this in mind, today, Google announced Open Images Extended. It’s an extension of Google’s Open Images and is intended to be a dataset that better represents the global diversity we find on our planet. The first set to be added is seeded with over 470,000 images.

On this very long road we’re traveling, it’s a step in the right direction.

The Reproducibility Problem: AI Agents Should be Trained in More Realistic Environments

By Jolene Creighton

Our world is a complex and vibrant place. It’s also remarkably dynamic, existing in a state of near constant change. As a result, when we’re faced with a decision, there are thousands of variables that must be considered.

According to Joelle Pineau, an Associate Professor at McGill University and lead of Facebook’s Artificial Intelligence Research lab in Montreal, this poses a bit of a problem when it comes to our AI agents.

During her keynote speech at the 2018 NeurIPS conference, Pineau stated that many AI researchers aren’t training their machine learning systems in proper environments. Instead of using dynamic worlds that mimic what we see in real life, much of the work that’s currently being done takes place in simulated worlds that are static and pristine, lacking the complexity of realistic environments.

According to Pineau, although these computer-constructed worlds help make research more reproducible, they also make the results less rigorous and meaningful. “The real world has incredible complexity, and when we go to these simulators, that complexity is completely lost,” she said.

Pineau continued by noting that, if we hope to one day create intelligent machines that are able to work and react like humans — artificial general intelligences (AGIs) — we must go beyond the static and limited worlds that are created by computers and begin tackling real world scenarios. “We have to break out of these simulators…on the roadmap to AGI, this is only the beginning,” she said.

Ultimately, Pineau also noted that we will never achieve a true AGI unless we begin testing our systems on more diverse training sets and forcing our intelligent agents to tackle more complex problems. “The world is your test set,” she said, concluding, “I’m here to encourage you to explore the full spectrum of opportunities…this means using separate tasks for training and testing.”

Teaching a Machine to Reason

Pineau’s primary critique was on an area of machine learning that is known as reinforcement learning (RL). RL systems allow intelligent agents to improve their decision-making capabilities through trial and error. Over time, these agents are able to learn the rules that govern good and bad choices by interacting with their environment and receiving numerical reward signals that are based on the actions that they take.

Ultimately, RL systems are trained to maximize the numerical reward signals that they receive, so their decisions improve as they try more things and discover what actions yield the most reward. But unfortunately, most simulated worlds have a very limited number of variables. As a result, RL systems have very few things that they can interact with. This means that, although intelligent agents may know what constitutes good decision-making in a simulated environment, when they’re deployed in a realistic environment, they quickly become lost amidst all the new variables.
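
For readers unfamiliar with the technique, the sketch below shows the generic shape of this reward-driven loop: standard tabular Q-learning, not code from Pineau’s talk. The environment here is a hypothetical placeholder exposing reset() and step().

```python
import random
from collections import defaultdict

# A minimal, generic tabular Q-learning loop to show what "learning from a
# numerical reward signal" means in practice. The environment is a hypothetical
# placeholder: anything with reset() -> state and step(action) -> (state,
# reward, done) would do. This is not code from Pineau's talk.

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)   # q[(state, action)] = estimated long-run reward
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Mostly exploit what has paid off so far; sometimes explore at random.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the reward just received plus the best
            # predicted follow-up value; over many trials, good actions win out.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```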

According to Pineau, overcoming this issue means creating more dynamic environments for AI systems to train on.

To showcase one way of accomplishing this, Pineau turned to Breakout, a game launched by Atari in 1976. The game’s environment is simplistic and static, consisting of a background that is entirely black. In order to inject more complexity into this simulated environment, Pineau and her team inserted videos, which are an endless source of natural noise, into the background.

Pineau argued that, by adding these videos into the equation, the team was able to create an environment that includes some of the complexity and variability of the real world. And by ultimately training reinforcement learning systems to operate in such multifaceted environments, researchers obtain more reliable findings and better prepare RL systems to make decisions in the real world.
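
One way to picture that setup is as a thin wrapper around the training environment that composites video frames over the otherwise-black background before the agent sees each observation. The sketch below is a generic illustration with an assumed reset/step interface and threshold, not the team’s actual code:

```python
import numpy as np

# Generic sketch of the idea: wrap a game environment so that natural video
# frames show through wherever the original observation is just black
# background. The reset/step interface, frame shapes, and the threshold are
# assumptions for illustration, not the actual code behind Pineau's experiments.

class NaturalBackgroundWrapper:
    def __init__(self, env, video_frames, background_threshold=10):
        self.env = env
        self.video_frames = video_frames        # sequence of frames, same shape as observations
        self.threshold = background_threshold   # pixels darker than this count as background
        self.t = 0

    def _inject(self, obs):
        frame = self.video_frames[self.t % len(self.video_frames)]
        self.t += 1
        background = obs.max(axis=-1, keepdims=True) < self.threshold
        return np.where(background, frame, obs)  # keep the game sprites, replace the void

    def reset(self):
        return self._inject(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._inject(obs), reward, done, info
```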

In order to help researchers better comprehend exactly how reliable and reproducible their results currently are — or aren’t — Pineau pointed to The 2019 ICLR Reproducibility Challenge during her closing remarks.

The goal of this challenge is to have members of the research community try to reproduce the empirical results submitted to the International Conference on Learning Representations. Then, once all of the attempts have been made, the results are sent back to the original authors. Pineau noted that, to date, the challenge has had a dramatic impact on the findings that are reported. During the 2018 challenge, 80% of authors that received reproducibility reports stated that they changed their papers as a result of the feedback.

You can download a copy of Pineau’s slides here.

Montreal Declaration on Responsible AI May Be Next Step Toward the Development of AI Policy

By Ariel Conn

Over the last few years, as concerns surrounding artificial intelligence have grown, an increasing number of organizations, companies, and researchers have come together to create and support principles that could help guide the development of beneficial AI. With FLI’s Asilomar Principles, IEEE’s treatise on the Ethics of Autonomous and Intelligent Systems, the Partnership on AI’s Tenets, and many more, concerned AI researchers and developers have laid out a framework of ethics that almost everyone can agree upon. However, these previous documents weren’t specifically written to inform and direct AI policy and regulations.

On December 4, at the NeurIPS conference in Montreal, Canadian researchers took the next step, releasing the Montreal Declaration on Responsible AI. The Declaration builds on the current ethical framework of AI, but the architects of the document also add, “Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.”

Yoshua Bengio, a prominent Canadian AI researcher and founder of one of the world’s premier machine learning labs, described the Declaration, saying, “Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner.”

“We want this Declaration to spark a broad dialogue between the public, the experts and government decision-makers,” said UdeM’s rector, Guy Breton. “The theme of artificial intelligence will progressively affect all sectors of society and we must have guidelines, starting now, that will frame its development so that it adheres to our human values and brings true social progress.”

The Declaration lays out ten principles: Well-Being, Respect for Autonomy, Protection of Privacy and Intimacy, Solidarity, Democratic Participation, Equity, Diversity, Prudence, Responsibility, and Sustainable Development.

The primary themes running through the Declaration revolve around ensuring that AI doesn’t disrupt basic human and civil rights and that it enhances equality, privacy, diversity, and human relationships. The Declaration also suggests that humans need to be held responsible for the actions of artificial intelligence systems (AIS), and it specifically states that AIS cannot be allowed to make the decision to take a human life. It also includes a section on ensuring that AIS is designed with the climate and environment in mind, such that resources are sustainably sourced and energy use is minimized.

The Declaration is the result of deliberation that “occurred through consultations held over three months, in 15 different public spaces, and sparked exchanges between over 500 citizens, experts and stakeholders from every horizon.” That it was formulated in Canada is especially relevant given Montreal’s global prominence in AI research.

In his article for The Conversation, Bengio explains, “Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.”

He adds, “Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.”

Making an Impact: What Role Should Scientists Play in Creating AI Policy?

By Jolene Creighton

Artificially intelligent systems are already among us. They fly our planes, drive our cars, and even help doctors make diagnoses and treatment plans. As AI continues to impact daily life and alter society, laws and policies will increasingly have to take it into account. Each day, more and more of the world’s experts call on policymakers to establish clear, international guidelines for the governance of AI.

This week, at the 2018 NeurIPS conference, Edward W. Felten, Professor of Computer Science and Public Affairs at Princeton University, took up the call.

During his opening remarks, Felten noted that AI is poised to radically change everything about the way we live and work, stating that this technology is “extremely powerful and represents a profound change that will happen across many different areas of life.” As such, Felten noted that we must work quickly to amend our laws and update our policies so we’re ready to confront the changes that this new technology brings.

However, Felten argued that policy makers cannot be left to dictate this course alone — members of the AI research community must engage with them.

“Sometimes it seems like our world, the world of the research lab or the developer’s or data scientist’s cubicle, is a million miles from public policy…however, we have not only an opportunity but also a duty to be actively participating in public life,” he said.

Guidelines for Effective Engagement

Felten noted that the first step for researchers is to focus on and understand the political system as a whole. “If you look only at the local picture, it might look irrational. But, in fact, these people [policymakers] are operating inside a system that is big and complicated,” he said. To this point, Felten stated that researchers must become better informed about political processes so that they can participate in policy conversations more effectively.

According to Felten, this means the AI community needs to recognize that policy work is valid and valuable, and this work should be incentivized accordingly. He also called on the AI community to create career paths that encourage researchers to actively engage with policymakers by blending AI research and policy work.

For researchers who are interested in pursuing such work, Felten outlined the steps they should take to start an effective dialogue:

  1. Combine knowledge with preference: As a researcher, work to frame your expertise in the context of the policymaker’s interests.
  2. Structure the decision space: Based on the policymaker’s preferences, give a range of options and explain their possible consequences.
  3. Follow-up: Seek feedback on the utility of the guidance that you offered and the way that you presented your ideas.

If done right, Felten said, this protocol allows experts and policy makers to build productive engagement and trust over time.

US Government Releases Its Latest Climate Assessment, Demands Immediate Action

At the end of last week, amidst the flurry of holiday shopping, the White House quietly released Volume II of the Fourth National Climate Assessment (NCA4). The comprehensive report, which was compiled by the United States Global Change Research Program (USGCRP), is the culmination of decades of environmental research conducted by scientists from 13 different federal agencies. The scope of the work is truly striking, representing more than 300 authors and encompassing thousands of scientific studies.

Unfortunately, the report is also rather grim.

If climate change continues unabated, the assessment asserts that it will cost the U.S. economy hundreds of billions a year by the close of the century — causing some $155 billion in annual damages to labor and another $118 billion in damages to coastal property. In fact, the report notes that, unless we immediately launch “substantial and sustained global mitigation and regional adaptation efforts,” the impact on the agricultural sector alone will reach billions of dollars in losses by the middle of the century.

Notably, the NCA4 authors emphasize that these aren’t just warnings for future generations, pointing to several areas of the United States that are already grappling with the high economic cost of climate change. For example, a powerful heatwave that struck the Northeast left local fisheries devastated, and similar events in Alaska have dramatically slashed fishing quotas for certain stocks. Meanwhile, human activity is exacerbating Florida’s red tide, killing fish populations along the southwest coast.

Of course, the economy won’t be the only thing that suffers.

According to the assessment, climate change is increasingly threatening the health and well-being of the American people, and emission reduction efforts could ultimately save thousands of lives. Young children, pregnant women, and aging populations are identified as most at risk; however, the authors note that waterborne infectious diseases and global food shortages threaten all populations.

As with the economic impact, the toll on human health is already visible. For starters, air pollution is driving a rise in the number of deaths related to heart and lung problems. Asthma diagnoses have increased, and rising temperatures are causing a surge in heatstroke and other heat-related illnesses. And the report makes it clear that the full extent of the risk extends well beyond either the economy or human health, plainly stating that climate change threatens all life on our planet.

Ultimately, the authors emphasize the immediacy of the issue, noting that without immediate action, no system will be left untouched:

“Climate change affects the natural, built, and social systems we rely on individually and through their connections to one another….extreme weather and climate-related impacts on one system can result in increased risks or failures in other critical systems, including water resources, food production and distribution, energy and transportation, public health, international trade, and national security. The full extent of climate change risks to interconnected systems, many of which span regional and national boundaries, is often greater than the sum of risks to individual sectors.”

Yet, the picture painted by the NCA4 assessment is not entirely bleak. The report suggests that, with a concerted and sustained effort, the most dire damage can be undone and ultimate catastrophe averted. The authors note that this will require international cooperation centered on a dramatic reduction in global carbon dioxide emissions.

The 2015 Paris Agreement, in which 195 countries put forth emission reduction pledges, represented a landmark in international effort to curtail global warming. The agreement was designed to cap warming at 2 degrees Celsius, a limit scientists then believed would prevent the most severe and irreversible effects of climate change. That limit has since been lowered to 1.5 degrees Celsius. Unfortunately, current models predict that even if countries hit their current pledges, temperatures will still climb to 3.3 degrees Celsius by the end of the century. The Paris Agreement offers a necessary first step, but in light of these new predictions, pledges must be strengthened.

Scientists hope the findings in the National Climate Assessment will compel the U.S. government to take the lead in updating their climate commitments.

Handful of Countries – Including the US and Russia – Hamper Discussions to Ban Killer Robots at UN

This press release was originally released by the Campaign to Stop Killer Robots and has been lightly edited.

Geneva, 26 November 2018 – Reflecting the fragile nature of multilateralism today, countries have agreed to continue their diplomatic talks on lethal autonomous weapons systems—killer robots—next year. But the discussions will continue with no clear objective and participating countries will have even less time dedicated to making decisions than they’ve had in the past. The outcome at the Convention on Conventional Weapons (CCW) annual meeting—which concluded at 11:55 PM on Friday, November 23—has again demonstrated the weakness of the forum’s decision-making process, which enables a single country or small group of countries to thwart more ambitious measures sought by a majority of countries.

Killer robots are weapons systems that would select and attack targets without meaningful human control over the process — that is, a weapon that could target and kill people without sufficient human oversight.

“We’re dismayed that [countries] could not agree on a more ambitious mandate aimed at negotiating a treaty to prevent the development of fully autonomous weapons,” said Mary Wareham of Human Rights Watch, coordinator of the Campaign to Stop Killer Robots. “This weak outcome underscores the urgent need for bold political leadership and for consideration of  another route to create a new treaty to ban these weapons systems, which would select and attack targets without meaningful human control.”

“The security of the world and future of humanity hinges on achieving a preemptive ban on killer robots,” Wareham added.

The Campaign to Stop Killer Robots urges all countries to heed the call of the UN Secretary-General and prohibit these weapons, which he has deemed “politically unacceptable and morally repugnant.”

Since the first CCW meeting on killer robots in 2014, most of the participating countries have concluded that current international humanitarian and human rights law will need to be strengthened to prevent the development, production, and use of fully autonomous weapons. This includes 28 countries seeking to prohibit fully autonomous weapons. This past week, El Salvador and Morocco added their names to the list of countries calling for a ban. Austria, Brazil, and Chile have formally proposed the urgent negotiation of “a legally-binding instrument to ensure meaningful human control over the critical functions” of weapons systems.

None of the 88 countries participating in the CCW meeting objected to continuing the formal discussions on lethal autonomous weapons systems. However, Russia, Israel, Australia, South Korea, and the United States have indicated they cannot support negotiation of a new treaty via the CCW or any other process. And Russia alone successfully lobbied to limit the amount of time that states will meet in 2019, reducing the talks from just 10 days to only 7 days.

Seven days is insufficient for the CCW to tackle this challenge, and for the Campaign to Stop Killer Robots, the fact the CCW talks on killer robots will proceed next year is no guarantee of a meaningful outcome.

“It seems ever more likely that concerned [countries] will consider other avenues to create a new international treaty to prohibit fully autonomous weapons,” said Wareham. “The Campaign to Stop Killer Robots stands ready to work to secure a new treaty through any means possible.”

The CCW is not the only group within the United Nations that can pass a legally-binding, international treaty. In the past, the CCW has been tasked with banning antipersonnel landmines, cluster munitions, and nuclear weapons, but in each case, because the CCW requires consensus among all participating countries, the group was never able to prohibit the weapons in question. Instead, fueled by mounting public pressure, concerned countries turned to other bodies within the UN to finally establish treaties that banned each of these inhumane weapons. But even then, these diplomatic efforts only succeeded because of the genuine partnerships between like-minded countries, UN agencies, the International Committee of the Red Cross and dedicated coalitions of non-governmental organizations.

This past week’s CCW meeting approved Mr. Ljupco Jivan Gjorgjinski of the Former Yugoslav Republic of Macedonia to chair next year’s deliberations on LAWS, which will be divided into two meetings: March 25-29 and August 20-21. The CCW’s annual meeting, at which decisions will be made about future work on autonomous weapons, will be held on November 13-15.

“Over the coming year our dynamic campaigners around the world are intensifying their outreach at the national and regional levels,” said Wareham. “We encourage anyone concerned by the disturbing trend towards killer robots to express their strong desire for their government to endorse and work for a ban on fully autonomous weapons without delay. Only with the public’s support will the ban movement prevail.”

To learn more about how you can help, visit autonomousweapons.org.

Benefits & Risks of Biotechnology

“This is a whole new era where we’re moving beyond little edits on single genes to being able to write whatever we want throughout the genome.”

-George Church, Professor of Genetics at Harvard Medical School

What is biotechnology?

How are scientists putting nature’s machinery to use for the good of humanity, and how could things go wrong?

Biotechnology is nearly as old as humanity itself. The food you eat and the pets you love? You can thank our distant ancestors for kickstarting the agricultural revolution, using artificial selection on crops, livestock, and other domesticated species. When Edward Jenner invented vaccines and when Alexander Fleming discovered antibiotics, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese!

When he coined the term in 1919, the agriculturalist Karl Ereky described ‘biotechnology’ as “all lines of work by which products are produced from raw materials with the aid of living things.” In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature, and then manipulating it in a test tube – or, more recently, inside of living cells.

In fact, the most exciting biotechnology advances of recent times are occurring at the microscopic level (and smaller!) within the membranes of cells. After decades of basic research into decoding the chemical and genetic makeup of cells, biologists in the mid-20th century launched what would become a multi-decade flurry of research and breakthroughs. Their work has brought us the powerful cellular tools at biotechnologists’ disposal today. In the coming decades, scientists will use the tools of biotechnology to manipulate cells with increasing control, from precision editing of DNA to synthesizing entire genomes from their basic chemical building blocks. These cells could go on to become bomb-sniffing plants, miracle cancer drugs, or ‘de-extincted’ wooly mammoths. And biotechnology may be a crucial ally in the fight against climate change.

But rewriting the blueprints of life carries an enormous risk. To begin with, the same technology being used to extend our lives could instead be used to end them. While researchers might see the engineering of a supercharged flu virus as a perfectly reasonable way to better understand and thus fight the flu, the public might see the drawbacks as equally obvious: the virus could escape, or someone could weaponize the research. And the advanced genetic tools that some are considering for mosquito control could have unforeseen effects, possibly leading to environmental damage. The most sophisticated biotechnology may be no match for Murphy’s Law.

While the risks of biotechnology have been fretted over for decades, the increasing pace of progress – from low cost DNA sequencing to rapid gene synthesis to precision genome editing – suggests biotechnology is entering a new realm of maturity regarding both beneficial applications and more worrisome risks. Adding to concerns, DIY scientists are increasingly taking biotech tools outside of the lab. For now, many of the benefits of biotechnology are concrete while many of the risks remain hypotheticals, but it is better to be proactive and cognizant of the risks than to wait for something to go wrong first and then attempt to address the damage.

How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have substituted petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.

What are the risks of biotechnology?

Along with excitement, the rapid progress of research has also raised questions about the consequences of biotechnology advances. Biotechnology may carry more risk than other scientific fields: microbes are tiny and difficult to detect, but the dangers are potentially vast. Further, engineered cells could divide on their own and spread in the wild, with the possibility of far-reaching consequences. Biotechnology could most likely prove harmful either through the unintended consequences of benevolent research or from the purposeful manipulation of biology to cause harm. One could also imagine messy controversies, in which one group engages in an application for biotechnology that others consider dangerous or unethical.

 

1. Unintended Consequences

Sugarcane farmers in Australia in the 1930s had a problem: cane beetles were destroying their crop. So they reasoned that importing a natural predator, the cane toad, could be a natural form of pest control. What could go wrong? Well, the toads became a major nuisance themselves, spreading across the continent and eating the local fauna (except for, ironically, the cane beetle).

While modern biotechnology solutions to society’s problems seem much more sophisticated than airdropping amphibians into Australia, this story should serve as a cautionary tale. To avoid blundering into disaster, we should acknowledge the errors of the past.

  • In 2014, the Centers for Disease Control and Prevention (CDC) came under scrutiny after repeated errors led to scientists being exposed to Ebola, anthrax, and the flu. And a professor in the Netherlands came under fire in 2011 when his lab engineered a deadly, airborne version of the flu virus, mentioned above, and attempted to publish the details. These and other labs study viruses or toxins to better understand the threats they pose and to try to find cures, but their work could set off a public health emergency if a deadly material is released or mishandled as a result of human error.
  • Mosquitoes are carriers of disease – including harmful and even deadly pathogens like Zika, malaria, and dengue – and they seem to play no productive role in the ecosystem. But civilians and lawmakers are raising concerns about a mosquito control strategy that would genetically alter and destroy disease-carrying species of mosquitoes. Known as a ‘gene drive,’ the technology is designed to spread a gene quickly through a population by sexual reproduction (a toy model of how a drive spreads appears after this list). For example, to control mosquitoes, scientists could release males into the wild that have been modified to produce only sterile offspring. Scientists who work on gene drives have performed risk assessments and equipped their experiments with safeguards to make the trials as safe as possible. But, since a man-made gene drive has never been tested in the wild, it’s impossible to know for certain the impact that a mosquito extinction could have on the environment. Additionally, there is a small possibility that the gene drive could mutate once released in the wild, spreading genes that researchers never planned for. Even armed with strategies to reverse a rogue gene drive, scientists may find gene drives difficult to control once they spread outside the lab.
  • When scientists went digging for clues in the DNA of people who are apparently immune to HIV, they found that the resistant individuals carried mutations in a protein that serves as the landing pad for HIV on the surface of blood cells. Because these patients were apparently healthy in the absence of the protein, researchers reasoned that deleting its gene in the cells of infected or at-risk patients could be a permanent cure for HIV and AIDS. With the arrival of CRISPR/Cas9, a set of ‘DNA scissors’ that holds the promise of simple gene surgery for HIV, cancer, and many other genetic diseases, the scientific world began to imagine nearly infinite possibilities. But trials of CRISPR/Cas9 in human cells have produced troubling results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. While a bad haircut might be embarrassing, the wrong cut by CRISPR/Cas9 could be much more serious, making you sicker instead of healthier. And if those edits were made to embryos, instead of fully formed adult cells, the mutations could permanently enter the gene pool and be passed on to future generations. So far, prominent scientists and prestigious journals are calling for a moratorium on gene editing in viable embryos until the risks, ethics, and social implications are better understood.
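To make the gene drive mechanism described above more concrete, here is a purely illustrative Python sketch – not based on any real drive design; the homing efficiency and starting frequency are invented – comparing ordinary Mendelian inheritance with a simplified ‘homing’ drive that biases inheritance in its own favor:

```python
# Illustrative only: a toy model of how a "homing" gene drive allele can
# spread through a randomly mating population much faster than a normal
# (Mendelian) allele. The numbers and homing efficiency are made up.

def next_frequency(p, homing_efficiency=0.0):
    """Return the drive-allele frequency in the next generation.

    With homing_efficiency = 0 this reduces to ordinary Mendelian
    inheritance, so the frequency stays constant. With efficiency e,
    heterozygotes pass on the drive allele with probability (1 + e) / 2
    instead of 1/2.
    """
    homozygote = p * p                      # both copies are the drive allele
    heterozygote = 2 * p * (1 - p)          # one drive copy, one wild-type copy
    return homozygote + heterozygote * (1 + homing_efficiency) / 2

def simulate(generations=10, start=0.01, homing_efficiency=0.0):
    freqs = [start]
    for _ in range(generations):
        freqs.append(next_frequency(freqs[-1], homing_efficiency))
    return freqs

if __name__ == "__main__":
    mendelian = simulate(homing_efficiency=0.0)   # normal gene: stays near 1%
    drive = simulate(homing_efficiency=0.9)       # hypothetical 90% homing drive
    for gen, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"gen {gen:2d}  normal allele: {m:.3f}   drive allele: {d:.3f}")
```

Even in this toy model, an allele released at 1% of the population approaches fixation within about ten generations, which helps explain why researchers and regulators treat any field release so cautiously.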

 

2. Weaponizing biology

The world recently witnessed the devastating effects of disease outbreaks, in the form of Ebola and the Zika virus – but those were natural in origin. The malicious use of biotechnology could mean that future outbreaks are started on purpose. Whether the perpetrator is a state actor or a terrorist group, the development and release of a bioweapon, such as a poison or infectious disease, would be hard to detect and even harder to stop. Unlike a bullet or a bomb, deadly cells could continue to spread long after being deployed. The US government takes this threat very seriously, and the threat of bioweapons to the environment should not be taken lightly either.

Developed nations, and even impoverished ones, have the resources and know-how to produce bioweapons. For example, North Korea is rumored to have assembled an arsenal containing “anthrax, botulism, hemorrhagic fever, plague, smallpox, typhoid, and yellow fever,” ready in case of attack. It’s not unreasonable to assume that terrorists or other groups are trying to get their hands on bioweapons as well. Indeed, numerous instances of chemical or biological weapon use have been recorded, including the anthrax scare shortly after 9/11, which left five people dead after anthrax spores were sent through the mail. And new gene editing technologies are increasing the odds that a hypothetical bioweapon targeted at a certain ethnicity, or even a single individual like a world leader, could one day become a reality.

While attacks using traditional weapons may require much less expertise, the dangers of bioweapons should not be ignored. It might seem impossible to make bioweapons without plenty of expensive materials and scientific knowledge, but recent advances in biotechnology may make it even easier for bioweapons to be produced outside of a specialized research lab. The cost to chemically manufacture strands of DNA is falling rapidly, meaning it may one day be affordable to ‘print’ deadly proteins or cells at home. And the openness of science publishing, which has been crucial to our rapid research advances, also means that anyone can freely Google the chemical details of deadly neurotoxins. In fact, the most controversial aspect of the supercharged influenza case was not that the experiments had been carried out, but that the researchers wanted to openly share the details.

On a more hopeful note, scientific advances may allow researchers to find solutions to biotechnology threats as quickly as they arise. Recombinant DNA and biotechnology tools have enabled the rapid invention of new vaccines which could protect against new outbreaks, natural or man-made. For example, less than 5 months after the World Health Organization declared Zika virus a public health emergency, researchers got approval to enroll patients in trials for a DNA vaccine.

The ethics of biotechnology

Biotechnology doesn’t have to be deadly, or even dangerous, to fundamentally change our lives. While humans have been altering genes of plants and animals for millennia — first through selective breeding and more recently with molecular tools and chimeras — we are only just beginning to make changes to our own genomes (amid great controversy).

Cutting-edge tools like CRISPR/Cas9 and DNA synthesis raise important ethical questions that are increasingly urgent to answer. Some question whether altering human genes means “playing God,” and if so, whether we should do that at all. For instance, if gene therapy in humans is acceptable to cure disease, where do you draw the line? Among disease-associated gene mutations, some come with virtual certainty of premature death, while others put you at higher risk for something like Alzheimer’s, but don’t guarantee you’ll get the disease. Many others lie somewhere in between. How do we determine a hard limit for which gene surgery to undertake, and under what circumstances, especially given that the surgery itself comes with the risk of causing genetic damage? Scholars and policymakers have wrestled with these questions for many years, and there is some guidance in documents such as the United Nations’ Universal Declaration on the Human Genome and Human Rights.

And what about ways that biotechnology may contribute to inequality in society? Early work in gene surgery will no doubt be expensive – for example, Novartis plans to charge $475,000 for a one-time treatment of their recently approved cancer therapy, a drug which, in trials, has rescued patients facing certain death. Will today’s income inequality, combined with biotechnology tools and talk of ‘designer babies’, lead to tomorrow’s permanent underclass of people who couldn’t afford genetic enhancement?

Advances in biotechnology are escalating the debate, from questions about altering life to creating it from scratch. For example, a recently announced initiative called GP-Write has the goal of synthesizing an entire human genome from chemical building blocks within the next 10 years. The project organizers have many applications in mind, from bringing back wooly mammoths to growing human organs in pigs. But, as critics pointed out, the technology could make it possible to produce children with no biological parents, or to recreate the genome of another human, like making cellular replicas of Einstein. “To create a human genome from scratch would be an enormous moral gesture,” write two bioethicists regarding the GP-Write project. In response, the organizers of GP-Write insist that they welcome a vigorous ethical debate, and have no intention of turning synthetic cells into living humans. But this doesn’t guarantee that rapidly advancing technology won’t be applied in the future in ways we can’t yet predict.

What are the tools of biotechnology?

 

1. DNA Sequencing

It’s nearly impossible to imagine modern biotechnology without DNA sequencing. Since virtually all of biology centers around the instructions contained in DNA, biotechnologists who hope to modify the properties of cells, plants, and animals must speak the same molecular language. DNA is made up of four building blocks, or bases, and DNA sequencing is the process of determining the order of those bases in a strand of DNA. Since the publication of the complete human genome in 2003, the cost of DNA sequencing has dropped dramatically, making it a simple and widespread research tool.
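As a rough illustration of what “determining the order of bases” buys you, here is a minimal Python sketch (the sequences are invented) that compares a sequenced read against a reference sequence and flags a single-base difference – the kind of comparison that underlies the diagnostic uses described below:

```python
# Illustrative only: DNA represented as a string over the four bases A, C, G, T.
# A toy comparison of a sequenced read against a reference sequence to find
# single-base differences. The sequences here are invented.

REFERENCE = "ATGGCATTACGGATCCGTAA"   # hypothetical reference gene fragment
PATIENT   = "ATGGCATTACGAATCCGTAA"   # hypothetical sequenced read from a patient

def find_variants(reference, read):
    """Return (position, reference_base, read_base) for each mismatch."""
    return [
        (i, ref_base, read_base)
        for i, (ref_base, read_base) in enumerate(zip(reference, read))
        if ref_base != read_base
    ]

if __name__ == "__main__":
    for pos, ref, alt in find_variants(REFERENCE, PATIENT):
        print(f"Variant at position {pos}: reference {ref} -> patient {alt}")
    # prints: Variant at position 11: reference G -> patient A
```

Real sequencing pipelines must first assemble or align millions of short, error-prone reads, but the end product is the same: a readout of the bases that can then be compared and interpreted.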

Benefits: Sonia Vallabh had just graduated from law school when her mother died from a rare and fatal genetic disease. DNA sequencing showed that Sonia carried the fatal mutation as well. But far from resigning herself to her fate, Sonia and her husband Eric decided to fight back, and today they are graduate students at Harvard, racing to find a cure. DNA sequencing has also allowed Sonia to become pregnant, since doctors could test her eggs for ones that don’t have the mutation. While most people’s genetic blueprints don’t contain deadly mysteries, our health is increasingly supported by the medical breakthroughs that DNA sequencing has enabled. For example, researchers were able to track the 2014 Ebola epidemic in real time using DNA sequencing. And pharmaceutical companies are designing new anti-cancer drugs targeted to people with a specific DNA mutation. Entire new fields, such as personalized medicine, owe their existence to DNA sequencing technology.

Risks: Simply reading DNA is not harmful, but it is foundational for all of modern biotechnology. As the saying goes, knowledge is power, and the misuse of DNA information could have dire consequences. While DNA sequencing alone cannot make bioweapons, it’s hard to imagine waging biological warfare without being able to analyze the genes of infectious or deadly cells or viruses. And although one’s DNA has traditionally been considered personal and private – containing information about one’s ancestry, family, and medical conditions – governments and corporations increasingly include a person’s DNA signature in the information they collect. Some warn that such databases could be used to track people or discriminate on the basis of private medical records – a dystopian vision of the future familiar to anyone who’s seen the movie GATTACA. Even supplying patients with their own genetic information has come under scrutiny, if it’s done without proper context, as evidenced by the dispute between the FDA and the direct-to-consumer genetic testing service 23andMe. Finally, DNA testing opens the door to sticky ethical questions, such as whether to carry to term a pregnancy after the fetus is found to have a genetic mutation.

 

2. Recombinant DNA

The modern field of biotechnology was born when scientists first manipulated – or ‘recombined’ –  DNA in a test tube, and today almost all aspects of society are impacted by so-called ‘rDNA’. Recombinant DNA tools allow researchers to choose a protein they think may be important for health or industry, and then remove that protein from its original context. Once removed, the protein can be studied in a species that’s simple to manipulate, such as E. coli bacteria. This lets researchers reproduce it in vast quantities, engineer it for improved properties, and/or transplant it into a new species. Modern biomedical research, many best-selling drugs, most of the clothes you wear, and many of the foods you eat rely on rDNA biotechnology.

Benefits: Simply put, our world has been reshaped by rDNA. Modern medical advances are unimaginable without the ability to study cells and proteins with rDNA and the tools used to make it, such as PCR, which helps researchers ‘copy and paste’ DNA in a test tube. An increasing number of vaccines and drugs are the direct products of rDNA. For example, nearly all insulin used in treating diabetes today is produced recombinantly. Additionally, cheese lovers may be interested to know that rDNA provides ingredients for a majority of hard cheeses produced in the West. Many important crops have been genetically modified to produce higher yields, withstand environmental stress, or grow without pesticides. Facing the unprecedented threats of climate change, many researchers believe rDNA and GMOs will be crucial in humanity’s efforts to adapt to rapid environmental changes.

Risks: The inventors of rDNA themselves warned the public and their colleagues about the dangers of this technology. For example, they feared that rDNA derived from drug-resistant bacteria could escape from the lab, threatening the public with infectious superbugs. And recombinant viruses, useful for introducing genes into cells in a petri dish, might instead infect the human researchers. Some of the initial fears were allayed when scientists realized that genetic modification is much trickier than initially thought, and once the realistic threats were identified – like recombinant viruses or the handling of deadly toxins –  safety and regulatory measures were put in place. Still, there are concerns that rogue scientists or bioterrorists could produce weapons with rDNA. For instance, it took researchers just 3 years to make poliovirus from scratch in 2006, and today the same could be accomplished in a matter of weeks. Recent flu epidemics have killed over 200,000, and the malicious release of an engineered virus could be much deadlier – especially if preventative measures, such as vaccine stockpiles, are not in place.

3. DNA Synthesis

Synthesizing DNA has the advantage of offering total researcher control over the final product. With many of the mysteries of DNA still unsolved, some scientists believe the only way to truly understand the genome is to make one from its basic building blocks. Building DNA from scratch has traditionally been too expensive and inefficient to be very practical, but in 2010, researchers did just that, completely synthesizing the genome of a bacterium and injecting it into a living cell. Since then, scientists have made bigger and bigger genomes, and recently, the GP-Write project launched with the intention of tackling perhaps the ultimate goal: chemically fabricating an entire human genome. Meeting this goal – and within a 10-year timeline – will require new technology and an explosion in manufacturing capacity. But the project’s success could signal the impact of synthetic DNA on the future of biotechnology.

Benefits: Plummeting costs and technical advances have made the goal of total genome synthesis seem much more immediate. Scientists hope these advances, and the insights they enable, will ultimately make it easier to create custom cells that serve as medicines, or even to engineer bomb-sniffing plants. Fantastical applications of DNA synthesis include human cells that are immune to all viruses or DNA-based data storage. Prof. George Church of Harvard has proposed using DNA synthesis technology to ‘de-extinct’ the passenger pigeon, wooly mammoth, or even Neanderthals. One company hopes to edit pig cells using DNA synthesis technology so that their organs can be transplanted into humans. And DNA is an efficient option for storing data, as researchers recently demonstrated when they stored a movie file in the genome of a cell.

Risks: DNA synthesis has sparked significant controversy and ethical concerns. For example, when the GP-Write project was announced, some criticized the organizers for the troubling possibilities that synthesizing genomes could evoke, likening it to playing God. Would it be ethical, for instance, to synthesize Einstein’s genome and transplant it into cells? The technology to do so does not yet exist, and GP-Write leaders have backed away from making human genomes in living cells, but some are still demanding that the ethical debate happen well in advance of the technology’s arrival. Additionally, cheap DNA synthesis could one day democratize the ability to make bioweapons or other nuisances, as one virologist demonstrated when he made the horsepox virus (related to the virus that causes smallpox) with DNA he ordered over the Internet. (It should be noted, however, that the other ingredients needed to make the horsepox virus are specialized equipment and deep technical expertise.)

 

4. Genome Editing

Many diseases have a basis in our DNA, and until recently, doctors had very few tools to address the root causes. That appears to have changed with the recent discovery of a DNA editing system called CRISPR/Cas9. (A note on terminology – CRISPR is a bacterial immune system, while Cas9 is one protein component of that system, but both terms are often used to refer to the protein.) It operates in cells like a DNA scissor, opening slots in the genome where scientists can insert their own sequence. While the capability of cutting DNA wasn’t unprecedented, Cas9 dusts the competition with its effectiveness and ease of use. Even though it’s a biotech newcomer, much of the scientific community has already caught ‘CRISPR-fever,’ and biotech companies are racing to turn genome editing tools into the next blockbuster pharmaceutical.

Benefits: Genome editing may be the key to solving currently intractable genetic diseases such as cystic fibrosis, which is caused by a single genetic defect. If Cas9 can somehow be inserted into a patient’s cells, it could fix the mutations that cause such diseases, offering a permanent cure. Even diseases caused by many mutations, like cancer, or caused by a virus, like HIV/AIDS, could be treated using genome editing. Just recently, an FDA panel recommended a gene therapy for cancer, which showed dramatic responses for patients who had exhausted every other treatment. Genome editing tools are also used to make lab models of diseases, cells that store memories, and tools that can detect epidemic viruses like Zika or Ebola. And as described above, if a gene drive, which uses Cas9, is deployed effectively, we could eliminate diseases such as malaria, which kills nearly half a million people each year.

Risks: Cas9 has generated nearly as much controversy as it has excitement, because genome editing carries both safety issues and ethical risks. Cutting and repairing a cell’s DNA is not risk-free, and errors in the process could make a disease worse, not better. Genome editing in reproductive cells, such as sperm or eggs, could result in heritable genetic changes, meaning dangerous mutations could be passed down to future generations. And some warn of unethical uses of genome editing, fearing a rise of ‘designer babies’ if parents are allowed to choose their children’s traits, even though there are currently no straightforward links between a person’s genes and traits like intelligence or appearance. Similarly, a gene drive, despite possibly minimizing the spread of certain diseases, has the potential to create great harm since it is intended to kill or modify an entire species. A successful gene drive could have unintended ecological impacts, be used with malicious intent, or mutate in unexpected ways. Finally, while the capability doesn’t currently exist, it’s not out of the realm of possibility that a rogue agent could develop genetically selective bioweapons to target individuals or populations with certain genetic traits.

 


 

Trump to Pull US Out of Nuclear Treaty


Last week, U.S. President Donald Trump confirmed that the United States will be pulling out of the landmark Intermediate-Range Nuclear Forces (INF) Treaty. The INF Treaty, signed in 1987, banned ground-launched missiles with ranges of 500 km to 5,500 km (310 to 3,400 miles). The agreement covers land-based missiles carrying either nuclear or conventional warheads, but it doesn’t cover any air-launched or sea-launched weapons.

Nonetheless, when it was signed by then-U.S. President Ronald Reagan and Soviet leader Mikhail Gorbachev, it led to the elimination of nearly 2,700 short- and medium-range missiles. More significantly, it helped bring an end to a dangerous nuclear standoff between the two nations, and the trust that it fostered played a critical part in defusing the Cold War.

Now, as a result of the recent announcements from the Trump administration, all of this may be undone. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, stated in an interview with The Guardian, “This is the most severe crisis in nuclear arms control since the 1980s. If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”

Of course, the U.S. isn’t the only player contributing to the unraveling of an arms treaty that helped curb competition and bring an end to the Cold War.

Reports indicate that Russia has been violating the INF treaty since at least 2014, a fact that was previously acknowledged by the Obama administration and which President Trump cited in his INF withdrawal announcement last week. “Russia has violated the agreement. They’ve been violating it for many years, and I don’t know why President Obama didn’t negotiate or pull out,” Trump stated. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to.…so we’re going to terminate the agreement. We’re going to pull out,” he continued.

Trump also noted that China played a significant role in his decision to pull the U.S. out of the INF treaty. Since China was not a part of the negotiations and is not a signatory, the country faces no limits when it comes to developing and deploying intermediate-range nuclear missiles — a fact that China has exploited in order to amass a robust missile arsenal. Trump noted that the U.S. will have to develop those weapons, “unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons, but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable.’”

 

A Growing Concern

Concerns over Russian missile systems that breach the INF treaty are real and valid. Equally valid are the concerns over China’s weapons strategy. However, experts note that President Trump’s decision to leave the INF treaty doesn’t set us on the path to the negotiating table, but rather, toward another nuclear arms race.

Russian officials have been clear in this regard, with Leonid Slutsky, who chairs the foreign affairs committee in Russia’s lower house of parliament, stating this week that a U.S. withdrawal from the INF agreement “would mean a real new Cold War and an arms race with 100 percent probability” and “a collapse of the planet’s entire nonproliferation and disarmament regime.”

This is precisely why many policy experts assert that withdrawal is not a viable option and, in order to achieve a successful resolution, negotiations must continue. Wolfgang Ischinger, the former German ambassador to the United States, is one such expert. In a statement issued over the weekend, he noted that he is “deeply worried” about President Trump’s plans to dismantle the INF treaty and urged the U.S. government to, instead, work to expand the treaty. “Multilateralizing this agreement would be a lot better than terminating it,” he wrote on Twitter.

Even if the U.S. government is entirely disinterested in negotiating, and the Trump administration seeks only to respond with increased weaponry, policy experts assert that withdrawing from the INF treaty is still an unavailing and unnecessary move. As Jeffrey Lewis, the director of the East Asia nonproliferation program at the Middlebury Institute of International Studies at Monterey, notes, the INF doesn’t prohibit sea- or air-based systems. Consequently, the U.S. could respond to Russian and Chinese political maneuverings with increased armament without escalating international tensions by upending longstanding treaties.

Indeed, since President Trump made his announcement, a number of experts have condemned the move and called for further negotiations. EU spokeswoman Maja Kocijancic said that the U.S. and Russia “need to remain in a constructive dialogue to preserve this treaty” as it “contributed to the end of the Cold War, to the end of the nuclear arms race and is one of the cornerstones of European security architecture.”  

Most notably, in a statement that was issued Monday, the European Union cautioned the U.S. against withdrawing from the INF treaty, saying, “The world doesn’t need a new arms race that would benefit no one and on the contrary, would bring even more instability.”

An image of Hurricane Michael making landfall October 10, 2018. Photo courtesy of NASA.

IPCC 2018 Special Report Paints Dire — But Not Completely Hopeless — Picture of Future


On Wednesday, October 10, the panhandle of Florida was struck by Hurricane Michael, which has already claimed over 30 lives and destroyed communities, homes and infrastructure across multiple states. Michael is the strongest hurricane in recorded history to make landfall in that region. And in coming years, it’s likely that we’ll continue to see an increase in record breaking storms — as well as record-breaking heat waves, droughts, floods, and wildfires.

Only two days before Michael unleashed its devastation on the United States, the United Nations Intergovernmental Panel on Climate Change (IPCC) released a dire report on the prospects for limiting global temperature rise to 1.5°C – and why we must meet this challenge head on.

In 2015, roughly during the time that the Paris Climate Agreement was being signed, global temperatures reached 1°C above pre-industrial levels. And we’re already feeling the impacts of this increase in the form of bigger storms, bigger wildfires, higher temperatures, melting arctic ice, etc.

The recent IPCC report concludes that, if society continues on its current trajectory – and even if the world abides by the Paris Climate Agreement – the planet will hit 1.5°C of warming in a matter of decades, and possibly in the next 12 years. And every additional half degree of warming is expected to bring even more extreme effects. Even if we can limit global warming to 1.5°C, the report predicts we’ll lose most coral reefs, sea levels will rise and flood many coastal communities, more people around the world will experience extreme heat waves, and other natural disasters can be expected to increase.

As global temperatures rise, they don’t rise evenly across the globe. Air over land is expected to warm more than air over the oceans, so an average increase of 1.5°C across the Earth could mean increases of 3-4.5°C in some parts of the world. This has the potential to trigger deadly heat waves, wildfires, and droughts, which would also damage local ecosystems and farmland.

But what about if we reach 2°C? This level of temperature increase is often floated as the highest limit the world can handle without too much suffering – but how much worse will it be than 1.5°C?

A difference of 0.5°C may not seem like much, but it could mean the difference between a world with some surviving coral reefs, and a world in which they — and many other species — are all destroyed. Two degrees could lead to an extra 420 million people experiencing extreme and possibly deadly heat waves. Some regions of the world will see increases in temperatures as high as 4-6°C. Sea levels are predicted to rise an extra 10 centimeters at 2°C versus 1.5°C, which could impact an extra 10 million people along coastal areas.

Meanwhile, human health will deteriorate; diseases like malaria and dengue fever could become more prevalent and spread into new regions with this increase in temperature. Farmland for many staple crops could decrease, and even livestock are expected to be adversely affected as feed quality and water availability may decrease.

The list goes on and on. But perhaps one of the greatest threats of climate change is that those who will likely be the hardest hit by increasing temperatures are those who are already among the poorest and most vulnerable.

Yet we’re not quite out of time. As the report highlights, all of these problems arise as a result of society taking little to no action. But what if we did start taking steps to reduce global warming? What if we could get governments and corporations to recognize the need to reduce emissions and switch to clean, alternative, renewable energy sources? What if individuals made changes to their own lifestyles while also encouraging their government leaders to take action?

The report suggests that under those circumstances, if we can achieve global net-zero emissions – that is, emitting no more carbon and other greenhouse gases than can be absorbed by trees, soil, and other natural sinks – then we can still prevent temperatures from exceeding 1.5°C. Temperatures will still increase somewhat as a result of past and current emissions, but there is time to curtail the most severe effects.

There are other organizations that believe we can achieve global net-zero emissions as well. For example, this summer, the Exponential Climate Action Roadmap was released, which offers a roadmap to achieve the goals of the Paris Climate Agreement by 2030. Or there’s The Solutions Project, which maps out steps to quickly achieve 100% renewable energy. And Drawdown provides 80 steps we can take to reduce emissions.

We don’t have much time left, but it’s not too late. The prospects are dire if we continue on our current trajectory, but if society can recognize the urgency of the situation and come together to take action, there’s still hope of keeping the worst effects of climate change at bay.

An edited version of this article was originally published on Metro. Photo courtesy of NASA.

Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett

In both 2016 and 2017, genome editing made it into the annual Worldwide Threat Assessment of the US Intelligence Community. One of biotechnology’s most promising modern developments had been deemed a danger to US national security – and then, after two years, it was dropped from the list. All of which raises the question: what, exactly, is genome editing, and what can it do?

Most simply, the phrase “genome editing” represents tools and techniques that biotechnologists use to edit the genome – that is, the DNA or RNA of plants, animals, and bacteria. Though the earliest versions of genome editing technology have existed for decades, the introduction of CRISPR in 2013 “brought major improvements to the speed, cost, accuracy, and efficiency of genome editing.”

CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, is actually an ancient mechanism that bacteria use to defend against viruses by recognizing and cutting viral DNA. In the lab, researchers have discovered they can replicate this process by creating a synthetic RNA strand that matches a target DNA sequence in an organism’s genome. The RNA strand, known as a “guide RNA,” is attached to an enzyme that can cut DNA. After the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at this location. DNA can then be removed, and new DNA can be added. CRISPR has quickly become a powerful tool for editing genomes, with research taking place in a broad range of plants and animals, including humans.
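As a cartoon of the mechanism just described, the following Python sketch – with invented sequences, and ignoring real biological details such as PAM sites, repair pathways, and delivery – locates a guide-matched site in a genome string, cuts there, and splices in new DNA:

```python
# Illustrative only: a string-manipulation cartoon of CRISPR-style editing.
# A "guide" sequence locates its match in a toy genome, the genome is cut at
# that site, and a new sequence is inserted. Real genome editing involves far
# more biology (PAM sites, DNA repair, off-target effects) than shown here.

GENOME = "TTACGGATCCGTAACGGTAGCTAGGCTTAA"   # invented genome fragment
GUIDE  = "GGATCCGTAA"                       # invented guide sequence
INSERT = "AAATTT"                           # invented DNA to splice in

def edit(genome, guide, insert):
    """Cut the genome where the guide matches and splice in new DNA."""
    cut_site = genome.find(guide)
    if cut_site == -1:
        raise ValueError("guide does not match the genome")
    cut_point = cut_site + len(guide)       # cut just after the matched site
    return genome[:cut_point] + insert + genome[cut_point:]

if __name__ == "__main__":
    print("before:", GENOME)
    print("after: ", edit(GENOME, GUIDE, INSERT))
```

A second, unintended match elsewhere in the string would be cut just as readily – the string-level analogue of the off-target edits that worry clinicians and ethicists.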

A significant percentage of genome editing research focuses on eliminating genetic diseases. However, with tools like CRISPR, it also becomes possible to alter a pathogen’s DNA to make it more virulent and more contagious. Other potential uses include the creation of “‘killer mosquitos,’ plagues that wipe out staple crops, or even a virus that snips at people’s DNA.”

But does genome editing really deserve a spot among the ranks of global threats like nuclear weapons and cyber hacking? To many members of the scientific community, its inclusion felt like an overreaction. Among them was Dr. Piers Millett, a science policy and international security expert whose work focuses on biotechnology and biowarfare.

Millett wasn’t surprised that biotechnology in general made it into these reports: what he didn’t expect was for one specific tool, genome editing, to be called out. In his words: “I would personally be much more comfortable if it had been a broader sentiment to say ‘Hey, there’s a whole bunch of emerging biotechnologies that could destabilize our traditional risk equation in this space, and we need to be careful with that.’ …But calling out specifically genome editing, I still don’t fully understand any rationale behind it.”

This doesn’t mean, however, that the misuse of genome editing is not cause for concern. Even proper use of the technology often involves the genetic engineering of biological pathogens, research that could very easily be weaponized. Says Millett, “If you’re deliberately trying to create a pathogen that is deadly, spreads easily, and that we don’t have appropriate public health measures to mitigate, then that thing you create is amongst the most dangerous things on the planet.”

 

Biowarfare Before Genome Editing

A medieval depiction of the Black Plague.

Developments such as CRISPR present new possibilities for biowarfare, but biological weapons caused concern long before the advent of gene editing. The first recorded use of biological pathogens in warfare dates back to 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Many centuries later, during the 1346 AD siege of Caffa, the Mongol army catapulted plague-infested corpses into the city, which is thought to have contributed to the 14th century Black Death pandemic that wiped out up to two thirds of Europe’s population.

Though the use of biological weapons was banned internationally by the 1925 Geneva Protocol, state biowarfare programs continued and in many cases expanded during World War II and the Cold War. In 1972, as evidence of these violations mounted, 103 nations signed a treaty known as the Biological Weapons Convention (BWC). The treaty bans the creation of biological arsenals and outlaws offensive biological research, though defensive research is permissible. Each year, signatories are required to submit certain information about their biological research programs to the United Nations, and violations reported to the UN Security Council may result in an inspection.

But inspections can be vetoed by the permanent members of the Security Council, and there are no firm guidelines for enforcement. On top of this, the line that separates permissible defensive biological research from its offensive counterpart is murky and remains a subject of controversy. And though the actual numbers remain unknown, pathologist Dr. Riedel asserts that “the number of state-sponsored programs [that have engaged in offensive biological weapons research] has increased significantly during the last 30 years.”

 

Dual Use Research

So biological warfare remains a threat, and it’s one that genome editing technology could hypothetically escalate. Genome editing falls into a category of research and technology that’s known as “dual-use” – that is, it has the potential both for beneficial advances and harmful misuses. “As an enabling technology, it enables you to do things, so it is the intent of the user that determines whether that’s a positive thing or a negative thing,” Millett explains.

And ultimately, what’s considered positive or negative is a matter of perspective. “The same activity can look positive to one group of people, and negative to another. How do we decide which one is right and who gets to make that decision?” Genome editing could be used, for example, to eradicate disease-carrying mosquitoes, an application that many would consider positive. But as Millett points out, some cultures view such blatant manipulation of the ecosystem as harmful or “sacrilegious.”

Millett believes that the most effective way to deal with dual-use research is to get the researchers engaged in the discussion. “We have traditionally treated the scientific community as part of the problem,” he says. “I think we need to move to a point where the scientific community is the key to the solution, where we’re empowering them to be the ones who identify the risks, the ones who initiate the discussion about what forms this research should take.” A good scientist, he adds, is one “who’s not only doing good research, but doing research in a good way.”

 

DIY Genome Editing

But there is a growing worry that dangerous research might be undertaken by those who are not scientists at all. There are already a number of do-it-yourself (DIY) genome editing kits on the market today, and these relatively inexpensive kits allow anyone, anywhere to edit DNA using CRISPR technology. Do these kits pose a real security threat? Millett explains that risk level can be assessed based on two distinct criteria: likelihood and potential impact. Where the “greatest” risks lie will depend on the criterion.

“If you take risk as a factor of likelihood of impact, the most likely attacks will come from low-powered actors, but have a minimal impact and be based on traditional approaches, existing pathogens, and well characterized risks and threats,” Millett explains. DIY genome editors, for example, may be great in number but are likely unable to produce a biological agent capable of causing widespread harm.

“If you switch it around and say where are the most high impact threats going to come from, then I strongly believe that that [type of threat] requires a level of sophistication and technical competency and resources that are not easy to acquire at this point in time,” says Millett. “If you’re looking for advanced stuff: who could misuse genome editing? States would be my bet in the foreseeable future.”
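Millett’s two criteria can be made concrete with a small sketch: the same set of hypothetical threat categories ranks differently depending on whether you sort by likelihood, by impact, or by their product. All scores below are invented placeholders, not real assessments:

```python
# Illustrative only: ranking hypothetical threat categories by likelihood,
# by impact, and by their product. All scores are invented placeholders on
# a 0-10 scale.

threats = [
    # (description, likelihood, impact)
    ("DIY hobbyist misusing an editing kit",   6, 1),
    ("Terrorist group using a known pathogen", 3, 4),
    ("State-level engineered pathogen",        1, 9),
]

def top(key):
    """Return the description of the highest-ranked threat under `key`."""
    return max(threats, key=key)[0]

if __name__ == "__main__":
    print("Most likely:        ", top(lambda t: t[1]))
    print("Highest impact:     ", top(lambda t: t[2]))
    print("Likelihood x impact:", top(lambda t: t[1] * t[2]))
```

The point of the toy ranking is simply that “greatest risk” is not a single answer: the most likely misuse and the most damaging misuse can come from very different actors.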

State Bioweapons Programs

Large-scale bioweapons programs, such as those run by states, pose a double threat: there is always the possibility of accidental release alongside the potential for malicious use. Millett believes that these threats are roughly equal, a conclusion backed by a thousand page report from Gryphon Scientific, a US defense contractor.

Historically, both accidental release and malicious use of biological agents have caused damage. In 1979, there was the accidental release of aerosolized anthrax from the Sverdlovsk [now Ekaterinburg] bioweapons production facility in the Soviet Union – a clogged air filter in the facility had been removed, but had not been replaced. Ninety-four people were affected by the incident and at least 64 died, along with a number of livestock. The Soviet secret police attempted a cover-up and it was not until years later that the administration admitted the cause of the outbreak.

More recently, Millett says, a US biodefense facility “failed to kill the anthrax that it sent out for various lab trials, and ended up sending out really nasty anthrax around the world.” Though no one was infected, a 2015 government investigation revealed that “over the course of the last decade, 86 facilities in the United States and seven other countries have received low concentrations of live [anthrax] spore samples… thought to be completely inactivated.”

These incidents pale, however, in comparison with Japan’s intentional use of biological weapons during the 1930s and 40s. There is “a published history that suggests up to 30,000 people were killed in China by the Japanese biological weapons program during the lead up to World War II. And if that data is accurate, that is orders of magnitude bigger than anything else,” Millett says.

Given the near-impossibility of controlling the spread of disease, a deliberate attack may have accidental effects far beyond what was intended. The Japanese, for example, may have meant to target only a few Chinese villages, only to unwittingly trigger an epidemic. There are reports, in fact, that thousands of Japan’s own soldiers became infected during a biological attack in 1941.

Despite the 1972 ban on biological weapons programs, Millett believes that many countries still have the capacity to produce biological weapons. As an example, he explains that the Soviets developed “a set of research and development tools that would answer the key questions and give you all the key capabilities to make biological weapons.”

The BWC only bans offensive research, and “underneath the umbrella of a defensive program,” Millett says, “you can do a whole load of research and development to figure out what you would want to weaponize if you were going to make a weapon.” Then, all a country needs to start producing those weapons is “the capacity to scale up production very, very quickly.” The Soviets, for example, built “a set of state-based commercial infrastructure to make things like vaccines.” On a day-to-day basis, they were making things the Soviet Union needed. “But they could be very radically rebooted and repurposed into production facilities for their biological weapons program,” Millett explains. This is known as a “breakout program.”

Says Millett, “I believe there are many, many countries that are well within the scope of a breakout program … so it’s not that they necessarily at this second have a fully prepared and worked-out biological weapons program that they can unleash on the world tomorrow, but they might well have all of the building blocks they need to do that in place, and a plan for how to turn their existing infrastructure towards a weapons program if they ever needed to. These components would be permissible under current international law.”

 

Biological Weapons Convention

This unsettling reality raises questions about the efficacy of the BWC – namely, what does it do well, and what doesn’t it do well? Millett, who worked for the BWC for well over a decade, has a nuanced view.

“The very fact that we have a ban on these things is brilliant,” he says. “We’re well ahead on biological weapons than many other types of weapons systems. We only got the ban on nuclear weapons – and it was only joined by some tiny number of countries – last year. Chemical weapons, only in 1995. The ban on biological weapons is hugely important. Having a space at the international level to talk about those issues is very important.” But, he adds, “we’re rapidly reaching the end of the space that I can be positive about.”

The ban on biological weapons was motivated, at least in part, by the sense that – unlike chemical weapons – they weren’t particularly useful. Traditionally, chemical and biological weapons were dealt with together. The 1925 Geneva Protocol banned both, and the original proposal for the Biological Weapons Convention, submitted by the UK in 1969, would have dealt with both. But the chemical weapons ban was ultimately dropped from the BWC, Millett says, “because that was during Vietnam, and so there were a number of chemical agents that were being used in Vietnam that weren’t going to be banned.” Once the scope of the ban had been narrowed, however, both the US and the USSR signed on.

Millett describes the resulting document as “aspirational.” He explains, “The Biological Weapons Convention is four pages long, whereas the [1995] Chemical Weapons Convention is 200 pages long, give or take.” And the difference “is about the teeth in the treaty.”

“The BWC is…a short document that’s basically a commitment by states not to make these weapons. The Chemical Weapons Convention is an international regime with an organization, with an inspection regime intended to enforce that. Under the BWC, if you are worried about another state, you’re meant to try to resolve those concerns amicably. But if you can’t do that, we move onto Article Six of the Convention, where you report it to the Security Council. The Security Council is meant to investigate it, but of course if you’re a permanent member of the Security Council, you can veto that, so that doesn’t happen.”

 

De-escalation

One easy way that states can avoid raising suspicion is to be more transparent. As Millett puts it, “If you’re not doing naughty things, then it’s on you to demonstrate that you’re not.” This doesn’t mean revealing everything to everybody. It means finding ways to show other states that they don’t need to worry.

As an example, Millett cites the heightened security culture that developed in the US after 9/11. Following the 2001 anthrax letter attacks, as well as a large investment in US biodefense programs, an initiative was started to prevent foreigners from working in those biodefense facilities. “I’m very glad they didn’t go down that path,” says Millett, “because the greatest risk, I think, was not that a foreign national would sneak in.” Rather, “the advantage of having foreign nationals in those programs was at the international level, when country Y stands up and accuses the US of having an illicit bioweapons program hidden in its biodefense program, there are three other countries that can stand up and say, ‘Well, wait a minute. Our scientists are in those facilities. We work very closely with that program, and we see no evidence of what you’re saying.’”

Historically, secrecy surrounding bioweapons programs has led other countries to begin their own research. Before World War I, the British began exploring the use of bioweapons. The Germans were aware of this. By the onset of the war, the British had abandoned the idea, but the Germans, not knowing this, began their own bioweapons program in an attempt to keep up. By World War II, Germany no longer had a bioweapons program. But the Allies believed they still did, and the U.S. bioweapons program was born of such fears.

 

What now?

Asked if he believes genome editing is a bioweapons “game changer”, Millett says no. “I see it as an enabling technology in the short to medium term, then maybe with longer-term implications [for biowarfare], but then we’re out into the far distance of what we can reasonably talk about and predict,” he says. “Certainly for now, I think its big impact is it makes it easier, faster, cheaper, and more reliable to do things that you could do using traditional approaches.”

But as biotechnology continues to evolve, so too will biowarfare. For example, it will eventually be possible for governments to alter specific genes in their own populations. “Imagine aerosolizing a lovely genome editor that knocks out a specifically nasty gene in your population,” says Millett. “It’s a passive thing. You breathe it in [and it] retroactively alters the population[’s DNA].”

A government could use such technology to knock out a gene linked to cancer or other diseases. But, Millett says, “what would happen if you came across a couple of genes that at an individual level were not going to have an impact, but at a population level were connected with something, say, like IQ?” With the help of a genome editor, a government could make their population smarter, on average, by a few IQ points.

“There’s good economic data that says that [average IQ] is … statistically important,” Millett says. “The GDP of the country will be noticeably affected if we could just get another two or three percent IQ points. There are direct national security implications of that. If, for example, Chinese citizens got smarter on average over the next couple of generations by a couple of IQ points per generation, that has national security implications for both the UK and the US.”

For now, such an endeavor remains in the realm of science fiction. But technology is evolving at a breakneck speed, and it’s more important than ever to consider the potential implications of our advancements. That said, Millett is optimistic about the future. “I think the key is the distribution of bad actors versus good actors,” he says. As long as the bad actors remain the minority, there is more reason to be excited for the future of biotechnology than there is to be afraid of it.

Dr. Piers Millett holds fellowships at the Future of Humanity Institute at the University of Oxford and at the Woodrow Wilson Center for International Policy, and works as a consultant for the World Health Organization. He also served at the United Nations as Deputy Head of the Biological Weapons Convention Implementation Support Unit.

Cognitive Biases and AI Value Alignment: An Interview with Owain Evans


At the core of AI safety lies the value alignment problem: how can we teach artificial intelligence systems to act in accordance with human goals and values?

Many researchers interact with AI systems to teach them human values, using techniques like inverse reinforcement learning (IRL). In theory, with IRL, an AI system can learn what humans value and how to best assist them by observing human behavior and receiving human feedback.

But human behavior doesn’t always reflect human values, and human feedback is often biased. We say we want healthy food when we’re relaxed, but then we demand greasy food when we’re stressed. Not only do we often fail to live according to our values, but many of our values contradict each other. We value getting eight hours of sleep, for example, but we regularly sleep less because we also value working hard, caring for our children, and maintaining healthy relationships.

AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, some researchers worry that systems trained with IRL will be fundamentally unable to distinguish between value-aligned and misaligned behavior. This could become especially dangerous as AI systems become more powerful: inferring the wrong values or goals from observing humans could lead these systems to adopt harmful behavior.
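To see the problem in miniature, here is a hypothetical Python sketch (not from Evans and Stuhlmüller’s work) of the kind of inference IRL performs: it watches an agent choose between healthy and greasy meals, assumes the agent is noisily rational, and fits the reward difference that best explains the observed choices. Because the model has no notion of stress or bias, a biased record of behavior gets read as a genuine preference:

```python
# Illustrative only: a toy version of the inference behind inverse
# reinforcement learning (IRL). We observe an agent's meal choices and fit
# the reward difference that best explains them, assuming the agent is
# "noisily rational" (softmax choice). All numbers are invented to mirror
# the healthy-vs-greasy example in the text.

import math

def choice_prob_healthy(reward_diff, rationality=1.0):
    """P(choose healthy) under a softmax model, where
    reward_diff = reward(healthy) - reward(greasy)."""
    return 1.0 / (1.0 + math.exp(-rationality * reward_diff))

def log_likelihood(reward_diff, healthy_count, greasy_count):
    p = choice_prob_healthy(reward_diff)
    return healthy_count * math.log(p) + greasy_count * math.log(1 - p)

def fit_reward_diff(healthy_count, greasy_count):
    """Grid-search maximum-likelihood estimate of the reward difference."""
    candidates = [x / 100.0 for x in range(-300, 301)]
    return max(candidates,
               key=lambda r: log_likelihood(r, healthy_count, greasy_count))

if __name__ == "__main__":
    # Suppose the agent "really" values healthy food, but stress biases many
    # choices: out of 100 observed meals, only 40 were healthy.
    estimate = fit_reward_diff(healthy_count=40, greasy_count=60)
    print(f"Inferred reward(healthy) - reward(greasy): {estimate:+.2f}")
    # The naive fit concludes the agent mildly prefers greasy food, because
    # it has no way to separate the agent's values from its biases.
```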

 

Distinguishing Biases and Values

Owain Evans, a researcher at the Future of Humanity Institute, and Andreas Stuhlmüller, president of the research non-profit Ought, have explored the limitations of IRL in teaching human values to AI systems. In particular, their research exposes how cognitive biases make it difficult for AIs to learn human preferences through interactive learning.

Evans elaborates: “We want an agent to pursue some set of goals, and we want that set of goals to coincide with human goals. The question then is, if the agent just gets to watch humans and try to work out their goals from their behavior, how much are biases a problem there?”

In some cases, AIs will be able to understand patterns of common biases. Evans and Stuhlmüller discuss the psychological literature on biases in their paper, Learning the Preferences of Ignorant, Inconsistent Agents, and in their online book, agentmodels.org. An example of a common pattern discussed in agentmodels.org is “time inconsistency.” Time inconsistency is the idea that people’s values and goals change depending on when you ask them. In other words, “there is an inconsistency between what you prefer your future self to do and what your future self prefers to do.”

Examples of time inconsistency are everywhere. For one, most people value waking up early and exercising if you ask them before bed. But come morning, when it’s cold and dark out and they didn’t get those eight hours of sleep, they often value the comfort of their sheets and the virtues of relaxation. From waking up early to avoiding alcohol, eating healthy, and saving money, humans tend to expect more from their future selves than their future selves are willing to do.
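One standard way to model this pattern, discussed in agentmodels.org, is hyperbolic discounting, in which rewards are devalued in proportion to how far away they are. The simplified Python sketch below (with invented reward values) shows the resulting preference reversal: judged the night before, the larger, later reward wins; judged at the last minute, the smaller, sooner one does:

```python
# Illustrative only: time inconsistency via hyperbolic discounting.
# A hyperbolic discounter weights a reward that is `delay` steps away by
# 1 / (1 + k * delay). Evaluated far in advance, the larger-later reward
# wins; evaluated at the last minute, the smaller-sooner reward wins.

def discounted_value(reward, delay, k=1.0):
    return reward / (1.0 + k * delay)

def preferred_option(now, k=1.0):
    # Option A: a small reward at time 5 (e.g., staying in a warm bed).
    # Option B: a larger reward at time 6 (e.g., the payoff of a morning run).
    a = discounted_value(reward=4.0, delay=5 - now, k=k)
    b = discounted_value(reward=6.0, delay=6 - now, k=k)
    return "small-sooner" if a > b else "larger-later"

if __name__ == "__main__":
    print("Decided the night before (t=0):  ", preferred_option(now=0))
    print("Decided at the last minute (t=5):", preferred_option(now=5))
```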

With systematic, predictable patterns like time inconsistency, IRL could make progress with AI systems. But often our biases aren’t so clear. According to Evans, deciphering which actions coincide with someone’s values and which actions spring from biases is difficult or even impossible in general.

“Suppose you promised to clean the house but you get a last minute offer to party with a friend and you can’t resist,” he suggests. “Is this a bias, or your value of living for the moment? This is a problem for using only inverse reinforcement learning to train an AI — how would it decide what are biases and values?”

 

Learning the Correct Values

Despite this conundrum, understanding human values and preferences is essential for AI systems, and developers have a very practical interest in training their machines to learn these preferences.

Already today, popular websites use AI to learn human preferences. With YouTube and Amazon, for instance, machine-learning algorithms observe your behavior and predict what you will want next. But while these recommendations are often useful, they have unintended consequences.

Consider the case of Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina. After watching videos of Trump rallies to learn more about his voter appeal, Tufekci began seeing white nationalist propaganda and Holocaust denial videos on her “autoplay” queue. She soon realized that YouTube’s algorithm, optimized to keep users engaged, predictably suggests more extreme content as users watch more videos. This led her to call the website “The Great Radicalizer.”

This value misalignment in YouTube algorithms foreshadows the dangers of interactive learning with more advanced AI systems. Instead of optimizing advanced AI systems to appeal to our short-term desires and our attraction to extremes, designers must be able to optimize them to understand our deeper values and enhance our lives.

Evans suggests that we will want AI systems that can reason through our decisions better than humans can, understand when we are making biased decisions, and “help us better pursue our long-term preferences.” However, this will mean that AIs sometimes suggest things that seem bad to humans at first blush.

One can imagine an AI system suggesting a brilliant, counterintuitive modification to a business plan, and the human just finds it ridiculous. Or maybe an AI recommends a slightly longer, stress-free driving route to a first date, but the anxious driver takes the faster route anyway, unconvinced.

To help humans understand AIs in these scenarios, Evans and Stuhlmüller have researched how AI systems could reason in ways that are comprehensible to humans and can ultimately improve upon human reasoning.

One method (invented by Paul Christiano) is called “amplification,” where humans use AIs to help them think more deeply about decisions. Evans explains: “You want a system that does exactly the same kind of thinking that we would, but it’s able to do it faster, more efficiently, maybe more reliably. But it should be a kind of thinking that if you broke it down into small steps, humans could understand and follow.”

A second concept is “factored cognition” – the idea of breaking sophisticated tasks into small, understandable steps. According to Evans, it’s not yet clear how generally factored cognition can succeed. Sometimes humans can break their reasoning down into small steps, but often we rely on intuition, which is much more difficult to break down.
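As a rough illustration of the decomposition idea (a hypothetical toy example, not Evans and Stuhlmüller’s actual system; the task, decomposition rule, and price data are placeholders), one can picture a task being split recursively into sub-questions, each small enough for a human to check, with the answers then recombined:

```python
# Toy sketch of factored cognition: break a task into small, individually
# checkable steps and recombine the answers. The task, decomposition rule,
# and price data are hypothetical placeholders.

PRICES = {"cost of widget": 3.0, "cost of casing": 1.5}

def is_atomic(task):
    """A task is atomic when it can be answered directly from a lookup."""
    return task.strip() in PRICES

def decompose(task):
    """'total cost of widget + casing' -> ['cost of widget', 'cost of casing']"""
    _, items = task.split("of", 1)
    return ["cost of " + part.strip() for part in items.split("+")]

def solve(task):
    if is_atomic(task):
        return PRICES[task.strip()]           # a small step a human can verify
    sub_answers = [solve(t) for t in decompose(task)]
    return sum(sub_answers)                    # recombination is also a small step

print(solve("total cost of widget + casing"))  # -> 4.5
```

Whether real-world reasoning decomposes this cleanly is exactly the open question Evans raises about intuition.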

 

Specifying the Problem

Evans and Stuhlmüller have started a research project on amplification and factored cognition, but they haven’t solved the problem of human biases in interactive learning – rather, they’ve set out to precisely lay out these complex issues for other researchers.

“It’s more about showing this problem in a more precise way than people had done previously,” says Evans. “We ended up getting interesting results, but one of our results in a sense is realizing that this is very difficult, and understanding why it’s difficult.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa


To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

Although politicians at the U.N. General Assembly, just blocks away, highlighted the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals, which have nearly been unleashed by mistake dozens of times in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn’t discuss their wartime actions out of fear that it might displease their government, and so Elena first heard about her father’s heroic actions only in 1998 – 15 years after the event. Even then, she and her brother learned what their father had done only because a German journalist had reached out to the family for an article he was working on. It’s unclear whether Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.

Last year’s Nobel Peace Prize Laureate Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions get replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him,” said FLI co-founder Anthony Aguirre. Last year’s award was given to Vasili Arkhipov, who singlehandedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.

Stanislav Petrov around the time he helped avert WWIII

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich

Our AI systems work remarkably well in closed worlds. That’s because these environments contain a set number of variables, making the worlds perfectly known and perfectly predictable. In these micro environments, machines only encounter objects that are familiar to them. As a result, they always know how they should act and respond. Unfortunately, these same systems quickly become confused when they are deployed in the real world, as many objects aren’t familiar to them. This is a bit of a problem because, when an AI system becomes confused, the results can be deadly.

Consider, for example, a self-driving car that encounters a novel object. Should it speed up, or should it slow down? Or consider an autonomous weapon system that sees an anomaly. Should it attack, or should it power down? Each of these examples involves a life-and-death decision, and they reveal why, if we are to deploy advanced AI systems in real-world environments, we must be confident that they will behave correctly when they encounter unfamiliar objects.

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that solving this identification problem begins with ensuring that our AI systems aren’t too confident — that they recognize when they encounter a foreign object and don’t misidentify it as something that they are acquainted with. To achieve this, Dietterich asserts that we must move away from (or, at least, greatly modify) the discriminative training methods that currently dominate AI research.

However, to do that, we must first address the “open category problem.”

 

Understanding the Open Category Problem

When driving down the road, we can encounter a nearly infinite number of anomalies. Perhaps a violent storm will arise, and hail will start to fall. Perhaps our vision will become impeded by smoke or excessive fog. Although these encounters may be unexpected, the human brain is able to easily analyze new information and decide on the appropriate course of action — we will recognize a newspaper drifting across the road and, instead of abruptly slamming on the brakes, continue on our way.

Because of the way that they are programmed, our computer systems aren’t able to do the same.

“The way we use machine learning to create AI systems and software these days generally uses something called ‘discriminative training,’” Dietterich explains, “which implicitly assumes that the world consists of only, say, a thousand different kinds of objects.” This means that, if a machine encounters a novel object, it will assume that it must be one of the thousand things that it was trained on. As a result, such systems misclassify all foreign objects.

This is the “open category problem” that Dietterich and his team are attempting to solve. Specifically, they are trying to ensure that our machines don’t assume that they have encountered every possible object, but are, instead, able to reliably detect — and ultimately respond to — new categories of alien objects.

Dietterich notes that, from a practical standpoint, this means creating an anomaly detection algorithm that assigns an anomaly score to each object detected by the AI system. That score must be compared against a set threshold and, if the anomaly score exceeds the threshold, the system will need to raise an alarm. Dietterich states that, in response to this alarm, the AI system should take a pre-determined safety action. For example, a self-driving car that detects an anomaly might slow down and pull off to the side of the road.
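In rough pseudocode, the pattern looks like the sketch below. This is a simplified illustration, not the team’s algorithm: the nearest-neighbor score, the threshold value, and the safety action are placeholder assumptions.

```python
# Simplified sketch of the anomaly-score / threshold / safety-action pattern.
# The scoring function, threshold, and action here are illustrative placeholders.

import numpy as np

def anomaly_score(x, training_data):
    """Higher score = less like anything seen in training (distance to nearest neighbor)."""
    return float(np.min(np.linalg.norm(training_data - x, axis=1)))

def handle_detection(x, training_data, threshold=2.5):
    if anomaly_score(x, training_data) > threshold:
        return "ALARM: unknown object, execute safety action (e.g., slow down and pull over)"
    return "Known-looking object, proceed normally"

# Hypothetical 2-D feature vectors for the object categories the system was trained on.
known_objects = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])

print(handle_detection(np.array([0.8, 0.7]), known_objects))  # near training data
print(handle_detection(np.array([9.0, 9.0]), known_objects))  # far away -> alarm
```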

 

Creating a Theoretical Guarantee of Safety

There are two challenges to making this method work. First, Dietterich asserts that we need good anomaly detection algorithms. Previously, in order to determine what algorithms work well, the team compared the performance of eight state-of-the-art anomaly detection algorithms on a large collection of benchmark problems.

The second challenge is to set the alarm threshold so that the AI system is guaranteed to detect a desired fraction of the alien objects, such as 99%. Dietterich says that formulating a reliable setting for this threshold is one of the most challenging research problems because there are, potentially, infinite kinds of alien objects. “The problem is that we can’t have labeled training data for all of the aliens. If we had such data, we would simply train the discriminative classifier on that labeled data,” Dietterich says.

To circumvent this labeling issue, the team assumes that the discriminative classifier has access to a representative sample of “query objects” that reflect the larger statistical population. Such a sample could, for example, be obtained by collecting data from cars driving on highways around the world. This sample will include some fraction of unknown objects, and the remaining objects belong to known object categories.

Notably, the data in the sample is not labeled. Instead, the AI system is given an estimate of the fraction of aliens in the sample. And by combining the information in the sample with the labeled training data that was employed to train the discriminative classifier, the team’s new algorithm can choose a good alarm threshold. If the estimated fraction of aliens is known to be an over-estimate of the true fraction, then the chosen threshold is guaranteed to detect the target percentage of aliens (i.e. 99%).

Ultimately, the above is the first method that can give a theoretical guarantee of safety for detecting alien objects, and a paper reporting the results was presented at ICML 2018. “We are able to guarantee, with high probability, that we can find 99% of all of these new objects,” Dietterich says.
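One way to picture the calibration step is the toy sketch below. It is a simplified illustration of the idea only, not the ICML 2018 algorithm or its formal guarantee: given anomaly scores for an unlabeled query sample and an over-estimate of the alien fraction, flag the corresponding top share of scores.

```python
# Toy illustration of setting the alarm threshold from an unlabeled query sample,
# given only an over-estimate of the alien fraction. A simplified sketch, not the
# ICML 2018 algorithm or its formal guarantee.

import numpy as np

def choose_threshold(sample_scores, alien_fraction_overestimate):
    """Flag the highest-scoring `alien_fraction_overestimate` share of the sample."""
    return float(np.quantile(sample_scores, 1.0 - alien_fraction_overestimate))

# Hypothetical query sample: ~95% known objects (low scores), ~5% aliens (high scores).
rng = np.random.default_rng(0)
known_scores = rng.normal(loc=1.0, scale=0.3, size=950)
alien_scores = rng.normal(loc=3.0, scale=0.3, size=50)
sample_scores = np.concatenate([known_scores, alien_scores])

# Assume we only know the alien fraction is at most 8% (an over-estimate of the true 5%).
threshold = choose_threshold(sample_scores, alien_fraction_overestimate=0.08)

print(f"threshold = {threshold:.2f}")
print(f"aliens detected:  {np.mean(alien_scores > threshold):.0%}")
print(f"false alarm rate: {np.mean(known_scores > threshold):.1%}")
```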

In the next stage of their research, Dietterich and his team plan to begin testing their algorithm in a more complex setting. Thus far, they’ve been looking primarily at classification, where the system looks at an image and classifies it. Next, they plan to move to controlling an agent, like a robot or a self-driving car. “At each point in time, in order to decide what action to choose, our system will do a ‘look ahead search’ based on a learned model of the behavior of the agent and its environment. If the look ahead arrives at a state that is rated as ‘alien’ by our method, then this indicates that the agent is about to enter a part of the state space where it is not competent to choose correct actions,” Dietterich says. In response, as previously mentioned, the agent should execute a series of safety actions and request human assistance.
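The control-setting extension can be pictured as a look-ahead wrapper around action selection. The sketch below is hypothetical and assumes a learned transition model and an anomaly scorer like the one above; it is not the team’s implementation.

```python
# Hypothetical sketch of look-ahead safety checking for an agent: roll a learned
# model forward, and if any predicted state looks alien, fall back to a safety action.

def lookahead_is_safe(state, action, model, anomaly_score, threshold, horizon=3):
    """Simulate `horizon` steps of the learned model and flag alien-looking states."""
    s = state
    for _ in range(horizon):
        s = model(s, action)                  # learned transition model (assumed)
        if anomaly_score(s) > threshold:      # predicted state outside known territory
            return False
    return True

def choose_action(state, candidate_actions, model, anomaly_score, threshold):
    safe = [a for a in candidate_actions
            if lookahead_is_safe(state, a, model, anomaly_score, threshold)]
    if not safe:
        return "SAFETY_ACTION"                # e.g., slow down and request human help
    return safe[0]                            # a real agent would pick the best safe action

# Tiny demo with toy stand-ins for the model and scorer:
toy_model = lambda s, a: s + a
toy_score = abs
print(choose_action(0, candidate_actions=[5, 1, -1], model=toy_model,
                    anomaly_score=toy_score, threshold=4))   # -> 1
```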

But what does this safety action actually consist of?

 

Responding to Aliens

Dietterich notes that, once something is identified as an anomaly and the alarm is sounded, the nature of this fallback system will depend on the machine in question, such as whether the AI system is in a self-driving car or an autonomous weapon.

To explain how these secondary systems operate, Dietterich turns to self-driving cars. “In the Google car, if the computers lose power, then there’s a backup system that automatically slows the car down and pulls it over to the side of the road.” However, Dietterich clarifies that stopping isn’t always the best course of action. One may assume that a car should come to a halt if an unidentified object crosses its path; however, if the unidentified object happens to be a blanket of snow on a particularly icy day, hitting the brakes gets more complicated. The system would need to factor in the icy roads, any cars that may be driving behind, and whether those cars can brake in time to avoid a rear-end collision.

But if we can’t predict every eventuality, how can we expect to program an AI system so that it behaves correctly and in a way that is safe?

Unfortunately, there’s no easy answer; however, Dietterich clarifies that there are some general best practices: “There’s no universal solution to the safety problem, but obviously there are some actions that are safer than others. Generally speaking, removing energy from the system is a good idea,” he says. Ultimately, Dietterich asserts that the work of programming safe AI boils down to determining how we want our machines to behave in specific scenarios, and he argues that we need to rearticulate how we characterize this problem, accounting for all the relevant factors, if we are to develop a sound approach.

Dietterich notes that “when we look at these problems, they tend to get lumped under a classification of ‘ethical decision making,’ but what they really are is problems that are incredibly complex. They depend tremendously on the context in which they are operating, the human beings, the other innovations, the other automated systems, and so on. The challenge is correctly describing how we want the system to behave and then ensuring that our implementations actually comply with those requirements.” And he concludes, “the big risk in the future of AI is the same as the big risk in any software system, which is that we build the wrong system, and so it does the wrong thing. Arthur C. Clarke in 2001: A Space Odyssey had it exactly right. The HAL 9000 didn’t ‘go rogue;’ it was just doing what it had been programmed to do.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

European Parliament Passes Resolution Supporting a Ban on Killer Robots


The European Parliament passed a resolution on September 12, 2018 calling for an international ban on lethal autonomous weapons systems (LAWS). The resolution was adopted with 82% of the members voting in favor of it.

Among other things, the resolution calls on its Member States and the European Council “to develop and adopt, as a matter of urgency … a common position on lethal autonomous weapon systems that ensures meaningful human control over the critical functions of weapon systems, including during deployment.”

The resolution also urges Member States and the European Council “to work towards the start of international negotiations on a legally binding instrument prohibiting lethal autonomous weapons systems.”

This call for urgency comes shortly after recent United Nations talks where countries were unable to reach a consensus about whether or not to consider a ban on LAWS. Many hope that statements such as this from leading government bodies could help sway the handful of countries still holding out against banning LAWS.

Daan Kayser of PAX, one of the NGO members of the Campaign to Stop Killer Robots, said, “The voice of the European parliament is important in the international debate. At the UN talks in Geneva this past August it was clear that most European countries see the need for concrete measures. A European parliament resolution will add to the momentum toward the next step.”

The countries that took the strongest stances against a LAWS ban at the recent UN meeting were the United States, Russia, South Korea, and Israel.

 

Scientists’ Voices Are Heard

Also mentioned in the resolution were the many open letters signed by AI researchers and scientists from around the world, who are calling on the UN to negotiate a ban on LAWS.

Two sections of the resolution stated:

“having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organisations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

Toby Walsh, a prominent AI researcher who helped create the letters, said, “It’s great to see politicians listening to scientists and engineers. Starting in 2015, we’ve been speaking loudly about the risks posed by lethal autonomous weapons. The European Parliament has joined the calls for regulation. The challenge now is for the United Nations to respond. We have several years of talks at the UN without much to show. We cannot let a few nations hold the world hostage, to start an arms race with technologies that will destabilize the current delicate world order and that many find repugnant.”

State of California Endorses Asilomar AI Principles


On August 30, the State of California unanimously adopted legislation in support of the Future of Life Institute’s Asilomar AI Principles.

The Asilomar AI Principles are a set of 23 principles intended to promote the safe and beneficial development of artificial intelligence. The principles – which include research issues, ethics and values, and longer-term issues – emerged from a collaboration between AI researchers, economists, legal scholars, ethicists, and philosophers in Asilomar, California in January of 2017.

The Principles are the most widely adopted effort of their kind. They have been endorsed by AI research leaders at Google DeepMind, Google Brain, Facebook, Apple, and OpenAI. Signatories include Demis Hassabis, Yoshua Bengio, Elon Musk, Ray Kurzweil, the late Stephen Hawking, Tasha McCauley, Joseph Gordon-Levitt, Jeff Dean, Tom Gruber, Anthony Romero, Stuart Russell, and more than 3,800 other AI researchers and experts.

With ACR 215 passing the State Senate with unanimous support, the California Legislature has now been added to that list.

Assemblyman Kevin Kiley, who led the effort, said, “By endorsing the Asilomar Principles, the State Legislature joins in the recognition of shared values that can be applied to AI research, development, and long-term planning — helping to reinforce California’s competitive edge in the field of artificial intelligence, while assuring that its benefits are manifold and widespread.”

The third Asilomar AI principle indicates the importance of constructive and healthy exchange between AI researchers and policymakers, and the passing of this resolution highlights the value of that endeavor. While the principles do not establish enforceable policies or regulations, the action taken by the California Legislature is an important and historic show of support across sectors towards a common goal of enabling safe and beneficial AI.

The Future of Life Institute (FLI), the nonprofit organization that led the creation of the Asilomar AI Principles, is thrilled by this latest development, and encouraged that the principles continue to serve as guiding values for the development of AI and related public policy.

“By endorsing the Asilomar AI Principles, California has taken a historic step towards the advancement of beneficial AI and highlighted its leadership of this transformative technology,” said Anthony Aguirre, cofounder of FLI and physics professor at the University of California, Santa Cruz. “We are grateful to Assemblyman Kevin Kiley for leading the charge and to the dozens of co-authors of this resolution for their foresight on this critical matter.”

Profound societal impacts of AI are no longer merely a question of science fiction, but are already being realized today – from facial recognition technology to drone surveillance and the spread of targeted disinformation campaigns. Advances in AI are helping to connect people around the world, improve productivity and efficiencies, and uncover novel insights. However, AI may also pose safety and security threats, exacerbate inequality, and constrain privacy and autonomy.

“New norms are needed for AI that counteract dangerous race dynamics and instead center on trust, security, and the common good,” says Jessica Cussins, AI Policy Lead for FLI. “Having the official support of California helps establish a framework of shared values between policymakers, AI researchers, and other stakeholders. FLI encourages other governmental bodies to support the 23 principles and help shape an exciting and equitable future.”

Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity


Finance, education, medicine, programming, the arts — artificial intelligence is set to disrupt nearly every sector of our society. Governments and policy experts have started to realize that, in order to prepare for this future, in order to minimize the risks and ensure that AI benefits humanity, we need to start planning for the arrival of advanced AI systems today.

Although we are still in the early moments of this movement, the landscape looks promising. Several nations and independent firms have already started to strategize and develop policies for the governance of AI. Last year, the UAE appointed the world’s first Minister of Artificial Intelligence, and Germany took smaller, but similar, steps in 2017, when the Ethics Commission at the German Ministry of Transport and Digital Infrastructure developed the world’s first set of regulatory guidelines for automated and connected driving.

This work is notable; however, these efforts have yet to coalesce into a larger governance framework that extends beyond national boundaries. Nick Bostrom’s Strategic Artificial Intelligence Research Center seeks to assist in resolving this issue by understanding, and ultimately shaping, the strategic landscape of long-term AI development on a global scale.

 

Developing a Global Strategy: Where We Are Today

The Strategic Artificial Intelligence Research Center was founded in 2015 with the knowledge that, to truly circumvent the threats posed by AI, the world needs a concerted effort focused on tackling unsolved problems related to AI policy and development. The Governance of AI Program (GovAI), co-directed by Bostrom and Allan Dafoe, is the primary research program that has evolved from this center. Its central mission, as articulated by the directors, is to “examine the political, economic, military, governance, and ethical dimensions of how humanity can best navigate the transition to such advanced AI systems.” In this respect, the program is focused on strategy — on shaping the social, political, and governmental systems that influence AI research and development — as opposed to focusing on the technical hurdles that must be overcome in order to create and program safe AI.

To develop a sound AI strategy, the program works with social scientists, politicians, corporate leaders, and artificial intelligence/machine learning engineers to address questions of how we should approach the challenge of governing artificial intelligence. In a recent 80,000 Hours podcast with Rob Wiblin, Dafoe outlined how the team’s research shapes up from a practical standpoint, asserting that the work focuses on answering questions that fall under three primary categories:

  • The Technical Landscape: This category seeks to answer all the questions that are related to research trends in the field of AI with the aim of understanding what future technological trajectories are plausible and how these trajectories affect the challenges of governing advanced AI systems.
  • AI Politics: This category focuses on questions that are related to the dynamics of different groups, corporations, and governments pursuing their own interests in relation to AI, and it seeks to understand what risks might arise as a result and how we may be able to mitigate these risks.
  • AI Governance: This category examines positive visions of a future in which humanity coordinates to govern advanced AI in a safe and robust manner. This raises questions such as how this framework should operate and what values we would want to encode in a governance regime.

The above categories provide a clearer way of understanding the various objectives of those invested in researching AI governance and strategy; however, these categories are fairly large in scope. To help elucidate the work they are performing, Jade Leung, a researcher with GovAI and a DPhil candidate in International Relations at the University of Oxford, outlined some of the specific workstreams that the team is currently pursuing.

One of the most intriguing areas of research is the Chinese AI Strategy workstream. This line of research examines things like China’s AI capabilities vis-à-vis other countries, official documentation regarding China’s AI policy, and the various power dynamics at play in the nation with an aim of understanding, as Leung summarizes, “China’s ambition to become an AI superpower and the state of Chinese thinking on safety, cooperation, and AGI.” Ultimately, GovAI seeks to outline the key features of China’s AI strategy in order to understand one of the most important actors in AI governance. The program published Deciphering China’s AI Dream in March of 2018, a report that analyzes new features of China’s national AI strategy, and it plans to build on this research in the near future.

Another workstream is Firm-Government Cooperation, which examines the role that private firms play in relation to the development of advanced AI and how these players are likely to interact with national governments. In a recent talk at EA Global San Francisco, Leung focused on how private industry is already playing a significant role in AI development and why, when considering how to govern AI, private players must be included in strategy considerations as a vital part of the equation. The description of the talk succinctly summarizes the key focal areas, noting that “private firms are the only prominent actors that have expressed ambitions to develop AGI, and lead at the cutting edge of advanced AI research. It is therefore critical to consider how these private firms should be involved in the future of AI governance.”

Other work that Leung highlighted includes modeling technology race dynamics and analyzing the distribution of AI talent and hardware globally.

 

The Road Ahead

When asked how much confidence she has that AI researchers will ultimately coalesce and be successful in their attempts to shape the landscape of long-term AI development internationally, Leung was cautious with her response, noting that far more hands are needed. “There is certainly a greater need for more researchers to be tackling these questions. As a research area as well as an area of policy action, long-term safe and robust AI governance remains a neglected mission,” she said.

Additionally, Leung noted that, at this juncture, although some concrete research is already underway, a lot of the work is focused on framing issues related to AI governance and, in so doing, revealing the various avenues in need of research. As a result, the team doesn’t yet have concrete recommendations for specific actions governing bodies should commit to, as further foundational analysis is needed. “We don’t have sufficiently robust and concrete policy recommendations for the near term as it stands, given the degrees of uncertainty around this problem,” she said.

However, both Leung and Dafoe are optimistic and assert that this information gap will likely change — and rapidly. Researchers across disciplines are increasingly becoming aware of the significance of this topic, and as more individuals begin researching and participating in this community, the various avenues of research will become more focused. “In two years, we’ll probably have a much more substantial research community. But today, we’re just figuring out what are the most important and tractable problems and how we can best recruit to work on those problems,” Dafoe told Wiblin.

The assurances that a more robust community will likely form soon are encouraging; however, questions remain regarding whether this community will come together with enough time to develop a solid governance framework. As Dafoe notes, we have never witnessed an intelligence explosion before, so we have no examples to look to for guidance when attempting to develop projections and timelines regarding when we will have advanced AI systems.

Ultimately, the lack of projections is precisely why we must significantly invest in AI strategy research in the immediate future. As Bostrom notes in Superintelligence: Paths, Dangers, Strategies, AI is not simply a disruptive technology; it is likely the most disruptive technology humanity will ever encounter: “[Superintelligence] is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Edit: The title of the article has been changed to reflect the fact that this is not about regulating AI.

Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman

From Uber’s advanced computer vision system to Netflix’s innovative recommendation algorithm, machine learning technologies are nearly omnipresent in our society. They filter our emails, personalize our newsfeeds, update our GPS systems, and drive our personal assistants. However, despite the fact that such technologies are leading a revolution in artificial intelligence, some would contend that these machine learning systems aren’t truly intelligent.

The argument, in its most basic sense, centers on the fact that machine learning evolved from theories of pattern recognition and, as such, the capabilities of such systems generally extend to just one task and are centered on making predictions from existing data sets. AI researchers like Rodney Brooks, a former professor of Robotics at MIT, argue that true reasoning, and true intelligence, is several steps beyond these kinds of learning systems.

But if we already have machines that are proficient at learning through pattern recognition, how long will it be until we have machines that are capable of true reasoning, and how will AI evolve once it reaches this point?

Understanding the pace and path that artificial reasoning will follow over the coming decades is an important part of ensuring that AI is safe and does not pose a threat to humanity. Before we can assess the feasibility of machine reasoning across different categories of cognition, however, or the path that artificial intelligences will likely follow as they continue to evolve, we must first define exactly what is meant by the term “reasoning.”

 

Understanding Intellect

Bart Selman is a professor of Computer Science at Cornell University. His research is dedicated to understanding the evolution of machine reasoning. He describes reasoning as taking pieces of information, combining them, and using the result to draw logical conclusions or derive new information.

Sports provide a ready example of what machine reasoning is really all about. When humans see soccer players on a field kicking a ball about, they can, with very little difficulty, ascertain that these individuals are soccer players. Today’s AI can also make this determination. However, humans can also see a person in a soccer outfit riding a bike down a city street, and they would still be able to infer that the person is a soccer player. Today’s AIs probably wouldn’t be able to make this connection.

This process — of taking information that is known, uniting it with background knowledge, and making inferences regarding information that is unknown or uncertain — is a reasoning process. To this end, Selman notes that machine reasoning is not about making predictions; it’s about using logical techniques (like the abductive process mentioned above) to answer a question or form an inference.
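A tiny forward-chaining sketch can make the contrast with prediction concrete. The facts and rules below are illustrative assumptions, not Selman’s reasoning engines: the program never observes that the cyclist is a soccer player, it derives that conclusion by combining observations with background knowledge.

```python
# Toy forward-chaining reasoner: combine observed facts with background rules to
# derive conclusions that were never observed directly. Facts and rules are
# illustrative assumptions, not Selman's actual reasoning engines.

facts = {("wears", "alex", "soccer_kit"),
         ("rides", "alex", "bike"),
         ("on", "alex", "city_street")}

# Background knowledge: (condition over current facts, conclusion to add).
rules = [
    (lambda f: ("wears", "alex", "soccer_kit") in f, ("is_a", "alex", "soccer_player")),
    (lambda f: ("is_a", "alex", "soccer_player") in f, ("trains", "alex", "regularly")),
]

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("is_a", "alex", "soccer_player") in facts)   # True: inferred, never observed
print(("trains", "alex", "regularly") in facts)     # True: a chained inference
```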

Since humans do not typically reason through pattern recognition and synthesis, but by using logical processes like induction, deduction, and abduction, Selman asserts that machine reasoning is a form of intelligence that is more like human intelligence. He continues by noting that the creation of machines that are endowed with more human-like reasoning processes, and breaking away from traditional pattern recognition approaches, is the key to making systems that not only predict outcomes but also understand and explain their solutions. However, Selman notes that making human-level AI is also the first step to attaining super-human levels of cognition.

And due to the existential threat this could pose to humanity, it is necessary to understand exactly how this evolution will unfold.

 

The Making of a (super)Mind

It may seem like truly intelligent AI is a problem for future generations. Yet the consensus among AI experts is that rapid progress is already being made in machine reasoning. In fact, many researchers assert that human-level cognition will be achieved across a number of metrics in the next few decades. Questions remain, however, regarding how AI systems will advance once artificial general intelligence is realized. A key question is whether these advances can accelerate further and scale up to super-human intelligence.

This process is something that Selman has devoted his life to studying. Specifically, he researches the pace of AI scalability across different categories of cognition and the feasibility of super-human levels of cognition in machines.

Selman states that attempting to make blanket statements about when and how machines will surpass humans is a difficult task, as machine cognition is disjointed and does not draw a perfect parallel with human cognition. “In some ways, machines are far beyond what humans can do,” Selman explains, “for example, when it comes to certain areas in mathematics, machines can take billions of reasoning steps and see the truth of a statement in a fraction of a second. The human has no ability to do that kind of reasoning.”

However, when it comes to the kind of reasoning mentioned above, where meaning is derived from deductive or inductive processes that are based on the integration of new data, Selman says that computers are somewhat lacking. “In terms of the standard reasoning that humans are good at, they are not there yet,” he explains. Today’s systems are very good at some tasks, sometimes far better than humans, but only in a very narrow range of applications.

Given these variances, how can we determine how AI will evolve in various areas, and how these systems will accelerate once general human-level AI is achieved?

For his work, Selman relies on computational complexity theory, which has two primary functions. First, it can be used to characterize the efficiency of an algorithm used for solving instances of a problem. As Johns Hopkins’ Leslie Hall notes, “broadly stated, the computational complexity of an algorithm is a measure of how many steps the algorithm will require in the worst case for an instance [of a problem] of a given size.” Second, it is a method of classifying tasks (computational problems) according to their inherent difficulty. These two features provide us with a way of determining how artificial intelligences will likely evolve by offering a formal method of determining the easiest, and therefore most probable, areas of advancement. It also provides key insights into the speed of this scalability.
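As a back-of-the-envelope illustration of worst-case step counts (my own toy example, not part of Selman’s analysis), compare a brute-force subset-sum search, which may examine all 2^n subsets of an instance of size n, with a simple membership check, which examines at most n items:

```python
# Toy illustration of worst-case step counts (not part of Selman's analysis):
# brute-force subset-sum examines up to 2**n subsets, while a membership check
# examines at most n items.

from itertools import combinations

def subset_sum_bruteforce(numbers, target):
    """Return (found, steps), counting every subset examined."""
    steps = 0
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            steps += 1
            if sum(combo) == target:
                return True, steps
    return False, steps

def contains(numbers, target):
    """Return (found, steps) for a linear scan."""
    steps = 0
    for x in numbers:
        steps += 1
        if x == target:
            return True, steps
    return False, steps

nums = list(range(1, 13))                    # an instance of size n = 12
print(subset_sum_bruteforce(nums, 10**6))    # worst case: (False, 4096) = 2**12 steps
print(contains(nums, 10**6))                 # worst case: (False, 12) steps
```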

Ultimately, this work is important, as the abilities of our machines are fast-changing. As Selman notes, “The way that we measure the capabilities of programs that do reasoning is by looking at the number of facts that they can combine quickly. About 25 years ago, the best reasoning engines could combine approximately 200 or 300 facts and deduce new information from that. The current reasoning engines can combine millions of facts.” This exponential growth has great significance when it comes to the scale-up to human levels of machine reasoning.

As Selman explains, given the present abilities of our AI systems, it may seem like machines with true reasoning capabilities are still some ways off; however, thanks to the rapid rate of technological progress, we will likely start to see machines that have intellectual abilities that vastly outpace our own in rather short order. “Ten years from now, we’ll still find them [artificially intelligent machines] very much lacking in understanding, but twenty or thirty years from now, machines will have likely built up the same knowledge that a young adult has,” Selman notes. Anticipating exactly when this transition will occur will help us better understand the actions that we should take, and the research that the current generation must invest in, in order to be prepared for this advancement.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust

$2 million has been allocated to fund research that anticipates artificial general intelligence (AGI) and how it can be designed beneficially. The money was donated by Elon Musk to cover grants through the Future of Life Institute (FLI). Ten grants have been selected for funding.

Said Tegmark, “I’m optimistic that we can create an inspiring high-tech future with AI as long as we win the race between the growing power of AI and the wisdom with which we manage it. This research is to help develop that wisdom and increase the likelihood that AGI will be the best rather than the worst thing to happen to humanity.”

Today’s artificial intelligence (AI) is still quite narrow. That is, it can only accomplish narrow sets of tasks, such as playing chess or Go, driving a car, performing an Internet search, or translating languages. While the AI systems that master each of these tasks can perform them at superhuman levels, they can’t learn a new, unrelated skill set (e.g. an AI system that can search the Internet can’t learn to play Go with only its search algorithms).

These AI systems lack that “general” ability that humans have to make connections between disparate activities and experiences and to apply knowledge to a variety of fields. However, a significant number of AI researchers agree that AI could achieve a more “general” intelligence in the coming decades. No one knows how AI that’s as smart or smarter than humans might impact our lives, whether it will prove to be beneficial or harmful, how we can design it safely, or even how to prepare society for advanced AI. And many researchers worry that the transition could occur quickly.

Anthony Aguirre, co-founder of FLI and physics professor at UC Santa Cruz, explains, “The breakthroughs necessary to have machine intelligences as flexible and powerful as our own may take 50 years. But with the major intellectual and financial resources now being directed at the problem it may take much less. If or when there is a breakthrough, what will that look like? Can we prepare? Can we design safety features now, and incorporate them into AI development, to ensure that powerful AI will continue to benefit society? Things may move very quickly and we need research in place to make sure they go well.”

Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more. As the request for proposals stated, “The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.”

FLI hopes that this round of grants will help ensure that AI remains beneficial as it becomes increasingly intelligent. The full list of FLI recipients and project titles includes:

Primary Investigator | Project Title | Amount Recommended | Email
Allan Dafoe, Yale University | Governance of AI Programme | $276,000 | allan.dafoe@yale.edu
Stefano Ermon, Stanford University | Value Alignment and Multi-agent Inverse Reinforcement Learning | $100,000 | ermon@cs.stanford.edu
Owain Evans, Oxford University | Factored Cognition: Amplifying Human Cognition for Safely Scalable AGI | $225,000 | owain.evans@philosophy.ox.ac.uk
The Anh Han, Teesside University | Incentives for Safety Agreement Compliance in AI Race | $224,747 | t.han@tees.ac.uk
Jose Hernandez-Orallo, University of Cambridge | Paradigms of Artificial General Intelligence and Their Associated Risks | $220,000 | jorallo@dsic.upv.es
Marcus Hutter, Australian National University | The Control Problem for Universal AI: A Formal Investigation | $276,000 | marcus.hutter@anu.edu.au
James Miller, Smith College | Utility Functions: A Guide for Artificial General Intelligence Theorists | $78,289 | jdmiller@smith.edu
Dorsa Sadigh, Stanford University | Safe Learning and Verification of Human-AI Systems | $250,000 | dorsa@cs.stanford.edu
Peter Stone, University of Texas | Ad hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior | $200,000 | pstone@cs.utexas.edu
Josh Tenenbaum, MIT | Reverse Engineering Fair Cooperation | $150,000 | jbt@mit.edu

 

Some of the grant recipients offered statements about why they’re excited about their new projects:

“The team here at the Governance of AI Program are excited to pursue this research with the support of FLI. We’ve identified a set of questions that we think are among the most important to tackle for securing robust governance of advanced AI, and strongly believe that with focused research and collaboration with others in this space, we can make productive headway on them.” -Allan Dafoe

“We are excited about this project because it provides a first unique and original opportunity to explicitly study the dynamics of safety-compliant behaviours within the ongoing AI research and development race, and hence potentially leading to model-based advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants. It also provides an important opportunity to validate our prior results on the importance of commitments and other mechanisms of trust in inducing global pro-social behavior, thereby further promoting AI for the common good.” -The Anh Han

“We are excited about the potentials of this project. Our goal is to learn models of humans’ preferences, which can help us build algorithms for AGIs that can safely and reliably interact and collaborate with people.” -Dorsa Sadigh

This is FLI’s second grant round. The first launched in 2015, and a comprehensive list of papers, articles and information from that grant round can be found here. Both grant rounds are part of the original $10 million that Elon Musk pledged to AI safety research.

FLI cofounder, Viktoriya Krakovna, also added: “Our previous grant round promoted research on a diverse set of topics in AI safety and supported over 40 papers. The next grant round is more narrowly focused on research in AGI safety and strategy, and I am looking forward to great work in this area from our new grantees.”

Learn more about these projects here.

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) – After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions associated with LAWS, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain – which could become destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS at the UN. By the most recent meeting in April, twenty-six countries had announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

 
