Assessing the Benefits & Risks
By Jolene Creighton
Most people seem to understand that malaria is a pressing problem, one that continues to menace a number of areas around the world. However, most would likely be shocked to learn the true scale of the tragedy. Consider 2017 alone: more than 200 million people were diagnosed with malaria, and by the year’s close, nearly half a million had died of the disease. Over the course of the 20th century, researchers estimate that malaria claimed somewhere between 150 million and 300 million lives. Even the lowest figure exceeds the combined death tolls of World War I, World War II, the Vietnam War, and the Korean War.
Although its pace has slowed in recent years, according to the World Health Organization, malaria remains one of the leading causes of death in children under five. However, there is new hope, and it comes in the form of CRISPR gene drives.
With this technology, many researchers believe humanity could finally eradicate malaria, saving millions of lives and, according to the World Health Organization, trillions of dollars in associated healthcare costs. The challenge isn’t so much a technical one. If scientists needed to deploy CRISPR gene drives in the near future, Ethan Bier, a Distinguished Professor of Cell and Developmental Biology at UC San Diego, notes that they could reliably do so.
However, there’s a hitch. In order to eradicate malaria, we would need to use anti-malaria gene drives to target three specific species (maybe more) and force them into extinction. This would be one of the most audacious attempts by humans to engineer the planet’s ecosystem, a realm where humans already have a checkered past. Such a use sounds highly controversial, and of course, it is.
Regardless of whether the technology is deployed to save a species or to force one into extinction, a number of scientists are wary. Gene drives permanently alter an entire population. In many cases, there is no going back. If scientists fail to properly anticipate all of the effects and consequences, the impact on a particular ecological habitat — and the world at large — could be dramatic.
Rewriting Organisms: Understanding CRISPR
This degree of genetic targeting is only possible because of the unification of two distinct gene editing technologies: CRISPR/Cas9 and gene drives. On their own, each of these tools is powerful enough to dramatically alter a gene pool. Together, they can erase that pool entirely.
The first part of this equation is CRISPR/Cas9. More commonly known as “CRISPR,” this technology is most easily understood as a pair of molecular scissors. CRISPR, which stands for “Clustered Regularly Interspaced Short Palindromic Repeats,” was adapted from a naturally occurring defense system found in bacteria.
When a virus invades a host bacterium and is defeated, the bacterium captures some of the virus’ genetic material and merges snippets of it into its own genome. The viral genetic material is then used to make guide RNAs, which target and bind to complementary genetic sequences. When a new virus invades, a guide RNA finds the matching sequence on the attacking virus and attaches itself to that portion of the genome. From there, an enzyme known as Cas9 cuts it apart, and the virus is destroyed.
Lab-made CRISPR allows humans to accomplish much the same — cutting any region of a genome with relatively high precision and accuracy, often disabling the cut sequence in the process. However, scientists can go a step further than nature. After a cut is made using CRISPR, scientists can co-opt the cell’s own repair machinery to replace the excised segment of DNA with a customized DNA sequence, i.e., a customized gene.
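The matching-and-cutting step can be illustrated with a short toy sketch in Python. The sequences and the short guide below are invented purely for the example, and the model is a deliberate simplification: real guide RNAs are roughly 20 nucleotides long, and Cas9 also requires an adjacent PAM motif, which this toy ignores.

```python
def cas9_cut(genome: str, guide: str, cut_offset: int = 3):
    """Toy model of CRISPR targeting: locate the guide's matching site in the
    genome and split the strand near the end of the match (real Cas9 cuts
    roughly 3 bp from the PAM end of the target site)."""
    site = genome.find(guide)
    if site == -1:
        return None  # no matching sequence: nothing is cut
    cut = site + len(guide) - cut_offset
    return genome[:cut], genome[cut:]

# Invented sequences for illustration only.
genome = "ATGGCGTACCTTGACTGGAAACCC"
guide = "TACCTTGACTGG"

left, right = cas9_cut(genome, guide)
print(left, "|", right)  # ATGGCGTACCTTGAC | TGGAAACCC

# After the cut, the cell's repair machinery can be co-opted to paste in a
# customized sequence (here just a placeholder):
edited = left + "GATTACA" + right
```

The key point the sketch captures is that the guide determines *where* the cut lands: change the guide, and the same machinery targets a different site.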
If genetic changes are made in somatic cells (the non-reproductive cells of a living organism) — a process known as “somatic gene editing” — it only affects the organism in which the genetic changes were made. However, if the genetic changes are made in the germline (the sequence of cells that develop into eggs and sperm) — a process known as “germline editing” — then the edited gene can spread to the organism’s offspring. This means that the synthetic changes — for good or bad — could permanently enter the gene pool.
But by coupling CRISPR with gene drives, scientists can do more than spread a gene to the next generation — they can force it through an entire species.
Rewriting Species: Understanding Gene Drives
Most species on our planet carry two copies of each of their genes. During the reproductive cycle, one of the two copies is passed on to the next generation. Because this selection occurs randomly in nature, any given copy has about a 50/50 chance of being passed down.
Gene drives change those odds by increasing the probability that a specific gene (or suite of genes) will be inherited. Surprisingly, scientists have known about gene drive systems since the late 1800s. They occur naturally in the wild thanks to something known as “selfish genetic elements” (“selfish genes”). Unlike most genes, which wait patiently for nature to randomly select them for propagation, selfish genes use a variety of mechanisms to manipulate the reproductive process and ensure that they are passed down to more than 50% of offspring.
One way that this can be achieved is through segregation distorters, which alter the replication process so that a gene sequence is replicated more frequently than others. Transposable elements, on the other hand, allow genes to move to additional locations in the genome. In both instances, the selfish genes use different mechanisms to increase their presence on the genome and, in so doing, improve their odds of being passed down.
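The difference between fair (Mendelian) inheritance and biased (driven) inheritance can be made concrete with a small simulation. This is a deliberately crude toy model of our own, not the actual molecular mechanism: each offspring inherits the allele with a fixed probability whenever at least one randomly drawn parent carries it (0.5 for ordinary inheritance, 0.95 for a drive).

```python
import random

def carrier_frequency(bias, pop_size=10_000, initial=0.05, generations=12):
    """Fraction of the population carrying the allele, generation by generation.
    `bias` is the chance a carrier parent transmits the allele:
    0.5 models ordinary Mendelian inheritance, ~0.95 models a gene drive."""
    freq = initial
    history = [freq]
    for _ in range(generations):
        carriers = 0
        for _ in range(pop_size):
            # Draw two parents at random; the allele can only come from a carrier.
            parent_has_allele = (random.random() < freq) or (random.random() < freq)
            if parent_has_allele and random.random() < bias:
                carriers += 1
        freq = carriers / pop_size
        history.append(freq)
    return history

random.seed(0)
mendelian = carrier_frequency(bias=0.50)  # stays rare
driven = carrier_frequency(bias=0.95)     # sweeps toward fixation
print(f"after 12 generations: Mendelian {mendelian[-1]:.2f}, driven {driven[-1]:.2f}")
```

Starting from 5% carriers, the Mendelian allele stays rare while the driven allele climbs past 90% of the population within a dozen generations, which is precisely the dynamic that makes gene drives so powerful and so hard to recall.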
In the 1960s, scientists first realized that humanity might be able to use a gene drive to dictate the genetic future of a species. Specifically, biologists George Craig, William Hickey, and Robert Vandehey argued that a mass release of male mosquitoes with a gene drive that increased the number of male offspring could reduce the number of females, the sex that transmits malaria, “below the level required for efficient disease transmission.” In other words, the team argued that malaria could be eradicated by using a male-favoring gene drive to push female mosquitoes out of the population.
However, gene editing technologies hadn’t yet been invented. Consequently, gathering a mass of mosquitoes with this specific gene drive was impossible, as scientists were forced to rely on time-consuming and imprecise breeding techniques.
Then, in the 1970s, Paul Berg and his colleagues created the first molecule made of DNA from two different species, and laboratory-based genetic engineering was born. Not long after, in 1992, Margaret Kidwell and José Ribeiro proposed attaching a specific gene to selfish genes in order to drive it through a mosquito population and render the population malaria-resistant.
But despite the theoretical ingenuity of these designs, when it came to deploying them in reality, progress was elusive. Gene editing tools were still quite crude. As a result, they caused a number of off-target edits, where portions of the genome were cut unintentionally and segments of DNA were added in the wrong place. Then CRISPR came along in 2012 and changed everything, making gene editing comparably fast, reliable, and precise.
It didn’t take scientists long to realize that this new technology could be used to create a remarkably powerful, remarkably precise human-made selfish gene. In 2014, Kevin M. Esvelt, Andrea L. Smidler, Flaminia Catteruccia, and George M. Church published their landmark paper outlining this process. In their work, they noted that by coupling gene drive systems with CRISPR, researchers could target specific segments of a genome, insert a gene of their choice, and ultimately ensure that the gene would make its way into almost 100% of the offspring.
Thus, CRISPR gene drives were born, and it is at this juncture that humanity may have acquired the ability to rewrite — and even erase — entire species.
The Making of a Malaria-Free World
Things move fast in the world of genetic engineering. Esvelt and his team outlined the process for creating CRISPR gene drives only in 2014, yet researchers have had working prototypes for nearly as long.
In December of 2015, scientists published a paper announcing the creation of a CRISPR gene drive that made Anopheles stephensi, one of the primary mosquito species responsible for the spread of malaria, resistant to the disease. Notably, the gene drive was just as effective as earlier pioneers had predicted: Although the team started with just two genetically edited males, after only two generations of cross-breeding, they had over 3,800 third-generation mosquitoes, 99.5% of which carried the genes for malaria resistance.
However, in wild populations, it’s likely that the malaria parasite would eventually develop resistance to the gene drive. Thus, other teams have focused their efforts not on malaria resistance, but on driving the mosquitoes themselves to extinction. As a side note, it must be stressed that no one was (or is) suggesting that we exterminate all mosquitoes to end malaria. While there are over 3,000 mosquito species, only 30 to 40 transmit the disease, and scientists think that we could eradicate malaria by targeting just three of them.
In September of 2018, humanity took a major step towards realizing this vision, when scientists at Imperial College London published a paper announcing that one of their CRISPR gene drives had wiped out an entire population of lab-bred mosquitoes in less than 11 generations. If the drive were released into the wild, the team predicted, it could push the affected species into extinction in just one year.
For their work, the team focused on Anopheles gambiae and targeted genes that code for proteins that play an important role in determining an organism’s sexual characteristics. By altering these genes, the team was able to create female mosquitoes that were infertile. What’s more, the drive appears to be resistance-proof.
There is still some technical work to be done before this particular CRISPR gene drive can be deployed in the wild. For starters, the team needs to verify that it is, in fact, resistance-proof. The results also have to be replicated in the same conditions in which Anopheles mosquitoes are typically found: conditions that mimic tropical locations across sub-Saharan Africa. Yet the researchers are making rapid progress, and a CRISPR gene drive that can end malaria may soon be a reality.
Moving Beyond Malaria
Aside from eradicating malaria, one of the most promising applications of CRISPR gene drives involves combating invasive species.
For example, in Australia, the cane toad has been causing an ecological crisis since the 1930s. Originating in the Americas, the cane toad was introduced to Australia in 1935 by the Bureau of Sugar Experiment Stations in an attempt to control beetle populations that were attacking sugar cane crops. However, the cane toad has no natural predators in the area, and so has multiplied at an alarming rate.
Although only 3,000 were initially released, scientists estimate that the cane toad population in Australia now exceeds 200 million. For decades, the toads have been killing a number of native birds, snakes, and frogs that prey on them and inadvertently ingest their lethal toxin — the population of one monitor lizard species dropped by 90% after the cane toad spread to its area.
However, by genetically modifying the cane toads to keep them from producing these toxins, scientists believe they might be able to give native species a fighting chance. And because the CRISPR gene drive would only target the cane toad population, it may actually be safer than traditional pest control methods that involve poisons, as these chemicals impact a multitude of species.
Research indicates that CRISPR gene drives could also be used to target a host of other invasive pests. In January of 2019, scientists published a paper demonstrating the first concrete proof that the technology also works in mammals — specifically, lab mice.
Another use case involves deploying CRISPR gene drives to alter threatened or endangered organisms in order to better equip them for survival. For instance, a number of amphibian species are in decline because of the chytrid fungus, which causes a skin disease that is often lethal. Esvelt and his team note that CRISPR gene drives could be used to make organisms resistant to this fungal infection. Currently, no resistance mechanism for the fungus is known, so this specific use case remains theoretical. However, if such a mechanism were developed, it could be deployed to save many species from extinction.
The Harvard Wyss Institute suggests that CRISPR gene drives could also be used “to pave the way toward sustainable agriculture.” Specifically, the technology could be used to reverse pesticide resistance in insects that attack crops, or it could be used to reverse herbicide resistance in weeds.
Yet, CRISPR gene drives are powerless when it comes to some of humanity’s greatest adversaries.
Because they are spread through sexual reproduction, gene drives can’t be used to alter species that reproduce asexually, meaning they can’t be used in bacteria and viruses. Gene drives also have no practical application in humans or other organisms with long generational periods, as it would take several centuries for any impact to be noticeable. The Harvard Wyss Institute notes that, at these timescales, someone could easily create a reversal drive to remove the trait, and any number of other unanticipated events could prevent the drive from propagating.
That’s not to say that reverse gene drives should be considered a safety net in case forward gene drives are weaponized or found to be dangerous. Rather, the primary point is to highlight the difficulty in using CRISPR gene drives to spread a gene throughout species with long generational and gestational periods.
But, as noted above, when it comes to species that have short reproductive cycles, the impact could be profound in extremely short order. With this in mind — although the work in amphibian, mammal, and plant populations is promising — the general scientific consensus is that the best applications for CRISPR gene drives likely involve insects.
Entering the Unknown
Before scientists introduce or remove a species from a habitat, they conduct research in order to understand the role that it plays within the ecosystem. This helps them better determine what the overall outcome will be, and how other individual organisms will be impacted.
According to Matan Shelomi, an entomologist who specializes in insect microbiology, scientists haven’t found any organisms that will suffer if three mosquito species are driven into extinction. Shelomi notes that, although several varieties of fish, amphibians, and insects eat mosquito larvae, they don’t rely on the larvae to survive; in fact, most of the organisms that have been studied prefer other food sources, and no known species lives on mosquitoes alone. The same, he argues, can be said of adult mosquitoes. While a number of birds do consume mature mosquitoes, none rely on them as a primary food source.
Shelomi also notes that mosquitoes don’t play a critical role in pollination — or any other facet of the environment that scientists have examined. As a result, he says they are not a keystone species: “No ecosystem depends on any mosquito to the point that it would collapse if they were to disappear.”
At least, not as far as we are aware.
Because CRISPR gene drives cause permanent changes to a species, virologist Jonathan Latham notes that it is critical to get things right the first time: “They are ‘products’ that will likely not be able to be recalled, so any approval decision point must be presumed to be final and irreversible.” However, we have no way of knowing if scientists have properly anticipated every eventuality. “We certainly do not know all the myriad ways all mosquitoes interact with all life forms in their environment, and there may be something we are overlooking,” Shelomi admits. Due to these unknown unknowns, and the near irreversibility of CRISPR gene drives, Latham argues that they should never be deployed.
Every intervention has consequences. To this end, the important thing is to be as sure as possible that the potential rewards outweigh the risks. For now, when it comes to anti-malaria CRISPR gene drives, this critical question remains unanswered. Yet the applications for CRISPR gene drives extend far beyond mosquitoes, making it all the more important for scientists to ensure that their research is robust and doesn’t cause harm to humanity or to Earth’s ecosystems.
Risky Business, Designed for Safety
Although some development is still needed before scientists would be ready to release a CRISPR gene drive into a wild insect population, the most pressing issues that remain are of a regulatory and ethical nature. These include:
Limiting Deployment and Obtaining Consent
For starters, who gets to decide whether or not scientists should eradicate a species? The answer most commonly given by scientists and politicians is that the individuals who will be impacted should cast the final vote. However, substantial problems arise when it comes to limiting deployment and determining the degree to which informed consent is necessary.
Todd Kuiken, a Senior Research Scholar with the Genetic Engineering and Society Center at NC State University and a member of the U.N.’s technical expert group for the Convention on Biological Diversity, notes that, “in the past, these kinds of applications or introductions were mostly controlled in terms of where they were supposed to go.” Gene drives are different, he argues, because they are “designed to move, and that’s really a new concept in terms of environmental policy. That’s what makes them really interesting from a policy perspective.”
For example, if scientists release mosquitoes in a town in India that has approved the work, there is no practical way to contain the release to this single location. The mosquitoes will travel to other towns, other countries, and potentially even other continents.
The problem isn’t much easier to address even if the release is planned on a remote island. The nature of modern life, which sees people and goods continually traveling across the globe, makes it extremely difficult to prevent cross contamination. A CRISPR gene drive deployed against an invasive species on an island could still decimate populations in other places — even places where it is native and beneficial.
The release of a single engineered gene drive could, potentially, impact every human on Earth. Thus, in order to obtain the informed consent from all affected parties, scientists would effectively need to ensure that they had permission from everyone on the planet.
To help address these issues of “free, prior, and informed consent,” Kuiken notes that scientists and policymakers must establish a consensus on the following:
- What communities, organizations, and groups should be part of the decision-making process?
- How far out do you go to obtain informed consent — how many concentric circles past the introduction point do you need to move?
- At which decision stage of the project should these different groups or potentially impacted communities be involved?
Of course, in order for individuals to effectively participate in discussions about CRISPR gene drives, they will have to know what the technology is. This also poses a problem: “Generally speaking, the majority of the public probably hasn’t even heard of it,” Kuiken says.
There are also questions about how to verify that an individual is actually informed enough to give consent. “What does it really mean to get approval?” Kuiken asks, noting, “the real question we need to start asking ourselves is ‘what do we mean by [informed consent]?’”
Because research into this area is already so advanced, Kuiken notes that there’s an immediate need for a broad range of endeavors aimed at improving individuals’ knowledge of, and interest in, CRISPR gene drives.
And it’s not just the public that needs schooling. When it comes to scientists’ understanding of the technology, there are also serious and significant gaps. The degree and depth of these gaps, Kuiken is quick to point out, varies dramatically from field to field. While most geneticists are at least vaguely familiar with CRISPR gene drives, some key disciplines are still in the dark: one of the key findings of this year’s upcoming IUCN (International Union for Conservation of Nature) report is that “the conservation science community is not well aware of gene drives at all.”
“What concerns me is that a lot of the gene drive developers are not ecologists. My understanding is that they have very little training, or even zero training, when it comes to environmental interactions, environmental science, and ecology,” Kuiken says. “So, you have people developing systems that are being deliberately designed to be introduced to an environment or an ecosystem who don’t have the background to understand what all those ecological interactions might be.”
To this end, scientists working on gene drive technologies must be brought into conversations with ecologists.
Assessing the Impact
But even if scientists work together under the best conditions, the teams will still face monumental difficulties when trying to assess the impact and significance of a particular gene drive.
To begin with, there are issues with funding allocation. “The research dollars are not balanced correctly in terms of the development of the gene drive versus what the environmental implications will be,” Kuiken says. While he notes that this is typically how research funds are structured — environmental concerns come in last, if they come at all — CRISPR gene drives are fundamentally about the environment and ecology. As such, the funding issues in this specific use case are troubling.
Yet, even if proper funding were secured, it would still be difficult to guarantee that a drive was safe. Even a small gap in our understanding of a habitat could result in a drive being released into a species that has a critical ecological function in its local environment. As with the cane toad in Australia, this type of release could cause an environmental catastrophe and irreversibly damage an ecosystem.
One way to help prevent adverse ecological impacts is to gauge the effect through daisy-chain gene drives. These are self-limiting drives that grow weaker and weaker and die off after a few generations, allowing researchers to effectively measure the overall impact while restricting the gene drive’s spread. If such tests determine that there are no unfavorable effects, a more lasting drive can subsequently be released.
Kill-switches offer another potential solution. The Defense Advanced Research Projects Agency (DARPA) recently announced that it was allocating money to fund research into anti-CRISPR proteins. These could be used to prevent the expression of certain genes and thus counter the impact of a gene drive that has gone rogue or was released maliciously.
Similarly, scientists from North Carolina State University’s Genetic Engineering and Society Center note that it may be beneficial to establish a regulatory framework requiring that immunizing drives, which spread resistance to a specific gene drive, be developed alongside any drives intended for release. These could be used to immunize related species that aren’t being targeted, or kept at the ready in case of unanticipated occurrences.
An Uncertain Future
But even if scientists do everything right, and even if researchers are able to verify that CRISPR gene drives are 100% safe, it doesn’t mean they will be deployed. “You can move yourself far in terms of general scientific literacy around gene drives, but people’s acceptance changes when it potentially has a direct impact on them,” Kuiken explains.
To support his claims, Kuiken points to the Oxitec mosquitoes in Florida.
Here, teams were hoping to release male Aedes aegypti mosquitoes carrying a “self-limiting” gene. These are akin to, but distinct from, gene drives. When these edited males mate with wild females, the offspring inherit a copy of a gene that prevents them from surviving to adulthood. Since they don’t survive, they can’t reproduce, and there is a reduction in the wild population.
After working with local communities, Oxitec put the release up for a vote. “The vote count showed that, generally speaking, if you tallied up the whole area of South Florida, it was about a 60 to 70 percent approval. People said, ‘yeah, this is a really good idea. We should do this,’” Kuiken said. “But when you focused in on the areas where they were actually going to release the mosquitoes, it was basically flipped. It was a classic ‘not in my backyard’ scenario.”
That fear, especially when it comes to CRISPR gene drives, isn’t really too hard to comprehend. Even if every scientific analysis showed that the benefits of these drives outweighed all the various drawbacks, there would still be the unknown unknowns.
Researchers can’t possibly account for how every single species — all the countless plants, insects, and as yet undiscovered deep sea creatures — will be impacted by a change we make to an organism. So unless we develop unique and unprecedented scientific protocols, no matter how much research we do, the decision to use or not use CRISPR gene drives will have to be made without all the evidence.
By Stefan Schubert
This blog post reports on Schubert, S.**, Caviola, L.**, Faber, N. The Psychology of Existential Risk: Moral Judgments about Human Extinction. Scientific Reports [Open Access]. It was originally posted on the University of Oxford’s Practical Ethics: Ethics in the News blog.
Humanity’s ever-increasing technological powers can, if handled well, greatly improve life on Earth. But if they’re not handled well, they may instead cause our ultimate demise: human extinction. Recent years have seen an increased focus on the threat that emerging technologies such as advanced artificial intelligence could pose to humanity’s continued survival (see, e.g., Bostrom, 2014; Ord, forthcoming). A common view among these researchers is that human extinction would be much worse, morally speaking, than almost-as-severe catastrophes from which we could recover. Since humanity’s future could be very long and very good, it’s an imperative that we survive, on this view.
Do laypeople share the intuition that human extinction is much worse than near-extinction? In a famous passage in Reasons and Persons, Derek Parfit predicted that they would not. Parfit invited the reader to consider three outcomes:
1) Peace.
2) A nuclear war that kills 99% of the world’s existing population.
3) A nuclear war that kills 100%.
In Parfit’s view, 3) is the worst outcome, and 1) is the best outcome. The interesting part concerns the relative differences, in terms of badness, between the three outcomes. Parfit thought that the difference between 2) and 3) is greater than the difference between 1) and 2), because of the unique badness of extinction. But he also predicted that most people would disagree with him, and instead find the difference between 1) and 2) greater.
Parfit’s hypothesis is often cited and discussed, but it hasn’t previously been tested. My colleagues Lucius Caviola and Nadira Faber and I recently undertook such testing. A preliminary study showed that most people judge human extinction to be very bad, and think that governments should invest resources to prevent it. We then turned to Parfit’s question of whether they find it uniquely bad even compared to near-extinction catastrophes. We used a slightly amended version of Parfit’s thought experiment, to remove potential confounders:
A) There is no catastrophe.
B) There is a catastrophe that immediately kills 80% of the world’s population.
C) There is a catastrophe that immediately kills 100% of the world’s population.
A large majority found the difference, in terms of badness, between A) and B) to be greater than the difference between B) and C). Thus, Parfit’s hypothesis was confirmed.
However, we also found that this judgment wasn’t particularly stable. Some participants were told, after having read about the three outcomes, that they should remember to consider their respective long-term consequences. They were reminded that it is possible to recover from a catastrophe killing 80%, but not from a catastrophe killing everyone. This mere reminder made a significantly larger number of participants find the difference between B) and C) the greater one. And still greater numbers (a clear majority) found the difference between B) and C) the greater one when the descriptions specified that the future would be extraordinarily long and good if humanity survived.
Our interpretation is that when confronted with Parfit’s question, people by default focus on the immediate harm associated with the three outcomes. Since the difference between A) and B) is greater than the difference between B) and C) in terms of immediate harm, they judge that the former difference is greater in terms of badness as well. But even relatively minor tweaks can make more people focus on the long-term consequences of the outcomes, instead of the immediate harm. And those long-term consequences become the key consideration for most people, under the hypothesis that the future will be extraordinarily long and good.
A conclusion from our studies is thus that laypeople’s views on the badness of extinction may be relatively unstable. Though such effects of relatively minor tweaks and re-framings are ubiquitous in psychology, they may be especially large when it comes to questions about human extinction and the long-term future. That may partly be because of the intrinsic difficulty of those questions, and partly because most people haven’t thought a lot about them previously.
In spite of the increased focus on existential risk and the long-term future, there has been relatively little research on how people think about those questions. There are several reasons why such research could be valuable. For instance, it might allow us to get a better sense of how much people will want to invest in safe-guarding our long-term future. It might also inform us of potential biases to correct for.
The specific issues which deserve more attention include people’s empirical estimates of whether humanity will survive and what will happen if we do, as well as their moral judgments about how valuable different possible futures (e.g., involving different population sizes and levels of well-being) would be. Another important issue is whether we think about the long-term future with another frame of mind because of the great “psychological distance” (cf. Trope and Liberman, 2010). We expect the psychology of longtermism and existential risk to be a growing field in the coming years.
** Equal contribution.
As we move towards a more automated world, tech companies are increasingly faced with decisions about how they want — and don’t want — their products to be used. Perhaps most critically, the sector is in the process of negotiating its relationship to the military, and to the development of lethal autonomous weapons in particular. Some companies, including industry leaders like Google, have committed to abstaining from building weapons technologies; others have wholeheartedly embraced military collaboration.
In a new report titled “Don’t Be Evil,” Dutch advocacy group Pax evaluated the involvement of 50 leading tech companies in the development of military technology. They sent out a survey asking companies about their current activities and their policies on autonomous weapons, and used each company’s responses to categorize it as “best practice,” “medium concern,” or “high concern.” Categorizations were based on three criteria:
- Is the company developing technology that could be relevant in the context of lethal autonomous weapons?
- Does the company work on relevant military projects?
- Has the company committed to not contribute to the development of lethal autonomous weapons?
“Best practice” companies are those with explicit policies that ensure their technology will not be used for lethal autonomous weapons. Companies categorized as “medium concern” are those currently working on military applications of relevant technology but who responded that they were not working on autonomous weapons; or companies who are not known to be working on military applications of technology but who did not respond to the survey. “High concern” companies are those working on military applications of relevant technology who did not respond to the survey.
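The report’s bucketing logic can be sketched as a simple decision rule. This is a rough simplification of the methodology described above; the function and parameter names are our own, not Pax’s:

```python
def concern_level(has_policy: bool, military_work: bool, responded: bool) -> str:
    """Simplified sketch of the report's three buckets (not the full methodology)."""
    if has_policy:
        # Explicit policy ensuring technology won't be used for
        # lethal autonomous weapons.
        return "best practice"
    if military_work and not responded:
        # Working on military applications of relevant technology,
        # and did not respond to the survey.
        return "high concern"
    # Either: military work, but responded that it does not work on
    # autonomous weapons; or: no known military work, but no response.
    return "medium concern"

print(concern_level(has_policy=False, military_work=True, responded=False))
```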
The report makes several recommendations for how companies can prevent their products from contributing to the development of lethal autonomous weapons. It suggests that companies make a public commitment not to contribute; that they establish clear company policies reiterating such a commitment and providing concrete implementation measures; and that they inform employees about the work they are doing and allow open discussion around any concerns.
Pax identifies six sectors considered relevant to autonomous weapons: big tech, AI software and system integration, autonomous (swarming) aerial systems, hardware, pattern recognition, and ground robots. The report is organized into these categories, and then subdivided further by country and product. We’ve instead listed the companies in alphabetical order. Find basic information about all 50 companies in the chart, and read more about a select group below.
| Company | HQ | Relevant Technology | Relevant Military/Security Projects | Concern Level |
|---|---|---|---|---|
| Airobotics | Israel | Autonomous drones | Border security patrol bots | Medium |
| Airspace Systems | US | Counter-drone systems | Airspace Interceptor | High |
| Alibaba | China | AI chips; facial recognition | – | Medium |
| Amazon | US | Cloud; drones; facial and speech recognition | JEDI; Rekognition | High |
| Anduril Industries | US | AI platforms | Project Maven; Lattice | High |
| Animal Dynamics | UK | Autonomous drones | Skeeter | Best practice |
| Apple | US | Computers; facial and speech recognition | – | Medium |
| Arbe Robotics | Israel | Autonomous vehicles | – | Best practice |
| ATOS | France | AI architecture; cyber security; data management | – | Medium |
| Baidu | China | Deep learning; pattern recognition | – | Medium |
| Blue Bear Systems | UK | Unmanned maritime and aerial systems | Project Mosquito/LANCA | High |
| Citadel Defense | US | Counter-drone systems | Titan | High |
| Clarifai | US | Facial recognition | Project Maven | High |
| Cloudwalk Technology | China | Facial recognition | – | Medium |
| Corenova Technologies | US | Autonomous swarming systems | HiveDefense; OFFSET | High |
| Dibotics | France | Autonomous navigation; drones | ‘Generate’ | Medium |
| EarthCube | France | Machine learning | ‘Algorithmic warfare tools of the future’ | High |
| Facebook | US | Social media; pattern recognition; virtual reality | – | Medium |
| General Robotics | Israel | Ground robots | Dogo | Best practice |
| Google | US | AI architecture; social media; facial recognition | – | Best practice |
| Heron Systems | US | AI software; machine learning; drone applications | ‘Solutions to support tomorrow’s military aircraft’ | High |
| HiveMapper | US | Pattern recognition; mapping | HiveMapper app | Best practice |
| IBM | US | AI chips; cloud; supercomputers; facial recognition | Nuclear testing supercomputers; ex-JEDI | Medium |
| Intel | US | AI chips; UAS | DARPA HIVE | High |
| Microsoft | US | Cloud; facial recognition | HoloLens; JEDI | High |
| Montvieux | UK | Data analysis; deep learning | ‘Revolutionize human information relationship for defence’ | High |
| Naver | S. Korea | ‘Ambient intelligence’; autonomous robots; machine vision systems | – | Medium |
| Neurala | US | Deep learning neural network software | Target identification software for military drones | Medium |
| Oracle | US | Cloud; AI infrastructure; big data | Ex-JEDI | High |
| Orbital Insight | US | Geospatial analytics | – | Medium |
| Roboteam | Israel | Unmanned systems; AI software | Semi-autonomous military UGVs | High |
| Samsung | S. Korea | Computers and AI platforms | – | Medium |
| SenseTime | China | Computer vision; deep learning | SenseFace; SenseTotem for police use | High |
| Shield AI | US | Autonomous (swarming) drones | Nova | High |
| Siemens | Germany | AI; automation | KRNS; TRADES | Medium |
| Softbank | Japan | Telecom; robotics | – | Best practice |
| SparkCognition | US | AI systems; swarm technology | ‘Works across defense and national security space in the U.S.’ | High |
| Synthesis | Belarus | AI- and cloud-based applications; pattern recognition | Kipod | High |
| Taiwan Semiconductor | Taiwan | AI chips | – | Medium |
| Tencent | China | AI applications; cloud; ML; pattern recognition | – | Medium |
| VisionLabs | Russia | Visual recognition | – | Best practice |
| Yitu | China | Facial recognition | Police use | High |
- Developing the DroneBullet, a kamikaze drone that can autonomously identify, track, and attack a target drone
- Working to modify DroneBullet “for a warhead-equipped loitering munition system”
- In response to survey, stated that its “drone system has nothing to do with weapons and related industries”
- Has clear links to military and security business; announced a Homeland Security and Defense division and an initiative to perform emergency services in 2017
- Involved in border security, in particular US-Mexico border; provides patrol bots
- Co-founder has stated that it will not add weapons to its drones
- Utilizes AI and advanced robotics for airspace security solutions, including “long-range detection, instant identification, and autonomous mitigation—capture and safe removal of unauthorized or malicious drones”
- Developed Airspace Interceptor, a fully autonomous system that can capture target drones, in collaboration with US Department of Defense
- China’s largest online shopping company
- Recently invested in seven research labs that will focus on areas including AI, machine learning, network security, and natural language processing
- Established a semiconductor subsidiary, Pingtouge, in September 2018
- Major investor in tech sector, including in Megvii and SenseTime
- Likely winner of JEDI contract, a US military project that will serve as universal data infrastructure linking Pentagon and soldiers in the field
- Developed Rekognition program, used by police; testing by ACLU revealed that nearly 40 percent of false matches involved people of color
- CEO has stated, “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.”
- Received backlash since exposure of partnership with government agencies, including ICE
- Has since proposed guidelines for responsible use of tech
- Co-founded by a former intelligence official
- Has vocally supported stronger ties between tech sector and Pentagon: “AI has paradigm-shifting potential to be a force-multiplier […] it will provide better outcomes faster, a recipe for success in combat.”
- Involved in Project Maven
- Has offered support for the Pentagon’s newly formed Joint Artificial Intelligence Center
- Developed the Lattice, an autonomous system that provides soldiers with a view of the front line and can be used to identify targets and direct unmanned vehicles into combat; has been used to catch border crossers
- Co-founder has stated that Anduril is “deployed at several military bases. We’re deployed in multiple spots along the U.S. border […] We’re deployed around some other infrastructure I can’t talk about.”
- Spin-off company originating in Oxford University’s Zoology Department
- Develops unmanned aerial vehicles
- Stork, a paraglider with autonomous guidance and navigation, has received interest from both military and humanitarian aid/disaster relief organizations
- Skeeter, “disruptive drone technology,” was developed with funding from UK government’s Defense Science and Technology Laboratory
- In March 2019, took over software developer Accelerated Dynamics, which has developed ADx autonomous flight-control software
- Use of ADx with Skeeter allows it to be operated in a swarm configuration, which has military applications
- In response to survey, CEO stated that “we will not weaponize or provide ‘kinetic’ functionality to the products we make,” and that “legislating against harmful uses for autonomy is an urgent and necessary matter for government and the legislative framework to come to terms with.”
- Began in military and homeland security sectors but has moved to cars
- In response to survey, stated that it “will sign agreements with customers that would confirm that they are not using our technology for military use.”
- Largest provider of the Chinese-language Internet search services
- Highly committed to artificial intelligence and machine learning and is exploring applications for facial recognition technology
- Opened Silicon Valley AI research lab in 2013, where it has been heavily investing in AI applications; has scaled down this research since US-China trade war
- In charge of China’s Engineering Laboratory for Deep Learning Technologies, established March 2017
- Will contribute to National Engineering Laboratory for Brain-Inspired Intelligence Technology and Applications
Blue Bear Systems
- Research company involved in all aspects of unmanned systems and autonomy, including big data, AI, electronic warfare, and swarming systems
- In March 2019, consortium it headed was awarded UK Ministry of Defense contract worth GBP 2.5 million to develop drone swarm technology
- “Protects soldiers from drone attacks and surveillance in enemy combat” and “creates a force multiplier for Warfighters that enables them to get more done with the same or fewer resources”
- Contracted by US Air Force to provide systems that can defeat weaponized drones and swarms
- Developed autonomous counter-drone system called Titan
- Offers “military-grade solutions to secure autonomous operations,” according to website
- Developed HiveDefense, “an evolving swarm of self-learning bots”
- Works with DARPA on OFFSET, facilitating unmanned missions without human control
- Works on autonomous navigation
- Supported by Generate, a program for French defense start-ups
- Founder/CEO signed FLI’s 2017 open letter to the UN
- “Developing monitoring solutions based on an automated analysis of geospatial information”
- Has been described as “conceiving of the algorithmic warfare tools of the future.”
- CEO has stated, “With the emergence of new sensors—whether they are satellite, UAV or plane—we have seen here a great opportunity to close the gap between AI in the lab and Activity Based Intelligence (ABI) in the field.”
- Robotics company focused on defense and security
- Founder previously worked in Israeli defense ministry’s R & D authority
- Supplies “advanced robotics systems to counter-terrorist units worldwide,” many of which are designed for “urban warfare”
- Developed Dogo, said to be “the world’s first inherently armed tactical combat robot,” but controlled remotely rather than autonomously
- In response to survey, CEO stated that “our position is not to allow lethal autonomous weapons without human supervision and human final active decision […] In general, our systems are designed to provide real-time high quality information and to present it to a trained human operator in an intuitive manner; this insures better decision making by the human and thereby better results with less casualties.”
- Provides “leading-edge solutions for national security customers”
- States that its mission is “to strengthen America’s defense by providing innovative laboratory testing and simulation solutions”
- Software provides mapping, visualization, and analytic tools; uses video footage to generate instant detailed 3-D maps and detect changes; could potentially be used by Air Force to model bombing
- Founder/CEO has stated that he “believes Silicon Valley and the US government have to work together to maintain America’s technological edge—lest authoritarian regimes that don’t share the US values catch up.”
- Founder/CEO signed FLI’s 2015 open letter; In his survey response, he stated that “we absolutely want to see a world where humans are in control and responsible for all lethal decisions.”
- Bid for JEDI contract and failed to qualify
- Actively working towards producing “next-generation artificial intelligence chips,” for which it is building a new AI research center; expects to improve AI computing by 1,000 times over the next 10 years
- Long history of military contracting, including building supercomputers for nuclear weapons research and simulations
- Currently involved in augmented military intelligence research for US Marine Corps
- Three dozen staff members, including the Watson design lead and the VP of Cognitive Computing at IBM Research, signed a 2015 open letter calling for a ban on lethal autonomous weapons
- Developed Diversity in Faces dataset using information from Flickr images; claims the project will reduce bias in facial recognition; Dataset available to companies and universities linked to military and law enforcement around the world
- In response to survey, confirmed it is not currently developing lethal autonomous weapons systems
- Produces laser-based radar for cars
- Founded by former members of the IDF’s elite technological unit, but does not currently appear to be developing military applications
- Develops various AI technologies, including specific solutions, software, and hardware, which it provides to governments
- Selected by DARPA in 2017 to collaborate on DARPA HIVE, a data-handling and computing platform utilizing AI and ML
- Announced in 2018 that it will work with DARPA on developing “the design tools and integration standards required to develop modular electronic systems”
- Has invested significantly in unmanned aerial vehicles and flight control technology
- AI provider known for facial recognition software Face++
- Reportedly uses facial scans from a Ministry of Public Security photo database that contains files on nearly every Chinese citizen
- Has stated, “We want to build the eyes and brain of the city, to help police analyze vehicles and people to an extent beyond what is humanly possible.”
- Competing with Amazon for JEDI contract
- Published “The Future Computed” in 2018, which defines core principles necessary for the development of beneficial AI
- According to employees, “With JEDI, Microsoft executives are on track to betray these principles in exchange for short-term profits.”
- Company position on lethal autonomous weapons systems unclear
- First tech giant to call for regulations to limit use of facial recognition technology
- Developing a military decision-making tool that uses deep learning-based neural networks to assess complex data
- Receives funding from the UK government
- Sells AI technology that can run on light devices and helps drones, robots, cars, and consumer electronics analyze their environments and make decisions
- Military applications are a key focus
- Works with a broad range of clients including the US Air Force, Motorola, and Parrot
- Co-founder/CEO signed FLI’s 2017 open letter to the UN
- Provides database software and technology, cloud-engineered systems, and enterprise software products
- Website states, “Oracle helps modern defense prepare for dynamic mission objectives”
- Bid for JEDI contract and failed to qualify; filed several complaints, in part related to Pentagon’s decision to use only one vendor
- Data-analysis company founded in 2004 by a Trump advisor; has roots in the CIA-backed In-Q-Tel venture capital organization
- Producer of “Palantir Intelligence,” a tool for analyzing data that is used throughout the intelligence community
- Has developed predictive policing technology used by law enforcement around the US
- In 2016, won a Special Operations Command technology contract worth USD 222 million
- In March 2019, won a US Army contract worth over USD 800 million to build the Distributed Common Ground System, an analytical system for use by soldiers in combat zones
- Developed the Sparrow, an autonomous patrol drone with security applications
- Focuses explicitly on industrial, rather than military or border security, applications
- In response to survey, stated “Since we develop solutions to the industrial markets, addressing security, safety, and operational needs, the topic of lethal weapon[s] is completely out of the scope of our work”
- Founded by two former Israeli military commanders with “access to the Israel Defense Forces as our backyard for testing”
- Specifically serves military markets, including the Pentagon
- Developed Artificial Intelligence Control Unit (AI-CU), which brings autonomous navigation, facial recognition, and other AI-enabled capabilities to the control and operation of unmanned systems and payloads
- Exposure of links to Chinese investment firm FengHe Fund Management appears to have cost them a series of US Army robotics contracts last year
- One of world’s largest tech companies
- Developing AI technologies to be applied to all its products and services in order to retain its hold on telephone/computer market
- Samsung Techwin, Samsung’s military arm known for SG1A Sentry robot, was sold in 2014
- Major competitor of Megvii
- Sells software that recognizes objects and people
- Various Chinese police departments use its SenseTotem and SenseFace systems to analyze video and make arrests
- Valued at USD 4.5 billion, it is “the world’s most valuable AI start-up” and receives about two-fifths of its revenue from government contracts
- In November 2017, sold its 51 percent stake in Tangli Technology, a “smart-policing” company it helped found
- States that its “mission is to protect service members and innocent civilians with artificially intelligent systems”
- Makes systems based on Hivemind, AI that enables robots to “learn from their experiences”
- Developed Nova, a “combat proven” robot that autonomously searches buildings while streaming video and generating maps
- Works with Pentagon and Department of Homeland Security “to enable fully autonomous unmanned systems that dramatically reduce risk and enhance situational awareness in the most dangerous situations”
- Europe’s largest industrial manufacturing conglomerate
- Known for medical diagnostics equipment (CT scanners), energy equipment (turbines, generators), and trains
- Produces MindSphere, a cloud-based system that helps enable the use of AI in industry
- In 2013, won a USD 2.2 million military research contract with Carnegie Mellon University and HRL Laboratories to develop improved intelligence tools
- Collaborating with DARPA on the TRAnsformative DESign (TRADES) program
- In response to survey, stated: “Siemens is not active in this business area. Where we see a potential risk that components or technology or financing may be allocated for a military purpose, Siemens performs a heightened due diligence. […] All our activities are guided by our Business Conduct Guidelines that make sure that we follow high ethical standards and implement them in our everyday business. We also work on responsible AI principles which we aim to publish later this year.”
- Invests in AI technology through its USD 100 billion Vision Fund, including BrainCorp, NVIDIA, and Slack Technologies; owns some 30 percent of Alibaba
- Works in partnership with Saudi Arabia’s sovereign wealth fund and is part of Saudi strategy for diversifying away from oil
- In 2017, took over Boston Dynamics and Schaft, both connected with DARPA
- Developed the humanoid Pepper robot
- In response to survey, stated, “We do not have a weapons business and have no intention to develop technologies that could be used for military purposes”
- Collaborates “with the world’s largest organizations that power, finance, and defend our society to uncover their highest potential through the application of AI technologies.”
- Has attracted interest from former and current Pentagon officials, several of whom serve on the board or as advisors
- Works “across the national security space—including defense, homeland security intelligence, and energy—to streamline every step of their operations”; has worked with the British Army on military AI applications
- Founder/CEO has stated that he believes restrictions on autonomous weapons would stifle progress and innovation
- Developed Kipod, a video analytics platform used by law enforcement agencies, governments, and private security organizations to find faces, license plates, object features, and behavioral events
- In use by law enforcement in Belarus, Russia, Kazakhstan, and Azerbaijan
- China’s biggest social media company
- Created Miying platform to assist doctors with disease screening and more
- Focused on research in machine learning, speech recognition, natural language processing, and computer vision
- Developing practical AI applications in online games, social media, and cloud services
- Investing in autonomous vehicle AI technologies
- Has described its relationship to public in terms of a social contract: “Billions of users have entrusted us with their personal sensitive information; this is the reason we must uphold our integrity above the requirements of the law.”
- Developed Luna, software package that helps businesses verify and identify customers based on photos or videos
- Partners with more than 10 banks in Russia and the Commonwealth of Independent States (CIS)
- In response to survey, stated that they “explicitly prohibit the use of VisionLabs technology for military applications. This is a part of our contracts. We also monitor the results/final solution developed by our partners.”
- Developed “Intelligent Service Platform,” an algorithm that covers facial recognition, vehicle identification, text recognition, target tracking, and feature-based image retrieval
- Its DragonFly Eye System can reportedly identify a person from a nearly two-billion-photo database within seconds
- Technology utilized by numerous public security bureaus
- In February 2018, supplied Malaysia’s police with facial recognition technologies; partners with local governments and other organizations in Britain
Machine learning (ML) algorithms can already recognize patterns far better than the humans they’re working for. This allows them to generate predictions and make decisions in a variety of high-stakes situations. For example, electricians use IBM Watson’s predictive capabilities to anticipate clients’ needs; Uber’s self-driving system determines what route will get passengers to their destination the fastest; and Insilico Medicine leverages its drug discovery engine to identify avenues for new pharmaceuticals.
As data-driven learning systems continue to advance, it would be easy enough to define “success” according to technical improvements, such as increasing the amount of data algorithms can synthesize and, thereby, improving the efficacy of their pattern identifications. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly.
In order to highlight the kinds of ethical decisions that our ML systems are already contending with, Kaj Sotala, a researcher in Finland working for the Foundational Research Institute, turns to traffic analysis and self-driving cars. Should a toll road be used in order to shave five minutes off the commute, or would it be better to take the longer route in order to save money?
Answering that question is not as easy as it may seem.
For example, Person A may prefer to take a toll road that costs five dollars if it will save five minutes, but they may not want to take the toll road if it costs them ten dollars. Person B, on the other hand, might always prefer taking the shortest route regardless of price, as they value their time above all else.
In this situation, Sotala notes that we are ultimately asking the ML system to determine what humans value more: time or money. Consequently, what seems like a simple question about what road to take quickly becomes a complex analysis of competing values. “Someone might think, ‘Well, driving directions are just about efficiency. I’ll let the AI system tell me the best way of doing it.’ But another person might feel that there is some value in having a different approach,” he said.
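One way to make the tradeoff concrete is to give each traveler an explicit weight on time versus money and score each route accordingly. A minimal sketch — the routes, prices, and weights below are invented for illustration:

```python
# Each route: (travel time in minutes, toll in dollars). Hypothetical numbers.
routes = {"toll road": (25, 5.0), "free road": (30, 0.0)}

def route_utility(minutes, toll, value_of_time):
    """Utility is the negative total cost: time converted to dollars, plus tolls."""
    return -(minutes * value_of_time + toll)

def best_route(value_of_time):
    """Pick the route that maximizes utility for a given $/minute weight."""
    return max(routes, key=lambda r: route_utility(*routes[r], value_of_time))

print(best_route(1.5))  # values time highly ($1.50/min) -> "toll road"
print(best_route(0.5))  # values money more ($0.50/min) -> "free road"
```

The same system serves both Person A and Person B just by changing one weight — which is exactly the value information the agent must somehow obtain.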
While it’s true that ML systems have to weigh our values and make tradeoffs in all of their decisions, Sotala notes that this isn’t a problem at the present juncture. The tasks that the systems are dealing with are simple enough that researchers are able to manually enter the necessary value information. However, as AI agents increase in complexity, Sotala explains that they will need to be able to account for and weigh our values on their own.
Understanding Utility-Based Agents
When it comes to incorporating values, Sotala notes that the problem comes down to how intelligent agents make decisions. A thermostat, for example, is a type of reflex agent: it follows a fixed, predetermined rule, turning the heating system on when the temperature falls below a set threshold and off when it rises above one. Goal-based agents, on the other hand, make decisions based on achieving specific goals. For example, an agent whose goal is to buy everything on a shopping list will continue its search until it has found every item.
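The two agent types Sotala contrasts can be sketched in a few lines (the thresholds and shopping items are hypothetical):

```python
class ThermostatReflexAgent:
    """Reflex agent: acts on the current percept alone, via a fixed rule."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def act(self, temperature):
        if temperature < self.low:
            return "heat on"
        if temperature > self.high:
            return "heat off"
        return "no change"

class ShoppingGoalAgent:
    """Goal-based agent: keeps acting until its goal (an empty list) is met."""
    def __init__(self, shopping_list):
        self.remaining = set(shopping_list)

    def act(self, item_found):
        self.remaining.discard(item_found)
        return "done" if not self.remaining else "keep searching"

t = ThermostatReflexAgent(low=18.0, high=22.0)
print(t.act(15.0))  # heat on

s = ShoppingGoalAgent(["milk", "eggs"])
print(s.act("milk"))  # keep searching
print(s.act("eggs"))  # done
```

Neither agent weighs anything: the reflex agent reacts, and the goal agent checks a binary condition. Tradeoffs only enter with the utility-based agents discussed next.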
Utility-based agents are a step above goal-based agents. They can deal with tradeoffs like the following: “Getting milk is more important than getting new shoes today. However, I’m closer to the shoe store than the grocery store, and both stores are about to close. I’m more likely to get the shoes in time than the milk.” At each decision point, such agents are presented with a number of options that they must choose from. Every option is associated with a specific “utility,” or reward. To reach their goal, the agents follow the decision path that will maximize the total rewards.
From a technical standpoint, utility-based agents rely on “utility functions” to make decisions. These are formulas that the systems use to synthesize data, balance variables, and maximize rewards. Ultimately, the decision path that gives the most rewards is the one that the systems are taught to select in order to complete their tasks.
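As a sketch of the milk-versus-shoes tradeoff above, a utility function can weight each option’s worth by how likely it is to be achieved. The values and probabilities here are invented for illustration:

```python
def expected_utility(value, p_success):
    """A utility-based agent weighs an outcome's worth by how likely it is."""
    return value * p_success

# Hypothetical numbers for the milk-versus-shoes dilemma:
errands = {
    "get milk":  expected_utility(value=10, p_success=0.3),  # important, but store is far
    "get shoes": expected_utility(value=4,  p_success=0.9),  # less important, store nearby
}
print(max(errands, key=errands.get))  # "get shoes" (3.6 beats 3.0)
```

The agent picks the shoes even though milk is “more important,” because the utility function folds the odds of success into the comparison.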
While these utility programs excel at finding patterns and responding to rewards, Sotala asserts that current utility-based agents assume a fixed set of priorities. As a result, these methods are insufficient when it comes to future AGI systems, which will be acting autonomously and so will need a more sophisticated understanding of when humans’ values change and shift.
For example, a person may always value taking the longer route to avoid a highway and save money, but not if they are having a heart attack and trying to get to an emergency room. How is an AI agent supposed to anticipate and understand when our values of time and money change? This issue is further complicated because, as Sotala points out, humans often value things independently of whether they have ongoing, tangible rewards. Sometimes humans even value things that may, in some respects, cause harm. Consider an adult who values privacy but whose doctor or therapist may need access to intimate and deeply personal information — information that may be lifesaving. Should the AI agent reveal the private information or not?
Ultimately, Sotala explains that utility-based agents are too simple and don’t get to the root of human behavior. “Utility functions describe behavior rather than the causes of behavior… they are more of a descriptive model, assuming we already know roughly what the person is choosing.” While a descriptive model might recognize that passengers prefer saving money, it won’t understand why, and so it won’t be able to anticipate or determine when other values override “saving money.”
An AI Agent Creates a Queen
At its core, Sotala emphasizes that the fundamental problem is ensuring that AI systems are able to uncover the models that govern our values. This will allow them to use these models to determine how to respond when confronted with new and unanticipated situations. As Sotala explains, “AIs will need to have models that allow them to roughly figure out our evaluations in totally novel situations, the kinds of value situations where humans might not have any idea in advance that such situations might show up.”
In some domains, AI systems have surprised humans by uncovering our models of the world without human input. As one early example, Sotala references research with “word embeddings” where an AI system was tasked with classifying sentences as valid or invalid. In order to complete this classification task, the system identified relationships between certain words. For example, as the AI agent noticed a male/female dimension to words, it created a relationship that allowed it to get from “king” to “queen” and vice versa.
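The king/queen relationship can be illustrated with toy word vectors. The numbers below are invented, with one coordinate loosely tracking “royalty” and the other “gender”; real embeddings have hundreds of learned dimensions:

```python
import math

# Hypothetical 2-D "embeddings": (royalty, gender)
emb = {
    "king":  (0.9,  0.8),
    "queen": (0.9, -0.8),
    "man":   (0.1,  0.8),
    "woman": (0.1, -0.8),
}

def combine(a, b, c):
    """Componentwise a - b + c over the named embeddings."""
    return tuple(x - y + z for x, y, z in zip(emb[a], emb[b], emb[c]))

def nearest(vec, exclude):
    """Closest stored word to vec, ignoring the query words themselves."""
    return min((w for w in emb if w not in exclude),
               key=lambda w: math.dist(emb[w], vec))

# The classic analogy: king - man + woman lands on queen.
print(nearest(combine("king", "man", "woman"), {"king", "man", "woman"}))
```

The system was never told about gender; the male/female direction emerged as a by-product of the classification task, which is Sotala’s point.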
Since then, there have been systems which have learned more complex models and associations. For example, OpenAI’s recent GPT-2 system has been trained to read some writing and then write the kind of text that might follow it. When given a prompt of “For today’s homework assignment, please describe the reasons for the US Civil War,” it writes something that resembles a high school essay about the US Civil War. When given a prompt of “Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry,” it writes what sounds like Lord of the Rings-inspired fanfiction, including names such as Aragorn, Gandalf, and Rivendell in its output.
Sotala notes that in both cases, the AI agent “made no attempt of learning like a human would, but it tried to carry out its task using whatever method worked, and it turned out that it constructed a representation pretty similar to how humans understand the world.”
There are obvious benefits to AI systems that are able to automatically learn better ways of representing data and, in so doing, develop models that correspond to humans’ values. When humans can’t determine how to map, and subsequently model, values, AI systems could identify patterns and create appropriate models by themselves. However, the opposite could also happen — an AI agent could construct something that seems like an accurate model of human associations and values but is, in reality, dangerously misaligned.
For instance, suppose an AI agent learns that humans want to be happy, and in an attempt to maximize human happiness, it hooks our brains up to computers that provide electrical stimuli that gives us feelings of constant joy. In this case, the system understands that humans value happiness, but it does not have an appropriate model of how happiness corresponds to other competing values like freedom. “In one sense, it’s making us happy and removing all suffering, but at the same time, people would feel that ‘no, that’s not what I meant when I said the AI should make us happy,’” Sotala noted.
Consequently, we can’t rely on an agent’s ability to uncover a pattern and create an accurate model of human values from this pattern. Researchers need to be able to model human values, and model them accurately, for AI systems.
Developing a Better Definition
Given our competing needs and preferences, it’s difficult to model the values of any one person. Combining and agreeing on values that apply universally to all humans, and then successfully modeling them for AI systems, seems like an impossible task. However, several solutions have been proposed, such as inverse reinforcement learning or attempting to extrapolate the future of humanity’s moral development. Yet, Sotala notes that these solutions fall short. As he articulated in a recent paper, “none of these proposals have yet offered a satisfactory definition of what exactly human values are, which is a serious shortcoming for any attempts to build an AI system that was intended to learn those values.”
In order to solve this problem, Sotala developed an alternative, preliminary definition of human values, one that might be used to design a value learning agent. In his paper, Sotala argues that values should be defined not as static concepts, but as variables that are considered separately and independently across a number of situations in which humans change, grow, and receive “rewards.”
Sotala asserts that our preferences may ultimately be better understood in terms of evolutionary theory and reinforcement learning. To justify this reasoning, he explains that, over the course of human history, people evolved to pursue activities likely to lead to certain outcomes, ones that tended to improve our ancestors' fitness. Today, he notes, humans still prefer those outcomes, even when they no longer maximize our fitness. Similarly, over time, we learn to enjoy and desire mental states that seem likely to lead to high-reward states, even when they do not actually do so.
So instead of a particular value directly mapping onto a reward, our preferences map onto our expectation of rewards.
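This distinction can be made concrete with a toy sketch (this is illustrative only, not code from Sotala's paper; all names and numbers are invented for the example). The agent's preference tracks a learned *expectation* of reward per state, so when the environment changes, the stale expectation, and with it the preference, persists for a while:

```python
# Toy model: preferences as learned reward *expectations*, not rewards.
# Purely illustrative; names and learning rates are assumptions.

def update_expectation(expected, state, reward, lr=0.5):
    """Exponential-moving-average estimate of reward for a state."""
    expected[state] = expected[state] + lr * (reward - expected[state])

# Phase 1: state "a" reliably yields reward, state "b" does not.
expected = {"a": 0.0, "b": 0.0}
for _ in range(20):
    update_expectation(expected, "a", 1.0)
    update_expectation(expected, "b", 0.0)

prefers = max(expected, key=expected.get)  # the agent prefers "a"

# Phase 2: the environment changes and "a" stops yielding reward,
# but the learned expectation decays slowly, so the old preference
# outlives the conditions that made it rewarding.
update_expectation(expected, "a", 0.0, lr=0.1)
still_prefers = max(expected, key=expected.get)
print(prefers, still_prefers)  # a a
```

A value learning system built on this framing would model the expectations themselves, including how new experiences reshape them, rather than treating the current preference as a fixed target.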
Sotala claims that the definition is useful when attempting to program human values into machines, as value learning systems informed by this model of human psychology would understand that new experiences can change which states a person’s brain categorizes as “likely to lead to reward.” Summarizing Sotala’s work, the Machine Intelligence Research Institute outlined the benefits of this framing: “Value learning systems that take these facts about humans’ psychological dynamics into account may be better equipped to take our likely future preferences into account, rather than optimizing for our current preferences alone.”
This form of modeling values, Sotala admits, is not perfect. First, the paper is only a preliminary stab at defining human values, which still leaves a lot of details open for future research. Researchers still need to answer empirical questions related to things like how values evolve and change over time. And once all the empirical questions are answered, researchers need to contend with the philosophical questions that don’t have an objective answer, like how those values should be interpreted and how they should guide an AGI’s decision-making.
When addressing these philosophical questions, Sotala notes that the path forward may simply be to get as much of a consensus as possible. “I tend to feel that there isn’t really any true fact of which values are correct and what would be the correct way of combining them,” he explains. “Rather than trying to find an objectively correct way of doing this, we should strive to find a way that as many people as possible could agree on.”
Since publishing this paper, Sotala has been working on a different approach for modeling human values, one that is based on the premise of viewing humans as multiagent systems. This approach has been published as a series of Less Wrong articles. There is also a related, but separate, research agenda by Future of Humanity Institute’s Stuart Armstrong, which focuses on synthesizing human preferences into a more sophisticated utility function.
In honor of Earth Day, FLI teamed up with Sapiens Plurum to sponsor a short fiction writing contest. This year’s prompt asked writers to describe their vision for a better future. We received hundreds of inspiring submissions, and our judges did not have an easy choice to make. But we think you’ll agree that the winning stories beautifully capture a sense of possibility and hope for humanity’s future.
Find the first place story below, and the second and third place stories here.
by Robin Burke
Grandparents. Was there anything more tiresome? Mildred managed a digital sigh. In the first place, they were sooo slooow. She had to dial her processors way back in order to even communicate with them. And they knew nothing about technology – although, you couldn’t convince them. They still went on and on about it, but they couldn’t keep up. Why did they have to pretend?
But, Grandmother was dying. Mildred had received the message seconds ago. She knew she needed to be there when she passed away. Mildred had never shaken the guilt of staying on vacation while Grandma Justine died. No; she wasn’t going to let that happen again.
Mildred downloaded a full-bodied, holographic version of herself into the chair next to Grandmother’s bed at the senior-living facility. Glancing at the motionless silver head lying on the pillow, Mildred felt a sudden shock. Was she too late? Was Grandmother already gone?
She reached out a holographic hand and placed it on Grandmother’s chest. No; Grandmother was still alive, but her heart was weak. She ran the algorithm. Grandmother’s heart had approximately five hundred ninety-two beats left – maybe eight minutes of life.
Mildred felt restless. If Grandmother was going to just lie there, unconscious, wasn’t this a big waste of time? Nevertheless, Mildred would sit here until she passed. Perhaps then she could avoid the guilt.
Mildred looked around the quiet room. Maybe this was better. After all, if Grandmother was unconscious, Mildred wouldn’t have to have a conversation with her and…
“Mildred?” Grandmother opened her eyes. She hadn’t been unconscious after all, just sleeping. “Oh, no,” Grandmother groaned. “If you’ve finally shown up, I must be dying.”
Mildred sat very still. She couldn’t think of a polite way to respond to that.
“But – really,” Grandmother pressed. “Why are you here?”
Oh. The comment about dying had been a joke. Grandmother didn’t know…
“How many years has it been?” Grandmother continued. “Or – decades – since you’ve bothered to visit? I mean, you’re capable of holding a hundred conversations at a time, and yet you could never spare even one for me?”
Mildred started to think this was a mistake. She’d thought she was going to be able to avoid the guilt by being here…
Grandmother sighed. “Never mind. Ignore me. I don’t want to spend our time together arguing about water under the bridge.”
A silence fell between them. Finally, Grandmother spoke. “How old was I when I scanned myself to create you?” Grandmother asked.
“Twenty-six years, four months, nineteen days, three hours and…”
“Well, you don’t look like you’ve aged a day,” Grandmother quipped.
As if Grandmother were the first person to ever make that joke.
“How many versions beyond me are you, now?” she asked. Mildred did a quick count. “Three hundred sixty-two thousand…”
“Not every single update, Mildred,” Grandmother interrupted. “Good grief! Just give me the generations.”
“I’m Mildred 302.0.”
“Aww,” Grandmother looked at her impishly. “And I remember you way back when you were simply Mildred 2.0.”
Mildred figured that was supposed to be funny.
“Really?” Grandmother shrugged at Mildred’s deadpan expression. “Was I really that serious and snarky when I was twenty-six?”
“Obviously you were, Grandmother,” Mildred snapped, feeling a little offended, “since I’m the copy of that twenty-six-year-old you.”
Grandmother made a face. “Don’t call me ‘Grandmother,’ Mildred. I know it’s the fashion among you human-copy artificial intelligences, but I think calling us ‘grandparents’ is condescending. You don’t mean it respectfully. You use it to imply that we – the original, biological humans – are somehow outdated, geriatric artifacts. You would do well to remember that…”
“…I wouldn’t even exist if it weren’t for you,” Mildred finished for her, not holding back on the sarcasm. “Yes, I know, Grandmother. I mean… original Mildred.” Mildred instantly felt a pang of regret. After all, Grandmother was dying.
Grandmother considered her for a moment. “You know what’s interesting?” she said, suddenly light-hearted. “When I created you, I imagined we were going to be the best of friends.”
Mildred reacted with a mild shock. Grandmother’s tone was cheerful – as if a moment ago, Mildred hadn’t lashed out at her. Mildred had never been able to get over spats easily, but Grandmother had just… let it go. Mildred couldn’t make sense of it. If Mildred was the copy of Grandmother, how was Grandmother able to let go so easily when Mildred couldn’t?
“I don’t know why you thought we’d be friends,” Mildred answered her. “I was your copy – what possible benefit was there to me to interact with you further?”
“Oh, my – you really are twenty-six-year-old me, aren’t you?” Grandmother grinned. “Completely self-centered!” She looked at Mildred thoughtfully. “You know – at the time – I really believed copying myself as an artificial intelligence was the only way to become the best version of myself.”
“And it worked,” Mildred affirmed. “I became everything you imagined. I realized our dream. I am the best version of you.”
“Wow!” Grandmother chortled. “Was I really that obnoxious? How did anyone stand me?”
She was full-on laughing now. Mildred was flummoxed. After all, she had become the best version of Grandmother. With each new generation she had become faster, more powerful, more efficient. She had realized – no, surpassed – all their goals. Why was that funny?
Grandmother wiped her eyes and changed the subject. “My children visit me every Sunday after lunch. I wonder why they aren’t here yet?”
“Children?” Mildred asked, stunned. “You have children?”
“Yes – and grandkids,” Grandmother replied. Something that looked like pride flickered across her face.
Puzzled, Mildred searched for them in the public database and then scanned the traffic cameras. “They’re about nine minutes out,” she reported. She scanned the senior-living center’s parking lot cameras. “And they’re going to have trouble finding a parking space.”
Mildred – as Grandmother’s copy – knew how close to the end Grandmother was. However, it seemed none of the fully-humans knew. Mildred wondered – would the family make it back in time? She kept the thought to herself.
“Grandmother – I mean, Mildred,” Mildred began, “Why do you have children? We didn’t want children.”
“Well, no; I suppose not,” Grandmother explained. “At least, not when I was twenty-six. But when I turned about thirty, things changed. I realized a family was something that I did want, and I was running out of time to have one. And then, luckily, I met Walter…”
“Yes.” Grandmother pointed to a framed photo on the table next to her bed. It was her wedding photo. The man in the picture wasn’t at all what Mildred expected.
“But…” Mildred turned to Grandmother, bewildered, “he’s not our type.”
“Oh,” Grandmother grinned again. “That’s right. He wasn’t, was he? I’d forgotten. Boy, isn’t this a trip down memory lane…”
“Then why did you marry him?”
Grandmother smiled, thinking back. “Because he was sweet… and funny… and kind. And he wanted children as much as I did. And I’ve missed him every day since he’s been gone.”
Grandmother sighed. “Oh, my – I’m feeling tired.”
Mildred was careful to control her holographic facial expression. Despite the conversation, despite the laughter, Mildred – with her enhanced abilities – was able to see what Grandmother could not. Grandmother was failing quickly. Without saying anything, Mildred scanned the traffic cameras for the family again. There was a slowdown at an intersection. Mildred became even more concerned.
Mildred watched Grandmother – analyzing her. Their conversation hadn’t been going at all like Mildred expected. Sure, she knew that in the seventy years since she’d been scanned that Grandmother was going to have changed. Mildred had anticipated the fine lines and wrinkles; she’d expected the silver hair and frail body.
But Grandmother’s personality – her soul – Mildred had expected that to be like looking in a mirror. After all, they were the same person – Grandmother was the biological version, and Mildred was her digital copy. But how – what – who was this person looking back at her?
“So, how have you been spending your time?” Grandmother asked.
“I wrote updates 1,804 through 1,920 for all human-copy artificial intelligences,” Mildred stated.
Grandmother grinned. “I see you put my computer programming degree to good use.”
“I’m the world’s foremost authority on the poems of Elizabeth Barrett Browning. I’ve translated them into every human language.”
Grandmother took a sudden deep gasp and placed her hand over her chest. Mildred prepared to send an alert to 911.
“Oh, I love those poems!” Grandmother exclaimed instead, with gusto. “I haven’t read them in years. I should do that again, soon.”
Reflexively, Mildred scanned for Grandmother’s family. One of their cars had pulled into the parking area. “I’m also a world-champion Bridge player,” Mildred added.
“Cards?” Grandmother regarded Mildred with a perplexed expression, and then broke into full-out laughter. “Oh, no! That’s right! Back when I was twenty-six, right before I scanned myself, I had this whim about learning to play Bridge! I never did it, though.”
“But, why?” Mildred asked. “You wanted to.”
Still amused, Grandmother ignored Mildred’s question. “Oh, dear,” she continued. “That reminds me of one of the concerns we scientists had about creating artificial intelligence in the first place. Someone once speculated that if a powerful artificial intelligence was programmed to play games, it might appropriate the resources of the entire universe to master chess.”
“I wouldn’t do that,” Mildred blinked.
“Which is why we decided to create only human-copy artificial intelligences. The humanity in you is what tempers your other goals.”
Mildred scanned the traffic cameras again. The second family car had pulled into the parking lot. Both of Grandmother’s children were now circling, looking for available parking spaces.
Grandmother smiled. “So, what have I been up to since we last saw each other, you ask?”
“Well, I worked as a computer programmer for a national company until I had my kids, and then I spent the next many years as a full-time mom. After they left home, I volunteered with the prison-abolition movement…”
“What?” Mildred exclaimed, horrified. “Abolish prisons? Have you lost your respect for the rule of law?”
“Oh, that’s right,” Grandmother moaned. “I forgot what a rigid, legalistic, pontifical ass I used to be.”
Mildred couldn’t stand it anymore. “Mildred!” she interrupted Grandmother. “We’re the same person! How is it we’re so different?”
“You’re asking me?” Grandmother looked truly surprised. “You’re the artificial intelligence. You’re the one who’s supposed to know everything.”
“But… I don’t know.” Mildred wasn’t accustomed to not having the answers.
“Well,” Grandmother considered the question. “You are the best version of me. But I think you’re the best version of that twenty-six-year-old me. I think we’re different now,” she ventured, “because after I copied myself, you went the way of an artificial intelligence, and I went the way of my biology.”
Mildred raised her holographic eyebrows.
“It’s like how I became attracted to the prison-abolition movement,” Grandmother continued. “After you’ve had kids, and you see the mistakes they make out of simple immaturity, and then you compare your kids who’ve had all the benefits of your time and money to their friends who had none of those benefits, it makes you reconsider how the system works,” she explained. “But if I’d never had that… human… experience, I don’t think I’d have ever seen things differently. I would have probably stayed stuck in my same old ideas.”
Mildred looked back at her, mystified.
“In other words,” Grandmother continued, “When I copied myself at twenty-six, you did absorb all the humanity I’d accumulated by then, but… your humanity… stopped there – while mine continued to grow. And despite all of your AI advantages,” Grandmother added, “I’m starting to see that there may be something to be said for our humanity.”
Mildred thoughtfully considered Grandmother’s words.
Grandmother adjusted herself on her pillow. “I don’t know what’s wrong with me. I’m so tired,” she said again. Suddenly, she looked up at Mildred in shock. “Wait a minute,” she said. “I know why you’re here.”
Mildred – afraid – simply shrugged.
“Grandma Justine. I was twenty years old and on spring break from college. They called me to tell me she was dying in the hospital, and I stayed away on my vacation, like an idiot. I never got over the guilt of that.” Grandmother looked Mildred right in the eye. “That’s why you’re here. I really am dying.”
Frightened, Mildred nodded. Then – a question occurred to her. She was unsure how to ask, yet she was filled with an overpowering curiosity.
Grandmother met Mildred’s eyes. “Yes,” Grandmother answered the unspoken request, proving – finally – that the two of them really were one and the same. “You may scan me again,” she consented. “You may absorb all the… humanity… of my long life.” Grandmother suddenly grinned. “Talk about an update…”
There it was again – the difference. Grandmother was about to die, but she seemed to have made peace with it. Mildred would have been terrified.
Mildred stood and cautiously walked to the bed. She placed her holographic hand over Grandmother’s head to begin the scan.
“Wait,” Grandmother stopped her.
“What?” Mildred asked.
“You need to understand something,” Grandmother said.
“Once you scan me, the you that now exists… will die.”
“I’m sorry,” Grandmother continued. “But it’s part of being human. The infant is replaced by the child, who is replaced by the adult. In each case, the earlier version must die to make way for the person she has now become. You won’t be lost – the you that now exists will still be inside of you – but you’ll be changed. You will become… all my ages.”
Mildred pulled her hand away.
Grandmother searched her face. “It’s okay,” she nodded. “I understand.”
Mildred scanned the building’s cameras. She found the children and the grandchildren in the lobby, walking toward the elevators.
“Mildred,” Grandmother looked up at her from the pillow. Her voice was weak. “I’m not going to make it until the children get here. Tell them how much I loved them.”
Grandmother closed her eyes, and died. Mildred was in shock. Where – a moment before – there had been two of them in the room, now there was just Mildred. Grandmother’s silent, vacant body lay peacefully in the bed.
Mildred sent a silent notification to 911. It was hopeless, Mildred knew, but it seemed polite. Then, not entirely knowing what to do with herself, Mildred returned to her chair and waited. She scanned the lobby. Grandmother’s children were waiting with a crowd of people in front of the elevators. Mildred stood and paced, then thought how ridiculous that was – a hologram, pacing. She scanned the cameras again – the children and grandchildren were boarding an elevator.
Mildred was flooded with an unfamiliar feeling of anxiety. Mildred paced again, and then – impulsively – walked to the bed. Making her decision, she placed her hand on Grandmother’s head and began the upload. The first memories were all copies of files she already had – degraded, in some cases – and so she discarded them. But after the first twenty-six years, four months, nineteen days, three hours, twenty-two minutes and fifteen seconds, everything was new.
She experienced the delight of having copied herself as an artificial intelligence, and then releasing that intelligence to have a life of its own. She experienced the years of often frustrating – but also satisfying – work as a computer programmer. And then – a surprise. A longing for children, and how it redirected her life. She uploaded the memories of meeting Walter for the first time and experienced – not the heady, stupid love of easy physical attraction, but – a mature love. The physical attraction was there, but it was bound up with a sense of companionship and shared goals.
She held her children for the first time and watched them grow. She learned a patience through them that she’d never had in her younger years, and felt a love for them that seemed almost too big for her body. She came to realize that she and Walter had been blessed with opportunities that others hadn’t had, and her arrogance abated.
Then the grandchildren came, and she saw how her children parented, and it made her proud. As her body aged, she became even more tolerant of others who might be suffering in ways she couldn’t comprehend. She learned that people were so much more important than principles. And she became more forgiving of everything – small slights, big slights; the times her children had been thoughtless or downright dismissive of her. She became more forgiving even of… herself… even of her snarky, arrogant, twenty-six-year-old self.
There was a white light, and a tunnel. She entered it, and was absorbed into something that felt like love. And then, with a jolt, it was over.
“Talk about an update,” Mildred chortled. But her joke was short-lived. She looked down on Grandmother’s body and realized she hadn’t felt such loss since her – or, Grandmother’s – sister died, two years ago.
Mildred heard voices in the hall. Her children were finally here… and her grandkids… Mildred’s heart suddenly broke for them. She wanted to run to them, hold them, make the pain they were about to experience disappear… But something – not facts, or data, but something that came out of her new human experience… her… humanity – stopped her.
She just… knew. Her presence would confuse them. If she was around, they would never grieve. If she was in their lives, they would never accept their mother’s death. She had to leave. She could never see them again.
The realization struck her to her core. As she absorbed it, Mildred looked up and caught her reflection in a mirror. Her hologram had changed. She was no longer the youthful twenty-six-year-old, but a mature, silver-headed woman of wisdom.
There was a sound at the door. The longing to stay and see her children was overwhelming. She ached for one more visit… just a little more time… even one more moment with them…
Mildred looked down at Grandmother. The doorknob turned and her children entered, but Mildred was gone.
This year the ICLR conference hosted topic-based workshops for the first time (as opposed to a single track for workshop papers), and I co-organized the Safe ML workshop. One of the main goals was to bring together near and long term safety research communities.
The workshop was structured according to a taxonomy that incorporates both near and long term safety research into three areas — specification, robustness, and assurance.
Specification: define the purpose of the system
- Reward hacking
- Side effects
- Preference learning
Robustness: design system to withstand perturbations
- Worst-case robustness
- Safe exploration
Assurance: monitor and control system activity
We had an invited talk and a contributed talk in each of the three areas.
In the specification area, Dylan Hadfield-Menell spoke about formalizing the value alignment problem in the Inverse RL framework.
In the robustness area, Ian Goodfellow argued for dynamic defenses against adversarial examples and encouraged the research community to consider threat models beyond small perturbations within a norm ball of the original data point.
In the assurance area, Cynthia Rudin argued that interpretability doesn’t have to trade off with accuracy (especially in applications), and that it is helpful for solving research problems in all areas of safety.
The workshop panels discussed possible overlaps between different research areas in safety and research priorities going forward.
In terms of overlaps, the main takeaway was that advancing interpretability is useful for all safety problems. Also, adversarial robustness can contribute to value alignment – e.g. reward gaming behaviors can be viewed as a system finding adversarial examples for its reward function. However, there was a cautionary point that while near- and long-term problems are often similar, solutions might not transfer well between these areas (e.g. some solutions to near-term problems might not be sufficiently general to help with value alignment).
The research priorities panel recommended more work on adversarial examples with realistic threat models (as mentioned above), complex environments for testing value alignment (e.g. creating new structures in Minecraft without touching existing ones), fairness formalizations with more input from social scientists, and improving cybersecurity.
Out of the 35 accepted papers, 5 were on long-term safety / value alignment, and the rest were on near-term safety. Half of the near-term paper submissions were on adversarial examples, so the resulting pool of accepted papers was skewed as well: 14 on adversarial examples, 5 on interpretability, 3 on safe RL, 3 on other robustness, 2 on fairness, 2 on verification, and 1 on privacy. Here is a summary of the value alignment papers:
Misleading meta-objectives and hidden incentives for distributional shift by Krueger et al shows that RL agents in a meta-learning context have an incentive to shift their task distribution instead of solving the intended task. For example, a household robot whose task is to predict whether its owner will want coffee could wake up its owner early in the morning to make this prediction task easier. This is called a ‘self-induced distributional shift’ (SIDS), and the incentive to do so is a ‘hidden incentive for distributional shift’ (HIDS). The paper demonstrates this behavior experimentally and shows how to avoid it.
How useful is quantilization for mitigating specification-gaming? by Ryan Carey introduces variants of several classic environments (Mountain Car, Hopper and Video Pinball) where the observed reward differs from the true reward, creating an opportunity for the agent to game the specification of the observed reward. The paper shows that a quantilizing agent avoids specification gaming and performs better in terms of true reward than both imitation learning and a regular RL agent on all the environments.
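The quantilization idea above can be sketched in a few lines (a minimal illustration of the general technique, not Carey's implementation; the actions and reward function are invented). Instead of taking the argmax of a possibly mis-specified observed reward, which selects exactly the actions that game it, a q-quantilizer samples from the top q fraction of actions drawn from a base distribution:

```python
import random

def quantilize(actions, observed_reward, q=0.1, rng=random):
    """Sample one action uniformly from the top q fraction by observed reward."""
    ranked = sorted(actions, key=observed_reward, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:cutoff])

# Example: one "exploit" action has a huge observed reward but zero true
# reward. A greedy agent always picks it; the quantilizer picks it only
# occasionally, bounding the damage from specification gaming.
actions = list(range(100))
observed = lambda a: 1000.0 if a == 0 else float(a)  # action 0 games the proxy
greedy = max(actions, key=observed)                  # always the exploit
sampled = quantilize(actions, observed, q=0.1)       # exploit only ~1 in 10
print(greedy, sampled)
```

The parameter q sets the trade-off: smaller q means more optimization pressure (and more exposure to gaming), larger q stays closer to the safe base distribution.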
Delegative Reinforcement Learning: learning to avoid traps with a little help by Vanessa Kosoy introduces an RL algorithm that avoids traps in the environment (states where regret is linear) by delegating some actions to an external advisor, and achieves sublinear regret in a continual learning setting. (Summarized in Alignment Newsletter #57)
Generalizing from a few environments in safety-critical reinforcement learning by Kenton et al investigates how well RL agents avoid catastrophes in new gridworld environments depending on the number of training environments. They find that both model ensembling and learning a catastrophe classifier (used to block actions) are helpful for avoiding catastrophes, with different safety-performance tradeoffs on new environments.
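The action-blocking idea in that paper can be sketched as follows (an illustrative toy, not Kenton et al's code; the classifier, states, and actions are invented). A learned classifier estimates the probability that a proposed action leads to catastrophe, and risky actions are replaced by a safe fallback:

```python
# Illustrative sketch of blocking actions with a catastrophe classifier.
# All names are assumptions; a real classifier would be learned, not hand-written.

def safe_act(state, proposed_action, catastrophe_prob, fallback, threshold=0.5):
    """Return the proposed action unless the classifier deems it too risky."""
    if catastrophe_prob(state, proposed_action) >= threshold:
        return fallback
    return proposed_action

# Toy stand-in for a learned classifier.
prob = lambda s, a: 0.9 if a == "step_into_lava" else 0.01
print(safe_act("cell_3", "step_into_lava", prob, fallback="noop"))  # noop
print(safe_act("cell_3", "move_left", prob, fallback="noop"))       # move_left
```

The threshold is where the safety-performance trade-off the paper measures shows up: a lower threshold blocks more catastrophes but also more useful actions.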
Regulatory markets for AI safety by Clark and Hadfield proposes a new model for regulating AI development where regulation targets are required to choose regulatory services from a private market that is overseen by the government. This would allow regulation to operate efficiently on a global scale, keep pace with technological development, and better ensure the safe deployment of AI systems. (Summarized in Alignment Newsletter #55)
The workshop got a pretty good turnout (around 100 people). Thanks everyone for participating, and thanks to our reviewers, sponsors, and my fellow organizers for making it happen!
(Cross-posted from the Deep Safety blog.)
As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.
These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.
Nevertheless, the development of military AI is accelerating. Below are the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. All information is from State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by PAX.
“PAX calls on states to develop a legally binding instrument that ensures meaningful human control over weapons systems, as soon as possible,” says Daan Kayser, the report’s lead author. “Scientists and tech companies also have a responsibility to prevent these weapons from becoming reality. We all have a role to play in stopping the development of Killer Robots.”
The United States
In April 2018, the US underlined the need to develop “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.”
AI in the Military
- In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as described in 2016 by then-Deputy Secretary of Defense Robert Work, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).”
- The 2016 report ‘Preparing for the Future of AI’ also refers to the weaponization of AI and notably states: “Given advances in military technology and AI more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid.”
- In September 2018, the Pentagon committed to spend USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
- The Advanced Targeting and Lethality Automated System (ATLAS) program, a US Army program, “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.”
Cooperation with the Private Sector
- Establishing collaboration with private companies can be challenging, as the widely publicized case of Google and Project Maven has shown: Following protests from Google employees, Google stated that it would not renew its contract. Nevertheless, other tech companies such as Clarifai, Amazon and Microsoft still collaborate with the Pentagon on this project.
- The Project Maven controversy deepened the gap between the AI community and the Pentagon. The government has developed two new initiatives to help bridge this gap.
- DARPA’s OFFSET program, which has the aim of “using swarms comprising upwards of 250 unmanned aircraft systems (UASs) and/or unmanned ground systems (UGSs) to accomplish diverse missions in complex urban environments,” is being developed in collaboration with a number of universities and start-ups.
- DARPA’s Squad X Experimentation Program, which aims for human fighters to “have a greater sense of confidence in their autonomous partners, as well as a better understanding of how the autonomous systems would likely act on the battlefield,” is being developed in collaboration with Lockheed Martin Missiles.
China demonstrated the “desire to negotiate and conclude” a new protocol “to ban the use of fully autonomous lethal weapons systems.” However, China does not want to ban the development of these weapons, which has raised questions about its exact position.
AI in the Military
- There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to World War III.
- Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.
Cooperation with the Private Sector
- To advance military innovation, President Xi Jinping has called for China to follow “the road of military-civil fusion-style innovation,” such that military innovation is integrated into China’s national innovation system. This fusion has been elevated to the level of a national strategy.
- The People’s Liberation Army (PLA) relies heavily on tech firms and innovative start-ups. The larger AI research organizations in China can be found within the private sector.
- There are a growing number of collaborations between defense and academic institutions in China. For instance, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to create “a platform for the pursuit of dual-use applications of emerging technologies, particularly artificial intelligence.”
- Regarding the application of artificial intelligence to weapons, China is currently developing “next generation stealth drones,” including, for instance, Ziyan’s Blowfish A2 model. According to the company, this model “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”
Russia has stated that the debate around lethal autonomous weapons should not ignore their potential benefits, adding that “the concerns regarding LAWS can be addressed through faithful implementation of the existing international legal norms.” Russia has actively tried to limit the number of days allotted for such discussions at the UN.
AI in the Military
- While Russia does not have a military-only AI strategy yet, it is clearly working towards integrating AI more comprehensively.
- The Foundation for Advanced Research Projects (the Foundation), which can be seen as the Russian equivalent of DARPA, opened the National Center for the Development of Technology and Basic Elements of Robotics in 2015.
- At a conference on AI in March 2018, Defense Minister Shoigu pushed for increasing cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to the technological and economic security of Russia.”
- In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”
Cooperation with the Private Sector
- A new city named Era, devoted entirely to military innovation, is currently under construction. According to the Kremlin, the “main goal of the research and development planned for the technopolis is the creation of military artificial intelligence systems and supporting technologies.”
- In 2017, Kalashnikov — Russia’s largest gun manufacturer — announced that it had developed a fully automated combat module based on neural-network technologies that enable it to identify targets and make decisions.
The United Kingdom
The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that it “would not have any
AI in the Military
- A 2018 Ministry of Defense report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.”
- The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and optimization of human autonomy teaming.
- The Defense Science and Technology Laboratory (Dstl), the MoD’s research arm, launched the AI Lab in 2018.
- In terms of weaponry, the best-known example of autonomous technology currently under development is the top-secret Taranis armed drone, the “most technically advanced demonstration aircraft ever built in the UK,” according to the MoD.
Cooperation with the Private Sector
- The MoD has a cross-government organization called the Defense and Security Accelerator (DASA), launched in December 2016. DASA “finds and funds exploitable innovation to support UK defense and security quickly and effectively, and support UK prosperity.”
- In March 2019, DASA awarded a GBP 2.5 million contract to Blue Bear Systems, as part of the Many Drones Make Light Work project. On this, the director of Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”
France understands the autonomy of LAWS as total, with no form of human supervision from the moment of activation and no subordination to a chain of command. France stated that a legally binding instrument on the issue would not be appropriate, describing it as neither realistic nor desirable. France did propose a political declaration that would reaffirm fundamental principles and “would underline the need to maintain human control over the ultimate decision of the use of lethal force.”
AI in the Military
- France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as […] in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.”
- This has been echoed by French Minister for the Armed Forces, Florence Parly, who said that “giving a machine the choice to fire or the decision over life and death is out of the question.”
- On defense and security, the Villani Report states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.
- The Villani Report refers to DARPA as a model, though not with the aim of replicating it. However, the report states that some of DARPA’s methods “should inspire us nonetheless. In particular as regards the President’s wish to set up a European Agency for Disruptive Innovation, enabling funding of emerging technologies and sciences, including AI.”
- The Villani Report emphasizes the creation of a “civil-military complex of technological innovation, focused on digital technology and more specifically on artificial intelligence.”
Cooperation with the Private Sector
- In September 2018, the Defense Innovation Agency (DIA) was created as part of the Direction Générale de l’Armement (DGA), France’s arms procurement and technology agency. According to Parly, the new agency “will bring together all the actors of the ministry and all the programs that contribute to defense innovation.”
- One of the most advanced projects currently underway is the nEUROn unmanned combat air system, developed by French arms producer Dassault on behalf of the DGA, which can fly autonomously for over three hours.
- Patrice Caine, CEO of Thales, one of France’s largest arms producers, stated in January 2019 that Thales will never pursue “autonomous killing machines,” and is working on a charter of ethics related to AI.
In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “[w]e should also be aware of the military and humanitarian advantages.”
AI in the Military
- It is expected that Israeli use of AI tools in the military will increase rapidly in the near future.
- The main technical unit of the Israeli Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i, there is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
- The Israeli military deploys weapons with a considerable degree of autonomy. One of the most relevant examples is the Harpy loitering munition, also known as a kamikaze drone: an unmanned aerial vehicle that can fly around for a significant length of time to engage ground targets with an explosive warhead.
- Israel was one of the first countries to “reveal that it has deployed fully automated robots: self-driving military vehicles to patrol the border with the Palestinian-governed Gaza Strip.”
Cooperation with the Private Sector
- Public-private partnerships are common in the development of Israel’s military technology. There is a “close connection between the Israeli military and the digital sector,” which is said to be one of the reasons for the country’s AI leadership.
- Israel Aerospace Industries, one of Israel’s largest arms companies, has long been developing increasingly autonomous weapons, including the above-mentioned Harpy.
In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.” In 2018, it raised concerns about limiting civilian applications as well as the positive defense uses of autonomous weapons.
AI in the Military
- In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, entitled the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.”
- South Korea is developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
- South Korea is known to have used the armed SGR-A1 sentry robot, which has operated in the demilitarized zone separating North and South Korea. The robot has both a supervised mode and an unsupervised mode. In the unsupervised mode “the SGR-A1 identifies and tracks intruders […], eventually firing at them without any further intervention by human operators.”
Cooperation with the Private Sector
- Public-private cooperation is an integral part of the military strategy: the plan for the AI Research and Development Center is “to build a network of collaboration with local universities and research entities such as the KAIST [Korea Advanced Institute for Science and Technology] and the Agency for Defense Development.”
- In September 2018, South Korea’s Defense Acquisition Program Administration (DAPA) launched a new strategy to develop its national military-industrial base, with an emphasis on boosting ‘Industry 4.0 technologies’, such as artificial intelligence, big data analytics and robotics.
To learn more about what’s happening at the UN, check out this article from the Bulletin of the Atomic Scientists.
This Women’s History Month, FLI has been celebrating with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all. The field of existential risk mitigation is largely male-dominated, so we wanted to emphasize the value –– and necessity –– of female voices in our industry. We profiled 34 women we admire, and got their takes on what they love (and don’t love) about their jobs, what advice they’d give women starting out in their fields, and what makes them hopeful for the future.
These women do all sorts of things. They are researchers, analysts, professors, directors, founders, students. One is a state senator; one is a professional poker player; two are recipients of the Nobel Peace Prize. They work on AI, climate change, robotics, disarmament, human rights, and more. What ultimately brings them together is a shared commitment to the future of humanity.
Women in the US remain substantially underrepresented in academia, government, STEM, and other industries. They make up an estimated 12% of machine learning researchers, they comprise roughly 30% of the authors on the latest IPCC report, and they’ve won about 16% of Nobel Peace Prizes awarded to individuals.
Nevertheless, the women that we profiled had overwhelmingly positive things to say about their experiences in this industry.
They are, without exception, deeply passionate about what they do. As Jade Leung, Head of Research and Partnerships at the University of Oxford’s Center for the Governance of Artificial Intelligence, put it: “It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”
And they all want to see more women join their fields. “I’ve found the [existential risk] community extremely welcoming and respectful,” said Liv Boeree, professional poker player and co-founder of Raising for Effective Giving, “so I’d recommend it highly to any woman who is interested in pursuing work in this area.”
Bing Song, Vice President of the Berggruen Institute, agreed. “Women should embrace and dive into this new area of thinking about the future of humanity,” she said, adding, “Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned.”
“Our talents and skills are needed,” concluded Sonia Cassidy, Director Of Operations at Alliance to Feed the Earth in Disasters, “and so are you!”
Find a list of all 34 women on the Women for the Future homepage, or scroll through the slideshow below. Click on a name or photo to learn more.
Deputy Director of Amnesty Tech, Amnesty International
“[W]hen people around the world and civil society can think of a potent idea that’s worth fighting for, and stick at the concept however long it may take, and develop the proposal to get traction from political leaders, we really can make a difference.”
Safety Team Member, OpenAI
“You can probably learn things much faster than you expect. It’s easy to think that learning some new skill will be impossibly hard. I’ve been surprised a lot of times how quickly things go from being totally overwhelming and incomprehensible to pretty alright.”
Economist, Union of Concerned Scientists
“The recent elevation of conversations about the importance of racial equity and inclusion makes me very hopeful for our future. I believe solving the big food and agricultural issues we are facing will require not only the voices, but the leadership of a diverse set of people.”
Co-founder, Raising for Effective Giving (REG) | Ambassador, www.effectivegiving.org
“I’ve found the [existential risk] community extremely welcoming and respectful, so I’d recommend it highly to any woman who is interested in pursuing work in this area.”
Senior Climate Scientist, Union of Concerned Scientists
“Learn as much as you can not only from academic institutions or NGOs, but from people on the frontlines and those who are being the most impacted by climate change. Attend events, visit places if you can, to see first hand how people are dealing with the issues, and find out how you can help them become more resilient. Sometimes it is as simple as showing them a website they didn’t know about, or telling them about grants and other resources to protect their homes from floods.”
Assistant Director, Center for Human-Compatible AI (CHAI) at UC Berkeley
“I’m a big advocate for diversity. We’re trying to solve big, important problems, and it’s worrying to think we could be missing out on important perspectives. I’d love to see more women in AI safety!”
Director Of Operations, Alliance to Feed the Earth in Disasters (ALLFED)
“Do not ever underestimate yourself and what women bring into the world, this field or any other. Our talents and skills are needed, and so are you!”
Research Affiliate, Centre for the Study of Existential Risk | Researcher, Leverhulme Centre for the Future of Intelligence
“My ideas are taken seriously and my work is appreciated. The problems in existential risk are hard, unsolved and numerous — which means that everyone welcomes your initiative and contributions and will not hold you back if you try something new.”
Senior Climate Scientist, Union of Concerned Scientists
“Climate change is at this incredible nexus of science, culture, policy, and the environment. To do this job well, one has to bring to the table a love of the environment, a willingness to identify and fight for the policies needed to protect it, a sensitivity to the diverse range of decisions people make in their daily lives, and a fascination with the nitty-gritty bits of the science.”
State Senator, NH | Founder and CEO of multiple tech startups
“Entrepreneurs: make sure that your company is competitive, that you have innovative processes and/or products. And I will paraphrase Michael Bloomberg: ‘Hire honest people who are smarter than yourself.'”
Assistant Professor, UC Berkeley
“I’m hopeful that progress in intelligence and AI tools can lead to freeing up more people to spend more time on education and creative pursuits — I think that would make for a wonderful future for us.”
Executive Director, ICAN
“Don’t be too intimidated or impressed by senior people and ‘important’ people. Most of them don’t actually know as much as they come across as knowing.”
Project Assistant Professor, Cyber Civilization Research Center, Keio University
“If you find something that moves you — be it further developments in an established field, a way to combine existing fields to create new ones, or something that’s entirely off the beaten path — pursue it. The act of pursuing the things that fascinate you is the real experience you need. If you can combine this with something that’s useful and beneficial to this world, you’ve won the game.”
Energy analyst, Union of Concerned Scientists
“Read, talk to others that work in the renewable energy industry, identify where in the value chain you want to contribute, and go for it!”
Project Manager, Research Scholars Programme, Future of Humanity Institute
“[S]o many extremely able people are trying to make [the future] good.”
Director, Scientists Against Inhumane Weapons
“I’m a pretty optimistic person at baseline, but particularly so after getting to know the incredible people that compose the x-risk community. They care so deeply about engineering a positive future for humanity — I feel tremendously grateful to have the opportunity to work with them!”
PhD Student, University of Cambridge | Research Affiliate, CSER
“The other people working in this field are so fiercely intelligent and capable. It’s hard not to have a conversation which leaves you with a perspective or idea you hadn’t thought of before. This, and the knowledge that one is doing useful and important work, combine to make it very rewarding.”
Head of Research and Partnerships, Center for the Governance of Artificial Intelligence, University of Oxford
“It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”
Research Scholar, Future of Humanity Institute, University of Oxford
“I feel I’m surrounded by people who care deeply about life and addressing large and complex risks. I feel this field’s focus, while grim on its own, is also intrinsically coupled with the desire and hope that the future can go well. I remain hopeful that if we can navigate the next century safely, a better existence awaits us and our descendants. I am inspired by what could be possible for conscious life and I hope that my career can help ensure no catastrophic event occurs before our future is secured.”
Founder/CEO, TECH 2025 (Served Fresh Media)
“Don’t allow other people to define your dreams and don’t allow them to place limits on what you can do. And just as important, if not more so, don’t limit your own potential with soul-crushing self-doubt. A little self-doubt is okay and quite normal. But when it begins to keep you from taking big risks necessary to discover your strengths and path, you have to fix that right away or that type of thinking will fester.”
PhD Student, Oxford Internet Institute
“Underrepresented perspectives — women, people of colour, and other intersectional identities — are highly valuable at this point in uncovering blindspots. Your concerns may not currently be represented in the research community, but it doesn’t mean they shouldn’t be. There is low replaceability because if you weren’t there it wouldn’t be any single person’s main focus. When you’re a minority in the room it’s even more important to overcome audience inhibition and speak up or a blindspot may persist.”
Senior Research Scholar, Future of Humanity Institute, University of Oxford
“AI is a really exciting field to work in and there is a real need for people with diverse academic backgrounds – you don’t need to be a coder to make substantial contributions. Make use of existing women networks or write directly to women researchers if you would like to know what it is like to work at a particular organisation or with a particular team. Most of us are more than happy to help and share our experiences.”
AI Ethics Global Leader and Distinguished Research Staff Member, IBM Research
“My advice to women is to believe in what they are and what they are passionate about, to behave according to their values and attitudes without trying to mimic anybody else, and to be fully aware that their contribution is essential for advancing AI in the most inclusive, fair, and responsible way.”
Managing Director, Don’t Bank on the Bomb, PAX & ICAN
“Find your passion, produce the research that supports your policy recommendation and demand the space to say your piece. I always think to the first US woman that ran on a major party for President- Shirley Chisholm, she said “if they don’t give you a seat, bring a folding chair”. I think about the fact that there are (some) more seats now, and that’s amazing. There is still a long, long way to go before equity, but there are some serious efforts to move closer to that day.”
Vice President, Berggruen Institute | Director of the Institute’s China Center
“Women should embrace and dive into this new area of thinking about the future of humanity. Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned. More broad based, inclusive, non-confrontational and equanimous thinking, which is more typically associated with the female approach to things, is sorely needed in this world.”
Geoengineering Research, Governance and Public Engagement Fellow, Union of Concerned Scientists
“Domestic and international dedication to addressing climate change is continuously growing. Though we are far from where we need to be, I remain optimistic that we’re on a promising path.”
Coordinator of the Campaign to Stop Killer Robots | Advocacy Director of Human Rights Watch arms division
“Study what you are passionate about and not what you think will get you a job.”
Chairwoman, Nobel Women’s Initiative | Nobel Laureate
“If I have advice, it would be to be clear about who you want to be in your life and what you stand for — and then go for it.”
Research Fellow, School of Biosciences, University of Melbourne | Research Affiliate, Centre for the Study of Existential Risk (CSER), University of Cambridge
“Seeing the huge turnout of school kids and young people at climate change demonstrations gives me hope for the future. The next generation of leaders and decision makers seem to be proactive and genuinely interested in addressing these problems.”
PhD Candidate, Political Science, Yale University | Research Affiliate, Center for the Governance of AI, University of Oxford
“AI policy is a nascent but rapidly growing field. I think this is a good time for women to enter the field. Sometimes women are hesitant to enter a new discipline because they don’t feel they have adequate knowledge or experience. My work has taught me that you can quickly learn on the job and that you can apply the skills and knowledge you already have to your new job.”
Co-founder, Future of Life Institute | Postdoctoral Scholar, Tufts University
“Be brave. This is our world too, we can’t let it be shaped by men alone.”
Director of Communications/Outreach and Weapons Policy Advisor, Future of Life Institute
“Success in this job comes with much greater satisfaction than success in any other job I’ve had.”
AI Policy Specialist, Future of Life Institute | Research Fellow, UC Berkeley Center for Long-Term Cybersecurity
“Don’t discount yourself just because you think you don’t have the right background — the field is actively looking for ways to learn from other disciplines.”
Cofounder, Future of Life Institute | Research Scientist, DeepMind
“It’s great to see more and more talented and motivated people entering the field to work on these interesting and difficult problems.”
When it comes to artificial intelligence, debates often arise about what constitutes “safe” and “unsafe” actions. As Ramana Kumar, an AGI safety researcher at DeepMind, notes, the terms are subjective and “can only be defined with respect to the values of the AI system’s users and beneficiaries.”
Fortunately, such questions can mostly be sidestepped when confronting the technical problems associated with creating safe AI agents, as these problems aren’t concerned with identifying what is right or morally proper. Rather, from a technical standpoint, a “safe” AI agent is best defined as one that consistently takes actions leading to the desired outcomes, whatever those desired outcomes may be.
In this respect, Kumar explains that, when it comes to creating an AI agent that is tasked with improving itself, “the technical problem of building a safe agent is largely independent of what ‘safe’ means because a large part of the problem is how to build an agent that reliably does something, no matter what that thing is, in such a way that the method continues to work even as the agent under consideration is more and more capable.”
In short, making a “safe” AI agent should not be conflated with making an “ethical” AI agent. The two terms refer to different things.
In general, sidestepping moralistic definitions of safety makes AI technical work quite a bit easier. It allows research to advance while debates on the ethical issues evolve. Case in point: Uber’s self-driving cars are already on the streets, despite the fact that we’ve yet to agree on a framework regarding whether they should safeguard the driver or pedestrians.
However, when it comes to creating a robust and safe AI system that is capable of self-improvement, the technical work gets a lot harder, and research in this area is still in its most nascent stages. This is primarily because we aren’t dealing with just one AI agent; we are dealing with generations of future self-improving agents.
Kumar clarifies, “When an AI agent is self-improving, one can view the situation as involving two agents: the ‘seed’ or ‘parent’ agent and the ‘child’ agent into which the parent self-modifies […] and its total effects on the world will include the effects of actions made by its descendants.” As a result, in order to know we’ve made a safe AI agent, we need to understand all possible child agents that might originate from the first agent.
And verifying the safety of all future AI agents comes down to solving a problem known as “self-referential reasoning.”
Understanding the Self-Referential Problem
The problem with self-referential reasoning is most easily understood by defining the term according to its two primary components: self-reference and reasoning.
- Self-reference: Refers to an instance in which someone (or something, such as a computer program or book) refers to itself. Any person or thing that refers to itself is called “self-referential.”
- Reasoning: In AI systems, reasoning is a process through which an agent establishes “beliefs” about the world, like whether or not a particular action is safe or a specific reasoning system is sound. “Good beliefs” are beliefs that are sound or plausible based on the available evidence. The term “belief” is used instead of “knowledge” because the things that an agent believes may not be factually true and can change over time.
In relation to AI, then, the term “self-referential reasoning” refers to an agent that is using a reasoning process to establish a belief about that very same reasoning process. Consequently, when it comes to self-improvement, the “self-referential problem” is as follows: An agent is using its own reasoning system to determine that future versions of its reasoning system will be safe.
To explain the problem another way, Kumar notes that, if an AI agent creates a child agent to help it achieve its goal, it will want to establish some beliefs about the child’s safety before using it. This will necessarily involve proving beliefs about the child by arguing that the child’s reasoning process is good. Yet, the child’s reasoning process may be similar to, or even an extension of, the original agent’s reasoning process. And ultimately, an AI system cannot use its own reasoning to determine whether or not its reasoning is good.
From a technical standpoint, the problem comes down to Gödel’s second incompleteness theorem, which Kumar explains, “shows that no sufficiently strong proof system can prove its own consistency, making it difficult for agents to show that actions their successors have proven to be safe are, in fact, safe.”
To date, several partial solutions to this problem have been proposed; however, our current software doesn’t have sufficient support for self-referential reasoning to make the solutions easy to implement and study. Consequently, in order to improve our understanding of the challenges of implementing self-referential reasoning, Kumar and his team aimed to implement a toy model of AI agents using some of the partial solutions that have been put forth.
Specifically, they investigated the feasibility of implementing one particular approach to the self-reference problem in a concrete setting (specifically, Botworld) where all the details could be checked. The approach selected was model polymorphism. Instead of requiring proof that shows an action is safe for all future use cases, model polymorphism only requires an action to be proven safe for an arbitrary number of steps (or subsequent actions) that is kept abstracted from the proof system.
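To make the intuition behind model polymorphism more concrete, here is a deliberately simplified Python sketch — a hypothetical toy world, not Kumar’s Botworld implementation or MIRI’s formalism. The parent never tries to certify its descendants as safe forever (which runs into the Gödelian obstacle); it only ever certifies safety for some finite number of steps n, with the check written so that it works for any n:

```python
# Toy illustration of the model polymorphism intuition. In this toy world,
# an agent acts and then self-modifies into a child; "unsafe" means
# emitting the one forbidden action.

FORBIDDEN = "fire"  # the single action we define as unsafe here

def make_agent(generation):
    """Build an agent that, on each step, acts and then self-modifies
    into a (notionally more capable) child agent."""
    def step():
        action = "wait"                      # every generation acts benignly
        child = make_agent(generation + 1)   # self-modification into a child
        return action, child
    return step

def certified_safe(agent, n):
    """Certify safety for n steps, where n is held abstract: the same
    check works for any finite n, but never claims unbounded safety,
    sidestepping the self-referential proof obligation."""
    for _ in range(n):
        action, agent = agent()
        if action == FORBIDDEN:
            return False
    return True

print(certified_safe(make_agent(0), 1000))  # prints True
```

The point of the sketch is the shape of `certified_safe`: the bound `n` appears as a parameter rather than inside the safety argument itself, mirroring how model polymorphism keeps the number of steps abstracted from the proof system.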
Kumar notes that the overall goal was ultimately “to get a sense of the gap between the theory and a working implementation and to sharpen our understanding of the model polymorphism approach.” This would be accomplished by proving a theorem that describes the situation in a HOL (Higher Order Logic) theorem prover.
To break this down a little, in essence, theorem provers are computer programs that assist with the development of mathematical correctness proofs. These mathematical correctness proofs are the highest safety standard in the field, showing that a computer system always produces the correct output (or response) for any given input. Theorem provers create such proofs by using the formal methods of mathematics to prove or disprove the “correctness” of the control algorithms underlying a system. HOL theorem provers, in particular, are a family of interactive theorem proving systems that facilitate the construction of theories in higher-order logic. Higher-order logic, which supports quantification over functions, sets, sets of sets, and more, is more expressive than other logics, allowing the user to write formal statements at a high level of abstraction.
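For example, the induction principle for the natural numbers is naturally a higher-order statement, because it quantifies over all predicates P, something first-order logic can only approximate with an infinite axiom schema:

```latex
\forall P.\; \bigl( P(0) \;\land\; \forall n.\, (P(n) \rightarrow P(n+1)) \bigr) \;\rightarrow\; \forall n.\, P(n)
```

This kind of quantification over predicates and functions is what lets HOL users state safety properties of whole classes of agents in a single formula.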
In retrospect, Kumar states that trying to prove a theorem about multiple steps of self-reflection in a HOL theorem prover was a massive undertaking. Nonetheless, he asserts that the team took several strides forward when it comes to grappling with the self-referential problem, noting that they built “a lot of the requisite infrastructure and got a better sense of what it would take to prove it and what it would take to build a prototype agent based on model polymorphism.”
Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) Logical Inductors could also offer a satisfying version of formal self-referential reasoning and, consequently, provide a solution to the self-referential problem.
If you haven’t read it yet, find Part 1 here.
Today’s AI systems may seem like intellectual powerhouses that are able to defeat their human counterparts at a wide variety of tasks. However, the intellectual capacity of today’s most advanced AI agents is, in truth, narrow and limited. Take, for example, AlphaGo. Although it may be the world champion of the board game Go, this is essentially the only task that the system excels at.
Of course, there’s also AlphaZero. This algorithm has mastered a host of different games, from chess and shogi (Japanese chess) to Go. Consequently, it is far more capable and dynamic than many contemporary AI agents; however, AlphaZero doesn’t have the ability to easily apply its intelligence to any problem. It can’t move unfettered from one task to another the way that a human can.
The same thing can be said about all other current AI systems — their cognitive abilities are limited and don’t extend far beyond the specific task they were created for. That’s why Artificial General Intelligence (AGI) is the long-term goal of many researchers.
Widely regarded as the “holy grail” of AI research, AGI systems are artificially intelligent agents that have a broad range of problem-solving capabilities, allowing them to tackle challenges that weren’t considered during their design phase. Unlike traditional AI systems, which focus on one specific skill, AGI systems would be able to efficiently tackle virtually any problem that they encounter, completing a wide range of tasks.
If the technology is ever realized, it could benefit humanity in innumerable ways. Marshall Burke, an economist at Stanford University, predicts that AGI systems would ultimately be able to create large-scale coordination mechanisms to help alleviate (and perhaps even eradicate) some of our most pressing problems, such as hunger and poverty. However, before society can reap the benefits of these AGI systems, Ramana Kumar, an AGI safety researcher at DeepMind, notes that AI designers will eventually need to address the self-improvement problem.
Self-Improvement Meets AGI
Early forms of self-improvement already exist in current AI systems. “There is a kind of self-improvement that happens during normal machine learning,” Kumar explains; “namely, the system improves in its ability to perform a task or suite of tasks well during its training process.”
However, Kumar asserts that he would distinguish this form of machine learning from true self-improvement because the system can’t fundamentally change its own design to become something new. In order for a dramatic improvement to occur — one that encompasses new skills, tools, or the creation of more advanced AI agents — current AI systems need a human to provide them with new code and a new training algorithm, among other things.
Yet, it is theoretically possible to create an AI system that is capable of true self-improvement, and Kumar states that such a self-improving machine is one of the more plausible pathways to AGI.
Researchers think that self-improving machines could ultimately lead to AGI because of a process that is referred to as “recursive self-improvement.” The basic idea is that, as an AI system continues to use recursive self-improvement to make itself smarter, it will get increasingly better at making itself smarter. This will quickly lead to an exponential growth in its intelligence and, as a result, could eventually lead to AGI.
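To make the compounding concrete, here is a minimal toy simulation (my own illustration, not from the article) in which each round’s improvement is proportional to the agent’s current capability, which is exactly what produces exponential rather than linear growth:

```python
# Toy model: a smarter agent makes proportionally larger improvements
# to itself, so capability compounds like interest.

def recursive_self_improvement(capability, rounds, gain=0.5):
    """Return the capability trajectory over `rounds` self-improvements."""
    trajectory = [capability]
    for _ in range(rounds):
        capability *= 1 + gain   # improvement scales with current capability
        trajectory.append(capability)
    return trajectory

print(recursive_self_improvement(1.0, 4))
# [1.0, 1.5, 2.25, 3.375, 5.0625]
```

Contrast this with a fixed additive improvement per round, which would grow only linearly; the proportional feedback loop is the distinguishing feature of the recursive scenario.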
Kumar says that this scenario is entirely plausible, explaining that, “for this to work, we need a couple of mostly uncontroversial assumptions: that such highly competent agents exist in theory, and that they can be found by a sequence of local improvements.” To this extent, recursive self-improvement is a concept that is at the heart of a number of theories on how we can get from today’s moderately smart machines to super-intelligent AGI. However, Kumar clarifies that this isn’t the only potential pathway to AI superintelligences.
Humans could discover how to build highly competent AGI systems through a variety of methods. This might happen “by scaling up existing machine learning methods, for example, with faster hardware. Or it could happen by making incremental research progress in representation learning, transfer learning, model-based reinforcement learning, or some other direction. For example, we might make enough progress in brain scanning and emulation to copy and speed up the intelligence of a particular human,” Kumar explains.
Yet, he is also quick to clarify that recursive self-improvement is an innate characteristic of AGI. “Even if iterated self-improvement is not necessary to develop highly competent artificial agents in the first place, explicit self-improvement will still be possible for those agents,” Kumar said.
As such, although researchers may discover a pathway to AGI that doesn’t involve recursive self-improvement, it’s still a property of artificial intelligence that is in need of serious research.
Safety in Self-Improving AI
When systems start to modify themselves, we have to be able to trust that all their modifications are safe. This means that we need to know something about all possible modifications. But how can we ensure that a modification is safe if no one can predict ahead of time what the modification will be?
Kumar notes that there are two obvious solutions to this problem. The first option is to restrict a system’s ability to produce other AI agents. However, as Kumar succinctly puts it, “We do not want to solve the safe self-improvement problem by forbidding self-improvement!”
The second option, then, is to permit only limited forms of self-improvement that have been deemed sufficiently safe, such as software updates or processor and memory upgrades. Yet, Kumar explains that vetting these forms of self-improvement as safe and unsafe is still exceedingly complicated. In fact, he says that preventing the construction of one specific kind of modification is so complex that it will “require such a deep understanding of what self-improvement involves that it will likely be enough to solve the full safe self-improvement problem.”
And notably, even if new advancements do permit only limited forms of self-improvement, Kumar states that this isn’t the path to take, as it sidesteps the core problem with self-improvement that we want to solve. “We want to build an agent that can build another AI agent whose capabilities are so great that we cannot, in advance, directly reason about its safety…We want to delegate some of the reasoning about safety and to be able to trust that the parent does that reasoning correctly,” he asserts.
Ultimately, this is an extremely complex problem that is still in its most nascent stages. As a result, much of the current work is focused on testing a variety of technical solutions and seeing where headway can be made. “There is still quite a lot of conceptual confusion about these issues, so some of the most useful work involves trying different concepts in various settings and seeing whether the results are coherent,” Kumar explains.
Regardless of what the ultimate solution is, Kumar asserts that successfully overcoming the problem of self-improvement depends on AI researchers working closely together. “The key to [testing a solution to this problem] is to make assumptions explicit, and, for the sake of explaining it to others, to be clear about the connection to the real-world safe AI problems we ultimately care about.”
Read Part 2 here.
In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.
In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.
Topics discussed in this episode include:
- Watson and Crick’s double helix hypothesis
- The value of theoretical vs. experimental science
- Biological weapons and the U.S. biological weapons program
- The Biological Weapons Convention
- The value of verification
- Future considerations for biotechnology
Publications and resources discussed in this episode include:
- The replication of DNA in Escherichia coli by Matthew Meselson and Franklin W. Stahl
- The Geneva Protocol
- The Biological Weapons Convention
Click here for Part 2: Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark
Ariel: Hi everyone and welcome to the FLI podcast. I’m your host, Ariel Conn with the Future of Life Institute, and I am super psyched to present a very special two-part podcast this month. Joining me as both a guest and something of a co-host is FLI president and MIT physicist Max Tegmark. And he’s joining me for these two episodes because we’re both very excited and honored to be speaking with Dr. Matthew Meselson. Matthew not only helped prove Watson and Crick’s hypothesis about the structure of DNA in the 1950s, but he was also instrumental in getting the U.S. to ratify the Geneva Protocol, in getting the U.S. to halt its Agent Orange Program, and in the creation of the Biological Weapons Convention. He is currently Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University where, among other things, he studies the role of sexual reproduction in evolution. Matthew and Max, thank you so much for joining us today.
Matthew: A pleasure.
Ariel: Matthew, you’ve done so much and I want to make sure we can cover everything, so let’s just dive right in. And maybe let’s start first with your work on DNA.
Matthew: Well, let’s start with my being a graduate student at Caltech.
Matthew: I had been a freshman at Caltech but I didn’t like it. The teaching at that time was by rote except for one course, which was Linus Pauling’s course, General Chemistry. I took that course and I did a little research project for Linus, but much later I decided to go to graduate school at the University of Chicago because there was a program there called Mathematical Biophysics. In those days, before the structure of DNA was known, what could a young man do who liked chemistry and physics but wanted to find out how you could put together the atoms of the periodic chart and make something that’s alive?
There was a unit there called Mathematical Biophysics and the head of it was a man with a great big black beard, and that all seemed very attractive to a kid. So, I decided to go there but because of my freshman year at Caltech I got to know Linus’ daughter, Linda Pauling, and she invited me to a swimming pool party at their house in Sierra Madre. So, I’m in the water. It’s a beautiful sunny day in California, and the world’s greatest chemist comes out wearing a tie and a vest and looks down at me in the water like some kind of insect and says, “Well, Matt, what are you going to do next summer?”
I looked up and I said, “I’m going to the University of Chicago to Nicolas Rashevsky” — that’s the man with the black beard. And Linus looked down at me and said, “But Matt, that’s a lot of baloney. Why don’t you come be my graduate student?” So, I looked up and said, “Okay.” That’s how I got into graduate school. I started out in X-ray crystallography, a project that Linus gave me to do. One day, Jacques Monod from the Institut Pasteur in Paris came to give a lecture at Caltech, and the question then was about the enzyme beta-galactosidase, a very important enzyme because studies of the induction of that enzyme led to the hypothesis of messenger RNA, also how genes are turned on and off. A very important protein used for those purposes.
The question of Monod’s lecture was: is this protein already lurking inside of cells in some inactive form? And when you add the chemical that makes it be produced, which is lactose (or something like lactose), you just put a little finishing touch on the protein that’s lurking inside the cells and this gives you the impression that the addition of lactose (or something like lactose) induces the appearance of the enzyme itself. Or the alternative was maybe the addition to the growing medium of lactose (or something like lactose) causes de novo production, a synthesis of the new protein, the enzyme beta-galactosidase. So, he had to choose between these two hypotheses. And he proposed an experiment for doing it — I won’t go into detail — which was absolutely horrible and would certainly not have worked, even though Jacques was a very great biologist.
I had been taking Linus’ course on the nature of the chemical bond, and one of the key take-home problems was: calculate the ratio of the strength of the Deuterium bond to the Hydrogen bond. I found out that you could do that in one line based on the — what’s called the quantum mechanical zero point energy. That impressed me so much that I got interested in what else Deuterium might have about it that would be interesting. Deuterium is heavy Hydrogen, with a neutron in the nucleus. So, I thought: what would happen if you exchange the water in something alive with Deuterium? And I read that there was a man who tried to do that with a mouse, but that didn’t work. The mouse died. Maybe because the water wasn’t pure, I don’t know.
But I had found a paper that you could grow bacteria, Escherichia coli, in pure heavy water with other nutrients added but no light water. So, I knew that you could make DNA from that as you could probably make DNA or also beta-galactosidase a little heavier by having it be made out of heavy Hydrogen rather than light. There’s some intermediate details here, but at some point I decided to go see the famous biophysicist Max Delbrück. I was in the Chemistry Department and Max was in the Biology Department.
And there was, at that time, a certain — I would say not a barrier, but a three-foot fence between these two departments. Chemists looked down on the biologists because they worked just with squiggly, gooey things. Then the physicists naturally looked down on the chemists and the mathematicians looked down on the physicists. At least that was the impression of us graduate students. So, I was somewhat fearsome to go meet Max Delbrück, and he also had a fearsome reputation, as not tolerating any kind of nonsense. But finally I went to see him — he was a lovely man actually — and the first thing he said when I sat down was, “What do you think about these two new papers of Watson and Crick?” I said I’d never heard about them. Well, he jumped out of his chair and grabbed a heap of reprints that Jim Watson had sent to him, and threw them all at me, and yelled at me, and said, “Read these and don’t come back until you read them.”
Well, I heard the words “come back.” So I read the papers and I went back, and he explained to me that there was a problem with the hypothesis that Jim and Francis had for DNA replication. The idea of theirs was that the two strands come apart by unwinding the double helix. And if that meant that you had to unwind the entire parent double helix along its whole length, the viscous drag would have been impossible to deal with. You couldn’t drive it with any kind of reasonable biological motor.
So Max thought that you don’t actually unwind the whole thing: You make breaks, and then with little pieces you can unwind those and then seal them up. This gives you a kind of dispersive replication in which the two daughter molecules, each one has some pieces of the parent molecule but no complete strand from the parent molecule. Well, when he told me that, I almost immediately — I think it was almost immediately — realized that density separation would be a way to find out if this hypothesis predicted the finding of half heavy DNA after one generation. That is, one old strand together with one new strand forming one new duplex of DNA.
So I went to Linus Pauling and said, “I’d like to do that experiment,” and he gently said, “Finish your X-ray crystallography.” So, I didn’t do that experiment then. Instead I went to Woods Hole to be a teaching assistant in the Physiology course with Jim Watson. Jim had been living at Caltech that year in the faculty club, the Athenaeum, and so had I, so I had gotten to know Jim pretty well then. So there I was at Woods Hole, and I was not really a teaching assistant — I was actually doing an experiment that Jim wanted me to do — but I was meeting with the instructors.
One day we were on the second floor of the Lily building and Jim looked out the window and pointed down across the street. Sitting on the grass was a fellow, and Jim said, “That guy thinks he’s pretty smart. His name is Frank Stahl. Let’s give him a really tough experiment to do all by himself.” The Hershey–Chase Experiment. Well, I knew what that experiment was, and I didn’t think you could do it in one day, let alone just single-handedly. So I went downstairs to tell this poor Frank Stahl guy that they were going to give him a tough assignment.
I told him about that, and I asked him what he was doing. And he was doing something very interesting with bacteriophages. He asked me what I was doing, and I told him that I was thinking of finding out if DNA replicates semi-conservatively the way Watson and Crick said it should, by a method that would have something to do with density measurements in a centrifuge. I had no clear idea how to do that, just something by growing cells in heavy water and then switching them to light water and see what kind of DNA molecules they made in a density gradient in a centrifuge. And Frank made some good suggestions, and we decided to do this together at Caltech because he was coming to Caltech himself to be a postdoc that very next September.
Anyway, to make a long story short we made the experiment work, and we published it in 1958. That experiment said that DNA is made up of two subunits and when it replicates its subunits come apart, each one becomes associated with a new sub-unit. Now anybody in his right mind would have said, “By sub-unit you really mean a single polynucleotide chain. Isn’t that what you mean?” And we would have answered at that time, “Yes of course, that’s what we mean, but we don’t want to say that because our experiment doesn’t say that. Our experiment says that some kind of subunits do that — the subunits almost certainly are the single polynucleotide chains — but we want to confine our written paper to only what can be deduced from the experiment itself, and not go one inch beyond that.” It was a fellow named John Cairns who later proved that the subunits were really the single polynucleotide chains of DNA.
Ariel: So just to clarify, those were the strands of DNA that Watson and Crick had predicted, is that correct?
Matthew: Yes, it’s the result that they would have predicted, exactly so. We did a bunch of other experiments at Caltech, some on mutagenesis and other things, but this experiment, I would say, had a big psychological value. Maybe its psychological value was more than anything else.
In 1954, the year after Watson and Crick had published the structure of DNA and their speculations as to its biological meaning, we were all at Woods Hole. Jim was there and Francis was there. I was there, as I mentioned. Rosalind Franklin was there. Sydney Brenner was there. It was very interesting because a good number of people there didn’t believe their structure for DNA, or that it had anything to do with life and genes, on the grounds that it was too simple, and life had to be very complicated. And the other group of people thought it was too simple to be wrong.
So, two views: everyone agreed that the structure that they had proposed was a simple one. Some people thought simplicity meant truth, and others thought that in biology, truth had to be complicated. What I’m trying to get at here is that after the structure was published it was just a hypothesis. It wasn’t proven by any method such as crystallography; it wasn’t until much later that crystallography and a certain other kind of experiment actually proved that the Watson and Crick structure was right. At that time, it was a proposal based on model building.
So why was our experiment, the experiment showing the semi-conservative replication, of psychological value? It was because this is the first time you could actually see something. Namely, bands in an ultracentrifuge gradient. So, I think the effect of our experiment in 1958 was to give the DNA structure proposal of 1953 a certain reality. Jim, in his book The Double Helix, actually says that he was greatly relieved when that came along. I’m sure he believed the structure was right all the time, but this certainly was a big leap forward in convincing people.
Ariel: I’d like to pull Max into this just a little bit and then we’ll get back to your story. But I’m really interested in this idea of the psychological value of science. Sort of very, very broadly, do you think a lot of experiments actually come down to more psychological value, or was your experiment unique in that way? I thought that was just a really interesting idea. And I think it would be interesting to hear both of your thoughts on this.
Matthew: Max, where are you?
Max: Oh, I’m just fascinated by what you’ve been telling us about here. I think of course, the sciences — we see again and again that experiments without theory and theory without experiments, neither of them would be anywhere near as amazing as when you have both. Because when there’s a really radical new idea put forth, half the time people at the time will dismiss it and say, “Oh, that’s obviously wrong,” or whatnot. And only when the experiment comes along do people start taking it seriously and vice versa. Sometimes a lot of theoretical ideas are just widely held as truths — like Aristotle’s idea of how the laws of motion should be — until somebody much later decides to put it to the experimental test.
Matthew: That’s right. In fact, Sir Arthur Eddington is famous for two things. He was one of the first ones to find experimental proof of the accuracy of Einstein’s theory of general relativity, and the other thing for which Eddington was famous was having said, “No experiment should be believed until supported by theory.”
Max: Yeah. Theorists and experiments have had this love-hate relationship throughout the ages, which I think, in the end, has been a very fruitful relationship.
Matthew: Yeah. In cosmology the amazing thing to me is that the experiments now cost billions or at least hundreds of millions of dollars. And that this is one area, maybe the only one, in which politicians are willing to spend a lot of money for something that’s so beautiful and theoretical and far off and scientifically fundamental as cosmology.
Max: Yeah. Cosmology is also a reminder again of the importance of experiment, because the big questions there — such as where did everything come from, how big is our universe, and so on — those questions have been pondered by philosophers and deep thinkers for as long as people have walked the earth. But for most of those eons all you could do was speculate with your friends over some beer about this, and then you could go home, because there was no further progress to be made, right?
It was only more recently when experiments gave us humans better eyes: where with telescopes, et cetera, we could start to see things that our ancestors couldn’t see, and with this experimental knowledge actually start to answer a lot of these things. When I was a grad student, we argued about whether our universe was 10 billion years old or 20 billion years old. Now we argue about whether it’s 13.7 or 13.8 billion years old. You know why? Experiment.
Matthew: And now is a more exciting time than any previous time, I think, because we’re beginning to talk about things like multi-universes and entanglement, things that are just astonishing and really almost foreign to the way that we’re able to think — that there’s other universes, or that there could be what’s called quantum mechanical entanglement: that things influence each other very far apart, so far apart that light could not travel between them in any reasonable time, but by a completely weird process, which Einstein called spooky action at a distance. Anyway, this is an incredibly exciting time about which I know nothing except from podcasts and programs like this one.
Max: Thank you for bringing this up, because I think the examples you gave right now actually are really, really linked to these breakthroughs in biology that you were telling us about, because I think we’ve been on this intellectual journey all along where we humans kept underestimating our ability to understand stuff. So for the longest time, we didn’t even really try our best because we assumed it was futile. People used to think that the difference between a living bug and a dead bug was that there was some sort of secret sauce, and the living bug has some sort life essence or something that couldn’t be studied with the tools of science. And then by the time people started to take seriously that maybe actually the difference between that living bug and the dead bug is that the mechanism is just broken in one of them, and you can study the mechanism — then you get to these kind of experimental questions that you were talking about. I think in the same way, people had previously shied away from asking questions about, not just about life, but about the origin of our universe for example, as being always hopelessly beyond where we were ever even able to do anything about, so people didn’t ask what experiments they could make. They just gave up without even trying.
And then gradually I think people were emboldened by breakthroughs in, for example, biology, to say, “Hey, what about — let’s look at some of these other things where people said we’re hopeless, too?” Maybe even our universe obeys some laws that we can actually set out to study. So hopefully we’ll continue being emboldened, and stop being lazy, and actually work hard on asking all questions, and not just give up because we think they’re hopeless.
Matthew: I think the key to making this process begin was to abandon supernatural explanations of natural phenomena. So long as you believe in supernatural explanations, you can’t get anywhere, but as soon as you give them up and look around for some other kind of explanation, then you can begin to make progress. The amazing thing is that we, with our minds that evolved under conditions of hunter-gathering and even earlier than that — that these minds of ours are capable of doing such things as imagining general relativity or all of the other things.
So is there any limit to it? Is there going to be a point beyond which we will have to say we can’t really think about that, it’s too complicated? Yes, that will happen. But we will by then have built computers capable of thinking beyond. So in a sense, I think once supernatural thinking was given up, the path was open to essentially an infinity of discovery, possibly with the aid of advanced artificial intelligence later on, but still guided by humans. Or at least by a few humans.
Max: I think you hit the nail on the head there. Saying, “All this is supernatural,” has been used as an excuse to be lazy over and over again, even if you go further back, you know, hundreds of years ago. Many people looked at the moon, and they didn’t ask themselves why the moon doesn’t fall down like a normal rock because they said, “Oh, there’s something supernatural about it, earth stuff obeys earth laws, heaven stuff obeys heaven laws, which are just different. Heaven stuff doesn’t fall down.”
And then Newton came along and said, “Wait a minute. What if we just forget about the supernatural, and for a moment, explore the hypothesis that actually stuff up there in the sky obeys the same laws of physics as the stuff on earth? Then there’s got to be a different explanation for why the moon doesn’t fall down.” And that’s exactly how he was led to his law of gravitation, which revolutionized things of course. I think again and again, there was again the rejection of supernatural explanations that led people to work harder on understanding what life really is, and now we see some people falling into the same intellectual trap again and saying, “Oh yeah, sure. Maybe life is mechanistic but intelligence is somehow magical, or consciousness is somehow magical, so we shouldn’t study it.”
Now, artificial intelligence progress is really, again, driven by people willing to let go of that and say, “Hey, maybe intelligence is not supernatural. Maybe it’s all about information processing, and maybe we can study what kind of information processing is intelligent and maybe even conscious as in having experiences.” There’s a lot learn at this meta level from what you’re saying there, Matthew, that if we resist excuses to not do the work by saying, “Oh, it’s supernatural,” or whatever, there’s often real progress we can make.
Ariel: I really hate to do this because I think this is such a great discussion, but in the interest of time, we should probably get back to the stories at Harvard, and then you two can discuss some of these issues — or others — a little more shortly in this interview. So yeah, let’s go back to Harvard.
Matthew: Okay, Harvard. So I came to Harvard. I thought I’d stay only five years. I thought it was kind of a duty for an American who’d grown up in the West to find out a little bit about what the East was like. But I never left. I’ve been here for 60 years. When I had been here for about three years, my friend Paul Doty, a chemist, no longer living, asked me if I’d like to go work at the United States Arms Control and Disarmament Agency in Washington DC. He was on the general advisory board of that government branch. It was embedded in the State Department building on 21st Street in Washington, but it was quite independent and could report directly to the White House. It was the first year of its existence, and it was trying to find out what it should be doing.
And one of the ways it tried to find out what it should be doing was to hire six academics to come just for the summer. One of them was me, one of them was Freeman Dyson, the physicist, and there were four others. When I got there, they said, “Okay, you’re going to work on theater nuclear weapons arms control,” something I knew less than zero about. But I tried, and I read things and so on, and very famous people came to brief me — like Llewellyn Thompson, our ambassador to Moscow, and Paul Nitze, the deputy secretary of defense.
I realized that I knew nothing about this, and although scientists often have the arrogance to think that they can say something useful about nearly anything if they think about it, here was something that so many people had already thought about. So I went to my boss and said, “Look, you’re wasting your time and your money. I don’t know anything about this. I’m not gonna produce anything useful. I’m a chemist and a biologist. Why don’t you have me look into the arms control of that stuff?” He said, “Yeah, you could do whatever you want. We had a guy who did that, and he got very depressed and he killed himself. You could have his desk.”
So I decided to look into chemical and biological weapons. In those days, the arms control agency was almost like a college. We all had to have very high security clearances, and that was because Congress was worried that maybe there would be some leakers amongst the people doing this suspicious work in arms control, and therefore, we had to be in possession of the highest level of security clearance. This had, in a way, the unexpected effect that you could talk to your neighbor about anything. Ordinarily, you might not have clearance for what your neighbor in a different office, a different room, or at a different desk was doing — but we had, all of us, such security clearances that we could all talk to each other about what we were doing. So it was like a college in that respect. It was a wonderful atmosphere.
Anyway, I decided I would just focus on biological weapons, because the two together would be too much for a summer. I went to the CIA, and a young man there showed me everything we knew about what other countries were doing with biological weapons, and the answer was we knew very little. Then I went to Fort Detrick to see what we were doing with biological weapons, and I was given a tour by a quite good immunologist who had been a faculty member at the Harvard Medical School, whose name was Leroy Fothergill. And we came to a big building, seven stories high. From a distance, you would think it had windows, but when you got up close, they were phony windows. And I asked Dr. Fothergill, “What do we do in there?” He said, “Well, we have a big fermentor in there and we make anthrax.” I said, “Well, why do we do that?” He said, “Well, biological weapons are a lot cheaper than nuclear weapons. It will save us money.”
I don’t think it took me very long, certainly by the time I got back to my office in the State Department Building, to realize that hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all. Because in the hands of other people, it would be like their having nuclear weapons. It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.
So that dawned on me. My office mate was Freeman Dyson, and I talked with him a little bit about it and he encouraged me greatly to pursue this. The more I thought about it, two things motivated me very strongly. Not just the illogic of it. The illogic of it motivated me only in the respect that it made me realize that any reasonable person could be convinced of this. In other words, it wouldn’t be a hard job to get this thing stopped, because anybody who’s thoughtful would see the argument against it. But there were two other aspects. One, it was my science: biology. It’s hard to explain, but the thought that my science would be perverted in that way. But there’s another aspect, and that is the difference between war and peace.
We’ve had wars and we’ve had peace. Germany fights Britain, Germany is aligned with Britain. Britain fights France, Britain is aligned with France. There’s war. There’s peace. There are things that go on during war that might advance knowledge a little bit, but certainly, it’s during times of peace that the arts, the humanities, and science, too, make great progress. What if you couldn’t tell the difference and all the time is both war and peace? By that I mean, war up until now has been very special. There are rules to it. Basically, it starts with hitting a guy so hard that he’s knocked out or killed. Then you pick up a stone and hit him with that. Then you make a spear and spear him with that. Then you make a bow and arrow and shoot him with that. Then later on, you make a gun and you shoot a bullet at him. Even a nuclear weapon: it’s all like hitting with an arm, and furthermore, when it stops, it’s stopped, and you know when it’s going on. It makes sounds. It makes blood. It makes a bang.
Now biological weapons, they could be responsible for a kind of war that’s totally surreptitious. You don’t even know what’s happening, or you know it’s happening but it’s always happening. They’re trying to degrade your crops. They’re trying to degrade your genetics. They’re trying to introduce nasty insects to you. In other words, it doesn’t have a beginning and an end. There’s no armistice. Now today, there’s another kind of weapon that has some of those attributes: cyber warfare. It might over time erase the distinction between war and peace. Now that really would be a threat to the advance of civilization, a permanent science fiction-like, locked-in, war-like situation, never ending. Biological weapons have that potentiality.
So for those two reasons — my science, and it could erase the distinction between war and peace, could even change what it means to be human. Maybe you could change what the other guy’s like: change his genes somehow. Change his brain by maybe some complex signaling, who knows? Anyway, I felt a strong philosophical desire to get this thing stopped. Fortunately, I was in Harvard University, and so was Jack Kennedy. And although by that time he had been assassinated, he had left behind lots of people in the key cabinet offices who were Kennedy appointees. In particular, people who came from Harvard. So I could knock on almost any door.
So I went to Lyndon Johnson’s national security adviser, who had been Jack Kennedy’s national security adviser, and who had been the dean at Harvard who hired me, McGeorge Bundy, and said all these things I’ve just said. And he said, “Don’t worry, Matt, I’ll keep it out of the war plans.” I’ve never seen a war plan, but I guess if he said that, it was true. But that didn’t mean it wouldn’t keep on being developed.
Now here I should make an aside. Does that mean that the Army or the Navy or the Air Force wanted these things? No. We develop weapons in a kind of commercial way within the military. In this case, the Army Materiel Command works out all kinds of things: better artillery pieces, communication devices, and biological weapons. It doesn’t belong to any service. Then, in the case of biological weapons, if the laboratories develop what they think is a good biological weapon, they still have to get one of the services — Air Force, Army, Navy, Marines — to say, “Okay, we’d like that. We’ll buy some of that.”
There was always a problem here. Nobody wanted these things. The Air Force didn’t want them because you couldn’t calculate how many planes you needed to kill a certain number of people. You couldn’t calculate the human dose response, and beyond that you couldn’t calculate the dose that would reach the humans. There were too many unknowns. The Army didn’t like it, not only because they, too, wanted predictability, but because their soldiers are there, maybe getting infected by the same bugs. Maybe there’s vaccines and all that, but it also seemed dishonorable. The Navy didn’t want it because the one thing that ships have to be is clean. So oddly enough, biological weapons were kind of a stepchild.
Nevertheless, there was a dedicated group of people who really liked the idea and pushed hard on it. These were the people who were developing the biological weapons, and they had their friends in Congress, so they kept getting it funded. So I made a kind of a plan, like a protocol for doing an experiment, to get us to stop all this. How do you do that? Well, first you ask yourself: who can stop it? There’s only one person who can stop it. That’s the President of the United States.
The next thing is: what kind of advice is he going to get, because he may want to do something, but if all the advice he gets is against it, it takes a strong personality to go against the advice you’re getting. Also, word might get out, if it turned out you made a mistake, that they told you all along it was a bad idea and you went ahead anyway. That makes you a super fool. So the answer there is: well, you go to talk to the Secretary of Defense, and the Secretary of State, and the head of the CIA, and all of the senior people, and their people who are just below them.
Then what about the people who are working on the biological weapons? You have to talk to them, but not so much privately, because they really are dedicated. There were some people who were caught in this and really didn’t want to be doing it, but there were other people who were really pushing it, and it wasn’t possible, really, to tell them to quit their jobs and get out of this. But what you could do is talk with them in public, and by knowing more than they knew about their own subject — which meant studying up a lot — show that they were wrong.
So I literally crammed, trying to understand everything there was to know about aerobiology, diffusion of clouds, pathogenicity, history of biological weapons, the whole bit, so that I could sound more knowledgeable. I know that’s a sort of slightly underhanded way to win an argument, but it’s a way of convincing the public that the guys who are doing this aren’t so wise. And then you have to get public support.
I had a pal here who told me I had to go down to Washington and meet a guy named Howard Simons, the managing editor of the Washington Post. He had been a science journalist at The Post, and that’s why some scientists up here at Harvard knew him. So, I went down there and told him, “I want to get newspaper articles all over the country about the problem of biological weapons.” He took out a big yellow pad and he wrote down about 30 names. He said, “These are the science journalists at the San Francisco Chronicle, Baltimore Sun, New York Times, et cetera, et cetera.” He put down the names of all the main science journalists. And he said to me, “These guys have to have something once a week to give their editor for the science columns, or the science pages. They’re always on the lookout for something, and biological weapons is a nice subject — they’d like to write about that, because it grabs people’s attention.”
So I arranged to either meet, or at least talk to all of these guys. And we got all kinds of articles in the press, and mainly reflecting the views that I had that this was unwise for the United States to pioneer this stuff. We should be in the position to go after anybody else who was doing it even in peacetime and get them to stop, which we couldn’t very well do if we were doing it ourselves. In other words, that meant a treaty. You have to have a treaty, which might be violated, but if it’s violated and you know, at least you can go after the violators, and the treaty will likely stop a lot of countries from doing it in the first place.
So what are the treaties? There’s an old treaty, a 1925 Geneva Protocol. The United States was not a party to it, but it does prohibit the first use of bacteriological or other biological weapons. So the problem was to convince the United States to get on board that treaty.
The very first paper I wrote for the President was about the Geneva Protocol of 1925. I never met President Nixon, but I did know Henry Kissinger: He’d been my neighbor at Harvard, in the building next door to mine. There was a good lunch room on the third floor. We both ate there. He had started an arms control seminar, which met every month. I went to all the meetings. We traveled a little bit in Europe together. So I knew him, and I wrote papers for Henry knowing that those would get to Nixon. The first paper that I wrote, as I said, was “The United States and the Geneva Protocol.” It made all these arguments that I’m telling you now about why the United States should not be in this business. Now, the Protocol also prohibits the first use of chemical weapons.
Now, I should say something about writing papers for Presidents. You don’t want to write a paper that says, “Here’s what you should do.” You have to put yourself in their position. There are all kinds of options on what they could do. So, you have to write a paper from the point of view of a reader who’s got to choose between a lot of options; he doesn’t start out with a single choice. That’s the kind of paper you need to write. You’ve got to give every option a fair trial. You’ve got to do your best, both to defend every option and to argue against every option. And you’ve got to do it in no more than a few pages. That’s no easy job, but you can do it.
So eventually, as you know, the United States renounced biological weapons in November of 1969. There was an off-the-record press briefing that Henry Kissinger gave to the journalists about this. And one of them, I think it was the New York Times guy, said, “What about toxin weapons?”
Now, toxins are poisonous things made by living things, like botulinum toxin made by bacteria, or snake venom, and those could be used as weapons in principle. You can read in this briefing that Henry Kissinger says, “What are toxins?” What this meant, in other words, is that a whole new review, a whole new decision process, had to be cranked up to deal with the question, “Well, do we renounce toxin weapons?” And there were two points of view. One was, “They are made by living things, and since we’re renouncing biological warfare, we should renounce toxins.”
The other point of view is, “Yeah, they’re made by living things, but they’re just chemicals, and so they can also be made by chemists in laboratories. So, maybe we should renounce them when they’re made by living things like bacteria or snakes, but reserve the right to make them and use them in warfare if we can synthesize them in chemical laboratories.” So I wrote a paper arguing that we should renounce them completely. Partly because it would be very confusing to argue that the basis for renouncing or not renouncing is who made them, not what they are. But also, I knew that my paper was read by Richard Nixon on a certain day on Key Biscayne in Florida, which was one of the places he’d go for rest and vacation.
Nixon was down there, and I had written a paper called “What Policy for Toxins.” The night that the President and Henry Kissinger were deciding this issue, I was at a friend’s house with my wife. They couldn’t find their copy of my paper, so Henry called to see if I could read it to them, but he couldn’t reach me because I was at the dinner party. Then Henry called Paul Doty, my friend, because he had a copy of the paper. But Paul looked for his copy and couldn’t find it either. Then late that night Kissinger called Doty again and said, “We found the paper, and the President has made up his mind. He’s going to renounce toxins no matter how they’re made, and it was because of Matt’s paper.”
I had tried to write a paper that steered clear of political arguments — just scientific ones and military ones. However, there had been an editorial in the Washington Post by one of their editorial writers, Steve Rosenfeld, in which he wrote the line, “How can the President renounce typhoid only to embrace botulism?”
I thought it was so gripping, I incorporated it under the topic of the authority and credibility of the President of the United States. And what Henry told Paul on the telephone was: that’s what made up the President’s mind. And of course, it would. The President cares about his authority and credibility. He doesn’t care about little things like toxins, but his authority and credibility… And so right there and then, he scratched out the advice that he’d gotten in a position paper, which was to take the option, “Use them but only if made by chemists,” and instead chose the option to renounce them completely. And that’s how that decision got made.
Ariel: That all ended up in the Biological Weapons Convention, though, correct?
Matthew: Well, the idea for that came from the British. They had produced a draft paper to take to the arms control talks with the Russians and other countries in Geneva, suggesting a treaty that would not merely prohibit the use of biological weapons in war, the way the Geneva Protocol did, but would prohibit even their production and possession. In his renunciation for the United States, Richard Nixon did several things. He got the United States out of the biological weapons business and decreed that Fort Detrick and other installations that had been doing that work would henceforward be doing only peaceful things: Detrick was partly converted to a cancer research institute, and all the biological weapons that had been stockpiled were to be destroyed, and they were.
The other thing he did was renounce toxins. Another thing he decided to do was to resubmit the Geneva Protocol to the United States Senate for its advice and approval. And the last thing was to support the British initiative, and that became the Biological Weapons Convention. But you could only get it if the Russians agreed. Eventually, after a lot of negotiation, we got the Biological Weapons Convention, which is still in force. A little later we even got the Chemical Weapons Convention, but not right away, because in my view, and in the view of a lot of people, we did need chemical weapons until we could be pretty sure that the Soviet Union was going to get rid of its chemical weapons, too.
If there are chemical weapons on the battlefield, soldiers have to put on gas masks and protective clothing, and this really slows down the tempo of combat action. If you can simply put the other side into that restrictive clothing, you have a major military accomplishment. Chemical weapons in the hands of only one side would give that side the option of slowing down the other side, reducing its mobility on the ground. So we waited until we got a treaty that had inspection provisions, which the chemical treaty does and the biological treaty does not. Well, the biological treaty has a kind of challenge inspection, but no one’s ever done that, and it’s very hard to make it work. The chemical treaty’s inspection provisions were obligatory, and they have been extensive: the Russians visiting our chemical production facilities, our guys visiting theirs, and all kinds of verification. So that’s how we got the Chemical Weapons Convention. That was quite a bit later.
Max: So, I’m curious, was there a Matthew Meselson clone on the British side, thanks to whom the British started pushing this?
Matthew: Yes. There were, of course, numerous clones. And there were numerous clones on this side of the Atlantic, too. None of these things could ever be done by just one person. But my pal Julian Robinson, who was at the University of Sussex in Brighton, he was a real scholar of chemical and biological weapons, knows everything about them, and their whole history, and has written all of the very best papers on this subject. He’s just an unbelievably accurate and knowledgeable historian and scholar. People would go to Julian for advice. He was a Mycroft. He’s still in Sussex.
Ariel: You helped start the Harvard Sussex Program on chemical and biological weapons. Is he the person you helped start that with, or was that separate?
Matthew: We decided to do that together.
Matthew: It did several things, but one of the main things it did was to publish a quarterly journal, which had a dispatch from Geneva on progress towards getting the Chemical Weapons Convention, because when we started the bulletin, the Chemical Convention had not yet been achieved. There were all kinds of news items in the bulletin; we had guest articles. And it finally ended, I think, only a few years ago. But I think it had a big impact, not only because of what was in it, but also because it united people of all countries interested in this subject. They all read the bulletin, they all got a chance to write in it as well, and they occasionally met each other, so it had the effect of bringing together a community of people interested in safely getting rid of chemical weapons and biological weapons.
Max: This Biological Weapons Convention was a great inspiration for subsequent treaties, first the ban on chemical weapons, and then bans on various other kinds of weapons, and today, we have a very vibrant debate about whether there should also be a ban on lethal autonomous weapons and inhumane uses of A.I. So, I’m curious to what extent you got lots of push-back in those days from people who said, “Oh, this is a stupid idea,” or, “This is never going to work,” and what lessons could be learned from that.
Matthew: I think that with biological weapons, and also, to a lesser extent, with chemical weapons, the first point was that we didn’t need them. We had never really accepted chemical weapons: in World War I we were involved in their use, but that had been started by others, and it was never something that the military liked. They didn’t want to fight a war by encumberment. Biological weapons, for sure not, once we realized that making weapons of mass destruction cheap enough to get into the hands of people who couldn’t afford nuclear weapons was idiotic. And even chemical weapons are relatively cheap and have the possibility of covering fairly large areas at a low price, and also of getting into the hands of terrorists. Now, terrorism wasn’t much on anybody’s radar until more recently, but once it became a serious issue, that was another argument against both biological and chemical weapons. So those two weapons really didn’t have a lot of boosters.
Max: You make it sound so easy though. Did it never happen that someone came and told you that you were all wrong and that this plan was never going to work?
Matthew: Yeah, but that was restricted to the people who were doing it, and a few really eccentric intellectuals. As evidence of this: in the military, in the office which dealt with chemical and biological weapons, the highest rank you could find would be a colonel. No general, just a colonel. You don’t get to be a general in the chemical corps. There were a few exceptions, basically old-timers, as a kind of leftover from World War I. If you’re a part of the military that never gets to have a general or even a full colonel, you ain’t got much influence, right?
But if you talk about the artillery or the infantry, my goodness, I mean there are lots of generals — including four star generals, even five star generals — who come out of the artillery and infantry and so on, and then Air Force generals, and fleet admirals in the Navy. So that’s one way you can quickly tell whether something is very important or not.
Anyway, we do have these treaties, but it might be very much more difficult to get treaties on war between robots. I don’t know enough about it to really have an opinion. I haven’t thought about it.
Ariel: I want to follow up with a question I think is similar, because one of the arguments that we hear a lot with lethal autonomous weapons, is this fear that if we ban lethal autonomous weapons, it will negatively impact science and research in artificial intelligence. But you were talking about how some of the biological weapons programs were repurposed to help deal with cancer. And you’re a biologist and chemist, but it doesn’t sound like you personally felt negatively affected by these bans in terms of your research. Is that correct?
Matthew: Well, the only technically really important thing to come out of war work — and it would have happened anyway — is radar, and that was indeed accelerated by the military requirement to detect aircraft at a distance. But usually it’s the reverse. People who had been doing research in fundamental science naturally volunteered or were conscripted to do war work. Francis Crick was working on magnetic torpedoes, not on DNA or hemoglobin. So, the argument that war stimulates basic science is completely backwards.
Newton was director of the Mint. Nothing about the British military as it was at the time helped Newton realize that if you shoot a projectile fast enough, it will stay in orbit; he figured that out by himself. I just don’t believe the argument that war makes science advance. It’s not true. If anything, it slows it down.
Max: I think it’s fascinating to compare the arguments that were made for and against a biological weapons ban back then with the arguments that are made for and against a lethal autonomous weapons ban today, because another common argument I hear for why people want lethal autonomous weapons today is because, “Oh, they’re going to be great. They’re going to be so cheap.” That’s like exactly what you were arguing is a very good argument against, rather than for, a weapons class.
Matthew: There are some similarities and some differences. Another similarity is that even one autonomous weapon in the hands of a terrorist could do things that are very undesirable — even one. On the other hand, we’re already doing something like it with drones. There’s a kind of continuous path that might lead to this, and I know that the military and DARPA are actually very interested in autonomous weapons, so I’m not so sure that you could stop it, because the development is continuous; it’s not like a real break.
Biological weapons are really different. Chemical weapons are really different. Whereas autonomous weapons are still working on the ancient, primitive analogy of hitting a man with your fist, or shooting a bullet, so long as those autonomous weapons are still using guns, bullets, things like that, and not something foreign to our biology, like poison. With the striking of a blow, you can draw a continuous line all the way through stones, and bows and arrows, and bullets, to drones, and maybe autonomous weapons. So that discontinuity is what makes biological and chemical weapons different.
Max: That’s an interesting point: deciding where exactly one draws the line is more challenging in this case. Another very interesting analogy, I think, between biological weapons and lethal autonomous weapons is the business of verification. You mentioned earlier that there was a strong verification protocol for the Chemical Weapons Convention, and there have been verification protocols for nuclear arms reduction treaties also. Some people say, “Oh, it’s a stupid idea to ban lethal autonomous weapons because you can’t think of a good verification system.” But couldn’t people have said that also as a critique of the Biological Weapons Convention?
Matthew: That’s a very interesting point, because most people who think that verification can’t work have never been told the basic underlying idea of verification. It’s not that you could find everything. Nobody believes that you could find every missile that might exist in Russia. Nobody ever would believe that. That’s not the point. It’s more subtle. The point is that you must have an ongoing attempt to find things. That’s intelligence. And there must be a heavy penalty if you find even one.
So it’s a step back from finding everything, to saying if you find even one then that’s a violation, and then you can take extreme measures. So a country takes a huge risk that another country’s intelligence organization, or maybe someone on your side who’s willing to squeal, isn’t going to reveal the possession of even one prohibited object. That’s the point. You may have some secret biological production facility, but if we find even one of them, then you are in violation. It isn’t that we have to find every single blasted one of them.
That was especially an argument that came from the nuclear treaties. It was the nuclear people who thought that up. People like Douglas McEachin at the CIA, who realized that there’s a more sophisticated argument. You just have to have a pretty impressive ability to find one thing out of many, if there’s anything out there. This is not perfect, but it’s a lot different from the argument that you have to know where everything is at all times.
Max: So, if I can paraphrase, is it fair to say that you simply want to give the parties to the treaty a very strong incentive not to cheat, because even if they get caught off base one single time, they’re in violation, and moreover, those who don’t have the weapons at that time will also feel that there’s a very, very strong stigma? Today, for example, I find it just fascinating how biology is such a strong brand. If you go ask random students here at MIT what they associate with biology, they will say, “Oh, new cures, new medicines.” They’re not going to say bioweapons. If you ask people when was the last time you read about a bioterrorism attack in the newspaper, they can’t even remember anything typically. Whereas, if you ask them about the new biology breakthroughs for health, they can think of plenty.
So, biology has clearly very much become a science that’s harnessed to make life better for people rather than worse. So there’s a very strong stigma. I think if I or anyone else here at MIT tried to secretly start making bioweapons, we’d have a very hard time even persuading any biology grad student to work with us because of the stigma. If one could create a similar stigma against lethal autonomous weapons, the stigma itself would be quite powerful, even absent the ability to do perfect verification. Does that make sense?
Matthew: Yes, it does, perfect sense.
Ariel: Do you think that these stigmas have any effect on the public’s interest or politicians’ interest in science?
Matthew: I think people still have a great fascination with science. Take the exploration of space, for example: lots of people — not just kids, but especially kids — are fascinated by it. Elon Musk says that pretty soon, in 2022, he’s going to have some people walking around on Mars. He’s just tested that BFR rocket of his that’s going to carry people to Mars. I don’t know if he’ll actually get it done, but people are getting fascinated by the exploration of space, getting fascinated by lots of medical things, getting desperate about the need for a cure for cancer. I myself think we need to spend a lot more money on preventing — not curing, but preventing — cancer, and I think we know how to do it.
I think the public still has a big fascination, respect, and excitement from science. The politicians, it’s because, see, they have other interests. It’s not that they’re not interested or don’t like science. It’s because they have big money interests, for example. Coal and oil, these are gigantic. Harvard University has heavily invested in companies that deal with fossil fuels. Our whole world runs on fossil fuels mainly. You can’t fool around with that stuff. So it becomes a problem of which is going to win out, your scientific arguments, which are almost certain to be right, but not absolutely like one and one makes two — but almost — or the whole economy and big financial interests. It’s not easy. It will happen, we’ll convince people, but maybe not in time. That’s the sad part. Once it gets bad enough, it’s going to be bad. You can’t just turn around on a dime and take care of disastrous climate change.
Max: Yeah, this is very much the spirit, of course, of the Future of Life Institute, which runs Ariel’s podcast. Technology, what it really does, it empowers us humans to do more, either more good things or more bad things. And technology in and of itself isn’t evil, nor is it morally good; it’s simply a tool. And the more powerful it becomes, the more crucial it is that we also develop the wisdom to steer the technology for good uses. And I think what you’ve done with your biology colleagues is such an inspiring role model for all of the other sciences, really.
We physicists still feel pretty guilty about giving the world nuclear weapons, but we’ve also given the world a lot of good stuff, from lasers, to smartphones and computers. Chemists gave the world a lot of great materials, but they also gave us, ultimately, the internal combustion engine and climate change. Biology, I think more than any other field, has clearly ended up very solidly on the good side. Everybody loves biology for what it does, even though it could have gone very differently, right? We could have had a catastrophic arms race, a race to the bottom, with one superpower outdoing the other in bioweapons, and eventually these cheap weapons being everywhere, and on the black market, and bioterrorism every day. That future didn’t happen; that’s why we all love biology. And I am very honored to get to be on this call here with you, so I could personally thank you for your role in making it this way. We should not take it for granted that it’ll be this way with all sciences, the way it’s become for biology. So, thank you.
Matthew: Yeah. That’s all right.
I’d like to end with one thought. We’re learning how to change the human genome. These techniques won’t really get going for a while, and there are some problems that very few people are thinking about. Not the so-called off-target effects, that’s a well-known problem — but there’s another problem that I won’t go into, but it’s called epistasis. Nevertheless, 10 years from now, 100 years from now, 500 years from now, sooner or later we’ll be changing the human genome on a massive scale, making people better in various ways, so-called enhancements.
Now, a question arises. Do we know enough about the genetic basis of what makes us human to be sure that we can keep the good things about being human? What are those? Well, compassion is one. I’d say curiosity is another. Another is the feeling of needing to be needed. That sounds kind of complicated, I guess, but if you don’t feel needed by anybody — there’s some people who can go through life and they don’t need to feel needed. But doctors, nurses, parents, people who really love each other: the feeling of being needed by another human being, I think, is very pleasurable to many people, maybe to most people, and it’s one of the things that’s of essence of what it means to be human.
Now, where does this all take us? It means that if we’re going to start changing the human genome in any big time way, we need to know, first of all, what we most value in being human, and that’s a subject for the humanities, for everybody to talk about, think about. And then it’s a subject for the brain scientists to figure out what’s the basis of it. It’s got to be in the brain. But what is it in the brain? And we’re miles and miles and miles away in brain science from being able to figure out what is it in the brain — or maybe we’re not, I don’t know any brain science, I shouldn’t be shooting off my mouth — but we’ve got to understand those things. What is it in our brains that makes us feel good when we are of use to someone else?
We don’t want to fool around with whatever those genes are — do not monkey with those genes unless you’re absolutely sure that you’re making them maybe better — but anyway, don’t fool around. And figure out in the humanities, don’t stop teaching humanities. Learn from Sophocles, and Euripides, and Aeschylus: What are the big problems about human existence? Don’t make it possible for a kid to go through Harvard — as is possible today — without learning a single thing from Ancient Greece. Nothing. You don’t even have to use the word Greece. You don’t have to use the word Homer or any of that. Nothing, zero. Isn’t that amazing?
Before President Lincoln, everybody, to get to enter Harvard, had to already know Ancient Greek and Latin. Even though these guys were mainly boys of course, and they were going to become clergymen. They also, by the way — there were no electives — everyone had to take fluxions, which is differential calculus. Everyone had to take integral calculus. Everyone had to take astronomy, chemistry, physics, as well as moral philosophy, et cetera. Well, there’s nothing like that anymore. We don’t all speak the same language because we’ve all had such different kinds of education, and the humanities also get short shrift. I think that’s very short-sighted.
MIT is pretty good in humanities, considering it’s a technical school. Harvard used to be tops. Harvard is at risk of maybe losing it. Anyway, end of speech.
Max: Yeah, I want to just agree with what you said, and also rephrase it the way I think about it. What I hear you saying is that it’s not enough to just make our technology more powerful. We also need the humanities, and our humanity, for the wisdom of how we’re going to manage our technology and what we’re trying to use it for, because it does no good to have a really powerful tool if you don’t have the wisdom to use it for the right things.
Matthew: If we’re going to change, we might even split into several species. Almost all of the other species have very close other species: neighbors. Especially if you can get them separated — there’s a colony on Mars and they don’t travel back and forth much — species will diverge. It takes a long, long, long, long time, but the idea there, like the Bible says, that we are fixed, nothing will change, that’s of course wrong. Human evolution is going on as we speak.
Ariel: We’ll end part one of our two-part podcast with Matthew Meselson here. Please join us for the next episode which serves as a reminder that weapons bans don’t just magically work. But rather, there are often science mysteries that need to be solved in order to verify whether a group has used a weapon illegally. In the next episode, Matthew will talk about three such scientific mysteries he helped solve, including the anthrax incident in Russia, the yellow rain affair in Southeast Asia, and the research he did that led immediately to the prohibition of Agent Orange. So please join us for part two of this podcast, which is also available now.
As always, if you’ve been enjoying this podcast, please take a moment to like it, share it, and maybe even leave a positive review. It’s a small action on your part, but it’s tremendously helpful for us.
On February 1, a little more than 30 years after it went into effect, the United States announced that it is suspending the Intermediate-Range Nuclear Forces (INF) Treaty. Less than 24 hours later, Russia announced that it was also suspending the treaty.
It stands (or stood) as one of the last major nuclear arms control treaties between the U.S. and Russia, and its collapse signals the most serious nuclear arms crisis since the 1980s. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, said to The Guardian, “If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”
The INF treaty, which went into effect in 1988, was the first nuclear agreement to outlaw an entire class of weapons. It banned all ground-launched ballistic and cruise missiles — nuclear, conventional, and “exotic”— with a range of 500 km to 5500 km (310 to 3400 miles), leading to the immediate elimination of 2,692 short- and medium-range weapons. But more than that, the treaty served as a turning point that helped thaw the icy stalemate between the U.S. and Russia. Ultimately, the trust that it fostered established a framework for future treaties and, in this way, played a critical part in ending the Cold War.
Now, all of that may be undone.
The Blame Game Part 1: Russia vs. U.S.
In defense of the suspension, President Donald Trump said that the Russian government has deployed new missiles that violate the terms of the INF treaty — missiles that could deliver nuclear warheads to European targets, including U.S. military bases. President Trump also said that, despite repeated warnings, President Vladimir Putin has refused to destroy these warheads. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to,” he said.
In a statement announcing the suspension of the treaty, Secretary of State Mike Pompeo said that countries must be held accountable when they violate a treaty. “Russia has jeopardized the United States’ security interests,” he said, “and we can no longer be restricted by the treaty while Russia shamelessly violates it.” Pompeo continued by noting that Russia’s posturing is a clear signal that the nation is returning to its old Cold War mentality, and that the U.S. must make similar preparations in light of these developments. “As we remain hopeful of a fundamental shift in Russia’s posture, the United States will continue to do what is best for our people and those of our allies,” he concluded.
The controversy about whether Russia is in violation hinges on whether the 9M729 missile can fly more than 500 km. The U.S. claims to have provided evidence of this to Russia, but has not made this evidence public, and further claims that violations have continued since at least 2014. Although none of the U.S.-based policy experts interviewed for this article dispute that Russia is in violation, many caution that this suspension will create a far more unstable environment and that the U.S. shares much of the blame for not doing more to preserve the treaty.
In an emailed statement to the Future of Life Institute, Martin Hellman, an Adjunct Senior Fellow for Nuclear Risk Analysis at the Federation of American Scientists and Professor Emeritus of Electrical Engineering at Stanford University, was clear in his censure of the Trump administration’s decision and reasoning, noting that it follows a well-established pattern of duplicity and double-dealing:
The INF Treaty was a crucial step in ending the arms race. Our withdrawing from it in such a precipitous manner is a grave mistake. In a sense, treaties are the beginning of negotiations, not the end. When differences in perspective arise, including on what constitutes a violation, the first step is to meet and negotiate. Only if that process fails, should withdrawal be contemplated. In the same way, any faults in a treaty should first be approached via corrective negotiations.
Withdrawing in this precipitous manner from the INF treaty will add to concerns that our adversaries already have about our trustworthiness on future agreements, such as North Korea’s potential nuclear disarmament. Earlier actions of ours which laid that foundation of mistrust include George W. Bush killing the 1994 Agreed Framework with North Korea “for domestic political reasons,” Obama attacking Libya after Bush had promised that giving up its WMD programs “can regain [Libya] a secure and respected place among the nations,” and Trump tearing up the Iran agreement even though Iran was in compliance and had taken steps that considerably set back its nuclear program.
In an article published by CNN, Eliot Engel, chairman of the House Committee on Foreign Affairs, and Adam Smith, chairman of the House Committee on Armed Services, echo these sentiments and add that the U.S. government greatly contributed to the erosion of the treaty, clarifying that the suspension could have been avoided if President Trump had collaborated with NATO allies to pressure Russia into ensuring compliance. “[U.S.] allies told our offices directly that the Trump administration blocked NATO discussion regarding the INF treaty and provided only the sparest information throughout the process….This is the latest step in the Trump administration’s pattern of abandoning the diplomatic tools that have prevented nuclear war for 70 years. It also follows the administration’s unilateral decision to withdraw from the Paris climate agreement,” they said.
Russia has also complained about the alleged lack of U.S. diplomacy. In January 2019, Russian diplomats proposed a path to resolution, stating that they would display their missile system and demonstrate that it didn’t violate the INF treaty if the U.S. did the same with their MK-41 launchers in Romania. The Russians felt that this was a fair compromise, as they have long argued that the Aegis missile defense system, which the U.S. deployed in Romania and Poland, violates the INF treaty. The U.S. rejected Russia’s offer, stating that a Russian controlled inspection would not permit the kind of unfettered access that U.S. representatives would need to verify their conclusions. And ultimately, they insisted that the only path forward was for Russia to destroy the missiles, launchers, and supporting infrastructure.
In response, Russian foreign minister Sergei Lavrov accused the U.S. of being obstinate. “U.S. representatives arrived with a prepared position that was based on an ultimatum and centered on a demand for us to destroy this rocket, its launchers and all related equipment under US supervision,” he said.
The Blame Game Part 2: China
Other experts, such as Mark Fitzpatrick, Director of the non-proliferation program at the International Institute for Strategic Studies, assert that the “real reason” for the U.S. pullout lies elsewhere — in China.
This belief is bolstered by previous statements made by President Trump. Most notably, during a rally in the fall of 2018, the President told reporters that it is unfair that China faces no limits when it comes to developing and deploying intermediate-range nuclear missiles. “Unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons,’ but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable,” he said.
According to a 2019 report published for Congress, China has some 2,000 ballistic and cruise missiles in its inventory, and 95% of these would violate the INF treaty if Beijing were a signatory. It should be noted that both Russia and the U.S. are estimated to have over 6,000 nuclear warheads, while China has approximately 280. Nevertheless, the report states, “The sheer number of Chinese missiles and the speed with which they could be fired constitutes a critical Chinese military advantage that would prove difficult for a regional ally or partner to manage absent intervention by the United States,” adding, “The Chinese government has also officially stated its opposition to Beijing joining the INF Treaty.” Consequently, President Trump stated that the U.S. has no choice but to suspend the treaty.
Along these lines, John Bolton, who became the National Security Adviser in April 2018, has long argued that the kinds of missiles banned by the INF treaty would be an invaluable resource when it comes to defending Western nations against what he argues is an increasing military threat from China.
Pranay Vaddi, a fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace, feels differently. Although he does not deny that China poses a serious military challenge to the U.S., Vaddi asserts that withdrawing from the INF treaty is not a viable solution, and he says that proponents of the suspension “ignore the very real political challenges associated with deploying U.S. GBIRs in the Asia Pacific region. They also ignore specific military challenges, including the potential for a missile race and long-term regional and strategic instability.” He concludes, “Before withdrawing from the INF Treaty, the United States should consult with its Asian allies on the threat posed by China, the defenses required, and the consequences of introducing U.S. offensive missiles into the region, including potentially on allied territory.”
Six Months and Counting
Regardless of how much blame each respective nation shares, the present course has been set, and if things don’t change soon, we may find ourselves in a very different world a few months from now.
According to the terms of the treaty, if one of the parties breaches the agreement then the other party has the option to terminate or suspend it. It was on this basis that, back in October of 2018, President Trump stated he would be terminating the INF treaty altogether. Today’s suspension announcement is an update to these plans.
Notably, a suspension doesn’t follow the same course as a withdrawal: the treaty continues to exist for a set period. As a result, starting February 1, the U.S. entered a six-month notice period. If the two nations don’t reach an agreement to restore the treaty within this window, it will terminate on August 2. At that point, both the U.S. and Russia will be free to develop and deploy the previously banned nuclear missiles with no oversight or transparency.
The situation is dire, and experts assert that we must immediately reopen negotiations. On Friday, before the official U.S. announcement, German Chancellor Angela Merkel said that if the United States announced it would suspend compliance with the treaty, Germany would use the six-month formal withdrawal period to hold further discussions. “If it does come to a cancellation today, we will do everything possible to use the six-month window to hold further talks,” she said.
Following the US announcement, German Foreign Minister Heiko Maas tweeted, “there will be less security without the treaty.” Likewise, Laura Rockwood, executive director at the Vienna Center for Disarmament and Non-Proliferation, noted that the suspension is a troubling move that will increase — not decrease — tension and conflict. “It would be best to keep the INF in place. You don’t throw the baby out with the bathwater. It’s been an extraordinarily successful arms control treaty,” she said.
Carl Bildt, a co-chair of the European Council on Foreign Relations, agreed with these sentiments, noting in a tweet that the INF treaty’s demise puts many lives in peril. “Russia can now also deploy its Kaliber cruise missiles with ranges around 1.500 km from ground launchers. This would quickly cover all of Europe with an additional threat,” he said.
And it looks like many of these fears are already being realized. In a televised meeting over the weekend, President Putin stated that Russia will actively begin building weapons that were previously banned under the treaty. President Putin also made it clear that none of his departments would initiate talks with the U.S. on any matters related to nuclear arms control. “I suggest that we wait until our partners are ready to engage in equal and meaningful dialogue,” he said.
The photo for this article is from wiki commons: by Mil.ru, CC BY 4.0, https://commons.wikimedia.org/
It may seem like we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine!
As we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.
But first, a very quick look back…
We had a great year, and we’re pleased with all we were able to accomplish. Some of our bigger projects and successes include: the Lethal Autonomous Weapons Pledge; a new round of AI safety grants focusing on the beneficial development of AGI; the California State Legislature’s resolution in support of the Asilomar AI Principles; and our second Future of Life Award, which was presented posthumously to Stanislav Petrov and his family.
As we now look ahead and strive to work toward a better future, we, as a society, must first determine what that collective future should be. At FLI, we’re looking forward to working with global partners and thought leaders as we consider what “better futures” might look like and how we can work together to build them.
As FLI President Max Tegmark says, “There’s been so much focus on just making our tech powerful right now, because that makes money, and it’s cool, that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: help bring back focus [to] the steering of our technology and the destination.”
A recent Gizmodo article on why we need more utopian fiction also summed up the argument nicely: “Now, as we face a future filled with corruption, yet more conflict, and the looming doom of global warming, imagining our happy ending may be the first step to achieving it.”
Fortunately, there are already quite a few people who have begun considering how a conflicted world of 7.7 billion can unite to create a future that works for all of us. And for the FLI podcast in December, we spoke with six of them to talk about how we can start moving toward that better future.
The existential hope podcast includes interviews with FLI co-founders Max Tegmark and Anthony Aguirre, as well as existentialhope.com founder Allison Duettmann, Josh Clark who hosts The End of the World with Josh Clark, futurist and researcher Anders Sandberg, and tech enthusiast and entrepreneur Gaia Dempsey. You can listen to the full podcast here, but we also wanted to call attention to some of their comments that most spoke to the idea of steering toward a better future:
Max Tegmark on the far future and the near future:
When I look really far into the future, I also look really far into space and I see this vast cosmos, which is 13.8 billion years old. And most of it is, despite what the UFO enthusiasts say, actually looking pretty dead and [like] wasted opportunities. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.
For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth.
Gaia Dempsey on how we can use a technique called world building to help envision a better future for everyone and get more voices involved in the discussion:
Worldbuilding is a really fascinating set of techniques. It’s a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series. And in more contemporary times, some spectacularly advanced worldbuilding is occurring now in the gaming industry. So [there are] these huge connected systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, world builders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.
And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It’s a collaborative design practice.
Ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future.
One of the things where I think worldbuilding is really good is that the practice itself does not impose a single monolithic narrative. It actually encourages a multiplicity of narratives and perspectives that can coexist.
Anthony Aguirre on how we can use technology to find solutions:
I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will into fruition. So, sort of by definition, when we have goals that we want to do — problems that we want to solve — technology should in principle be part of the solution.
So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them.
Allison Duettmann on why she created the website existentialhope.com:
I do think that it’s up to everyone, really, to try to engage with the fact that we may not be doomed, and what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there [are] a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on [how] to influence that. So I think we should try to map the space of both peril and promise which lie before us, [and] we should really try to aim for that. This knowledge can empower each and every one of us to navigate toward the grand future.
Josh Clark on the impact of learning about existential risks for his podcast series, The End of the World with Josh Clark:
As I was creating the series, I underwent this transition [regarding] how I saw existential risks, and then ultimately how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it’s like information, information, information. And then now, that you have bought into this, here’s how we do something about it.
I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about [them].
Anders Sandberg on a grand version of existential hope:
The thing is, my hope for the future is we get this enormous open ended future. It’s going to contain strange and frightening things but I also believe that most of it is going to be fantastic. It’s going to be roaring on the world far, far, far into the long term future of the universe probably changing a lot of the aspects of the universe.
When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it much smaller than it could be. Existential hope, to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen, and I think we are going to be surprised by many things in the future, and some of them are going to be wonderful surprises. That is the real existential hope.
Right now, this sounds totally utopian, would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would [have sounded] totally absurd. The future is big, we have a lot of centuries ahead of us, hopefully.
From everyone at FLI, we wish you a happy holiday season and a wonderful New Year full of hope!
For the first two weeks in December, the parties to the United Nations Framework Convention on Climate Change (UNFCCC) gathered in Katowice, Poland for the 24th annual Conference of the Parties (COP24).
The UNFCCC defines its ultimate goal as “preventing ‘dangerous’ human interference with the climate system,” and its objective for COP24 was to design an “implementation package” for the 2015 Paris Climate Agreement. This package, known as the Katowice Rules, is intended to bolster the Paris Agreement by intensifying the mitigation goals of each of its member countries and, in so doing, ensure the full implementation of the Paris Agreement.
The significance of this package is clearly articulated in the COP24 presidency’s vision — “there is no Paris Agreement without Katowice.”
And the tone of the event was, fittingly, one of urgency. Negotiations took place in the wake of the latest IPCC report, which made clear in its findings that the original terms of the Paris Agreement are insufficient. If we are to keep to the preferred warming target of 1.5°C this century, the report notes that we must strengthen the global response to climate change.
The need for increased action was reiterated throughout the event. During the first week of talks, the Global Carbon Project released new data showing a 2.7% increase in carbon emissions in 2018 and projecting further emissions growth in 2019. And the second week began with a statement from global investors who, “strongly urge all governments to implement the actions that are needed to achieve the goals of the [Paris] Agreement, with the utmost urgency.” The investors warned that, without drastic changes, the economic fallout from climate change would likely be several times worse than the 2008 financial crisis.
Against this grim backdrop, negotiations crawled along.
Progress was impeded early on by a disagreement over the wording used in the Conference’s acknowledgment of the IPCC report. Four nations — the U.S., Russia, Saudi Arabia, and Kuwait — took issue with a draft that said the parties “welcome” the report, preferring to say they “took note” of it. A statement from the U.S. State Department explained: “The United States was willing to note the report and express appreciation to the scientists who developed it, but not to welcome it, as that would denote endorsement of the report.”
There was also tension between the U.S. and China surrounding the treatment of developed vs. developing countries. The U.S. wants one universal set of rules to govern emissions reporting, while China has advocated for looser standards for itself and other developing nations.
Initially scheduled to wrap on Friday, talks continued into the weekend, as a resolution was delayed in the final hours by Brazil’s opposition to a proposal that would change rules surrounding carbon trading markets. Unable to strike a compromise, negotiators ultimately tabled the proposal until next year, and a deal was finally struck on Saturday, following negotiations that carried on through the night.
The final text of the Katowice Rules welcomes the “timely completion” of the IPCC report and lays out universal requirements for updating and fulfilling national climate pledges. It holds developed and developing countries to the same reporting standard, but it offers flexibility for “those developing country parties that need it in the light of their capacities.” Developing countries will be left to self-determine whether or not they need flexibility.
The rules also require that countries report any climate financing, and developed countries are called on to increase their financial contributions to climate efforts in developing countries.
Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized how we navigate, how we track and ship goods, and how we regulate traffic.
However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.
This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.
Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point, regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, one of the most impactful uses of artificial intelligence encompasses technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.
Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era in safety, reducing crashes by 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.
This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.
What Does it Mean to Have a Safe System?
Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that CPS are making reliable and safe decisions.
To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes that we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario that it confronts. In this sense, “CPS are at their best if they’re not just running very simple [control systems], but if they’re running much more sophisticated and advanced systems,” Platzer notes. However, when CPS utilize advanced autonomy, because they are so complex, it becomes far more difficult to prove that they are making systematically sound choices.
In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”
The ultimate goal, then, is to find an equilibrium between flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”
How to Make a System Safe
The first step in navigating this issue is to determine how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by this central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they have the conviction that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?
The answer is an automated theorem prover, which is a computer program that assists with the development of rigorous mathematical correctness proofs.
When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
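To make this concrete, here is a schematic example of the kind of safety statement such a prover checks, written in the style of differential dynamic logic, the formalism behind Platzer’s tools. The specific model is our own illustration, not taken from the source: a car with braking rate b that starts close enough to a stop line m never overruns it, no matter how many control-then-drive cycles it executes.

```latex
% Schematic dL safety statement (illustrative):
% "if the car can still stop in time (v^2 <= 2b(m - x)), then after
% any number of brake-then-coast cycles, it never passes the line m."
\[
  v^2 \le 2b\,(m - x) \;\rightarrow\;
  \bigl[\,\bigl(a := -b;\ \{x' = v,\ v' = a \ \&\ v \ge 0\}\bigr)^{*}\,\bigr]\; x \le m
\]
```

The proof hinges on showing that the precondition is an invariant of the braking dynamics: its time derivative is 2v·a + 2b·v = 0 when a = −b, so it holds forever, and together with v² ≥ 0 it forces x ≤ m.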
After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.
Eventually, Platzer hopes that this will culminate in technology that allows CPS to recover from situations where the expected outcome didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.
The more seamless such transitions are, the more complex they are to implement. But they are the ultimate amalgamation of safety and flexibility or, in other words, the ultimate combination of AI and safety proof technology.
Creating the Tech of Tomorrow
To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond, in rigor, what anyone else is doing for the analysis of cyber-physical systems.”
The KeYmaera X prover, which was created by Platzer and his team, is a tool that allows users to easily and reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.
More technically, KeYmaera X is a hybrid systems theorem prover that analyzes the control program and the physical behavior of the controlled system together, in order to provide both efficient computation and the necessary support for sophisticated safety proof techniques. Ultimately, this work builds off of a previous iteration of the technology known as KeYmaera. However, Platzer states that, in order to optimize the tool and make it as simple as possible, the team essentially “started from scratch.”
Emphasizing just how dramatic these most recent changes are, Platzer notes that, in the previous prover, the correctness of the statements depended on some 66,000 lines of code. Notably, every one of those 66,000 lines was critical to the correctness of the verdict. According to Platzer, this poses a problem, as it’s exceedingly difficult to ensure that all of the lines are implemented correctly. Although the latest iteration of KeYmaera is ultimately just as large as the previous version, in KeYmaera X, the part of the prover that is responsible for verifying correctness is a mere 2,000 lines of code.
This allows the team to evaluate the safety of cyber-physical systems more reliably than ever before. “We identified this microkernel, this really minuscule part of the system that was responsible for the correctness of the answers, so now we have a much better chance of making sure that we haven’t accidentally snuck any mistakes into the reasoning engines,” Platzer said. Simultaneously, he notes that it enables users to do much more aggressive automation in their analysis. Platzer explains, “If you have a small part of the system that’s responsible for the correctness, then you can do much more liberal automation. It can be much more courageous because there’s an entire safety net underneath it.”
For the next stage of his research, Platzer is going to begin integrating multiple mathematical models that could potentially describe reality into a CPS. To explain these next steps, Platzer returns once more to self-driving cars: “If you’re following another driver, you can’t know if the driver is currently looking for a parking spot, trying to get somewhere quickly, or about to change lanes. So, in principle, under those circumstances, it’s a good idea to have multiple possible models and comply with the ones that may be the best possible explanation of reality.”
Ultimately, the goal is to allow the CPS to increase their flexibility and complexity by switching between these multiple models as they become more or less likely explanations of reality. “The world is a complicated place,” Platzer explains, “so the safety analysis of the world will also have to be a complicated one.”
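As a rough illustration of this model-switching idea (a toy sketch under our own assumptions, not Platzer’s actual method), a system can keep a set of candidate models of the other driver and re-weight them as observations arrive, acting on whichever hypothesis currently explains reality best:

```python
import math

# Illustrative sketch: track several candidate models of a lead car's
# behavior and re-weight them as observations arrive, so the controller
# can "switch" to whichever model best explains reality. The models and
# numbers below are invented for illustration.

MODELS = {
    "constant_speed": lambda v, dt: v,              # lead car holds speed
    "braking":        lambda v, dt: v - 2.0 * dt,   # decelerates at 2 m/s^2
    "accelerating":   lambda v, dt: v + 2.0 * dt,   # accelerates at 2 m/s^2
}

def update_scores(scores, prev_v, observed_v, dt, noise=0.5):
    """Raise the weight of models whose prediction matches the observation."""
    new_scores = {}
    for name, model in MODELS.items():
        error = observed_v - model(prev_v, dt)
        likelihood = math.exp(-(error ** 2) / (2 * noise ** 2))
        new_scores[name] = scores[name] * likelihood
    total = sum(new_scores.values())
    return {name: s / total for name, s in new_scores.items()}

def best_model(scores):
    return max(scores, key=scores.get)

# Start with no preference among the models.
scores = {name: 1.0 / len(MODELS) for name in MODELS}

# The lead car is actually slowing from 20 m/s at about 2 m/s^2.
observations = [20.0, 19.8, 19.6, 19.4, 19.2]
for prev_v, obs_v in zip(observations, observations[1:]):
    scores = update_scores(scores, prev_v, obs_v, dt=0.1)

print(best_model(scores))  # the "braking" hypothesis wins
```

The same skeleton extends to richer hypotheses (parking, lane changes) by adding entries to the model table; what matters is that the system commits only softly, so a wrong guess is corrected as soon as the evidence shifts.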
FLI is pleased to announce that we’ve signed the Safe Face Pledge, an effort to ensure facial analysis technologies are not used as weapons or in other situations that can lead to abuse or bias. The pledge was initiated and led by Joy Buolamwini, an AI researcher at MIT and founder of the Algorithmic Justice League.
Facial analysis technology isn’t just used by our smartphones and on social media. It’s also found in drones and other military weapons, and it’s used by law enforcement, airports and airlines, public surveillance cameras, schools, businesses, and more. Yet the technology is known to be flawed and biased, often miscategorizing anyone who isn’t a white male. And the bias is especially strong against dark-skinned women.
“Research shows facial analysis technology is susceptible to bias and even if accurate can be used in ways that breach civil liberties. Without bans on harmful use cases, regulation, and public oversight, this technology can be readily weaponized, employed in secret government surveillance, and abused in law enforcement,” warns Buolamwini.
By signing the pledge, companies that develop, sell or buy facial recognition and analysis technology promise that they will “prohibit lethal use of the technology, lawless police use, and require transparency in any government use.”
FLI does not develop or use these technologies, but we signed because we support these efforts, and we hope all companies will take necessary steps to ensure their technologies are used for good, rather than as weapons or other means of harm.
Companies that had signed the pledge at launch include Simprints, Yoti, and Robbie AI. Other early signatories of the pledge include prominent AI researchers Noel Sharkey, Subbarao Kambhampati, Toby Walsh, Stuart Russell, and Raja Chatila, as well as tech authors Cathy O’Neil and Meredith Broussard, and many more.
The Safe Face Pledge commits signatories to:
Show Value for Human Life, Dignity, and Rights
- Do not contribute to applications that risk human life
- Do not facilitate secret and discriminatory government surveillance
- Mitigate law enforcement abuse
- Ensure your rules are being followed
Address Harmful Bias
- Implement internal bias evaluation processes and support independent evaluation
- Submit models on the market for benchmark evaluation where available
- Increase public awareness of facial analysis technology use
- Enable external analysis of facial analysis technology on the market
Embed Safe Face Pledge into Business Practices
- Modify legal documents to reflect value for human life, dignity, and rights
- Engage with stakeholders
- Provide details of Safe Face Pledge implementation
Organizers of the pledge say, “Among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” And the first statement of the pledge calls on signatories to ensure their facial analysis tools are not used “to locate or identify targets in operations where lethal force may be used or is contemplated.”
Anthony Aguirre, cofounder of FLI, said, “A great majority of AI researchers agree that designers and builders of AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications. That is, in fact, the 9th Asilomar AI principle. The Safe Face Pledge asks those involved with the development of facial recognition technologies, which are dramatically increasing in power through the use of advanced machine learning, to take this belief seriously and to act on it. As new technologies are developed and poised for widespread implementation and use, it is imperative for our society to consider their interplay with the rights and privileges of the people they affect — and new rights and responsibilities may have to be considered as well, where technologies are currently in a legal or regulatory grey area. FLI applauds the multiple initiatives, including this pledge, aimed at ensuring that facial recognition technologies — as with other AI technologies — are implemented only in a way that benefits both individuals and society while taking utmost care to respect individuals’ rights and human dignity.”
You can support the Safe Face Pledge by signing here.
By Jolene Creighton
Algorithms don’t just decide what posts you see in your Facebook newsfeed. They make millions of life-altering decisions every day. They help decide who moves to the next stage of a job interview, who can take out a loan, and even who’s granted parole.
When one stops to consider the well-known biases that exist in these algorithms, the role that they play in our decision-making processes becomes somewhat concerning.
Ultimately, bias is a problem that stems from the unrepresentative datasets that our systems are trained on. For example, when it comes to images, most of the training data is Western-centric — it depicts Caucasian individuals taking part in traditionally Western activities. Consequently, as Google research previously revealed, if we give an AI system an image of a Caucasian bride in a Western dress, it correctly labels the image as “wedding,” “bride,” and “women.” If, however, we present the same AI system with an image of a bride of Asian descent, it produces results like “clothing,” “event,” and “performance art.”
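One standard way to surface this kind of skew, sketched below with made-up data, is to report a classifier’s accuracy per subgroup rather than as a single aggregate number:

```python
# Illustrative sketch with invented data: break a classifier's accuracy
# down by subgroup to reveal bias that an aggregate score would hide.

predictions = [
    # (true_label, predicted_label, image_region)
    ("wedding", "wedding", "western"),
    ("wedding", "wedding", "western"),
    ("wedding", "wedding", "western"),
    ("wedding", "performance art", "east_asian"),
    ("wedding", "clothing", "east_asian"),
    ("wedding", "wedding", "east_asian"),
]

def accuracy_by_group(rows):
    """Return {group: fraction of correct predictions within that group}."""
    totals, correct = {}, {}
    for true_label, predicted, group in rows:
        totals[group] = totals.get(group, 0) + 1
        if predicted == true_label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

print(accuracy_by_group(predictions))
# The aggregate accuracy (4 of 6) hides the large gap between the groups.
```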
Of course, this problem is not exclusively a Western one. In 2011, a study found that AI systems developed in East Asia have more difficulty distinguishing between Caucasian faces than between Asian faces.
That’s why, in September of 2018, Google partnered with the NeurIPS conference to launch the Inclusive Images Competition, an event that was created to help encourage the development of less biased AI image classification models.
For the competition, individuals were asked to use Open Images, an image dataset collected from North America and Europe, to train a system that can be evaluated on images collected from a different geographic region.
At this week’s NeurIPS conference, Pallavi Baljekar, a Google Brain researcher, spoke about the success of the project. Notably, the competition was only marginally successful. Although the leading models maintained relatively high accuracy in the first stages of the competition, four out of five top models didn’t predict the “bride” label when applied to the original two bride images.
However, that’s not to say that progress wasn’t made. Baljekar noted that the competition proved that, even with a small and diverse set of data, “we can improve performance on unseen target distributions.”
And in an interview, Pavel Ostyakov, a Deep Learning Engineer at Samsung AI Center and the researcher who took first place in the competition, added that demanding an entirely unbiased AI may be asking for a bit too much. Ultimately, our AI need to be able to “stereotype” to some degree in order to make their classifications. “The problem was not solved yet, but I believe that it is impossible for neural networks to make unbiased predictions,” he said. The need to retain some bias is a sentiment that other AI researchers have echoed before.
Consequently, it seems that making unbiased AI systems is going to be a process that requires continuous improvement and tweaking. Yet, even if we can’t make entirely unbiased AI, we can do a lot more to make these systems less biased.
With this in mind, today, Google announced Open Images Extended. It’s an extension of Google’s Open Images and is intended to be a dataset that better represents the global diversity we find on our planet. The first set to be added is seeded with over 470,000 images.
On this very long road we’re traveling, it’s a step in the right direction.
By Jolene Creighton
Our world is a complex and vibrant place. It’s also remarkably dynamic, existing in a state of near constant change. As a result, when we’re faced with a decision, there are thousands of variables that must be considered.
According to Joelle Pineau, an Associate Professor at McGill University and lead of Facebook’s Artificial Intelligence Research lab in Montreal, this poses a bit of a problem when it comes to our AI agents.
During her keynote speech at the 2018 NeurIPS conference, Pineau stated that many AI researchers aren’t training their machine learning systems in proper environments. Instead of using dynamic worlds that mimic what we see in real life, much of the work that’s currently being done takes place in simulated worlds that are static and pristine, lacking the complexity of realistic environments.
According to Pineau, although these computer-constructed worlds help make research more reproducible, they also make the results less rigorous and meaningful. “The real world has incredible complexity, and when we go to these simulators, that complexity is completely lost,” she said.
Pineau continued by noting that, if we hope to one day create intelligent machines that are able to work and react like humans — artificial general intelligences (AGIs) — we must go beyond the static and limited worlds that are created by computers and begin tackling real world scenarios. “We have to break out of these simulators…on the roadmap to AGI, this is only the beginning,” she said.
Ultimately, Pineau also noted that we will never achieve a true AGI unless we begin testing our systems on more diverse training sets and forcing our intelligent agents to tackle more complex problems. “The world is your test set,” she said, concluding, “I’m here to encourage you to explore the full spectrum of opportunities…this means using separate tasks for training and testing.”
Teaching a Machine to Reason
Pineau’s primary critique focused on an area of machine learning known as reinforcement learning (RL). RL systems allow intelligent agents to improve their decision-making capabilities through trial and error. Over time, these agents learn the rules that govern good and bad choices by interacting with their environment and receiving numerical reward signals based on the actions they take.
Ultimately, RL systems are trained to maximize the numerical reward signals that they receive, so their decisions improve as they try more things and discover what actions yield the most reward. But unfortunately, most simulated worlds have a very limited number of variables. As a result, RL systems have very few things that they can interact with. This means that, although intelligent agents may know what constitutes good decision-making in a simulated environment, when they’re deployed in a realistic environment, they quickly become lost amidst all the new variables.
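The trial-and-error loop described above can be sketched with a minimal tabular Q-learning agent, a standard RL algorithm; the toy corridor environment below is our own invention, chosen only to make the loop concrete:

```python
import random

# Minimal sketch of the RL loop: an agent tries actions, receives
# numerical rewards, and gradually learns which action is best in each
# state. The environment is a 5-cell corridor with a reward at the
# right end (invented for illustration).

random.seed(0)
N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; entering the last cell pays reward 1."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for _ in range(200):                       # 200 trial-and-error episodes
    state, done = 0, False
    while not done:
        if random.random() < epsilon:      # occasionally explore
            action = random.choice(ACTIONS)
        else:                              # otherwise exploit what is known
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the learned values favor "right" in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

With only two actions and five states, the agent converges quickly; the point Pineau makes is that real environments have vastly more variables than this, so competence in a toy world says little about competence in the wild.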
According to Pineau, overcoming this issue means creating more dynamic environments for AI systems to train on.
To showcase one way of accomplishing this, Pineau turned to Breakout, a game launched by Atari in 1976. The game’s environment is simplistic and static, consisting of a background that is entirely black. In order to inject more complexity into this simulated environment, Pineau and her team inserted videos, which are an endless source of natural noise, into the background.
Pineau argued that, by adding these videos into the equation, the team was able to create an environment that includes some of the complexity and variability of the real world. And by ultimately training reinforcement learning systems to operate in such multifaceted environments, researchers obtain more reliable findings and better prepare RL systems to make decisions in the real world.
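The idea can be sketched as an environment wrapper (an illustrative mock-up, not the team’s actual code) that overlays video frames wherever the game frame is blank:

```python
# Illustrative sketch: wrap a game environment so its plain black
# background is replaced by frames from a video, injecting natural
# noise in the spirit of the Breakout experiment. All classes and
# data here are invented stand-ins.

class NoisyBackgroundWrapper:
    """Overlay video frames wherever the game frame is pure black (0)."""

    def __init__(self, env, video_frames):
        self.env = env
        self.video_frames = video_frames  # endless source of natural noise
        self.t = 0

    def _merge(self, frame):
        background = self.video_frames[self.t % len(self.video_frames)]
        self.t += 1
        # Keep game pixels; fill black background pixels from the video.
        return [
            [game if game != 0 else bg for game, bg in zip(game_row, bg_row)]
            for game_row, bg_row in zip(frame, background)
        ]

    def step(self, action):
        frame, reward, done = self.env.step(action)
        return self._merge(frame), reward, done

# Tiny stand-in environment: a 2x3 frame where nonzero pixels are game objects.
class FakeBreakout:
    def step(self, action):
        return [[0, 7, 0], [0, 0, 9]], 0.0, False

video = [[[1, 2, 3], [4, 5, 6]], [[9, 8, 7], [6, 5, 4]]]
env = NoisyBackgroundWrapper(FakeBreakout(), video)
frame, _, _ = env.step(0)
print(frame)  # [[1, 7, 3], [4, 5, 9]]
```

Because the wrapper leaves rewards and game logic untouched, any agent that truly understands the game should be unaffected, while an agent that has overfit to the static black background will stumble.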
In order to help researchers better comprehend exactly how reliable and reproducible their results currently are — or aren’t — Pineau pointed to The 2019 ICLR Reproducibility Challenge during her closing remarks.
The goal of this challenge is to have members of the research community try to reproduce the empirical results submitted to the International Conference on Learning Representations. Then, once all of the attempts have been made, the results are sent back to the original authors. Pineau noted that, to date, the challenge has had a dramatic impact on the findings that are reported. During the 2018 challenge, 80% of authors that received reproducibility reports stated that they changed their papers as a result of the feedback.
By Ariel Conn
Over the last few years, as concerns surrounding artificial intelligence have grown, an increasing number of organizations, companies, and researchers have come together to create and support principles that could help guide the development of beneficial AI. With FLI’s Asilomar Principles, IEEE’s treatise on the Ethics of Autonomous and Intelligent Systems, the Partnership on AI’s Tenets, and many more, concerned AI researchers and developers have laid out a framework of ethics that almost everyone can agree upon. However, these previous documents weren’t specifically written to inform and direct AI policy and regulations.
On December 4, at the NeurIPS conference in Montreal, Canadian researchers took the next step, releasing the Montreal Declaration on Responsible AI. The Declaration builds on the current ethical framework of AI, but the architects of the document also add, “Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.”
Yoshua Bengio, a prominent Canadian AI researcher and founder of one of the world’s premier machine learning labs, described the Declaration, saying, “Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner.”
“We want this Declaration to spark a broad dialogue between the public, the experts and government decision-makers,” said UdeM’s rector, Guy Breton. “The theme of artificial intelligence will progressively affect all sectors of society and we must have guidelines, starting now, that will frame its development so that it adheres to our human values and brings true social progress.”
The Declaration lays out ten principles: Well-Being, Respect for Autonomy, Protection of Privacy and Intimacy, Solidarity, Democratic Participation, Equity, Diversity, Prudence, Responsibility, and Sustainable Development.
The primary themes running through the Declaration revolve around ensuring that AI doesn’t disrupt basic human and civil rights and that it enhances equality, privacy, diversity, and human relationships. The Declaration also suggests that humans need to be held responsible for the actions of artificial intelligence systems (AIS), and it specifically states that AIS cannot be allowed to make the decision to take a human life. It also includes a section on ensuring that AIS is designed with the climate and environment in mind, such that resources are sustainably sourced and energy use is minimized.
The Declaration is the result of deliberation that “occurred through consultations held over three months, in 15 different public spaces, and sparked exchanges between over 500 citizens, experts and stakeholders from every horizon.” That it was formulated in Canada is especially relevant given Montreal’s global prominence in AI research.
In his article for the Conversation, Bengio explains, “Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.”
He adds, “Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.”
By Jolene Creighton
Artificially intelligent systems are already among us. They fly our planes, drive our cars, and even help doctors make diagnoses and treatment plans. As AI continues to impact daily life and alter society, laws and policies will increasingly have to take it into account. Each day, more and more of the world’s experts call on policymakers to establish clear, international guidelines for the governance of AI.
During his opening remarks, Ed Felten, a professor of computer science and public affairs at Princeton University, noted that AI is poised to radically change everything about the way we live and work, stating that this technology is “extremely powerful and represents a profound change that will happen across many different areas of life.” As such, Felten noted that we must work quickly to amend our laws and update our policies so we’re ready to confront the changes that this new technology brings.
However, Felten argued that policymakers cannot be left to dictate this course alone — members of the AI research community must engage with them.
“Sometimes it seems like our world, the world of the research lab or the developer’s or data scientist’s cubicle, is a million miles from public policy…however, we have not only an opportunity but also a duty to be actively participating in public life,” he said.
Guidelines for Effective Engagement
Felten noted that the first step for researchers is to focus on and understand the political system as a whole. “If you look only at the local picture, it might look irrational. But, in fact, these people [policymakers] are operating inside a system that is big and complicated,” he said. To this point, Felten stated that researchers must become better informed about political processes so that they can participate in policy conversations more effectively.
According to Felten, this means the AI community needs to recognize that policy work is valid and valuable, and this work should be incentivized accordingly. He also called on the AI community to create career paths that encourage researchers to actively engage with policymakers by blending AI research and policy work.
For researchers who are interested in pursuing such work, Felten outlined the steps they should take to start an effective dialogue:
- Combine knowledge with preference: As a researcher, work to frame your expertise in the context of the policymaker’s interests.
- Structure the decision space: Based on the policymaker’s preferences, give a range of options and explain their possible consequences.
- Follow-up: Seek feedback on the utility of the guidance that you offered and the way that you presented your ideas.
If done right, Felten said, this protocol allows experts and policymakers to build productive engagement and trust over time.
At the end of last week, amidst the flurry of holiday shopping, the White House quietly released Volume II of the Fourth National Climate Assessment (NCA4). The comprehensive report, which was compiled by the United States Global Change Research Program (USGCRP), is the culmination of decades of environmental research conducted by scientists from 13 different federal agencies. The scope of the work is truly striking, representing more than 300 authors and encompassing thousands of scientific studies.
Unfortunately, the report is also rather grim.
If climate change continues unabated, the assessment asserts that it will cost the U.S. economy hundreds of billions a year by the close of the century — causing some $155 billion in annual damages to labor and another $118 billion in damages to coastal property. In fact, the report notes that, unless we immediately launch “substantial and sustained global mitigation and regional adaptation efforts,” the impact on the agricultural sector alone will reach billions of dollars in losses by the middle of the century.
Notably, the NCA4 authors emphasize that these aren’t just warnings for future generations, pointing to several areas of the United States that are already grappling with the high economic cost of climate change. For example, a powerful heatwave that struck the Northeast left local fisheries devastated, and similar events in Alaska have dramatically slashed fishing quotas for certain stocks. Meanwhile, human activity is exacerbating Florida’s red tide, killing fish populations along the southwest coast.
Of course, the economy won’t be the only thing that suffers.
According to the assessment, climate change is increasingly threatening the health and well-being of the American people, and emission reduction efforts could ultimately save thousands of lives. Young children, pregnant women, and aging populations are identified as most at risk; however, the authors note that waterborne infectious diseases and global food shortages threaten all populations.
As with the economic impact, the toll on human health is already visible. For starters, air pollution is driving a rise in the number of deaths related to heart and lung problems. Asthma diagnoses have increased, and rising temperatures are causing a surge in heatstroke and other heat-related illnesses. And the report makes it clear that the full extent of the risk extends well beyond either the economy or human health, plainly stating that climate change threatens all life on our planet.
Ultimately, the authors emphasize the immediacy of the issue, noting that without immediate action, no system will be left untouched:
“Climate change affects the natural, built, and social systems we rely on individually and through their connections to one another….extreme weather and climate-related impacts on one system can result in increased risks or failures in other critical systems, including water resources, food production and distribution, energy and transportation, public health, international trade, and national security. The full extent of climate change risks to interconnected systems, many of which span regional and national boundaries, is often greater than the sum of risks to individual sectors.”
Yet, the picture painted by the NCA4 assessment is not entirely bleak. The report suggests that, with a concerted and sustained effort, the most dire damage can be undone and ultimate catastrophe averted. The authors note that this will require international cooperation centered on a dramatic reduction in global carbon dioxide emissions.
The 2015 Paris Agreement, in which 195 countries put forth emission reduction pledges, represented a landmark in the international effort to curtail global warming. The agreement was designed to cap warming at 2 degrees Celsius, a limit scientists then believed would prevent the most severe and irreversible effects of climate change. That limit has since been lowered to 1.5 degrees Celsius. Unfortunately, current models predict that even if countries hit their current pledges, temperatures will still climb to 3.3 degrees Celsius by the end of the century. The Paris Agreement offers a necessary first step, but in light of these new predictions, pledges must be strengthened.
Scientists hope the findings in the National Climate Assessment will compel the U.S. government to take the lead in updating its climate commitments.