State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.

These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.

Nevertheless, the development of military AI is accelerating. Below are the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. All information is from State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by PAX.

“PAX calls on states to develop a legally binding instrument that ensures meaningful human control over weapons systems, as soon as possible,” says Daan Kayser, the report’s lead author. “Scientists and tech companies also have a responsibility to prevent these weapons from becoming reality. We all have a role to play in stopping the development of Killer Robots.”

The United States

UN Position

In April 2018, the US underlined the need to develop “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.”

AI in the Military

  • In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as described in 2016 by the then-Deputy Secretary of Defense, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).”
  • The 2016 report ‘Preparing for the Future of AI’ also refers to the weaponization of AI and notably states: “Given advances in military technology and AI more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid.”
  • In September 2018, the Pentagon committed to spend USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
  • The Advanced Targeting and Lethality Automated System (ATLAS) program, a branch of DARPA, “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.”

Cooperation with the Private Sector

  • Establishing collaboration with private companies can be challenging, as the widely publicized case of Google and Project Maven has shown: Following protests from Google employees, Google stated that it would not renew its contract. Nevertheless, other tech companies such as Clarifai, Amazon and Microsoft still collaborate with the Pentagon on this project.
  • The Project Maven controversy deepened the gap between the AI community and the Pentagon. The government has developed two new initiatives to help bridge this gap.
  • DARPA’s OFFSET program, which has the aim of “using swarms comprising upwards of 250 unmanned aircraft systems (UASs) and/or unmanned ground systems (UGSs) to accomplish diverse missions in complex urban environments,” is being developed in collaboration with a number of universities and start-ups.
  • DARPA’s Squad X Experimentation Program, which aims for human fighters to “have a greater sense of confidence in their autonomous partners, as well as a better understanding of how the autonomous systems would likely act on the battlefield,” is being developed in collaboration with Lockheed Martin Missiles.

China

UN Position

China demonstrated the “desire to negotiate and conclude” a new protocol “to ban the use of fully autonomous lethal weapons systems.” However, China does not want to ban the development of these weapons, which has raised questions about its exact position.

AI in the Military

  • There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to World War III.
  • Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.

Cooperation with the Private Sector

  • To advance military innovation, President Xi Jinping has called for China to follow “the road of military-civil fusion-style innovation,” such that military innovation is integrated into China’s national innovation system. This fusion has been elevated to the level of a national strategy.
  • The People’s Liberation Army (PLA) relies heavily on tech firms and innovative start-ups. The larger AI research organizations in China can be found within the private sector.
  • There are a growing number of collaborations between defense and academic institutions in China. For instance, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to create “a platform for the pursuit of dual-use applications of emerging technologies, particularly artificial intelligence.”
  • Regarding the application of artificial intelligence to weapons, China is currently developing “next generation stealth drones,” including, for instance, Ziyan’s Blowfish A2 model. According to the company, this model “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”

Russia

UN Position

Russia has stated that the debate around lethal autonomous weapons should not ignore their potential benefits, adding that “the concerns regarding LAWS can be addressed through faithful implementation of the existing international legal norms.” Russia has actively tried to limit the number of days allotted for such discussions at the UN.

AI in the Military

  • While Russia does not have a military-only AI strategy yet, it is clearly working towards integrating AI more comprehensively.
  • The Foundation for Advanced Research Projects (the Foundation), which can be seen as the Russian equivalent of DARPA, opened the National Center for the Development of Technology and Basic Elements of Robotics in 2015.
  • At a conference on AI in March 2018, Defense Minister Shoigu pushed for increasing cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to the technological and economic security of Russia.”
  • In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”

Cooperation with the Private Sector

  • A new city named Era, devoted entirely to military innovation, is currently under construction. According to the Kremlin, the “main goal of the research and development planned for the technopolis is the creation of military artificial intelligence systems and supporting technologies.”
  • In 2017, Kalashnikov — Russia’s largest gun manufacturer — announced that it had developed a fully automated combat module based on neural-network technologies that enable it to identify targets and make decisions.

The United Kingdom

UN Position

The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that it “would not have any practical effect.”

AI in the Military

  • A 2018 Ministry of Defense report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.”
  • The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and optimization of human autonomy teaming.
  • The Defense Science and Technology Laboratory (Dstl), the MoD’s research arm, launched the AI Lab in 2018.
  • In terms of weaponry, the best-known example of autonomous technology currently under development is the top-secret Taranis armed drone, the “most technically advanced demonstration aircraft ever built in the UK,” according to the MoD.

Cooperation with the Private Sector

  • The MoD has a cross-government organization called the Defense and Security Accelerator (DASA), launched in December 2016. DASA “finds and funds exploitable innovation to support UK defense and security quickly and effectively, and support UK prosperity.”
  • In March 2019, DASA awarded a GBP 2.5 million contract to Blue Bear Systems, as part of the Many Drones Make Light Work project. On this, the director of Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”

France

UN Position

France understands the autonomy of LAWS as total, with no form of human supervision from the moment of activation and no subordination to a chain of command. France stated that a legally binding instrument on the issue would not be appropriate, describing it as neither realistic nor desirable. France did propose a political declaration that would reaffirm fundamental principles and “would underline the need to maintain human control over the ultimate decision of the use of lethal force.”

AI in the Military

  • France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as […] in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.”
  • This has been echoed by French Minister for the Armed Forces, Florence Parly, who said that “giving a machine the choice to fire or the decision over life and death is out of the question.”
  • On defense and security, the Villani Report states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.
  • The Villani Report refers to DARPA as a model, though not with the aim of replicating it. However, the report states that some of DARPA’s methods “should inspire us nonetheless. In particular as regards the President’s wish to set up a European Agency for Disruptive Innovation, enabling funding of emerging technologies and sciences, including AI.”
  • The Villani Report emphasizes the creation of a “civil-military complex of technological innovation, focused on digital technology and more specifically on artificial intelligence.”

Cooperation with the Private Sector

  • In September 2018, the Defense Innovation Agency (DIA) was created as part of the Direction Générale de l’Armement (DGA), France’s arms procurement and technology agency. According to Parly, the new agency “will bring together all the actors of the ministry and all the programs that contribute to defense innovation.”
  • One of the most advanced projects currently underway is the nEUROn unmanned combat air system, developed by French arms producer Dassault on behalf of the DGA, which can fly autonomously for over three hours.
  • Patrice Caine, CEO of Thales, one of France’s largest arms producers, stated in January 2019 that Thales will never pursue “autonomous killing machines,” and is working on a charter of ethics related to AI.

Israel

UN Position

In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “[w]e should also be aware of the military and humanitarian advantages.”

AI in the Military

  • It is expected that Israeli use of AI tools in the military will increase rapidly in the near future.
  • The main technical unit of the Israeli Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i, there is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
  • The Israeli military deploys weapons with a considerable degree of autonomy. One of the most relevant examples is the Harpy loitering munition, also known as a kamikaze drone: an unmanned aerial vehicle that can fly around for a significant length of time to engage ground targets with an explosive warhead.
  • Israel was one of the first countries to “reveal that it has deployed fully automated robots: self-driving military vehicles to patrol the border with the Palestinian-governed Gaza Strip.”

Cooperation with the Private Sector

  • Public-private partnerships are common in the development of Israel’s military technology. There is a “close connection between the Israeli military and the digital sector,” which is said to be one of the reasons for the country’s AI leadership.
  • Israel Aerospace Industries, one of Israel’s largest arms companies, has long been developing increasingly autonomous weapons, including the above-mentioned Harpy.

South Korea

UN Position

In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.” In 2018, it raised concerns about limiting civilian applications as well as the positive defense uses of autonomous weapons.

AI in the Military

  • In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, entitled the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.”
  • South Korea is developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
  • South Korea is known to have used the armed SGR-A1 sentry robot, which has operated in the demilitarized zone separating North and South Korea. The robot has both a supervised mode and an unsupervised mode. In the unsupervised mode “the SGR-A1 identifies and tracks intruders […], eventually firing at them without any further intervention by human operators.”

Cooperation with the Private Sector

  • Public-private cooperation is an integral part of the military strategy: the plan for the AI Research and Development Center is “to build a network of collaboration with local universities and research entities such as the KAIST [Korea Advanced Institute of Science and Technology] and the Agency for Defense Development.”
  • In September 2018, South Korea’s Defense Acquisition Program Administration (DAPA) launched a new strategy to develop its national military-industrial base, with an emphasis on boosting ‘Industry 4.0 technologies’, such as artificial intelligence, big data analytics and robotics.

To learn more about what’s happening at the UN, check out this article from the Bulletin of the Atomic Scientists.

Dr. Matthew Meselson Wins 2019 Future of Life Award

On April 9th, Dr. Matthew Meselson received the $50,000 Future of Life Award at a ceremony at the University of Colorado Boulder’s Conference on World Affairs. Dr. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity. April 9th marked the eve of the Convention’s 47th anniversary.

Meselson’s long career is studded with highlights: proving Watson and Crick’s hypothesis on DNA structure, solving the Sverdlovsk Anthrax mystery, ending the use of Agent Orange in Vietnam. But it is above all his work on biological weapons that makes him an international hero.

“Through his work in the US and internationally, Matt Meselson was one of the key forefathers of the 1972 Biological Weapons Convention,” said Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit. “The treaty bans biological weapons and today has 182 member states. He has continued to be a guardian of the BWC ever since. His seminal warning in 2000 about the potential for the hostile exploitation of biology foreshadowed many of the technological advances we are now witnessing in the life sciences and responses which have been adopted since.”

Meselson became interested in biological weapons during the 60s, while employed with the U.S. Arms Control and Disarmament Agency. It was on a tour of Fort Detrick, where the U.S. was then manufacturing anthrax, that he learned the motivation for developing biological weapons: they were cheaper than nuclear weapons. Meselson was struck, he says, by the illogic of this — it would be an obvious national security risk to decrease the production cost of WMDs.

Do you know someone deserving of the Future of Life Award? If so, please consider submitting their name to our Unsung Hero Search page. If we decide to give the award to your nominee, you will receive a $3,000 prize from FLI for your contribution.

The use of biological weapons was already prohibited by the 1925 Geneva Protocol, an international treaty that the U.S. had never ratified. So Meselson wrote a paper, “The United States and the Geneva Protocol,” outlining why it should do so. Meselson knew Henry Kissinger, who passed his paper along to President Nixon, and by the end of 1969 Nixon renounced biological weapons.

Next came the question of toxins — poisons derived from living organisms. Some of Nixon’s advisors believed that the U.S. should renounce the use of naturally derived toxins, but retain the right to use artificial versions of the same substances. It was another of Meselson’s papers, “What Policy for Toxins,” that led Nixon to reject this arbitrary distinction and to renounce the use of all toxin weapons.

On Meselson’s advice, Nixon had resubmitted the Geneva Protocol to the Senate for approval. But he also went beyond the terms of the Protocol — which only ban the use of biological weapons — to renounce offensive biological research itself. Stockpiles of offensive biological substances, like the anthrax that Meselson had discovered at Fort Detrick, were destroyed.

Once the U.S. adopted this more stringent policy, Meselson turned his attention to the global stage. He and his peers wanted an international agreement stronger than the Geneva Protocol, one that would ban stockpiling and offensive research in addition to use and would provide for a verification system. From their efforts came the Biological Weapons Convention, which was signed in 1972 and is still in effect today.

“Thanks in significant part to Professor Matthew Meselson’s tireless work, the world came together and banned biological weapons, ensuring that the ever more powerful science of biology helps rather than harms humankind. For this, he deserves humanity’s profound gratitude,” said former UN Secretary-General Ban Ki-Moon.

Meselson has said that biological warfare “could erase the distinction between war and peace.” Other forms of war have a beginning and an end — it’s clear what is warfare and what is not. Biological warfare would be different: “You don’t know what’s happening, or you know it’s happening but it’s always happening.”

And the consequences of biological warfare can be greater, even, than mass destruction: attacks on DNA could fundamentally alter humankind. FLI honors Matthew Meselson for his efforts to protect not only human life but also the very definition of humanity.

Said Astronomer Royal Lord Martin Rees, “Matt Meselson is a great scientist — and one of very few who have been deeply committed to making the world safe from biological threats. This will become a challenge as important as the control of nuclear weapons — and much more challenging and intractable. His sustained and dedicated efforts fully deserve wider acclaim.”

“Today biotech is a force for good in the world, associated with saving rather than taking lives, because Matthew Meselson helped draw a clear red line between acceptable and unacceptable uses of biology,” added MIT Professor and FLI President Max Tegmark. “This is an inspiration for those who want to draw a similar red line between acceptable and unacceptable uses of artificial intelligence and ban lethal autonomous weapons.”

To learn more about Matthew Meselson, listen to FLI’s two-part podcast featuring him in conversation with Ariel Conn and Max Tegmark. In Part One, Meselson describes how he helped prove Watson and Crick’s hypothesis of DNA structure and recounts the efforts he undertook to get biological weapons banned. Part Two focuses on three major incidents in the history of biological weapons and the role played by Meselson in resolving them.

Publications by Meselson include:

The Future of Life Award is a prize awarded by the Future of Life Institute for a heroic act that has greatly benefited humankind, done despite personal risk and without being rewarded at the time. This prize was established to help set the precedent that actions benefiting future generations will be rewarded by those generations. The inaugural Future of Life Award was given to the family of Vasili Arkhipov in 2017 for single-handedly preventing a Soviet nuclear attack against the US in 1962, and the 2nd Future of Life Award was given to the family of Stanislav Petrov for preventing a false-alarm nuclear war in 1983.

Women for the Future

This Women’s History Month, FLI has been celebrating with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all. The field of existential risk mitigation is largely male-dominated, so we wanted to emphasize the value (and necessity) of female voices in our industry. We profiled 34 women we admire, and got their takes on what they love (and don’t love) about their jobs, what advice they’d give women starting out in their fields, and what makes them hopeful for the future.

These women do all sorts of things. They are researchers, analysts, professors, directors, founders, students. One is a state senator; one is a professional poker player; two are recipients of the Nobel Peace Prize. They work on AI, climate change, robotics, disarmament, human rights, and more. What ultimately brings them together is a shared commitment to the future of humanity.

Women in the US remain substantially underrepresented in academia, government, STEM, and other industries. They make up an estimated 12% of machine learning researchers, they comprise roughly 30% of the authors on the latest IPCC report, and they’ve won about 16% of Nobel Peace Prizes awarded to individuals.

Nevertheless, the women that we profiled had overwhelmingly positive things to say about their experiences in this industry.

They are, without exception, deeply passionate about what they do. As Jade Leung, Head of Research and Partnerships at the University of Oxford’s Center for the Governance of Artificial Intelligence, put it: “It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”

And they all want to see more women join their fields. “I’ve found the [existential risk] community extremely welcoming and respectful,” said Liv Boeree, professional poker player and co-founder of Raising for Effective Giving, “so I’d recommend it highly to any woman who is interested in pursuing work in this area.”

Bing Song, Vice President of the Berggruen Institute, agreed. “Women should embrace and dive into this new area of thinking about the future of humanity,” she said, adding, “Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned.”

“Our talents and skills are needed,” concluded Sonia Cassidy, Director Of Operations at Alliance to Feed the Earth in Disasters, “and so are you!”

Find a list of all 34 women on the Women for the Future homepage, or scroll through the slideshow below. Click on a name or photo to learn more. 



Rasha Abdul Rahim

Deputy Director of Amnesty Tech, Amnesty International

“[W]hen people around the world and civil society can think of a potent idea that’s worth fighting for, and stick at the concept however long it may take, and develop the proposal to get traction from political leaders, we really can make a difference.”


Elizabeth Barnes

Safety Team Member, OpenAI

“You can probably learn things much faster than you expect. It’s easy to think that learning some new skill will be impossibly hard. I’ve been surprised a lot of times how quickly things go from being totally overwhelming and incomprehensible to pretty alright.”


Rebecca Boehm

Economist, Union of Concerned Scientists

“The recent elevation of conversations about the importance of racial equity and inclusion makes me very hopeful for our future. I believe solving the big food and agricultural issues we are facing will require not only the voices, but the leadership of a diverse set of people.”


Liv Boeree

Co-founder, Raising for Effective Giving (REG) | Ambassador, www.effectivegiving.org

“I’ve found the [existential risk] community extremely welcoming and respectful, so I’d recommend it highly to any woman who is interested in pursuing work in this area.”


Astrid Caldas

Senior Climate Scientist, Union of Concerned Scientists

“Learn as much as you can not only from academic institutions or NGOs, but from people on the frontlines and those who are being the most impacted by climate change. Attend events, visit places if you can, to see first hand how people are dealing with the issues, and find out how you can help them become more resilient. Sometimes it is as simple as showing them a website they didn’t know about, or telling them about grants and other resources to protect their homes from floods.”


Rosie Campbell

Assistant Director, Center for Human-Compatible AI (CHAI) at UC Berkeley

“I’m a big advocate for diversity. We’re trying to solve big, important problems, and it’s worrying to think we could be missing out on important perspectives. I’d love to see more women in AI safety!”


Sonia Cassidy

Director Of Operations, Alliance to Feed the Earth in Disasters (ALLFED)

“Do not ever underestimate yourself and what women bring into the world, this field or any other. Our talents and skills are needed, and so are you!”


Carla Zoe Cremer

Research Affiliate, Centre for the Study of Existential Risk | Researcher, Leverhulme Centre for the Future of Intelligence

“My ideas are taken seriously and my work is appreciated. The problems in existential risk are hard, unsolved and numerous — which means that everyone welcomes your initiative and contributions and will not hold you back if you try something new.”


Kristina Dahl

Senior Climate Scientist, Union of Concerned Scientists

“Climate change is at this incredible nexus of science, culture, policy, and the environment. To do this job well, one has to bring to the table a love of the environment, a willingness to identify and fight for the policies needed to protect it, a sensitivity to the diverse range of decisions people make in their daily lives, and a fascination with the nitty-gritty bits of the science.”


Jeanne Dietsch

State Senator, NH | Founder and CEO of multiple tech startups

“Entrepreneurs: make sure that your company is competitive, that you have innovative processes and/or products. And I will paraphrase Michael Bloomberg: ‘Hire honest people who are smarter than yourself.'”


Anca Dragan

Assistant Professor, UC Berkeley

“I’m hopeful that progress in intelligence and AI tools can lead to freeing up more people to spend more time on education and creative pursuits — I think that would make for a wonderful future for us.”

Photo: Human-Machine Interaction / Anca Dragan / Photos Copyright Noah Berger / 2016


Beatrice Fihn

Executive Director, ICAN

“Don’t be too intimidated or impressed by senior people and ‘important’ people. Most of them don’t actually know as much as they come across as knowing.”


Danit Gal

Project Assistant Professor, Cyber Civilization Research Center, Keio University

“If you find something that moves you — be it further developments in an established field, a way to combine existing fields to create new ones, or something that’s entirely off the beaten path — pursue it. The act of pursuing the things that fascinate you is the real experience you need. If you can combine this with something that’s useful and beneficial to this world, you’ve won the game.”


Paula Garcia

Energy analyst, Union of Concerned Scientists

“Read, talk to others that work in the renewable energy industry, identify where in the value chain you want to contribute, and go for it!”


Rose Hadshar

Project Manager, Research Scholars Programme, Future of Humanity Institute

“[S]o many extremely able people are trying to make [the future] good.”


Emilia Javorsky

Director, Scientists Against Inhumane Weapons

“I’m a pretty optimistic person at baseline, but particularly so after getting to know the incredible people that compose the x-risk community. They care so deeply about engineering a positive future for humanity — I feel tremendously grateful to have the opportunity to work with them!”


Natalie Jones

PhD Student, University of Cambridge | Research Affiliate, CSER

“The other people working in this field are so fiercely intelligent and capable. It’s hard not to have a conversation which leaves you with a perspective or idea you hadn’t thought of before. This, and the knowledge that one is doing useful and important work, combine to make it very rewarding.”


Jade Leung

Head of Research and Partnerships, Center for the Governance of Artificial Intelligence, University of Oxford

“It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”


Cassidy Nelson

Research Scholar, Future of Humanity Institute, University of Oxford

“I feel I’m surrounded by people who care deeply about life and addressing large and complex risks. I feel this field’s focus, while grim on its own, is also intrinsically coupled with the desire and hope that the future can go well. I remain hopeful that if we can navigate the next century safely, a better existence awaits us and our descendants. I am inspired by what could be possible for conscious life and I hope that my career can help ensure no catastrophic event occurs before our future is secured.”


Charlie Oliver

Founder/CEO, TECH 2025 (Served Fresh Media)

“Don’t allow other people to define your dreams and don’t allow them to place limits on what you can do. And just as important, if not more so, don’t limit your own potential with soul-crushing self-doubt. A little self-doubt is okay and quite normal. But when it begins to keep you from taking big risks necessary to discover your strengths and path, you have to fix that right away or that type of thinking will fester.”


Marie-Therese Png

PhD Student, Oxford Internet Institute

“Underrepresented perspectives — women, people of colour, and other intersectional identities — are highly valuable at this point in uncovering blindspots. Your concerns may not currently be represented in the research community, but it doesn’t mean they shouldn’t be. There is low replaceability because if you weren’t there it wouldn’t be any single person’s main focus. When you’re a minority in the room it’s even more important to overcome audience inhibition and speak up or a blindspot may persist.”


Carina Prunkl

Senior Research Scholar, Future of Humanity Institute, University of Oxford

“AI is a really exciting field to work in and there is a real need for people with diverse academic backgrounds – you don’t need to be a coder to make substantial contributions. Make use of existing women networks or write directly to women researchers if you would like to know what it is like to work at a particular organisation or with a particular team. Most of us are more than happy to help and share our experiences.”


Francesca Rossi

AI Ethics Global Leader and Distinguished Research Staff Member, IBM Research

“My advice to women is to believe in what they are and what they are passionate about, to behave according to their values and attitudes without trying to mimic anybody else, and to be fully aware that their contribution is essential for advancing AI in the most inclusive, fair, and responsible way.”


Susi Snyder

Managing Director, Don’t Bank on the Bomb, PAX & ICAN

“Find your passion, produce the research that supports your policy recommendation and demand the space to say your piece. I always think to the first US woman that ran on a major party for President- Shirley Chisholm, she said “if they don’t give you a seat, bring a folding chair”.  I think about the fact that there are (some) more seats now, and that’s amazing. There is still a long, long way to go before equity, but there are some serious efforts to move closer to that day.”


Bing Song

Vice President, Berggruen Institute | Director of the Institute’s China Center

“Women should embrace and dive into this new area of thinking about the future of humanity. Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned. More broad based, inclusive, non-confrontational and equanimous thinking, which is more typically associated with the female approach to things, is sorely needed in this world.”


Shuchi Talati

Geoengineering Research, Governance and Public Engagement Fellow, Union of Concerned Scientists

“Domestic and international dedication to addressing climate change is continuously growing. Though we are far from where we need to be, I remain optimistic that we’re on a promising path.”


Mary Wareham

Coordinator of the Campaign to Stop Killer Robots | Advocacy Director of Human Rights Watch arms division

“Study what you are passionate about and not what you think will get you a job.”


Jody Williams

Chairwoman, Nobel Women’s Initiative | Nobel Laureate

“If I have advice, it would be to be clear about who you want to be in your life and what you stand for — and then go for it.”


Bonnie Wintle

Research Fellow, School of Biosciences, University of Melbourne | Research Affiliate, Centre for the Study of Existential Risk (CSER), University of Cambridge

“Seeing the huge turnout of school kids and young people at climate change demonstrations gives me hope for the future. The next generation of leaders and decision makers seem to be proactive and genuinely interested in addressing these problems.”


Baobao Zhang

PhD Candidate, Political Science, Yale University | Research Affiliate, Center for the Governance of AI, University of Oxford

“AI policy is a nascent but rapidly growing field. I think this is a good time for women to enter the field. Sometimes women are hesitant to enter a new discipline because they don’t feel they have adequate knowledge or experience. My work has taught me that you can quickly learn on the job and that you can apply the skills and knowledge you already have to your new job.”


Meia Chita-Tegmark

Co-founder, Future of Life Institute | Postdoctoral Scholar, Tufts University

“Be brave. This is our world too, we can’t let it be shaped by men alone.”


Ariel Conn

Director of Communications/Outreach and Weapons Policy Advisor, Future of Life Institute

“Success in this job comes with much greater satisfaction than success in any other job I’ve had.”


Jessica Cussins Newman

AI Policy Specialist, Future of Life Institute | Research Fellow, UC Berkeley Center for Long-Term Cybersecurity

“Don’t discount yourself just because you think you don’t have the right background — the field is actively looking for ways to learn from other disciplines.”


Victoria Krakovna

Cofounder, Future of Life Institute | Research Scientist, DeepMind

“It’s great to see more and more talented and motivated people entering the field to work on these interesting and difficult problems.”

The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2

When it comes to artificial intelligence, debates often arise about what constitutes “safe” and “unsafe” actions. As Ramana Kumar, an AGI safety researcher at DeepMind, notes, the terms are subjective and “can only be defined with respect to the values of the AI system’s users and beneficiaries.”

Fortunately, such questions can mostly be sidestepped when confronting the technical problems associated with creating safe AI agents, as these problems aren’t about identifying what is right or morally proper. Rather, from a technical standpoint, a “safe” AI agent is best defined as one that consistently takes actions that lead to the desired outcomes, whatever those desired outcomes may be.

In this respect, Kumar explains that, when it comes to creating an AI agent that is tasked with improving itself, “the technical problem of building a safe agent is largely independent of what ‘safe’ means because a large part of the problem is how to build an agent that reliably does something, no matter what that thing is, in such a way that the method continues to work even as the agent under consideration is more and more capable.”

In short, making a “safe” AI agent should not be conflated with making an “ethical” AI agent. The two terms refer to different things.

In general, sidestepping moralistic definitions of safety makes the technical work on AI quite a bit easier: it allows research to advance while debates on the ethical issues evolve. Case in point: Uber’s self-driving cars are already on the streets, despite the fact that we’ve yet to agree on a framework regarding whether they should safeguard their driver or pedestrians.

However, when it comes to creating a robust and safe AI system that is capable of self-improvement, the technical work gets a lot harder, and research in this area is still in its most nascent stages. This is primarily because we aren’t dealing with just one AI agent; we are dealing with generations of future self-improving agents.

Kumar clarifies, “When an AI agent is self-improving, one can view the situation as involving two agents: the ‘seed’ or ‘parent’ agent and the ‘child’ agent into which the parent self-modifies … and its total effects on the world will include the effects of actions made by its descendants.” As a result, in order to know we’ve made a safe AI agent, we need to understand all possible child agents that might originate from the first agent.

And verifying the safety of all future AI agents comes down to solving a problem known as “self-referential reasoning.”

Understanding the Self-Referential Problem

The problem with self-referential reasoning is most easily understood by defining the term according to its two primary components: self-reference and reasoning.

  • Self-reference: Refers to an instance in which someone (or something, such as a computer program or book) refers to itself. Any person or thing that refers to itself is called “self-referential.”
  • Reasoning: In AI systems, reasoning is a process through which an agent establishes “beliefs” about the world, like whether or not a particular action is safe or a specific reasoning system is sound. “Good beliefs” are beliefs that are sound or plausible based on the available evidence. The term “belief” is used instead of “knowledge” because the things that an agent believes may not be factually true and can change over time.

In relation to AI, then, the term “self-referential reasoning” refers to an agent that is using a reasoning process to establish a belief about that very same reasoning process. Consequently, when it comes to self-improvement, the “self-referential problem” is as follows: An agent is using its own reasoning system to determine that future versions of its reasoning system will be safe.

To explain the problem another way, Kumar notes that, if an AI agent creates a child agent to help it achieve its goal, it will want to establish some beliefs about the child’s safety before using it. This will necessarily involve justifying beliefs about the child by arguing that the child’s reasoning process is good. Yet, the child’s reasoning process may be similar to, or even an extension of, the original agent’s reasoning process. And ultimately, an AI system cannot use its own reasoning to determine whether or not its reasoning is good.

From a technical standpoint, the problem comes down to Gödel’s second incompleteness theorem, which, Kumar explains, “shows that no sufficiently strong proof system can prove its own consistency, making it difficult for agents to show that actions their successors have proven to be safe are, in fact, safe.”
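
For reference, the theorem itself can be stated compactly; the formulation below is the standard textbook one, not notation drawn from Kumar’s work.

```latex
% Gödel's second incompleteness theorem: for any consistent, recursively
% axiomatizable theory T strong enough to encode arithmetic, T cannot prove
% its own consistency. Con(T) is the arithmetized sentence "T is consistent".
T \nvdash \mathrm{Con}(T)

% The closely related Löb's theorem is what blocks blanket self-trust:
% if T proves "provability of phi implies phi", then T already proves phi.
\bigl( T \vdash \Box_T\,\varphi \rightarrow \varphi \bigr)
  \;\Longrightarrow\;
  \bigl( T \vdash \varphi \bigr)
```

Read as a statement about agents: a proof system cannot grant unconditional trust to its own (or an identical successor’s) safety proofs without that trust collapsing into claims it could already prove outright.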

Investigating Solutions

To date, several partial solutions to this problem have been proposed; however, our current software doesn’t have sufficient support for self-referential reasoning to make the solutions easy to implement and study. Consequently, in order to improve our understanding of the challenges of implementing self-referential reasoning, Kumar and his team aimed to implement a toy model of AI agents using some of the partial solutions that have been put forth.

Specifically, they investigated the feasibility of implementing one particular approach to the self-reference problem in a concrete setting (Botworld) where all the details could be checked. The approach selected was model polymorphism. Instead of requiring a proof that an action is safe for all future use cases, model polymorphism only requires an action to be proven safe for an arbitrary number of steps (or subsequent actions) that is kept abstracted from the proof system.
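
Schematically, and only as an illustration of the description above (not the precise formalization used in the Botworld/HOL work), the contrast looks roughly like this:

```latex
% Naive requirement (runs into the Löbian obstacle): the parent's proof
% system T would have to trust every future safety proof outright.
\forall a.\; \Box_T \ulcorner \mathrm{safe}(a) \urcorner \;\rightarrow\; \mathrm{safe}(a)

% Model-polymorphism-style requirement, per the description above: prove only
% that the action is safe for n further steps, where n is an abstract
% parameter that the proof never instantiates or inspects.
T \vdash \mathrm{safe}_{n}(a) \qquad \text{for an uninterpreted } n
```

The names safe and n here are placeholders chosen for this sketch; the working implementation discussed below carries far more structure than the schema suggests.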

Kumar notes that the overall goal was ultimately “to get a sense of the gap between the theory and a working implementation and to sharpen our understanding of the model polymorphism approach.” This would be accomplished by proving a theorem in a HOL (higher-order logic) theorem prover that describes the situation.

To break this down a little: theorem provers are computer programs that assist with the development of mathematical correctness proofs. These mathematical correctness proofs are the highest safety standard in the field, showing that a computer system always produces the correct output (or response) for any given input. Theorem provers create such proofs by using the formal methods of mathematics to prove or disprove the “correctness” of the control algorithms underlying a system. HOL theorem provers, in particular, are a family of interactive theorem proving systems that facilitate the construction of theories in higher-order logic. Higher-order logic, which supports quantification over functions, sets, sets of sets, and more, is more expressive than other logics, allowing the user to write formal statements at a high level of abstraction.
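
As a small, generic illustration of what quantification over predicates buys you (a textbook example, not a formula from the project itself), the induction principle for the natural numbers is a single higher-order statement, because its leading quantifier ranges over every predicate P:

```latex
% Induction over the natural numbers as one higher-order formula: the outer
% quantifier ranges over all predicates P, which first-order logic cannot
% express as a single sentence (it needs an infinite axiom schema instead).
\forall P.\; \Bigl( P(0) \,\land\, \forall n.\, \bigl( P(n) \rightarrow P(n+1) \bigr) \Bigr)
  \;\rightarrow\; \forall n.\, P(n)
```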

In retrospect, Kumar states that trying to prove a theorem about multiple steps of self-reflection in a HOL theorem prover was a massive undertaking. Nonetheless, he asserts that the team took several strides forward when it comes to grappling with the self-referential problem, noting that they built “a lot of the requisite infrastructure and got a better sense of what it would take to prove it and what it would take to build a prototype agent based on model polymorphism.”

Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) Logical Inductors could also offer a satisfying version of formal self-referential reasoning and, consequently, provide a solution to the self-referential problem.

If you haven’t read it yet, find Part 1 here.

The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1

Today’s AI systems may seem like intellectual powerhouses that are able to defeat their human counterparts at a wide variety of tasks. However, the intellectual capacity of today’s most advanced AI agents is, in truth, narrow and limited. Take, for example, AlphaGo. Although it may be the world champion of the board game Go, this is essentially the only task that the system excels at.

Of course, there’s also AlphaZero. This algorithm has mastered a host of different games, from chess and shogi (Japanese chess) to Go. Consequently, it is far more capable and dynamic than many contemporary AI agents; however, AlphaZero doesn’t have the ability to easily apply its intelligence to any problem. It can’t move unfettered from one task to another the way that a human can.

The same thing can be said about all other current AI systems — their cognitive abilities are limited and don’t extend far beyond the specific task they were created for. That’s why Artificial General Intelligence (AGI) is the long-term goal of many researchers.

Widely regarded as the “holy grail” of AI research, AGI systems are artificially intelligent agents that have a broad range of problem-solving capabilities, allowing them to tackle challenges that weren’t considered during their design phase. Unlike traditional AI systems, which focus on one specific skill, AGI systems would be able to efficiently tackle virtually any problem that they encounter, completing a wide range of tasks.

If the technology is ever realized, it could benefit humanity in innumerable ways. Marshall Burke, an economist at Stanford University, predicts that AGI systems would ultimately be able to create large-scale coordination mechanisms to help alleviate (and perhaps even eradicate) some of our most pressing problems, such as hunger and poverty. However, before society can reap the benefits of these AGI systems, Ramana Kumar, an AGI safety researcher at DeepMind, notes that AI designers will eventually need to address the self-improvement problem.

Self-Improvement Meets AGI

Early forms of self-improvement already exist in current AI systems. “There is a kind of self-improvement that happens during normal machine learning,” Kumar explains; “namely, the system improves in its ability to perform a task or suite of tasks well during its training process.”

However, Kumar asserts that he would distinguish this form of machine learning from true self-improvement because the system can’t fundamentally change its own design to become something new. In order for a dramatic improvement to occur — one that encompasses new skills, tools, or the creation of more advanced AI agents — current AI systems need a human to provide them with new code and a new training algorithm, among other things.

Yet, it is theoretically possible to create an AI system that is capable of true self-improvement, and Kumar states that such a self-improving machine is one of the more plausible pathways to AGI.

Researchers think that self-improving machines could ultimately lead to AGI because of a process that is referred to as “recursive self-improvement.” The basic idea is that, as an AI system continues to use recursive self-improvement to make itself smarter, it will get increasingly better at making itself smarter. This will quickly lead to an exponential growth in its intelligence and, as a result, could eventually lead to AGI.
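
As a back-of-the-envelope way to see why this story implies explosive growth (a toy model for illustration only, not anything stated in the interview): if an agent’s rate of improvement scales with its current capability, capability grows exponentially.

```latex
% Toy model of recursive self-improvement (illustrative assumption only):
% capability I(t) improves at a rate proportional to its current value.
\frac{dI}{dt} = k\,I(t)
  \quad\Longrightarrow\quad
  I(t) = I(0)\,e^{kt}
```

Any such model is a caricature; the point is only that a feedback loop from capability to the rate of capability gain is what drives the rapid-growth intuition.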

Kumar says that this scenario is entirely plausible, explaining that, “for this to work, we need a couple of mostly uncontroversial assumptions: that such highly competent agents exist in theory, and that they can be found by a sequence of local improvements.” To this extent, recursive self-improvement is a concept that is at the heart of a number of theories on how we can get from today’s moderately smart machines to super-intelligent AGI. However, Kumar clarifies that this isn’t the only potential pathway to AI superintelligences.

Humans could discover how to build highly competent AGI systems through a variety of methods. This might happen “by scaling up existing machine learning methods, for example, with faster hardware. Or it could happen by making incremental research progress in representation learning, transfer learning, model-based reinforcement learning, or some other direction. For example, we might make enough progress in brain scanning and emulation to copy and speed up the intelligence of a particular human,” Kumar explains.

Yet, he is also quick to clarify that recursive self-improvement is an innate characteristic of AGI. “Even if iterated self-improvement is not necessary to develop highly competent artificial agents in the first place, explicit self-improvement will still be possible for those agents,” Kumar said.

As such, although researchers may discover a pathway to AGI that doesn’t involve recursive self-improvement, it’s still a property of artificial intelligence that is in need of serious research.

Safety in Self-Improving AI

When systems start to modify themselves, we have to be able to trust that all their modifications are safe. This means that we need to know something about all possible modifications. But how can we ensure that a modification is safe if no one can predict ahead of time what the modification will be?  

Kumar notes that there are two obvious solutions to this problem. The first option is to restrict a system’s ability to produce other AI agents. However, as Kumar succinctly sums, “We do not want to solve the safe self-improvement problem by forbidding self-improvement!”

The second option, then, is to permit only limited forms of self-improvement that have been deemed sufficiently safe, such as software updates or processor and memory upgrades. Yet, Kumar explains that vetting these forms of self-improvement as safe and unsafe is still exceedingly complicated. In fact, he says that preventing the construction of one specific kind of modification is so complex that it will “require such a deep understanding of what self-improvement involves that it will likely be enough to solve the full safe self-improvement problem.”

And notably, even if new advancements do permit only limited forms of self-improvement, Kumar states that this isn’t the path to take, as it sidesteps the core problem with self-improvement that we want to solve. “We want to build an agent that can build another AI agent whose capabilities are so great that we cannot, in advance, directly reason about its safety…We want to delegate some of the reasoning about safety and to be able to trust that the parent does that reasoning correctly,” he asserts.

Ultimately, this is an extremely complex problem that is still in its most nascent stages. As a result, much of the current work is focused on testing a variety of technical solutions and seeing where headway can be made. “There is still quite a lot of conceptual confusion about these issues, so some of the most useful work involves trying different concepts in various settings and seeing whether the results are coherent,” Kumar explains.

Regardless of what the ultimate solution is, Kumar asserts that successfully overcoming the problem of self-improvement depends on AI researchers working closely together. “The key to [testing a solution to this problem] is to make assumptions explicit, and, for the sake of explaining it to others, to be clear about the connection to the real-world safe AI problems we ultimately care about.”

Read Part 2 here.

FLI Podcast (Part 1): From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.  

In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.

Topics discussed in this episode include:

  • Watson and Crick’s double helix hypothesis
  • The value of theoretical vs. experimental science
  • Biological weapons and the U.S. biological weapons program
  • The Biological Weapons Convention
  • The value of verification
  • Future considerations for biotechnology

Publications and resources discussed in this episode include:

Click here for Part 2: Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

Ariel: Hi everyone and welcome to the FLI podcast. I’m your host, Ariel Conn with the Future of Life Institute, and I am super psyched to present a very special two-part podcast this month. Joining me as both a guest and something of a co-host is FLI president and MIT physicist Max Tegmark. And he’s joining me for these two episodes because we’re both very excited and honored to be speaking with Dr. Matthew Meselson. Matthew not only helped prove Watson and Crick’s hypothesis about the structure of DNA in the 1950s, but he was also instrumental in getting the U.S. to ratify the Geneva Protocol, in getting the U.S. to halt its Agent Orange Program, and in the creation of the Biological Weapons Convention. He is currently Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University where, among other things, he studies the role of sexual reproduction in evolution. Matthew and Max, thank you so much for joining us today.

Matthew: A pleasure.

Max: Pleasure.

Ariel: Matthew, you’ve done so much and I want to make sure we can cover everything, so let’s just dive right in. And maybe let’s start first with your work on DNA.

Matthew: Well, let’s start with my being a graduate student at Caltech.

Ariel: Okay.

Matthew: I had been a freshman at Caltech but I didn’t like it. The teaching at that time was by rote except for one course, which was Linus Pauling’s course, General Chemistry. I took that course and I did a little research project for Linus, but I decided to go to graduate school much later at the University of Chicago because there was a program there called Mathematical Biophysics. In those days, before the structure of DNA was known, what could a young man do who liked chemistry and physics but wanted to find out how you could put together the atoms of the periodic chart and make something that’s alive?

There was a unit there called Mathematical Biophysics and the head of it was a man with a great big black beard, and that all seemed very attractive to a kid. So, I decided to go there but because of my freshman year at Caltech I got to know Linus’ daughter, Linda Pauling, and she invited me to a swimming pool party at their house in Sierra Madre. So, I’m in the water. It’s a beautiful sunny day in California, and the world’s greatest chemist comes out wearing a tie and a vest and looks down at me in the water like some kind of insect and says, “Well, Matt, what are you going to do next summer?”

I looked up and I said, “I’m going to the University of Chicago, to Nicolas Rashevsky.” That’s the man with the black beard. And Linus looked down at me and said, “But Matt, that’s a lot of baloney. Why don’t you come be my graduate student?” So, I looked up and said, “Okay.” That’s how I got into graduate school. I started out in X-ray crystallography, a project that Linus gave me to do. One day, Jacques Monod from the Institut Pasteur in Paris came to give a lecture at Caltech, and the question then was about the enzyme beta-galactosidase, a very important enzyme because studies of the induction of that enzyme led to the hypothesis of messenger RNA, and also to how genes are turned on and off. A very important protein used for those purposes.

The question of Monod’s lecture was: is this protein already lurking inside of cells in some inactive form? And when you add the chemical that makes it be produced, which is lactose (or something like lactose), you just put a little finishing touch on the protein that’s lurking inside the cells and this gives you the impression that the addition of lactose (or something like lactose) induces the appearance of the enzyme itself. Or the alternative was maybe the addition to the growing medium of lactose (or something like lactose) causes de novo production, a synthesis of the new protein, the enzyme beta-galactosidase. So, he had to choose between these two hypotheses. And he proposed an experiment for doing it — I won’t go into detail — which was absolutely horrible and would certainly not have worked, even though Jacques was a very great biologist.

I had been taking Linus’ course on the nature of the chemical bond, and one of the key take-home problems was: calculate the ratio of the strength of the Deuterium bond to the Hydrogen bond. I found out that you could do that in one line based on the — what’s called the quantum mechanical zero point energy. That impressed me so much that I got interested in what else Deuterium might have about it that would be interesting. Deuterium is heavy Hydrogen, with a neutron in the nucleus. So, I thought: what would happen if you exchange the water in something alive with Deuterium? And I read that there was a man who tried to do that with a mouse, but that didn’t work. The mouse died. Maybe because the water wasn’t pure, I don’t know.

But I had found a paper showing that you could grow bacteria, Escherichia coli, in pure heavy water with other nutrients added but no light water. So, I knew that you could probably make DNA, or also beta-galactosidase, a little heavier by having it be made out of heavy Hydrogen rather than light. There are some intermediate details here, but at some point I decided to go see the famous biophysicist Max Delbrück. I was in the Chemistry Department and Max was in the Biology Department.

And there was, at that time, a certain — I would say not a barrier, but a three-foot fence between these two departments. Chemists looked down on the biologists because they worked just with squiggly, gooey things. Then the physicists naturally looked down on the chemists and the mathematicians looked down on the physicists. At least that was the impression of us graduate students. So, I was somewhat fearful about going to meet Max Delbrück, who also had a fearsome reputation for not tolerating any kind of nonsense. But finally I went to see him — he was a lovely man actually — and the first thing he said when I sat down was, “What do you think about these two new papers of Watson and Crick?” I said I’d never heard about them. Well, he jumped out of his chair and grabbed a heap of reprints that Jim Watson had sent to him, and threw them all at me, and yelled at me, and said, “Read these and don’t come back until you read them.”

Well, I heard the words “come back.” So I read the papers and I went back, and he explained to me that there was a problem with the hypothesis that Jim and Francis had for DNA replication. The idea of theirs was that the two strands come apart by unwinding the double helix. And if that meant that you had to unwind the entire parent double helix along its whole length, the viscous drag would have been impossible to deal with. You couldn’t drive it with any kind of reasonable biological motor.

So Max thought that you don’t actually unwind the whole thing: you make breaks, and then with little pieces you can unwind those and then seal them up. This gives you a kind of dispersive replication in which each of the two daughter molecules has some pieces of the parent molecule but no complete strand from the parent molecule. Well, when he told me that, I almost immediately — I think it was almost immediately — realized that density separation would be a way to find out, because the Watson and Crick hypothesis predicted the finding of half heavy DNA after one generation. That is, one old strand together with one new strand forming one new duplex of DNA.

So I went to Linus Pauling and said, “I’d like to do that experiment,” and he gently said, “Finish your X-ray crystallography.” So, I didn’t do that experiment then. Instead I went to Woods Hole to be a teaching assistant in the Physiology course with Jim Watson. Jim had been living at Caltech that year in the faculty club, the Athenaeum, and so had I, so I had gotten to know Jim pretty well then. So there I was at Woods Hole, and I was not really a teaching assistant — I was actually doing an experiment that Jim wanted me to do — but I was meeting with the instructors.

One day we were on the second floor of the Lily building and Jim looked out the window and pointed down across the street. Sitting on the grass was a fellow, and Jim said, “That guy thinks he’s pretty smart. His name is Frank Stahl. Let’s give him a really tough experiment to do all by himself.” The Hershey–Chase Experiment. Well, I knew what that experiment was, and I didn’t think you could do it in one day, let alone just single-handedly. So I went downstairs to tell this poor Frank Stahl guy that they were going to give him a tough assignment.

I told him about that, and I asked him what he was doing. And he was doing something very interesting with bacteriophages. He asked me what I was doing, and I told him that I was thinking of finding out if DNA replicates semi-conservatively the way Watson and Crick said it should, by a method that would have something to do with density measurements in a centrifuge. I had no clear idea how to do that, just something by growing cells in heavy water and then switching them to light water and see what kind of DNA molecules they made in a density gradient in a centrifuge. And Frank made some good suggestions, and we decided to do this together at Caltech because he was coming to Caltech himself to be a postdoc that very next September.

Anyway, to make a long story short we made the experiment work, and we published it in 1958. That experiment said that DNA is made up of two subunits, and when it replicates, its subunits come apart and each one becomes associated with a new subunit. Now anybody in his right mind would have said, “By subunit you really mean a single polynucleotide chain. Isn’t that what you mean?” And we would have answered at that time, “Yes of course, that’s what we mean, but we don’t want to say that because our experiment doesn’t say that. Our experiment says that some kind of subunits do that — the subunits almost certainly are the single polynucleotide chains — but we want to confine our written paper to only what can be deduced from the experiment itself, and not go one inch beyond that.” It was later that a fellow named John Cairns proved that the subunits were really the single polynucleotide chains of DNA.

Ariel: So just to clarify, those were the strands of DNA that Watson and Crick had predicted, is that correct?

Matthew: Yes, it’s the result that they would have predicted, exactly so. We did a bunch of other experiments at Caltech, some on mutagenesis and other things, but this experiment, I would say, had a big psychological value. Maybe its psychological value was more than anything else.

Back in 1954, the year after Watson and Crick had published the structure of DNA and their speculations as to its biological meaning, I was at Woods Hole, as I mentioned, and Jim was there and Francis was there. Rosalind Franklin was there. Sydney Brenner was there. It was very interesting because a good number of people there didn’t believe their structure for DNA, or that it had anything to do with life and genes, on the grounds that it was too simple, and life had to be very complicated. And the other group of people thought it was too simple to be wrong.

So two views: everyone agreed that the structure that they had proposed was a simple one. Some people thought simplicity meant truth, and others thought that in biology, truth had to be complicated. What I’m trying to get at here is that after the structure was published it was just a hypothesis. It wasn’t proven by any methods of, for example, crystallography, to show — it wasn’t until much later that crystallography and a certain other kind of experiment actually proved that the Watson and Crick structure was right. At that time, it was a proposal based on model building.

So why was our experiment, the experiment showing the semi-conservative replication, of psychological value? It was because this was the first time you could actually see something: namely, bands in an ultracentrifuge gradient. So, I think the effect of our experiment in 1958 was to give the DNA structure proposal of 1953 a certain reality. Jim, in his book The Double Helix, actually says that he was greatly relieved when that came along. I’m sure he believed the structure was right all the time, but this certainly was a big leap forward in convincing people.

Ariel: I’d like to pull Max into this just a little bit and then we’ll get back to your story. But I’m really interested in this idea of the psychological value of science. Sort of very, very broadly, do you think a lot of experiments actually come down to more psychological value, or was your experiment unique in that way? I thought that was just a really interesting idea. And I think it would be interesting to hear both of your thoughts on this.

Matthew: Max, where are you?

Max: Oh, I’m just fascinated by what you’ve been telling us about here. I think of course, the sciences — we see again and again that experiments without theory and theory without experiments, neither of them would be anywhere near as amazing as when you have both. Because when there’s a really radical new idea put forth, half the time people at the time will dismiss it and say, “Oh, that’s obviously wrong,” or whatnot. And only when the experiment comes along do people start taking it seriously and vice versa. Sometimes a lot of theoretical ideas are just widely held as truths — like Aristotle’s idea of how the laws of motion should be — until somebody much later decides to put it to the experimental test.

Matthew: That’s right. In fact, Sir Arthur Eddington is famous for two things. He was one of the first ones to find experimental proof of the accuracy of Einstein’s theory of general relativity, and the other thing for which Eddington was famous was having said, “No experiment should be believed until supported by theory.”

Max: Yeah. Theorists and experimentalists have had this love-hate relationship throughout the ages, which I think, in the end, has been a very fruitful relationship.

Matthew: Yeah. In cosmology the amazing thing to me is that the experiments now cost billions or at least hundreds of millions of dollars. And that this is one area, maybe the only one, in which politicians are willing to spend a lot of money for something that’s so beautiful and theoretical and far off and scientifically fundamental as cosmology.

Max: Yeah. Cosmology is also a reminder again of the importance of experiment, because the big questions there — such as where did everything come from, how big is our universe, and so on — those questions have been pondered by philosophers and deep thinkers for as long as people have walked the earth. But for most of those eons all you could do was speculate with your friends over some beer about this, and then you could go home, because there was no further progress to be made, right?

It was only more recently when experiments gave us humans better eyes: where with telescopes, et cetera, we could start to see things that our ancestors couldn’t see, and with this experimental knowledge actually start to answer a lot of these things. When I was a grad student, we argued about whether our universe was 10 billion years old or 20 billion years old. Now we argue about whether it’s 13.7 or 13.8 billion years old. You know why? Experiment.

Matthew: And now is a more exciting time than any previous time, I think, because we’re beginning to talk about things like multi-universes and entanglement, things that are just astonishing and really almost foreign to the way that we’re able to think — that there’s other universes, or that there could be what’s called quantum mechanical entanglement: that things influence each other very far apart, so far apart that light could not travel between them in any reasonable time, but by a completely weird process, which Einstein called spooky action at a distance. Anyway, this is an incredibly exciting time about which I know nothing except from podcasts and programs like this one.

Max: Thank you for bringing this up, because I think the examples you gave right now actually are really, really linked to these breakthroughs in biology that you were telling us about, because I think we’ve been on this intellectual journey all along where we humans kept underestimating our ability to understand stuff. So for the longest time, we didn’t even really try our best because we assumed it was futile. People used to think that the difference between a living bug and a dead bug was that there was some sort of secret sauce, and the living bug has some sort of life essence or something that couldn’t be studied with the tools of science. And then by the time people started to take seriously that maybe actually the difference between that living bug and the dead bug is that the mechanism is just broken in one of them, and you can study the mechanism — then you get to these kinds of experimental questions that you were talking about. I think in the same way, people had previously shied away from asking questions not just about life, but about the origin of our universe, for example, as being hopelessly beyond anything we would ever be able to do anything about, so people didn’t ask what experiments they could make. They just gave up without even trying.

And then gradually I think people were emboldened by breakthroughs in, for example, biology, to say, “Hey, what about — let’s look at some of these other things where people said we’re hopeless, too?” Maybe even our universe obeys some laws that we can actually set out to study. So hopefully we’ll continue being emboldened, and stop being lazy, and actually work hard on asking all questions, and not just give up because we think they’re hopeless.

Matthew: I think the key to making this process begin was to abandon supernatural explanations of natural phenomena. So long as you believe in supernatural explanations, you can’t get anywhere, but as soon as you give them up and look around for some other kind of explanation, then you can begin to make progress. The amazing thing is that we, with our minds that evolved under conditions of hunter-gathering and even earlier than that — that these minds of ours are capable of doing such things as imagining general relativity or all of the other things.

So is there any limit to it? Is there going to be a point beyond which we will have to say we can’t really think about that, it’s too complicated? Yes, that will happen. But we will by then have built computers capable of thinking beyond. So in a sense, I think once supernatural thinking was given up, the path was open to essentially an infinity of discovery, possibly with the aid of advanced artificial intelligence later on, but still guided by humans. Or at least by a few humans.

Max: I think you hit the nail on the head there. Saying, “All this is supernatural,” has been used as an excuse to be lazy over and over again, even if you go further back, you know, hundreds of years ago. Many people looked at the moon, and they didn’t ask themselves why the moon doesn’t fall down like a normal rock because they said, “Oh, there’s something supernatural about it, earth stuff obeys earth laws, heaven stuff obeys heaven laws, which are just different. Heaven stuff doesn’t fall down.”

And then Newton came along and said, “Wait a minute. What if we just forget about the supernatural, and for a moment, explore the hypothesis that actually stuff up there in the sky obeys the same laws of physics as the stuff on earth? Then there’s got to be a different explanation for why the moon doesn’t fall down.” And that’s exactly how he was led to his law of gravitation, which revolutionized things of course. I think again and again, there was again the rejection of supernatural explanations that led people to work harder on understanding what life really is, and now we see some people falling into the same intellectual trap again and saying, “Oh yeah, sure. Maybe life is mechanistic but intelligence is somehow magical, or consciousness is somehow magical, so we shouldn’t study it.”

Now, artificial intelligence progress is really, again, driven by people willing to let go of that and say, “Hey, maybe intelligence is not supernatural. Maybe it’s all about information processing, and maybe we can study what kind of information processing is intelligent and maybe even conscious as in having experiences.” There’s a lot to learn at this meta level from what you’re saying there, Matthew, that if we resist excuses to not do the work by saying, “Oh, it’s supernatural,” or whatever, there’s often real progress we can make.

Ariel: I really hate to do this because I think this is such a great discussion, but in the interest of time, we should probably get back to the stories at Harvard, and then you two can discuss some of these issues — or others — a little more later in this interview. So yeah, let’s go back to Harvard.

Matthew: Okay, Harvard. So I came to Harvard. I thought I’d stay only five years. I thought it was kind of a duty for an American who’d grown up in the West to find out a little bit about what the East was like. But I never left. I’ve been here for 60 years. When I was here for about three years, my friend Paul Doty, a chemist, no longer living, asked me if I’d like to go work at the United States Arms Control and Disarmament Agency in Washington DC. He was on the general advisory board of that government branch, and it was embedded in the State Department building on 21st Street in Washington, but it was quite independent, it could report directly to the White House, and it was the first year of its existence, and it was trying to find out what it should be doing.

And one of the ways it tried to find out what it should be doing was to hire six academics to come just for the summer. One of them was me, one of them was Freeman Dyson, the physicist, and there were four others. When I got there, they said, “Okay, you’re going to work on theater nuclear weapons arms control,” something I knew less than zero about. But I tried, and I read things and so on, and very famous people came to brief me — like Llewellyn Thompson, our ambassador to Moscow, and Paul Nitze, the deputy secretary of defense.

I realized that I knew nothing about this and although scientists often have the arrogance to think that they can say something useful about nearly anything if they think about it, here was something that so many people had thought about. So I went to my boss and said, “Look, you’re wasting your time and your money. I don’t know anything about this. I’m not gonna produce anything useful. I’m a chemist and a biologist. Why don’t you have me look into the arms control of that stuff?” He said, “Yeah, you could do whatever you want. We had a guy who did that, and he got very depressed and he killed himself. You could have his desk.”

So I decided to look into chemical and biological weapons. In those days, the arms control agency was almost like a college. We all had to have very high security clearances, and that was because the Congress was worried that maybe there would be some leakers amongst the people doing this suspicious work in arms control, and therefore, we had to be in possession of the highest level of security clearance. This had, in a way, the unexpected effect that you could talk to your neighbor about anything. Ordinarily, you might not have clearance for what your neighbor in a different office, a different room, or at a different desk was doing — but we had, all of us, such security clearances that we could all talk to each other about what we were doing. So it was like a college in that respect. It was a wonderful atmosphere.

Anyway, I decided I would just focus on biological weapons, because the two together would be too much for a summer. I went to the CIA, and a young man there showed me everything we knew about what other countries were doing with biological weapons, and the answer was we knew very little. Then I went to Fort Detrick to see what we were doing with biological weapons, and I was given a tour by a quite good immunologist who had been a faculty member at the Harvard Medical School, whose name was Leroy Fothergill. And we came to a big building, seven stories high. From a distance, you would think it had windows, but when you get up close, they were phony windows. And I asked Dr. Fothergill, “What do we do in there?” He said, “Well, we have a big fermentor in there and we make Anthrax.” I said, “Well, why do we do that?” He said, “Well, biological weapons are a lot cheaper than nuclear weapons. It will save us money.”

I don’t think it took me very long, certainly by the time I got back to my office in the State Department Building, to realize that hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all. Because in the hands of other people, it would be like their having nuclear weapons. It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.

So that dawned on me. My office mate was Freeman Dyson, and I talked with him a little bit about it and he encouraged me greatly to pursue this. The more I thought about it, two things motivated me very strongly. Not just the illogic of it. The illogic of it motivated me only in the respect that it made me realize that any reasonable person could be convinced of this. In other words, it wouldn’t be a hard job to get this thing stopped, because anybody who’s thoughtful would see the argument against it. But there were two other aspects. One, it was my science: biology. It’s hard to explain, but that my science would be perverted in that way. But there’s another aspect, and that is the difference between war and peace.

We’ve had wars and we’ve had peace. Germany fights Britain, Germany is aligned with Britain. Britain fights France, Britain is aligned with France. There’s war. There’s peace. There are things that go on during war that might advance knowledge a little bit, but certainly, it’s during times of peace that the arts, the humanities, and science, too, make great progress. What if you couldn’t tell the difference and all the time is both war and peace? By that I mean, war up until now has been very special. There are rules of it. Basically, it starts with hitting a guy so hard that he’s knocked out or killed. Then you pick up a stone and hit him with that. Then you make a spear and spear him with that. Then you make a bow and arrow and spear him with that. Then later on, you make a gun and you shoot a bullet at him. Even a nuclear weapon: it’s all like hitting with an arm, and furthermore, when it stops, it’s stopped, and you know when it’s going on. It makes sounds. It makes blood. It makes bang.

Now biological weapons, they could be responsible for a kind of war that’s totally surreptitious. You don’t even know what’s happening, or you know it’s happening but it’s always happening. They’re trying to degrade your crops. They’re trying to degrade your genetics. They’re trying to introduce nasty insects to you. In other words, it doesn’t have a beginning and an end. There’s no armistice. Now today, there’s another kind of weapon. It has some of those attributes: It’s cyber warfare. It might over time erase the distinction between war and peace. Now that really would be a threat to the advance of civilization, a permanent science fiction-like, locked in, war-like situation, never ending. Biological weapons have that potentiality.

So for those two reasons — my science, and it could erase the distinction between war and peace, could even change what it means to be human. Maybe you could change what the other guy’s like: change his genes somehow. Change his brain by maybe some complex signaling, who knows? Anyway, I felt a strong philosophical desire to get this thing stopped. Fortunately, I was at Harvard University, and so was Jack Kennedy. And although by that time he had been assassinated, he had left behind lots of people in the key cabinet offices who were Kennedy appointees. In particular, people who came from Harvard. So I could knock on almost any door.

So I went to Lyndon Johnson’s national security adviser, who had been Jack Kennedy’s national security adviser, and who had been the dean at Harvard who hired me, McGeorge Bundy, and said all these things I’ve just said. And he said, “Don’t worry, Matt, I’ll keep it out of the war plans.” I’ve never seen a war plan, but I guess if he said that, it was true. But that didn’t mean it wouldn’t keep on being developed.

Now here I should make an aside. Does that mean that the Army or the Navy or the Air Force wanted these things? No. We develop weapons in a kind of commercial way that is a part of the military. In this case, the Army Materiel Command works out all kinds of things: better artillery pieces, communication devices, and biological weapons. It doesn’t belong to any service. Then afterwards, in the case of biological weapons, if the laboratories develop what they think is a good biological weapon, they still have to get one of the services — Air Force, Army, Navy, Marines — to say, “Okay, we’d like that. We’ll buy some of that.”

There was always a problem here. Nobody wanted these things. The Air Force didn’t want them because you couldn’t calculate how many planes you needed to kill a certain number of people. You couldn’t calculate the human dose response, and beyond that you couldn’t calculate the dose that would reach the humans. There were too many unknowns. The Army didn’t like it, not only because they, too, wanted predictability, but because their soldiers are there, maybe getting infected by the same bugs. Maybe there’s vaccines and all that, but it also seemed dishonorable. The Navy didn’t want it because the one thing that ships have to be is clean. So oddly enough, biological weapons were kind of a stepchild.

Nevertheless, there was a dedicated group of people who really liked the idea and pushed hard on it. These were the people who were developing the biological weapons, and they had their friends in Congress, so they kept getting it funded. So I made a kind of a plan, like a protocol for doing an experiment, to get us to stop all this. How do you do that? Well, first you ask yourself: who can stop it? There’s only one person who can stop it. That’s the President of the United States.

The next thing is: what kind of advice is he going to get, because he may want to do something, but if all the advice he gets is against it, it takes a strong personality to go against the advice you’re getting. Also, word might get out, if it turned out you made a mistake, that they told you all along it was a bad idea and you went ahead anyway. That makes you a super fool. So the answer there is: well, you go to talk to the Secretary of Defense, and the Secretary of State, and the head of the CIA, and all of the senior people, and their people who are just below them.

Then what about the people who are working on the biological weapons? You have to talk to them, but not so much privately, because they really are dedicated. There were some people who were caught in this and really didn’t want to be doing it, but there were other people who were really pushing it, and it wasn’t possible, really, to tell them to quit their jobs and get out of this. But what you could do is talk with them in public, and by knowing more than they knew about their own subject — which meant studying up a lot — show that they were wrong.

So I literally crammed, trying to understand everything there was to know about aerobiology, diffusion of clouds, pathogenicity, history of biological weapons, the whole bit, so that I could sound more knowledgeable. I know that’s a sort of slightly underhanded way to win an argument, but it’s a way of convincing the public that the guys who are doing this aren’t so wise. And then you have to get public support.

I had a pal here who told me I had to go down to Washington and meet a guy named Howard Simons, who was the managing editor of the Washington Post. He had been a science journalist at The Post and that’s why some scientists up here at Harvard knew him. So, I went down there — Howie by that time was managing editor — and I told him, “I want to get newspaper articles all over the country about the problem of biological weapons.” He took out a big yellow pad and he wrote down about 30 names. He said, “These are the science journalists at San Francisco Chronicle, Baltimore Sun, New York Times, et cetera, et cetera.” He put down the names of all the main science journalists. And he said to me, “These guys have to have something once a week to give their editor for the science columns, or the science pages. They’re always on the lookout for something, and biological weapons is a nice subject — they’d like to write about that, because it grabs people’s attention.”

So I arranged to either meet, or at least talk to, all of these guys. And we got all kinds of articles in the press, mainly reflecting the view that I had that it was unwise for the United States to pioneer this stuff. We should be in the position to go after anybody else who was doing it even in peacetime and get them to stop, which we couldn’t very well do if we were doing it ourselves. In other words, that meant a treaty. You have to have a treaty, which might be violated, but if it’s violated and you know about it, at least you can go after the violators, and the treaty will likely stop a lot of countries from doing it in the first place.

So what are the treaties? There’s an old treaty, the 1925 Geneva Protocol. The United States was not a party to it, but it does prohibit the first use of bacteriological or other biological weapons. So the problem was to convince the United States to get on board that treaty.

The very first paper I wrote for the President was about the Geneva Protocol of 1925. I never met President Nixon, but I did know Henry Kissinger: he’d been my neighbor at Harvard, in the building next door to mine. There was a good lunch room on the third floor. We both ate there. He had started an arms control seminar, which met every month. I went to that, all the meetings. We traveled a little bit in Europe together. So I knew him, and I wrote papers for Henry knowing that those would get to Nixon. The first paper that I wrote, as I said, was “The United States and the Geneva Protocol.” It made all these arguments that I’m telling you now about why the United States should not be in this business. Now, the Protocol also prohibits the first use of chemical weapons.

Now, I should say something about writing papers for Presidents. You don’t want to write a paper that’s saying, “Here’s what you should do.” You have to put yourself in their position. There are all kinds of options on what they should do. So, you have to write a paper from the point of view of a reader who’s got to choose between a lot of options. He doesn’t have a choice to start with. So that’s the kind of paper you need to write. You’ve got to give every option a fair trial. You’ve got to do your best, both to defend every option and to argue against every option. And you’ve got to do it in no more than a very few pages. That’s no easy job, but you can do it.

So eventually, as you know, the United States renounced biological weapons in November of 1969. There was an off-the-record press briefing that Henry Kissinger gave to the journalists about this. And one of them, I think it was the New York Times guy, said, “What about toxin weapons?”

Now, toxins are poisonous things made by living things, like Botulinum toxin made by bacteria or snake venom, and those could be used as weapons in principle. You can read in this briefing, Henry Kissinger says, “What are toxins?” So what this meant, in other words, is that a whole new review, a whole new decision process had to be cranked up to deal with the question, “Well, do we renounce toxin weapons?” And there were two points of view. One was, “They are made by living things, and since we’re renouncing biological warfare, we should renounce toxins.”

The other point of view is, “Yeah, they’re made by living things, but they’re just chemicals, and so they can also be made by chemists in laboratories. So, maybe we should renounce them when they’re made by living things like bacteria or snakes, but reserve the right to make them and use them in warfare if we can synthesize them in chemical laboratories.” So I wrote a paper arguing that we should renounce them completely. Partly because it would be very confusing to argue that the basis for renouncing or not renouncing is who made them, not what they are. But also, I knew that my paper was read by Richard Nixon on a certain day on Key Biscayne in Florida, which was one of the places he’d go for rest and vacation.

Nixon was down there, and I had written a paper called “What Policy for Toxins.” I was at a friend’s house with my wife the night that the President and Henry Kissinger were deciding this issue. Henry called me, and I wasn’t home. They couldn’t find their copy of my paper. Henry called to see if I could read it to them, but he couldn’t find me because I was at a dinner party. Then Henry called Paul Doty, my friend, because he had a copy of the paper. But he looked for his copy and he couldn’t find it either. Then late that night Kissinger called Doty again and said, “We found the paper, and the President has made up his mind. He’s going to renounce toxins no matter how they’re made, and it was because of Matt’s paper.”

I had tried to write a paper that steered clear of political arguments — just scientific ones and military ones. However, there had been an editorial in the Washington Post by one of their editorial writers, Steve Rosenfeld, in which he wrote the line, “How can the President renounce typhoid only to embrace Botulism?”

I thought it was so gripping, I incorporated it under the topic of the authority and credibility of the President of the United States. And what Henry told Paul on the telephone was: that’s what made up the President’s mind. And of course, it would. The President cares about his authority and credibility. He doesn’t care about little things like toxins, but his authority and credibility… And so right there and then, he scratched out the advice that he’d gotten in a position paper, which was to take the option, “Use them but only if made by chemists,” and instead chose the option to renounce them completely. And that’s how that decision got made.

Ariel: That all ended up in the Biological Weapons Convention, though, correct?

Matthew: Well, the idea for that came from the British. They had produced a draft paper to take to the arms control talks with the Russians and other countries in Geneva, suggesting a treaty that would not just prohibit the use of biological weapons in war, the way the Geneva Protocol did, but would prohibit even their production and possession. What Richard Nixon did in the United States’ renunciation was threefold. He got the United States out of the biological weapons business and decreed that Fort Detrick and other installations that had been doing that work would henceforward be doing only peaceful things; Detrick, for example, was partly converted to a cancer research institute. And all the biological weapons that had been stockpiled were to be destroyed, and they were.

The other thing he did was renounce toxins. Another thing he decided to do was to resubmit the Geneva Protocol to the United States Senate for its advice and approval. And the last thing was to support the British initiative, and that was the Biological Weapons Convention. But you could only get it if the Russians agreed. But eventually, after a lot of negotiation, we got the Biological Weapons Convention, which is still in force. A little later we even got the Chemical Weapons Convention, but not right away, because in my view, and in the view of a lot of people, we did need chemical weapons, at least until we could be pretty sure that the Soviet Union was going to get rid of its chemical weapons, too.

If there are chemical weapons on the battlefield, soldiers have to put on gas masks and protective clothing, and this really slows down the tempo of combat action, so that if you could simply put the other side into that restrictive clothing, you have a major military accomplishment. Chemical weapons in the hands of only one side would give that side the option of slowing down the other side, reducing the mobility on the ground of the other side. So we waited until we could get a treaty that had inspection provisions, which the chemical treaty does and the biological treaty does not — well, it has a kind of challenge inspection, but no one’s ever done that, and it’s very hard to make it work. The chemical treaty had inspection provisions that were obligatory, and they have been extensive: the Russians visiting our chemical production facilities, and our guys visiting theirs, and all kinds of verification. So that’s how we got the Chemical Weapons Convention. That was quite a bit later.

Max: So, I’m curious, was there a Matthew Meselson clone on the British side, thanks to whom the British started pushing this?

Matthew: Yes. There were, of course, numerous clones. And there were numerous clones on this side of the Atlantic, too. None of these things could ever be done by just one person. But my pal Julian Robinson, who was at the University of Sussex in Brighton, was a real scholar of chemical and biological weapons: he knows everything about them, and their whole history, and has written all of the very best papers on this subject. He’s just an unbelievably accurate and knowledgeable historian and scholar. People would go to Julian for advice. He was a Mycroft. He’s still in Sussex.

Ariel: You helped start the Harvard Sussex Program on chemical and biological weapons. Is he the person you helped start that with, or was that separate?

Matthew: We decided to do that together.

Ariel: Okay.

Matthew: It did several things, but one of the main things it did was to publish a quarterly journal, which had a dispatch from Geneva — progress towards getting the Chemical Weapons Convention — because when we started the bulletin, the Chemical Convention had not yet been achieved. There were all kinds of news items in the bulletin; we had guest articles. And it finally ended, I think, only a few years ago. But I think it had a big impact, not only because of what was in it, but also because it united people of all countries interested in this subject. They all read the bulletin, and they all got a chance to write in the bulletin as well, and they occasionally met each other, so it had the effect of bringing together a community of people interested in safely getting rid of chemical weapons and biological weapons.

Max: This Biological Weapons Convention was a great inspiration for subsequent treaties, first the ban on chemical weapons, and then bans on various other kinds of weapons, and today, we have a very vibrant debate about whether there should also be a ban on lethal autonomous weapons, and inhumane uses of A.I. So, I’m curious to what extent you got lots of push-back back in those days from people who said, “Oh this is a stupid idea,” or, “This is never going to work,” and what the lessons are that could be learned from that.

Matthew: I think that with biological weapons, and also, but to a lesser extent, with chemical weapons, the first point was we didn’t need them. We had never really accepted them; in World War I, when we were involved in the use of chemical weapons, that had already been started, and it was never something that the military liked. They didn’t want to fight a war by encumbrance. Biological weapons we certainly didn’t need, once we realized that making cheap weapons that could get into the hands of people who couldn’t afford nuclear weapons was idiotic. And even chemical weapons are relatively cheap and have the possibility of covering fairly large areas at a low price, and also getting into the hands of terrorists. Now, terrorism wasn’t much on anybody’s radar until more recently, but once that became a serious issue, that was another argument against both biological and chemical weapons. So those two weapons really didn’t have a lot of boosters.

Max: You make it sound so easy though. Did it never happen that someone came and told you that you were all wrong and that this plan was never going to work?

Matthew: Yeah, but that was restricted to the people who were doing it, and a few really eccentric intellectuals. As evidence of this: in the military office which dealt with chemical and biological weapons, the highest rank you could find would be a colonel. No general, just a colonel. You don’t get to be a general in the chemical corps. There were a few exceptions, basically old-timers, as kind of a leftover from World War I. If you’re a part of the military that never gets to have a general or even a full colonel, you ain’t got much influence, right?

But if you talk about the artillery or the infantry, my goodness, I mean there are lots of generals — including four star generals, even five star generals — who come out of the artillery and infantry and so on, and then Air Force generals, and fleet admirals in the Navy. So that’s one way you can quickly tell whether something is very important or not.

Anyway, we do have these treaties, but it might be very much more difficult to get treaties on war between robots. I don’t know enough about it to really have an opinion. I haven’t thought about it.

Ariel: I want to follow up with a question I think is similar, because one of the arguments that we hear a lot with lethal autonomous weapons, is this fear that if we ban lethal autonomous weapons, it will negatively impact science and research in artificial intelligence. But you were talking about how some of the biological weapons programs were repurposed to help deal with cancer. And you’re a biologist and chemist, but it doesn’t sound like you personally felt negatively affected by these bans in terms of your research. Is that correct?

Matthew: Well, the only technically really important thing — and that would have happened anyway — is radar, which was indeed accelerated by the military requirement to detect aircraft at a distance. But usually it’s the reverse. People who had been doing research in fundamental science naturally volunteered or were conscripted to do war work. Francis Crick was working on magnetic torpedoes, not on DNA or hemoglobin. So, the argument that a war stimulates basic science is completely backwards.

Newton, he was director of the mint. Nothing about the British military as it was at the time helped Newton realize that if you shoot a projectile fast enough, it will stay in orbit; he figured that out by himself. I just don’t believe the argument that war makes science advance. It’s not true. If anything, it slows it down.

Max: I think it’s fascinating to compare the arguments that were made for and against a biological weapons ban back then with the arguments that are made for and against a lethal autonomous weapons ban today, because another common argument I hear for why people want lethal autonomous weapons today is because, “Oh, they’re going to be great. They’re going to be so cheap.” That’s like exactly what you were arguing is a very good argument against, rather than for, a weapons class.

Matthew: There are some similarities and some differences. Another similarity is that even one autonomous weapon in the hands of a terrorist could do things that are very undesirable — even one. On the other hand, we’re already doing something like it with drones. There’s a kind of continuous path that might lead to this, and I know that the military and DARPA are actually very interested in autonomous weapons, so I’m not so sure that you could stop it, both because of that interest and because it’s continuous; it’s not like a real break.

Biological weapons are really different. Chemical weapons are really different. Whereas autonomous weapons are still working on the ancient primitive analogy of hitting a man with your fist, or shooting a bullet, so long as those autonomous weapons are still using guns, bullets, things like that, and not something like poison that is not native to our biology. From the striking of a blow you can make a continuous line all the way through stones, and bows and arrows, and bullets, to drones, and maybe autonomous weapons. So the discontinuity is different.

Max: That’s an interesting point, that deciding where exactly one draws the line seems to be more challenging in this case. Another very interesting analogy, I think, between biological weapons and lethal autonomous weapons is the business of verification. You mentioned earlier that there was a strong verification protocol for the Chemical Weapons Convention, and there have been verification protocols for nuclear arms reduction treaties also. Some people say, “Oh, it’s a stupid idea to ban lethal autonomous weapons because you can’t think of a good verification system.” But couldn’t people have said that also as a critique of the Biological Weapons Convention?

Matthew:  That’s a very interesting point, because most people who think that verification can’t work have never been told what’s the basic underlying idea of verification. It’s not that you could find everything. Nobody believes that you could find every missile that might exist in Russia. Nobody ever would believe that. That’s not the point. It’s more subtle. The point is that you must have an ongoing attempt to find things. That’s intelligence. And there must be a heavy penalty if you find even one.

So it’s a step back from finding everything, to saying if you find even one then that’s a violation, and then you can take extreme measures. So a country takes a huge risk that another country’s intelligence organization, or maybe someone on your side who’s willing to squeal, isn’t going to reveal the possession of even one prohibited object. That’s the point. You may have some secret biological production facility, but if we find even one of them, then you are in violation. It isn’t that we have to find every single blasted one of them.

That was especially an argument that came from the nuclear treaties. It was the nuclear people who thought that up. People like Douglas McEachin at the CIA, who realized that there’s a more sophisticated argument. You just have to have a pretty impressive ability to find one thing out of many, if there’s anything out there. This is not perfect, but it’s a lot different from the argument that you have to know where everything is at all times.

Max: So, if I can paraphrase, is it fair to say that you simply want to give the parties to the treaty a very strong incentive not to cheat, because even if they get caught off base one single time, they’re in violation, and moreover, those who don’t have the weapons at that time will also feel that there’s a very, very strong stigma? Today, for example, I find it just fascinating how biology is such a strong brand. If you go ask random students here at MIT what they associate with biology, they will say, “Oh, new cures, new medicines.” They’re not going to say bioweapons. If you ask people when was the last time you read about a bioterrorism attack in the newspaper, they can’t even remember anything typically. Whereas, if you ask them about the new biology breakthroughs for health, they can think of plenty.

So, biology has clearly very much become a science that’s harnessed to make life better for people rather than worse. So there’s a very strong stigma. I think if I or anyone else here at MIT tried to secretly start making bioweapons, we’d have a very hard time even persuading any biology grad student to want to work with them because of the stigma. If one could create a similar stigma against lethal autonomous weapons, the stigma itself would be quite powerful, even absent the ability to do perfect verification. Does that make sense?

Matthew: Yes, it does, perfect sense.

Ariel: Do you think that these stigmas have any effect on the public’s interest or politicians’ interest in science?

Matthew: I think people still have a great fascination with science. The exploration of space, for example: lots of people, not just kids — but especially kids — are fascinated by it. Elon Musk says that pretty soon, in 2022, he’s going to have some people walking around on Mars. He’s just tested that BFR rocket of his that’s going to carry people to Mars. I don’t know if he’ll actually get it done, but people are getting fascinated by the exploration of space, are getting fascinated by lots of medical things, are getting desperate about the need for a cure for cancer. I myself think we need to spend a lot more money on preventing — not curing but preventing cancer — and I think we know how to do it.

I think the public still has a big fascination, respect, and excitement from science. The politicians, it’s because, see, they have other interests. It’s not that they’re not interested or don’t like science. It’s because they have big money interests, for example. Coal and oil, these are gigantic. Harvard University has heavily invested in companies that deal with fossil fuels. Our whole world runs on fossil fuels mainly. You can’t fool around with that stuff. So it becomes a problem of which is going to win out, your scientific arguments, which are almost certain to be right, but not absolutely like one and one makes two — but almost — or the whole economy and big financial interests. It’s not easy. It will happen, we’ll convince people, but maybe not in time. That’s the sad part. Once it gets bad enough, it’s going to be bad. You can’t just turn around on a dime and take care of disastrous climate change.

Max: Yeah, this is very much the spirit, of course, of the Future of Life Institute, which Ariel’s podcast is run by. Technology, what it really does, is empower us humans to do more, either more good things or more bad things. And technology in and of itself isn’t evil, nor is it morally good; it’s a tool, simply. And the more powerful it becomes, the more crucial it is that we also develop the wisdom to steer the technology for good uses. And I think what you’ve done with your biology colleagues is such an inspiring role model for all of the other sciences, really.

We physicists still feel pretty guilty about giving the world nuclear weapons, but we’ve also given the world a lot of good stuff, from lasers to smartphones and computers. Chemists gave the world a lot of great materials, but they also gave us, ultimately, the internal combustion engine and climate change. Biology, I think more than any other field, has clearly ended up very solidly on the good side. Everybody loves biology for what it does, even though it could have gone very differently, right? We could have had a catastrophic arms race, a race to the bottom, with one superpower outdoing the other in bioweapons, and eventually these cheap weapons being everywhere, and on the black market, and bioterrorism every day. That future didn’t happen, and that’s why we all love biology. And I am very honored to get to be on this call here with you, so I can personally thank you for your role in making it this way. We should not take this for granted, that it’ll be this way with all sciences, the way it’s become for biology. So, thank you.

Matthew: Yeah. That’s all right.

I’d like to end with one thought. We’re learning how to change the human genome. It won’t really get going for a while, and there are some problems that very few people are thinking about. Not the so-called off-target effects, which are a well-known problem, but another problem that I won’t go into, called epistasis. Nevertheless, 10 years from now, 100 years from now, 500 years from now, sooner or later we’ll be changing the human genome on a massive scale, making people better in various ways, so-called enhancements.

Now, a question arises. Do we know enough about the genetic basis of what makes us human to be sure that we can keep the good things about being human? What are those? Well, compassion is one. I’d say curiosity is another. Another is the feeling of needing to be needed. That sounds kind of complicated, I guess, but if you don’t feel needed by anybody — there are some people who can go through life and they don’t need to feel needed. But doctors, nurses, parents, people who really love each other: the feeling of being needed by another human being, I think, is very pleasurable to many people, maybe to most people, and it’s one of the things that’s of the essence of what it means to be human.

Now, where does this all take us? It means that if we’re going to start changing the human genome in any big time way, we need to know, first of all, what we most value in being human, and that’s a subject for the humanities, for everybody to talk about, think about. And then it’s a subject for the brain scientists to figure out what’s the basis of it. It’s got to be in the brain. But what is it in the brain? And we’re miles and miles and miles away in brain science from being able to figure out what is it in the brain — or maybe we’re not, I don’t know any brain science, I shouldn’t be shooting off my mouth — but we’ve got to understand those things. What is it in our brains that makes us feel good when we are of use to someone else?

We don’t want to fool around with whatever those genes are — do not monkey with those genes unless you’re absolutely sure that you’re making them maybe better — but anyway, don’t fool around. And figure out in the humanities, don’t stop teaching humanities. Learn from Sophocles, and Euripides, and Aeschylus: What are the big problems about human existence? Don’t make it possible for a kid to go through Harvard — as is possible today — without learning a single thing from Ancient Greece. Nothing. You don’t even have to use the word Greece. You don’t have to use the word Homer or any of that. Nothing, zero. Isn’t that amazing?

Before President Lincoln, everybody, to get to enter Harvard, had to already know Ancient Greek and Latin. Even though these guys were mainly boys of course, and they were going to become clergymen. They also, by the way — there were no electives — everyone had to take fluxions, which is differential calculus. Everyone had to take integral calculus. Everyone had to take astronomy, chemistry, physics, as well as moral philosophy, et cetera. Well, there’s nothing like that anymore. We don’t all speak the same language because we’ve all had such different kinds of education, and also the humanities just get short shrift. I think that’s very short-sighted.

MIT is pretty good in humanities, considering it’s a technical school. Harvard used to be tops. Harvard is at risk of maybe losing it. Anyway, end of speech.

Max: Yeah, I want to just agree with what you said, and also rephrase it the way I think about it. What I hear you saying is that it’s not enough to just make our technology more powerful. We also need the humanities, and our humanity, for the wisdom of how we’re going to manage our technology and what we’re trying to use it for, because it does no good to have a really powerful tool if you aren’t wise enough to use it for the right things.

Matthew: If we’re going to change, we might even split into several species. Almost all other species have very close neighboring species. Especially if you can get them separated — say there’s a colony on Mars and they don’t travel back and forth much — species will diverge. It takes a long, long, long, long time, but the idea, like the Bible says, that we are fixed and nothing will change is of course wrong. Human evolution is going on as we speak.

Ariel: We’ll end part one of our two-part podcast with Matthew Meselson here. Please join us for the next episode which serves as a reminder that weapons bans don’t just magically work. But rather, there are often science mysteries that need to be solved in order to verify whether a group has used a weapon illegally. In the next episode, Matthew will talk about three such scientific mysteries he helped solve, including the anthrax incident in Russia, the yellow rain affair in Southeast Asia, and the research he did that led immediately to the prohibition of Agent Orange. So please join us for part two of this podcast, which is also available now.

As always, if you’ve been enjoying this podcast, please take a moment to like it, share it, and maybe even leave a positive review. It’s a small action on your part, but it’s tremendously helpful for us.

The Breakdown of the INF: Who’s to Blame for the Collapse of the Landmark Nuclear Treaty?

On February 1, a little more than 30 years after it went into effect, the United States announced that it was suspending the Intermediate-Range Nuclear Forces (INF) Treaty. Less than 24 hours later, Russia announced that it was also suspending the treaty.

It stands (or stood) as one of the last major nuclear arms control treaties between the U.S. and Russia, and its collapse signals the most serious nuclear arms crisis since the 1980s. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, said to The Guardian, “If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”

The INF treaty, which went into effect in 1988, was the first nuclear agreement to outlaw an entire class of weapons. It banned all ground-launched ballistic and cruise missiles — nuclear, conventional, and “exotic”— with a range of 500 km to 5500 km (310 to 3400 miles), leading to the immediate elimination of 2,692 short- and medium-range weapons. But more than that, the treaty served as a turning point that helped thaw the icy stalemate between the U.S. and Russia. Ultimately, the trust that it fostered established a framework for future treaties and, in this way, played a critical part in ending the Cold War.

Now, all of that may be undone.

The Blame Game Part 1: Russia vs. U.S.

In defense of the suspension, President Donald Trump said that the Russian government has deployed new missiles that violate the terms of the INF treaty — missiles that could deliver nuclear warheads to European targets, including U.S. military bases. President Trump also said that, despite repeated warnings, President Vladimir Putin has refused to destroy these warheads. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to,” he said.

In a statement announcing the suspension of the treaty, Secretary of State Mike Pompeo said that countries must be held accountable when they violate a treaty. “Russia has jeopardized the United States’ security interests,” he said, “and we can no longer be restricted by the treaty while Russia shamelessly violates it.” Pompeo continued by noting that Russia’s posturing is a clear signal that the nation is returning to its old Cold War mentality, and that the U.S. must make similar preparations in light of these developments. “As we remain hopeful of a fundamental shift in Russia’s posture, the United States will continue to do what is best for our people and those of our allies,” he concluded.

The controversy over whether Russia is in violation hinges on whether the 9M729 missile can fly more than 500 km. The U.S. claims to have provided evidence of this to Russia, but has not made this evidence public, and further claims that violations have continued since at least 2014. Although none of the U.S.-based policy experts interviewed for this article dispute that Russia is in violation, many caution that this suspension will create a far more unstable environment and that the U.S. shares much of the blame for not doing more to preserve the treaty.

In an emailed statement to the Future of Life Institute, Martin Hellman, an Adjunct Senior Fellow for Nuclear Risk Analysis at the Federation of American Scientists and Professor Emeritus of Electrical Engineering at Stanford University, was clear in his censure of the Trump administration’s decision and reasoning, noting that it follows a well-established pattern of duplicity and double-dealing:

The INF Treaty was a crucial step in ending the arms race. Our withdrawing from it in such a precipitous manner is a grave mistake. In a sense, treaties are the beginning of negotiations, not the end. When differences in perspective arise, including on what constitutes a violation, the first step is to meet and negotiate. Only if that process fails, should withdrawal be contemplated. In the same way, any faults in a treaty should first be approached via corrective negotiations.

Withdrawing in this precipitous manner from the INF treaty will add to concerns that our adversaries already have about our trustworthiness on future agreements, such as North Korea’s potential nuclear disarmament. Earlier actions of ours which laid that foundation of mistrust include George W. Bush killing the 1994 Agreed Framework with North Korea “for domestic political reasons,” Obama attacking Libya after Bush had promised that giving up its WMD programs “can regain [Libya] a secure and respected place among the nations,” and Trump tearing up the Iran agreement even though Iran was in compliance and had taken steps that considerably set back its nuclear program.

In an article published by CNN, Eliot Engel, chairman of the House Committee on Foreign Affairs, and Adam Smith, chairman of the House Committee on Armed Services, echo these sentiments and add that the U.S. government greatly contributed to the erosion of the treaty, clarifying that the suspension could have been avoided if President Trump had collaborated with NATO allies to pressure Russia into ensuring compliance. “[U.S.] allies told our offices directly that the Trump administration blocked NATO discussion regarding the INF treaty and provided only the sparest information throughout the process….This is the latest step in the Trump administration’s pattern of abandoning the diplomatic tools that have prevented nuclear war for 70 years. It also follows the administration’s unilateral decision to withdraw from the Paris climate agreement,” they said.

Russia has also complained about the alleged lack of U.S. diplomacy. In January 2019, Russian diplomats proposed a path to resolution, stating that they would display their missile system and demonstrate that it didn’t violate the INF treaty if the U.S. did the same with its MK-41 launchers in Romania. The Russians felt that this was a fair compromise, as they have long argued that the Aegis missile defense system, which the U.S. deployed in Romania and Poland, violates the INF treaty. The U.S. rejected Russia’s offer, stating that a Russian-controlled inspection would not permit the kind of unfettered access that U.S. representatives would need to verify their conclusions. And ultimately, they insisted that the only path forward was for Russia to destroy the missiles, launchers, and supporting infrastructure.

In response, Russian foreign minister Sergei Lavrov accused the U.S. of being obstinate. “U.S. representatives arrived with a prepared position that was based on an ultimatum and centered on a demand for us to destroy this rocket, its launchers and all related equipment under US supervision,” he said.

Suggested Reading

Accidental Nuclear War: A Timeline of Close Calls

The most devastating military threat arguably comes from a nuclear war started not intentionally but by accident or miscalculation. Accidental nuclear war has almost happened many times already, and with 15,000 nuclear weapons worldwide — thousands on hair-trigger alert and ready to launch at a moment’s notice — an accident is bound to occur eventually.

The Blame Game Part 2: China

Other experts, such as Mark Fitzpatrick, Director of the non-proliferation program at the International Institute for Strategic Studies, assert that the “real reason” for the U.S. pullout lies elsewhere — in China.

This belief is bolstered by previous statements made by President Trump. Most notably, during a rally in the fall of 2018, the President told reporters that it is unfair that China faces no limits when it comes to developing and deploying intermediate-range nuclear missiles. “Unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons,’ but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable,” he said.

According to a 2019 report prepared for Congress, China has some 2,000 ballistic and cruise missiles in its inventory, and 95% of these would violate the INF treaty if Beijing were a signatory. It should be noted that both Russia and the U.S. are estimated to have over 6,000 nuclear warheads, while China has approximately 280. Nevertheless, the report states, “The sheer number of Chinese missiles and the speed with which they could be fired constitutes a critical Chinese military advantage that would prove difficult for a regional ally or partner to manage absent intervention by the United States,” adding, “The Chinese government has also officially stated its opposition to Beijing joining the INF Treaty.” Consequently, President Trump stated that the U.S. has no choice but to suspend the treaty.

Along these lines, John Bolton, who became National Security Adviser in April 2018, has long argued that the kinds of missiles banned by the INF treaty would be an invaluable resource when it comes to defending Western nations against what he argues is an increasing military threat from China.

Pranay Vaddi, a fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace, feels differently. Although he does not deny that China poses a serious military challenge to the U.S., Vaddi asserts that withdrawing from the INF treaty is not a viable solution, and he says that proponents of the suspension “ignore the very real political challenges associated with deploying U.S. GBIRs in the Asia Pacific region. They also ignore specific military challenges, including the potential for a missile race and long-term regional and strategic instability.” He concludes, “Before withdrawing from the INF Treaty, the United States should consult with its Asian allies on the threat posed by China, the defenses required, and the consequences of introducing U.S. offensive missiles into the region, including potentially on allied territory.”

Suggested Reading

1100 Declassified U.S. Nuclear Targets

The National Security Archives recently published a declassified list of U.S. nuclear targets from 1956, which spanned 1,100 locations across Eastern Europe, Russia, China, and North Korea. This map shows all 1,100 nuclear targets from that list, demonstrating how catastrophic a nuclear exchange between the United States and Russia could be.

Six Months and Counting

Regardless of how much blame each respective nation shares, the present course has been set, and if things don’t change soon, we may find ourselves in a very different world a few months from now.

According to the terms of the treaty, if one of the parties breaches the agreement then the other party has the option to terminate or suspend it. It was on this basis that, back in October of 2018, President Trump stated he would be terminating the INF treaty altogether. Today’s suspension announcement is an update to these plans.

Notably, a suspension doesn’t follow the same course as a withdrawal: the treaty continues to exist during a formal notice period. As a result, starting February 1, the U.S. began a six-month notice period. If the two nations don’t reach an agreement and restore the treaty within this window, the treaty will terminate on August 2. At that juncture, both the U.S. and Russia will be free to develop and deploy the previously banned missiles with no oversight or transparency.

The situation is dire, and experts assert that we must immediately reopen negotiations. On Friday, before the official U.S. announcement, German Chancellor Angela Merkel said that if the United States announced it would suspend compliance with the treaty, Germany would use the six-month formal withdrawal period to hold further discussions. “If it does come to a cancellation today, we will do everything possible to use the six-month window to hold further talks,” she said.

Following the US announcement, German Foreign Minister Heiko Maas tweeted, “there will be less security without the treaty.” Likewise, Laura Rockwood, executive director at the Vienna Center for Disarmament and Non-Proliferation, noted that the suspension is a troubling move that will increase — not decrease — tension and conflict. “It would be best to keep the INF in place. You don’t throw the baby out with the bathwater. It’s been an extraordinarily successful arms control treaty,” she said.

Carl Bildt, a co-chair of the European Council on Foreign Relations, agreed with these sentiments, noting in a tweet that the INF treaty’s demise puts many lives in peril. “Russia can now also deploy its Kaliber cruise missiles with ranges around 1.500 km from ground launchers. This would quickly cover all of Europe with an additional threat,” he said.

And it looks like many of these fears are already being realized. In a televised meeting over the weekend, President Putin stated that Russia will actively begin building weapons that were previously banned under the treaty. President Putin also made it clear that none of his departments would initiate talks with the U.S. on any matters related to nuclear arms control. “I suggest that we wait until our partners are ready to engage in equal and meaningful dialogue,” he said.

 

The photo for this article is from Wikimedia Commons: by Mil.ru, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=63633975

Planning for Existential Hope

It may seem like we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine!

As we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

But first, a very quick look back…

We had a great year, and we’re pleased with all we were able to accomplish. Some of our bigger projects and successes include: the Lethal Autonomous Weapons Pledge; a new round of AI safety grants focusing on the beneficial development of AGI; the California State Legislature’s resolution in support of the Asilomar AI Principles; and our second Future of Life Award, which was presented posthumously to Stanislav Petrov and his family.

As we now look ahead and strive to work toward a better future, we, as a society, must first determine what that collective future should be. At FLI, we’re looking forward to working with global partners and thought leaders as we consider what “better futures” might look like and how we can work together to build them.

As FLI President Max Tegmark says, “There’s been so much focus on just making our tech powerful right now, because that makes money, and it’s cool, that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: help bring back focus [to] the steering of our technology and the destination.”

A recent Gizmodo article on why we need more utopian fiction also summed up the argument nicely: ”Now, as we face a future filled with corruption, yet more conflict, and the looming doom of global warming, imagining our happy ending may be the first step to achieving it.”

Fortunately, there are already quite a few people who have begun considering how a conflicted world of 7.7 billion can unite to create a future that works for all of us. And for the FLI podcast in December, we spoke with six of them to talk about how we can start moving toward that better future.

The existential hope podcast includes interviews with FLI co-founders Max Tegmark and Anthony Aguirre, as well as existentialhope.com founder Allison Duettmann, Josh Clark who hosts The End of the World with Josh Clark, futurist and researcher Anders Sandberg, and tech enthusiast and entrepreneur Gaia Dempsey. You can listen to the full podcast here, but we also wanted to call attention to some of their comments that most spoke to the idea of steering toward a better future:

Max Tegmark on the far future and the near future:

When I look really far into the future, I also look really far into space and I see this vast cosmos, which is 13.8 billion years old. And most of it is, despite what the UFO enthusiasts say, actually looking pretty dead and [like] wasted opportunities. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.

For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth.

Gaia Dempsey on how we can use a technique called world building to help envision a better future for everyone and get more voices involved in the discussion:

Worldbuilding is a really fascinating set of techniques. It’s a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series. And in more contemporary times, some spectacularly advanced worldbuilding is occurring now in the gaming industry. So [there are] these huge connected systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, world builders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It’s a collaborative design practice.

Ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future.

One of the things where I think worldbuilding is really good is that the practice itself does not impose a single monolithic narrative. It actually encourages a multiplicity of narratives and perspectives that can coexist.

Anthony Aguirre on how we can use technology to find solutions:

I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will into fruition. So, sort of by definition, when we have goals that we want to do — problems that we want to solve — technology should in principle be part of the solution.

So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them.

Allison Duettmann on why she created the website existentialhope.com:

I do think that it’s up to everyone, really, to try to engage with the fact that we may not be doomed, and what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there [are] a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on [how] to influence that. So I think we should try to map the space of both peril and promise which lie before us, [and] we should really try to aim for that. This knowledge can empower each and every one of us to navigate toward the grand future.

Josh Clark on the impact of learning about existential risks for his podcast series, The End of the World with Josh Clark:

As I was creating the series, I underwent this transition [regarding] how I saw existential risks, and then ultimately how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it’s like information, information, information. And then now, that you have bought into this, here’s how we do something about it.

I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about [them].

Anders Sandberg on a grand version of existential hope:

The thing is, my hope for the future is we get this enormous open ended future. It’s going to contain strange and frightening things but I also believe that most of it is going to be fantastic. It’s going to be roaring on the world far, far, far into the long term future of the universe probably changing a lot of the aspects of the universe.

When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it too much smaller than it could be. Existential hope to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen and I think we are going to be surprised by many things in future and some of them are going to be wonderful surprises. That is the real existential hope.

Right now, this sounds totally utopian, would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would [have sounded] totally absurd. The future is big, we have a lot of centuries ahead of us, hopefully.

From everyone at FLI, we wish you a happy holiday season and a wonderful New Year full of hope!

https://www.flickr.com/photos/lamoncloa_gob_es/32291640108

Updates From the COP24 Climate Change Meeting

For the first two weeks in December, the parties to the United Nations Framework Convention on Climate Change (UNFCCC) gathered in Katowice, Poland for the 24th annual Conference of the Parties (COP24).

The UNFCCC defines its ultimate goal as “preventing ‘dangerous’ human interference with the climate system,” and its objective for COP24 was to design an “implementation package” for the 2015 Paris Climate Agreement. This package, known as the Katowice Rules, is intended to bolster the Paris Agreement by intensifying the mitigation goals of each of its member countries and, in so doing, ensure the full implementation of the Paris Agreement.

The significance of this package is clearly articulated in the COP24 presidency’s vision — “there is no Paris Agreement without Katowice.”

And the tone of the event was, fittingly, one of urgency. Negotiations took place in the wake of the latest IPCC report, which made clear in its findings that the original terms of the Paris Agreement are insufficient. If we are to keep to the preferred warming target of 1.5°C this century, the report notes that we must strengthen the global response to climate change.

The need for increased action was reiterated throughout the event. During the first week of talks, the Global Carbon Project released new data showing a 2.7% increase in carbon emissions in 2018 and projecting further emissions growth in 2019. And the second week began with a statement from global investors who, “strongly urge all governments to implement the actions that are needed to achieve the goals of the [Paris] Agreement, with the utmost urgency.” The investors warned that, without drastic changes, the economic fallout from climate change would likely be several times worse than the 2008 financial crisis.

Against this grim backdrop, negotiations crawled along.

Progress was impeded early on by a disagreement over the wording used in the Conference’s acknowledgment of the IPCC report. Four nations — the U.S., Russia, Saudi Arabia, and Kuwait — took issue with a draft that said the parties “welcome” the report, preferring to say they “took note” of it. A statement from the U.S. State Department explained: “The United States was willing to note the report and express appreciation to the scientists who developed it, but not to welcome it, as that would denote endorsement of the report.”

There was also tension between the U.S. and China surrounding the treatment of developed vs. developing countries. The U.S. wants one universal set of rules to govern emissions reporting, while China has advocated for looser standards for itself and other developing nations.

Initially scheduled to wrap on Friday, talks continued into the weekend, as a resolution was delayed in the final hours by Brazil’s opposition to a proposal that would change rules surrounding carbon trading markets. Unable to strike a compromise, negotiators ultimately tabled the proposal until next year, and a deal was finally struck on Saturday, following negotiations that carried on through the night.

The final text of the Katowice Rules welcomes the “timely completion” of the IPCC report and lays out universal requirements for updating and fulfilling national climate pledges. It holds developed and developing countries to the same reporting standard, but it offers flexibility for “those developing country parties that need it in the light of their capacities.” Developing countries will be left to self-determine whether or not they need flexibility.

The rules also require that countries report any climate financing, and developed countries are called on to increase their financial contributions to climate efforts in developing countries.

The photo for this article was originally posted here.

How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer

Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized  how we navigate, how we track and ship goods, and how we regulate traffic.

However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.

This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.

Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point, regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, one of the most impactful uses of artificial intelligence encompasses technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.

Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era in safety, potentially reducing crashes by as much as 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.

This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.

What Does it Mean to Have a Safe System?

Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that CPS are making reliable and safe decisions.

To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes that we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario that it confronts. In this sense, “CPS are at their best if they’re not just running very simple [control systems], but if they’re running much more sophisticated and advanced systems,” Platzer notes. However, when CPS utilize advanced autonomy, because they are so complex, it becomes far more difficult to prove that they are making systematically sound choices.

In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”

The ultimate goal, then, is to find equilibrium between the flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”

How to Make a System Safe

The first step in navigating this issue is to determine how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by this central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they have the conviction that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?

The answer is an automated theorem prover, which is a computer program that assists with the development of rigorous mathematical correctness proofs.

When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
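
To make this concrete, here is a minimal sketch of what stating a safety property formally and letting a machine check it can look like. It does not use KeYmaera X or Platzer’s differential dynamic logic; it assumes the off-the-shelf Z3 SMT solver instead, and the braking model, variable names, and invariant are illustrative choices, not taken from his work. The claim being checked: if a vehicle’s controller maintains the invariant that the square of its speed never exceeds twice the braking power times the remaining distance, then full braking always stops the vehicle before the obstacle.

```python
# A toy machine-checked safety claim, assuming the Z3 SMT solver
# (pip install z3-solver). This illustrates the idea of a correctness
# proof; it is not KeYmaera X and not a full hybrid-systems proof.
from z3 import Reals, Solver, And, Not, Implies, unsat

v, d, b, s = Reals('v d b s')   # speed, distance to obstacle, braking power, stopping distance

premises = And(
    v >= 0, d >= 0, b > 0,
    v * v <= 2 * b * d,   # invariant the controller is designed to maintain
    2 * b * s == v * v,   # s is the distance needed to brake from speed v to a standstill
)
claim = Implies(premises, s <= d)   # safety: the vehicle halts before the obstacle

solver = Solver()
solver.add(Not(claim))              # search for any counterexample to the claim
assert solver.check() == unsat      # unsat: no counterexample exists, so the claim holds
print("Safety condition holds for all values of v, d, and b.")
```

A real CPS proof must also account for the continuous dynamics between control decisions, which is exactly the gap that hybrid systems provers such as KeYmaera X are built to close.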

After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.

Eventually, Platzer hopes that this will culminate in technology that allows CPS to recover from situations where the expected outcome didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.

The more complex such seamless transitions are, the harder they are to implement. But they are the ultimate amalgamation of safety and flexibility or, in other words, the ultimate combination of AI and safety proof technology.

Creating the Tech of Tomorrow

To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond in rigor than what anyone else is doing for the analysis of cyber-physical systems.”

The KeYmaera X prover, which was created by Platzer and his team, is a tool that allows users to easily and reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.

More technically, KeYmaera X is a hybrid systems theorem prover that analyzes the control program and the physical behavior of the controlled system together, in order to provide both efficient computation and the necessary support for sophisticated safety proof techniques. Ultimately, this work builds off of a previous iteration of the technology known as KeYmaera. However, Platzer states that, in order to optimize the tool and make it as simple as possible, the team essentially “started from scratch.”

Emphasizing just how dramatic these most recent changes are, Platzer notes that, in the previous prover, the correctness of the statements depended on some 66,000 lines of code, every one of which was critical to the correctness of the verdict. According to Platzer, this poses a problem, as it’s exceedingly difficult to ensure that all of those lines are implemented correctly. Although the latest iteration of KeYmaera is ultimately just as large as the previous version, in KeYmaera X the part of the prover that is responsible for verifying correctness is a mere 2,000 lines of code.

This allows the team to evaluate the safety of cyber-physical systems more reliably than ever before. “We identified this microkernel, this really minuscule part of the system that was responsible for the correctness of the answers, so now we have a much better chance of making sure that we haven’t accidentally snuck any mistakes into the reasoning engines,” Platzer said. Simultaneously, he notes that it enables users to do much more aggressive automation in their analysis. Platzer explains, “If you have a small part of the system that’s responsible for the correctness, then you can do much more liberal automation. It can be much more courageous because there’s an entire safety net underneath it.”
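
The architectural idea here, a tiny trusted core that everything else must go through, is easiest to see in code. The sketch below is our own illustration of that pattern in Python, not KeYmaera X’s actual implementation (which is written in Scala and based on differential dynamic logic); the formulas and rules are deliberately simplistic stand-ins.

```python
# Illustrative sketch of a "small trusted kernel": only the kernel can mint
# Theorem objects, so untrusted automation can search as aggressively as it
# likes without being able to certify an unsound result.
_KERNEL_TOKEN = object()

class Theorem:
    """A certified formula. Only the kernel rules below can construct one."""
    def __init__(self, formula, _token=None):
        if _token is not _KERNEL_TOKEN:
            raise ValueError("only the kernel may construct theorems")
        self.formula = formula

# --- the trusted core: the only code whose correctness must be audited ------
AXIOMS = {"p", ("->", "p", "q")}   # stand-in axiom set for this illustration

def axiom(formula):
    """Kernel rule: any formula on the axiom list is a theorem."""
    if formula not in AXIOMS:
        raise ValueError("not an axiom")
    return Theorem(formula, _token=_KERNEL_TOKEN)

def modus_ponens(impl, premise):
    """Kernel rule: from (p -> q) and p, conclude q."""
    op, p, q = impl.formula
    if op != "->" or premise.formula != p:
        raise ValueError("modus ponens does not apply")
    return Theorem(q, _token=_KERNEL_TOKEN)

# --- untrusted automation: free to try anything, but every Theorem it hands
# back had to pass through the kernel rules above ----------------------------
def prove_q():
    return modus_ponens(axiom(("->", "p", "q")), axiom("p"))

print(prove_q().formula)   # -> 'q', certified by the tiny kernel
try:
    Theorem("q")           # forging a theorem without the kernel is rejected
except ValueError as err:
    print("rejected:", err)
```

The point is the division of trust: a bug in the automation can only make a proof attempt fail, never make a false statement come out certified, which is the “safety net” Platzer describes.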

For the next stage of his research, Platzer is going to begin integrating multiple mathematical models that could potentially describe reality into a CPS. To explain these next steps, Platzer returns once more to self-driving cars: “If you’re following another driver, you can’t know if the driver is currently looking for a parking spot, trying to get somewhere quickly, or about to change lanes. So, in principle, under those circumstances, it’s a good idea to have multiple possible models and comply with the ones that may be the best possible explanation of reality.”

Ultimately, the goal is to allow the CPS to increase their flexibility and complexity by switching between these multiple models as they become more or less likely explanations of reality. “The world is a complicated place,” Platzer explains, “so the safety analysis of the world will also have to be a complicated one.”
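
As a rough illustration of that multiple-model idea, the sketch below keeps several candidate models of another driver’s behavior and discards the ones that stop matching observations. The model names, acceleration values, and tolerance are hypothetical choices for the example, not values from Platzer’s research.

```python
# Hypothetical sketch: track several candidate models of another car's
# behavior and keep only those consistent with what has been observed.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    accel: float  # assumed constant acceleration of the other car, in m/s^2

    def predict(self, speed: float, dt: float) -> float:
        """Predicted speed of the other car after dt seconds."""
        return speed + self.accel * dt

CANDIDATES = [
    Model("slowing down", accel=-2.0),
    Model("holding speed", accel=0.0),
    Model("speeding up", accel=+1.5),
]

def plausible_models(speed_before, speed_after, dt, tolerance=0.5):
    """Return the models whose prediction matches the latest observation."""
    return [m for m in CANDIDATES
            if abs(m.predict(speed_before, dt) - speed_after) <= tolerance]

# Example: the other car went from 20.0 m/s to 19.0 m/s over half a second.
print([m.name for m in plausible_models(20.0, 19.0, 0.5)])  # -> ['slowing down']
```

In this framing, the controller would only act on models that remain plausible explanations of what it observes, which is the kind of flexibility the article describes.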

FLI Signs Safe Face Pledge

FLI is pleased to announce that we’ve signed the Safe Face Pledge, an effort to ensure facial analysis technologies are not used as weapons or in other situations that can lead to abuse or bias. The pledge was initiated and led by Joy Buolamwini, an AI researcher at MIT and founder of the Algorithmic Justice League.  

Facial analysis technology isn’t just used by our smartphones and on social media. It’s also found in drones and other military weapons, and it’s used by law enforcement, airports and airlines, public surveillance cameras, schools, businesses, and more. Yet the technology is known to be flawed and biased, often miscategorizing anyone who isn’t a white male. And the bias is especially strong against dark-skinned women.

“Research shows facial analysis technology is susceptible to bias and even if accurate can be used in ways that breach civil liberties. Without bans on harmful use cases, regulation, and public oversight, this technology can be readily weaponized, employed in secret government surveillance, and abused in law enforcement,” warns Buolamwini.

By signing the pledge, companies that develop, sell or buy facial recognition and analysis technology promise that they will “prohibit lethal use of the technology, lawless police use, and require transparency in any government use.”

FLI does not develop or use these technologies, but we signed because we support these efforts, and we hope all companies will take necessary steps to ensure their technologies are used for good, rather than as weapons or other means of harm.

Companies that signed the pledge at launch include Simprints, Yoti, and Robbie AI. Other early signatories of the pledge include prominent AI researchers Noel Sharkey, Subbarao Kambhampati, Toby Walsh, Stuart Russell, and Raja Chatila, as well as tech authors Cathy O’Neil and Meredith Broussard, and many more.

The Safe Face Pledge commits signatories to:

Show Value for Human Life, Dignity, and Rights

  • Do not contribute to applications that risk human life
  • Do not facilitate secret and discriminatory government surveillance
  • Mitigate law enforcement abuse
  • Ensure your rules are being followed

Address Harmful Bias

  • Implement internal bias evaluation processes and support independent evaluation
  • Submit models on the market for benchmark evaluation where available

Facilitate Transparency

  • Increase public awareness of facial analysis technology use
  • Enable external analysis of facial analysis technology on the market

Embed Safe Face Pledge into Business Practices

  • Modify legal documents to reflect value for human life, dignity, and rights
  • Engage with stakeholders
  • Provide details of Safe Face Pledge implementation

Organizers of the pledge say, “Among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” And the first statement of the pledge calls on signatories to ensure their facial analysis tools are not used “to locate or identify targets in operations where lethal force may be used or is contemplated.”

Anthony Aguirre, cofounder of FLI, said, “A great majority of AI researchers agree that designers and builders of AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.  That is, in fact, the 9th Asilomar AI principle. The Safe Face Pledge asks those involved with the development of facial recognition technologies, which are dramatically increasing in power through the use of advanced machine learning, to take this belief seriously and to act on it.  As new technologies are developed and poised for widespread implementation and use, it is imperative for our society to consider their interplay with the rights and privileges of the people they affect — and new rights and responsibilities may have to be considered as well, where technologies are currently in a legal or regulatory grey area.  FLI applauds the multiple initiatives, including this pledge, aimed at ensuring that facial recognition technologies — as with other AI technologies — are implemented only in a way that benefits both individuals and society while taking utmost care to respect individuals’ rights and human dignity.”

You can support the Safe Face Pledge by signing here.

 

Highlights From NeurIPS 2018

The Top Takeaway from Google’s Attempt to Remove Racial Biases From AI

By Jolene Creighton

Algorithms don’t just decide what posts you see in your Facebook newsfeed. They make millions of life-altering decisions every day. They help decide who moves to the next stage of a job interview, who can take out a loan, and even who’s granted parole.

When one stops to consider the well-known biases that exist in these algorithms, the role that they play in our decision-making processes becomes somewhat concerning.

Ultimately, bias is a problem that stems from the unrepresentative datasets that our systems are trained on. For example, when it comes to images, most of the training data is Western-centric — it depicts Caucasian individuals taking part in traditionally Western activities. Consequently, as Google research previously revealed, if we give an AI system an image of a Caucasian bride in a Western dress, it correctly labels the image as “wedding,” “bride,” and “women.” If, however, we present the same AI system with an image of a bride of Asian descent, it produces results like “clothing,” “event,” and “performance art.”

Of course, this problem is not exclusively a Western one. In 2011, a study found that AI systems developed in East Asia have more difficulty distinguishing between Caucasian faces than between Asian faces.

That’s why, in September of 2018, Google partnered with the NeurIPS conference to launch the Inclusive Images Competition, an event created to help encourage the development of less biased AI image classification models.

For the competition, individuals were asked to use Open Images, an image dataset collected from North America and Europe, to train a system that can be evaluated on images collected from a different geographic region.
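
The evaluation behind such a competition boils down to measuring accuracy separately for each geographic region rather than as a single aggregate number. The sketch below is a hypothetical illustration of that kind of disaggregated scoring; the classifier, labels, and regions are placeholders, not the actual competition pipeline.

```python
# Hypothetical sketch of disaggregated evaluation: score a classifier
# per geographic region so that gaps between regions become visible.
from collections import defaultdict

def accuracy_by_region(examples, classify):
    """examples: iterable of (image, true_label, region); classify: image -> label."""
    correct, total = defaultdict(int), defaultdict(int)
    for image, true_label, region in examples:
        total[region] += 1
        if classify(image) == true_label:
            correct[region] += 1
    return {region: correct[region] / total[region] for region in total}

# Toy usage with stand-in data and a stand-in classifier:
toy_examples = [("img_eu", "bride", "Europe"), ("img_sa", "bride", "South Asia")]
toy_classify = lambda image: "bride" if image == "img_eu" else "performance art"
print(accuracy_by_region(toy_examples, toy_classify))
# -> {'Europe': 1.0, 'South Asia': 0.0}   # a large gap signals geographic bias
```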

At this week’s NeurIPS conference, Pallavi Baljekar, a Google Brain researcher, spoke about the success of the project. Notably, the competition was only marginally successful. Although the leading models maintained relatively high accuracy in the first stages of the competition, four out of five top models didn’t predict the “bride” label when applied to the original two bride images.

However, that’s not to say that progress wasn’t made. Baljekar noted that the competition proved that, even with a small and diverse set of data, “we can improve performance on unseen target distributions.”

And in an interview, Pavel Ostyakov, a Deep Learning Engineer at Samsung AI Center and the researcher who took first place in the competition, added that demanding an entirely unbiased AI may be asking for a bit too much. Our AI systems need to be able to “stereotype” to some degree in order to make their classifications. “The problem was not solved yet, but I believe that it is impossible for neural networks to make unbiased predictions,” he said. The need to retain some biases is a sentiment that has been echoed by other AI researchers before.

Consequently, it seems that making unbiased AI systems is going to be a process that requires continuous improvement and tweaking. Yet, even if we can’t make entirely unbiased AI, we can do a lot more to make these systems less biased.

With this in mind, today, Google announced Open Images Extended. It’s an extension of Google’s Open Images and is intended to be a dataset that better represents the global diversity we find on our planet. The first set to be added is seeded with over 470,000 images.

On this very long road we’re traveling, it’s a step in the right direction.


The Reproducibility Problem: AI Agents Should be Trained in More Realistic Environments

By Jolene Creighton

Our world is a complex and vibrant place. It’s also remarkably dynamic, existing in a state of near constant change. As a result, when we’re faced with a decision, there are thousands of variables that must be considered.

According to Joelle Pineau, an Associate Professor at McGill University and lead of Facebook’s Artificial Intelligence Research lab in Montreal, this poses a bit of a problem when it comes to our AI agents.

During her keynote speech at the 2018 NeurIPS conference, Pineau stated that many AI researchers aren’t training their machine learning systems in proper environments. Instead of using dynamic worlds that mimic what we see in real life, much of the work that’s currently being done takes place in simulated worlds that are static and pristine, lacking the complexity of realistic environments.

According to Pineau, although these computer-constructed worlds help make research more reproducible, they also make the results less rigorous and meaningful. “The real world has incredible complexity, and when we go to these simulators, that complexity is completely lost,” she said.

Pineau continued by noting that, if we hope to one day create intelligent machines that are able to work and react like humans — artificial general intelligences (AGIs) — we must go beyond the static and limited worlds that are created by computers and begin tackling real world scenarios. “We have to break out of these simulators…on the roadmap to AGI, this is only the beginning,” she said.

Ultimately, Pineau also noted that we will never achieve a true AGI unless we begin testing our systems on more diverse training sets and forcing our intelligent agents to tackle more complex problems. “The world is your test set,” she said, concluding, “I’m here to encourage you to explore the full spectrum of opportunities…this means using separate tasks for training and testing.”

Teaching a Machine to Reason

Pineau’s primary critique was on an area of machine learning that is known as reinforcement learning (RL). RL systems allow intelligent agents to improve their decision-making capabilities through trial and error. Over time, these agents are able to learn the rules that govern good and bad choices by interacting with their environment and receiving numerical reward signals that are based on the actions that they take.

Ultimately, RL systems are trained to maximize the numerical reward signals that they receive, so their decisions improve as they try more things and discover what actions yield the most reward. But unfortunately, most simulated worlds have a very limited number of variables. As a result, RL systems have very few things that they can interact with. This means that, although intelligent agents may know what constitutes good decision-making in a simulated environment, when they’re deployed in a realistic environment, they quickly become lost amidst all the new variables.
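
For readers unfamiliar with the mechanics, the sketch below shows the reinforcement-learning loop in its most stripped-down form: a tabular Q-learning agent in a tiny, made-up world. The environment, reward values, and hyperparameters are our own illustrative choices; the point is only how numerical reward signals shape the agent’s decisions through trial and error.

```python
# Minimal tabular Q-learning sketch: an agent learns, by trial and error,
# which actions maximize its numerical reward in a tiny toy environment.
import random

def step(state, action):
    """Toy world: states 0-3 in a row; action 1 moves right, 0 moves left.
    Reaching state 3 gives a reward of +1 and ends the episode."""
    next_state = max(0, min(3, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward, next_state == 3

q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}   # Q-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1                 # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the current best estimate.
        if random.random() < epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward the reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Learned policy for the non-terminal states: always move right (action 1).
print({s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)})
```

In a simulator this simple the agent masters the task quickly; Pineau’s point is that such sterile worlds tell us little about how the same approach fares amid real-world complexity.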

According to Pineau, overcoming this issue means creating more dynamic environments for AI systems to train on.

To showcase one way of accomplishing this, Pineau turned to Breakout, a game launched by Atari in 1976. The game’s environment is simplistic and static, consisting of a background that is entirely black. In order to inject more complexity into this simulated environment, Pineau and her team inserted videos, which are an endless source of natural noise, into the background.

Pineau argued that, by adding these videos into the equation, the team was able to create an environment that includes some of the complexity and variability of the real world. And by ultimately training reinforcement learning systems to operate in such multifaceted environments, researchers obtain more reliable findings and better prepare RL systems to make decisions in the real world.
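
A rough sketch of that modification might look like the wrapper below, which swaps the near-black background pixels of each game frame for frames taken from a video. This is our own illustrative reconstruction of the idea, with a gym-style environment interface and placeholder video frames, not the team’s actual code.

```python
# Hypothetical wrapper: replace the mostly black Breakout background with
# frames from a video, injecting natural visual noise into the observations.
import numpy as np

class VideoBackgroundWrapper:
    def __init__(self, env, video_frames, black_threshold=10):
        self.env = env                    # any env returning RGB frames (gym-style API assumed)
        self.video_frames = video_frames  # list of HxWx3 uint8 frames, same size as observations
        self.black_threshold = black_threshold
        self.t = 0

    def _inject(self, observation):
        background = self.video_frames[self.t % len(self.video_frames)]
        self.t += 1
        # Pixels darker than the threshold are treated as background and replaced.
        is_background = observation.max(axis=-1, keepdims=True) < self.black_threshold
        return np.where(is_background, background, observation)

    def reset(self):
        return self._inject(self.env.reset())

    def step(self, action):
        observation, reward, done, info = self.env.step(action)
        return self._inject(observation), reward, done, info
```

Because the background now changes every frame, an agent can no longer succeed by memorizing raw pixels and has to learn features that survive the added noise.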

In order to help researchers better comprehend exactly how reliable and reproducible their results currently are — or aren’t — Pineau pointed to The 2019 ICLR Reproducibility Challenge during her closing remarks.

The goal of this challenge is to have members of the research community try to reproduce the empirical results submitted to the International Conference on Learning Representations. Then, once all of the attempts have been made, the results are sent back to the original authors. Pineau noted that, to date, the challenge has had a dramatic impact on the findings that are reported. During the 2018 challenge, 80% of authors that received reproducibility reports stated that they changed their papers as a result of the feedback.

You can download a copy of Pineau’s slides here.


Montreal Declaration on Responsible AI May Be Next Step Toward the Development of AI Policy

By Ariel Conn

Over the last few years, as concerns surrounding artificial intelligence have grown, an increasing number of organizations, companies, and researchers have come together to create and support principles that could help guide the development of beneficial AI. With FLI’s Asilomar Principles, IEEE’s treatise on the Ethics of Autonomous and Intelligent Systems, the Partnership on AI’s Tenets, and many more, concerned AI researchers and developers have laid out a framework of ethics that almost everyone can agree upon. However, these previous documents weren’t specifically written to inform and direct AI policy and regulations.

On December 4, at the NeurIPS conference in Montreal, Canadian researchers took the next step, releasing the Montreal Declaration on Responsible AI. The Declaration builds on the current ethical framework of AI, but the architects of the document also add, “Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.”

Yoshua Bengio, a prominent Canadian AI researcher and founder of one of the world’s premier machine learning labs, described the Declaration, saying, “Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner.”

“We want this Declaration to spark a broad dialogue between the public, the experts and government decision-makers,” said UdeM’s rector, Guy Breton. “The theme of artificial intelligence will progressively affect all sectors of society and we must have guidelines, starting now, that will frame its development so that it adheres to our human values and brings true social progress.”

The Declaration lays out ten principles: Well-Being, Respect for Autonomy, Protection of Privacy and Intimacy, Solidarity, Democratic Participation, Equity, Diversity, Prudence, Responsibility, and Sustainable Development.

The primary themes running through the Declaration revolve around ensuring that AI doesn’t disrupt basic human and civil rights and that it enhances equality, privacy, diversity, and human relationships. The Declaration also suggests that humans need to be held responsible for the actions of artificial intelligence systems (AIS), and it specifically states that AIS cannot be allowed to make the decision to take a human life. It also includes a section on ensuring that AIS is designed with the climate and environment in mind, such that resources are sustainably sourced and energy use is minimized.

The Declaration is the result of deliberation that “occurred through consultations held over three months, in 15 different public spaces, and sparked exchanges between over 500 citizens, experts and stakeholders from every horizon.” That it was formulated in Canada is especially relevant given Montreal’s global prominence in AI research.

In his article for The Conversation, Bengio explains, “Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.”

He adds, “Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.”


Making an Impact: What Role Should Scientists Play in Creating AI Policy?

By Jolene Creighton

Artificially intelligent systems are already among us. They fly our planes, drive our cars, and even help doctors make diagnoses and treatment plans. As AI continues to impact daily life and alter society, laws and policies will increasingly have to take it into account. Each day, more and more of the world’s experts call on policymakers to establish clear, international guidelines for the governance of AI.

This week, at the 2018 NeurIPS conference, Edward W. Felten, Professor of Computer Science and Public Affairs at Princeton University, took up the call.

During his opening remarks, Felten noted that AI is poised to radically change everything about the way we live and work, stating that this technology is “extremely powerful and represents a profound change that will happen across many different areas of life.” As such, Felten noted that we must work quickly to amend our laws and update our policies so we’re ready to confront the changes that this new technology brings.

However, Felten argued that policymakers cannot be left to dictate this course alone — members of the AI research community must engage with them.

“Sometimes it seems like our world, the world of the research lab or the developer’s or data scientist’s cubicle, is a million miles from public policy…however, we have not only an opportunity but also a duty to be actively participating in public life,” he said.

Guidelines for Effective Engagement

Felten noted that the first step for researchers is to focus on and understand the political system as a whole. “If you look only at the local picture, it might look irrational. But, in fact, these people [policymakers] are operating inside a system that is big and complicated,” he said. To this point, Felten stated that researchers must become better informed about political processes so that they can participate in policy conversations more effectively.

According to Felten, this means the AI community needs to recognize that policy work is valid and valuable, and this work should be incentivized accordingly. He also called on the AI community to create career paths that encourage researchers to actively engage with policymakers by blending AI research and policy work.

For researchers who are interested in pursuing such work, Felten outlined the steps they should take to start an effective dialogue:

  1. Combine knowledge with preference: As a researcher, work to frame your expertise in the context of the policymaker’s interests.
  2. Structure the decision space: Based on the policymaker’s preferences, give a range of options and explain their possible consequences.
  3. Follow-up: Seek feedback on the utility of the guidance that you offered and the way that you presented your ideas.

If done right, Felten said, this protocol allows experts and policymakers to build productive engagement and trust over time.

US Government Releases Its Latest Climate Assessment, Demands Immediate Action

At the end of last week, amidst the flurry of holiday shopping, the White House quietly released Volume II of the Fourth National Climate Assessment (NCA4). The comprehensive report, which was compiled by the United States Global Change Research Program (USGCRP), is the culmination of decades of environmental research conducted by scientists from 13 different federal agencies. The scope of the work is truly striking, representing more than 300 authors and encompassing thousands of scientific studies.

Unfortunately, the report is also rather grim.

The assessment asserts that, if climate change continues unabated, it will cost the U.S. economy hundreds of billions of dollars a year by the close of the century — causing some $155 billion in annual damages to labor and another $118 billion in damages to coastal property. In fact, the report notes that, unless we immediately launch “substantial and sustained global mitigation and regional adaptation efforts,” losses in the agricultural sector alone will reach billions of dollars by the middle of the century.

Notably, the NCA4 authors emphasize that these aren’t just warnings for future generations, pointing to several areas of the United States that are already grappling with the high economic cost of climate change. For example, a powerful heatwave that struck the Northeast left local fisheries devastated, and similar events in Alaska have dramatically slashed fishing quotas for certain stocks. Meanwhile, human activity is exacerbating Florida’s red tide, killing fish populations along the southwest coast.

Of course, the economy won’t be the only thing that suffers.

According to the assessment, climate change is increasingly threatening the health and well-being of the American people, and emission reduction efforts could ultimately save thousands of lives. Young children, pregnant women, and aging populations are identified as most at risk; however, the authors note that waterborne infectious diseases and global food shortages threaten all populations.

As with the economic impact, the toll on human health is already visible. For starters, air pollution is driving a rise in the number of deaths related to heart and lung problems. Asthma diagnoses have increased, and rising temperatures are causing a surge in heatstroke and other heat-related illnesses. And the report makes it clear that the full extent of the risk extends well beyond either the economy or human health, plainly stating that climate change threatens all life on our planet.

Ultimately, the authors emphasize the immediacy of the issue, noting that without immediate action, no system will be left untouched:

“Climate change affects the natural, built, and social systems we rely on individually and through their connections to one another….extreme weather and climate-related impacts on one system can result in increased risks or failures in other critical systems, including water resources, food production and distribution, energy and transportation, public health, international trade, and national security. The full extent of climate change risks to interconnected systems, many of which span regional and national boundaries, is often greater than the sum of risks to individual sectors.”

Yet, the picture painted by the NCA4 assessment is not entirely bleak. The report suggests that, with a concerted and sustained effort, the most dire damage can be undone and ultimate catastrophe averted. The authors note that this will require international cooperation centered on a dramatic reduction in global carbon dioxide emissions.

The 2015 Paris Agreement, in which 195 countries put forth emission reduction pledges, represented a landmark in the international effort to curtail global warming. The agreement was designed to cap warming at 2 degrees Celsius, a limit scientists then believed would prevent the most severe and irreversible effects of climate change. That limit has since been lowered to 1.5 degrees Celsius. Unfortunately, current models predict that even if countries hit their current pledges, temperatures will still climb to 3.3 degrees Celsius by the end of the century. The Paris Agreement offers a necessary first step, but in light of these new predictions, pledges must be strengthened.

Scientists hope the findings in the National Climate Assessment will compel the U.S. government to take the lead in updating its climate commitments.

Handful of Countries – Including the US and Russia – Hamper Discussions to Ban Killer Robots at UN

This press release was originally released by the Campaign to Stop Killer Robots and has been lightly edited.

Geneva, 26 November 2018 – Reflecting the fragile nature of multilateralism today, countries have agreed to continue their diplomatic talks on lethal autonomous weapons systems—killer robots—next year. But the discussions will continue with no clear objective and participating countries will have even less time dedicated to making decisions than they’ve had in the past. The outcome at the Convention on Conventional Weapons (CCW) annual meeting—which concluded at 11:55 PM on Friday, November 23—has again demonstrated the weakness of the forum’s decision-making process, which enables a single country or small group of countries to thwart more ambitious measures sought by a majority of countries.

Killer robots are weapons systems that would select and attack targets without meaningful human control over the process — that is, weapons that could target and kill people without sufficient human oversight.

“We’re dismayed that [countries] could not agree on a more ambitious mandate aimed at negotiating a treaty to prevent the development of fully autonomous weapons,” said Mary Wareham of Human Rights Watch, coordinator of the Campaign to Stop Killer Robots. “This weak outcome underscores the urgent need for bold political leadership and for consideration of  another route to create a new treaty to ban these weapons systems, which would select and attack targets without meaningful human control.”

“The security of the world and future of humanity hinges on achieving a preemptive ban on killer robots,” Wareham added.

The Campaign to Stop Killer Robots urges all countries to heed the call of the UN Secretary-General and prohibit these weapons, which he has deemed “politically unacceptable and morally repugnant.”

Since the first CCW meeting on killer robots in 2014, most of the participating countries have concluded that current international humanitarian and human rights law will need to be strengthened to prevent the development, production, and use of fully autonomous weapons. This includes 28 countries seeking to prohibit fully autonomous weapons. This past week, El Salvador and Morocco added their names to the list of countries calling for a ban. Austria, Brazil, and Chile have formally proposed the urgent negotiation of “a legally-binding instrument to ensure meaningful human control over the critical functions” of weapons systems.

None of the 88 countries participating in the CCW meeting objected to continuing the formal discussions on lethal autonomous weapons systems. However, Russia, Israel, Australia, South Korea, and the United States have indicated they cannot support negotiation of a new treaty via the CCW or any other process. And Russia alone successfully lobbied to limit the amount of time that states will meet in 2019, reducing the talks from 10 days to just seven days.

Seven days is insufficient for the CCW to tackle this challenge, and for the Campaign to Stop Killer Robots, the fact the CCW talks on killer robots will proceed next year is no guarantee of a meaningful outcome.

“It seems ever more likely that concerned [countries] will consider other avenues to create a new international treaty to prohibit fully autonomous weapons,” said Wareham. “The Campaign to Stop Killer Robots stands ready to work to secure a new treaty through any means possible.”

The CCW is not the only forum through which countries can conclude a legally binding international treaty. In the past, consensus-bound disarmament forums similarly failed to prohibit antipersonnel landmines, cluster munitions, and nuclear weapons, because a single country or small group of countries could block agreement. Instead, fueled by mounting public pressure, concerned countries turned to other processes to finally establish treaties banning each of these inhumane weapons. But even then, these diplomatic efforts only succeeded because of the genuine partnerships between like-minded countries, UN agencies, the International Committee of the Red Cross, and dedicated coalitions of non-governmental organizations.

This past week’s CCW meeting approved Mr. Ljupco Jivan Gjorgjinski of the Former Yugoslav Republic of Macedonia to chair next year’s deliberations on LAWS, which will be divided into two meetings: March 25-29 and August 20-21. The CCW’s annual meeting, at which decisions will be made about future work on autonomous weapons, will be held on November 13-15.

“Over the coming year our dynamic campaigners around the world are intensifying their outreach at the national and regional levels,” said Wareham. “We encourage anyone concerned by the disturbing trend towards killer robots to express their strong desire for their government to endorse and work for a ban on fully autonomous weapons without delay. Only with the public’s support will the ban movement prevail.”

To learn more about how you can help, visit autonomousweapons.org.

Benefits & Risks of Biotechnology

“This is a whole new era where we’re moving beyond little edits on single genes to being able to write whatever we want throughout the genome.”

-George Church, Professor of Genetics at Harvard Medical School

What is biotechnology?

How are scientists putting nature’s machinery to use for the good of humanity, and how could things go wrong?

Biotechnology is nearly as old as humanity itself. The food you eat and the pets you love? You can thank our distant ancestors for kickstarting the agricultural revolution, using artificial selection for crops, livestock, and other domesticated animals. When Edward Jenner pioneered vaccination and when Alexander Fleming discovered antibiotics, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese!

When he coined the term in 1919, the agriculturalist Karl Ereky described ‘biotechnology’ as “all lines of work by which products are produced from raw materials with the aid of living things.” In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature, and then manipulating it in a test tube – or, more recently, inside of living cells.

In fact, the most exciting biotechnology advances of recent times are occurring at the microscopic level (and smaller!) within the membranes of cells. After decades of basic research into decoding the chemical and genetic makeup of cells, biologists in the mid-20th century launched what would become a multi-decade flurry of research and breakthroughs. Their work has brought us the powerful cellular tools at biotechnologists’ disposal today. In the coming decades, scientists will use the tools of biotechnology to manipulate cells with increasing control, from precision editing of DNA to synthesizing entire genomes from their basic chemical building blocks. These cells could go on to become bomb-sniffing plants, miracle cancer drugs, or ‘de-extincted’ wooly mammoths. And biotechnology may be a crucial ally in the fight against climate change.

But rewriting the blueprints of life carries an enormous risk. To begin with, the same technology being used to extend our lives could instead be used to end them. While researchers might see the engineering of a supercharged flu virus as a perfectly reasonable way to better understand and thus fight the flu, the public might see the drawbacks as equally obvious: the virus could escape, or someone could weaponize the research. And the advanced genetic tools that some are considering for mosquito control could have unforeseen effects, possibly leading to environmental damage. The most sophisticated biotechnology may be no match for Murphy’s Law.

While the risks of biotechnology have been fretted over for decades, the increasing pace of progress – from low cost DNA sequencing to rapid gene synthesis to precision genome editing – suggests biotechnology is entering a new realm of maturity regarding both beneficial applications and more worrisome risks. Adding to concerns, DIY scientists are increasingly taking biotech tools outside of the lab. For now, many of the benefits of biotechnology are concrete while many of the risks remain hypotheticals, but it is better to be proactive and cognizant of the risks than to wait for something to go wrong first and then attempt to address the damage.

How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have substituted petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.

What are the risks of biotechnology?

Along with excitement, the rapid progress of research has also raised questions about the consequences of biotechnology advances. Biotechnology may carry more risk than other scientific fields: microbes are tiny and difficult to detect, but the dangers are potentially vast. Further, engineered cells could divide on their own and spread in the wild, with the possibility of far-reaching consequences. Biotechnology could most likely prove harmful either through the unintended consequences of benevolent research or from the purposeful manipulation of biology to cause harm. One could also imagine messy controversies, in which one group engages in an application for biotechnology that others consider dangerous or unethical.

 

1. Unintended Consequences

Sugarcane farmers in Australia in the 1930s had a problem: cane beetles were destroying their crop. So they imported what they hoped would be a natural predator, the cane toad, as a form of biological pest control. What could go wrong? Well, the toads became a major nuisance themselves, spreading across the continent and eating the local fauna (except for, ironically, the cane beetle).

While modern biotechnology solutions to society’s problems seem much more sophisticated than airdropping amphibians into Australia, this story should serve as a cautionary tale. To avoid blundering into disaster, the errors of the past should be acknowledged.

  • In 2014, the Centers for Disease Control and Prevention (CDC) came under scrutiny after repeated errors led to scientists being exposed to Ebola, anthrax, and the flu. And a professor in the Netherlands came under fire in 2011 when his lab engineered a deadly, airborne version of the flu virus, mentioned above, and attempted to publish the details. These and other labs study viruses or toxins to better understand the threats they pose and to try to find cures, but their work could set off a public health emergency if a deadly material is released or mishandled as a result of human error.
  • Mosquitoes are carriers of disease – including harmful and even deadly pathogens like Zika, malaria, and dengue – and they seem to play no productive role in the ecosystem. But civilians and lawmakers are raising concerns about a mosquito control strategy that would genetically alter and destroy disease-carrying species of mosquitoes. Known as a ‘gene drive,’ the technology is designed to spread a gene quickly through a population by sexual reproduction (a toy numerical sketch of this spreading dynamic follows this list). For example, to control mosquitoes, scientists could release males into the wild that have been modified to produce only sterile offspring. Scientists who work on gene drives have performed risk assessments and built safeguards into their designs and trials to make them as safe as possible. But, since a man-made gene drive has never been tested in the wild, it’s impossible to know for certain the impact that a mosquito extinction could have on the environment. Additionally, there is a small possibility that the gene drive could mutate once released in the wild, spreading genes that researchers never planned for. Even armed with strategies to reverse a rogue gene drive, scientists may find gene drives difficult to control once they spread outside the lab.
  • When scientists went digging for clues in the DNA of people who are apparently immune to HIV, they found that the resistant individuals carry a mutation in a protein (the receptor CCR5) that serves as the landing pad for HIV on the surface of blood cells. Because these patients were apparently healthy in the absence of a working copy of the protein, researchers reasoned that deleting its gene in the cells of infected or at-risk patients could be a permanent cure for HIV and AIDS. With the arrival of a new tool – a set of ‘DNA scissors’ called CRISPR/Cas9 – that holds the promise of simple gene surgery for HIV, cancer, and many other genetic diseases, the scientific world started to imagine nearly infinite possibilities. But trials of CRISPR/Cas9 in human cells have produced troubling results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. While a bad haircut might be embarrassing, the wrong cut by CRISPR/Cas9 could be much more serious, making you sicker instead of healthier. And if those edits were made to embryos, instead of fully formed adult cells, then the mutations could permanently enter the gene pool, meaning they would be passed on to all future generations. So far, prominent scientists and prestigious journals are calling for a moratorium on gene editing in viable embryos until the risks, ethics, and social implications are better understood.
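
To make the spreading dynamic of a gene drive more concrete, the toy calculation below tracks the frequency of a hypothetical drive allele across generations. It is only a sketch under simplifying assumptions – random mating, no fitness cost, and an assumed 90% conversion rate in heterozygotes, all invented for illustration – not a model used by gene drive researchers. The point it illustrates is that because a homing drive is inherited by far more than the usual half of offspring, even a small release can sweep through a population within roughly ten generations.

```python
# Toy model (not a real population-genetics simulation): tracks the frequency
# of a hypothetical drive allele under random mating. In ordinary Mendelian
# inheritance a heterozygote transmits an allele to ~50% of offspring; a
# homing drive "converts" heterozygotes so they transmit it far more often.

def next_generation(p, conversion=0.9):
    """Return the drive-allele frequency in the next generation.

    p          -- current frequency of the drive allele (0..1)
    conversion -- assumed fraction of heterozygotes converted by the drive
                  into effective homozygotes (a made-up illustrative value)
    """
    q = 1.0 - p
    drive_homo = p * p        # genotype frequency with two drive copies (Hardy-Weinberg)
    hetero = 2.0 * p * q      # genotype frequency with one drive copy
    # Converted heterozygotes pass the drive to (almost) all offspring;
    # unconverted ones pass it to half, as in ordinary inheritance.
    return drive_homo + hetero * (conversion + (1 - conversion) * 0.5)

p = 0.01  # release the drive in 1% of the population
for gen in range(1, 11):
    p = next_generation(p)
    print(f"generation {gen:2d}: drive allele frequency = {p:.2f}")
```

Lowering the assumed conversion rate or adding a fitness cost slows or stalls the sweep, which is one reason real-world outcomes are so hard to predict in advance.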

 

2. Weaponizing biology

The world recently witnessed the devastating effects of disease outbreaks, in the form of Ebola and the Zika virus – but those were natural in origin. The malicious use of biotechnology could mean that future outbreaks are started on purpose. Whether the perpetrator is a state actor or a terrorist group, the development and release of a bioweapon, such as a poison or infectious disease, would be hard to detect and even harder to stop. Unlike a bullet or a bomb, deadly cells could continue to spread long after being deployed. The US government takes this threat very seriously, and the threat of bioweapons to the environment should not be taken lightly either.

Developed nations, and even impoverished ones, have the resources and know-how to produce bioweapons. For example, North Korea is rumored to have assembled an arsenal containing “anthrax, botulism, hemorrhagic fever, plague, smallpox, typhoid, and yellow fever,” ready in case of attack. It’s not unreasonable to assume that terrorists or other groups are trying to get their hands on bioweapons as well. Indeed, numerous instances of chemical or biological weapon use have been recorded, including the anthrax scare shortly after 9/11, which left five people dead after letters containing anthrax spores were sent through the mail. And new gene editing technologies are increasing the odds that a hypothetical bioweapon targeted at a certain ethnicity, or even a single individual like a world leader, could one day become a reality.

While attacks using traditional weapons may require much less expertise, the dangers of bioweapons should not be ignored. It might seem impossible to make bioweapons without plenty of expensive materials and scientific knowledge, but recent advances in biotechnology may make it even easier for bioweapons to be produced outside of a specialized research lab. The cost to chemically manufacture strands of DNA is falling rapidly, meaning it may one day be affordable to ‘print’ deadly proteins or cells at home. And the openness of science publishing, which has been crucial to our rapid research advances, also means that anyone can freely Google the chemical details of deadly neurotoxins. In fact, the most controversial aspect of the supercharged influenza case was not that the experiments had been carried out, but that the researchers wanted to openly share the details.

On a more hopeful note, scientific advances may allow researchers to find solutions to biotechnology threats as quickly as they arise. Recombinant DNA and biotechnology tools have enabled the rapid invention of new vaccines which could protect against new outbreaks, natural or man-made. For example, less than 5 months after the World Health Organization declared Zika virus a public health emergency, researchers got approval to enroll patients in trials for a DNA vaccine.

The ethics of biotechnology

Biotechnology doesn’t have to be deadly, or even dangerous, to fundamentally change our lives. While humans have been altering genes of plants and animals for millennia — first through selective breeding and more recently with molecular tools and chimeras — we are only just beginning to make changes to our own genomes (amid great controversy).

Cutting-edge tools like CRISPR/Cas9 and DNA synthesis raise important ethical questions that are increasingly urgent to answer. Some question whether altering human genes means “playing God,” and if so, whether we should do that at all. For instance, if gene therapy in humans is acceptable to cure disease, where do you draw the line? Among disease-associated gene mutations, some come with virtual certainty of premature death, while others put you at higher risk for something like Alzheimer’s, but don’t guarantee you’ll get the disease. Many others lie somewhere in between. How do we determine a hard limit for which gene surgery to undertake, and under what circumstances, especially given that the surgery itself comes with the risk of causing genetic damage? Scholars and policymakers have wrestled with these questions for many years, and there is some guidance in documents such as the United Nations’ Universal Declaration on the Human Genome and Human Rights.

And what about ways that biotechnology may contribute to inequality in society? Early work in gene surgery will no doubt be expensive – for example, Novartis plans to charge $475,000 for a one-time treatment of their recently approved cancer therapy, a drug which, in trials, has rescued patients facing certain death. Will today’s income inequality, combined with biotechnology tools and talk of ‘designer babies’, lead to tomorrow’s permanent underclass of people who couldn’t afford genetic enhancement?

Advances in biotechnology are escalating the debate, from questions about altering life to creating it from scratch. For example, a recently announced initiative called GP-Write has the goal of synthesizing an entire human genome from chemical building blocks within the next 10 years. The project organizers have many applications in mind, from bringing back wooly mammoths to growing human organs in pigs. But, as critics pointed out, the technology could make it possible to produce children with no biological parents, or to recreate the genome of another human, like making cellular replicas of Einstein. “To create a human genome from scratch would be an enormous moral gesture,” write two bioethicists regarding the GP-Write project. In response, the organizers of GP-Write insist that they welcome a vigorous ethical debate, and have no intention of turning synthetic cells into living humans. But this doesn’t guarantee that rapidly advancing technology won’t be applied in the future in ways we can’t yet predict.

What are the tools of biotechnology?

 

1. DNA Sequencing

It’s nearly impossible to imagine modern biotechnology without DNA sequencing. Since virtually all of biology centers around the instructions contained in DNA, biotechnologists who hope to modify the properties of cells, plants, and animals must speak the same molecular language. DNA is made up of four building blocks, or bases, and DNA sequencing is the process of determining the order of those bases in a strand of DNA. Since the publication of the complete human genome in 2003, the cost of DNA sequencing has dropped dramatically, making it a simple and widespread research tool.
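
In practice, a sequenced read is handled in software as plain text over the four-letter alphabet A, C, G, T, which is what makes genome-scale analysis possible. The snippet below is a minimal illustration of that idea using an arbitrary, made-up read and two invented helper functions; it is not drawn from any real sequencing pipeline.

```python
# Minimal illustration: a sequenced DNA read is just text over the alphabet
# {A, C, G, T}, so basic bookkeeping reduces to ordinary string handling.
from collections import Counter

read = "ATGCGTACGTTAGCCGATAA"  # arbitrary example read, not from any real genome

def base_composition(seq):
    """Count how often each base appears in the read."""
    return dict(Counter(seq))

def reverse_complement(seq):
    """Return the sequence of the opposite DNA strand, read 5' to 3'."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

print(base_composition(read))
print(reverse_complement(read))
```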

Benefits: Sonia Vallabh had just graduated from law school when her mother died from a rare and fatal genetic disease. DNA sequencing showed that Sonia carried the fatal mutation as well. But far from resigning herself to her fate, Sonia and her husband Eric decided to fight back, and today they are graduate students at Harvard, racing to find a cure. DNA sequencing has also allowed Sonia to become pregnant, since doctors could test her eggs for ones that don’t have the mutation. While most people’s genetic blueprints don’t contain deadly mysteries, our health is increasingly supported by the medical breakthroughs that DNA sequencing has enabled. For example, researchers were able to track the 2014 Ebola epidemic in real time using DNA sequencing. And pharmaceutical companies are designing new anti-cancer drugs targeted to people with a specific DNA mutation. Entire new fields, such as personalized medicine, owe their existence to DNA sequencing technology.

Risks: Simply reading DNA is not harmful, but it is foundational for all of modern biotechnology. As the saying goes, knowledge is power, and the misuse of DNA information could have dire consequences. While DNA sequencing alone cannot make bioweapons, it’s hard to imagine waging biological warfare without being able to analyze the genes of infectious or deadly cells or viruses. And although one’s own DNA information has traditionally been considered personal and private, containing information about your ancestors, family, and medical conditions,  governments and corporations increasingly include a person’s DNA signature in the information they collect. Some warn that such databases could be used to track people or discriminate on the basis of private medical records – a dystopian vision of the future familiar to anyone who’s seen the movie GATTACA. Even supplying patients with their own genetic information has come under scrutiny, if it’s done without proper context, as evidenced by the dispute between the FDA and the direct-to-consumer genetic testing service 23andMe. Finally, DNA testing opens the door to sticky ethical questions, such as whether to carry to term a pregnancy after the fetus is found to have a genetic mutation.

 

2. Recombinant DNA

The modern field of biotechnology was born when scientists first manipulated – or ‘recombined’ –  DNA in a test tube, and today almost all aspects of society are impacted by so-called ‘rDNA’. Recombinant DNA tools allow researchers to choose a protein they think may be important for health or industry, and then remove that protein from its original context. Once removed, the protein can be studied in a species that’s simple to manipulate, such as E. coli bacteria. This lets researchers reproduce it in vast quantities, engineer it for improved properties, and/or transplant it into a new species. Modern biomedical research, many best-selling drugs, most of the clothes you wear, and many of the foods you eat rely on rDNA biotechnology.
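
Conceptually, the ‘recombining’ step is a cut-and-paste operation: a gene of interest is excised from its source DNA between two recognition sites and spliced into a carrier molecule such as a plasmid. The sketch below mimics that idea with simple string operations on invented sequences; the function names are hypothetical, and real cloning involves restriction enzymes, ligation, and verification steps that this illustration ignores.

```python
# Conceptual sketch only: real molecular cloning uses restriction enzymes,
# ligases, and verification; here the same "cut and paste" idea is shown as
# plain string surgery on invented sequences.

def cut_out_gene(source_dna, start_marker, end_marker):
    """Extract the stretch of DNA lying between two recognition sites."""
    start = source_dna.index(start_marker) + len(start_marker)
    end = source_dna.index(end_marker, start)
    return source_dna[start:end]

def insert_into_plasmid(plasmid, insertion_site, gene):
    """Paste the gene into a carrier plasmid just after a chosen site."""
    position = plasmid.index(insertion_site) + len(insertion_site)
    return plasmid[:position] + gene + plasmid[position:]

# Invented example sequences (not from any real organism).
source = "TTACCGGAATTCATGAAACCCGGGTTTTAAGGATCCCGTA"
gene = cut_out_gene(source, start_marker="GAATTC", end_marker="GGATCC")
recombinant = insert_into_plasmid("GGGCCCGAATTCAAATTT", insertion_site="GAATTC", gene=gene)
print(gene)          # ATGAAACCCGGGTTTTAA
print(recombinant)   # plasmid sequence now carrying the inserted gene
```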

Benefits: Simply put, our world has been reshaped by rDNA. Modern medical advances are unimaginable without the ability to study cells and proteins with rDNA and the tools used to make it, such as PCR, which helps researchers ‘copy and paste’ DNA in a test tube. An increasing number of vaccines and drugs are the direct products of rDNA. For example, nearly all insulin used in treating diabetes today is produced recombinantly. Additionally, cheese lovers may be interested to know that rDNA provides ingredients for a majority of hard cheeses produced in the West. Many important crops have been genetically modified to produce higher yields, withstand environmental stress, or grow without pesticides. Facing the unprecedented threats of climate change, many researchers believe rDNA and GMOs will be crucial in humanity’s efforts to adapt to rapid environmental changes.

Risks: The inventors of rDNA themselves warned the public and their colleagues about the dangers of this technology. For example, they feared that rDNA derived from drug-resistant bacteria could escape from the lab, threatening the public with infectious superbugs. And recombinant viruses, useful for introducing genes into cells in a petri dish, might instead infect the human researchers. Some of the initial fears were allayed when scientists realized that genetic modification is much trickier than initially thought, and once the realistic threats were identified – like recombinant viruses or the handling of deadly toxins –  safety and regulatory measures were put in place. Still, there are concerns that rogue scientists or bioterrorists could produce weapons with rDNA. For instance, it took researchers just 3 years to make poliovirus from scratch in 2006, and today the same could be accomplished in a matter of weeks. Recent flu epidemics have killed over 200,000, and the malicious release of an engineered virus could be much deadlier – especially if preventative measures, such as vaccine stockpiles, are not in place.

3. DNA Synthesis

Synthesizing DNA has the advantage of offering total researcher control over the final product. With many of the mysteries of DNA still unsolved, some scientists believe the only way to truly understand the genome is to make one from its basic building blocks. Building DNA from scratch has traditionally been too expensive and inefficient to be very practical, but in 2010, researchers did just that, completely synthesizing the genome of a bacterium and injecting it into a living cell. Since then, scientists have made bigger and bigger genomes, and recently, the GP-Write project launched with the intention of tackling perhaps the ultimate goal: chemically fabricating an entire human genome. Meeting this goal – and within a 10-year timeline – will require new technology and an explosion in manufacturing capacity. But the project’s success could signal the impact of synthetic DNA on the future of biotechnology.

Benefits: Plummeting costs and technical advances have made the goal of total genome synthesis seem much more immediate. Scientists hope these advances, and the insights they enable, will ultimately make it easier to make custom cells to serve as medicines or even bomb-sniffing plants. Fantastical applications of DNA synthesis include human cells that are immune to all viruses or DNA-based data storage. Prof. George Church of Harvard has proposed using DNA synthesis technology to ‘de-extinct’ the passenger pigeon, wooly mammoth, or even Neanderthals. One company hopes to edit pig cells using DNA synthesis technology so that their organs can be transplanted into humans. And DNA is an efficient option for storing data, as researchers recently demonstrated when they stored a movie file in the genome of a cell.

Risks: DNA synthesis has sparked significant controversy and ethical concerns. For example, when the GP-Write project was announced, some criticized the organizers for the troubling possibilities that synthesizing genomes could evoke, likening it to playing God. Would it be ethical, for instance, to synthesize Einstein’s genome and transplant it into cells? The technology to do so does not yet exist, and GP-Write leaders have backed away from making human genomes in living cells, but some are still demanding that the ethical debate happen well in advance of the technology’s arrival. Additionally, cheap DNA synthesis could one day democratize the ability to make bioweapons or other nuisances, as one virologist demonstrated when he made the horsepox virus (related to the virus that causes smallpox) with DNA he ordered over the Internet. (It should be noted, however, that the other ingredients needed to make the horsepox virus are specialized equipment and deep technical expertise.)

 

4. Genome Editing

Many diseases have a basis in our DNA, and until recently, doctors had very few tools to address the root causes. That appears to have changed with the recent discovery of a DNA editing system called CRISPR/Cas9. (A note on terminology – CRISPR is a bacterial immune system, while Cas9 is one protein component of that system, but both terms are often used to refer to the protein.) It operates in cells like a pair of DNA scissors, cutting the genome at chosen locations where scientists can insert their own sequence. While the capability of cutting DNA wasn’t unprecedented, Cas9 far outperforms older methods in effectiveness and ease of use. Even though it’s a biotech newcomer, much of the scientific community has already caught ‘CRISPR-fever,’ and biotech companies are racing to turn genome editing tools into the next blockbuster pharmaceutical.

Benefits: Genome editing may be the key to solving currently intractable genetic diseases such as cystic fibrosis, which is caused by a single genetic defect. If Cas9 can somehow be inserted into a patient’s cells, it could fix the mutations that cause such diseases, offering a permanent cure. Even diseases caused by many mutations, like cancer, or caused by a virus, like HIV/AIDS, could be treated using genome editing. Just recently, an FDA panel recommended a gene therapy for cancer, which showed dramatic responses for patients who had exhausted every other treatment. Genome editing tools are also used to make lab models of diseases, cells that store memories, and tools that can detect epidemic viruses like Zika or Ebola. And as described above, if a gene drive, which uses Cas9, is deployed effectively, we could eliminate diseases such as malaria, which kills nearly half a million people each year.

Risks: Cas9 has generated nearly as much controversy as it has excitement, because genome editing carries both safety issues and ethical risks. Cutting and repairing a cell’s DNA is not risk-free, and errors in the process could make a disease worse, not better. Genome editing in reproductive cells, such as sperm or eggs, could result in heritable genetic changes, meaning dangerous mutations could be passed down to future generations. And some warn of unethical uses of genome editing, fearing a rise of ‘designer babies’ if parents are allowed to choose their children’s traits, even though there are currently no straightforward links between one’s genes and their intelligence, appearance, etc. Similarly, a gene drive, despite possibly minimizing the spread of certain diseases, has the potential to create great harm since it is intended to kill or modify an entire species. A successful gene drive could have unintended ecological impacts, be used with malicious intent, or mutate in unexpected ways. Finally, while the capability doesn’t currently exist, it’s not out of the realm of possibility that a rogue agent could develop genetically selective bioweapons to target individuals or populations with certain genetic traits.

 


Trump to Pull US Out of Nuclear Treaty


Last week, U.S. President Donald Trump confirmed that the United States will be pulling out of the landmark Intermediate-Range Nuclear Forces Treaty (INF). The INF treaty, signed in 1987, banned ground-launched ballistic and cruise missiles with ranges of 500 km to 5,500 km (310 to 3,400 miles). Although the agreement covers land-based missiles that carry both nuclear and conventional warheads, it doesn’t cover any air-launched or sea-launched weapons.

Nonetheless, when it was signed by U.S. President Ronald Reagan and Soviet leader Mikhail Gorbachev, it led to the elimination of nearly 2,700 short- and medium-range missiles. More significantly, it helped bring an end to a dangerous nuclear standoff between the two nations, and the trust that it fostered played a critical part in defusing Cold War tensions.

Now, as a result of the recent announcements from the Trump administration, all of this may be undone. As Malcolm Chalmers, deputy director general of the Royal United Services Institute, stated in an interview with The Guardian, “This is the most severe crisis in nuclear arms control since the 1980s. If the INF treaty collapses, and with the New Start treaty on strategic arms due to expire in 2021, the world could be left without any limits on the nuclear arsenals of nuclear states for the first time since 1972.”

Of course, the U.S. isn’t the only player contributing to the unraveling of an arms treaty that helped curb competition and bring the Cold War to an end.

Reports indicate that Russia has been violating the INF treaty since at least 2014, a fact that was previously acknowledged by the Obama administration and which President Trump cited in his INF withdrawal announcement last week. “Russia has violated the agreement. They’ve been violating it for many years, and I don’t know why President Obama didn’t negotiate or pull out,” Trump stated. “We’re not going to let them violate a nuclear agreement and do weapons and we’re not allowed to.…so we’re going to terminate the agreement. We’re going to pull out,” he continued.

Trump also noted that China played a significant role in his decision to pull the U.S. out of the INF treaty. Since China was not a part of the negotiations and is not a signatory, the country faces no limits when it comes to developing and deploying intermediate-range nuclear missiles — a fact that China has exploited in order to amass a robust missile arsenal. Trump noted that the U.S. will  have to develop those weapons, “unless Russia comes to us and China comes to us and they all come to us and say, ‘let’s really get smart and let’s none of us develop those weapons, but if Russia’s doing it and if China’s doing it, and we’re adhering to the agreement, that’s unacceptable.”

 

A Growing Concern

Concerns over Russian missile systems that breach the INF treaty are real and valid. Equally valid are the concerns over China’s weapons strategy. However, experts note that President Trump’s decision to leave the INF treaty doesn’t set us on the path to the negotiating table, but rather, toward another nuclear arms race.

Russian officials have been clear in this regard, with Leonid Slutsky, who chairs the foreign affairs committee in Russia’s lower house of parliament, stating this week that a U.S. withdrawal from the INF agreement “would mean a real new Cold War and an arms race with 100 percent probability” and “a collapse of the planet’s entire nonproliferation and disarmament regime.”

This is precisely why many policy experts assert that withdrawal is not a viable option and, in order to achieve a successful resolution, negotiations must continue. Wolfgang Ischinger, the former German ambassador to the United States, is one such expert. In a statement issued over the weekend, he noted that he is “deeply worried” about President Trump’s plans to dismantle the INF treaty and urged the U.S. government to, instead, work to expand the treaty. “Multilateralizing this agreement would be a lot better than terminating it,” he wrote on Twitter.

Even if the U.S. government is entirely disinterested in negotiating, and the Trump administration seeks only to respond with increased weaponry, policy experts assert that withdrawing from the INF treaty is still an unavailing and unnecessary move. As Jeffrey Lewis, the director of the East Asia nonproliferation program at the Middlebury Institute of International Studies at Monterey, notes, the INF doesn’t prohibit sea- or air-based systems. Consequently, the U.S. could respond to Russian and Chinese political maneuverings with increased armament without escalating international tensions by upending longstanding treaties.

Indeed, since President Trump made his announcement, a number of experts have condemned the move and called for further negotiations. EU spokeswoman Maja Kocijancic said that the U.S. and Russia “need to remain in a constructive dialogue to preserve this treaty” as it “contributed to the end of the Cold War, to the end of the nuclear arms race and is one of the cornerstones of European security architecture.”  

Most notably, in a statement that was issued Monday, the European Union cautioned the U.S. against withdrawing from the INF treaty, saying, “The world doesn’t need a new arms race that would benefit no one and on the contrary, would bring even more instability.”

An image of Hurricane Michael making landfall October 11, 2018. Photo courtesy of NASA.

IPCC 2018 Special Report Paints Dire — But Not Completely Hopeless — Picture of Future


On Wednesday, October 10, the panhandle of Florida was struck by Hurricane Michael, which has already claimed over 30 lives and destroyed communities, homes and infrastructure across multiple states. Michael is the strongest hurricane in recorded history to make landfall in that region. And in coming years, it’s likely that we’ll continue to see an increase in record breaking storms — as well as record-breaking heat waves, droughts, floods, and wildfires.

Only two days before Michael unleashed its devastation on the United States, the United Nations’ Intergovernmental Panel on Climate Change (IPCC) released a dire report on the prospects for limiting global temperature rise to 1.5°C—and why we must meet this challenge head on.

In 2015, around the time the Paris Climate Agreement was being signed, global temperatures reached 1°C above pre-industrial levels. And we’re already feeling the impacts of this increase in the form of bigger storms, bigger wildfires, higher temperatures, melting Arctic ice, etc.

The recent IPCC report concludes that, if society continues on its current trajectory — and even if the world abides by the Paris Climate Agreement — the planet will hit 1.5°C of warming in a matter of decades, and possibly in the next 12 years. And every additional half degree of warming is expected to bring on even more extreme effects. Even if we can limit global warming to 1.5°C, the report predicts we’ll lose most coral reefs, sea levels will rise and flood many coastal communities, more people around the world will experience extreme heat waves, and other natural disasters can be expected to increase.

Global temperatures also don’t rise evenly across the globe. Air over land is expected to warm more than air over the oceans, so an average increase of 1.5°C across the Earth could mean a 3-4.5°C increase in some parts of the world. This has the potential to trigger deadly heat waves, wildfires and droughts, which would also negatively impact local ecosystems and farmland.

But what about if we reach 2°C? This level of temperature increase is often floated as the highest limit the world can handle without too much suffering – but how much worse will it be than 1.5°C?

A difference of 0.5°C may not seem like much, but it could mean the difference between a world with some surviving coral reefs, and a world in which they — and many other species — are all destroyed. Two degrees could lead to an extra 420 million people experiencing extreme and possibly deadly heat waves. Some regions of the world will see increases in temperatures as high as 4-6°C. Sea levels are predicted to rise an extra 10 centimeters at 2°C versus 1.5°C, which could impact an extra 10 million people along coastal areas.

Meanwhile, human health will deteriorate; diseases like malaria and dengue fever could become more prevalent and spread into new regions with this increase in temperature. Farmland for many staple crops could decrease, and even livestock are expected to be adversely affected as feed quality and water availability may decrease.

The list goes on and on. But perhaps one of the greatest threats of climate change is that those who will likely be the hardest hit by increasing temperatures are those who are already among the poorest and most vulnerable.

Yet we’re not quite out of time. As the report highlights, all of these problems arise as a result of society taking little to no action. But what if we did start taking steps to reduce global warming? What if we could get governments and corporations to recognize the need to reduce emissions and switch to clean, alternative, renewable energy sources? What if individuals made changes to their own lifestyles while also encouraging their government leaders to take action?

The report suggests that under those circumstances, if we can achieve global net-zero emissions — that is, if emissions of carbon and other pollutants fall low enough to be fully absorbed by trees, soil, and other natural sinks — then we can still prevent temperatures from exceeding 1.5°C. Temperatures will still increase somewhat as a result of current emissions, but there’s still time to curtail the most severe effects.

There are other organizations that believe we can achieve global net-zero emissions as well. For example, this summer, the Exponential Climate Action Roadmap was released, which offers a roadmap to achieve the goals of the Paris Climate Agreement by 2030. Or there’s The Solutions Project, which maps out steps to quickly achieve 100% renewable energy. And Drawdown provides 80 steps we can take to reduce emissions.

We don’t have much time left, but it’s not too late. The prospects are dire if we continue on our current trajectory, but if society can recognize the urgency of the situation and come together to take action, there’s still hope of keeping the worst effects of climate change at bay.

An edited version of this article was originally published on Metro. Photo courtesy of NASA.

Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett

In both 2016 and 2017, genome editing made it into the annual Worldwide Threat Assessment of the US Intelligence Community. (Update: it was also listed in the 2019 Threat Assessment.) One of biotechnology’s most promising modern developments, it had now been deemed a danger to US national security – and then, after two years, it was dropped from the list again. All of which raises the question: what, exactly, is genome editing, and what can it do? 

Most simply, the phrase “genome editing” represents tools and techniques that biotechnologists use to edit the genome – that is, the DNA or RNA of plants, animals, and bacteria. Though the earliest versions of genome editing technology have existed for decades, the introduction of CRISPR in 2013 “brought major improvements to the speed, cost, accuracy, and efficiency of genome editing.”

CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, is actually an ancient defense mechanism that bacteria use to recognize and cut up the DNA of invading viruses. In the lab, researchers have discovered they can replicate this process by creating a synthetic RNA strand that matches a target DNA sequence in an organism’s genome. The RNA strand, known as a “guide RNA,” is attached to an enzyme that can cut DNA. After the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at this location. DNA can then be removed, and new DNA can be added. CRISPR has quickly become a powerful tool for editing genomes, with research taking place in a broad range of plants and animals, including humans.
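
As a rough illustration of the targeting step described above, the sketch below scans a made-up DNA fragment for a 20-base site matching a guide sequence (written here in DNA letters for simplicity), checks for the ‘NGG’ motif – the protospacer-adjacent motif, or PAM – that the commonly used Cas9 enzyme requires next to its target, and reports where the cut would fall, about three bases upstream of that motif. The sequences and the helper function are invented; real guide design also has to weigh off-target matches elsewhere in the genome.

```python
# Toy illustration of CRISPR/Cas9 target finding: the guide must match a
# 20-base stretch of DNA immediately followed by an "NGG" PAM; Cas9 then
# cuts about 3 bases upstream of the PAM. All sequences are invented.

def find_cut_site(genome, guide):
    """Return the index where Cas9 would cut, or None if no valid target."""
    target_len = len(guide)
    for i in range(len(genome) - target_len - 2):
        protospacer = genome[i:i + target_len]
        pam = genome[i + target_len:i + target_len + 3]
        if protospacer == guide and pam[1:] == "GG":   # "NGG": any base, then GG
            return i + target_len - 3                  # cut ~3 bases upstream of the PAM
    return None

genome = "TTACGGATCCATGCTAGCTAGGCTAACTGCTGGCGTACGTAGCTAGCA"
guide  = "ATGCTAGCTAGGCTAACTGC"   # 20-base guide matching one site in the fragment

cut = find_cut_site(genome, guide)
if cut is not None:
    print(genome[:cut] + " | " + genome[cut:])   # show the simulated double-strand break
```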

A significant percentage of genome editing research focuses on eliminating genetic diseases. However, with tools like CRISPR, it also becomes possible to alter a pathogen’s DNA to make it more virulent and more contagious. Other potential uses include the creation of “‘killer mosquitos,’ plagues that wipe out staple crops, or even a virus that snips at people’s DNA.”

But does genome editing really deserve a spot among the ranks of global threats like nuclear weapons and cyber hacking? To many members of the scientific community, its inclusion felt like an overreaction. Among them was Dr. Piers Millett, a science policy and international security expert whose work focuses on biotechnology and biowarfare.

Millett wasn’t surprised that biotechnology in general made it into these reports: what he didn’t expect was for one specific tool, genome editing, to be called out. In his words: “I would personally be much more comfortable if it had been a broader sentiment to say ‘Hey, there’s a whole bunch of emerging biotechnologies that could destabilize our traditional risk equation in this space, and we need to be careful with that.’ …But calling out specifically genome editing, I still don’t fully understand any rationale behind it.”

This doesn’t mean, however, that the misuse of genome editing is not cause for concern. Even proper use of the technology often involves the genetic engineering of biological pathogens, research that could very easily be weaponized. Says Millett, “If you’re deliberately trying to create a pathogen that is deadly, spreads easily, and that we don’t have appropriate public health measures to mitigate, then that thing you create is amongst the most dangerous things on the planet.”

 

Biowarfare Before Genome Editing

A medieval depiction of the Black Plague.

Developments such as CRISPR present new possibilities for biowarfare, but biological weapons caused concern long before the advent of gene editing. The first recorded use of biological pathogens in warfare dates back to 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Many centuries later, during the 1346 AD siege of Caffa, the Mongol army catapulted plague-infested corpses into the city, which is thought to have contributed to the 14th century Black Death pandemic that wiped out up to two thirds of Europe’s population.

Though the use of biological weapons in war was banned by the 1925 Geneva Protocol, state biowarfare programs continued and in many cases expanded during World War II and the Cold War. In 1972, as evidence of these programs mounted, 103 nations signed a treaty known as the Biological Weapons Convention (BWC). The treaty bans the creation of biological arsenals and outlaws offensive biological research, though defensive research is permissible. Each year, signatories are required to submit certain information about their biological research programs to the United Nations, and violations reported to the UN Security Council may result in an inspection.

But inspections can be vetoed by the permanent members of the Security Council, and there are no firm guidelines for enforcement. On top of this, the line that separates permissible defensive biological research from its offensive counterpart is murky and remains a subject of controversy. And though the actual numbers remain unknown, pathologist Dr. Riedel asserts that “the number of state-sponsored programs [that have engaged in offensive biological weapons research] has increased significantly during the last 30 years.”

 

Dual Use Research

So biological warfare remains a threat, and it’s one that genome editing technology could hypothetically escalate. Genome editing falls into a category of research and technology that’s known as “dual-use” – that is, it has the potential both for beneficial advances and harmful misuses. “As an enabling technology, it enables you to do things, so it is the intent of the user that determines whether that’s a positive thing or a negative thing,” Millett explains.

And ultimately, what’s considered positive or negative is a matter of perspective. “The same activity can look positive to one group of people, and negative to another. How do we decide which one is right and who gets to make that decision?” Genome editing could be used, for example, to eradicate disease-carrying mosquitoes, an application that many would consider positive. But as Millett points out, some cultures view such blatant manipulation of the ecosystem as harmful or “sacrilegious.”

Millett believes that the most effective way to deal with dual-use research is to get the researchers engaged in the discussion. “We have traditionally treated the scientific community as part of the problem,” he says. “I think we need to move to a point where the scientific community is the key to the solution, where we’re empowering them to be the ones who identify the risks, the ones who initiate the discussion about what forms this research should take.” A good scientist, he adds, is one “who’s not only doing good research, but doing research in a good way.”

 

DIY Genome Editing

But there is a growing worry that dangerous research might be undertaken by those who are not scientists at all. There are already a number of do-it-yourself (DIY) genome editing kits on the market today, and these relatively inexpensive kits allow anyone, anywhere to edit DNA using CRISPR technology. Do these kits pose a real security threat? Millett explains that risk level can be assessed based on two distinct criteria: likelihood and potential impact. Where the “greatest” risks lie will depend on the criterion.

“If you take risk as a factor of likelihood of impact, the most likely attacks will come from low-powered actors, but have a minimal impact and be based on traditional approaches, existing pathogens, and well characterized risks and threats,” Millett explains. DIY genome editors, for example, may be great in number but are likely unable to produce a biological agent capable of causing widespread harm.

“If you switch it around and say where are the most high impact threats going to come from, then I strongly believe that that [type of threat] requires a level of sophistication and technical competency and resources that are not easy to acquire at this point in time,” says Millett. “If you’re looking for advanced stuff: who could misuse genome editing? States would be my bet in the foreseeable future.”
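
Millett’s framing is essentially the familiar two-axis view of risk, in which likelihood and impact are judged separately and can point to different ‘greatest’ threats. The toy comparison below uses invented actor categories and placeholder 1–5 scores – they are not figures from the interview or any real threat assessment – simply to show how the ranking changes depending on which axis, or which combination of the two, you emphasize.

```python
# Loose illustration of the likelihood-versus-impact framing discussed above.
# The actor profiles and 1-5 scores are invented placeholders, not figures
# from the interview or any real threat assessment.

actors = {
    "DIY genome editor": {"likelihood": 4, "impact": 1},
    "Terrorist group":   {"likelihood": 2, "impact": 3},
    "State-run program": {"likelihood": 1, "impact": 5},
}

def ranked_by(score):
    """Return actor names sorted from highest to lowest score."""
    return sorted(actors, key=score, reverse=True)

print("By likelihood:", ranked_by(lambda a: actors[a]["likelihood"]))
print("By impact:    ", ranked_by(lambda a: actors[a]["impact"]))
print("By product:   ", ranked_by(lambda a: actors[a]["likelihood"] * actors[a]["impact"]))
```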

State Bioweapons Programs

Large-scale bioweapons programs, such as those run by states, pose a double threat: there is always the possibility of accidental release alongside the potential for malicious use. Millett believes that these threats are roughly equal, a conclusion backed by a thousand page report from Gryphon Scientific, a US defense contractor.

Historically, both accidental release and malicious use of biological agents have caused damage. In 1979, there was the accidental release of aerosolized anthrax from the Sverdlovsk [now Ekaterinburg] bioweapons production facility in the Soviet Union – a clogged air filter in the facility had been removed, but had not been replaced. Ninety-four people were affected by the incident and at least 64 died, along with a number of livestock. The Soviet secret police attempted a cover-up and it was not until years later that the administration admitted the cause of the outbreak.

More recently, Millett says, a US biodefense facility “failed to kill the anthrax that it sent out for various lab trials, and ended up sending out really nasty anthrax around the world.” Though no one was infected, a 2015 government investigation revealed that “over the course of the last decade, 86 facilities in the United States and seven other countries have received low concentrations of live [anthrax] spore samples… thought to be completely inactivated.”

These incidents pale, however, in comparison with Japan’s intentional use of biological weapons during the 1930s and 40s. There is “a published history that suggests up to 30,000 people were killed in China by the Japanese biological weapons program during the lead up to World War II. And if that data is accurate, that is orders of magnitude bigger than anything else,” Millett says.

Given the near-impossibility of controlling the spread of disease, a deliberate attack may have accidental effects far beyond what was intended. The Japanese, for example, may have meant to target only a few Chinese villages, only to unwittingly trigger an epidemic. There are reports, in fact, that thousands of Japan’s own soldiers became infected during a biological attack in 1941.

Despite the 1972 ban on biological weapons programs, Millett believes that many countries still have the capacity to produce biological weapons. As an example, he explains that the Soviets developed “a set of research and development tools that would answer the key questions and give you all the key capabilities to make biological weapons.”

The BWC only bans offensive research, and “underneath the umbrella of a defensive program,” Millett says, “you can do a whole load of research and development to figure out what you would want to weaponize if you were going to make a weapon.” Then, all a country needs to start producing those weapons is “the capacity to scale up production very, very quickly.” The Soviets, for example, built “a set of state-based commercial infrastructure to make things like vaccines.” On a day-to-day basis, they were making things the Soviet Union needed. “But they could be very radically rebooted and repurposed into production facilities for their biological weapons program,” Millett explains. This is known as a “breakout program.”

Says Millett, “I believe there are many, many countries that are well within the scope of a breakout program … so it’s not that they necessarily at this second have a fully prepared and worked-out biological weapons program that they can unleash on the world tomorrow, but they might well have all of the building blocks they need to do that in place, and a plan for how to turn their existing infrastructure towards a weapons program if they ever needed to. These components would be permissible under current international law.”

 

Biological Weapons Convention

This unsettling reality raises questions about the efficacy of the BWC – namely, what does it do well, and what doesn’t it do well? Millett, who worked for the BWC for well over a decade, has a nuanced view.

“The very fact that we have a ban on these things is brilliant,” he says. “We’re well ahead on biological weapons than many other types of weapons systems. We only got the ban on nuclear weapons – and it was only joined by some tiny number of countries – last year. Chemical weapons, only in 1995. The ban on biological weapons is hugely important. Having a space at the international level to talk about those issues is very important.” But, he adds, “we’re rapidly reaching the end of the space that I can be positive about.”

The ban on biological weapons was motivated, at least in part, by the sense that – unlike chemical weapons – they weren’t particularly useful. Traditionally, chemical and biological weapons were dealt with together. The 1925 Geneva Protocol banned both, and the original proposal for the Biological Weapons Convention, submitted by the UK in 1969, would have dealt with both. But the chemical weapons ban was ultimately dropped from the BWC, Millett says, “because that was during Vietnam, and so there were a number of chemical agents that were being used in Vietnam that weren’t going to be banned.” Once the scope of the ban had been narrowed, however, both the US and the USSR signed on.

Millett describes the resulting document as “aspirational.” He explains, “The Biological Weapons Convention is four pages long, whereas the [1993] Chemical Weapons Convention is 200 pages long, give or take.” And the difference “is about the teeth in the treaty.”

“The BWC is…a short document that’s basically a commitment by states not to make these weapons. The Chemical Weapons Convention is an international regime with an organization, with an inspection regime intended to enforce that. Under the BWC, if you are worried about another state, you’re meant to try to resolve those concerns amicably. But if you can’t do that, we move onto Article Six of the Convention, where you report it to the Security Council. The Security Council is meant to investigate it, but of course if you’re a permanent member of the Security Council, you can veto that, so that doesn’t happen.”

 

De-escalation

One easy way that states can avoid raising suspicion is to be more transparent. As Millett puts it, “If you’re not doing naughty things, then it’s on you to demonstrate that you’re not.” This doesn’t mean revealing everything to everybody. It means finding ways to show other states that they don’t need to worry.

As an example, Millett cites the heightened security culture that developed in the US after 9/11. Following the 2001 anthrax letter attacks, as well as a large investment in US biodefense programs, an initiative was started to prevent foreigners from working in those biodefense facilities. “I’m very glad they didn’t go down that path,” says Millett, “because the greatest risk, I think, was not that a foreign national would sneak in.” Rather, “the advantage of having foreign nationals in those programs was at the international level, when country Y stands up and accuses the US of having an illicit bioweapons program hidden in its biodefense program, there are three other countries that can stand up and say, ‘Well, wait a minute. Our scientists are in those facilities. We work very closely with that program, and we see no evidence of what you’re saying.’”

Historically, secrecy surrounding bioweapons programs has led other countries to begin their own research. Before World War I, the British began exploring the use of bioweapons. The Germans were aware of this. By the onset of the war, the British had abandoned the idea, but the Germans, not knowing this, began their own bioweapons program in an attempt to keep up. By World War II, Germany no longer had a bioweapons program. But the Allies believed they still did, and the U.S. bioweapons program was born of such fears.

 

What now?

Asked if he believes genome editing is a bioweapons “game changer”, Millett says no. “I see it as an enabling technology in the short to medium term, then maybe with longer-term implications [for biowarfare], but then we’re out into the far distance of what we can reasonably talk about and predict,” he says. “Certainly for now, I think its big impact is it makes it easier, faster, cheaper, and more reliable to do things that you could do using traditional approaches.”

But as biotechnology continues to evolve, so too will biowarfare. For example, it will eventually be possible for governments to alter specific genes in their own populations. “Imagine aerosolizing a lovely genome editor that knocks out a specifically nasty gene in your population,” says Millett. “It’s a passive thing. You breathe it in [and it] retroactively alters the population[’s DNA].”

A government could use such technology to knock out a gene linked to cancer or other diseases. But, Millett says, “what would happen if you came across a couple of genes that at an individual level were not going to have an impact, but at a population level were connected with something, say, like IQ?” With the help of a genome editor, a government could make their population smarter, on average, by a few IQ points.

“There’s good economic data that says that [average IQ] is … statistically important,” Millett says. “The GDP of the country will be noticeably affected if we could just get another two or three percent IQ points. There are direct national security implications of that. If, for example, Chinese citizens got smarter on average over the next couple of generations by a couple of IQ points per generation, that has national security implications for both the UK and the US.”

For now, such an endeavor remains in the realm of science fiction. But technology is evolving at a breakneck speed, and it’s more important than ever to consider the potential implications of our advancements. That said, Millett is optimistic about the future. “I think the key is the distribution of bad actors versus good actors,” he says. As long as the bad actors remain the minority, there is more reason to be excited for the future of biotechnology than there is to be afraid of it.

Dr. Piers Millett holds fellowships at the Future of Humanity Institute, the University of Oxford, and the Woodrow Wilson Center for International Policy and works as a consultant for the World Health Organization. He also served at the United Nations as the Deputy Head of the Biological Weapons Convention.  

Cognitive Biases and AI Value Alignment: An Interview with Owain Evans


At the core of AI safety lies the value alignment problem: how can we teach artificial intelligence systems to act in accordance with human goals and values?

Many researchers interact with AI systems to teach them human values, using techniques like inverse reinforcement learning (IRL). In theory, with IRL, an AI system can learn what humans value and how to best assist them by observing human behavior and receiving human feedback.
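To make the inference pattern concrete, here is a minimal, purely illustrative Python sketch of the idea behind IRL: a few hypothetical reward functions are compared by how well each explains some observed choices, under the common modeling assumption that the human chooses approximately rationally (a softmax, or Boltzmann-rational, choice model). The options, payoffs, and observations are invented for illustration and are not taken from Evans and Stuhlmüller’s work.

```python
# Minimal, illustrative sketch of the inference pattern behind inverse
# reinforcement learning: infer which reward function best explains observed
# choices, assuming approximately (softmax) rational behavior.
# All options, rewards, and observations are hypothetical.
import numpy as np

options = ["salad", "burger"]
candidate_rewards = {
    "values_health": np.array([1.0, 0.0]),  # hypothetical: prefers salad
    "values_taste":  np.array([0.0, 1.0]),  # hypothetical: prefers burger
}
observed_choices = ["salad", "salad", "burger"]  # toy observations of behavior
beta = 2.0  # "rationality" parameter: higher means choices track reward more closely

def choice_probability(choice, reward):
    """Probability of a choice under a softmax (Boltzmann-rational) model."""
    probs = np.exp(beta * reward) / np.exp(beta * reward).sum()
    return probs[options.index(choice)]

# Uniform prior over reward hypotheses; multiply in the likelihood of each observation.
posterior = {}
for name, reward in candidate_rewards.items():
    p = 1.0
    for choice in observed_choices:
        p *= choice_probability(choice, reward)
    posterior[name] = p

total = sum(posterior.values())
for name, p in posterior.items():
    print(f"{name}: {p / total:.2f}")  # normalized posterior over reward hypotheses
```

The whole sketch leans on the assumption that behavior noisily reflects values – exactly the assumption the next paragraphs call into question.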

But human behavior doesn’t always reflect human values, and human feedback is often biased. We say we want healthy food when we’re relaxed, but then we demand greasy food when we’re stressed. Not only do we often fail to live according to our values, but many of our values contradict each other. We value getting eight hours of sleep, for example, but we regularly sleep less because we also value working hard, caring for our children, and maintaining healthy relationships.

AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, some researchers worry that systems trained with IRL will be fundamentally unable to distinguish between value-aligned and misaligned behavior. This could become especially dangerous as AI systems become more powerful: inferring the wrong values or goals from observing humans could lead these systems to adopt harmful behavior.

 

Distinguishing Biases and Values

Owain Evans, a researcher at the Future of Humanity Institute, and Andreas Stuhlmüller, president of the research non-profit Ought, have explored the limitations of IRL in teaching human values to AI systems. In particular, their research exposes how cognitive biases make it difficult for AIs to learn human preferences through interactive learning.

Evans elaborates: “We want an agent to pursue some set of goals, and we want that set of goals to coincide with human goals. The question then is, if the agent just gets to watch humans and try to work out their goals from their behavior, how much are biases a problem there?”

In some cases, AIs will be able to understand patterns of common biases. Evans and Stuhlmüller discuss the psychological literature on biases in their paper, Learning the Preferences of Ignorant, Inconsistent Agents, and in their online book, agentmodels.org. An example of a common pattern discussed in agentmodels.org is “time inconsistency.” Time inconsistency is the idea that people’s values and goals change depending on when you ask them. In other words, “there is an inconsistency between what you prefer your future self to do and what your future self prefers to do.”

Examples of time inconsistency are everywhere. For one, most people value waking up early and exercising if you ask them before bed. But come morning, when it’s cold and dark out and they didn’t get those eight hours of sleep, they often value the comfort of their sheets and the virtues of relaxation. From waking up early to avoiding alcohol, eating healthy, and saving money, humans tend to expect more from their future selves than their future selves are willing to do.
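For readers who want to see how such a pattern can be modeled, below is a small, purely illustrative Python sketch of time inconsistency using hyperbolic discounting, the kind of agent model discussed on agentmodels.org (the payoff numbers are invented). The same agent prefers to exercise when asked the night before, but prefers to sleep in once the morning arrives.

```python
# Illustrative sketch of time inconsistency via hyperbolic discounting.
# All payoffs are hypothetical; only the qualitative preference reversal matters.
def discount(delay, k=1.0):
    """Hyperbolic discount factor for a payoff `delay` steps in the future."""
    return 1.0 / (1.0 + k * delay)

def preferred_action(steps_until_morning):
    """What the agent prefers, evaluated `steps_until_morning` steps ahead."""
    effort_cost, workout_benefit, sleep_pleasure = 4.0, 12.0, 3.0
    # The effort is paid in the morning; the workout's benefit arrives one step later.
    exercise = (discount(steps_until_morning) * -effort_cost
                + discount(steps_until_morning + 1) * workout_benefit)
    sleep_in = discount(steps_until_morning) * sleep_pleasure
    return "exercise" if exercise > sleep_in else "sleep in"

print("Asked the night before:", preferred_action(1))  # -> exercise
print("Asked in the morning:  ", preferred_action(0))  # -> sleep in
```

Because near-term costs and pleasures are discounted far less steeply in the moment than they were the night before, the preference flips – the signature of time inconsistency.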

With systematic, predictable patterns like time inconsistency, IRL could make progress with AI systems. But often our biases aren’t so clear. According to Evans, deciphering which actions coincide with someone’s values and which actions spring from biases is difficult or even impossible in general.

“Suppose you promised to clean the house but you get a last minute offer to party with a friend and you can’t resist,” he suggests. “Is this a bias, or your value of living for the moment? This is a problem for using only inverse reinforcement learning to train an AI — how would it decide what are biases and values?”

 

Learning the Correct Values

Despite this conundrum, understanding human values and preferences is essential for AI systems, and developers have a very practical interest in training their machines to learn these preferences.

Already today, popular websites use AI to learn human preferences. With YouTube and Amazon, for instance, machine-learning algorithms observe your behavior and predict what you will want next. But while these recommendations are often useful, they have unintended consequences.

Consider the case of Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina. After watching videos of Trump rallies to learn more about his voter appeal, Tufekci began seeing white nationalist propaganda and Holocaust denial videos on her “autoplay” queue. She soon realized that YouTube’s algorithm, optimized to keep users engaged, predictably suggests more extreme content as users watch more videos. This led her to call the website “The Great Radicalizer.”

This value misalignment in YouTube algorithms foreshadows the dangers of interactive learning with more advanced AI systems. Instead of optimizing advanced AI systems to appeal to our short-term desires and our attraction to extremes, designers must be able to optimize them to understand our deeper values and enhance our lives.

Evans suggests that we will want AI systems that can reason through our decisions better than humans can, understand when we are making biased decisions, and “help us better pursue our long-term preferences.” However, this means such systems will sometimes suggest things that seem bad to humans at first blush.

One can imagine an AI system suggesting a brilliant, counterintuitive modification to a business plan, and the human just finds it ridiculous. Or maybe an AI recommends a slightly longer, stress-free driving route to a first date, but the anxious driver takes the faster route anyway, unconvinced.

To help humans understand AIs in these scenarios, Evans and Stuhlmüller have researched how AI systems could reason in ways that are comprehensible to humans and can ultimately improve upon human reasoning.

One method (invented by Paul Christiano) is called “amplification,” where humans use AIs to help them think more deeply about decisions. Evans explains: “You want a system that does exactly the same kind of thinking that we would, but it’s able to do it faster, more efficiently, maybe more reliably. But it should be a kind of thinking that if you broke it down into small steps, humans could understand and follow.”

A second concept is “factored cognition” – the idea of breaking sophisticated tasks into small, understandable steps. According to Evans, it’s not clear how generally factored cognition can succeed. Sometimes humans can break down their reasoning into small steps, but often we rely on intuition, which is much more difficult to break down.
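As a rough illustration of the decomposition idea – not Ought’s actual system – the sketch below recursively splits a question into hypothetical sub-questions, answers each one in isolation, and combines the results. In a real amplification setup, humans (or models imitating them) would supply both the decomposition and the leaf answers, and each step would stay small enough for a person to check.

```python
# Illustrative sketch of factored cognition: answer a question by decomposing
# it into smaller sub-questions, answering each in isolation, and combining
# the results. The decomposition policy and "workers" are hypothetical stand-ins.
from typing import List

def answer_leaf(question: str) -> str:
    # Stand-in for a small, human-checkable unit of work: one person (or model)
    # answering a single narrow question with limited context.
    return f"<answer to: {question}>"

def decompose(question: str) -> List[str]:
    # Stand-in decomposition policy; in a real system this step would itself
    # be produced by a human or a model, not hard-coded.
    if "trip" in question:
        return ["What is the budget?", "Which dates are free?", "Which destinations fit both?"]
    return []

def factored_answer(question: str, depth: int = 0, max_depth: int = 2) -> str:
    subquestions = decompose(question) if depth < max_depth else []
    if not subquestions:
        return answer_leaf(question)
    sub_answers = [factored_answer(q, depth + 1, max_depth) for q in subquestions]
    # Combining the sub-answers is itself a small, checkable step.
    return f"<combined from {len(sub_answers)} sub-answers: {sub_answers}>"

print(factored_answer("How should I plan my trip?"))
```

The open question Evans points to is whether intuition-heavy reasoning can be broken into steps like these at all.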

 

Specifying the Problem

Evans and Stuhlmüller have started a research project on amplification and factored cognition, but they haven’t solved the problem of human biases in interactive learning – rather, they’ve set out to precisely lay out these complex issues for other researchers.

“It’s more about showing this problem in a more precise way than people had done previously,” says Evans. “We ended up getting interesting results, but one of our results in a sense is realizing that this is very difficult, and understanding why it’s difficult.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa


To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

Although politicians at the U.N. General Assembly, just blocks away, highlighted the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals – weapons that have nearly been unleashed by mistake dozens of times in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn’t discuss their wartime actions for fear of displeasing their government, so Elena first heard about her father’s heroic actions only in 1998 – 15 years after the event. Even then, she and her brother learned what their father had done only because a German journalist had reached out to the family for an article he was working on. It’s unclear whether Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.

Last year’s Nobel Peace Prize Laureate, Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that, as ever more human decisions are delegated to automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him”, said FLI co-founder Anthony Aguirre. Last year’s award was given to Vasili Arkhipov, who singlehandedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.

Stanislav Petrov around the time he helped avert WWIII