Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year
Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.
- FLI's perspectives on 2020 and hopes for 2021
- What our favorite projects from 2020 were
- The biggest lessons we've learned from 2020
- What we see as crucial and needed in 2021 to make improvements towards existential safety
54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue
56:00 Jared Brown on the need for robust government engagement
57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation
1:00:10 Outro
Transcript
Lucas Perry: Welcome to The Future of Life Institute Podcast. I'm Lucas Perry. Today's episode is an attempt to share FLI's review of 2020, as well as our hopes for 2021. We explore three questions with members of our core team to share what our favorite projects at FLI were, what we've learned from 2020, and what is needed in the realm of existential risk reduction in 2021. If you're curious to know more about our team, you can visit the Who We Are tab on our front page. I'm deeply grateful for all of our listeners that have joined us in exploring existential risk issues in 2020, and I'm excited for the new year. Thank you for joining us so far and for taking part in these crucial conversations.
Let's start off the new year by jumping into our first question, what was your favorite FLI project from 2020? To start things off, I would like to introduce the FLI president, Max Tegmark.
Max Tegmark: I'm Max Tegmark. My day job is being a professor at MIT where I do machine learning and physics research, but on nights and weekends, I help out as the president of The Future of Life Institute to make sure that the future of life exists. My favorite Future of Life Institute project from 2020 has got to be The Future of Life Award. We have this tradition where we celebrate some unsung heroes who've done enormously awesome things for humanity, even though most people haven't heard of them and they have way less name recognition than even a C-list Hollywood actor.
And this year, many of my friends, both inside and outside The Future of Life Institute, were worried that we were going to fail with this year's Future of Life Award because we had set such impossible standards. The first award went to Vasili Arkhipov, who single-handedly prevented a Soviet nuclear attack on the US Navy. And if it hadn't been for him, I think we wouldn't be here having this conversation today. The second one went to Stanislav Petrov for helping avert a different accidental nuclear war between the US and the Soviet Union when his early warning station outside Moscow said that there were five incoming US missiles. And then last year, we gave it to Professor Matthew Meselson, who has done more than anyone else, I think, to prevent an arms race in biological weapons.
And it's really thanks to him that we think of biology as a positive force that can help us develop vaccines and cure cancer and other things like this, rather than just as new tools for killing people. So how could we possibly follow up with winners as worthy? The good news is, I think David Nicholson and the rest of the FLI team did an amazing job on this search, and we found and awarded the two people who helped save a whopping 200 million lives so far, and counting. It's a truly insane number of people we would have lost otherwise. And the way they did this was they took this awful disease of smallpox, which really makes COVID-19 feel like a cakewalk.
Smallpox would kill about one third of everybody who got it. The year I was born, it killed 15 million people. Last year, it killed zero because it was eradicated. This was not only something which took amazing scientific and strategic ingenuity, as demonstrated by Bill Foege, who got one half of the award, but also amazing audacity on the diplomatic front, where Viktor Zhdanov helped persuade the two superpowers in the middle of the Cold War to collaborate, the US and the Soviet Union teaming up against smallpox. I found this to be very inspiring on multiple levels, both to get to honor these amazing unsung heroes, and also to remind ourselves that it's good to be audacious.
It brings out the best in humanity when we set really audacious and inspirational goals and then team up and actually get them done. And finally, in a rough year like the one we've had with COVID-19, it's important to remember that we've actually gotten through much worse challenges than this one when we have teamed up against them and used both smart science and smart scientific collaboration. So that was my favorite.
Lucas Perry: And with that, I'd like to introduce Anthony Aguirre.
Anthony Aguirre: Something that I got involved in with a few collaborators, Gaia Dempsey, Harry Surden, and Peter Reiner, is something we called AI Loyalty. This is a policy research paper, and the idea of it, and I felt really good about how it turned out, was to think about the following question. When you go to a doctor or a lawyer or certain other professionals and have them work for you, or if you hire a personal assistant, you expect that they will basically work in your interest. If there is some conflict of interest between you and some other party, they'll take your side in it; they're working for you. And in general, they'll make decisions and make recommendations that put your interest as paramount.
Sometimes this is legally formalized in terms of fiduciary responsibilities, and there are legal restrictions about what, for example, a doctor can and cannot recommend. They're supposed to recommend things purely and foremost on the basis of what's best for the patient. So we're used to this relationship with the people who are working for us, whom we've hired to do things. But if we think about what's happening in AI, suppose you have Alexa or Siri or Google on your phone and you ask them to do something for you as a personal assistant, who exactly are they working for?
If you ask Alexa for a recommendation, Alexa is probably not going to recommend that you go and buy something that isn't sold by Amazon, right? And if you ask Google for a recommendation of something, even a route to get from one place to another, what exactly goes into determining that route? Is it purely your interests, the shortest possible route, or are there other considerations that come in? Obviously, there will be; for example, Google probably won't route a series of people through a small neighborhood, and that's a good thing. But given that two routes are of pretty similar length, what's to stop Google routing people closer to some advertising customers versus farther away? Nothing.
And we would never know, because those algorithms by which Amazon ranks or suggests things, or Google suggests things, are totally opaque to the user. And ultimately, it's pretty clear that Alexa is essentially working for Amazon and Google is essentially working for Google. Neither of them is working for you the way a personal assistant, if you hired one, would. If you hired a personal assistant, even through a personal assistant firm, say, you would not expect them to take that firm's interest and put it above yours; you would expect your interest to be primary.
So the question is, as we create more and more capable AI assistants, should they have this sort of responsibility? Should they ultimately be working for some giant tech firm, with our interests only incidental, in that they have to keep us happy in order to keep us engaged and using the system? Or should those systems be designed and required to act primarily in the user's interest, the way that a human fiduciary would? And this is the premise of the paper: that we should at least have a paradigm, maybe a set of standards, a set of qualifications for what it means to have a loyal AI system, one that works primarily for its user, with conflicts of interest either resolved in the user's favor or at least made explicit. That is a real feature that systems could be given, and even a competitive advantage, in the sense that I think many people, given the choice between a system that is loyal to them and a system that's loyal to some giant tech company, will choose the one that's loyal to them.
That sort of loyalty allows, in principle, all sorts of additional capabilities that current systems don't have. If you really were able to trust an AI system or a personal assistant, it could do a lot more things for you than a system that you don't really know whether you can trust. So I think this allows new capabilities that wouldn't be possible in a system like Alexa right now. People tell Alexa a lot of things, but I think most people would be hesitant to say, "Hey, Alexa, listen to this incredibly personal thing that I want to tell you." Because people are getting to understand more and more that Alexa exists primarily to gather data from them and use it for the purposes of Amazon.
And if people had a system where they genuinely felt, for good reason, with legal responsibility and oversight on the part of the company, that their information was used in a trustworthy way for their benefit, they would be able to share a lot more information, and those systems could do a lot more interesting things. For example, imagine COVID-19 10 years from now, with highly loyal and privacy-respecting AI assistants. That's the sort of system where you could share all kinds of personal information with your AI assistant, and that assistant could share exactly the minimum necessary, respecting your privacy, with other systems in a contact tracing and advice system for what you could do personally about COVID-19. That would be dramatically more effective than what we have now, which is essentially nothing, because all of these test-and-trace technological solutions that were developed with great skill and ingenuity have fallen flat, at least in the US, because nobody wants to adopt them. They don't trust them.
So this paper was a lot of fun. You can look it up, it's called AI Loyalty: A New Paradigm For Aligning Stakeholder Interests. And I'm really hopeful that either some companies will take up this idea or perhaps some policymakers will take up this idea and help design the standards or rules or something that allows more of these systems to come into being rather than what we're going to get otherwise.
Lucas Perry: And now I'd like to introduce David Nicholson.
David Nicholson: My favorite project was The Future of Life Award. Outside of it being my primary focus at FLI, it was also the project that gave me the most hope for the immediate future. And of course we know the award went to Viktor Zhdanov and Bill Foege for their efforts to eradicate smallpox. And I think the efforts of these men demonstrate a couple of things that the world needs to keep in mind as we move through the century. I think the first one is that international cooperation between states is crucial to address global challenges. And it's, I think, a simple fact that smallpox would still be with us if the Soviet Union and the US had not given their experts the space to marshal resources against the disease.
So if we translate this story to global health and existential risk, I think it's clear that countries need to be able to put aside their differences to address common threats. In this case, through the 2020 FLI award, we saw Zhdanov address the World Health Organization during the Cold War, and by virtue of his address, countries like the US were able to work with the World Health Organization to take on smallpox. And then of course, Bill Foege took up the effort through his work with the CDC and eventually eradicated the disease in Africa.
And so, international cooperation is the cornerstone of this particular part of the story. And that's why I think, for the immediate future, this is where we have to give voice to political leadership to make sure that they are thinking about international cooperation. And I do hope, with the change in administrations, that that is going to be the case. The second aspect of the story of the eradication of smallpox is that it took social trust among the general population in developing countries to achieve eradication. It's an aspect of the story that is often missed, because we too often focus on the technical achievement, which in the case of Bill Foege was his containment and surveillance effort. But without the social trust of the population, that would have been a much, much more difficult enterprise.
And I think we're seeing that with COVID-19, where the social trust that we're supposed to have in global health experts is not present to a degree sufficient to allow us to really take on this particular disease. So, international cooperation and social trust are two parts of The Future of Life Award that are important as key takeaways from the story itself.
Lucas Perry: And with that, I'd like to introduce Emilia Javorsky.
Emilia Javorsky: Yes. At The Future of Life Institute this past year, we had the opportunity to be what was called a co-champion for the UN Secretary General's effort on digital cooperation. And the effort that we helped to co-champion as a member of civil society was a recommendation in the report called Recommendation 3C, and it was on artificial intelligence. And what this recommendation sought to outline was, what are some of the key considerations for AI going forward? Things like it being human rights based and peace promoting and safe and trustworthy. And as part of that recommendation, one of the key points that was made was this idea that life and death decisions must not be delegated to machines.
And given our work at The Future of Life Institute, both on safe and trustworthy AI and more broadly on lethal autonomous weapons, that was a really exciting piece of text to come out of that effort, and it was also just exciting to have the opportunity to be part of the effort more broadly. I think this starts to highlight some of the key principles and topics that the global community needs to take up in 2021, within the UN and more broadly. So seeing that kind of language was really exciting for me, as was participating in that multi-stakeholder effort that involved member states, members of industry, people from academia, people from civil society, and multiple UN agencies. It was a real privilege to be a part of that in 2020.
Lucas Perry: And now I'd like to introduce Jared Brown.
Jared Brown: It took a lot of work to get done, but I really enjoyed, at the beginning of the year and into the spring of 2020, working with a number of organizations like FLI in this space. We worked with CHAI, we worked with CSER, we worked with the Future of Humanity Institute, we worked with The Future Society and others to develop very thoughtful, constructive responses to the European Union's white paper on AI. And I should give special thanks here to our research assistant, Charlotte, on that project. She was amazing. Collectively, we put in a lot of effort to think through the EU's position on how they're going to govern AI.
And I think individually, as organizations, we submitted some very thoughtful responses on how the EU can go about managing the risks of AI. This is going to be bedrock policymaking for many, many years to come. And I think our initial responses to their proposals were well received by European Union politicians and policymakers, and I really appreciated the effort that the community put into that process. I should say that this is all available at FLI's website, futureoflife.org/policywork, where you can access our actual comments. The key topic for The Future of Life Institute was really how the European Union can adopt a regulatory approach that is adaptable to future capability development of artificial intelligence.
We specifically called out the need to think through how the EU can regulate what the technical community calls an online learning system, which policymakers take to mean just being on the web, but which in technical AI terms is a system that continues to learn from the data that users feed into it. Those systems are particularly challenging to regulate because they're always evolving as they're being used. The system that you were able to assess and evaluate before it went to market is under constant development, because it continues to learn while it's being used in the market.
Those systems in particular are going to be a challenge for regulators to assess the safety of, and so we really put forward some ideas to the European Union about how they can address the particular challenges through liability schemes, through post-market enforcement mechanisms and so forth.
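To make the technical point concrete, here is a minimal, purely illustrative sketch of an online learning system in the sense Jared describes: a model that keeps updating from user data after deployment, so the version a regulator could assess before market entry is not the version running a month later. The model, data, and library choice are assumptions for illustration and are not drawn from FLI's EU comments.

```python
# Illustrative sketch of an online learning system: the model keeps updating
# from user data after deployment, so its behavior drifts away from whatever
# was assessed before it went to market. Model and data are stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Pre-market phase: the version a regulator could assess and evaluate.
X_initial = rng.normal(size=(1000, 4))
y_initial = (X_initial[:, 0] + X_initial[:, 1] > 0).astype(int)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Post-market phase: the deployed system keeps learning from a stream of
# user data, one batch at a time, and its decision boundary keeps shifting.
for day in range(30):
    X_users = rng.normal(size=(50, 4))
    y_users = (X_users[:, 0] + X_users[:, 1] > 0).astype(int)  # stand-in for user feedback
    model.partial_fit(X_users, y_users)
    # After each update, the pre-market evaluation no longer describes this exact model.
```

The regulatory challenge follows directly from this: any pre-market assessment is only a snapshot, which is why ideas like liability schemes and post-market enforcement come into play.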
Lucas Perry: And with that, I'd like to introduce Tucker Davey.
Tucker Davey: Oh, if I had to pick my favorite FLI project this year, it was editing the biography of Viktor Zhdanov. This was an absolutely fascinating book, and I was really glad to be pulled into the project last minute to help out with it. Viktor Zhdanov was a Russian virologist, and reading this book was just absolutely fascinating. I mean, this guy had seemingly unlimited energy. He was working on AIDS, hepatitis, cancer. He was also writing books on viruses at the same time. And his biggest achievement was helping to eradicate smallpox. He was the type of director who leads an institute and isn't just sitting in the office, but is actually doing the research too.
Viktor Zhdanov and Bill Foege were presented The Future of Life Award this year for their work to eradicate smallpox in the '60s and '70s. We give this award to someone who has not received recognition for their contributions to society, so someone who's already gotten a bunch of Nobel prizes won't get this award. Viktor Zhdanov should have gotten so many awards for his work, and he got none of them. And it seems like the reason he got none of them was actually pretty calculated. At the time, in the '60s and '70s, he was a huge proponent of international collaboration, and he was working with scientists in the United States and across the world on vaccinations and trying to initiate global efforts to eradicate certain diseases.
And he was specifically hated by the Soviet Union because the Soviet Union had this idea that they should not need to go anywhere in the West for scientific collaboration or for answers to their problems. So, often, Viktor Zhdanov would want to work with international scientists, he'd have them to his home, he'd find international conferences. And often, the Soviet Union and the KGB would stop him from traveling or stop him from having guests over, because they didn't like the international collaboration. He was really an incredible role model from every story that I've heard and that I've read in this book.
The tragedy of it is, first of all, that he didn't get much recognition for his work. But second of all, quite literally, the pressure from the KGB led to his death. He was receiving death threats, and he was getting kicked out of his organization and losing his leadership, all because these KGB informants were sabotaging his work. It led to him becoming so stressed and overwhelmed with the situation that he had a stroke, and 10 days later he was dead. These stories are unbelievable: this man sacrificed his entire life to work for the betterment of humanity, trying to understand viruses and understand how we can create mass vaccinations globally.
This is a hero. This is the type of person that we should be admiring and looking up to, especially now in this crisis with COVID. How can we look to this model of international cooperation, this fearless scientist, unwilling to succumb to his nationalistic government's desire to stay closed? He was committed to treating viruses and diseases as a global, international effort. And I think that's exactly the type of energy and spirit we need now. So the book was just fascinating. At FLI, we're going to get it published in paperback. If you stay in touch with our newsletter, you'll get an update on that. I really highly encourage people to read it. It's a great book.
It's only 60 or so pages, and it's written by his wife, Elena. She has a great sense of humor. She's a scientist as well. And the story just paints a really vivid picture, both of Viktor Zhdanov and of being a scientist in the Soviet Union during the '60s and '70s. So that would be my encouragement to all of our listeners: check out that book when we publish it.
Lucas Perry: As for me, I've really valued working on and hosting the FLI Podcast and AI Alignment Podcast. It's become increasingly clear to me the importance and imperative of educating people on existential and global catastrophic risk issues. It's pretty easy from within your own filter bubble to have a pretty skewed understanding or perspective on the degree to which there is knowledge or awareness about some issue. From some broader perspective and experience that I've had this year, it's become increasingly clear to me that the public and government do not understand, in pretty simple ways, existential and global risk issues. And due to this ignorance of the subject, they do not take them seriously enough, which puts us all at risk.
So it's really important for the existential risk community to have outreach efforts which target both the public broadly and persons in government who may have the power to shape legislation or form and instruct institutions in ways which would be beneficial for mitigating existential and global catastrophic risks. Without a basic understanding and awareness of these issues, it's really difficult to get anything done or to have people be naturally motivated and interested in working on and solving these issues. As the podcast has focused on promoting awareness, we've grown 23% in 2020 with about 300,000 listens.
We've had guests including Yuval Noah Harari, Sam Harris, Steven Pinker, Stuart Russell, George Church, and thinkers from OpenAI, DeepMind and MIRI. So I'm really happy with the degree to which our podcast has both interviewed domain experts quite deeply on important issues with regard to existential risk and also had a lot of wonderful opportunities to expand the audience by bringing in bigger-name guests who can help increase exposure around these issues and broaden our audience to include a wider range and variety of listeners.
I'd also like to highlight the Pindex video which we worked on this year. It's called This $8 Trillion Coronavirus Mistake Could Kill 100%, with Stephen Fry. AI is Watching. Currently, that has about a million views on YouTube and serves as an introduction to AI, nuclear weapons, and biotechnology existential risk issues. I'm also really happy with the quality of that video and how it has done and succeeded on YouTube this year. So it's been quite a successful year for FLI's outreach efforts, and we're intending to do more on this front in 2021. So stay tuned.
And now for the second question, what are the biggest lessons you take away from 2020, both from world events and your work in the changing landscape of existential risk?
Max Tegmark: I feel that 2020 has taught us a number of lessons. The most obvious and in-our-face lesson is that humanity is more fragile than many would have liked to think. Before 2020, it was very easy to dismiss people who worried about existential risk or global catastrophic risk and say, "Hey, you're just a bunch of loser doomsayers. It's not going to happen. Or if things start to happen, don't worry, I'm sure our governments are competent enough to handle this." After COVID-19 and the response we've seen, I think people are much more receptive to the idea that we are actually much more vulnerable than we should be, first of all to actual threats: we saw a pandemic only modestly dangerous compared to smallpox already paralyze the world.
And second, if people used to be cynical about the level of incompetence in so many governments, and I don't want to single out any one government, I think they're even more cynical now, after seeing just how badly governments failed to do very basic things despite being warned many, many times. Many governments seem to have somehow picked almost the worst possible strategy, where they bore enormous costs and still ended up with a massive amount of human suffering. So to me, the message is: in the future, when people warn us about other risks, future pandemics, designed pandemics in the form of bio-terrorism, accidental nuclear war, other powerful technologies like AI leading to bad outcomes, I hope we can at least learn from 2020 that we shouldn't be as dismissive.
And we also shouldn't be as confident that, "Oh, it's fine. Our governments are obviously going to be on top of this." In other words, I feel that the kind of work we're doing with The Future of Life Institute is actually more important than ever. We are really sorely needed, and so are many, many more organizations like us.
Lucas Perry: Do you have any comments to make about humanity and the difficulty we have of learning from catastrophes?
Max Tegmark: Winston Churchill once quipped that the only thing we learn from history is that we never learn from history. Hopefully, it's a little bit better than that. I think countries that have very, very recently suffered a terrible war, for example Europeans in 1946, 1947, had it in such fresh memory that they actually took a lot of constructive action to prevent it from happening again. But maybe humanity is a little bit like a person with early dementia: we do have short-term memory, but not as much long-term memory as we should. You can already see countries that haven't had a war for many decades start to get flippant about the costs of it, and countries that haven't had a pandemic for a very long time are more flippant about that.
In fact, I would probably go as far as saying that the reason South Korea handled the COVID pandemic so much better than virtually all Western countries, with a population of over 50 million people and about 500 deaths without even doing a full-scale lockdown, is exactly because they remembered when they had the SARS outbreak; it was not that long ago. And because of that, they were more prepared. Hopefully, at least for the next decade or so, we'll remember that COVID happened and that it is much more cost-effective to plan a little bit ahead and avoid big problems, or to have strategies in place for how to deal with challenges, rather than just pretend that nothing bad will ever happen and bumble into things.
A second lesson from 2020, which was easy to miss because we were so focused on COVID-19, is that the threats from artificial intelligence are growing rapidly and we are not dealing with them effectively. The power of AI just keeps growing by leaps and bounds. We've had spectacular breakthroughs like GPT-3, like MuZero, etc. I don't want to get into the nerdy details, but these are artificial intelligence results that I think many people five years ago would have viewed as almost science fiction.
And yet there's very little effective planning for how we are going to govern this ever more powerful AI technology to avoid problems. This was a year when it became abundantly clear that a lot of the problems with AI had already started happening. So many people had seen stupid Hollywood movies and worried that the robots are coming to kill us. In fact, what's already happened, obviously, is that the robots have come not to kill us, but to hack us. We've seen how our societies, especially in the West, have gotten increasingly polarized into filter bubbles, precisely because of machine learning algorithms deployed not for nefarious reasons, but just for good old capitalist profit maximization.
They turned out to be so good at getting people glued to their little rectangles and watching more ads because they figured out that you just have to show people the things that engage them the most emotionally, the things that really piss them off, and it doesn't matter so much whether they're true or not, and it certainly doesn't matter what the social consequences are. I feel this has done a lot of harm to democracy in general and made it a lot more dysfunctional in many countries. We also see who these same companies that brought this upon us are: big tech, basically. The five largest companies in the S&P 500 now are big tech. The companies that have the most lobbyists in Washington and in Brussels are big tech.
I have a lot of friends in these companies and admire a lot of the things these companies do. They're not evil, they're just trying to make a profit, which is what their shareholders want. That's what companies want, right? The real failing is not the companies; the failing is us as a society failing to have institutions that create a level playing field and rules so that these companies can't cause too much harm. And I feel that we're epically failing here. The best paper I read this year, I feel, was called The Grey Hoodie Project, by Mohamed Abdalla and his brother, which contained a very incisive comparison of the way big tech is dealing with criticism today and the way big tobacco has dealt with criticism over the years, where they basically use money to silence criticism and to get academics to do anything but criticize them.
And I think it's going to be incredibly important as we look forward to take all the talent and idealism and do-good intentions in these companies and actually, as a society, make sure we channel them into things that are actually good for society, not into things which undermine our democracy or generally destabilize the world.
Anthony Aguirre: I would say the biggest lesson that I took from 2020, and it's a depressing one, is that our current institutions, especially in the US but in many other places also, are just wholly incapable of dealing with large-scale, unexpected challenges, even when the stakes are high and totally obvious to everybody. It's been just heartbreaking to see how badly we've managed the COVID-19 situation in the US and many other places. So much suffering that simply didn't have to happen if decisions had been made in a better way and our institutions, full of highly capable people, had been managed and employed in a highly capable way.
On the more technical side, the scientific side, it's been very inspiring to see the level of capability and creativity that many people in institutions have shown: the generation of test-and-trace apps and capabilities, the scientific efforts, the research efforts on understanding the virus and on generating countermeasures and vaccines, the technical efforts of scaling up our communication infrastructure to deal with online work, the ability to create new platforms to understand and share information about the virus. All of these things have been top-notch. But what's become clear is that all the technological success in the world will not help if you don't have capable social structures to manage and grow and fund and scale those technical capabilities.
So there's a real challenge there and a mismatch, I think, between our technical efforts and our social and organizational and institutional efforts. And that's been the huge lesson for me: how big that gap is and how much room there is for improvement. So I think people have come to understand, in a more visceral way than ever before, that global catastrophes can and do happen. We had this pleasant sense of immunity, I think, from the end of World War II until recently; there was the Cold War, but it was always only a threat, it never actually happened. We haven't really had to live through much of a global catastrophe, so it faded from people's minds. We had started to feel safe, which was always a bit of an illusion; the world is a very risky place.
It's been made more clear that we really are in this fragile world, but it's also important to note that COVID-19, as a global catastrophe, was almost a best-case scenario. It could have been dramatically worse. Imagine smallpox, with its 30% fatality rate and high level of contagiousness, or something like that emerging: rather than less than 1% of those infected dying, 30% dying. And it's not at all clear that our response and our capability and our management of that virus would have been that much better than what we did. We easily could have seen something just about as bad in terms of the number of people infected, with a 30% fatality rate. So that's very, very scary, I think.
And I think in that sense, apocalypse has become a little bit more real in my mind, and in many people's. I had the experience of watching a run-of-the-mill apocalypse movie where asteroids are blowing up the world, and somehow I experienced it rather differently this year than I had in the past, when it was all in good fun. This felt a little bit too much like kicking the earth while it's down. It just felt a little too real. That's good in a lot of ways, because I think the truth is that these dangers are real, and we need to understand that they're real and take the appropriate level of action to mitigate the risks. And we do have that capability.
David Nicholson: I've thought a lot about 2020 and, as we moved through it, about existential risk as well. And as someone who does a lot with history, I think of lessons like this from the standpoint of history. And in this way, I think the lessons of 2020 are really the same as those that can be drawn from, say, the beginning of the 20th century until now. And that is that our species has entered this epoch where technology, and I'm using that term in the broadest sense to include scientific thinking and managerial processes, this idea of technology, really dominates our everyday experience.
And of course, we had technology prior to the 20th century, when we had railroads and manufacturing and things like that. But I think the 20th century marked an important turning point in human experience, because it is in this century that the technical way of thinking took hold of how we as humans interact with each other and with the natural world. The characteristics of this technical way of thinking include a detached and objective scientific view that restricts our everyday understanding of ourselves and society.
And so when we come to today, now in 2021, we have this problem of technology and the technical way of thinking, and existential risk falls within this category, dominating our lives in such a way that even when we attempt to think of solutions to things like nuclear weapons, artificial intelligence, and climate change, we do so within the confines of this technical way of thinking. And I think this approach is constraining our efforts to address these challenges. For example, we have nuclear weapons that came into existence during the Second World War, and today these horrible weapons sit silently in silos and submarines and bombers, but there are still many people who believe these weapons are here to stay.
And they also believe that what they call national security depends on the existence of these weapons, because they present themselves to our adversaries as a deterrent. So what happens in this chain of reasoning is that the acquisition of nuclear weapons provides prestige and power to the states that possess them. So the idea of disarmament is basically a non-starter for many people with this type of thinking. But the point is that even those who do advocate for disarmament tend to do so within a framework of technical thinking.
Again, they present what they believe to be objective reasons for getting rid of nuclear weapons; they'll mention existential risks, or the cost of maintaining a nuclear arsenal, or potential damage to the world economy and global health in the event of a nuclear exchange. These concerns are valid, but they treat the current situation as if we have control over nuclear weapons. And I don't believe that we do; I think the processes that are in place for nuclear weapons demonstrate quite the contrary, that we do not have adequate control over these weapons.
And so what happens in the general population is that people think of technology as a neutral and passive force that will submit to the whims of human beings. But the reasons for disarmament that we present still remain encumbered by this technical way of thinking. And I think citing existential risk to someone who believes in nuclear deterrence is not going to move us closer to disarmament. This is a very dangerous loop that ends up happening with the way we think about problems. And I think this is something that, although it's not recognized by many people in the x-risk community, we need to think more about going forward.
I think one of the easiest ways to see the problem itself, outside of existential risk, is the challenge of social media addiction, which has been a subject of concern over the last several years. We have billions of people whose minds have been hijacked by the social media applications on mobile devices. So what emerges as a proposal to alleviate the scourge? Well, people are told to remove toxic apps from their phones, they are advised to install apps that monitor usage, and they're also told to turn off notifications. And as an afterthought, they might be encouraged to pause and be more compassionate with people they disagree with.
These recommendations all fall within a technical way of addressing a problem without really enabling ourselves to examine the relationship that we have with the technology itself. These solutions are not forcing us to confront how it was that this technology came to shape our everyday experience as individuals and as a society. So I think just getting to the point where we can start to have a better way of framing a discussion around these particular areas of existential risk is really where I'm hoping to go from 2021 forward.
Emilia Javorsky: One of the big takeaways that I have from this past year, and it's really something that I hope we will all have learned as a global community, is the fact that bad things can happen. And I see a silver lining in what's happened with COVID, some redeeming quality, in that it has sensitized people to the idea that things that are destabilizing to society and happen on a global scale can happen. And so often when we try to do communication around existential risks, we think about near misses. We use the nuclear examples quite frequently to say what could have happened.
And people are now seeing, in society and more broadly, the effects of a risk that would barely be classified as a tail risk and would not necessarily meet the definition of a global catastrophic risk. My more optimistic view, to find some modicum of hope in what's happened with COVID, is that it will make our global community, up and down from policymakers to the public, more sensitive to the fact that we need to really think about the risks posed to society that could potentially be devastating, and be more motivated to actually do something about them and plan better, because we saw how poorly we planned for something that, again, would not meet the definition of a GCR, and how that went down.
So I guess that's a big hope of mine for the lessons learned from COVID moving forward: that people will be more sensitive to this type of work that we're doing, really see the importance and value and necessity of it, and help to prioritize it more than it has been traditionally in the past.
Jared Brown: I think, to state the obvious, the world is still very, very vulnerable to high-consequence, low-probability events. I think COVID has certainly shown that to everybody all across the world: as technically advanced as we've become as a society, we're still vulnerable to essentially 19th- and 20th-century threats in the form of a naturally occurring virus. And that should make us question how vulnerable we are to future threats that are even more powerful and more dangerous. So that much is obvious, but I've also been optimistically very impressed by how adaptable people have been to the consequences of COVID.
I've seen in my own life things that I would've never thought possible, with my wife and me doing our work and my kids at school, all under the same roof, using Zoom. When you really step back and realize how much human beings have had to change and adapt, it's quite impressive how much we've been able to do. That has actually led me to think more about what more we could be doing on the consequence-management side of mitigating global catastrophic and existential risks. I think there's a lot more that we can think about and try to prepare for, to help us adapt to future challenges should we not be able to prevent them.
I haven't been able to spend as much time as I'd like on that challenge, but it's something I look forward to thinking about more in 2021. And the other thing is, it's obvious, as we've seen, that scientific expertise is really critical to help inform government policymaking. We cannot allow government policymaking to be ill-informed when it comes to these types of risks. And so it's on all of us to help communicate and engage with government and policymakers to mitigate these types of risks in the future.
Tucker Davey: Yeah. I think the biggest takeaway from 2020 for me is that catastrophe can happen. I think for a lot of us in the AI safety and existential risk communities, it's been an uphill battle over the past few years trying to convince people to take these risks seriously. And I think in the back of our minds, we sometimes wonder how different this conversation would be if an accidental nuclear war did happen, or if a pandemic did happen, or if an AI control problem really caused chaos. Would people take us much more seriously? Do people need to see proof? None of us want that proof to happen, and I think that's important to emphasize: none of us wanted this pandemic to happen, and none of us want an accidental nuclear war to happen.
But the reality is that when these things happen, people pay attention, people start to reconsider how safe they are. So I'm hoping that we can piggyback off that momentum and help people understand that there are these other catastrophic risks worth preparing for. With COVID in particular, I don't want to give too much of a pat on the back to the community that I work in, but I think it's important just to see comparisons with how different communities perceive and prepare for risks.
There seems to be more of a fear of accidentally spreading the wrong ideas or being misinterpreted and leading to bad outcomes. But I think this COVID example shows that if we can make our voice louder in certain scenarios, we can actually do a ton of good, and we can't forget the positive that can come from us speaking out. I think it's easy to get a little too worried about the hazards of sharing too much information, whether it's about artificial intelligence or biotechnology, but there are certain scenarios where this community is very effective at preparing for and anticipating risks.
I personally am just hoping that we can find ways to make our voice more heard, and to have more cohesion in the community about why we're making our voice heard and why it's important. One of the more negative takeaways I've had in 2020, specifically about AI safety, is that it seems like, unfortunately, a lot of our progress may be false progress. That's a painful thing to admit, but the reality of the situation is that there's been substantial ethics-washing in the community. It's important to work with big tech, and it's important not to alienate them; we need to find allies in big tech, but we also can't be naïve about the motives and the intentions of big tech.
I guess I'm hopeful that in this coming year, we can be more realistic about ethics-washing and try to counter it, and try to get real representatives that can represent our concerns, and maybe create some press that can show the extent of big tech infiltrating AI safety and ethics boards.
Lucas Perry: 2020 has clearly demonstrated the extent of global fragility. Prior to COVID-19, I had more of a sense of the stability of human systems and also the competence of the governance and institutions which manage and implement those systems. And COVID-19 has led me to have less confidence in such systems and institutions as they have been rather ineffective and clumsy, stumbling their way through a global pandemic in ways which are obviously counter to reason and science and evidence. And so 2020 has, I think, changed my experience of global catastrophic and existential risk issues towards making it more experientially concrete that really, really bad things can happen.
And so I feel that this helps to mitigate some of this bias in the human experience of the world where we really don't take low probability, high risk, or high impact events seriously enough. And if something worse than COVID-19 happens, something that affects the globe in the same exact way but is more severe and more impactful, we're going to need much stronger institutions and governance and international coordination to first of all, mitigate that issue from actually ever happening, but also responding in the case that we fail to mitigate it, and that a response is required.
For our third question, what do you see as crucial and needed in 2021 to make improvements towards existential safety?
Max Tegmark: Given the fact that I feel we're backsliding, making negative progress on existential risk mitigation, step one for 2021 should obviously be to stop the deterioration. As I already mentioned, we need to hold big tech accountable and make sure that the policies that we enact for artificial intelligence and machine learning deployments throughout the world are determined not by lobbyists from big tech, but by governments that have the best interest of their democracies in mind. A second area where I feel things have gotten worse in 2020 is geopolitics. Since the early '70s, after Nixon traveled to China, we've had a pretty harmonious relationship between the US and China.
In 2020, things have gotten dramatically worse, of course, and I think it's going to be very valuable for the future of humanity if this deterioration can be halted and gradually turned around in 2021. And I think the key insight that has really been missing in a lot of political circles is that this is not a zero sum game where America can only get better off as China gets worse off or vice versa. When technology is involved, it's ridiculous to think of things as a zero sum game. If you just go back a long time, think of thousands of years ago, two powerful nations fighting each other about something that we don't even remember anymore.
We don't really care anymore exactly who won that pissing contest; what we really care about is that they didn't drive humanity extinct, and that they gave us all the opportunity to have the much better technology and opportunities that we have today. And it's quite clear to me that technology is going to just keep getting ever more powerful, which means that either we're all going to be, in the future, Americans and Chinese and everybody else, dramatically wealthier and healthier, or we're all going to be completely screwed and quite likely dead. When technology gets sufficiently powerful, we either win together or we all lose together. Technology doesn't know or respect any borders.
Ultimately, if we can get it right, this is, I think, a very positive and hopeful thought, because what people actually want, if we go ask them on the street, whether it be the streets of Boston or Beijing or Brussels, is to have better personal lives, where they not only have prosperity in material things, but also have social lives and friendships they like, and are doing things they really find meaningful and so on. And there's absolutely no law of physics saying that all humans cannot have that. Even if you just look at how far we've come from the Stone Age until today, it's just so remarkable how this was not a zero-sum game and how we can create so much more well-being for everybody if we work together.
And that's nothing compared to all the fruits that artificial intelligence can bring. Sometimes people say, "Oh, Max, you're such a dreamer, that's never going to work because we have such different goals." But I would push back on that. The sustainable development goals have been adopted by virtually every country on the planet, including all the aforementioned, and they include all these ambitious things like eliminating extreme poverty, stabilizing our climate, and doing virtually everything that everyone around the world has on their checklist. Can we do them if we work together and make sure that artificial intelligence gets harnessed only for these good things?
Of course we can. And in fact, one of the more fun academic projects I was involved with this year was a paper looking at exactly how artificial intelligence can help us achieve the sustainable development goals faster. And frankly, these SDGs are pretty unambitious compared with what artificial intelligence will be able to do if it really advances. They include things like making everybody much healthier, whereas of course, if we can really harness artificial intelligence to its full potential, there's no reason we can't solve all diseases. There's no reason why everybody can't prosper in all senses of the word, and we can all have a truly inspiring future of life.
So this is my hope as we look to 2021: that it will hopefully be a better year, not only in that we will worry less about the pandemic, but also in that we will think more about all the inspiring things we can get from our technology if we articulate these shared goals and work together towards them.
Anthony Aguirre: We desperately need more people to understand expected value and what it means: not just the probability of something happening, but the probability times the impact of it happening. So even if something is 10% probable, like maybe in a decade there being a pandemic is only 10% probable, if the cost of that pandemic is tens of trillions of dollars, then it really behooves you to spend tens or hundreds of billions of dollars to avoid and plan for and mitigate that disaster. Unfortunately, what our institutions tend to do is say, "Here are some various possibilities, here's the one that's most likely, let's plan for that."
And that is just not the right thing to do when you're making decisions. It's provably not correct decision-making. People know this, this is not news, but really making decisions that way, actually understanding the probabilities, actually understanding the impacts, and making decisions on that basis, is not something we tend to do as a society, and we desperately, desperately need to do it more. We need the capability to do that, and we need the will and the accountability to make decisions on that basis. Otherwise, we're going to keep making terrible decisions the way we have through this whole crisis, spending trillions of dollars to mitigate things that we could have spent billions of dollars to prevent. It is just the height of insanity, not to mention the loss of life.
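As a rough illustration of the expected-value arithmetic Anthony describes, here is a minimal sketch using the kind of numbers mentioned in the conversation (a 10% chance of a pandemic in a decade, damages in the tens of trillions); the exact figures, the prevention budget, and the assumed risk reduction are illustrative assumptions, not estimates from the episode.

```python
# A minimal sketch of expected-value reasoning: weigh the probability-adjusted
# cost of a disaster against the cost of preventing or mitigating it.
# All numbers are illustrative assumptions, not real estimates.

def expected_loss(probability: float, impact_usd: float) -> float:
    """Expected cost of a risk: probability of the event times its impact."""
    return probability * impact_usd

pandemic_probability = 0.10   # assumed chance of a severe pandemic this decade
pandemic_impact = 20e12       # assumed economic damage in dollars if it occurs
prevention_budget = 100e9     # assumed spending on preparedness
risk_reduction = 0.25         # assumed fraction of the risk that preparedness removes

loss = expected_loss(pandemic_probability, pandemic_impact)
avoided = risk_reduction * loss

print(f"Expected loss without preparation: ${loss / 1e12:.1f} trillion")
print(f"Prevention budget:                 ${prevention_budget / 1e9:.0f} billion")
print(f"Expected loss avoided:             ${avoided / 1e12:.1f} trillion")

# Planning only for the single most likely outcome ("probably no pandemic")
# ignores exactly this calculation: the avoided expected loss dwarfs the budget.
```

Under these assumed numbers, spending $100 billion to remove even a quarter of the risk avoids roughly $500 billion in expected losses, which is the point about deciding on expected value rather than only on the most likely outcome.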
Another thing, and this is a little bit connected to that, is that it's become clear that our information and shared-reality system is just fundamentally broken. We've one way or another created a monstrous system in which there's almost no causal connection between the information about the world that most people get most of the time and the truth of that information. The dynamics of how something comes to be seen by lots of people and consumed by lots of information consumers have almost only a coincidental connection with what is true. The dynamics of those systems are designed on other bases, like virality and advertising revenue and outrage, things that spread certain content to a large number of people.
And not one of those things, in its basic design, asks whether that content is a correct and useful and insightful way to look at the world and what is happening in it. We really, really need a system that prioritizes those things. Even if that system is only adopted at first by a fairly small fraction of people who are looking for such a system, we need one; it doesn't really exist right now. If you say, "I just want to really understand what's actually going on in a responsible manner. I want deep insight into the dynamics of the systems, I want facts that are true, and that's just what I want to consume," where are you going to go for that?
There isn't really a good option; there isn't something that you can trust, for good reason, to give you the facts about the world. There are elements of that around, but it only takes seeing a news story in print about something that you've seen or been involved in firsthand to understand how even the best news systems that we have now are pretty misleading, and the less good ones are terribly misleading. So we desperately need this; we need such systems to work really well and then to scale. And that's something that I'm very excited about working on in Metaculus and in Improve the News and some other things.
So 2020 has been a very tough year, it's really laid bare I think the vulnerabilities and fragility of a lot of our institutions, including our core democratic ones, our social ones and so on. But I think to take the optimistic view, there's plenty of room for growth. There are lots of things that we can make much better without that much effort, because some of them are so bad. And it's really inspiring to be part of a group of people at FLI and otherwise who are really just single-mindedly devoted to doing that, to making the things that are in the world work better and work better for more people and prevent really terrible things from happening.
Let's all hope that organizations and people like FLI are successful in 2021 and that things continue to get better, I think they've turned around.
David Nicholson: In terms of making improvements towards existential safety, I think we need to take a step back and ask fundamental questions about our relationship with technology in all of the areas of concern, namely nuclear weapons, artificial intelligence, climate change, and bioweapons. And we need to start asking how these things present themselves to us in our everyday experience. And from that point, we can begin to assess what values we need to disentangle us from these risks. And so, just to be as precise as I can: when we think of nuclear weapons, the immediate answer is not disarmament; the answer starts with asking how we found ourselves standing in relation to objects that can potentially end what we understand as human civilization.
And that conversation means we need to move away from technical thinking and engage in a reformulation of values and telling better stories about the future.
Emilia Javorsky: I think it relates somewhat to my favorite project of the past year, which is this idea of a return to multilateralism and a real restoration of dialogue, not only between countries, but between disciplines. I think that is something that's been sorely lacking, especially in the past few years. And I think that's something we need to get back to when we're thinking about how we develop new and meaningful systems for governance of AI that move beyond principles, or of nuclear weapons, or lethal autonomous weapons, or climate change. These are all topics that would greatly benefit from, be accelerated by, and indeed necessitate multilateral and multi-stakeholder dialogue.
And I think that that is something that, I'm at least hopeful, there will be a greater appetite for in 2021 because of the sensitization that's happened due to COVID. And that doesn't necessarily mean a return to old governance structures and solutions and methods of policymaking; those can look quite different or be updated or changed to meet the crises that we face today. But I think foundational to it is starting that dialogue, returning to multilateralism and working together as a global community, and breaking down some of those communication barriers that have really gone up in full force over the past few years.
Jared Brown: I really think we need robust engagement with government entities across the globe. Two years ago, the concept of AI was still pretty foreign to policy-making circles, and now you see it everywhere. Think tanks all across the globe are having regular events and meetings about this thing called AI. And the same goes for biotech or synthetic biology, depending on your preference. There's going to be opportunities in 2021 for really consequential decision making and policy making that sets the stage for the future on these technologies.
And I really believe that our community needs to do more to engage with policymakers, such that those policies are robust enough to handle the increasing capabilities of these technologies. So I look forward to, for instance, the Future of Life Institute and our partners engaging with the National Institute of Standards and Technology, also known as NIST, on their development of a voluntary risk framework for trustworthy AI systems. That risk framework is going to be required by law, and it's something that they have been told by Congress to spend two years developing, with engagement from private sector and nonprofit actors.
They're going to ask us to come to the table, and we have to come to the table and help them understand how we can truly assess and manage the risks of AI.
Lucas Perry: In 2021, it would be really valuable for governments around the world and international governance bodies to become more aware of existential risk issues in the first place. They need some basic literacy and understanding about the issues, the risk types, the probabilities, and what can be done to address them. I also think that such governance bodies would benefit from strengthening and creating institutions that can actually deal with and mitigate existential risks, institutions whose sole priority is to focus on these issues. We're a bit late to the game of having formal institutions that can work on these issues, but it's not too late, and it's sorely needed.
And the first step to that is awareness in the right places. I'd also like to see increased global cooperation on existential and global catastrophic risk issues. The response to the pandemic has been all over the place in the world, and more serious issues are going to require much more global cooperation than we've had in 2020. Issues like coordinating on AI alignment, mitigating accidental and intentional nuclear war, and addressing synthetic bio risks will require a lot of coordination by governments, a lot more than is happening. We tend to ignore these issues until it's too late.
We can make substantive progress on mitigating these risks if we can come together with awareness of these issues in mind, with foresight, and with sensitivity to the reality of low-probability, high-impact events. And if there's anything that I would like us to really bring into 2021 from 2020, it is this recognition of the reality of low-probability, high-impact events. COVID-19 was not even a global catastrophic risk, and we're quite lucky for that, but it does remind us that low-probability, high-impact events do occur.
We're living out human history, and we're not special, we're not protected by some arc of human history, which innately does not allow these risks to occur. They do occur and we should take them seriously and work together to mitigate them.