
Podcast: Existential Hope in 2019 and Beyond

Published
December 21, 2018

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended.

The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts — Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg — about their views on the present, the future, and the path between them.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

We hope you’ll come away feeling inspired and motivated — not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:

  • How technology aids us in realizing personal and societal goals.
  • FLI’s successes in 2018 and our goals for 2019.
  • Worldbuilding and how to conceptualize the future.
  • The possibility of other life in the universe and its implications for the future of humanity.
  • How we can improve as a species and strategies for doing so.
  • The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
  • Existential hope and what it looks like now and far into the future.

You can listen to the podcast above, or read the full transcript below.

Transcript

Ariel: Hi everyone. Welcome back to the FLI podcast. I’m your host, Ariel Conn, and I am truly excited to bring you today’s show. This month, we’re departing from our standard two-guest interview format because we wanted to tackle a big and fantastic topic for the end of the year that would require insight from a few extra people. It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it's helpful to remember that we don't do this because we think the world will end tragically: We address issues relating to existential risks because we're so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine.

And so, as we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

I’m delighted to present Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark and Anders Sandberg, all of whom were kind enough to come on the show and talk about why they’re so hopeful for the future and just how amazing that future could be.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and she created the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

Over the course of a few days, I interviewed all six of our guests, and I have to say, it had an incredibly powerful and positive impact on my psyche. We’ve merged these interviews together for you here, and I hope you’ll all also walk away feeling a bit more hope for humanity’s collective future, whatever that might be.

But before we go too far into the future, let’s start with Anthony and Max, who can talk a bit about where we are today.

Anthony: I’m Anthony Aguirre, I'm one of the founders of the Future of Life Institute. And in my day job, I'm a physicist at the University of California at Santa Cruz.

Max: I am Max Tegmark, a professor doing physics and AI research here at MIT, and also the president of the Future of Life Institute.

Ariel: All right. Thank you so much for joining us today. I'm going to start with sort of a big question. That is, do you think we can use technology to solve today's problems?

Anthony: I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it's something that we develop in order to accomplish our goals and to bring our will into fruition. So, sort of by definition, when we have goals that we want to do — problems that we want to solve — technology should in principle be part of the solution.

Max: Take, for example, poverty. It's not like we don't have the technology right now to eliminate poverty. But we're steering the technology in such a way that there are people who starve to death, and even in America there are a lot of children who just don't get enough to eat, through no fault of their own.

Anthony: So I'm broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them. Now, that being said, there are things that are more amenable to better technology, and things that are less amenable. And there are technologies that tend to, rather than functioning as kind of an extension of our will, will take on a bit of a life of their own. If you think about technologies like medicine, or good farming techniques, those tend to be sort of overall beneficial and really are kind of accomplishing purposes that we set. You know, we want to be more healthy, we want to be better fed, we build the technology and it happens. On the other hand, there are obviously technologies that are just as useful or even more useful for negative purposes — socially negative or things that most people agree are negative things: landmines, for example, as opposed to vaccines. These technologies come into being because somebody is trying to accomplish their purpose — defending their country against an invading force, say — but once that technology exists, it's kind of something that is easily used for ill purposes.

Max: Technology simply empowers us to do good things or bad things. Technology isn't evil, but it's also not good. It's morally neutral. Right? You can use fire to warm up your home in the winter or to burn down your neighbor's house. We have to figure out how to steer it and where we want to go with it. I feel that there's been so much focus on just making our tech powerful right now — because that makes money, and it's cool — that we've neglected the steering and the destination quite a bit. And in fact, I see the core goal of the Future of Life Institute: Help bring back focus on the steering of our technology and the destination.

Anthony: There are also technologies that are really tricky in that they give us what we think we want, but that we sort of regret later, like addictive drugs, or gambling, or cheap sugary foods, or-

Ariel: Social media.

Anthony: … certain online platforms that will go unnamed. We feel like this is what we want to do at the time; We choose to do it. We choose to eat the huge sugary thing, or to spend some time surfing the web. But later, with a different perspective maybe, we look back and say, "Boy, I could've used those calories, or minutes, or whatever, better." So who's right? Is it the person at the time who's choosing to eat or play or whatever? Or is it the person later who's deciding, "Yeah, that wasn't a good use of my time or not." Those technologies I think are very tricky, because in some sense they're giving us what we want. So we reward them, we buy them, we spend money, the industries develop, the technologies have money behind them. At the same time, it's not clear that they make us happier.

So I think there are certain social problems, and problems in general, that technology will be tremendously helpful in improving as long as we can act to sort of wisely try to balance the effects of technology that have dual use toward the positive, and as long as we can somehow get some perspective on what to do about these technologies that take on a life of their own, and tend to make us less happy, even though we dump lots of time and money into them.

Ariel: This sort of idea of technologies — that we're using them and as we use them we think they make us happy and then in the long run we sort of question that — is this a relatively modern problem, or are there examples that go further back in history that we can learn from?

Anthony: I think it goes fairly far back. Certainly drug use goes a fair ways back. I think there have been periods where drugs were used as part of religious or social ceremonies and in other kind of more socially constructive ways. But then, it's been a fair amount of time where opiates and very addictive things have existed also. Those have certainly caused social problems going back at least a few centuries.

I think a lot of these examples of technologies that give us what we seem to want but not really what we want are ones in which we're applying the technology to a species — us — that developed in a very different set of circumstances, and that contrast between what's available and what we evolutionarily wanted is causing a lot of problems. The sugary foods are an obvious example where we can now just supply huge plenitudes of something that was very rare and precious back in more evolutionary times — you know, sweet calories.

Drugs are something similar. We have a set of chemistry that helps us out in various situations, and then we're just feeding those same chemical pathways to make ourselves feel good in a way that is destructive. And violence might be something similar. Violent technologies go way, way back. Those are another one that are clearly things that we want to invent to further our will and accomplish our goals. They're also things that may at some level be addictive to humans. I think it's not entirely clear exactly how — there's a strange mix there, but I think there's certainly something compelling and built into at least many humans' DNA that promotes fighting and hunting and all kinds of things that were evolutionarily useful way back when and perhaps less useful now. It had a clear evolutionary purpose with tribes that had to defend themselves, with animals that needed to be killed for food. But feeding that desire to run around and hunt and shoot people, which most people aren't doing in real life, but tons of people are doing in video games. So there's clearly some built in mechanism that's rewarding that behavior as being fun to do and compelling. Video games are obviously a better way to express that than running around and doing it in real life, but it tells you something about some circuitry that is still there and is left over from early times. So I think there are a number of examples like that — this connection between our biological evolutionary history and what technology makes available in large quantities — where we really have to think carefully about how we want to play that.

Ariel: So, as you look forward to the future, and sort of considering some of these issues that you've brought up, how do you envision us being able to use technology for good and maybe try to overcome some of these issues? I mean, maybe it is good if we've got people playing video games instead of going around shooting people in real life.

Anthony: Yeah. So there may be examples where some of that technology can fulfill a need in a less destructive way than it might otherwise be. I think there are also plenty of examples where a technology can root out or sort of change the nature of a problem that would be enormously difficult to do something about without a technology. So for example, I think eating meat, when you analyze it from almost any perspective, is a pretty destructive thing for humanity to be doing. Ecologically, ethically in terms of the happiness of the animals, health-wise: so many things are destructive about it. And yet, you really have the sense that it's going to be enormously difficult — it would be very unlikely for that to change wholesale over a relatively short period of time.

However, there are technologies — clean meat, cultured meat, really good tasting vegetarian meat substitutes — that are rapidly coming to market. And you could imagine if those things were to get cheap and widely available and perhaps a little bit healthier, that could dramatically change that situation relatively quickly. I think if a non-ecologically destructive, non-suffering inducing, just as tasty and even healthier product were cheaper, I don't think people would be eating meat. Very few people actually like, I think, intrinsically the idea of having an animal suffer in order for them to eat. So I think that's an example of something that would be really, really hard to change through just social actions. Could be jump started quite a lot by technology — that's one of the ones I'm actually quite hopeful about.

Global warming I think is a similar one — it's on some level a social and economic problem. It's a long-term planning problem, which we're very bad at. It's pretty clear how to solve the global warming issue if we really could think on the right time scales and weigh the economic costs and benefits over decades — it'd be quite clear that mitigating global warming now and doing things about it now might take some overall investment that would clearly pay itself off. But we seem unable to accomplish that.

On the other hand, you could easily imagine a really cheap, really power-dense, quickly rechargeable battery being invented and just utterly transforming that problem into a much, much more tractable one. Or feasible, small-scale nuclear fusion power generation that was cheap. You can imagine technologies that would just make that problem so much easier, even though it is ultimately kind of a social or political problem that could be solved. The technology would just make it dramatically easier to do that.

Ariel: Excellent. And so thinking more hopefully — even when we're looking at what's happening in the world today, news is usually focusing on all the bad things that have gone wrong — when you look around the world today, what do you think, "Wow, technology has really helped us achieve this, and this is super exciting?"

Max: Almost everything I love about today is the result of technology. It's because of technology that we've more than doubled the lifespan that we humans had for most of human history. More broadly, I feel that the technology is empowering us. Ten thousand years ago, we felt really, really powerless; We were these beings, you know, looking at this great world out there and having very little clue about how it worked — it was largely mysterious to us — and even less ability to actually influence the world in a major way. Then technology enabled science, and vice versa. So the sciences let us understand more and more how the world works, and let us build this technology which lets us shape the world to better suit us. Helping produce much better, much more food, helping keep us warm in the winter, helping make hospitals that can take care of us, and schools that can educate us, and so on.

Ariel: Let’s bring on some of our other guests now. We’ll turn first to Gaia Dempsey. How do you envision technology being used for good?

Gaia: That’s a huge question.

Ariel: It is. Yes.

Gaia: I mean, at its essence I think technology really just means a tool. It means a new way of doing something. Tools can be used to do a lot of good — making our lives easier, saving us time, helping us become more of who we want to be. And I think technology is best used when it supports our individual development in the direction that we actually want to go — when it supports our deeper interests and not just the, say, commercial interests of the company that made it. And I think in order for that to happen, we need for our society to be more literate in technology. And to me that's not just about understanding how computing platforms work, but also understanding the impact that tools have on us as human beings. Because they don't just shape our behavior, they actually shape our minds and how we think.

So I think we need to be very intentional about the tools that we choose to use in our own lives, and also the tools that we build as technologists. I've always been very inspired by Douglas Engelbart's work, and I think that — I was revisiting his original conceptual framework on augmenting human intelligence, which he wrote and published in 1962 — and I really think he had the right idea, which is that tools used by human beings don't exist in a vacuum. They exist in a coherent system and that system involves language: the language that we use to describe the tools and understand how we're using them; the methodology; and of course the training and education around how we learn to use those tools. And I think that as a tool maker it's really important to think about each of those pieces of an overarching coherent system, and imagine how they're all going to work together and fit into an individual's life and beyond: you know, the level of a community and a society.

Ariel: I want to expand on some of this just a little bit. You mentioned this idea of making sure that the tool, the technology tool, is being used for people and not just for the benefit, the profit, of the company. And that that's closely connected to making sure that people are literate about the technology. One, just to confirm that that is actually what you were saying. And, two, I mean one of the reasons I want to confirm this is because that is my own concern — that it's being too focused on making profit and not enough people really understand what's happening. My question to you is, then, how do we educate people? How do we get them more involved?

Gaia: I think for me, my favorite types of tools are the kinds of tools that support us in developing our thinking and that help us accelerate our ability to learn. But I think that some of how we do this in our society is not just about creating new tools or getting trained on new tools, but really doesn't have very much to do with technology at all. And that's in our education system, teaching critical thinking. And teaching, starting at a young age, to not just accept information that is given to you wholesale, but really to examine the motivations and intentions and interests of the creator of that information, and the distributor of that information. And I think these are really just basic tools that we need as citizens in a technological society and in a democracy.

Ariel: That actually moves nicely to another question that I have. Well, I actually think the sentiment might be not quite as strong as it once was, but I do still hear a lot of people who sort of approach technology as the solution to any of today's problems. And I'm personally a little bit skeptical that we can only use technology. I think, again, it comes back to what you were talking about with it's a tool so we can use it, but I think it just seems like there's more that needs to be involved. I guess, how do you envision using technology as a tool, and still incorporating some of these other aspects like teaching critical thinking?

Gaia: You’re really hitting on sort of the core questions that are fundamental to creating the kind of society that we want to live in. And I think that we would do well to spend more time thinking deeply about these questions. I think technology can do really incredible, tremendous things in helping us solve problems and create new capabilities. But it also creates a new set of problems for us to engage with.

We’ve sort of coevolved with our technology. So it's easy to point to things in the culture and say, "Well, this never would have happened without technology X." And I think that's true for things that are both good and bad. I think, again, it's about taking a step back and taking a broader view, and really not just teaching critical thinking and critical analysis, but also systems level thinking. And understanding that we ourselves are complex systems, and we're not perfect in the way that we perceive reality — we have cognitive biases, we cannot necessarily always trust our own perceptions. And I think that's a lifelong piece of work that everyone can engage with, which is really about understanding yourself first. This is something that Yuval Noah Harari talked about in a couple of his recent books and articles that he's been writing, which is: if we don't do the work to really understand ourselves first and our own motivations and interests, and sort of where we want to go in the world, we're much more easily co-opted and hackable by systems that are external to us.

There are many examples of recommendation algorithms and sentiment analysis — audience segmentation tools that companies are using to be able to predict what we want and present that information to us before we've had a chance to imagine that that is something we could want. And while that's potentially useful and lucrative for marketers, the question is what happens when those tools are then utilized not just to sell us a better toothbrush on Amazon, but when it's actually used in a political context. And so with the advent of these vast machine learning, reinforcement learning systems that can look at data and look at our behavior patterns and understand trends in our behavior and our interests, that presents a really huge issue if we are not ourselves able to pause and create a gap, and create a space between the information that's being presented to us within the systems that we're utilizing and really our own internal compass.

Ariel: You've said two things that I think are sort of interesting, especially when they're brought together. And the first is this idea that we've coevolved with technology — which, I actually hadn't thought of it in that phrase before, and I think it's a really, really good description. But then when we consider that we've coevolved with technology, what does that mean in terms of knowing ourselves? And especially knowing ourselves as our biological bodies, and our limiting cognitive biases? I don't know if that's something that you’ve thought about much, but I think that combination of ideas is an interesting one.

Gaia: I mean, I know that I certainly already feel like I'm a cyborg. Part of knowing myself is — it does involve understanding the tools that I use, that feel that they are extensions of myself. That kind of comes back to the idea of technology literacy, and systems literacy, and being intentional about the kinds of tools that I want to use. For me, my favorite types of tools are the kind that I think are very rare: the kind that support us developing the capacity for long-term thinking, and for being true to the long-term intentions and goals that I set for myself.

Ariel: Can you give some examples of those?

Gaia: Yeah, I'll give a couple examples. One example that's sort of probably familiar to a lot of people listening to this comes from the book Ready Player One. And in this book the main character is interacting with his VR system that he sort of lives and breathes in every single day. And at a certain point the system asks him: do you want to activate your health module? I forgot exactly what it was called. And without giving it too much thought, he kind of goes, "Sure. Yeah, I'd like to be healthier." And it instantiates a process whereby he's not allowed to log into the OASIS without going through his exercise routine every morning. To me, what's happening there is: there is a choice.

And it's an interesting system design because he didn't actually do that much deep thinking about, "Oh yeah, this is a choice I really want to commit to." But the system is sort of saying, "We're thinking through the way that your decision making process works, and we think that this is something you really do want to consider. And we think that you're going to need about three months before you make a final decision as to whether this is something you want to continue with."

So that three month period or whatever, and I believe it was three months in the book, is what's known as an akrasia horizon. Which is a term that I learned through a different tool that is sort of a real life version of that, which is called Beeminder. And the akrasia horizon is, really, it's a time period that's long enough that it will sort of circumvent a cognitive bias that we have to really prioritize the near term at the expense of the future. And in the case of the Ready Player One example, the near term desire that he would have that would circumvent the future — his long-term health — is, "I don't feel like working out today. I just want to get into my email or I just want to play a video game right now." And a very similar sort of setup is created in this tool Beeminder, which I love to use to support some goals that I want to make sure I'm really very motivated to meet.

So it's a tool where you can put in your goals and you can track them either yourself by entering the data manually, or you can connect to a number of different tracking capabilities like RescueTime and others. And if you don't stay on track with your goals, they charge your credit card. It’s a very effective sort of motivating force. And so I sort of have a nickname: I call these systems time bridges, which are really choices made by your long-term-thinking self that in some way supersede the gravitational pull toward mediocrity inherent in your short-term impulses.

It's about experimenting too. And this is one particular system that creates consequences and accountability. And I love systems. For me if I don't have systems in my life that help me organize the work that I want to do, I'm hopeless. That's why I like to collect and I'm sort of an avid taster of different systems, and I'll try anything, and really collect and see what works. And I think that's important. It's a process of experimentation to see what works for you.

Ariel: Let’s turn to Allison Duettmann now, for her take on how we can use technology to help us become better versions of ourselves and to improve our societal interactions.

Allison: I think there are a lot of technological tools that we can use to aid our reasoning and sense making and coordination. So I think that technologies can be used to help with reasoning, for example, by mitigating trauma, or bias, or by augmenting our intelligence. That's the whole point of creating AI in the first place. Technologies can also be used to help with collective sense-making, for example with truth-finding and knowledge management, and I think your hypertexts and prediction markets — something that Anthony's working on — are really worthy examples here. I also think technologies can be used to help with coordination. Mark Miller, who I'm currently writing a book with, likes to say that if you lower the risks of cooperation, you'll get a more cooperative world. I think that most cooperative interactions may soon be digital.

Ariel: That's sort of an interesting idea, that there's risks to cooperation. Can you maybe expand on that a little bit more?

Allison: Yeah, sure. I think that most of our interactions are already digital ones, for some of us at least, and they will be more and more so in the future. So I think that one step to lowering the risk of cooperation is establishing cybersecurity as a first step, because this would decrease the risk of digital coercion. But I do think that's only part of it, because rather than just freeing us from the restraints that keep us from cooperating, we also need to equip us with the tools to cooperate, right?

Ariel: Yes.

Allison: I think some of those may be smart contracts to allow individuals to credibly commit, but there may be others too. I just think that we have to realize that the same technologies that we're worried about in terms of risks are also the ones that may augment our abilities to decrease those risks.

Ariel: One of the things that came to mind as you were talking about this, using technology to improve cooperation — when we look at the world today, technology isn't spread across the globe evenly. People don't have equal access to these tools that could help. Do you have ideas for how we address various inequality issues, I guess?

Allison: I think inequality is a hot topic to address. I'm currently writing a book with Mark Miller and Christine Peterson on a few strategies to strengthen civilization. In this book we outline a few paths to do so, but also potential positive outcomes. One of the outcomes that we're outlining is a voluntary world in which all entities can cooperate freely with each other to realize their interests. It's kind of based on the premise that finding one utopia that works for everyone is hard, and is perhaps impossible, but that in the absence of knowing what's in everyone's interest, we shouldn't try to have interests imposed by any one entity — whether that's an AI or an organization or a state — but we should try to create a framework in which different entities, with different interests, whether they're human or artificial, can pursue their interests freely by cooperating. And I think if you look at the strategy, it has worked pretty well so far. If you look at society right now it's really not perfect, but by allowing humans to cooperate freely and engage in some mutually beneficial relationships, civilization already serves our interests quite well. And it's really not perfect by far, I'm not saying this, but I think as a whole, our civilization at least tends imperfectly to plan for Pareto-preferred paths. We have survived so far, and in better and better ways.

So a few ways that we propose to strengthen this highly involved process are by proposing kind of general recommendations for solving coordination problems, and then a few more specific ideas on reframing a few risks. But I do think that enabling a voluntary world in which different entities can cooperate freely with each other is the best we can do, given our limited knowledge of what is in everyone's interests.

Ariel: I find that interesting, because I hear lots of people focus on how great intelligence is, and intelligence is great, but it does often seem — and I hear other people say this — that cooperation is also one of the things that our species has gotten right. We fail at it sometimes, but it's been one of the things, I think, that's helped.

Allison: Yeah, I agree. I hosted an event last year at the Internet Archive on different definitions of intelligence. Because in the paper that we wrote last year, we have this very grand, or broad conception of intelligence, which includes civilization as an intelligence. So I think you may be asking yourself the question of, what does it mean to be intelligent, and if what we care about is problem-solving ability then I think that civilization certainly classifies as a system that can solve more problems than any individual that is within it alone. So I do think this is part of the cooperative nature of the individual parts within civilization, and so I don't think that cooperation and intelligence are mutually exclusive at all. Marvin Minsky wrote this amazing book, Society of Mind, and much of it expresses similar ideas.

Ariel: I’d like to take this idea and turn it around, and this is a question specifically for Max and Anthony: looking back at this past year, how has FLI helped foster cooperation and public engagement surrounding the issues we’re concerned about? What would you say were FLI's greatest successes in 2018?

Anthony: Let’s see, 2018. What I've personally enjoyed the most, I would say, is seeing the technical researchers and the nonprofit community really starting to get more engaged with state and federal governments. So for example the Asilomar principles — which were generated at this nexus of business and nonprofit and academic thinkers about AI and related things — I think were great. But that conversation didn't really include much from people in policy, and governance, and governments, and so on. So, starting to see that thinking, and those recommendations, and those aspirations of the community of people who know about AI and are thinking hard about it and what it should do and what it shouldn't do — seeing that start to come into the political sphere, and the government sphere, and the policy sphere I think is really encouraging.

That seems to be happening in many places at some level. I think the local one that I'm excited about is the passage by the California legislature of a resolution endorsing the Asilomar principles. That felt really good to see that happen and really encouraging that there were people in the legislature that — we didn't go and lobby them to do that, they came to us and said, "This is really important. We want to do something." And we worked with them to do that. That was super encouraging, because it really made it feel like there is a really open door, and there's a desire in the policy world to do something. This thing is getting on people's radar, that there's a huge transformation coming from AI.

They see that their responsibility is to do something about that. They don't intrinsically know what they should be doing, they're not experts in AI, they haven't been following the field. So there needs to be that connection and it’s really encouraging to see how open they are and how much can be produced with honestly not a huge level of effort; Just communication and talking through things I think made a significant impact. I was also happy to see how much support there continues to be for controlling the possibility of lethal autonomous weapons.

The thing we've done this year, the lethal autonomous weapons pledge, I felt really good about the success of. So this was an idea to get anybody who's interested, but especially companies who are engaged in developing related technologies — drones, or facial recognition, or robotics, or AI in general — to take that step themselves of saying, "No, we want to develop these technologies for good, and we have no interest in developing things that are going to be weaponized and used in lethal autonomous weapons."

I think having a large number of people and corporations sign on to a pledge like that is useful not so much because they were planning to do all those things and now they signed a pledge, so they're not going to do it anymore. I think that's not really the model so much as it's creating a social and cultural norm that these are things that people just don't want to have anything to do with, just like biotech companies don't really want to be developing biological weapons, they want to be seen as forces for good that are building medicines and therapies and treatments and things. Everybody is happy for biotech companies to be doing those things.

If biotech companies were building biological weapons also, you really start to wonder, "Okay, wait a minute, why are we supporting this? What are they doing with my information? What are they doing with all this genetics that they're getting? What are they doing with the research that's funded by the government? Do we really want to be supporting this?" So keeping that distinction in the industry between all the things that we all support — better technologies for helping people — versus the military applications, particularly in this rather destabilizing and destructive way: I think that is more the purpose — to really make clear that there are companies that are going to develop weapons for the military, and that's part of the reality of the world.

We have militaries; We need, at the moment, militaries. I think I certainly would not advocate that the US should stop defending itself, or shouldn't develop weapons, and I think it's good that there are companies that are building those things. But there are very tricky issues when the companies building military weapons are the same companies that are handling all of the data of all of the people in the world or in the country. I think that really requires a lot of thought, how we're going to handle it. And seeing companies engage with those questions and thinking about how are the technologies that we're developing, how are they going to be used and for what purposes, and what purposes do we not want them to be used for is really, really heartening. It's been very positive I think to see at least in certain companies those sort of conversations go on with our pledge or just in other ways.

You know, seeing companies come out with, "This is something that we're really worried about. We're developing these technologies, but we see that there could be major problems with them." That's very encouraging. I don't think it's necessarily a substitute for something happening at the regulatory or policy level, I think that's probably necessary too, but it's hugely encouraging to see companies being proactive about thinking about the societal and ethical implications of the technologies they’re developing.

Max: There are four things I'm quite excited about. One of them is that we managed to get so many leading companies and AI researchers and universities to pledge to not build lethal autonomous weapons, also known as killer robots. Second is that we were able to channel two million dollars, thanks to Elon Musk, to 10 research groups around the world to help figure out how to make artificial general intelligence safe and beneficial. Third is that the state of California decided to officially endorse the 23 Asilomar Principles. It's really cool that these are getting more taken seriously now, even by policy makers. And the fourth is that we were able to track down the children of Stanislav Petrov in Russia, thanks to whom this year is not the 35th anniversary year of World War III, and actually give them the appreciation we feel that they deserve.

I'll tell you a little more about this one because it's something I think a lot of people still aren't that aware of. But September 26th, 35 years ago, Stanislav Petrov was on shift and in charge of his Soviet early warning station, which showed five US nuclear missiles incoming, one after the other. Obviously, not what he was hoping would happen at work that day and a really horribly scary situation where the natural response is to do what that system was built for: namely, warning the Soviet Union so that they would immediately strike back. And if that had happened, then thousands of mushroom clouds later, you know, you and I, Ariel, would probably not be having this conversation. Instead, he, mostly on gut instinct, came to the conclusion that there was something wrong and said, "This is a false alarm." And we're incredibly grateful for that level-headed action of his. He passed away recently.

His two children are living on very modest means outside of Moscow and we felt that when someone does something like this, or in his case abstains from doing something, that future generations really appreciate, we should show our appreciation, so that others in his situation later on know that if they sacrifice themselves for the greater good, they will be appreciated. Or if they're dead, their loved ones will. So we organized a ceremony in New York City and invited them to it and bought air tickets for them and so on. And in a very darkly humorous illustration of how screwed up relations are at the global level now, the US decided that the way to show appreciation for the US not having gotten nuked was to deny a visa to Stanislav's son. So he could only join by Skype. Fortunately, his daughter was able to get a visa, even though the waiting period to even get a visa appointment in Moscow was 300 days. We had to fly her to Israel to get her the visa.

But she came and it was her first time ever outside of Russia. She was super excited to come and see New York. It was very touching for me to see all the affection that the New Yorkers there showed her and see her reaction and her husband's reaction and to get to give her this $50,000 award, which for them was actually a big deal. Although it's of course nothing compared to the value for the rest of the world of what their father did. And it was a very sobering reminder that we've had dozens of near misses where we almost had a nuclear war by mistake. And even though the newspapers usually make us worry about North Korea and Iran, of course by far the most likely way in which we might get killed by a nuclear explosion is because of another stupid malfunction or error causing the US and Russia to start a war by mistake.

I hope that this ceremony and the one we did the year before also, for the family of Vasili Arkhipov, can also help to remind people that hey, you know, what we're doing here, having 14,000 hydrogen bombs and just relying on luck year after year isn't a sustainable long-term strategy and we should get our act together and reduce nuclear arsenals down to the level needed for deterrence and focus our money on more productive things.

Ariel: So I wanted to just add a quick follow-up to that because I had the privilege of attending the ceremony and I got to meet the Petrovs. And one of the things that I found most touching about meeting them was their own reaction to New York, which was in part just an awe of the freedom that they felt. And I think, especially, this is sort of a US centric version of hope, but it's easy for us to get distracted by how bad things are because of what we see in the news, but it was a really nice reminder of how good things are too.

Max: Yeah. It's very helpful to see things through other people's eyes and in many cases, it's a reminder of how much we have to lose if we screw up.

Ariel: Yeah.

Max: And how much we have that we should be really grateful for and cherish and preserve. It's even more striking if you just look at the whole planet, you know, in a broader perspective. It's a fantastic, fantastic place, this planet. There's nothing else in the solar system even remotely this nice. So I think we have a lot to win if we can take good care of it and not ruin it. And obviously, the quickest way to ruin it would be to have an accidental nuclear war, which — it would be just by far the most ridiculously pathetic thing humans have ever done, and yet, this isn't even really a major election issue. Most people don't think about it. Most people don't talk about it. This is, of course, the reason that we, with the Future of Life Institute, try to keep focusing on the importance of positive uses of technology, whether it be nuclear technology, AI technology, or biotechnology, because if we use it wisely, we can create such an awesome future, like you said: Take the good things we have, make them even better.

Ariel: So this seems like a good moment to introduce another guest, who just did a whole podcast series exploring existential risks relating to AI, biotech, nanotech, and all of the other technologies that could either destroy society or help us achieve incredible advances if we use them right.

Josh: I'm Josh Clark. I'm a podcaster. And I'm the host of a podcast series called the End of the World with Josh Clark.

Ariel: All right. I am really excited to have you on the show today because I listened to all of the End of the World. And it was great. It was a really, really wonderful introduction to existential risks.

Josh: Thank you.

Ariel: I highly recommend it to anyone who hasn't listened to it. But now that you've just done this whole series about how things can go horribly wrong, I thought it would be fun to bring you on and talk about what you're still hopeful for after having just done that whole series.

Josh: Yeah, I’d love that, because a lot of people are hesitant to listen to the series because they're like, well, "it's got to be such a downer." And I mean, it is heavy and it is kind of a downer, but there's also a lot of hope that just kind of emerged naturally from the series just researching this stuff. There is a lot of hope — it’s pretty cool.

Ariel: That’s good. That's exactly what I want to hear. What prompted you to do that series, The End of the World?

Josh: Originally, it was just intellectual curiosity. I ran across a Bostrom paper in like 2005 or 6, my first one, and just immediately became enamored with the stuff he was talking about — it's just baldly interesting. Like anyone who hears about this stuff can't help but be interested in it. And so originally, the point of the podcast was, “Hey, everybody come check this out. Isn't this interesting? There's like, people actually thinking about this kind of stuff and talking about it.” And then as I started to interview some of the guys at the Future of Humanity Institute, started to read more and more papers and research further, I realized, wait, this isn't just like, intellectually interesting. This is real stuff. We're actually in real danger here.

And so as I was creating the series, I underwent this transition for how I saw existential risks, and then ultimately how I saw humanity's future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we're kind of close to the edge here. And so the point of why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it's like information, information, information. And then now, that you have bought into this, here's how we do something about it.

Ariel: So you have two episodes that go into biotechnology and artificial intelligence, which are two — especially artificial intelligence — they’re both areas that we work on at FLI. And in them, what I thought was nice is that you do get into some of the reasons why we're still pursuing these technologies, even though we do see these existential risks around them. And so, I was curious, as you were doing your research into the series, what did you learn about, where you were like, “Wow, that's amazing, that I'm so psyched that we're doing this, even though there are these risks.”

Josh: Basically everything I learned about. I had to learn particle physics to explain what's going on in the Large Hadron Collider. I had to learn a lot about AI. I realized when I came into it, that my grasp of AI was beyond elementary. And it's not like I could actually put together an AGI myself from scratch or anything like that now, but I definitely know a lot more than I did before. With biotech in particular, there was a lot that I learned that I found particularly jarring, like the number of accidents that are reported every year, and then more than that, the fact that not every lab in the world has to report accidents. I found that extraordinarily unsettling.

So kind of from start to finish, I learned a lot more than I knew going into it, which is actually one of the main reasons why it took me well over a year to make the series because I would start to research something and then I'd realized I need to understand the fundamentals of this. So I’d go understand, I’d go learn that, and then there'd be something else I had to learn first, before I could learn something the next level up. So I kept having to kind of regressively research and I ended up learning quite a bit of stuff.

But I think to answer your question, the thing that struck me the most was learning about physics, about particle physics, and how tenuous our understanding of our existence is, but just how much we've learned so far in just the last like century or so, when we really dove into quantum physics, particle physics and just what we know about things. One of the things that just knocked my socks off was the idea that there's no such thing as particles — like particles, as we think of them, are just basically like shorthand. But the rest of the world outside of particle physics has said like, “Okay, particles, there's like protons and neutrons and all that stuff. There's electrons. And we understand that they kind of all fit into this model, like a solar system. And that's how atoms work.”

That is not at all how atoms work, like a particle is just a pack of energetic vibrations and everything that we experience and see and feel, and everything that goes on in the universe is just the interaction of these energetic vibrations in force fields that are everywhere at every point in space and time. And just to understand that, like on a really fundamental level, changed my life actually, changed the way that I see the universe and myself and everything actually.

Ariel: I don't even know where I want to go next with that. I'm going to come back to that because I actually think it connects really nicely to the idea of existential hope. But first I want to ask you a little bit more about this idea of getting people involved more. I mean, I'm coming at this from something of a bubble at this point where I am surrounded by people who are very familiar with the existential risks of artificial intelligence and biotechnology. But like you said, once you start looking at artificial intelligence, if you haven't been doing it already, you suddenly realize that there's a lot there that you don't know.

Josh: Yeah.

Ariel: I guess I'm curious, now that you've done that, to what extent do you think everyone needs to? To what extent do you think that's possible? Do you have ideas for how we can help people understand this more?

Josh: Yeah you know, that really kind of ties into taking on existential risks in general, is just being an interested curious person who dives into the subject and learns as much as you can, but that at this moment in time, as I'm sure you know, that's easier said than done. Like you really have to dedicate a significant portion of your life to spending time focusing on that one issue whether it's AI, it’s biotech or particle physics, or nanotech, whatever. You really have to immerse yourself into it because it's not a general topic of national or global conversation, the existential risks that we're facing, and certainly not the existential risks we're facing from all the technology that everybody's super happy that we're coming out with.

And I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about it. Groups like yours, talking to the public, educating the public. I'm hoping that my series did something like that, just arousing curiosity in people, but also raising awareness of these things like, these are real things, these aren’t crackpots talking about this stuff. These are real, legitimate issues that are coming down the pike, that are being pointed out by real, legitimate scientists and philosophers and people who have given great thought to this. This isn't like a Chicken Little situation; This is quite real. I think if you can pique someone's curiosity just enough that they listen, stop and listen, do a little research, it sinks in after a minute that this is real. And that, oh, this is something that they want to be a part of doing something about.

And so I think just getting people talking about that just by proxy will interest other people who hear about it, and it will spread further and further out. And I think that that's step one, is to just make it so it’s an okay thing to talk about, so you're not nuts to raise this kind of stuff seriously.

Ariel: Well, I definitely appreciate you doing your series for that reason. I'm hopeful that that will help a lot.

Ariel: Now, Allison — you've got this website which, my understanding is that you're trying to get more people involved in this idea that if we focus on these better ideals for the future, we stand a better shot at actually hitting them.

Allison: At ExistentialHope.com, I keep a map of reading, podcasts, organizations, and people that inspire an optimistic long-term vision for the future.

Ariel: You're clearly doing a lot to try to get more people involved. What is it that you're trying to do now, and what do you think we all need to be doing more of to get more people thinking this way?

Allison: I do think that it's up to everyone, really, to try to, again, engage with the fact that we may not be doomed, and what may be on the other side. What I'm trying to do with the website, at least, is generating common knowledge to catalyze more directed coordination toward beautiful futures. I think that there's a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on how to influence that. So I think we should try to map the space of both peril and promise which lies before us, but we should really aim for this knowledge to empower each and every one of us to navigate toward the grand future.

For us currently on the website this involves orienting ourselves, so collecting useful models, and relevant broadcasts, and organizations that generate new insights, and then try to synthesize a map of where we came from, and a really kind of long perspective, and where we may go, and then which lenses of science and technology and culture are crucial to consider along the way. Then finally we would like to publish a living document that summarizes those models that are published elsewhere, to outline possible futures, and the idea is that this is a collaborative document. Even already, currently, the website links to a host of different Google docs in which we're trying to really synthesize the current state of the art in the different focus areas. The idea is that this is collaborative. This is why it's on Google docs, because everyone can just comment. And people do, and I think this should really be a collaborative effort.

Ariel: What are some of your favorite examples of content that, presumably, you've added to your website, that look at these issues?

Allison: There's quite a host of things on there. I think that a good start for people going on the website is just to go to the overview. Because here I list kind of my top 10 lists of short pieces and long pieces, but my personal ones, I think, as a starting ground: I really like the metaethics sequence by Eliezer Yudkowsky. It contains really good posts, like Existential Angst Factory and Morality as Fixed Computation. For me this is kind of like existentialism 2.0. You have to get your motivations and expectations right. What can I reasonably hope for? Then I think, relatedly, there's also the Fun Theory sequence, also by Yudkowsky. But that together with, for example, Letter From Utopia by Nick Bostrom, or Hedonistic Imperative by David Pearce, or the posts on Raikoth by Scott Alexander — they are really a nice next step because they actually lay out a few compelling positive versions of utopia.

Then if you want to get into the more nitty gritty there's a longer section on civilization, its past and its future — so, what's wrong and how to improve it. Here Nick Bostrom wrote this piece on the future of human evolution, which lays out two suboptimal paths for humanity's future, and interestingly enough they don't involve extinction. A similar one, I think, which probably many people are familiar with, is Scott Alexander's Meditations On Moloch, and then some that people are less familiar with — Growing Children For Bostrom's Disneyland. They are really interesting, because they are other pieces of this type, which sketch out competitive and selective pressures that lead toward races to the bottom, as negative futures which don't involve extinction per se. I think the really interesting thing, then, is that even those futures are only bad if you think that the bottom is bad.

Next to them I list books — for example Robin Hanson's Age of Em, which argues that living at subsistence may not be terrible, and that in fact it's pretty much what most past lives, outside of the current dreamtime, have always involved. So I think those are two really different lenses for making sense of the same reality, and I personally found this contrast so intriguing that I hosted a salon last year with Paul Christiano, Robin Hanson, Peter Eckersley, and a few others to map out where we may be racing toward, and how bad those competitive equilibria actually are. I also link to those from the website.

To me it's always interesting to map out one possible future vision, and then try to find another that either contradicts or complements it. I think having a good overview of those gives you a good map, or at least a space of possibilities.

Ariel: What do you recommend to people who are interested in trying to do more? How do you suggest they get involved?

Allison: One thing, an obvious thing, would be commenting on the Google Docs, and I really encourage everyone to do that. Another would be just to join the mailing list. You can indicate whether you want updates, or whether you want to collaborate, in which case we may reach out to you. Or if you're interested in meetups — those are only in San Francisco so far, but I'm hoping that there may be others. I do think that currently the project is really in its infancy. We are relying on the community to help with this, so it should be a kind of collaborative vision.

I think that one of the main things I'm hoping people can get out of it for now is just some inspiration on where we may end up if we get it right, and on why work toward better futures, or even work toward preventing existential risks, is both possible and necessary. If you go to the first section of the website — the vision section — that's what that section is for.

Secondly, then, if you've already opted in, if you're already committed, I'm hoping that perhaps the project can provide some orientation. If someone would like to help but doesn't really know where to start, the focus areas are an attempt to map out the different areas where we need to make progress for better futures. Each area comes with an introductory text and organizations working in that area that one can join or support, and the Future of Life Institute appears in a lot of those areas.

Then I think finally, just apart from inspiration or orientation, it's really a place for collaboration. The project is in its infancy and everyone should contribute their favorite pieces to our better futures.

Ariel: I’m really excited to see what develops in the coming year for existentialhope.com. And, naturally, I also want to hear from Max and Anthony about 2019. What are you looking forward to for FLI next year?

Max: For 2019 I'm looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on Earth. At the nerdy level, I'm looking forward to more collaboration on AI safety research, and also on ways of making the economy — which keeps growing thanks to AI — actually make everybody better off, rather than some people poorer and angrier. And at the most global level, I'm really looking forward to working harder to get past this outdated us-versus-them attitude that we still have between the US and China and Russia and other major powers. Many of our political leaders are so focused on the zero-sum-game mentality that they will happily take major risks of nuclear war and AI arms races and other outcomes where everybody would lose, instead of just realizing, hey, you know, we're actually in this together. What does it mean for America to win? It means that all Americans get better off. What does it mean for China to win? It means that the Chinese people all get better off. Those two things can obviously happen at the same time, as long as there's peace and technology just keeps improving life for everybody.

In practice, I'm very eagerly looking forward to seeing if we can get scientists from around the world — for example, AI researchers — to converge on certain shared goals that are really supported everywhere in the world, including by political leaders and in China and the US and Russia and Europe and so on, instead of just obsessing about the differences. Instead of thinking us versus them, it's all of us on this planet working together against the common enemy, which is our own stupidity and the tendency to make bad mistakes, so that we can harness this powerful technology to create a future where everybody wins.

Anthony: I would say I'm looking forward to more of what we're doing now: thinking more about the futures that we do want. What exactly do those look like? Can we really think through pictures of the future that make sense to us, that are attractive, that are plausible, and yet aspirational — and where we can identify things and systems and institutions that we can build now, toward the aim of getting us to those futures? So far, I think there's been a lot of thinking about what major problems might arise, and I think that's really, really important — that project is certainly not over, and it's not like we've avoided all of those pitfalls by any means — but I think it's important not just to not fall into the pit, but to actually have a destination that we'd like to get to — you know, the resort at the other end of the jungle or whatever.

I find it a bit frustrating when people do what I'm doing now: they talk about talking about what we should and shouldn't do, but they don't actually talk about what we should and shouldn't do. I think the time has come to actually talk about it. In the same way that when there was the first use of CRISPR in an embryo that came to term, everybody was saying, "Well, we need to talk about what we should and shouldn't do with this. We need to talk about that, we need to talk about it." Let's talk about it already.

So I'm excited about upcoming events that FLI will be involved in that are explicitly thinking about: let's talk about what that future is that we would like to have and let's debate it, let's have that discussion about what we do want and don't want, try to convince each other and persuade each other of different visions for the future. I do think we're starting to actually build those visions for what institutions and structures in the future might look like. And if we have that vision, then we can think of what are the things we need to put in place to have that.

Ariel: So one of the reasons that I wanted to bring Gaia on is because I'm working on a project with her — and it's her project — where we're looking at this process of what's known as worldbuilding, to sort of look at how we can move towards a better future for all. I was hoping you could describe it, this worldbuilding project that I'm attempting to help you with, or work on with you. What is worldbuilding, and how are you modifying it for your own needs?

Gaia: Yeah. Worldbuilding is a really fascinating set of techniques. It's a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings. And in more contemporary times, some spectacularly advanced worldbuilding is occurring in the gaming industry — these huge connected systems of systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, worldbuilders have begun to formally bring these tools outside of the entertainment business — outside of narrative fiction and gaming, film and so on — and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It's a collaborative design practice. And in my personal definition of worldbuilding, the way that I'm thinking of it, and using it, is that it unfolds in four main stages. The first stage is: we develop a foundation of shared knowledge that's grounded in science, and research, and relevant domain expertise. And the second phase is building on that foundation of knowledge. We engage in an exercise where we predict how the interconnected systems that have emerged in this knowledge database — we predict how they will evolve. And we imagine the state of their evolution at a specific point in the future. Then the third phase is really about capturing that state in all its complexity, and making that information useful to the people who need to interface with it. And that can be in the form of interlinked databases and particularly also in the form of visualizations, which help make these sort of abstract ideas feel more present and concrete. And then the fourth and final phase is then utilizing that resulting world as a tool that can be used to support scenario simulation, research, and development in many different areas including public policy, media production, education, and product development.

I mentioned that these techniques are being brought outside of the realm of entertainment. So rather than just designing fantasy worlds for the sole purpose of containing narrative fiction and stories, these techniques are now being used with communities, and Fortune 500 companies, and foundations, and NGOs, and other places, to create plausible future worlds. It's fascinating to me to see how these are being used. For example, they're being used to reimagine the mission of an organization. They're being used to plan for the future, and plan around a collective vision of that future. They're very powerful for developing new strategies, new programs, and new products. And I think to me one of the most interesting things is really around informing policy work. That's how I see worldbuilding.

Ariel: Are there any actual examples that you can give or are they proprietary?

Gaia: There are many examples that have created some really incredible outcomes. One of the first examples of worldbuilding that I ever learned about was a project that was done with a native Alaskan tribe. And the comments that came from the tribe about that experience were what really piqued my interest, because they said things like, "This enabled us to leapfrog over the barriers in our current thinking and imagine possibilities beyond what we had considered." This project brought together several dozen members of the community to engage in this collaborative design exercise, and actually visualize and build out those systems and understand how they would be interconnected. And it ended up resulting in, I think, some really incredible things — like a partnership with MIT where they brought a digital fabrication lab onto their reservation, and created new education programs around digital design and digital fabrication for their youth. And there are a lot of other things still coming out of that particular worldbuild.

There are other examples where Fortune 500 companies are building out really detailed, long-term worldbuilds that are helping them stay relevant, and imagine how their business model is going to need to transform in order to adapt to really plausible, probable futures that are just around the corner.

Ariel: I want to switch now to what you specifically are working on. The project looks roughly 20 years into the future. And you've started walking through a couple of systems yourself while we've been working on the project, so I thought it might be helpful if you could walk through, with us, what those steps are, to help understand how this process works.

Gaia: Maybe I'll just take a quick step back, if that's okay and just explain the worldbuild that we're preparing for.

Ariel: Yeah. Please do.

Gaia: This is a project called Augmented Intelligence. The first Augmented Intelligence summit is happening in March 2019. And our goal with this project is really to engage with and shift the culture, and also our mindset, about the future of artificial intelligence. And to bring together a multidisciplinary group of leaders from government, academia, and industry, and to do a worldbuild that's focused on this idea of: what does our future world look like with advanced AI deeply integrated into it? And to go through the process of really imagining and predicting that world in a way that's just a bit further beyond the horizon that we normally see and talk about. And that exercise, that's really where we're getting that training for long-term thinking, and for systems-level thinking. And the world that results — our hope is that it will allow us to develop better intuitions, to experiment, to simulate scenarios, and really to have a more attuned capacity to engage in many ways with this future. And ultimately explore how we want to evolve our tools and our society to meet that challenge.

Gaia: What will come out of this process — it really is a generative process that will create assets and systems that are interconnected, that inhabit and embody a world. And this world should allow us to experiment, and simulate scenarios, and develop a more attuned capacity to engage with the future. And that means on both an intuitive level and also in a more formal structured way. And ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future. Our goal is to really bootstrap a steering mechanism that allows us to navigate more effectively toward outcomes that support human flourishing.

Ariel: I think that's really helpful. I think an example to walk us through what that looks like would be helpful.

Gaia: Sure. You know, basically what would happen in a worldbuilding process is that you would have some constraints or some sort of seed information that you think is very likely — based on research, based on the literature, based on sort of the input that you're getting from domain experts in that area. For example, you might say, "In the future we think that education is all going to happen in a virtual reality system that's going to cover the planet." Which I don't think is actually the case, but just to give an example. You might say something like, "If this were true, then what are the implications of that?" And you would build a set of systems, because it's very difficult to look at just one thing in isolation.

Because as soon as you start to do that — John Muir says, “As soon as you try to look at just one thing, you find that it is irreversibly connected to everything else in the universe.” And I apologize to John Muir for not getting that quote exactly correct, he says it much more eloquently than that. But the idea is there. And that’s sort of what we leverage in a worldbuilding process: where you take one idea and then you start to unravel all of the implications, and all of the interconnecting systems that would be logical, and also possible, if that thing were true. It really does depend on the quality of the inputs. And that's something that we're working really, really hard to make sure that our inputs are believable and plausible, but don't put too much in terms of constraints on the process that unfolds. Because we really want to tap into the creativity in the minds of this incredible group of people that we’re gathering, and that is where the magic will happen.

Ariel: To make sure that I'm understanding this right: if we use your example of, let's say all education was being taught virtually, I guess questions that you might ask or you might want to consider would be things like: who teaches it, who's creating it, how do students ask questions, who would their questions be directed to? What other types of questions would crop up that we'd want to consider? Or what other considerations do you think would crop up?

Gaia: You also want to look at the infrastructure questions, right? So if that's really something that is true all over the world, what do server farms look like in that future, and what's the impact on the environment? Is there some complementary innovation that has happened in the field of computing that has made computing far more efficient? How have we been able to do this, given that there are certain physical limitations that just exist on our planet? If X is true in this interconnected system, then how have we shaped, and molded, and adapted everything around it to make that thing true? You can look at infrastructure, you can look at culture, you can look at behavior, you can look at, as you were saying, communication and representation in that system and who is communicating. What are the rules? I mean, I think a lot about the legal framework and the political structure that exists around this. So who has power and agency? How are decisions made?

Ariel: I don't know what this says about me, but I was just wondering what detention looks like in a virtual world.

Gaia: Yeah. It's a good question. I mean, what are the incentives and what are the punishments in that society? And do our ideas of what incentives and punishments look like actually change in that context? There isn't a place where you can come on a Saturday if there's no physical school yard. How is detention even enforced when people can log in and out of the system at will?

Ariel: All right, now you have me wondering what recess looks like.

Gaia: So you can see that there are many different fascinating rabbit holes you could go down. And of course our goal is to make this process really useful for imagining the way we want our policies, and our tools, and our education to evolve.

Ariel: I want to ask one more question about ... Well, it's sort of about this but there's also a broader aspect to it. And that is, I hear a lot of talk — and I’m one of the people saying this because I think it's absolutely true — that we need to broaden the conversation and get more diverse voices into this discussion about what we want our future to look like. But what I'm finding is that this sounds really nice in theory, but it's incredibly hard to actually do in practice. I'm under the impression that that is some of what you're trying to address with this project. I'm wondering if you can talk a little bit about how you envision trying to get more people involved in considering how we want our world to look in the future.

Gaia: Yeah, that's a really important question. One of the sources of inspiration for me on this point was a conversation with Stuart Russell — an interview with Stuart Russell, I should say — that I listened to. We've been really fortunate and we are thrilled that he's one of our speakers and he'll be involved in the worldbuilding process. And he talks about this idea that the artificial intelligence researchers, the roboticists, even the few technologists that are building these amplifying tools that are just increasing in potency year over year, are not the only ones who need to have input into the conversation around how they're utilized and the implications for all of us. And that's really one of the core philosophies behind this particular project: we really want it to be a multidisciplinary group that comes together, and we're already seeing that. We have a really wonderful set of collaborators who are thinking about ethics in this space, and who are thinking about a broader definition of ethics, and different cultural perspectives on ethics. And how we can create a conversation that allows space for those to simultaneously coexist.

Allison: I recently had a similar kind of question that arose in conversation, which was about: why are we lacking positive future visions so much? Why are we all kind of stuck in a snapshot of the current suboptimal macro situation? I do think it's our inability to really think in larger terms. If you look at our individual human life, clearly for most of us, it's pretty incredible — our ability to lead much longer and healthier lives than ever before. If we compare this to how well humans used to live, this difference is really unfathomable. I think Yuval Harari said it right, he said "You wouldn't want to have lived 100 years ago." I think that's correct. On the other hand I also think that we're not there yet.

I find it, for example, pretty peculiar that we say we value freedom of choice in everything we do, but in the one thing that's kind of the basis of all of our freedoms — our very existence — we leave it to aging to slowly deteriorate ourselves and everything we value. I think that every day, aging is burning libraries. We've come a long way, but we're not safe, and we are definitely not there yet. I think the same holds true for civilization at large. Thanks to a lot of technologies our living standards have been getting better and better, and I think the decline of poverty and violence are just two examples.

We can share knowledge much more easily, and I think everyone who's read Enlightenment Now will be kind of tired of those graphs. But again, I also think that we're not there yet. Even though we have fewer wars than ever before, the ability to wipe ourselves out as a species also really exists, and in fact this ability is now available to more people; as technologies mature, it may really only take a small and well-curated group of individuals to cause havoc with catastrophic consequences. If you let that sink in, it's really absurd that we have no emergency plan for the use of technological weapons. We have no plans to rebuild civilization. We have no plans to back up human life.

I think that current news articles take too much of a short-term view — they're more a snapshot. The long-term view, on the one hand, opens your eyes to "Hey, look how far we've come," but also to "Oh man, we're here, we've made it so far, and there's no feasible plan for safety yet." I do think we need to change that. So the long view doesn't only hand us rosy glasses; it also brings the realization that we ought to do more, because we've come so far.

Josh: Yeah, one of the things that makes this time so dangerous is we're at this kind of fork in the road, where if we go this one way — say, with figuring out how to develop friendliness in AI — we could have this amazing, astounding future for humanity that stretches for billions and billions and billions of years. One of the things that really opened my eyes was — I always thought that the heat death of the universe would spell the end of humanity. There's no way we'll ever make it past that, because that's just the cessation of everything that makes life happen, right? And we will probably have perished long before that. But let's say we figured out a way to just make it to the last second and humanity dies at the same time the universe does. There's still an expiration date on humanity. We still go extinct eventually. But one of the things I ran across when I was doing research for the physics episode is that the concept of growing a universe from seed, basically, in a lab is out there — it's been worked out. I don't remember who came up with it, but somebody has sketched out basically how to do this.

It's 2018. If we think 100 or 200 or 500 or a thousand years down the road and that concept can be built upon and explored, we may very well be able to grow universes from seed in laboratories. Well, when our universe starts to wind down or something goes wrong with it, or we just want to get away, we could conceivably move to another universe. And so we suddenly lose that expiration date for humanity that's associated with the heat death of the universe, if that is how the universe goes down. And so this idea that we have a future lifetime that spans into at least the multiple billions of years — at least a billion years if we just manage to stay alive on Planet Earth and never spread out but just don't actually kill ourselves — when you take that into account the stakes become so much higher for what we're doing today.

Ariel: So, we’re pretty deep into this podcast, and we haven’t heard anything from Anders Sandberg yet, and this idea that Josh brought up ties in with his work. Since we’re starting to talk about imagining future technologies, let’s meet Anders.

Anders: Well, I'm delighted to be on this. I'm Anders Sandberg. I'm a senior research fellow at The Future of Humanity Institute at University of Oxford.

Ariel: One of the things that I love, just looking at your FHI page, you talk about how you try to estimate the capabilities of future technology. I was hoping you could talk a little bit about what that means, what you've learned so far, how one even goes about studying the capabilities of future technologies?

Anders: Yeah. It is a really interesting problem because technology is based on ideas. As a general rule, you cannot predict what ideas people will come up with in the future, because if you could, you would already kind of have that idea. So this means that, especially technologies that are strongly dependent on good ideas, are going to be tremendously hard to predict. This is of course why artificial intelligence is a little bit of a nightmare. Similarly, biotechnology is strongly dependent on what we discover in biology and a lot of that is tremendously weird, so again, it's very unpredictable.

Meanwhile, other domains of life are advancing at a more sedate pace. It's more like you incrementally improve things. So the ideas are certainly needed, but we don't really change everything around. If you think about more slower, microprocessors are getting better and a lot of improvements are small, incremental ones. Some of them require a lot of intelligence to come up with, but in the end it all sums together. It's a lot of small things adding together. So you can see a relatively smooth development in the large.

Ariel: Okay. So what you're saying is we don't just have each year some major discovery, and that's what doubles it. It's lots of little incremental steps.

Anders: Exactly. But if you look at the performance of some software, quite often it goes up smoothly because the computers are getting better, and then somebody has a brilliant idea that can do it not just in 10% less time, but maybe in 10% of the time it would have taken. For example, the fast Fourier transform that people developed in the 60s and 70s enables the compression we use today for video and audio, and enables multimedia on the internet. Without that speedup, it would not be practical to do, even with current computers. This is true for a lot of things in computing: you get a surprise insight, and a problem that previously might have been impossible to do efficiently suddenly becomes quite convenient. So the problem is of course: what can we say about the abilities of future technology if these things happen?
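To make that kind of jump concrete, here is a minimal Python sketch, assuming only NumPy: it compares a direct O(n²) discrete Fourier transform with NumPy's FFT on the same input. The array size and the timing approach are illustrative choices, not figures from the episode.

```python
# Minimal sketch: the same transform computed directly in O(n^2) time
# versus with the fast Fourier transform in roughly O(n log n) time.
import time
import numpy as np

def naive_dft(x):
    """Direct discrete Fourier transform: every output frequency sums over all n samples."""
    n = len(x)
    k = np.arange(n)
    # n x n matrix of complex exponentials, then a full matrix-vector product.
    twiddle = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return twiddle @ x

x = np.random.rand(2048)

start = time.perf_counter()
slow = naive_dft(x)
t_slow = time.perf_counter() - start

start = time.perf_counter()
fast = np.fft.fft(x)
t_fast = time.perf_counter() - start

print(f"direct DFT: {t_slow:.4f} s   FFT: {t_fast:.6f} s")
print("results agree:", np.allclose(slow, fast))
```

Even at a couple of thousand samples the FFT is typically orders of magnitude faster, and the gap keeps widening as the input grows — exactly the sort of algorithmic jump that no smooth hardware trend predicts.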

One of the nice things you can do is lean on the laws of physics. There are good reasons not to think that perpetual motion machines can work, because we understand energy conservation and the laws of thermodynamics, which give very strong reasons why this cannot happen. We can be pretty certain that that's not possible. We can analyze what would be possible if you had perpetual motion machines or faster-than-light transport, and you can see that some of the consequences are really weird — which makes you suspect that this is probably not going to happen. So that's one way of looking at it. But you can do the reverse: you can take laws of physics and engineering that you understand really well and make fictional machines — essentially work out all the details and say, "Okay, I can't build this, but were I to build it, what properties would it have?" If I wanted to build, let's say, a machine made out of atoms, could I make it work? It turns out that this is possible to do in a rigorous way, and it tells you the capabilities of machines that don't exist yet, and maybe never will be built, but it shows you what's possible.

This is what Eric Drexler did for nanotechnology in the 80s and 90s. He basically worked out what would be possible if we could put atoms in the right place. He could demonstrate that this would produce machines of tremendous capability. We still haven't built them, but he proved that these can be built — and we probably should build them because they are so effective, so environmentally friendly, and so on.

Ariel: So you gave the example of what he came up with a while back. What sort of capabilities have you come across that you thought were interesting that you're looking forward to us someday pursuing?

Anders: I've been working a little bit on the question of whether it's possible to settle a large part of the universe. Together with my colleagues, I have been working out a bit of the physical limitations of that. All in all, we found that a civilization doesn't need to use an enormous, astronomical amount of matter and energy to settle a very large chunk of the universe — the total amount of matter corresponds to roughly a Mercury-sized planet in a solar system in each of the galaxies. Many people would say that if you want to settle the universe you need enormous spacecraft and an enormous amount of energy — something you would be able to see across half of the universe. But we could demonstrate that if you essentially use the matter from a really big asteroid or a small planet, you can get enough solar collectors to launch small spacecraft to all the stars and all the galaxies within reach, and there you again use a bit of asteroid material to do it. The laws of physics allow intelligent life to spread across an enormous amount of the universe in a rather quiet way.

Ariel: So does that mean you think it's possible that there is life out there and it's reasonable for us not to have found it?

Anders: Yes. If we were looking at the stars, we would probably miss it if one or two stars in remote galaxies were covered with solar collectors. It's rather easy to miss them among the hundreds of billions of other stars. This was actually the reason we did this paper: we demonstrate that much of the thinking about the Fermi paradox — that annoying question that, well, there ought to be a lot of intelligent life out in the universe given how large it is, and we tend to think that it's relatively likely, yet we don't see anything — many of those explanations are based on the possibility of colonizing just the Milky Way. In this paper, we demonstrate that you actually need to care about all the other galaxies too. In a sense, we made the Fermi paradox between a million and a billion times worse. Of course, this is all in a day's work for us in the Philosophy Department, making everybody's headaches bigger.

Ariel: And now it's just up to someone else to figure out the actual way to do this technically.

Anders: Yeah, because it might actually be a good idea for us to do.

Ariel: So Josh, you've mentioned the future of humanity a couple of times, and humanity in the future, and now Anders has mentioned the possibility of colonizing space. I’m curious how you think that might impact humanity. How do you define humanity in the future?

Josh: I don't know. That's a great question. It could take any number of different routes. I think — Robin Hanson is an economist who came up with the great filter hypothesis, and I talked to him about that very question. His idea was — and I'm sure it's not just his, it's probably a pretty popular idea — that once we spread out from Earth and start colonizing further and further out into the galaxy, and then into the universe, we'll undergo speciation events: there will be multiple species of humans in the universe again, just like there were 50,000 years ago, when we shared Earth with multiple species of humans.

The same thing is going to happen as we spread out from Earth. I mean, I guess the question is, which humans are you talking about, in what galaxy? I also think there's a really good chance — and this could happen among multiple human species — that at least some humans will eventually shed their biological form and upload themselves into some sort of digital format. I think if you just start thinking in efficiencies, that's just a logical conclusion to life. And then there's any number of routes we could take and change especially as we merge more with technology or spread out from Earth and separate ourselves from one another. But I think the thing that really kind of struck me as I was learning all this stuff is that we tend to think of ourselves as the pinnacle of evolution, possibly the most intelligent life in the entire universe, right? Certainly the most intelligent on Earth, we'd like to think. But if you step back and look at all the different ways that humans can change, especially like the idea that we might become post-biological, it becomes clear that we're just a point along a spectrum that keeps on stretching out further and further into the future than it does even into the past.

We're just at a current situation on that point right now. We're certainly not like the end-all be-all of evolution. And ultimately, we may take ourselves out of evolution by becoming post-biological. It's pretty exciting to think about all the different ways that it can happen, all the different routes we can take — there doesn't have to just be one single one either.

Ariel: Okay, so, I kind of want to go back to some of the space stuff a little bit, and Anders is the perfect person for my questions. I think one of the first things I want to ask is, very broadly, as you're looking at these different theories about whether or not life might exist out in the universe and that it's reasonable for us not to have found it, do you connect the possibility that there are other life forms out there with an idea of existential hope for humanity? Or does it cause you concern? Or are they just completely unrelated?

Anders: The existence of extraterrestrial intelligence: if we knew they existed, that would in some sense be hopeful, because we would know the universe allows for more than our kind of intelligence, and that intelligence might survive over long spans of time. If we discovered that we're all alone except for a lot of ruins from extinct civilizations, that would be very bad news for us. But we might also have this weird situation that we currently face, where we don't see anybody and we don't notice any ruins; maybe we're just really unique, and should perhaps feel a bit proud or lucky, but also responsible for a whole universe. It's tricky. It seems like we could learn something very important if we understood how much intelligence there is out there. Generally, I have been trying to figure out: is the absence of aliens evidence for something bad, or might it actually be evidence for something very hopeful?

Ariel: Have you concluded anything?

Anders: Generally, our conclusion has been that the absence of aliens is not surprising. We tend to think that the Fermi paradox implies, "Oh, there's something strange here." The universe is so big, and if you multiply the number of stars by some reasonable probability, you should get loads of aliens. But actually, the problem is that "reasonable probability." We normally tend to think of that as something like bigger than one chance in a million or so, but actually, there is no reason the laws of physics couldn't put that probability at one in a googol. It turns out that we're uncertain enough about the origin of life and the origins of intelligence and other forms of complexity that it's not implausible that we are the only life within the visible universe. So we shouldn't be too surprised about that empty sky.

One possible reason for the great silence is that life is extremely rare. Another possibility might be that life is not rare, but it's very rare that it evolves complex nervous systems. Another reason might be, of course, that once you get intelligence, it destroys itself relatively quickly — Robin Hanson has called this the Great Filter. We know that one of the terms in the big equation for the number of civilizations in the universe needs to be very small; otherwise the sky would be full of aliens. But is it one of the early terms, like the origin of life or the origin of intelligence, or the late term — how long intelligence survives? Now, if there is an early Great Filter, that is rather good news for us. We are going to be very unique and maybe a bit lonely, but it doesn't tell us anything dangerous about our own chances. Of course, we might still flub it and go extinct because of our own stupidity, but that's kind of up to us rather than the laws of physics.

On the other hand, if it turns out that there is a late Great Filter, then even though we know the universe might be dangerous, we're still likely to get wiped out — which is very scary. So, figuring out where the unlikely terms in the big equation are is actually quite important for making a guess about our own chances.

Ariel: Where are we now in terms of that?

Anders: Right now, in my opinion — I have a paper, not published yet, it's in the review process, where we try to apply proper uncertainty calculations to this. Many people make guesstimates about the probabilities of various things, admit that they're guesstimates, and then get a number at the end that they also admit is a bit uncertain. But they haven't actually done a proper uncertainty calculation, so quite a lot of these numbers end up surprisingly biased. So instead of saying that maybe there's one chance in a million that a planet develops life, you should try to have a full range: what's the lowest probability there could be for life, what's the highest, and how do you think it's distributed between them? If you use that kind of proper uncertainty range, multiply it all together, and do the maths right, then you get a probability distribution for how many alien species there could be in the universe. Even if you start out relatively optimistic about the mean value of all of this, you will still find that you get a pretty big chunk of probability that we're actually alone in the Milky Way, or even the observable universe.

In some sense, this is just common sense. But it's a very nice thing to be able to quantify the common sense, and then start asking: what happens if, for example, we discover that there is life on Mars? What would that tell us? How would that update things? You can use the math to calculate that, and this is what we've done. Similarly, if we notice that there don't seem to be any alien supercivilizations around the visible universe, that's a very weak update, but you can still use it to see that this shifts our estimates of the probability of life and intelligence much more than the longevity of civilizations.

Mathematically, this gives us a reason to think that the Great Filter might be early. The absence of life might be rather good news for us, because it means that once you get intelligence, there's no reason why it can't persist for a long time and grow into something very flourishing. That is a really good cause for existential hope. It's really promising, but of course we need to do our observations. We actually need to look for life; we need to look out at the sky and see. We may find alien civilizations. In the end, any amount of mathematics and armchair astrobiology can be disproven by a single observation.
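The style of calculation Anders describes can be sketched in a few lines of Python: draw each factor of a Drake-style equation from a wide uncertainty range rather than using a point estimate, multiply the samples together, and look at the whole resulting distribution. The ranges below are loose placeholders chosen for illustration, not the values from his paper.

```python
# Monte Carlo over a Drake-style equation with wide, log-uniform uncertainty
# ranges on each factor (all ranges are illustrative placeholders).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

def log_uniform(low, high, size):
    """Sample uniformly in log space between low and high."""
    return 10.0 ** rng.uniform(np.log10(low), np.log10(high), size)

star_formation = log_uniform(1, 100, n_samples)         # stars formed per year
frac_with_planets = log_uniform(0.1, 1.0, n_samples)    # fraction of stars with planets
habitable_per_star = log_uniform(0.01, 1.0, n_samples)  # habitable planets per star
p_life = log_uniform(1e-30, 1.0, n_samples)             # probability life emerges
p_intelligence = log_uniform(1e-10, 1.0, n_samples)     # probability of intelligence
p_detectable = log_uniform(0.01, 1.0, n_samples)        # fraction that become detectable
lifetime = log_uniform(1e2, 1e9, n_samples)             # detectable lifetime in years

n_civilizations = (star_formation * frac_with_planets * habitable_per_star
                   * p_life * p_intelligence * p_detectable * lifetime)

print("median number of civilizations:", np.median(n_civilizations))
print("P(fewer than one civilization):", np.mean(n_civilizations < 1))
```

The point is to read off the whole distribution rather than one headline number: with input ranges this wide, a substantial share of the probability mass falls below one civilization, even though some samples still imply a crowded galaxy.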

Ariel: That comes back to a question that came to mind a bit earlier. As you're looking at all of this stuff and especially as you're looking at the capabilities of future technologies, once we figure out what possibly could be done, can you talk a little bit about what our limitations are today from actually doing it? How impossible is it?

Anders: Well, impossible is a really tricky word. When I hear somebody say "it's impossible," I immediately ask "do you mean against the laws of physics and logic" or "we will not be able to do this for the foreseeable future" or "we can't do it within the current budget”?

Ariel: I think maybe that's part of my question. I'm guessing a lot of these things probably are physically possible, which is why you've considered them, but yeah, what's the difference between what we're technically capable of today and what, for whatever reason, we can't budget into our research?

Anders: We have a domain of technologies that we have already been able to construct. Some of them are maybe too expensive to be very useful; some of them still require a bunch of grad students holding them up and patching them as they break all the time, but we can kind of build them. Then there's technology that we are very robustly good at — we have been making cog wheels and combustion engines for decades now, and we're really good at that. And then there are the technologies where we can do exploratory engineering: demonstrating that if we actually had cog wheels made out of pure diamond, or a Dyson shell surrounding the sun collecting energy, they could do the following things.

So they don't exist as practical engineering. You can work out blueprints for them and in some sense of course, once we have a complete enough blueprint, if you asked could you build the thing, you could do it. The problem is of course normally you need the tools and resources for that, and you need to make the tools to make the tools, and the tools to make those tools, and so on. So if we wanted to make atomically precise manufacturing today, we can't jump straight to it. What we need to make is a tool that allows us to build things that are moving us much closer.

The Wright Brothers' airplane was really lousy as an airplane but it was flying. It's a demonstration, but it's also a tool that allows you to make a slightly better tool. You would want to get through this and you'd probably want to have a roadmap and do experiments and figure out better tools to do that.

This is typically where scientists actually have to give way to engineers, because engineers care about solving a problem rather than being the most elegant about it. In science, we want to have this beautiful explanation of how everything works; then we do experiments to test whether it's true and refine our explanation. But in the end, the paper that gets published is going to be the one with the most elegant understanding. In engineering, the thing that actually sells and changes the world is not going to be the most elegant thing but the most useful thing. The AK-47 is in many ways not a very precise piece of engineering, but that's the point: it should be possible to repair it in the field.

The reason our computers work so well is that we figured out a growth path: you use photolithography to etch silicon chips, and that allowed us to make a lot of them very cheaply. As we learned more and more about how to do that, they became cheaper and more capable, and we developed even better ways of etching them. So in order to build molecular nanotechnology, you would need to go through a somewhat similar chain. It might be that you start out using biology to make proteins, then you use the proteins to make some kind of soft machinery, then you use that soft machinery to make hard machinery, and eventually you end up with something like the work of Eric Drexler.

Ariel: I actually want to step back to the present now and you mentioned computers and we're doing them very well. But computers are also an example of — or maybe software I suppose is more the example — of technology that works today but it often fails. Especially when we're considering things like AI safety in the future, what should we make of the fact that we're not designing software to be more robust? I mean, I think especially if we look at something like airplanes which are quite robust, we can see that it could be done but we're still choosing not to.

Anders: Yeah, nobody would want to fly with an airplane that crashed as often as a word processor.

Ariel: Exactly.

Anders: It’s true that the earliest airplanes were very crash-prone — in fact, most of them were probably as bad as our current software is. But the main reason we're not making software better is that most of the time we're not willing to pay for that quality. Also, there are some very hard engineering problems in engineering complexity. Making a very hard material is not easy, but in some sense it's a straightforward problem. If, on the other hand, you have literally billions of moving pieces that all need to fit together, then it gets tricky to make sure that it always works as it should. But it can be done.

People have been working on mathematical proofs that certain pieces of software are correct and secure. It's just that up until recently, it's been so expensive and tough that nobody really cared to do it except maybe some military groups. Now it's starting to become more and more essential because we've built our entire civilization on a lot of very complex systems that are unfortunately very insecure, very unstable, and so on. Most of the time we get around it by making backup copies and whenever a laptop crashes, well, we reboot it, swear a bit and hopefully we haven't lost too much work.

That's not always a bad solution — a lot of biology is like that too. Cells in our bodies are failing all the time but they’re just getting removed and replaced and then we try again. But this, of course, is not enough for certain sensitive applications. If we ever want to have brain-to-computer interfaces, we certainly want to have good security so we don't get hacked. If we want to have very powerful AI systems, we want to make sure that their motivations are constrained in such a way that they're helpful. We also want to make sure that they don't get hacked or develop weird motivations or behave badly because their owners told them to behave badly. Those are very complex problems: It's not just like engineering something that's simply safe. You're going to need entirely new forms of engineering for that kind of learning system.

This is something we're learning. We haven't been building things like software for very long and when you think about the sheer complexity of a normal operating system, even a small one running on a phone, it's kind of astonishing that it works at all.

Allison: I think Eliezer Yudkowsky once said that the problem of our complex civilization is its complexity. It does seem that technology is outpacing our ability to make sense of it. But I think we have to remind ourselves again of why we developed those technologies in the first place, and of the tremendous promise if we get it right. Of course, solving the problems that are created by technologies — existential risks, for example, or at least some of them — requires some non-technological ingredients, especially human reasoning, sense-making, and coordination.

And I'm not saying that we have to focus on one conception of the good. There are many conceptions of the good: there are transhumanist futures, there are cosmist futures, there are extropian futures, and many, many more, and I think that's fine. I don't think we have to agree on a common conception just yet — in fact we really shouldn't. The point is not that we ought to settle soon, but that we have to allow into our lives again the possibility that things can be good — that good things are possible, not guaranteed, but possible. I think to use technologies for good we really need a change of mindset, from pessimism to at least conditional optimism. And we need a plethora of those visions, right? It's not going to be just one of them.

I do think that in order to use technologies for good purposes, we really have to remind ourselves that they can be used for good, and that there are good outcomes in the first place. I genuinely think that often in our research, we put the cart before the horse in focusing solely on how catastrophic human extinction would be. I think this often misses the point that extinction is really only so bad because the potential value that could be lost is so big.

Josh: If we can just make it to this point — Nick Bostrom, whose ideas a lot of The End of the World is based on, calls it technological maturity. It's kind of a play on something Carl Sagan said about the point we're at now — "technological adolescence" is what Sagan called it — which is this point where we're starting to develop really intense, amazingly powerful technology that will one day be able to guarantee a wonderful, amazing existence for humanity, if we can survive to the point where we've mastered it safely. That's what stretches out ahead of us over the next hundred or 200 or maybe 300 years. That's the challenge that we have in front of us. If we can make it to technological maturity — if we figure out how to make an artificial general intelligence that is friendly to humans, that basically exists to make sure that humanity is well cared for — there's just no telling what we'll be able to come up with, and just how vastly improved the life of the average human would be in that situation.

We're talking — honestly, this isn't some crazy far-out, far-future idea. This is conceivably something that we could get done as humans in the next century or two or three. Even if you talk out to 1,000 years, that sounds far away, but really, that's not a very long time when you consider just how long a lifespan humanity could have stretching out ahead of it. The stakes — that almost gives me a panic attack when I think of just how close that kind of a future is for humankind, and just how close to the edge we're walking right now in developing that very same technology.

Max: The way I see the future of technology as we go towards artificial general intelligence, and perhaps beyond — it could totally make life the master of its own destiny, which makes this a very important time to stop and think about what we want this destiny to be. The clearer and more positive a vision we can formulate, the more likely I think it is we're going to get that destiny.

Allison: We often seem to think that rather than optimizing for good outcomes, we should aim for maximizing the probability of an okay outcome, but I think for many people it's more motivational to act on a positive vision, rather than one that is steered by risks only. To be for something rather than against something. To work toward a grand goal, rather than an outcome in which survival is success. I think a good strategy may be to focus on good outcomes.

Ariel: I think it's incredibly important to remember all of the things that we are hopeful for for the future, because these are the precise reasons that we're trying to prevent the existential risks, all of the ways that the future could be wonderful. So let’s talk a little bit about existential hope.

Allison: The term existential hope was coined by Owen Cotton-Barratt and Toby Ord to describe the chance of something extremely good happening, as opposed to an existential risk, which is a chance of something extremely terrible occurring. Kind of like describing a eucatastrophe instead of a catastrophe. I personally really agree with this line, because I think for me really it means that you can ask yourself this question of: do you think you can save the future? I think this question may appear at first pretty grandiose, but I think it's sometimes useful to ask yourself that question, because I think if your answer is yes then you'll likely spend your whole life trying, and you won't rest, and that's a pretty big decision. So I think it's good to consider the alternative, because if the answer is no then you perhaps may be able to enjoy the little bit of time that you have on Earth rather than trying to spend it on making a difference. But I am not sure if you could actually enjoy every blissful minute right now if you knew that there was just a slight chance that you could make a difference. I mean, could you actually really enjoy this? I don't think so, right?

I think perhaps we fail — we do our best, but at the final moment something comes along that makes us go extinct anyway. But if we imagine the opposite scenario, in which we have not tried, and it turns out that we could have done something — an idea we may have had or a skill we could have contributed was missing, and now it's too late — I think that's a much worse outcome.

Ariel: Is it fair for me to guess, then, that you think for most people the answer is that yes, there is something that we can do to achieve a more existential hope type future?

Allison: Yeah, I think so. I think that for most people there is at least something we can be doing, if we are not solving the wrong problems. But I do also think that this question is a serious question. If the answer for yourself is no, then I think you should really try to focus on having a life that is as good as it can be right now. But I do think that if the answer is yes, and if you opt in, then there's no space anymore to focus on how terrible everything is — because we've just confessed to how terrible everything is, and we've decided that we're still going to do it. I think that if you opt in, really, then you can take that bottle of existential angst and worries that is really pestering us, and put it to the side for a moment. Because that's an area you've dealt with, and you've decided we're still going to do it.

Ariel: The sentiment that's been consistent is this idea that the best way to achieve a good future is to actually figure out what we want that future to be like and aim for it.

Max: On one hand, it should be a no-brainer, because that's how we think about life as individuals, right? I often get students walking into my office at MIT for career advice, and I always ask them about their vision for the future, and they always tell me something positive. They don't walk in there and say, "Well, maybe I'll get murdered. Maybe I'll get cancer. Maybe I'll ..." because they know that that's a really ridiculous approach to career planning. Instead, they envision a positive future, the things they aspire to, so that they can constructively think about the challenges, the pitfalls to be avoided, and a good strategy for getting there.

Yet, as a species, we do exactly the opposite. We go to the movies and we watch Terminator, or Blade Runner, or yet another dystopic future vision that just fills us with fear and sometimes paranoia or hypochondria, when what we really need to do as a species is the same thing we need to do as individuals: envision a hopeful, inspiring future that we want to rally around. It's a well-known historical fact, right, that the secret to getting more constructive collaboration is to develop a shared positive vision. Why is Silicon Valley in California and not in Uruguay or Mongolia? Well, it's because in the 60s, JFK articulated this really inspiring vision — going to space — which led to massive investments in STEM research and ultimately gave the US the best universities in the world and these amazing high-tech companies. It came from a positive vision.

Similarly, why is Germany now unified into one country instead of fragmented into many? Or Italy? Because of a positive vision. Why are the US states working together instead of having more civil wars against each other? Because of a positive vision of how much greater we'll be if we work together. And if we can develop a more positive vision for the future of our planet, where we collaborate and everybody wins by getting richer and better off, we're again much more likely to get that than if everybody just keeps spending their energy and time thinking about all the ways they can get screwed by their neighbors and all the ways in which things can go wrong — causing a self-fulfilling prophecy, basically, where we get a future with war and destruction instead of peace and prosperity.

Anders: One of the things I'm envisioning is that you can make a world where everybody's connected, but connected on their own terms. Right now, we don't have a choice. My smartphone gives me a lot of things, but it also reports my location, and a lot of little apps are sending my personal information to companies and institutions I have no clue about and don't trust. So I think one important area might actually be privacy-enhancing technologies. Many of the little near-field microchips we carry around are also indiscriminately reporting to nearby antennas what we're doing. But you could imagine having a little personal firewall that actually blocks signals you don't approve of. You could have firewalls and ways of controlling the information leaving your smartphone or your personal space. And I think we actually need to develop that, both for security purposes and also to feel that we actually are in charge of our private lives.

Some of that privacy is a social convention. We agree on what is private and what is not: this is why we have certain rules about what you're allowed to do with a cell phone in a restaurant. You're not going to have a loud phone conversation at the table — that's rude. And others are not supposed to listen to the conversations you have with people in the restaurant, even though technically, of course, it's trivial. I think we are going to develop new, interesting rules and new technologies to help implement these social rules.

Another area I'm really excited about is the ability to capture energy — for example, using solar collectors. Solar collectors are getting exponentially better and are becoming competitive with traditional energy sources in a lot of domains. But the most beautiful thing is that they can be made small and used in a distributed manner. You don't need the big central solar farm, even though it might be very effective. You can have little solar panels on your house, or even on gadgets, if they're energy efficient enough. That means you both reduce the risk of a collective failure and get a lot of devices that can now function independently of the grid.

Then I think we are probably going to be able to combine all this to fight a lot of emerging biological threats. Right now, we still have the problem that it takes a long time to identify a new pathogen. But I think we're going to see more and more distributed sensors that can help us identify it quickly, global networks that make medical professionals aware that something new has shown up, and hopefully also ways of very quickly brewing up vaccines in an automated manner when something new shows up.

My vision is that within one or two decades, if something nasty shows up, the next morning everybody could essentially have a little home vaccine machine manufacture the antibodies to make you resistant to that pathogen — whether it was a bioweapon or something nature accidentally brewed up.

Ariel: I never even thought about our own personalized vaccine machines. Is that something people are working on?

Anders: Not that much yet.

Ariel: Oh.

Anders: You need to manufacture antibodies cheaply and effectively, and this is going to require some fairly advanced biotechnology or nanotechnology. But it's very foreseeable. Basically, you want a specialized protein printer. This is something we're moving in the direction of. I don't think anybody's doing it right now, but it's very clearly on the path we're already moving along.

So right now, in order to make a vaccine, you need a very time-consuming process. For example, in the case of the flu vaccine, you identify the virus, you multiply it, you inject it into chicken eggs to get the antibodies and the antigens, you develop a vaccine, and if you did it all right, you have a vaccine out in a few months, just in time for the winter flu — and hopefully it was for the version of the flu that was actually making the rounds. If you were unlucky, it was a different one.

But what if you could instead take the new pathogen and sequence it — that's just going to take you a few hours — generate all the proteins, run them through various software and biological screens to remove the ones that don't fit, find the ones that are likely to be good targets for the immune system, automatically generate the antibodies, and automatically test them to find which ones might be bad for patients? Then you might be able to make a vaccine within weeks or days.
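To make the shape of that hypothetical pipeline a little more concrete, here is a minimal, purely illustrative sketch in Python. Every function, score, and threshold in it is made up for illustration — stand-ins for real sequencing, screening, and antibody-design tools — and no actual bioinformatics library or method is implied.

```python
# Toy sketch of the "sequence -> screen -> design" idea described above.
# All functions and thresholds are hypothetical placeholders.

def sequence_pathogen(sample: str) -> str:
    """Stand-in for the few hours of real sequencing."""
    return sample.strip().upper()

def candidate_antigens(genome: str, window: int = 9) -> list[str]:
    """Enumerate short peptide-like windows as toy antigen candidates."""
    return [genome[i:i + window] for i in range(len(genome) - window + 1)]

def immune_target_score(candidate: str) -> float:
    """Placeholder for the software and biological screens."""
    return candidate.count("G") / len(candidate)  # arbitrary toy heuristic

def safe_for_patients(candidate: str) -> bool:
    """Placeholder for automated safety testing."""
    return "TTTT" not in candidate  # arbitrary toy exclusion rule

def design_candidates(sample: str, threshold: float = 0.3) -> list[str]:
    genome = sequence_pathogen(sample)
    promising = [c for c in candidate_antigens(genome)
                 if immune_target_score(c) >= threshold]
    return [c for c in promising if safe_for_patients(c)]

if __name__ == "__main__":
    print(design_candidates("atgggctacggattgggcaaa"))
```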

Ariel: I really like your vision for the near term future. I'm hoping that all of that comes true. Now, to end, as you look further out into the future — which you've clearly done a lot of — what are you most hopeful for?

Anders: I'm currently working on a book about what I call "Grand Futures." Assuming humanity survives and gets its act together — however we're supposed to do that — then what? How big could the future possibly be? It turns out that the laws of physics certainly allow us to do fantastic things. We might be able to spread literally over billions of light years. Settling space is definitely physically possible, but so is surviving, even as a normal biological species on Earth, for literally hundreds of millions of years — and that's not even stretching it. It might be that if we go post-biological, we can survive up until proton decay, somewhere north of 10^30 years in the future. And as for the amount of intelligence that could be generated, human brains are probably just the start.

We could probably develop ourselves or artificial intelligence to think enormously bigger thoughts, enormously more deeply, enormously more profoundly. Again, this is stuff I can analyze. There are questions about what the meaning of those thoughts would be, how deep the emotions of the future could be, et cetera, that I cannot possibly answer. But it looks like the future could be tremendously grand — enormously bigger — just as our own current society would strike our stone age ancestors as astonishingly wealthy, astonishingly knowledgeable and interesting.

I'm also looking at the stability of civilizations. Historians have gone on a lot about the decline and fall of civilizations — does that set an ultimate limit on what we can plan for? Eventually I got fed up reading historians, did some statistics, and got some funny conclusions. But even if our civilization lasts long, it might become something very alien over time, so how do we handle that? How do you even make a backup of your civilization?

And then of course there are questions like: How long can we survive on Earth? When the biosphere starts failing in about a billion years, couldn't we fix that? What are the environmental ethics issues surrounding that? What about settling the solar system? How do you build and maintain your Dyson sphere? Then of course there's stellar settlement, intergalactic settlement, and the ultimate limits of physics. What can we say about them, in what ways could physics be really different from what we expect, and what does that do for our chances?

It all leads back to this question: so, what should we be doing tomorrow? What are the near-term issues? Some of them are interesting: okay, if the future is super grand, we should probably expect that we need to safeguard ourselves against existential risk. But there are also risks beyond going extinct — risks of causing suffering and pain. And maybe there are other categories we don't know about. I'm looking a little bit at all the unknown, super important things that we don't know about yet. How do we search for them? If we discover something that turns out to be super important, how do we coordinate mankind to handle that?

Right now, this sounds totally utopian. Would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would also have sounded totally absurd. The future is big — we have a lot of centuries ahead of us, hopefully.

Max: When I look really far into the future, I also look really far into space, and I see this vast cosmos, which is 13.8 billion years old. And most of it, despite what the UFO enthusiasts say, is actually looking pretty dead — wasted opportunity. And if we can help life flourish not just on Earth but ultimately throughout much of this amazing universe, making it come alive and teem with fascinating and inspiring developments, that makes me feel really, really inspired.

This is something I hope we can contribute to, we denizens of this planet, right now, here, in our lifetime. Because I think this is probably the most important time and place in cosmic history. After 13.8 billion years, on this particular planet, we've developed almost enough technology to either drive ourselves extinct or to create superintelligence, which could spread out into the cosmos and do either horrible things or fantastic things. More than ever, life has become the master of its own destiny.

Allison: For me, the pretty specific vision would really be a voluntary world, in which different entities, whether they're AIs or humans, can cooperate freely with each other to realize their interests. I do think that we don't know where we want to end up. If you look back 100 years, it's not only that you wouldn't have wanted to live there; it's also that many of the things regarded as moral back then are not regarded as moral anymore by most of us, and we can expect the same to hold true 100 years from now. So rather than locking in any specific set of values, I think we ought to leave the space of possible values open.

Maybe right now you could try to do something like coherent extrapolated volition — a term coined in AI safety by Eliezer Yudkowsky to describe a goal function for a superintelligence that would execute your goals if you were more the person you wished you were, if we lived closer together, and if we had more time to think and collaborate — so, a kind of perfected version of human morality. I think that perhaps we could do something like that for humans, because we all come from the same evolutionary background. We all share at least a few evolutionary cornerstones that make us value family, or make us value a few other such things, and perhaps we could do something like coherent extrapolated volition over some basic, very boiled-down values that most humans would agree to. I think that may be possible; I'm not sure.

On the other hand, in a future where we succeed, at least in my version of that, we live not only with humans but with a lot of different mind architectures that don't share our evolutionary background. For those mind architectures it's not enough to try to do something like coherent extrapolated volition, because given that they have very different starting conditions, they will also end up valuing very different value sets. In the absence of us knowing what's in their interests, I think really the only thing we can reasonably do is try to create a framework in which very different mind architectures can cooperate freely with each other, and engage in mutually beneficial relationships.

Ariel: Honestly, I really love that your answer of what you're looking forward to is that it's something for everybody. I like that.

Anthony: When you think about what life used to be like for most humans, we really have come a long way. I mean, slavery was just fully accepted for a long time. Complete subjugation of women and sexism were just totally accepted for a really long time. Poverty was the norm. Zero political power was the norm. We are in a place where, although imperfect, many of these things have dramatically changed. Even if they're not fully implemented, our ideals and our beliefs about human rights and human dignity and equality have completely changed, and we've implemented a lot of that in our society.

So what I'm hopeful about is that we can continue that process, and that if we looked at the way culture and society work 100 years from now, we would say, "Oh my God, they really have their shit together. They have figured out how to deal with differences between people, how to strike the right balance between collective desires and individual autonomy, between freedom and constraint, and how people can feel liberated to follow their own path while not trampling on the rights of others." These are not in principle impossible things to do, and we largely fail to do them right now, but I would like to see our technological development be leveraged into a cultural and social development that makes all those things happen. I think that really is what it's about.

I'm much less excited about fancier gizmos, more financial wealth for everybody, more power to have more stuff and accomplish more, and higher and higher GDP. Those are useful things, but they're means toward an end, and that end is the happiness and fulfillment and enlightenment of the conscious living beings that make up our world. So, when I think of a positive future, it's very much one filled with a culture that will honestly look back on ours and say, "Boy, they really were screwed up, and I'm glad we've gotten better — and we still have a ways to go." And I hope that our technology will be something that in various ways makes that happen, as technology has made possible the cultural improvements we have now.

Ariel: I think, as a woman, I do often look back at the way technology enabled feminism to happen. We needed technology to take care of a lot of household chores — to a certain extent, I think that helped.

Anthony: There are pieces of cultural progress that don't require technology, as we were talking about earlier, but that are just made so much easier by it. Labor-saving devices helped with feminism; industrialization, I think, helped with serfdom and slavery — we no longer had to have a huge number of people working in abject poverty and under total control in order for some to have a decent lifestyle; we could spread that around. I think something similar is probably true of animal suffering and meat. It could happen without that — I mean, I fully believe that 100 or 200 years from now, people will look back at eating meat as just a crazy thing people used to do. I think that's simply what's going to happen.

But it'll be much, much easier if we have technologies that make that economically viable and easy, rather than it being like pulling teeth — a huge cultural fight and everything — which I think will be hard and long. We should be thinking about, if we had some technological magic wand, what social problems we would want to solve with it, and then look for that wand once we identify those problems. If we could make some social problem much better if we only had such-and-such technology, that's a great thing to know, because technologies are something we're pretty good at inventing. If they don't violate the laws of physics, and there's some motivation, we can often create them. So let's think about what they are: what would it take to solve this sort of political and informational mess where nobody knows what's true and everybody is polarized?

That's a social problem. It has a social solution. But there might be technologies that would be enormously helpful in making those social solutions easier. So what are those technologies? Let's think about them. I don't think there's a magic bullet for a lot of these problems, but having that extra boost that makes it easier to solve the social problem is something we should definitely be looking for.

And there are lots of technologies that really do help — which is worth keeping in mind, I guess, as we spend a lot of our time worrying about their ill effects and dangers and so on. There is a reason we keep pouring all this time and money and energy and creativity into developing new technologies.

Ariel: I’d like to finish with one last question for everyone, and that is: what does existential hope mean for you?

Max: For me, existential hope is hoping for and envisioning a really inspiring future, and then doing everything we can to make it so.

Anthony: It means that we really give ourselves the space and opportunity to continue to progress our human endeavor — our culture, our society — to build a society that really is backstopping everyone's freedom and actualization, compassion, enlightenment, in a kind of steady, ever-inventive process. I think we don't often give ourselves as much credit as we should for how much cultural progress we've really made in tandem with our technological progress.

Anders: My hope for the future is that we get this enormous, open-ended future. It's going to contain strange and frightening things, but I also believe that most of it is going to be fantastic. It's going to roar onward far, far, far into the long-term future of the universe, probably changing a lot of aspects of the universe.

When I use the term "existential hope," I contrast it with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, to make it far smaller than it could be. Existential hope, to me, means that maybe the future is grander than we expect. Maybe we have chances we've never seen. And I think we are going to be surprised by many things in the future, and some of them are going to be wonderful surprises. That is the real existential hope.

Gaia: When I think about existential hope, I think it's sort of an unusual phrase. But to me it's really about the idea of finding meaning, and the potential that each of us has to experience meaning in our lives. And the existential part of that, I should say, is the idea that that fundamental capability is something that will continue over the very long term and will not go away. You know, I think it's the opposite of nihilism — the opposite of the idea that everything is just meaningless and our lives don't matter and nothing we do matters.

If I'm questioning that, I like to go and read something like Viktor Frankl's book Man's Search for Meaning, which really reconnects me to these incredible, deep truths about the human spirit. That's a book that tells the story of his time in a concentration camp at Auschwitz. And even in those circumstances, he found within himself — and saw within the people around him — the ability to be kind, to persevere, and to really give of themselves. And there's just something impossible, I think, to capture in language. Language is a very poor tool, in this case, to try to encapsulate the essence of what that is. I think it's something that exists on an experiential level.

Allison: For me, existential hope is really choosing to try to make a difference, knowing that success is not guaranteed, but doing it anyway because we simply can't do it any other way — because not trying is really not an option. It's the first time in history that we've created the technologies for both our destruction and our ascent. They're both within our hands, and we have to decide how to use them. So I think existential hope is transcending existential angst and transcending our current limitations, rather than trying to create meaning within them, and I think it's the appropriate mindset for the time that we're in.

Ariel: And I still love this idea that existential hope means that we strive toward everyone’s personal ideal, whatever that may be. On that note, I cannot thank my guests enough for joining the show, and I also hope that this episode has left everyone listening feeling a bit more optimistic about our future. I wish you all a happy holiday and a happy new year!
