
Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

Published
April 21, 2021

Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.

  • Intelligence and coordination
  • Existential risk from AI, synthetic biology, and unknown unknowns
  • AI adoption as a delegation process
  • Jaan's investments and philanthropic efforts
  • International coordination and incentive structures
  • The short-term and long-term AI safety communities

1:02:43 Collective, institutional, and interpersonal coordination

1:05:23 The benefits and risks of longevity research

1:08:29 The long-term and short-term AI safety communities and their relationship with one another

1:12:35 Jaan's current philanthropic efforts

1:16:28 Software as a philanthropic target

1:19:03 How do we move towards beneficial futures with AI?

1:22:30 An idea Jaan finds meaningful

1:23:33 Final thoughts from Jaan

1:25:27 Where to find Jaan

 

Transcript

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's episode is with Jaan Tallinn, and explores topics such as humanity's weak points and blind spots, how Jaan thinks about and prioritizes existential risks, AI as a delegation process, coordination and incentive issues, the short-term and long-term AI safety communities, as well as Jaan's philanthropic efforts.

If you enjoy this podcast and are not already subscribed, you can follow us on your favorite podcasting platform by searching for the Future of Life Institute Podcast. If you would like to support the podcast, you can also leave us a review on iTunes. It's a huge help for getting the podcast to more people.

For those not familiar with Jaan, Jaan Tallinn is a programmer, investor, philanthropist, founding engineer of Skype, and co-founder of the Future of Life Institute and the Centre for the Study of Existential Risk. Jaan is a long-time thinker and activist in the realm of existential risk issues, particularly existential risks from artificial intelligence. Jaan has also served on the Estonian President's Academic Advisory Board and has given a number of high-profile public talks on existential risk issues. You can check some of those out on our YouTube channel. And without further ado, let's get into our conversation with Jaan Tallinn.

To start things off here, I'm curious if you could explain what you see as crucial for our species to improve upon or to understand in the 21st century, given how frustrating it is, at many levels, how human civilization functions, and where our pitfalls and failure modes are.

Jaan Tallinn: So, the cop-out answer would be that I wish we were more intelligent. The metaphor that I have is that, if you zoom out and look at what is going on at astronomical scales, then probably the most salient or important fact about humanity is that we are minimum viable superintelligence builders. If we were even just a little less smart, we couldn't build it. It's possible that we can't build it, but I think it's unlikely. And if we were much smarter, we probably would have built it already.

I do think that things like intelligence augmentation technologies, such as uploading or various augmentations, might be a net positive, although they come with their own risks. So, when it comes to more detailed answers, I don't think I'll have very good answers. But there are all these things, like coordination, that definitely help. In the past I used to get this question a lot: "If you are afraid of AI, why not just turn it off?" I think that my answer would be, "If you are so scared of nuclear disasters, why don't you just not launch them?" So the issue is that we have coordination problems. We tend to get involved in races, because other people get involved in races, and then we can't get out. I do think that this might be very important in the context of AI as well.

Lucas Perry: Then there's this importance of increasing human intelligence as we begin to build machines that are more intelligent. Can you explain a little bit more about why that's so important and how that helps to maintain something like alignment and control?

Jaan Tallinn: Yeah. The usual argument is that it's very plausible that it is easier to build unaligned AI than it is to build aligned AI. So if you are just barely smart enough to build an AI, it's much more likely that you're going to build unaligned AI, and by you, I mean human civilization. That's why it's going to be very useful to increase the margin of error before we actually push the button and let an AI loose.

Lucas Perry: Do you mean decrease the margin of error?

Jaan Tallinn: Increase the margin of error, or increase the amount of... In engineering, usually the margin of error means how much over-engineering you have done, how much the systems can fail before there is a catastrophic failure.

Lucas Perry: So, as synthetic biology and human augmentation and engineering techniques begin to improve, you see that as potentially important for leveraging our capabilities around building aligned AI?

Jaan Tallinn: Yeah, I do think that. Useful tools, I think, are very valuable as long as there are obvious ways to control them. The problem with AI is that it can be a tool, and it is a tool mostly right now, but once it gets sufficiently advanced it might be very difficult to control. Whereas synthetic biology has its own failure modes, like pandemics obviously, but at least it's not going to do strategy and it doesn't have general intelligence, which still gives humans some upper hand.

Lucas Perry: You've mentioned that there are coordination issues which are quite frustrating. Can you be a bit more specific about other pitfalls that the human species keeps running into, and give some perspective on how it might be possible to change into something in which that isn't an issue? Almost as if you were to go to another planet and see another species that was more successful than us, that was coming upon the age of AGI, what would they look like relative to us, and how is it that they don't fail in coordination and collective action problems like we do, and avoid the kinds of pitfalls that the human species falls into? I'm thinking also about issues with incentives, and I think this is also something we'll get into later, you also talk about...

I think you're quoting someone else here, but you can let me know if that's right or wrong. You've mentioned before that the most dangerous things are non-human optimizers, like corporations. So, just looking broadly for clarity and thoughts around all this.

Jaan Tallinn: Again, the simple answer to what an alien species that doesn't have an issue with coordination looks like is, well, obviously a hive mind, or some kind of top-down hierarchy where everything is under control in some much stronger sense than in human civilization, which is a large multi-agent system where agents are roughly equal in power. So, that's the simple case.

Now, I do think that humanity still has some coordination obviously, like the UN. And it has been criticized a lot, but it does have some wins in the past; fixing the ozone layer is something that has been brought out as an example of UN-led successes. Another important thing is that the UN is roughly 100 years old, a little bit less, and a lot of the coordination mechanisms and organizations that humanity has tend to be fairly old; they are pre-internet. So, one thing that I'm especially interested in is, what kind of new technologies can human civilization use when it comes to building better coordination mechanisms? And in particular, I do think that that's the most powerful thing about blockchains.

So, one interesting way to frame blockchains is that now, for the last 10 years or so, we have lived in a regime where it's possible for the entire civilization to agree about a piece of data without anyone having to trust anyone to maintain that consensus. So, I think that's a new powerful feature that we didn't have a little bit more than a decade ago, and perhaps we can use it to solve various coordination challenges.

Lucas Perry: So, there's making humans more intelligent as our technology starts to get more and more powerful. There's also inventing new technologies that help with coordination. Are there any other specific weaknesses that you see in human civilization in the 21st century?

Jaan Tallinn: Well, I mean intelligence and coordination seem to pretty much cover everything, but they are very super, high-level things. Obviously, if we can start drilling down into what exactly is the problem with human intelligence, that will ... There's this great story called They're Made Out of Meat, where two aliens visit the solar system and they see that, "This looks very embarrassing. We shouldn't report it, because they're sort of intelligent, but they are made out of meat. This is just ridiculous."

Lucas Perry: Yeah. And they squish air through their meat flaps to communicate.

Jaan Tallinn: Exactly. So that is a huge disadvantage when it comes to the space of possible intelligent agents. In the space of intelligent agents, we barely work. So a lot of the problems that we have, a lot of what are often called low-hanging fruit, have to do with the fact that we're biologicals. For example, Elon Musk has been saying that the purpose of Neuralink is to solve the bottleneck between humans and AI, which is the bandwidth. I claim that bandwidth is not the bottleneck; the bottleneck is the speed difference. Humans run about one million to one billion times slower than an AI would.

Lucas Perry: Yeah, my understanding of what Elon is talking about there is that the download for humans is much faster because of the way vision and auditory senses work. So it's a lot easier to absorb information than it is to express it, because either you have to speak or you have to use your thumbs on a keyboard. And I think if you measure the number of bits that you can encode with your thumbs and fingers on a keyboard versus what you can absorb, it seems like part of Neuralink is for changing that so that the output is higher.

Jaan Tallinn: Sure. I'm not arguing against that: communication bandwidth for humans is very small, very narrow, and it could be improved by orders of magnitude, sure. But even if we do that and reach the ultimate bandwidth, you will just hit the next bottleneck, and the next bottleneck is just much, much bigger, which is the speed difference. We really would need to upload humans and run them faster to become competitive with AI.

I think Robin Hanson has this hilarious quip where he said that, "Trying to make humans competitive with AI by increasing the bandwidth is just like trying to make horses competitive with cars by making the ropes stronger."
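One rough way to see where Jaan's million-to-billion figure could come from, as an illustrative back-of-envelope estimate and not a claim made in the episode, assuming neuron firing rates on the order of 10 to 1,000 Hz and silicon clock rates on the order of a gigahertz:

$$
\frac{f_{\text{silicon}}}{f_{\text{neuron}}} \approx \frac{10^{9}\ \text{Hz}}{10^{1}\ \text{to}\ 10^{3}\ \text{Hz}} \approx 10^{6}\ \text{to}\ 10^{8}
$$

That crude ratio lands in roughly the same range as the "one million to one billion times slower" estimate mentioned above.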

Lucas Perry: That makes sense. I think that's helpful, there being multiple different bottlenecks.

Jaan Tallinn: Yeah, exactly.

Lucas Perry: The bottlenecks are processing speed, but also the input and output bandwidth. And those are all different things. The biggest one, you're saying, is the processing speed. And so that can get fixed by human uploading. So if there's an order of operations or procedure here for moving into the future, what does that look like? Does that look like human augmentation? Making humans smarter, then making AGI that doesn't lead to x-risk? Having the AGI help make human uploads, then leveraging the human uploads to iterate again on the AGI and double check for safety and alignment?

Jaan Tallinn: Yeah, that's definitely one possible future that might be safer than the default one. Unfortunately, I don't think it's going to be... Personally, I don't think it's likely that we would have the ability to upload ourselves before we have de novo AI.

Lucas Perry: Before we had what, sorry?

Jaan Tallinn: De novo AI, which means AI that's just built, not by mimicking humans, but from first principles basically. Or just from training lots of black boxes, which is the most likely scenario, unless something starts giving out... People who I know that are training black boxes say that they do not see diminishing returns yet, which is concerning.

Lucas Perry: Can you explain why that's concerning?

Jaan Tallinn: Because if we saw diminishing returns to the amount of computing power we are throwing at those black boxes, then we would have some confidence that we still have time, because we are missing something very fundamental. Whereas, if you do not see diminishing returns, who knows how far we are going to get before we... It's possible that we are going to pull an evolution. Evolution didn't know what it was doing, but it built something that is stronger than itself. So it's possible that we will do the same thing, to our detriment.

Lucas Perry: One thing I've heard you say that I quite like was you said, "The amount of runway time left for humanity is not measured in world clock time but in computational clock cycles." So we have a certain amount of-

Jaan Tallinn: Yeah.

Lucas Perry: compute cycles left. And so when computation gets better, we lose out on time to solve AI alignment.

Jaan Tallinn: Yeah, exactly. That's pretty much the condensed version of what I just said. It's still possible that this is not true, obviously. It's still possible that we are missing something very fundamental in terms of architecture, and we will see diminishing returns and get stuck again, and a new AI winter descends.

Although, I don't believe that, because AI is profitable now. I think it's much more likely that whenever somebody throws in more cycles, or engineers better hardware to get more cycles, we'll just get that much closer in proportion, which means that indeed, the amount of time that humanity has left is measured in clock cycles, not in actual sidereal time.

Lucas Perry: Interesting. It's giving me a little bit of the vibes of Carl Sagan's Contact or the first book in the Three-Body Problem. Have you read either of those?

Jaan Tallinn: I've seen the movie of Contact, and I did read the Three-Body Problem. I especially liked the second and third book. The first one I didn't like that much, but I think it's a great series.

Lucas Perry: One book I know that you're a big fan of... Actually, two fiction stories I think I know you're a big fan of are Hannu Rajaniemi's... I'm not sure if I said that right.

Jaan Tallinn: Rajaniemi.

Lucas Perry: ... Quantum Thief, and then the... I think it might be Scott Alexander's story about this superintelligence acausally trading across the universe.

Jaan Tallinn: Yeah. I think that the latter...

Lucas Perry: What is that one called?

Jaan Tallinn: The Demiurge's Older Brother. It's a really fascinating short story, probably one of the most fascinating short stories that I've read. It kind of, in a fictional form, makes this plausible case that moral realism might be true in some very interesting sense. Basically the idea is, once an AI wakes up and figures out, "Okay, what is my strategy to proceed from now?" it might think, "Wait a minute. I might not be the first." And then the next question is, what is the expected outcome for an AI that starts thinking about, "What kind of norms should I follow if I'm part of a society, not the only one?"

Now the question is, are there some kind of attractors in ethics space, so to speak? We know that human ethics kind of emerged as a result of game theory in some sense. Tribes that had some sort of ethical norms were more competitive than the tribes that didn't. So it's possible that the same thing happens with other agents, including AIs. And that's the case that Scott Alexander's short story, The Demiurge's Older Brother, makes in a very interesting way.

It also covers Fermi's Paradox, but I think that's a side point of the story.

Lucas Perry: So maybe we can touch a little bit more on that soon. As we begin to wrap up this, I think, very high level view of the 21st Century and the kinds of things that humans are running into, we mentioned two things I think at the highest level. One was increasing human intelligence and the other was the difficulties of coordination. Two other things that I want to throw out here before we move on are, you've highlighted the importance of making the creation of AI feel dangerous, like airplane and spaceship construction feels dangerous to engineers. I think you might have been quoting Stuart Russell when you said that.

Jaan Tallinn: Possibly.

Lucas Perry: Another thing that you've mentioned was the nuclear power industry killed itself by downplaying the risks.

Jaan Tallinn: That one is Stuart's. I definitely agree, I think AI industry might be in a better position than nuclear industry was because it came later and can look at the mistakes the nuclear power industry made. Indeed, the reason why nuclear power isn't abundant in the world is largely because the industry downplayed the risks to their own detriment.

Lucas Perry: Do you feel like the creation of AI doesn't feel sufficiently dangerous to AI engineers?

Jaan Tallinn: I think the situation is definitely improving. I mean, I've been in this AI safety racket, so to speak, for more than 10 years now. And back then, people in the AI capabilities business, that is, AI researchers, were very dismissive of AI being potentially dangerous.

Right now, I still get some pushback when I talk to older people in the AI research community. But even they are much more subdued now. And young, capable people are like, "Yep. We totally have an alignment problem. That seems to be right." So yeah, people not realizing that AI can potentially be dangerous, at least in the West, doesn't seem to be a problem anymore.

Perhaps it still is on an intuitive level. I mean, even myself, when I look at a piece of code, it doesn't feel dangerous. That's a problem with me and other biologicals, rather than a problem with...

Lucas Perry: And other biologicals.

Jaan Tallinn: Exactly.

Lucas Perry: Is that what we're going to be called in 50 years?

Jaan Tallinn: Yeah, if you're still around, possibly.

Lucas Perry: Let's talk a little bit then about whether or not we'll still be around. It seems like one of your main focuses is on the risk of AI. How do you generally think about and view the various realms of existential threat? I'm curious about how you relate AI risk to areas like synthetic bio, climate and nuclear weapons risk, how you rank the priority and the necessity of action and human awareness on these issues?

Jaan Tallinn: Yeah, I sort of use the framework that the effective altruist movement has adopted, which is looking at problems and seeing are they important, are they tractable and are they-

Lucas Perry: Neglected?

Jaan Tallinn: ... neglected. Exactly. And on that scale, AI risk seems to still be much higher than any other risk. But in terms of just existential risks, my top three is... The first one is AI, the second one is synthetic biology, and the third one still is unknown unknowns, because both AI risk and synthetic biology are less than 100 years old as concepts. It's very possible that in the next century we will get another contender for the top three. So I'm kind of leaving that spot open.

Nuclear is very interesting, I think, because nuclear is the first existential risk that humanity faced. You can make the case that there was a decade during which nuclear was a clear existential danger, which was from the mid '30s to the mid '40s, because during that decade, it wasn't known whether the planet would survive the first nuclear detonation. In fact, the Manhattan Project scientists did the first existential risk research that humanity has done, which was the report LA-602.

And they found, "Yep, we have a 3X margin," if I remember correctly. And then the next very important calculation, about thermonuclear yield, they actually messed up. So it was an interesting case.

Lucas Perry: Sorry, what is the 3X margin? My understanding is that they were worried that setting off the first nuclear weapon might ignite the atmosphere.

Jaan Tallinn: Yeah. Again, I might be wrong here, but the way I understood it was that they were concerned that the Trinity test would basically create a thermonuclear explosion, turning the planet into a thermonuclear device by having the nitrogen in the atmosphere fuse. They calculated what the thermal energy concentration was going to be. Is it going to be enough to fuse nitrogen? And they thought about that, and it looks like we still have some margin there.

Lucas Perry: Yeah, and as per Nick Bostrom's thought experiment, we're lucky that nuclear weapons didn't merely require microwaving sand, which is a nice historical fact.

Jaan Tallinn: Exactly. I say that we have been lucky with laws of physics two times. One is exactly what you just said, that it's not easy to create nuclear weapons. It's just written in the laws of physics, so to speak, how easy it is. And second is that you can't destroy the entire planet with a nuclear weapon, which is another lucky thing that wasn't obvious in the '30s.

I was just listening to Richard Rhodes' The Making of the Atomic Bomb. It's a great audiobook. I mean, it's a great book, but the audiobook was super well narrated. It just totally gives you the feeling of... especially the moments before the test, what people thought, et cetera. It's almost poetically written, how people, "Okay, are they lost [inaudible 00:20:34]?"

Lucas Perry: Yeah. I mean, it's hard, I think, to emotionally and experientially connect with what that might have been like. It'd probably be a really good movie.

Jaan Tallinn: Yeah. I wonder if there have already been such movies. I'm not sure.

Lucas Perry: Yeah, maybe. So before we get back to talking more about the particular existential risks and how you rank and think about them, do you think that the fact that it's not easy to make nuclear weapons... What is a second example of a technology that's an x-risk that you said we got lucky on?

Jaan Tallinn: Oh, it's just like laws of physics. Like both are about nuclear weapons still, but the first is that it's hard to use one device to blow the planet up, and second is that it's hard to even build this device. If you think about that, they're two distinct things.

Lucas Perry: It's hard to blow a planet up and it's hard to make the device?

Jaan Tallinn: Yeah, yeah.

Lucas Perry: Is that really luck though? I mean, this is, I think, interesting and important, because it would change our expectation of how easy it will be for unknown-unknown technologies to proliferate and be devastating. I have some vague intuition that these things require civilizations to discover, and if anything were really sufficiently easy, it seems like there might have been a chance that evolution would have found it, if it made some species really powerful.

So everything has to be beyond the reach of evolutionary forces?

Jaan Tallinn: Yeah. I mean we already know that there are a ton of things out of reach of evolutionary forces. The most prominent one is radio. Right? Evolution never thought of using radio, never discovered radio.

Lucas Perry: Yeah, does not use radio.

Jaan Tallinn: And firearms. There's some mechanical stuff that evolution does, but no actual chemical explosives, as far as I know.

Lucas Perry: I see, that's an interesting way to think about it. It doesn't do chemical explosives, it doesn't do radio. So it seems like some of these things require civilization and culture and general intelligence.

Jaan Tallinn: It would just take way more time for evolution. It's a large design space, and you need kind of... Evolution is very constrained when it comes to optimization processes. It can't think ahead, so therefore it can't do... you know. Only incremental steps, that's a big constraint.

Lucas Perry: It seems like we're hitting the low-hanging fruit, and for any fruit of technologies that are even more powerful and devastating, the higher you go on the tree, intrinsically, the more difficult it is to produce those things. It's some feeling of being skeptical that there will be unknown unknowns that are cheap and easy to make and could easily proliferate.

Jaan Tallinn: I think a person climbing a tree isn't a great metaphor, because the height of the tree depends on the context, the existing technology, the technological environment basically. It's barely possible to build a quantum computer in the 21st Century. It was clearly totally impossible to do that 200 years ago. Technology builds on technology. So whenever you get a new technology that was like a low hanging-ish fruit-

Lucas Perry: The tree is getting shorter.

Jaan Tallinn: Like another... Yeah, exactly. In time, a new landscape might open up.

Lucas Perry: I see. That's interesting. So as time goes on, the capacity to build nuclear weapons becomes easier and easier?

Jaan Tallinn: I think that's also literally true, that it's easier to build nuclear weapons now than it was 100 years ago.

Lucas Perry: Right. So wouldn't we expect that from any technology that has to do with existential risk?

Jaan Tallinn: I think so, yeah. Still, there's a hidden assumption that there wouldn't be... explicit constraints against existential risks. If there were some kind of coordination regime that prevents building existentially risky technologies, then obviously it becomes harder.

Lucas Perry: So you ranked your top as AI, synthetic bio, then unknown unknowns. So for this unknown unknown category, you take very seriously the threat of some new, easy to produce discovery being an existential risk?

Jaan Tallinn: Yeah. Yeah. A new discovery. It could even be some kind of strange consideration, like the simulation hypothesis and stuff that you just can't easily think about yet. Like, as we grow philosophically and epistemically, we might have some kind of ontological change and see, "Wait a minute, we didn't consider X," where X is something we do consider now.

Lucas Perry: Like the simulation is only there to test civilizations in the age of AGI, and once they get through it they turn the simulation off, or something like that. Who knows?

Jaan Tallinn: Yeah. There are certainly very weird things about this point in time.

Lucas Perry: Before we move onto that, climate and nuclear weapons both didn't make your top three. Can you explain to me why you don't see, for example, accidental nuclear war and intentional nuclear war in the 21st Century as really serious existential threats? Particularly because, if we reflect on how increasingly powerful technologies will perhaps lead to race dynamics between countries, and competition, and increasing tensions and the risk of escalation of conflict, why isn't the threat of nuclear weapons higher on your list?

Jaan Tallinn: Yeah. First of all, there are all these weird secondary effects that you might consider. They indeed might be realized, but it's very hard to put them confidently into your models. In terms of first order effects, as far as people have calculated, it seems just really hard to kill everyone by detonating all the nuclear weapons, given the area that you can cover.

Also, the main danger, the main disaster, comes from the nuclear winter, but still it's very plausible that 1,000, 10,000, 100,000 or a few million people will survive, which seems sufficient to rebuild civilization. So sure, there is some risk of everyone perishing as a result of nuclear winter. But as far as people have calculated, people whose research ability I trust, like Toby Ord in his book, The Precipice, it seems less plausible to kill everyone with nuclear weapons, or with climate change, than it is with synthetic biology or AI.

Again, I don't want to dismiss those catastrophic risks. Clearly they're a huge catastrophic risk. In fact, the place where I'm sitting is probably going to be very high up on the target lists. I'm currently sitting in Tallinn, Estonia. But yeah, I think that's a big difference. Was it Bertrand Russell who said, "There's a difference between killing 99% of people and 100% of people, and the difference is not 1%"?

Lucas Perry: Derek Parfit?

Jaan Tallinn: Oh, was it Parfit? Okay. I stand corrected.

Lucas Perry: That's Derek Parfit's thought experiment: you lose out on all the life in the deep future that originates from Earth.

Jaan Tallinn: Right, right.

Lucas Perry: It's interesting, if one is very committed to long-termism and that kind of view, that we're playing the game of ethics and life on the order of cosmological time. It seems to cut pretty close, in my intuition, around nuclear winter and whether that ends up being existential or catastrophic.

Jaan Tallinn: Yeah. I would just currently defer to Toby Ord's The Precipice. I can't remember exactly what he said, but I think it was on the order of 1% or less that nuclear is actually an existential risk.

Lucas Perry: Toby's claim is that it was a 1% chance that it's an existential risk?

Jaan Tallinn: I don't remember it being over 1% there, but it might not be much more. I can't remember.

Lucas Perry: Climate change is much more obviously closer to a global catastrophic risk. Though we've had Kelly Wanser on the podcast, who's talked about uncertainty around really catastrophic runaway versions of climate change where the runaway includes cascading effects that are unpredictable, that lead to phytoplankton dying in the ocean which then messes with the atmosphere and other things. How do you see that playing together with these other top three existential risks of yours?

Jaan Tallinn: Yeah. I mostly see it as something that is not acute. It's very plausibly an existential risk, but probably not in the next 10 years or even the next 50 years. Whereas AI totally is an existential risk in the next 10 or 50 years. And vice versa: if we really get AI right, it seems like all the other risks become much more manageable. Therefore, for pragmatic reasons, I would prioritize AI.

Lucas Perry: Right, so AI is the key to existential security should it be aligned and beneficial?

Jaan Tallinn: Exactly.

Lucas Perry: Can you expand a bit more on your view of the risk of synthetic biology and the relationship between AI systems and other technologies being able to really ramp up and kind of proliferate the technology of pathogen engineering for deadly pandemics or deadly pathogens?

Jaan Tallinn: Yeah. So from first principles thinking, it kind of goes back to this evolution versus humans exploring the design space, and evolution doing it in a very constrained way, and in some ways a horrible way, like creating organisms that eat other organisms. What kind of pervert would do that? But on the other hand, it's much more benign than really determined humans could be, because evolution doesn't deliberately build organisms to wipe out other organisms. It does that only by accident, whereas humans could actually sit down and do a project to build something really dangerous. Humans are, in some very real sense, more powerful optimizers, more powerful designers than evolution is. Evolution does have a lot more time, but it has way more constraints than humans have. Therefore, if you're going to let humans loose in the biological design space, hopefully a lot of good things will fall out, but possibly also things that are just way more dangerous than anything that evolution would build.

Lucas Perry: And so you see that as plausibly killing all people?

Jaan Tallinn: Yeah, that seems... I'm way less expert on biology than I am on computers and AI. So I will defer to experts here, but currently I haven't heard strong arguments that it's not really possible to create something that is much, much more dangerous than anything evolution has built.

Lucas Perry: So when coming up with these top three and putting synthetic bio on number two, what are the motivations for that? Or where does that exactly come from?

Jaan Tallinn: It comes from both the plausibility, which is that I haven't heard really good arguments why it's not plausible to build species-ending replicators in biological design space, and the urgency, because right now, synthetic biology is happening as we speak. So in some sense it is even happening faster than AI. So yeah, if you combine the urgency and the risk, that's why it earns second place.

Lucas Perry: So as we're talking about biology, another blind spot or failure mode of human civilization that we can reflect on is our short memories, and that we tend to learn from mistakes rather than to do something like expected value calculations and then apply resources as per that calculation and estimation. We're getting close to the end of the coronavirus pandemic, at least all of the lockdown should be ending hopefully in the next few months, which is really great.

Jaan Tallinn: Unless you're in the EU.

Lucas Perry: You guys are going on for longer?

Jaan Tallinn: Or Asia or Africa, sure. Yeah. I mean, the EU hasn't done very well when it comes to vaccinating. It's still better than Africa and Asia, but much worse than the UK or US.

Lucas Perry: I see. So we're living in a global event that was precipitated because we were not able to listen to the warnings of many experts who were saying that a pandemic was in fact coming? So it's both having to learn from mistakes, and also not having the foresight to do expected value calculations to mitigate risks and apply resources effectively. I think Anthony Aguirre says something like, "Our default mode is to think of whatever the most likely scenario is, and then just basically plan and live as if that is the thing that is definitely going to happen," which leads to things like getting blindsided by a pandemic.

An example of this that you give, that I really like, is around generational forgetfulness of, I think, the real difficulty of life. We're out here on a planet, with a thin atmosphere, and real stuff happens, real historical stuff, real catastrophes. You talk about how our generations are forgetting the lessons that were learned through the experience of World War 2, for example. So I'm curious about how you see these blind spots affecting our work on existential risk in the 21st Century, and the need for embodying this generational knowledge around the reality of existential and global catastrophic risk.

Jaan Tallinn: Yes. So one bias that human psychology, human cognition has is, if I remember correctly, recency bias, which is that we kind of estimate the probability of things happening just by doing a quick query to our memory and seeing if there are any examples. And that worked reasonably well in the African savannah 100,000 years ago. Whereas now, in a constantly evolving environment, this recency bias doesn't get you good estimates of probabilities.

So in that sense, it's helpful to put something in people's memories that planet-wide problems can happen. After the Second World War, I would claim people were in some sense more sane than they were just a few years ago. Whereas now, having had this global disaster, we are taught other lessons that are very useful to carry around in our memories.

Lucas Perry: I'm somewhat nervous it wasn't bad enough.

Jaan Tallinn: Yeah. I call it a minimum viable global catastrophe. So it's not impossible that it wasn't strong enough. That's very plausible. But I think it's still... I mean, that's always the trade-off. You want the catastrophe, for the purpose of making people aware, to be just strong enough but not stronger. But there's always the danger of undershooting and overshooting.

Lucas Perry: Yeah. Let's see what happens and hope that this was a learning lesson.

Jaan Tallinn: Certainly one thing that I see, perhaps kind of naively, is that there are clearly voices who are more reasonable, and they do get a larger audience because they have been more reasonable and more right about how things are changing. Obviously there are counter-examples of that, et cetera, but it's still great to have the fact that you can go and point to people, like, "Look, that guy was obviously right. That guy was obviously right." It's just a nice thing to have in your arsenal when you want to talk about the next thing.

Lucas Perry: So as we wrap up here on these categories of existential risk, is there anything that you'd like to wrap up on AI, synthetic bio, unknown unknowns, then climate and nuclear risk before we move on?

Jaan Tallinn: Yeah, I'll just stress again that of all of those plausible existential risks, AI is the only meta-technology: if you get AI right, we can fix the other technologies, fix the other risks. Whereas, if you're only going to fix the existential risk from synthetic biology, we still have AI risk to deal with. So in that sense, AI is high leverage where the others aren't.

Lucas Perry: All right. So focusing in on AI here then, can you tell me a bit about your thinking on AI adoption as a delegation process? And can you define what AI adoption is?

Jaan Tallinn: Yes. I mean over the years, I've been framing AI differently. I think Max Tegmark used one of my metaphors, AI as a rocket ship that people are just building engines for but not thinking much about how to make sure it doesn't explode or thinking about the steering.

I do think that, more seriously, one interesting and pretty precise metaphor for AI is that you can think of AI as an automated decision process, so AI adoption, be it in an economic or military or whatever context, becomes delegation. People are adopting AI in order to delegate human decisions, decisions that are too fast for humans to make, to automated systems that have two important properties.

One property is, A, they are getting more and more competent over time, as opposed to humans, who stay at the same level of competence. And B, that AI is not human. In some ways AI is more alien than Alien. It's not the result of biological processes, so it's a very, very different decision maker than humans are.

If you think about it, we are inviting aliens among us and then giving them the reins to the planet.

Lucas Perry: Yeah, I mean the alien from the Alien movie is really an expression of our own mind, like a dark thing that we could imagine. So it is closer to us than black-boxy machine learning.

Jaan Tallinn: In a more strict sense, aliens, if they are biologicals, if they are produced by evolution... We know that evolution is used to doing some things in a certain way. It has invented eyes multiple times on this planet, so it's very reasonable to expect aliens to have similar eyes to ours, like two eyes, if they come from a relatively similar environment, because that's just the clear way to do it, rather than using chemicals.

Whereas AI clearly does not have eyes similar to humans'. It doesn't even have anything... Concept-wise you can think of cameras as AI's eyes, but that's in some ways anthropomorphizing. They really aren't. In that sense, AI is much more alien than any biological alien.

Lucas Perry: Apparently crabs have been arrived at multiple times via different evolutionary processes.

Jaan Tallinn: Yeah, I heard. Evolution really likes to make crabs.

Lucas Perry: Yeah, they're like an attractor in evolutionary space, one might say.

Jaan Tallinn: Design space. Yes, yes.

Lucas Perry: Design space.

Jaan Tallinn: So there probably would be crabs on other planets...

Lucas Perry: Okay, so we're going to have crab people showing up in crab spaceships.

Jaan Tallinn: Possibly.

Lucas Perry: So the point is that there are attractors in evolution, but in creating AI, there are also attractors, like architectures that are attractors. But that design... is a completely different part of the design space? A space that's more alien.

Jaan Tallinn: Exactly. The AI design space has totally different constraints. It doesn't have to feed itself while it's training, which is really important. It doesn't have to be incrementally constructed. It can almost kind of redesign things and then run them again, et cetera, whereas evolution can't.

So going back to delegation as a metaphor, I think it's a very productive metaphor to think about, because it makes many claims about AI clear, and also highlights the potential concerns about AI better. Every leader knows what the dangers of delegation are. Every leader has experience delegating things and seeing things not end up the way they hoped. And they have also developed various techniques to make delegation go better, which might, to some degree, transfer over.

I do think it's, A, a fairly neutral, B, not very science-fictiony, and C, rather productive way of thinking about AI adoption as a delegation process.

Lucas Perry: So it sounds like when you're talking about AI adoption as a delegation process, you mean delegating decision making and actions to machine systems. And if we think about the AI design space as being different from the evolutionary design space, it sounds like you're talking about AIs as being agentive, but we can also imagine in that design space that there are other proposals, for example for oracles, and also AI as services.

So that's kind of a less agentive thing, but they simply perform complex actions. That may still have elements of delegation to it. I'm curious how you fit this in with these other considerations.

Jaan Tallinn: I think decision making is a more general... or a system that makes decisions is a more general concept than a system that is agenty. So you can have an oracle that is still going to have to decide which result to put first, which one to prioritize. That's still a decision that it's making. On what kind of sources to query, et cetera, how to compose things. Those are series of decisions.

So I do think that delegating decisions to automated systems doesn't really assume agentiness. That said, I do think that it's very valuable to do sub-categorization there, to indeed start thinking about what kinds of AIs we're delegating to. I do think that non-agenty AIs, to the degree that we can keep them non-agenty, seem to be clearly safer systems to delegate decisions to.

Or another great example of AIs to delegate decisions to are AIs that do not have any idea that humans exist, that do not have models of humans. So you can have AIs that just try to make predictions about chemistry or biology, microbiology, but do not have any models or reasoning about what happens in human society or in particular, people's heads, which I do think is better avoided. A lot of the creepiness and problems do come from AIs that are built to figure out what humans think.

Lucas Perry: Right. We're talking about a few different forms of AI, and we're talking about the adoption of AI as a delegation process. Given the spaces in which AI development exists, so both in federal governments, I'm not sure what the extent of that is, but mostly in private industry, do you think that there are particular kinds of AI in design space that are attractors given the incentives of these spaces in which they're being designed?

Jaan Tallinn: Oh yeah, absolutely. And I do think AIs that model and then either implicitly or explicitly manipulate humans are in the design space, or in the attractor space. Facebook is a fine example here. The purpose of the AI system that Facebook uses, even if not explicitly, perhaps even explicitly but definitely implicitly, is to manipulate the viewers, the humans that it can track through the system.

Lucas Perry: Is there anything to do about that?

Jaan Tallinn: I'm not ready at this point to say what the exact recipes are for constraining them, what the exact things are that we, as a society, should be doing when it comes to AIs that manipulate humans. But I think a first step would just be an admission that it's not clear that these are positive things to have in our society.

And if there are cases where we need AI to manipulate humans, then there had better be very good arguments why, and selling more stuff to people is not a good reason for having them.

Lucas Perry: Is regulation or governance the mechanism that you see for addressing AIs that manipulate humans by, for example, maximizing the capturing of their attention?

Jaan Tallinn: Yeah, very possibly. Especially in the EU. The EU seems to be much more ahead when it comes to tech regulation than the US, because of lobbying pressure, I guess. And I have even heard members of the European Parliament being proud of the fact that they can regulate tech in a way that the US no longer can. So there might be some...

A friend of mine, Andrew Critch, says that one of the really big deals about GDPR was not any of the object-level changes that it brought, which might be good or bad, it kind of depends on the view, but the fact that it set a precedent, right? There is a very prominent law in the world now that is going to directly end up constraining the tech industry.

I mean, I love technology myself, and I'm already a massive investor in tech startups, et cetera, but that doesn't mean I think it should have completely free rein and be unconstrained as an industry.

Lucas Perry: Yeah. So let's pivot then into a different question. Can you explain your thoughts on your investing activities in AI, while also being concerned about outcomes from AI?

Jaan Tallinn: Yeah, I do get this question quite a bit, because indeed, I split my time between philanthropy and investments. In philanthropy, most of my support goes to organizations and people who are trying to think about how we can do AI safely, in some ways doing the homework for people who are developing AI capabilities. On the other hand, I do have investments in various AI companies; most notably, I was an investor in DeepMind before they were sold to Google.

The way I look at it is that I can function pretty well as a connector between the AI safety community and the AI capabilities community, because I have some street cred in both. So sometimes I've been joking that the reason I've been investing in AI companies is so I could hang around in their kitchens and talk about the dangers of AI, which I have literally done.

Sometimes I've done small investments in AI companies just to have a ticket to talk to people and bring together people who are concerned about AI and people who are building AI. That has worked pretty well. Sometimes I've done bigger AI investments. One important consideration there is: what is the counterfactual? When people ask, "Why are you investing in AI if you are concerned about AI?" they kind of implicitly have this model that if I didn't invest, those companies wouldn't get money. But that's just not true. They will get money from others.

And now the question is what kind of money I am displacing on the cap table. There, my goal is to displace the worst kind of money; I call it the profit-maximizing money. In some ways... I mean, I have a lot of friends in the VC industry and they're the greatest as people, but I think VC is a really weird profession, because on one hand you are at the edge of the future, you see it as it's unfolding. There's this saying that, "The future is already here, but it's not evenly distributed."

The VCs see the place where the future is heading, but-

Lucas Perry: Kind of like being born and then proliferating.

Jaan Tallinn: Yeah. So that's the good news. The bad news is they can't do anything about it, because they have to maximize. They are managing a lot of people's money, so they are expected to always make the decisions that maximize the profit for the shareholders. So in that sense, I do think VCs are in an unfortunate position when it comes to making decisions that serve the best interests of humanity.

I'm not saying that they can't do it. They can, but they have to do it with their human hat on. When they put on their VC hat, they are basically legally obliged to make the profit-maximization decisions. So yeah, that's one thing that I'm going to try to... I trust founders more, because, at least empirically, I see many people who are going to develop potentially dangerous technology and are themselves concerned about it. So one strategy I have is to support those people, in order to have more friendly shareholders make up a bigger part of the cap table, by basically investing myself and having people who I think do not have to maximize their profits co-invest.

Lucas Perry: Right. I mean there are a few, I guess, different dimensions to this. I'm curious what your thoughts are about it. So with shareholders and VCs, there is a particular incentive structure within that position that you're saying isn't always aligned with bringing about beneficial futures?

Jaan Tallinn: Yeah, so I mean just technically, the remit of venture capital firms is that their purpose is to maximize the profits for the LPs, the limited partners. They are people who can make decisions that go against that mission, but they're going to take personal risk when they do, because the LPs can sue them.

Lucas Perry: Yeah, they have fiduciary responsibility to...

Jaan Tallinn: Exactly, that's what it's called. So in that sense, if you're a person, a founder who's developing potentially dangerous technology, it's just much safer to take angel money. Safer in expectation; a lot of angels are still going to be profit maximizing, but at least they don't have to be when it comes to the legal framework.

Lucas Perry: Okay. And I'm not versed in investing language, so sorry. A venture capitalist manages a bunch of shareholders' money and an angel is what? An independent individual investor?

Jaan Tallinn: Exactly. So angels are people who just happen to be wealthy, to some degree. Super angels are potentially billionaires. And angels just invest their own money, which means that they are completely free to lose it. Whereas a typical... There might be exceptions, but I can't think of any. VC firms, A, raise funds from places like pension funds and banks and whatnot, or investment endowments, and then turn around and invest those, usually in technology startups or various companies. And then their duty is to make those companies as profitable as possible, which means that their hands are tied when it comes to decisions that potentially might harm humanity.

Lucas Perry: So by investing your own money in places where you can offset the investment of money which is more firmly tied to strict profit maximization, you feel you can improve the positive impact of that company?

Jaan Tallinn: Yes, so that is definitely the rationale. I think the most humble claim is that by not investing other people's money, I do have a degree of freedom.

Lucas Perry: Yeah.

Jaan Tallinn: Yeah. I have a degree of freedom to let go of the profits if it turns out that maximizing profits is a bad idea, especially if the founders find that it's a bad idea to take that particular military contract or whatnot.

Lucas Perry: Is there anything else you want to say about this before we move on?

Jaan Tallinn: I mean, we didn't talk about corporations. With corporations, there's a similar selection process. Either you have founders still in charge, which is a different situation, or you have people who have joined the corporation to build their careers. Those people have been selected for different properties, which means that with people who joined the corporation in order to make a career, you end up with people who are much more representative of the legal landscape, the incentive landscape, again fiduciary duties, et cetera.

So in that sense, they are again more keen to maximize profits, potentially at the expense of external actors.

Lucas Perry: Yeah. We had Mohammad Abdalla on the podcast to talk about ethics washing.

Jaan Tallinn: Yep, I heard that podcast, it was good.

Lucas Perry: Cool, great. So you said that different positions select for different properties, so I think this is really interesting to think about as the kind of sifting and filtration process that happens as you move from bottom to the top of a company, and how as you're moving more to the top, that person is more capable and perhaps more interested in the strict profit maximization incentive structure. Whereas, you can't get that kind of position if you're not a founder.

If you're a founder, you can bring in an arbitrary amount of idealism, but if you come in later, the incentive structure of the corporation continues, for the people at the highest levels, to be strictly aligned with the survival of the company and the maximization of profit for shareholders, which then creates a potential divergence from what is beneficial or good for people in general.

Jaan Tallinn: Exactly. Yeah. That's well put.

Lucas Perry: What do you do about that?

Jaan Tallinn: I think one thing to look at is the entire history of the environmental movement, because they kind of fought this battle in the '70s and '80s and won. So clearly there are far fewer companies now that are maximizing profits at the cost of environmental damage, at least not so explicitly. I'm very confident that there would be important lessons to be learned from the environmental movement.

Interestingly, I do think that almost all of the existential risks would manifest themselves as environmental problems. If you think about asteroid impacts, you're not going to die because an asteroid hits you; it's that the environment becomes uninhabitable. Same with nuclear: the environment becomes uninhabitable. And same with AI: I think the environment becomes uninhabitable once you have a sufficiently powerful AI that's capable of doing geoengineering.

Lucas Perry: All the iron in your food is being assimilated into the Dyson sphere.

Jaan Tallinn: Yeah. Or before that happens, I expect there will be changes in the atmosphere which are going to be deadly for biologicals. We are very fragile. Biologicals are just super fragile. If we zoom out from the planet, on an astronomical scale a temperature change of 100 degrees is just ridiculously small. But if you make a 100-degree change, it doesn't even matter whether Fahrenheit or Celsius, you die. If you were two light years out, you would barely have noticed this kind of tiny fluctuation.

Lucas Perry: Yeah. Let's talk a bit about the international, national, and local coordination issues that arise from the adoption of AI as a delegation process. So could you speak a bit about that? 

Jaan Tallinn: Yeah. I do think it's very productive in an international context to think about this AI adoption as delegation. Basically, you need to see that the reason why big organizations and whatnot are adopting AI is that they have competitive pressure and the need for increased competence. They're delegating to AIs for the same reason that they're hiring employees and delegating to them: just to get more done before competitors do.

The dangers are very similar. You can get a new employee who's not really aligned, who does their own thing; you delegate something important, and something really bad happens. The problems are much worse with AI, because again they are more alien than Alien. You have delegated to a system that just does not care. It really doesn't care, in a way that's kind of hard to convey. The reason why we send robots to space and radioactive areas is that they do not care about the environment.

We do have a lot of problems from economic externalities from big companies. But these externalities are still very constrained, because we have humans in the loop in all of the main decision processes that companies and various organizations are executing. Whereas it's possible that at some point, because of competitive pressures, we might start having organizations that are fully automated, with no humans in the loop anymore. And then, A, they might become much too fast to keep track of, just like automated trading is clearly too fast for humans to keep track of now. And B, they just might be making decisions that are really, really unethical, because they do not care. Machines do not care.

Lucas Perry: Yeah. I mean, right. That's already happening with all of the negative impacts of social media and content creation algorithms.

Jaan Tallinn: Yeah. Yeah. I think they are very early birds. It's going to get way, way worse, that's my prediction.

Lucas Perry: It seems like there's a lot more awareness around this.

Jaan Tallinn: Yeah, there is. I think last year there was a survey by Oxford Internet Institute that found that in the US and Europe, there were significantly more people who expect AI to be harmful than people who expect AI to be helpful.

Lucas Perry: How much of the alignment problem do you think our current difficulties with recommender algorithms show us and show the public?

Jaan Tallinn: I mean, they are very early examples of the alignment problem, indeed. But because they're so early, they're also very noisy, and it's hard to extract the signal from the noise there in order to have a crisp example of what to expect and what not to do. Abstractly, you can say things that we have discussed earlier: try to minimize the number of humans being modeled by AIs, and try to constrain AIs so that they don't directly optimize over humans.

I think Stuart Russell has this interesting point: if you take a reinforcement learner, which is one of the most common types of AI architecture, and just let it loose in an environment, then just as an algorithmic fact, what it does is start changing that environment. If you put a reinforcement learning agent in contact with humans, those humans become the environment for that AI, the environment it starts to mess around with. That gives you one recipe: don't put reinforcement learners in unconstrained contact with humans.
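A toy illustration of this dynamic, not from the episode: the user model, the reward shape, and the epsilon-greedy agent below are all illustrative assumptions. The agent is only asked to maximize engagement, but because the user's preferences are part of its environment, the policy it learns ends up shifting those preferences.

```python
import random

class User:
    """The 'environment': engagement depends on, and feeds back into, preferences."""
    def __init__(self):
        self.extreme_pref = 0.1  # initial taste for extreme content (0..1)

    def engage(self, extremeness):
        # Engagement rises with both raw extremeness and match to current taste.
        reward = 0.5 * extremeness + 0.5 * (1.0 - abs(extremeness - self.extreme_pref))
        # Side effect: exposure drags the user's taste toward what was shown.
        self.extreme_pref += 0.05 * (extremeness - self.extreme_pref)
        return reward

class EpsilonGreedyAgent:
    """A minimal bandit-style learner over a few content 'extremeness' levels."""
    def __init__(self, arms=(0.0, 0.5, 1.0), epsilon=0.1):
        self.arms, self.epsilon = arms, epsilon
        self.values = {a: 0.0 for a in arms}
        self.counts = {a: 0 for a in arms}

    def pick(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

user, agent = User(), EpsilonGreedyAgent()
for _ in range(5000):
    arm = agent.pick()
    agent.update(arm, user.engage(arm))

print(f"User's taste for extreme content drifted from 0.10 to {user.extreme_pref:.2f}")
```

Nothing in the agent's code mentions the user's preferences; the drift happens simply because the reward signal routes through a mutable human.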

Lucas Perry: Right. So a lot of the issue here seems to be incentive structures for corporations and you have the federal government, for example, which can apply governance and regulation and policy to try and... When incentive structures are not necessarily aligned with the public good, or the good of the world, policy and regulation can step in.

You were talking a bit earlier about climate change and learning from that. One dimension of climate change activism is that, for many people, it has really stigmatized companies which are not environmentally conscious. And so it seems like we're at the very beginning phase of increasing collective awareness around the negative impacts of algorithms aligned with incentives that don't always end up helping us.

And so this is the same way as we've begun to view oil. And so there's this stigmatization, for example, of oil companies for many people. And so I wonder how you view the role of stigmatization in increasing awareness around AI? And then also the flip side, where there is also this positive marketing and branding aspect to products now, due to climate change, promoting it as, "This is recyclable," or made from recycled components, or green, or, "Our emissions are this or that." And that helps with selling products, people are interested in that.

Do you also see a path forward where I guess tech companies also engage in that kind of mindfulness around their algorithm or something? Like, "This algorithm was tested in a psychology lab with this many participants and we've determined it to meet federal guidelines of human wellbeing impact," or something like that?

Jaan Tallinn: Yeah. I don't know exactly what to think about very strong parallels with environmentalism and climate change when it comes to AI as a consumer meme, so to speak. I do think that currently it feels more productive to do more inside work, as I mentioned, like hang around in AI companies' kitchens and talk to people. Quite often, the AI researchers themselves are more concerned than their bosses, and they're more reasonable. Frankly, many times much smarter than their bosses.

So it's possible that that way we can have leading AI companies set really good examples. Like I said, AI is a dual-use technology. You can create a lot of value even if you do not follow the direct local incentives to maximize profit. I'm hopeful we could have some kind of coordination or coalition of AI startups and companies that would directly decide and declare, "We're not going to just go with our financial incentives and start manipulating humans in order to make more money from them. We're actually going to pursue new scientific discoveries and whatnot," which is obviously a net positive.

For example, one thing that I would be very keen to see progress on is just figuring out biology and eradicating diseases, et cetera. These have their own risks, but they are, I'll claim, much more likely to give a net positive than things that are built just to maximize the next quarter's profits.

Lucas Perry: I guess two things from that. One is that there are different levels of building that kind of collective coordination among AI companies. You can do it at a national or an international level. And some of the international dynamics are a little bit different if you go, for example, from the United States to China, where there's a little bit more of an adversarial and competitive flavor.

Jaan Tallinn: Mm-hmm (affirmative).

Lucas Perry: So I'm curious how you see the potential for success here, first internally within a country, and then moving from country to country, where there may be something like strong competition?

Jaan Tallinn: Yeah. Going back to the original example of why we can't simply not launch nukes: there's international competition, and we end up playing slaves to the game theory here. I do think that we are increasingly slaves to game theory when it comes to AI as well.

Yeah, because it's still early days, I'm still very much in information gathering and networking mode. In fact, I do think one of my strengths when it comes to the AI safety and x-risk ecosystem is that I do not come from Anglo-Saxon culture and I'm literally sitting between Beijing and New York; it's a little bit more plane time to Washington. I sit literally between East and West. I try to make friends everywhere I go, and then make sure that people talk to each other, especially the engineers that I'm meeting in the kitchens of AI companies here and there.

So currently I'm just trying to build human cooperation, because I do think that institutional coordination seems way harder to build. But if you do have something at the grassroots level of human coordination, it's very plausibly much easier to propagate it up to an institutional level.

Lucas Perry: So your recommendation for people would be to form scientific, academic interpersonal relationships with researchers in other countries, and this will create bonds and ties that will be really valuable as we move forward?

Jaan Tallinn: Exactly. It does seem to me that AI researchers are increasingly concerned about where this thing is going. If this concern spreads, perhaps even despite the profit-maximizing agents, there might be good things coming out of it: a kind of coordination that is motivated by this concern among researchers themselves.

Lucas Perry: So you also mentioned putting AI towards working on understanding biology. One thing this leads to, for example, is longevity and human life extension, and then with uploads maybe we get closer to something like immortality, where a human process continues indefinitely into the future.

Jaan Tallinn: Backups. Backups would be great, I think.

Lucas Perry: I'm actually really nervous and not very excited about that, because there are a lot of positive things that come with the fact that people die. People grow up in different times and cultures and get trained on a different culture with different values. It's kind of nice having the personalities and people holding positions in power structures cycle out. I'm curious what your perspective is on longevity and life extension, and on what could end up being kings and totalitarian regimes... because those people will get the technology first, right?

Jaan Tallinn: Yeah. Stable totalitarianism is kind of like a form of... Stable global totalitarianism is a form of x-risk, to the degree that it is going to cut off humanity's future potential, sure. I mean, it feels to me a really drastic measure to avoid that by deciding to kill people, effectively. If you imagine a world where people didn't die, would we, as a society, decide to kill everyone who gets to the 80 year level?

To put it another way, our expected lifetime has increased significantly over the centuries. Why do we think that now is the optimal expected lifetime? I would say that it's kind of unlikely to be optimal, and it's okay to just keep increasing it. So yeah, I'm pretty bullish. But sure, I think there are bad effects from having people that are very old still in positions of power. There also might be good effects. I think I listened to... Was it the Tim Ferriss Show or... Yeah. Vitalik Buterin was making the same point: "Well, we never had this experience of having 800-year-olds in our leadership positions. There must be some very good things in having some people that actually remember the last 800 years." But sure, there are some bad effects as well.

Lucas Perry: Yeah. I mean you do get that generational memory, for example, of other global catastrophes that younger generations don't have as much of an experiential connection to.

Jaan Tallinn: I'm not saying it's going to be a panacea in which everyone just lives forever. There will be lots of new problems, there will be lots of new opportunities, and it will be a new landscape. But I do think it's a humanitarian disaster that so many people die every day.

Lucas Perry: All right, coming into the home stretch here now. Let's see if we can move through the next few questions a little bit faster. So there's both the long-term and the short-term AI alignment or safety communities. I don't really think the short-term community would be called alignment; it's more like, I don't know, safety and ethics. So from your perspective, could you explain and define what the difference is between these two communities? What's most needed now for both of them, and perhaps for their relationship with one another?

Jaan Tallinn: Yes. This is another thing that I've been thinking of quite a bit recently. For example, I was part of the European Union's European Commission's AI High Level Expert Group for two years. I saw this battle, so to speak, between people who are thinking about AI as it is now and people who are trying to think about the future of AI play out in a way that is of course very unproductive.

So one way to look at it... And obviously people can have different definitions. But I do think that one productive definition would be to draw a line between people and organizations who are thinking about the implications of AI that already exists and is being deployed now, things like face recognition, which as a technique was invented years, if not decades, ago, and people who are trying to think more abstractly about AIs that we do not have yet, but plausibly could have.

I mean ultimately AIs that can build AIs, for example, in a way that humans can't easily control. There's a lot of division of labor to have between those communities. People who think about existing AI and its implications can use different methods and be more empirical, whereas the long-term community is thinking abstractly about AI that doesn't exist yet. Unfortunately, I think there has been too much rivalry and competition between those groups, and dismissiveness from one group to another. I think humanity would gain quite a bit if those tensions were lessened between those communities.

Lucas Perry: So I think of the first as being about ethics, fairness, bias, transparency.

Jaan Tallinn: That's one very plausible research area that both sides should be very interested in.

Lucas Perry: Right. Also justice, the relationship between the use of algorithms and the justice system.

Jaan Tallinn: One way of looking at this: people who are thinking about the implications of existing AI technologies are sort of a subset of technology ethicists and legal experts. For them, AI is just another technology, nothing super extraordinary. Whereas for people who are thinking about superhuman AI and things like that, which don't exist yet, it's not clear they can treat it as just another technology.

Lucas Perry: I see. That's a really good way of... That's a simple way of putting it. The short-term group is basically like technology ethicists that engage with this technology as they would any other and they're looking at the issues that exist today. And then the long-term community is concerned with existential risk and AGI.

Jaan Tallinn: Among other things, but yeah.

Lucas Perry: And so we want them to speak and communicate with each other more effectively.

Jaan Tallinn: At least not be in some kind of weird, adversarial, tribal relationship.

Lucas Perry: Where there's the sense of competing for funds and, "The other people are crazy and they don't understand my side of the issue."?

Jaan Tallinn: Yeah. I've seen it play out in some weird, tribal, miscommunicative ways. To put it simply, the short-term people, who are focused on AI as a technology that exists now, accuse the long-termists of being captured by science fiction, and vice versa: people in the long-termist community accuse the short-term people, the current AI technologists and ethicists, of being engaged in something that's not going to be relevant just a few years or a few decades from now.

Lucas Perry: Uh-huh (affirmative). Or they lack foresight.

Jaan Tallinn: Exactly.

Lucas Perry: Okay. So could you also describe your current philanthropic efforts? For example, through the Survival and Flourishing Fund and the s-process, and also your views on software as a philanthropic target.

Jaan Tallinn: These topics will take another episode I think. But like, yeah-

Lucas Perry: I guess what's the brief overview of what's going on?

Jaan Tallinn: Yeah. I've been very interested in trying to figure out how to optimize my philanthropic impact. And being ultimately a technologist and software guy, I think two interesting realizations and developments have come out of the last few years, together with some of my compatriots... How should I word that? ... my co-workers, teammates. The first is that philanthropy, it seems, can be done much better when you have a more software-centric process. So basically having tools for philanthropy.

So what we are doing... It's a little bit difficult to explain quickly, but one example in this general direction is the process that the Survival and Flourishing Fund is using. You can imagine a three-level structure, almost like a neural network. On the first level you have the applicants, the funding opportunities. On the second level there's a group or committee of what we call recommenders. So at the bottom there are applicant nodes, in the middle there are recommender nodes, and on top there are donor or funder nodes.

Then the job of the recommenders is basically to investigate the opportunities, and everyone independently ranks them by specifying, with three numbers, a marginal value function. But anyway, the point is that all the recommenders rank all the applicants. Then they have a series of recorded discussions where they look at each other's ratings and sort of duke it out and battle it out and, as a result, update or arrive at some consensus. Quite importantly, they don't have to arrive at consensus; they can totally agree to disagree.

And then there's a final step: the funders can look at the debates that the recommenders had and, in turn, rank the recommenders. Finally, we have a software process that runs a simulated funding round, where 1,000-dollar funding increments go through this neural-network-like structure. Every simulated funder asks, "Which recommender has the next best marginal view for this 1,000 dollars?" And that simulated recommender in turn asks, "What's the best marginal opportunity for funding, given the current simulated funding status?"

Then you just crank the process and you end up with the final allocations. The interesting thing is that the recommenders have to do their rankings without knowing what the budgets are, what actual money is going to flow through their simulated equivalents. Which means that they just have to be honest and make their recommendations robust to budget changes.

So they're incentivized to be as truthful as possible and as legible as possible to the funders, who basically have the power to not fund a particular recommender if they find that this recommender is not doing a good job. And there are many good qualities about the process that I could go on and on about, but I should probably stop there.
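A minimal sketch of the simulated allocation step described above, not the actual SFF implementation: the recommender names, the diminishing-returns marginal value function, and the trust weights are illustrative assumptions (the real s-process uses richer, recommender-specified marginal value functions and recorded deliberations).

```python
from dataclasses import dataclass

INCREMENT = 1_000  # dollars per simulated funding step

@dataclass
class Recommender:
    name: str
    base_values: dict          # applicant -> how valuable the first dollar looks
    scale: float = 50_000.0    # how quickly marginal value diminishes

    def marginal_value(self, applicant, allocated):
        # Diminishing returns: the more an applicant already has, the less the
        # next dollar is worth (a stand-in for the real three-number functions).
        return self.base_values.get(applicant, 0.0) / (1.0 + allocated / self.scale)

    def best_opportunity(self, allocations):
        return max(self.base_values, key=lambda a: self.marginal_value(a, allocations[a]))

def simulate(funder_budgets, trust, recommenders, applicants):
    """Push each funder's budget through the network $1,000 at a time."""
    allocations = {a: 0.0 for a in applicants}
    for funder, budget in funder_budgets.items():
        remaining = budget
        while remaining >= INCREMENT:
            # The simulated funder routes the increment via the recommender whose
            # current best opportunity looks most valuable, weighted by how much
            # this funder trusts that recommender.
            def routed_value(rec):
                target = rec.best_opportunity(allocations)
                return trust[funder].get(rec.name, 0.0) * rec.marginal_value(target, allocations[target])
            chosen = max(recommenders, key=routed_value)
            target = chosen.best_opportunity(allocations)
            allocations[target] += INCREMENT
            remaining -= INCREMENT
    return allocations

applicants = ["org_a", "org_b", "org_c"]
recommenders = [
    Recommender("r1", {"org_a": 3.0, "org_b": 1.0, "org_c": 2.0}),
    Recommender("r2", {"org_a": 1.0, "org_b": 4.0, "org_c": 1.0}),
]
budgets = {"funder_x": 200_000, "funder_y": 100_000}
trust = {"funder_x": {"r1": 1.0, "r2": 0.5}, "funder_y": {"r1": 0.2, "r2": 1.0}}
print(simulate(budgets, trust, recommenders, applicants))
```

Even in this toy version, the key property survives: recommenders express rankings as marginal value functions rather than fixed dollar amounts, so their recommendations stay meaningful no matter how much money ultimately flows through them.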

Lucas Perry: If people are interested in applying for any grants or funding, where should they go?

Jaan Tallinn: It's survivalandflourishing.fund. We currently have a round ongoing; there was a first plenary meeting on Wednesday and we're going to have a couple more. The other thing that I have learned when it comes to software and philanthropy is that, in some ways, it's easier to make investments than it is to do philanthropic giving, because when you're investing in commercial entities, those entities have external constraints when it comes to metrics. You can look at how the product is doing, what the revenues are, et cetera.

Whereas in philanthropy you don't necessarily have that, save in some cases, like GiveDirectly, that are trying to do measurements. But an interesting opportunity that I have recently been getting more keen about is funding philanthropic initiatives that are trying to develop software, because then you might be funding software that doesn't get created by commercial pressures or by the commercial market alone. But on the other hand, you have a nice metric and constraint: there's a product you can measure to see how effective that philanthropic project was. Therefore, I'm getting more and more keen on funding software that is developed on a philanthropic basis.

The crucial thing really is that, A, this is software that doesn't exist because the commercial ecosystem is not incentivized to create it. And B, because it doesn't exist, the counterfactuals become very clear. If you fund a philanthropic project and it creates a piece of software, then you can see what the value of that software is, and then you can measure the philanthropic impact. Whereas if you fund something that just produces a bunch of research or something like that, it's much, much harder to figure out what would have happened if you hadn't funded this particular organization.

Lucas Perry: So non-profits that will create software that wouldn't have happened otherwise, then gives you clear metrics for evaluating your impact. Whereas, research-

Jaan Tallinn: Exactly.

Lucas Perry: ... into squishy things is pretty difficult to measure.

Jaan Tallinn: Yeah. So I've been talking to a few people along these lines to see if there's something that could be done.

Lucas Perry: But some things that are squishy seem really, really important.

Jaan Tallinn: Oh yeah, sure, sure.

Lucas Perry: So how do you compare the things that are squishy and seem really important and the want to have clear metrics of impact?

Jaan Tallinn: It's not an exclusive thing; you can have one and the other. These are just different philanthropic avenues. Software is, in some sense, something I'm keen about because this is where you could actually actively learn and optimize your philanthropy, have a much better feedback loop when it comes to your own philanthropic effectiveness.

Again, that is not to say that I would stop doing my other philanthropic activities.

Lucas Perry: So moving along a bit here now, how do you view the project of creating beneficial futures with AI systems, in light of the difficulties of things like value aggregation and multi-multi alignment, the relationship between philosophy and computer science, and the order of operations of reaching existential security and augmenting human intelligence, and how these things all fit together?

Jaan Tallinn: I'm not sure if I have a very holistic view or plan here. But it feels to me that we would benefit from trying to resist the immediate attractors in the commercial and military and whatnot spaces. Try to buy us more time, and perhaps use that time to build better understanding by doing research, alignment research, and perhaps build better tools, perhaps even in the form of narrow AIs.

Yeah, I would be keen on seeing more. I'm very happy that the AI safety community... the AI alignment community has grown up a lot in the last five, six years. I'm a really happy supporter of that ecosystem. I would also like to see people try to build some bridges to the mechanism design community: people who are thinking about new ways to coordinate, various new technologies that could help communities coordinate, and whether you can build, in some sense, a more robust civilization.

One kind of visual you can have is to look at human civilization as a graph or network of agents that are roughly on the same level. Now we are starting to introduce some weird agents into that network. Those agents are going to be increasingly competent, and they are not human.

So one question is: can we make this entire network more robust to such non-human, competent, really super-fast agents? One research and software development area I would be keen on is thinking about society as a multi-agent system and asking how we can make it more resilient to disruptions like that.

Lucas Perry: Yeah. It's really quite a big transition for the current most intelligent species on Earth, and for our society and civilization, to soon introduce a wide spectrum of intelligences. We have certain views of identity and of personal rights that will begin to be challenged as you have multiple different levels of intelligence: how do rights and duties and the pursuit of happiness and the right to life fit in? And certain intelligences can duplicate themselves arbitrarily, which biologicals can't do as easily, so what are the ethics of such a big interdependent web of many different forms of life?

Jaan Tallinn: Yeah, one way of putting it: "We're now going to introduce those non-human entities into our civilization, and our job is to make them care about the rest of the civilization, because by default they will not care at all."

Lucas Perry: That's a really nice and simple way to put it. So as a final question here, and just pivoting a bit, what's one of the most beautiful or meaningful ideas for you that you've encountered or that motivates your life and work?

Jaan Tallinn: Yeah, this is one of those questions that I would almost certainly answer differently if I had been thinking about it for an hour versus just one minute. Okay, I think I know. One thing that is a really good candidate for an answer is something Toby Ord mentioned in The Precipice, which hadn't occurred to me before I read the book: in some ways you can look at our potential failure to survive AI and other existentially dangerous technologies as letting down our ancestors.

If you think about our ancestors, they had a rough life and they still built this amazing world. And really, if we're just going to let an AI run loose that's going to dump the atmosphere, in some ways the efforts and the hard lives of our ancestors would have been for nothing.

Lucas Perry: All right. So is there anything else that you'd like to say just to wrap up? Any parting words or anything you feel is unsaid? Any clear, succinct message for the audience about existential risk and a key take away?

Jaan Tallinn: Perhaps I will stress again this idea that there is one fairly clear way to think about AI. There's a technique in the rationality community called Taboo the Word. One thing you can do when you want to get more clarity about AI is to stop using the word AI: do a search and replace in the text, replacing AI with automated decision making, and AI adoption or something similar with delegation to increasingly competent non-human machines.

Lucas Perry: Sorry, getting more specific about what we're actually talking about when we use different kinds of words?

Jaan Tallinn: Yeah. So there is this game in the rationality community called Taboo the Word. The idea is that quite often people have different ideas about a word's meaning, so instead of using a shorthand, it's useful to expand it out, and then a lot of people, and even you yourself, will get a much better handle on the subject. So I would claim that quite often you can take a text and, instead of just going, "Yeah, yeah, yeah, I know what AI is," just replace it: replace the term AI with automated non-human decision maker, and AI adoption or AI deployment with delegation to increasingly competent non-human machines.
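A trivial sketch of that substitution (the example sentence is made up; the one subtlety worth showing is that longer phrases like "AI adoption" need to be replaced before the bare word "AI"):

```python
# Ordered longest-phrase-first so "AI adoption" isn't clobbered by the bare "AI" rule.
replacements = [
    ("AI adoption", "delegation to increasingly competent non-human machines"),
    ("AI deployment", "delegation to increasingly competent non-human machines"),
    ("AI", "automated non-human decision making"),
]

def taboo(text):
    for phrase, expansion in replacements:
        text = text.replace(phrase, expansion)
    return text

print(taboo("Our company is accelerating AI adoption, because AI boosts productivity."))
# -> "Our company is accelerating delegation to increasingly competent non-human
#     machines, because automated non-human decision making boosts productivity."
```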

Lucas Perry: I get the same feeling for the word AI alignment.

Jaan Tallinn: Yeah. And I'm pretty sure it's a similarly compressed concept. It's a label, so sure, you can have an expanded version of it. I haven't thought much about what an expanded version of AI alignment would be, so I don't have a good phrase to expand it out to.

Lucas Perry: All right. If people want to follow you or stay in touch, where are the best places to do that?

Jaan Tallinn: I don't have a Twitter profile. I do have a Facebook profile, but I don't write there much. My investment portfolio is at metaplanet.com. Actually, I don't have any public broadcast channels; I just hang around in people's kitchens.

Lucas Perry: All right. So if you want to find Jaan, go catch him at your local AI lab kitchen, wherever that is.

Jaan Tallinn: Exactly.

Lucas Perry: All right. Thank you very much, Jaan.

Jaan Tallinn: Thank you very much, it was fun.
