
Michael Klare on the Pentagon’s view of Climate Change and the Risks of State Collapse

  • How the US military views and takes action on climate change
  • Examples of existing climate related difficulties and what they tell us about the future
  • Threat multiplication from climate change
  • The risks of climate change catalyzed nuclear war and major conflict
  • The melting of the Arctic and the geopolitical situation which arises from that
  • Messaging on climate change

Watch the video version of this episode here

See here for information on the Podcast Producer position

Check out Michael’s website here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's episode is with Michael Klare and explores how the US Department of Defense views and takes action on climate change. This conversation is primarily centered around Michael's book, All Hell Breaking Loose. In both this podcast and his book, Michael does an excellent job of making clear how climate change will affect global stability and functioning in our lifetimes, drawing on many examples of recent climate-induced instabilities. I was also surprised to learn that, despite the Trump administration's actions to remove mention of and action on climate change from the federal government, the DoD continued to pursue climate change mitigation efforts. So, if you've ever had any doubts, or if the impact and significance of climate change has ever felt fuzzy or vague to you, this podcast might remedy that.

 I'd also like to make a final call for applications for the Podcast Producer role. If you missed it, we're currently hiring for a Podcast Producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. As the Producer, you would be working directly with me and the FLI outreach team to help grow and evolve this podcast. If you're interested in applying, head over to the Careers tab on the Futureoflife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org.

Michael Klare is a Five Colleges professor of Peace and World Security Studies. He serves on the board of directors of the Arms Control Association and is a regular contributor to many publications including The Nation, TomDispatch, and Mother Jones, and is a frequent columnist for Foreign Policy In Focus. Klare has written fourteen books and hundreds of essays on issues of war and peace, resource competition, and international affairs. You can check out his work at Michaelklare.com. And with that, I'm happy to present this interview with Michael Klare.

So to start things off here, I’m curious if you could explain at the highest level, how is it that the Pentagon views climate change and why is the Pentagon interested in climate change?

Michael Klare: So, if you speak to people in the military, they will tell you over and over again that their top concern is China. China, China, China followed by Russia and then maybe North Korea and Iran, and they spend their days preparing for war with China and those other countries. Climate change intercedes into this conversation because ultimately they believe that climate change is going to degrade their capacity to prepare for and to fight China and other adversaries down the road, that climate change is a complicating factor, a distraction that will undermine their ability to perform their military duties, and moreover, they see that the threat posed by climate change is increasing exponentially over time. So, the more they look into the future, the more they see that climate change will degrade their ability to carry out what they see as their primary function, which is to prepare for war with China. And so, it’s in that sense that climate change is critical. Now, then you go down in the specific ways in which climate change is a problem, but it’s ultimately because it will distract them from doing what they see as their primary responsibility.

Lucas Perry: I see, so there’s a belief in the validity of it and the way in which it will basically exacerbate existing difficulties and make achieving more important objectives more difficult.

Michael Klare: Something like that. Climate change they see as an intrusion into their work space. They’re trained as soldiers to carry out their military duties, which is combat related, and they believe that climate change is very real and getting more intense as time goes on and it’s going to hold them back, intrude on their ability to carry out their combat functions. It’s going to be a distraction on multiple levels. It’s going to create new kinds of conflicts that they would rather not deal with. It’s going to create emergencies around the world, humanitarian disasters at home and abroad, all of these are going to suck away resources, time, effort, energy, money that they believe should be devoted to their primary function of preparing for war with major enemies.

Lucas Perry: What would you say the primary interests of the Pentagon are right now other than climate change?

Michael Klare: Other than climate change, well, the US Department of Defense at this time has a number of crises going on simultaneously. In addition to climate change, there's COVID, of course. Like every other institution in US society, the military was hampered by COVID; many service people came down with COVID and some died, and it forced military operations to be restricted. Ships had to be brought back to port because COVID broke out on ships, so that was a problem. The military is also addressing issues of racism and extremism in the ranks. That's become a major problem right now that they are dealing with, but they view climate change as the leading threat to national security of a non-military nature.

Lucas Perry: So, China was one of the first things that you mentioned. How would you also rank and relate the space of their considerations like Russia and a nuclear North Korea and Iran?

Michael Klare: Sure, the Department of Defense just released their budget for fiscal year 2022, and they rank the military threats and they say China is overwhelmingly the number one threat to US national security followed by Russia, followed by North Korea and Iran, and then down the list would be terrorist threats like Al-Qaeda and ISIS. But as you know, the administration has made a decision to leave Afghanistan and to downgrade US forces in that part of the world, so fighting terrorism and insurgency has been demoted as a major threat to US security, and even Russia has been demoted to second place. Over the past few years, Russia and China have been equated, but now China has been pushed ahead as the number one threat. The term they use is the pacing threat, which is to say that because China’s the number one threat, we have to meet that threat and if we can overcome China, the US could overcome any other threat that might come along, but China is number one.

Lucas Perry: So, there’s this sense of top concerns that the Department of Defense has, and then this is all happening in a context of climate change, which makes achieving its objectives on each of these threats more and more difficult. So, in the context of this interplay, can you describe the objectives of career officers at the Pentagon and how it’s related to and important for how they consider and work with climate change?

Michael Klare: Sure, so if you're an aspiring general or admiral right now, as I say, you're going to be talking about how you're preparing your units, your division, your ship, your air squadron to be better prepared to fight China, but you also have to worry about what they call the operating environment, the OE, the operating environment in which your forces are going to be operating, and if you're going to be operating in the Pacific, which means dealing with China, then you have a whole set of worries that emerges. We have allies there that we count on: Japan, South Korea, the Philippines.

These countries are highly vulnerable to the effects of climate change and are becoming more so very rapidly. Moreover, we have bases in those places. Most of those bases, air bases and naval bases, are at sea level or very close to sea level and over and over again have been assaulted by powerful typhoons and have been disrupted, have had to be shut down for days or weeks at a time, and some of those bases, like Diego Garcia in the Indian Ocean for example, or the Marianas Islands, are not going to be viable much longer because they're so close to sea level and sea level rise is just going to come and swamp them. So from an operating environment point of view, you have to be very aware of the impacts of climate change on the space in which you're going to operate.

Lucas Perry: So, it seems like the concerns and objectives of career officers at the Pentagon can be distinguished in significant ways from the perspective and position of politicians. So, there's some tension, at least, between career officers, or the objectives of the Pentagon, and those constituencies of the American political parties that are skeptical of climate change?

Michael Klare: Yes, this was certainly the case during the Trump administration, because President Trump, as commander in chief, forbade the discussion of climate change, and he was a denier. He called it a hoax and he forbade any conversation of it. Now, the US military did have a position on climate change during the Obama administration: as early as 2010 the Department of Defense stated that climate change posed a serious threat to US security and was taking steps to address that threat. So when Trump came along, all of that had to go underground. It didn't stop, but the Pentagon had to develop a whole lot of euphemisms, like changing climate or extreme weather events, all kinds of euphemisms used to describe what they saw as climate change, but that didn't stop them from facing the consequences of climate change. During the Trump administration, US military bases in the US suffered billions and billions of dollars of damage from Hurricane Michael and from others that hit the East Coast and the Gulf of Mexico and did tremendous damage to a number of key US bases.

And, the military is still having to find the money to pay for that damage, and the Navy in particular is deeply concerned about its major operating bases in the United States. A Navy base by definition is going to be at the ocean, and many of these bases are in very low-lying areas and already are being repeatedly flooded at very high tides or when there are storms, and the Navy is very aware that its ability to carry out its missions to reinforce American forces either in the Atlantic or Pacific is at risk because of rising seas. They had to maneuver around Trump all during that period, trying to protect their bases, but calling the danger they faced by different names.

Lucas Perry: Right, so there's this sense of Trump essentially canceling mention of climate change throughout the federal government and its branches, and the Pentagon quietly continuing to respond to what it sees as a real threat. Is there anything else you'd like to add here about the Obama to Trump transition that helps to really paint the picture of how the Pentagon views climate change and what it did despite attempts to suppress thought and action around climate change?

Michael Klare: During the Obama administration, as I say, the Department of Defense acknowledged the reality of climate change, number one. Number two, it said climate change posed a threat to US national security, and as a result said that the Department of Defense had an obligation to reduce its contribution to climate change, to reduce its emissions, and made all kinds of pledges that it was going to reduce its consumption of fossil fuels, increase its reliance on renewable energy, and begin constructing solar arrays. A lot of very ambitious goals were announced in the Obama period, and all of this was supposed to stop when Trump came into office, because he said we're not going to do any of this anymore. In fact, the Pentagon continued to proceed with a lot of these endeavors, which were meant to mitigate climate change, but again, using different terminology: this was about base reliance, self-reliance, resiliency, and so on, not mentioning climate change, but nonetheless continuing to proceed with efforts to actually mitigate their impact on the climate.

Lucas Perry: All right, so is there any sense in which the Pentagon’s view of climate change is unique? And, could you also explain how it’s important and relevant for climate change and also the outcomes related to climate change?

Michael Klare: Yes, I think the Pentagon's view of climate change is very distinctive and not well understood by the American public, and that's why I think it's so important, and that is that the Department of Defense sees climate change as… the term they use is a threat multiplier. They say, look, we look out at the world and part of our job is to forecast ahead of time where threats are going to emerge to US security around the world. That's our job, and to prepare for those threats, and we see that climate change is going to multiply threats in areas of the world that are already unstable, that are already suffering from scarcities of resources, where populations are divided and where resources are scarce and contested, and that this is going to create a multitude of new challenges for the United States and its allies around the world.

So, this notion of a threat multiplier is very much a part of the Pentagon’s understanding of climate change. What they mean by that is that societies are vulnerable in many ways and especially societies that are divided along ethnic and religious and racial lines as so many societies are, and if resources are scarce, housing, water, food, jobs, whatever, climate change is going to exacerbate these divisions within societies, including American society for that matter, but it’s going to exacerbate divisions around the world and it’s going to create a social breakdown and state collapse. And, the consequence of state collapse could include increased pandemics for example, and contribute to the spread of disease. It’s going to lead to mass migrations and mass migrations are going to become a growing problem for the US.

The influx of migrants on America's Southern border is an example: many of these people today are coming from Central America, from an area that's suffering from extreme drought and where crop failure has become widespread, and people can't earn an income and they're fleeing to the United States in desperation. Well, this is something the military has been studying and talking about for a long time as a consequence of climate change, as an example of the ways in which climate change is going to multiply schisms in society and threats of all kinds that ultimately will endanger the United States, but that will fall on their shoulders to cope with, creating humanitarian disasters and migratory problems.

And as I say, this is not what they view as their primary responsibility. They want to prepare for high-tech warfare with China and Russia, and they see all of this as a tremendous distraction, which will undermine their ability to defend the United States against its primary adversaries. So, it’s multiplying the threats and dangers to the United States on multiple levels including, and we have to talk about this, threats to the homeland itself.

Lucas Perry: I think one thing you do really well in your book is you give a lot of examples of natural disasters that have occurred recently, which will only increase with the existence of climate change as well as areas which are already experiencing climate change, and you give lots of examples about how that increases stress in the region. Before we move on to those examples, I just want to more clearly lay out all the ways in which climate change just makes everything worse. So, there’s the sense in which it stresses everything that is already stressed. Everything basically becomes more difficult and challenging, and so you mentioned things like mass migration, the increase of disease and pandemics, the increase of terrorism in destabilized regions, states may begin to collapse. There is, again, this idea of threat multiplication, so everything that’s already bad gets worse.

There's loss of food, water, and shelter instability. There's an increase in natural disasters from more and more extreme weather. This all leads to more resource competition and also energy crises as rivers dry up, hydroelectric dams stop working, and the energy grid gets taxed more and more by the extreme weather. So, is there anything else that you'd like to add here in terms of the specific ways in which things get worse and worse from the factor of threat multiplication?

Michael Klare: Then, you start getting kind of specific about particular places that could be affected, and the Pentagon would say, well, this is first going to happen in the most vulnerable societies, poor countries, Central America, North Africa, places like that where society is already divided and poor and the capacity to cope with disaster is very low. So, climate change will come along and conditions will deteriorate, and the state is unable to cope and you have breakdown and you have these migrations, but they also worry that as time goes on and climate change intensifies, bigger and richer and more important states will begin to disintegrate, and some of these states are very important to US security and some of them have nuclear weapons, and then you have really serious dangers. For example, they worry a great deal about Pakistan.

Pakistan is a nuclear armed country. It's also deeply divided along ethnic and religious lines, and it has multiple vulnerabilities to climate change. It swings between extremes: water scarcity, which will increase as the Himalayan glaciers disappear, and flooding, since we know that monsoons are likely to become more erratic and more destructive.

All of these pose great threats to the ability of Pakistan's government and society to cope with all of its internal divisions, which are already severe to begin with, and what happens when Pakistan experiences a state collapse and nuclear weapons begin to disappear into the hands of the Taliban or of forces close to the Taliban? Then you have a level of worry and concern much greater than anything we've been talking about before, and this is something that the Pentagon has started to worry about and to develop contingency plans for. And, there are other examples of this level of potential threat arising from bigger and more powerful states disintegrating. Saudi Arabia is at risk, Nigeria is at risk, the Philippines, a major ally in the Pacific, is at extreme risk from rising waters and extreme storms, and I could continue, but from a strategic point of view, this starts getting very worrisome for the Department of Defense.

Lucas Perry: Could you also paint a little bit of a picture of how climate change will exacerbate the conditions between Pakistan, India, and China, especially given that they’re all nuclear weapon states?

Michael Klare: Absolutely, and this all goes back to water, and many of us view water scarcity as the greatest danger arising from climate change in many parts of the world. India, China, and Pakistan, not to mention a whole host of other countries, depend very heavily on rivers that originate in the Himalayan mountains and draw a fair percentage of their water from the melting of the Himalayan glaciers, and these glaciers are disappearing at a very rapid rate and are expected to lose a very large percentage of their mass by the end of this century due to warming temperatures.

And, this means that the critical rivers that are shared by these countries, the Indus River shared by India and Pakistan, the Brahmaputra River shared by India and China, the Mekong is another, rivers which provide the water for irrigation that hundreds of millions of people, if not billions, depend on, are going to diminish. As the water supply begins to diminish, this is going to exacerbate border disputes. All of these countries, India and China, India and Pakistan, have border and territorial disputes. They have very restive agricultural populations to start with, and water scarcity could be the tipping point that will produce massive local violence that will lead to conflict between these countries, all of them nuclear armed.

Lucas Perry: So, to paint a little bit more of a picture of these historical examples of states essentially failing to respond to climate events, the destructive force that has been for society, and the increasing need for humanitarian operations, can you describe what happened in Tacloban, for example, as well as what is going on in Nigeria?

Michael Klare: So, Tacloban is a major city on the island of Leyte in the Philippines, and it suffered a direct hit from Typhoon Haiyan in 2013. This was the most powerful typhoon to make landfall up until that point, an extremely powerful storm that left millions homeless in the Philippines. Many people perished, but Tacloban was at the forefront of this: a city of several hundred thousand, with many poor people living in low-lying areas at the forefront of the storm. The storm surge was 10 or 20 feet high. That just overwhelmed these low-lying shanty towns and flooded them. Thousands of people died right away. The entire infrastructure of the city collapsed, was destroyed: hospitals, everything. Food ran out, water ran out, and there was an element of despair and chaos. The Philippine government proved incapable of doing anything.

And, President Obama ordered the US Pacific Command to provide emergency assistance, and it sent almost the entire US Pacific fleet to Tacloban to provide emergency assistance on the scale of a major war: an aircraft carrier, dozens of warships, hundreds of planes, thousands of troops. Now, it was a wonderful display of US aid, but there are a number of elements of this crisis that are worthy of mention. One was the fact that there was anti-government rioting because of the failure of the local authorities to provide assistance, or to provide it only to wealthy people in the town, and this is so often a characteristic of these disasters, that assistance is not provided equitably. The same thing was seen with Hurricane Katrina in New Orleans, and this then becomes a new source of conflict.

When a disaster occurs and you do not have an equitable emergency response, and some people are denied help while others are provided assistance, you're setting the stage for future conflicts and anti-government violence, which is what happened in Tacloban, and the US military had to intercede to calm things down. This is something that has altered US thinking about humanitarian assistance, because now they understand that it's not just going to be handing out food and water; it's also going to mean playing the role of a local government, providing police assistance, mediating disputes, and providing law and order, not just in foreign countries, but in the United States itself. This proved to be the case in Houston with Hurricane Harvey in 2017 and in Puerto Rico with Hurricane Maria, when local authorities simply disappeared or broke down and the military had to step in and play the role of government, which comes back to what I've been saying all along: from the military's point of view, this is not what they were trained to do.

This is not what they want to do, and they view this as a distraction from their primary military function. So, here's the Pacific fleet engaging in this very complex emergency in the Philippines, and what if a crisis with China were to break out? The whole force would have been immobilized at that time, and this is the kind of worry that they have: that climate change is going to create these complex emergencies, they call them, or complex disasters, that are going to require not just a quick in-and-out kind of situation, but a permanent or semi-permanent involvement in a disaster area and the provision of services for which the military is not adequately prepared. But they see that climate change increasingly will force them to play this kind of role, thereby distracting them from what they see as their more important mission.

Lucas Perry: Right, so there’s this sense of the military increasingly being deployed in areas to provide humanitarian assistance. It’s obvious why that would be important and needed domestically in the United States and its territories. Can you explain why the military is incentivized or interested in providing global humanitarian assistance?

Michael Klare: This has always been part of American foreign policy, American diplomacy: winning over friends and allies. So, it's partly to make the United States look good, particularly when other countries are not capable of doing that. We're the one country that has that kind of global naval capacity to go anywhere and do that sort of thing. So, it's a little bit a matter of showing off our capacity, but it's also, in the case of the Philippines, that the Philippines plays a strategic role in US planning for conflict in the Pacific.

It is seen as a valuable ally in any future conflict with China and therefore its stability matters to the United States and the cooperation of the Philippine government is considered important and access to bases in the Philippines, for example, is considered important to the US. So, the fact that key allies of the US in the Pacific, in the Middle East and Europe are at risk of collapsing due to climate change poses a threat to the whole strategic planning of the US, which is to fight wars over there, in the forward area of operations off the coast of China, or off of Russian territory. So, we are very reliant on the stability and the capacity of key allies in these areas. So, providing humanitarian assistance and disaster relief is a part of a larger strategy of reliance on key allies in strategic parts of the world.

Lucas Perry: Can you also explain the conditions in Nigeria and how climate change has exacerbated those conditions and how this fits into the Pentagon’s perspective and interest in the issue?

Michael Klare: So, Nigeria is another country that has strategic significance for the US, not perhaps on the same scale as say Pakistan or Japan, but still important. Nigeria is a leading oil producer, not as important as it once was perhaps, but nonetheless important. Nigeria is also a key player in peacekeeping operations throughout Africa, and because the US doesn't want to play that role itself, it relies on Nigeria for peacekeeping troops in many parts of Africa. And, Nigeria occupies a key piece of territory; it's surrounded by countries which are much more fragile and are threatened by terrorist organizations. So, Nigeria's stability is very important in this larger picture, and in fact Nigeria itself is at risk from terrorist movements, especially Boko Haram and splinter groups, which continue to wreak havoc in Northern Nigeria. Despite years of effort by the Nigerian government to crush Boko Haram, it's still a powerful force.

And, partly this is due to climate change. Boko Haram operates in areas around Lake Chad, which is now a small sliver of what it once was. It has greatly diminished in size because of global warming and water mismanagement. And so, the farmers and fisher folk whose livelihoods depended on Lake Chad have been decimated. Many of them have become impoverished. The Nigerian government has proved inept and incapable of providing for their needs, and many of these people, young men without jobs, have therefore fallen prey to the appeals of recruitment by Boko Haram. So, climate change is facilitating, is fueling, the persistence of groups like Boko Haram and other terrorist groups in Nigeria, but that's only part of the picture. There's also growing conflict between pastoralists, these are herders, cattle herders, whose lands are being devastated by desertification.

In the Sahel region, the southern fringe of the Sahara is expanding with climate change and driving these pastoralists, who are primarily Muslims, into lands occupied by mainly Christian farmers, and there's been terrible violence in the past few years, many hundreds of thousands of people displaced. Again, an inept Nigerian response, and I could go on. There's violence in the Niger Delta region in the south, and there are breakaway provinces. So, Nigeria is at permanent risk of breaking apart, and the US provides a lot of military aid and training to Nigeria. So, the US is involved in this country and faces the possibility of greater disequilibrium and greater US involvement.

Lucas Perry: Right, so I think this does a really good job of painting the picture of this factor of threat multiplication from climate change. So, climate change makes getting food, water, and shelter more difficult. There’s more extreme weather, which makes those things more difficult, which increases instability, and for places that are already not that stable, they get a lot more unstable and then states begin to collapse and you get terrorism, and then you get mass migration, and then there’s more disease spreading, so you get conditions for increased pandemics. Whether it’s in Nigeria or Pakistan and India or the Philippines or the United States and China and Russia, everything just keeps getting worse and worse and more difficult and challenging with climate change. So, could you describe the ladder of escalation of climate change related issues for the military and how that fits into all this?

Michael Klare: Well, now, this is an expression that I made up to try to put this in some kind of context, drawing on the ladder of escalation from the nuclear era, when the military talked about the escalation of conflict from a skirmish to a small war, to a big war, to the first use of nuclear weapons, to all-out nuclear war. That was the ladder of escalation of the nuclear age, and what I see happening is something of a similar nature, where at present we're still dealing mainly with these threat-multiplying conditions occurring in the smaller and weaker states of Africa, Chad, Niger, Sudan, and the Central American countries, Nicaragua and El Salvador, where you see all of these conditions developing but not posing a threat to the central core of the major powers. But as climate change advances, the military expects and US intelligence agencies expect, as I indicated, that larger, stronger, richer states will experience the same kinds of consequences and dangers and begin to experience this kind of state disintegration.

So, in places like Chad and Niger, which involve this skirmishing between insurgents, terrorists, and other factions, the US is playing a role, but it's a remote one. Compare that to situations where a Pakistan collapses, a Nigeria collapses, a Saudi Arabia collapses, which would require a much greater involvement by American forces on a much larger scale. That would be the next step up the ladder of escalation arising from climate change, and then you have the possibility, as I indicated, where nuclear armed states would be drawn into conflict because of climate related factors, like the melting of the Himalayan glaciers and India and Pakistan going to war, or India and China going to war. Or, we haven't discussed this, but another consequence of climate change is the melting of the Arctic, and this is leading to competition between the US and Russia in particular for control of that area.

So, you go from the disintegration of small states to the disintegration of medium-sized states, to conflict between nuclear armed states, and eventually to conceivable US involvement in climate-related conflicts. That would be the ladder of escalation as I see it, and on top of that, you would have multiple disasters happening simultaneously in the United States of America, which would require a massive US military response. So, you can envision, and the military certainly worries about this, a time when US forces are fully immobilized and incapable of carrying out what they see as their primary defense tasks because they're divided: half their forces are engaged in disaster relief in the United States and the other half are dealing with these multiple disasters in the rest of the world.

Lucas Perry: So, I have a few bullet points here that you could expand upon or correct about this ladder of escalation as you describe it. So at first, there are the humanitarian interventions, where the military is running around to solve particular humanitarian disasters like in Tacloban. Then, there are limited military operations to support allies. There are disruptions to supply chains and an increase in failed states. There's conflict over resources. There are internal climate catastrophes and complex catastrophes, which you just mentioned, and then there's what you call climate shock waves, and finally all hell breaking loose, where you have multiple failed states and tons of mass migration, a situation that no state, no matter how powerful, is able to handle.

Michael Klare: A climate shock wave would be a situation where you have multiple extreme disasters occurring simultaneously in different parts of the world, leading to a breakdown in the supply chains that keep the world's economy afloat and keep food and energy supplies flowing around the world, and this is certainly a very real possibility. Scientists speak of clusters of extreme events, and we've begun to see that. We saw that in 2017, when Hurricane Harvey was followed immediately by Hurricane Irma in Florida, and then Hurricane Maria in the Caribbean and Puerto Rico, and the US military responded to each of those events but had some difficulty moving emergency supplies first from Houston to Florida, then to Puerto Rico. At the same time, the West of the US was burning up. There were multiple forest fires out of control, and the military was also supplying emergency assistance to California, Washington State, and Oregon.

That's an example of a cluster of extreme events. Now, looking into the future, scientists are predicting that this could occur on several continents simultaneously. And as a result, food supply chains would break down, and many parts of the world rely on imported grain supplies or other foodstuffs and imported energy. And in a situation like this, you could imagine a climate shock wave in which trade just collapses and entire states suffer major food catastrophes, leading to state collapse and all that we've been talking about.

Lucas Perry: Can you describe what all hell breaking loose is?

Michael Klare: Well, this is my expression for the all-of-the-above scenario. You have these multiple disasters occurring, and one thing that we have not discussed at length is the threat to American bases and how that would impact the military. So, you have these multiple disasters occurring that create a demand on the military to provide assistance domestically, like I say, many areas needing emergency assistance, and not just of the obvious sort of handing out water bottles, but, as I say, complex emergencies where the military is being called in to provide law and order, to restore infrastructure, to play the role of government. So, you need large-capacity organizations to step in. At the same time, it's being asked to do that in other parts of the world, or to intervene in conflicts with nuclear armed states happening simultaneously. But at the same time, its own bases have been immobilized by rising seas and flooding and fires. All of this is a very realistic scenario because parts of it have already occurred.

Lucas Perry: All right, so let's make a little bit of a pivot here into something that you mentioned earlier, which is the melting of the Arctic. So, I'm curious if you could explain the geopolitical situation that arises from the melting of the Arctic Ocean… Sorry, the Arctic region, which creates a new ocean and leads to Arctic shipping lanes, a new front to defend, and resource competition for fish, minerals, natural gas, and oil.

Michael Klare: Yes, indeed. In a way, the Arctic is how the military first got interested in climate change, especially the Navy, because the Navy never had much of an Arctic responsibility. It was covered with ice, so its ships couldn't go there, except for submarines on long-range patrols under the sea ice, so the Navy never had to worry about the Arctic. And then around 2009, the Department of the Navy created a climate change task force to address the consequences of melting Arctic sea ice and came to the view that, as you say, this is a new ocean that they would have to defend, one that they'd never thought about before and for which they were not prepared.

Their ships were not equipped, for the most part, to operate in the Arctic. So ever since then, the Arctic has become a major geopolitical concern of the United States on multiple fronts, but two or three points in particular need to be noted. First of all, the melting of the ice cap makes it possible to extract resources from the area, oil and natural gas, and it turns out there's a lot of oil and natural gas buried under the ice cap, under the seabed of the Arctic, and oil and gas companies are very eager to exploit those untapped reserves. So the area, what was once considered worthless, is now a valuable geo-economic prize, and countries have asserted claims to the area, and some of these claims overlap. So, you have border disputes in the Arctic between Russia and the United States, Russia and Norway, Canada and Greenland, and so on. There are now border disputes because of the resources that are in these areas. And because of drilling occurring there, you now need to worry about spills and disasters, so that creates a whole new level of Naval and Coast Guard operations in the Arctic. This has also led to shipping lanes opening up into the region, and who controls those shipping lanes becomes a matter of interest. Russia is trying to develop what it calls the Northern Sea Route from the Pacific to the Atlantic, going across its northern territory, across Siberia, and potentially this could save two weeks of travel for container ships moving from, say, Rotterdam to Shanghai and could be commercially very important.

Russia wants to control that route, but the U.S. and other countries say, "It's not yours to control." So, you have disputes over the sea routes. But then, more important than any of the above, Russia has militarized its portion of the Arctic, which is the largest portion, and this has become a new frontier for U.S.-Russian military competition, and there has been a huge increase in military exercises and base construction. Now, from the U.S. point of view, the Arctic is a new front in a future war with Russia, and they're training for this all the time.

Lucas Perry: Could you explain how the Bering Strait fits in?

Michael Klare: The Bering Strait between the U.S. and Russia is a pretty narrow space, and it's the only way to get from the North Pacific into the Arctic region, whether you're going to Northern Alaska and Northern Canada, or across from China and Japan along the Northern Sea Route to Europe. So, this becomes a strategic passageway, the way Gibraltar has been in the past. And both the U.S. and Russia are fortifying that passageway, and there's constant tussling going on there. It doesn't get reported much, but every other week or so, Russia will send its war planes right up to the edge of U.S. airspace in that region, or the U.S. will send its planes to the edge of Russian airspace to test their reflexes, and their naval maneuvers are happening all the time. So, this has come to be seen as an important strategic place on the global chessboard.

Lucas Perry: How does climate change affect the Bering Strait?

Michael Klare: Well, it affects it in the sense that it's become the entry point to the Arctic, and climate change has made the Arctic a place you want to go, which it wasn't before.

Lucas Perry: All right. So, one point that you made in your book that I'd like to highlight is that the Arctic is seen as a main place for conflict between the great powers in the coming years. Is that correct?

Michael Klare: Yes. For the U.S. and Russia, it's important. Here we would focus more on the Barents Sea, the area above Norway, and it helps of course to have a map in your mind, but Russia shares a border with Norway in its extreme north. That part of Russia is the Kola Peninsula, where the city of Murmansk is located, and that's the headquarters of Russia's Northern Fleet and where its nuclear missile submarines are based. That's one of Russia's few avenues of access into the Atlantic Ocean from its own territory, from its major naval port at Murmansk. The waters adjacent to Northern Norway and Russia, on the other side, have become a very important strategic military location. The U.S. has started building military bases with Norway in that area close to the Russian border. We've now stationed B-1 bombers in that area, so it is seen as likely that the first area of conflict in the event of a war between the U.S. and Russia is going to be at that spot.

Climate change figures into this because Russia views its Arctic region as critical economically as well as strategically and is building up its military forces there. And therefore, from the U.S.-NATO point of view, it's a more strategically important region. But you ask about China, and China has become very interested in the Arctic as a source of raw materials, but also as a strategic passageway from its east coast to Europe, for the reason I indicated: once the ice cap melts, they'll be able to ship goods to Europe in a much shorter space of time and bring goods back if they can go through the Arctic. But China is also very interested in drilling for energy in the Arctic and in minerals; there are a lot of valuable minerals believed to be in Greenland.

You can't get to those now because Greenland is covered with ice. But as that ice melts, which it's doing at a rapid rate, the ground is becoming exposed and mining activities have begun there for things like uranium, rare earths, and other valuable minerals. China is very deeply interested in mining there, and this has led to diplomatic maneuverings, didn't Donald Trump once talk about buying Greenland, and to geopolitical competition between the U.S. and China over Greenland and this area.

Lucas Perry: Are there any ongoing proposals for how to resolve territorial disputes in the Arctic?

Michael Klare: Well, the short answer is no. There's talk; there is something called the Arctic Council, an organization of the states that occupy territory in the Arctic region, and it has some very positive environmental agendas and has had some success in addressing non-geopolitical issues. But it has not been given the authority to address territorial disputes; the members have resisted that. So, it's not a forum that would provide for that. There is a mechanism under the United Nations Convention on the Law of the Sea that allows for adjudication of offshore territorial disputes, and it's possible that that could be a forum for discussion, but mostly these disputes have remained unresolved.

Lucas Perry: I don't know much about this. Does that have something to do with having so many miles from your continental shelf, or something to do with the tectonic plates or the ocean floor?

Michael Klare: Yes, so under the UN Convention on the Law of the Sea, you're allowed a 200 nautical mile exclusive economic zone off your coastline. Any coastal country can claim 200 nautical miles. But you're also allowed an extra 150 miles if you can prove scientifically that your outer continental shelf extends beyond 200 nautical miles; then you can extend your EEZ another 150 nautical miles, out to 350 nautical miles. And the Northern Arctic has islands and territories that have allowed contending states to claim overlapping EEZs-

Lucas Perry: Oh, okay.

Michael Klare: … on this basis.

Lucas Perry: I see.

Michael Klare: And Russia has claimed vast areas of the Arctic as part of its outer continental shelf. But the great imperial power of Denmark, which territorially is one of the largest imperial powers on earth because it owns Greenland, also claims an extended outer continental shelf for Greenland that overlaps with Russia's, as does Canada's. You have to picture looking down from above, not on the kind of wall maps we have of the world in our classrooms that make the Arctic look huge, but on a globe: everything comes closer together up there. And so, these extended EEZs overlap, and so Greenland, and Canada, and Russia are all claiming the North Pole.

Lucas Perry: Okay. So, I think that paints really well the picture of the already existing conflict there and how it will likely only get worse. It'd be great if we could focus a bit on nuclear weapons risk and climate change in particular. I'm curious if you could explain the DOD's concerns about an improving China, a nuclear North Korea, India, Pakistan, and other nuclear states in this evolving situation of increasing territorial disputes due to climate change.

Michael Klare: From a nuclear war perspective, the two greatest dangers, I think, are ones I've mentioned. One is the collapse or the disappearance of the Himalayan glaciers sparking a war between India and China that would go nuclear, or one between India and Pakistan that would go nuclear. That's one climate-related risk of nuclear escalation. The other is in the Arctic, and here, I think, the danger is the fact that Russia has turned the Arctic into a major stronghold for its nuclear weapons capabilities. It stations a large share of its nuclear retaliatory force, warheads on submarines and other forces, in the Arctic. And so, in the event of a conflict between the U.S. and Russia, this could very well take place in the Arctic region and trigger the use of nuclear weapons as a consequence.

Lucas Perry: I think we've done a really good job of showing all of the bad things that happen as climate change gets worse. The Pentagon has a perspective on everything that we've covered here, is that correct?

Michael Klare: Yes.

Lucas Perry: So, how does the Pentagon intend to address the issue of climate change and how it affects its operations?

Michael Klare: The Pentagon has multiple responses to this, and this began as early as the Quadrennial Defense Review of 2010. This is an every-four-year strategic blueprint released by the Joint Chiefs of Staff and the Secretary of Defense, and that year's was the first one that, number one, identified climate change as a national security threat and spelled out the responses that the military should make, and there were three parts to that. One part is, I guess you would call it, hardening U.S. bases to the impacts of climate change, increasing resiliency, and building seawalls to protect low-lying bases, but otherwise enhancing the survivability of U.S. bases in the face of climate change. That's one response. A second response is mitigating the department's own contributions to climate change by reducing its reliance on fossil fuels. And I could talk about what specifically they're doing in that area.

The third is, and I think this is very interesting, they said that because climate change is a global problem, this was specific, it affects our allies and friends, and therefore we should work with our allies and with the military forces of our allies and friends to do in their countries the same things we're doing at home, that is, to build resilience, to prepare for climate change, to reduce impacts, so that this would be a global cooperative effort, military to military. This has gotten very little attention, I think, from the media and from Congress and elsewhere, but it's a very important part of American foreign policy with respect to climate change.

Lucas Perry: So, there's hardening our own bases and systems, I believe in your book you mention, for example, turning bases into operational islands such that their energy and material needs are disconnected from supply lines. The second was reducing the greenhouse emissions of the military, and the third is helping allies with such efforts. I'm curious if you could describe a bit more the first and the second of these, the hardening of our own systems and bases and becoming more green. Because it is interesting, and at least a little bit surprising, that the military is trying to become green in order to improve combat readiness through independence from foreign and domestic fuel needs and sources. So, could you explain a little bit more about this, for example, the drive to create a green fleet in the Navy?

Michael Klare: Sure. Now, this began during the Obama administration and then went semi-underground during the Trump administration, so the information we have is mainly pre-Trump. Now, under President Biden, climate change has been elevated to a national security threat, as per an executive order he issued shortly after taking office, and our new Secretary of Defense, Lloyd Austin, has issued a complementary statement that climate change is a department-wide Department of Defense concern, so activities that were prohibited by the Trump administration will now be revived. So, we will now hear a lot more about this in the months ahead, but there was a four-year blackout of information on what was being done. During the Obama administration, the Department of Defense was ordered, as I say, to work on both adaptation and mitigation as part of its responsibilities. The adaptation particularly affected bases in low-lying coastal areas.

And a lot of U.S. bases, for historic reasons, are located along the East Coast of the U.S.; that's where they started out. The most important of them is the Norfolk Naval Station in Virginia, the most important naval base in the United States. It's at sea level and it's on basically reclaimed swampland, and it's subsiding into the ocean at the same time sea level is rising. But there are many other bases along the East Coast, in Florida, and on the Gulf Coast that are at equal risk. And so, part of what the military is doing is building seawalls to protect bases against sea surges, moving critical equipment from high flood-prone areas to areas at higher elevation, and adopting building codes: any new buildings built on these bases have to be hardened against hurricanes, and sensitive electronic equipment has to be put on the higher stories so that if the bases are flooded it won't be damaged.

There are a lot of very concrete measures that have to do with base construction that have been undertaken to enhance the resilience of bases in response to extreme storms and flooding. That's one aspect of this. The mitigation aspect is to reduce reliance on fossil fuels and to convert, wherever possible, air, ground, and sea vehicles to use alternative fuels. So, the Navy, the Army, and the Air Force are converting their non-tactical vehicle fleets; they all have huge numbers of ordinary sedans, and vans, and trucks. Increasingly, these will be hybrids or electric vehicles. And the Air Force is experimenting with alternative fuels produced from algae, and the Navy has experimented with alternative fuels derived from agricultural products, and so on. So, there's a lot of experimentation going on, and some of the biggest solar arrays in the U.S. are on U.S. military bases or constructed at the behest of U.S. military bases by private energy companies. Those are some of the activities that are underway.

Lucas Perry: In addition to threatening U.S. military bases and the bases of our allies, climate change will also affect the safety and security of, for example, biosafety level 4 labs and nuclear power plants. So, I'm curious how you view the risks of climate change affecting crucial infrastructure that, should it fail, could create global catastrophe, for example, nuclear power plants melting down or pathogens being released from biosafety labs that fail under the stresses of climate change.

Michael Klare: I have not seen anything on the bio labs in the Pentagon literature. What they do worry about is the fragility of the U.S. energy infrastructure in particular, in part because they depend on the same energy infrastructure as we do for their energy needs, for electricity transmission, pipelines, and the like, to supply their bases and their other facilities. And they're very aware that the U.S. energy infrastructure is exceedingly vulnerable to climate change: a very large part of our infrastructure is on the East Coast and the West Coast, very close to sea level, very exposed to storm damage, and a lot of it is just fragile. A clear example of that is Hurricane Maria in Puerto Rico, when the electric system collapsed entirely and the Army Corps of Engineers had to come in and were there for almost an entire year rebuilding the energy infrastructure of Puerto Rico.

They've had to do this in other places as well. So, they are very worried that climate change disasters, multiple disasters, will knock out the power in the U.S., causing major cascading failures. So, when energy fails, then petrochemical facilities fail. And that's what happened in Houston during Hurricane Harvey. The power went out and these petrochemical facilities, which Houston has many of, failed, and toxic chemicals spilled out, and also the sewer system collapsed. So, you have cascading failures producing toxic threats. And the military had to issue protective clothing to its personnel doing rescue operations because the water in flooded areas of Houston was poisonous. So, it's the cascading effects that they worry about. This happened in New York City with Hurricane Sandy in 2012, where power went out, then gas stations couldn't operate and hospitals and nursing homes couldn't function. Well, I'm going on here, but you get a sense of the interrelationship between these critical elements of infrastructure. Fires are another aspect of this, as we know from California. A lot of US bases in California are at risk from fires, as are the transmission lines that carry the energy. I was going to mention the Colonial Pipeline disaster, which was a cyber attack, not climate related, but that exposes the degree to which our energy infrastructure is fragile.

Lucas Perry: If it rains or snows just enough, we’ve all experienced losing power for six hours or more. The energy grid seems very fragile even to relatively normal weather.

Michael Klare: Yes, but with climate change you get these multiple, simultaneous disasters where whole systems break down.

Lucas Perry: Do you see lethal autonomous weapons as fitting into the risks of escalation in a world stressed by climate change?

Michael Klare: Well, I see lethal autonomous weapons as a major issue and problem, which I've written about and worry about a great deal. Now, what is their relationship to climate change? I couldn't say. I think the military in general feels that it is facing a world in which humans are increasingly unable to cope with the demands of time compression in decision-making and the complexity of the environment in which decision-makers have to operate, and that's partly technological; it's partly just the complexity of the world that we've been discussing.

And so, there's an ever-increasing sense among the military that commanders have to be provided with computer-assisted decision-making and autonomous operations because they can't process the amount of data that's coming at them simultaneously. This is behind not just autonomous weapons systems, but autonomous decision-making more broadly. The new plans for how the Army, Navy, and Air Force will operate involve fewer human decision-makers and more machine information processors and decision-makers, in which humans will be given a menu of possible choices, but the choices will be to strike this set of targets or that set of targets, not to stop and think about this and maybe de-escalate. They're going to be militarized options.

Lucas Perry: So, in some sense lethal autonomous weapons potentially exacerbate or accelerate the speed at which the ladder of escalation is climbed.

Michael Klare: No question about it. Many factors are contributing to that. The speed of weaponry, the introduction of hypersonic missiles, which cuts down flight time from 30 minutes to five minutes, the fact that wars are being conducted in what they call multiple domains simultaneously: cyber, space, air, sea, and ground, such that no commander can know what's happening in all of those domains and make decisions. So, you have to have what they want to create, a super brain called the Joint All-Domain Command and Control System, the JADC2 system, which will collect data from sensors all over the planet and compress it into simplified assessments of what's happening, and then tell commanders, here are your choices, one, two, and three, and you have five seconds to choose, and if not, we'll pick the best one, and it will be linked directly to the firing units to launch weapons. This is what the future will look like, and they're testing this now. It's called Project Convergence.

Lucas Perry: So, how do you see all of this affecting the risks of human extinction and of existential risks?

Michael Klare: I’m deeply concerned about this inclination to rely more on machines to make decisions of life and death for the planet. I think everybody should be worried about this, and I don’t think enough attention is being paid to these dangers of automating life and death decision-making, but this is moving ahead very rapidly and I think it does pose enormous risks. The reason that I’m so worried is that I think the computer assisted decision-making will have a bias towards military actions.

Humans are imperfect and sometimes we make mistakes. Sometimes we get angry and we go in the direction of being more violent and brutal. There's no question about that, but we also have a capacity to say, stop, wait a minute, there's something wrong here and maybe we should think twice and hold back. And that's saved us on a number of occasions from nuclear extinction. I recommend the book Gambling with Armageddon by Martin Sherwin, a new day-by-day, hour-by-hour account of the Cuban Missile Crisis, in which it is clear that the US and Russia came very close, extremely close, to starting a nuclear war in 1962, and somebody said, "Wait a minute, let's just think about this. Let's not rush into this. Let's give it another 24 hours to see if we can come up with a solution."

Adlai Stevenson apparently played a key role in this. I fear that the machines we design are not going to have that kind of thinking built into them, that kind of hesitancy, those second thoughts. I think the machines are going to be designed so that the algorithms that inhabit them reflect the most aggressive possible outcomes, and that's why I fear we will move closer to human extinction in a crisis than before, because the time for decision-making is going to be so compressed that humans are going to have very little chance to think about it.

Lucas Perry: So, how do you view the interplay of climate change and autonomous weapons as affecting existential risk?

Michael Klare: Climate change is just going to make everything on the planet more stressful in general. It's going to create a lot of stress, a lot of catastrophes occurring simultaneously, a lot of risk events that people are going to have to deal with, and that's going to force a lot of hard, difficult choices. Let's say you're the president, you're the commander in chief, and you have multiple hurricanes and fires striking the United States, which is hardly an unlikely outcome, at the same time that there's a crisis with China and Russia occurring where war would be a possible outcome. There's a naval clash in the South China Sea or something happening on the Ukraine border, and meanwhile, Nigeria is breaking apart and India and Pakistan are on the verge of war.

These are very likely situations in another 10 to 20 years if climate change proceeds the way it is. So, just the complexity of the environment, the stress that people will be under, the decisions they’re going to have to make swiftly between do we save Miami or do we save Tokyo? Do we save Los Angeles or do we save New York, or do we save London? We only have so many resources. In these conditions, I think the inclination is going to be to rely more on machines to make decisions and to carry out actions, and that I think has inherent dangers in it.

Lucas Perry: Do you and/or the Pentagon have a timeline for… How much and how fast is the instability from climate change coming?

Michael Klare: This is a progression. We’re on that path, so there’s no point at which you could say we’ve reached that level. It’s just an ever increasing level of stress.

Lucas Perry: How do you see the world in five or 10 years given the path that we’re currently on?

Michael Klare: I'm pessimistic about this, and the reason I am pessimistic is because of what you find if you go back and read the very first reports of the Intergovernmental Panel on Climate Change, the IPCC. They gave a series of projections based on their estimates of the pace of greenhouse gas emissions: if emissions go this high, then you get these projections; if they go higher, then those projections, out to 2030, 2040, 2050. We've all seen these charts.

So, if you go back to the first ones, in 2021 we are by and large living what they said were the worst case projections for 2040 to 2050. So, we're moving into the danger zone. What I'm saying is we're moving into the danger zone much, much faster than the worst case scenarios that scientists were talking about 10 or 20 years ago, and if that's the case, then we should be very, very worried about the pace at which this is occurring, because we're off the charts now from those earlier predictions of how rapidly sea level rise, desertification, and heat waves were occurring. We're living in a 2050 world now. So, where are we going to be in 2030? We're going to be in a 2075 world, and that is a pretty damn scary world.

Lucas Perry: All right, so I’m mindful of the time here. So, just a few more questions about messaging would be nice. So, do you think that tying existential risks to national security issues would benefit the movement towards reducing existential risks, given that climate change is elevated in some sense by the DOD taking it seriously on account of national security?

Michael Klare: So, let me explain why I wrote this book. This is very much a product of the Trump era, but I think it's still true today that you have a country that's divided between environmentalists and climate deniers, and this divide has prevented forward movement in Congress to pass legislation that will make a significant difference in this country. I believe the kind of changes we need have to come from the national level: the massive investments in renewables and charging stations for electric vehicles, all these things require national leadership, and right now that's impossible because of the fundamental divide between the Democrats and Republicans, or denialists and environmentalists, however you want to put it. Some of my friends in the environmental community, dear friends, think if we could only get across the message that things are getting worse, those deniers will finally wake up and change their views.

I don't think that's going to happen. I think more scientific evidence about climate change is not going to win over more people. We've tried that. We've done everything we can to make the scientific evidence known. So, the way to win, I believe, is the military perspective: this is a threat to the national security of the United States of America. Are you a patriotic American or not? Do you care about the security of this country or not?

This is not a matter of environmentalism or anti-environmentalism. This is about the national security of this country. Where do you stand on that? This is a third approach that could possibly win over some segment of the population that until now has resisted action on climate change and that's not going to listen to an environmentalist or green argument. There is evidence that this approach is making a difference: Republicans who won't even talk about the causes of climate change will acknowledge that their communities or the country are at risk on a national security basis, and are therefore willing to invest in some of the changes that are necessary for that reason. So, I do believe that making this argument could win over enough of that resistant population to make it possible to actually achieve forward momentum.

Lucas Perry: Do you think that relating climate change to migration issues is helpful for messaging?

Michael Klare: I’m not sure because I think people who are opposed to migration don’t care what the cause is, but I do think that it might feed into the argument that I was just making that our security would be better off by emphasizing climate change and therefore taking steps to reduce the pressures that lead climate migrants to migrate. The military certainly takes that view, so it could be helpful, but I think it’s a difficult topic.

Lucas Perry: All right, so given everything we’ve discussed here today, how would you characterize and summarize the Pentagon’s interest, view, and action on climate change and why that matters?

Michael Klare: So, now we have a new test, because as I've indicated, we had a blackout period of four years during the Trump administration when all of this was hidden and couldn't be discussed. So, we don't know how much was accomplished. Now, this is an explicit priority for the Department of Defense; the defense budget and other documents say that this is a priority for the department and the Armed Forces, and they are required to take steps to adapt to climate change and to mitigate their role in climate change.

So, we have to see how much actually is accomplished in this new period before you can really make any definitive assessment, but I think you can see that the language adopted by the Biden administration and Lloyd Austin at the Department of Defense is so much stronger and more vigorous than what the Pentagon was saying in the Obama administration. So, even though there was a four year blackout period, there was a learning curve going on, and what they're saying today is much more advanced in the sense of recognizing the severity of the risks posed by climate change and the necessity of making this a priority.

Lucas Perry: All right, so as we wrap up, are there any final words or anything you’d like to share that you feel is left unsaid or any parting words for the audience?

Michael Klare: As I started out, we mustn’t forget that if you asked anybody in the military what their job is, they’re going to come back to China number one. So, we shouldn’t forget that, defending against China. It’s only after you peel away the layers of how they’re going to operate in a climate altered world that all of these other concerns start spilling out, but it’s not going to be the first thing that they’re likely to say. I think that has to be clear, based on my conversations, but there is a real awareness that in fact climate change is going to have an immense impact on the operations of the military in the years ahead, and that its impact is going to grow exponentially.

Lucas Perry: All right. Well, thank you very much for coming on, Michael, and for sharing all of this with us. I really appreciated your book and I recommend others check it out as well; it's All Hell Breaking Loose. I think it does a really good job of showing the ways in which the world is going to get worse through climate change. There are a lot of really great examples in there. Also, the audiobook has a really great narrator, which I very much liked. So, thank you very much for coming on. If people want to check you out or follow you on social media, where and how can they do that?

Michael Klare: Oh, I’m at michaelklare.com and let’s start there.

Lucas Perry: All right. Do you also have a place for you where you list publications?

Michael Klare: At that site.

Lucas Perry: At that site? Okay.

Michael Klare: And, it’s K-L-A-R-E, Michael Klare, K-L-A-R-E.

Lucas Perry: All right, thank you very much, Michael.

Avi Loeb on UFOs and if they’re Alien in Origin

  • Evidence counting for the natural, human, and extraterrestrial origins of UAPs
  • The culture of science and how it deals with UAP reports
  • How humanity should respond if we discover UAPs are alien in origin
  • A project for collecting high quality data on UAPs

Watch the video version of this episode here

See here for information on the Podcast Producer position

See the Office of the Director of National Intelligence report on unidentified aerial phenomena here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. This is a follow up interview to the main release that we've done with Avi Loeb. After our initial interview, the US Government released a report on UFOs, otherwise now known as UAPs, titled Preliminary Assessment: Unidentified Aerial Phenomena. This report is a major and significant event in the history and acknowledgement of UFOs as a legitimate phenomenon. As our first interview with Avi focused on 'Oumuamua and its potential alien origin, we also wanted to get his perspective on UFOs, this report, his views on whether they're potentially alien in origin, and what this all means for humanity.

In case you missed it in the main episode, we’re currently hiring for a Podcast Producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. As the Producer you would be working directly with me, and the FLI outreach team, to help grow, and evolve this podcast. If you’re interested in applying, head over to the Careers tab on the Futureoflife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org.

And with that, I’m happy to present this bonus interview with Avi Loeb.

The Office of the Director of National Intelligence has released a preliminary assessment on unidentified aerial phenomena, which is a new word that they’re using for UFO, so now it’s UAP. So can you summarize the contents of this report and explain why the report is significant?

Avi Loeb: The most important statement made in the report is that some of the objects that were detected are probably real, and that is based on the fact that they were detected by multiple instruments: radar systems, infrared cameras, optical cameras, or several military personnel seeing the same thing doing the same thing at the same time. And that is a very significant statement, because the immediate suspicion is that unusual phenomena occur when you have a smudge on your camera or a malfunction of some instrument, and the fact that there is corroborating evidence among different instruments implies that something real must be happening. That's the first significant statement.

And then there were 144 incidents documented but it was also mentioned there is a stigma on reporting because there is a taboo on discussing extraterrestrial technologies, and as a result only a small minority of all events were reported. But nevertheless, the Navy established in March 2019 a procedure for reporting, which was not available prior to that and the Air Force followed on that in December 2020. So it’s all very recent that there is this procedure or formal path through which reports can be obtained. And of course, that helps in the sense that it provides a psychological support system for those who want to report about unusual things they have witnessed, and prior to that they had to dare to speak given the stigma.

And so the second issue is, of course, what these objects are, if some of them are real. And by the way, we only saw a small fraction of the evidence; most of it is classified, and the reason is that the government owns the sensors that were used to obtain this evidence. These sensors are being used to monitor the sky and therefore have national security importance, and we don't want to release information about the quality of the sensors to our adversaries, to other nations. And so the data itself is classified because the instruments are classified, but nevertheless one can think of several possible interpretations of these real objects.

And I should say that CIA Directors like Brennan and Woolsey, and President Barack Obama, spoke about these events as serious matters, so to me it all implies that we need to consider them seriously. There are several possible interpretations. One, of course, is that they are human made and some other nation produced them, but some of the objects behaved in ways that exceed our technologies, the limits of the technologies we have in the U.S., and we have some intelligence on what other nations are doing. Moreover, if there was another nation with far better technologies, those technologies would find their way into the consumer market, because there would be a huge financial benefit to using them, or we would see some evidence for them in the battlefield. And at the moment we have a pretty good idea, I would argue, as to what other nations are doing technologically speaking.

So if there are real objects behaving in ways that exceed human technologies, then the question is what could it be? And there are two possibilities, either these are natural phenomena that occur in the atmosphere that we did not expect or they are of extraterrestrial origin, some other technological civilization produced these objects and deployed them here. And of course, both of these possibilities are exciting because we learned something new. So the message I take from this report is that the evidence is sufficiently intriguing for the subject to move away from the talking points of politicians, national security advisors, military personnel that were really not trained as scientists. It should move into the realm of science where we use state-of-the-art equipment, such as cameras installed on wide field telescopes that scan the sky. These are telescopes you can buy off the shelf and you can position them in similar geographical locations, and monitor the sky with open data, and analyze the data using the best computers we have in a completely transparent way like a scientific experiment.

And so that’s what we should do next and instead what I see is that a lot of scientists just ridicule the significance of the report or say business as usual, that there is no need to attend to these statements. And I think it’s an opportunity for science to actually clarify this matter and clear up the fog and this is definitely a question that is of great interest to the public. What is the nature of these unidentified objects or phenomena? Are they natural in origin or maybe extraterrestrial? And I’m very much willing to address this with my research group given proper funding for it.

Lucas Perry: Let's stick here with this first point, right? I actually heard Neil deGrasse Tyson on MSNBC just before we started talking, and he was suggesting that hardware or software glitches could have been generating artifacts on these systems. And I think your first point very clearly refutes that, because you have multiple different systems plus eyewitness reports all corroborating the same thing. So it's confusing to me why he would say that.

Avi Loeb: Well, because he is trying to maximize the number of likes he has on Twitter, and doubting the reality of these reports appears to be popular among people in academia, among scientists, among some people in the public, and so he's driven by that. My point is, an intelligent culture follows the guiding principles of science, and those are about sharing evidence-based knowledge. What Neil deGrasse Tyson is doing is not sharing evidence-based knowledge, but rather dismissing evidence. And my point is, this evidence that is being reported in Washington, D.C. is intriguing enough to motivate us to collect more evidence rather than ignore it. So obviously, if you look at the history of science, we often make discoveries when we find anomalies, things that do not line up with what we expected.

And the best example is quantum mechanics, which was discovered a century ago; nobody expected it. It was forced upon us by experiments, and actually Albert Einstein at the time resisted one of its fundamental facets, entanglement, or what he called spooky action at a distance: that the quantum system knows about its different parts even if they are separated by a large distance, such that light signals cannot propagate between them over the time of the experiment. He argued that this cannot be the case and wrote a paper about it with his collaborators, the Einstein-Podolsky-Rosen paper. The experiment it described was eventually done and demonstrated that he was wrong, and even a century later we are still debating the meaning of quantum mechanics.

So it’s sort of like a bone stuck in the throat of physicists, but nevertheless, we know that quantum mechanics holds and applies to reality. And in fact the reason the two of us are conversing is because of our understanding of quantum mechanics, the fact that we can use it in all the instruments that we use. For example, the speaker that the two of us are using, and the internet, and the computers we are using, all of these use principles of quantum mechanics that were discovered over the century. And my point is, that very often when you do experiments there are situations where you get something you don’t expect, and that’s part of the learning experience. And we should respect deviations from our expectations because they carve a new path to new understanding, new learning about reality and about nature rather than ridiculing it, rather than always thinking that what we will find will match what we expected.

And so if the government, of all institutions, comes along with reports about unusual phenomena, and the government is very conservative very often, you would expect the scientific community to embrace that as an exciting topic to investigate. The government is saying something; let's figure out what it's about. Let's clarify the nature of these phenomena that appear anomalous rather than saying business as usual, I don't believe it, it could be a malfunction of the instrument. If it is a malfunction of the instrument, why did many instruments show the same thing? Why did many pilots report the same thing? And I should clarify that in the courtroom, if you have two eyewitness testimonies that corroborate each other, you can put people in jail as a result. So we believe people in the legal system, and somehow when it comes to pilots, who are very serious people that serve our country, Neil deGrasse Tyson dismisses their testimony.

So my point is not that we must believe it, but rather that it's intriguing enough for us to collect more data and evidence, and let's not express any judgment until we collect that evidence. The only sin we can commit is to basically ignore the reports and do business as usual, which is pretty much what he is preaching. My point is, no, instead we should invest funds in a new experiment or experiments that will shed more light on the nature of these objects.

Lucas Perry: So the second point that you made earlier was that the government has established a framework and system for receiving reports about UFOs. So as part of this document, is it true to say then that there is also a confirmation that the government does not know what these are and that they are not a secret U.S. project?

Avi Loeb: Yeah, they stated it explicitly in the report. They said the data is not good enough, the evidence is not good enough to figure out the nature of these objects so we don’t know what they are. And by the way, you wouldn’t expect military personnel or politicians to figure out the nature of anomalous objects because they were not trained as scientists. So when you go to a shoemaker, you won’t expect the shoemaker to bake you a cake. These are not people that were trained to analyze data of this type or to collect new data such that the nature of these objects will be figured out.

That is what scientists are supposed to do and that’s why I’m advocating for moving this subject away from Washington, D.C. into an open discussion in the scientific community where we’ll collect open data, analyze it in a transparent way, not with government owned sensors or computers, and then it will be all clear. It will be a transparent process. We don’t need to rely on Washington, D.C. to tell us what appears in our sky. The sky is unclassified in most locations. We can look up anytime we want and so we should do it.

Lucas Perry: So with your third point, there's this consideration that people are more likely to try and give a conventional explanation of UAPs as coming from other countries like Russia or China, and you're explaining that there are heavy commercial incentives against that. If you had this kind of technology, it could revolutionize your economy, and you wouldn't just be using it to pester U.S. Navy pilots off the coast, right? It could be used for really significant economic reasons. And so it seems like that also counts as evidence against it being conventional in origin or a secret government project. What is your perspective on that?

Avi Loeb: Yes, and it would not only find its place in the consumer market, but also in the battlefield. And we have a sense of what other nations are doing because the U.S. has its own intelligence, and we pretty much know what the status of their science and technology is. So it's not as if we are completely in the dark here, and I would argue that if the U.S. government reports these objects, there is good evidence that they are not made by those other nations, because if our intelligence told us that they were potentially made by other nations, then we would try to develop the same technologies ourselves. Another way to put it is, if a scientific inquiry into the nature of these objects allows us to get a high resolution photograph of one of them and then we see the label "Made in China" or "Made in Russia," then we would realize that there was a major failure of national intelligence, and that would be a very important conclusion of course, one with implications for our national security.

But I doubt that this is the case because we have some knowledge of what other nations are doing and the data would not have been released this way in this kind of a report if there was suspicion that these objects are human made.

Lucas Perry: So the report makes clear then that it’s not U.S. technology and there’re also reasons that count against it being for example, Russian or Chinese technology because the incentives are aligned for them to just deploy it already and use it in the public sector. So before we get into more specifics about thinking about whether or not these are human or extraterrestrial in origin, I’m curious if you could explain a bit more the flight and capability characteristics of these UAPs and UFOs and what you feel are the most significant reports of them and their activity.

Avi Loeb: Well, I didn't have direct access to the data, especially not the classified data, and I would very much want to see the full dataset before expressing an opinion, but at least some of the videos that were shown indicated motions that cannot be reproduced by the kind of craft that we own. But what I would like to know is whether, when the object moves faster than the speed of sound in air, for example, it produces a sonic boom, a shockwave like the one we see when jets do the same, because that would be an indication that indeed there is a physical object that is compressing air as it moves around. Or if it moves through water, I want to see the splash of water, and from that I can infer some physical properties.

And of course, I would like to have a very high resolution image of the object so that I can see if it has screws, if there is something written on it, either Made in China or Made on Planet X. Either message would be of great importance. So what I'm really after is access to the best data that we have, and obviously it will not be released by the government because the best data is probably classified. But I would like to collect it using scientific instrumentation, which by the way could be far better than the instruments that were on the airplanes or Navy ships that the pilots were using, because those were designed for combat situations and were not optimal for analyzing such objects. And we can do much better if we choose our scientific instruments carefully and design the experiment in a way that would reproduce the results with a much higher fidelity of the data.

Lucas Perry: There is so much about this that is similar to 'Oumuamua, in terms of the imaging being not quite enough to really know what it is, and then there being lots of interesting evidence that counts for an extraterrestrial origin. Is that a perspective you share?

Avi Loeb: Well, yes I wrote a Scientific American article where I said one thing we know about Oumuamua is that it probably had a flat shape, pancake like, and also if its push away from the sun was a result of reflecting sunlight it must have been quite thin and the size of a football field. And in that case, I thought maybe it serves as a lightsail, but potentially it could also be a receiver intended to detect information or signals from probes that were sprinkled on planets in the habitable zone around the sun. So if for example the UAP are probes transmitting signals, then the passage of such a receiver near Earth was meant to obtain that information. And Oumuamua for example, was tumbling every eight hours, was looking in all directions in principle for such signals, so that could be one interpretation that it was thin not because it was a lightsail, but because it served a different purpose.

And in September 2020, we saw another object that also exhibited an excess push away from the sun by reflecting sunlight and had no cometary tail. It was given the name 2020 SO and it was a rocket booster from a 1966 mission. It had thin walls for a completely different purpose, not having anything to do with it being a lightsail. So I would argue that perhaps Oumuamua had these weird properties because it served a different purpose and that’s why we should both try to get a better image of an object like Oumuamua and of the unidentified objects we find closer to Earth. And in both cases, a high resolution photograph is better than a 1,000 words, in my case better than 66,000 words, the number of words in my book.

Lucas Perry: In both cases, a little bit better instrumentation would have seemingly made a huge difference. So let's pivot again into this area of conventional explanations. We talked a little bit earlier about one conventional explanation being that this is some kind of advanced secret military technology of China or Russia that's used for probing our aerial defenses. And the argument that counts against that, again, was that there are military and economic incentives to deploy it more fully, especially because the flight characteristics that these objects are expressing are so much greater than anything that America has in terms of speed and agility. So one theory is that instead of the technology being actual, that they actually have technology that goes that fast and is that agile, this is some form of spoofing technology: some way for an adversary to train electronic countermeasures to simulate, emulate, or create the illusion of what we witnessed on the U.S. instruments. Do you think that such an explanation is viable?

Avi Loeb: I mean, it’s possible and that’s why we need more data. But it’s not easy to produce an illusion in multiple instruments, both radar, infrared, and optical sensors because you can probably create an illusion for one of these sensors, but then for all of them it would require a great deal of ingenuity and then you would need a good reason to do that. Why would other nations engage in deceiving us in this way for 20 years? I mean, that would look a bit off and also, we would have probably found something, some clue about them trying to do that because they would have trained such probes or such objects first in their own facilities and we would see some evidence for that. So I find it hard to believe, I would think it’s either some natural phenomena that we haven’t yet experienced or suspected or it’s this unusual possibility of an extraterrestrial origin.

And either way we will learn something new by exploring it more. We should not have any prejudice. We should not dismiss it. That would be the worst we can do, just dismiss it, and ridicule it, and continue business as usual because actually it’s exciting to try and figure out a puzzle. That’s what detectives often do and I just don’t understand the spirit of dismissing it and not looking into it at all.

Lucas Perry: So you just mentioned that you thought that it might have some kind of natural explanation. There are very strange things in nature. I'm not sure if this is real or not, but there's a Wikipedia page, for example, for ball lightning, and there are also really weird phenomena that you can see in the sky if the lighting and the position of the sun are just right, where you get weird halos and things. And throughout history there are reports of dancing lights in the sky, or things that might have been collective hallucinations or actually real. In terms of it being something natural that we understand, or something human made that we're not aware of, what to you is the most convincing natural or conventional explanation of these objects? An explanation that is not extraterrestrial in origin.

Avi Loeb: Well, if it's dancing lights it wouldn't produce a radar echo. So as I said, I don't have access to the data in each and every incident, but there is some fundamental logic that one can use for each of these datasets to figure out if it could be an illusion. If not, if it must be a real object, somehow nature has to produce a real object that behaves this way, and until I get my own data and reproduce those results I won't make any statement. But I'm optimistic that given the appropriate investment of funds, which I'm currently discussing with private sector funders, we can do it. And just to give you an example, if you wanted to get a high resolution image, like a megapixel image, of a one meter size object at a distance of a kilometer, you just need a one meter telescope observing it in optical light. And you will be able to see millimeter size features on it, like the head of a pin.

People ask why didn’t we see it already in iPhone images of the sky? Well, the iPhone camera is a millimeter or a few millimeters in aperture size and it’s too small. You can’t get anything better than a fuzzy image of a very distant object. So you really need to have a dedicated experiment, and I think one can do it, and I’m happy to engage in that.
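As a rough sketch of the arithmetic behind this claim, assuming diffraction-limited optics at a visible wavelength of about 500 nm and a range of one kilometer (the specific numbers here are illustrative, not taken from Loeb's proposal):

\theta \approx 1.22\,\frac{\lambda}{D} = 1.22 \times \frac{5\times 10^{-7}\,\mathrm{m}}{1\,\mathrm{m}} \approx 6\times 10^{-7}\,\mathrm{rad}, \qquad \Delta x \approx \theta\,R \approx 6\times 10^{-7} \times 10^{3}\,\mathrm{m} \approx 0.6\,\mathrm{mm}.

A one meter object then spans roughly 1,600 resolution elements per side, comfortably more than a megapixel, whereas an aperture of a few millimeters (a phone camera) gives a resolution element on the order of 10 to 20 centimeters at the same distance, hence the fuzzy images.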

Lucas Perry: You would also wonder about that. If they were extraterrestrial in origin, you would expect that they would be pretty intelligent and that they might understand what our sensor capabilities are. So, I think perhaps that might explain why, given that there are billions of camera phones around the planet, there aren't any good pictures. What is your perspective on that?

Avi Loeb: If I had to guess, I would think of these systems as equipped with artificial intelligence. We already have artificial intelligence systems that are capable of exceeding human abilities, and within a decade they will be more intelligent than people; they will be able to learn through machine learning, adapt to changing circumstances, and behave very intelligently. So in fact, I can imagine that if another civilization had more than a century of technological development beyond ours, they could have produced systems that are autonomous. It doesn't make any sense to communicate with the sender, because the nearest star is four light years away. It takes four years for a signal to reach even the nearest star, and it takes tens of thousands of years to reach the edge of the galaxy, the Milky Way.

And so there is no sense of a piece of equipment communicating with its senders in order to get guidelines as to what to do. Instead, it should be autonomous, it has its own intelligence, and it could outsmart us. We already almost have such systems. So in fact, we may need to use our own artificial intelligence systems in order to interpret the actions of those artificial intelligence systems. So it will resemble the experience of asking our children to interpret the content that we find on the internet because they have better computer skills. We need our computers to tell us what their computers are doing. And that’s the way I think about it and these systems could be so intelligent that they do things that are subtle. They don’t appear and start a conversation with us. They are sort of gathering information of interest to them and acting in a way that reflects the blueprint that guided whoever created them.

And the question is, what is their agenda? What is their intent? And that will take us a while to figure out. We have to see what kind of information they're seeking, how they respond to our actions, and eventually we might want to engage with them. But the point is, many people think of contact as a very abrupt interaction of extraordinary proportions that is impossible to deny, but in fact it could be very subtle, because they are very intelligent. If you look at intelligent humans, they're not aggressive; they're often thinking about everything they do and selecting their actions appropriately. They don't get into very violent confrontations often. We need to rely on evidence rather than prejudice, and the biggest mistake we can make is the mistake made by philosophers during the days of Galileo. They said, "We don't want to look through the telescope because we know that the sun moves around the Earth."

Lucas Perry: We spoke a little bit earlier about Bayesian reasoning and Oumuamua. Do you have the same feelings about not having priors about the UAPs being extraterrestrial or human in origin? Or is there a credence that you’re able to assign to it being extraterrestrial in origin?

Avi Loeb: The situation is even better in the context of UAP because they are here, not far from us and we can do the experiment with a rather modest budget. And therefore, I think we can resolve this issue with no need to have a prejudice. Often you need a prior if the effort requires extraordinary funds, so you have to say, okay is it really worth investing those funds? But my point is, that finding the answer to the nature of UAP may cost us much less than we already spent in the search for dark matter. We haven’t found anything. We don’t know where the dark matter is. We spent hundreds of millions of dollars and at a cost lower than that, maybe by an order of magnitude, we can try and figure out the nature of UAP. So given the price tag, let’s not make any assumptions. Let’s just do it and figure it out.

Lucas Perry: If these are extraterrestrial in origin, one might expect that they are here for probing or information gathering. So you said there are reports going back 20 years; if they are extraterrestrial in origin, who knows how long they've been here. They could have been sent out as nanoscale probes shot into the cosmos that land, and then grow, replicate on some planet, and act as scouts. So if this were the case, that they were here as information gathering probes, one might wonder why they don't use much more advanced technology. For example, why not use nanotechnology that we would have no hope of detecting? In one report though, the pilot describes the object following him, and then it comes right in front of him and disappears, so that disappearing seems a bit more like magic, right? Any sufficiently powerful technology is indistinguishable from magic to a less advanced civilization. But the other characteristics seem maybe 100 or 200 years away in terms of human technological advancement, so what's up with that?

Avi Loeb: Well, yeah, so for us to figure out what's going on we need more data, and it may well be that there are lots of things happening that we haven't yet realized, because they are done in a way that is very subtle, that we cannot really detect easily, because our technologies are limited by the century and a half over which we developed them. So you are right, there may be a hidden reality that we are not aware of, but we are seeing traces of things that attract our attention. That's why when we see something intriguing we should dig into it. It could be a clue to something that we have never imagined.

So for example, if you were to present a cellphone to a caveman, obviously the cellphone would be visible to the caveman and the caveman would think, oh it’s probably a rock, a shiny rock. So the caveman will recognize part of reality, that there is an object reflecting light, that is a bit more shiny than a typical rock because the caveman is used to playing with rocks. But the caveman, initially at least, will not figure out the features on the cellphone and the fact that he can speak to other people through this rock. That’s what will take us a while to be educated about. And the question is, among the things that are happening around us, which fraction are we aware of with the correct interpretation? And maybe we are not.

Lucas Perry: Moving on a bit here, if these UAPs, or 'Oumuamua itself, or some new interstellar object that we were able to find were fairly conclusively shown to be extraterrestrial technology, what do you think our response should be? It seems like on one hand this would clearly be a potential existential threat, which makes it relevant to the Future of Life Institute. On the other hand, it's likely that we could do nothing to counter such a threat. We probably couldn't even counter humanity 50 years from now if we had to defend ourselves against a 50 year older, wiser, more technologically advanced version of ourselves. And on cosmological timescales you would expect that even a 1,000 or 2,000 year lead would be pretty common, but also indefensible. So there's a sense that an antagonistic attitude would probably make things worse, but also that we couldn't do anything. So how do you think humanity should react?

Avi Loeb: The question of intent is indeed the next question after you identify an object that appears to be of extraterrestrial technological origin. We should all remember the story about the Trojan horse that looked very innocent to the citizens of Troy, but ended up serving a different purpose. That of course implies that we should collect as much evidence as possible about the objects that we find at first and see how they behave, what kind of information they are seeking, how do they respond to our actions, and ultimately we might want to engage with them. But I should mention, if you look at human history, nations that traded with each other benefited much more than nations that went into war with each other. And so a truly intelligent species might actually prefer to benefit from the interaction with us rather than kill us or destroy us, and perhaps take advantage of the resources, use whatever we are able to provide them with. So it’s possible that they are initially just spectators trying to figure out what are the things that they can benefit from.

But from our perspective, we should obviously be suspicious, and careful, and we should speak in one voice. Humanity as a whole, there should be an international organization perhaps related to the United Nations or some other entity that makes decisions about how to interact with whatever we find. And we don’t want one nation to respond in a way that would not represent all of humanity because that could endanger all of us. In that forum that makes decisions about how to respond, there should be of course physicists that figure out the physical properties of these objects and there should be policymakers that can think about how best to interact with these objects.

Lucas Perry: So you also mentioned earlier that you were talking with private funders about potentially coming up with an action plan or a project for getting more evidence and data on these objects. So I guess, there’s a two part question here. I’m curious if you could explain a little bit about what that project is about and more generally what can the scientific, non-governmental, and amateur hobbyist communities do to help investigate these phenomena? So are there productive ways for citizen scientists and interested listeners to contribute to the efforts to better understand UAPs?

Avi Loeb: Well, my hope is to get a high resolution photograph. It's a very simple thing to desire. We're not talking about some deep philosophical questions here. If we had a megapixel image, an image with a million resolution elements, of an object a meter in size, that would mean each pixel is a millimeter in size, the size of the head of a pin. You could pretty much see all the details on the object and try to figure out, reverse engineer, what it's meant to do and whether it's human made or not. So even a kid can understand my ambition. It's not very complicated. Just get a megapixel image of such an object. That's it.

Lucas Perry: They seem common enough that it wouldn't be too difficult if the-

Avi Loeb: Well, the issue is not how common they are but what device you are using to image them, because if you use an iPhone, the aperture on the iPhone will give you only a fuzzy image. What you need is a meter sized telescope collecting the light and resolving an object of a meter size at the distance of a kilometer down to a millimeter resolution.

Lucas Perry: Right, right. I mean that Navy pilots, for example, have reported seeing them every day for years, so if we had such a device then you wouldn’t have to wait too long to get a really good picture of it.

Avi Loeb: So that’s my point, if these are real objects we can resolve them, and that’s what I want to have, a high resolution image. That’s all. And it will not be classified because it’s being taken by off the shelf instruments. The data will be open and here comes the role that can be played by amateurs, once the data is available to the public anyone can analyze it. Nothing is classified about the sky. We can all look up and if I get that high resolution image, believe me that everyone will be able to look at it.

Lucas Perry: Do you have a favorite science fiction book? And what are some of your favorite science fiction ideas?

Avi Loeb: Well, my favorite film is Arrival, and in fact I admired this film long ago, but a few months ago the producer of that film had a Zoom session with me to tell me how much he liked my book Extraterrestrial. And I told him, "I admired your film long before you read my book." The reason I like this film is because it deals with the deep philosophical question of how to communicate with an alien culture. In fact, even the medium through which the communication takes place in the film is unusual, and the challenge is similar to code breaking, sort of like the Enigma project that Alan Turing led during the Second World War, trying to break the code of the Nazis. So if you have some signal and you want to figure out the meaning of it, it's actually a very complex challenge depending on how the information is being encoded. And I think the film addresses it in a very genuine and original fashion, and I liked it a lot.

Lucas Perry: So do you have any last minute thoughts or anything you’d just really like to communicate to the audience and the public about UAPs, these reports, and the need to collect more evidence and data for figuring out what they are?

Avi Loeb: My hope is that with a high resolution image we will not only learn more about the nature of UAP but change the culture of the discourse on this subject. And I think that such an image would convince even the skeptics, even people that are currently ridiculing it, to join the discussion, the serious discussion about what all of this means.

Lucas Perry: And if there are any private funders or philanthropists listening that are interested in contributing to the project to capture this data, how is it best that they get in contact with you?

Avi Loeb: Well, they can just send me an email to aloeb@cfa.harvard.edu and I would be delighted to add them to the group of funders that are currently showing interest in it.

Lucas Perry: All right, thank you very much Avi.

Avi Loeb: Thank you for having me.

Avi Loeb on ‘Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

  • Whether ‘Oumuamua is alien or natural in origin
  • The culture of science and how it affects fruitful inquiry
  • Looking for signs of alien life throughout the solar system and beyond
  • Alien artefacts and galactic treaties
  • How humanity should handle a potential first contact with extraterrestrials
  • The relationship between what is true and what is good

3:28 What is ‘Oumuamua’s wager?

11:29 The properties of ‘Oumuamua and how they lend credence to the theory of it being artificial in origin

17:23 Theories of ‘Oumuamua being natural in origin

21:42 Why was the smooth acceleration of ‘Oumuamua significant?

23:35 What are comets and asteroids?

28:30 What we know about Oort clouds and how ‘Oumuamua relates to what we expect of Oort clouds

33:40 Could there be exotic objects in Oort clouds that would account for ‘Oumuamua?

38:08 What is your credence that ‘Oumuamua is alien in origin?

44:50 Bayesian reasoning and ‘Oumuamua

46:34 How do UFO reports and sightings affect your perspective of ‘Oumuamua?

54:35 Might alien artefacts be more common than we expect?

58:48 The Drake equation

1:01:50 Where are the most likely great filters?

1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry

1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties

1:31:34 Why don’t we find evidence of alien superstructures?

1:36:36 Looking for the bio and techno signatures of alien life

1:40:27 Do alien civilizations converge on beneficence?

1:43:05 Is there a necessary relationship between what is true and good?

1:47:02 Is morality evidence based knowledge?

1:48:18 Axiomatic based knowledge and testing moral systems

1:54:08 International governance and making contact with alien life

1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk

1:59:57 What are the most fundamental questions?

 

See here for information on the Podcast Producer position

See the Office of the Director of National Intelligence report on unidentified aerial phenomena here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Avi Loeb, and in it, we explore ‘Oumuamua, an interstellar object that passed through our solar system and which is argued by Avi to potentially be alien in origin. We explore how common extraterrestrial life might be, and how to search for it through the space archaeology of the bio- and techno-signatures it might create. We also get into Great Filters and how making first contact with alien life would change human civilization.

This conversation marks the beginning of the continuous uploading of video content for all of our podcast episodes. For every new interview that we release, you will also be able to watch the video version of each episode on our YouTube channel. You can search for Future of Life Institute on YouTube to find our channel or check the link in the description of this podcast to go directly to the video version of this episode. There is also bonus content to this episode which has been released separately on both our audio and visual feeds.

After our initial interview, the U.S. government released a report on UFOs, otherwise now known as UAPs, titled “Preliminary Assessment: Unidentified Aerial Phenomena”. Given the release of this report and the relevance of UFOs to ‘Oumuamua, both in terms of the culture of science surrounding UFOs and their potential relation to alien life, I sat down to interview Avi for a second time to explore his thoughts on the report as well as his assessment of unidentified aerial phenomena. You can find this bonus content wherever you might be listening.

We’re also pleased to announce a new opportunity to join this podcast and help make existential risk outreach content. We are currently looking to hire a podcast producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. You would be working directly with me, and the FLI outreach team, to help produce, grow, and evolve this podcast. If you are interested in applying, head over to the “Careers” tab on the FutureofLife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org. 

Professor Loeb received a PhD in plasma physics at the age of 24 from the Hebrew University of Jerusalem and was subsequently a long-term member at the Institute for Advanced Study in Princeton, where he started to work in theoretical astrophysics. In 1993, he moved to Harvard University, where he was tenured three years later. He is now the Frank B. Baird Jr. Professor of Science at Harvard and is a former chair of the Harvard astronomy department. He also holds a visiting professorship at the Weizmann Institute of Science and a Sackler Senior Professorship by special appointment in the School of Physics and Astronomy at Tel Aviv University. Loeb has authored nearly 700 research articles and four books, the most recent of which is “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth”. This conversation is centrally focused on the contents of this work. And with that, I’m happy to present this interview with Avi Loeb.

To start things off here, I’m curious if you could explain what ‘Oumuamua’s wager is and what does it mean for humanity in our future?

Avi Loeb: ‘Oumuamua was the first interstellar object that was spotted near earth. And by interstellar, I mean an object that came from outside the solar system. We knew that because it moved too fast to be bound to the sun. It’s just like finding an object in your backyard from the street. And this saves you the need to go to the street and find out what’s going on out there. In particular, from my perspective, it allows us to figure out if the street has neighbors, if we are the smartest kid on the block, because this object looked unusual. It didn’t look like any rock that we have seen before in the solar system. It exhibited a very extreme shape because it changed the amount of reflected sunlight by a factor of 10 as it was tumbling every eight hours.

It also didn’t have a cometary tail. There was no gas or dust around it, yet it showed an excess push away from the sun. And the only possible interpretation that came to my mind was a reflection of sunlight. And for that, the object had to be very thin, sort of like a sail, but being pushed by sunlight rather than by the wind. This, you often find on a boat. And the nature doesn’t make sail, so in a scientific paper, we propose that maybe it’s artificial in origin. And since then in September 2020, there was another object found that was pushed away from the sun by reflecting sunlight. And without the cometary tail, it was discovered by the same telescope in Hawaii, Pan-STARRS, and was given the name 2020 SO. And then, the astronomers realized actually it’s a rocket booster that we launched in 1966 in a lunar landing mission. And we know that this object had very thin walls, and that’s why it had a lot of area for its mass and it could be pushed by reflecting sunlight.

And we definitely know that it was artificial in origin, and that’s why it didn’t show a cometary tail: because we produced it. The question is, who produced ‘Oumuamua? And my point is just like that of Blaise Pascal, the philosopher, who argued that we cannot ignore the question of whether God exists. Pascal was a mathematician, and he said, okay, logically there are two possibilities: either God exists or not. And we can’t ignore the possibility that God exists because the implications are huge. And so, my argument is very similar. The possibility that ‘Oumuamua is a technological relic carries such great consequences for humanity that we should not ignore it. Many of my colleagues in academia dismiss that possibility. They say we need extraordinary evidence before we even engage in such a discussion. And my point is that requiring extraordinary evidence is a way of brushing it aside.

It’s a sort of self-fulfilling prophecy: if you’re not funding research that looks for additional evidence, it’s like stepping on the grass and claiming the grass doesn’t grow. For example, to detect gravitational waves required an investment of $1.1 billion by the National Science Foundation. We would never have discovered gravitational waves without investing that amount. To search for dark matter, we have invested hundreds of millions of dollars so far. We haven’t found what the dark matter is. It’s a search in the dark. But without the investment of funds, we will never find it. So on the one hand, the scientific community puts almost no funding towards the search for technological relics, and at the same time argues that all the evidence is not sufficiently extraordinary for us to consider that possibility in the first place. And I think that’s a sign of arrogance. It’s a very presumptuous statement to say, we are unique and special; there is nothing like us in the universe.

I think a much more reasonable, down-to-earth kind of approach is a modest approach. Basically saying, look, the conditions on Earth are reproduced on tens of billions of planets within the Milky Way galaxy alone. We know that from the Kepler satellite: about half of the sun-like stars have a planet the size of the Earth, roughly at the same separation. And that means that not only are we not at the center of the universe, as Aristotle argued, but what we find in our backyard is not privileged either. There are lots of sun-earth systems out there. And if you arrange for similar circumstances, you might as well get similar outcomes. And actually, most of the stars formed billions of years before the sun. So that, to me, indicates that there could have been a lot of technological civilizations like ours that launched equipment into space, just like we launched Voyager 1, Voyager 2, and New Horizons, and we just need to look for it.

Even if these civilizations are dead, we can do space archeology. And what I mean by that is, when I go to the kitchen and I find an ant, I get alarmed, because there must be many more ants out there. So when we found ‘Oumuamua, to me it meant that there must be many more out there, weird objects that do not look like any comet or asteroid that we have seen before within the solar system. And we should search for them. For example, in a couple of years there will be the Vera Rubin Observatory, which will be much more sensitive than the Pan-STARRS telescope and could find one such ‘Oumuamua-like object every month. So when we find one that approaches us and we have an alert of a year or so, we can send a spacecraft equipped with a camera that will take a close-up photograph of that object, and perhaps even land on it, just like OSIRIS-REx landed on the asteroid Bennu recently and collected a sample from it. Because, as they say, a picture is worth a thousand words.

In my case, a picture is worth 66,000 words, the number of words in my book. If we had a photograph, I would not have needed to write the book. It would be obvious whether it’s a rock or an artificial object. And if it is artificial and we land on it, we can read off the label “Made on Planet X” and even import the technology that we find there to Earth. And if it’s a technology representing our future, let’s say a million years into our future, it will save us a lot of time. It will give us a technological leap, and it could be worth a lot of money.

Lucas Perry: So, that’s an excellent overview, I think, of a really good chunk of the conversation. So there’s this first part: an interstellar object called ‘Oumuamua entering the solar system in 2017. And then there are lots of parameters and properties of this object which are not easily or readily explainable as an asteroid or as a comet. Some of these things that we’ll discuss are, for example, its rotation, its brightness variation, its size, its shape, and how it was accelerating on its way out. And then the noticing of this object is happening in a scientific context which has some sense of arrogance about it, of not being fully open to exploring hypotheses that seem a bit too weird or too far out there. People are much more comfortable trying to explain it as some kind of loose aggregate, a cosmic dust bunny, or other things which don’t really fit or match the evidence.

And so then you argue that if we look into this with epistemic humility and follow the evidence, it takes us to having a reasonable amount of credence that this is actually artificial in origin rather than something natural. And that brings up questions of other kinds of life, and the Drake equation, and what it is that we might find in the universe, and how to conduct space archeology. So to start off, I’m curious if you could explain a bit more about these particular properties that ‘Oumuamua had, and why a natural origin isn’t convincing to you?

Avi Loeb: Right. I basically follow the evidence. I didn’t have any agenda. And in fact, I worked on the early universe and black holes throughout most of my career, and then came along this object that was quite unusual. A decade earlier, I had predicted how many rocks from other stars we should expect to find, and that was the first paper predicting that. And we predicted that the Pan-STARRS telescope that discovered ‘Oumuamua would not find anything. So the mere detection of ‘Oumuamua was a surprise, by orders of magnitude, I should say. And it is still a surprise given what we know about the solar system and the number of rocks that the solar system produces. But nevertheless, that was the first unusual fact, and it still allowed for ‘Oumuamua to be a rock. And then, it didn’t show any cometary tail. And the Spitzer Space Telescope then put very tight limits on any carbon-based molecules in its vicinity or any dust particles.

And it was definitely clear that it’s not a comet, because if you wanted to explain the excess push that it exhibited away from the sun through cometary evaporation, you needed about 10% of the mass of this object to be evaporated. And that’s a lot of mass; we would have seen it. The object’s size is on the order of the size of a football field, 100 to 200 meters, and we would see such evaporation easily. So, that implied that it’s not a comet. And then, if it’s not the rocket effect that is pushing it through evaporation, the question arose as to what actually triggers that push. And the suggestion that we made in the paper is that it’s the reflection of sunlight. And for that to be effective, you needed the object to be very thin. The other aspect of the object that was unusual is that, as it was tumbling every eight hours, the amount of sunlight reflected from it changed by a factor of 10.

And that implied that the object has an extreme shape, most likely pancake-shaped, flat, and not cigar-shaped. The depiction of the object as a cigar was based on the fact that, projected on the sky as it was tumbling, the area that it showed us changed by a factor of 10. But of course, if you look at a piece of paper tumbling in the wind, it does look like a cigar when viewed sideways, yet intrinsically it’s flat. When trying to model the amount of light reflected from it as it was tumbling, the conclusion at the 90% confidence level was that it should be pancake-shaped, flat, which again is unusual. You don’t get such objects very often in the context of rocks; the most extreme we have seen before was of the order of a factor of three in length versus width. And then came the fact that it originated from a special frame of reference called the local standard of rest, which is sort of like the local parking lot of the Milky Way galaxy.

If you think about it, the stars are moving relative to each other in the vicinity of the sun, just like cars moving relative to each other in the center of a town. And then there is a parking lot that you get to when you average over the motions of all of the stars in the vicinity of the sun, and that is called the local standard of rest. And ‘Oumuamua originated at rest in that frame. That’s very unusual, because only one in 500 stars is as much at rest in that frame as ‘Oumuamua was. So first, it tells you it didn’t originate from any of the nearby stars. It is also not likely to have come from any of the faraway stars, because those are moving even faster relative to us, owing to the rotation around the center of the Milky Way galaxy.

So it was not a natural result; there is a very small likelihood of having an object that is so rare. It was sort of like a buoy sitting at rest on the surface of the ocean, and the sun bumped into it like a giant ship. And the question is, if it’s artificial in origin, why would it originate from that frame? One possibility is that it’s a member of a set of objects on a grid that serves navigation purposes. If you want to know your coordinates as you’re navigating interstellar space, you find your location relative to this grid. And obviously you want those objects to be stationary, to be at rest relative to the local frame of the galaxy. Another possibility is that it’s a member of a set of relay stations for communication. To save on the power needed for transmission of signals, you may have relay stations, like we have on Earth, and it’s one of them.

We don’t know the purpose of this object because we don’t have enough data on it. That’s why we need to find more of the same. But my basic point is that there were six anomalies of this object that I detail in my book, Extraterrestrial, and that I also wrote about in Scientific American. And these six anomalies make it very unusual. If you assign a probability of 1% to the object having each of these anomalies, when you multiply them, you get a probability of one in a trillion that this object is something that we have seen before. So clearly, it’s very different from what we’ve seen before. And the response from the scientific community was to dismiss the artificial origin. There were some scientists who took the scientific process more seriously and tried to explain the origin of ‘Oumuamua from a natural source. And they suggested four possibilities after my paper came out.
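To make the arithmetic behind that one-in-a-trillion figure concrete, here is a minimal sketch in Python. The 1%-per-anomaly probability is the illustrative assumption stated in the conversation, not a measured value, and treating the six anomalies as independent is likewise an assumption.

```python
# Back-of-envelope check of the "one in a trillion" figure.
# Assumption (from the conversation): each of the six anomalies has a
# probability of about 1% for a previously-seen kind of object, and the
# anomalies are treated as independent.
p_single_anomaly = 0.01
num_anomalies = 6

p_all = p_single_anomaly ** num_anomalies  # joint probability under independence
print(f"Joint probability: {p_all:.0e}")   # 1e-12, i.e. one in a trillion
```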

And one of them was, maybe it’s a hydrogen iceberg, a chunk of frozen hydrogen, which by the way we’ve never seen before. The idea is that when hydrogen evaporates, you don’t see the cometary tail, because it’s transparent. The problem with that idea is that hydrogen evaporates very easily. So we showed in a follow-up paper that such a chunk of frozen hydrogen the size of a football field would not survive the journey through interstellar space from its birth site to the solar system. And then there was another suggestion: maybe it’s a nitrogen iceberg that was chipped off the surface of a planet like Pluto. And we showed in a follow-up paper that in fact you need orders of magnitude more mass in heavy elements than you find in all the stars in the Milky Way galaxy just to have a large enough population of nitrogen icy objects in space to explain the discovery of ‘Oumuamua.

And the reason is that there is only a very thin layer of solid nitrogen on the surface of Pluto, and that makes up a small fraction of the mass budget of the solar system. So you just cannot imagine making enough chunks, even if you rip off all the nitrogen on the surface of exo-Plutos; this scenario just doesn’t work out. And then there was a suggestion, maybe it’s a dust bunny, as you mentioned, a cloud of dust particles very loosely bound. And it needs to be a hundred times less dense than air, so that when reflecting sunlight it will be pushed like a feather. The problem with that idea is that such a cloud would get heated by hundreds of degrees when it gets close to the sun, and it would not maintain its integrity. So, that also has a problem.

And the final suggestion was, maybe it’s a fragment, a piece of shrapnel from a bigger object that passed close to a star. The problem with that is that the chance of passing close to a star is very small; most objects do not. So why should the first interstellar object we see belong to that category? And the second problem is that when you tidally disrupt a big object as it passes near a star, the fragments usually get elongated, not pancake-shaped; you often get a cigar-shaped object. So, all of these suggestions have major flaws. And my argument was simple: if it’s nothing like we have seen before, we had better leave on the table the possibility that it’s artificial, and then take a photograph of future objects that appear as weird as this one.

Lucas Perry: So you mentioned the local standard of rest, which is the average velocity of our local group of stars. Is that right?

Avi Loeb: Yes. Well, it’s the frame that you get to after you average over the motions of all the stars relative to the sun, yes.

Lucas Perry: Okay. And so ‘Oumuamua was at the local standard of rest until the sun’s gravitation pulled it in, is that right?

Avi Loeb: Well, no. The way to think of it is that it was sitting at rest in that frame, just like a buoy on the surface of the ocean, and then the sun happened to bump into it; the sun simply intercepted it along its path. And as a result, it gave it a kick, just like a ship gives a kick to a buoy. The sun acted on it primarily through its gravitational force. And then, in addition, there was this excess push, which was a small fraction of the gravitational force, just a fraction of a percent.

Lucas Perry: Right. And that’s the sun pushing on it through its suspected large surface area and structure.

Avi Loeb: Yeah. So in addition to gravity, there was an extra force acting on it, which was a small correction to the force of gravity, a fraction of a percent of it. But it was still detected at very high significance, because we monitored the motion of ‘Oumuamua. And to explain this force, given that there was no cometary evaporation, you needed a thin object. As I said, there was another thin object discovered in September 2020, called 2020 SO, that also exhibited an excess push by reflecting sunlight. So, it doesn’t necessarily mean that ‘Oumuamua was a light sail. It just means that it had a large area for its mass.

Lucas Perry: Can you explain why the smooth acceleration of ‘Oumuamua is significant?

Avi Loeb: Yeah. So what we detected is an excess acceleration away from the sun that declines inversely with distance squared, in a smooth fashion. First of all, the inverse-square law is indicative of a force that acts on the surface of the object, and the reflection of sunlight gives you exactly that. And the fact that it’s smooth cannot be easily mimicked by cometary evaporation, because often you have jets: spots on the surface of a comet from which the evaporation takes off. As the object tumbles, the localized nature of these jets introduces a jitter to its motion. You can think of the jets as being like the jets on a plane that push the airplane forward by ejecting gas backwards. But in the case of a comet, the comet is also tumbling and spinning.

And so, that introduces some jitter, because the jets are exposed to sunlight at different phases of the spin of the object. And moreover, beyond a certain distance, water does not sublimate, does not evaporate anymore. You have water ice on the surface, and beyond a certain distance it doesn’t get heated enough to evaporate. So the push that you get from cometary evaporation has a sharp cutoff beyond a certain distance, and that was not observed. In the case of ‘Oumuamua, there was a smooth push that didn’t really cut off, didn’t show an abrupt change at the distance where water ice would stop evaporating. And so, that again is consistent with the reflection of sunlight being the origin of the excess push.
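As a rough illustration of why sunlight reflection naturally produces an inverse-square push, here is a minimal sketch assuming an idealized flat, fully absorbing sheet facing the sun; the area-to-mass ratio used is purely illustrative and is not a measured value for ‘Oumuamua.

```python
import math

# Physical constants (SI units)
L_SUN = 3.828e26      # solar luminosity, W
C = 2.998e8           # speed of light, m/s
AU = 1.496e11         # astronomical unit, m

# Illustrative assumption: a thin sheet with area-to-mass ratio of 1 m^2/kg
AREA_PER_MASS = 1.0   # m^2 per kg (hypothetical, for illustration only)

def radiation_acceleration(r_au: float) -> float:
    """Acceleration from absorbed sunlight on a flat sheet at distance r_au (in AU)."""
    flux = L_SUN / (4 * math.pi * (r_au * AU) ** 2)  # W/m^2, falls off as 1/r^2
    pressure = flux / C                               # N/m^2 for full absorption
    return pressure * AREA_PER_MASS                   # m/s^2

for r in (1.0, 2.0, 4.0):
    print(f"r = {r} AU -> a = {radiation_acceleration(r):.2e} m/s^2")
# Doubling the distance reduces the push by a factor of four: the smooth
# inverse-square behavior described above, with no cutoff at any distance.
```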

Lucas Perry: Can you explain the difference between comets and asteroids?

Avi Loeb: Yeah. So, we’re talking about the bricks that were left over from the construction project of the solar system. The way the planets form is that first you make a star like the sun. You make it from a cloud of gas that condenses and collapses under the influence of its own gravity; its own gravitational force contracts it and makes a star in the middle. But some of the gas has rotation around the center. So when you make a star like the sun, a small fraction of the gas, of the order of a few percent or so, remains in a leftover disk around the star that was just formed. And that debris of gas in the disk is the birthplace of the planets. The disk of gas that is left over from the formation process of the sun of course includes hydrogen and helium, the main elements from which the sun is made, but it also includes heavy elements.

And those condense in the mid-plane of the disk and make dust particles that stick to each other and get bigger and bigger over time. They make the so-called planetesimals. These are the building blocks, the bricks that come together to make planets like the Earth, or the core of Jupiter, which also accreted hydrogen and helium around the central rocky region. So, the idea is that you have all these bricks that, just like Lego pieces, make up the planets. And some of them get scattered during the formation process of the planets, and they remain as rocks in the outer solar system. So, the solar system actually extends a thousand times farther than the location of the most distant planet, in a region called the Oort cloud that extends to 100,000 times the Earth-sun separation. And that is a huge volume. It goes halfway to the nearest star.

So in fact, if you imagine each star having an Oort cloud of these bricks, these building blocks that were scattered out of the construction process of the planets around the star, then these Oort clouds are touching each other, just like densely packed billiard balls. So just imagine a spherical region of planetesimals, these rocks. Comets are those rocks that are covered with water ice. Since they’re so far away from the sun, the water freezes on their surface. But some of them have orbits that bring them very close to the sun. So when they get close to the sun, the water ice evaporates and creates a cloud of gas, water vapor, and some dust that was embedded in the rock, and that creates the appearance of a cometary tail. What you see is that the object is moving, its surface layers get heated up by absorbing sunlight, and the gas and dust evaporate and create a halo around the object and a tail, which always points away from the sun because it’s pushed by the solar wind, the wind coming from the sun.

And so you end up with a cometary tail; that’s what a comet is. Now, some rocks remain closer to the sun and are not covered with ice whatsoever. They’re just bare rocks. And when they get close to the sun, there is no ice that evaporates from them. These are called asteroids, and they’re just rock without any ice on the surface. And so, we see those as well. There is actually a region where asteroids reside, called the main asteroid belt, and we don’t know what its origin is. It could be a planet that disintegrated, or it could be a region that didn’t quite make a planet and you ended up with fragments floating there. But at any rate, there are asteroids, bare rocks without ice on them, because they were close enough to the sun that the ice evaporated and we don’t have the water there.

And these objects are also seen in the vicinity of the Earth every now and then; again, these are called asteroids. So we see basically two populations. Now, ‘Oumuamua was not a comet, because we haven’t seen a cometary tail around it. And it wasn’t an asteroid, because there was this excess push. If you have a piece of rock, it will not be pushed much by reflecting sunlight, because its area is not big enough relative to its mass. It gets a push, but one too small to show up in its trajectory.

Lucas Perry: Right. So, can you also explain how much we know about the composition of Oort clouds and specifically the shape and size of the kinds of objects there? And how ‘Oumuamua relates to our expectation of what exists in the Oort cloud of different stars?

Avi Loeb: Yeah. So, the one thing that I should point out upfront is that when scientists tried to address the anomalies of ‘Oumuamua, they suggested that it’s a hydrogen iceberg or a nitrogen iceberg. And by the way, that notion gathered popularity in the mainstream. People breathed a sigh of relief and said, oh, we can explain this object with something we know. But the truth is, it’s not something we know. We’ve never seen a nitrogen iceberg that was chipped off Pluto in our solar system. The Oort cloud does not have nitrogen icebergs that we have witnessed. So claiming that ‘Oumuamua, the first interstellar object, is a nitrogen iceberg or a hydrogen iceberg implies that there are nurseries out there, around other stars or in molecular clouds, that are completely different from the solar system, in the sense that they produce most of the interstellar objects, because ‘Oumuamua was the first one we discovered.

So they produce a large fraction of the interstellar objects, yet they are completely different from the solar system. It’s just like going to the hospital and seeing a baby that looks completely different from any child you have seen before; it looks different from any child you have at home. It implies that the birthplace of that child was quite different, and yet that child is the first one you see. So, that’s to me an important signal from nature that you have to rethink what the meaning of this discovery is. And the other message is that we will learn something new no matter what, so we need to get more data on the next object that belongs to this family. Because even if it’s a naturally produced object, it will teach us about environments that produce objects that are quite different from the ones we find in the solar system.

And that means that we are missing something about nature. So even if it’s natural in origin, we learn something really new in the process of gathering this data. We should not dismiss this object and say, business as usual, we don’t have to worry about it; rather, we should attempt to collect as much data as possible on the next weird object that comes along. I should say there was a second interstellar object, discovered in 2019 by an amateur astronomer from Russia called Gennady Borisov, and it was given the name Borisov. That one looked just like a comet. And I was asked, does that convince you that ‘Oumuamua was also natural, because this one looks exactly like the comets we have seen? And I replied: when you walk along the beach, most of the time you see rocks, and suddenly you see a plastic bottle. And after that, you see rocks again. The fact that you found rocks afterwards doesn’t make the plastic bottle a rock.

Each object has to be considered on its own merit. And therefore, the fact that we see Borisov as a natural comet makes ‘Oumuamua even more unusual. Now, in terms of the objects that come from the Oort cloud, our own Oort cloud, there is a size distribution: there are objects that are much smaller than ‘Oumuamua and objects that are much bigger. And of course, the bigger objects are rarer. Roughly speaking, there is an equal amount of mass per logarithmic size bin, so there are many more small objects. Most of them we can’t see, because ‘Oumuamua was roughly at the limit of our sensitivity with Pan-STARRS. And that means that objects much smaller than the size of a football field cannot be noticed within a distance comparable to the distance to the sun. The sun acts as a lamppost that illuminates the darkness around us.

And so, an object is detected when it reflects enough sunlight for us to detect with our telescopes. Small objects do not reflect enough sunlight, and we will not notice them. But I calculated that, in fact, if there are probes moving very fast through the solar system, let’s say at a fraction of the speed of light, that were sent by some alien civilization, we could detect the infrared emission from them with the James Webb Space Telescope. They would move very fast across our sky, so we just need to be ready to detect them.

Lucas Perry: Do you think given our limited knowledge of Oort clouds that there are perhaps exotic objects or rare objects, which we haven’t encountered yet, but that are natural in origin that may account for ‘Oumuamua?

Avi Loeb: Of course, there could be. As I mentioned, there were people who suggested the hydrogen iceberg, the nitrogen iceberg, the dust bunny. These were suggestions that were already made, and each of them has its own challenges. And it could be something else, of course. And the way to find out is the way science operates: science is guided by evidence, by collecting data. The way science should be done is you leave all possibilities on the table, and then you collect enough data to rule out all but one interpretation that looks most plausible. And so, my argument is that we should leave the artificial-origin possibility on the table, because all the other possibilities that were contemplated invoke something that we’ve never seen before. We cannot argue, based on speculations about things we’ve never seen before, that the point is proven that it’s not artificial. So, it’s a very simple point that I’m making, and I’m arguing for collecting more data. I mean, I would be happy to be proven wrong, to find that it’s not artificial in origin, and then move on. The point is that science is not done by having a prejudice, by knowing the answer in advance. It’s done by collecting data. The mistake that was made by the philosophers during Galileo’s time was not looking through his telescope while arguing that they knew the sun moves around the Earth. And that only maintained their ignorance.

Reality doesn’t care whether we ignore it. The Earth continued to move around the sun. If we have neighbors that exist out there, it doesn’t really matter whether we shut the curtains on our windows and claim, “No, we’re unique and special, and there is nobody out there on the street.” We can get a lot of likes on Twitter for saying that, and we can ridicule anyone who argues differently, but that would not change the fact of whether we have neighbors or not. That’s an empirical fact. And, in order for us to improve our knowledge of reality, and I’m talking about reality, not about philosophical arguments, just figuring out whether we have neighbors, whether we are the smartest kid on the block, that’s within the realm of science, and finding out the answer to this question is not a matter of debate.

It’s a matter of collecting evidence. But of course, if you are not willing to find wonderful things, you will never discover them. So, my point is, we should consider this possibility as real, as very plausible, as a mainstream activity, just like the search for dark matter or the search for gravitational waves. We exist. There are many planets out there just like the Earth. Therefore, we should search for things like us that existed or exist on them. That’s a very simple assumption to make, an argument to make, and to me, it sounds like this should be a mainstream activity. But then, I realize that my colleagues do not agree, and I fail to understand this dismissal, because it’s a subject of great interest to the public, and the public funds science. So, if you go back a thousand years, there were people saying the human body has a soul, and therefore anatomy should be forbidden.

So imagine if scientists had said, “Oh, this is a controversial subject. The human body could have a soul. We don’t want to deal with that, because some people are claiming that we should not operate on the human body.” Where would modern medicine be? My argument is, if science has the tools to address a subject of great interest to the public, we have an obligation to address it and clear it up. Let’s do it bravely, with open eyes. And by the way, there is an added bonus: if the public cares about it, there will be funding for it. So, how is it possible that the scientific community ridicules this subject, brushes it aside, claims, “We don’t want to entertain this unless we have extraordinary evidence,” yet fails to fund the search for that extraordinary evidence at any substantial level? How is that possible in the 21st century?

Lucas Perry: So, given the evidence and data that we do have, what is your credence that ‘Oumuamua is alien in origin?

Avi Loeb: Well, I have no certainty in that possibility, but I say it’s a possibility that should be left on the table, with at least as high a likelihood as a nitrogen iceberg or a hydrogen iceberg or a dust bunny. That’s what I consider to be the competing interpretations. I don’t consider statements like, “It’s always rocks. It’s never aliens,” as valid scientific statements, because they remind me of the following: if you were to present a cell phone to a caveman, and the caveman is used to playing with rocks all of his life, the caveman would argue that the cell phone is just a shiny rock. Basing your assertions on past experience is no different from what the philosophers were arguing: we don’t want to look through Galileo’s telescope because we know that the sun moves around the Earth. So, this mistake was made over and over again throughout human history. I would expect modern scientists to be more open-minded, to think outside the box, to entertain possibilities that are straightforward.

And what I find strange is not so much that there is conservatism regarding this subject. But at the same time, in theoretical particle physics, you have whole communities of hundreds of people entertaining ideas that have no experimental verification, no experimental tests in the foreseeable future whatsoever, ideas like the string theory landscape or the multiverse. Or some people argue we live in a simulation, or other people talk about supersymmetry. And awards were given to people doing mathematical gymnastics, and these studies are part of the mainstream. And I ask myself, “How is it possible that this is considered part of the mainstream and the search for technological signatures is not?” And my answer is that these ideas provide a sandbox for people to demonstrate that they’re smart, that they are clever, and a lot of the culture of academia is about that. It’s not about understanding nature. It’s more about showing that you’re smart and getting honors and awards. And that’s unfortunate, because physics and science is a dialogue with nature. It’s a learning experience. We’re supposed to listen to nature. And the best way to listen to nature is to look at anomalies, things that do not quite line up with what we expected. And by the way, whether ‘Oumuamua is artificial or not doesn’t require very fancy math. It’s a very simple fact that any person can understand. I mean, nature is under no obligation to make its most exciting secrets require fancy math. It doesn’t need to be sophisticated.

Aristotle had this idea of the spheres surrounding us, that we are at the center of the universe, and there are these beautiful spheres around us. That was a very sophisticated idea that many people liked, because it flattered their ego to be at the center of the universe, and it also had this very clever arrangement. But it was wrong. So, who cares how sophisticated an idea is? Who cares if the math is extremely complicated? Of course, it demonstrates that you are smart if you’re able to maneuver through these complicated mathematical gymnastics, but that doesn’t mean that it reflects reality. And my point is, we had better pay attention to the anomalies that nature gives us than to promoting our image.

Lucas Perry: Right. So it seems like there’s this interesting difference between the extent to which the scientific community is willing to entertain ‘Oumuamua as being artificial in origin, whereas at the same time there are a ton of theories that, at least at the moment, are unfalsifiable. Yet here we have a theory that is simple, matches the data, and can be falsified.

Avi Loeb: Right. And the way to falsify it, I mean, is not by chasing ‘Oumuamua, because by now it’s a million times fainter than it was close to the sun. It’s by finding more objects that look as weird as it was. This was the first object we identified; there must be many more. If we found this object by surveying the sky for a few years, we will definitely find more by surveying the sky for a few more years, because of the Copernican principle. Copernicus discovered that we are not positioned in a special location, a privileged location, in the universe. We’re not at the center of the universe, and you can extend that not just to space, but also to time. When you make an observation over a few years’ time, the chance of those few years being special and privileged is small.

I mean, most likely, it’s a typical time, and you would find such an object if you were to look at the previous three years, or the following three years. That’s the Copernican principle, and I very much subscribe to it, because again, the one thing I learned from practicing astronomy over the decades is a sense of modesty. We are not special. We are not unique. We are not located at the center of the universe. We don’t have anything special in our backyard. The Earth-sun system is very common. So, that’s the message that nature gives us. And, we are born into the world like actors put on a stage. And the first thing we see is that the stage is huge; it’s 10 to the power 26 times larger than our body. And the second thing we see is that the play has been going on for 13.8 billion years since the big bang, and we just arrived at the end of it.

So, the play is not about us. We are not the main actors. So let’s get a sense of modesty, and let’s look for other actors that may have been around as technological civilizations for longer than we have. Maybe they have a better sense of what the play is about. So, I think it all starts from a sense of modesty. My daughters, when they were young, were at home and had the impression that they were the center of the world, that they were the smartest, because they hadn’t met anyone else outside the family. And then, when we took them to kindergarten, they got a better sense of reality by meeting others and realizing that they’re not necessarily the smartest kid on the block. And so, I think our civilization has yet to mature, and the best way to do that is by meeting others.

Lucas Perry: So before we move on to meeting others, I’m curious if you’re willing to offer a specific credence. So, you said that there are these other natural theories, like the dust bunny and the iceberg theories. If we think of this in terms of Bayesian reasoning, what kind of probability would you assign to the alien hypothesis?

Avi Loeb: Well, the point is that the objects that were postulated as a natural origin for ‘Oumuamua were never seen before, so there is no way of assigning a likelihood to something that we’ve never seen before. And yet it would need to be the most common object in interstellar space. So, what I would say is that we should approach it without a Bayesian prior. Basically, we should leave all of these possibilities on the table, and then get as much data as possible on the next object that shows the same qualities as ‘Oumuamua. By these qualities, I mean not having a cometary tail, so not being a comet, and showing an excess push away from the sun.

And as I mentioned, there was such an object, 2020 SO, but it was produced by us. So, we should just look for more objects that come from interstellar space that exhibit these properties, and see what the data tells us. It’s not a matter of a philosophical debate. That’s my point. We just need a close up photograph, and we can easily tell the difference between a rock and an artificial object. And I would argue that anyone on Earth should be convinced when we have such a photograph. So, if we can get such a photograph in the next few years, I would be delighted, even if I’m proven wrong, because we will learn something new no matter what.

Lucas Perry: So, there’s also been a lot of energy in the news around UFO sightings and UFO reports recently. I’m curious how the current news and status of UFO interest in the United States and the world affects your credence of ‘Oumuamua being alien in origin, and whether you have any perspective or thoughts on UFOs.

Avi Loeb: Yeah, it’s a completely independent set of facts that underlies the discussion on UFOs. But of course, again, it’s the facts, the evidence, that we need to pay attention to. I always say, “Let’s keep our eyes on the ball, not on the audience.” Because if you look at the audience, the scientists are responding to these UFO reports in exactly the same way as they responded to ‘Oumuamua. They dismiss it. They ridicule it. And that’s unfortunate, because the scientists should ask, “Do we have access to the data? Could we analyze the data? Could we see the full data? Or could we collect new data on these objects, so that we can clear up the mystery?” I mean, science is about evidence. It’s not about prejudice. But instead, the scientists know the answer in advance. They say, “Oh, these reports are just related to human-made objects, and that’s it.”

Now, let’s follow the logic of Sherlock Holmes. Basically, as I mentioned in my book Extraterrestrial, Sherlock Holmes made the statement that you put all possibilities on the table, and then whatever remains after you sort out all the facts must be the truth. That’s the way he operated as a detective, and that’s the way we should operate as scientists. And what do we know about the latest UFO report from the Pentagon and intelligence agencies? So far, a few weeks before it’s being released, we know from leaks that there is a statement that some of the objects that were found are real. Okay? They are not artifacts of the cameras. They are not illusions of the people who saw them, because they were detected by multiple instruments, including infrared cameras, radar systems, and optical cameras, and by a lot of people from different angles.

And, when you consider that statement coming from the Pentagon, you have to take it seriously, because it’s just the tip of the iceberg. The data that will be released to the public, presumably, is partial, because they will never release the high-quality data, because it would inform other nations of the capabilities, the kinds of sensors, that the US has for monitoring the sky. So, I have no doubt that a lot of data is being hidden for national security reasons, because otherwise it would expose the capabilities of these sensors that are routinely used to monitor the sky. But if people who had access to the full data, and that includes officials such as former President Barack Obama, former CIA director James Woolsey, and others who saw the data, make the case that these objects are real, then these objects may very well be real.

Okay? And I take that at face value. Of course, as a scientist, I would like to see the full data, or collect new data; there is no difference, because science is about reproducibility of results. So, if the data is classified, I would much rather take state-of-the-art cameras that you can buy in the commercial sector, or scientific instrumentation that we can purchase, and just place those in the same locations and record the sky. The sky is not classified. In principle, anyone can collect data about the sky. So, I would argue that, if all the data is classified, we should collect new data that would be open to the public. And it’s not a huge investment of funds to set up such an experiment. But the point of the matter is that we can infer whether the objects are real using the scientific method. For now, let’s assume that they are real, as the people who saw the full data claim.

So, if they’re real, then there are three possibilities. Either they were produced, manufactured, by other nations, because we, the US, certainly know what we are doing ourselves. So, if they were produced by other nations, like China or Russia, then humans have the ability to produce such objects, and they cannot exceed the limits of our technology. And if the maneuvering of these objects looks as if it substantially exceeds the limits of the technologies we possess, then we would argue it’s not made by humans, because there is no way that the secret of such an advanced technology would be preserved on Earth by humans. It has huge commercial benefits, so it would appear in the market, in the commercial sector, because you could sell it for a lot of money, or it would appear in the battlefield if it were being used by other nations.

And we pretty much know what humans are capable of producing. We are also probably getting intelligence on other nations. So, we know what the limits of human technology are. I don’t think we can leave that possibility vague. If there is an object behaving in a way that far exceeds what we are able to produce, then that looks quite intriguing. The remaining possibilities are that somehow it’s a phenomenon that occurs in the Earth’s atmosphere, something that happens that we didn’t identify before, or that these are objects of extraterrestrial origin. And, once again, I make the case that the way to make progress on this is not to appear on Twitter, claim we know the answer in advance, and ridicule the other side of the argument. This is not the way we make progress. Rather, we should collect better evidence, better clues, figure it out, and clear up the fog.

It’s not a mystery that should be unraveled by philosophical arguments. It’s something that you can measure, get data on, and reproduce with future experiments. And once we get that, we will have a clear view of what it means. That’s how mysteries get resolved in science. So, I would argue for a scientific experiment that will clear up the fog. And the way we would fail to do that is if the scientific community ridicules these reports and the public speculates about the possible interpretations. That’s the worst situation you can be in, because you’re basically leaving a subject of great interest to the public unresolved.

And, that’s not the right way, in the 21st century, to treat a subject of interest to the public, one that obviously reaches Congress. It’s not an eyewitness on the street saying, “I saw something unusual.” It’s military personnel. We have to take it seriously, and we have to get to the bottom of it. So that’s the way I look at it. It may well turn out that it’s not extraterrestrial in origin, but I think the key is finding evidence.

Lucas Perry: So, given the age of the universe and the age of our galaxy and the age of our solar system, would you be surprised if there were alien artifacts almost everywhere or in many places, but we were just really bad at finding them? Or those artifacts were really good at hiding?

Avi Loeb: No, I wouldn’t be surprised, because as I said, most of the stars formed billions of years before the sun. And, if there were technological civilizations around them, many of these stars have died by now and these civilizations may have perished. But if they sent equipment, that equipment may still operate, especially if it’s being operated by artificial intelligence or by things that we haven’t invented yet. It may well survive billions of years and get to our environment. Now, one thing you have to realize is, when you go into the wilderness, you had better be quiet. You had better not make a sound, and listen, because there may be predators out there. Now, we have not been careful in that sense, because we have been broadcasting radio waves for more than a century. So, these radio signals have reached a hundred light years by now.

And, if there is another advanced civilization out there with radio telescopes of the type that we possess, they may already know about us. And then, if they used chemical rockets to get back to us, it would take them a million years to traverse a hundred light years. But if they used much faster propulsion, they may already be here. And the question is, are we noticing them? There was this Fermi paradox, formulated 70 years ago by Enrico Fermi, a famous physicist, who asked, “Where is everybody?” And of course, that’s a presumptuous statement, because it assumes that we are sufficiently interesting for them to come and visit us. When I met my wife, she had a lot of friends who were waiting for Prince Charming on a white horse to make them a marriage proposal, and that never happened, and then they compromised.
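As a quick check of the travel-time arithmetic mentioned above, here is a minimal sketch; the chemical-rocket speed of roughly 30 km/s is an illustrative assumption of the order of the speeds our own probes achieve, not a figure taken from the conversation.

```python
# Rough travel time to cover 100 light years at chemical-rocket speeds.
LIGHT_YEAR_M = 9.46e15        # meters in one light year
SECONDS_PER_YEAR = 3.15e7

distance_m = 100 * LIGHT_YEAR_M
speed_m_s = 30_000            # ~30 km/s, illustrative chemical-rocket speed

travel_years = distance_m / speed_m_s / SECONDS_PER_YEAR
print(f"Travel time: {travel_years:.1e} years")  # ~1e6 years, i.e. about a million
```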

We, as a civilization, would be presumptuous in assuming that we are sufficiently interesting for others to have a party in our backyard. But nevertheless, it could be that it already happened, as you said, and that we didn’t notice. One thing to keep in mind is the Earth’s geological activity. Most of the surface of the Earth gets mixed with the interior of the Earth over hundred-million-year timescales. So, it could be that some of the evidence was buried by the geological activity on Earth, and that’s why we don’t see it.

But the moon, for example, is like a museum, because it doesn’t have geological activity, and also, it doesn’t have an atmosphere that would burn up an object that is smaller than the size of a person, like the Earth’s atmosphere does, say, for meteors. So in principle, once we establish a sustainable base on the moon, we can regard it as an archeological site, and survey the surface of the moon to look for artifacts that may have landed, may have crashed on it. Maybe we will find a piece of equipment that we never sent, that came from somewhere else that crashed on the surface of the moon.

Lucas Perry: So, it’d be wonderful if we could pivot into Great Filters and space archeology here, but before we do that, you were talking about the Fermi paradox and whether or not we’re sufficiently interesting to merit the attention of other alien civilizations. I wonder if interesting is really the right criterion, because if advanced civilizations converge on some form of ethics or beneficence, then whether or not we’re interesting is perhaps not the right criterion for whether or not they would reach out. We have people on Earth who are interested in animal ethics, in how the ants and bees and other animals are doing. So, it could be the same case with aliens, right?

Avi Loeb: Right. I completely agree. Actually, I should say two things. First, you mentioned the Drake equation before. It doesn’t apply to relics, to objects; the Drake equation talks about the likelihood of detecting radio signals. And that has been the method we have used over the past 70 years in searching for other civilizations. I think it’s misguided, because in order to get a signal, it’s just like trying to have a phone conversation: you need the counterpart to be alive. And it’s quite possible that most of the civilizations are dead by now. So, that’s the Great Filter idea, that there is a narrow window of opportunity for us to communicate with them. But, on the other hand, they may have sent equipment into space, and we can search for it through space archeology, and find relics from civilizations that are not around anymore, just like we find relics from cultures that existed on the surface of the Earth through archeological digs.

So I think a much more promising approach to finding evidence for dead civilizations is looking for objects floating in space. And the calculation of the likelihood of finding them is completely different from the Drake equation. It resembles more the calculation of the chance that you would stumble across a plastic bottle on the beach or on the surface of the ocean. You just need to know how many plastic bottles there are per unit area on the surface of the ocean, and then you will know the likelihood of crossing one of them. And the same is true for relics in space: you just need to know the number of such objects per unit volume, and then you can figure out your chance of bumping into one of them.
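To make that plastic-bottle estimate concrete, here is a minimal sketch of the kind of encounter-rate calculation being described. All the numbers in it are purely illustrative placeholders, not values derived from the conversation or from any survey.

```python
import math

# Illustrative assumptions (placeholders, not measured values):
n_per_au3 = 0.1                 # number density of relics, objects per cubic AU
detection_radius_au = 1.0       # distance out to which a survey can spot them, AU
typical_speed_au_per_yr = 5.0   # typical speed of interstellar objects, AU/year

# Objects stream through the detection zone; the rate is roughly
# (number density) x (cross-sectional area of the zone) x (speed).
cross_section = math.pi * detection_radius_au ** 2          # AU^2
detections_per_year = n_per_au3 * cross_section * typical_speed_au_per_yr

print(f"Expected detections per year: {detections_per_year:.1f}")
# With these placeholder numbers, roughly one or two per year; the point is
# only that the estimate depends on number density per unit volume, not on
# whether anyone is still transmitting, unlike the Drake equation.
```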

And that’s a completely different calculation from the Drake equation, which talks about receiving radio signals. That is one point that should be borne in mind. The other point that I would like to mention is that during our childhood, we always have a sense of adults looking over our shoulders, making sure that everything goes well, and often protecting us. And then, as we become independent and grow up, we encounter reality on our own. There is this longing for a higher power that looks over our shoulder. That is provided by the idea of God in religion. But interestingly enough, it’s also related to the idea of some unidentified flying objects that are looking over our shoulders, because if a UFO were identified to be of extraterrestrial origin, it might imply that there is an adult in the room, wiser than we are, looking over our shoulder. The question of whether that adult is trying to protect us remains open, but we can be optimistic.

Lucas Perry: All right. So, let’s talk a little bit about whether or not there might be adults in the room. So, you described what the Great Filter is. When I think of Great Filters, I think of there being potentially many of them, rather than a single Great Filter. So, there’s the birth of the universe, and then you need generations of stars to fuse heavier elements. And then there’s the number of planets in Goldilocks zones. And then there’s abiogenesis, the arising of life on Earth. And then there’s moving from single-celled to multicellular life. And then there’s intelligent life and civilization, et cetera. So, it seems like there are a lot of different places where there could be Great Filters. Could you explain your perspective on where you think the most likely Great Filters might be?

Avi Loeb: Well, I think it’s self-destruction, because I was asked by Harvard alumni how much longer I expect our civilization to survive. And I said, “When you look at your life and you select a random day from throughout your life, what’s the chance that it’s the first day after you were born? That probability is tens of thousands of times smaller than the probability that the day you select falls during your adulthood, because there are tens of thousands of days in the life of an adult.” Now, we have existed for about a century as an advanced technological civilization. And you ask yourself, okay, if we are in our adulthood, which is the most probable state for us to be in, as I mentioned before, since we are just sampling a random time, then that means that we have only a few more centuries left, because the likelihood that we will survive for millions of years is tens of thousands of times smaller.

That would imply that we are in the first day of our life, and that is unlikely. Now, the one caveat I have for this statement is that the human spirit can defy all odds. So, I believe that in principle, if we get our act together, we can be an outlier in the statistical likelihood function. That’s my hope. I’m an optimist, and I hope that we will get our act together. But if we continue to behave the way we are, not caring so much about the climate, and you can even see it in world politics nowadays, even administrations that care about climate cannot really convince the commercial sector to cooperate, then suppose our civilization is on a path to self-destruction, we don’t have more than a few centuries left. So, that is a Great Filter. And of course, there could be many other Great Filters, but that seems to me the most serious one.
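As a minimal numerical restatement of the sampling argument above, assuming roughly a century of technological civilization so far (the figure used in the conversation) and treating the present moment as a random draw from the civilization's total technological lifetime:

```python
# If our technological phase lasts T years in total and the present moment is a
# random point within it, the chance that we happen to be in the first
# 100 years is simply 100 / T.
elapsed_years = 100  # rough age of our technological civilization, per the conversation

for total_lifetime in (500, 10_000, 1_000_000):
    p_first_century = elapsed_years / total_lifetime
    print(f"Total lifetime {total_lifetime:>9,} yr -> "
          f"P(we are in the first century) = {p_first_century:.4%}")
# A million-year lifetime makes our current vantage point a one-in-ten-thousand
# coincidence, which is the sense in which long survival is "tens of thousands
# of times" less likely than a few remaining centuries under this argument.
```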

And then you ask yourself, okay, so which civilization is more likely to survive? It’s probably the dumber civilization, the one that doesn’t create the technologies that destroy it. If you have a bunch of crocodiles swimming on the surface of a planet, they will not create an atomic weapon. They will not change the climate. So, they may survive for billions of years. Who knows? So maybe the most common civilizations are the dumb ones. But one thing to keep in mind is that when you create technological capabilities, you can create equipment that will reproduce itself, like von Neumann machines.

Or you can send it to space. You can escape from the location that you were born on. And so that opens up a whole range of opportunities in space. And that’s why I say that once a civilization ventures into space, then everything is possible. Then you can fill up space with equipment that reproduces itself, and there could be a lot of plastic bottles out there. We don’t know; we shouldn’t assume anything. We should just search for them. And ‘Oumuamua, as far as I’m concerned, was the wake-up call. The other thing I would like to say is that if I imagine a very advanced civilization that understands how to unify quantum mechanics with gravity, something we don’t possess at the moment, since there is no such unification scheme that we know works, perhaps they know how to irritate the vacuum and create a baby universe that would lead to more civilizations.

So it’s just like having a baby that can make babies that can make babies, and you would get many generations as a result. This could be an origin of the Big Bang: maybe the umbilical cord of the Big Bang started in a laboratory. And by the way, that would mean that intelligence, technological advancement, is an approximation to God, because in the religious stories, God created the universe, and we can imagine a technology that would create a baby universe. And then the same is true for life. We don’t know whether the origins of life were seeded in a laboratory somewhere, so that remains a possibility. And that’s what’s so fascinating about the search for intelligent life out there, because it may provide answers to the most fundamental questions we have, like the meaning of life.

Lucas Perry: Would you consider your argument there about human extinction, given what we are currently observing, to be like the doomsday argument?

Avi Loeb: Yeah. Well, you can call it the doomsday argument; I would call it risk assessment. And I don’t think we are statistical systems in the sense that there is no escape from a particular future, because I think that once we recognize the risk in a particular future, we can respond and avoid it. The only question is whether, as a civilization, we will be intelligent enough. And frankly, I’m worried that we are not intelligent enough. It may be just like a Darwinian principle: if you are not intelligent enough, you will not survive, and we will never be admitted to the club of intelligent civilizations in the Milky Way galaxy unless we change our behavior. And it remains to be seen whether we will change our behavior accordingly. One way to convince people to change their behavior is to find evidence of other civilizations that didn’t, and perished as a result. That would be a warning for us, a history lesson.

Now, one caveat I should mention is that we always imagine things like us. When we go to meet someone, it’s fair to assume that that person has eyes and a nose and ears the way we do. And the reason it’s a reasonable assumption is that we share the same genetic heritage as the person we are meeting. But if you think about life on a planet that had no causal contact with Earth, it could be very different.

And so calculating the likelihood of self-destruction, the likelihood of life of one form versus another, the likelihood of intelligence, all of these very often assume something similar to us, which may not be the case. I think it might be shocking for us to find creatures from another planet or technologies from another planet. And so my solution to this ambiguity is to be an observer. Even though I’m a theorist, I would argue, let’s be modest. Let’s not try to predict things in this context. Let’s just explore the universe. And the biggest mistake we are making over and over again is to argue about the answer before seeing the evidence. And that’s the biggest mistake because it convinces you to be lazy, not to collect more evidence, to say, “I know the answer in advance. I don’t need to look through the telescope. I don’t need to invest funds in searching for this. Even though it’s an important question, I know the answer in advance.” And that’s the biggest mistake we can make as a species.

I’m willing to go through all the hardships of arguing something outside the box of confronting these personal attacks against me just because it’s a question of such great importance to humanity. If that was a minor question about the nature of dark matter, I would not risk anything for that. Who cares? If the dark matter is axions or weakly interacting massive particles, that has very little impact on our daily lives. It’s not worth confronting the mainstream on that. And by the way, the response would not be so emotional in that case either. But on a subject as important as this one to the future of humanity, which is the title of your organization, there is no doubt in my mind that it’s worth the extra effort.

It’s worth the hardship, bringing people to recognize that such a search for technological relics in space is extremely important for the way we view ourselves in the big scheme of things, our aspirations for space, our notions about religion, and what we might do in response to the knowledge that we acquire will completely change the future of humanity. And on such a question, I’m willing to put my body on the barbed wire.

Lucas Perry: Well, thank you very much for putting your body on the barbed wire. I think you mentioned that there was something in… Was it Israeli training where soldiers are taught to put their body on the barbed wire so people can climb over them?

Avi Loeb: Yeah. That was a statement that in the battlefield, very often, a soldier is asked to put his body on the barbed wire so that others can pass through. The way I see it historically is you look at Socrates, the ancient Greek philosopher. He advocated for doubting the wisdom of influential politicians at the time and other important figures, and he was blamed for corrupting the youth by dismissing the gods that were valued by the citizens of the city-state of Athens at the time. And he was prosecuted and then forced to drink poison. Now, if Socrates had lived today, he would have been canceled on the Athenian social media. That would be the equivalent of the poison. And then you see another philosopher, Epicurus, who made many true statements, but again, was disliked by some religious authorities at the time. And you see, of course, Galileo Galilei, who was put under house arrest.

Then you see Giordano Bruno. I mean, he was an obnoxious person who was not liked by a lot of people, but he simply argued that other stars are just like the sun, and therefore, they might have a planet just like the Earth that could have life on it. And the church at the time found it offensive, because if there is intelligent life out there, then that life may have sinned, and then Christ would have had to save that life. And then you need billions of copies of Christ to be distributed throughout the galaxy to visit all these planets. And that made little sense to the church. And so they burned Giordano Bruno at the stake. And nowadays we know that indeed, a lot of stars are like the sun, and a lot of planets are just like the Earth at roughly the same separation from their host stars where life may exist [inaudible 01:12:03]. So in that sense, he was correct.

And obviously you find many such examples also in modern science over the past century, of people advocating for the correct ideas and being dismissed and ridiculed. Just to give you an example, a former chair of the astronomy department at Harvard that preceded me… I chaired the astronomy department for nine years. I was the longest-serving chair in the history of the astronomy department at Harvard. Before me was Cecilia Payne-Gaposchkin. And in her PhD thesis, which was the first thesis in astronomy at Harvard, she argued based on analyzing the spectrum of the sun that most of the surface of the sun is made of hydrogen. And while defending her PhD thesis, Henry Norris Russell, who was the director of the Princeton University Observatory, an authority on stars at that time, dismissed her idea and said, “That is ridiculous because we know that the sun is made of the same elements as the Earth. So there is not much hydrogen on Earth. It cannot be the case that the sun is made mostly of hydrogen.”

So she took out that conclusion from her PhD thesis. And then in the subsequent few years, he redid the analysis, got more data, and wrote an extended paper in the Astrophysical Journal arguing the same, that she was correct. And interestingly enough, in a visiting committee to the Princeton University Department of Astrophysics, the chair of that department was bragging that Henry Norris Russell discovered that the sun is made mostly of hydrogen. So you can see that history depends pretty much on who tells it. But the point of the matter is that sometimes, when you propose an idea that has to be correct because it’s based on evidence, it’s dismissed by the authorities, and science is not dictated by authority.

In the 1930s, there was a book co-authored by tens of scientists arguing that Einstein’s Theory of Relativity must be wrong. And when Einstein was asked about it, he said, “Why do you need tens of scientists to prove that my theory is wrong? It’s enough to have one author that would explain why the theory is wrong.” Science is not based on authority. It’s based on reasoning and on evidence. And there is a lot of bullying going on nowadays. And I witness it. And throughout my career, I’ve seen a number of ideas that I proposed that were dismissed and ridiculed at first. And then they became the interest of the mainstream. And now, there are hundreds of people working on them. That was true for my work on the first stars. I remember that it was dismissed early on. There were people claiming even that there are no stars beyond the redshift [inaudible 01:14:59]. And then I worked on imaging black holes. I suggested that there could be a correlation between black hole mass and the characteristic velocity dispersion of stars in the vicinity of those supermassive black holes at the centers of galaxies.

I worked on gravitational wave astrophysics long before it was fashionable. And in all of these cases, the interest that I had early on was ridiculed. I gave a lecture at a winter school in Jerusalem in January 2013 on gravitational wave astrophysics. And one of the other lecturers, who is 20 years younger than I am, stood up and said, “Why are you wasting the time of these young students on a subject that will not be of importance in their career?” And he said it publicly. He stood up in front of everyone; it’s on video. And two and a half years later, the LIGO experiment detected the first gravitational wave signal.

Many of these students were still doing their PhDs, and this became the hottest frontier in astrophysics in subsequent years, and the Nobel Prize was awarded. So here you have a situation where someone says, “Why are you giving a lecture on this subject to students, because it would never be of importance throughout their careers?” And two and a half years later, it becomes the hottest topic, the hottest frontier in astrophysics. And it involves a new messenger other than light that was never used before in astrophysics: gravitational waves, wrinkles in space and time. It opens up a whole new window into the universe. So how is it possible that someone who is 20 years younger than I am stands up and feels that it’s completely appropriate for him to stand up in front of all the students and say that?

And to me, it illustrates narrow-mindedness. It’s not a matter of conservatism. It’s a matter of thinking within the box and not allowing anyone to think outside the box. And you might say, okay, that’s acceptable, because there are lots of people suggesting crazy ideas. But at the same time, you have whole communities of theoretical physicists working on very strange ideas that were never verified experimentally. And that is part of the mainstream. And the common thread between these two communities of people is that they both don’t pay attention to evidence. They both do not recognize the fact that evidence leads the way. In the case of gravitational waves, it’s the fact that we detect the signal. So just wait for LIGO to find the signal, and then everything will change.

In the case of ʻOumuamua, we saw some anomalies. Let’s pay attention to them. Let’s talk about them. And in the case of string theory, one could say it should be at the fringes of the mainstream because we haven’t found evidence that supports the idea of extra dimensions as of yet, so it doesn’t deserve to be center stage. But you have these two communities living side by side because both of them feel comfortable not paying attention to evidence.

Lucas Perry: We like to think of science as this really clean epistemic process of hypothesis-generating and creating theories, and then verification and falsification through evidence and data-gathering. But the reality is that it’s still made up of lots of humans who have their own need for recognition and meaning and acceptance and validation. And so in order to improve the process of science, it’s probably helpful to bring light to the reality of the humanity that we all still have when we’re engaged in the scientific pursuit. That helps to open our minds to the truth, so that our pursuit of the truth is not obscured by things we’re not being honest with ourselves about.

Avi Loeb: Right. And I was the founding director of the Black Hole Initiative at Harvard University, which brings together physicists, mathematicians, astronomers, and philosophers. And my motivation in creating this center was to bring people from different perspectives so that they will open the minds of other disciplines to possible breakthroughs in the context of black holes. And I think this is key. I think we should be open-minded and we should also fund risky propositions, risky ideas. There should be a certain fraction of the funding that goes in those directions. And even though I founded this Black Hole Initiative, in the first annual conference that we had, a philosopher gave a lecture, and then at the end of the lecture, the philosopher argued that… After speaking to a lot of string theorists, he made this statement that if a bunch of physicists agree on something as being true for a decade, then it must be true, because physics is what physicists decide to do.

And I raised my hand. I said, “How can you make… No, I would expect philosophers to give us a litmus test of honesty.” It’s just like the canary in the coal mine. They should tell us when truth is not being spoken. And I just couldn’t understand how a philosopher could make such a statement. I said, “There are many examples in history where physicists agreed on something and it was completely wrong. And the only way for us to find out is by experimental evidence.” Nature is teaching us. It’s a learning experience. And we can all agree that we are the wealthiest people in the world. And if we go to an ATM machine, that’s equivalent to doing an experiment and testing that idea. Now we can feel happy until we try to cash the money out of the ATM machine, and then we realize that our ideas were wrong.

If someone mentions an idea, how do we tell whether it’s a Ponzi scheme or not? Bernie Madoff told a lot of people that if they give him their money, he would give them more in return, irrespective of what the stock market will do. Now, that was a beautiful idea. It appealed to a lot of people. They gave him their money. What else can you expect from people that believe a beautiful idea? They made money and gave it to Bernie Madoff because the idea was so beautiful. And he felt great about it. They felt great about it. But when they wanted to cash out, which was the experiment, he couldn’t provide them the money. So this idea turned out to be wrong.

And it’s not in the nature of science to say, “Oh, okay, there are no experimental tests for now, but we can give up on them as long as we’re happy and we feel very smart and we completely agree that we should pursue these questions and just do mathematical gymnastics and give each other awards and feel great about life, and, in general, just make the general statement that experiments would be great, but we can’t do them right now, and therefore, let’s not even discuss them.”

Having a culture of this type is unhealthy for science because how can you tell the difference between the idea of Bernie Madoff and reality? You can feel very happy until you try to cash it out. And if you don’t have an experimental test during your life, then you might spend your life as a physicist on an idea that doesn’t really describe reality. And that’s a risk that as a physicist, I’m not willing to take. I want to spend my life on ideas that I can test. And if they are wrong, I learn something new.

And by the way, Einstein was wrong three times in the last decade of his career. He argued that black holes don’t exist, gravitational waves don’t exist, and quantum mechanics doesn’t have spooky action at a distance. But that was part of his work at the frontiers of physics. You can be wrong. There’s nothing bad about it. When you explore new territories, you don’t always know if you’re heading in the right direction. As long as you’re doing it with dignity and honesty and integrity and you’re just following what is known at the time, it’s part of the scientific pursuit. And that’s why people should not ridicule others that think outside the box. As long as they’re doing it honestly, and as long as the evidence allows for what they’re talking about, that should be considered seriously.

And I think it’s really important for the health of the scientific endeavor, because we’re missing out on opportunities to discover new things. Just to give you an example, in 1952, there was an astronomer named Otto Struve who argued that we might find planets with the mass of Jupiter close in to a star like the sun, because if they’re close in, if they’re hot Jupiters heated by their star, then they would tug the sun-like star back and forth in a way that we can measure, or they would occlude a significant portion of the area of the star, so we could see them when they transit the star. So he argued, let’s search for those. And for four decades, no time on major facilities was allocated for such a search, because astronomers argued, “Oh, we pretty much understand why Jupiter formed so far away from the sun, and we shouldn’t expect hot Jupiters.” And then in 1995, a hot Jupiter was discovered. And the Nobel Prize was given for that a couple of years ago.

So you might say, “Okay, that baby was born.” Eventually, even though four decades were wasted, we found a hot Jupiter. And that opened up the field of exoplanets. But my argument is that this is a baby that was born. For each baby like that, there must be many babies that were never born, because it’s still being argued that it’s not worth the effort to pursue those frontiers.

And that’s unfortunate, because we are missing opportunities to discover new things. If you’re not open to discover new things, you will never discover them.
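For readers who want rough numbers behind the two detection methods Struve proposed, here is a standard back-of-the-envelope sketch; the figures are illustrative order-of-magnitude estimates, not numbers from the conversation. A transiting planet dims its star by the ratio of the two areas, and a close-in Jupiter tugs a sun-like star back and forth at a measurable velocity:

\[
\delta \simeq \left(\frac{R_p}{R_\star}\right)^2 \approx \left(\frac{R_{\rm Jup}}{R_\odot}\right)^2 \approx 1\%,
\qquad
K \simeq \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p \sin i}{M_\star^{2/3}} \sim 10^2~\mathrm{m\,s^{-1}}
\]

for a Jupiter-mass planet on a few-day orbit around a sun-like star, versus the roughly 12 m/s wobble Jupiter itself induces on the sun from 5 AU. The relative size of those signals is part of why Loeb argues the search could have started decades before the 1995 discovery.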

Lucas Perry: I think that’s some great wisdom for many different parts of life. One thing that you mentioned earlier that really caught my attention was you were talking about us becoming technologically advanced, and that would unlock replicators, and that replicators could explore the universe and fundamentally change it and life in our local galactic cluster. That was also tied into the search for the meaning for life. And a place where I see these two ideas as intersecting is in the idea of the cosmic endowment. The cosmic endowment is this idea of the total amount of matter and energy that an intelligent species has access to after it begins creating replicators. So since the expansion of the universe is accelerating, there’s some number of galaxies which exist outside of a volume that we have access to. So there’s a limited amount of energy and matter that we can use for whatever the meaning of life is or whatever good is. So what do you think the cosmic endowment should be used for?

Avi Loeb: Right. So I actually had an exchange with Freeman Dyson on this question. When the accelerating universe was discovered, I wrote a paper saying, “When the universe ages by a factor of 10, we will be surrounded by vacuum beyond our galaxy, and we will not have contact with other civilizations or resources.” And he wrote back to me and said, “We should engage in a cosmic engineering project where we propel our star and come together with other civilizations. And by that, we will not be left alone.” And I told him, “Look, this cosmic engineering project is very ambitious. It’s not practical. In fact, there are locations where you have much more resources, 1,000 times more than in our Milky Way Galaxy. These are called clusters of galaxies, and we can migrate to the center of the nearest cluster of galaxies. And in fact, there might be a lot of journeys taken by advanced civilizations toward clusters of galaxies to avoid the cosmic expansion.”

So that’s my answer of how to prepare for the cold winter that awaits us, when we will be surrounded by vacuum. It’s best to go to the nearest cluster of galaxies, where the amount of resources is 1,000 times larger. In addition to that, you can imagine that in the future we will build accelerators that bring particles to energies that far exceed the Large Hadron Collider. And the maximum particle energy that we can imagine is the so-called Planck energy scale. And if you imagine developing our accelerator techniques, you can, in principle, imagine building an accelerator within the solar system that will reach Planck energies. And if you collide particles at these energies, we don’t really know the physics of quantum gravity, but you can imagine a situation where you would irritate the vacuum to a level where the vacuum will start burning up. Because we know the vacuum has some mass density, some energy density, that is causing the accelerated expansion, the so-called cosmological constant.

And if you bring the vacuum to zero energy density state, then you have an excess energy that is just like a burning front. It’s the energy you get from a propellant that burns. And you get a domain wall that can expand and consume all the vacuum energy along its path. And of course, it moves at the speed of light. So if you were to be on the path of such a domain wall, you would not get an advanced warning and it will burn up everything along its path at the speed of light.

So I think if we ever meet advanced civilizations that have the capabilities of building accelerators that reach the Planck scale, we should sign a treaty, a galactic treaty, whereby we will never collide particles approaching that energy, in order not to put everyone else at risk from domain walls that would burn them up. That’s just a matter of cosmic responsibility.
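For context on the energy scale being invoked here, the Planck energy is the standard combination of fundamental constants below; the comparison with the Large Hadron Collider is an illustrative estimate added here, not a figure from the conversation:

\[
E_{\rm Planck} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}~\mathrm{GeV} \approx 2 \times 10^{9}~\mathrm{J},
\]

which is roughly fifteen orders of magnitude above the several-TeV energies per proton reached at the LHC. That gap is why reaching it is imagined as requiring an accelerator built on the scale of a solar system rather than a ring of a few tens of kilometers.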

Lucas Perry: I think Max Tegmark calls these “death bubbles”?

Avi Loeb: Yeah. I mean, these are domain walls that, of course, we have no evidence for, but they could be triggered by collisions at the Planck scale. And a matter of cosmic responsibility is not to generate these domain walls artificially.

Lucas Perry: So let’s pivot into looking for life and space archeology, which is a cool term that you’ve created, and looking for them through bio-signatures and techno-signatures. One place that I’m curious to start, since we were just talking about replicators, is why is it that we don’t find evidence of replicators or large-scale superstructures in other galaxies or in our own galaxy? For example, a galaxy where half of it has been turned into Dyson spheres, and so it’s only half illuminated.

Avi Loeb: Right. I mean, presumably such things do not exist. It’s actually very difficult to imagine an engineering project that will construct a Dyson sphere. And I think it’s much more prudent for an advanced civilization to build small pieces of equipment that go through the vast space in between stars. And that is actually very difficult for us to detect with existing instrumentation. Even a spacecraft as big as a football field would be noticed only when it passes within the Earth’s orbit around the sun. That’s the only region where Pan-STARRS detected objects the size of ʻOumuamua, from the reflected sunlight. So we will not notice such objects farther away than the Earth is from the sun. And the distance to the nearest star is hundreds of thousands of times bigger than that. So most of space could be filled with things passing through it that are not visible to us.

A spacecraft the size of a football field is huge. We cannot imagine something much bigger than that. And so I would argue that there could be a lot of things floating through space. Also, as of now, our telescopes were not monitoring for objects that move very fast, at a fraction of the speed of light. If astronomers saw something moving across the sky so fast, they would dismiss it. They would say, “It makes no sense. We are looking for asteroids or comets that are moving at a percent of a percent of the speed of light, 10 to the minus four of the speed of light.” So part of it is our inability to consider possibilities that may exist out there. But mostly, the fact that we haven’t yet detected a lot of these objects reflects a lack of sensitivity. We can’t really see these things when they’re far away unless they are major megastructures, as you pointed out. But I think such engineering projects are unlikely.
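A rough sketch of why reflected-sunlight surveys are so limited, using the standard scaling rather than anything specific said here: the flux we receive from an object of radius R and albedo p falls off with both its distance r from the sun and its distance Δ from Earth,

\[
F \propto \frac{p\,R^{2}}{r^{2}\,\Delta^{2}},
\]

and for an object well beyond Earth’s orbit r ≈ Δ, so the brightness drops roughly as the fourth power of distance; twice as far means about sixteen times fainter. That is why a survey like Pan-STARRS only catches ʻOumuamua-sized objects when they pass near Earth’s orbit.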

Lucas Perry: I’m curious why you feel that engineering projects like that are unlikely. It seems like one of the most interesting things you can do is computation. Computation seems like it has something to do with creating consciousness, and consciousness seems like it is the bedrock of value, given that all value arises in conscious experience. I would imagine using the energy of suns to enable vast amounts of computation is one of the most interesting things that a civilization can do. And the objects that they might send out to other solar systems would be nanoscale, right? You send out nanoscale replicators. They would be even smaller than football fields or smaller than ‘Oumuamua. Then, those would begin Dyson sphere engineering projects. With artificial superintelligence and billions and billions of years to go in the universe, in some sense it feels like we’re in the early universe. So I’m curious why superstructures would be unlikely. I’m not sure I totally understand that.

Avi Loeb: If you think about what a star is, a star is just a nuclear reactor that is bound by gravity. That doesn’t seem to be the optimal system for us to use. It’s better to build an artificial nuclear reactor that is not bound by gravity, like a nuclear engine. We’re trying to do that. It’s not easy to build a fusion reactor on earth, but we do have fission reactors. If I were to think about using nuclear energy, I would say it’s much better to use artificially-made nuclear engines than to use the energy produced by a giant nuclear reactor that nature produced in particular locations. Because then you can carry your engine with you. You are always close to it. You can harness all of its energy, and you don’t need to put a huge structure around the star, which brings in a lot of engineering difficulties or challenges.

I would be leaning in the direction of having small systems, a lot of small systems sent out, rather than a giant system that covers the star. But once again, I would argue that we should look at the evidence, and there are constraints on Dyson spheres that imply that they are not very common. I should say, a couple of weeks ago, I wrote a paper with an undergraduate student at Stanford, Eliza Tabor, that considers the possibility of detecting artificial lights on the night side of Proxima b, the habitable planet around the nearest star, Proxima Centauri, using the James Webb Space Telescope. We show that one can put very interesting limits on the level of artificial illumination on the dark side of that planet if there are any city lights out there.

The other technological signatures that one can look for are, for example, industrial pollution in the atmosphere of a planet. I wrote a paper about that six years ago. You can look for reflectance that indicates photovoltaic cells on the day side of a planet, which is quite different from the reflectance of rock and has a spectral edge. You can look for light beams that sweep across the sky. You see them as a flash of light. For example, light being used for propulsion with light sails. If you imagine another planetary system where cargo is being delivered from an Earth-like planet to a Mars-like planet using light sails, the beam of light could cross our line of sight and we could see it as a flash of light, and we could even correlate it with the two planets being aligned along our line of sight. That would give us confidence that it is indeed a light sail traveling between those two planets that we are witnessing. I wrote a paper about that in 2016.

There are all kinds of technological signatures we can search for, but we need to actually search for them, and we need to put funds towards this.

Lucas Perry: We have both bio-signatures and techno-signatures. In terms of bio-signatures, you’ve proposed looking in the clouds of brown dwarfs and green dwarfs. There is also looking around our own solar system at the elements in, for example, the atmosphere of Venus: phosphine was detected there, which we thought could not exist except through biological pathways, so it’s hypothesized that maybe there’s some kind of life in the atmosphere of Venus. We’re also searching other planets for elements that can’t exist without life. Then, in terms of techno-signatures, there is the search for radio waves, which you’ve talked about. That is a primary way of looking for life, but it potentially needs a refresh, for example by looking for artificial light or the remnants of industry. You’ve also proposed developing imaging that is increasingly sensitive, because ‘Oumuamua was basically at the limit of our telescopes’ capacity, wasn’t it?

Avi Loeb: It was, roughly. I mean, it was at the level of sensitivity that allows a definite detection, but we can’t see objects that are much smaller than that or that reflect much less light than ‘Oumuamua did. I should say that all of these, both biological signatures and technological signatures, are reviewed in a textbook that I wrote together with my former postdoc Manasvi Lingam that is coming out on the 29th of June, 2021. It’s more than a thousand pages long, 1,061 pages, and it has an overview of the current scientific knowledge we have and the expectations we have for biological signatures and technological signatures. The title of the book is “Life in the Cosmos” and it is published by Harvard University Press. It is meant to be a textbook for scientific research, as a follow-up on my popular-level book, Extraterrestrial.

Lucas Perry: Pivoting a bit here: we touched on this a little earlier when we were talking about whether aliens would be interested in us or compelled to reach out to us because of ethical concerns. Do you think that advanced alien civilizations can converge on ethics and beneficence?

Avi Loeb: That’s an interesting question. It really depends on their value system. It also depends on Darwinian selection. The question is, what kind of civilizations will be most abundant? If you look at human history, very often the more aggressive, less ethical cultures survived because they were able to destroy the others. It’s not just a matter of which values appear to be more noble. It’s a question of which set of values leads to survival in the long run and domination in the long run. Without knowing the full spectrum of possibilities, we can’t really assess that. Once again, I would say the smart thing for us to do is be careful. I mean, not transmit too much to the outside world until we figure out if we have neighbors. There was this joke, when I Love Lucy was replayed again and again, that we might get a message from another planet saying, “If you keep replaying reruns of I Love Lucy, we will invade you.”

I think, well, it’s important for us to be careful and figure out first whether there are smarter kids on the block. But having said that, if we ever establish contact, or if we find equipment in our neighborhood, the question is what to do. It’s a policy question how to respond to that, and it really depends on the nature of what we find. How much more advanced is the equipment that we uncover? What were the intentions of those who produced it and sent it? These are fundamental questions that will guide our policy and our behavior. Until we find conclusive evidence, we should wait.

Lucas Perry: To push back a little bit on the Darwinian argument: that’s, of course, a factor, where we have this game-theoretic expression of genes, the selfish gene trying to propagate itself through generations, leading to behaviors and to how the human being is conditioned by evolution. But there’s also the sense that over time humanity has become increasingly moral. We’re, of course, doing plenty of things right now that are wrong, but morality seems to be improving over time. This leads to a question: do you think that there is a necessary relationship between what is true and what is good? You need to know more and more true facts in order to, for example, spread throughout the universe. So, if there’s a necessary relationship between what is true and what is good, there would be a convergence then also on what is good as truth continues to progress.

Avi Loeb: I was asked about this in a forum, when I joked that I am seeking intelligence in space, in the sky, because I don’t find it often here on earth; a member of the audience chuckled and asked me: how do you define an intelligent civilization? The way I define it is by the guiding principles of science, which are sharing of, and cooperation on, evidence-based knowledge. The word cooperation is extremely important. I believe that intelligence is marked by cooperation, not by fighting each other, because that’s a sink for our energy, for our resources, that doesn’t do any good. Promoting a better future for ourselves through cooperation is a trademark of intelligence. It’s also the guiding principle of science.

The second component of these guiding principles is evidence-based knowledge. The way I view science is as an infinite sum game. In economics, you have a zero sum game, where if someone makes a profit, another person loses. In science, when we increase the level of knowledge we have, everyone benefits. When a vaccine was developed for COVID-19, everyone on earth benefited from it. Science aims to increase the territory of this island of knowledge that we have in the ocean of ignorance that surrounds it. It should be evidence-based, not based on our prejudice. That’s what I hope the future of humanity is. It will be coincident with the guiding principles of science, meaning people will cooperate with each other, nations will cooperate with each other, and try to share evidence-based knowledge rather than the alternative.

The alternative is what we are doing right now: fighting each other, trying to feel superior relative to each other. If you look at human history, you find racism, you find attempts at supremacy or elitism, all kinds of phenomena that stem from a desire to feel superior relative to other people. That’s ridiculous in the big scheme of things, because we are such an unimportant player on the cosmic stage that we should all feel modest, not try to feel superior relative to each other. Because any advantage that we have relative to each other is really minuscule in the big scheme of things. Now, the color of the skin is completely meaningless. Who cares what the color of the skin is? What significance could that have for the qualities of a person?

Yet, a lot of human history is shaped around that. This is not the intelligent way for us to behave as a species. We should focus on the guiding principles of science which are cooperation and sharing of evidence-based knowledge. Rather than ridiculing each other, rather than trying to feel superior relative to each other, rather than fighting each other, let’s work together towards a better future and demonstrate that we are intelligent so that we will acquire a place in the club of intelligent species in the Milky Way galaxy.

Lucas Perry: Do you see morality as evidence-based knowledge?

Avi Loeb: I think morality, if you listen to Kant, is the logical thing to do if you adopt principles that will promote the greater good of everyone around. You’re basically taking into consideration others and shaping your behavior so that if other people follow the same principles, we will be in a better world. That, to me, is a sign of recognizing evidence, because the evidence is that you don’t live alone. If you were to live alone, if you were the only person on earth, morality loses significance. It’s not only that there is nobody else for you to consider morality relative to. That’s not the issue. The issue is that it’s irrelevant. You don’t need to consider morality because you’re the only person. You can do whatever you want. It has no effect on other people, and therefore morality is not relevant. You can do whatever you want. But given the fact that you look at the evidence and you realize that you’re not alone, that’s evidence. You shape your behavior based on that evidence, and I do think that’s evidence-based knowledge. Definitely.

Lucas Perry: How do you see axiom-based knowledge? For example, the axioms of morality and mathematics upon which those structures are built. There are also axioms, for example, in science, like the value of communication and evidence-based reasoning. Axioms in morality, for example, might be that value and disvalue are innate and intrinsically experienced in consciousness. Then, there are axioms in mathematics which motivate and structure that field. We’ve talked a lot about science and evidence-based reasoning, but what about knowledge in the philosophical territory which is almost a priori true, the kinds of things on which we rest entire fields? How do you see that?

Avi Loeb: I do believe that there is room for the humanities of the future. The way that philosophy was handled in past centuries should be updated. Let me illustrate that with an example related to your question. Suppose we want to decide about the principles of morality. One way to do that is to construct a simulation that includes a lot of people. In principle, you include all the ingredients that make people behave one way or another. It doesn’t need to be rational reasoning. You can include some randomness or some other elements that shape human behavior based on their environment. You can include that in the simulation.

Let’s just imagine this simulation where you put individual people and you have an algorithm for the way that they respond to their environment. It doesn’t need to be by rational reasoning. It could be emotional, it could be any other way that you find appropriate. You have the building blocks. Each of them is a person and you introduce the randomness that is in the population. Then, you run the simulation and you see what happens. This is just like trying to produce human history artificially. Then, you introduce principles for the behavior of people, guiding principles, just like moral principles.

First, you let people behave in a completely crazy way, doing anything they want, and then people get killed as the outcome of the simulation. But if you introduce principles of morality, you can see the outcomes that will come out of it. What I would say is, in principle, in the future, if we have a sophisticated enough computer algorithm to describe the behavior of people, if we get a better sense of how people behave and respond to their environment, we can design the optimal code by which people should behave such that we will end up in a stable society that is intelligent, that follows the kind of principles I mentioned before, that is orderly, and that benefits everyone for a better future.

That’s one way of approaching it. Obviously in the past, philosophers could not approach it this way because they didn’t have the computer capabilities that we currently have. You can imagine artificial intelligence addressing this task in principle.
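As a concrete illustration of the kind of simulation being described, here is a minimal sketch in Python. Everything in it, the agent rules, the "cooperate with probability coop_prob" stand-in for a moral principle, and the survival bookkeeping, is a hypothetical toy chosen for illustration, not anything proposed in the conversation:

```python
import random

def run_society(num_agents=100, coop_prob=0.7, steps=1000, seed=0):
    """Toy agent-based sketch: agents meet in random pairs; cooperating
    creates new resources for both, defecting transfers resources from
    one agent to the other; agents whose resources hit zero drop out.
    Returns (steps survived, number of surviving agents)."""
    rng = random.Random(seed)
    resources = [10.0] * num_agents
    alive = list(range(num_agents))
    for step in range(steps):
        if len(alive) < 2:
            return step, len(alive)
        a, b = rng.sample(alive, 2)
        if rng.random() < coop_prob:
            # "moral principle": cooperate, a positive-sum interaction
            resources[a] += 1.0
            resources[b] += 1.0
        else:
            # defect: a takes up to 3 units from b, a zero-sum grab
            taken = min(3.0, resources[b])
            resources[a] += taken
            resources[b] -= taken
        # everyone alive pays a small cost of living each step
        for i in alive:
            resources[i] -= 0.02
        alive = [i for i in alive if resources[i] > 0]
    return steps, len(alive)

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9):
        lifetime, survivors = run_society(coop_prob=p)
        print(f"cooperation probability {p:.1f}: "
              f"lasted {lifetime} steps, {survivors} agents survived")
```

Sweeping coop_prob and comparing how long the toy society lasts is the bare-bones analogue of "introducing principles of morality and seeing the outcomes"; a serious version would need far richer behavioral models, which is the point about needing sophisticated algorithms, and perhaps AI, to address the task.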

Lucas Perry: You can set moral principles and parameters for a system and then evolve the system, but the criteria for evaluating the success or failure of that system are themselves more like moral axioms. As a scientist, I’m curious how you approach, for example, the moral axioms that you would use for evaluating the evolution of a particular moral system.

Avi Loeb: My criterion, the one that I think guides me, is maintaining the longevity of the human species: whatever will keep us around for the longest amount of time. Of course, bearing in mind that the physical conditions will change on earth. Within a billion years, the sun will boil off all the oceans on earth, but let’s leave that aside. Let’s just ask: suppose you put people in a box and, generation after generation, let them follow some principles. What would be the ideal principles to maintain the stability of society and the longevity of the human species? That’s what would guide me. I think survival is really the key for maintaining your ideas. That’s the precondition. In nature, things that are transient go away. They don’t survive, and so they lose their value. Obviously, in the short term, they could have more value, but I care about the long term, and I define the principles based on how long they would allow us to survive.

Lucas Perry: But would you add expected value to that calculation? It’s not just time, but it’s actually like the expected reward or expected value over time. Because some futures are worse than others and so maybe we wouldn’t want to just have longevity.

Avi Loeb: There is the issue of being happy and pleased with the environment that you live in. That could be factored in. But I think the primary principle would be survival, because within any population you will always find a fraction of the members that are happy. It partly depends on the circumstances that they live in, but partly on the way they accept those circumstances. You can live in a barn and be happy. You can be in a mansion and be unhappy. It’s complicated as to what makes you happy, and I would put that as a secondary condition. I would worry more about social structures that maintain longevity.

Lucas Perry: All right. On humanity’s longevity: we’re just beginning to become technologically advanced, and we’re facing existential risks in the 21st century from artificial intelligence and nuclear weapons and synthetic biology. There are UFOs and there’s ‘Oumuamua, and a lot of really interesting, crazy things are going on. I’m curious if you could touch on the challenge of humanity’s response and the need for international governance for potentially communicating with and encountering alien life.

Avi Loeb: Well, I do think it’s extremely important for us to recognize that we belong to the same species. All the confrontations we often have in politics, between nations, should play a lesser role in guiding our behavior. Cooperation on the global scale, international cooperation, is extremely important. Let me give an example from recent history. There was a virus that came from Wuhan, China. If scientists had been allowed to gather all the information about how this virus came about and what the characteristics of this virus are, then the vaccine would have been developed earlier and it could have saved the lives of many people.

I would say, in the global world that we live in today, many of our problems are global, and therefore we should cooperate on the solutions. That argues against putting borders around our knowledge, trying, again, to gain superiority of one nation relative to another, and instead for helping each other towards a better future. It’s really science that provides the glue that can bind us internationally. I realize, trying to be a realist, that it may not happen anytime soon that people will recognize the value of science as the international glue. But I hope that eventually we will realize that this is the only path that will bring us to survival, to a better future: acting on the basis of cooperation on evidence-based knowledge.

Lucas Perry: In 2020, you wrote an article where you advocate for creating an elite scientific body to advise on global catastrophes. At the Future of Life Institute, we’re interested in reducing existential risks, ways in which technology can be misused or lead to accidents that result in the extinction of life on earth. Could you comment on your perspective on the need for an elite scientific body to advise on existential and global catastrophic risks?

Avi Loeb: Well, we noticed that during the pandemic we were not really prepared, especially in the Western world, because the last major pandemic of this magnitude took place a century ago, and nobody around today in politics or otherwise was around back then. As a result, we were not ready. We were not prepared. I think it’s prudent to have an organization that will cultivate cooperation globally. It could be established by the United Nations. It could be a different body. But once again, it’s important for us to plan ahead and avoid catastrophes that could be more damaging than COVID-19. Preventing them would more than repay the investment of funds.

Just to give you another example: solar eruptions, solar storms. Take the Carrington Event. About 150 years ago, there was a big eruption on the sun that brought energetic particles to earth, and back in the mid-19th century there wasn’t much technological infrastructure. But if the same event were to happen today, it would cost trillions of dollars to the world economy, because it would damage power grids, satellites, communication, and so forth. It would be extremely expensive. It’s important for us to plan ahead. About seven years ago, there was a plume of hot gas that was ejected by the sun, and it just missed the earth. We should be ready for that and build infrastructure that would protect us from such a catastrophe.

There are many more. One can go through the risks, and some of them are bigger than others. Some of them are rarer than others. Of course, one of them is the risk of an asteroid hitting the earth, and Congress has tasked NASA with finding all asteroids or rocks bigger than the size of ‘Oumuamua, about 140 meters. They wanted NASA to find 90% of all of those that could potentially intercept earth and collide with earth. The Pan-STARRS telescope that we started with, the one that discovered ‘Oumuamua, was funded for finding such near-earth objects. The Vera Rubin Observatory will most likely fulfill two-thirds of the Congressional task and find 60% of all the near-earth asteroids bigger than 140 meters.

That shows that the human brain is actually much more useful for survival than the body of a dinosaur because the dinosaurs had huge bodies. 66 million years ago, they were very proud of themselves. They dominated their environment, they ate grass, and were happy. Then, from the sky came this giant rock the size of Manhattan Island. When it hit the ground, it tarnished their ego trip abruptly. Just to show you that the human brain, even though it’s much smaller than the dinosaur body is much more precious for protecting us because we can design telescopes that would alert us to incoming objects. That’s a catastrophe that obviously we can protect ourselves against by shifting the trajectories of objects heading our way.

Lucas Perry: As a final question, I’m curious, what are the most fundamental questions to you in life and what motivates and excites you from moment to moment as a human being on earth? I’ve read or heard that you were really interested in existentialism as a kid. What are the most foundational or important questions to you?

Avi Loeb: The fundamental issue is that we live for a finite, short time. The question is, what’s the meaning of our existence? You see, because very often we forget that this trip we’re having, which is very exciting and could be very stimulating and intriguing, is finite. When I realized that, when both my parents passed away over the past three years, I came to the realization that I shouldn’t give a damn about what other people think. Let’s focus on the substance. Let’s keep our eyes on the ball and not on the audience. That focused my attention on the important things in life that we should appreciate.

Then, there is this fundamental question of why is life worth living? What are we living life for? What is the meaning of our life? You know, it may well be that there is no meaning, that we just go through this interesting trip; that we are spectators of the universe. We should enjoy the play while it lasts. But that, again, argues that we should be modest and behave like spectators rather than trying to shape our immediate environment and feel a sense of deep arrogance as a result of that. That was the view of the dinosaurs before the rock hit them. In a way, what gives me a sense of a meaningful life is just looking at the universe and learning from it. I don’t really care about my colleagues.

Every morning I jog at 5:00 AM. I developed this routine during the pandemic. I enjoy the company of birds, ducks, wild turkeys, and rabbits. I really enjoy nature left to its own much more than people, because there is something true in looking at it. Every morning, I see something different. Today, I saw a red bird. I saw the sunrise, which was completely different than yesterday. Every day, you can learn new things; we just need to pay attention and not feel that we know everything. It’s not about us. It’s about what is going on around us that we should pay attention to. Once we behave more like kids, appreciating things around us and learning from them, then we will feel happier. I was asked by the Harvard Gazette what is the one thing I would like to change about the world. I said I would like my colleagues to behave more like kids: not being driven by promoting their image but rather being willing to make mistakes, putting skin in the game, and regarding life as a learning experience. We might be wrong sometimes, but we are doing our best to figure out when we are wrong.

Lucas Perry: All right, Avi. Thank you very much for inspiring this childlike curiosity in science, for also helping to improve the cultural and epistemic situation in science, and also for your work on ‘Oumuamua and everything to do with extraterrestrials and astronomy. Thank you very much for coming on the podcast.

Avi Loeb: Thanks for having me. I had a great time.

 

Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

  • What wisdom consists of
  • The role of ideas in society and civilization
  • The increasing concentration of power and wealth
  • The technological displacement of human labor
  • Democracy, universal basic income, and universal basic capital
  • Living an examined life

 

Check out Nicolas Berggruen’s thoughts archive here

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Nicolas Berggruen and explores the importance of ideas and wisdom in the modern age of technology. We explore the race between the power of our technology and the wisdom with which we manage it, what wisdom really consists of, why ideas are so important, the increasing concentration of power and wealth in the hands of the few, how technology continues to displace human labor, and we also get into democracy and the importance of living an examined life.

For those not familiar with Nicolas, Nicolas Berggruen is an investor and philanthropist. He is the founder and president of Berggruen Holdings, and is a co-founder and chairman of the Berggruen Institute. The Berggruen Institute is a non-profit, non-partisan, think and action tank that works to develop foundational ideas about how to reshape political and social institutions in the face of great transformations. They work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century. Nicolas is also the author, with Nathan Gardels, of Intelligent Governance for the 21st Century: A Middle Way between West and East as well as Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism. And so without further ado, let’s get into our conversation with Nicolas Berggruen. 

So, again, thank you very much for doing this. And to set a little bit of stage for the interview and the conversation, I just wanted to paint a little bit of a picture of wisdom and technology and this side of ideas, which is not always focused on when people are looking at worldwide issues. And I felt that this Carl Jung quote captured this perspective well. He says that “indeed it is becoming ever more obvious that it is not famine, not earthquakes, not microbes, not cancer, but man himself who is man’s greatest danger to man for the simple reason that there is no adequate protection against psychic epidemics, which are infinitely more devastating than the worst of natural catastrophes.” So, I think this begins to bring us to a point of reflection where we can think about, for example, the race between the power of our technology and the wisdom with which we manage it. So, to start things off here, I’m curious if you have any perspective about this race between the power of our technology and the wisdom with which we manage it, and in particular what wisdom really means to you.

Nicolas Berggruen: So, I think it’s an essential question. And it’s becoming more essential every day because technology, which is arguably something that we’ve empowered and accelerated is becoming increasingly powerful to a point where we might be at the cusp of losing control. Technology, I think, has always been powerful. Even in very early days, if you had a weapon as technology, well, it helped us humans on one side to survive by likely killing animals, but it also helped us fight. So, it can be used both ways. And I think that can be said of any technology.

What’s interesting today is that technology is potentially, I think, at the risk or in the zone of opportunity where the technology itself takes on a life. Go back to the weapon example. If the weapon is not being manned somehow, well, the weapon is inert. But today, AIs are beginning to have lives of their own. Robots have lives of their own. And networks are living organisms. So, the real question is when these pieces of technology begin to have their own lives or are so powerful and so pervasive that we are living within the technology, well, that changes things considerably.

So, going back to the wisdom question, it’s always a question. When technology is a weapon, what do you do with it? And technology’s always a weapon, for the good or for the less good. So, you’ve got to have, in my mind at least, wisdom, intention, an idea of what you can do with technology, what might be the consequences. So, I don’t think it’s a new question; I think it’s a question since the beginning of time for us as humans. And it will continue to be a question. It’s just maybe more powerful today than it ever was. And it will continue to become more potent.

Lucas Perry: What would you say wisdom is?

Nicolas Berggruen: I think it’s understanding and projection together. So, it’s an understanding of maybe a question, an issue, and taking that issue into the real world and seeing what you do with that question or that issue. So, wisdom is maybe a combination of thinking and imagination with application, which is an interesting combination at least.

Lucas Perry: Is there an ethical or moral component to wisdom?

Nicolas Berggruen: In my mind, yes. Going back to the question, what is wisdom? Do plants or animals have wisdom? And why would we have wisdom, and they not? We need to develop wisdom because we have thoughts. We are self-aware. And we also act. And I think the interaction of our thinking and our actions, that makes for the need for wisdom. And in that sense, a code of conduct or ethical issues, moral issues become relevant. They’re really societal, they’re really cultural questions. So, they’ll be very different depending on when and where you are. If you are sitting, as we are, it seems both in America today, in 2021; or if we were sitting 2,000 years ago somewhere else; or even today, if we’re sitting in Shanghai or in Nairobi.

Lucas Perry: So, there’s this part of understanding and there’s this projection and this ethical component as well and the dynamics between our own thinking and action, which can all interdependently come together and express something like wisdom. What does this projection component mean for you?

Nicolas Berggruen: Well, again, to me, one can have ideas, can have feelings, a point of view, but then how do you deal with reality? How do you apply it to the real world? And what’s interesting for us as humans is that we have an inner life. We have, in essence, a life with ourselves. And then we have the life that puts us in front of the world, makes us interact with the world. And are those two lives in tune? Are they not? And how far do we push them?

Some people in some cultures will say that your inner life, your thoughts, your imagination are yours. Keep them there. Other cultures and other ways of being as individuals will make us act those emotions, our imaginations, make them act out in the real world. And there’s a big difference. In some thinking, action is everything. For some philosophers, you are what you do. For others, less so. And that’s the same with cultures.

Lucas Perry: Do you see ideas as the bridge between the inner and the outer life?

Nicolas Berggruen: I think ideas are very powerful because they activate you. They move you as a person. But again, if you don’t do anything with them, in terms of your life and your actions, they’ll be limited. What do you do with those ideas? And there, the field is wide open. You can express them or you can try to implement them. But I do think that unless an idea is shared in any way that’s imaginable, unless that idea is shared, it won’t live. But the day it’s shared, it can become very powerful. And I do believe that ideas have and will continue to shape us humans.

We live in a way that reflects a series of ideas. They may be cultural ideas. They may be religious ideas. They may be political ideas. But we all live in a world that’s been created through ideas. And who created these ideas? Different thinkers, different practitioners throughout history. And you could say, “Well, these people are very creative and very smart and they’ve populated our world with their ideas.” Or we could even say, “No, they’re just vessels of whatever was the thinking of the time. And at that time, people were interested in specific ideas and specific people. And they gained traction.”

So, I’m not trying to overemphasize the fact that a few people are smarter or greater than others and everything comes from them, but in reality, it does come from them. And the only question then is, were they the authors of all of this? Or did they reflect a time and a place? My feeling is it’s probably a bit of both. But because we are humans, because we attribute things to individuals one way or another, ideas get attributed to people, to thinkers, to practitioners. And they’re very powerful. And I think, undoubtedly, they still shape who we are and I think will continue to shape who we are at least for a while.

Lucas Perry: Yeah. So, there’s a sense that we’ve inherited hundreds of thousands of years of ideas basically and thinking from our ancestors. And you can think of certain key persons, like philosophers or political theorists or so on, who have majorly contributed. And so, you’re saying that they may have partially been a reflection of their own society and that their thought may have been an expression of their own individuality and their own unique thinking.

So, just looking at the state of humanity right now and how effective we are at going after more and more powerful technology, how do you see our investment in wisdom and ideas relative to the investment that we put into the power of our technology?

Nicolas Berggruen: To me, there’s a disconnect today between, as you say, the effort that we put in developing the technologies versus the effort that’s being invested in understanding what these technologies might do and thinking ahead, what happens when these things come to life? How will they affect us and others? So, we are rightly so impressed and fascinated by the technologies. And we are less focused on the effects of these technologies on ourselves, on the planet, on other species.

We’re not unaware, and we’re getting more and more aware. I just don’t know if, as you say, we invest enough of our attention, of our resources there. And also, if we have the patience and you could almost say the wisdom, to take your word, to take the time. So, I see a disconnect. And going back to the power of ideas, you could maybe ask the question in a different way: ideas or technology, which one is more influential? Which one is more powerful? I would say they come together. But technologies alone are limited or at least historically have been limited. They needed to be manifested or let’s say empowered by the owners or the creators of the technologies. They helped people. They helped the idea-makers or the ideas themselves enormously. So, technology has always been an ally to the ideas. But technology alone, without a vision, I don’t think ever got that far. So, without ideas, technology is a little bit like an orphan.

And I would argue that the ideas are still more powerful than the technologies because if you think about how we think today, how we behave, we live in a world that was shaped by thinkers a few thousand years ago, no matter where we live. So, in the West, we are shaped by thinkers that lived in Greece 2, 3,000 years ago. We are shaped by beliefs that come from religions that were created a few thousand years ago. In Asia, the cultures have been shaped by people who lived also 2, 3,000 years ago. And the technology, which has changed enormously in every way, East or West, may have changed the way we live, but not that much. In the way we behave with each other, the way we use the technologies, those still reflect thinking and cultures and ideas that were developed 2, 3,000 years ago. So, I would almost argue the ideas are more powerful than anything. The technologies are an ally, but they themselves don’t change the way we think, behave, feel, at least not yet.

It’s possible that certain technologies will truly… and this is what’s so interesting about living today, that I think some technologies will help us transform who we are as humans, potentially transform the nature of our species, maybe help us create a different species. That could be. But up to now, in my mind at least, the ideas shape how we live; the technologies help us live maybe more deeply or longer, but still in a way that reflects the same ideas that were created a few thousand years ago. So, the ideas are still the most important. So, going back to the question, do we need, in essence, a philosophy for technology? I would say yes. Technologies are becoming more and more powerful. They are powerful. And the way you use technology will reflect ideas and culture. So, you’ve got to get the culture and the ideas right because the technologies are getting more and more powerful.

Lucas Perry: So, to me, getting the culture and the ideas right sounds a lot like wisdom. I’m curious if you would agree with that. And I’m also curious what your view might be on why it is that the pace of the power of our technology seems to rapidly outpace the progress of the quality of our wisdom and culture and ideas. Because it seems like today we have a situation where we have ideas that are thousands of years old that are still being used in modern society. And some of those may be timeless, but perhaps some of them are also outdated.

Nicolas Berggruen: Ideas, like everything else, evolve. But my feeling is that they evolve actually quite slowly, much more slowly than possible. But I think we, as humans, we’re still analog, even though we live increasingly in a digital world. Our processes and our ability to evolve are analog and are still fairly slow. So, the changes that happened over the last few millennia, which are substantial, even in the world of ideas, things like the Enlightenment and other important changes, happened in a way that was very significant. They changed entirely the way we behave, but it took a long time. And technology helps us, but it’s so part of our lives, that there’s a question at some point, are we attached to the technology? Meaning, are we driving the car, or is the car driving us? And we’re at the cusp of this, potentially. And it’s not necessarily a bad thing, but, again, do we have wisdom about it? And can we lose control of the genie, in some ways?

I would argue, for example, that social media networks have become very powerful. And the creators of them, even if they control the networks, and they still do in theory, have really lost control of them. The networks really have a life of their own. Could you argue the same for other times in history? I think you could. I mean, if you think of Martin Luther and the Gutenberg Bible, you could say, “Well, that relates ideas and technologies.” And in a way that was certainly less rapid than the internet, the technology, in the case of printed material, really helped spread an idea. So, again, I think that the two come together. And one helps the other. In the example I just gave, you had an idea; the technology helped.

Here, what’s the idea behind, let’s say, social networks? Well, giving everybody a voice, giving everybody connectivity. It’s a great way to democratize access and a voice. Have we thought about the implications of that? Have we thought about a world where, in theory, everyone on earth has the same access and the same voice? Our political institutions, our cultures really are only now dealing with it. We didn’t think about it ahead. So, we are catching up in some ways. The idea of giving every individual an equal voice, maybe that’s a reflection of an old idea. That’s not a new idea. The instrument, meaning let’s say social media, is fairly new. So, you could say, “Well, it’s just a reflection of an old idea.”

Have we thought through what it means in terms of our political and cultural lives? Probably not enough. So, I would say half and half in this case. The idea of the individual is not new. The technology is new. The implications are something that we’re still dealing with. You could also argue that this is the nature of anything new: an idea, in this case helped by technology, and we don’t really know where the journey leads us. It was a different way of thinking. It became incredibly powerful. You didn’t know at the beginning how powerful it would become and where it would lead. It did change the world.

But it’s not the technology that changed the world; it’s the ideas. And here, the technology, let’s say social networks, is really an enabler. The idea is still the individual. And the idea is democratizing access and voices, putting everybody on the same level playing field, but also empowering a few voices, again, because of the network. So, it’s this dance between technology and humans and the ideas. At the end, we have to know that technology is really just a tool, even though some of these tools are becoming potential agents themselves.

Lucas Perry: Yeah. The idea of the tools becoming agents themselves is a really interesting idea. Would you agree with the characterization then that technology without the right ideas is orphaned, and ideas without the appropriate technology is ineffectual?

Nicolas Berggruen: Yes, on both.

Lucas Perry: You mentioned that some of the technology is becoming an agent in and of itself. So, it seems to me then that the real risk there is that if that technology is used or developed without the wisdom of the appropriate ideas, that that unwise agentive technology amplifies that lack of wisdom because being an agent, its nature is to self-sustain and to propagate and to actualize change in the world of its own accord. So, it seems like the fact that the technology is becoming more agentive is like a calling for more wisdom and better ideas. Would you say that that’s fair?

Nicolas Berggruen: Absolutely. So, technology in the form of agents is becoming more powerful. So, you would want wisdom, you would want governance, you would want guidance, thinking, intention behind those technologies, behind those agents. And the obvious ones that are coming, everything around AI. But you could say that some of the things we are living with are already agents, even though they may not have been intended as agents. I mentioned social networks.

Social networks, frankly, are living organisms. They are agents. And no matter if they’re owned by a corporation and that corporation has a management, the networks today are almost like living creatures that exist for themselves or exist as themselves. Now, can they be unplugged? Absolutely. But very unlikely that they’ll be unplugged. They may be modified. And even if one dies, they’ll be replaced most likely. Again, what I’m saying is that they’ve become incredibly powerful. And they are like living organisms. So, governance does matter. We know very well from these agents that they are amazingly powerful.

We also know that we don’t know that much about what the outcomes are, where the journey may lead and how to control them. There’s a reason why in some countries, in some cultures, let’s say China or Russia or Turkey, there’s been a real effort from a standpoint of government to control these networks because they know how powerful they are. In the West, let’s say in the US, these networks have operated very freely. And I think we’ve lived with the real ramifications as individuals. I don’t know what the average engagement is for individuals, but it’s enormous.

So, we live with social networks. They’re part of us; we are part of them equally. And they’ve empowered political discourse and political leaders. I think that if these networks hadn’t existed, certain people may not have gotten elected. Certainly, they wouldn’t have gotten the voice that they got. And these are part of the unintended consequences. And it’s changed the nature of how we live.

So, we see it already. And this is not AI, but it is in my mind. Social networks are living creatures.

Lucas Perry: So, following up on this idea of technology as agents and organisms. I’ve also heard corporations likened to organisms. They have a particular incentive structure and they live and die by their capacity to satisfy that incentive, which is the accumulation of capital and wealth.

In terms of AI, you were at Beneficial AI 2017, and I’m curious what your view is of how ideas play a role in value alignment with regard to technology that is increasingly agentive, specifically artificial intelligence. There’s a sense that we’re training and imbuing AI systems with the appropriate values and ideas and objectives, yet at the same time dealing with something that is fundamentally alien, given the nature of machine learning and deep learning. And so, yeah, I’m curious about your perspective on the relationship between ideas and AI.

Nicolas Berggruen: Well, you mentioned corporations. And corporations are very different than AIs, but at the same time, the way you mentioned corporations I think makes them very similar to AI. And they are a good example because they’ve been around for quite a while. Corporations, somebody from the outside would say, “Well, they have one objective: to accumulate capital, make money.” But in reality, money is just fuel. It’s just, if you want, the equivalent of energy or blood or water. That’s all it is. Corporations are organisms. And their real objective, as individual agents, if you want, as sort of creatures, is to grow, expand, survive. And if you look at that, I would say you could look at AIs very similarly.

So, any artificially intelligent agent, ultimately any robot, if you put it in an embodied form, if they’re well-made, if you want, or if they’re well-organized, if they’re going to be truly powerful, a bit like a corporation is really very powerful and it’s helped progress, it’s helped… if you think capitalism has helped the world, in that sense, it’s helped. Well, strong AIs will also have the ability over time to want to grow and live.

So, going back to corporations. They have to live within society and within a set of rules. And those change. And those adapt to culture. So, there’s a culture. When you look at some of the very old corporations, take the East India Company, which employed slaves. That wouldn’t be possible today for the East India Company. Fossil fuels were really the allies of some of the biggest corporations that existed about 100 years ago, even 50 years ago. Probably not in the future. So, things change. And culture has an enormous influence. Will it have the same kind of influence over AI agents? Absolutely.

The question is, as you can see from criticism of corporations, some corporations are thought to have become too powerful, not under the control or governance of anyone, any country, or anything supranational, if you want. I think the same thing could happen to AIs. The only difference is that I think AIs could become much more powerful because they will have the ability to access data. They’ll have the ability to self-transform in a way that hasn’t really been experienced yet. And we don’t know how far… it’ll go very far. And you could imagine agents being able to access all of the world’s data in some ways.

And the question is, what is data? It’s not just information the way we think of information, which is maybe sort of knowledge that we memorize, but it’s really an understanding of the world. This is how we, as creatures, as animals, are able to function: we understand the world. Well, AIs, if they really get there, will sort of understand the world. And the question then is, can they self-transform? And could they, and this is the interesting part, begin to think and develop instincts and maybe access dimensions and senses that we as humans have a tough time accessing? I would speculate that, yes.

If you look at AlphaGo, which is the DeepMind Google AI that beat the best Go players, the way that it beat the best Go players, and this is a complicated game that’s been around for a long time, is really by coming up with moves and strategies and a way of playing that the best human players over thousands of years didn’t think of. So, a different intuition, a different thinking. Is it a new dimension? Is it having access to a new sense? No, but it’s definitely a very creative, unexpected way of playing. To me, it’s potentially a window into the future, where AIs and machines become in essence more creative and access areas of thinking, creativity and action that we humans don’t see. And the question is, can it even go beyond?

I’m convinced that there are dimensions and senses that we, humans, don’t access today. It’s obvious. Animals don’t access what we access. Plants don’t access what animals do. So, there was change in evolution. And we are certainly missing dimensions and senses that exist. Will we ever access them? I don’t know. Will AIs help us access them? Maybe. Will they access them on their own by somehow self-transforming? Potentially. Or are there agents that we can’t even imagine, who we have no sense of, that are already there? So, I think all of this is a possibility. It’s exciting, but it’ll also transform who we are.

Lucas Perry: So, in order to get to a place where AI is that powerful and has senses and understanding that exist beyond what humans are capable of, how do you see the necessity of wisdom and ideas in the cultivation and practice of building beneficial AI systems? So, I mean, industry incentives and international racing towards more and more powerful AI systems could simply ruin the whole project because everyone’s just amplifying power and taking shortcuts on wisdom or ideas with which to manage and develop the technology. So, how do you mitigate that dynamic, that tendency towards power?

Nicolas Berggruen: It’s a very good question. And interestingly enough, I’m not sure that there are many real-world answers or that the real-world answers are being practiced, except in a way that’s self-disciplined. What’s interesting in the West is that government institutions are way, way behind technology. And we’ve seen it even in the last few years when you had hearings in Washington, D.C. around technology, how disconnected or maybe how naive and uninformed government is compared to the technologists. And the technologists have, frankly, an incentive and also an ethos of doing their work away from government. It gives them more freedom. Many of them believe in more freedom. And many of them believe that technology is freedom, almost blindly believing that any technology will help free us as humans. Therefore, technology is good, and that we’ll be smart enough or wise enough or self-interested enough not to mishandle the technology.

So, I think there’s a true disconnect between the technologies and the technologists that are being empowered and sort of the world around them, because the technologists, and I believe it, at least the ones I’ve met, and I’ve met many, I think overall are well-intended. I also think they’re naive. They think whatever they’re doing is going to be better for humanity without really knowing how far the technology might go or in whose hands the technology might end up. I think that’s what’s happening in the West. And it’s happening mostly in the US. I think other parts of the West are just less advanced technologically. When I say the US, I include some of the AI labs that exist in Europe that are owned by US actors.

On the other side of the world, you’ve got China that is also developing technology. And I think there is probably a deeper connection, that’s my speculation, a deeper connection between government and the technologies. So, I think they’re much more interested and probably more aware of what technology can do. And I think they, meaning the government, are going to be much more interested and focused on knowing about it and potentially using it. The questions are still the same. And that leads to the next question. If you think of beneficial AI, what is beneficial? In what way, and to whom? And it becomes very tricky. Depending on cultures and religions and cultures that are derivatives of religions, you’re going to have a totally different view of what is beneficial. And are we talking about beneficial just to us humans or beyond? Who is it beneficial for? And I don’t think anybody has answered these questions.

And if you are one technologist in one lab or little group, you may have a certain ethos, culture, background. And you’ll have your own sense of what is beneficial. And then there might be someone on the other side of the world who’s developing equally powerful technology, who’s going to have a totally different view of what’s beneficial. Who’s right? Who’s wrong? I would argue they’re both right. And they’re both wrong. But they’re both right to start with. So, should they both exist? And will they both exist? I think they’ll both exist. I think it’s unlikely that you’re going to have one that’s dominant right away. I think they will co-exist, potentially compete. And again, I think we’re early days.

Lucas Perry: So, reflecting on Facebook as a kind of organism, do you think that Mark Zuckerberg has lost control of Facebook?

Nicolas Berggruen: Yes and no. No, in the sense that he’s the boss of Facebook. But yes, in the sense that I doubt that he knew how far Facebook and other, I would say, engines of Facebook would reach. I don’t think he or anyone knew.

And I also think that today, Facebook is a private company, but it’s very much under scrutiny, not just from governments, but actually from its users. So, you could say that the users are just as powerful as Mark Zuckerberg, maybe more powerful. If tomorrow morning, Mark Zuckerberg turned Facebook or Instagram or WhatsApp off, what would happen? If they were tweaked or changed in a way that’s meaningful, what would happen? It’s happening all the time. I don’t mean the switch-off, but the changes. But I think the changes are tested. And I think the users at the end have an enormous amount of influence.

But at the end of the day, the key is simply that the engine, or this kind of engine, has become so powerful that it’s not in the hands of Mark Zuckerberg. And if he didn’t exist, there would be another Facebook. So, again, the argument is that even though one attributes a lot of these technologies to individuals, a little bit like ideas are attributable to individuals and they become the face of an idea, and I think that’s powerful, that’s incredibly powerful even with religions, the ideas, and the technologies, are way beyond the founders. They reflect the capability of technology at the time when they were developed. There are a number of different social networks, not just one. And they reflect a culture or a cultural shift, in the case of ideas, of religions.

Lucas Perry: So, I have two questions for you. The first is, as we begin to approach artificial general intelligence and superintelligence, do you think that AI labs and their leaders, like Mark Zuckerberg, may very well lose control of the systems and the kind of inertia they have in the world, like the kind of inertia that Facebook has as a platform for its own continued existence? That’s one question. And then the second is that about half the country is angry at Facebook because it deplatformed the president, among other people. And the other half is angry because it was able to manipulate enough people through fake news and misinformation and allowed Russian interference in advertising certain ideas.

And this makes me think of the Carl Jung quote from the beginning of the podcast about there not being adequate protection against psychic epidemics, kind of like there not being adequate protection against collectively bad ideas. So, I’m curious if you have any perspective, both on the leaders of AI labs losing control, and then on maybe some antivirus or anti-malware for the human mind, if such a thing exists.

Nicolas Berggruen: So, let’s start with the second question, which is the mind and mental health. Humans are self-aware. Very self-aware. And who knows what’s next? Maybe another iteration, even more powerful. So, our mental health is incredibly important.

We live in our minds. We live physically, but we really live in our minds. So, how healthy is our mind? How healthy is our mental life? How happy or unhappy? How connected or not? I think these are essential questions in general. I think that in a world where technology and networks have become more and more powerful, that’s even more important for the health of people, nations, countries and the planet at the end. So, addressing this seems more important than ever. I would argue that it’s always been important. And it’s always been an incredibly powerful factor, no matter what. Think of religious wars. Think of crusades. They are very powerful sort of mental commitments. You could say diseases, in some cases, depending on who you are.

So, I would say the same afflictions that exist today that make a whole people think something or dream something healthy or maybe in some cases not so healthy, depressed, or the opposite, euphoric or delusional, these things have existed forever. The difference is that our weapons are becoming more powerful. This is what happened half a century ago or more with atomic power. So, our technology is becoming more powerful. The next one obviously is AI. And with it, I also think that our ability to deal with some of these is also greater. And I think that’s where we have, on one side, a threat, but, on the other side, I think an opportunity. And you could say, “Well, we’ve always had this opportunity.” And the opportunity is really, going back to your first question, around wisdom. It’s really mental. We can spend time thinking these things through, spend time with ourselves. We can think through what makes sense. Let’s say what’s moral in a broad sense. So, you could say that’s always existed.

The difference, in terms of mental health, is that we might have certain tools today that we can develop, that can help us be better. I’m not saying that it will happen and I’m not saying that there’s going to be a pill for this, but I think we can be better and we are going to develop some ways to become better. And these are not just around AI, but around biotechnology. And we’ll be able to affect our mental states. And we will be able to do it through… and we do already through drugs, but there’ll be also implants. There’ll be maybe editing. And we may one day become one with the AIs, at least mentally, that we develop. So, again, I think we have the potential of changing our mental state. And you could say for the better, but what is better? That goes back to the question of wisdom, the question of who do we want to be, and what constitutes better.

And to your other question, have the developers or the owners of some of the AI tools, do they control them? Will they continue to control them? I’m not sure. In theory, they control them, but you could argue, in some cases, “Well, they may have the technology, the IP. And in some cases, they have so much data that is needed for the AIs that there’s a great synergy between the data and the technology.” So, you need it almost in big places like a Facebook or Google or Tencent or an Alibaba. But you could very well say, “The technology’s good enough. And the engineers are good enough. You can take it out and continue the project.” And I would argue that at some point if the agents are good enough, the agents themselves become something. They become creatures that, with the right help, will have a life of their own.

Lucas Perry: So, in terms of this collective mental health aspect, how do you view the project of living an examined life or the project of self-transformation, and the importance of this approach to building a healthy civilization that is able to use and apply wisdom to the creation and use of technology? And when I say “examined life,” I suppose I mean it a bit in the sense the Greeks used it.

Nicolas Berggruen: The advantage that humans have is that we can examine ourselves. We can look at ourselves. And we can change. And I think that one of the extraordinary things about our lives, and certainly I’ve witnessed that in my life, is that it’s a journey. And I see it as a journey of becoming. And that means change. And if you are willing to self-examine and if you are willing to change, not only will life be more interesting and you will have a richer, fuller life, but you will also probably get to a place that’s potentially better over time. For sure, different. And at times, better.

And you can do this as an individual. You can do that as many individuals. And as we have longer lives now, we have the opportunity to do it today more than ever. We also have not only longer lives, but longer lives where we can do things like what we are doing now, discussing these things. At the time of Socrates, few people could do it, and now many people can. And I think that that trend will continue. So, the idea of self-transformation, of self-examination, I think, is very powerful. And it’s an extraordinary gift.

My favorite book today is still a book by Hermann Hesse called Siddhartha, which, the way I look at it, one way to read it is really as a journey of self-transformation through chapters of life, where each chapter is not necessarily an improvement, but each chapter is part of living and each chapter is part of what constitutes maybe a full life. And if you look at Siddhartha, Siddhartha had totally different lives all within one. And I think we have this gift given to us to be able to do a lot of it.

Lucas Perry: Do you think, in the 21st century, that given the rapid pace of change, of the power of our technology, that this kind of self-examination is more important than ever?

Nicolas Berggruen: I think it’s always important. It’s always been important as a human because it makes our lives richer on one side, but it also helps us deal with ourselves and our excitement, but also our fears. In the 21st century, I think it’s more important than ever because we have more time, not only in length, but also in quantity, within a quantum of time. And also because our effect on each other is enormous. Our effect on the planet is enormous. By engaging in social networks, by doing a podcast, by doing almost anything, you influence so many others, and not just others as humans, but you influence almost everything around you.

Lucas Perry: So, in this project of living an examined life in the 21st century, who do you take most inspiration from? Or who are some of the wisest people throughout history who you look to as examples of living a really full human life?

Nicolas Berggruen: Right. So, what is, let’s call it the best life, or the best example of an examined life? And I would argue that the best example that I know of, since I mentioned it, even though it’s an incredibly imperfect one, is the life, at least the fictional life, in the book of Hermann Hesse, Siddhartha, where Siddhartha goes through different chapters, in essence different lives, during his life. And each one of them is exciting. Each one of them is a becoming, a discovery. And each one of them is very imperfect. And I think that reflects the life of someone who makes it a mission to understand and to find themselves or find the right life. And it tells you how difficult it is. It also tells you how rich it can be and how exciting it can be and that there is no right answer.

On the other hand, there are people who may be lucky enough who never question themselves. And they may be the ones who live actually the best lives because, by not questioning themselves, they just live a life almost as if they were dealt a set of cards, and that’s the beginning and the end. And they may be the luckiest of all, or the least lucky because they don’t get to live all the potential of what a human life could be.

So, it’s a long-winded answer to say I don’t think there is an example. I don’t think there is a model life. I think that life is discovery, in my mind, at least for me. It’s living, meaning the experience of life, the experience of change, allowing change. And that means there will never be perfection. You also change. The world changes. And all of these become factors. So, you don’t have a single answer. And I couldn’t point to a person who is the best example.

That’s why I go back to Siddhartha because the whole point of the story of Siddhartha, at least the story by Hermann Hesse, is that he struggled going through different ways of living, different philosophies, different practices. All valid. All additive. And even the very end in the story, where in essence before his death he becomes one with the world is actually not the answer. So, there is no answer.

Lucas Perry: Hopefully, we have some answers to what happens to some of these technological questions in the 21st century. So, when you look at our situation with artificial intelligence and nuclear weapons and synthetic biology and all of the really powerful emerging tech in the 21st century, what are some ideas that you feel are really, really important for this century?

Nicolas Berggruen: I think what we’ve discovered through millennia now, but also through what the world looks like today, which is more and more the coexistence, hopefully peaceful coexistence, of very, very different cultures, is that we have two very powerful factors. We have the individual and the community. And what is important, and it sounds almost too simple and too obvious, but I think very difficult, is to marry the power, the responsibilities of the individual with those of community. And I’m mentioning it on purpose because these are totally different philosophies, totally different cultures. I see that there’s always been a tension between those two.

And the technologies you’re talking about will empower individual agents even more. And the question is, will those agents become sort of singular agents, or will they become agents that care about others or in community with others? And the ones who have access to these agents or who control these agents or who develop these agents will have enormous influence, power. How will they act? And will they care about themselves? Will they care about the agents? Will they care about the community? And which community? So, more than ever, I think we have those questions. And in the past, I think philosophers and religious thinkers had a way of dealing with it, which was very constructive in the sense that they always took the ideas to a community or the idea of a community, living the principles of an idea one way or another. Well, what is it today? What is a community today? Because the whole world is connected. So, some of these technologies are technologies that will have an application way beyond a single culture and a single nation or a single system.

And we’ve seen, as an example, what happened with the COVID pandemic, which, in my mind, accelerated every trend and also made every sort of human behavior and cultural behavior more prevalent. And we can see that with the pandemic, technology answered pretty quickly. We have vaccines today. Capital markets also reacted quickly. Funded these technologies. Distributed them to some extent. But where things fell down was around culture and governance. And you can see that everybody really acted for themselves in very different ways, with very little cooperation. So, at a moment when you have a pandemic that affects everyone, did we have global cooperation? Did we have sharing of information, of technology? Did we have global practices? No. Because we didn’t, we had a much worse health crisis, incredibly unevenly distributed. So, health, but also economic and mental health outcomes, were very different depending on where you were.

So, going back to the question of the powerful technologies that are being developed, how are we going to deal with them? When you look at what happened recently, and the pandemic is obviously a negative event, but powerful event. You could say it’s technology. It’s a form of technology that spread very quickly, meaning the virus. Well, look at how we behaved globally. We didn’t know how to behave. And we didn’t behave.

Lucas Perry: It seems like these really powerful technologies will enable very few persons to accumulate a vast amount of wealth and power. How do you see solutions to this problem, so that wealth is still shared and distributed more evenly across the rest of humanity, as technologies increasingly empower a few individuals to have control and power over that wealth and technology?

Nicolas Berggruen: In my mind, you’re right. I think that the concentration of power, the concentration of wealth will only be helped by technology. With technology, with intellectual property, you create more power and more wealth, but you need less and less people and less and less capital. So, how do you govern it? And how do you make it fair?

Some of the thinking that I believe in and that we’ve also been working on at the Institute is the idea of sharing the wealth, but sharing the wealth from the beginning, not after the fact. So, our idea is simply, from an economic standpoint, as opposed to redistribution, which is redistributing the spoils through taxes, which is not only toxic, but means you’re transferring from the haves to the have-nots. So, you always have a divide.

Our thinking is to make sure that everybody has a piece of everything from the beginning. Meaning, let’s say tomorrow Lucas starts a company, and that company is yours. Well, as opposed to it being all yours, maybe it’s 80% yours, and 20% goes to a fund for everyone. And these days, you can attribute it to everyone as individuals through technology, through blockchain. You can give a piece of Lucas’ company to everyone on paper. So, if you become very successful, everybody will benefit from your success. It won’t make a difference to you because whether you have 80% or 100%, you’ll be successful one way or another, but your success will be the success of everyone else. So, everyone is at least in the boat with Lucas’ success. And this kind of thinking I think is possible, and I think actually very healthy because it would empower others, not just Lucas. And so, the idea is very much, as technology makes wealth even more uneven, make sure it’s shared. And as opposed to it being shared through redistribution, make sure everybody is empowered from the beginning, meaning, has a chance to access it economically or otherwise.
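
As a purely illustrative sketch of the arithmetic behind this universal basic capital idea: the 80/20 split comes from the example above, while the function names, the citizen count, and the per-citizen payout below are hypothetical details added only for illustration.

```python
# Illustrative sketch of universal basic capital (UBC):
# a founder keeps most of a new company's equity, and a fixed slice goes into
# a citizens' fund whose value is attributable to every individual.
# The share size, citizen count, and valuation below are hypothetical.

CITIZENS_FUND_SHARE = 0.20  # 20% of every new company goes to the shared fund

def split_equity(total_shares: int) -> dict:
    """Split a new company's shares between the founder and the citizens' fund."""
    fund_shares = int(total_shares * CITIZENS_FUND_SHARE)
    return {"founder": total_shares - fund_shares, "citizens_fund": fund_shares}

def per_citizen_stake(company_valuation: float, num_citizens: int) -> float:
    """Value of the fund's stake in one company attributable to each citizen."""
    return company_valuation * CITIZENS_FUND_SHARE / num_citizens

if __name__ == "__main__":
    print(split_equity(1_000_000))              # {'founder': 800000, 'citizens_fund': 200000}
    print(per_citizen_stake(1e9, 330_000_000))  # each citizen's slice of a $1B company, roughly $0.61
```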

The issue still remains governance. If whatever you’re creating is the most powerful AI engine in the world, what happens to it? Besides the economic spoils, which can be shared the way I described, what happens to the technology itself, the power of the technology itself? How does that get governed? And I think that’s very early days. And nobody has a handle on it, because if it’s yours, you, Lucas, will design it and design the constraints or lack of constraints of the engine. And I do think that has to be thought through. It can’t just be negative; it also has to be positive. But they always come together. Nuclear power creates energy, which is beneficial, and empowers weapons, which are not. So, every technology has both sides. And the question is you don’t want to kill the technology out of fear. You also don’t want to empower the technology to where it becomes a killer. So, we have to get ahead of thinking these things through.

I think a lot of people think about it, including the technologists, the people who develop it. But not enough people spend time on it and certainly not across disciplines and across cultures. So, technologists and policymakers and philosophers and humans in general need to think about this. And they should do it in the context of let’s call it Silicon Valley, but also more old-fashioned Europe, but also India and China and Africa, so that it includes some of the thinking and some of the cultural values that are outside of where the technology is developed. That doesn’t mean that the other ideas are the correct ones. And it shouldn’t mean that the technology should be stopped. But it does mean that the technology should be questioned.

Lucas Perry: It seems like the crucial question is how do we empower people in the century in which basically all of their power is being transferred to technology, particularly the power of their labor?

Right. So, you said that taxing corporations and then using that money to provide direct payments to a country’s population might be toxic. And I have some sense of the way in which that is enfeebling, though I have heard you say in other interviews that you see UBI as a potential tool, but not as an ultimate solution. And so, it seems like this, what you call universal basic capital, where, say, 20% of my company is collectively owned by the citizens of the United States, puts wealth into the pockets of the citizenry, rather than them being completely disconnected from the companies and not having any ownership in them.

I’m curious whether this really confers the kind of power that would be really enabling for people, because the risk seems to be that people lose their ability to perform work and to have power at workplaces, and then they become dependent on something like UBI. And then the question is whether or not democracy is representative enough of their votes to give them sufficient power and say over their lives, what happens, and how technology is used.

Nicolas Berggruen: Well, I think there’s a lot of pieces to this. I would say that the… let’s start with the economic piece. I think UBC, meaning universal basic capital, is much more empowering and much more dignified than universal basic income, UBI. UBI is, in essence, a handout to even things out. But if you have capital, you have participation and you’re part of the future. And very importantly, if you have a stake in all the economic agents that are growing, you really have a stake, not only in the future, but in the compounding, in terms of value, in terms of equity, of the future. You don’t have that if you just get a handout in cash.

The reason why I think that one doesn’t exclude the other is that you still need cash to live. So, the idea is that you could draw against your capital accounts for different needs, education, health, housing. You could start a business. But at times you just need cash. If you don’t have universal basic capital, you may need universal basic income to get you through it. But if it’s well done, I think universal basic capital does the job. That’s on the economic side.

On the side of power and on the side of dignity, there will be a question, because I think technology will allow us, that’s the good news, to work less and less in the traditional way. So, people are going to have more and more time for themselves. That’s very good news. 100 years ago, people used to have to work many more hours in a shorter life. And I think that the trend has gone the other way. So, what happens to all the free time? That’s a real question.

And in terms of power, well, we’ve seen it through centuries, but increasingly today, power, and not just money, but power, is more concentrated. So, the people who develop or who control, let’s say, the technological engines that we’ve been talking about really have much more power. In democracies, that’s really balanced by the vote of the people because even if 10 people have much more power than 100 million people, and they do, the 100 million people do get to vote and do get to change the rules. So, it’s not like the 10 people drive the future. They are in a very special position to create the future, but in reality, they don’t. So, the 100 million voters still could change the future, including for those 10 people.

What’s interesting is the dynamics in the real world. You can see it with big tech companies. This is ironic. Big tech companies in the West are mainly in the US. And the bosses of the big tech companies, let’s say Google or Facebook or Amazon, really haven’t been disturbed. In China, interestingly enough, at Alibaba, Jack Ma was removed. And it looks like there’s a big transition now at ByteDance, which is the owner of TikTok. So, you can see, interestingly enough, that in democracies, where big changes could be made because voters have the power, they don’t make the changes. And in autocracies, where the voters have no power, the changes actually have been made. It’s an ironic fact.

I’m not saying it’s good. I am not saying that one is better than the other. But it’s actually quite interesting that in the case of the US, Washington frankly has had no influence, voters have had pretty much no influence, while at the other side of the world, the opposite has happened. And people will argue, “Well, we don’t want to live in China where the government can decide anything any day.” But going back to your question, we live in an environment where even though all citizens have the voting power, it doesn’t seem to translate to real power and to change. Voters, through the government or directly, actually seem to have very little power. They’re being consulted in elections every so often. Elections are highly polarized, highly ideological. And are voters really being heard? Are they really participants? I would argue only in a very, well, manipulated way.

Lucas Perry: So, as we’re coming to the end here, I’m curious if you could explain a future that you fear and also a future that you’re hopeful for, given the current trajectory of the race between the power of our technology and the wisdom with which we manage it.

Nicolas Berggruen: Well, I think one implies the other. And this is also a philosophical point. I think a lot of people are thinking sort of isolates one or the other. I believe in everything being connected. It’s a bit like if there is light, that means there is dark. And you’ll say, “Well, I’m being a little loopy.” It’s a bit like…

Lucas Perry: It’s duality.

Nicolas Berggruen: Yeah. And duality exists by definition. And I would say, in the opportunities that exist in front of us, what makes me optimistic… and my feeling is if you live, you have no choice but to be an optimist. But what makes me optimistic is that we can, if we want, deal with things that are planetary and global issues. We have technologies that are going to hopefully make us healthier, potentially happier, and make our lives more interesting. That gives us the chance, but also the responsibility, to use them well. And that’s where the dangers come in.

We have, for the first time, two totally different political and cultural powers and systems that need to coexist. Can we manage it?

Lucas Perry: China and the US.

Nicolas Berggruen: Yes. China and the US. Technologically, between AI, gene editing, quantum computing, we are developing technologies that are extraordinary. Will we use them for the common good and wisely? We have a threat from climate, but we also have an opportunity. This opportunity is to address those issues, a little bit like what happened with the pandemic, to create, sort of, the vaccines for the planet, if you want, because we are forced to do it. But then the question is, do we distribute them correctly? Do we do the fair thing? Do we do it in a way that’s intelligent and empowering? So, the two always come together. And I think we have the ability, if we’re thoughtful and dedicated, to construct, let’s say, a healthy future.

If you look at history, it’s never been in a straight line. And it won’t be. So, there’ll be, I hate to say, terrible accidents and periods. But over time I think our lives have become richer, more interesting, hopefully better. And in that sense, I’m an optimist. The technologies are irresistible. So, we’ll use them and develop them. So, let’s just make sure that we do it in a way that focuses on what we can do with them. And then, what are the minimums, in terms of individuals, economically, in terms of power, voice and protection? And what are the minimums in terms of cooperation between countries and cultures, and in terms of addressing planetary issues that are important and that have become more front and center today?

Lucas Perry: All right. So, as we wrap up here, is there anything else that you’d like to share with the audience? Any final words to pass along? Anything you feel like might be left unsaid?

Nicolas Berggruen: I think your questions were very good. And I hope I answered some of them. I would say that the journey for us, humans, as a species, is only getting more exciting. And let’s just make sure that we are… that it’s a good journey, that we feel that we are at times the conductor and the passenger both, not so bad to be both, in a way that you could say, “Well, listen, we’re very happy to be on this journey.” And I think it very much does depend on us.

And going back to your very first question, it depends on some of our wisdom. And we do have to invest in wisdom, which means we have to invest in our thinking about these things because they are becoming more and more powerful, not just in the machines. We need to invest in the souls of the machines. And those souls are our own souls.

Lucas Perry: I really like that. I think that’s an excellent place to end on.

Nicolas Berggruen: Well, thank you, Lucas. I appreciate it. Very good questions. And I look forward to listening.

Lucas Perry: Yeah. Thank you very much, Nicolas. It was a real pleasure, and I really appreciated this.

Bart Selman on the Promises and Perils of Artificial Intelligence

  • Negative and positive outcomes from AI in the short, medium, and long-terms
  • The perils and promises of AGI and superintelligence
  • AI alignment and AI existential risk
  • Lethal autonomous weapons
  • AI governance and racing to powerful AI systems
  • AI consciousness

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Bart Selman and explores his views on possible negative and positive futures with AI, the importance of AI alignment and safety research in computer science, facets of national and international AI governance, lethal autonomous weapons, AI alignment and safety at the Association for the Advancement of Artificial Intelligence, and a little bit on AI consciousness.

Bart Selman is a Professor of Computer Science at Cornell University, and previously worked at AT&T Bell Laboratories. He is a co-founder of the Center for Human-Compatible AI, and is currently the President of the Association for the Advancement of Artificial Intelligence. He is the author of over 90 publications and has a special focus on computational and representational issues. Professor Selman has worked on tractable inference, knowledge representation, stochastic search methods, theory approximation, knowledge compilation, planning, default reasoning, and the connections between computer science and statistical physics.

And so without further ado, let’s get into our conversation with Bart Selman.

So to start things off here, I’m curious if you can share with us an example of a future, or a few futures, that you’re really excited about, and an example of a future, or a few futures, that you’re quite nervous about or which you fear most.

Bart Selman: Okay. Yeah. Thank you. Thank you for having me. So let me just start with an example of a future in the context of AI that I’m excited about: the new capabilities that AI brings have the potential to make life for everyone much easier and much more pleasant. I see AI as complementing our cognitive capabilities. So I can envision household robots or smart robots that assist people living in their houses, living independently longer, including doing kinds of work that are sort of monotonous and not that exciting for humans to do. So, AI has the potential to complement our capabilities and to hugely assist us in many ways, including in areas you might not have thought of, like, for example, policymaking and governance. AI systems are very good at thinking in high-dimensional terms, about trade-offs between many different factors.

For humans, it’s hard to actually think in multi-dimensional trade-offs. We tend to boil things down to one or two central points and argue about a trade-off in one or two dimensions. Most policy decisions involve 10 or 20 different criteria that may conflict or be somewhat contradictory, and in exploring that space, AI can assist us in finding better policy solutions and better governance for everybody. So I think AI has this tremendous potential to improve life for all of us, provided that we learn to share these capabilities and that we have policies and mechanisms in place to make this a positive experience for humans. And to draw a parallel with physical labor: machines have freed us from heavy-duty physical labor. AI systems can help us with the sort of monotonous cognitive labor, or, as I mentioned, give us household robots and other tools that will make our life much better. So that’s the positive side. Should I continue with the negative?

Lucas Perry: So before we get into the negative, I’m curious if you could explain a little bit more specifically what these possible positive futures look like on different timescales. So you explained AI assisting with cognitive capabilities and with monotonous jobs. And so, over the coming decades, it will begin to occupy some of these roles increasingly, but there’s also the medium term, the long term, and the deep future in which the positive fruits of AI may come to bear.

Bart Selman: Yeah. So, that’s an excellent point. One thing is that, in any transition, these new cognitive capabilities that will help us live better lives will also disrupt the labor force and the workforce. And this is a process that I can see play out over the next five, 10, maybe 15 years: a significant change in the workforce. And I am somewhat concerned about how that will be managed, because basically I feel we are moving to a future where people would have more free time. We’d have more time to be creative, to travel and to live independently. But of course, everybody needs to have the resources to do that. So there is an important governance issue of making sure that, in this transition to a world with more leisure time, we find ways of having everybody benefit from this new future.

And this is really, I think, a 5-, 10-, 15-year process that we’re faced with now, and it’s important that it is done right. Further out in the future, my own view of AI is that machines will excel at certain specific tasks, as we’ve seen very much with AlphaGo and AlphaZero. So, very good at those specific tasks, and those systems will come in first: self-driving cars, specialized robots for assisting humans. So we’ll first get these specialized capabilities. Those are not yet general AI capabilities. That’s not AGI. So the AGI future, I think, is more like 20, 25 years away.

So we first have to find ways of dealing with and incorporating these specialized capabilities, which are going to be exciting. As a scientist, you know, I already see AI transforming the way we approach science and do scientific discovery, and really complementing our ways of working. I hope people get excited in the areas of creativity, for example, with computers or AI systems bringing a new dimension to these types of human activities, which will actually be exciting for people to be part of. And that’s an aspect that we’ve started to see emerge, but people are not fully aware of it yet.

Lucas Perry: So we have AI increasingly moving its way into specialized, kind of narrow domains. And as it begins to proliferate into more and more of these areas, it’s displacing the traditional human solutions for these areas, which basically all just involve human labor. So there’s an increase in human leisure time. And then what really caught my attention was you said AGI is maybe 20, 25 years away. Is that your sense of the timeline for when we start to see real generality?

Bart Selman: Yeah. That’s, in my mind, a reasonable sense of a timeline, but we cannot be absolutely certain about that. For AI researchers, it is a very interesting time. The hardest thing at this point in the history of AI is to predict what AI can and cannot do. I’ve learned as a professor never to say that deep learning can’t do something, because every time it surprises me and it can do it a few years later. So, we have a certain sense that, oh, the field is moving so fast that everything can be done. On the other hand, in some of my research, I look at some of these advances, and I can give you a specific example. My own research is partly in planning, which is the process of how humans plan out activities.

They have certain goals, and then they plan what steps they should take to achieve those goals, and those can be very long sequences of actions to achieve complicated goals. So we worked on a puzzle-style domain called Sokoban. Most people will not be familiar with it, but it's a kind of game modeled after workers in a warehouse who have to move boxes around. There is a little grid world, and you push boxes around to get them from an initial state to goal states somewhere else on the grid. And there are walls, and there are corners, and all kinds of things you have to avoid. What's amazing about the planning task is that for traditional planning, this was a very challenging domain. We picked it because traditional planners could do maybe a hundred steps, a hundred pushes as we call them, but that was about it.

There were puzzles available on the web that required 1,500 to 2,000 steps, so that was way beyond any automated program, and AI researchers had worked on this problem for decades. So we of course used reinforcement learning (RL), with some clever curriculum training, some clever forms of training. And suddenly we could solve these 1,500- to 2,000-step Sokoban puzzles. We were, and still are, very excited about that capability. Then we started looking at what the deep net actually knew about the problem. Our biggest surprise there was that although the system had learned very subtle things that are beyond human capabilities, it was also totally ignorant about other things that are trivial for humans. In a Sokoban puzzle you don't want to push a box into a corner, because once it's in a corner, you can't get it out. This is something that a human player discovers in the first minute of pushing some boxes around.

We realized that the deep learning network never conceptualized the notion of a corner. It would only learn about corners if it had seen something being pushed into a particular corner. And if it had never seen that corner being used or encountered, it would not realize it shouldn't push the box in there. So we realized that this deep net had a capability that is definitely superhuman, in terms of being able to solve these puzzles, but also holes in its knowledge of the world that were very surprising to us. And that's, I think, part of what makes AI at this time very difficult to predict. Will these holes be filled in while we develop AI systems, so that they also get these obvious things right?

Or will AI be at this amazing level of performance, but do things in ways that are, to us, quite odd? So I think there are hard challenges that we don't quite know how to fill in, but because of the speed with which things are developing, it's very hard to predict whether they will be solved in the next two years or whether it will take another 20 years. But I do want to stress, there are surprising things about what I call "the ignorance of the learned models" that surprise us humans. Yeah.
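A minimal sketch of the corner deadlock rule described above, the kind of obvious fact a human player learns in the first minute but that the trained network apparently never represented. The grid, function names, and coordinates are purely illustrative, not taken from the actual research setup:

```python
# Toy Sokoban corner check: a box pushed into a non-goal corner can never be
# recovered, so any push that creates this situation makes the puzzle unsolvable.
# Grid cells: '#' wall, '.' floor, 'G' goal square.

GRID = [
    "#######",
    "#.....#",
    "#..G..#",
    "#.....#",
    "#######",
]

def is_wall(r, c):
    return GRID[r][c] == "#"

def is_corner_deadlock(r, c):
    """A box at (r, c) is dead if it sits against two perpendicular walls
    and (r, c) is not a goal square."""
    if GRID[r][c] == "G":
        return False
    vertical_wall = is_wall(r - 1, c) or is_wall(r + 1, c)
    horizontal_wall = is_wall(r, c - 1) or is_wall(r, c + 1)
    return vertical_wall and horizontal_wall

# A planner (or a reward shaper for RL) could prune any push that creates
# such a deadlock. This ignores more complex "freeze" deadlocks; it only
# captures the simplest corner rule a human notices immediately.
print(is_corner_deadlock(1, 1))  # True: top-left floor cell is a corner
print(is_corner_deadlock(2, 3))  # False: it's the goal square
```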

Lucas Perry: Right. There are ways in which models fail to integrate really rudimentary parts of the world into their understanding that lead to failure modes that even children don’t encounter.

Bart Selman: Yeah. It’s… So the problem when we as humans interact with AI systems or think about AI systems, we anthropomorphize. So we think that they do think similar to the way we do things, because that’s sort of how we look at complex systems, even animals are anthropomorphized. So, we think that things has to be done a way similar to our own thinking, but we’re discovering that they can do things very differently and leave out pieces of knowledge that are sort of trivial to us.

I have discussions with my students, and I point that out, and they're always somewhat skeptical of my claim. They say, "Well, it should know that somewhere." And then we actually do experiments: no, if it has never seen a box go into that corner, it will just put a box in that corner the next time. So they actually have to see it to believe it, because it sounds implausible: how can you be the world's best Sokoban solver and not know what a human knows within the first minute? But that's the surprise. It also makes the field exciting, but it makes the challenges of superintelligence and general intelligence, and the impact on AI safety, a particularly challenging topic.

Lucas Perry: Right. So, predicting an actual timeline seems very difficult, but if we don’t go extinct, then do you see the creation of AGI and superintelligence as inevitable?

Bart Selman: I do believe so. Yes, I do believe so. The path I see is that we will develop these specialized capabilities in more and more areas, in almost all areas, and then they will start merging together into systems that do two or three or four, and then a thousand, specialized tasks. And so generality will emerge almost inevitably. My only hesitation is about what could go wrong: it might not happen if there is some aspect of cognition that is really beyond our capability to model. But I think that is unlikely. I think one of the surprises in the deep net world and the neural network world is that before the deep learning revolution, if you can call it that, a lot of people looked at artificial neural networks as being too simplistic compared to real neurons.

So there was this sense that, yeah, these little artificial neural networks are nice models, but they're way too simplistic to capture what goes on in the human brain. The big surprise was that apparently that level of simplification is okay, that you can get the functionality of a much more complex, real neural network. You get that level of performance and complexity using much simpler units. So that convinced me that, yes, with the digital approximations and simplifications we make, as long as we connect things in sufficiently complex networks, we get emergent properties that match human brain capabilities. That makes me think that at some point we will reach AGI. It's just a little hard to say exactly when, and I think it may not matter that much exactly when, because we'll have challenges in terms of AI safety and value alignment that are already occurring today, before we have AGI. So we have to deal with those challenges right from the start; we don't have to wait for AGI.
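For illustration, this is the kind of simplified unit being described: a weighted sum followed by a nonlinearity, which discards almost everything about a biological neuron yet is sufficient when wired into large networks. The numbers below are illustrative only:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """The standard simplified unit: weighted sum plus a ReLU nonlinearity.
    Real neurons have spikes, dendritic dynamics, and neurotransmitters;
    this abstraction throws almost all of that away."""
    pre_activation = np.dot(inputs, weights) + bias
    return max(0.0, pre_activation)

# Illustrative numbers only.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.3, -0.5])
print(artificial_neuron(x, w, bias=0.1))  # a single scalar output
```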

Lucas Perry: So in this future where we've realized AGI, do you see superintelligence as coming weeks, months, or years after the invention of AGI? And what is beautiful to you about these futures in which we have realized AGI and superintelligence?

Bart Selman: Yeah. So what's exciting about these possible futures? I mean, there are certain risks, such as a superintelligence turning against humans, but I don't think that is inevitable. I think these systems will show us aspects of intelligence that to us will look surprising, but will also be exciting. In some of my other work, we look at mathematical theorem proving, and we look at AI systems for proving new open conjectures in mathematics. The systems clearly do a very different kind of mathematics than humans do, and produce very different kinds of proofs, but it's exciting to see a system that can check a billion-step proof in a few seconds and generate a billion-step proof in an hour, and to realize that we can prove something to be true mathematically.

So we can find a mathematical truth that is beyond the human brain. But since we've designed the program and we know how it works, and we use the technology, it's actually a fun way to complement our own mathematical thinking. That's what I see as the positive sense in which superintelligence will actually be of interest to humans to have around, as a complement to us, assuming it will not turn on us. But I think that's manageable. Yeah.

Lucas Perry: So how long after the invention of AGI do you see superintelligence arising, even though it's all kind of vague and fuzzy?

Bart Selman: Yeah. How long… So I think when I think of superintelligence, I think of it more as superintelligent in certain domains. So I assume you are referring to superintelligence as superseding AGI.

Lucas Perry: What I mean is like vastly more intelligent than the sum of humanity.

Bart Selman: I think that's a super interesting question, and I have thought about it. I can see capabilities that are vastly more intelligent in areas like mathematical discovery, scientific discovery, thinking about problems with multiple conflicting criteria that have to be weighed against each other. So on particular tasks I can see superintelligence being vastly more powerful than our own intelligence. On the other hand, there is also a question of in what sense superintelligence would manifest itself. If I had to draw an analogy: if you meet somebody who is way smarter than you are, and everybody meets such a person now and then, I've met a few in my life, these people will impress you about certain things and give you insight, and you think, "Oh, this is exciting." But when you go to have dinner with them and have a good meal, they're just like regular people.

So superintelligence doesn't necessarily manifest itself in all aspects. It will be surprising in certain kinds of areas and tasks and insights, but I do not believe it will dominate everything. If I draw an analogy, and this is a bit unfair to dogs, if you go for dinner with a dog, you will fully dominate all the conversations and be sort of superintelligent compared to the dog.

But it's not clear to me that there is an entity that will dominate our intelligence in all aspects. There will be lots of activities, lots of conversations, lots of things we can have with a superintelligent being that are quite understandable, quite accessible to us. So the idea that there will be an entity that dominates our intelligence uniformly, that I'm not convinced exists. And that goes back to the question of what human intelligence is, and human intelligence is actually quite general. So there's an interesting question: what is meant by superintelligence? How would we recognize it? How would it manifest itself?

Lucas Perry: When I think of superintelligence, I think of like a general intelligence that is more intelligent than the sum of humanity. And so part of that generality is its capability to run an emulation of like maybe 10,000 human minds within its own general intelligence. And so the human mind becomes a subset of the intelligence of the superintelligence. So in that way, it seems like it would dominate human intelligence in all domains.

Bart Selman: Yeah, what I'm trying to say is, I can see that. If you would play a game of chess with such a superintelligence, it would beat you; it would not be fun. If it would do some mathematics with you and show you a proof of Fermat's Last Theorem, that would be trivial for the superintelligence. So I can see a lot of specific tasks and domains where the superintelligence would indeed run circles around you and around any human. But how would it manifest itself beyond those individual questions? What I struggle with a little bit is that you have to have the right questions to pose to the superintelligence.

So, for example, take the question: what should we do about income inequality? A practical problem in the United States. Would a superintelligence necessarily have something superintelligent to say about that? That's not clear to me, because it's a tough problem, but it may just be as tough for the superintelligence as it is for any human. Would a superintelligent politician suddenly have solutions to all our problems, would it win every debate? Interestingly, I think the answer is probably no. Superintelligence manifests itself on tasks that require a high level of intelligence, like problem-solving tasks, mathematical domains, scientific domains, games; but daily life and governance are a little less clear to me. That's what I mean by having dinner with a superintelligence. Would you just be sitting there, unable to say anything useful about income inequality, because the superintelligence will say much better things about it? I'm not so sure.

Lucas Perry: Maybe you've both had a glass of wine or two, and you ask the superintelligence, you know, why is there something rather than nothing? Or, what is the nature of moral value? And they're just like…

Bart Selman: What's the purpose of life? I'm not sure the superintelligence is going to give me a better answer to that. So, yeah.

Lucas Perry: And this is where philosophy and ethics and metaethics merge with computer science, right? Because it seems like you're saying there are domains in which AI will become superintelligent, and many of the domains you listed sounded very quantitative, ones which involve the scientific method and empiricism. Not that these things are necessarily disconnected from ethics and philosophy, but if you're just working with numbers toward a given objective, then there's no philosophy that really needs to be done, because the objective is given. But if you ask how we should deal with income inequality, then the objective is not given. And so you do philosophy about what is the nature of right and wrong, what is good, what is valuable, what is the nature of identity, and how all of these things relate to building a good world. So I'm curious, do you think that there are true or false answers to moral questions?

Bart Selman: Yeah, I think there are clearly wrong answers here. Moral issues are a spectrum to me, and we can probably, as humans, agree on certain basic moral values; it's also a very human kind of topic. So I think we can agree on basic moral values, but the hard part is that we also see, among people and among different cultures, incredibly different views of moral value. So saying which one is right and which one is wrong may actually be much harder than we would like it to be. This comes back to the value alignment problem and the discussions around it. It's a very good research field and a very important research field, but the question always is: whose values? And we now realize that even within a country, people have very different values that are actually hard to understand between different groups of people.

So there is a challenge there. It feels like there should be universal truths in morality, think about equality, for example, but I'm a little hesitant, because I'm surprised at how much disagreement I see about what I would think are universal truths, which somehow are not universal truths for all people. So that's another complication. And again, if you tie that back to superintelligence: a superintelligence is going to have some position on it, yes or no, but it may not agree with everybody, and there's no uniquely superintelligent position on it in my mind. So that's a whole area of AI and value alignment that is very challenging.

Lucas Perry: Right. So it sounds like you have some intuition that there are universal moral truths, but it's conflicting to see so much disagreement across different persons. So I guess I'm curious about two things. The first is something you're excited about for the future and about positive outcomes from AGI: are those worlds in which AGI and superintelligence can help assist with moral and philosophical issues, like how to resolve income inequality and truth around moral questions? And the second part of the question is, if superintelligences were created by other species across the universe, do you think they would naturally converge on certain ethics, whether those ethics are universal truths or relative game-theoretic expressions of how intelligence can propagate in the universe?

Bart Selman: Yeah, so two very good questions. As to the first one, I am quite excited about the idea that a superhuman level of intelligence, or an extreme level of intelligence, will help us better understand moral judgments and decisions and issues of ethics. I almost feel that humans are a little stuck in this debate. And a lot of that has to do, I think, with an inability to explain clearly to each other why certain values matter and other values should be viewed differently; it's often even a matter of whether we can explain to each other what good moral judgments and good moral positions are. So I have some hope that smart AI systems would be better at actually sorting out some of these questions, and then convincing everybody, because in the end, we have to agree on these things. And perhaps these systems will help us find more common ground.

So that's a hope I have for AI systems that truly understand our world and are truly capable of understanding, because part of being a super smart AI would be understanding many different positions. Maybe something that limits humans in reaching agreement on ethical questions is that we actually have trouble understanding the perspective of another person who holds a conflicting position. So superintelligence might be one way of modeling everybody's mind and then being able to bring about a consensus. I have an optimistic view that there may be some real possibilities there for superintelligence. Your second question, of whether some alien form of superintelligence would come to the same basic ethical values as we may come to? That's possible, but I think it's very hard to say.

Lucas Perry: Yeah, sorry, whether those are ultimate truths, as in facts, or whether they’re just relative game theoretic expressions of how agents compete and cooperate in a universe of limited resources.

Bart Selman: Yes, yes. From a human perspective, you would hope there is some universally shared ethical perspective, or ethical view of the world. I'm really on the fence, I guess. I could also see that, in the end, very different forms of life, which we would hardly even recognize, would basically interact with us via a sort of game-theoretic competition mode, and that because they're so different from us, we would have trouble finding shared values. So I see possibilities for both outcomes. If other life forms share some commonality with our life form, I'm hopeful for common ground. But that seems like a big assumption, because they could be so totally different that we cannot connect at a more fundamental level.

Lucas Perry: Taking these short and long term perspectives, what is really compelling and exciting for you about good futures from AI? Is it the short to medium term benefits? Are you excited and compelled by the longer term outcomes, the possibility of superintelligence allowing us to spread for millions or billions of years into the cosmos? What’s really compelling to you about this picture of what AI can offer?

Bart Selman: Yeah. I'm optimistic about the opportunities, both short term and longer term. I think it's fairly clear that humanity is actually struggling with an incredible range of problems right now: sustainability, global warming, political conflicts. You could almost be quite pessimistic about the human future. I'm not, but these are real challenges. So I'm hopeful that AI will actually help humanity find a better path forward. As I mentioned briefly, even in terms of policy and governance, AI systems may really help us there. So far this has never been done, AI systems haven't been sufficiently sophisticated for that, but in the next five to 10 years, I could see systems starting to help human governance. That's the short term. I actually think AI can have a significant positive impact in resolving some of our biggest challenges.

In the longer term, it's harder to anticipate what the world will look like, but of course, spreading out across the universe and over many different timescales, having AI continue the human adventure, is actually sort of interesting: we wouldn't be confined to our little planet. We would go everywhere; we'd go out there and grow. So that could actually be an exciting future. It's harder to imagine exactly what it is, but it could be quite a human achievement. In the end, whatever happens with AI, it is, of course, a human invention. Science and technology are human inventions, and that's almost what we can be most proud of, in some ways, the things that we actually did figure out how to do well, aside from creating a lot of other problems on the planet. So we could be proud of that.

Lucas Perry: Is there anything else here in terms of the economic, political and social situations of positive futures from AI that you’d like to touch on, before we move on to the negative outcomes?

Bart Selman: Yeah. I guess the main thing, I’m hoping that the general public and politicians will become more aware, and will be better educated about the positive aspects of AI, and the positive potential it has. The range of opportunities to transform education, to transform health care, to deal with sustainability questions, to deal with global warming, scientific discovery, the opportunities are incredible.

What I would hope is that those aspects of AI will receive more attention from the broader public, and from politicians and journalists. It's so easy to go after the negative aspects. The negative aspects and the risks have received disproportionate attention compared to the positive aspects. So that's my hope.

As part of the AAAI organization, the professional organization for artificial intelligence, part of our mission is to inform Washington politicians of these positive opportunities, because we shouldn’t miss out on those. That’s an important mission for us, to make that clear, that there’s something to be missed out on, if we don’t take these opportunities.

Lucas Perry: Yeah. Right. There's the sense that all of our problems are basically subject to intelligence. As we begin to solve intelligence, and what it means to be wise and knowing, there's nothing in the laws of physics that is preventing us from solving any problem that is, in principle, solvable within the laws of physics. It's like intelligence is the key to anything that is literally possible to do.

Bart Selman: Yeah. Underlying that is rational thought: our ability to analyze things, to predict the future, to understand complex systems. That rationality underlies the scientific thought process, humans have excelled at it, and AI can boost it further. That's an opportunity we have to grab, and I hope people recognize that more.

Lucas Perry: I guess, two questions here, then. Do you think existential risk from AI is a legitimate threat?

Bart Selman: I think it's something that we should be aware of, that it could develop as a threat, yeah. The timescale is a little unclear to me, how near that existential threat is, but we should be aware that there is a risk of runaway intelligent systems that are not properly controlled. Now, I think the problems will emerge much more concretely and earlier, for example, cybersecurity and AI systems that break into computer networks, which are hard to deal with. So there will be very practical threats to us that will take most of our attention. But the overall existential threat, I think, is indeed also there.

Lucas Perry: Do you think that the AI alignment problem is a legitimate, real problem, and how would you characterize it, assuming you think it’s a problem?

Bart Selman: I do think it's a problem. What I like about the term is that it makes things crisp: if we train a system for a particular objective, then it will learn how to be good at that objective, but in learning how to do that, it may violate basic human principles, basic human values. As a general paradigm statement, that we should think about what happens with systems we train to optimize a certain objective, and that they need to achieve that objective in a way that aligns with human values, I think that is a very fundamental research question and a very valid question. In that sense, I'm a big supporter, in the research community, of taking the value alignment problem very seriously.

As I said before, there is some hesitation about how to approach the problem. I think, sometimes, the value alignment folks gloss over the issue of what the common values are, and whether there are any common values. Solving value alignment assumes, "Okay, well, when we get the right values in, we're all done." What worries me a little bit in that context is that these common values are possibly not as common as we think they are. But that's the issue of how to deal with the problem. The problem itself, as a research domain, is very valid. As I said early on with the little Sokoban example, it is an absolutely surprising aspect of the AI systems we train: how they can achieve incredible performance, but do it while not knowing certain things that are obvious to us, in some very nonhuman ways. That's clearly coming out in a lot of AI systems, and it's related to the value alignment problem. The fact that we can achieve a super high level of performance, even when we train carefully with human-generated training data and things like that, and the system can still find ways of doing things that are very nonhuman, and potentially very non-value-aligned, makes it even more important to study the topic.

Lucas Perry: Do you think the Sokoban example, pushing the boxes into corners, can be translated into an expression of the alignment problem, like imagining that pushing boxes into corners was morally abhorrent to humans?

Bart Selman: Yes. Yeah, that's an interesting way of putting it. It's a toy domain, of course, but there are certain truths in it that are obvious to us. In that case, pushing a box into a corner is not a moral issue, but it's definitely something that is obvious to us. If you replace it with some moral truth that is obvious to us, it is an illustration of the problem. It's an illustration of how, when we think of training a system, even if you think of, let's say, bringing up a child or a human learner, you have a model of what that system will learn, what that human learns, and how the human will make decisions. The Sokoban example is sort of a warning that an AI system will learn the performance measure, so it will pass the final test, but it may do so in ways that you would never have expected.

With the corner example, it's a little strange, almost, to realize that you can solve this very hard Sokoban problem without ever knowing what a corner is. And it literally doesn't know. It's the surprise of getting to human-level performance, and missing, and not quite understanding, how that's done. Another very good example, for me, is machine translation systems. We see incredible performance from machine translation systems, where they basically map strings in one language to strings in another, English to Chinese, or English to French, having discovered a very complex transformation function in the deep net, trained on hundreds of thousands of sentences, but doing it without actually understanding. So it can translate an English text into a French text or a Chinese text at a reasonable level, without having any understanding of what the text is about. Again, to me, it's that nonhuman aspect. Now, researchers might push back and say, "Well, the network has to understand something about the texts, deep in the network."

I actually think we'll find out that the network understands next to nothing about the text. It has just found a very clever transformation that we initially, when we started working on natural language translation, didn't think would exist. But it exists, and you can find it with gradient descent in a deep network. Again, it's an example of a human-level cognitive ability achieved in a way that is very different from the way we think of intelligence. That means, when we start using these systems, people in general are not aware that their machine translation app has no idea what they're talking about.
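As an illustrative sketch of translation as a learned string-to-string transformation, here is roughly how one would call an off-the-shelf neural translation model, assuming the Hugging Face `transformers` library is installed (the library choice and default model are assumptions, not something discussed in the conversation). A string goes in, a string comes out, and at no point is an explicit representation of what the sentence is about constructed or consulted:

```python
# String in, string out: a learned transformation from English to French,
# trained on large numbers of sentence pairs, with no explicit "meaning" model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")  # downloads a pretrained model
result = translator("The box is stuck in the corner.")
print(result[0]["translation_text"])
```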

Lucas Perry: So, do you think that there’s an important distinction here to be made between achieving an objective, and having knowledge of that particular domain?

Bart Selman: Yes, yes. I think that's a very good point, yeah. By boiling tasks in AI down too much to an objective, in machine learning the objective is to do well on the test set, by boiling things down too much to a single measurable objective, we are losing something: we're losing the underlying knowledge, the way in which the system actually achieves it.

We're losing an understanding, and we're losing attention to that aspect of the system. That's why interpretability of deep nets has become a hot area.

It's trying to get back to that issue: what's actually being learned here? What's actually in these systems? But if you focus just on the objective, and you get your papers published, you're actually not encouraged to think about that.

Lucas Perry: Right. And there's the sense, then, also, that human beings have many, many different objectives and values that all exist simultaneously. So when you optimize for one in a kind of unconstrained way, it will naturally exploit the freedom in the other areas of things that you care about, in order to maximize that particular objective. That's when you begin to create lots of problems for everything else that you value and care about.

Bart Selman: Yeah, yeah. No, exactly. That's the single-objective problem. You actually lay out a potential path there, saying, "Okay, I should not focus on a single-objective task. I actually have to focus on multiple objectives."

And I would say, go one step further. Once you start achieving objectives, or sets of objectives, and your system performs well, you actually should understand, to some extent at least, what knowledge underlies that: what is the system doing, and what knowledge is it extracting or relying on to achieve those objectives? So that's a useful path.
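A toy sketch of the contrast being drawn here, with entirely made-up plan names and scores: a plan that wins on the raw task score alone can lose once a second objective, such as a safety penalty, enters the evaluation:

```python
# Made-up candidate plans with a task score and a count of safety violations.
candidate_plans = {
    "plan_a": {"task_score": 10.0, "safety_violations": 4},
    "plan_b": {"task_score": 8.0, "safety_violations": 0},
}

def single_objective(plan):
    # Optimize only the task score and ignore everything else.
    return plan["task_score"]

def combined_objective(plan, penalty_weight=5.0):
    # Trade the task score off against a second objective (safety).
    return plan["task_score"] - penalty_weight * plan["safety_violations"]

best_single = max(candidate_plans, key=lambda k: single_objective(candidate_plans[k]))
best_combined = max(candidate_plans, key=lambda k: combined_objective(candidate_plans[k]))
print(best_single)    # plan_a: wins on the raw task score alone
print(best_combined)  # plan_b: wins once safety enters the objective
```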

Lucas Perry: Given this risk of existential threat from AI, and also the AI alignment problem as its own kind of issue, which in the worst of all possible cases leads to existential risk, what is your perspective on futures that you fear, futures with quite negative outcomes from AI, in light of the risk of existential threat and the reality of the alignment problem?

Bart Selman: Yeah, I think the risk is that we continue on a path of designing systems with a single objective in mind, just measuring the achievement there, and ignoring the alignment problem. People are starting to pay attention to it, but paying attention to it and actually really solving it are two different things. There is a risk that these systems just become so good, so useful, and so commercially valuable that the alignment problem gets pushed to the background as not so relevant, as something we don't have to worry about.

So I think that's the risk that AI is struggling with, and it's a little amplified by commercial interests. A clear example is the whole social network world, and how it has spread fake news and gotten different groups of people to think totally different things and to believe totally different facts. I see a little warning sign there for AI. Those networks are driven by tremendous commercial interests, and it's actually hard for society to say there's something wrong about these things, and maybe we should not do it this way. So that's a risk: it works too well to actually push back and say, "We have to take a step back and figure out how to do this well."

Lucas Perry: Right. So you have these commercial interests, which are aligned with profit incentives, and attention becomes the variable that companies try to capture for profit maximization. So attention becomes this kind of single objective that these large tech companies are training their massive neural nets and algorithms to capture as much of as possible from people. You mentioned issues with information.

And so people are more and more becoming aware of the fact that if you have these algorithms that are just trying to capture as much attention as possible, then things like fake news, or extremist news and advertising, are quite attention-capturing. I'm curious if you could explain more of your perspective on how the problem of social media algorithms attempting to capture, and commodify, human attention, as a kind of single objective that commercial entities are interested in maximizing, represents the alignment problem?

Bart Selman: Yeah, so I think it's a very nice analogy. First, I would say that the algorithms that try to maximize time spent online, which is basically getting the most attention, are not particularly sophisticated. They are actually very basic: you can sample little TikTok videos, see how often they are watched by some subgroup, and if they're watched a lot, you show them more; if they're not watched, you show them less. So the algorithms are actually not particularly sophisticated, but they do represent an example of what can go wrong with this single-objective optimization.
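A minimal sketch of the sample-and-amplify loop described here; the video names, watch probabilities, and thresholds are purely illustrative and not any real platform's code:

```python
import random

# Minimal sketch of the "sample, then amplify what gets watched" loop.
# All names and numbers below are made up for illustration.
videos = {"video_a": 0.9, "video_b": 0.2}  # hidden per-video watch probability

def sample_engagement(video_id, sample_size=100):
    """Show the video to a small test audience and count completed views."""
    watch_prob = videos[video_id]
    return sum(random.random() < watch_prob for _ in range(sample_size))

def distribution_boost(video_id, threshold=50):
    """Amplify whatever the test audience watched a lot; bury the rest.
    This optimizes a single objective (watch time) and nothing else."""
    views = sample_engagement(video_id)
    return 10.0 if views > threshold else 0.1

for vid in videos:
    print(vid, distribution_boost(vid))
```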

What I find intriguing about it is that it's not that easy to fix, I think. The companies' business model, of course, is user engagement, is advertising, so you would have to tell the companies not to make as much money as they could. If there were an easy solution, it would have happened already. I think we're actually in the middle of trying to figure out whether there is a balance between making profits from a particular objective and societal interests, and how we can align those; it's a value alignment problem between society and the companies that profit from it. Now, I should stress, and this is part of what makes the problem intriguing, there are incredibly positive aspects to social networks, people exchanging stories and interacting. That, I think, is what makes it complex. It's not only negative, it's not. There are tremendous positive sides to having interesting social networks and exchanges between people. People, in principle, could learn more from each other.

Of course, what we've seen is, strangely, that people seem to listen less to each other. Maybe it's too easy to find people who think the same way you do, and the algorithms encourage that. In many ways, the problems with social networks and single-objective optimization are a good example of a value alignment challenge. It shows that finding a solution will probably require way more than just technology. It will require society, government, and companies to come together and find a way to manage these challenges. It will not be an AI researcher in an office who finds a better algorithm. So it is a good illustration of what can go wrong, in part because people didn't expect this. They saw the positive sides of these networks bringing people closer together, and no one had actually thought of fake news, I think. It's something that emerged, and that shows how technology can surprise you. In terms of AI, that's of course one of the things we have to watch out for: the unexpected things that we did not think would happen, yeah.

Lucas Perry: Yeah, so it sounds like the algorithms that are being used are simpler than I might have thought, but I guess maybe that seems like it accounts for the difficulty of the problem, if really simple algorithms are creating complete chaos for most of humanity.

Bart Selman: Yeah. No, no, exactly. I think that that’s an excellent point. So yeah, you don’t have to create very complicated … You might think, “Oh, this is some deep net doing reinforcement learning.”

Lucas Perry: It might be closer to statistics that gets labeled AI.

Bart Selman: Yeah. Yeah, it gets labeled AI, yeah. So it’s actually just plain old simple algorithms, that now do some statistical sampling, and then amplify it. But you’re right, that maybe the simplicity of the algorithm makes it so hard to say, “Don’t do that.”

It's like, if you run a social network, you would have to say, "Let's not do that. Let's spread the posts that don't get many likes." That's almost against your interests. But it is an example of how the power comes partly, of course, from the scale on which these things happen.

With social networks, what I find interesting is why it took a while before people became aware of this phenomenon: because everybody had their own personalized content. There was no shared single news channel, or something like that, where everybody watches the same channel and sees what's on it.

I have no idea what's in the newsfeed of the person sitting next to me. So there were also moments like, "Ah, I didn't know you got all your news articles with a certain slant."

So not knowing what other people see, and having a huge level of personalization, was another factor in letting this phenomenon go unnoticed for quite a while. But luckily people are now at least aware of the problem. We haven't solved it yet.

Lucas Perry: I think two questions come up for me. One thing that I liked that Yuval Noah Harari has said is that he's highlighted the importance of knowledge and awareness and understanding in the 21st century. Because, as you said, this isn't going to be solved by someone in Big Tech creating an algorithm that perfectly captures the collective values of the United States or planet Earth and decides how content should be ethically distributed to everyone. It requires some governance, as you said, but also some degree of self-awareness about how the technology works, and how your information is being biased and constrained, and for what reasons. The first question is, I'm curious how you see the need for collective education on technology and AI issues in the 21st century, so that we're able to navigate it as people become increasingly displaced from their jobs and it begins to really take over. Let's just start there.

Bart Selman: So I think that's a very important challenge that we're facing, and I think education of everyone is a key issue there. AI, or these technologies, should not be presented as magic boxes. I think it's much better for people to get some understanding of these technologies, and I think that's possible in our educational system. It has to start fairly early, so that people get some idea of how AI technologies work. And most importantly, perhaps, people need to start understanding better what we can and cannot do, and what AI technologies are about. A good example to me is something like the data privacy initiative in Europe, which I think is a very good initiative.

But, for example, there's a detail in it where you have a right, I'm not sure whether it's part of the law, but there are definitely discussions about it, a right to get an explanation of a decision made by an AI system. So there's a right to an explanation. What I find interesting about it is that it sounds like a very good thing to get, until you've worked with AI and machine learning systems and you realize you can make up pseudo-explanations pretty easily. You can actually ask your system to explain a decision without using the word gender or race, and it will come up with a good-sounding explanation.

So the idea that a machine learning algorithm has a crisp explanation that is the true explanation of the decision is actually far from trivial, and it can easily be circumvented. It's an example, to me, of policymakers coming up with regulations that sound like they're making progress, but that miss something about what AI systems can and cannot do. That's another reason why I think people need much better education and insight into AI technologies, and should at least hear from different perspectives about what's possible and what's not possible.
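A small synthetic illustration of the loophole described here: the model never sees the protected attribute, so its "explanation" never mentions gender or race, yet a correlated proxy feature carries the same signal. The data and feature names are entirely synthetic, and the sketch assumes scikit-learn and NumPy:

```python
# Synthetic "right to explanation" loophole: the protected attribute is never
# an input, so no explanation will mention it, yet a correlated proxy
# (a made-up stand-in feature) carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                  # never given to the model
proxy = protected + rng.normal(0, 0.1, n)          # strongly correlated stand-in
income = rng.normal(50, 10, n)
outcome = (protected + 0.02 * income + rng.normal(0, 0.5, n) > 1.5).astype(int)

X = np.column_stack([proxy, income])               # protected attribute excluded
model = LogisticRegression().fit(X, outcome)

# The "explanation" below only ever talks about the proxy and income.
for name, coef in zip(["proxy", "income"], model.coef_[0]):
    print(f"{name}: weight {coef:.2f}")
```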

Lucas Perry: So, algorithms are increasingly playing a role in society, yet they also do this single-objective optimization, and we as humanity have to collectively face the negative outcomes and externalities from widely deployed algorithms that maximize a single objective. In light of this, what are the futures you most fear in the short term, 5, 10, 15, 20 years from now, where we've really failed at AI alignment and at working on these ethics issues?

Bart Selman: Yeah, so one thing that I do fear is increased income inequality. It's as simple as that: the companies that are the best at AI, that have the most data, will get such an advantage over other organizations that the benefits will be highly concentrated in a small group of people. And that, I think, is real, because AI technology, in some sense, amplifies your ability to do things. It's like in finance: if you have a good AI trading program that can mine text and a hundred or a thousand different indicators, you could build a very powerful financial trading firm. And of course trading firms are working very hard on that, but it concentrates a lot of the benefits in the hands of a small group of people. That, in my mind, is the biggest short-term risk of AI technology.

It's a risk any technology has, but I think AI amplifies it. So that has to be managed, and that comes back to what I mentioned fairly early on about the benefits of AI. It has to be ensured that it will benefit everyone, maybe not all to the same extent, but at least everyone should benefit to some extent, and that's not automatically going to happen. So that's a risk I see in the development of AI. And then there are more dramatic risks. In the short term: cybersecurity issues, smart attacks on our infrastructure. AI programs could be quite dangerous there, and sophisticated deepfakes as well. There are some specific risks that we have to worry about because they are going to be accelerated by AI technology. And then there's of course the risk of military autonomous weapons.

There's enormous pressure, since it's a competitive world, to develop systems that use as much automation as possible. So it's not so easy to tell a military or a country not to develop autonomous weapon systems. I'm really hoping that people start to realize, and this is again partly an educational issue for people, for the voters basically, that there is a real risk there, just like nuclear weapons were a real risk, and we had to get together to make agreements about at least the management of nuclear weapons. So we have to have global agreements about autonomous weapons and smart weapons, about what can be developed or what should at least be controlled somehow, in a way that will benefit all players. That's one of the short-term risks I see.

Lucas Perry: So imagine that in the short term all of these single-objective-maximizing algorithms proliferate, aligned with whatever corporation is using them, and there is a lack of international agreement on autonomous weapons systems. Income inequality is far higher due to the concentration of power in the particular individuals who control vast amounts of AI. If you have the wealth to accumulate AI, you begin to accumulate most of the intelligence on Earth, and you can use that to build robotics so that you're no longer dependent on human labor. So there's an increase in income and power inequality, and a lack of international governance and regulation. Is that as bad as the world gets in the short term? Or is there anything else that makes it even worse?

Bart Selman: No, I think that's about as bad as it gets. And I assume there would be a very strong reaction in almost every country from the regular person, the voter, the person in the street. There would be a strong reaction to that, and it's real.

Lucas Perry: Can that reaction be effective in any way, though, if lethal autonomous weapons have proliferated?

Bart Selman: Well, lethal autonomous weapons, yeah. So there are two different aspects. One aspect is what happens within a country: do the people accept such extreme levels of income inequality and power concentration? I think people will push back, and there will be a backlash against that. For lethal autonomous weapons, when they start proliferating, I just have some hope that countries will realize that that is in nobody's interest, that countries are able to manage risks that are unacceptable to everyone. So I'm hopeful that in the area of lethal autonomous weapons, we will see a movement by countries to say, "Hey, this is not going to be good for any one of us."

Now, I'm being a little optimistic here, but with nuclear weapons we did see it. It's always a struggle, and it remains a struggle today, but so far countries have managed these risks reasonably well. It's not easy, but it can be done, and I think it's partly done because everybody realizes nobody will be better off if we don't manage these risks. For lethal autonomous weapons, I think there first has to be a better understanding that these are real risks, and that if you let that get out of hand, letting small groups develop their own autonomous weapons, for example, that could be very risky to the global system. I'm hoping that countries will realize this and start developing a strategy to manage it, but it's a real risk. Yeah.

Lucas Perry: So should things like this come to pass, or at least some of them, in the medium to long-term, what are futures that you fear in the time range of fifty to a hundred years or even longer?

Bart Selman: Yeah, so the lethal autonomous weapon risk could be just as bad as nuclear weapons being used at some point, so it could wipe out humanity. That's the worst-case scenario: that we would go down in flames. There are some other scenarios, and this is more about the inequality issue, where a relatively small group of people grabs most of the resources, enabled to do so by AI technology, and the rest can live reasonable lives but are limited by their resources.

That's, I think, a somewhat dark scenario that I could see happen if we don't pay attention to it right now; it could play out in 20 or 30 years. Again, one thing that's difficult to predict is how fast the technology will grow, and how it combines with advances in biology and medicine. I'm always a little optimistic. We could be living in a very different and very positive world too, and that's what I'm hoping we'll choose. So I am staying away a little bit from too dark a scenario.

Lucas Perry: So, a little bit about AI alignment in particular. I'm curious, it seems like you've been thinking about this since at least 2008, perhaps even earlier; you can let us know. How have your views shifted and evolved? It's been, what, about 13 years?

Bart Selman: Yeah, very good question. So in 2008, Eric Horvitz and I co-chaired an AAAI presidential panel on the risks of AI. It's very interesting, because at that time, this was before the real deep learning revolution, people saw some concerns, but, and this was a group of about 30 or 40 AI researchers, a very, very good group of people, there was a general consensus that it was almost too early to worry about value alignment and the risks of AI. And I think it was true that AI was still a very academic discipline, and talking about "what if this AI system starts to work, and people start using it, what's going to happen?" seemed premature, and was premature at the time. But it was good, I think, for people to get together and at least discuss the issue of what could happen. Now, that really dramatically changed over the last 10 years, particularly the last five years.

And in part that's thanks to people like Stuart Russell and Max Tegmark, who basically brought these concerns about AI systems to the forefront, combined with the fact that we see the systems starting to work; I don't think it would have happened otherwise. So now we see these incredible investments and companies really going after AI capabilities, and suddenly these questions that were quite academic early on are very real, and we have to deal with them and think about them. The good thing is, if I look at, for example, NSF and the funding in the United States, but also around the world, in Europe and in China, people are starting to fund AI safety, AI ethics, and work on value alignment. You see it in conferences, and people are starting to look at those questions. So I think that's the positive side.

So I'm actually quite encouraged by how much was achieved in a fairly short time. You know, FLI played a crucial role in that too, in bringing awareness to the AI safety issues. And now, I think, among most AI researchers, maybe not all, but most AI researchers, these are viewed as legitimate topics for study and legitimate challenges that we have to address. So I feel good about that aspect. Of course, the questions remain urgent and the challenges are real, but at least I think the research community has given them attention. And in Washington, I was actually quite pleased when I looked at the significant investments being planned for AI and the development of AI R&D in the United States: safety, fairness, a whole range of issues touching on how AI will affect society are getting serious attention, so they are being funded. And that happened in the last five years, I would say. So that's a very positive development in this context.

Lucas Perry: So given all this perspective on the evolution of the alignment problem and the situation in which we find ourselves today, what are your plans or intentions as the president of AAAI?

Bart Selman: Yeah, so as part of AAAI, we've definitely stepped up our involvement with the Washington policy-making process to try to better inform policymakers about the issues. We actually did a roadmap for AI research in the United States, planning topics 20 years ahead. A key component we proposed there was to build a national AI infrastructure, as we called it: an infrastructure for AI research and development that would be shared among institutions and accessible to almost every organization. The reason is that we don't want AI research and development to be concentrated in just a few big private companies. We actually would like to make it accessible to many more stakeholders and many more groups in society.

And to do that, you need an AI infrastructure with the capability to store and curate large data sets, and large cloud computing facilities, to give other groups in society access, so they can build AI tools that are good and useful for them. So as AAAI, we are pushing for making AI R&D generally available and for boosting the level of funding, while keeping in mind these issues of fairness and value alignment as valid research topics that should be part of anybody's research proposal. People who write research proposals should have a component where they consider whether their work is relevant in that context, and if it is, what contributions they can make. So that's what our society is doing, and this is of course a good time to be doing this, because Washington is actually paying attention; not just the US, every country is developing AI R&D initiatives. So our goal is to provide input and to steer it in a positive way, and that's actually a good process to be part of.

Lucas Perry: So you mentioned alignment considerations being explicitly covered, at least to some extent, in published papers, is that right?

Bart Selman: So there are papers purely on the alignment problem, but also, if I look at the reinforcement learning world, people are aware that value alignment is an issue. And to me, it feels closely related to interpretability and understanding. We talked about it a little bit before: you are not just getting to a certain quantitative, single objective and optimizing for that, you are actually understanding the bounds of your system, safety bounds, for example. In the work on cyber-physical systems and self-driving cars, a key issue of course is: how do I guarantee that whatever policy has been learned is safe? So it's getting more attention now. As for the pure value alignment problem, when it gets to ethics, we talked about values before: there's a whole issue of how you define values and what the basic core values are.

And these are partly ethical questions; there, I think, is still room for growth. But I also see that, for example at Cornell, the people in the philosophy department who think about ethics are starting to look at this problem again, and at the directions AI is going in. So partly I'm encouraged by an increase in collaborations between disciplines that traditionally have not collaborated much, and by the fact that ethics is now relevant to computer science students. Five years ago, nobody even thought of mentioning that. Now, I think, most departments realize, yes, we actually should tell our students about ethical issues and educate them about algorithmic bias. Value alignment is a more challenging thing, because you have to know a little bit more about AI, but most AI courses will definitely cover that now.

So I think there's great progress, and I'm hoping that we just keep making those connections and make it clear, when we train students to be the next generation of AI engineers, that they should be very aware of these ethical components. And that might even be somewhat unique in engineering. I don't think engineering normally touches on ethics too much, but I think AI is forcing us to do so.

Lucas Perry: So you see understanding of, and a sense of taking seriously, the AI alignment problem, at AAAI for example, as increasing.

Bart Selman: Yes. Yes, yes. It's definitely increasing. It takes time for people to become familiar with the terminology, but people are much more familiar with the questions, and we've even had job candidates talk about AI alignment, and so then the department has to learn what that means. So it's partly an educational mission: you actually have to understand how reinforcement learning, optimization, and decision-making work, you have to understand a little bit about how things work. But I think we're starting to educate people, and people are definitely much more aware of these problems, so that's good.

Lucas Perry: Yeah. Does global catastrophic or existential risk from AI fit into AAAI's scope?

Bart Selman: I would say that at this point, yeah. Well, actually, it's hard to say, because we have something like 10,000 submissions, and I think there's room at AAAI for those kinds of papers; I just haven't personally seen them. But as president of AAAI, I would definitely encourage us to branch out, and if somebody has an interesting paper, it could be a position paper or another type of paper that we now accept, that says, okay, let's have a serious paper on existential risks, because there is room for it. It just hasn't happened much so far, I think, but it fits into our mission. So I would encourage that.

Lucas Perry: So you mentioned that one of the things AAAI was focusing on was collaborating with government on policy-making, offering comments on documents and suggestions or proposals. Do you have any particular policy recommendations for existing AI systems or the existing AI ecosystem that you might want to share?

Bart Selman: Yeah, my sense there is more of a meta-level comment. For people designing systems with a significant AI component, the big tech companies for example, our main input there is that we want people to pay serious attention to things like bias, fairness, and AI safety. So I wouldn't have a particular recommendation for any particular system. But with AAAI submissions, we now ask for an impact statement. What we're asking from researchers is that when you do research that touches on something like value alignment or AI safety, you should actually think about the societal component and the possible impact of the work. So we're definitely asking people to do that.

In companies, I would say it's more that we encourage companies to have those discussions and make their engineers aware of these issues. And there's one organization now, the Global Partnership on AI, that's actually also very actively trying to do this on an international scale. So it's a process, and partly, as you mentioned earlier, an educational process where people have to learn about these problems and start incorporating them into their daily work.

Lucas Perry: I'm curious about what you think of AI governance and the relationship needed between industry and government. One facet of this is that we've had Andrew Critch on the podcast, and he makes quite an interesting point that some number of subproblems in the overall alignment problem will be naturally solved via industry incentives, whereas some of them won't be. The ones that will naturally be solved by industry are those which align with industry incentives, so profit maximization. I'm curious about your view on the need for AI governance and how we might cover the areas of the alignment problem that won't naturally be solved by industry.

Bart Selman: That's a good question. I think not all these problems will be solved by industry; their objectives are sometimes a little too narrow to cover a broad range of objectives. So I really think it has to occur in a dialogue between policymakers, government, and public and private organizations. And it may require regulation, or at least a form of self-regulation, even just to level the playing field. Earlier we talked about social networks spreading fake news. You might actually need regulations to tell people not to do certain things, because it will be profitable for them to do it, and so you have to have regulations to limit that.

On the other hand, I do think a lot of things will happen through self-regulation. Self-driving cars are a very circumscribed area; there's a clear interest for all the companies working on self-driving cars to make them very safe. So for some kinds of AI systems, the objectives are self-reinforcing: you need safety, otherwise people will not accept them. Other areas, the finance industry for example, are a bigger issue, because the competitive advantage often lies in proprietary systems and it's actually hard to know what these systems do. I don't have a good solution for that. One of my worries is that financial companies develop technologies that they will not want to share, because sharing would be detrimental to their business, but that actually expose risks we don't even know of.

So society actually has to come to grips with the question: are risks being created by AI systems that we don't know of? It has to be a dialogue and interaction between public and private organizations.

Lucas Perry: So in the current AI ecosystem, how do you view and think about narratives around an international race towards more and more powerful AI systems, particularly between the United States and China?

Bart Selman: Yeah, I think that's a bit of an unfortunate situation right now. In some sense, the competition between China and the US, and also Europe, is good from an AI perspective in terms of investments in AI R&D, which actually does address some of the AI safety and alignment issues. So that's a benefit of these extra investments. The competition aspect is less positive. As AI scientists, we interact with AI scientists in China, we enjoy those interactions, and a lot of good work comes out of that. When things become proprietary, when some organizations and countries have data sets that others don't, the competition is not as positive. And again, my hope is that we bring out the potentially positive aspects of AI much more strongly. To me, for example, AI can transform the healthcare system.

It can make it much more efficient and much more widely available, with remote healthcare delivery and better diagnosis systems. So there's an enormous upside to developing AI for healthcare. I've actually interacted with people in China who work on AI for healthcare. Whether it gets developed in China or here doesn't really matter; it would benefit both countries. So I really hope we can keep these channels open instead of having totally separate developments in the two countries. There is a bit of a risk, because the situation has become so competitive, but again, I'm hoping people see that improving healthcare in both countries is probably the right way to frame it, and that we shouldn't be too isolationist in this regard.

Lucas Perry: How do you feel this sense of countries competing towards more and more powerful AI systems affects the chances of successful value alignment?

Bart Selman: Yeah, that could be an issue. If countries really start not sharing their technology and their advances, it is harder, I think, to keep value alignment and AI safety issues under control. I think we should be open about the risk of countries going at it by themselves, because the more researchers look at different AI systems from different angles, the better. A very odd example: I always thought it would be nice if AlphaZero were available to the AI research community, so we could probe the brain of AlphaZero, but it's not. So there are already systems in industry that would benefit from study by a much broader group of researchers, and there's a risk there.

Lucas Perry: Do you think there's also a risk with sharing? It would seem that you would accelerate AGI timelines by sharing the most state-of-the-art systems with anyone, right? And then you can't guarantee that those people will use them in value-aligned ways.

Bart Selman: Yeah, that's the flip side; it's good you brought that up. There is a flip side to sharing even the latest deep learning code or something like that: malicious actors could use it. In general, though, I think openness is better in terms of keeping an eye on what gets developed, and it allows different researchers to develop common standards and common safeguards. So I see that risk of sharing, but I do think the international research community can set standards overall. We see that in synthetic biology and other areas, where openness in general leads to better management of risks. But you're right, there is the effect that it accelerates progress. Still, the countries are big enough that even if China and the US completely separated their AI development, both would do very well in developing the technology.

Lucas Perry: So I'm curious, do you think that AI is a zero-sum game? And I'm curious how understanding of AI alignment and existential risk at the highest levels of the Chinese and US governments affects the extent to which there is international cooperation for the beneficial development of AI. There's this sense of racing because we need to capture the resources and power, but there's a trade-off with the risks of misalignment and existential risk.

Bart Selman: I firmly believe that it's not a zero-sum game, absolutely not. I give the example of the healthcare system: both China and the US have an interest in more accessible, more available, and lower-cost healthcare. The objectives are very similar there, and AI can make an incredible difference for both countries. Similarly in education: you can improve education with AI-assisted education, adult education, and continuous learning. So there are incredible opportunities and both countries would benefit. Definitely, AI is not a zero-sum game, and I hope countries realize that. When China declared it wants to be a leading AI nation by 2030, well, I think there's room for several leading nations.

So I don't think one nation being better at AI is the best outcome. The better outcome is if AI gets developed, used, and shared by many nations. I hope politicians and governments see that shared interest. Now, as part of that shared interest, they may also realize that bad actors, and that can be small groups of people, a company, or an organization, using AI for negative goals is a global risk that, again, should be managed by countries collaborating. So I'm hoping there is some understanding of the global benefits, that it is not a zero-sum game, that we all can gain, and that the risk is a global risk we should have a dialogue about. The one component that is tricky, I think, is always the military component. But even there, as I mentioned before, the risk of lethal autonomous weapons is something that affects every nation. So I can see countries realizing it's better to collaborate and cooperate in these areas than to treat it as pure competition.

Lucas Perry: So you said it's not a zero-sum game and that we can all benefit. How would you view the perspective that the relative benefits to me personally of racing are still higher, even if it's not a zero-sum game, therefore I'm going to race anyway?

Bart Selman: There may be some of that, except that I look at it a little differently. I can see a race where we still share technology. It's almost like we're competing with each other but we're trying to get better all together; you can have a race and it can still be beneficial for progress, as long as you don't want to keep everything to yourself. And that's the story of scientific discovery and the way scientists operate: in some sense, scientists compete with each other, because we all want to discover the next big thing in science, but there is also a sense that we have to share, because if I don't share, I don't get the latest from what my colleagues are doing. So there's a mutual understanding that yes, we should share, because it actually helps me, even individually. That's how I see it.

Lucas Perry: So how do you convince people to share the thing which is, like, the final invention? Do you know what I mean? I share because otherwise I won't get the next thing my colleague will make, but if I've just made the last invention, that means I will never have to look to my colleague again for another invention.

Bart Selman: Yeah, that's a good one. But in science, we don't think there's an endpoint; there will always be something novel.

Lucas Perry: Yeah, of course there’s always something novel, but you’ve made the thing that will more quickly discover every other new novel thing than any other agent on the planet. How do you get someone to share that?

Bart Selman: Well, I think partly the story still is, even if one country gets that dominant, there is still the question whether that is actually beneficial even for that country. There are many different capabilities in play; there are still nuclear weapons and things like that. So you might get the best AI and somebody might say, "Okay, I think it's time to terminate you." There are a lot of different forces. It's a sufficiently complex interaction game that thinking of it as a single-dimension issue is probably not quite the way the world will work. And I hope politicians are aware of that; I think they are.

Lucas Perry: Okay. So in the home stretch here, we've brought up lethal autonomous weapons a few times. What is your position on the international and national governance of lethal autonomous weapons? Do you think a red line should be drawn so that life or death decisions are not delegated to machine systems?

Bart Selman: That's a reasonable goal. I do think there are practical issues in specifying exactly in what sense and how a system should work. Decisions that have to be made very quickly: how are you going to make those if there's no time for a human to be in the loop? So I like it as an objective that there should always be a human in the loop, but the actual implementation, I think, needs further work, and it might even come down to looking at individual systems and saying, "Okay, this one has sufficient safeguards, and this one doesn't." Because there's this issue of how quickly you have to react and whether that can be done.

And of course, a defensive system may have to make a very quick decision, which could endanger the lives of, I don't know, incoming pilots, for example. So there are some issues, but I like it as a principle that lethal autonomous systems should not be developed and that there should always be human decision-making as part of it, but it probably has to be figured out for each individual system.

Lucas Perry: So would you be in favor of, for example, international cooperation in limiting autonomous weapons, with treaties and governance around them?

Bart Selman: Oh, definitely. People are sometimes skeptical or wonder whether it's possible, but I actually think it's one of those things that is probably possible, because the real tricky part is when militaries start to develop those systems: once these systems are being developed or start being sold, they can end up in the hands of any group. So I think countries actually have an interest in treaties and agreements on regulating or limiting any development of such systems. I'm a little hopeful that people will see it would be in nobody's interest to have countries competing on developing the most deadly lethal autonomous weapon; that would actually be a bad idea. And I'm hopeful that people will realize that. It is partly, again, an educational thing: people should be more aware of it and will directly ask their governments to reach agreements.

Lucas Perry: Do you see the governance of lethal autonomous weapons as a deeply important issue for the international regulation and governance of AI, a kind of first key issue as we begin to approach AGI and superintelligence? That is, is our ability to come up with beneficial standards and regulation for autonomous weapons really important for long-term beneficial outcomes from things like AGI and superintelligence?

Bart Selman: Yeah, I think it would be a good exercise, in some sense, of seeing what kind of agreements you can put in place. Lethal autonomous weapons, I think, are a useful starting place because the issue is fairly clear. There are also some complications: you can say, "Oh, we'd never do this," but what if you have to decide in a fraction of a second what to do? So there are things that have to be worked out, but in principle, I think countries can agree that this needs collaboration between countries. And then that same kind of discussion, the same kind of channels, because these things take time, forming the right channels and the right groups of people to discuss these issues, could then be put towards other risks that AI may pose. So I think it's a good starting point.

Lucas Perry: All right. A final question here, and this one is just a bit more fun. At Beneficial AGI 2019, I think you were on a panel about whether we want machines to be conscious. On that panel, you mentioned that you thought AI consciousness was both inevitable and adaptive. I'm curious whether you think about the science and philosophy of consciousness and whether you have a particular view that you subscribe to.

Bart Selman: It's a fun topic. When I thought more about consciousness and whether it will emerge, I was drawn back to an area of AI, I've been in the field a long time, generally called knowledge representation and reasoning, which is about how knowledge is represented in an AI system and how the system can reason with it. One big sub-area there was the notion of self-reflection, and the related notion in multi-agent systems. Self-reflection means not only knowing certain things, but also knowing what you know and knowing what you don't know. Similarly, in multi-agent systems, you have to know not only what you know, but also have some idea of what other agents may know and what they don't know, in order to facilitate interactions with them.

So this whole notion of reflection on your own knowledge and other agents' knowledge is, in my mind, somewhat connected to consciousness of yourself and, of course, your environment. That led to my comment that if you build sufficiently complex systems that behave intelligently, they will have to develop those capabilities. They have to know what they know, what they don't know, and what others know and don't know. And knowing what others might know about you goes to arbitrarily many levels of interaction. So I think it's going to be a necessary part of developing intelligent systems, and that's why my sense is that some notion of consciousness will emerge in such systems, because it's part of this reflection mechanism.

And what I think is exciting about it is that in consciousness research there's also a lot of work now on the neurological basis for consciousness, on what in the brain points at consciousness. Now we can work on that from the AI side: we see how deep reinforcement learning interacts with neuroscience, and we're looking for analogies between deep reinforcement learning approaches in AI and what insights they give about actual brains, actual biological neurological systems. So perhaps when we see things like reflection and consciousness emerge in AI systems, we will get new insights into what potentially happens in the brain. So there's very interesting potential there.

Lucas Perry: My sense is that it may be possible to disentangle constructing a self model, a model of both what I am and of what I know and don't know, along with a world model, from consciousness itself. These things seem to be correlated with consciousness, with the phenomenal experience of being alive, but it seems to me they could come apart, because it seems conceivable that I could be a sentient being with conscious awareness that doesn't have a self model or a world model. You can imagine just awareness of a wall that's the color green, with no sense of duality there between self and object. So philosophers and computer scientists come at the problem a bit differently. There's the computational aspect, of course, the modeling that's happening, but it seems like the consciousness part can perhaps become disentangled from the modeling. So I'm curious whether you have any perspective or opinion on that, and on how we could ever know whether an AI was conscious, given that they may come apart.

Bart Selman: You raise an interesting possibility, that maybe they can come apart. And then the question is, can we investigate that? Can we study that? That's a question in itself. I was coming at it more from the sense that when a system gets complex enough and starts having these reflections, it will be hard for it not to be conscious. But you're right, it probably could still be separate, although I would be a little surprised. So my point in part is that the deep reinforcement learning approach, or whatever deep learning framework we use to get these reflective capabilities, might, I'm hoping, give us new insights into how to look at this from the brain perspective and a neural perspective, because these things might carry over. And is consciousness a computational phenomenon? My guess is it is, of course, but that still needs to be demonstrated.

Lucas Perry: Yeah. I would also be surprised if sophisticated self and world modeling didn't, most of the time or all the time, carry conscious awareness along with it. But even prior to that, as we have domain-specific systems, it's a little bit sci-fi to think about, but there's the risk of proliferating machine suffering if we don't understand consciousness. If we're running all of these kinds of machine learning algorithms that don't have sophisticated self models or world models, but the phenomenal experience of suffering still exists, that could be a problem. We had factory farming of animals, and then maybe later in the century we have the running of painful deep learning algorithms.

Bart Selman: That's indeed a possibility. It argues that we actually have to dig deeper into the questions of consciousness, and so far, I think, most AI researchers have not studied it; I'm just starting to see some possibility of studying it again as AI researchers. It brought me back a little bit to this notion of reflection. Topics go in and out of fashion, but that used to be quite seriously studied, including with philosophers, about what it means to know what you know, what it means to know what you don't know, and then the things you don't know that you don't know. So we thought about some of these issues, and now consciousness brings in a new dimension, and you're quite right, it could be quite separate, but it could also be related.

Lucas Perry: So as we wrap up here, is there a final comment you’d like to make or anything that you feel like is left unsaid or just a parting word for the audience about alignment and AI?

Bart Selman: My comment to the audience is that the alignment question, value alignment, and AI safety are key topics for AI researchers, with many research challenges that are far from solved. And in terms of the development of AI, there are tremendous positive opportunities if things get done right. One concern I have as an AI researcher is that we get overwhelmed by the concerns and risks and decide not to develop positive capabilities for AI. So we should keep in mind that AI can really benefit society if it's done well, and we should take that as our primary challenge and manage the risks while doing so.

Lucas Perry: All right, Bart, thank you very much.

Bart Selman: Okay. Thanks so much. It was fun.

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

  • Intelligence and coordination
  • Existential risk from AI, synthetic biology, and unknown unknowns
  • AI adoption as a delegation process
  • Jaan’s investments and philanthropic efforts
  • International coordination and incentive structures
  • The short-term and long-term AI safety communities

1:02:43 Collective, institutional, and interpersonal coordination

1:05:23 The benefits and risks of longevity research

1:08:29 The long-term and short-term AI safety communities and their relationship with one another

1:12:35 Jaan’s current philanthropic efforts

1:16:28 Software as a philanthropic target

1:19:03 How do we move towards beneficial futures with AI?

1:22:30 An idea Jaan finds meaningful

1:23:33 Final thoughts from Jaan

1:25:27 Where to find Jaan

 

Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

  • Understanding the universe through digital physics
  • How human consciousness operates and is structured
  • The path to aligned AGI and bottlenecks to beneficial futures
  • Incentive structures and collective coordination

You can find FLI's three new policy-focused job postings here

1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?

1:19:39 Non-duality and collective coordination

1:22:53 What difficulties are there for an idealist worldview that involves computation?

1:27:20 Which features of mind and consciousness are necessarily coupled and which aren’t?

1:36:40 Joscha’s final thoughts on AGI

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

  • Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
  • The relationship between AI safety, control, and alignment
  • Virtual worlds as a proposal for solving multi-multi alignment
  • AI security

You can find FLI's three new policy-focused job postings here

 

Papers discussed in this episode:

On Controllability of AI

Unexplainability and Incomprehensibility of Artificial Intelligence

Unpredictability of AI

 

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

  • The current state of the deployment and development of lethal autonomous weapons and swarm technologies
  • Drone swarms as a potential weapon of mass destruction
  • The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons
  • The difficulty of attribution, verification, and accountability with autonomous weapons
  • Autonomous weapons governance as norm setting for global AI issues

You can check out the new lethal autonomous weapons website here

Beatrice Fihn on the Total Elimination of Nuclear Weapons

  • The current nuclear weapons geopolitical situation
  • The risks and mechanics of accidental and intentional nuclear war
  • Policy proposals for reducing the risks of nuclear war
  • Deterrence theory
  • The Treaty on the Prohibition of Nuclear Weapons
  • Working towards the total elimination of nuclear weapons

4:28 Overview of the current nuclear weapons situation

6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war

9:27 Accidental nuclear war and human systems

12:08 The risks of nuclear war in 2021 and nuclear stability

17:49 Toxic personalities and the human component of nuclear weapons

23:23 Policy proposals for reducing the risk of nuclear war

23:55 New START Treaty

25:42 What does it mean to maintain credible deterrence

26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons

28:00 Deterrence theoretic arguments for nuclear weapons

32:36 Reduction of nuclear weapons, no first use, removing ground based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons

39:13 Arguments for and against nuclear risk reduction policy proposals

46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines

48:27 Working towards, and the theory of, the total elimination of nuclear weapons

1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons

1:14:26 Elevating activism around nuclear weapons and messaging more skillfully

1:15:40 What the public needs to understand about nuclear weapons

1:16:35 World leaders’ views of the treaty

1:17:15 How to get involved