Beatrice Fihn on the Total Elimination of Nuclear Weapons

  • The current nuclear weapons geopolitical situation
  • The risks and mechanics of accidental and intentional nuclear war
  • Policy proposals for reducing the risks of nuclear war
  • Deterrence theory
  • The Treaty on the Prohibition of Nuclear Weapons
  • Working towards the total elimination of nuclear weapons

4:28 Overview of the current nuclear weapons situation

6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war

9:27 Accidental nuclear war and human systems

12:08 The risks of nuclear war in 2021 and nuclear stability

17:49 Toxic personalities and the human component of nuclear weapons

23:23 Policy proposals for reducing the risk of nuclear war

23:55 New START Treaty

25:42 What does it mean to maintain credible deterrence

26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons

28:00 Deterrence theoretic arguments for nuclear weapons

32:36 Reduction of nuclear weapons, no first use, removing ground-based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons

39:13 Arguments for and against nuclear risk reduction policy proposals

46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines

48:27 Working towards, and the theory of, the total elimination of nuclear weapons

1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons

1:14:26 Elevating activism around nuclear weapons and messaging more skillfully

1:15:40 What the public needs to understand about nuclear weapons

1:16:35 World leaders’ views of the treaty

1:17:15 How to get involved

 

Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

  • FLI’s perspectives on 2020 and hopes for 2021
  • What our favorite projects from 2020 were
  • The biggest lessons we’ve learned from 2020
  • What we see as crucial and needed in 2021 to make progress towards existential safety

54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue

56:00 Jared Brown on the need for robust government engagement

57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation

1:00:10 Outro

 

Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

  • William Foege’s and Victor Zhdanov’s efforts to eradicate smallpox
  • Personal stories from Foege’s and Zhdanov’s lives
  • The history of smallpox
  • Biological issues of the 21st century

18:51 Implementing surveillance and containment throughout the world after success in West Africa

23:55 Wrapping up with eradication and dealing with the remnants of smallpox

25:35 Lab escape of smallpox in Birmingham, England and the final natural case

27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov

29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov

31:05 Michael Burkinsky’s memories of Victor Zhdanov Sr.

39:26 Victor Zhdanov Jr.’s memories of Victor Zhdanov Sr.

46:15 Mushrooms with meat

47:56 Stealing the family car

49:27 Victor Zhdanov Sr.’s efforts at the WHO for smallpox eradication

58:27 Exploring Alissa’s book on Victor Zhdanov Sr.’s life

1:06:09 Michael’s view that Victor Zhdanov Sr. is unsung, especially in Russia

1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century

1:07:32 The origin and history of smallpox

1:10:34 The origin and history of variolation and the vaccine

1:20:15 West African “healers” who would create smallpox outbreaks

1:22:25 The safety of the smallpox vaccine vs. modern vaccines

1:29:40 A favorite story of William Foege’s

1:35:50 Larry Brilliant and people central to the eradication efforts

1:37:33 Foege’s perspective on modern pandemics and human bias

1:47:56 What should we do after COVID-19 ends

1:49:30 Bio-terrorism, existential risk, and synthetic pandemics

1:53:20 Foege’s final thoughts on the importance of global health experts in politics

 

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

  • Important intellectual movements and their merits
  • The evolution of metaphysical and epistemological views over human history
  • Consciousness, free will, and philosophical blunders
  • Lessons for the 21st century

Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

 Topics discussed in this episode include:

  • How Big Tobacco used its wealth to obfuscate the harms of tobacco and appear socially responsible
  • The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
  • How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
  • How to combat the problem of ethics-washing in Big Tech

 

Timestamps: 

0:00 Intro

1:55 How Big Tech actively distorts the academic landscape and what counts as big tech

6:00 How Big Tobacco has shaped industry research

12:17 The four tactics of Big Tobacco and Big Tech

13:34 Big Tech and Big Tobacco working to appear socially responsible

22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities

32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists

51:53 Big Tech and Big Tobacco finding skeptics and critics of them and funding them to give the impression of social responsibility

1:00:24 Big Tech and being authentically socially responsible

1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems

1:16:56 Ethics-washing as systemic

1:17:30 Action items for solving Ethics-washing

1:19:42 Has Mohamed received criticism for this paper?

1:20:07 Final thoughts from Mohamed

 

Citations:

Where to find Mohamed’s work

The Future of Life Institute AI policy page

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Mohamed Abdalla on his paper The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. We explore how big tobacco has used and still uses its wealth and influence to obfuscate the harm of tobacco by funding certain kinds of research, conferences, and organizations, as well as influencing scientists, all to shape public opinion in order to avoid regulation and maximize profits. Mohamed explores in his paper and in this podcast how big technology companies engage in many of the same behaviors and tactics of big tobacco in order to protect their bottom line and appear to be socially responsible. 

Some of the opinions presented in the podcast may be controversial or inflammatory to some of our normal audience. The Future of Life Institute supports hearing a wide range of perspectives without taking a formal stance on them as an institution. If you’re interested to know more about FLI’s work in AI policy, you can head over to our policy page on our website at futureoflife.org/ai-policy, link in the description. 

Mohamed Abdalla is a PhD student in the Natural Language Processing Group in the Department of Computer Science at the University of Toronto and a Vanier scholar, advised by Professor Frank Rudzicz and Professor Graeme Hirst. He holds affiliations with the Vector Institute for Artificial Intelligence, the Centre for Ethics, and ICES, formerly known as the Institute for Clinical and Evaluative Sciences.

And with that, let’s get into our conversation with Mohamed Abdalla.

So we’re here today to discuss a recent paper of yours titled The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. To start things off here, I’m curious if you could paint in broad brush strokes how you view big tech as actively distorting the academic landscape to suit its needs, and how these efforts are modeled and similar to what big tobacco has done. And if you could also expand on what you mean by big tech, I think that would also be helpful for setting up the conversation.

Mohamed Abdalla: Yeah. So let’s define what big tech is. I think that’s the easiest of what we’re going to tackle. Although in itself, it’s actually not a very easy label to pin down. It’s unclear what makes a company big and what makes a company tech. So for example, is Yahoo still a big company, or would it count as big tech? Is Disney a tech company? Because they clearly have a lot of technical capabilities, but I think most people would not consider them to be big tech. So what we did was we basically had a lot of conversations with a lot of researchers in our department, and we asked for a list of companies that they viewed as big tech. And we ended up with a list of 14 companies, most of which we believe will be agreeable: Google, Facebook, Microsoft, Apple, Amazon, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. This is a very restrictive set of what we believe the big tech companies are. For example, a clear missing one here is Oracle. There are a lot of other big companies that are missing, but we’re not assigning this as a prescriptive or definitive list of what the big tech companies are.

But adding more companies to this list would only help us strengthen the conclusions we’ve drawn in our paper because they will show how much more influence these companies have. So by limiting it to a small group, we’re actually taking a pessimistic view on the maximum amount of influence that they have. That’s what we define as big tech.

Then the question comes, what do we mean by they have an outsized influence or how they go about influencing policy? And we will get into specific examples here, but I think the best way of demonstrating why there should be cause for concern is through a very simple analogy.

So imagine if there was a health policy conference, which had tens of thousands of researchers, and among the topics they discussed was how to deal with the negative effects of increased tobacco usage. And its largest funding bodies were all big tobacco companies. Would this be socially acceptable? Would the field of health policy accept this? No. In fact, there are guidelines such as Article 5.3 of the World Health Organization’s Framework Convention on Tobacco Control, which states that if you are developing public health policies with respect to tobacco control, you are required to act to protect these policies from commercial and other vested interests of the tobacco industry. So they are aware of the fact that industrial funding has a very large negative effect on the types of research, the types of conclusions, and how strong the conclusions drawn from research can be.

But if we flip that around: instead of health policy, machine learning policy or AI policy. Instead of big tobacco, big tech. And instead of the negative effects of increased tobacco usage, the ethical concerns of increased AI deployment. Would this be accepted? And this is not even a hypothetical, because all of the big machine learning conferences count these big tech companies among their top funding bodies. If you look at NeurIPS, or you look at FAccT, the Fairness, Accountability, and Transparency conference, their platinum sponsors or gold sponsors, whatever their highest level is depending on the conference, are all of these companies. Even if one wants to say it’s okay because these companies are not the same as big tobacco, this should be justified. There is no justification for why we allow big tech to have such influence. I haven’t proven that this influence exists yet in my speaking so far. But there is precedent to believe that industrial funding warps research, and there’s been no critical thought about whether or not big tech, as computer science’s industrial funding, warps research. And I argue in the paper that it does.

Lucas Perry: All right. So with regards to understanding how industry involvement in the research and development of a field can have a lot of influence, you take big tobacco as a historical example, to learn from the strategies and playbook of that industry and see if big tech might be doing the same things. So can you explain, in the broadest brush strokes, how big tobacco became involved in shaping industry research on the health effects of tobacco, and how big tech is using all or most of the same moves to shape the research landscape, so that big tech has a public image of trust and accountability when that may not be the case?

Mohamed Abdalla: The history is that shortly after World War II in the mid 1950s, there was a pronounced decrease in demand for their product. What they believe caused or at least in part caused this drop in demand was a Reader’s Digest article that was published called Cancer by the Carton. And it discussed the scientific links between smoking and lung cancer.

It was later revealed after litigation that big tobacco actually knew about these links, but they also knew that admitting it would result in increased legislation and decreased profits. So they didn’t want to publicly agree with the conclusions of the article, despite having internal research which showed that this indeed was the case.

After this article was published, it was read by a lot of people, and people were getting scared about the health effects of smoking. They employed a PR firm, and their first strategy was to publish a full-page ad in the New York Times that was seen by, I think, approximately 43 million people or so, which is a large percentage of the population. And they would go on to state, and I quote, “They accept an interest in people’s health as a basic responsibility, paramount to every other consideration in our business.” So despite having internal research that showed that these links were conclusive, in their full-page ad not only did they state that they believed these links were not conclusive, they also lied and said that people’s health was paramount to every other consideration, including profit. So clear, blatant lies, but dressed up really nicely. And it reads really well.

Another action that the PR firm instructed them to take was to fund academic research that would not only draw into question the conclusiveness of these links, but also sort of add noise, cause controversy, and slow down legislation. And I’ll go into the points specifically. But the idea to fund academia was actually the PR firm’s idea, and it was under the instruction of the PR firm that they funded academia. And despite their publicly stated goal of funding independent research because they wanted the truth and they cared about health, it was revealed, again after litigation, that internal documents existed showing that their true purpose was to sow doubt into the research that showed conclusive links between smoking and cancer.

Lucas Perry: Okay. And why would an industry want to do this? Why would they want to lie about the health implications of their products?

Mohamed Abdalla: Well, because there’s a profit motive behind every company’s actions. And that is basically unchanged to this day. While they may say a lot of nice-sounding stuff, it’s just a simple fact of life that the strongest driving factor behind any company’s decisions is the profit motive. Especially if they’re publicly traded, they have a legal obligation to their shareholders to maximize profit. It’s not like they’re evil per se. They’re just working within the system that they’re in.

We see sort of the exact same thing with big tech. People can argue about when the decline of opinion regarding these big tech companies started. Especially if you’re American centric. Since I’m in Canada, you’re in the States. I think that the Cambridge Analytica scandal with Facebook can be seen as sort of a highlight, although Google has its own thing with Project Maven or Project Dragonfly.

And the Pew Research Center shows that the number of people who view big tech as having a net positive effect on the world started decreasing around the mid-2010s. And I’m only going to quote Facebook here, well, Mark Zuckerberg specifically, in his testimony to Congress. There, he would state, “It’s clear that we didn’t do enough and we didn’t focus enough on preventing abuse.” And he stressed that he admitted fault and that they would double down on preventing abuse, claiming this was simply because they didn’t think about how people could do harm. And again, this statement did not mention the leaked internal emails, which stated that they were aware of companies breaking their policies. They explicitly knew that Cambridge Analytica was breaking their scraping policies and chose to do nothing about it.

Even recently, there have been leaks. Buzzfeed News leaked Sophie Zhang’s resignation letter, which basically stated that unless they thought they were going to catch flak in terms of PR, they would not act to moderate a lot of the negative events that were happening on their platforms.

So this is a clear profit incentive thing, and there’s no reason to think that these companies are different. So then the question is: how much of their funding of AI ethics or AI ethicists is driven by a benevolent desire to see goodness in the world? And I’m sure there are people working there that have this sort of desire. But there are four self-serving reasons for which you can fund academia. How much of it is used to reinvent yourself as socially responsible, influence the events and decisions made by funded universities, influence the research questions of individual scientists, and discover receptive academics?

So while we can assume that some people there may actually have the social good in mind and may want to improve society, we need to also consider that these are the reasons that big tobacco funded academia. And we need to check: does big tech also fund academia? And are the effects of their funding academia the same as the effects of big tobacco funding academia? If so, we as academics, or the public, or government, or whoever, need to take steps to minimize the undue influence.

Lucas Perry: So I do want to spend the main body of this conversation on how big tobacco and big tech engage in these four points that you just illustrated. One, they reinvent themselves in the public image to be seen as socially responsible, even when they may not actually be trying to be socially responsible. It’s about the image and the perception. Two, they try to influence the events and decisions made by funded universities. Three, they influence the research questions and plans of individual scientists. This helps funnel research into areas that will make you look benevolent or socially responsible, or funnel people away from topics that will lead to regulation.

And then the last one is to discover receptive academics who can be leveraged. If you’re in the oil industry and you can find a few scientists who have some degree of reputability and are willing to cast doubt on the science of climate change, then you’ve found some pretty good allies in your fight for your industry.

Mohamed Abdalla: Yep, exactly.

Lucas Perry: So before we jump into that, do you want to go through each of these points and say what big tobacco did in each of them and what big tech did in each of them? Or do you want to just start by saying everything that big tobacco did?

Mohamed Abdalla: Since there’s four points and there’s a lot of evidence for each of these points, I think it’s probably better to do for the first point, here’s what big tobacco did. And then here’s what big tech did.

Lucas Perry: Let’s go ahead and start then with this first point. From the perspective of big tobacco and big tech, what have they done to reinvent themselves in the public image as socially responsible? And again, you can just briefly touch on why it’s their incentive to do that, and not actually be socially responsible.

Mohamed Abdalla: So the benefit of framing yourself as socially responsible, without actually having to take any actions to become socially responsible or to earn that label, is basically increased consumer confidence, increased consumer counts, and a decreased chance of legislation. If the general public, and thereby the average politician, believes that you are socially responsible and that you do care about the public good, you are less likely to be regulated as an industry. So that’s a big driving factor for trying to appear socially responsible. And we actually see this in both industries. I’ll cover it later, but a lot of the stuff that’s leaked basically shows that, spoiler, a lot of the AI research being done, especially in AI ethics, is seen as a way to either delay or prevent the legislation of AI. Because they’re afraid that it will eat into their profit, which is against their profit motive, which is why they do a lot of the stuff that they do.

So first, I’ll go over what big tobacco did, and then we’ll try to draw parallels to what big tech did. To appear socially responsible, their PR firm, Hill+Knowlton Strategies, suggested that they fund academics and create research centers. The biggest one that they created was the CTR, the Council for Tobacco Research. And when they created the CTR, they created it in a very academically appealing way. What I mean by that is that the CTR was advised by distinguished scientists who served on its scientific advisory boards. They went out of their way to recruit these scientists so that the research center would gain academic respectability and be trusted, not only by the layperson but by academics in general, as a respectable organization despite being funded by big tobacco.

And then what they would do is fund research questions. They’d act essentially as a pseudo-granting body and provide grants to researchers who were working on specific questions decided by this council. At surface level, it seems okay. I’m not 100% sure how it works in the States, but at least in Canada we have research funding bodies: NSERC, the Natural Sciences and Engineering Research Council, and CIHR, the Canadian Institutes of Health Research, which decide who gets the grants from the government’s research money. And we have them for all the different fields. In theory, grants should be given based on the validity of the research, its potential impact, and the other academically relevant considerations.

But what ended up being shown, again after litigation, was that at least in the case of big tobacco, there were more lawyers than scientists involved in the distribution of money. And the lawyers were aware of what would and would not likely hurt the bottom line of these companies. So, quoting previous work, their internal documents showed that they would simply refuse to fund any proposal that acknowledged that nicotine was addictive or that smoking was dangerous.

They basically went out of their way to fund research that was sort of unrelated to tobacco use, so that they got good PR while minimizing the risk that said research would harm their profit motive. And during any sort of litigation, for example during a cigarette product liability trial, the lawyers presented a list of all the universities and medical schools supported by this Council for Tobacco Research as proof that they cared about social responsibility and about people’s wellbeing. They used this money as proof.

Basically, at first glance, all of their external-facing actions made it seem that they cared about the wellbeing of people. But it was later revealed through internal documents that this was not the case, and that this was a very calculated move to prevent legislation, beat litigation, and serve other self-serving goals in order to maximize profit.

In big tech, we see similar things happening. In 2016, the Partnership on AI to Benefit People and Society was established to, “Study and formulate best practices on AI technologies and to study AI and its influences on people and society.” Again, a seemingly very virtuous goal. And a lot of people signed up for this: a lot of non-profit organizations, academic bodies, and a lot of industry. But it was later leaked that despite sounding rosy, the reality on the ground was a little bit darker. Reports from those involved, in a piece published at The Intercept, demonstrated how neither prestigious academic institutions such as MIT, nor civil liberty organizations like the ACLU, had much power in the direction of the partnership. So they ended up serving a legitimating function for big tech’s goals. Basically, railroading other institutions while having their brands on your work helps you appear socially responsible. But if you don’t actually give them power, it’s only the appearance of social responsibility that you’re getting. You’re not actually being forced to be socially responsible.

There are other examples relating to litigation specifically. During his testimony to Congress, Mark Zuckerberg stated that in order to tackle these problems, they’ll work with independent academics, and these independent academics would be given oversight over their company. It’s unclear how an academic that is chosen by Facebook, theoretically compensated by Facebook, and could be fired by Facebook would be independent of Facebook after being chosen, receiving compensation, and knowing that they can lose that compensation if they do something to anger Facebook.

Another example, almost word for word from big tobacco’s showing off to jurors, is that Google boasts that it releases more than X research papers on topics in responsible AI in a year to demonstrate social responsibility. This is despite arm’s-length involvement with military-minded startups. To build on that: Alphabet, Google’s parent company, faced a lot of internal backlash over Project Maven, which was basically their work on image recognition algorithms for drones. They faced a lot of backlash, so publicly, they appeared to have stopped; they promised to stop working with the military. However, internally, Gradient Ventures, which is basically the venture capital arm of Alphabet, still funds military startups and provides them with researchers and data. So despite their promise not to work with the military, and despite their research in responsible AI, they still work in areas that don’t necessarily fit the label of being socially responsible.

Lucas Perry: It seems there’s also this dynamic here where in both tobacco and in tech, it’s cheaper to pretend to be socially responsible than to actually be socially responsible. In the case of big tobacco, that would’ve actually meant dismantling the entire industry and maybe bringing e-cigarettes on 10 years before that actually happened. Yet in the case of big tech, it would seem to be more like hampering short term profit margins and putting a halt to recommender algorithms and systems that are already deployed that are having a dubious effect on American democracy and the wellbeing of the tens of millions of human brains that are getting fed garbage content by these algorithms.

So this first point seems pretty clear to me. I’m not sure if you have anything else that you’d like to add here. Public perception is just important. And if you can get policymakers and government to also think that you’re being socially responsible, they’ll hold off on regulating you in the ways that you don’t want to be regulated.

Mohamed Abdalla: Yeah, that’s exactly it. Yeah.

Lucas Perry: Are these moves that are made by big tobacco and big tech also reflected in other contentious industries like in the oil industry or other greenhouse gas emitting energy industries? Are they making generally the same type of moves?

Mohamed Abdalla: Yeah, 100%. This is simply industry’s reaction to any sort of possible legislation. Whether it’s big tobacco and smoking legislation, big tech and some sort of legislation on AI, oil companies and legislation on greenhouse gas emissions or clean energy, and so on and so forth. Even a lot of the food industry. I’m not sure what the proper term for it is, but a lot of nutritional science research is heavily corrupted by funding, whether from Kellogg’s, the meat industry, or the dairy industry. So that’s what industry does. They have a profit motive, and this is a profitable action to take. So it’s everywhere.

Lucas Perry: Yeah. So I mean, when the truth isn’t in your favor, and your incentive is profit, then obfuscating the truth is your goal.

Mohamed Abdalla: Exactly.

Lucas Perry: All right. So moving on to the second point then, how is it that big tobacco and big tech in your words, work to influence the events and decisions made by funded universities? And why does influencing the decisions made by funded universities even matter for large industries like big tobacco and big tech?

Mohamed Abdalla: So there are multiple reasons to influence events, and influencing events can mean a variety of actions. You could hold events, you could stop holding events, or you could change how events that are being held operate. By events here, at least in the academic sense, I mean conferences. And although they’re not always necessarily funded by universities, they are academic events. So why would you want to do this? Let’s talk about big tobacco first and show by example what they gained from doing this.

First, I’ll just go over some examples. At my home university, the University of Toronto, Imperial Tobacco, which is one of the companies that belongs in big tech’s counterpart, big tobacco, withheld its funding from U of T’s Faculty of Law conference as retribution for the fact that U of T law students were influential in having criminal charges laid against Shoppers Drug Mart for selling tobacco to a minor. As one of their spokespersons said, the students were biting the hand that feeds them. If university events such as this annual U of T law conference rely on funding from industry in general, then industry has an outsized say in what you as an institution will do, or what people working for you can and cannot do, because you’ll be scared of losing that consistent money.

Lucas Perry: I see. So you feed as many people as you can, knowing that if they ever bite you, the retraction of your money, of what you’re feeding them, is an incentive for them not to hurt you?

Mohamed Abdalla: Exactly. But it’s not even the retraction, it’s the threat of retraction. If in the back of your mind 50% of your funding comes from whatever industry, can you afford to live without 50% of your funding? Most people would say no, and that causes worry and will cause you to self-censor. And that’s not a good thing in academia.

Lucas Perry: And replacing that funding is not always easy?

Mohamed Abdalla: It’s very difficult.

Lucas Perry: So not only are you getting the public image of being socially responsible by investing in some institutions or conferences, which appear to be socially responsible or which contain socially responsible workshops and portions. But then you also have in the back of the mind of the board that organizes these conferences, the worry and the knowledge that, “We can’t take a position on this because we’d lose our funding. Or this is too spicy, so we know we can’t take a position on this.” So you’re getting both of these benefits. The constraining of what they may do within the context of what may be deemed ethically and socially responsible, which may be to go against your industry in some strong way. But you also gain the appearance of just flat out being socially responsible while suppressing what would be socially responsible free discourse.

Mohamed Abdalla: 100%. And to build on that a little bit, since we brought up boards, the people that decide what happens: an easier way of influencing what happens is to actually plant or recruit friendly actors within academia. There is a history, at least at my home university again, where the former president and dean of law of U of T was the director of a big tobacco company. And someone on the board of Women’s College Hospital, which is a teaching hospital affiliated with my university, was the president and chief spokesperson for the Canadian Tobacco Manufacturers’ Council. So although there is no proof that they necessarily went out of their way to change the events held by the university, if a large percentage of your net worth is held in tobacco stocks, even if you’re a good human being, just because you’re a human being, you will have some sort of incentive to not hurt your own wellbeing. And that can influence the events that your university holds, the type of speakers that you invite, the types of stances that you allow your university to take.

Lucas Perry: So you talked about how big tobacco was doing this at your home university. How do you believe that big tech is engaging in this?

Mohamed Abdalla: The first thing that we’ll cover is the funding of large machine learning AI conferences. They may say they’re funding these conferences for academic innovation or whatever other reason, and I believe that a large portion of this is because of academic innovation. But you can see that the amount of funding they provide also helps give them a say, or at least a presence in the back of the organizers’ minds. NeurIPS, which is the biggest machine learning AI conference, has always had at least two big tech sponsors at the highest tier of funding since 2015. And in recent years, the number of big tech companies has exceeded five. This also carries over to workshops, where over the past five years, only a single ethics related workshop did not have at least one organizer belonging to big tech. And that was 2018’s robust AI in financial services workshop, which instead featured the heads of AI branches at big banks, which is not necessarily better. It’s not to say that those working in these companies should not have any say. But to have no venue that doesn’t rely on big tech in some sort of way, or is not influenced by big tech in some sort of way, is worrying.

Lucas Perry: Or whose existence is tied up in the incentives of big tech. Because whatever big tech’s incentives are, that’s generating the profit which is funding you. So you’re protecting that whole system when you accept money from it. And then your incentives become aligned with the incentives of the company that is suppressing socially responsible work.

Mohamed Abdalla: Yeah. Fully agree. In the next section, where we talk about the individual researchers, I’ll go more into this. There’s a very reasonable framing of this issue where big tech isn’t purposely doing this, and academia is not purposely being influenced, but the influence is taking place. But basically exactly as you said. Even if these companies are not making any explicit demands or requests of the conference organizers, it’s only human nature to assume that these organizers would be worried or uncomfortable doing anything that would hurt their sponsors. And this type of self-censorship is worrying.

The example that I just gave was for NeurIPS, which is largely a technical conference. So Google does have incentive to fund technical research, because a really good optimization algorithm will help their industry or their work, their products. But even when it comes to conferences that are not technical in their goal, for example FAccT, the Fairness, Accountability, and Transparency conference, there has never been a year without big tech funding at the highest level. Google is three out of three years. Microsoft is two out of three years. And Facebook is two out of three years.

FAccT has a statement regarding sponsorship and financial support where they say that you have to disclose this funding. But it’s unclear how disclosure alone helps combat direct and indirect industrial pressures. A reaction that I often get is basically that those who are involved are very careful to disclose the potential conflicts of interest. But that is not a critical understanding of how the conflict of interest actually works. Disclosing a conflict of interest is not a solution. It’s just simply highlighting the fact that a problem exists.

In the public health sphere, researchers push that resources should be devoted to the problems associated with sequestration, that is, the elimination of relationships between commercial industry and professionals in all cases where it’s remotely feasible. A policy like that recognizes that simply disclosing is not actually debiasing yourself.

Lucas Perry: That’s right. So you said that succinctly, disclosing is not debiasing. Another way of saying that is it’s basically just saying, “Hey, my incentives are misaligned here.” Full-stop.

Mohamed Abdalla: Exactly.

Lucas Perry: And then like okay, everyone knows that now, and that’s better than not. But your incentives are still misaligned towards the general good.

Mohamed Abdalla: Yeah, exactly. And it’s unclear why we think that AI ethicists are different from other forms of ethicists in their incorruptibility. Or maybe there’s a belief that we would be able to post-hoc adjust for the biases of researchers, but that’s simply unfounded by research. So yeah, one of the ways that they influence events is simply by funding the events and funding the people organizing these events.

But there’s also examples where some companies in big tech knowingly are manipulating the events. And I’ll quote here from my paper. “As part of a campaign by Google executives to shift the antitrust conversation, Google sponsored and planned a conference to influence policy makers going so far as to invite a token Google critic capable of giving some semblance of balance.” So it’s clear that these executives know what they’re doing, and they know that by influencing events, they will influence policy, which will influence legislation, and in turn litigation. So it’s clear from the leaks that are happening that this is not simply needless worrying, but this is an active goal of industry in order to maximize their profit.

There is some work on big tobacco that has not been done for big tech. I don’t think it can be done for big tech, and I’ll speak about why. But basically, when it comes to influencing events, there is research showing that events sponsored by big tobacco, such as symposiums or workshops about secondhand smoking, are not only skewed, but also poorer quality compared to events not sponsored by big tobacco. So when it comes to big tobacco research, if the event is sponsored by big tobacco, whether the influence is subconscious or conscious, the causation might not be perfectly clear, but the results are. And they show that if you’re funded by big tobacco, you’re more skewed and poorer quality in terms of research about the effects of secondhand smoking or the effects of smoking.

We can’t do this sort of research in big tech because there isn’t an event that isn’t sponsored by big tech. So we can’t even do this sort of research. And that should be worrying. If we know in other fields that sponsorship leads to lower quality of work, why are we not trying to have them divest from funding events directly anyway?

Lucas Perry: All right. Yeah. So this very clearly links up with the first example you gave at the beginning of our conversation about imagine having a health care conference and all of the main investors are big cigarette companies. Wouldn’t we have a problem with that?

So these large industries, which are having detrimental effects on society and civilization, first have the incentive to portray a public image of social responsibility without actually being socially responsible. And then they also have an incentive to influence events and decisions made by funded universities so as to, one, make the incentives of those universities and events aligned with their own, because their funding is dependent on these industries. And two, to therefore constrain what can and may be said at these conferences in order to protect that funding. So the next one here is: how is it that big tobacco and big tech influence the research questions and plans of individual scientists?

Mohamed Abdalla: So in the case of big tobacco, we know, especially from leaked documents, that they actively sought to fund research that placed the blame for lung cancer on anything other than smoking. So there’s the classic example of the claim that owning a bird increases your chance of getting lung cancer. And that’s the reason why you got lung cancer, instead of smoking.

Lucas Perry: Yeah, I thought that was hilarious. They were like, “Maybe it’s pets. I think it’s pets that are creating cancer.”

Mohamed Abdalla: Yeah exactly. And when they choose to fund this research question, not only do they get the positive PR. But they also get the ability to say that this is not conclusive. “Because look, here are these academics in your universities that think that it might not be smoking causing the cancer. So let’s hold off on litigation until this type of research is done.” There was also a steering of funds: instead of exploring the effects of tobacco on lung cancer, researchers would study just the basic science of cancer instead. And this would limit the amount of negative PR that they get. So that’s one reason for doing it. But number two is that it allows them to sow doubt and say that there’s confusion, or that we haven’t arrived at some sort of consensus.

So that’s one of the ways that they did it: finding researchers who they termed critics or skeptics, and funding them and amplifying their voices. And they had a specific pot of money set aside for select people, especially if they were smokers. They actively sought out people who were smokers because they felt that they’d be more sympathetic to these companies. They would purposely steer funds towards these people, and that would change the research sphere.

There are also very egregious actions that they took. So for example, Professor Stanton Glantz. He’s at UCSF I think, the University of California, San Francisco. They would take out ads against him in the newspapers, full of lies, pointing out supposed flaws in his studies. And these flaws aren’t really flaws, it’s just a twisting of the truth. It’s basically: if you go against us, we’re going to attack you. We’re going to make it very hard for you to get further funding. You’re going to have a lot of bad PR. It’s just disincentivizing anyone else from doing critical research against them.

They would work with elected politicians as well to block funding of scientists of opposing viewpoints. So it’s not like they didn’t have their fingers in government as well. During litigation, an email was uncovered saying that the HHS, the U.S. Department of Health and Human Services, appropriations continuing resolution would include language to prohibit funding for Glantz, that is Stanton Glantz, the same scientist. So it’s clear that through intimidation, but also through acting as a funding body, they’re able to change what researchers work on.

Big tech works in essentially the same way. The first thing to note here is that when it comes to ethical AI or AI ethics, big tech in general has a very specific conception of what it means for an algorithm to be ethical, inspired perhaps by their insular culture, where it’s a very echo-y place and everyone sort of agrees with each other, there’s an agreed upon culture. There is previous work, “Owning Ethics,” that discusses how there are three main factors in defining AI ethics, three values of Silicon Valley: meritocracy, trust in the market, and I forgot the last one. And basically, their definition is simply different from that which the rest of the world, or the rest of the country, generally has.

So Silicon Valley has a very specific view of what AI ethics is or should be. And that is not necessarily shared by everyone outside of Silicon Valley. That is not necessarily in itself a bad thing. But when they act as a pseudo granting body, that is, they provide grants or money for researchers, it becomes an issue. Because if, for example, you are a professor, one of your biggest roles is simply to bring in money to do research. And if your research question does not agree with the underlying logics of Silicon Valley’s funding bodies, whoever makes these decisions.

Lucas Perry: Like if you question the assumption about trust in the market?

Mohamed Abdalla: Yeah, exactly. Or you question meritocracy, and whether that’s how we should be basing our societal values. Even if the people granting the money are not lawyers like they were in big tobacco, even if they are research scientists, the fact that they’re at a high enough level to be choosing who gets money likely means that they’ve been there for a while. And there’s an increased chance that they personally believe or agree with the views that their companies hold. Not necessarily always the case, but the probability is a lot higher.

So if you believe in what your company believes in, and there is a researcher who is working from a totally different set of foundations, whose assumptions do not match your assumptions, it’s human nature that you’re less likely to believe in this research. You’re less likely to believe that it’s successful or on the true path. So you’re less likely to fund it. And that requires no malicious intent on your side. It’s just that you are part of an industry that has a specific view. And if a researcher does not share this view, you’re not going to give them that money.

And you switch it over to the researcher side. If I want to get tenure, I’m not a professor. So I can’t even get tenure. But if I want to get hired, I have to show that I can bring in money. If I’m a professor and I want to get tenure, I have to show that I can bring in even more money. And if I see that these companies are giving away vast sums of money, it is in my best interest to ask a research question that will be funded by them. Because what good am I if I get no money and I don’t get hired? Or what good am I if I get no money and I don’t get tenure?

So what will end up happening there is that it’s a cyclical thing, where researchers see that these companies fund specific types of research, or researchers based in fundamental assumptions that they may not necessarily agree with. And in order to maximize their opportunities to get this money, they will change their research question, whether it’s a complete or slight adjustment, or changing the assumptions to match what will get them the money. And the cycle will just keep happening until there’s no difference.

Lucas Perry: So there’s less opportunity for researchers and institutions that fundamentally disagree with some axiom that these industries hold over ethics and accountability, or whatever else?

Mohamed Abdalla: Exactly. 100%. And the important thing to note here is that for this to happen, no one needs to be acting maliciously. The people in big tech probably believe in what they’re pushing for. At least I like to make this assumption. And I think it makes for the easiest sell, especially for those within the computer science department, because there’s a lot of pushback to this type of thought. Even if the people deciding who gets the money are completely disinterested researchers who have very agreeable goals, and they love society, and they want the general good, the fact that they are in a position to be deciding who gets the money means that they’re likely higher up in these companies. And you don’t get to be higher up and stay at these companies long enough unless you agree with the viewpoint.

Lucas Perry: And that viewpoint though, in a market has to be in some sense, aligned with the impersonal global corporate objective of maximizing the bottom line. There’s this values filtration process internally in a company where maybe you’ll have all the people who are against Project Maven, but none of them are high enough. Right?

Mohamed Abdalla: Exactly.

Lucas Perry: You need to sift those people out for the higher positions, because the higher positions are the ones which have to be aligned with the bottom line, with maximizing profits for shareholders. Those people could authentically think that maximizing the profit of some big industrial company is a good thing, because they really trust in the market and how it serves people.

Mohamed Abdalla: I think there are people that actually believe this. So I know you say it kind of disbelievingly, but I think that people actually believe this.

Lucas Perry: Yeah, people really do believe this. I don’t actually think about this stuff a lot. But yeah. I mean, it makes sense to me. We buy all your stuff. So you’re serving me, I’m making a transaction with you. But this fact about this values sifting towards the top to be aligned with the profit maximization, those are the values that will remain for then deciding the funding of researchers at institutions. So no one has to be evil in the process. You just have to be following the impersonal incentives of a global capitalist industry.

Mohamed Abdalla: Yeah. I do not aim to shame anybody involved from either side. Certain executives I shame and certain attorneys I shame. But I work under the assumption that all computer science, AI, ethicists, researchers, whatever you want to call them are well-intentioned. And the way the system is set up is that even well-intentioned researchers can have negative impact on the research being done, and can have a limiting impact on the types of questions being considered.

And I hope by now you agree, at least theoretically, that by acting as a pseudo granting body, there’s a chance for this influence to occur. But then in my work, what I did was actually count how many people were looking to big tech as a pseudo granting body. So I looked at the CVs of all computer science faculty at four schools: the University of Toronto, the Massachusetts Institute of Technology, Stanford, and Berkeley. Two private schools, two public schools; two East Coast universities, two West Coast. And for each CV that I could find, I looked to answer a certain number of questions. Whether or not a specific faculty member works on AI. Whether or not they work on the ethics of AI, which I very loosely defined as having at least one paper about any sort of societal impact of AI. Whether or not they have ever received faculty funding from big tech, that is, grants or awards from companies. Whether they have received graduate funding from big tech, that is, whether any portion of that faculty member’s graduate education was funded by big tech. And whether or not they are or were employed by big tech. So, whether at any time they have had any sort of previous or current financial relationship with big tech.

What the research shows is that of all computer science faculty, 52% of them, so at least half, view big tech as a funding body. That means, as a professor, they have received a grant or an award to do research from big tech. And universities technically are here not to maximize profit for these companies, but to do science. And in theory, public good kind of things. At least half of the researchers are looking to these companies as granting bodies.

If you narrow that down to computer science faculty that work in AI, that percentage goes up to 58%. If you limit it to computer science faculty who work in the ethics of AI or who are AI ethicists, it remains at 58%. Which means that 58% of the people looking to answer the really hard questions about AI and society, whether it’s short-term or long-term, view these companies as a funding body. Which in turn, as we discussed, opens them up to influence whether it’s subconscious or conscious.

Lucas Perry: So then if you’re Mohamed Abdalla and you come out with a paper like you came out with, is the thought here that it’s very much less likely than in the future you will receive grants from big tech?

Mohamed Abdalla: So it’s unclear. There’s a meta game to play here as well. A classic example here is Michael Moore. The filmmaker, political activist. I’m not sure the title you want to give him. But a lot of his films are funded by Fox or some subsidiary of Fox.

Lucas Perry: Yeah. But they’re all leftist views.

Mohamed Abdalla: Exactly. So as in the example that I gave previously, where Google would invite a token critic to their conferences to give it some semblance of balance, simply disagreeing with them will not exclude you from their funding. It’s just that they will likely limit the number of people who are publicly disagreeing with them by choosing who to fund. Again, it seems too self-serving to say, “I’m a martyr. I’ve sacrificed myself.” I don’t view that as the case, although I did get some feedback saying, “Maybe you shouldn’t push this now until you get a job,” kind of thing. But what I’m pushing is that it shouldn’t be researchers deciding who they get money from. This is a higher level issue.

If you go into a pure hypothetical, and for the listeners, this is not what I actually believe: let us consider big tech to be evil, right? And publicly minded researchers who refuse to take money from big tech as good. If all of the good researchers, and again, good here is not being used in the prescriptive sense but just in our hypothetical, refuse to take money from these evil corporations, then what you’re going to end up with is that these researchers will not get jobs, will not get promoted. Their viewpoints will die out. But also, the people who are not good will have no problem taking this money. And they will be less likely to challenge these evil corporations. So from a game theoretic perspective, if you go from a pure utility perspective, it makes sense for you as a good researcher to take this bad money.

So that’s why I state in the paper whatever our fix to this is, can’t be done at the individual researcher level. You have to assume that all researchers are good, but you have to come up with a system level solution. Whether that’s legislation from governments, whether that’s a funding body solution, or a collection of institutions that come up with an institutional policy that applies to all of these top schools or all computer science departments all over the world. Whoever we can get to agree together. So that’s basically what I’m pushing for. But there’s also ways that you can influence research questions without directly funding them. And the way that you do this is by repeated exposure to your ideas of ethics or your ideas of what is fair, what is not fair. However you want to phrase it.

I got a lot of puzzled looks when I told people that I also looked at whether or not a professor was funded during graduate school by these companies. And there is some rightful questioning there. Because am I saying, or am I assuming, that the fact that they got a scholarship from, let’s say, Microsoft during their PhD is going to impact their research question 20 years down the line when they’re a professor? I do not think that’s actually how this works. But the reason for asking this was to show how often or how much exposure these faculty members were given to big tech’s values, or Silicon Valley values, however you want to say it.

Even if they’re not actively going out of their way to give money to researchers to affect their research questions, if every single person who becomes a faculty member at all of these prestigious schools has at one point done some term, whether that’s a four month internship, or a one-year stint, or a multiple year stint, in big tech in Silicon Valley, it’s only human to worry that repeated exposure to such views will impact whatever views you end up developing yourself. Especially if you’re not going into this environment trying to critically examine their views. You’re just likely to adopt them internally, subconsciously, before you have to think about it.

And what we show here is that 84% of all computer science faculty have had some sort of financial connection with big tech, whether that’s receiving funding as a graduate student or a faculty member, or having been previously employed.

Lucas Perry: We know what the incentives of these industries are all about. So why would they even be interested in funding the graduate work of someone, if it wasn’t going to groom them in some sense? Are there tax reasons?

Mohamed Abdalla: I’m not 100% sure how it works in the United States. It exists in Canada, but I’m not sure if it does in the U.S.

Lucas Perry: Okay.

Mohamed Abdalla: So there’s multiple reasons for doing this. There is of course as usual, the PR aspects of it. We are helping students pay off their student loans in the states I guess. There’s also if you fund someone’s graduate funding, you’re building connections, making them easier to hire possibly.

Lucas Perry: Oh yeah. You win the talent war.

Mohamed Abdalla: Yeah, exactly. Yeah. If you win a Microsoft Fellowship, I think you also get an internship at Microsoft, which makes you more likely to work for Microsoft. So it’s also a semi-hiring thing. There are a lot of reasons for them to do this. And I can’t say that influence is the only reason. But the fact is that if you limit it to CS faculty who work in AI ethics, 97% of them have had some sort of financial connection to big tech. 97% of them have had exposure to the dominant views of ethics of Silicon Valley. What percentage of this 97 is going to subconsciously accept these views, or adopt these views, because they haven’t been presented with another view, or they haven’t been presented with the opportunity to consider another critical view that disagrees with their fundamental assumptions? It’s not to say that it’s impossible, it’s just to ask: should they be having such a large influence?

Lucas Perry: So we’ve spent a lot of time here then on this third point on influencing the research questions and plans of individual scientists. It seems largely again that by giving money, you can help align their incentives with your own. You can help direct what kind of research questions you care about. You can help give the impression of social responsibility. When actually, you’re constraining and funneling research interest and research activity into places which are beneficial to you. You’re also, I think you’re arguing here exposing researchers to your values and your community.

Mohamed Abdalla: Yeah. Not everyone’s view of ethics is fully formed when they get a lot of exposure to big tech. And this is worrying, because if you’re a blank slate, you’re much more easily written upon. So they’re more likely to impart their views on you. And if 97% of the people are exposed, it’s only safe to assume that some percentage will absorb this. And then that will artificially inflate the number of people that agree with big tech’s viewpoints, and therefore further push academia, or the academic conversation, into alignment with something they find favorable.

Lucas Perry: All right. So the last point here then is how big tobacco and big tech discover receptive academics who can be leveraged. So this is the point about finding someone who may be a skeptic or critic in a community of some widely held scientific view, and then funding and propping them up so that they introduce some level of fake or constructed, and artificial doubt and skepticism and obfuscation of the issue. So would you like to unpack how big tobacco and big tech have done this?

Mohamed Abdalla: When it comes to big tobacco, we did cover this a tiny bit before. For example, when we talked about how they would fund research that questions whether it’s actually keeping birds as pets that caused lung cancer. And so long as this research is still being funded and has not been published in a journal, they can, honestly speaking, say that it is not yet conclusive. There is research being done, and there are other possible causes being studied.

Despite having internal research showing that this is not true. If you go from a pure logic standpoint, where “is it conclusive” is defined as “there exists no oppositional research,” they’ve satisfied the conditions such that it is not conclusive, and there is fake doubt.

Lucas Perry: Yeah. You’re making logically accurate statements, but they’re epistemically dishonest.

Mohamed Abdalla: Yeah. And that’s basically what they do when they leverage these academics to sow doubt. But they knew, especially in Europe a little after this, that there was a lot of concern being raised regarding the funding of academics by big tobacco. So they would purposefully search for European scientists who had no previous connection to them, who they could leverage to testify. And this was part of a larger project that they called the White Coat Project, which resulted in infiltrations of governing bodies, heads of academia, and editorial boards to help with litigation and legislation. And that’s actually why I named my paper The Grey Hoodie Project. It’s an homage to the White Coat Project. But since computer scientists don’t actually wear white coats, we’re more likely to wear gray hoodies. That’s where the name of the paper comes from. So that’s how big tobacco did it.

When it comes to big tech, we have clear evidence that they have done the same, although it’s not clear the scope at which they have done this, because there haven’t been enough leaks yet. This is not something that’s usually publicly facing. But Eric Schmidt, previously the CEO of Google, was, and I quote from an Intercept article, “advised on which academic AI ethicists his private foundation should fund.” I think Eric Schmidt has very particular views regarding the place of big tech and its impact on society that likely would not be agreed with by the majority of AI ethicists. However, if they find an ethicist that they agree with, and they amplify him and give him hundreds of thousands of dollars a year, Schmidt is basically pushing his viewpoint on the rest of the community by way of funding.

In another example, Eric Schmidt again asked that a certain professor be funded, and this professor later served as an expert consultant to the Pentagon’s innovation board. And Eric Schmidt is now in some military advisement role in the U.S. government. That’s a clear example of how those from big tech are looking to leverage receptive academics. We don’t have a lot of examples from other companies. But given that it is happening and this one got leaked, do we have to wait until other ones get leaked to worry about this?

An interesting example that I personally view as quite weak. I don’t like this example, but the irony will show in a little while. There is a professor at George Mason University who had written academic research that was funded indirectly by Google. And his research criticized the antitrust scrutiny of Google shortly before he joined the FTC, the Federal Trade Commission. And after he joined the FTC, they dropped their antitrust suit. They’ve picked it up now again. But this claim basically draws into question, one, whether Google funded him because of his criticism of antitrust against Google. That is a possible reason they chose to fund him. There’s another unstated question here in this example, which is: did he choose to criticize antitrust scrutiny of Google because they fund him? So which direction does this flow? It’s possible that neither direction flows. But when he joined the FTC, did they drop their case because essentially they hired a compromised academic?

I do not believe, and have no proof of, any of this. But Google's response to this insinuation was that the exposé was pushed by the Campaign for Accountability, and that the evidence should not be accepted because this nonprofit, the Campaign for Accountability, is largely funded by Oracle, which is another tech company.

So if you abstract this away, what Google is saying is that claims regarding societal impacts, or legislation, or anything to do with AI ethics should be treated with skepticism if the researcher making them is funded by a big tech company. They're essentially saying that you should not trust this because it's largely backed by Oracle. You abstract that away: it's largely backed by big tech. Does that not apply to everything that Google does, or everything that big tech in general does? So it is clear that they themselves know that industry money has a corrupting influence on the type of research being done. And that just reinforces the point of my entire piece.

Lucas Perry: Yeah. I mean in some sense, none of this is mysterious. They couldn't not be doing this. We know what industry wants and does, and they're full of smart people. So if someone from industry who is participating in this were listening to this conversation, they would be like, "You've woken up to the obvious. Good job." And that's not to downplay the insight of your work. It also makes me think of lobbying.

Mohamed Abdalla: 100%.

Lucas Perry: We could figure out all of the machinations of lobbying and it would be like, “Well yeah, they couldn’t not be doing this, given their incentives.”

Mohamed Abdalla: So I fully agree. If you come into this knowing all of the incentives, what they’re doing is the logical move. I fully agree that this is obvious, right?

Lucas Perry: I don’t think it’s obvious. I think it naturally follows from first principles, but I feel like I learned a lot from your paper. Not everyone knows this. I would say not even many people know this.

Mohamed Abdalla: I guess "obvious" wasn't the correct word. But I was going to say that the points I raise show there's a clear concern here, and I think that once people hear the points, they're more likely to believe this. But there are people in academia who push back. A common criticism I get is that people know who pays them. So they say it's unfair to assume that someone funded by a company cannot be critical of that company or big tech in general, and that several researchers who work at these companies are critical of their employer's technology. The point of my work is to lay this out flat: it doesn't matter if people know who pays them, the academic literature shows that this has a negative effect, and therefore disclosure isn't enough. I don't want to name the person who made this criticism, but they're pretty high up. The idea that a conflict of interest is okay simply because it's disclosed seems to be a uniquely computer science phenomenon.

Lucas Perry: Yeah. It's a weird claim to be able to say, "I'm so smart and powerful, and I have a PhD, so if you give me dirty money, or money that carries certain incentives with it, I'm just free of those influences."

Mohamed Abdalla: Yeah. Or the incorrectly perceived ability to self-correct for these biases. That's the current I'm trying to fight against, because the mainstream current in academia is sort of, "Yeah, but we know who pays us, so we're going to adjust for it." And although the conclusions I draw are intuitive, I think, the intuition that people have regarding big tech is basically the big tobacco intuition: everyone has a negative gut feeling. So it's very easy for them to agree. It's a little bit more difficult to convince them that even if you believe big tech is a force for good, you should still be worried.

Lucas Perry: I also think that a better word here than "obvious" is "self-evident once it's been explained." It's not obvious, because if it were obvious, then you wouldn't have needed to write this paper, and I and everyone else would already have known about this. So if you were to wrap up and summarize in a few bullet points this last point on discovering receptive academics and leveraging them, how would you do that?

Mohamed Abdalla: I kind of summarize this for policymakers. When policymakers try to make policy, they tend to converse with three main parties: industry, academics, and the public. They believe that getting this wide range of viewpoints will help them arrive at the best compromise to help society move in the way that it should. However, given the very mindful way that big tech is trying to leverage academics, a policymaker will talk to industry, then talk to the very specific researchers who are handpicked by industry and therefore basically in agreement with industry, and then talk to the public. So two thirds of the voices they hear are industry-aligned voices, as opposed to previously one third. And that's something that I cover in the paper.

And that's the reason why you want to leverage receptive academics: it shapes the majority of whatever a policymaker hears. Policymakers are really busy people, and they don't have the time to do the research themselves. If two out of every three people are pushing policy or views that align with big tech's profit motive, then the policymaker is more likely to believe that viewpoint. Whereas with an independent academia, if the right decision is to agree with big tech, you assume they would agree; if the right decision is to disagree, you assume they would disagree. But if industry leverages the academics, this is less likely to happen. Therefore, academia is not playing its proper role when it comes to policy-making.

Lucas Perry: All right. So I think this pretty clearly lays out how industries in general arrive at these strategies, whether it be big tobacco, big tech, oil companies, greenhouse gas emitting energy companies, you even brought up the food industry, anyone really who has the bottom line as their incentive. These strategies are just naturally born of the impersonal incentive structure of a corporation or industry.

This next question is maybe a bit more optimistic. All these organizations are made up of people, and these people are all I guess more or less good or more or less altruistic. And you expect that if we don’t go extinct, these industries always get caught, right? Big tobacco got caught, oil industries are in the midst of getting caught. And next we have big tech. And I mean, the dynamics are also a little bit different because cigarettes and oil can be booted. But we’re kind of married to the technology of big tech forever. Literally.

Mohamed Abdalla: I would agree with that.

Lucas Perry: Yeah. So the strategy for those two seems to be to obfuscate the issue for as long as possible so your industry exists as long as possible, and then you will die. There is no socially responsible version of your industry. That's not going to happen with big tech. I mean, technology is here to stay. So does big tech have any actual incentives for genuine social responsibility, or are they just playing the optimal game from their end, where you obfuscate for as long as possible and bias all of the events and the researchers as much as possible? Eventually, there'll be enough podcasts like this and minds changed that they can't do that any longer without incurring a large social cost in public opinion, and perhaps in the market. So is it always simply the case that promoting the facade of being socially responsible is cheaper and better than actually becoming socially responsible?

Mohamed Abdalla: There's a thing I have to say, because the people I worked with who still work in health policy regarding tobacco would be hurt if I didn't say it. Big tobacco is still heavily investing in academia, and they're still heavily pushing research and certain viewpoints. And although the general perception has shifted regarding big tobacco, they're not done yet. So although I do agree with your conclusion that it is only a matter of time until they're done, to think that the fight is over is simply not true yet. There are still a lot of health policy folks pushing as hard as they can to completely get rid of them. Even within the United States and Europe, the tobacco companies create new institutions that do other research. They've become maybe a little bit more subtle about it. Declaring victory describes what will eventually happen, but it has not yet happened. So there's still work to be done.

Regarding whether or not big tech has an actual incentive to do good, I like to assume the best of people. I assume that Mark Zuckerberg actually founded Facebook because he actually cared about connecting people. I believe that in his heart of hearts, he does have at least generally speaking, a positive goal for society. He doesn’t want to necessarily do bad or be wrecking democracies across the world. So I don’t think that’s his goal, right?

So I think that starting from that viewpoint is helpful, because for one, it will make you heard. But also, it shows how this is a largely systemic issue. Because despite his well-intentioned goals, which we're assuming exist, and which I actually do believe are at some level true, the incentives of the system in which he operates add a caveat to everything he says that we aren't putting there.

So for example, when Facebook says they care about social responsibility or that they will take steps to minimize the amount of fake news, whatever that means, all of the statements made by any company in any industry, because of the fact that we're in a capitalist system, carry an implicit condition: they hold only insofar as they do not hamper profits, right? So when Facebook wants to deal with fake news, they turn to automated AI algorithms. And they say we're doing this because it's impossible to moderate the number of stories that we get.

From a strictly numeric perspective, this is true. But what they're not saying is that it is not possible for them to use humans to moderate all of these stories while staying profitable. That is to say, the starting point of their action may be positive, but the fact that it has to be warped to fit the profit motive ends up largely negating, if not completely negating, the effects of the actions they take.

So for example, take Facebook's content moderation in the continent of Africa. They used to have none, and until recently they had only one content moderation center in the entire continent. Given the number of languages spoken in that continent alone, how many people do you have to hire for that one continent's moderation? How many people per language are you hiring? Sophie Zhang's resignation letter basically showed that Facebook was aware of all of these issues and had employees, especially at the lower levels, who were passionate about the social good. So it's clear that they are trying to do social good. But the fact that everything is conditioned on whether or not it will result in money hurts the end result of their actions. So I believe, and I agree with you, that this industry is different, and I do believe that they have an incentive for the social good. But unless this incentive is forced upon everyone else, they are hurting themselves if they refuse to take profit that they could take, if that makes sense.

But if you choose not to do something because refraining is the socially good choice, even though doing it would be profitable, some other company is going to do that thing. And they will take the profits and take your market share, until you can find a way to account for social good in the stock price.

Lucas Perry: People value you more now that you are being good.

Mohamed Abdalla: Yeah. But I don't think we're at a stage where that's possible, or where it's even well-defined what that means. So I agree: even if this research is well-intentioned, the road to hell is paved with good intentions.

Lucas Perry: Yeah. Good intentions lead to bad incentives.

Mohamed Abdalla: Or the good intentions are required to be forced through the lens of the bad incentives. They have to be aligned with the bad incentives to actually manifest. Otherwise, they will always get blocked.

Lucas Perry: Yeah. By that you mean the things which are good for society must be aligned with the bad incentives of maximizing profit share, or they will not manifest.

Mohamed Abdalla: Exactly. And that's the issue when it comes to funding academia. Because it is possible to change society's viewpoint on, one, what is possible, and two, what is preferable, to match the profit incentives of these companies. Consider the open questions: what is ethical AI, and what does it cover? What sort of legislation is feasible? What sort of legislation is desirable? In what contexts does it apply or not apply? In what jurisdiction? And so on and so forth. These are all still open questions, and it is in the interest of these companies to help mold the answers so that they have to change as little as possible.

Lucas Perry: So when we have benefits that are not accruing from industry, or negative externalities from the incentives of industry leading to detrimental outcomes for society, the tool we have for remedying that is regulation. And I would guess that libertarian attitudes, which in this sense I would summarize as socially liberal or left leaning but against regulation, so valuing the free market, are more common at big tech companies than in the general population. So there's this value resistance. We talked about how the people at the top are going to be sifted through. You're not going to have people at the top of big tech companies who really love regulation, or think that regulation is really good for making a beautiful world, because regulation is always hampering the bottom line.

Yet it's the tool we have for trying to mitigate negative externalities and negative outcomes from industry maximizing its bottom line. So what do you suggest we do? Is it just that we need good regulation, some meaningful regulatory system and effective policy? Because otherwise, nothing will happen. They'll just keep following their incentives, and they have so much power, so they'll just keep doing what they do. And the only way to break that is regulation.

Mohamed Abdalla: So I agree. The solution is basically regulation. The question is, how do we go about getting there? Or what specific rules do we want to use, or laws do we want to create? And I don't actually answer any of this in my work. I answer a question that comes before the legislation or the regulation, which is basically this: I propose that AI ethics should be a different department from computer science. In the same way that bioethics is no longer in the same department as biology or medicine, AI ethics should be its own separate department. And in that way, anyone working in that department would not be allowed to have any sort of relationship with these companies.

Lucas Perry: You call that sequestration.

Mohamed Abdalla: It’s not my own term. But yeah, that’s what it’s called.

Lucas Perry: Yeah. Okay. So this is where you’re just removing all of the incentives. Whether you’re declaring conflict of interest or not, you’re just removing the conflict of interest.

Mohamed Abdalla: Yes. Putting myself on the spot here, it’s very difficult to assume that I myself have not been corrupted by repeated exposure. As much as I try to view myself as a critical thinker, the research shows repeated exposure will influence what you think and what you believe. I’ve interned at Google for example, and they have a very large amount of internal propaganda pointed at their employees.

So I can't barge in here claiming that I am a clean slate, or, "I'm a clean person. You should listen to my policies." But I think that academia should try to create an environment where it is possible, or dare I say encouraged, to be a clean person, where clean means no financial involvement with these companies.

That said, there are a lot of steps that can be taken when it comes to regulation. Slightly unrelated, but not entirely, is fixing the tax code in the U.S. and Canada and around the world. A large reason why a lot of computer science faculty and computer scientists in general look to industry for funding is that governments have been cutting, or at least not increasing in line with the amount of research being done, the money available for research funding. And why do governments not have as much money? Probably in part because these companies are not paying their fair share in taxes, which is how a lot of researchers get their funding. That's one way of doing it. If you want to go into specifics, it's more difficult and a much harder sell for specific policies. I don't think regulation of specific technologies would be effective, because the technologies change very fast.

I think creating a governmental body whose role it is to sue these companies when they violate our social norms is probably the way to go about it. But I don't know. It's hard for me to say. It's a difficult question that I don't have an answer for. We don't even know who to ask about legislation, because every computer scientist is sort of corrupted. So do we not use computer scientists at all? Do we rely only on economists and moral philosophers to write this sort of legislation? I don't know.

Lucas Perry: So I want to talk a little bit about transformative AI, and the role that this plays in that transition. There is a sense, and this is a meme that I think needs to be combated, of a race between China and America on AI, with the end goal being AI systems that are increasingly powerful.

So there's some sense that any kind of regulation used to try to fix any of these negative externalities from these incentives is just shooting ourselves in the knee, while the evil other races to beat us.

Mohamed Abdalla: That’s the Eric Schmidt argument.

Lucas Perry: So we can’t be implementing these kinds of regulations in the face of the geopolitical and international problem of racing to ever more powerful AI systems. So you already said this is the Eric Schmidt argument. What is your reaction to this kind of argument?

Mohamed Abdalla: There are multiple possible reactions, and I don't like to state which one I believe in personally, but I'd like to walk through them. First off, let us assume that the U.S. and China are racing for artificial general intelligence, AGI. Would you not then increase government funding and nationalize this research, such that it belongs to the government and not to a multinational corporation? In the same way that if, for example, Google, Facebook, Microsoft, Alibaba, and Huawei were in a race to develop nukes, would you say, leave these companies alone so they can develop nuclear weapons, and once they develop a nuke, we'll be able to take it? Or would you not nationalize these companies? Or not nationalize them, but require that they work only for the U.S., with no interests in any other country. That is a form of legislation or regulation.

Governments would have to have a much bigger say in the type of research being done, who's doing it, and what can be done. For example, in the aerospace industry, you can't employ non-U.S. citizens. Is this what you're pushing for in artificial intelligence research? Because if not, then you're conceding that AGI is not likely to happen. But if you do believe it is likely to happen, then you should be pushing for some sort of regulation. You could argue about which regulation, but I don't find convincing the viewpoint that we should leave these companies alone to compete with the Chinese companies because they're going to create this thing that we need to beat the Chinese at. If you believe that this is going to happen, you'd still be in support of regulation. It'd just be different regulation.

Lucas Perry: I mean obviously I can't speak for Eric Schmidt. But the kinds of regulation that stop the Chinese from stealing the AGI secrets are good regulation, and anything else that slows the power of our technology is bad regulation.

Mohamed Abdalla: Yes. But consider, for example, when Donald Trump banned the H-1B visa. Or not banned; he put a limit or a pause on it. I'm not sure of the exact thing that happened.

Lucas Perry: Yes. He’s made it harder for international students to be here and to do work here.

Mohamed Abdalla: Yes, exactly. That is the type of regulation you would have if you believed AI was a threat, if we really are racing the Chinese. If you believed that, you would be for that sort of regulation, because you don't want these companies training foreign nationals in the development of this technology. Yet this is not what these companies are going for. They do not agree with legislation or regulation that limits the number of foreign workers they can bring in.

Lucas Perry: Yeah. Because they just want all the talent.

Mohamed Abdalla: Exactly. But if they believed this was a matter of national security, would they not support it? You can't make the national security argument of "Don't regulate us, because we need to develop as much as we can, as fast as we can," while also pushing against the regulation that follows from it: if this was truly dangerous, if we did truly need to leave you unregulated internally, we should limit who can work for you, in the same way that we do for rocketry. Who can work on rockets? Who can work at NASA? They have to be U.S. citizens.

Lucas Perry: Why is that contradictory?

Mohamed Abdalla: Because they're saying, "Don't regulate us in terms of what we can work on," but they're also saying, "Don't regulate us in terms of who can work for us." If what you're working on is a matter of national security and you care about national security, then by definition you want to limit who can work on it. If you say there should be no limit on who can work for you, then you are basically admitting either that this is not a matter of national security, or that you put profits over everything else. When possible legislation against Google, Facebook, or Microsoft comes up, the Eric Schmidt argument gets played: "If you legislate us, if you regulate us, you are slowing down our progress towards this technology."

But if any sort of regulation of the development of tech will slow down the arrival of AGI, which we assume the Department of Defense cares about, then if these companies are essentially striving towards AGI, should they not be protected from foreign workers infiltrating them? So this is where the companies hold two opposing viewpoints, depending on who they're talking to: don't regulate us, because we're working towards AGI and you don't want to stop us; but at the same time, don't regulate immigration, because we need these workers. But if what you were working on is sensitive, then you shouldn't even be able to take these workers.

Lucas Perry: Because it would be a national security risk.

Mohamed Abdalla: Exactly. Especially when a lot of your researchers come from another country and are likely to go back to that country, or at least have friends and conversations with people in other countries.

Lucas Perry: Or just be an agent.

Mohamed Abdalla: Yeah, exactly. So if this is actually your worry that this regulation will slow down the development of AGI, how can you at the same time be trying to hire foreign nationals?

Lucas Perry: All right. So let’s do some really rapid fire here.

Mohamed Abdalla: Okay.

Lucas Perry: Is there anything else you wanted to add to this argument about incentives and companies actually being good, as we walked through this Eric Schmidt argument?

Mohamed Abdalla: Yeah. The thing I want to highlight is that this is a system-level problem. It's not a problem with any specific company, despite some being in the news more than others. It's also not a problem with any specific researchers or institutions. This is a systemic issue. And since it's a high-level problem, the solution needs to be at a high level as well, whether at the institutional level or the national level, through some sort of legislation. It's not something that researchers individually can solve.

Lucas Perry: Okay. So let's just blast through the action items for solving this problem. You argue that everyone should post their funding information online, including historical funding information. This increases transparency on conflicts of interest. But as we discussed earlier, the conflicts actually just need to be removed. You also argue that universities should publish documents highlighting their position on big tech funding for researchers.

Mohamed Abdalla: Yeah. Basically I want them to critically consider the risks associated with accepting such funding. I don't think that's a consideration most people are taking seriously. And if they are forced to publicly establish a position, they'll have to defend it. And that, I believe, will lead to better results.

Lucas Perry: Okay. And then you argue that more discussion on the future of AI ethics and the role of industry in this space is needed. Can't argue with that. And that computer science should explore how to actively court antagonistic thinkers.

Mohamed Abdalla: Yeah. I think there’s a lot of stuff that people don’t say because it’s either not in the zeitgeist, or it’s weird, or seems an attack on a lot of researchers.

Lucas Perry: Stigmatized.

Mohamed Abdalla: Yeah, exactly. So instead of trying to find people who simply list on their CV that they care about AI ethics or AI fairness, you should find people who are willing to disagree with you. If they're able to raise points worth disagreeing with, it doesn't matter if you don't agree with their viewpoint.

Lucas Perry: Yeah. I mean usually, the people saying the most disruptive things are the most avant-garde and are sometimes bringing in the revolution that we need. You also encourage academia to consider splintering AI ethics into a different department from computer science. This would be analogous to how bioethics is separated from medicine and biology. We talked about this already as sequestration. Are there any other ways you think the field of bioethics can help inform AI ethics on academic integrity?

Mohamed Abdalla: If I'm being honest, I'm not an expert on bioethics or the history of the field. I only know it in relation to how it has dealt with the tobacco industry. But I think largely, more historical knowledge needs to be used by the people deciding what we as computer scientists do. There are a lot of lessons learned by other disciplines that we're not using, disciplines that have basically been in a mirror situation. So we should be using this knowledge. I don't have an answer, but I think there's more to learn.

Lucas Perry: Have you received any criticism from academics in response to your research, following the publication that you want to discuss or address?

Mohamed Abdalla: For this specific publication, no. But that may be because of the COVID pandemic. I have raised these points previously and received some pushback, but not for this specific piece. This piece was covered in WIRED, and there are some criticisms of it in the WIRED article, but I've addressed them in this talk.

Lucas Perry: All right. So as we wrap up here, do you have anything else that you’d like to just wrap up on? Any final thoughts for listeners?

Mohamed Abdalla: I just want to stress, if listeners have made it this far without hating me, that this work is not meant to call into question the integrity of researchers, whether they're in academia or in industry. And I think these are critical conversations to be had now. It may be too late for the initial round of AI legislation, but for future rounds, and for longer-term problems, I think it's even more important.

Lucas Perry: Yeah, there's a meme going around that one of the major problems in the world is good people running the software of bad ideas on their brains. And I think similar to that is all of the good people who are caught up in bad incentives. So this is just amplifying your non-critical, non-judgmental stance: the universality of the human condition is that we all get caught up in these systemic negative incentive structures that lead to behavior that is harmful for the whole.

So thank you so much for coming on. I really learned a lot in this conversation, and I really appreciate that you wrote this article. I think it's important, and I'm glad that we are having this thinking early, so we can hopefully do something to make the transformation of big tech into something more positive happen faster than it has historically with other industries. So if people want to follow you, look into more of your work, or get in contact with you, where are the best places to do that?

Mohamed Abdalla: I'm not on any social media, so email is the best way to contact me. It's on my website. If you search my name and add the University of Toronto at the end of it, I should be near the top. It's cs.toronto.edu/msa. And that's where all my work is also posted.

Lucas Perry: All right. Thanks so much, Mohamed.

Mohamed Abdalla: Thank you so much.

Maria Arpa on the Power of Nonviolent Communication

 Topics discussed in this episode include:

  • What nonviolent communication (NVC) consists of
  • How NVC is different from normal discourse
  • How NVC is composed of observations, feelings, needs, and requests
  • NVC for systemic change
  • Foundational assumptions in NVC
  • An NVC exercise

 

Timestamps: 

0:00 Intro

2:50 What is nonviolent communication?

4:05 How is NVC different from normal discourse?

18:40 NVC’s four components: observations, feelings, needs, and requests

34:50 NVC for systemic change

54:20 The foundational assumptions of NVC

58:00 An exercise in NVC

 

Citation:

The Center for Nonviolent Communication’s website 

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Maria Arpa on nonviolent communication, which will be referred to as NVC for short throughout the episode. This podcast continues to explore the theme of wisdom in relation to the growing power of our technology and our efforts to mitigate existential risk, which was covered in our last episode with Stephen Batchelor. Maria and I discuss what nonviolent communication consists of, its four components of observations, feelings, needs, and requests, we discuss the efficacy of NVC, its core assumptions, Maria’s experience using NVC in the British prison system, and we also do an on the spot NVC exercise towards the end of the episode.   

I find nonviolent communication to be a powerful upgrade in relating, resolving conflict, and addressing the needs and grievances we find in ourselves and others. It cuts through many of the bugs of normal human discourse around conflict and makes communication far more wholesome and collaborative than it otherwise might be. I honestly view it as quite a powerful and essential skill or way of being that has had quite a transformative impact on my own life, and may very well do the same for others. It's a paradigm shift in communication and human relating that I would argue is an essential part of the project of waking up and growing up.

Maria joined the Center for Nonviolent Communication as the Executive Director in November 2019. She was introduced to NVC by UK Trainer Daren de Witt, who invited her to see Marshall Rosenberg, the founder of NVC, speak in London, a moment that changed her life. She was inspired to attend one of the Special Sessions on Social Change in Switzerland in 2005 and then invited Marshall to Malta where she organised a conference between concentration camp survivors and the multi-national corporation that had bought the site formerly used as a place of torture. Since then she has worked with marginalised, hard to reach individuals and communities and taken her work into prisons, schools, neighbourhoods and workplaces.

And with that, let’s get into our conversation with Maria Arpa. 

Lucas Perry: I really appreciate you coming on, and I’m excited to learn more about NVC. A lot of people in my community, myself included, find it to be quite a powerful paradigm, so I feel excited and grateful that you’re here. So I think we can kick things off here with just a pretty simple question. What is nonviolent communication?

Maria Arpa: Thank you. Yes, and it’s really good to be here. So nonviolent communication is a way or an approach or a system for putting words to living a nonviolent life. So if you’ve chosen nonviolence as a philosophy or as a way of life, then what Marshall Rosenberg proposed in creating nonviolent communication in the 1960s is a way of communicating interpersonally based on the idea that we need to connect as human beings first.

So NVC, which is short for nonviolent communication, is both a spiritual practice and a set of concrete skills. And it’s based on the idea of listening deeply to ourself and to others, in order to establish what are the real needs that we’re trying to meet? And from the understanding of the needs that come through empathic listening, then we can begin to build strategies.

Lucas Perry: Alright, so can you juxtapose what NVC does, compared to what usually happens in normal discourse and conflict resolution between people?

Maria Arpa: Yeah, lovely. Thank you. I like that question. In society, we have been brought up, and I would go so far as to say indoctrinated, with taking adversarial positions. And we do that because we think about things like the legal profession, academia, science, the military, and all of those disciplines, which are highly prized in society, but they all use a debate model of discourse.

And that’s wonderful if you want to prove a theory or expand the knowledge between people, but if we’re actually trying to just build a relationship in order that we can coexist, do things, envision, create something new in society, debate just doesn’t work. So at the very micro level, in the family system, what I experienced with couples and families is a sort of table tennis match. And while you’re speaking, I am preparing my counter argument. I would say that we’ve been indoctrinated with a debate model of conversation that even extends into our entertainment, because in my understanding, if you go to scriptwriting school, they will tell you that to make a Hollywood blockbuster, you need to leave a conflict in every scene.

Maria Arpa: So that has become the way in which we communicate with each other, which is played out in our legal systems, in the justice system, it’s played out in education. When I position that against nonviolent communication, nonviolent communication says we need to build a relationship first. What is the relationship between us? So if we even take this podcast, Lucas, you and I had built a relationship. We didn’t just make an appointment, come on, and do this, we got to know each other a bit. Is that a helpful answer?

Lucas Perry: That’s a good question. Yeah, I think that is a helpful answer. I’m also curious if you could expand a bit more on what the actual communicative features of this debate style of conversation focus on. So where NVC focuses on talking about needs and feelings, feelings being evidence of needs, and also making observations, observations as opposed to judgments, what is the structure and content of the normal debate-style or adversarial conversations that we’re having, the kind of conversation that is not about needs, feelings, and observations?

To me, it often seems like it deals with much more constructed and synthetic concepts. In NVC you boil concepts down to things which are very simple and basic and core, whereas the adversarial relationship deals with more complex constructed concepts like respect and abandonment, and is less simple in a sense. So do you have anything else you’d add here on what actually constitutes the adversarial kind of conversation?

Maria Arpa: Yeah, definitely. So in an adversarial conversation now we have two levels. And the first level, because I’m really thinking about how we’ve been programmed, in the first level, we’re battling backwards and forwards and what we’re out to do is win the argument. So what I want is for my argument to prevail over yours. And in families, it’s generally who’s the worst off person, who’s the most tired, who’s the most unresourced, who does most of the work, who’s earning most of the money, that level. It’s a competition. It’s a competitive conversation.

At the worst end of the spectrum, and when we think about school and education, it’s a debate which also includes the prospect of enforcement, which is punitive. So we could simply be competing to win an argument, or we could actually be out to provide evidence of the other person’s wrongness in order to punish them, maybe not even physically punish them, but punish them with the withdrawal of our love.

Lucas Perry: Right, so if we’re not sufficiently mindful, there is this kind of default way that we have been conditioned into communicating, where I noticed a sense of strong ego identification. It’s a bit selfish. It’s not collaborative in any sense. As you said, it’s kind of like if I can make and give points which are sufficiently strong, then the person will see that I’m the most tired, I’m the most overworked, I have contributed the most, and then my needs will be satisfied. And then my needs being satisfied is more implicit, and it’s never made explicit. And so NVC is more about making the needs and feelings explicit and the core and cutting out the argument.

Maria Arpa: Yes, I really agree with that. The problem with that debate model, that adversarial model, is that I might get my needs met, I might be able to bulldoze or bully my way, or I might be able to play the victim and get my needs met, but usually I have induced the other person to respond to me out of fear, guilt, or shame, not out of love. Someone, somewhere will pay for that. It’s that whole thing, you may have won the battle, but you haven’t won the war. While at a very micro level, day by day, I can scrape by and score points and win this and get my need met, in the long term, I’m actually still feeding the insecurity that nothing can come without struggle.

Lucas Perry: That’s a really good point.

Maria Arpa: And when you say it cuts out the argument, the argument bit which is not necessary, I’m saying that we will go into the dialogue with the idea that once we’ve established the needs, then we can actually build agreements. Now, that’s not to say that I’m going to get everything I want, and you’re going to get everything you want. But there’s a beauty in the negotiation that comes from the desire to contribute.

Lucas Perry: Yeah, so I found what you said to be quite beautiful and powerful around, you may gain short term benefits from participating in what one might call violent communication, or this kind of adversarial debate format, you might get your needs met but there’s a sense in which you’re conditioning in yourself and others this toxic communicative framework, where first, you’re more deeply instantiating this duality between you and other people, which you’re reifying by participating in this kind of conversation. And that’s leading to, I think, a belief in an unwanted world, a world that you don’t want to live in, a world where my needs will only be met, if I’ll take on this unhappy, defensive, self-centered stance in communication. And so that’s a contradicted belief. If I don’t do the debate format, then I will not be safe, or I will not get my needs met or I will be unhappy. You don’t want that to be true, and so it doesn’t have to be true if you pivot into NVC.

Maria Arpa: Yes, and I’ve got two things really to say about that. One is that we are often counting the cost of something, and one of the things that we very rarely factor in is the emotional cost of getting our needs met, the emotional cost of taking on the task. And sometimes when I work with people, we find that the price is too high. I may be getting my fame and fortune or whatever it is I’m after, but actually, the emotional price to my soul is just too high.

And most of us have been taught not to imagine that as being of any importance. In fact, most of us have been taught, rather like scientific experiments, that when I’m looking at a situation, even when I’m not taking a self-centered point of view, even when I’m looking at it and wanting to be generous and benevolent, I look at a situation and I fracture myself into that situation, as if I don’t matter, or I don’t count, and the truth is everybody matters, and everybody counts.

So that’s one thing, and then when I heard you talking about this idea of adopting the adversarial approach to life and how that will feed the system, the best example I have of that is in the prison that I’ve been working in for the last four years. Prisons are pretty mean places, and so most people come into prison in the belief that they need to develop a very strong set of armor in order to defend themselves, and sometimes attack is the best form of defense, in order to survive this new world. And what I’ve been able to demonstrate with the guys that I’ve been training is that actually, if you throw away the armor, and if you actually come to be able to work out what your needs are, and be able to help other people find their needs, actually, it becomes a different place.

And the proof of that has been 25 men that I have trained to do what I do within the prison, working with other prisoners across the prison, and turning the prison into one of the safest jails in the UK.

Lucas Perry: Wow.

Maria Arpa: So the first thing that happens is I deliver training, and it’s intensive, it’s grueling, there’s a lot of work to do, and what happens to the guys as they’re doing the training. Because see, I believe that the people that cause the problems are the ones who have the answers to the problems. We should go to the people that cause the problems and say, “How do we not end up here again?” So as they do this training and they realize the potential to make their prison sentence go better, what a nice thing to be able to do, and then they realize that actually, I can’t do this for anyone else until I’ve done it for myself. So that’s part of the transformative bit where they go, “Well hang on, I’ve got so much conflict, or I’ve got so much inner conflict or dislike of myself, or whatever chaos going on inside, I can’t possibly do this for anyone else. So, right, now we can begin because now we have to begin as a team.”

And so what’s been remarkable for me is 25 men who, on the outside of prison would never ordinarily come into contact with each other, from completely different walks of life, different areas, different ages, different crimes, we’ve got everything from what in America you call homicide, through sexual offenses, to fraud, and that type of thing.

So these 25 men have been through a process to be able to work together as a team and to be able to understand each other, and they are completely blown away by the idea that they have a process for when the system they’re using comes under strain, because they get overrun with casework. What I’ve taught them is that you have to put yourselves first. So when that happens, you actually need to stop everything and come back as a team, and work out what are the petty conflicts that are arising, and use this nonviolent communication process to come back to center. So for me, very much, there’s a huge difference in being able to live this way, and in no greater place have I proved it than in a place called prison.

Lucas Perry: Yeah, exactly. I bet that’s quite liberating for them, and I bet that there’s a deeper sense of security that is unavailable when you’re taking the adversarial, aggressive stance.

Maria Arpa: There’s a deeper sense of security. There’s a huge amount of gratification for somebody who has literally been thrown away by society and told that they’re worthless and have no place, and I don’t want to get into the crime or whether they did it or didn’t do it, but they’ve been thrown away by society, to actually find that they can be in service of others, and they can actually start to love themselves. So there’s a really huge gratification in being able to do that and not do it in a self-sacrificing way, to be able to do it in a way that enriches the other person and enriches themselves. That for me is monumental.

And the second thing is that, in places like prisons and family, I often compare schools to prisons, you can be overwhelmed by the power of enforcement and the misuse of authority. So often, one person in a position of power may dish out the rules one way today in a different way tomorrow, may treat one person differently to another. And so what we’ve been able to establish is that if the guys sit in circle and invite officers to those circles, they can clear up some of the things that just create unnecessary conflict.

Now, obviously, in prison, some topics are non-negotiable, and we don’t go there, right? And if you don’t think you should be imprisoned, then you need to take the appropriate steps through your legal advisors. But if it’s a case of the laundry’s messed up every week and there are fights starting over it, if it’s a case of exactly who’s collecting the slips for the lunches, and that’s creating, or there’s an argument when people are queuing up for their food, these things can be sat down and had out and people could talk about their different experiences, and we can clear up those gray areas by the guys coming up with a policy. And they do this using needs.

Lucas Perry: All right, it’s encouraging to see it as effective and liberating in the case of the prison system. I’m interested in talking with you more about how this might apply to, for example, the effective altruism movement, or altruism in general, and also to working on really big problems in the world that involve both industry and government, which at times function and act rather impersonally. For example, the collective incentives of some corporation are to maximize some bottom line, so it’s unclear how one could NVC with something where the aggregate of all the behavior is rather impersonal.

So right before we get to that, I just want to more concretely lay out the four components of NVC practice, just so listeners have a better sense of what it actually consists of. So could you take us through how NVC consists of observations, feelings, needs, and requests?

Maria Arpa: Yeah, I’d love to, yes, thank you. So usually, in our heads, if we’re indoctrinated in that adversarial and we don’t even know it, whatever, usually what’s happening is when we see something, we’re busy judging it, evaluating it, deciding whether we like it or don’t like it, imposing our diagnosis, and generally having an opinion about it, good or bad. And then, of course we live in a world now where people can take to social media and destroy other people if they choose. So we can actually just act out of what we think we’re seeing.

In nonviolent communication, what we do is we try to get to what we call the observation without the evaluation. So what is it that’s actually happening? And that is really trying to separate the reality from the perception. A really good example of that was I saw a demonstration once, a woman came down from the audience, and she wanted to talk about how angry she was with her flatmate. And she gave out this whole story and the trainer would say, “Well, we need to get to the observation, we need to get to the observation.” And out of this whole mass, the only two observations that she could come up with were that her flatmate occasionally leaves a dirty plate and a dirty mug and dirty cutlery in the sink without washing it up. And on occasion, her flatmate when she leaves the flat or the apartment, allows the door to slam and make a very loud noise behind her. And those are the only two observations that she could come up with that were actually really happening. Everything else was what she made up around it.

Lucas Perry: Yeah. So there’s this sense in which we’re telling ourselves stories on top of what may be just kind of simple, brute facts about the world. And then she’s just suffering so much over the stories that she’s telling herself.

Maria Arpa: Yeah, so the story attached to someone leaving the dirty plates in the sink is, she’s doing it on purpose, doing it to get at me. Those are the sorts of things we might tell ourselves. Or we might be the opposite and say, “She’s just so selfish.” And I really love what you just said. Of course, the person that I’m causing the most grief and pain to is myself. I’m cutting myself off from my own channel of love.

So that’s how we get to the observation, and it’s a really important part of the NVC process, because it helps us to identify that which is what we are telling ourselves and that which is actually happening in front of us. And the way that I could tell if it’s an observation or an evaluation, is I could record it on a video camera and show it to you and you would see the same thing.

Lucas Perry: Yeah, that makes sense. You’re trying to describe things as more factual without the story elements. This happened, then this happened, rather than, “My asshole roommate decided to leave her dirty shit everywhere because she just doesn’t care and she sucks.”

Maria Arpa: Yeah, so that’s the first step. And then what we’re looking to do is check in with ourselves on how do we feel. This is a really important step, because in nonviolent communication, what we propose is that our feelings are the red warning light on the dashboard of the car that tells you to pull over and look under the hood, you would say, we would say bonnet, and to check what else is going on. So feelings are a gateway. They’re our doorway. So they are our barometer.

So it’s really important to develop a really good vocabulary around feelings. And it’s really important to get to the feeling itself, whether it’s sadness, or anger, or upset, or despair or grief, or joy, or happiness, it’s really important to develop a vocabulary of feelings because if I ask someone how they feel, and I get, “I feel like,” no feeling is coming after the word “like.” I feel like jumping off a cliff. I feel like just going to bed and never getting up again. I feel like running away. That’s not a feeling. People can use those kinds of metaphors for us to try and guess the feelings, but actually what I want is the feeling.

Lucas Perry: Right, those are overly constructed. They need to be deconstructed into core feelings, which is a skill in itself that one learns. So you could say for example, “I feel abandoned,” but saying, “I feel abandoned,” needs to be deconstructed. Being abandoned is being afraid and feeling lonely.

Maria Arpa: So if we say, “I feel abandoned,” and I’m particularly referring to the “-ed” at the end, or “I feel disappointed” rather than disappointment, then what I’m actually doing, by the back door, is accusing somebody of doing it to me.

Lucas Perry: Yeah, that’s right. It’s a belief about the world actually, that other people have abandoned you or are capable of abandoning you.

Maria Arpa: Yeah, exactly. So there’s a skill in the language. However, if you go to the cnvc.org website, we have a free feelings list and a needs list that’s downloadable for anybody that wants to go and get it. And that helps you to really get closer to the language.

Lucas Perry: Okay, so I do have an objection here that I don’t want to spend too much time on, but I’m curious what your reaction is. So I think that what you would call the “abandoned” word is a faux feeling, is that the right word you guys use? There’s a sense in which it needs to be further deconstructed, which you mentioned, because it’s a belief about the world. Yet is there not also some reality that we need to respect and engage with, where abusers or toxic people may actually be doing the thing that the belief describes?

Maria Arpa: That’s where I would get back to the observation. Because those things do happen. I work a lot in domestic violence. I understand this. And there are two things, and one that we’ll go on to later. There’s getting back to the observation, because if I heard you say, “I feel abandoned,” what I would want to do is go back to figure out what’s the observation that brings you to that sense. Because actually, if you’re telling yourself you’ve been abandoned, or if somebody has abandoned you, and we can see that in an observation, then I’m guessing you’re feeling a huge amount of misery, grief, and despair, or loneliness, those would be the feelings.

And then later on, when we get to it, I’ll talk about the use of protective force. Because it isn’t all happy, dippy, and let’s all get on our hippie barge and have a great life. Without putting too fine a point in it, shit has happened, shit is happening, and shit’s always going to happen. And that’s the way of the world. What I’m talking about is how we respond to it.

Lucas Perry: Yeah, that’s right. You can try and NVC Hitler, and when you realize it’s not going to work, that’s when you mobilize your armies.

Maria Arpa: That’s a very interesting thing, you could try to work with Hitler, because actually, I don’t know if you’ve seen, I have a copy of it somewhere, Gandhi actually wrote a letter to Hitler.

Lucas Perry: Yeah, it didn’t work.

Maria Arpa: It didn’t work. And actually, if you look at the letter, it’s a shame because there was nothing in there for me that I recognized as NVC that may have generated at least a response.

Lucas Perry: Alright, so we have feelings, and we want to be sure to deconstruct them into simple feelings, which is a skill that one develops. And the thing here that you said that the feelings are like the warning to check the engine of the car, which is a metaphor to say feelings are a signal giving you information about needs being unmet, or at least even the impression or ignorance or delusion that you think your needs are not being met. Whether it’s actually your needs not being met, or a kind of ignorance to your needs being met, either way, they are a signal of that kind of perception.

Maria Arpa: Yes, absolutely. They’re a signal for something. And so when we talk about feelings, what I’m trying to do is capture the real emotion here and name it.

Lucas Perry: And so then there’s a sense that when you communicate needs to other people, they cannot be argued with and they’re also universally shared. So you can recognize the unmet needs of another person as a reflection or a copy and paste of your own needs.

Maria Arpa: So, this is a really interesting part of the conversation when we get to needs because that sits in something called needs-based theory. And Marshall Rosenberg does not have the monopoly on needs-based theory. I mean, most people will have heard of Maslow’s hierarchy of needs. There’s a Chilean economist called Manfred Max Neef, who boiled all the needs down to just nine and said that everything else is just ways or satisfiers, to try and meet those needs.

For me, needs-based theory is an art, not a science. And so again, you could go on the cnvc.org website, and you can pull off a list of needs, and you’ll recognize them. Now, when I say it’s an art not a science, on there could be, say, the need for order and structure. Okay, so let’s say I have a need for order and structure.

Lucas Perry: That seems like it needs to be deconstructed.

Maria Arpa: Yes. So I would then say, “Maybe that is a strategy to get to a deeper need of inner peace, but at the moment, that seems to be the very present need for me. I come downstairs, my desk looks like a bomb’s hit it, I’ve got calls to get on, and I just don’t feel like I can get my day started until I’ve created a sense of order around myself.”

It is a simple need in that moment, but the idea is that when we look at the fundamental needs like air, and movement, and shelter, and nutrition, and water, those are universal. I mean, I don’t think anyone could disagree with that. And then we get into more spiritual needs and social needs. Things like discovery and creativity and respect, what a big word, respect is. And the way I like to look at it is, you see, all the arguments we ever have can never be over needs, because I can recognize that need in myself, as well as in others, but they’re over the strategies that we’re trying to use to meet the need that may be at a cost to someone else’s needs, or to my own deeper needs.

So a really good example is if you take our need for air. There’s only one strategy to meet our need for air and that’s to breathe. How many arguments do people have over the need for air and the strategy of breathing?

Lucas Perry: Zero.

Maria Arpa: Right. Now, let’s take a really big word that gets bandied about everywhere, respect. How many arguments do we have over the strategies we’re using to meet a need for respect?

Lucas Perry: A million. And another million.

Maria Arpa: Exactly, exactly. And so the arguments are only ever about strategy. And once you’ve understood it, and practiced it, and embodied that, and you can see the world through that lens, everything changes. And that’s why I can do what I do.

Lucas Perry: Yeah, well, so let’s stick with air. So some people have a strategy for meeting their needs by polluting the air. So there’s some strategy to meet needs where the air gets worse, and everyone has this more basic need to breathe clean air, and so the government has to step in and make regulations so that the more basic need is protected. So there’s this sense that strategies may harm other people’s needs, and that sometimes the strategies are incompatible. But there’s this assumption that I think is brought in, that the world is sufficiently abundant to meet everyone’s needs, and that’s a way, I think, of subverting or getting around this contradictory strategy problem, where it would suggest: okay, oil companies, we can meet your needs some other way, as long as you change your strategy, and we’ll help you do that so that we have clean air. Does this make sense?

Maria Arpa: It makes total sense, and I’ve got two parallel answers, maybe even three. So the first one: there are people in the world who don’t mind, or maybe they do mind secretly, it doesn’t matter, who will pollute the air for profit. And we’ve reached that point because we have been using an adversarial system with each other, which means that as long as I can turn someone into the enemy, I can justify doing whatever I want. So we create bad people and good people.

So in this adversarial system, one of the things we can do is justify what we’re doing by holding up other people as being in the way. So we’ve created that system and actually what we’re finding, is that the system is failing. I don’t know, I don’t want to predict things, I’m not an economist or a politician, but it seems to me that the system is failing rapidly, more and more. More harm is being visited on the planet than is necessary and lots of people are waking up to that.

So now we’re hitting some kind of tipping point where in giving people things like the internet and all this stuff to self soothe them, actually, a lot of people got educated and started to ask better quality questions about the world they’re living in. And I think there’s a bit of an age difference between us, the wrong way for my end, but people of your generation are definitely asking better quality questions, and they’re less willing to be fobbed off.

So now we’ve got to figure out, how do we change things? And while I understand that from time to time, we need to go out and we need to actually put our foot down and make a protest and make a stand and say, “We’re not putting up with this,” and use protective force, and nonviolent resistance, and civil disobedience, while we need to do those things, we will never change things if we’re only operating at the incident level. If you try to do everything and fix it at the incident level without somebody working long-term on the system… People need to organize, and work out how people like you could get into positions of power.

I mean, I did a lovely piece of work with a Somalian community many years ago, and they’d arrived in the UK as refugees, and when they first arrived, they thought they were only going to be around for a few years and that the war would sort itself out and they’d all go back home. So they kept to themselves and they were very excluded and left out of society, and some of the sons were getting into trouble with the police because they hadn’t really worked out how to live in this society, and after they realized that actually they weren’t going back, “We’re here, this is our home,” what they’ve realized is they needed to start organizing. They needed to become teachers and doctors and lawyers and actually start to help their own community in that way. And I found that very moving and very empowering, and I loved doing the work with them. And the work we were doing was literally around the mothers and the sons. So that’s changing things at a system level.

Lucas Perry: Okay, so the final point here is about making requests. And I think this is a good way to pivot into talking about how you can’t make requests to make systemic change, because the power structures are sufficiently embedded and the incentives are structured such that, “Hey, excuse me, please stop having all that power and money, my needs are not being met,” isn’t going to work.

So let’s talk about the efficacy of NVC and how it’s effectively used. So I think it’s quite obvious how NVC is excellent for personal relationships, where there’s enough emotional investment in one another, and authentic care and community, where everyone’s clearly invested in actually trying to NVC if they can. Then the question becomes, for bigger problems in the world like existential risk, whether NVC can be effective in social movements, or with policy makers, or with politicians, or with industry or other powerful actors whose incentives aren’t aligned with our needs and who function impersonally. What is your reaction to NVC as applied to systemic problems and impersonal, large, powerful actors who have historically never cared about individual needs?

Maria Arpa: That’s a really interesting question because in my experience of the world, nothing happens without some kind of relationship between people. I mean, you can talk about powerful actors that don’t care, but bring me a powerful actor that doesn’t care and let me have a conversation with them. So for me, I agree that there’s a place for NVC in a group of people who care. There’s also a place for NVC in making the conversation irresistible, finding that place in somebody, because if we work on the basis that there are human beings in the world that have no self-love, or no love at all, if we work on the basis that there are human beings that walk the planet that are just all selfish and dangerous and nothing else, then of course, we’re doomed.

But I don’t believe that, you see, I believe that we are all selfish, greedy, kind, and considerate. And I know this from doing this work in prisons, that often what’s happened is the kind and considerate has just gone to sleep, or it’s paralyzed, or it’s frozen, but it is there to be woken up. And that’s the power of this work, when the person has sufficiently embodied it, has practiced this, and really understands that this involves seeing the world through a different lens. That actually, my role in the work I do, and I work in the front line of some of the worst things that go on in society, my role is to wake up the part of a person that is kind and considerate, and nurture it and bring it to life and grow it and work with it. And that doesn’t happen in one conversation. I don’t do that because I want something, I do that because I genuinely care about how that person is destroying themselves.

I can give you an example of somebody I met in prison who had been imprisoned for being part of a very, very violent gang, been in violent gangs for most of his life, done a lot of time in prison, and the judge called him evil, and greedy, or whatever. And he came on one of my trainings in around 2013. And he kept coming to talk to me in the breaks, it was like he really wanted some kind of connection or some affirmation or something, and he said, “I did a restorative justice training last month, and I really have to think about the harm I’ve done to my victims.” And I said, “You also have to think about the harm you’ve done to yourself.” And that was the first moment of engagement. And actually, now this man will be out of prison in I think 2022. He has put himself through a therapeutic prison for six years. I’ve never seen a life change to such an extent or such a degree. We’re thinking about employing him when he comes out of prison.

And that’s the thing is, it’s how do you engage a person to look at themselves and to look at how they may be destroying themselves in the pursuit of whatever it is they think they need. So, bring me somebody who is a powerful actor, who doesn’t care about anyone else, and we’ll open the conversation. That’s how I see it. The reason that I can do this and I can have these conversations is I don’t have an agenda for another human being. I simply want to understand what is going on, what the motivations are, what the needs are, and work out with that person, is that strategy actually working for you? And if you’re meeting your need for power or growth or structure or whatever, is it costing you in some other needs that’s actually killing you slowly?

Lucas Perry: Yeah, I mean, I think that, for example, at risk of becoming too esoteric, non-dual wisdom traditions would see this kind of violent satisfaction of one’s own needs as also a form of, first of all, self-harm, because your needs extend beyond what the conceptual egoistic mind would expect to be your needs. I’m thinking of someone who owns a cigarette company, and who’s selling them and knows that he’s basically helping to lie about the science of it, and also promoting that kind of harm. There’s a sense in which it’s spiritually corrupting, and leads to the violation of other needs that you have when you engage in the satisfaction of your needs through these toxic methodologies.

Maria Arpa: Absolutely, and it’s a kind of addiction, it’s a kind of habit, or obsession. One of the things that I’m really interested in is, at the end of the day, when we get to the request part of NVC, the real request is change. Whatever it is I’m asking for, whether I’m making a request of myself or the person in front of me, I’m requesting change. And change isn’t easy for most people. People need to go through a change process. And so it’s not just about the use of NVC as an isolated tool that is going to change the world, it is about contextualizing the use of NVC within other structures and systems like change processes, understanding group dynamics, understanding psychology, and all of those things, and then it has its place.

Lucas Perry: Yeah, it has a place amongst many other tools in which it becomes quite effective, I imagine. I suppose my only reaction here then is, you have this perspective, like, “Bring me someone in one of these positions of power, or who has sufficient leverage on something that looks to be extremely impersonal, and let’s have a conversation,” those conversations don’t happen. And no one could bring you that person really, and the person probably wouldn’t want to even talk to you, or anyone really who they know is coming at them from some kind of paradigm like this.

Maria Arpa: Oh, I don’t know about that. I mean, in the work I do, it’s a very small world, I’m not trying to affect global change. I would love to, but I’m not. But in the prison work I was doing, we managed to get the prison’s minister to come and see the work, and I managed to then have a meeting with him. And I managed to convince him on one or two things that had an effect at the time. So I don’t know that these things don’t happen. I think it’s about the courage and determination of the people to get those meetings, not coming from having an agenda for that person, but coming from really wanting to understand what the thinking is.

Again, in my experience, having been around the block a few times, the people making policies would be absolutely horrified if they saw how those policies are being delivered on the ground. There’s a huge gap between people sitting somewhere making a policy, and then how it gets translated down hierarchical systems, and then how it gets delivered. I like to think that policy makers aren’t sitting around the table going, “How can we make life worse for everybody, because we hate everybody.” Policy makers are sometimes very misguided and detached and unable to connect, but I don’t think policy makers are sitting there going, “We hate everybody, let’s just make life difficult.” They really genuinely believe they’re solving problems. But the issue with solving problems is that we’re addicted to strategy before understanding the needs.

Lucas Perry: We’re addicted to strategy before understanding needs.

Maria Arpa: Yeah. Our whole mentality is, “Problem? Fix it.”

Lucas Perry: So I mean, the idea here then, is the recognition of needs, as well as bringing in some other assumptions that we can talk about shortly, and relaxing this adversarial communicative paradigm into a needs-based one where you take people on good faith and you recognize the universality of human needs. And there’s this authentic care and empathy which is born, not of something which you’re fabricating, but of something which, in participating, actually serves some kind of need that you already have for authentic human connection, or maybe that boils down to love. And so NVC can be an expression of love in which NVC becomes something spiritual. And then this kind of process is what leads to a reexamination of strategy.

Maria Arpa: Yeah, so the idea is that because we have a problem, fix it mentality, we are skipping over the main part which is to sit with the pain of not knowing. So what we do is we jump to strategy, whether that’s in our daily lives, “I feel bad, I’ll go and get a haircut or buy myself a new wardrobe, or I’ve got a problem, and it’s going to create a big PR problem, so I’m going to do this,” and what we’re missing is the richness of understanding that when you do that, you’re acting out of fear, you’re jumping, because you’ve got triggered or stimulated in some way, and you’re acting out of fear to prevent yourself from the feelings that you don’t believe you’re going to be able to cope with.

And what I’m saying is that we understand, we get to the observation, we identify, is this an issue? Is it not an issue? And then we go within, in a group, and we sit with the pain, the mourning, of the mistakes we’ve made, or the problem we haven’t solved, or the world we’ve created, whatever it is, and it’s in sitting together with that, and being willing to say, “I don’t know what the answer is right now, or today. Maybe I just need to breathe,” in being able to do that, we reach our creativity.

So we’re coming out of a place of absolute creativity and love, not jumping out of fear. And there’s a tremendous difference in operating in the world in this way. But it requires us to be willing to be vulnerable, and I think that’s what I think you’re talking about when you talk about people being detached. They’re so far away from their vulnerability, and when people are so far away from their vulnerability, they can do terrible things to other people or themselves.

Lucas Perry: Yeah, I mean, this is a sense of vulnerability in which, it’s a vulnerability of the recognition and sensitivity of your needs, but there’s a kind of stable foundation and security in that vulnerability. It’s a kind of vulnerability of self-knowing, it seems.

Maria Arpa: It’s vulnerability, plus trust.

Lucas Perry: It seems to me then, NVC’s place in the repertoire of an effective altruist, or someone interested in systemic change or existential risk, is that it becomes a tool in your toolkit for having a kind of discourse with people that may be surprising. I definitely believe in the capacity for enlightened or awakened people to exist as an example of what might be possible. And so if you come at someone with NVC who’s never experienced NVC before, I agree with you that that is where, “Oh, just have the conversation with the person,” might lead to some kind of transformative change. Because if you exist as a transformative example of what is possible, then there is the capacity for other people to recognize the goodness in you that is something that they would want and that leads to peace and freedom. NVC is obviously not the perfect solution to conversation, or the perfect solution to the problem of strategy, for example, and I guess, broadly, strategy can also be understood as game theory, where you’re going to have lots of different actors with different risk tolerances and incentives, but it is a much, much better way of communicating, full stop.

Maria Arpa: I notice I feel a slight discomfort when you call NVC a tool, because I don’t see it as a tool, I see it as a way of life.

Lucas Perry: Yeah, I hear that.

Maria Arpa: When I’m in that frame, because I look at the person I was 20 years ago, and I look at the person I am now and I see the transformation, but it’s because of the embodiment of something. It’s because it’s really helped me to look at all aspects of my life. It’s helped me to understand things that I wasn’t understanding, it helped me to wake up and become functional, and mindful, and all of those things, but that’s who I am now. I mean, I’m not saying that I’m some perfect person, and of course, occasionally, the shadows always there, but I’ve learned not to act on my shadow. I’ve learned to play with it. But when I am that embodiment or that person, then I’m bringing a new perspective into any conversation I have. And sometimes people find that disarming in an engaging way.

Lucas Perry: Yeah, that’s right. It can be disarming and engaging. I like that you use the word waking up. We just had a podcast with a former Buddhist monk and we talked a lot about awakening. And I agree with you that calling it a tool is an instrumentalisation of it, which lends itself to the problem-solving mindset, which is kind of adversarial with relation to the object which is being problem solved, which in this case, could be a person. So if it becomes a kind of non-conceptual embodied knowing or being, then there is the spiritual transformation and growth that is born of adopting this upgraded way of being. If you download the software of NVC, things will run much better.

Maria Arpa: So then I wanted to comment on the strategy. NVC is a way of unlocking something, okay. Now once I’ve unlocked it, and once I’ve got to the part where we’re now looking at strategies that will satisfy needs, now, we might need a different way of conversing. Now it might be very robust, it might be from the point of negotiation, and that negotiation may be very gentle and sensitive, but it can also be very boundaried. And so yeah, NVC for me is the way to unlock something, to bring people into a consciousness that asks: what’s the point of making strategies if we don’t understand the needs we’re trying to meet? And then we use those needs as the measurement for whether the strategy is going to satisfy them or not.

Lucas Perry: Okay. And I think I do also want to put a flag here, and you can tell me if this is wrong, that even those negotiations can fail. And that comes back to this kind of assumption that the world has sufficient abundance, that everyone’s needs can be met. So I mean, my perspective is I think that the negotiations can fail, and that when they fail, then something else is needed.

Maria Arpa: So if the negotiation has failed, in my experience, it’s because somebody wanted something, even if it was just speed, that wasn’t available. And so a really big deal for me is understanding where we want to get to, having that shared vision that we’re all trying to get to this place, and working towards it at the speed and tolerances of the whole group, and yet not allowing it to go at the slowest person’s pace. And that’s an art. There’s a real skill to, “We’re not going at the slowest person’s pace, but we’re also not going to take people out of their tolerances.”

Lucas Perry: But it seems like often with so many different people, tolerances are all over the place and can be strictly incompatible.

Maria Arpa: So that means we didn’t do enough work, and our shared vision isn’t right, and maybe we need to go back and look at the deeper needs. One of the things I talk about in this work is you’re never going to undo 30 years of poor communication in one conversation. It’s a process, and what I’m looking for is progress. And sometimes progress is literally just the agreement that the person will have another conversation with me, rather than slam the door in my face.

I’ve done neighbor disputes where I have knocked on someone’s door, they haven’t responded to the letters or the phone calls, and I have knocked on someone’s door, and I’ve got 30 seconds before they slam the door in my face and, in no uncertain terms, tell me to whatever. And so for me at that moment, just them giving me two minutes, and then just getting to the agreement, I’m not going to try and do any business with you right now, just to get to the agreement that you will have another conversation with me, is progress.

And so it’s really about expectations and how quickly we think we can undo things or change things. And change processes are complex. How many times did you wake up and say, “I want to get fit or eat healthier food or lose weight or stop smoking or drink less,” or whatever it was, and then did you execute it straight away? No, you fluctuated. You probably relapsed, and relapse is a really important part of change. But then do we give up? Do we say, “Well that’s it, it’s over, we can’t negotiate,” or do we say, “Well, okay, that didn’t work. What else could we try?”

So in my world, and what I’ve understood, is the art, or the trick to life is not constantly searching to get your needs met. The trick to life is understanding that I have many needs, and on any day, week, month, year, some get met, and some go unmet, and I’m okay with that. It’s just looking on balance. Because if the aim of the game is to go, “Yeah, my needs for this, this, this and this are all not being met, so therefore, I’m going to just make it my mission to get my needs met,” you’re still in the adversarial paradigm.

So I have lots of needs that go unmet, and you know what, it’s fine. It doesn’t mean I can’t express gratitude for what I do have. It doesn’t mean I don’t love everybody and everything in the way it is. It’s fine. I have no expectation that all my needs will get met.

Lucas Perry: Yeah, so you’re talking here about some of your experience, which I think boils down to some axioms or assumptions that one makes in NVC that I think are quite wholesome and beneficial. And I’ll just read off a bunch of them here and if you have any reactions or want to highlight any of them, then you can.

So the first is that all human beings have capacity for compassion and empathy. I believe in that. Behavior stems from trying to meet needs, so everything that we do is an expression of just trying to meet needs. You said earlier there are no bad or good people, they are just people trying to meet needs. Needs are universal, shared, and never in conflict. I think that one’s maybe 99.9% true. I don’t know how psychopaths, like Jeffrey Dahmer, fit in there.

Maria Arpa: Well, I mean, I’ve worked with people in prisons who have been labeled as psychopaths, and on that very clear basis that people are selfish, greedy, kind, and considerate, but that the kind and considerate is either not on show, not available, put in a box, or paralyzed, I have woken up the kind and considerate.

Lucas Perry: You don’t think that there are people that are sufficiently neurodivergent and neuroatypical, that they don’t fit into these frameworks? It seems clearly physically possible.

Maria Arpa: It only runs out when I run out of patience, love, and tolerance to try. It only ends at that point, when I run out of patience, love, and tolerance to try, and there might be many reasons why I would say, “I’m no longer going to try,” don’t get me wrong. We’re not asking everybody to just carry on regardless. But yeah, when I say I’ve had enough, and I don’t want to do this anymore, that’s when it ends. The trouble is, I think we do that far too quickly with most people.

Lucas Perry: Yeah. All right. And so kind of moving a bit along here. The world has enough resources for meeting everyone’s basic needs, we mentioned this.

Maria Arpa: I do want to just comment on the idea that the world is abundant, and that it has enough abundance to meet everybody’s needs.

Lucas Perry: Yeah.

Maria Arpa: The issue is, if it hasn’t, what’s the conversation we’re going to have? Or do we just want to inflict more and more unnecessary human suffering on each other? If, as is predicted, there’s going to be climate change on a scale that renders parts of the planet uninhabitable and there’s going to be mass migration, what are we going to do? Are we going to just keep killing them? Are we going to have a race to the bottom?

Lucas Perry: Are we going to leave it to power dynamics?

Maria Arpa: Yeah, or are we going to say, “Actually, things are getting tighter now, so we need to figure out how to collaborate so that we don’t kill each other.”

Lucas Perry: And then, so I was saying, feelings point to needs being met or unmet. I would argue this is more like: feelings point to needs being met or unmet, or to the perception that they are being met or unmet.

Maria Arpa: So I just say, being able to identify the feeling and name the emotion is an alarm system. It’s our body’s natural alarm system, and we can use that to our advantage.

Lucas Perry: And I’ll just finish with the last one here that I think is related to the spiritual path, which says that the most direct path to peace is through self connection. So through self connection to needs, one becomes, I think, increasingly empathetic and compassionate, which leads to a deepening and an expression of NVC, which leads to more peace.

Maria Arpa: Yeah, the first marriage is this one. The first marriage is the one between I and I, and if that one ain’t working, nothing else is going to work.

Lucas Perry: All right. So can we engage in an NVC exercise now as an example of the format, so moving from observations to feelings to needs to request?

Maria Arpa: Okay, so I think it would be really useful if you could tell me about a situation that’s on your mind right now. And Marshall Rosenberg would say, “Can you tell me in 40 words or less what’s alive in you right now?”

Lucas Perry: So let’s NVC about this morning, then? I was late, and I kept you waiting for me, and I also showed up to the Zoom call late because… I guess the because doesn’t matter. Unless that’s part of the chronology of what happened. But yeah, I showed up to the Zoom call late and then my microphone wasn’t working, so I had to get another microphone. And I feel… how do I feel? I feel bad that… I mean, bad isn’t the right word, though, right?

Maria Arpa: Mm-hmm (affirmative). But you just say it, just say it, and then we’ll go over it together.

Lucas Perry: Yeah, so I guess I regret and I feel bad that I wasn’t fully prepared, and then it took me 15, 20 minutes to get things started. And this probably relates to some need I have that the podcasts go well, but also that I not damage my relationship with guests, which relates to some kind of need about… I mean, this probably goes all the way down to something like a need for love or self-love, very quickly, it would seem. And yeah, it’s unclear that we’ll have another conversation like this anytime soon, so it’s unclear to me what the kinds of requests are, though maybe there’s some request for understanding and compassion for my failure to completely show up and arrive for the interview perfectly. How’s that for a start?

Maria Arpa: Yeah, it’s really very sweet, and so I’d love to just tell you some of what I’ve heard, first. So I heard you say that you’re feeling bad because you turned up late and unprepared for our interview, and that feeling bad or regretful is linked to some kind of need, at the end of the day you went there, and you said, “It’s probably a need for self-love.” And it’s hard to know what a request would be like, but I guess what I heard you say is you’d like to request some understanding.

Lucas Perry: Yeah, that’s right.

Maria Arpa: Before I respond to you, I would really love to just break down how you did observation, feeling, need, request, and then work with you a little on each of those. Would that be okay?

Lucas Perry: Yeah, that sounds good.

Maria Arpa: So the biggest judgment in your observation was the word late. It takes a bit of understanding, but who defines late?

Lucas Perry: Yeah, it started 15 minutes after the agreed upon time, is more of a concrete observation.

Maria Arpa: Well, a concrete observation actually, is that we got online at the time we agreed, and we didn’t start the interview until 15 minutes later.

Lucas Perry: Well, I was five minutes late to getting to the Zoom call.

Maria Arpa: Okay. Well, yeah, that word late again.

Lucas Perry: Sorry, I arrived five minutes…

Maria Arpa: After the agreed time.

Lucas Perry: Thank you.

Maria Arpa: Because late can be a huge weapon for self punishment. So the observation is, you came on the call five minutes after the agreed time, and we didn’t begin the interview until 15 minutes after the agreed time. So unprepared, late, and all of those things, they’re what you’re telling yourself. It’s part of the story, because, since the example you gave includes me in that narrative, from my perspective, we didn’t have an agreement about what to expect at 10:00 AM your time, 3:00 PM my time. So how do I know you were unprepared? I’ve never done this before with you.

Lucas Perry: Okay.

Maria Arpa: Does that make sense? Can you see that you put a huge amount of judgment into what you thought was your observation, when in actual fact, who knows?

Lucas Perry: Yeah, so introducing other observations now here, is the observation that, I believe I pushed this back twice.

Maria Arpa: Once.

Lucas Perry: I pushed this back once?

Maria Arpa: Mm-hmm (affirmative).

Lucas Perry: Okay, so I pushed this back once. And then there was also this long period where I did not get back to you about scheduling after we had an initial conversation. So the feelings are things that need to be deconstructed. They’re around messiness, or disorganization, or not doing a good job, which are super synthetic and evaluative, and would need to be super deconstructed, but that’s what I have right now.

Maria Arpa: So on some level, you didn’t meet your own standards.

Lucas Perry: No, I did not meet my own standards.

Maria Arpa: Right. So on some level, you didn’t meet your own standards, and that’s giving rise to a number of superficial feelings, like you’re feeling bad and guilty, and all of those things. And I can only guess, I’ve told myself, but perhaps you’re feeling regret, possibly some shame, and I don’t know why the word loneliness comes up for me. Isolation or loneliness, disconnection, something around, that you’ve screwed up, and now you have to sit in there and now there’s some shame and some regret and some embarrassment. Embarrassment, that’s it. I’m telling myself the feeling is embarrassment.

Lucas Perry: I’m trying to focus and see how accurate these are.

Maria Arpa: Yeah, and I could be completely wrong. I can only guess.

Lucas Perry: Yeah, I mean, you’re trying to guide me towards explaining my own feelings.

Maria Arpa: Yeah, so does anything resonate with embarrassment, or shame, or regret, or mourning.

Lucas Perry: Mostly regret. This kind of loneliness thing is interesting, because I think it’s related to the feeling of… If there was sufficient understanding and compassion and self-love, then these feelings wouldn’t arise because there would be understanding. And so the loneliness is born out of the self-rejection of the events as they transpired. And so there’s this need for wholeness, which is a kind of self-love and understanding and compassion and knowledge. It’s just an aligned state of being, and I’ve become unaligned by creating this evaluative story that’s full of judgment. Because all this could happen and I could feel totally fine, right? That roughly captures the feelings.

Maria Arpa: Okay. And then it really resonated for me and I heard you say that this need for wholeness, and definitely for understanding and love, and a deep need for mutuality.

Lucas Perry: Yeah. There’s a sense that I can’t fully meet you when I haven’t fully accepted myself for what I have done. Or if there’s a kind of self consciousness around the events and how it has impacted you.

Maria Arpa: Yeah, so I’m telling myself that we made an agreement, and actually, part of it is a story that you’re telling yourself, and part of it has some reality in it, that you didn’t meet the terms of our agreement.

Lucas Perry: Yeah.

Maria Arpa: And then what that’s doing to you is, when you didn’t meet the terms of the agreement, I’m telling myself that now what happens to you is you worry, I think that’s the word. Ah, that’s it, maybe the feeling’s worry, or anxiety, that then any connection that we might have made is disconnecting or breaking, or we’re losing mutuality, because I may be now looking at you differently for not having met the terms of the agreement.

Lucas Perry: Yeah, and there’s also a level of self-rejection. And someone who is self-rejecting is in contradiction and cannot fully connect with someone else. If you find that yourself is unlovable or that you do not love yourself, then it’s like impossible to love someone else. So I think there’s a sense also, then of, if you’re creating a bad narrative and that’s leading to a sense of self-rejection, then there’s an inability to fully be with the other person. So then I think that’s why you were pointing out to the sense of loneliness, because this kind of self-rejection leads to isolation and inability to fully meet the person as they are.

Maria Arpa: Yeah, so we went over the observation, we got to some of the feelings, we got to some of the needs, now do you have a request? And I think I heard you say you have a request for understanding.

Lucas Perry: Yeah, understanding and compassion, probably mostly from myself, but also from you.

Maria Arpa: Yeah, so I’m wondering if you’d like me to respond to that request now. Would that be helpful for you?

Lucas Perry: Yeah, sure.

Maria Arpa: So I guess when I hear your request for understanding and compassion, and that you’re also recognizing you need to give it to yourself, and that’s a relief for me that you know you need to give it to yourself, and yet on some level, we do have a situation over an agreement that was broken. I would love for you to be able to hear where I am. And I’m just wondering, would you be willing to hear where I am in that, to support you in your request?

Lucas Perry: Yeah, and probably some need for love in the sense of, I mean, there are different kinds of love. So whatever kind of love exists between you and I, as human beings that is coworker love, or colleague love, or whatever kind of relationship we have, I don’t know how to explain our relationship, but whatever kind of love is appropriate in that context.

Maria Arpa: So I guess where I’m coming from, is, I feel deeply privileged and honored to be asked to do this podcast. I’ve heard some of your other podcasts, and I think they’re masterpieces. So to be invited to do this at all and for us to have met and for you to have actually said, “Yeah, let’s go ahead and do this,” went a long way for me to believe in myself as well.

So you may be having your own moment of self punishment, and so did I. “What happens if he doesn’t like me at the end of our interview, and doesn’t want to do it?” And in terms of our agreement, as far as I’m concerned, you got online roughly at the time we said, and we didn’t have a tacit agreement about when the interview would then start, and so in terms of being alongside you while you made preparations and whatever, actually, it helped me to see you as human. So it actually increased my love for you.

Lucas Perry: Oh, nice.

Maria Arpa: Because I saw in that first meeting, you were kind of interviewing me and seeing if there was suitability for a podcast. And of course, I know I know my stuff, right. That’s not the issue. But there was a sense of me wanting to be on best behavior. But now I come to this call, and there you were just being human and expressing it, and I was able to say a few things to you. And I felt that 15 minutes was very connecting, it was very connecting for me. And so I just wonder when you hear that, does it change anything for you?

Lucas Perry: Yeah, I feel much better. I feel more capacity to connect with you and I appreciate the honesty and transition of how you felt with regards to the first time we talked, where, because I couldn’t find any of your content online, I didn’t really know anything about you, so we had this first conversation where you felt almost as if there was a kind of evaluative relationship going on, which was then, I guess, dissolved by having this conversation in particular, but also by the beginning of the podcast where I was being human and my microphone wasn’t working, and my electricity was out this morning and things weren’t working out. So yeah, I appreciate that. I feel much better and warm and more capacity for self-love and connection. So thanks.

Maria Arpa: Yeah.

Lucas Perry: I think that means the NVC was successful.

Maria Arpa: Yeah. And then just to add one thing, during the interview, I heard you say something like, “I don’t know what my requests would be because the opportunity for us to connect again like this,” or whatever, you said something about that we probably wouldn’t speak to each other again in a hurry. I actually felt really sad when I heard that. I felt such sadness, “Oh, no, I’ve connected with Lucas now.” So I hope that there’ll be other opportunities to just chat or stay in touch or whatever, because there’s something about you that I feel really resonates. And I love where you’re coming from, and I love what you’re trying to do. It’s really important.

Lucas Perry: Well, thank you, that’s really sweet. I appreciate it.

Maria Arpa: Thank you. And look at that, we’re bang on time.

Lucas Perry: So that means you escaped question eight, which is my criticisms of NVC.

Maria Arpa: We could come back and add that on another time, but I can’t do it now. If you want to do another bit, we can do another bit. I’m really happy to do that.

Lucas Perry: Yeah. Great. So as we wrap up, if people want to follow you or to learn more about NVC, or the Center for Nonviolent Communication, where are the best places to follow you or get more information or check out more of Marshall’s work?

Maria Arpa: So obviously, CNVC has a website, which is CNVC, Center for Nonviolent Communication, .org. That’s your first port of call. Marshall’s books are published by Puddle Dancer Press. I know Meiji reasonably well, and he’s a really wonderful guy, so buy some books from Puddle Dancer Press, because Marshall’s books are amazing. There are 700 NVC trainers across the world, and you can find those on the website if you go to the right bit and search, so you can find someone local in your area; they all work differently and specialize in different things. If you put NVC into Facebook, you will find countless NVC pages. And if you’re looking for me, Google my name, Maria Arpa, and I will come up. Thank you.

Lucas Perry: All right. Thanks, Maria.

Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

 Topics discussed in this episode include:

  • The projects of awakening and growing the wisdom with which to manage technologies
  • What might be possible from embarking on the project of waking up
  • Facets of human nature that contribute to existential risk
  • The dangers of the problem solving mindset
  • Improving the effective altruism and existential risk communities

 

Timestamps: 

0:00 Intro

3:40 Albert Einstein and the quest for awakening

8:45 Non-self, emptiness, and non-duality

25:48 Stephen’s conception of awakening, and making the wise more powerful vs the powerful more wise

33:32 The importance of insight

49:45 The present moment, creativity, and suffering/pain/dukkha

58:44 Stephen’s article, Embracing Extinction

1:04:48 The dangers of the problem solving mindset

1:26:12 Improving the effective altruism and existential risk communities

1:37:30 Where to find and follow Stephen

 

Citations:

Stephen’s website

Stephen’s teachings and courses

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, we have a special episode for you with Stephen Batchelor. Stephen is a secular and skeptical Buddhist teacher and practitioner with many years under his belt in a variety of different Buddhist traditions. You’ve probably heard often on this podcast about the dynamics of the race between the power of our technology and the wisdom with which we manage it. This podcast is primarily centered around the wisdom portion of this dynamic: how we might cultivate wisdom, and how that relates to the growing power of our technology. Stephen and I get into discussing the cultivation of wisdom, what awakening might entail or look like, and also his views on embracing existential risk and existential threats. As for a little bit more background, we can think of ourselves as contextualized in a world of existential threats that are primarily created due to the kinds of minds that people have and how we behave, particularly how we decide to use industry and technology and science, and the kinds of incentives and dynamics that are born of that. And so cultivating wisdom, here in this conversation, is seeking to try to understand how we might better gain insight into and grow beyond the worst parts of human nature: things like hate, greed, and delusion, which motivate and help to cultivate the manifestation of existential risks. The flipside of understanding the ways in which hate, greed, and delusion motivate and lead to the manifestation of existential risk is also uncovering and being interested in the project of human awakening and developing into our full potential. 
So, this just means that whatever idealized kind of version of yourself you think you might want to be, or that you might strive to be, there is a path to getting there, and this podcast is primarily interested in that path, how it relates to living in a world of existential threat, and how we might relate to existential risk and its mitigation. This podcast contains a bit of Buddhist jargon in it. I do my best in this podcast to define the words to the best of my ability. I’m not an expert, but I think that these definitions will help to bring a bit of context and understanding to some of the conversation. 

Stephen Batchelor is a contemporary Buddhist teacher and writer, best known for his secular or agnostic approach to Buddhism. Stephen considers Buddhism to be a constantly evolving culture of awakening rather than a religious system based on immutable dogmas and beliefs. Through his writings, translations and teaching, Stephen engages in a critical exploration of Buddhism’s role in the modern world, which has earned him both condemnation as a heretic and praise as a reformer. And with that, let’s get into our conversation with Stephen Batchelor. 

Thanks again so much for coming on. I’ve been really excited and looking forward to this conversation. I just wanted to start it off here with a quote by Albert Einstein that I thought would set the mood and the context. “A human being is a part of the whole called by us universe, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest, a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures, and the whole of nature and its beauty. Nobody is able to achieve this completely. But the striving for such achievement is in itself a part of the liberation and a foundation of inner security.”

This quote to me is compelling because, one, it comes from someone who is celebrated as one of the greatest scientists to have ever lived. In that sense, it’s a calling for the spiritual journey, it seems, from someone with credibility for people who are skeptical of something like the project of awakening, or whatever a secular Dharma might be or look like. I think it sets up the project well. I mean, he talks about here how this idea of separation is a kind of optical delusion of his consciousness. He sets it up as the problem of trying to arrive at experiential truth and this project of self-improvement. It’s in the spirit of this, I think, of seeking to become and live an engaged and fulfilled life, that I am interested and motivated in having this conversation with you.

With that in mind, the problem, it seems, that we have currently in the 21st century is what Max Tegmark and others have called the race between the power of our technology and the wisdom with which we manage it. I’m basically interested in discussing and exploring how to grow wisdom and about how to grow into and develop full human potential so that we can manage powerful things like technology.

Stephen Batchelor: I love the quote. I think I’ve heard it before. I’ve come across a number of similar statements that Einstein has made over the years of his life. I’ve always been impressed by that. As you say, this is a man who’s not regarded remotely as a religious or a spiritual figure. Yet, obviously, a highly sensitive man, a man who has plumbed the depths of physics in a way that has transformed our world. Clearly, someone with enormous insight and understanding of the kind of universe we live in. Yet, at the same time, in these sorts of passages, we realize that he’s not just the stereotyped, detached scientist separated out from the world looking at things clinically and trying to completely subtract his own subjectivity.

This, I think, is often the problem with scientific approaches. The idea is that you have to get yourself out of the way in order to somehow see things as they really are. Einstein breaks that stereotype very well and recognizes the need that if we are to evolve as human beings and not just as scientists who get increasingly clear and maybe very deep understandings into the workings of the universe, something else is needed. Of course, Einstein himself does not seem to really have any kind of methodology as to how that might be achieved. He seems to be calling upon something he may consider to be an innate human capacity or quality. His words resonate very much in terms of certain philosophies, certain spiritual and religious traditions, but we don’t really see any kind of program or practice that would actually lead to what he recognizes to be so crucial.

I found the final comment he makes a bit deflating. He seems to think it has to do with inner security, which is a highly subjective, and I would think rather limited, goal to achieve, given what he’s just made out as his vision.

Lucas Perry: Yeah, that’s wonderfully said. You can help unpack and show the depths of your skepticism, particularly about Buddhism, but also your interest in creating what you call a secular Dharma. We’re on a planet with ancient wisdom traditions, which have a lot to say about human subjectivity. Einstein is setting up this project of seeing through what he says and takes to be a kind of delusion of consciousness: the sense of experiencing oneself and one’s thoughts and feelings as something separate, and this restricting us to caring most about our personal desires and our affection for a few persons nearest to us.

I mean, here, he seems to be very explicitly tapping into concepts which have been explored in Buddhism, like non-self and emptiness and non-duality. Hey, this is post-podcast Lucas here, and I just wanted to try and explain a few terms that were introduced here, like “non-self,” “emptiness,” and “non-duality.” And I’ll do my best to explain them, but I’m not an expert, and other people who think about this kind of stuff might have a different take or give a different explanation, but I’ll do my best. So, I think it’s best to first think of the universe, beginning 13.7 billion years ago, as an unfolding continuous process; contextualized within that unfolding is the process of evolution, which has led to human beings. There’s this deep, grounded connection of the human mind, human nature, and just being human with the very ground of being and the unfolding of everything. Yet, in that unfolding there is this construction of a dualistic world model, which is very fitness enhancing, where you are constructing a model of self and a model of the world. This self-other dualistic conceptual framework born of this evolutionary process is fitness enhancing, it’s helpful, and it’s useful, yet it is a fabrication, an epistemic construction which is imposed upon a process for survival reasons. And so non-duality comes in and simply rejects this dualistic construction, and says things are not separate, things are not two: one, undivided, without a second. This means that once one sees all this dualistic construction and fabrication as it is, then one enters what I might call something more like a don’t-know mind, where one isn’t relying on this conceptual dualistic fabrication to ultimately know. Or one doesn’t take it as what reality ultimately is, as divided into all of these things like self and other and tables and chairs and stars and galaxies. Non-self and emptiness are both very much related to this.

Non-self is the view that the self is also this kind of construction or fabrication, which under experiential and conceptual analysis just falls apart and reveals that there is no core or essence to you, that there is nothing to find, that there is no self, but merely the continual unfolding of empty, ephemeral, conditioned phenomena. Emptiness here means that the self, and all objects that you think exist, are empty of intrinsic existence and are merely ephemeral appearances based on causes and conditions; when those causes and conditions no longer sustain the thing in its current form, it dissolves. In non-duality there’s this sense of no coming, no going; there’s no real start, beginning, or end to anything. There is this continual unfolding process, where something like birth and death are abstractions or constructions imposed on a non-dual continuous process. And so the claims of non-self, emptiness, and non-duality are both ontological claims about how the universe actually is, and experiential claims about how we can shift our consciousness into a more awake state, where we have insight into the nature of our experience and into the nature of things, and we’re able to shift into a clear non-conceptual seeing of something like non-dual awareness, or emptiness, or non-self. This might entail something like noticing the sense of self becoming an object of witnessing. There’s no longer an identification with self, and so there’s space from it. There might eventually be a dropping away of the sense of self, where all that’s left is consciousness and contents without a center, where there isn’t a distance between witnesser and what is perceived. All there is is consciousness, and everything is perceived infinitely close, where everything is basically just made of consciousness and where consciousness is no longer structured by this dualistic framework.

And so a layer of fabrication drops away and there’s just consciousness and this deep sense of interconnectivity and being. This is what I think Einstein is pointing to when he says that our experience of ourselves as separated from the rest of the universe is a kind of optical delusion of our consciousness. I think he is pointing towards how we construct a sense of self and a dualistic fabrication of a self-other world model populated by objects and things and other people, and then buy into these constructions as a kind of ultimate representation of how things are, where there are all these things with intrinsic, independent existence, with a kind of essence, rather than this being a non-dual, undifferentiated, unfolding, continuous process, which can be said to be neither same nor different, and of which there can neither be said to be a self nor not be a self. So I think this is what he is pointing in the direction of, and why I point out that there are other wisdom traditions which have been thinking about and practicing cultivating these kinds of insights and awareness for many years. So, back to the conversation. As you said, he doesn’t have a practice or a system for arriving at an experiential understanding of these things. Yet, there are traditions which have long studied and practiced this project.

Stephen Batchelor: Yes, this is absolutely correct. Myself and many of my peers and colleagues and friends have spent their lives exploring these wisdom traditions. In my own case, this has been various forms of Buddhism primarily. I think we also find these traditions within our own culture. I think we find something very similar in the Socratic tradition. We find something likewise in the Hellenistic philosophies, which also recognize that human flourishing, which is a term I very much like, is essentially an ethical practice. It’s a way of being in which we take our own subjective assumptions to task.

We don’t just assume that everything we think and feel is the way things actually are, but we begin to look more critically. We pursue what Socrates calls an examined life. Remember, an unexamined life is not worth living, he said. Then, what is this examined life? Perhaps like yourself and others, I found, in a way, more richness in Asian traditions because they don’t just talk about these things, but they have actual living methodologies and practices that, if followed, can lead us to a radical change of mind and can begin to unfold different layers of human experience that are often blocked, somehow ignored, and cut off from what we experience from moment to moment.

Lucas Perry: Yeah, exactly. There’s this valid project then of exploring, I think, the internal and the subjective point of view in a rigorous way, which leads to a project of something like living an examined life. From that perspective, one can come to experiential kinds of wisdom, and I think can get in touch with kinds of skillfulness and wisdom which an overreliance, or a sole reliance, on the conceptualization of the dualistic mind would fail at: things like compassion, or discovering something like Buddha nature, which I had been very skeptical of for a long time but less so now, and heart-mind, and heart wisdom.

And that awakening is, I think, a valid project, and something that is real and authentic. I think that that’s what Einstein helps to explain. His credentials, I think, helped to beef this up a bit for people who may be skeptical of that project. I mean, I view this partially as, for thousands and thousands of years, people have been struggling just to meet their basic needs. As these basic needs, even just material needs, keep getting met, we have more and more space for authentic human awakening. Today, in the 21st century, we’re better positioned than ever to have the time, space and the information to live deeply and to live an examined life and to explore what is possible of the depths of human compassion and well-being and connection.

Thinking about the best moment of the best day of your life and if it’s possible to stabilize in that kind of introspection and compassion and way of being.

Stephen Batchelor: I broadly go along with exactly what you’re saying. It almost seems self-evident, I think, for those of us who have been involved in this process for a number of years. On the other hand, it’s very easy to sort of talk about emptiness and non-duality and Buddha nature and so on. What really makes the difference is how we actually internalize those ideas, both conceptually (I think it is important that we have a clear, rational understanding of what these ideas convey) and also, of course, through actual forms of spiritual practice, by performing spiritual exercises, as we find already in the Greeks, that will hopefully lead to significant changes in how we experience life and experience ourselves.

What we’ve said so far is still leaving this somewhat at a level of abstraction. I’ve been involved in Buddhism now full time for the last 45 years. If I’m entirely honest with myself, I have to acknowledge that at many levels, my consciousness seems to be much the same. There are moments in my practice in which I’ve gained what I would consider to be insights. I like to think that my practice of Dharma has also made me more sensitized, more empathetic to the needs of others. I like to think that I’ve committed myself to a way of life in which I put aside the conventional ambitions of most people of my generation.

Yet, I can also see that I still suffer from anxieties and high moods and low moods and I get irritated and can behave very selfishly at times. I see, in many ways, that what this practice leads me to is not a transcendence of these limiting factors that Einstein refers to, but let’s say a greater clarity and a greater humility in acknowledging and accepting these limitations. That I think is, if anything, where the practice of awakening goes to. Not so much to gaining a breakthrough into some transcendental reality, but rather to gain a far more intimate and real encounter with the limitations of one’s own experience as a human being.

I think we can sometimes lose touch with the need for this moment to moment humility, this recognition that we are, I think, to a considerable degree, built as biological organisms to maintain a certain kind of consciousness that will, I suspect, be with us largely until we die. I would say the same also about the Buddhist teachers that I’ve met over the decades: however insightful their teachings may be, however fine examples they are of what a human life can be, I have spent time with them on a day to day basis, with Tibetan lamas and with Zen masters, and I’ve got to know them quite well.

I discover not so much a person who is almost as it were out of this world, but rather someone who carries still with them the same kinds of human traits and quirkiness and has good days and bad days like the rest of us. I would be cautious, in a way, setting up another kind of divide, another kind of duality between the unenlightened and the enlightened. One of the things I like very much about Zen is that it’s quite conscious of this problem. One of my favorite citations is from the Platform Sutra of the Sixth Patriarch called Hui-neng, who says, “When an ordinary person becomes awakened, we call them a Buddha. When a Buddha becomes deluded, we call them an ordinary person.”

This is a way of thinking that is perhaps closer to Taoism than to the Indian Buddhist traditions, which, to me, tend to operate rather more explicitly within the domain of there being enlightenment on the one hand and delusion on the other, awakening on the one hand, non-awakening on the other. Then there’s a kind of step-by-step path that leads one from unenlightened to enlightened. The Zen tradition is suspicious of that kind of mental description, of that kind of frame, and recognizes that awakening is not something remote or transcendent. Awakening is a capacity that is open to us in each moment.

It’s not so much about gaining insight into some deep ontology, into the nature of how things really are. It’s understood far more ethically. In other words, if I respond to life in a more awake way, that can occur for me, as well as for the Buddha or anyone else, in any given moment. In the next moment, I may have slipped back into my old neurotic self and recognize, in fact, that I don’t respond appropriately to the situations I face. I’m a little bit wary of this language. A lot of the language we find in the Indian Buddhist traditions, and also in Advaita Vedanta and so on, does tend to set up a kind of polarity. That’s kind of unavoidable, because given what we’re talking about, we need to have some idea of what it is that we are aspiring to achieve.

The danger is we set up another dualism, and that’s problematic. I feel that we need a discourse that is able to affirm awakening and enlightenment in the midst of the everyday, in the midst of the messiness of my own mind now. The challenge is not so much to become an enlightened person, but to live a more awake and responsive existence in each situation that I find myself dealing with in my life. At the moment, my practice is talking to you, is having this conversation. What matters is not how deeply I may have understood emptiness, but how appropriately, given our conversation, I can take this conversation forward in a way that may be of benefit to others.

I might even learn something myself in this conversation; maybe you will too. That’s where I would like to locate the idea of awakening, the idea of enlightenment: in how I respond to the given situation I find myself in at any moment.

Lucas Perry: Yeah. One thing that comes to mind that I really like and I think might resonate with you, given what you said, I think, Ram Dass said something like, “I’ve become a connoisseur of my neuroses.” I think that there’s just this distinction here between the immersion in neuroses, and then this more awake state with choice where you can become a connoisseur of your neuroses. It’s not that you’ve gotten rid of every bad possible thought that you can have, but that there’s a freedom of non-reactivity in relation to them. That gives a freedom of experience and a freedom of choice.

I think you very well set up a pragmatic approach to whatever awakening might be. Given the project of living in a world with many other actors, with much pain and with much suffering and with much ignorance and delusion, I’m curious to know how you think one might approach spreading something like a secular Dharma. There are kind of two approaches here: one where we might make the wise more powerful, and one where we might make the powerful more wise.

If listeners take anything away today, in terms of wisdom, how would you suggest that wisdom be shared and embodied and spread in the world, given these two directions of either making the wise more powerful or making the powerful more wise?

Stephen Batchelor: Okay. To answer that question, I think I have to maybe flesh out more clearly what I understand by awakening. My understanding of awakening is rooted in the conclusion to the Buddha’s first discourse, or what’s regarded as the Buddha’s first discourse. There, he says very clearly that he could not consider himself to be fully awake until he had recognized, performed and mastered four tasks. The first task is that of embracing life fully. The second task is about letting reactivity be, or letting it go; I think letting it be is probably better. The third is about seeing for oneself the stopping of reactivity, or seeing for oneself a nonreactive space of mind. Then, from that nonreactive space of mind, being able to respond to life in such a way that you actually open up another way of being in the world.

That fourth task is called creating a path, or actualizing a path. That path is understood not just as a spiritual path, but one that engages how we think and how we speak and how we work and how we are engaged with the world. What’s being presented here as awakening is not reducible to gaining a privileged insight into, say, the nature of emptiness or into the nature of the divine, or into something transcendent or into the unconditioned. The Buddha doesn’t speak that way. These early Buddhist texts, on which I base what I’m saying, have somehow been relegated to the sidelines. Instead, we find a Buddhist tradition today speaking of awakening, or sometimes they’ll use the word enlightenment, as basically a kind of mystical breakthrough into seeing a transcendent or truer reality, which is often called an absolute truth or an ultimate truth.

Again, these are words the Buddha never used. In my own approach, I’m very keen to try to recover this earlier way of speaking, and I admit that this is my own project; I don’t think it’s a widespread view. I find it very, very helpful, because awakening is now understood not as a state that some people have arrived at and other people haven’t, which I think sets up too harsh a split, a duality, but rather, awakening begins to be understood as a process. It begins to be understood as part and parcel of how we lead our lives from moment to moment in the world. We can look at this in four phases: embracing life, letting our reactivity be, seeing the stopping of reactivity, and then responding in an appropriate way.

In practice, this process is going on so rapidly that it’s effectively a single task. It’s a single task with what we might call four facets. Here, I would come to what you talk about as wisdom. Wisdom, in this sense, is not reducible to some cognitive understanding. It has to do with the way in which we engage with life as a whole, as an embodied, enacted person. Again, it reflects somewhat the Zen quotation I already mentioned. It has to do with the whole of ourselves, the whole of the way we are in the world as an embodied creature.

I feel that to make the wise more powerful, by wise, I would mean people who actually have given the totality of their life, not just in terms of years, but I mean, in terms of their varying skill sets that they have as human beings, their emotional life, their intuitive life, their physical life, their cognitive life, that all of these elements begin to become more integrated, the person becomes less divided between the spiritual part of themselves and the material part of themselves, for example.

The whole of one’s being is then drawn into the unfolding of this process within the framework of this fourfold task. I feel that if we are to make the powerful more wise, one would require the powerful to make not just changes in how they might think about things, or even gain mystical insights, but actually to have the courage to embark on another way of being in the world at all levels. That is a much greater challenge, I feel. I think we also need the humility to recognize that although, as you say, we do now, in the 21st century, have access to traditions of practice and philosophies, and we have the leisure, the times and places to pursue these sorts of practices, we should be wary of the hubris of thinking that, by mastering these different approaches, we are thereby in a far better position to solve the problems that the world presents to us.

I think that may be true in a very general sense. But I feel that what’s really called for is a fundamental change in our perspective, with regard to how we not only think of ourselves, but how we actually behave in relationship to the life that we share on this planet with other beings and this planet that we are endangering through our activities. The other dimension, of course, is this is not going to be something that any particular individual alone will be able to accomplish but it requires a societal shift in perspective. It requires communities. It requires institutions to seek to model themselves more on this kind of awake way of living, so that we can more effectively collaborate.

I think the Buddhists and the Advaitists and the Sufis and the Taoists and so on certainly have a great deal to offer to this conversation. I feel that if we’re really to make a difference, their insights have to be incorporated into a much wider and more comprehensive rethinking of the human situation. That’s not something I can see taking place in a short term. I feel that the degree of change required will probably require generations, if I’m really honest about this.

Lucas Perry: You see living an awake life to be something more like a way of being, a way of engaging with the world, a kind of non-reactivity and a freedom from reactivity and habitual conditioned modes of being and a freedom to act in ways which are ethical and aligned and which are conducive to living an examined and fulfilling life.

Stephen Batchelor: Exactly. I couldn’t have put it better myself, Luke. That’s kind of my perspective. Yes.

Lucas Perry: I wanted to put up a little bit of a defense here of insight because you said that it’s not about pursuing some kind of transcendent experience. Having insight can be such a paradigm shift that it is seemingly transcendent in some way. Or it can also be ordinary and not contain something like spiritual fireworks, but still be such a paradigm shift that you’ll never be the same ever again. I don’t expect you’ll disagree with this. In defense of insight into impermanence, it’s like our brains are hooked up to the world where there’s just all this form and sense data and we’re engaging with many things as if they were to bring us lasting satiation and happiness but they can never do that.

Insight into impermanence, I mean, impermanence conceptually is so obvious and trivial to everyone. Everything is impermanent; everyone understands this. But if, at a deep, intuitive, experiential, and non-conceptual level, one embodies impermanence, one doesn’t grasp or interact in the world in the same way, because you just can’t, because you see how things are. Then similarly, if one is living their life from within immersion in conceptual thought and ego identification, there is the capacity to drop back into witnessing the self, for that to dissolve and drop away, and then for all that remains to be consciousness and its contents without a center. Hey, this is post-podcast Lucas again, and I just wanted to unpack a little bit here what I meant by “immersion in conceptual thought” and “ego-identification.” By ego identification I mean this kind of immersion and identification with thought where one generates and reifies the sense of self as being in the head, as the haver of an experience, as someone having an experience in the head, thinking the thoughts and executing all of the commands of the mind and body. And this stands in distinction with the capacity to unhook awareness from that process and to witness the self as an object of perception, an object to be witnessed, rather than as the center of identity, and for that to then create a distance between the foundation of witnessing and the process of being identified with the ego, or reifying and constructing an ego, which is the beginning of shifting towards a perception of consciousness and its contents without a center. So, back to the conversation. That kind of insight, and the stabilization in it, can be a foundation for loving-kindness and openness and connection.

People experience an immense sense of relief, like, oh, my God, I was a poor little self in this world, and I thought I was going to die. Now I’ve had this insight into non-self, which can be practiced and stabilized, even in a normal day-to-day life, through the practices of, for example, Dzogchen and Mahamudra. This is just my little defense of insight as, I think, also offering a lot of freedom and capacity for an authentic shift in consciousness, which I think is part of awakening, and that awakening is likely not just a way of being in the world. Do you have any reactions to that?

Stephen Batchelor: I have plenty of reactions to that.

Lucas Perry: Yeah.

Stephen Batchelor: I don’t disagree with you, clearly. Of course, there are moments in people’s lives, whether they’re practicing Buddhism or whatever it is. Sometimes, they’re not doing any kind of formal spiritual practice whatsoever but life itself is a great teacher and shows them something about themselves or about life that hits you very powerfully, and does have a transformative effect. Those moments are often the keynotes that define how we then are in the world. Perhaps my work has tended to be recently at least somewhat of a reaction against the over privileging of these special moments.

It’s an attempt to recover a much more integrated understanding: that being awake is about being awake moment to moment in our daily lives. Let me give you a couple of examples. With hand on heart, I can say that I have had one experience which would fit your definition probably of what we might call a mystical experience. This occurred when I was a young Tibetan Buddhist monk, I was probably 22 or 23 years old. And I was living in Dharamsala, where the Dalai Lama has his residence. I was studying Tibetan Buddhism. I was very deeply involved in it. One day, I was out in the forest in the huts near where I lived and I went to get some water. Coming back to my hut with a bucket full of water, I suddenly was stopped in my tracks by this extraordinary realization that there was anything at all, rather than just nothing, a sense of total and utter astonishment that this was happening.

It was a moment that maybe lasted a few minutes in all of its intensity. What it revealed to me was not some ultimate truth, and certainly not anything like emptiness or pristine awareness. What it revealed to me was the fundamentally questionable nature of experience itself. That experience, I will fully accord with what you have said, has changed my life, and it continues to do so now. Yes, I do feel very strongly that deep personal experience of this sort can have a profoundly transformative effect on the way we then inhabit our value system, what we regard as really being important in life.

As for impermanence, for me, the most effective meditations on impermanence were those on death. Again, this is from my Tibetan Buddhist training. Impermanence is of key significance regarding the fact that I am impermanent, that you are impermanent, that the people I love are impermanent. When I was a young monk, every day, I had to spend at least 30 minutes meditating on what they call in Tibetan chiwa’i mitagpa: the impermanence which is death. You contemplate reflectively the certainty of death, the uncertainty of its time, and the fact that since death is certain and its time is uncertain, then how should I live this life? The paradox I found with death meditation was not that it made me feel gloomy or pessimistic or anything like that.

In fact, it had the very opposite effect. It made me feel totally alive. It made me realize that I was a living being. This kind of meditation is just one I did for many, many years. That likewise did have a very transformative effect on my life and it continues to do so. Yes, I agree with you. I think that it is important that we experience ourselves, our existence, our life, our consciousness, from a perspective that suddenly reveals it to be quite other than what we had thought. It’s more than just as we thought, it’s also as we felt. Once these kinds of insights become internalized and become part and parcel of who we are, that, I feel, is what contributes very much to enabling this being-in-the-world version of awakening.

In the Korean Zen tradition in which I trained, they used to speak about sudden awakening, followed by gradual practice. In other words, they understood that this process that we’re involved in, of what we loosely call the spiritual path or awakening, is comprised both of moments of deep insight that may be very brief in terms of their duration. But if they’re not somehow followed through with a gradual moment-to-moment commitment to living differently, then they can have relatively little impact. The danger I feel of making too big a deal out of these moments of insight is that they come to be regarded as the summum bonum of what it’s all about.

Whereas I don’t think it is. I think that they are moments within a much richer and more complex process of living. I don’t think they should be somehow given excessive importance in that overall scheme of things.

Lucas Perry: That makes a lot of sense. This very much rings true from the conversation you had with Sam Harris, and the back and forth that you had there, in that you put a very certain kind of emphasis on the teachings. Many of the things that I might respond with over the next hour, you would probably agree with. You see them as not the end goal; you would deemphasize them in relation to living an authentic, fulfilled, awakened life as a mode of being.

Stephen Batchelor: I think that is broadly correct. I honor the insights that come from all traditions, really. I don’t think Buddhism has a monopoly on these things at all.

Lucas Perry: You’re talking about death. I’ve been listening to a lot of Thích Nhất Hạnh recently. I mean, even insight will change one’s relationship with death. He talks a lot about no coming, no going, no birth, no death, and the wave-like nature of things. Insight into that, plus what I find in this daily Mahamudra practice of dropping back to pristine awareness, or what I think in Dzogchen they call rigpa, and glimpses of non-duality and the nature of things. All this coming together can lead to a very beautiful life where I feel like these peak spiritual experience moments are a part of the ordinary, part of the mystery, and part of checking and seeing how things are.

I guess, just the last thing I’m trying to emphasize here is that I think the project does lead to a totally new way of being, new paradigm shifts, much more well-being and capacity for loving-kindness, and discovering parts of you that you never knew existed or were possible. Like, if you’ve spent your whole life using conceptual thought to know everything and you’ve been within ego identification, and you didn’t know that there was anything else, changing from that and arriving at something like heart-mind changes the way you’re going to live the whole rest of your life, and with much greater ease and well-being.

Hey, it’s post-podcast Lucas back again, and I just wanted to define a few words here that were introduced that might be interesting and helpful to know. The first two are pristine awareness and rigpa; rigpa is the Dzogchen word for this. I think they both point toward the same thing, which is this original ground of consciousness, this pure witnessing or original wakefulness of personal experience, in relation to which all content and form, perceptions, or even the sense and experience of self appear. One can be caught up in ego-identification, or it can be obscured and lost in thought or anything like this, but this pristine awareness or rigpa, this witnessing, is always there underneath whatever form and phenomena are obscuring it. This is related to glimpses, which I mentioned here; these are pointing-out instructions for noticing this aspect of mind. And if that’s something that you’re interested in doing, I highly recommend the work and books of Loch Kelly. He teaches very skillful pointing-out instructions which, I think, help to demonstrate and point towards pristine awareness or rigpa, which he calls awake awareness. And I also brought up heart-mind here. Heart-mind is something that one arrives at by unhooking from conceptual thinking and dropping awareness down into the center of the chest, where one finds non-conceptual knowing, effortless loving-kindness, a sense of okay-ness, non-judgement, and a place of continuous intuition from which to operate. It can use conceptual dualistic thought, but doesn’t need to, and it also understands when conceptual dualistic thought is useful and when it is not. So, alright, back to the episode.

Stephen Batchelor: I don’t disagree with what you have been saying. I feel somehow that we haven’t really found the right language to talk about it in. We’re still falling back on ideas that are effectively jargon terms, in many ways, that people who are involved in Buddhism and Eastern spirituality will understand. If you haven’t had exposure to those traditions, a lot of this, I think, will sound a little bit obscure, maybe very tantalizing. So many of these words are never really very clearly defined.

I feel that there’s a risk there that we create a kind of a spiritual bubble, in which a certain kind of privileged group of initiates, as it were, are able to discuss and talk around these things. It’s a language that as it stands at the present, I think, excludes a great many people. This is what brings me to my other point. Again, you were talking in terms of well-being, in terms of living at ease, in terms of being more fulfilled, but what does that mean? Words that haven’t yet come up in our conversation are those of imagination, those of creativity, we haven’t touched upon the arts.

I’m always rather surprised, to be honest, in these kinds of discussions to hear very little about the arts and imagination and creativity. For myself, my practice is effectively my art. I do work as an artist. That’s been my vocation since I was a teenager. It got sidetracked by Buddhism for about 20 years. As for the creative process: you were saying we come to experience ourselves in ways we’ve never suspected before, that we have a much less central insistence on our ego, we’re less preoccupied with concepts. This is all very good. To me, that’s only, in a way, establishing a foundation or a ground for us to be able to actively and creatively imagine another world, another future, another way in which we could be.

For me, ethics is not about adhering to certain precepts, it’s about becoming the kind of person one aspires to be. That, you can extend socially as well what kind of society do I wish there to be on this earth?

Lucas Perry: There’s this emphasis you come at this with: it’s about this mode of being and acting and living an ethical life, which is like awakened being. Then I’m like, well, the present moment is so much better. There’s this sense where we want to arrive in the present moment without being extended into the past or the future, experientially, so that right now is the point. Also, you’re emphasizing this way of being where we’re deeply ethically mindful about the kind of world that we’re trying to bring into being.

I just want to, as we pivot into your article on extinction, unify this. The present moment is the point and there’s a way to arrive in it so fully and with such insight that you’re tapping into depths of well-being and compassion that you always wish you would have known were there. Also, with this examined and ethical nature, where you are not just sitting in your cave, but you’re helping to liberate other people from suffering using creativity to imagine a better world and helping to manifest moments to come that are beautiful and magnificent and worthy of life. That doesn’t have to mean that you’re an anxious little self caught up in your head worried about the future.

Stephen Batchelor: I don’t actually believe in the present moment.

Lucas Perry: Okay.

Stephen Batchelor: Quite seriously, nor does Nagarjuna. I’ve never been able to find the present moment; I’ve looked and looked and looked for a long time.

Lucas Perry: It’s very slippery, it’s always gone when you check.

Stephen Batchelor: Arguably, it’s only a conceptual device to basically describe what is neither gone nor what is yet to come. There’s no point, there’s no actual present moment, there is only flux and process and change. It’s continuous. It is ongoing. I’m a little bit wary of actually even using the term present moment. I would use it as a useful tool in meditation instruction, come back to the present moment, everyone knows pretty much what that means.

I wouldn’t want to make it into an axiom of how I understand this process as something highly privileged and special. It’s to me more important to somehow engage with the whole flow of my temporality with everything that has gone, with everything that is to come and I’d rather focus my practice really within that flow, rather than singling out any particular moment, the present or any other, as having a kind of privileged position. I’m not so sure about that.

Lucas Perry: Okay.

Stephen Batchelor: Also, creativity, I don’t think is just some sort of useful way whereby we might think of a better world in the future. To me, creativity is built into the very fabric of the practice itself. It’s the capacity in each moment to be open to responding to this conversation, for example, in a way that I’m not held back by my fears and my attachments, and so on and so forth, but have found in this flow an openness to thinking differently, to imagine differently, to communicate, to embody what I believe in ways that I cannot necessarily foresee that I can only work towards.

That’s really where I feel most fully alive. I’d much rather use that expression, a sense of total aliveness. That’s really what I value and what I aspire to: what are the moments in which I really feel that I’m totally alive? That’s what is to me of such great value. I’m also not sure that by doing all these practices you find deep happiness and so forth and so on. I would not say that for myself. I’ve certainly experienced periods of great sadness, sometimes something close to depression, anxiety. These, again, are part and parcel of what it is to be human.

I like Ram Dass’s expression of becoming a connoisseur of one’s neuroses. I think that’s also very true. I’m afraid that the language of enlightenment and so forth often tends to give you the impression that if you get enlightened, you won’t feel any of these things anymore. Arguably, you’ll feel them more acutely. I think, particularly as we talk about compassion or loving kindness or bodhicitta, we are effectively opening ourselves to a life of even greater suffering. When we truly empathize with the suffering of maybe those close to us, or the suffering that we are inflicting upon the planet, this is not something that is going to make us feel happy or even at ease. Hey, it’s post-podcast Lucas here, and I just wanted to jump in to define a term that Stephen brings in, which is bodhicitta. Bodhicitta is a mind that is striving for awakening or enlightenment for the benefit of all sentient beings, so that they also achieve freedom, or awakening and liberation from suffering. Alright, back to the conversation.

I feel that these kinds of forms of compassion are actually inseparable from experiencing a deep pain, something that’s very hard to bear. I’m afraid that that side of things can easily be somehow marginalized in favor of these moments of deep illumination and insight and so forth and so on.

Lucas Perry: Yeah. I mean, pain and pleasure are inevitable. I think it’s very true that suffering is optional and … Okay, yeah.

Stephen Batchelor: Again, what you’ve just said is one of the cliches that we get a lot. A lot of this has come from out of the mindfulness world. The pain is somehow unavoidable but suffering is optional. I find that very difficult to understand.

Lucas Perry: The direction that I’m going is there’s this kind of loving kindness that is always accessible and I think this fundamental sense of okayness, so that there can be mental anguish and pain and all these things, but they don’t translate into suffering, where I would call suffering, the immersion inside of the thing. If there is always a witnessing of the content of consciousness from, for example, heart-mind, there is this okayness and maybe at worst, bittersweet sadness and compassion, which transforms these things into something that is not I would call suffering.

You also gain the degree of skillfulness to work with the mud of pain and suffering to transform it into what Thich Nhat Hanh would say like a lotus.

Stephen Batchelor: Again, we might be on a semantic thing here.

Lucas Perry: I see.

Stephen Batchelor: If we go back to the early Buddhist texts, or most Buddhist texts, they have this one word, dukkha; they don’t have a separate word for pain or for suffering. This is an intervention that’s come along more recently, in the last 20 or 30 years, I think, this distinction. There is dukkha. The first of the four tasks is to embrace dukkha. Dukkha includes pain, it includes suffering, it includes anything. It has to do with being capable of embracing the tragic nature of our existence. It has to do with being able to confront and be open to death, to sickness, to aging, to extinction, as we’re going to go on to talk about.

I find it difficult personally, to somehow imagine we can do all of that without suffering. I don’t know what you mean by suffering but it looks to me as though you’ve defined it in a fairly narrow way, in order to separate it off from pain. In other words, suffering becomes mental anguish. They often talk of this image of the second arrow. The first arrow is the physical pain. Then, you add on to that all of the worries about it and all of the oh, how poor me, and all that kind of stuff. That’s psychologically true. I accept that.

That’s a way too narrow way of talking about dukkha. There is a grandeur and a beauty in dukkha. I know that sounds strange. For me, it’s really, really important not to feel that these spiritual practices can somehow alleviate human suffering in the way that it’s often presented, and that we all become smiling and happy, which you get with, I mean, Thích Nhất Hạnh’s approach. There’s a kind of saccharine sweetness in this approach, which I find kind of false. That’s one of the reasons I also like a lot of the Christian tradition: the image of Christ on the cross is not the image of a happy, at-ease kind of person. There’s a deep tragedy in this dimension of love that I’m very wary of somehow discounting in favor of a kind of enlightened mind that really is happy and at ease all the time.

Of all the different teachers and people I’ve met, I’ve never met anyone like that. It’s a nice idea. I don’t know whether it’s terribly realistic or whether it actually corresponds to how Buddhists, Hindus, Jains, and others have lived over the last centuries.

Lucas Perry: All right. Your skepticism is really refreshing and I love it. I wish we could talk about just this part forever, but let’s move on to extinction. You have an article that you wrote called Embracing Extinction. You talk a lot about these three poisons leading to everything burning, everything being on fire. Would you like to unpack a little bit of your framing here for this article? How is it that everything in the world is burning? What is it that it’s all consuming? How does this relate to extinction?

Stephen Batchelor: Okay. I start this article, which was published in the summer edition of Tricycle, this year, by quoting the famous statement of the Buddha, we find in what’s called the Fire Sermon, where he says, the world is burning, the eyes are burning, the ears are burning, et cetera, et cetera, the senses are burning, the mind is burning. Then he asked, burning with what? The answer is burning with greed, burning with hatred, burning with confusion. That’s his way of speaking about what I would call reactivity.

In other words, when the organism encounters its environment, it’s a bit like a match encountering a matchbox. That causes certain reactive patterns to flare up. These are almost certainly the result of our evolutionary biology: we have managed to survive as a race, as a species, so successfully because we’ve been very good at getting what we want. We’ve been very good at getting rid of things that have gotten in our way. We’ve been very good at stabilizing our sense of me and us at the expense of others, by having a very strong sense of ego, a very strong sense of me.

These are understood as fires in the earliest texts, and then later Buddhism begins to think of them more as toxins, as viruses, as poisons that contaminate the whole system, as it were, once they have taken hold. What I find quite striking is this metaphor of fire, which was probably spoken by the Buddha about 500 B.C., a long, long time ago. Yet when we read it today, it’s very difficult not to hear it as a rather prescient insight into the literal heating up of the physical environment through living a life of industrial technology, basically, whereby we have managed very successfully to develop, as we call it, industries and great cities and systems of transport and electricity, all this kind of stuff.

The consequence has been that we’re actually now poisoning the very environment that we depend upon in order to live. For that reason, I feel that there’s something in the Buddhist Dharma that recognizes the heating up that occurs when we lead a life that is driven by our reactive habits, our reactive patterns. The second of the four tasks is to let those be, is to let them go, is to find a way of leading a life that is not conditioned by greed and by hatred and by egoism and confusion. That’s the challenge.

Of course, on an individual level, we can do the best we can. But if it’s going to have any lasting impact on the condition of life on earth, then this has to be a societal, cultural movement. This comes back to something we already talked about before. If we’re going to make a difference to our future, if we’re going to stave off what might turn out to be rapid extinction, not only of other species but possibly even of the human species, and not within billions of years but possibly within the next century, then we have, as a human community, a global community, to really alter the ways in which we live.

I do think that spiritual traditions, Buddhism and others, offer us a framework in which we can work with these destructive emotions. Hopefully, in our own lives, maybe in the lives of those who we’re able to affect closely, maybe in the lives of people who are listening to this podcast, can ripple out and maybe, in the long term, diminish the kinds of powers that are at work, that in many ways seem unstoppable. At one level, I can be optimistic. I can see that we do have the understanding of what’s creating the problem. There is amongst more and more people, I think, a genuine commitment to lead lives that do not contribute to such a crisis.

I’m also aware, both in myself and many others I know, that we are complicit in this process. Each time we take a plane, each time we put on our heating system. I had a mango last night and I realized it came from the Ivory Coast. I mean, that’s entirely unnecessary, yet I still go out and get these things. Again, it takes humility to recognize that I can have all these very high-minded ecological ideas, but how am I actually changing the way I live? What am I doing in my life that will help others to likewise take those steps? I feel the powers of evolution, the powers of greed, hatred, and delusion, which I think are really just the instinctual forces that have got human beings to where they are, are very, very forceful.

They’re the armies of Mara, as the Buddha used to call them. He says there’s nothing in this world as powerful as the armies of Mara, Mara being the demonic, or the devil. I wonder, for many reasons, whether in fact we are capable as a human community of restraining such instincts and impulses. I hope so. I’m not totally optimistic.

Lucas Perry: Right. Wisdom, as we would have understood it from the beginning of our conversation, would be an understanding of these three poisons of hate, greed and delusion. It’s coming to understand them from this mind of non-reactivity and awareness: one can see their arising and, by witnessing, disidentify with them, and then have choice over what qualities will be expressed in the world. One thing that I really liked about your article was how you talk about this problem solving mode of being. Many of our listeners, and myself included, and I think this especially comes from the computer science mindset, have this very strong reliance on conceptual dualistic thought as the only mode of knowing.

From the head, one is embedded in this duality with the world where one is in problem solving mode. You talk about how the world becomes an object of problems to be solved by conceptual thinking. This isn’t, as you say, to vilify conceptual thinking or to vilify and attack something like technology, but to become aware of where it is skillful and where it is unskillful, and to use it in ways which will bring about better worlds. I’m wondering if you can help to articulate the danger of being in the problem solving mode of being, where we lack connection and interdependence with the outside world, where there’s perhaps a strong sense of self from the problem solving mode of being.

Just to finish this off, I’m quoting you here, you say, “Such alienation allows us to regard the world either as a resource for the gratification of our longings or as a set of problems to be solved for the alleviation of our discontents.”

Stephen Batchelor: Yes, okay. To me, this is a very important point. I’m inspired in this thinking by Martin Heidegger, who’s a very controversial thinker, but someone who I feel did have some considerable insight into this process long before anybody else. I cite him in the article. His point, which I completely agree with, actually, is that the problem with technology is not the technological machines and computers and so forth in themselves, but the mindset that, in a way, justifies and enables those kinds of technological behaviors to happen.

As you said, this is effectively a mindset that is cut off from the natural world. I think we can see this beginning in about the 18th century with Descartes and others, whereby we set up the idea that there is a world out there and that there is an internal subject, a consciousness that is able to distance itself from the natural world in order to have the objectivity and the clarity to be able to then manipulate it to suit our particular desires and to ward off our particular fears. Now, one of the things that often disturbs me is that this technological language is often used to describe these spiritual practices as spiritual technologies.

This is a term I hear quite a lot actually, or rather, our unthinking and uncritical use of the word technique: the technique of mindfulness, the techniques of meditation, meditational techniques. As long as we’re not thinking critically around that term technique, I think, very often we are unconsciously perpetuating precisely the distinction that we often, in another part of our mind, are trying to overcome, namely this notion of separation. We see this in meditation: if you see your mind as it is, if you recognize what the destructive emotions are, then you can get to the root of them, get rid of them, and then you’ll be happy.

That, again, carries with it a certain mindset, which is so much part, not only of our modern western culture, but I feel is part of the human condition. I think, we’re very deeply primed to think of the world as something out there and ourselves as something in here. We find in eastern religions, for example, the idea of rebirth that when we die, we don’t really die, our mind will sort of go on somewhere else, which again, I think reinforces this notion that there is a duality. There’s a spiritual inside and there is a material outside. That’s just simply the way things are. Many of the people who teach Dzogchen and Vipassana and Mahamudra believe very strongly in there being a mind that is not part of the physical world, that somehow transcends the physical world, that gives us the opt-out clause, that when we die, we don’t really die.

Something mysterious will carry on. To that extent, I feel, and I’ll stick to Buddhism because it’s the one I know, that Buddhism can actually, again, reinforce this technological mindset as an inner technology. I think that’s a very dangerous idea. If I go back to that experience I had in the woods in Dharamsala when I was 22 years old, it was that idea that was really overthrown. It was a recognition of the mystery that I am part and parcel of. I cannot meaningfully separate my experience from what is going on around me. Again, it’s easy to say that; it’s another thing altogether to really feel that in your bones.

I think that requires a lifelong practice, a refinement of sensitivity. It also requires, I think, a much more critical way of thinking about so many of the ideas that we take on board without really examining to see whether they are in fact tacitly reinforcing certain mindsets that we will probably not be happy to endorse. I think all of this goes together. If we are to engage with this environmental crisis, which undeniably is the consequence of our industrial technologies, then we have to also see to what extent we are complicit, not just as consumers buying mangoes from the Ivory Coast, but also as subjects, as subjective, conscious beings who are at one level still buying into the mind-matter split.

I get into a lot of trouble with Buddhists because I reject the idea of reincarnation, and that the mind goes on somewhere after death, precisely because I feel it is a dualism that actually undergirds our sense of the core difference, I would say, that separates us from being participants in the natural world. I cannot think of birth, sickness, aging and death as problems to be overcome. Yet that is quite clearly the goal of Buddhism. It’s to bring the end of suffering, which doesn’t mean just the ending of mental anguish, which is in a sense just scratching at the surface. It’s the ending of birth, the ending of sickness, the ending of aging, and the ending of death. It’s a total transcendence of an embodied life.

For this reason, I feel that it’s very helpful to replace the idea of solving a problem with the idea of penetrating a mystery. Because birth, sickness, aging and death are not problems, they’re mysteries and they’re mysteries because I cannot separate myself from death. I am the one who is going to die. I cannot separate myself from aging. I am the one who is aging, and so on. To do that is to acknowledge these things cannot be solved in the way that problems are solved. Likewise, confusion and greed and hatred, these are not problems to be solved, as Buddhism would often make us believe, they are mysteries too because I am greedy, I am hateful, I am confused.

They’re part and parcel of the kind of being that evolution has brought about, of which I am one of a million examples. By making that change, and I think that change for me was put into practice by doing Zen meditation primarily, the meditation of asking the question, what is this? That is a Koan, or a hwadu, literally. It’s a practice that I trained in for four years in Korea. I did something like seven three-month retreats, just asking the question, what is this? In other words, getting myself to experience, in an embodied, in an emotive way, the fact that I am inseparable from the mystery that is life.

That, for me, is the kind of foundation that can lead us into a profoundly different relationship with the natural world. Again, I need to emphasize that this is working against very profoundly rooted human attachments and beliefs. I think the Four Noble Truths is, again, it’s a problem solving paradigm. Suffering is the problem, ignorance is its cause, get rid of the cause, you get rid of the problem. That’s Nirvana. I think that shows that this problem solving mindset is not just modern technology from the 18th century in Europe, as Heidegger seems to think. It goes back to something way deeper. It seems to be built into the human consciousness itself, maybe even into the structures of our neurology. I can’t really speak with any authority on this thing. That’s my sense.

Lucas Perry: I can also hear the transhumanists and techno-optimists screaming, who want to problem solve the bad things that evolution has given us, like hate, anger and greed. You just find those in the genetics and the conditioning of evolution and snip them out and replace them with awakening or enlightenment, that sounds much better. Sorry, can you more fully unpack this mystery mode of being and what it solves, that being embedded as a subjective creature, who is witnessing things as a mystery rather than as viewing them as a problem to be solved?

Stephen Batchelor: Again, my emphasis in what I just said was effectively to swing the pendulum back to a perspective that’s usually ignored. In practice, we need both obviously. It would be absurd to be just an out and out technophobe and to reject technology and to reject problem solving per se. That would be silly. Technologies have been enormously beneficial to us in so many ways. Look at the current pandemic, it’s quite amazing how we’ve been able to identify the virus so quickly, how we’ve been able to then proceed towards developing vaccines. This is all because of our extraordinary medical technologies. That’s great. I’ve no problem with that at all.

The real issue is when we start to think that a technological way of thinking is the only way of thinking, in the same way that you said earlier that we tend to think that conceptuality and duality and egos are the only ways of being. I think we have to add to that a technological mindset. I think that is just as much part of the problematic with greed, hatred and delusion. I think it is a form of delusion, a very primary form of delusion. In practice, the challenge is to differentiate between those areas of our life when it is useful to stand apart from, let’s say, a novel coronavirus and look at it under a microscope, very useful, very necessary, and not to let that way of thinking become normative to the whole way we lead our lives, and to open ourselves to the possibility of encountering the world and ourselves, our mental states, other people, not as problems but as mysteries.

To be able to value that dimension of our experience without reducing it to a technological kind of thinking, but to honor it for what it is, as something that cannot be captured by concepts, by language.

Lucas Perry: Yet, they go together. I think the distinction here is subtle. People might be reacting to this and maybe a little bit confused about the efficacy of relating to things as a mystery as I am a little bit. Let me see if I am capturing this correctly. I can sense myself suffering right now taking the attitude and view that my neuroses and my sufferings and my pain are problems to be solved. It creates a duality between me and them. It creates this adversarial relationship. I’m not willing to be with them or to experience them. It’s this sense of striving or craving for them to go away or to be other than what they are. I think that’s why I am sensing myself suffering right now taking on the problem solving sense of view.

If I disidentify with that and I begin witnessing that, and I shift to these are mysteries and there is this sense of beauty that is compelling to you, which I can sense, and this kindness and compassion and ease towards them. This doesn’t mean that the problem solving sense goes away. There’s more of a dropping into heart, into being, into willingness to be with them and explore them and to be skillful in their unfolding and change. That is in a sense still a kind of problem solving. I mean, there are parts of me that are unwanted, but there is a way of coming to issues which are unwanted and seeing them as mysteries and being with them in an experiential way other than the industrial 20th century ego-identification, conceptual thought problem solving mode of being, which feels quite samsaric, like you’re in a hell realm of hungry ghosts in the mind, everything needs to be different.

How do I think all the right thoughts to change the atoms to make the problems go away? Hey it’s post-podcast Lucas here, which I mean as a conditioned pattern of thought which is motivated and structured by ignorance or confusion, as well as craving. And so I see this kind of structure also applying to the problem solving mode of thought which has this element of craving and confusion of separateness that leads to this sense of suffering or disease. It seems to me subtle like that, does this capture what you’re pointing toward?

Stephen Batchelor: I think it is very subtle. Again, I would also concur that yes, there are parts of our inner life, our psychology, that can be effectively dealt with by inner techniques. Like, for example, if we’re extremely distracted all the time, if we train ourselves to be more focused, if we do concentration exercises, do Shamatha practice, over time, we can get better at not being distracted. That’s the application of a technique. There are aspects of spiritual practice, not a term I’m terribly fond of, but let’s stick with it.

I think the cultivation of mindfulness, the cultivation of concentration, the cultivation of application, for example, all of these things have a technical aspect to them. If I do therapy because I’ve got some neuroses, like chronic anxiety, I’m not going to resolve that by saying, how mysterious, wow, this is wonderful, being in the mystery of anxiety. That’s not what I meant. What I meant is that that is something that we can recognize as being a problem, a legitimate problem.

Lucas Perry: It’s unwanted.

Stephen Batchelor: Yeah, it’s unwanted. It’s unwanted for good reasons because it prevents us from living fully, from being fully alive. It constrains us from living. It keeps us locked up in a little bubble of our own neurotic thoughts. We can find technologies, psychotherapies, that if we apply them can actually effectively help get rid of that problem. Although, as both Freud and Jung were quite clear, the problem will not just evaporate, it’ll still be there, but we’ll be able to live with it better. Jung’s idea was that we get to the point where instead of the neurosis having you, you have the neurosis.

In some ways, I think, a lot of these neuroses are going to be around, whether we like it or not. We can, in a way, have them rather than them having us. That is a form of therapy. That is a form of cure. When we come to these deeper spiritual values, let’s say wisdom, or compassion, or love, I find it very difficult to understand how these are qualities that we can arrive at by simply pursuing a set of technological procedures. I think, and I’ve, again, witnessed this in myself and in others, colleagues, friends, monks and whatnot, people who’ve dedicated years and years and years to cultivating these qualities of mind, but in some ways don’t really seem to have become significantly wiser or more loving.

I really question whether wisdom or love is something that can be produced by becoming an expert in certain meditation techniques. I think these are qualities that are meta-technical; they’re beyond the reach of technique. I think suffering in the deepest sense of existential suffering, which is effectively what I think the Buddha is primarily concerned with, is birth, sickness, aging and death. Birth, sickness, aging and death, likewise, I do not think can be resolved by finding a solution that can render them no more problematic. Even if you follow the traditional Buddhist way of describing this, that’s effectively what happens: it’s only when you’re dead that you are freed from birth, sickness and aging.

Birth, sickness, aging and death are mysteries, but a great amount of what we suffer from within our inner lives, within our social lives, within our world are problems that, if correctly identified as such, can be dealt with through applying techniques. The challenge, and this is, I think, perhaps where your talk of subtlety comes in, is to be able to differentiate between what is actually a mystery and cannot be solved and what is a problem and can be solved. Western technological society particularly, really, has no room at all for this mystery-focused way of life.

We might get it in church on Sundays, a little bit of it, but we seem to have almost disconnected from that whole side of life. I feel that one of the reasons we’re drawn to some of these eastern spiritualities is because they seem to bring us back to that quality of awareness. If you don’t like the word mystery, and a lot of people feel a little bit uncomfortable with it, just think of it as I do a lot of the time that we live in an incredibly strange world that is extremely weird that you and I are having this conversation.

I never cease to be utterly astonished and amazed by the most banal things. I think it’s to be able to recover a sense of the extraordinary within the utterly ordinary that enables us to begin to have a very different relationship to the natural world that we’re threatening. I feel that if we haven’t embodied that sense of strangeness of … not only strangeness but the same recognition that I cannot separate myself from these things, I cannot distance myself from these things, they are infinitely close. That’s another definition of mystery.

Lucas Perry: The ground of your being.

Stephen Batchelor: Yeah, if you want. Remember, that this is a term coined by Paul Tillich, the Christian theologian in the 1960s. He understood the ground of being to be a groundless ground that is beautiful. A ground which is like an abyss literally in German. If we talk of ground of being be very careful not to make the ground too solid, it’s a ground which is no ground. That, again, is very close to Buddhist thinking.

Lucas Perry: Yeah, it seems subtle in the way that you’re still solving problems from this way of being. From embodying this experiential relationship and subjectivity in the world, it changes and modifies in skillful ways, perhaps the three poisons and it allows you to be more skillful is what you’re saying. It’s not like you pretend like problems don’t exist. It’s not like you stop solving problems. It’s that there’s a lot of skillfulness in the way that this modification of your own subjectivity leads to your own being in the world. I’d love to wrap up here with you then on talking about effective altruism in this field.

The Future of Life Institute is concerned with all kinds of different existential risks. We’re contextualized in the effective altruism movement, which is interested in helping all sentient beings everywhere, basically, by doing whatever is most effective in that pursuit and leads to the alleviation of suffering and the promotion of well-being, potentially narrowly construed, though that might not be the only ethical framework by which you might decide what would be effective interventions in the world. What this has led to is what we’ve already talked about here, which is this extremely problem solving kind of mind. People are very in their heads and reliant on conceptual thought to basically solve everything.

Ethics is a problem to be solved. If you can just get everyone to do the right things, the animals will be better off, there will be less factory farms. We’ll get rid of existential threats. We can work on global poverty to do things that are really effective. This has been very successful to a certain degree. With this approach, tremendous suffering has already been alleviated and hopefully still will be. But it lacks many of these practices that you talked about, perhaps it suffers from some of the unskillfulness of the problem solving mindset. There isn’t any engagement in finding natural loving kindness, which already exists in us or cultivating loving kindness in our activities.

There’s not much emotional connection to the beneficiaries of the altruism. There’s not sufficient, perhaps, emotional satisfaction felt from the good deeds that are performed. There are also lots of biases that I could mention that exist in general in the human species, like how we care about people who are closer to us rather than people who are far away. That’s a kind of bias. Children are drowning in shallow ponds all over the world, and no one’s really doing anything about it, shallow ponds being places of easy intervention, where you could easily save that child.

This conversation we’re having about wisdom, I think, for me, means that if effective altruism were potentially able to have its participants shift into a non-conceptual experiential embodying of perhaps kinds of insights or a way of being that you might support as living an examined life and as a method of awakening and perhaps insight into emptiness and impermanence and not-self and suffering, I think this could lead to transformative growth that might upgrade our ethics and experience of the world and the way of being and could de-bias some of these biases which lead to ineffective altruism in the world.

I think that seeing through non-self really kind of annihilates the bias of caring about people closer to you rather than far away from you, or people who are far away in time, for those who are interested in existential threat. I’m curious if you have any reactions or perspective here about how the insights and wisdom of wisdom traditions, and perhaps a secular Buddhism and secular Dharma, could contribute to this community.

Stephen Batchelor: I have to confess that when confronted with these kinds of problems, the ones you just very clearly presented, I really see considerable shortcomings in both the Buddhist community and in this broader spiritual community that we might feel we’re part of. Because in the end, a lot of these practices are effectively things we do on our own, and we may do them within a small Sangha or small community. We may write books. We might get more and more people practicing mindfulness. That is all very well. But I’m not actually convinced that simply by changing individual minds, if we change enough of them, we’ll suddenly find ourselves in a much healthier world.

I think the problems are systemic. They are built into the structures of our human societies. They’re not intelligible purely as the collective number of individual deluded or undeluded minds. I think we’re going into the sort of territory of systems theory, whereby groups and systems do not behave in such a way that can be predicted by analyzing the behavior of the individual members of that system, if I’m getting that correct. Again, I’ll just speak about the Buddhist community, but it, of course, probably applies to others as well.

I think the great challenge of the Buddhist community is that it has to come up with a social theory. It has to come up with a way of thinking that goes beyond the person and that is able to think more systemically. Now, there are Buddhist thinkers who are trying to do that; people like David Loy would be a very good example. Nonetheless, I don’t feel that we’ve really grappled with this question adequately. I have to admit to my own confusions and limitations in this area too. I feel that my writing, which is my main work, is slowly evolving in this direction. What really pushed me in this direction was an essay by Catherine Ingram, who you may have heard of, called Facing Extinction. I borrowed from it, effectively; my essay, called Embracing Extinction, is an acknowledgement of my debt to her.

I had been part of the green movement for the last 30 odd years or so. It was only on reading Catherine’s piece that I suddenly was struck viscerally by the fact of our creating a world that could well lead to the extinction of all species within the next century or so. I think we thereby need to be able to respond to these dilemmas at the same pitch and at the same level. In other words, the visceral level in which these questions are beginning to emerge in ourselves. Again, I go back to Zen, one of the favorite sayings of my teacher was great questioning, great awakening, little questioning, little awakening, no questioning, no awakening.

In other words, our capacity to be awake is correlated to our capacity to ask questions in a particular way. If we have intellectual questions or let’s say, problem solving questions, then we can resolve those questions by coming up with solutions. They’ll be at one level operating at the same pitch. In other words, they are conceptual problems, they’re intellectual problems. Great awakening arises because we’re able to ask questions at a deeper level. If you take the Legend of the Buddha, the young prince who goes out of the palace, he encounters a sick person, an aging person and a corpse, and that is what triggers within him what in Zen is called great questioning, or great doubt, great perplexity.

The practice of Zen is actually to stay with those great questions and to embody them to get them to actually penetrate into your flesh and bones. Then, within such a perspective, one then creates the conditions for a comparable level of visceral awakening. That now I feel has to be extended on a communal level. We have as a community, whether it’s a small, intentional community of Buddhists, or a larger human community, be able to actually ask these questions at a visceral level. The kind of empathy you speak of, I feel also has to come from this degree of questioning.

I think there’s often too much of an understandable sense of urgency in a lot of these questions. That urgency often just causes us to immediately try to go out and figure out what we can do. That’s probably a good thing. But we maybe do not allow enough time to really allow these questions to land at a deep visceral level within ourselves, such that answers can then begin to emerge from that same depth. A more systemic philosophy, a social theory, maybe an economic theory that is grounded in such depth will perhaps be able to guide us more effectively towards being effectively altruistic.

That’s really where I’m at with this at the moment. My work is evolving in that direction in what I’m writing now; for example, I’m writing a book called The Ethics of Uncertainty, where I’m trying to flesh this out more fully. This is where I feel my life is going. I don’t know whether I’ll live long enough to actually do more than climb a few more steps, if I’m lucky. I’m very moved by my colleagues and friends who were very much involved in the Extinction Rebellion demonstrations, particularly in London. I have a number of close friends who are very involved with that.

That, likewise, I found a great source of inspiration and something towards which I would very much hope for my writing and my philosophy to be able to contribute. That’s kind of where I’m going. I think that humanity does face an existential crisis of a major order at the moment. I see all kinds of forces arrayed that are not in our favor, not the least of which is the four-year election cycle. I just wonder about national governments who are in effect beholden to electorates whose needs are probably largely about: can I get work? Can my kids get a good school and a good health care system? That’s going to be the priority for most people, frankly.

It’s all very well talking about saving the environment. When push comes to shove, again, your bias will be basically my kids, my immediate community, or my nation. We have to get beyond that. We can’t think in national terms anymore. There are transnational movements, and I think that they certainly need to be developed and further strengthened. But can such transnational movements ever achieve the kinds of power that will enable changes to occur on a global level? I can’t see that happening in our current world, I’m afraid. I find myself very distraught by that.

When you see some of these right-wing populists, they’re effectively pushing back in the other direction, and that is, unfortunately, in the ascendant. I do not feel at all optimistic, given our situation. As a person who tries to lead a life governed by care and compassion and altruism, I cannot but seek ways of embodying those feelings in actions. As a writer, that’s what I’m probably best at doing. I’m very glad I’ve had the opportunity to be able to speak to you and to the Future of Life community about my ideas, though I don’t know whether I really have a great deal to say that’s going to change the paradigm. I think all of us are working towards another paradigm altogether.

Lucas Perry: Thank you, Stephen. I’ve really, really enjoyed this conversation. To just close things off here, instead of making powerful people more wise or wise people more powerful, maybe we’ll take the wise people and get them to address systemic issues, which lead to and help manifest things like existential risk and animal suffering and global poverty.

Stephen Batchelor: That would be great. That would be wonderful. Thank you very much, Luke. It’s been a lovely conversation. I really wish you all the best and all of those of you who are listening to this likewise.

Lucas Perry: Yeah, thanks so much, Stephen. I’ve really enjoyed your books on Audible. If people want to follow you or find more of your work, where are the best places to do that?

Stephen Batchelor: I have a website, which is www.stephenbatchelor.org, and the main institution I’m involved with is called Bodhi College, B-O-D-H-I, hyphen college.org. There, you’ll find information on the courses that I lead through them. Next year, in 2021, I’m leading a series of 12 seminars on Secular Dharma, in which I’ll be addressing a lot of the questions that have come up in this podcast. It will be an online course, once a week for 12 weeks, 12 three-hour seminars.

It’ll be publicized in the next few weeks. We’re just finalizing that program as of now. Thank you.

Lucas Perry: All right. Thank you, Stephen. It’s been wonderful.

 

Kelly Wanser on Climate Change as a Possible Existential Threat

 Topics discussed in this episode include:

  • The risks of climate change in the short-term
  • Tipping points and tipping cascades
  • Climate intervention via marine cloud brightening and releasing particles in the stratosphere
  • The benefits and risks of climate intervention techniques
  • The international politics of climate change and weather modification

 

Timestamps: 

0:00 Intro

2:30 What is SilverLining’s mission?

4:27 Why is climate change thought to be very risky in the next 10-30 years?

8:40 Tipping points and tipping cascades

13:25 Is climate change an existential risk?

17:39 Earth systems that help to stabilize the climate

21:23 Days where it will be unsafe to work outside

25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in

41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?

50:20 International politics of weather modification

53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?

57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?

59:33 What are the main points of people skeptical of climate intervention approaches?

01:13:21 The international problem of coordinating on climate change

01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?

01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?

01:37:48 What can listeners do to help with this issue?

01:40:00 Climate change and Mars colonization

01:44:55 Where to find and follow Kelly

 

Citations:

SilverLining

Kelly’s Twitter

Kelly’s LinkedIn

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. In this episode, we have Kelly Wanser joining us from SilverLining. SilverLining is a non-profit focused on ensuring a safe climate in light of the risks of near-term catastrophic climate change. Given that we may fail to reduce CO2 emissions sufficiently, it may be necessary to take direct action to promote cooling of the planet to stabilize both human and Earth systems. This conversation centrally focuses on how we might intervene in the climate by brightening marine clouds to reflect sunlight, and thus cool the planet down and offset global warming. This episode also explores other methods of climate intervention, like releasing particles in the stratosphere, their risks and benefits, and we also get into how climate change fits into global catastrophic and existential risk thinking.

There is a video recording of this podcast conversation uploaded to our YouTube channel. You can find a link in the description. This is the first in a series of video uploads of the podcast, to see if that’s something that listeners might find valuable. Kelly shows some slides during our conversation, and those are included in the video version. The video podcast’s audio and content are unedited, so it’s a bit longer than the audio-only version and contains some sound hiccups and more filler words.

Kelly Wanser is an innovator committed to pursuing near-term options for ensuring a safe climate. In her role as Executive Director of SilverLining, she oversees the organization’s efforts to promote scientific research, science-based policy, and effective international cooperation in rapid responses to climate change. Kelly co-founded—and currently serves as Senior Advisor to—the University of Washington Marine Cloud Brightening Project, an effort to research and understand one possible form of climate intervention: the cooling effects of particles on clouds. She also holds degrees in economics and philosophy from Boston College and the University of Oxford.

And with that, let’s get into our conversation with Kelly Wanser

Let’s kick things off here with just a simple introductory question. So could you give us a little bit of background about SilverLining and what is its mission?

Kelly Wanser: Sure, Lucas. I’m going to start by thanking you for inviting me to talk with you and your community, because the issue of existential threats is not an easy one. Our approach at SilverLining, I think, overlaps with some of the kinds of dialogue that you’re having, where we’re really concerned about the sort of catastrophic risks that we may have with regards to climate change in the next 10 to 30 years. So SilverLining was started specifically to focus on near-term climate risk and the uncertainty we have about climate system instability, runaway climate change, and the kinds of things we don’t have insurance policies against yet. My background is from the technology sector. I worked in areas of complex systems analysis and IT infrastructure. And so I came into this problem looking at it primarily from a risk point of view, and from the fact that the kind of risk that we currently have exposure to is an unacceptable one.

So we need to expand our toolkit and our portfolio until we’ve got sufficient options in there that we can address the different kinds of risks that we’re facing in the context of the climate situation. SilverLining is a two-year-old organization, and there are two things that we do. We look at policy, in particular at how these interventions in climate, these things that might help reduce warming or cool the planet quickly, might move forward in terms of research and assessment from a policy perspective, and then at how we might actually help drive research and technology innovation directly.

Lucas Perry: Okay, so the methods of intervention are policy and research?

Kelly Wanser: Our methods of operation are policy and research, the methods of intervention in particular that I’m referring to are these technologies and approaches for directly and rapidly reducing warming in the climate system.

Lucas Perry: So in what you just said you mentioned that you’re concerned about catastrophic risks from climate change for example, in the next 10 to 30 years. Could you paint us a little bit of a picture about why that kind of timescale is relevant? I think many people and myself included might have thought that the more significant changes would take longer than 10 to 30 years. So what is the general state of the climate now and where we’re heading in the next few decades?

Kelly Wanser: So I think there are a couple of key issues in the evolution of climate change and what to expect and how to think about risk. One is that, with the projections that we have, it’s a tough type of system and a tough type of situation to project and predict. And there are some things that climate modelers and climate scientists know are not adequately represented in our forecasts and projections. So a lot of the projections we’ve had over the past 10 or 15 years talk about climate change through 2100. And we see these sort of smooth curves depending on how we manage greenhouse gases. But people who are familiar with the climate system itself, or with complex systems problems, know that there are these non-linear events that are likely to happen. Now, climate models have a very difficult time representing those. So in many cases they’re either sort of roughly represented or excluded entirely.

And those are the things that we talk about in terms of abrupt change and tipping points. So our climate model projections are actually missing or underrepresenting tipping points: things like the release of greenhouse gases from permafrost that could happen suddenly and very quickly as the surface melts, things like the collapse of big ice sheets and the downstream effects of that. So one of the concerns that we have at SilverLining is that some of the things that tech people know how to do involve similar problems, like managing an IT network. That’s a highly complex systems problem, where you’re trying to maintain a stable state of the network. And some of the techniques that we use for doing that have not been fully applied to looking at the climate problem. Similarly, there are techniques we use in finance; one of our advisors is the former director of global research at Goldman Sachs.

And this is a problem we’re talking about with him and with folks in the IPCC and other places; essentially, we need some new and different types of analysis applied to this problem beyond just what the climate models do. So problem number one is that our analytic techniques are underrepresenting the risk, particularly, potentially, risk in the near term. The second piece is that these abrupt climate changes tend to be highly related to what they call feedbacks, meaning that there are points at which these climate changes produce effects that either put warming back in the system or greenhouse gases back in the system, or both. And once that starts to happen, the problem could get away from us in terms of our ability to respond. Now we might not know whether that risk is 5%, 10% or 80%. From SilverLining’s perspective, from my perspective, any meaningful risk of that in the next 10 to 30 years is an unacceptable level of risk, because it’s approaching somewhere between catastrophic and existential.

So we’re less concerned about the arm wrestle debate over whether there is some scenario where we can constrain the system by just reducing greenhouse gases. We’re concerned about: are there scenarios where that doesn’t work, scenarios where the system moves faster than we can constrain greenhouse gases? The final thing I’ll say is that we’re seeing evidence of that now. Some of the things that we’re seeing, like these extraordinary wildfire events and what’s happening to the ice sheets, are things that are happening at the far end of bad predictions. The observations of what’s happening in the system are indicative of the fact that that risk could be pretty high.

Lucas Perry: Yeah. So you’re ending here on the point that say fires that we’re observing more recently are showing that tail end risks are becoming more common. And so they’re less like tail end risks and more like becoming part of the central mass of the Gaussian curve?

Kelly Wanser: That’s right.

Lucas Perry: Okay. And so I want to slow down a little bit, because I think we introduced a bunch of crucial concepts here. One of these is tipping points. So if you were to explain tipping points in one to two sentences to someone who’s not familiar with climate science, how would you do that?

Kelly Wanser: The metaphor that I like to use is similar to a fever in the human body. Warming heat acts as a stressor on different parts of the system. So when you have a fever, you can carry a fever up to a certain point. And if it gets high enough and long enough, different parts of your body will be affected, like your brain, your organs and so on. The trapped heat energy in the climate system acts as a stressor on different parts of the system. And they can warm a bit over a certain period of time and they’ll recover their original state. But beyond a certain point, essentially the conditions of heat that they’re in are sufficiently different than what they’re used to, that they start to fundamentally change. And that can happen in biological systems where you start to lose the animal species, plant species, that can happen in physical systems where the structure of an ice sheet starts to disintegrate, and once that structure breaks down, it doesn’t come back.

Forests have this quality too, where if they get hot enough and dry enough, they may pass a point where their operation as a forest no longer works and they collapse into something else, like desertification. So there are two concerns with that. One is that we lose these big systems permanently, because they change state in a way that doesn’t recover. And the second is that when they do that, they either add warming or add greenhouse gases back into the system. So when an ice sheet collapses, for example: these big ice structures reflect a huge amount of sunlight back out to space, and when we lose them, they’re replaced by dark water. And so that’s basically a trade-off from cooling to warming that’s happening with ice. And so there are different things like that, where that combination of losing the system and then having it really change the balance of warming is a double-faceted problem.

Lucas Perry: Right, so you have these dynamic systems which play an integral part in maintaining the current climate stability, and they can undergo a phase state change. Like water is water until you hit a certain degree, and then it turns into ice, or it evaporates and turns into steam, except you can’t go back easily with these kinds of systems. And once one changes, it throws off the whole dynamic context that it’s in, which was stabilizing the environment as we enjoy it.

Kelly Wanser: One of the problems that you have is not just that any one of these systems might change its state and might start putting warming or greenhouse gases back into the atmosphere, but they’re linked to each other. And so then they call that the cascade effect where one system changes its state and that pushes another system over the edge, and that pushes another system over the edge. So a collapse of ice sheets can actually accelerate the collapse of the Amazon rainforest for example, through this process. And that’s where we come more towards this existential category where we don’t want to come anywhere near that risk and we’re dangerously near it.
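The cascade dynamic described here can be sketched with a toy model. To be clear, the element names, thresholds, and feedback sizes below are invented for illustration, not climate data: each element tips once forcing crosses its threshold, and tipping adds its own feedback to the forcing, which can push the next element over the edge.

```python
# Toy sketch of a tipping cascade. Thresholds and feedback sizes are
# made-up illustrative numbers, not outputs of any climate model.

def cascade(forcing, elements):
    """elements: dict name -> (threshold, feedback added once tipped)."""
    tipped = set()
    changed = True
    while changed:
        changed = False
        for name, (threshold, feedback) in elements.items():
            if name not in tipped and forcing >= threshold:
                tipped.add(name)
                forcing += feedback  # tipping feeds more warming back in
                changed = True
    return tipped, forcing

elements = {
    "permafrost": (1.5, 0.3),
    "ice_sheet": (1.7, 0.4),
    "rainforest": (2.1, 0.2),
}

# At 1.4 units of forcing, nothing tips.
print(cascade(1.4, dict(elements)))
# At 1.5, the first element's feedback pulls the other two over in turn.
print(cascade(1.5, dict(elements)))
```

The point of the sketch is the discontinuity: a 0.1-unit increase in forcing changes the outcome from "nothing tips" to "everything tips", because each tipped element pushes the next one over.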

And so one of the things that scientists like Will Steffen and some Arctic scientists, for example, are seeing is that we may already be in some of these tipping points. I work with climate scientists really closely, and I hear them saying, “We may be in it. Some of these tipping points are starting to occur.” The ice ones, we have front page news on that; the forest ones, we’re starting to see. So that’s where the concern becomes that we lack the measures to address these things if they’re happening in the next one, two or three decades.

Lucas Perry: Is this where a word like runaway climate change becomes relevant?

Kelly Wanser: Yes. When I came into the space about 12 years ago, like many of your listeners, I came in from tech first, as a sort of area of passion interest. And one of the first people I talked to was a climate scientist named Steve Schneider, who was at Stanford at the time. He has since passed away, but he was a giant of the field. And I asked him the question you’re referring to: how would you characterize the odds of runaway change within our lifetime? And he said at that time, which was about 12 years ago, “I put it in the single digits, but not the low single digits.” My reaction to that was, if you had those odds of winning the lottery, you’d be out buying tickets. And that’s an unacceptable level of risk when we don’t have responses that meaningfully arrest or reduce warming on that kind of timescale.

Lucas Perry: Okay. And so another point here is that you’ve used the word “existential” a few times, and you’ve also used the word “global catastrophic.” I think broadly within the existential risk community, at least the place where I come from, climate change is not viewed as an existential risk. Even if it gets really, really, really bad, it’s hard to imagine ways in which it would kill all people on the planet, rather than making life very difficult for most of them and killing large fractions. And so it’s generally viewed as a global catastrophic threat, in that it would kill large fractions of the population, but not be existential. What is your reaction to that? And how do you view the use of the word “existential” here?

Kelly Wanser: Well, for me there are two sides to that question. I normally stay on one of the two sides, which is that for SilverLining, our mission is to prevent suffering. The loss of a third of the population of the planet, or two thirds of the population of the planet, and the survival of some people in interconnected bubbles, which I’ve heard top analysts talk about: for us that’s an unacceptable level of suffering and an unacceptable outcome. And so in that way the debate about whether it’s all people or just lots of people is for us not material, because that whole situation seems to be not a risk that you want to take. On the other side of your question, whether it is all people and planetary livability, I think that question is subject to some of our inability to fully represent all of the systemic effects that happen at these levels of warming.

Early on, when I talked about this with the director of NASA Ames at the time, who’s now at Planet Labs, what he talked to me about was the changes in the chemistry of the earth system. This is something that maybe hasn’t been explored that widely, but we’re already looking at collapses of life in the ocean. And between them, the ocean and the land systems generate a lot of the atmosphere that we’re familiar with and that’s comfortable for people. And there’s a risk that we can’t have these collapses of biological life and still maintain the atmosphere that we’re used to. And so I think it’s inappropriate to discount the possibility that the planet could become largely unlivable at these higher levels of heat.

And at the end of the runaway climate change scenario, where the heat levels get very high and life collapses in an extreme way, I don’t think that’s been analyzed well enough yet. And I certainly wouldn’t rule it out as an existential risk. I think that that would be inappropriate, given both our level of knowledge and the fact that we know that we have these sort of non-linear cascading things that are going to happen. So to me, I challenge the existential threat community to look into this further.

Lucas Perry: Excellent.

Kelly Wanser: Put it out there.

Lucas Perry: I like that. Okay, so given tipping points and cascading tipping points, you think there’s a little bit more uncertainty over how unlivable things can get?

Kelly Wanser: I do. And that’s before you also get into the societal part of it, right? Going back to what I think has been one of the fundamental problems of the climate debate: this idea that there are winners and losers, and that this is a reasonably survivable situation for a certain class of people. There’s a reasonable probability that that’s not the case, and this is not going to be a world that anyone, if they do get to live in it, is going to enjoy.

Lucas Perry: Even if you were a billionaire before climate change, and you have your nicely stocked bunker, you can’t keep stocking it, and your money won’t be worth anything.

Kelly Wanser: In a world without strawberries and lobsters and rock concerts and all kinds of things that we like. So I think we’re much more in it together than people think. Over the course of many millennia, humans were engineered and fine-tuned for this beautiful, extremely complicated system that we live in. We can use our technology to the best of our ability to adapt, but this is an environment that’s beautifully made for us, and we’re pushing it out of the state that supports us.

Lucas Perry: So I’d be curious if you could expand, just fairly briefly, on more of the ways in which these systems help to maintain the current climate’s stable functioning. For example, the jet stream, the boreal forest, the Amazon rainforest, the Sahel and the Indian summer monsoon, the permafrost, and all these other things. If you could choose maybe one or two of your favorites, or whichever few are biggest, I’m curious how these systems help maintain climate stability.

Kelly Wanser: Well, there are people more expert than me, but I’ll talk about a couple that I care about a lot. So one is the permafrost, which is the frozen layer of earth. That frozen layer is under the surface in landmasses, and there are also frozen layers under the ocean. For many thousands of years, if not longer, those layers have captured the biological life that’s died and decayed within them, and they store massive amounts of carbon. And so provided the earth system is working within its usual parameters, all of those masses stay frozen and that organic material stays there. As it warms up beyond its normal range of parameters, that material starts to melt and those gases start to be released. And the amount of gas stored in the permafrost is massive. In particular, it includes both CO2 and denser, fast-acting gases like methane. We’re kind of sitting on the edge of that system starting to melt in a way where those releases could be massive.
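To give a rough sense of scale for the permafrost store, here is a back-of-envelope comparison. The figures (roughly 1,500 GtC stored in permafrost, and about 2.13 GtC per ppm of atmospheric CO2) are commonly cited approximations I’m supplying, not numbers from this conversation:

```python
# Back-of-envelope scale comparison for permafrost carbon.
# These are commonly cited approximations, not interview figures.
PERMAFROST_GTC = 1500.0   # approx. carbon stored in permafrost, GtC
GTC_PER_PPM = 2.13        # approx. GtC per ppm of atmospheric CO2
co2_ppm = 415.0           # approx. atmospheric CO2 concentration, ~2020

atmospheric_gtc = co2_ppm * GTC_PER_PPM
ratio = PERMAFROST_GTC / atmospheric_gtc
print(f"atmosphere holds ~{atmospheric_gtc:.0f} GtC; "
      f"permafrost holds ~{ratio:.1f}x that")
```

On these rough numbers, the permafrost holds well over one and a half times the carbon currently in the atmosphere, and some of it would be released as methane, which is far more potent per ton in the near term.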

And in my work, that’s to me one of the things that we need to watch most closely; it’s a potential runaway situation. So that’s one, and that’s a relatively straightforward one, because it’s a system storing greenhouse gases and then releasing them. They range in complexity. The Arctic is a much more complicated one, because it’s related to all the physics of the movement of the atmosphere and ocean: the circulation of the jet stream and weather patterns, the circulation of the ocean, and all of that. So there could be potentially drastic effects on what weather is where on the planet. Major changes in the Arctic can lead to major changes in what we experience as our normal weather. And we’re already seeing this start to happen in Europe, and it was predicted from changes in the jet stream. Europe has always had this kind of mild, temperate range of temperature.

And they’re starting to see super cold winters and hot summers, and that’s because the jet stream is moving, and a lot of that is because the Arctic is melting. A personal one that’s dear to me, and it’s actually happening now and we may not be able to stop it no matter what we do, is the coral reefs. Coral reefs are these organic structures, and they teem with all different levels of life; they trace up to about a quarter of all life in the ocean. So as these coral reefs get hit by these waves of hot water, they’re dying. And ultimately their collapse would mean the collapse of the at least 25% of life in the ocean that they support. And we don’t really know fully what the effects of that will be. So those are a few examples.

Lucas Perry: I feel like I’ve heard the phrase “heat stress” before in relation to coral reefs, and that’s what kills them.

Kelly Wanser: Yep.

Lucas Perry: All right. So before we move into the area you’re interested in, intervening as a potential solution if we can’t get the greenhouse gases down enough, are there any more bad things that we missed or bad things that would happen if we don’t sufficiently get climate change under control?

Kelly Wanser: So I think that there are many, and we haven’t talked too much about what happens on the human side. There are even thresholds of direct heat for humans, like the wet-bulb temperature. I’m not going to be able to describe it super expertly, but it’s the combination of heat and humidity at which the human body can no longer shed heat through its normal heat exchange. And so what’s happening in certain parts of the world right now, like in parts of India, like Calcutta, is that there’s an increasing number of days of the year where it’s not safe to work outside. And there were some projections that by 2030 there would be no days in Calcutta where it was safe to work outside. And we even see parts of the U.S. where you have these heat warnings. And right now, as a direct effect on humans, I just saw a study that said the actual heat index is killing more people than the smoke from fires.
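The wet-bulb threshold mentioned here can be made concrete with Stull’s (2011) empirical approximation, which estimates wet-bulb temperature from air temperature and relative humidity. The commonly cited physiological survivability limit, a sustained wet-bulb temperature near 35 °C, is a detail I’m adding for context, not a figure from the conversation:

```python
import math

def wet_bulb_c(temp_c, rh_pct):
    """Stull (2011) empirical wet-bulb approximation (temp in C, RH in %)."""
    return (temp_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(temp_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A mild day versus a dangerous heat-humidity combination:
print(f"20 C at 50% RH -> wet bulb {wet_bulb_c(20, 50):.1f} C")
print(f"35 C at 90% RH -> wet bulb {wet_bulb_c(35, 90):.1f} C")
```

The second case lands in the low 30s °C, uncomfortably close to the ~35 °C limit, which is why hot, humid regions see growing numbers of days when outdoor work is unsafe.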

The actual increase in heat is moving past where humans are actually comfortable living and interacting. As a secondary point, obviously in developed countries we have lots of tools for dealing with that in terms of our infrastructure. But one of the things that’s happening is that the system is moving outside the band in which our infrastructure was built, and this is a bit of an understudied area. As warming progresses, you have extreme temperatures, more flooding, extreme storms and winds. We have everything from bridges to nuclear plants to skyscrapers that were not engineered for those conditions. A full evaluation of that is not really available to us yet. And so I think we may be underestimating this; even in some of these projections, we know that as sea level rise happens and extreme storms happen, places like Miami are probably lost.

And in that context, what does it mean to have a city the size of Miami sitting under water at the edge of the United States? It would be a massive environmental catastrophe. So I think unfortunately we haven’t looked closely enough at what it means for all of these parts of our human infrastructure for their external circumstances to be outside the arena they were engineered for.

Lucas Perry: Yeah. So natural systems become stressed; they can fail, and there can be cascades. Human systems and human infrastructure become stressed. I mean, you can imagine nuclear facilities and oil rigs and whatever else can cause massive environmental damage getting stressed as well, by being moved outside of their normal bandwidth of operation. It’s just a lot of bad things happening after bad things after bad things.

Kelly Wanser: Yeah, and that’s a big problem. I’ve had this debate with people who are bullish on adaptation: hey, we can adapt to this. But the problem is you have all these things happening concurrently. So it’s not just Miami, it’s Miami and San Francisco and Bangladesh, lots of different variants of it happening all at the same time. And so anything we could do to prevent that, excuse my academic language, shit show is really something we should consider closely, because the cost of that sort of compound damage is just pretty staggering.

Lucas Perry: Yeah. It’s often much cheaper to prevent risks than to deal with them when they come up and then clean up the aftermath. So as we try to avoid moderate to severe effects of climate change, we can mitigate. I think most everyone is very familiar with the idea of reducing greenhouse gas emissions, the kinds of gases that help trap heat inside the atmosphere. Now you’re coming at this from a different angle. So what is the research interest of SilverLining, and what is the intervention for mitigating some of the effects of climate change that you’re exploring?

Kelly Wanser: Well, our interest is in the near-term risk, and so we focus most closely on things that might have the potential to act quickly to substantially reduce warming in the climate system. The problem with greenhouse gas reduction, and with a lot of the categories of removing greenhouse gases from the air, is that they’re likely to take many decades to scale and even longer to actually act on the climate system. And so if we’re looking at sub-30 years, which is where we’re coming from, SilverLining is saying, “We don’t have enough in that portfolio to make sure that we can keep the system stable.” We are a science-led organization, meaning we don’t do research ourselves, but we follow the recommendations of the scientific community and the scientific assessment bodies. And in 2015 the National Academy of Sciences in the United States ran an assessment that looked at the different technological interventions that might be used to accelerate addressing climate warming and greenhouse gases.

And they issued two reports, one called Climate Intervention: Carbon Dioxide Removal, and one called Climate Intervention: Reflecting Sunlight to Cool Earth. And what they found was that in the category where you’re looking to reduce warming quickly, within a decade or even a few years, the most promising way to try to do that is based on one of the ways that the earth system actually regulates temperature, which is the reflection of sunlight from particles and clouds in the atmosphere. The theories behind why they think this might work are based on observations from the real world. And so what I’m showing you right now is a picture of a cloud bank off the Pacific West coast, and the streaks in the clouds are created by emissions from ships. The particulates in those emissions, usually what people think of as the dirty stuff, have a property where they often mix with clouds in a way that makes the clouds slightly brighter.

And so based on that effect, scientists think that there’s cooling that could be generated in this way actively, and also that there’s actually cooling going on right now as a result of the particulate effects of our emissions overall. And they think that we have this accidental cooling going on somewhere between 0.5 degrees and 1.1 degrees C, and this is something that they don’t understand very well, but is potentially both a promise and a risk when it comes to climate.

Lucas Perry: So there’s some amount of cooling that’s going on by accident, but the net anthropogenic heating is positive, even with the cooling. I think one facet of this that I learned from looking into your work is that the cooling effect is temporary, because the particles fall back down; so if the emissions stop, the cooling goes away, and there might be a period of accelerated heating. Is that right?

Kelly Wanser: Yes, I think that’s what you’re getting at. Two things I’ll say. These white lines indicate the uncertainty, and you can see the biggest line is on the cloud albedo effect, which is how much these particles brighten clouds. The effect could be much bigger than what’s going into that net effect bar, and a lot of the uncertainty in the net effect bar is coming from this cloud albedo effect. Now, the fact that the particles fall is an issue, but what happens today, for the most part, is that we keep putting them up there. As long as you continuously put them up there, you continuously have this effect. If you take it away, which we’re doing in a couple of big accidental experiments this year, then you lose that cooling effect right away. And so one of the things that we’re hoping to help with is getting more money for research to look at two big events this year that took that away.

One is the economic shutdown associated with COVID, where we had these clean skies all over the world because all this pollution went down. That’s a big global experiment in removing these particles that may be cooling, and we’re hoping to gain a better understanding from it if we can get enough resources for people to look at it well.

Lucas Perry: So, the uncertainty with the degree to which current pollution is reflecting sunlight, is that because we have uncertainty over exactly how much pollution there is and how much sunlight that is exactly reflecting?

Kelly Wanser: It’s not that we don’t know how much pollution there is. I think we know that pretty well. It’s that this interaction between clouds and particles is one of the biggest uncertainties in the climate system. And there’s a natural form of it: when you see salt spray generating clouds, when you’re in Big Sur looking at the waves and the clouds starting to form, that whole process is highly complex. Clouds are among the most complex creatures in our earth system, and their behavior is based on these tiny particles that attract water to them and create different sizes of droplets. If the droplets are big, they reflect less total sunlight off less total surface area, and you have a dark cloud; eventually, when the droplets are big enough, they fall down as rain. If the droplets are small, there’s lots of surface area and the cloud becomes brighter.

The reason we have that uncertainty is that we have uncertainty around the whole process. Some of the scientists we work with at SilverLining really want to focus on that, because understanding that process will tell you what you might be able to do artificially to create a brightening effect on purpose, as well as how much of an accidental effect we’ve got going on.
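The droplet-size mechanism described above is known as the Twomey effect, and it can be sketched numerically. Under the standard assumption that, at fixed liquid water, cloud optical depth scales as the cube root of droplet number, and using a simple two-stream albedo approximation, doubling the droplet count brightens a typical marine cloud by a few percentage points. The optical depth and asymmetry values below are illustrative choices of mine, not figures from the interview:

```python
# Sketch of the Twomey effect: more, smaller droplets -> more total
# droplet surface area -> a brighter cloud. Illustrative numbers only.

def cloud_albedo(tau, g=0.85):
    """Simple two-stream cloud albedo; g is the scattering asymmetry factor."""
    tau_eff = (1 - g) * tau
    return tau_eff / (tau_eff + 2)

tau0 = 10.0  # a plausible marine stratocumulus optical depth
a0 = cloud_albedo(tau0)
# Double the droplet number at fixed liquid water: tau scales as N^(1/3).
a1 = cloud_albedo(tau0 * 2 ** (1 / 3))
print(f"albedo {a0:.3f} -> {a1:.3f} (change {a1 - a0:+.3f})")
```

On these numbers the change is roughly +0.06 in absolute albedo, which is consistent with the 5% to 7% brightening figures mentioned later in the conversation.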

Lucas Perry: So you’re saying we’re removing sulfate from the emissions of ships, and sulfate is helping to create these sea clouds that reflect sunlight?

Kelly Wanser: That’s right. And it happens over land as well. All the emissions that contain these sulfate and similar types of particles can have this property.

Lucas Perry: And so that, plus the reduction of pollution given COVID, there is this ongoing experiment, an accidental experiment to decrease the amount of reflective cloud?

Kelly Wanser: That’s right. And I should just note that the other thing that happened in 2020 is that the International Maritime Organization implemented regulations to drastically reduce emissions from ships. Those went into effect in January: an 85% reduction in these sulfate emissions. And so that’s the other experiment. We don’t like sulfate and these emissions as pollutants, for human health and for local ecosystems. They’re dirty. So we don’t like them for very good reasons, but they happen to have the side effect of producing a brightening effect on clouds, and that’s the piece we want to understand better.

When I talk to especially people in the Bay Area and people who think about systems, about this particular dynamic, most of the people that I’ve talked to were unfamiliar with this. And lots of people, even who think about climate a lot, are unfamiliar with the fact that we have this accidental cooling going on. And that as we reduce emissions, we have this uncertain near-term warming that may result from that, which I think is what you were getting at.

Lucas Perry: Yeah.

Kelly Wanser: So where I’m headed with this is that in the early ’90s, some British researchers proposed that you might be able to produce an optimized version of this effect using sea salt particles, like a salt mist made from seawater, which would be cleaner and might actually produce a stronger effect because of the nature of the salt particles. You could target this at areas of unpolluted clouds in certain parts of the ocean where they’d be most susceptible, and you’d get this highly magnified reflective effect. And in doing that, in the few parts of the world where it would work best, by brightening 10% to 20% of marine clouds, or, say, the equivalent of 3% to 5% of the ocean’s surface, you might offset a doubling of CO2 or several degrees of warming. So that’s one approach to this kind of rapid cooling, if you like, that scientists are thinking about, and it’s related to an observed effect.

This marine cloud brightening approach has the characteristic that you talked about: it’s relatively temporary. The effect lasts a few days, so you have to do it continuously; if you stop, it stops. And it’s also relatively localized. So it opens up theoretical possibilities that you might consider it as a way of cooling ocean water and mitigating climate impacts regionally or locally. In theory, what you might do is engage in this technique in the months before hurricane season, with the goal of cooling the ocean surface temperatures, which are a big part of what increases the energy and rainfall potential of storms.

So, this idea is very theoretical; there’s been almost no research on it. Similarly, there’s a little bit of emerging research on whether you could cool the waters that flow onto coral reefs. You might have to do this in areas further out from the reefs, because coral reefs tend to be in places where there are no clouds, but your goal is to get at those big currents of water the reefs sit in and cool them off. There was a tiny little test of the technology you might use down in Australia, as part of their big program, I think it’s an $800 million program, to look at all possibilities for saving the Great Barrier Reef.

Lucas Perry: Okay. One thing that I think would be interesting for you to comment on briefly is that many people, myself included, don’t really have a good intuition about how thick the atmosphere is. You look up and it’s just big open space; maybe it goes on forever or something. So how thick is it? Put it into a scale that makes sense, so that it’s plausible that seven billion humans could affect it in such large-scale ways.

Kelly Wanser: We’re going to talk about it a little bit differently, because the things I’m talking to you about are in different layers of the atmosphere. So the idea that I talked to you about here, marine cloud brightening, is really looking at the troposphere, which is the lowest layer of the atmosphere; when you look up, these are the clouds you see. It’s the cloud layer that’s closest to earth, going from about 500 feet up to a couple thousand feet. And in that layer, you may have the possibility, especially over the ocean, of generating a mist from the surface, where the convection, the motion of the air above you, pulls the particles up into the cloud layer. So you can potentially do this kind of activity from the surface, like from ships. And it’s why the pollution particles are getting sucked up into the clouds too.

So that idea happens at that low, sub-mile, visible-to-the-eye layer of stuff. And for the most part, in terms of the volume of material being proposed, when scientists talk about brightening these clouds, they’re talking about brightening them 5% to 7%. So it’s not something you would probably see with your naked eye, and it’s over the ocean, and it would have a relatively modest effect on the light coming in to the ocean below, so probably a relatively modest effect on the local ecology, apart from the cooling that it’s creating.

So in that way, it’s potentially less invasive than people might think. The risks in a technique like this are really around the fact that you’re creating these concentrated areas of cooling, and those have the potential to move the circulation of the atmosphere and change weather patterns in ways that are hard to predict. And that’s probably the biggest thing that people are concerned about with this idea.

Now, if you’d like, I can talk about what people are proposing at the other side, the high end of the atmosphere.

Lucas Perry: Yeah. So I was about to ask you about stratospheric sunlight reflection.

Kelly Wanser: Yeah, because this is the one that most people have heard about, and it’s the most widely studied and talked about, partly because it’s based on events that have occurred in nature. Large volcanoes push material into the atmosphere, and very large ones can push material all the way into the outer layer of the atmosphere, the stratosphere, which I think starts at about 30,000 or 40,000 feet and goes up for a few miles. When particles reach the stratosphere, they get entrained and they can stay for a year or two.

And when Mount Pinatubo erupted in 1991, it was powerful enough to push particles into the stratosphere that stayed there for almost two years. And it produced a measurable cooling effect across the entire planet of at least half a degree C, actually closer to one degree C. This cooling effect was sustained; it was very clear and measurable. It lasted until the particles fell back down to earth, and it also produced a marked change in the Arctic, where Arctic ice mass recovered drastically. With this cooling effect, when the particles reach the stratosphere, they very quickly get dispersed globally. So it’s a global effect, but it may have an outsize effect on the Arctic, which warms and cools faster than the rest of the planet.

This idea, and there are some other examples in the volcanic record, is what led scientists, including Paul Crutzen, the Nobel prize winning scientist who identified the ozone hole, to suggest that one approach to offsetting the warming that’s happening with climate change would be to introduce particles into the stratosphere that reflect sunlight directly, almost bedazzling the stratosphere, and that by increasing this reflectivity by just 1%, you could offset a doubling of CO2 or several degrees of warming.
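The "roughly 1% more reflected sunlight offsets a doubling of CO2" claim can be sanity-checked with a one-line energy balance. The reference values (solar constant ~1361 W/m^2, so ~340 W/m^2 of global-average insolation, and ~3.7 W/m^2 of forcing from doubled CO2) are standard round numbers I’m supplying, not figures from the interview:

```python
# Back-of-envelope energy balance for stratospheric sunlight reflection.
# Standard round reference values, not outputs of any climate model.
S0 = 1361.0            # solar constant, W/m^2
insolation = S0 / 4    # global-average insolation at top of atmosphere
F_2XCO2 = 3.7          # approx. radiative forcing of doubled CO2, W/m^2

delta_albedo = F_2XCO2 / insolation  # extra reflected fraction needed
print(f"albedo increase needed: {delta_albedo:.4f} (~1 percentage point)")
```

This ignores feedbacks, and the stratospheric heating and ozone risks discussed next; it only shows why a roughly 1 percentage point change in reflected sunlight is the right order of magnitude to counter a CO2 doubling.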

Now, the particles that volcanoes release in this way are similar to the pollution particles on the ground: they’re sulfates and their precursors. These particles also have the property that they can damage the ozone layer, and they can cause the stratosphere itself to heat up, and so that introduces risks that we don’t understand very well. That’s what people want to study. There isn’t very much research on this yet, but one of the earliest models was produced at NCAR. It compared the course of global surface temperatures in a business-as-usual scenario in the NCAR global climate model versus introducing particles into the stratosphere starting in 2020, gradually increasing them and maintaining temperatures through the end of the century. And what you can see in that model representation is that it’s theoretically possible to keep global surface temperatures close to those of today with this technique, and that if we were to go down the business-as-usual path, or have higher than expected feedbacks that took us to something similar, that’s not a very livable situation for most people on the planet.

Lucas Perry: All right. So you can intervene in the troposphere or the stratosphere, and so there’s a large degree of uncertainty about indirect effects and second and third order effects of these interventions, right? So they need to be studied because you’re impacting a complex system which may have complex implications at different levels of causality. But the high level strategy here is that these things may be necessary if we’re not able to reduce greenhouse gas emissions sufficiently. That’s why we may be interested in it for mitigating some degree of climate change that happens or is inevitable.

Kelly Wanser: That’s right. There’s a slight sort of twist on that, I think, where it’s really about, if we can, trying to look at these dangerous instabilities and intervene before they happen, or before they take us across thresholds we don’t want to cross. It is what you’re saying, but it’s a little bit of a different shade, where we don’t necessarily wait to see how our mitigation effort is going. What we need to do is watch the earth system and see whether we’re reaching kind of a red zone where we’ve got to bring the heat down in the system.

Lucas Perry: What kinds of ongoing experiments are happening for studying these tropospheric and stratospheric interventions in climate change?

Kelly Wanser: Well, so the first thing we’ll say is that the research in this field has been very taboo for most of the past few decades. So, relative to the problem space, very little research has been done. And the global level of investment in research even today is probably in the neighborhood of $10 million a year, and that includes a $3 million a year program in China and a program at Harvard, which is really the biggest funded program in the world. So, relative to the problem space and the potential, we’re very under-invested. And the things I’m going to talk to you about are really promising, and there are prestigious institutions and collaborations, but they’re still at, what I would call, a very seed level of funding.

So the two most significant interdisciplinary programs in the field, one is aimed at the stratosphere, and that’s a program at Harvard called the Harvard Solar Geoengineering Program and includes social science and physical sciences, but a sort of flagship of what they’re trying to do is to do an experiment in the stratosphere. And in their case, they would try to use a balloon, which is specially crafted to navigate in the stratosphere, which is a hard problem, so that they can do releases of different materials to look at their properties in the stratosphere as they disperse and as they mix with the gases in the stratosphere.

And so what we hope, and I think what the people in the field hope, is that we can do these small scale experimental studies that help you populate models that will better predict what happens if you did this at a bigger scale. So, the scale of this is tiny. It’s less than a minute of emissions from an aircraft. It’s tiny, but they hope to be able to find out some important things about the properties of the chemical interactions and the way the particles disperse that would feed into models that would help us make predictions about what will happen when you do this, and also what materials might be more optimal to use.

So in this case, they’re going to look at sulfates, which we talked about, but also materials that might have better properties. Two of those are calcium carbonate, which is what chalk is made of, and diamonds. What they hope to do is start down the path to finding out more about how you might optimize this in a way that minimizes the risks.

The other effort is on the other side of the United States. This is an effort that’s based at the University of Washington, which is one of the top atmospheric science institutions in the country. It’s a partnership with Pacific Northwest National Laboratory, which is a Department of Energy lab, and PARC, which many in your community may know as the famous Xerox PARC, which has since developed expertise in aerosols.

At the University of Washington, they are looking to do a set of experiments that would get at this cloud brightening question. And their scientific research and their experiments are classified as dual purpose, meaning that they are experiments about understanding this climate intervention technique, can we brighten clouds to actively cool climate, but they’re also about getting at the question of what is this cloud aerosol effect? What accidental effect are emissions having, and how does this work in the climate system more broadly? So, what they’re proposing to do is build a specialized spray technology. One of the characteristics of both efforts is that you need to create almost a nano-mist, with particles of 80 to 100 nanometers, very consistently, at a massive scale. That hasn’t been done before. And so how do we generate this massive number of tiny droplets of materials, of salt particles from seawater or calcium carbonate particles?

And some retired physicists and engineers in Silicon Valley took on this problem about eight years ago. And they’ve been working on it four days a week in their retirement, for free, for the sake of their grandchildren, to invent this nozzle that I’m showing you, which is the first step of being able to generate the particles that you need to study here. They’re in the phase right now where, because of COVID, they’ve had to set up a giant tent and do indoor spray tests, and they hope next year to go out and do what they call individual plume experiments. And then eventually, they would like to undertake what they call a limited area field experiment, which would actually be 10,000 square kilometers, which is the size of a grid cell in a climate model. And that would be the minimum scale at which you could actually potentially detect a brightening effect.
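For a sense of scale, the 10,000 square kilometers quoted for the limited-area field experiment corresponds to a patch roughly 100 km on a side, which is consistent with the ~1-degree horizontal grid cells of many global climate models (the 1-degree comparison is my assumption, not something stated in the episode):

```python
import math

# Size of the proposed limited-area field experiment, as quoted above.
area_km2 = 10_000
side_km = math.sqrt(area_km2)  # side length of an equivalent square patch

print(f"{area_km2} km^2 is a square about {side_km:.0f} km on a side")
```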

Lucas Perry: Maybe it makes sense on reflection, but I guess I’m kind of surprised that so much research is needed to figure out how to make a nozzle that makes droplets of aerosol.

Kelly Wanser: I think I was surprised too. It turns out, for certain materials, and again, because you’re really talking about a nano-mist, it’s like silicon chip manufacturing, like asthma inhalers. And so here, we’re talking about three trillion particles a second from one nozzle, and an apparatus that can generate 10 to the 16th particles and lift them up a few hundred meters.
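Taking those quoted rates at face value, a quick division conveys why this is hard engineering. I'm assuming here that the 10^16 figure, like the per-nozzle figure, is particles per second; the transcript doesn't specify the units:

```python
# Scale check on the quoted spray figures. Assumption: the apparatus
# target of ~10^16 particles is a per-second rate, matching the units
# of the per-nozzle figure (the episode does not say).

per_nozzle = 3e12   # particles per second from one nozzle (quoted)
apparatus = 1e16    # particles per second for the whole apparatus (assumed units)

nozzles_needed = apparatus / per_nozzle
print(f"Nozzles needed: about {nozzles_needed:,.0f}")
```

On that reading, a full apparatus would need on the order of a few thousand nozzles, which is why a cheap, reliable, mass-producible nozzle design matters so much.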

It’s not nuclear fusion, and it wouldn’t necessarily have taken eight years if it were properly funded and a focused program. I mean, these guys, the lead, Armand Neukermans, funded this with his own money, and he was trading biscotti from Belgium. He was trading biscotti for measurement instruments. And so it’s only recently, in the past year or two, that the program has gotten its first government funding, some from NOAA and some from the Department of Energy, still relatively small and more focused on the scientific modeling, and some money from private philanthropy, which they’re able to use for the technology development.

But again, going back to my comment earlier, this has been a very taboo area for scientists to even work in. There have been no formal sources of funding for it, so that’s made it go a lot slower. And the technology part is the hardest and most controversial. But overall, as a point, these things are very nascent. And the problem we were talking about at the beginning, predicting what the system is going to do, that in order to evaluate and assess these things properly, you need a better prediction system because you’re trying to say, okay, we’re going to perturb the system this way and this way and predict that the outcome will be better. It’s a tough challenge in terms of getting enough research in quickly. People have sort of propagated the idea that this is cheap and easy to do, and that it could run away from us very quickly. That has not been my experience.

Lucas Perry: Run away in what sense? Like everyone just starts doing it?

Kelly Wanser: Some billionaire could take a couple of billion dollars and do it, or some little country could do it.

Lucas Perry: Oh, as even an attack?

Kelly Wanser: Not necessarily an attack, but an ungoverned attempt to manage the climate system from the perspective of one individual or one small country, or what have you. That’s been a significant concern amongst social scientists and activists. And I guess my observation, working closely with it is, there are at least two types of technology that don’t exist yet that we need, so we have a technology hurdle. These things scale linearly and they pretty much stop when you stop, specifically referring to the aerosol generation technology. And for the stratosphere, we probably actually need a new and different kind of aircraft.

Lucas Perry: Can you define aerosol?

Kelly Wanser: I’ll caveat this by saying I’m not a scientist, so my definition may not be what a scientist would give you. But generally speaking, an aerosol is particles mixed with gases. It’s a manifestation in air of a mixed blend of particles and gases. I’ll often talk about particles because it’s a little bit clearer, and what we’re doing with these techniques for the most part is dispersing particles in a way that they mix with the atmosphere and…

Lucas Perry: Become an aerosol?

Kelly Wanser: Yeah. So, I would characterize the challenge we have right now is that we actually have a very low level of information and no technology. And these things would take a number of years to develop.

Lucas Perry: Yeah. Well, it’s an interesting future to imagine the international politics of weather control, like in negotiating whether to stop the hurricanes or new powers we might get over the weather in the coming decades.

Kelly Wanser: Well, you bring up an interesting point because as I’ve gotten into this field, I’ve learned about what’s going on. And actually, there’s an astonishing amount of weather modification activity going on in the world and in the United States.

Lucas Perry: Intentional?

Kelly Wanser: Intentional, yeah.

Lucas Perry: I think I did hear that Russia did some cloud seeding, or whatever it’s called, to stop some important event getting rained on or something.

Kelly Wanser: Yeah. And that kind of thing, if you remember the Beijing Olympics, where they seeded clouds to generate rain to clear the pollution, that kind of localized cloud seeding has gone on for a long time. And of course, I’m in Colorado, where there’s always been cloud seeding for snowmaking. So what’s happened in the Western United States, there’s even an industry association for weather modification in the United States. What started out as snowmaking, especially, and a little bit of an attempt to affect the snowpack in the West, has grown. And so there are actually major weather modification efforts in seven or eight Western states in the United States. And they’re mostly aimed at hydrology, like snowpack and water levels.

Lucas Perry: Is the snowpack for a ski resort?

Kelly Wanser: I believe, and I’m not an expert on the history of this, but I believe that snowmaking started out with the ski resorts, but when I say snowpack, it’s really about the water table. It’s about affecting the snow levels that generate the water levels downstream. Because in the West, a lot of our water comes from snow.

Lucas Perry: And so you want to seed more snow to get more water, and the government pays for that?

Kelly Wanser: I can’t say for sure who pays. This is still an exploration for us, but there are fairly significant initiatives in many Western states. And like I said, they’re primarily aimed at the problem of drought and hydrology. That’s in the United States. And if you look at other parts of the world, like the United Arab Emirates, they have a $400 million rainmaking fund. Can we make rain in the desert?

Lucas Perry: All right.

Kelly Wanser: Flip side of the coin. In Indonesia in January, this was in the news, they were seeding clouds offshore to induce rainfall offshore to prevent flooding, and they did that at a pretty big scale. In China last year, they announced a program to increase rainfall on the Tibetan Plateau, in an area the size of Alaska. So I think we are starting to see, around the world, weather extremes and attempts to deal with them locally, and this activity will likely grow.

Lucas Perry: Yeah. That makes sense. What are they using to do this?

Kelly Wanser: The traditional material is silver iodide. That’s what’s proposed in the Chinese program and many of the rainmaking types of ideas. There are two things we’ll start to see, I think, as climate extremes grow and there’s pressure on politicians to act: growing interest in the potential for global mechanisms to reduce heat, and bottom-up efforts that just continue to expand, trying to manage weather extremes in these kinds of ways.

Lucas Perry: So we have this tropospheric intervention by using aerosols to generate clouds that will reflect sunlight, and then we have the stratospheric intervention, which aims to release particles which do something similar, how do you view the research and the project of understanding these things as fitting in with and informing efforts to decrease greenhouse gas emissions? And then also, the project of removing them from the atmosphere, if that’s also something people are looking into?

Kelly Wanser: I think they’re all very related because at the end of the day, from the SilverLining perspective and a personal perspective, we see this as a portfolio problem. So, we have a complex system that we need to manage back into a healthy state, and we have kind of a portfolio of things that we need to apply at different times and different ways to do that. And in that way, it’s a bit like medicine, where the interventions I’m talking about address the immediate stressor.

But to restore the system to health, you have to address the underlying cause. Where we see ourselves as maybe helping bridge those things is that we are under-invested in climate research and climate prediction. In the United States, our entire budget for climate research is about $2.5 billion. If you put that in perspective, that’s like one-tenth of an aircraft carrier. It’s half of a football stadium. It’s paltry. This is the most complicated, computing-intensive problem on planet earth.

It takes massive super computing capacity and all the analytical techniques you can throw at it to try to reduce the uncertainty around what’s going to happen to these systems. What I believe happened, in the past few decades, is the problem was defined as a need to limit greenhouse gases. So if you think of an equation, where one side is the greenhouse gases going in, and the other side is what happens to the system on the other end. We’ve invested most of our energy in climate advocacy and climate policy about bringing down greenhouse gases, and we’re under-invested in really trying to understand and predict what happens on the other side.

When you look at these climate intervention techniques, like I’m talking about, it’s pretty critical to understand and be able to predict what happens on the other side. And it turns out that if you’re looking at the whole portfolio, you typically want to blend in these sorts of nature-based solutions that could bring down greenhouse gases, but they have complex interactions with the system. Right? Like building new forests, or putting nutrients in the ocean. So that need to better understand the system and better predict the system, it turns out we really need that. It would behoove us to be able to understand and predict these tipping points better.

I think, then, that’s where the interventions come in: to try to say, “Well, what does reducing the heat stress get you in terms of safety? How much time does it buy you for these other things to take effect?” That’s kind of where we see ourselves fitting in. We care a lot about mitigation, about let’s move away from this whole greenhouse gas emissions business. We care a lot about carbon removal, and accelerating efforts to do that. If somebody comes up with a way to do carbon removal at scale in the next 10 years, then we won’t need to do what we’re doing. But that doesn’t look like a high probability thing.

And so what we’ve chosen to do is to say there’s a part of the portfolio that is totally unserviced. There are no advocates. There’s almost no research. It’s taboo. It’s complicated. It requires innovation. That’s where we’re going to focus.

Lucas Perry: Yeah. That makes sense. Let’s talk a little bit about this taboo aspect. Maybe some number of listeners have some initial reaction, like anytime human beings try to intervene in complex systems, there are always unintended consequences, or things happen that we can’t predict or imagine, especially in natural systems. How would you speak to, or connect with, someone who viewed this project of releasing aerosols into the atmosphere to create clouds or reflect sunlight as dangerous?

Kelly Wanser: I’ll start out by saying, I have a lot of sympathy for that. If this were 30 years ago, if you were at a different place in this sort of risk equation, then this kind of thing really wouldn’t make any sense at all. If we’re in 1970 or 1980, and someone’s saying, “Look, we just need to economically tune the incentives so that we phase greenhouse gases out of the bulk of our economic system,” that is infinitely smarter and less risky.

I believe that a lot of the principle and structure of how we think about the climate problem is based on that, because what we did was really stupid. It would be the same thing as if the doctor said, “Well, you have stage one cancer. Stop smoking,” and you just kept on puffing away. So I am very sympathetic to this. But the primary concern that we’re focused on is now our forward outcomes and the fact that we have this big safety problem.

So now, we’re in a situation where we have the greenhouse gas concentrations that we have. They were already there. We have warming and system impacts that are already there, and some latency built in, that mean we’re going to have more of those. So that means we have to look at the risk-risk trade-off, based on the situation that we’re in now. Where we have conducted the experiment. Where we pushed all these gases into the atmosphere that mostly trap heat and change the system radically.

We did that. That was one form of human intervention. That wasn’t a very smart one. What we have to look at now is we’re not saying that we know that this is a good idea, or that the benefits outweigh the risks. But we’re saying that we have very few alternatives today to act in ways that could help stabilize the system.

Lucas Perry: Yeah. That makes sense. Can you enumerate what the main points are of detractors? If someone is skeptical of this whole approach and thinks, “We just need to stick to removing greenhouse gases by natural intervention, by building forests, and we need to reduce CO2 emissions and greenhouse gas emissions drastically. To do anything else would be adding more danger to the equation.” What are the main points of someone who comes with this problem, with such a perspective?

Kelly Wanser: You touched on two of them already. One, is that the problem is actually not moving that quickly and so we should be focused on things that are root cause, even if they take longer. Then the second one, being the fact that this introduces risks that are really hard to quantify. But I would say the primary objection, that’s raised by people like Al Gore, most of the advocates around climate, that have a problem with this is what they call the moral hazard. The idea that it gets put forward as a panacea and therefore, it slows down efforts to address the underlying problem.

This is sort of saying, even research in this stuff could have a societal negative effect, that it slows us down in doing what we’re really supposed to do. That has some interesting angles on it. One angle, which was talked about in a recent paper by Joseph Aldy at Harvard, and also was talked about with us, by Republicans we talked to about this early on, was that there’s also the thesis that it could have the opposite effect.

That the sort of drastic nature of these things could actually signal, to society and to skeptics, the seriousness of the problem. I did a bipartisan panel, and the Republican on the panel, who was a moderate, pro-climate guy, said, “When we, Republicans, hear these kinds of proposals coming from people who are serious about climate change, it makes you more credible than when you come to us and say, ‘The sky is falling,’ but none of these things are on the table.”

I thought that was interesting, early on. I thought it was interesting recently, that there’s at least an equal possibility that these things, as we look into them, could wake everyone up in the same way that more drastic medical treatments do and say, “Look, this is very serious. So on all fronts, we need to get very serious.” But I think, in general, this idea of moral hazard comes up pretty much as soon as the idea is there. And it can come up in the same way that Trump talks about planting trees.

Almost anything can be positioned in a way that attempts to use it as this panacea. I actually think that one of the moral hazards of the climate space has been the idea of winners and losers, because I think many more powerful people assumed that this problem didn’t apply to them.

Lucas Perry: Like they’re not in a flooding zone. They can move to their bunker.

Kelly Wanser: The people who put forward this idea of winners and losers in climate did that because they were very concerned about the people who are impacted first. The mistake was in letting powerful people think that this wasn’t their problem. In this particular case, I’m optimistic that if we talk about these things candidly, and we say, “Look, these are serious, and they have serious risks. We wouldn’t use them, if we had a better choice.”

It’s not clear to me that that moral hazard idea really holds, but that is the biggest reservation, and it’s a reservation. That means that many people, very passionately, object to research. They don’t want us to look into any of this, because it sets off this societal problem.

Lucas Perry: Yeah. That makes a lot of sense. It seems like moral hazard should be called something more like, information hazard. The word moral seems a little bit confusing here, because it’s like if people have the information that this kind of intervention is possible, then bad things may happen. Moral means it has something to do with ethics, rather than the consequences of information. Yeah, so whatever. No one here has control over how this language was created.

Kelly Wanser: I agree with you. It’s an idea that comes from economics originally, about where the incentives are. But I think your point is well taken, because you’re exactly right. It’s that information is dangerous, and that’s the fundamental principle. I find myself in meetings with advocates around this issue, having to say, “Look, our position is that information helps with fair and just consideration of this. That information is good, not bad.”

But I think you hit on an extremely important point, that it’s a masked way of saying that information is too dangerous for people to handle. Our position is information about these things is what empowers people all over the world to think about them for themselves.

Lucas Perry: Yeah. There’s a degree to which moral hazards or information hazards lack trust or confidence in the recipients of that information, which may or may not be valid, depending on the issue and the information. Here, you argue that this information is necessary to be known and shared, and then people can make informed decisions.

Kelly Wanser: That’s our argument. And so for us, we want to keep going forward and saying, “Look, let’s generate information about this, so we can all consider it together.” I guess one thing I should say about that, because I was so shocked by it when I started working in climate: this idea of moral hazard isn’t new to this issue. It actually came up when they started looking at adaptation research in the IPCC and the climate community. Research on adaptation was considered to create a moral hazard, and so it didn’t move forward.

One of the reasons that we, as a society, have a relatively low level of information about the things I was talking about, like infrastructure impacts, is because there was a strong objection to it, based on moral hazard. The same was true of carbon removal, which has only recently come into consideration in the IPCC. So this information is a dangerous idea, because it will affect our motivation around this one part of the portfolio that we think is the most important. I would argue that that’s already slowed us down in really critical ways.

This is just another of those where we need to say, “Okay, we need to rethink this whole concept of moral hazard, because it hasn’t helped us.” So going back say 20 years ago, in the IPCC and the climate community, there’s this question of, how much should we invest in looking at adaptation? There was a strong objection to adaptation research, because it was felt it would disincentivize greenhouse gas reduction.

I think that’s been a pretty tragic mistake. Because if you had started researching adaptation 20 years ago, you’d have much more information about what a shit show this is going to be, and more incentive to reduce greenhouse gases, not less, because this is not very adaptable. But the effect of that was a real dampening of any investment in adaptation research. Even adaptation research in the US federal system is relatively new.

Lucas Perry: Yeah. The fear there is that McAlpha Corp will come and be like, “It’s okay that we have all these emissions, because we’ll just make clouds later.” Right? I feel like corporations have done extremely effective disinformation campaigns on scientific issues, like smoking and other things. I assume that would have been what some of the fear would have been with regards to adaptation techniques. And here, we’re putting stratospheric and tropospheric intervention as adaptation techniques. Right?

Kelly Wanser: Well, in what I was talking about before, I wasn’t referring to this category. But the more traditional adaptation techniques, like building dams and finding new different types of vegetation and things like that. I recognize that what I’m talking about in these common interventions is fairly unusual, but even traditional adaptation techniques to protect people were suppressed. I appreciate your point. It’s been raised to me before that, “Oh, maybe oil companies will jump on this, as a panacea for what they are doing.”

So we talked to oil companies about it, talked to a couple of them. Their response was, “We wouldn’t go anywhere near this,” because it would be an admission that ties their fossil fuels to warming. They’re much more likely to invest in carbon removal techniques and things that are more closely associated with the actual emissions than they are anything like this. Because they’re not conceding that they created the warming.

Lucas Perry: But if they’re creating the carbon, and now they’re like, “Okay, we’re going to help take out the carbon,” isn’t that admitting that they contributed to the problem?

Kelly Wanser: Yes. But they’re not conceding that they are the absolute and proven cause of all of this warming.

Lucas Perry: Oh. So they inject uncertainty, that people will say like, “There’s weather, and this is all just weather. Earth naturally fluctuates, and we’ll help take CO2 out of the atmosphere, but maybe it wasn’t really us.”

Kelly Wanser: And if you think about them as legal fiduciary entities, creating a direct tie between themselves and warming is different than not doing that. This is how it was described to me. There’s a fairly substantial difference between them looking at greenhouse gases, which are part of the landscape of what they do, and then the actual warming and cooling of the planet, which they’re not admitting to be directly responsible for.

So if you’re concerned about there being someone doing it, we can’t count on them to bail us out and cool the planet this way, because they’re really, really not.

Lucas Perry: Yeah. Then my last quip I was suffering over, while you were speaking, was if listeners or anyone else are sick and tired of the amount of disinformation that already exists, get ready for the conspiracy theories that are going to happen. Like chemtrail 5.0, when we have to start potentially using these mist generators to create clouds. There could be even like significant social disruption just by governments undertaking that kind of project.

Kelly Wanser: That’s where I think generating information and talking about this in a way that’s well grounded is helpful. That’s why you don’t hear me use the term geoengineering. It’s not a particularly accurate term, and it sort of amplifies triggers. Climate intervention is the more accurate term. It helps kind of ground the conversation in what we’re talking about. The same thing when we explain that these are based on processes that are observed in nature, and some of them are already happening. So this isn’t some big new sci-fi thing, you know, we’re going to throw bombs at hurricanes or something. Just getting the conversation better grounded.

I’ve had chemtrails people at my talks. I had a guy set up a tripod in the back and record it. He was giving out these little buttons that had an airplane with a little trail coming out, and a strike through it. It was fantastic. I had a conversation with him. When you talk about it in this way, it’s kind of hard to argue with. The reality is that there is no secret government program to do these things, and there are definitely no mind-altering chemicals involved in any proposals.

Lucas Perry: Well, that’s what you would be saying, if there were mind-altering chemicals.

Kelly Wanser: Fair point. We tend to try to orient the dialogue at the sort of 90% across the political and thought spectrum.

Lucas Perry: Yeah. It’s not a super serious consideration, but something to be maddened about in the future.

Kelly Wanser: One of the other things I’ll say, with respect to the climate denial side of the spectrum. Because we work in the policy sphere in the United States, and so we have conversations across the political spectrum. In a strange way, coming out at the problem from this angle, where we talk about heat stress and we talk about these interventions, helps create a new insertion point for people who are shut down in the traditional kind of dialogue around climate change.

And so we’ve had some pretty good success actually talking to people on the right side of the spectrum, or people who are approaching the climate problem from a way that’s not super well-grounded in the science. We kind of start by talking about heat stress and what’s happening and the symptoms that we’re seeing and these kinds of approaches to it, and then walking them backwards into when you absolutely positively have to take down greenhouse gases.

It has interestingly, and kind of unexpectedly, created maybe another pathway for dealing with at least parts of those populations and policy people.

Lucas Perry: All right. I’d be interested in pivoting here into the international implications of this, and then also talking about this risk in the context of other global catastrophic and existential risks. The question here now is what are the risks of international conflict around setting the global temperature via CO2 reduction and geo… Sorry. Geoengineering is the bad word. Climate intervention? There are some countries which may benefit from the earth being slightly warmer, hotter. You talked about how there were no winners or losers. But there are winners, if it only changes a little bit. Like if it gets a little bit warmer, then parts of Russia may be happier than they were otherwise.

The international community, as we gain more and more efficacy over the problem of climate change and our ability to mitigate it to whatever degree, will be impacting the weather and agriculture and livability of regions for countries all across the planet. So how do you view this international negotiation problem of mitigating climate change and setting the global temperature to something appropriate?

Kelly Wanser: I don’t tend to use the framing of, setting the global temperature. I mean, we’re really, really far from having like a fine grained management capability for this. We tend to think of it more in the context of preventing certain kinds of disastrous events in the climate system. I think in that framing, where you say, “Well, we can develop this technology,” or where we have knobs and dials for creating favorable conditions in some places and not others, that would be potentially a problem. But it doesn’t necessarily look like that’s how it works.

So it’s possible that some places, like parts of Russia, parts of Canada, might for a period of time, have more favorable climate conditions, but it’s not a static circumstance. The problem that you have is well, the Arctic opens up, Siberia gets warmer and for a couple of decades, that’s nicer. But that’s in the context of these abrupt change risks that we were talking about, where that situation is just a transitory state to some worse states.

And so the question you’re asking me is, “Okay. Well, maybe we hold a system to where Russia is happier in this sort of different state that they had.” I think that the massive challenge, which we don’t know if we can meet, is just whether we can keep the system stable enough. The idea that you can stabilize the system in a way that’s different than now, but still prevents these cascading outcomes, that’s, I would say, not the highest probability scenario.

But I think there’s certainly validity in your question, which is this just makes everybody super nervous. It is the case that this is not a collective action capability. One of its features is that it does not require everyone in the world to agree, and that is a very unstable concerning state for a lot of people. It is true that its outcomes cannot be fully predicted.

And so there’s a high degree of likelihood that everyone would be better off or that the vast majority of the world would be better off, but there will be outcomes in some places that might be different. It’s more likely, rather than people electively turning the knobs and making things more favorable for themselves, just that 3 to 5% of the world thinks they’re worse off, while we’ve tried to keep the thing more or less stable.

I think behind your question is that even the dialogue around this is pretty unnerving and has the potential to promote instability and conflict. One of the things that we’ve seen in the past, that’s been super helpful, is scientific cooperation. Lots of global cooperation in the evolution of the research and the science, so that everybody’s got information. Then we’re all dealing from an information base where people can be part of the discussion.

Because our strong hypothesis is that we’re kind of looking at the edge of a cliff, where we might not have so much disagreement that we need to do something, but we all need information about this stuff. We have done some work, in SilverLining, looking at how the international community has handled things better or worse when it comes to environmental threats like this. Our favorite model is the Montreal Protocol, which is both the scientific research and the structure that helped manage what many perceive to be an existential risk around the ozone layer.

That was a smaller, more focused case of, you have a part of the system that if it falls outside a certain parameter, lots and lots of people are going to die. We have some science we have to do to figure out where we can let that go and not let it go. The world has managed that very well over the past couple of decades. And we managed to walk back from the cliff, restore the ozone layer, and we’re still managing it now.

So we kind of see some similarities in this problem space of saying, “We’ve got to be really, really focused about what we can and can’t let the system do, and then get really strong science around what our options are.” The other thing I’ll say about the Montreal Protocol, in case people aren’t aware, is that it is the only environmental treaty that is signed by all countries in the world. There are lots of aspects of that that are a really good model to follow for something like this, I think.

Lucas Perry: Okay. So there’s the problem of runaway climate change, where the destruction of important ecosystems leads to tipping points, and that leads to tipping cascades. And without the reduction of CO2, we get worse and worse climate change, where everyone is worse off. In that context, there is increased global instability, so there’s going to be more conflict with the migrations of people and the increase of disease.

It’s just going to be a stressor on all of human civilization. But if that doesn’t happen, then there is this later 21st century potential concern of more sophisticated weather manipulation and weather engineering technologies, making the question of constructing and setting the weather in certain ways a more pressing international geopolitical problem. But primarily the concern is obviously regular climate change, with the stressors and conflict that are induced by that.

Kelly Wanser: One thing I’ll say, just to clarify a little bit about weather modification and the expansion of that activity: I think that’s already happening and likely to happen throughout the century, along with the escalation and expansion of that as a problem. Not necessarily people using it as a weaponized idea. But as weather modification activities get larger, they have what are called teleconnection effects. They affect other places.

So I might be trying to cool the Great Barrier Reef, but I might affect weather in Bali. If I’m China and I’m trying to do weather modification to areas the size of Alaska, it’s pretty sure that I’m going to be affecting other places. And if it’s big enough, I could even affect global circulation. So I do think that that aspect, that’s coming onto the radar now. That is an international decision-making problem, as you correctly say. Because that’s actually, in some ways, even almost a bit of a harder problem than the global one. Because we’ve got these sort of national efforts, where I might be engaged in my own jurisdiction, but I might be affecting people outside.

Kelly Wanser: I should also say, just so everybody’s clear, weather modification for the purpose of weapons is banned by international treaty. A treaty called ENMOD. It arose out of US weather modification efforts in the Vietnam war, where we were trying to use weather as a weapon and subsequently agreed not to do that.

Lucas Perry: So, wrapping up here on the geopolitics and political conflict around climate change. Can you describe to what extent there is gridlock around the issue? I mean, different countries have different degrees of incentives. They have different policies and plans and philosophies. One might be more interested in focusing on industrializing to meet its own needs. And so it would deprioritize reducing CO2 emissions. So how do you view the game theory and the incentives and getting international coordination on climate change when, yeah, we’d all be better off if this didn’t happen, but not everyone is ready or willing to pay the same price?

Kelly Wanser: I mean, the main issue that we have now is that we have this externality, this externalized cost that people aren’t paying for the damage that they’re doing. My understanding is that a relatively modest charge for greenhouse gas emissions, a modest price for carbon, can set the incentives such that innovation moves faster and you reach the thresholds of economic viability for some of these non-carbon approaches faster. I come from Silicon Valley, so I think innovation is a big part of the equation.

Lucas Perry: You mean like solar and wind?

Kelly Wanser: Well there’s solar and wind, which are the traditional techniques. And then there are emerging things which could be hydrogen fuel cells. It could be fusion energy. It could be really important things in the category of waste management, agriculture. You know, it’s not just energy and cars, right? And we’re just not reaching the economic threshold where we’re driving innovation fast enough and we’re reaching profitability fast enough for these systems to be viable.

So with a little turn of the dial in terms of pricing that in, you get all of that to go faster. And I’m a believer in moving that innovation faster means that the price of these low carbon techniques will come down, it will also accelerate offlining the greenhouse gas generating stuff. So I think that it’s not sensible that we’re not building in like a robust mechanism for having that price incentive, and that price incentive will behave differently in the developed countries versus the emerging markets and the developing countries. And it might need to be managed differently in terms of the cost that they face.

But it’s really important in the developing countries that we develop policies that incentivize them not to build out greenhouse gas generating infrastructure, however we do that. Because a lot of them are in inflection points, right? Where they can start building power plants and building out infrastructure.

So we also need to look closely at aligning policies and incentives for them so that they just go ahead and go green, and it might be a little bit more expensive, which means that we have to help with that. But that would be a really smart thing for us to do. What we can’t do is expect developing countries, who mostly didn’t cause the problem, to also eat the impact in terms of not having electricity and some of the benefits that we have, things like running water and basic needs. I don’t actually think this is rocket science. You know, I’m not a total expert, but I think the mechanisms that are needed are not super complicated. Getting the political support for them is the problem.

Lucas Perry: A core solution here being increased funding into innovation, into the efficacy and efficiency of renewable energy resources, which don’t emit greenhouse gases.

Kelly Wanser: The R&D funding is key. In the U.S. we’ve actually been pretty good at that in a lot of parts of that spectrum, but you also have to have the mechanisms on the market side. Right now you have effectively fossil fuels being subsidized in terms of not being charged for the problem they’re creating. So basically we’ve got to embed the cost in the fossil fuel side of the damage that they’re doing, and that makes the market mechanisms work better for these emerging things. And the emerging things are going to start out being more expensive until they scale.

So we have this problem right now where we have some emerging things, they’re expensive. How do we get them to market? Fossil fuels are still cheaper. That’s the problem where it will eventually sort itself out, but we need it to sort itself out quickly. So we’ve got to try to get in there and fix that.

Lucas Perry: So, let’s talk about climate change in the context of existential risks and global catastrophic risks. The way that I use this language is to say that global catastrophic risks are ones which would kill some large fraction of human civilization, but wouldn’t lead to extinction. And existential risks lead to all humans dying, or all earth-originating intelligent life dying. The relevant distinction here for me is that the existential risks cancel the entire future. So there could be billions upon billions or trillions of experiential life years in the future if we don’t go extinct. And so that is this value being added into the equation of trying to understand which risks are the ones to pay attention to.

So you can react to this framing if you’d like, I’d be interested in what you think about it. And also just how you see the relative importance of climate change in the context of global catastrophic and existential risks, and how you see its interdependence with other issues. So I’m mainly talking about climate change in the context of things like other pandemics beyond COVID-19, which may kill large fractions of the population; synthetic biorisk, where a sufficiently dangerous engineered pandemic could possibly be existential; accidental nuclear war; or misaligned artificial superintelligence that could lead to the human species’ extinction. So how do you think about climate change in the context of all of these very large risks?

Kelly Wanser: Well, I appreciate the question. Many of the risks that you described have the characteristics that they are hard to quantify and hard to predict. And some of them are sort of big black swan events, like even more deadly pandemics or artificially engineered things. So climate change, I think, shares that characteristic that it’s hard to predict. I think that with climate change, when you dig into it, you can see that there are analytical deficiencies that make it very likely that we’re underestimating the risk.

In the spectrum between sort of catastrophic and existential, we have not done the work to dig into the areas in which we are not currently adequately representing the risk. So I would say that there’s a definite possibility that it’s existential, and that that possibility is currently under-analyzed and possibly underestimated. I think there are two ways that it’s existential. So I’ll say I’m not an expert in survivability in outlier conditions, but if we just look at two phenomena that are part of non-zero probability projections for climate, one is this example that I showed you where warming goes beyond five or six degrees C. The jury’s pretty far out on what that means for humans and what it means about all the conditions of the land and the sea and everything else.

So the question is, how high does temperature go? And what does that mean in terms of the population livability curve? Part of what’s involved in how high the temperature goes is the biological species and their relationship to the physics and chemistry of the planet. One concern, which I heard from Pete Worden at NASA Ames and had never heard before talking to him, is that at some point in the collapse of biological life, particularly in the ocean, you have a change in the chemical interactions that produce the atmosphere that we’re familiar with.

So for example, the biological life at the surface of the ocean, the phytoplankton and other organisms, they generate a lot of the oxygen that we breathe in the air, same with the forests. And so the question is whether you get collapse in the biological systems that generate breathable air. Now, if you watch sci-fi, you could say, “Well, we can engineer that.” And that starts to look more like engineering ourselves to live on Mars, which I’m happy to talk about why I don’t think that’s the solution. But so I think that it’s certainly reasonable for people to say, “Well, could that really happen?” There is some non-zero probability that that could happen that we don’t understand very well and we’ve been reluctant to explore.

And so I think that my challenge back to people about this being an existential risk is that the possibility that it’s an existential risk in the nearer term than you think may be higher than we think. And the gaps in our analysis of that are concerning.

Lucas Perry: Yeah. I mean, the question is like, do you know everything you need to know about all of the complex systems on planet Earth that help maintain the small bandwidth of conditions for which human beings can exist? And the answer is, no I don’t. And then the question is, how likely it is that climate change will perturb those systems in such a way that it would lead to an existential catastrophe? Well, it’s non-zero, but besides that, I don’t know.

Kelly Wanser: And one thing that I think everyone who’s interested in this should look at is the observations of what’s happening in the system now. What’s happening in the system now are collapses of some biological life, and changes in some of the systems, that are indicative that this risk might be higher than we think. And so if you look at things like, I think there was research coming out that estimates that we may have already lost 40% of the phytoplankton on the surface of the ocean. So much so that the documentary filmmaker who made Chasing Coral was thinking about making a documentary about this.

Lucas Perry: About phytoplankton?

Kelly Wanser: Yeah. And phytoplankton, I think of it as the API layer between the ocean and the atmosphere; it’s the translation layer. It’s really important. And then I go to my friends who are climate modelers, and they’re like, “Yeah, phytoplankton isn’t well-represented in the climate models. There are over 500 species of phytoplankton and we have three of them in the climate models.” And so you look at that and you say, “Okay, well, there’s a risk that we don’t understand very well.” So, from my perspective, we have a non-zero risk in this category. I’d be happy if I was overstating it, but I may not be.

Lucas Perry: Okay. So that’s all new information and interesting. In the context of the existential risk community that I’m most familiar with, climate change, the way in which it’s said to potentially lead to existential risks is by destabilizing global human systems that would lead to the actualization of other things that are existential risks. Like if you care about nuclear war or synthetic bio or pandemics or getting AI right, that’s all a lot harder to do and control in the context of a much hotter earth. And so the other question I had for you, speaking of hotter earths, has the earth ever been five C hotter than it is now while mammals have been on it?

Kelly Wanser: So it hasn’t been that hot while humans have been on it, but I’m not expert enough to know, as far as the mammal picture; I’m going to guess probably yes. I’ll touch on the first point you were making, about the societal cascade, but on this question, the problem with the warming isn’t just whether or not the earth has ever been this warm, it’s the pace of warming. Look at how far and how fast we’re pushing the system: over the past couple thousand years there have been small fluctuations, and further back there have been bigger ones, but they happened over very long periods of time, like hundreds of thousands of years, which means that all of the little organisms and all the big structures are adapting in this very slow way.

And in this situation where we’re pushing it this fast, the natural adaptation was very, very low. You know, you have species of fish and stuff that can move to different places, but it’s happening so fast in Earth system terms that there’s no adaptation happening. But to your other point about climate change setting off existential threats to society in other ways, I think that’s very true. And the climate change is likely to heighten the risk of like nuclear conflict on a couple of different vectors. And it’s also likely to heighten the risk that we throw biological solutions out there whose results we can’t predict. So I think one of the facets of climate change that might be a little bit different than runaway AI is just that it applies stress across every human and every natural system.

Lucas Perry: So this last point here then on climate change, contextualized in this field of understanding around global catastrophic and existential risks. FLI views itself as being a part of the effective altruism community, and many of the listeners are effective altruists, and 80,000 Hours has come up with this simple framework for thinking about what kinds of projects and endeavors you should take on. The framework is just thinking about tractability, scope, and neglectedness.

So tractability is just how much you can do to actually affect the thing. Scope is how big of a problem it is, how many people it affects. And neglectedness is how many people are working on it. So you want to work on things that are tractable, that have a large scope, and that are neglected. So I think that there’s a view or a sense of climate change that … I mean, from our conversation, it seems very tractable.

If we can get human civilization to coordinate on this, it’s something that we can do a lot about. I guess it’s another question how tractable it is to actually get countries and corporations to coordinate on this. But the scope is global and would at the very least affect our generation and the next few generations, though it seems to not be neglected relative to other risks. One could say that it’s neglected relative to how much attention it deserves. But so I’m curious to know how you would react to this tractability, scope, and neglectedness framework being applied to climate change in the context of other global catastrophic and existential risks.
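[Editor’s note: the tractability/scope/neglectedness heuristic discussed here is often treated as a rough product of the three factors. The sketch below is purely illustrative; the cause names and all numbers are hypothetical, invented for the example, and are not estimates from this conversation.]

```python
# Illustrative sketch of the tractability / scope / neglectedness heuristic.
# All causes and numbers below are hypothetical, for illustration only.

def priority_score(scope, tractability, neglectedness):
    """Rough heuristic: a cause's expected marginal impact scales with how
    big the problem is (scope), how much extra effort helps (tractability),
    and how few resources it already receives (neglectedness). Here we
    simply multiply the three relative scores."""
    return scope * tractability * neglectedness

# Hypothetical relative scores on a 0-10 scale:
causes = {
    "climate change, treated as one bucket": priority_score(10, 6, 2),
    "climate intervention research": priority_score(10, 5, 9),
}

# Rank causes from highest to lowest score:
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Under these made-up numbers, treating "climate intervention research" as its own neglected sub-problem scores far higher than treating climate change as one well-attended bucket, which is the distinction Kelly draws next.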

Kelly Wanser: Firstly, I’m a big fan of the framework. I was familiar with it before, and it’s not dissimilar to the approach that we took in founding SilverLining. Where this issue fits into that framework depends on whether you put climate change all in one bucket and treat it as not neglected, or you say that in the portfolio of responses to climate change we have a significant gap in terms of the ability to mitigate heat stress while we work on other parts of the portfolio, and that part is entirely neglected.

So I think for us it’s about having to dissect the climate change problem, and we have this collective action problem, which is a hard problem to solve, to move industrial and other systems away from greenhouse gas emissions. And we have the system instability problem, which requires that we somehow alleviate the heat stress before the system breaks down too far.

I would say in that context, if your community looks at climate change as a relatively slowly unfolding problem which has a lot of attention, then it wouldn’t fit. If you look at climate change as having some meaningful risk of catastrophic-to-existential unfolding in the next 30 to 50 years, and not having response measures to try to stabilize the system, then it fits really nicely. It’s so under-serviced that I represent the only NGO in the world that advocates for research in this area. So it depends on how your community thinks about it, but we look at those as quite different problems in a way.

Lucas Perry: So for the problem of, for example, adaptation research, which has historically been stigmatized, we can apply this framework and see that you might get a high return on impact if you focus on supporting and doing research in climate intervention and adaptation technologies?

Kelly Wanser: That’s right. What’s interesting to me and the people that I work with on this problem is that these climate intervention technologies have the potential to have very high leverage on the problem in the short term. And so from a philanthropic perspective or an octopus perspective, oftentimes I’m engaged with people who are looking for leverage: where can I really make a difference in terms of supporting research or policy? And I’m in this because literally I came from tech into climate, looking for what was the most under-serviced, highest-leverage part of the space. And I landed here. And so by your criteria, that it’s under-serviced and potentially high leverage, this fits pretty well. It’s not the same as addressing the longer term problem of greenhouse gases, but it has very high leverage on the stability risk in the next 50 years or so.

Lucas Perry: So if that’s compelling to some number of listeners, what is your recommendation for action and participation for such persons? If I’m taking a portfolio approach to my impact or altruism, and I want to put some of it into this, how do you recommend I do that?

Kelly Wanser: So it’s interesting timing, because we’re just a few weeks from launching something called the Safe Climate Research Initiative, where we’re funding a portfolio of research programs. What we do at SilverLining is try to help drive philanthropic funding for these high-leverage nascent research efforts that are going on, and then try to help drive government funding and effective policy so that we can get resources moving in the big climate research system. So for people looking for that: with the Safe Climate Research Initiative, we’re agnostic as to whether you want to give money to SilverLining for the fund, or you want to donate to these programs directly.

So we interface with most of the mature-ish programs in the United States and quite a few around the world, mature and emerging. And we can direct people based on their interests, whether alumni, whether parts of the world there are opportunities for funding really high caliber things, Latin America, the UK, India.

So we’re happy to say, “You know, you can donate to our fund and we’re just moving through, getting seed funding to these programs as we can, or we can help connect you with programs based on your interests in the different parts of the world that you’re in, technology versus science versus impacts.” So that’s one way. For some philanthropists who are aware of the leverage on government R&D and government policy, SilverLining’s been very effective in starting to kind of turn the dial on government funding. And we have some pretty big aspirations, not only to get funding directly into assessing these interventions, but also into expanding our capacity to do climate prediction quickly. So that’s another way: you can fund advocacy, and we would appreciate it.

Lucas Perry: Accepting donations?

Kelly Wanser: We’re definitely accepting donations, happy to connect people or be a conduit for funding research directly.

Lucas Perry: All right. So let’s end on a fun one here then. We were talking a little bit before we started about the Visit Planet Earth picture behind you, and how you use it as a message against the colonization of Mars. So why don’t you think Mars is a solution to all of the human problems on earth?

Kelly Wanser: Well, let’s just start by saying, I grew up on Star Trek, and so the colonization of Mars and the rest of the universe is appealing to me. But as far as solutions to climate change or an escape from it, just to level set, because I’ve had serious conversations with people. I lived for 12 years in Silicon Valley and spent a lot of time with the Long Now community. And people have a passion for this vision of living on another planet and the idea that we might be able to move off of this one if it becomes dire. The reality is, and this goes back to the education I got from very serious scientists, that the problem with living on other planets is not an engineering problem or a physics problem. It’s a biology problem.

Our bodies are fine-tuned to the conditions of Earth: radiation, gravity, the air, the colors. And so we degrade pretty quickly when we go off planet. That’s a harder problem to solve than building a spaceship or a bubble. That’s not a problem that gets solved right away. And we can see it in the condition of the astronauts that come back after a few years in orbit. And so the kinds of problems that we would need to solve to actually have quality-of-life living conditions on Mars or anywhere else are going to take a while. Longer than what we think is the 30 to 50 year instability problem that we have here on earth.

We are so finely tuned to the conditions of earth, like the Goldilocks sort of zone that we’re in, that it’s a really, really hard thing to replicate anywhere else. And so it’s really not very rational. It’s actually a much easier problem to solve to try to repair earth than it is to try to create the conditions of earth somewhere else.

Lucas Perry: Yeah. So I mean, these things might not be mutually exclusive, right? It really seems to be a problem of resource allocation. Like it’s not one or the other, it’s like, how much are we going to put into each-

Kelly Wanser: It’s less of a problem of resource allocation than time horizon. So I think that the kinds of scientific and technical problems that you have to solve to meaningfully have people live on Mars, that’s beyond a 50 year time horizon. And our concern is that the climate instability problem is inside a 50 year time horizon. So that’s the main issue is that over the long haul, there are advanced technologies and probably bio-engineering things we need to do and maybe engineering of planets that we need to do for that to work. And so over the next 100 or 200 years, that would be really cool, and I’ll be in favor of it also. But this is the spaceship that we have. All of the people are on it, and failure is not an option.

Lucas Perry: All right. That’s an excellent place to end on. And I think both you and I share the science fiction geek gene about getting to Mars, but we’ll have to potentially delay that until we figure out climate change, but hopefully we get to that. So, yeah. Thanks so much for coming on. This has been really interesting. I feel like I learned a lot of new things. There’s a lot here that probably most people who are even fairly familiar with climate science aren’t familiar with. So I just want to offer you a final little space here if you have any final remarks or anything you’d like to say that you feel like is unresolved or unsaid, just any last words for listeners?

Kelly Wanser: Well, for those people who’ve made it through the entire podcast, thanks for listening and being so engaged and interested in the topic. I think that apart from the things we talked about previously, it’s heartening and important that people from other fields are paying attention to the climate problem and becoming engaged, particularly people from the technology sector and certain parts of industry that bring a way of thinking about problems that’s useful. I think there are probably lots of people in your community who may be turning their attention to this, or turning their attention to this more fully in a new way, and may have perspectives and ideas and resources that are useful to bring to it.

The field has been quite academic, more academic than many other fields of endeavor. And so I think what people in Silicon Valley think about in terms of how you might transform a sector quickly, or a problem quickly, presents an opportunity. And so I hope that people are inspired to become involved, including in the parts of the space that are maybe more controversial or easier for people like us to think about.

Lucas Perry: All right. And so if people want to follow or find you or check out SilverLining, where are the best places to get more information or see what you guys are up to?

Kelly Wanser: So I’m on LinkedIn and Twitter as @kellywanser and our website is silverlining.ngo, no S at the end. And the majority of the information about what we do is there. And feel free to reach out to me on LinkedIn or on Twitter or contact Lucas who can contact me.

Lucas Perry: Yeah, all right. Wonderful. Thanks so much, Kelly.

Kelly Wanser: All right. Thanks very much, Lucas. I appreciate it. Thanks for taking so much time.

Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the author of euphoric sound landscapes inspired by the writings of David Pearce, largely exemplified in his latest album, aptly named “Utility.” Sam’s artistic excellence, motivated by blissful visions of the future, and David’s philosophical and technological writings on the potential for the biological domestication of heaven are a perfect match, fusing artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David’s work, how it informed his music production, and Sam and David’s optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.

Topics discussed in this episode include:

  • The relationship between Sam’s music and David’s writing
  • Existential hope
  • Ideas from the Hedonistic Imperative
  • Sam’s albums
  • The future of art and music

Where to follow Sam Barker:

Soundcloud
Twitter
Instagram
Website
Bandcamp

Where to follow Sam’s label, Ostgut Ton: 

Soundcloud
Facebook
Twitter
Instagram
Bandcamp

 

Timestamps: 

0:00 Intro

5:40 The inspiration around Sam’s music

17:38 Barker – Maximum Utility

20:03 David and Sam on their work

23:45 Do any of the tracks evoke specific visions or hopes?

24:40 Barker – Die-Hards Of The Darwinian Order

28:15 Barker – Paradise Engineering

31:20 Barker – Hedonic Treadmill

33:05 The future and evolution of art

54:03 David on how good the future can be

58:36 Guest mix by Barker

 

Tracklist:

Delta Rain Dance – 1

John Beltran – A Different Dream

Rrose – Horizon

Alexandroid – lvpt3

Datassette – Drizzle Fort

Conrad Sprenger – Opening

JakoJako – Wavetable#1

Barker & David Goldberg – #3

Barker & Baumecker – Organik (Intro)

Anthony Linell – Fractal Vision

Ametsub – Skydroppin’

Ladyfish\Mewark – Comfortable

JakoJako & Barker – [unreleased]

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

David Pearce: I would encourage people to conjure up their vision of paradise, and the future can potentially be like that, only much, much better.

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a particularly unique episode with Berlin-based DJ and producer Sam Barker as well as with David Pearce, and right now, you’re listening to Sam’s track Paradise Engineering on his album Utility. On the FLI Podcast, we focus centrally on existential risk. The other side of existential risk is existential hope. This hope reflects all of our dreams, aspirations, and wishes for a better future. For me, this means a future where we’re able to create material abundance, eliminate global poverty, end factory farming and address animal suffering, evolve our social and political systems to bring greater wellbeing to everyone, and more optimistically, create powerful aligned artificial intelligence that can bring about the end of involuntary suffering and help us to idealize the quality of our minds and ethics. If we don’t go extinct, we have plenty of time to figure these things out, and that brings me a lot of joy and optimism. Whatever future seems most appealing to you, these visions are a key component of why mitigating existential risk is so important. So, in the context of COVID-19, we’d like to revitalize existential hope, and this podcast is aimed at doing that.

As a part of this podcast, Sam was kind enough to create a guest mix for us. You can find that after the interview portion of this podcast and can find where it starts by checking the timestamps. I’ll also release the mix separately a few days after this podcast goes live. Some of my favorite tracks of Sam’s not highlighted in this podcast are Look How Hard I’ve Tried, and Neuron Collider. If you enjoy Sam’s work and music featured here, you can support or follow him at the links in the description. He has a Bandcamp shop where you can purchase his albums. I grabbed a vinyl copy of his album Debiasing from there. 

As for a little bit of background on this podcast, Sam Barker, who produces electronic music under the name Barker, has albums with titles such as “Debiasing” and “Utility.” I was recommended to listen to these, and discovered that his album “Utility” is centrally inspired by David Pearce’s work, specifically The Hedonistic Imperative. Utility has track titles like Paradise Engineering, Experience Machines, Gradients Of Bliss, Hedonic Treadmill, and Wireheading. So, being a big fan of Sam’s music production and David’s philosophy and writing, I wanted to bring them together to explore the theme of existential hope, Sam’s inspiration for his albums, and how David fits into all of it.

Many of you will already be familiar with David Pearce. He is a friend of this podcast and a multiple time guest. David is a co-founder of the World Transhumanist Association, rebranded Humanity+, and is a prominent figure within the transhumanism movement in general. You might know him from his work on the Hedonistic Imperative, a book which explores our moral obligation to work towards the abolition of suffering in all sentient life through technological intervention.

Finally, I want to highlight the 80,000 Hours Podcast with Rob Wiblin. If you like the content on this show, I think you’ll really enjoy the topics and guests on Rob’s podcast. His is also motivated by and contextualized in an effective altruism framework and covers a broad range of topics related to the world’s most pressing issues and what we can do about them. If that sounds of interest to you, I suggest checking out episode #71 with Ben Todd on the ideas of 80,000 Hours, and episode #72 with Toby Ord on existential risk. 

And with that, here’s my conversation with Dave and Sam, as well as Sam’s guest mix.

Lucas Perry: For this first section, I’m basically interested in probing the releases that you already have done, Sam, and exploring them and your inspiration for the track titles and the soundscapes that you’ve produced. Some of the background and context for this is that much of this seems to be inspired by and related to David’s work, in particular the Hedonistic Imperative. I’m at first curious to know, Sam, how did you encounter David’s work, and what does it mean for you?

Sam Barker: David’s work was sort of arriving in the middle of a sort of a series of realizations, and kind of coming from a starting point of being quite disillusioned with music, and a little bit disenchanted with the vagueness, and the terminology, and the imprecision of the whole thing. I think part of me has always wanted to be some kind of scientist, but I’ve ended up at perhaps not the opposite end, but quite far away from it.

Lucas Perry: Could you explain what you mean by vagueness and imprecision?

Sam Barker: I suppose the classical idea of what making music is about is a lot to do with the sort of western idea of individualism and about self expression. I don’t know. There’s this romantic idea of artists having these frenzied creative bursts that give birth to these wonderful things, that it’s some kind of struggle. I just was feeling super disillusioned with all of that. Around that time, 2014 or 15, I was also reading a lot about social media, reading about behavioral science, trying to figure out what was going on in this arena and how people are being pushed in different directions by this algorithmic system of information distribution. That kind of got me into this sort of behavioral science side of things, like the addictive part of the variable-ratio reward schedule with likes. It’s a free dopamine dispenser kind of thing. This was kind of getting me into reading about behavioral science and cognitive science. It was giving me a lot of clarity, but not much more sort of inspiration. It was basically like music.

Dance music especially is a sort of complex behavioral science. You do this and people do that. It’s all deeply ingrained. I sort of imagine the DJ as a sort of Skinner box operator pulling puppet strings and making people behave in different ways. Music producers are kind of designing clever programs using punishment and reward or suspense and release, and controlling people’s behavior. The whole thing felt super pushy and not a very inspiring conclusion. Looking at the problem from a cognitive science point of view is just the framework that helped me to understand what the problem was in the first place, so this kind of problem of being manipulative. Behavioral science is kind of saying what we can make people do. Cognitive psychology is sort of figuring out why people do that. That was my entry point into cognitive psychology, and that was kind of the basis for Debiasing.

There’s always been sort of a parallel for me between what I make and my state of mind. When I’m in a more positive state, I tend to make things I’m happier with, and so on. Getting to the bottom of what the tricks were, I suppose, with dance music. I kind of understood implicitly, but I just wanted to figure out why things worked. I sort of came to the conclusion it was to do with a collection of biases we have, like the confirmation bias, and the illusion of truth effect, and the mere exposure effect. These things are like the guardians of four four supremacy. Dance music can be pretty repetitive, and we describe it sometimes in really aggressive terminology. It’s a psychological kind of interaction.

Cognitive psychology was leading me to Kaplan’s law of the instrument. The law of the instrument says that if you give a small boy a hammer, he’ll find that everything he encounters requires pounding. I thought that was a good metaphor. The idea is that we get so used to using tools in a certain way that we lose sight of what it is we’re trying to do. We act in the way that the tool instructs us to do. I thought, what if you take away the hammer? That became a metaphor for me, in a sense, that David clarified in terms of pain reduction. We sort of put these painful elements into music in a way to give this kind of hedonic contrast, but we don’t really consider that that might not be necessary. What happens when we abolish these sort of negative elements? Are the results somehow released from this process? That was sort of the point, up until discovering the Hedonistic Imperative.

I think what I was needing at the time was a sort of framework, so I had the idea that music was decision making. To improve the results, you have to ask better questions, make better decisions. You can make some progress looking at the mechanics of that from a psychology point of view. What I was sort of lacking was a purpose to frame my decisions around. I sort of had the idea that music was a sort of a valence carrier, if you like, and that it could be tooled towards a sort of a greater purpose than just making people dance, which was, for Debiasing, the goal, really. It was to make people dance, but don’t use the sort of deeply ingrained cues that people are used to, and see if that works.

What was interesting was how broadly it was accepted, this first EP. There were all kinds of DJs playing it in techno, ambient, electro, all sorts of different styles. It reached a lot of people. It was as if taking out the most functional element made it more functional and more broadly appealing. That was the entry point to utilitarianism. There was sort of an accidentally utilitarian act, in a way, to sort of try and maximize the pleasure and minimize the pain. I suppose after landing in utilitarianism and searching for some kind of a framework for a sense of purpose in my work, the Hedonistic Imperative was probably the most radical, optimistic take on the system. Firstly, it put me in a sort of mindset where it granted permission to explore utopian ideals, because I think the idea of pleasure is a little bit frowned upon in the art world. I think the art world turns its nose up at such direct cause and effect. Then there was the idea that producers could be paradise engineers of sorts, or at least the precursors to paradise engineers, and that we almost certainly would have a role in a kind of sensory utopia of the future.

There was this kind of permission granted. You can be optimistic. You can enter into your work with good intentions. It’s okay to see music as a tool to increase overall wellbeing, in a way. That was kind of the guiding idea for my work in the studio. I’m trying, these days, to put more things into the system to make decisions in a more conscious way, at least where it’s appropriate to. This sort of notion of reducing pain and increasing pleasure was the sort of question I would ask at any stage of decision making. Did this thing that I did serve those ends? If not, take a step back and try a different approach.

There’s something else to be said about the way you sort of explore this utopian world without really being bogged down. You handle the objections in such a confident way. I called it a zero gravity world of ideas. I wanted to bring that zero gravity feeling to my work, and to see that technology can solve any problem in this sphere. Anything’s possible. All the obstacles are just imagined, because we fabricate these worlds ourselves. These are things that were really instructive for me, as an artist.

Lucas Perry: That’s quite an interesting journey. From the lens of understanding cognitive psychology and human biases, was it that you were seeing those biases in dance music itself? If so, what were those biases in particular?

Sam Barker: On both sides, on the way it’s produced and in the way it’s received. There’s sort of an unspoken acceptance. You’re playing a set and you take a kick drum out. That signals to people to perhaps be alert. The lighting engineer, they’ll maybe raise the lights a little bit, and everybody knows that the music is going into sort of a breakdown, which is going to end in some sort of climax. Then, at that point, the kick drum comes back in. We all know this pattern. It’s really difficult to understand why that works without referring to things like cognitive psychology or behavioral science.

Lucas Perry: What does the act of debiasing the reception and production of music look like and do to the music and its reception?

Sam Barker: The first part that I could control was what I put into it. The experiment was whether a debiased piece of dance music could perform the same functionality, or whether it really relies on these deeply ingrained cues. Without wanting to sort of pat myself on the back, it kind of succeeded in its purpose. It was sort of proof that this was a worthy concept.

Lucas Perry: You used the phrase, earlier, four four. For people who are not into dance music, that just means a kick on each beat, which is ubiquitous in much of house and techno music. You’ve removed that, for example, in your album Debiasing. What are other things that you changed from your end, in the production of Debiasing, to debias the music from normal dance music structure?

Sam Barker: It was informing the structure of what I was doing so much that I wasn’t so much on a grid where you have predictable things happening. It’s a very highly formulaic and structured thing, and that all keys into the expectation and this confirmation bias that people, I think, get some kind of kick from when the predictable happens. They say, yep. There you go. I knew that was going to happen. That’s a little dopamine rush, but I think it’s sort of a cheap trick. I guess I was trying to get the tricks out of it, in a way, so figuring out what they were, and trying to reduce or eliminate them was the process for Debiasing.

Lucas Perry: That’s quite interesting and meaningful, I think. Let’s just take trap music. I know exactly how trap music is going to go. It has this buildup and drop structure. It’s basically universal across all dance music. Progressive house in the 2010s was also exactly like this. What else? Dubstep, of course, same exact structure. Everything is totally predictable. I feel like I know exactly what’s going to happen, having listened to electronic music for over a decade.

Sam Barker: It works, I think. It’s a tried and tested formula, and it does the job, but when you’re trying to imagine states beyond just getting a little kick from knowing what was going to happen, that’s the place that I was trying to get to, really.

Lucas Perry: After the release of Debiasing in 2018, which was a successful attempt at serving this goal and mission, you then discovered the Hedonistic Imperative by David Pearce, and kind of leaned into consequentialism, it seems. Then, in 2019, you had two releases. You had BARKER 001 and you had Utility. Now, Utility is the album which most explicitly adopts David Pearce’s work, specifically the Hedonistic Imperative. You mentioned electronic dance producers and artists in general can be sort of the first wave of, or can perhaps assist in, paradise engineering, insofar as that will be possible in the near to short term future, given advancements in technology. Is that sort of the explicit motivation and framing around those two releases of BARKER 001 and Utility?

Sam Barker: BARKER 001 was a few tracks that were taken out of the running for the album, because they didn’t sort of fit the concept. Really, I knew the last track was kind of alluding to the album. Otherwise, it was perhaps not sort of thematically linked. Hopefully, if people are interested in looking more into what’s behind the music, you can lead people into topics with the concept. With Utility, I didn’t want to just keep exploring cognitive biases and unpicking dance music structurally. It’s sort of a paradox, because I guess the Hedonistic Imperative argues that pleasure can exist without purpose, but I really was striving for some kind of purpose with the pleasure that I was getting from music. That sort of emerged from reading the Hedonistic Imperative, really, that you can apply music to this problem of raising the general level of happiness up a notch. I did sort of worry that by trying to please, it wouldn’t work, that it would be something that’s too sickly sweet. I mean, I’m pretty turned off by pop music, and there was this sort of risk that it would end up somewhere like that. That’s it, really. Just looking for a higher purpose with my work in music.

Lucas Perry: David, do you have any reactions?

David Pearce: Well, when I encountered Utility, yes, I was thrilled. As you know, essentially I’m a writer writing in quite heavy sub-academic prose. Sam’s work, I felt, helps give people a glimpse of our glorious future, paradise engineering. As you know, the reviews were extremely favorable. I’m not an expert critic or anything like that. I was just essentially happy and thrilled at the thought. It deserves to be mainstream. It’s really difficult, I think, to actually evoke the glorious future we are talking about. I mean, I can write prose, but in some sense music can evoke paradise better, at least for many people, than prose.

Sam Barker: I think it’s something you can appreciate without cognitive effort, whereas your prose, at least, you need to be able to read. It’s a bit more of a passive way of receiving music, which I think is an intrinsic advantage it has. That’s actually really a relief to hear, because there was just a small fear in my mind that I was grabbing these concepts with clumsy hands and discrediting them.

David Pearce: Not at all.

Sam Barker: It all came from a place of sincere appreciation for this sort of world that you are trying to entice people with. When I’ve tried to put into words what it was that was so inspiring, I think it’s that there was also a very practical side; I was making lots of notes. I’ve got lots of amazing one-liners: will we ever leave the biological dark ages, the biological domestication of heaven. There were just so many things that conjured up such vividly heavenly sensations. It sort of brings me back to the fuzziness of art and inspiration, but I hope I’ve tried to adopt the same spirit of optimism that you approached the Hedonistic Imperative with. I actually don’t know what state of mind your approach was at the time, even, but it must’ve come in a bout of extreme hopefulness.

David Pearce: Yes, actually. I started taking Selegiline, and six weeks later I wrote the Hedonistic Imperative. It just gave me just enough optimism to embark on it. I mean, I have, fundamentally, a very dark view of Darwinian life, but for mainly technical reasons I think the future is going to be superhumanly glorious. How do you evoke this for our dark, Darwinian minds?

Sam Barker: Yeah. How do we get people excited about it? I think you did a great job.

David Pearce: It deserves to go mainstream, really, the core idea. I mean, forget the details, the neurobabble of genetics. Yeah, of course it’s incredibly important, but this vision of just how sublimely wonderful life could be. How do we achieve full spectrum, multimedia dominance? I mean, I can write it.

Lucas Perry: Sounds like you guys need to team up.

Sam Barker: It’s very primitive. I’m excited where it could head, definitely.

Lucas Perry: All right. I really like this idea about music showing how good the future can be. I think that many of the ways that people can understand how good the future can be come from the best experiences they’ve had in their life. Now, that’s just a physical state of your brain. If something isn’t physically impossible, then the only barrier to achieving and realizing that thing is knowledge. Take all the best experiences in your life. If we could just understand computation, and biology in the brain, and consciousness well enough, it doesn’t seem like there are any real limits to how good and beautiful things can get. Do any of the tracks that you’ve done evoke very specific visions, dreams, desires, or hopes?

Sam Barker: I would be sort of hesitant to make direct links between tracks and particular mindsets, because when I’m sitting down to make music, I’m not really thinking about any one particular thing. Rather, I’m trying to look past things and look more at what sort of mood I want to put into the work. Any of the tracks on the record, perhaps, could’ve been called Paradise Engineering, is what I’m saying. The names of the tracks are sort of a collection of the ideas that were feeding the overall process. The application of the names was kind of retroactive connection making. That’s probably a disappointment to some people, but the meaning of all of the track names is in the whole of the record. I think the last track on the record, Die-Hards of the Darwinian Order, that was a phrase that you used, David, to describe people clinging to the need for pain in life to experience pleasure.

David Pearce: Yes.

Sam Barker: That track was not made for the record. It was made some time ago, and it was just a technical experiment to see if I could kind of recreate a realistic sounding band with my synthesizers. The label manager, Alex, was really keen to have this on the record. I was kind of like, well, it doesn’t fit conceptually. It has a kick drum. It’s this kind of somber mood, and the rest of the record is really uplifting, or trying to be. Alex was saying he liked the contrast to the positivity of the rest of the album. He felt like it needed this dose of realism or something.

David Pearce: That makes sense, yes.

Sam Barker: I sort of conceded in the end. We called it Die-Hards of the Darwinian Order, because that was what I felt like he was.

David Pearce: Have you told him this?

Sam Barker: I told him. He definitely took the criticism. As I said, it’s the actual joining up of these ideas that I make notes on. The tracks themselves, in the end, had to be done in a creative way sort of retroactively. That doesn’t mean to say that all of these concepts were not crucial to the process of making the record. When you’re starting a project, you call it something like new track, happy two, mix one, or something. Then, eventually, the sort of meaning emerges from the end result, in a way.

Lucas Perry: It’s just like what I’ve heard from authors of best selling books. They say you have no idea what the book is going to be called until the end.

Sam Barker: Right, yeah.

David Pearce: One of the reasons I think it’s so important to stress life based on gradients of bliss ratcheting up hedonic set points is that, instead of me or anyone else trying to impose their distinctive vision of paradise, it just allows, with complications, everyone to keep most of their existing values and preferences, but just ratchets up hedonic tone and hedonic range. I mean, this is the problem with so many traditional paradises. They involve the imposition of someone else’s values and preferences on you. I’m being overly cerebral about it now, but I think my favorite track on the album is the first. I would encourage people to conjure up their vision of paradise and the future can potentially be like that and be much, much better.

Sam Barker: This, I think, relates to the sort of pushiness that I was feeling at odds with. The music does take people to these kind of euphoric states, sometimes chemically underwritten, but it’s being done in a dogmatic and singular way. There’s not much room for personal interpretation. It’s sort of everybody’s experiencing one thing, which I think there’s something in these kind of communal experiences that I’m going to hopefully understand one day.

Lucas Perry: All right. I think some of my favorite tracks are Look How Hard I’ve Tried on Debiasing. I also really like Maximum Utility and Neuron Collider. I mean, all of it is quite good and palatable.

Sam Barker: Thank you. The ones that you said are some of my personal favorites. It’s also funny how some of the least favorite tracks, or not least favorite, but the ones that I felt like didn’t really do what they set out to do, were other people’s favorites. Hedonic Treadmill, for example. I’d put that on the pile of didn’t work, but people are always playing it, too, finding things in it that I didn’t intentionally put there. Really, that track felt to me like stuck on the hedonic treadmill, and not sort of managing to push the speed up, or push the level up. This is, I suppose, the problem with art, that there isn’t a universal pleasure sense, that there isn’t a one size fits all way to these higher states.

David Pearce: You correctly called it the hedonic treadmill. Some people say the hedonistic treadmill. Even one professor I know calls it the hedonistic treadmill.

Lucas Perry: I want to get on that thing.

David Pearce: I wouldn’t mind spending all day on a hedonistic treadmill.

Sam Barker: That’s my kind of exercise, for sure.

Lucas Perry: All right, so let’s pivot here into section two of our conversation, then. For this section, I’d just like to focus on the future in particular, exploring the state of dance music culture, how it should evolve, and how science and technology, along with art and music, can evolve into the future. This question comes from you in particular, Sam, addressed to Dave. I think you were curious about his experiences in life, and whether he’s ever lost himself on a dance floor or has any special music or records that put him in a state of bliss.

Sam Barker: Very curious.

David Pearce: My musical autobiography. Well, some of my earliest memories are of a wind-up gramophone. I’m showing my age here. Apparently, as a five year old child, I used to sing on the buses. Daisy, Daisy, give me your answer, do. I’m so crazy over love of you. Then, graduating via military brass band music, which apparently I used to enjoy as a small child, to pop music. Essentially, for me, it’s very, very unanswerable, this question about music. I like to use it as a backdrop, you know. At its best, there’s this tingle up one’s spine one gets, but it doesn’t happen very often. The only thing I would say is that it’s really important for me that music should be happy. I know some people get into sad music. I know it’s complicated. Music, for me, has to elicit something that’s purely good.

Sam Barker: I definitely have no problem with exploring the sort of darker side of human nature, but I’ve also come to the realization that there are better ways to explore the dark sides than aesthetic stimulation, through, perhaps, words and ideas. Aesthetics is really at its optimum function when it’s working towards the more positive goals of happiness and joy, these sort of swear words in the art world.

Lucas Perry: Dave, you’re not trying to hide your rave warehouse days from us, are you?

David Pearce: Well, yeah. Let’s just say I might not have been entirely drug naïve with friends. Let’s just say I was high on life or something, but it’s a long time since I have explored that scene. Part of me still misses it. When it comes to anything in the art world, it’s just as I think visual art should be beautiful, which, I mean, not all serious artists would agree with.

Sam Barker: I think the whole notion is just people find it repulsive somehow, especially in the art world. Somebody that painted a picture and then the description reads I just wanted it to be pretty is getting thrown out the gallery. What greater purpose could it really take on?

David Pearce: Yeah.

Lucas Perry: Maybe there’s some feeling of insecurity, and a feeling and a need to justify the work as having meaning beyond the sensual or something. Then there may also be this fact contributing to it. Seeking happiness and sensual pleasure directly, in and of itself, is often counterproductive towards that goal. Seeking wellbeing and happiness directly usually subverts that mission, and I guess that’s just a curse of Darwinian life. Perhaps those, I’m just speculating here, contribute to this cultural distaste, as you were pointing out, to enjoy pleasure as the goals of art.

Sam Barker: Yeah, we’re sort of intellectually allergic to these kinds of ideas, I think. They just seem sort of really shallow and superficial. I suppose that was kind of my existential fear before the album came out, that the idea that I was just trying to make people happy would just be seen as this shallow thing, which I don’t see it as, but I think the sentiment is quite strong in the art world.

Lucas Perry: If that’s quite shallow, then I guess those people are also going to have problems with the Buddha and people like that. I wouldn’t worry about it too much. I think you’re on the same intentional ground as the Buddha. Moving a little bit along here. Do you guys have thoughts or opinions on the future of aesthetics, art, music, and joy, and how science and technology can contribute to that?

David Pearce: Oh, good heavens. One possibility will be that, as neuroscience advances, it’ll be possible to isolate the molecular signatures of visual beauty, musical bliss, and spiritual excellence, and scientifically amplify them, so that one can essentially enjoy musical experiences that are orders of magnitude richer than anything that’s even physiologically feasible today. I mean, I can use all this fancy language, but what this will actually involve, in terms of true trans-human and post-human artists, I don’t know. The gradients of bliss are important here, such that I think we will retain information-sensitive gradients, so we don’t lose critical sharpness, discernment, critical appreciation. Nonetheless, there’d be this base point for aesthetic excellence. All experience can be superhumanly beautiful. I mean, I religiously star my music collection from one to five, but what would a six be like? What would 100 be like?

Sam Barker: I like these questions. I guess the role of the artist in the long term future in creating these kinds of states maybe gets pushed out at some point by people who are in the labs and reprogram the way music is, or the way that any sort of sensory experience is received. I wonder whether there’s a place in techno utopia for music made by humans, or whether artists sort of just become redundant in some way. I’m not going to get offended if the answer is bye, bye.

Lucas Perry: I’d be interested in just making a few points about the evolutionary perspective before we get into the future of ape artists or mammalian artists. It just seems like some kind of happy cosmic accident that, for the vibration of air, human beings have developed a sensory appreciation of information and structure embedded in that medium. I think we’re quite lucky, as a species, that music and musical appreciation are embedded in the software of human genetics, such that we can appreciate, create, and share musical moments. Now, with genetic engineering and more ambitious paradise engineering, I think it would be beautiful to expand the modalities through which artistic appreciation, or the appreciation of beauty, can be experienced.

Music is one clear way of having aesthetic appreciation and joy. Visual art is another one. People do derive a lot of satisfaction from touch. Perhaps that could be more information-structured, in the ways that music and art are. There might be a way of changing what it means to be an intelligent thing, such that there can be an expansion of art appreciation across all of our sensory modalities, and even into sensory modalities which don’t exist yet.

David Pearce: The nature of trans-human and post-human art just leaves me floundering.

Lucas Perry: Yeah. It seems useful here just to reflect on how happy of an accident art is. As we begin to evolve, we can get into, say, A.I. here. A.I. and machine learning are likely to have very, very good models of, say, our musical preferences within the next few years. I mean, they’re already somewhat very good at it. They’ll continue to get better. Then, we have fairly rudimentary algorithms which can produce music. If we just extrapolate out into the future, eventually artificially intelligent systems will be able to produce music better than any human. In that world, what is the role of the human artist? I guess I’m not sure.

Sam Barker: I’m also completely not sure, but I feel like it’s probably going to happen in my lifetime, that these technologies get to a point where they actually do serve that purpose. At the moment, there is A.I. software that can create unique compositions, but it does so by looking at an archive of music. With AIVA, it’s Bach, and Beethoven, and Mozart. Then it reinterprets all of the codes that are embedded in that, and uses that to make new stuff. It sounds just like the composer it’s quoting, and it’s convincing. Considering this is going to get better and better, I’m pretty confident that we’ll have a system that will be able to create music to a person’s specific taste: one that, having not experienced music itself, would look at my music library and then start making things that I might like. I can’t say how I feel about that.

Let’s say it worked, and it did actually surprise me, and I was feeling like humans can’t create this kind of sensation in me; this is a level above. In a way, yeah, for somebody that doesn’t like the vagueness of the creative process, this really appeals, somehow. But given the way that things get used, and the way that our attention is sort of a resource that gets manipulated, I don’t know whether we’d have an incredible technology, once again, in the wrong hands, that’s just going to be turned into mind control, with these kinds of things put to use for nefarious purposes. I don’t fear the technology. I fear what we, in our unmodified state, might do with it.

David Pearce: Yes. I wonder when the last professional musician will retire, having been eclipsed by A.I. I mean, in some sense, we are, I think, stepping stones to something better. I don’t know when the last philosophers will be pensioned off. Hard problem of mind solved, announced in Nature, Nobel Prize beckons; distinguished philosophers of mind announce their intention to retire. Hard to imagine, but one does suppose that A.I. will be creating works of ever greater excellence tailored to the individual. I think the evolutionary roots of aesthetic appreciation are very, very deep. It does sound very disrespectful to artists, saying that A.I. could replace them, but mathematicians and scientists are probably going to be-

Lucas Perry: Everyone’s getting replaced.

Sam Barker: It’s maybe a similar step to when the camera was threatening portrait painters’ line of work. You can press a button and, in an instant, do what would’ve taken several days. I sort of am cautiously looking forward to more intelligent assistance in the production of music. If we did live in a world where there weren’t any struggles to express, or any wrongs to right, or any flaws in our character to unpick, then I would struggle to find anything other than the sort of basic pleasure of the action of making music. I wouldn’t really feel any reason to share what I made, in a sense. I think there’s a sort of moral, social purpose that’s embedded within music, if you want to grasp it. I think, if A.I. is implemented with that same moral, ethical purpose, then we should treat it as we would any other task that comes to be automated or extremely simplified. In some way, we should sort of embrace the relaxation of our workload.

There’s nothing to say that we couldn’t just continue to make music if it brought us pleasure. I think distinguishing between these two things, making music and sharing it, was an important discovery for me. If the process of making a piece of music was entirely pleasurable, but then you treat the experience like it was a failure because it didn’t reach enough people, or you didn’t get the response or the boost to your ego that you were seeking from it, then it’s your remembering self overriding your experiencing self, in a way, or your expectations getting in the way of your enjoyment of the process. If there was no purpose to it anymore, I might still make music for my own pleasure, but I like to think I would be happy that a world that didn’t require music was already a better place. I like to think that I wouldn’t be upset with my redundancy, with my P45 from David Pearce.

David Pearce: Oh, no. With a neuro chip, you see, your creative capacities could be massively augmented. You’d have narrow super intelligence on a chip. Now, in one sense, I don’t think classical digital computers are going to wake up and become conscious. They’re never actually going to be able to experience music or art or anything like this. In that sense, they will remain tools, but tools that one can actually incorporate within oneself, so that they become part of you.

Lucas Perry: A friendly flag there that many people who have been on this podcast disagree with that point. But yeah, fair enough, David. I mean, it seems that there are maybe three options. One is, as you mentioned, Sam, to find joy and beauty in more things, and to sort of let go of the need for meaning and joy to come from not being redundant. Once human beings are made obsolete or redundant, it’s quite sad for us, because we derive much of our meaning, thanks a lot, evolution, from accomplishing things and being relevant. So the first two paths here seem to be reaching some kind of spiritual evolution such that we’re okay with being redundant, or being okay with passing away as a species and allowing our descendants to proliferate. The last one would be to change what it means to be human, such that by merging or by evolution we somehow remain relevant to the progress of civilization. I don’t know which one it will be, but we’ll see.

David Pearce: I think the exciting one, for me, is where we can harness the advances in technology in a conscious way to positive ends, to greater net wellbeing in society. Maybe I’m hooked on the old ideals, but I do think a sense of purpose in your pleasure elevates the sensation somewhat.

Lucas Perry: I think human brains on MDMA would disagree with that.

Sam Barker: Yeah. You’ve obviously also reflected on an experience like that after the event, and come to the conclusion that there wasn’t, perhaps, much concrete meaning to your experience, but it was joyful, and real, and vivid. You don’t want to focus too much on the fact that it was mostly just you jumping up and down on a dance floor. I’m definitely familiar with the pleasure of essentially meaningless euphoria. I’ll say, at the very least, it’s interesting to think about. I’ve been reading a lot about the nature of happiness, and the general consensus there is that happiness is sort of a balance of pleasure and purpose. The idea that maybe you don’t need the purpose is worth exploring, I think, at least.

David Pearce: We do have this term, empty hedonism. One thing that’s striking is that when one, for whatever reason or explanation, gets happier and happier, everything seems more intensely meaningful. There are pathological forms like mania or hypomania, where it leads to grandiosity, messianic delusions, even theomania, thinking one is God. It’s possible to have much more benign versions. In practice, I think when life is based on gradients of bliss, eventually superhuman bliss, this will entail superhuman meaning and significance. Essentially, we’ve got a choice. I mean, we can either have pure bliss, or one could have a combination of bliss and hyper-motivation, and one will be able to tweak the dials.

Sam Barker: This is all such deliciously appealing language as someone who’s spending a lot of their time tweaking dials.

David Pearce: This may or may not be the appropriate time to ask, but tell me, what future projects have you planned?

Sam Barker: I’m still very much exploring the potential of music as an increaser of wellbeing, and I think it’s sort of leading me in interesting directions. At present, I’m sort of at another crossroads, I feel. The general drive to realize these sort of higher functions of music is still a driving force. I’m starting to look at what is natural in music and what is learned. Like you say, there is this long history to the way that we appreciate sound. There’s a link to all kinds of repetitive experiences that our ancestors had. There are other aspects of sound production that are also very old. The use of reverb is connected to our experience as sort of cavemen dwelling in these kinds of reverberant spaces. These were kind of sacred spaces for early humans, so this feeling when you walk into a cathedral, for example, this otherworldly experience that comes from the acoustics is, I think, somehow deeply tied to this historical situation of seeking shelter in caves, and the caves having a bigger significance in the lives of early humans.

There’s a realization, I suppose, that what we’re experiencing that relates to music is rhythm, tone, timbre, and noise. If you just sort of pay attention to your background noise, the things that you’re most familiar with are actually not very musical. You don’t really find harmony in nature very much. I’m sort of forming some ideas around which parts of music and our response to music are cultural, and which are natural. It’s sort of a strange word to apply. Our harmonic language is a technical construction. Rhythm is something we have a much deeper connection with, through our lives as defined by the rhythms of planets, and our dividing of time into smaller and smaller ratios, down to heartbeats and breathing. We’re sort of experiencing a really complex, polyrhythmic, silent form of music all the time, I suppose. I’m separating these two concepts of rhythm and harmony and trying to get to the bottom of their function in the goal of elevating bliss and happiness. I guess I’m looking at what the tools I’m using are and what their role could be, if that makes any sense.

David Pearce: In some sense, this sounds weird, but I think, insofar as it’s possible, one does have a duty to take care of oneself, and if one can give happiness to others, not least by music, in that sense, one can be a more effective altruist. In some sense, perhaps one feels, ethically, that one ought to be working 12, 14 hours a day to make the world a better place. Equally, we all have our design limitations, and just being able to relax, either as a consumer of music or as a creator of music, has a valuable role, too. It really does. One needs to take care of one’s own mental health to be able to help others.

Sam Barker: I feel like the kind of under-the-bonnet tinkering that, in some way, needs to happen for us to really make use of the new technologies means we need to do something about human nature. I feel like we’re a bit further away from those sorts of realities than we are with the technological side. I think there need to be sort of emergency measures, in some way, to improve human nature through the old-fashioned social, cultural nudges, perhaps, as a stopgap until we can really roll our sleeves up and change human nature on a molecular level.

David Pearce: Yeah. I think we might need both: all the kinds of environmental, social, and political reform together with the biological and genetic, via a happiness revolution. I would love to be able to offer a 100-year plan, a blueprint to get rid of suffering and replace it with gradients of bliss, paradise engineering. In practice, I feel the story of Darwinian life still has several centuries to go. I hope I’m too pessimistic. Some of my trans-humanist colleagues anticipate an intelligence explosion, or a complete shortcut via the fusion of humans and our machines, but we shall see.

Lucas Perry: David, Sam and I, and everyone else, loves your prose so much. Could you just kind of go off here and muster your best prose to give us some thoughts as beautiful as sunsets for how good the future of music, and art, and gradients of intelligent bliss will be?

David Pearce: I’m afraid I’ll have to put eloquence on hold, but yeah. Just try for a moment to remember your most precious, beautiful, sublime experience in your life, whatever it was. It may or may not be suitable for public consumption. Just try to hold it briefly. Imagine if life could be like that, only far, far better, all the time, and with no nasty side effects, no adverse social consequences. It is going to be possible to build this kind of super-civilization based on gradients of bliss. Be over-ambitious. Needless to say, with anything I have written, unfortunately you’d need to wade through all manner of fluff. I just want to say, I’m really thrilled and chuffed with Utility, so anything else is just vegan icing on the cake.

Sam Barker: Beautiful. I’m really, like I say, super relieved that it was taken as such. It was really a reconfiguring of my approach and my involvement with the thing that I’ve sort of given my life to thus far, and a sort of a clarification of the purpose. Aside from anything else, it just put me in a really perfect mindset for addressing mental obstacles in the way of my own happiness. Then, once you get that, you sort of feel like sharing it with other people. I think it started off a very positive process in my thoughts, which sort of manifested in the work I was doing. Extremely grateful for your generosity in lending these ideas. I hope, actually, just that people scratched the surface a little bit, and maybe plug some of the terms into a search engine and got kind of lost in the world of utopia a little bit. That was really the main reason for putting these references in and pushing people in that direction.

David Pearce: Well, you’ve given people a lot of pleasure, which is fantastic. Certainly, I’d personally rather be thought of as associated with paradise engineering and gradients of bliss, rather than the depressive, gloomy, negative utilitarian.

Sam Barker: Yeah. There’s a real dark side to the idea. I think the thing I read after The Hedonistic Imperative was some of Les Knight’s writing about the Voluntary Human Extinction Movement. I honestly don’t know if he’d be classified as a utilitarian, but it’s this sort of ecocentric utilitarianism, which you sort of endorse through including the animal kingdom in your manifesto. There’s sort of a growing appreciation for this kind of antinatalist sentiment.

David Pearce: Yes, antinatalism seems to be growing, but I don’t think it’s ever going to be dominant. The only way to get rid of suffering and ensure a high quality of life for all sentient beings is going to be, essentially, to get to the heart of the problem and rewrite ourselves. I did actually do an antinatalist podcast the other week, but I’m only a soft antinatalist, because there’s always going to be selection pressure in favor of a predisposition to go forth and multiply. One needs to build alliances with fanatical life lovers, even if, when one contemplates the state of the world, one has some rather dark thoughts.

Sam Barker: Yeah.

Lucas Perry: All right. So, are there any questions or things we haven’t touched on that you guys would like to talk about?

David Pearce: No. I just really want to say thank you to Lucas for organizing this. You’ve got quite a diverse range of podcasts now. Sam, I’m honored. Thank you very much. Really happy this has gone well.

Sam Barker: Yeah. David, really, it’s been my pleasure. Really appreciate your time and acceptance of how I’ve sort of handled your ideas.

Lucas Perry: I feel really happy that I was able to connect you guys, and I also think that both of you make the world more beautiful through your work and presence. For that, I am grateful and appreciative. I also very much enjoy and take inspiration from both of your work, so keep on doing what you’re doing.

Sam Barker: Thanks, Lucas. Same to you. Really.

David Pearce: Thank you, Lucas. Very much appreciated.

Lucas Perry: I hope that you’ve enjoyed the conversation portion of this podcast. Now, I’m happy to introduce the guest mix by Barker. 

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we’re serious both as individuals and as a species about improving the world, it’s crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

Topics discussed in this episode include:

  • The problem of communication 
  • Global priorities 
  • Existential risk 
  • Animal suffering in both wild animals and factory farmed animals 
  • Global poverty 
  • Artificial general intelligence risk and AI alignment 
  • Ethics
  • Sam’s book, The Moral Landscape

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

 

Timestamps: 

0:00 Intro

3:52 What are the most important problems in the world?

13:14 Global priorities: existential risk

20:15 Why global catastrophic risks are more likely than existential risks

25:09 Longtermist philosophy

31:36 Making existential and global catastrophic risk more emotionally salient

34:41 How analyzing the self makes longtermism more attractive

40:28 Global priorities & effective altruism: animal suffering and global poverty

56:03 Is machine suffering the next global moral catastrophe?

59:36 AI alignment and artificial general intelligence/superintelligence risk

01:11:25 Expanding our moral circle of compassion

01:13:00 The Moral Landscape, consciousness, and moral realism

01:30:14 Can bliss and wellbeing be mathematically defined?

01:31:03 Where to follow Sam and concluding thoughts

 

You can follow Sam here: 

samharris.org

Twitter: @SamHarrisOrg

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Sam Harris where we get into issues related to global priorities, effective altruism, and existential risk. In particular, this podcast covers the critical importance of improving our ability to communicate and converge on the truth, animal suffering in both wild animals and factory farmed animals, global poverty, artificial general intelligence risk and AI alignment, as well as ethics and some thoughts on Sam’s book, The Moral Landscape. 

If you find this podcast valuable, you can subscribe or follow us on your preferred listening platform, like on Apple Podcasts, Spotify, Soundcloud, or whatever your preferred podcasting app is. You can also support us by leaving a review. 

Before we get into it, I would like to echo two announcements from previous podcasts. If you’ve been tuned into the FLI Podcast recently you can skip ahead just a bit. The first is that there is an ongoing survey for this podcast where you can give me feedback and voice your opinion about content. This goes a super long way for helping me to make the podcast valuable for everyone. You can find a link for the survey about this podcast in the description of wherever you might be listening. 

The second announcement is that at the Future of Life Institute we are in the midst of our search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered pay outs where the first to invite the nomination winner gets $1,500, whoever first invited them gets $750, whoever first invited the previous person gets $375, and so on. You can find details about that on the Future of Life Award page. 

Sam Harris has a PhD in neuroscience from UCLA and is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). Sam hosts the Making Sense Podcast and is also the creator of the Waking Up App, which is for anyone who wants to learn to meditate in a modern, scientific context. Sam has practiced meditation for more than 30 years and studied with many Tibetan, Indian, Burmese, and Western meditation teachers, both in the United States and abroad.

And with that, here’s my conversation with Sam Harris.

Starting off here, trying to get a perspective on what matters most in the world and global priorities or crucial areas for consideration, what do you see as the most important problems in the world today?

Sam Harris: There is one fundamental problem which is encouragingly or depressingly non-technical, depending on your view of it. I mean, it should be such a simple problem to solve, but it’s seeming more or less totally intractable, and that’s just the problem of communication: the problem of persuasion, the problem of getting people to agree on a shared consensus view of reality, to acknowledge basic facts, and to have their probability assessments of various outcomes converge through honest conversation. Politics is obviously the great confounder of this meeting of the minds. I mean, our failure to fuse cognitive horizons through conversation is reliably derailed by politics. But there are other sorts of ideology that do this just as well, religion being perhaps first among them.

And so it seems to me that the first problem we need to solve, the place where we need to make progress and we need to fight for every inch of ground and try not to lose it again and again is in our ability to talk to one another about what is true and what is worth paying attention to, to get our norms to align on a similar picture of what matters. Basically value alignment, not with superintelligent AI, but with other human beings. That’s the master riddle we have to solve and our failure to solve it prevents us from doing anything else that requires cooperation. That’s where I’m most concerned. Obviously technology influences it, social media and even AI and the algorithms behind the gaming of everyone’s attention. All of that is influencing our public conversation, but it really is a very apish concern and we have to get our arms around it.

Lucas Perry: So that’s quite interesting, and not the answer that I was expecting. I think that sounds like quite the crucial stepping stone. The fact that climate change isn’t something that we’re able to agree upon, and is instead a matter of political opinion, drives me crazy. And that’s just one of many different global catastrophic or existential risk issues.

Sam Harris: Yeah. The COVID pandemic has made me especially skeptical of our agreeing to do anything about climate change. The fact that we can’t persuade people about the basic facts of epidemiology when this thing is literally coming in through the doors and windows, and even very smart people are now going down the rabbit hole of believing this is on some level a hoax, shows that people’s political and economic interests just bend their view of basic facts. I mean, it’s not to say that there hasn’t been a fair amount of uncertainty here, but it’s not the sort of uncertainty that should give us these radically different views of what’s happening out in the world. Here we have a pandemic moving in real time, where we can see a wave of illness breaking in Italy a few weeks before it breaks in New York. And again, there’s just this Baghdad Bob level of denialism. The prospect of our getting our heads straight with respect to climate change, in light of what’s possible in the middle of a pandemic, seems at the moment totally farfetched to me.

For something like climate change, I really think a technological elite needs to just look at the problem and decide to solve it by changing the kinds of products we create and the way we manufacture things, and we just have to get out of the politics of it. It can’t be a matter of persuading more than half of American society to make economic sacrifices. It’s much more along the lines of just building cars and other products that are carbon neutral, that people want, and solving the problem that way.

Lucas Perry: Right. Incentivizing the solution by making products that are desirable and satisfy people’s self-interest.

Sam Harris: Yeah. Yeah.

Lucas Perry: I do want to explore more actual global priorities. This point about the necessity of reason for being able to at least converge upon the global priorities that are most important seems to be a crucial and necessary stepping stone. So before we get into talking about things like existential and global catastrophic risk, do you see a way of this project of promoting reason and good conversation and converging around good ideas succeeding? Or do you have any other things to sort of add to these instrumental abilities humanity needs to cultivate for being able to rally around global priorities?

Sam Harris: Well, I don’t see a lot of innovation beyond just noticing that conversation is the only tool we have. Intellectual honesty, spread through the mechanism of conversation, is the only tool we have to converge in these ways. I guess the thing to notice that’s guaranteed to make it difficult is bad incentives. So we should always be noticing what incentives are doing behind the scenes to people’s cognition. There are things that could be improved in media. I think the advertising model is a terrible system of incentives for journalists and anyone else who’s spreading information. You’re incentivized to create sensational hot takes and clickbait, and to depersonalize everything, to just create one lurid confection after another that really doesn’t get at what’s true. This tribalizes almost every conversation and forces people to view it through a political lens. The way this is all amplified by Facebook’s business model, and the fact that you can sell political ads on Facebook and use their micro-targeting algorithm to, frankly, distort people’s vision of reality and get them to vote or not vote based on some delusion.

All of this is pathological and it has to be disincentivized in some way. The business model of digital media is part of the problem. But beyond that, people have to be better educated and realize that thinking through problems and understanding facts and creating better arguments and responding to better arguments and realizing when you’re wrong, these are muscles that need to be trained, and there are certain environments in which you can train them well. And there’s certain environments where they are guaranteed to atrophy. Education largely consists in the former, in just training someone to interact with ideas and with shared perceptions and with arguments and evidence in a way that is agnostic as to how things will come out. You’re just curious to know what’s true. You don’t want to be wrong. You don’t want to be self-deceived. You don’t want to have your epistemology anchored to wishful thinking and confirmation bias and political partisanship and religious taboos and other engines of bullshit, really.

I mean, you want to be free of all that, and you don’t want to have your personal identity trimming down your perception of what is true or likely to be true or might yet happen. People have to understand what it feels like to be willing to reason about the world in a way that is unconcerned about the normal, psychological and tribal identity formation that most people, most of the time use to filter against ideas. They’ll hear an idea and they don’t like the sound of it because it violates some cherished notion they already have in the bag. So they don’t want to believe it. That should be a tip off. That’s not more evidence in favor of your worldview. That’s evidence that you are an ape who’s disinclined to understand what’s actually happening in the world. That should be an alarm that goes off for you, not a reason to double down on the last bad idea you just expressed on Twitter.

Lucas Perry: Yeah. The way the ego, concern for reputation, personal identity, and shared human psychological biases influence the way that we hold conversations seems to be a really big hindrance here. Being aware of how your mind is reacting in each moment to the kinetics of the conversation, and to what is happening, can be really skillful for catching unwholesome or unskillful reactions. And I’ve found that non-violent communication has been really helpful for me in terms of having valuable, open discourse where one’s identity or pride isn’t on the line. The ability to seek truth with another person, instead of having a debate or argument, is certainly a skill to be developed. Yet that kind of format for discussion isn’t always rewarded or promoted as well as something like an adversarial debate, which tends to get a lot more attention.

Sam Harris: Yeah.

Lucas Perry: So as we begin to strengthen our epistemology and conversational muscles so that we’re able to arrive at agreement on core issues, that’ll allow us to create a better civilization and work on what matters. So I do want to pivot here into what those specific things might be. Now I have three general categories, maybe four, for us to touch on here.

The first is existential risks, which primarily come from technology and which might lead to the extinction of Earth-originating life, or more specifically just the extinction of human life. You have a TED Talk on AGI risk, that’s artificial general intelligence risk: the risk of machines becoming as smart or smarter than human beings and being misaligned with human values. There’s also synthetic bio risk, where advancements in genetic engineering may unleash a new age of engineered pandemics which are more lethal than anything produced by nature. We have nuclear war, and we also have new technologies or events that might come about that we aren’t aware of or can’t predict yet. The other categories in terms of global priorities that I want to touch on are global poverty, animal suffering, and human health and longevity. So how do you think about and prioritize these issues, and what is your reaction to their relative importance in the world?

Sam Harris: Well, I’m persuaded that thinking about existential risk is something we should do much more. It is amazing how few people spend time on this problem. It’s a big deal that we have the survival of our species as a blind spot, but I’m more concerned about what seems likelier to me, which is not that we will do something so catastrophically unwise as to erase ourselves, certainly not in the near term. We’re clearly capable of doing that, but I think it’s more likely that we’re capable of ensuring our unrecoverable misery for a good long while. We could just make life basically not worth living, but we, or someone, will be forced to live it all the while; basically, a Road Warrior-like hellscape could await us as opposed to just pure annihilation. So that’s a civilizational risk that I worry about more than extinction, because it just seems probabilistically much more likely to happen no matter how big our errors are.

I worry about our stumbling into an accidental nuclear war. That’s something that I think is still pretty high on the list of likely ways we could completely screw up the possibility of human happiness in the near term. It’s humbling to consider what an opportunity cost this pandemic is, and it’s a minor one compared to what’s possible, right? I mean, we’ve got this pandemic that has locked down most of humanity, and every problem we had and every risk we were running as a species prior to anyone learning the name of this virus is still here. The threat of nuclear war has not gone away. It’s just that this has taken up all of our bandwidth. We can’t think about much else. It’s also humbling to observe how hard a time we’re having even agreeing about what’s happening here, much less responding intelligently to the problem. If you imagine a pandemic that was orders of magnitude more deadly and more transmissible, man, this is a pretty startling dress rehearsal.

I hope we learn something from this. I hope we think more about things like this happening in the future and prepare for them in advance. I mean, the fact that we have a CDC that still cannot get its act together is just astounding. And again, politics is the thing that is gumming up the gears in any machine that would otherwise run halfway decently at the moment. I mean, we have a truly deranged president, and that is not a partisan observation. That is something that can be said about Trump, and it would not be said about most other Republican presidents. There’s nothing I would say about Trump that I could say about someone like Mitt Romney or any other prominent Republican. This is the perfect circumstance to accentuate the downside of having someone in charge who lies perhaps more readily than any person in human history.

It’s like toxic waste at the informational level has been spread around for three years now and now it really matters that we have an information ecosystem that has no immunity against crazy distortions of the truth. So I hope we learn something from this. And I hope we begin to prioritize the list of our gravest concerns and begin steeling our civilization against the risk that any of these things will happen. And some of these things are guaranteed to happen. The thing that’s so bizarre about our failure to grapple with a pandemic of this sort is, this is the one thing we knew was going to happen. This was not a matter of “if.” This was only a matter of “when.” Now nuclear war is still a matter of “if”, right? I mean, we have the bombs, they’re on hair-trigger, overseen by absolutely bizarre and archaic protocols and highly outdated technology. We know this is just a doomsday system we’ve built that could go off at any time through sheer accident or ineptitude. But it’s not guaranteed to go off.

But pandemics are just guaranteed to emerge and we still were caught flat footed here. And so I just think we need to use this occasion to learn a lot about how to respond to this sort of thing. And again, if we can’t convince the public that this sort of thing is worth paying attention to, we have to do it behind closed doors, right? I mean, we have to get people into power who have their heads screwed on straight here and just ram it through. There has to be a kind of Manhattan Project level urgency to this, because this is about as benign a pandemic as we could have had, that would still cause significant problems. An engineered virus, a weaponized virus that was calculated to kill the maximum number of people. I mean, that’s a zombie movie, all of a sudden, and we’re not ready for the zombies.

Lucas Perry: I think that my two biggest updates from the pandemic were that human civilization is much more fragile than I thought it was, and that I now trust the US government way less in its capability to mitigate these things. I think at one point you said that 9/11 was the first time that you felt like you were actually in history. And as someone who’s 25, being in the COVID pandemic is the first time that I feel like I’m in human history. Because my life so far has been very normal and constrained, and the boundaries between everything have been very rigid and solid, but this is perturbing that.

So you mentioned that you were slightly less worried about humanity just erasing ourselves via some kind of existential risk, and part of the idea here seems to be that there are futures that are not worth living. Like, if there’s such a thing as a moment or a day that isn’t worth living, then there are also futures that are not worth living. So I’m curious if you could unpack why you feel that these periods of time that are not worth living are more likely than existential risks, and whether you think that some of those existential conditions could be permanent. And could you speak a little bit about the relative likelihood of existential risks and suffering risks, and whether you see the more likely suffering risks as ones that are constrained in time or indefinite?

Sam Harris: In terms of the probabilities, it just seems obvious that it is harder to eradicate the possibility of human life entirely than it is to just kill a lot of people and make the remaining people miserable. Right? If a pandemic spreads, whether it’s natural or engineered, that has 70% mortality and the transmissibility of measles, that’s going to kill billions of people. But it seems likely that it may spare some millions of people or tens of millions of people, even hundreds of millions of people and those people will be left to suffer their inability to function in the style to which we’ve all grown accustomed. So it would be with war. I mean, we could have a nuclear war and even a nuclear winter, but the idea that it’ll kill every last person or every last mammal, it would have to be a bigger war and a worse winter to do that.

So I see the prospect of things going horribly wrong to be one that yields, not a dial tone, but some level of remaining, even civilized life, that’s just terrible, that nobody would want. Where we basically all have the quality of life of what it was like on a mediocre day in the middle of the civil war in Syria. Who wants to live that way? If every city on Earth is basically a dystopian cell on a prison planet, that for me is a sufficient ruination of the hopes and aspirations of civilized humanity. That’s enough to motivate all of our efforts to avoid things like accidental nuclear war and uncontrolled pandemics and all the rest. And in some ways it’s more motivating, because when you ask people, what’s the problem with the failure to continue the species? Like, if we all died painlessly in our sleep tonight, what’s the problem with that?

That actually stumps some considerable number of people because they immediately see that the complete annihilation of the species painlessly is really a kind of victimless crime. There’s no one around to suffer our absence. There’s no one around to be bereaved. There’s no one around to think, oh man, we could have had billions of years of creativity and insight and exploration of the cosmos, and now the lights have gone out on the whole human project. There’s no one around to suffer that disillusionment. So what’s the problem? I’m persuaded that that’s not the perfect place to stand to evaluate the ethics. I agree that losing that opportunity is a negative outcome that we want to value appropriately, but it’s harder to value it emotionally, and it’s not as clear. I mean, there’s also an asymmetry between happiness and suffering, which I think is hard to get around.

We are perhaps rightly more concerned about suffering than we are about losing opportunities for wellbeing. If I told you, you could have an hour of the greatest possible happiness, but it would have to be followed by an hour of the worst possible suffering, I think most people given that offer would say, oh, well, okay, I’m good. I’ll just stick with what it’s like to be me. The hour of the worst possible misery seems like it’s going to be worse than the highest possible happiness is going to be good, and I do sort of share that intuition. And when you think about it in terms of the future of humanity, I think it is more motivating to think, not that your grandchildren might not exist, but that your grandchildren might live horrible lives, really unendurable lives, and they’ll be forced to live them because they’ll be born. If for no other reason than that we have to persuade some people to take these concerns seriously, I think that’s the place to put most of the emphasis.

Lucas Perry: I think that’s an excellent point. I think it makes it more morally salient and leverages human self-interest more. One distinction that I want to make is the distinction between existential risks and global catastrophic risks. Global catastrophic risks are those which would kill a large fraction of humanity without killing everyone, and existential risks are ones which would exterminate all people or all Earth-originating intelligent life. And the former, the global catastrophic risks, are the ones which you’re primarily discussing here, where something goes really bad and now we’re left with some pretty bad existential situation.

Sam Harris: Yeah.

Lucas Perry: Now we’re not locked in that forever. So it’s pretty far away from being what is talked about in the effective altruism community as a suffering risk. That actually might only last a hundred or a few hundred years, or maybe less. Who knows; it depends on what happened. But now, taking a bird’s eye view again on global priorities and standing on a solid ground of ethics, what is your perspective on longtermist philosophy? This is the position or idea that the deep future has overwhelming moral priority, given the countless trillions of lives that could be lived. So if an existential risk occurs, then we’re basically canceling the whole future, like you mentioned. There won’t be any suffering and there won’t be any joy, but we’re missing out on a ton of good, it would seem. And with the continued evolution of life, through genetic engineering and enhancements and artificial intelligence, it would seem that the future could also be unimaginably good.

If you do an expected value calculation about existential risks, you can estimate very roughly the likelihood of each existential risk, whether it be from artificial general intelligence or synthetic bio or nuclear weapons or a black swan event that we couldn’t predict. And if you multiply that by the astronomical amount of value in the future, you’ll get some astronomical number. Does this kind of argument or viewpoint do the work for you to commit you to seeing existential risk as a global priority, or the central global priority?

Sam Harris: Well, it doesn’t do the emotional work, largely because we’re just bad at thinking about long-term risk. It doesn’t even have to be that long-term for our intuitions and concerns to degrade irrationally. We’re bad at thinking about the well-being even of our future selves as we get further out in time. The jargon term is that we “hyperbolically discount” our future well-being. People will smoke cigarettes or make other imprudent decisions in the present. They know they will be the inheritors of these bad decisions, but there’s some short-term upside.

The mere pleasure of the next cigarette, say, convinces them that they don’t really have to think long and hard about what their future self will wish they had done at this point. Our ability to be motivated by what we think is likely to happen in the future is even worse when we’re thinking about our descendants. Right? People we either haven’t met yet or may never meet. I have kids, but I don’t have grandkids. How much of my bandwidth is taken up thinking about the kinds of lives my grandchildren will have? Really none. It’s conserved. It’s safeguarded by my concern about my kids, at this point.

But, then there are people who don’t have kids and are just thinking about themselves. It’s hard to think about the comparatively near future. Even a future that, barring some real mishap, you have every expectation of having to live in yourself. It’s just hard to prioritize. When you’re talking about the far future, it becomes very, very difficult. You just have to have the science fiction geek gene or something disproportionately active in your brain, to really care about that.

Unless you think you are somehow going to cheat death and get aboard the starship when it’s finally built. If you’re popping 200 vitamins a day with Ray Kurzweil and you think you might just be in the cohort of people who are going to make it out of here without dying because we’re just on the cusp of engineering death out of the system, then I could see, okay, there’s a self-interested view of it. But if you’re really talking about hypothetical people who you know you will never come in contact with, I think it’s hard to be sufficiently motivated, even if you believe the moral algebra here.

It’s not clear to me that it need run through. I agree with you that if you do a basic expected value calculation here, and you start talking about trillions of possible lives, their interests must outweigh the interests of the 7.8 billion, or whatever it is, of us currently alive. But there are a few asymmetries here, again. Take the asymmetry between actual and hypothetical lives: there are no identifiable lives who would be deprived of anything if we all just decided to stop having kids. You have to take the point of view of the people alive who make this decision.

If we all just decided, “Listen. These are our lives to live. We can decide how we want to live them. None of us want to have kids anymore.” If we all independently made that decision, the consequence on this calculus is we are the worst people, morally speaking, who have ever lived. That doesn’t quite capture the moment, the experience or the intentions. We could do this thing without ever thinking about the implications of existential risk. If we didn’t have a phrase for this and we didn’t have people like ourselves talking about this is a problem, people could just be taken in by the overpopulation thesis.

The idea being that overpopulation is really the thing that is destroying the world, and what we need is some kind of Gaian reset, where the Earth reboots without us. Let’s just stop having kids and let nature reclaim the edges of the cities. You could see a kind of utopian environmentalism creating some dogma around that, where it was no one’s intention ever to commit some kind of horrific crime. Yet, on this existential risk calculus, that’s what would have happened. It’s hard to think about the morality there when you talk about people deciding not to have kids, and it would be the same catastrophic outcome.

Lucas Perry: That situation to me seems to be like looking over the possible moral landscape and seeing a mountain, or not seeing a mountain but there still being a mountain. You can have whatever kinds of intentions you want, but you’re still missing it. From a purely consequentialist framework, I don’t feel so bad saying that this would probably be one of the worst things that has ever happened.

Sam Harris: The asymmetry here between suffering and happiness still seems psychologically relevant. It’s not quite the worst thing that’s ever happened, but the best things that might have happened have been canceled. Granted, I think there’s a place to stand where you could think that is a horrible outcome, but again, it’s not the same thing as creating some hell and populating it.

Lucas Perry: I see what you’re saying. I’m not sure that I quite share the intuition about the asymmetry between suffering and well-being. I’m somewhat suspicious of that, but that would be a huge tangent right now, I think. Now, one of the crucial things that you said was that for those who are not really compelled by the long-term future argument, who don’t have the science fiction geek gene and are not compelled by moral philosophy, the essential way to compel them to care about global catastrophic and existential risk seems to be to demonstrate how likely these risks are within this century.

And so their direct descendants, like their children or grandchildren, or even they themselves, may live in a world that is very bad, or may die in some kind of a global catastrophe, which is terrifying. Do you see this as the primary way of leveraging human self-interest and feelings and emotions to make existential and global catastrophic risk salient and pertinent for the masses?

Sam Harris: It’s certainly half the story, and it might be the most compelling half. I’m not saying that we should be just worried about the downside, because the upside also is something we should celebrate and aim for. The other side of the story is that we’ve made incredible progress. Take someone like Steven Pinker and his big books of what is often perceived as happy talk: he’s pointing out all of the progress, morally and technologically and at the level of public health.

It’s just been virtually nothing but progress. There’s no point in history where you’re luckier to live than in the present. That’s true. I think that the thing that Steve’s story conceals, or at least doesn’t spend enough time acknowledging, is that the risk of things going terribly wrong is also increasing. It was also true a hundred years ago that it would have been impossible for one person or a small band of people to ruin life for everyone else.

Now that’s actually possible. Just imagine if this current pandemic were an engineered virus, more like a lethal form of measles. It might take five people to create that and release it. Here we would be locked down in a truly terrifying circumstance. The risk is ramped up. I think we just have to talk about both sides of it. There is no limit to how beautiful life could get if we get our act together. Take an argument of the sort that David Deutsch makes about the power of knowledge.

Every problem has a solution born of a sufficient insight into how things work, i.e., knowledge, unless the laws of physics rule it out. If it’s compatible with the laws of physics, knowledge can solve the problem. That’s virtually a blank check with reality that we could live to cash, if we don’t kill ourselves in the process. Again, as the upside becomes more and more obvious, the risk that we’re going to do something catastrophically stupid is also increasing. The principles here are the same. The only reason why we’re talking about existential risk is because we have made so much progress. Without the progress, there’d be no way to make a sufficiently large mistake. It really is two sides of the coin of increasing knowledge and technical power.

Lucas Perry: One thing that I wanted to throw in here in terms of the kinetics of long-termism and emotional saliency: it would be stupidly optimistic, I think, to think that everyone could become selfless bodhisattvas. The way in which you promote meditation and mindfulness, and your arguments against the conventional, experiential and conceptual notion of the self, have, for me at least, dissolved much of the barriers which would hold me back from being emotionally motivated by long-termism.

Now, that itself I think, is another long conversation. When your sense of self is becoming nudged, disentangled and dissolved in new ways, the idea that it won’t be you in the future, or the idea that the beautiful dreams that Dyson spheres will be having in a billion years are not you, that begins to relax a bit. That’s probably not something that is helpful for most people, but I do think that it’s possible for people to adopt and for meditation, mindfulness and introspection to lead to this weakening of sense of self, which then also opens one’s optimism, and compassion, and mind towards the long-termist view.

Sam Harris: That’s something that you get from reading Derek Parfit’s work. The paradoxes of identity that he so brilliantly framed and tried to reason through yield something like what you’re talking about. It’s not so important whether it’s you, because this notion of you is in fact, paradoxical to the point of being impossible to pin down. Whether the you that woke up in your bed this morning is the same person who went to sleep in it the night before, that is problematic. Yet there’s this fact of some degree of psychological continuity.

The basic fact experientially is just, there is consciousness and its contents. The only place for feelings, and perceptions, and moods, and expectations, and experience to show up is in consciousness, whatever it is and whatever its connection to the physics of things actually turns out to be. There’s just consciousness. The question of where it appears is a genuinely interesting one philosophically, and intellectually, and scientifically, and ultimately morally.

Because if we build conscious robots or conscious computers and build them in a way that causes them to suffer, we’ve just done something terrible. We might do that inadvertently if we don’t know how consciousness arises based on information processing, or whether it does. It’s all interesting terrain to think about. If the lights are still on a billion years from now, and the view of the universe is unimaginably bright, and interesting and beautiful, and all kinds of creative things are possible by virtue of the kinds of minds involved, that will be much better than any alternative. That’s certainly how it seems to me.

Lucas Perry: I agree. Some things here that ring true seem to be, you always talk about how there’s only consciousness and its contents. I really like the phrase, “Seeing from nowhere.” That usually is quite motivating for me, in terms of the arguments against the conventional conceptual and experiential notions of self. There just seems to be instantiations of consciousness intrinsically free of identity.

Sam Harris: Two things to distinguish here. There’s the philosophical, conceptual side of the conversation, which can show you that things like your concept of a self, or certainly your concept of a self that could have free will, don’t make a lot of sense. It doesn’t make sense when mapped onto physics. It doesn’t make sense when looked for neurologically. Any way you look at it, it begins to fall apart. That’s interesting, but again, it doesn’t necessarily change anyone’s experience.

It’s just a riddle that can’t be solved. Then there’s the experiential side which you encounter more in things like meditation, or psychedelics, or sheer good luck where you can experience consciousness without the sense that there’s a subject or a self in the center of it appropriating experiences. Just a continuum of experience that doesn’t have structure in the normal way. What’s more, that’s not a problem. In fact, it’s the solution to many problems.

A lot of the discomfort you have felt psychologically goes away when you punch through to a recognition that consciousness is just the space in which thoughts, sensations and emotions continually appear, change and vanish. There’s no thinker authoring the thoughts. There’s no experiencer in the middle of the experience. That’s not to say you don’t have a body; every sign that you have a body is still appearing. There are sensations of tension, warmth, pressure and movement.

There are sights, there are sounds but again, everything is simply an appearance in this condition, which I’m calling consciousness for lack of a better word. There’s no subject to whom it all refers. That can be immensely freeing to recognize, and that’s a matter of a direct change in one’s experience. It’s not a matter of banging your head against the riddles of Derek Parfit or any other way of undermining one’s belief in personal identity or the reification of a self.

Lucas Perry: A little bit earlier, we talked a little bit about the other side of the existential risk coin. The other side of that is existential hope, as we like to call it at The Future of Life Institute. We’re not just a doom and gloom society. It’s also about how the future can be unimaginably good if we can get our act together and apply the appropriate wisdom to manage and steward our technologies with benevolence in mind.

Pivoting in here and reflecting a little bit on the implications of some of this no self conversation we’ve been having for global priorities, the effective altruism community has narrowed down to three of these global priorities as central issues of consideration: existential risk, global poverty, and animal suffering. We talked a bunch about existential risk already. Global poverty is widespread, while many of us live in quite nice and abundant circumstances.

Then there’s animal suffering, which can be thought of in two categories. One is factory farmed animals, where we have billions upon billions of animals being born into miserable conditions and slaughtered for sustenance. Then we also have wild animal suffering, which is a bit more esoteric and seems harder to get any traction on alleviating. Thinking about these last two points, global poverty and animal suffering, what is your perspective on them?

I find people’s unwillingness to empathize and be compassionate towards animal suffering quite frustrating, as well as towards global poverty, of course. I’m curious whether you view the perspective of no self as potentially being informative or helpful for leveraging human compassion and motivation to help other people and to help animals. One quick argument here comes from the conventional view of self, so it isn’t strictly true or rational, but it is motivating for me: I feel like I was just born as me, and then I just woke up one day as Lucas.

I, referring to this conventional and experientially illusory notion that I have of myself, this convenient fiction that I have. Now, you’re going to die, and you could wake up as a factory farmed animal. Surely there are those billions upon billions of instantiations of consciousness that are just going through misery. If the self is an illusion, then there are selfless chicken and cow experiences of enduring suffering. Do you have any thoughts or reactions to global poverty, animal suffering, and what I mentioned here?

Sam Harris: I guess the first thing to observe is that, again, we are badly set up to prioritize what should be prioritized and to have an emotional response commensurate with what we can rationally understand to be so. We have a problem of motivation. We have a problem of making data real. This has been psychologically studied, but it’s just manifest in oneself and in the world. We care more about the salient narrative that has a single protagonist than we do about the data on even human suffering.

The classic example here is one little girl falls down a well, and you get wall-to-wall news coverage. All the while there could be a genocide or a famine killing hundreds of thousands of people, and it doesn’t merit more than five minutes of one broadcast. That’s clearly a bug, not a feature, morally speaking, but it’s something we have to figure out how to work with because I don’t think it’s going away. One of the things that the effective altruism philosophy has done, I think usefully, is that it has separated two projects which, up until the emergence of effective altruism, were more or less always conflated.

They’re both valid projects, but one has much greater moral consequence. The fusion of the two is the concern about giving and how it makes one feel. I want to feel good about being philanthropic. Therefore, I want to give to causes that give me these good feels. In fact, at the end of the day, the feeling I get from giving is what motivates me to give. If I’m giving in a way that doesn’t really produce that feeling, well, then I’m going to give less or give less reliably.

Even in a contemplative Buddhist context, there’s an explicit fusion of these two things. The reason to be moral and to be generous is not merely, or even principally, the effect on the world. The reason is because it makes you a better person. It gives you a better mind. You feel better in your own skin. It is in fact, more rewarding than being selfish. I think that’s true, but that doesn’t get at really, the important point here, which is we’re living in a world where the difference between having good and bad luck is so enormous.

The inequalities are so shocking and indefensible. I can take no responsibility for the fact that I was born me and not in some hellhole in the middle of a civil war, soon to be orphaned, impoverished, and riddled with disease. That difference in luck is the difference that matters more than anything else in my life. What the effective altruist community has prioritized is actually helping the most people, or the most sentient beings.

That is fully divorceable from how something makes you feel. Now, I think it shouldn’t ultimately be divorceable. I think we should recalibrate our feelings, or struggle to, so that we do find doing the most good the most rewarding thing in the end, but it’s hard to do. My inability to do it personally is something that I have just consciously corrected for. I’ve talked about this a few times on my podcast. When Will MacAskill came on my podcast and we spoke about these things, I was convinced at the end of the day, “Well, I should take this seriously.”

I recognize that fighting malaria by sending bed nets to people in sub-Saharan Africa is not a cause I find particularly sexy. I don’t find it that emotionally engaging. I don’t find it that rewarding to picture the outcome. Again, compared to other possible ways of intervening in human misery and producing some better outcome, it’s not the same thing as rescuing the little girl from the well. Yet, I was convinced that, as Will said on that podcast and as organizations like GiveWell attest, giving money to the Against Malaria Foundation was and remains one of the absolute best uses of every dollar to mitigate unnecessary death and suffering.

I just decided to automate my giving to the Against Malaria Foundation because I knew I couldn’t be trusted to wake up every day, or every month or every quarter, whatever it would be, and recommit to that project because some other project would have captured my attention in the meantime. I was either going to give less to it or not give at all, in the end. I’m convinced that we do have to get around ourselves and figure out how to prioritize what a rational analysis says we should prioritize and get the sentimentality out of it, in general.

It’s very hard to escape entirely. I think we do need to figure out creative ways to reformat our sense of reward. The reward we find in helping people has to begin to become more closely coupled to what is actually most helpful. Conversely, the disgust or horror we feel over bad outcomes should be more closely coupled to the worst things that happen. As opposed to just the most shocking, but at the end of the day, minor things. We’re just much more captivated by a sufficiently ghastly story involving three people than we are by the deaths of literally millions that happen some other way. These are bugs we have to figure out how to correct for.

Lucas Perry: I hear you. The person running in the burning building to save the child is sung as a hero, but if you are say, earning to give for example and write enough checks to save dozens of lives over your lifetime, that might not go recognized or felt in the same way.

Sam Harris: And also these are different people, too. It’s also true to say that someone who is psychologically and interpersonally not that inspiring, and certainly not a saint, might wind up doing more good than any saint ever does or could. I don’t happen to know Bill Gates. He could be saint-like. I’ve literally never met him, but I don’t get the sense that he is. I think he’s kind of a normal technologist and might be normally egocentric, concerned about his reputation and legacy.

He might be a prickly bastard behind closed doors. I don’t know, but he certainly stands a chance of doing more good than any person in human history at this point, just based on the checks he’s writing and his intelligent prioritization of his philanthropic efforts. There is an interesting uncoupling here where you could just imagine someone who might be a total asshole, but actually does more good than any army of saints you could muster. That’s interesting. That just proves the point that a concern about real-world outcomes is divorceable from the psychology that we tend to associate with doing good in the world.

On the point of animal suffering, I share your intuitions there, although again, this is a little bit like climate change in that I think that the ultimate fix will be technological. It’ll be a matter of people producing the Impossible Burger squared that is just so good that no one’s tempted to eat a normal burger anymore, or something like Memphis Meats, which, actually, I invested in.

I have no idea where it’s going as a company, but when I had its CEO, Uma Valeti, on my podcast back in the day, I just thought, “This is fantastic, to engineer actual meat without producing any animal suffering. I hope he can bring this to scale.” At the time, it was something like an $18,000 meatball. I don’t know what it is now, but it’s that kind of thing that will close the door to the slaughterhouse more than just convincing billions of people about the ethics. It’s too difficult, and the truth may not align with exactly what we want.

I’m going to reap the whirlwind of criticism from the vegan mafia here, but it’s just not clear to me that it’s easy to be a healthy vegan. Forget about yourself as an adult making a choice to be a vegan; raising vegan kids is a medical experiment of a certain sort on your kids, and it’s definitely possible to screw it up. There’s just no question about it. If you’re not going to admit that, you’re not a responsible parent.

While it is possible, it is by no means easier to raise healthy vegan kids than it is to raise kids who eat meat sometimes, and that’s just a problem, right? Now, that’s a problem that has a technical solution, but there’s still diversity of opinion about what constitutes a healthy human diet even when all things are on the menu. We’re just not there yet. It’s unlikely to be just a matter of supplementing B12.

Then the final point you made gets us into what I would argue is a kind of reductio ad absurdum of the whole project, ethically, when you’re talking about losing sleep over whether to protect the rabbits from the foxes out there in the wild. I will grant you, I wouldn’t want to trade places with a rabbit, and there’s a lot of suffering out there in the natural world. But if you’re going to go down that path and try to figure out how to minimize the suffering of wild animals in relation to other wild animals, then I think you are a kind of antinatalist with respect to the natural world. I mean, then it would be just better if these animals didn’t exist, right? Let’s just hit stop on the whole