Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

  • Understanding the universe through digital physics
  • How human consciousness operates and is structured
  • The path to aligned AGI and bottlenecks to beneficial futures
  • Incentive structures and collective coordination

You can find FLI's three new policy-focused job postings here

1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?

1:19:39 Non-duality and collective coordination

1:22:53 What difficulties are there for an idealist worldview that involves computation?

1:27:20 Which features of mind and consciousness are necessarily coupled and which aren’t?

1:36:40 Joscha’s final thoughts on AGI

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

  • Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
  • The relationship between AI safety, control, and alignment
  • Virtual worlds as a proposal for solving multi-multi alignment
  • AI security

You can find FLI's three new policy-focused job postings here

 

Papers discussed in this episode:

On Controllability of AI

Unexplainability and Incomprehensibility of Artificial Intelligence

Unpredictability of AI

 

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

  • The current state of the deployment and development of lethal autonomous weapons and swarm technologies
  • Drone swarms as a potential weapon of mass destruction
  • The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons
  • The difficulty of attribution, verification, and accountability with autonomous weapons
  • Autonomous weapons governance as norm setting for global AI issues

You can check out the new lethal autonomous weapons website here

Beatrice Fihn on the Total Elimination of Nuclear Weapons

  • The current nuclear weapons geopolitical situation
  • The risks and mechanics of accidental and intentional nuclear war
  • Policy proposals for reducing the risks of nuclear war
  • Deterrence theory
  • The Treaty on the Prohibition of Nuclear Weapons
  • Working towards the total elimination of nuclear weapons

4:28 Overview of the current nuclear weapons situation

6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war

9:27 Accidental nuclear war and human systems

12:08 The risks of nuclear war in 2021 and nuclear stability

17:49 Toxic personalities and the human component of nuclear weapons

23:23 Policy proposals for reducing the risk of nuclear war

23:55 New START Treaty

25:42 What does it mean to maintain credible deterrence?

26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons

28:00 Deterrence theoretic arguments for nuclear weapons

32:36 Reduction of nuclear weapons, no first use, removing ground based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons

39:13 Arguments for and against nuclear risk reduction policy proposals

46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines

48:27 The theory of, and path towards, the total elimination of nuclear weapons

1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons

1:14:26 Elevating activism around nuclear weapons and messaging more skillfully

1:15:40 What the public needs to understand about nuclear weapons

1:16:35 World leaders’ views of the treaty

1:17:15 How to get involved

 

Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

  • FLI’s perspectives on 2020 and hopes for 2021
  • What our favorite projects from 2020 were
  • The biggest lessons we’ve learned from 2020
  • What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety

54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue

56:00 Jared Brown on the need for robust government engagement

57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation

1:00:10 Outro

 

Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

  • William Foege’s and Victor Zhdanov’s efforts to eradicate smallpox
  • Personal stories from Foege’s and Zhdanov’s lives
  • The history of smallpox
  • Biological issues of the 21st century

18:51 Implementing surveillance and containment throughout the world after success in West Africa

23:55 Wrapping up with eradication and dealing with the remnants of smallpox

25:35 Lab escape of smallpox in Birmingham, England and the final natural case

27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov

29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov

31:05 Michael Burkinsky’s memories of Victor Zhdanov Sr.

39:26 Victor Zhdanov Jr.’s memories of Victor Zhdanov Sr.

46:15 Mushrooms with meat

47:56 Stealing the family car

49:27 Victor Zhdanov Sr.’s efforts at the WHO for smallpox eradication

58:27 Exploring Alissa’s book on Victor Zhdanov Sr.’s life

1:06:09 Michael’s view that Victor Zhdanov Sr. is unsung, especially in Russia

1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century

1:07:32 The origin and history of smallpox

1:10:34 The origin and history of variolation and the vaccine

1:20:15 West African “healers” who would create smallpox outbreaks

1:22:25 The safety of the smallpox vaccine vs. modern vaccines

1:29:40 A favorite story of William Foege’s

1:35:50 Larry Brilliant and people central to the eradication efforts

1:37:33 Foege’s perspective on modern pandemics and human bias

1:47:56 What should we do after COVID-19 ends?

1:49:30 Bio-terrorism, existential risk, and synthetic pandemics

1:53:20 Foege’s final thoughts on the importance of global health experts in politics

 

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

  • Important intellectual movements and their merits
  • The evolution of metaphysical and epistemological views over human history
  • Consciousness, free will, and philosophical blunders
  • Lessons for the 21st century

Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

 Topics discussed in this episode include:

  • How Big Tobacco used its wealth to obfuscate the harm of tobacco and appear socially responsible
  • The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
  • How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
  • How to combat the problem of ethics-washing in Big Tech

 

Timestamps: 

0:00 Intro

1:55 How Big Tech actively distorts the academic landscape and what counts as big tech

6:00 How Big Tobacco has shaped industry research

12:17 The four tactics of Big Tobacco and Big Tech

13:34 Big Tech and Big Tobacco working to appear socially responsible

22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities

32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists

51:53 Big Tech and Big Tobacco finding skeptics and critics of them and funding them to give the impression of social responsibility

1:00:24 Big Tech and being authentically socially responsible

1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems

1:16:56 Ethics-washing as systemic

1:17:30 Action items for solving Ethics-washing

1:19:42 Has Mohamed received criticism for this paper?

1:20:07 Final thoughts from Mohamed

 

Citations:

Where to find Mohamed’s work

The Future of Life Institute AI policy page

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's conversation is with Mohamed Abdalla on his paper The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. We explore how big tobacco has used and still uses its wealth and influence to obfuscate the harm of tobacco by funding certain kinds of research, conferences, and organizations, as well as influencing scientists, all to shape public opinion in order to avoid regulation and maximize profits. Mohamed explores in his paper and in this podcast how big technology companies engage in many of the same behaviors and tactics of big tobacco in order to protect their bottom line and appear to be socially responsible.

Some of the opinions presented in the podcast may be controversial or inflammatory to some of our normal audience. The Future of Life Institute supports hearing a wide range of perspectives without taking a formal stance on them as an institution. If you're interested to know more about FLI's work in AI policy, you can head over to our policy page on our website at futureoflife.org/ai-policy, link in the description.

Mohamed Abdalla is a PhD student in the Natural Language Processing Group in the Department of Computer Science at the University of Toronto and a Vanier scholar, advised by Professor Frank Rudzicz and Professor Graeme Hirst. He holds affiliations with the Vector Institute for Artificial Intelligence, the Centre for Ethics, and ICES, formerly known as the Institute for Clinical and Evaluative Sciences.

And with that, let's get into our conversation with Mohamed Abdalla.

Lucas Perry: So we're here today to discuss a recent paper of yours titled The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. To start things off here, I'm curious if you could paint in broad brush strokes how you view big tech as actively distorting the academic landscape to suit its needs, and how these efforts are modeled on and similar to what big tobacco has done. And if you could also expand on what you mean by big tech, I think that would also be helpful for setting up the conversation.

Mohamed Abdalla: Yeah. So let's define what big tech is. I think that's the easiest of what we're going to tackle. Although in itself, it's actually not a very easy label to pin down. It's unclear what makes a company big and what makes a company tech. So for example, is Yahoo still a big company or would it count as big tech? Is Disney a tech company? Because they clearly have a lot of technical capabilities, but I think most people would not consider them to be big tech. So what we did was we basically had a lot of conversations with a lot of researchers in our department, and we asked for a list of companies that they viewed as big tech. And we ended up with a list of 14 companies. Most of them we believe will be agreeable. Google, Facebook, Microsoft, Apple, Amazon, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. This is a very restrictive set on what we believe the big tech companies are. Like for example, a clear missing one here is Oracle. There's a lot of other big companies that are missing, but we're not assigning this as a prescriptive or a definitive list of what big tech companies are.

But adding more companies to this list would only help us strengthen the conclusions we’ve drawn in our paper because they will show how much more influence these companies have. So by limiting it to a small group, we’re actually taking a pessimistic view on the maximum amount of influence that they have. That’s what we define as big tech.

Then the question comes, what do we mean by they have an outsized influence or how they go about influencing policy? And we will get into specific examples here, but I think the best way of demonstrating why there should be cause for concern is through a very simple analogy.

So imagine if there was a health policy conference, which had tens of thousands of researchers. And among the topics they discussed was how do you deal with the negative effects of increased tobacco usage. And its largest funding bodies were all big tobacco companies. Would this be socially acceptable? Would the field of health policy accept this? No. In fact, there are guidelines such as Article 5.3 of the World Health Organization's Framework Convention on Tobacco Control, which states that if you are developing public health policies with respect to tobacco control, you are required to act to protect these policies from commercial and other vested interests of the tobacco industry. So they are aware of the fact that industrial funding has a very large negative effect on the types of research, the types of conclusions, how strong the conclusions are that can be drawn from research.

But if we flip that around. So instead of health policy, replace it with machine learning policy or AI policy. Instead of big tobacco, you replace it with big tech. And instead of the negative effects of increased tobacco usage, the ethical concerns of increased AI deployment. Would this be accepted? And this is not even a hypothetical, because all of the big machine learning conferences have these big tech companies among their top funding bodies. If you look at NeurIPS or you look at FAccT, the Fairness, Accountability, and Transparency Conference, their platinum sponsors or gold sponsors, whatever their highest level is depending on the conference, are all of these companies. Even if one wants to say it's okay, because these companies are not the same as big tobacco, this should be justified. There is no justification for why we allow big tech to have such influence. I haven't proven that this influence exists yet in my speaking so far. But there is precedent to believe that industrial funding warps research. And there's been no critical thought about whether or not big tech, as computer science's industrial funding, warps research. And I argue in the paper that it does.

Lucas Perry: All right. So with regards to understanding how industry involvement in the research and development of a field or area can have a lot of influence, you take big tobacco as a historical example, learning from the strategies and the playbook of a historical industry to see if big tech might be doing the same things. So can you explain in the broadest brush strokes how big tobacco became involved in shaping industry research on the health effects of tobacco, and how big tech is using all or most of the same moves to shape the research landscape so that big tech itself has a public image of trust and accountability when that may not be the case?

Mohamed Abdalla: The history is that shortly after World War II in the mid 1950s, there was a pronounced decrease in demand for their product. What they believe caused or at least in part caused this drop in demand was a Reader’s Digest article that was published called Cancer by the Carton. And it discussed the scientific links between smoking and lung cancer.

It was later revealed after litigation that big tobacco actually knew about these links, but also that admitting them would result in increased legislation and decreased profits. So they didn't want to publicly agree with the conclusions of the article, despite having internal research which showed that this indeed was the case.

After this article was published, it was read by a lot of people and people were getting scared about the health effects of smoking. They employed a PR firm. And their first strategy was to publish a full-page ad in the New York Times that was seen by I think approximately 43 million people or so, which is a large percentage of the population. And they would go on to state, and I quote, "We accept an interest in people's health as a basic responsibility, paramount to every other consideration in our business." So despite having internal research that showed that these links were conclusive, in their full-page ad not only did they state that they believed these links were not conclusive, they also lied and said that the health of their people was paramount to every other consideration, including profit. So clear, blatant lies, but dressed up really nicely. And it reads really well.

Another action that they were instructed to take by the PR firm was to fund academic research that would not only raise questions about the conclusiveness of these links, but also sort of add noise, cause controversy, and slow down legislation. And I'll go into the points specifically. But the idea to fund academia was actually the PR firm's idea. And it was under the instruction of the PR firm that they funded academia. And despite their publicly stated goal of funding independent research because they wanted the truth and they cared about health, it was revealed again after litigation that internal documents existed that showed that their true purpose was to sow doubt into the research that showed conclusive links between smoking and cancer.

Lucas Perry: Okay. And why would an industry want to do this? Why would they want to lie about the health implications of their products?

Mohamed Abdalla: Well, because there's a profit motive behind every company's actions. And that is basically unchanged to this day. While they may say a lot of nice-sounding stuff, it's just a simple fact of life that the strongest driving factor behind any company's decisions is the profit motive. Especially if they're publicly traded, they have a legal obligation to their shareholders to maximize profit. It's not like they're evil per se. They're just working within the system that they're in.

We see sort of the exact same thing with big tech. People can argue about when the decline of opinion regarding these big tech companies started. Especially if you’re American centric. Since I’m in Canada, you’re in the States. I think that the Cambridge Analytica scandal with Facebook can be seen as sort of a highlight, although Google has its own thing with Project Maven or Project Dragonfly.

And the Pew Research Center shows that the number of people who view big tech as having a net positive social impact on the world started decreasing around the mid-2010s. And I'm only going to quote Facebook here, well, Mark Zuckerberg specifically, in his testimony to Congress. But in Congress, he would state, "It's clear that we didn't do enough and we didn't focus enough on preventing abuse." And he stressed that he admitted fault and that they would double down on preventing abuse. And that this was simply that they didn't think about how people could do harm. And again, this statement did not mention the leaked internal emails, which stated that they were aware of companies breaking their policies. And they explicitly knew that Cambridge Analytica was breaking their scraping policies and chose to do nothing about it.

Even recently, there have been leaks. So BuzzFeed News leaked Sophie Zhang's resignation letter, which basically stated that unless they thought that they were going to catch flak in terms of PR, they would not act to moderate a lot of the negative events that were happening on their platforms.

So this is a clear profit incentive thing, and there's no reason to think that these companies are different. So then the question is how much of their funding of AI ethics or AI ethicists is driven by a benevolent desire to see goodness in the world? And I'm sure there are people who work there that have this sort of desire. But these are the four reasons for which a company can fund academia, and we have to ask how much of it is used to reinvent yourself as socially responsible, influence the events and decisions made by funded universities, influence the research questions of individual scientists, and discover receptive academics.

So while we can assume that some people there may actually have the social good in mind and may want to improve society, we need to also consider these reasons that big tobacco funded academia, and we need to check: does big tech also fund academia? And are the effects of their funding academia the same as the effects of big tobacco funding academia? And if so, we as academics, or the public body, or government, or whoever, need to take steps to minimize the undue influence.

Lucas Perry: So I do want to spend the main body of this conversation on how big tobacco and big tech engage in these four points that you just illustrated. One, in order to reinvent themselves in the public image and be seen as socially responsible, even when they may not be actually trying to be socially responsible. It's about the image and the perception. And then two, that they are trying to influence the events and decisions made by funded universities. And that three, they'll be influencing the research questions and plans of individual scientists. This helps funnel research into areas that will make you look benevolent or socially responsible. Or you can funnel people away from topics that will lead to regulation.

And then the last one is to discover receptive academics who can be leveraged. If you’re in the oil industry and you can find a few scientists who have some degree of reputability and are willing to cast doubt on the science of climate change, then you’ve found some pretty good allies in your fight for your industry.

Mohamed Abdalla: Yep, exactly.

Lucas Perry: So before we jump into that, do you want to go through each of these points and say what big tobacco did in each of them and what big tech did in each of them? Or do you want to just start by saying everything that big tobacco did?

Mohamed Abdalla: Since there’s four points and there’s a lot of evidence for each of these points, I think it’s probably better to do for the first point, here’s what big tobacco did. And then here’s what big tech did.

Lucas Perry: Let’s go ahead and start then with this first point. From the perspective of big tobacco and big tech, what have they done to reinvent themselves in the public image as socially responsible? And again, you can just briefly touch on why it’s their incentive to do that, and not actually be socially responsible.

Mohamed Abdalla: So the benefit of framing yourself as socially responsible, without having to actually take any actions to become socially responsible or to earn that label, is basically increased consumer confidence, increased consumer counts, and a decreased chance of legislation. If the general public, and thereby the general politician, believes that you are socially responsible and that you do care about the public good, you are less likely to be regulated as an industry. So that's a big driving factor for trying to appear socially responsible. And we actually see this in both industries. I'll cover it later, but a lot of stuff that's leaked basically shows that, spoiler, a lot of the AI research being done, especially in AI ethics, is seen as a way to either delay or prevent the legislation of AI. Because they're afraid that it will eat into their profit, which is against their profit motive, which is why they do a lot of the stuff that they do.

So first, I'll go over what big tobacco did. And then we'll try to draw parallels to what big tech did. To appear socially responsible, it was suggested to them by their PR firm, Hill+Knowlton Strategies, that they fund academics and create research centers. The biggest one that they created was CTR, which is the Council for Tobacco Research. And when they created CTR, they created it in a very academically appealing way. What I mean by that is that CTR was advised by distinguished scientists who served on its scientific advisory boards. They went out of their way to recruit these scientists so that the research center would gain academic respectability and be trusted not only by the lay person, but by academics in general. So that they're seen as a respectable organization despite being funded by big tobacco.

And then what they would do is fund research questions. They'd act essentially as a pseudo granting body and provide grants to researchers who were working on specific questions that were decided by this council. At surface level, it seems okay. I'm not 100% sure how it works in the States, but at least in Canada we have research funding bodies. So we have NSERC, the Natural Sciences and Engineering Research Council, or CIHR, the Canadian Institutes of Health Research, which decide who gets the grants from the government's research money. And we have it for all the different fields. And in theory, the grants should be given based on the validity of the research, the potential impact. A lot of the academically relevant considerations.

But what we ended up showing after litigation again, was that at least in the case of big tobacco, there were more lawyers than scientists involved in the distribution of money. And the lawyers were aware of what would and would not likely hurt the bottom line of these companies. So quoting previous work, their internal documents showed that they would simply refuse to fund any proposal that acknowledged that nicotine was addictive or that smoking was dangerous.

They basically went out of their way to fund research that was sort of unrelated to tobacco use so that they would get good PR while minimizing the risk that said research would harm their profit motive. And during any sort of litigation, for example during a cigarette product liability trial, the lawyers presented a list of all the universities and medical schools supported by this Council for Tobacco Research as proof that they cared about social responsibility and people's wellbeing. And they used this money as proof.

Basically at first glance, all of their external-facing actions did make it seem that they cared about the well-being of people. But it was later revealed through internal documents that this was not the case. And this was basically a very calculated move to prevent legislation, beat litigation, and achieve other self-serving goals in order to maximize profit.

In big tech, we see similar things happening. In 2016, the Partnership on AI to Benefit People and Society was established to "study and formulate best practices on AI technologies and to study AI and its influences on people and society." Again, a seemingly very virtuous goal. And a lot of people signed up for this. A lot of non-profit organizations, academic bodies, and a lot of industries signed up for this. But it was later leaked that despite sounding rosy, the reality on the ground was a little bit darker. So reports from those involved, in a piece published at The Intercept, demonstrated how neither prestigious academic institutions such as MIT, nor civil liberties organizations like the ACLU, had much power in the direction of the partnership. So they ended up serving a legitimating function for big tech's goals. Basically, railroading other institutions while having their brand on your work helps you appear socially responsible. But if you don't actually give them power, it's only the appearance of social responsibility that you're getting. You're not actually being forced to be socially responsible.

There are other examples relating to litigation specifically. During his testimony to Congress, Mark Zuckerberg stated that in order to tackle these problems, they'll work with independent academics. And these independent academics would be given oversight over their company. It's unclear how an academic that is chosen by Facebook, theoretically compensated by Facebook, and could be fired by Facebook would be independent of Facebook after being chosen, receiving compensation, and knowing that they can lose that compensation if they do something to anger Facebook.

Another example, almost word for word from big tobacco's showing off to jurors, is that Google boasts that it releases more than X research papers on topics in responsible AI in a year to demonstrate social responsibility. This is despite arm's length involvement with military-minded startups. So if you build on that, Alphabet's Google faced a lot of internal backlash with Project Maven, which was basically their work on image recognition algorithms for drones. They faced a lot of backlash. So publicly, they appeared to have stopped. They promised to stop working with the military. However, internally, Gradient Ventures, which is basically the venture capital arm of Alphabet, still funds, provides researchers to, and provides data to military startups. So despite their promise not to work with the military, despite their research in responsible AI, they still work in areas that don't necessarily fit the label of being socially responsible.

Lucas Perry: It seems there’s also this dynamic here where in both tobacco and in tech, it’s cheaper to pretend to be socially responsible than to actually be socially responsible. In the case of big tobacco, that would’ve actually meant dismantling the entire industry and maybe bringing e-cigarettes on 10 years before that actually happened. Yet in the case of big tech, it would seem to be more like hampering short term profit margins and putting a halt to recommender algorithms and systems that are already deployed that are having a dubious effect on American democracy and the wellbeing of the tens of millions of human brains that are getting fed garbage content by these algorithms.

So this first point seems pretty clear to me. I’m not sure if you have anything else that you’d like to add here. Public perception is just important. And if you can get policymakers and government to also think that you’re playing socially responsible, they’ll hold off on regulating you in the ways that you don’t want to be regulated.

Mohamed Abdalla: Yeah, that’s exactly it. Yeah.

Lucas Perry: Are these moves that are made by big tobacco and big tech also reflected in other contentious industries like in the oil industry or other greenhouse gas emitting energy industries? Are they making generally the same type of moves?

Mohamed Abdalla: Yeah, 100%. So this is simply industry’s reaction to any sort of possible legislation. Whether it’s big tobacco and smoking legislation, big tech and some sort of legislation on AI, oil companies and legislation on greenhouse gas emissions, clean energy, so on and so forth. Even a lot of food industry. I’m not sure what the proper term for it is, but a lot of the nutritional science research is heavily corrupted by funding from whether it’s Kellogg’s, or the meat industry, or the dairy industry. So that’s what industry does. They have a profit motive, and this is a profitable action to take. So it’s everywhere.

Lucas Perry: Yeah. So I mean, when the truth isn’t in your favor, and your incentive is profit, then obfuscating the truth is your goal.

Mohamed Abdalla: Exactly.

Lucas Perry: All right. So moving on to the second point then, how is it that big tobacco and big tech in your words, work to influence the events and decisions made by funded universities? And why does influencing the decisions made by funded universities even matter for large industries like big tobacco and big tech?

Mohamed Abdalla: So there’s multiple reasons to influence events. What it means to influence events is also a variety of actions. You could either hold events, you could stop holding events, or you can change how events that are being held operate. So events here, at least in the academic sense, I’m going to talk about conferences. And although they’re not always necessarily funded by universities, they are academic events. So why would you want to do this? Let’s talk about big tobacco first and show by example why they gained from doing this.

First, I'll just go over some examples. So at my home university, the University of Toronto, Imperial Tobacco, which is one of the companies that belongs to big tobacco, withheld its funding from U of T's Faculty of Law conference as retribution for the fact that U of T law students were influential in having criminal charges be laid against Shoppers Drug Mart for selling tobacco to a minor. As one of their spokespersons said, they were biting the hand that feeds them. If university events such as this annual U of T law conference rely on funding from industry in general, then industry has an outsized say in what you as an institution will do, or what people working for you can and cannot do. Because you'll be scared of losing that consistent money.

Lucas Perry: I see. So you feed as many people as you can. Knowing that if they ever bite you, the retraction of your money or what you’re feeding them is an incentive for them to not hurt you?

Mohamed Abdalla: Exactly. But it's not even the retraction, it's the threat of retraction. If, in the back of your mind, 50% of your funding comes from whatever industry, can you afford to live without 50% of your funding? Most people would say no, and that causes worry and will cause you to self-censor. And that's not a good thing in academia.

Lucas Perry: And replacing that funding is not always easy?

Mohamed Abdalla: It’s very difficult.

Lucas Perry: So not only are you getting the public image of being socially responsible by investing in some institutions or conferences, which appear to be socially responsible or which contain socially responsible workshops and portions. But then you also have in the back of the mind of the board that organizes these conferences, the worry and the knowledge that, “We can’t take a position on this because we’d lose our funding. Or this is too spicy, so we know we can’t take a position on this.” So you’re getting both of these benefits. The constraining of what they may do within the context of what may be deemed ethically and socially responsible, which may be to go against your industry in some strong way. But you also gain the appearance of just flat out being socially responsible while suppressing what would be socially responsible free discourse.

Mohamed Abdalla: 100%. And to build on that a little bit, since we brought in boards, the people that decide what happens. An easier way of influencing what happens is to actually plant or recruit friendly actors within academia. There is a history, at least at my home university again, where the former president and dean of law of U of T was the director of a big tobacco company. And someone on the board of Women's College Hospital, which is a teaching hospital affiliated with my university, was the president and chief spokesperson for the Canadian Tobacco Manufacturers' Council. So although there is no proof that they necessarily went out of their way to change the events held by the university, if a large percentage of your net worth is held in tobacco stocks, even if you're a good human being, just because you're a human being, you will have some sort of incentive to not hurt your own wellbeing. And that can influence the events that your university holds, the type of speakers that you invite, the types of stances that you allow your university to take.

Lucas Perry: So you talked about how big tobacco was doing this at your home university. How do you believe that big tech is engaging in this?

Mohamed Abdalla: The first thing that we'll cover is the funding of large machine learning AI conferences. In addition to academic innovation or whatever reason they may say that they're funding these conferences for. And I believe that a large portion of this is because of academic innovation. You can see that the amount of funding that they provide also helps give them a say. Or at least it's in the back of the organizer's mind. NeurIPS, which is the biggest machine learning AI conference, has always had at least two big tech sponsors at the highest tier of funding since 2015. And in recent years, the number of big tech companies has exceeded five. This also carries over to workshops, where over the past five years, only a single ethics-related workshop did not have at least one organizer belonging to big tech. And that was 2018's Robust AI in Financial Services workshop, which instead featured the heads of AI branches at big banks, which is not necessarily better. It's not to say that those working in these companies should not have any say. But to have no venue that doesn't rely on big tech in some sort of way or is not influenced by big tech in some sort of way is worrying.

Lucas Perry: Or whose existence is tied up in the incentives of big tech. Because whatever big tech’s incentives are, that’s generating the profit which is funding you. So you’re protecting that whole system when you accept money from it. And then your incentives become aligned with the incentives of the company that is suppressing socially responsible work.

Mohamed Abdalla: Yeah. Fully agree. In the next section where we talk about the individual researchers, I'll go more into this. There's a very reasonable framing of this issue where big tech isn't purposely doing this. And academia is not purposely being influenced, but the influence is taking place. But basically exactly as you said. Even if these companies are not making any explicit demands or requests of the conference organizers, it's only human nature to assume that these organizers would be worried or uncomfortable doing anything that would hurt their sponsors. And this type of self-censorship is worrying.

The example that I just showed was for NeurIPS, which is largely a technical conference. So Google does have incentive to fund technical research, because a really good optimization algorithm will help their industry or their work, their products. But even when it comes to conferences that are not technical in their goal, so for example the FAccT conference, the Fairness, Accountability, and Transparency Conference, has never had a year without big tech funding at the highest level. Google is three out of three years. Microsoft is two out of three years. And Facebook is two out of three years.

FAccT has a statement regarding sponsorship and financial support where they say that you have to disclose this funding. But it’s unclear how disclosure alone helps combat direct and indirect industrial pressures. A reaction that I often get is basically that those who are involved are very careful to disclose the potential conflicts of interest. But that is not a critical understanding of how the conflict of interest actually works. Disclosing a conflict of interest is not a solution. It’s just simply highlighting the fact that a problem exists.

In the public health sphere, researchers push that resources should be devoted to the problem through sequestration, which is the elimination of relationships between commercial industry and professionals in all cases where it's remotely feasible. So this policy realizes that simply disclosing is not actually debiasing yourself.

Lucas Perry: That’s right. So you said that succinctly, disclosing is not debiasing. Another way of saying that is it’s basically just saying, “Hey, my incentives are misaligned here.” Full-stop.

Mohamed Abdalla: Exactly.

Lucas Perry: And then like okay, everyone knows that now, and that’s better than not. But your incentives are still misaligned towards the general good.

Mohamed Abdalla: Yeah, exactly. And it's unclear why we think that AI ethicists are different from other forms of ethicists in their incorruptibility. Or maybe it's the view that we would be able to post-hoc adjust for the biases of researchers, but that's simply unfounded by the research. So yeah, one of the ways that they influence events is simply by funding the events and funding the people organizing these events.

But there’s also examples where some companies in big tech knowingly are manipulating the events. And I’ll quote here from my paper. “As part of a campaign by Google executives to shift the antitrust conversation, Google sponsored and planned a conference to influence policy makers going so far as to invite a token Google critic capable of giving some semblance of balance.” So it’s clear that these executives know what they’re doing, and they know that by influencing events, they will influence policy, which will influence legislation, and in turn litigation. So it’s clear from the leaks that are happening that this is not simply needless worrying, but this is an active goal of industry in order to maximize their profit.

There is some work on big tobacco that has not been done for big tech. I don't think it can be done for big tech, and I'll speak about why. But basically, when it comes to influencing events, there is research that shows that events that were sponsored by big tobacco, such as symposiums or workshops about secondhand smoking, are not only skewed but also of poorer quality compared to events not sponsored by big tobacco. So when it comes to big tobacco research, if the event is sponsored by big tobacco, the causation here is not clear, whether or not it's subconscious, whether or not it's conscious. The causation might not be perfectly clear, but the results are. And it shows that if you're funded by big tobacco, your research about the effects of secondhand smoking or the effects of smoking is more skewed and of poorer quality.

We can’t do this sort of research in big tech because there isn’t an event that isn’t sponsored by big tech. So we can’t even do this sort of research. And that should be worrying. If we know in other fields that sponsorship leads to lower quality of work, why are we not trying to have them divest from funding events directly anyway?

Lucas Perry: All right. Yeah. So this very clearly links up with the first example you gave at the beginning of our conversation about imagine having a health care conference and all of the main investors are big cigarette companies. Wouldn’t we have a problem with that?

So these large industries, which are having detrimental effects to society and civilization first have the incentive to portray a public image of social responsibility without actually being socially responsible. And then they also have an incentive to influence events and decisions made by funded universities as to one, make the incentives of those universities and events aligned with their own because their funding is dependent on these industries. And then also, to therefore constrain what can and may be said at these conferences in order to protect that funding. So the next one here is how is it that big tobacco and big tech influence the research questions and plans of individual scientists?

Mohamed Abdalla: So in the case of big tobacco, we know especially from leaked documents that they actively sought to fund research that placed the blame for lung cancer on anything other than smoking. So there's the classic example that owning a bird is more likely to increase your chance of getting lung cancer, and that that's the reason why you got lung cancer, instead of smoking.

Lucas Perry: Yeah, I thought that was hilarious. They were like, “Maybe it’s pets. I think it’s pets that are creating cancer.”

Mohamed Abdalla: Yeah exactly. And when they choose to fund this research question, not only do they get the positive PR, but they also get the ability to say that this is not conclusive. "Because look, here are these academics in your universities that think that it might not be smoking causing the cancer. So let's hold off on litigation until this type of research is done." So there's this steering of funds: instead of exploring the effects of tobacco on lung cancer, they would just study the basic science of cancer instead. And this would limit the amount of negative PR that they get. So that's one reason for doing it. But number two is it allows them to sow doubt and say that there's confusion, or that we haven't arrived at some sort of consensus.

So that's one of the ways that they did it, finding researchers who they termed critics or skeptics. And they would fund them and amplify their voices. And they had a specific pool of money set aside for these people, especially if they were smokers. So they actively sought out people that were smokers, because they felt that they'd be more sympathetic to these companies. They would purposely steer funds towards these people and they would change the research sphere.

There are also very egregious actions that they took. So for example, Professor Stanton Glantz. He's at UCSF I think, the University of California, San Francisco. They would take out ads against him in the newspapers where they would put out lies to point out flaws in his studies. And these flaws aren't really flaws. It's just a twisting of the truth. It's basically: if you go against us, we're going to attack you. We're going to make it very hard for you to get further funding. You're going to have a lot of bad PR. It's just sort of disincentivizing anyone else from doing critical work against them.

They would work with elected politicians as well to block funding of scientists with opposing viewpoints. So it's not like they didn't have their fingers in government as well. During litigation, an email was uncovered stating that the HHS, which is the U.S. Department of Health and Human Services, appropriations continuing resolution would include language to prohibit funding for Glantz, that is Stanton Glantz, the same scientist. So it's clear that through intimidation, but also through acting as a funding body, they're able to change what researchers work on.

Big tech works in essentially the same way. The first thing to note here is that when it comes to ethical AI or AI ethics, big tech in general has a very specific conception of what it means for an algorithm to be ethical. Whether it's inspired by their insular culture, where it's sort of a very echo-y place where everyone sort of agrees with each other, there's sort of an agreed upon culture. There is previous work, Owning Ethics, that discusses how there are three main factors in defining AI ethics, and these are three values of Silicon Valley: meritocracy, trust in the market, and I forgot the last one. And basically, their definition is simply different from that which the rest of the world, or the rest of the country, generally has.

So Silicon Valley has a very specific view of what AI ethics is or should be. And that is not necessarily shared by everyone outside of Silicon Valley. That is not necessarily in itself a bad thing. But when they act as a pseudo granting body that is, they provide grants or money for researchers, it becomes an issue. Because if for example, you are a professor. And as a professor, one of your biggest roles is simply to bring in money to do research. And if your research question does not agree with the underlying logics of Silicon Valley’s funding bodies, whoever makes these decisions.

Lucas Perry: Like if you question the assumption about trust in the market?

Mohamed Abdalla: Yeah, exactly. Or you question meritocracy, and maybe that's not how we should be basing our societal values. Even if the people granting the money are not lawyers like they were in big tobacco, even if they were research scientists, the fact that they're at a high enough level to be choosing who gets money likely means that they've been there for a while. And there's an increased chance that they personally believe or agree with the views that their companies hold. Not necessarily always the case, but the probability is a lot higher.

So if you believe in what your company believes in, and there is a researcher who is working from a totally different set of foundations whose assumptions do not match your assumptions, then as a matter of human nature, you're less likely to believe in this research. You're less likely to believe that it's successful or on the true path. So you're less likely to fund it. And that requires no malicious intent on your side. It's just that you are part of an industry that has a specific view. And if a researcher does not share this view, you're not going to give them that money.

And you switch it over to the researcher side. If I want to get tenure, I’m not a professor. So I can’t even get tenure. But if I want to get hired, I have to show that I can bring in money. If I’m a professor and I want to get tenure, I have to show that I can bring in even more money. And if I see that these companies are giving away vast sums of money, it is in my best interest to ask a research question that will be funded by them. Because what good am I if I get no money and I don’t get hired? Or what good am I if I get no money and I don’t get tenure?

So what will end up happening there is that it’s a cyclical thing where researchers view that these companies fund specific types of research or researchers that are based in fundamental assumptions that they may not necessarily agree with. And in order to maximize their opportunities to get this money, they will change their research question. Whether it’s complete, or slight adjustment, or changing the assumptions to match what will get them the money. And the cycle will just keep happening until there’s no difference.

Lucas Perry: So there’s less opportunity for researchers and institutions that fundamentally disagree with some axiom that these industries hold over ethics and accountability, or whatever else?

Mohamed Abdalla: Exactly. 100%. And the important thing to note here is that for this to happen, no one needs to be acting maliciously. The people in big tech probably believe in what they're pushing for. At least I like to make this assumption. And I think it makes for the easiest sell, especially for those within the computer science department, because there's a lot of pushback to this type of thought. Even if you imagine the people deciding who gets the money are completely disinterested researchers who have very agreeable goals, and they love society, and they want the general good, the fact that they are in a position to be deciding who gets the money means that they're likely higher up in these companies. You don't get to be higher up and stay at these companies long enough unless you agree with the viewpoint.

Lucas Perry: And that viewpoint though, in a market has to be in some sense, aligned with the impersonal global corporate objective of maximizing the bottom line. There’s this values filtration process internally in a company where maybe you’ll have all the people who are against Project Maven, but none of them are high enough. Right?

Mohamed Abdalla: Exactly.

Lucas Perry: You need to sift those people out for the higher positions because the higher positions are the ones which have to be aligned with the bottom line, with maximizing profits for shareholders. Those people could authentically think that maximizing the profit of some big industrial company is a good thing, because you really trust in the market and how it serves the market.

Mohamed Abdalla: I think there are people that actually believe this. So I know you say it kind of disbelievingly, but I think that people actually believe this.

Lucas Perry: Yeah, people really do believe this. I don’t actually think about this stuff a lot. But yeah. I mean, it makes sense to me. We buy all your stuff. So you’re serving me, I’m making a transaction with you. But this fact about this values sifting towards the top to be aligned with the profit maximization, those are the values that will remain for then deciding the funding of researchers at institutions. So no one has to be evil in the process. You just have to be following the impersonal incentives of a global capitalist industry.

Mohamed Abdalla: Yeah. I do not aim to shame anybody involved from either side. Certain executives I shame and certain attorneys I shame. But I work under the assumption that all computer science, AI, ethicists, researchers, whatever you want to call them are well-intentioned. And the way the system is set up is that even well-intentioned researchers can have negative impact on the research being done, and can have a limiting impact on the types of questions being considered.

And I hope by now you agree, at least theoretically, that by acting as a pseudo granting body, there's a chance for this influence to occur. But then in my work, what I did was I actually counted how many people were actually looking to big tech as a pseudo granting body. So I looked at the CVs of all computer science faculty at four schools. University of Toronto, Massachusetts Institute of Technology, Stanford, and Berkeley. Two private schools, two public schools. Two eastern coast universities, two western coast. And for each CV that I could find, I looked to answer a certain number of questions. So whether or not a specific faculty member works on AI, whether or not they work on the ethics of AI. I very loosely defined that as having at least one paper about any sort of societal impact of AI. Whether or not they have ever received faculty funding from big tech. So that is grants or awards from companies. Whether they have received graduate funding from big tech. So was any portion of this faculty member's graduate education funded by big tech? And whether or not they are or were employed by big tech. So have they at any time had any sort of previous or current financial relationship with big tech?

What the research shows is that of all computer science faculty, 52% of them, so at least half, view big tech as a funding body. That means that, as a professor, they have received a grant or an award from big tech to do research. And universities technically are here not to maximize profit for these companies, but to do science. And in theory, public good kind of things. At least half of the researchers are looking to these companies as granting bodies.

If you narrow that down to computer science faculty that work in AI, that percentage goes up to 58%. If you limit it to computer science faculty who work in the ethics of AI or who are AI ethicists, it remains at 58%. Which means that 58% of the people looking to answer the really hard questions about AI and society, whether it’s short-term or long-term, view these companies as a funding body. Which in turn, as we discussed, opens them up to influence whether it’s subconscious or conscious.

Lucas Perry: So then if you're Mohamed Abdalla and you come out with a paper like you came out with, is the thought here that it's much less likely that in the future you will receive grants from big tech?

Mohamed Abdalla: So it’s unclear. There’s a meta game to play here as well. A classic example here is Michael Moore. The filmmaker, political activist. I’m not sure the title you want to give him. But a lot of his films are funded by Fox or some subsidiary of Fox.

Lucas Perry: Yeah. But they’re all leftist views.

Mohamed Abdalla: Exactly. So as in the example that I gave previously where Google would invite a token critic to their conferences to give it some semblance of balance, simply disagreeing with them will not exclude you from their funding. It's just that they will likely limit the number of people who are publicly disagreeing with them by choosing whom they fund. Again, it seems too self-serving to say, "I'm a martyr. I've sacrificed myself." I don't view that as the case, although I did get some feedback saying, "Maybe you shouldn't push this now until you get a job," kind of thing. But I'm pushing that it shouldn't be researchers deciding who they get money from. This is a higher-level issue.

If you go into a pure hypothetical. And for the listeners, this is not what I actually believe. But let us consider big tech to be evil, right? And publicly minded researchers who refuse to take money from big tech as good. If every good researcher. Again, good here is not being used in the prescriptive sense, but just in our hypothetical. If all of the good researchers refuse to take money from these evil corporations, then what you're going to be ending up with is these researchers will not get jobs, will not get promoted. Their viewpoints will die out. But also, the people who are not good will have no problem taking this money. And they will be less likely to challenge these evil corporations. So from a game theoretic perspective, if you go from a pure utility perspective, it makes sense for you as a good researcher to take this bad money.

So that's why I state in the paper that whatever our fix to this is, it can't be done at the individual researcher level. You have to assume that all researchers are good, but you have to come up with a system-level solution. Whether that's legislation from governments, whether that's a funding body solution, or a collection of institutions that come up with an institutional policy that applies to all of these top schools or all computer science departments all over the world. Whoever we can get to agree together. So that's basically what I'm pushing for. But there are also ways that you can influence research questions without directly funding them. And the way that you do this is by repeated exposure to your ideas of ethics or your ideas of what is fair, what is not fair. However you want to phrase it.

I got a lot of puzzled looks when I tell people that I also looked at whether or not a professor was funded during graduate school by these companies. And there is some rightful questioning there. Because am I saying, or am I assuming, that the fact that they got a scholarship from, let's say, Microsoft during their PhD is going to impact their research questions 20 years down the line when they're a professor? I do not think that's actually how this works. But the reason for asking this was to show how often, or how much, these faculty members were exposed to big tech's values, or Silicon Valley values, however you want to say it.

Even if they're not actively going out of their way to give money to researchers to affect their research questions, if every single person who becomes a faculty member at all of these prestigious schools has at one point done some term in big tech in Silicon Valley, whether that's a four month internship, or a one-year stint, or a multiple year stint, it's only human to worry that repeated exposure to such views will impact whatever views you end up developing yourself. Especially if you're not going into this environment trying to critically examine their views, you're just likely to adopt them internally, subconsciously, before you have to think about it.

And what we show here is that 84% of all computer science faculty have had some sort of financial connection with big tech, whether that's receiving funding as a graduate student or a faculty member, or having been previously employed there.

Lucas Perry: We know what the incentives of these industries are all about. So why would they even be interested in funding the graduate work of someone, if it wasn’t going to groom them in some sense? Are there tax reasons?

Mohamed Abdalla: I’m not 100% sure how it works in the United States. It exists in Canada, but I’m not sure if it does in the U.S.

Lucas Perry: Okay.

Mohamed Abdalla: So there are multiple reasons for doing this. There is of course, as usual, the PR aspect of it. We are helping students pay off their student loans in the States, I guess. There's also the fact that if you fund someone's graduate work, you're building connections, possibly making them easier to hire.

Lucas Perry: Oh yeah. You win the talent war.

Mohamed Abdalla: Yeah, exactly. If you win a Microsoft Fellowship, I think you also win an internship at Microsoft, which makes you more likely to work for Microsoft. So it's also a semi-hiring thing. There are a lot of reasons for them to do this, and I can't say that influence is the only reason. But the fact is that if you limit it to CS faculty who work in AI ethics, 97% of them have had some sort of financial connection to big tech. 97% of them have had exposure to the dominant views of ethics in Silicon Valley. What percentage of this 97 is going to subconsciously accept these views, or adopt these views because they haven't been presented with another view, or haven't been presented with the opportunity to consider another critical view that disagrees with their fundamental assumptions? It's not to say that it's impossible, it's just to say: should they be having such a large influence?

Lucas Perry: So we've spent a lot of time here then on this third point, on influencing the research questions and plans of individual scientists. It seems largely, again, that by giving money you can help align their incentives with your own, you can help direct research toward the kinds of questions you care about, and you can help give the impression of social responsibility, when actually you're constraining and funneling research interest and research activity into places which are beneficial to you. You're also, I think you're arguing here, exposing researchers to your values and your community.

Mohamed Abdalla: Yeah. Not everyone's view of ethics is fully formed when they get a lot of exposure to big tech. And this is worrying because if you're a blank slate, you're much more easily drawn on. So they're more likely to impart their views on you. And if 97% of the people are exposed, it's only safe to assume that some percentage will absorb this. And that will artificially inflate the number of people who agree with big tech's viewpoints, and therefore further push academia, or the academic conversation, into alignment with something they find favorable.

Lucas Perry: All right. So the last point here then is how big tobacco and big tech discover receptive academics who can be leveraged. So this is the point about finding someone who may be a skeptic or critic in a community of some widely held scientific view, and then funding and propping them up so that they introduce some level of fake or constructed, and artificial doubt and skepticism and obfuscation of the issue. So would you like to unpack how big tobacco and big tech have done this?

Mohamed Abdalla: When it comes to big tobacco, we did cover this a tiny bit before. For example, when we talked about how they would fund research that questions whether it's actually keeping birds as pets that caused lung cancer. And so long as this research is still being funded and has not yet been published in a journal, they can, honestly speaking, say that it is not yet conclusive. There is research being done, and there are other possible causes being studied.

This is despite having internal research showing that it's not true. If you go from a pure logic standpoint, where "is it conclusive?" is defined as "there exists no oppositional research," they've satisfied the conditions such that it is not conclusive, and there is fake doubt.

Lucas Perry: Yeah. You’re making logically accurate statements, but they’re epistemically dishonest.

Mohamed Abdalla: Yeah. And that's basically what they do when they leverage these academics to sow doubt. But they knew, especially in Europe a little bit after this, that there was a lot of concern being raised regarding the funding of academics by big tobacco. So they would purposefully search for European scientists who had no previous connection to them, who they could leverage to testify. And this was part of a larger project that they called the White Coat Project, which resulted in infiltrations of governing bodies, heads of academia, and editorial boards to help with litigation and legislation. And that's actually why I named my paper The Grey Hoodie Project. It's an homage to the White Coat Project. But since computer scientists don't actually wear white coats, we're more likely to wear gray hoodies. That's where the name of the paper comes from. So that's how big tobacco did it.

When it comes to big tech, we have clear evidence that they have done the same, although it's not clear to what extent, because there haven't been enough leaks yet. This is not something that's usually publicly facing. But Eric Schmidt, previously the CEO of Google, was, and I quote from an Intercept article, "Advised on which academic AI ethicists his private foundation should fund." I think Eric Schmidt has very particular views regarding the place of big tech and its impact on society that would likely not be agreed with by the majority of AI ethicists. However, if they find an ethicist that they agree with and they amplify him and give him hundreds of thousands of dollars a year, he is basically pushing his viewpoint on the rest of the community by way of funding.

Another example: Eric Schmidt again asked that a certain professor be funded, and this certain professor later served as an expert consultant to the Pentagon's innovation board. And Eric Schmidt is now in some military advisement role in the U.S. government. That's a clear example of how those from big tech are looking to leverage receptive academics. We don't have a lot of examples from other companies. But given that it is happening and this one got leaked, do we have to wait until other ones get leaked to worry about this?

An interesting example that I personally view as quite weak. I don't like this example, but the irony will show in a little while. There is a professor at George Mason University who had written academic research that was funded indirectly by Google, and his research criticized the antitrust scrutiny of Google shortly before he joined the FTC, the Federal Trade Commission. And after he joined the FTC, they dropped their antitrust suits. They've picked it up now again. But this raises the question of whether Google funded him because of his criticism of antitrust scrutiny of Google. That is one possible reason they chose to fund him. There's another unstated question here in this example: did he choose to criticize antitrust scrutiny of Google because they fund him? So which direction does this flow? It's possible that it flows in neither direction. But when he joined the FTC, did they drop their case because essentially they had hired a compromised academic?

I do not believe, and I have no proof of, any of this. But Google's response to this insinuation was that this exposé was pushed by the Campaign for Accountability, and Google said that this evidence should not be acceptable because this nonprofit, the Campaign for Accountability, is largely funded by Oracle, which is another tech company.

So if you sort of abstract this away, what Google is saying is that when claims are made regarding societal impacts, or legislation, or anything to do with AI ethics, and that researcher is funded by a big tech company, we should be worried about what they're saying, or we should be very skeptical about what they're saying. Because they're essentially saying that you should not trust this because it's funded by Oracle, it's largely backed by Oracle. You know, you abstract that away, it's largely backed by big tech. Does that not apply to everything that Google does, or everything that big tech in general does? So it is clear that they themselves know that industry money has a corrupting influence on the type of research being done. And that sort of just supports my entire piece.

Lucas Perry: Yeah. I mean in some sense, none of this is mysterious. They couldn't not be doing this. We know what industry wants and does, and they're full of smart people. So, I mean, if someone from industry who is participating in this were listening to this conversation, they would be like, "You've woken up to the obvious. Good job." And that's not to downplay the insight of your work. It also makes me think of lobbying.

Mohamed Abdalla: 100%.

Lucas Perry: We could figure out all of the machinations of lobbying and it would be like, “Well yeah, they couldn’t not be doing this, given their incentives.”

Mohamed Abdalla: So I fully agree. If you come into this knowing all of the incentives, what they’re doing is the logical move. I fully agree that this is obvious, right?

Lucas Perry: I don’t think it’s obvious. I think it naturally follows from first principles, but I feel like I learned a lot from your paper. Not everyone knows this. I would say not even many people know this.

Mohamed Abdalla: I guess "obvious" wasn't the correct word. But I was going to say that the points that I raise show that there's a clear concern here. And I think that once people hear the points, they're more likely to believe this. But there are people in academia who push back. A common criticism I get is that people know who pays them. So they say that it's unfair to assume that someone funded by a company cannot be critical of that company or big tech in general. And several researchers who work at these companies are critical of their employer's technology. So the point of my work is to lay this out flat, to show that it doesn't matter if people know who pays them; the academic literature shows that this has a negative effect. And therefore, disclosure isn't enough. I don't want to name the person who said this criticism, but they're pretty high up. The idea that conflict of interest is okay simply because it's disclosed seems to be a uniquely computer science phenomenon.

Lucas Perry: Yeah. It’s a weird claim to be able to say, “I’m so smart and powerful, and I have a PhD, and giving me dirty money or money that carries with it certain incentives, I’m just free of that.”

Mohamed Abdalla: Yeah. Or it's the incorrectly perceived ability to self-correct for these biases. That's sort of the current that I'm trying to fight against, because the mainstream current in academia is sort of like, "Yeah, but we know who pays us, so we're going to adjust for it." And although the conclusions I draw are intuitive, I think that's because of the intuition people have regarding big tobacco: everyone has an intuitively negative gut feeling there, so it's very easy for them to agree. It's a little bit more difficult to convince them that even if you believe that big tech is a force for good, you should still be worried.

Lucas Perry: I also think that a better phrase here than "obvious" is "self-evident once it's been explained." It's not obvious, because if it were obvious, then you wouldn't have needed to write this paper, and I already would've known about this, and everyone would have. So if you were just to wrap up and summarize in a few bullet points this last point on discovering receptive academics and leveraging them, how would you do that?

Mohamed Abdalla: I kind of summarize this for policymakers. When policymakers try to make policy, they tend to converse with three main parties. They converse with industry, they converse with academics, and they converse with the public. And they believe that getting this wide viewpoint will help them arrive at the best compromise to help society move in the way that it should. However, given the very mindful way that big tech is trying to leverage academics, a policymaker will talk to industry, they'll talk to the very specific researchers who were handpicked by industry and therefore are basically in agreement with industry, and they will talk to the public. So two thirds of the voices they hear are industry aligned voices, as opposed to previously one third. And that's something that I cover in the paper.

And that's the reason why you want to leverage receptive academics: it means that the majority of whatever a policymaker hears is industry aligned. They're really busy people, and they don't have the time to do the research themselves. If two out of every three people are pushing policy or pushing views that are in alignment with whatever's good for big tech's profit motive, then you're more likely to believe that viewpoint. As opposed to having an independent academia, where if the right decision is to agree with big tech, then you assume they would agree, and if the right decision is to disagree, then you assume they would disagree. But if industry leverages the academics, this is less likely to happen. Therefore, academia is not playing its proper role when it comes to policy-making.

Lucas Perry: All right. So I think this pretty clearly lays out then how industries in general, whether it be big tobacco, big tech, oil companies, greenhouse gas emitting energy companies, you even brought up the food industry, I mean, just anyone really who has the bottom line as their incentive. These strategies are just naturally born of the impersonal incentive structure of a corporation or industry.

This next question is maybe a bit more optimistic. All these organizations are made up of people, and these people are all I guess more or less good or more or less altruistic. And you expect that if we don’t go extinct, these industries always get caught, right? Big tobacco got caught, oil industries are in the midst of getting caught. And next we have big tech. And I mean, the dynamics are also a little bit different because cigarettes and oil can be booted. But we’re kind of married to the technology of big tech forever. Literally.

Mohamed Abdalla: I would agree with that.

Lucas Perry: Yeah. So the strategy for those two seems to be obfuscate the issue for as long as possible so your industry exists as long as possible, and then you will die. There is no socially responsible version of your industry. That’s not going to happen with big tech. I mean, technology is here to stay. So does big tech have any actual incentives for genuine social responsibility, or are they just playing the optimal game from their end where you obfuscate for as long as possible, and you bias all of the events and the researchers as much as possible? Eventually, there’ll be enough podcasts like this and minds changed that they can’t do that any longer without incurring a large social cost in opinion, and perhaps market. So is it always simply the case that promoting the facade of being socially responsible is cheaper and better than the incentive of actually becoming socially responsible?

Mohamed Abdalla: So there's a thing I have to say, because the people that I worked with who still work in health policy regarding tobacco would be hurt if I didn't say it. Big tobacco is still heavily investing in academia, and they're still heavily pushing research and certain viewpoints. And although the general perception has shifted regarding big tobacco, they're not done yet. So although I do agree with your conclusion that it's a matter of time until they're done, to think that the fight is over is simply not true yet. There are still a lot of health policy folks pushing as hard as they can to completely get rid of them. Even within the United States and Europe, big tobacco creates new institutions that do other research. They've become maybe a little bit more subtle about it. Declaring victory, I think, is the correct prediction of what will happen, but it has not yet happened. So there's still work to be done.

Regarding whether or not big tech has an actual incentive to do good, I like to assume the best of people. I assume that Mark Zuckerberg actually founded Facebook because he actually cared about connecting people. I believe that in his heart of hearts, he does have at least generally speaking, a positive goal for society. He doesn’t want to necessarily do bad or be wrecking democracies across the world. So I don’t think that’s his goal, right?

So I think that starting from that viewpoint is helpful, because for one, it will make you heard. But also, it shows how this is a largely systemic issue. Because despite his well-intentioned goals, which we're assuming exist, and I actually do believe at some level it's true, the incentives in the system in which he plays add a caveat to everything he says that we aren't putting there.

So for example, when Facebook says they care about social responsibility, or that they will take steps to minimize the amount of fake news, whatever that means. All of the statements made by any company in any industry, because of the fact that we're in a capitalist system, carry the implicit condition "given that it does not hamper profits," right? So when Facebook wants to deal with fake news, they will turn to automated AI algorithms. And they say we're doing this because it's impossible to moderate the number of stories that we get.

From a strictly numeric perspective, this is true. But what they’re not saying is that it is not possible for us to use humans to moderate all of these stories while staying profitable. So that is to say the starting point of their action may be positive. But the fact that it has to be warped to fit the profit motive ends up largely negating, if not completely negating the effects of the actions they take.

So for example, you can take Facebook's content moderation on the continent of Africa. They used to have none, and until recently, they had only one content moderation center for the entire continent of Africa. Given the number of languages spoken on that continent alone, how many people do you have to hire at that one moderation center? How many people per language are you hiring? Sophie Zhang's resignation letter basically showed that they were aware of all of these issues and have employees, especially at the lower levels, who are passionate about the social good. So it's clear that they are trying to do a social good. But the fact that everything is conditioned on whether or not it will result in money hurts the end result of their actions. So I believe, and I agree with you, that this industry is different. And I do believe that they have an incentive for the social good. But unless this incentive is forced upon everyone else, they are hurting themselves if they refuse to take profit that they could take, if that makes sense.

But if you choose not to do something because not doing it is socially good, even though doing it would bring profits, some other company is going to do that thing. And they will take the profits and they will beat your market share, unless you can find a way to account for the social good in the stock price.

Lucas Perry: People value you more now that you are being good.

Mohamed Abdalla: Yeah. But I don’t think we’re at a stage where that’s possible or that’s even well-defined what it means. So I agree that even if this research is well-intentioned, the road to hell is paved with good intentions.

Lucas Perry: Yeah. Good intentions lead to bad incentives.

Mohamed Abdalla: Or the good intentions are required to be forced through the lens of the bad incentives. They have to be aligned with the bad incentives to actually manifest. Otherwise, they will always get blocked.

Lucas Perry: Yeah. By that you mean the things which are good for society must be aligned with the bad incentives of maximizing profit share, or they will not manifest.

Mohamed Abdalla: Exactly. And that's the issue when it comes to funding academia. Because it is possible to change society's viewpoint on, one, what is possible, and two, what is preferable, to match the profit incentives of these companies. So you could shape the answers to questions like: what is ethical AI, and what does it cover? What sort of legislation is feasible? What sort of legislation is desirable? In what contexts does it apply or not apply? In what jurisdiction, and so on and so forth. These are all still open questions. And it is in the incentive of these companies to help mold these answers such that they have to change as little as possible.

Lucas Perry: So when we have benefits that are not accruing from industry, or when we have negative externalities or negative effects from the incentives of industry leading to detrimental outcomes for society, the thing that we have for remedying that is regulation. And I imagine that libertarian attitudes, which in this sense I would summarize as socially liberal or left leaning but against regulation, so valuing the free market, are more common at big tech companies than in the general population. So there's this value resistance. We talked about how the people at the top are going to be sifted through. You're not going to have people at the top of big tech companies who really love regulation, or think that regulation is really good for making a beautiful world, because regulation is just always hampering the bottom line.

Yet, it’s the tool that we have for trying to mitigate negative externalities and negative outcomes from industry maximizing their bottom line. So what do you suggest that we do? Is it just that we need good regulation? We need to find some meaningful regulatory system and effective policy? Because otherwise, nothing will happen. They’ll just keep following their incentives, and they have so much power. And they’ll just do what they do to keep doing the same thing. And the only way to break that is regulation.

Mohamed Abdalla: So I agree. The solution is basically regulation. The question is, how do we go about getting there? Or what specific rules do we want to use or laws do we want to create? And I don’t actually answer any of this in my work. I answer a question that comes before the legislation or the regulation. Which is basically, I propose that AI ethics should be a different department from computer science. So that in the same way that bioethics is no longer in the same department as biology or medicine, AI ethics should be its own separate department. And in that way, anyone working in this department is not allowed to have any sort of relationship with these companies.

Lucas Perry: You call that sequestration.

Mohamed Abdalla: It’s not my own term. But yeah, that’s what it’s called.

Lucas Perry: Yeah. Okay. So this is where you’re just removing all of the incentives. Whether you’re declaring conflict of interest or not, you’re just removing the conflict of interest.

Mohamed Abdalla: Yes. Putting myself on the spot here, it’s very difficult to assume that I myself have not been corrupted by repeated exposure. As much as I try to view myself as a critical thinker, the research shows repeated exposure will influence what you think and what you believe. I’ve interned at Google for example, and they have a very large amount of internal propaganda pointed at their employees.

So I can’t barge in here saying that I am a clean slate or, “I’m a clean person. You should listen to my policies.” But I think that academia should try to create an environment where it is possible. Or dare I say, encouraged to be a clean person where clean means no financial involvement with these companies.

That said, there are a lot of steps that can be taken when it comes to regulation. Slightly unrelated, but kind of not unrelated, is fixing the tax code in the U.S. and Canada and around the world. A large reason why a lot of computer science faculty, and computer scientists in general, look to industry for funding is because governments have been cutting the amount of money available for research funding, or at least not increasing it in line with the rate of research being done in these fields. And why do they not have as much money? This is probably in part because these companies are not paying their fair share when it comes to taxes, which is how a lot of researchers get their funding. That's one way of doing it. If you want to go into specifics, it's more difficult, and specific policies are much harder to sell. I don't think regulation of specific technologies would be effective, because the technologies change very fast.

I think creating a governmental body whose role it is to sue these companies when they do things that we believe don't match our social norms is probably the way to go about it. But I don't know. It's hard for me to say. It's a difficult question that I don't have an answer for. We don't even know who to ask for legislation, because every computer scientist is sort of corrupted. And so it's like, "Okay, do we not use computer scientists at all? Are we relying only on economists and moral philosophers to write this sort of legislation?" I don't know.

Lucas Perry: So I want to talk a little bit about transformative AI, and the role that this transition plays in that. There is a sense, and this is a meme that I think needs to be combated, of a race between China and America on AI, with the end goal of that being AI systems that are increasingly powerful.

So there's some sense that any kind of regulation used to try to fix any of these negative externalities from these incentives is just shooting ourselves in the knee, while the evil other is racing to beat us.

Mohamed Abdalla: That’s the Eric Schmidt argument.

Lucas Perry: So we can’t be implementing these kinds of regulations in the face of the geopolitical and international problem of racing to ever more powerful AI systems. So you already said this is the Eric Schmidt argument. What is your reaction to this kind of argument?

Mohamed Abdalla: There are multiple possible reactions, and I don't like to state which one I believe in personally, but I'd like to walk through them. Because first off, let us assume that the U.S. and China are racing for an artificial general intelligence, AGI. Would you not then increase government funding and nationalize this research, such that it belongs to the government and not to a multinational corporation? In the same way that if, for example, Google, Facebook, Microsoft, Alibaba, and Huawei were in a race to develop nukes, would you say, leave these companies alone so they can develop nuclear weapons, and once they develop a nuke, we'll be able to take it? Or would you not nationalize these companies? Or not nationalize them, but basically require that they work only for the U.S. and cannot have interests in any other country. That is a form of legislation or regulation.

Governments would have to have a much bigger say in the type of research being done, who's doing it, and what can be done. For example, in the aerospace industry, you can't employ non U.S. citizens. Is this what you're pushing for in artificial intelligence research? Because if not, then you're conceding that it's not likely to happen. But if you do believe that this is likely to happen, then you would be pushing for some sort of regulation. You could argue about what the regulation should be, but I don't find convincing the viewpoint that we should just let these companies compete with the Chinese companies because they're going to create this thing that we need to beat the Chinese at. If you believe that this is going to happen, you'd still be in support of regulation. It'd just be different regulation.

Lucas Perry: I mean obviously I can't speak for Eric Schmidt. But the kind of regulation that stops the Chinese from stealing the AGI secrets is good regulation, and then anything else that slows the power of our technology is bad regulation.

Mohamed Abdalla: Yes. But for example, when Donald Trump banned the H-1B visa. Or not banned, he put a limit or a pause on it. I'm not sure of the exact thing that happened.

Lucas Perry: Yes. He’s made it harder for international students to be here and to do work here.

Mohamed Abdalla: Yes, exactly. That is the type of regulation you would have if you believed AI was a threat, like we are racing the Chinese. If you believed that, you would be for that sort of regulation, because you don't want these companies training foreign nationals in the development of this technology. Yet this is not what these companies are going for. They are not agreeing with the legislation or the regulation that limits the number of foreign workers they can bring in.

Lucas Perry: Yeah. Because they just want all the talent.

Mohamed Abdalla: Exactly. But if they believed that this was a matter of national security, would they not support this? You can't make the national security argument, saying, "Don't regulate us, because we need to develop as much as we can, as fast as we can," while also pushing against the kind of regulation that would follow if this were truly dangerous. If we did truly need to leave you unregulated internally, we should limit who can work for you, in the same way that we do for rocketry. Who can work on rockets, who can work at NASA? They have to be U.S. citizens.

Lucas Perry: Why is that contradictory?

Mohamed Abdalla: Because they're saying, "Don't regulate us in terms of what we can work on," but they're also saying, "Do not regulate us in terms of who can work for us." If what you're working on is a matter of national security and you care about national security, then by definition, you want to limit who can work on it. If you want anyone, or you say there should be no limit on who can work for us, then you are basically admitting that either this is not a matter of national security, or that profits come before everything else. Google, Facebook, Microsoft: when possible legislation comes up, the Eric Schmidt argument gets played. And it's like, "If you legislate us, if you regulate us, you are slowing down our progress towards this technology."

But if any sort of regulation of the development of tech will slow down the arrival of AGI, which we assume the Department of Defense cares about, then what you're saying is that these companies are essentially striving towards something that should be protected from foreign workers infiltrating. So this is where the companies hold two opposing viewpoints, depending on who they're talking to: no, don't regulate us, because we're working towards AGI and you don't want to stop us. But at the same time, don't regulate immigration, because we need these workers. But if what you were working on is sensitive, then you shouldn't even be able to take these workers.

Lucas Perry: Because it would be a national security risk.

Mohamed Abdalla: Exactly. Especially when a lot of your researchers come from other countries and are likely to go back to those countries, or at least have friends and conversations in other countries.

Lucas Perry: Or just be an agent.

Mohamed Abdalla: Yeah, exactly. So if this is actually your worry that this regulation will slow down the development of AGI, how can you at the same time be trying to hire foreign nationals?

Lucas Perry: All right. So let’s do some really rapid fire here.

Mohamed Abdalla: Okay.

Lucas Perry: Is there anything else that you wanted to add to this argument about incentives and companies actually just being good? And we are walking through this Eric Schmidt argument.

Mohamed Abdalla: Yeah. So the thing I want to highlight is that this is a system level problem. It's not a problem with any specific company, despite some being in the news more than others. It's also not a problem with any specific researchers or institutions. This is a systemic issue. And since it's a high level problem, the solution needs to be at a high level as well, whether that's at the institutional level or national level, some sort of legislation. It's not something that researchers individually can solve.

Lucas Perry: Okay. So let's just blast through the action items here then for solving this problem. You argue that everyone should post their funding information online, including historic information. This increases transparency on conflicts of interest, but as we discussed earlier, those conflicts actually just need to be removed. You also argue that universities should publish documents highlighting their position on big tech funding for researchers.

Mohamed Abdalla: Yeah. Basically I want them to critically consider the risks associated with accepting such funding. I don’t think that it’s a consideration that most people are taking seriously. And if they are forced to publicly establish a position, they’ll have to defend it. And that will I believe lead to better results.

Lucas Perry: Okay. And then you argue that more discussion on the future of AI ethics and the role of industry in this space is needed. Can’t argue with that. That computer science should explore how to actively court antagonistic thinkers.

Mohamed Abdalla: Yeah. I think there’s a lot of stuff that people don’t say because it’s either not in the zeitgeist, or it’s weird, or seems an attack on a lot of researchers.

Lucas Perry: Stigmatized.

Mohamed Abdalla: Yeah, exactly. So instead of trying to find people who simply list on their CV that they care about AI ethics or AI fairness, you should find people who are willing to disagree with you. If they're able to bring up points you disagree with, it doesn't matter if you don't agree with their viewpoint.

Lucas Perry: Yeah. I mean usually, the people that are saying the most disruptive things are the most avant garde and are sometimes bringing in the revolution that we need. You also encourage academia to consider splintering AI ethics off into a different department from computer science. This would be analogous to how bioethics is separated from medicine and biology. We talked about this already as sequestration. Are there any other ways that you think the field of bioethics can help inform the development of AI ethics on academic integrity?

Mohamed Abdalla: If I’m being honest, I’m not an expert on bioethics or the history of the field of bioethics. I only know it in relation to how it has dealt with the tobacco industry. But I think largely, more historical knowledge needs to be used by people deciding what we as computer scientists do. There’s a lot of lessons learned by other disciplines that we’re not using. And they’ve basically been in a mirror situation. So we should be using this knowledge. So I don’t have an answer, but I think that there’s more to learn.

Lucas Perry: Have you received any criticism from academics in response to your research, following the publication that you want to discuss or address?

Mohamed Abdalla: For this specific publication, no. But it may be because of the COVID pandemic. I have raised these points previously, and I have received some pushback, but not for this specific piece. This piece was covered in WIRED, and there are some criticisms of it in the WIRED article, but I kind of addressed them in this talk.

Lucas Perry: All right. So as we wrap up here, do you have anything else that you’d like to just wrap up on? Any final thoughts for listeners?

Mohamed Abdalla: I just want to stress, if they've made it this far without hating me, that this work is not meant to call into question the integrity of researchers, whether they're in academia or in industry. And I think these are critical conversations to have now. It may be too late for the initial round of AI legislation, but for the future it's good, and for longer-term problems, I think it's even more important.

Lucas Perry: Yeah, there's some meme I think going around that one of the major problems in the world is good people who are running the software of bad ideas on their brains. And I think similar to that is all of the good people who are caught up in bad incentives. So this is just sort of amplifying your non-critical, non-judgmental point: the universality of the human condition is that we all get caught up in these systemic negative incentive structures that lead to behavior that is harmful for the whole.

So thank you so much for coming on. I really learned a lot in this conversation. I really appreciate that you wrote this article. I think it's important, and I'm glad that we have this thinking early and can hopefully try to do something to make the transformation of big tech into something more positive happen faster than it has historically with other industries. So if people want to follow you, or look into more of your work, or get in contact with you, where are the best places to do that?

Mohamed Abdalla: I'm not on any social media, so email is the best way to contact me. It's on my website. If you search my name and add the University of Toronto at the end of it, I should be near the top. It's cs.toronto.edu/msa. And that's where all my work is also posted.

Lucas Perry: All right. Thanks so much, Mohamed.

Mohamed Abdalla: Thank you so much.

Maria Arpa on the Power of Nonviolent Communication

 Topics discussed in this episode include:

  • What nonviolent communication (NVC) consists of
  • How NVC is different from normal discourse
  • How NVC is composed of observations, feelings, needs, and requests
  • NVC for systemic change
  • Foundational assumptions in NVC
  • An NVC exercise

 

Timestamps: 

0:00 Intro

2:50 What is nonviolent communication?

4:05 How is NVC different from normal discourse?

18:40 NVC’s four components: observations, feelings, needs, and requests

34:50 NVC for systemic change

54:20 The foundational assumptions of NVC

58:00 An exercise in NVC

 

Citation:

The Center for Nonviolent Communication’s website 

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Maria Arpa on nonviolent communication, which will be referred to as NVC for short throughout the episode. This podcast continues to explore the theme of wisdom in relation to the growing power of our technology and our efforts to mitigate existential risk, which was covered in our last episode with Stephen Batchelor. Maria and I discuss what nonviolent communication consists of, its four components of observations, feelings, needs, and requests, we discuss the efficacy of NVC, its core assumptions, Maria’s experience using NVC in the British prison system, and we also do an on the spot NVC exercise towards the end of the episode.   

I find nonviolent communication to be a powerful upgrade in relating, resolving conflict, and addressing the needs and grievances we find in ourselves and others. It cuts through many of the bugs of normal human discourse around conflict and makes communication far more wholesome and collaborative than it otherwise might be. I honestly view it as quite a powerful and essential skill or way of being that has had quite a transformative impact on my own life, and may very well do the same for others. It’s a paradigm shift in communication and human relating that I would argue is an essential part of the project of waking up and growing up. 

Maria joined the Center for Nonviolent Communication as the Executive Director in November 2019. She was introduced to NVC by UK Trainer Daren de Witt, who invited her to see Marshall Rosenberg, the founder of NVC, speak in London, a moment that changed her life. She was inspired to attend one of the Special Sessions on Social Change in Switzerland in 2005 and then invited Marshall to Malta, where she organised a conference between concentration camp survivors and the multi-national corporation that had bought the site formerly used as a place of torture. Since then she has worked with marginalised, hard to reach individuals and communities and taken her work into prisons, schools, neighbourhoods and workplaces.

And with that, let’s get into our conversation with Maria Arpa. 

I really appreciate you coming on, and I’m excited to learn more about NVC. A lot of people in my community, myself included, find it to be quite a powerful paradigm, so I feel excited and grateful that you’re here. So I think we can kick things off here with just a pretty simple question. What is nonviolent communication?

Maria Arpa: Thank you. Yes, and it’s really good to be here. So nonviolent communication is a way or an approach or a system for putting words to living a nonviolent life. So if you’ve chosen nonviolence as a philosophy or as a way of life, then what Marshall Rosenberg proposed in creating nonviolent communication in the 1960s is a way of communicating interpersonally based on the idea that we need to connect as human beings first.

So NVC, which is short for nonviolent communication, is both a spiritual practice and a set of concrete skills. And it's based on the idea of listening deeply to ourselves and to others, in order to establish what the real needs are that we're trying to meet. And from the understanding of needs that comes through empathic listening, we can begin to build strategies.

Lucas Perry: Alright, so can you juxtapose what NVC does, compared to what usually happens in normal discourse and conflict resolution between people?

Maria Arpa: Yeah, lovely. Thank you. I like that question. In society, we have been brought up, and I would go so far as to say indoctrinated, with taking adversarial positions. And we do that because we think about things like the legal profession, academia, science, the military, and all of those disciplines, which are highly prized in society, but they all use a debate model of discourse.

And that’s wonderful if you want to prove a theory or expand the knowledge between people, but if we’re actually trying to just build a relationship in order that we can coexist, do things, envision, create something new in society, debate just doesn’t work. So at the very micro level, in the family system, what I experienced with couples and families is a sort of table tennis match. And while you’re speaking, I am preparing my counter argument. I would say that we’ve been indoctrinated with a debate model of conversation that even extends into our entertainment, because in my understanding, if you go to scriptwriting school, they will tell you that to make a Hollywood blockbuster, you need to leave a conflict in every scene.

Maria Arpa: So that has become the way in which we communicate with each other, which is played out in our legal systems, in the justice system, it’s played out in education. When I position that against nonviolent communication, nonviolent communication says we need to build a relationship first. What is the relationship between us? So if we even take this podcast, Lucas, you and I had built a relationship. We didn’t just make an appointment, come on, and do this, we got to know each other a bit. Is that a helpful answer?

Lucas Perry: That's a good question. Yeah, I think that is a helpful answer. I'm also curious if you could expand a bit more on what the actual communicative features of this debate style of conversation focus on. So where NVC focuses on talking about needs and feelings, feelings being evidence of needs, and also making observations, observations as opposed to judgments, what is it that the normal debate style or adversarial kind of conversation focuses on? What is the structure and content of that kind of conversation that is not needs, feelings, and observations?

To me, it often seems like it deals with much more constructed and synthetic concepts. Where in NVC you boil concepts down to things which are very simple and basic and core, the adversarial relationship deals with more complex, constructed concepts like respect and abandonment, and is less simple in a sense. So do you have anything else you'd add here on what actually constitutes the adversarial kind of conversation?

Maria Arpa: Yeah, definitely. So in an adversarial conversation now we have two levels. And the first level, because I’m really thinking about how we’ve been programmed, in the first level, we’re battling backwards and forwards and what we’re out to do is win the argument. So what I want is for my argument to prevail over yours. And in families, it’s generally who’s the worst off person, who’s the most tired, who’s the most unresourced, who does most of the work, who’s earning most of the money, that level. It’s a competition. It’s a competitive conversation.

At its worst ends of the spectrum, and when we think about school and education, it’s a debate, which also includes the prospects of enforcement, which is punitive. So we could simply be competing to win an argument or we could be actually out to provide evidence of the other person’s wrongness in order to punish them, maybe not even physically punish them, but punish them with the withdrawal of our love.

Lucas Perry: Right, so if we're not sufficiently mindful, there is this kind of default way that we have been conditioned into communicating, where I notice a sense of strong ego identification. It's a bit selfish. It's not collaborative in any sense. As you said, it's kind of like, if I can make and give points which are sufficiently strong, then the person will see that I'm the most tired, I'm the most overworked, I have contributed the most, and then my needs will be satisfied. And my needs being satisfied is implicit; it's never made explicit. And so NVC is more about making the needs and feelings explicit and core, and cutting out the argument.

Maria Arpa: Yes, I really agree with that. The problem with that debate model, that adversarial model, is that I might get my needs met, I might be able to bulldoze or bully my way, or I might be able to play the victim and get my needs met, but usually I have induced the other person to respond to me out of fear, guilt, or shame, not out of love. Someone, somewhere will pay for that. It’s that whole thing, you may have won the battle, but you haven’t won the war. While at a very micro level, day by day, I can scrape by and score points and win this and get my need met, in the long term, I’m actually still feeding the insecurity that nothing can come without struggle.

Lucas Perry: That’s a really good point.

Maria Arpa: And when you say it cuts out the argument, the argument bit which is not necessary, I'm saying that we will go into the dialogue with the idea that once we've established the needs, then we can actually build agreements. Now, that's not to say that I'm going to get everything I want, and you're going to get everything you want. But there's a beauty in the negotiation that comes from the desire to contribute.

Lucas Perry: Yeah, so I found what you said to be quite beautiful and powerful: you may gain short term benefits from participating in what one might call violent communication, or this kind of adversarial debate format, you might get your needs met, but there's a sense in which you're conditioning in yourself and others this toxic communicative framework. First, you're more deeply instantiating this duality between you and other people, which you're reifying by participating in this kind of conversation. And that's leading to, I think, a belief in an unwanted world, a world that you don't want to live in, a world where my needs will only be met if I take on this unhappy, defensive, self-centered stance in communication. And so that's a contradicted belief: if I don't do the debate format, then I will not be safe, or I will not get my needs met, or I will be unhappy. You don't want that to be true, and so it doesn't have to be true if you pivot into NVC.

Maria Arpa: Yes, and I've got two things really to say about that. One is that we are often counting the cost of something, and one of the things that we very rarely factor in is the emotional cost of getting our needs met, the emotional cost of taking on the task. And sometimes when I work with people, we find that the price is too high. I may be getting my fame and fortune or whatever it is I'm after, but actually, the emotional price to my soul is just too high.

And most of us have been taught not to imagine that as being of any importance. In fact, most of us have been taught, rather like scientific experiments, that when I’m looking at a situation, even when I’m not taking a self-centered point of view, even when I’m looking at it and wanting to be generous and benevolent, I look at a situation and I fracture myself into that situation, as if I don’t matter, or I don’t count, and the truth is everybody matters, and everybody counts.

So that’s one thing, and then when I heard you talking about this idea of adopting the adversarial approach to life and how that will feed the system, the best example I have of that is in the prison that I’ve been working in for the last four years. Prisons are pretty mean places, and so most people come into prison in the belief that they need to develop a very strong set of armor in order to defend themselves, and sometimes attack is the best form of defense, in order to survive this new world. And what I’ve been able to demonstrate with the guys that I’ve been training is that actually, if you throw away the armor, and if you actually come to be able to work out what your needs are, and be able to help other people find their needs, actually, it becomes a different place.

And the proof of that has been 25 men that I have trained to do what I do within the prison, working with other prisoners across the prison, and turning the prison into one of the safest jails in the UK.

Lucas Perry: Wow.

Maria Arpa: So the first thing that happens is I deliver training, and it's intensive, it's grueling, there's a lot of work to do, and something happens to the guys as they're doing the training. Because see, I believe that the people that cause the problems are the ones who have the answers to the problems. We should go to the people that cause the problems and say, "How do we not end up here again?" So as they do this training, they realize the potential to make their prison sentence go better, what a nice thing to be able to do, and then they realize that actually, "I can't do this for anyone else until I've done it for myself." So that's part of the transformative bit where they go, "Well hang on, I've got so much conflict, or I've got so much inner conflict or dislike of myself, or whatever chaos going on inside, I can't possibly do this for anyone else. So, right, now we can begin, because now we have to begin as a team."

And so what’s been remarkable for me is 25 men who, on the outside of prison would never ordinarily come into contact with each other, from completely different walks of life, different areas, different ages, different crimes, we’ve got everything from what in America you call homicide, through sexual offenses, to fraud, and that type of thing.

So these 25 men have been through a process to be able to work together as a team and to be able to understand each other, and they are completely blown away by the idea that they have a process. Because they get overrun with casework, what I've taught them is that you have to put yourselves first. So when they feel the system they're using is getting overrun, they actually need to stop everything and come back together as a team, work out what petty conflicts are arising, and use this nonviolent communication process to come back to center. So for me, very much, there's a huge difference in being able to live this way, and in no greater place have I proved it than in a place called prison.

Lucas Perry: Yeah, exactly. I bet that’s quite liberating for them, and I bet that there’s a deeper sense of security that is unavailable when you’re taking the adversarial, aggressive stance.

Maria Arpa: There’s a deeper sense of security. There’s a huge amount of gratification for somebody who has literally been thrown away by society and told that they’re worthless and have no place, and I don’t want to get into the crime or whether they did it or didn’t do it, but they’ve been thrown away by society, to actually find that they can be in service of others, and they can actually start to love themselves. So there’s a really huge gratification in being able to do that and not do it in a self sacrificing way, to be able to do it in a way that enriches the other person and enriches themselves. That for me is monumental.

And the second thing is that, in places like prisons and families, and I often compare schools to prisons, you can be overwhelmed by the power of enforcement and the misuse of authority. So often, one person in a position of power may dish out the rules one way today and a different way tomorrow, or may treat one person differently to another. And so what we've been able to establish is that if the guys sit in circle and invite officers to those circles, they can clear up some of the things that just create unnecessary conflict.

Now, obviously, in prison, some topics are non-negotiable, and we don’t go there, right? And if you don’t think you should be imprisoned, then you need to take the appropriate steps through your legal advisors. But if it’s a case of the laundry’s messed up every week and there are fights starting over it, if it’s a case of exactly who’s collecting the slips for the lunches, and that’s creating, or there’s an argument when people are queuing up for their food, these things can be sat down and had out and people could talk about their different experiences, and we can clear up those gray areas by the guys coming up with a policy. And they do this using needs.

Lucas Perry: All right, it’s encouraging to see it as effective and liberating in the case of the prison system. I’m interested in talking with you more about how this might apply to, for example the effective altruism movement, or altruism in general, and also working on really big problems in the world that involve both industry and government, who at times function and act rather impersonally. Where, for example, the collective incentives of some corporation are to maximize some bottom line, and so it’s unclear how one could NVC with something where the aggregate of all the behavior is something rather impersonal.

So right before we get to that, I just want to more concretely lay out the four components of NVC practice, just so listeners have a better sense of what it actually consists of. So could you take us through how NVC consists of observations, feelings, needs, and requests?

Maria Arpa: Yeah, I'd love to, yes, thank you. So usually, in our heads, if we're indoctrinated in that adversarial model and we don't even know it, what's happening is that when we see something, we're busy judging it, evaluating it, deciding whether we like it or don't like it, imposing our diagnosis, and generally having an opinion about it, good or bad. And then, of course, we live in a world now where people can take to social media and destroy other people if they choose. So we can actually just act out of what we think we're seeing.

In nonviolent communication, what we do is we try to get to what we call the observation without the evaluation. So what is it that’s actually happening? And that is really trying to separate the reality from the perception. A really good example of that was I saw a demonstration once, a woman came down from the audience, and she wanted to talk about how angry she was with her flatmate. And she gave out this whole story and the trainer would say, “Well, we need to get to the observation, we need to get to the observation.” And out of this whole mass, the only two observations that she could come up with were that her flatmate occasionally leaves a dirty plate and a dirty mug and dirty cutlery in the sink without washing it up. And on occasion, her flatmate when she leaves the flat or the apartment, allows the door to slam and make a very loud noise behind her. And those are the only two observations that she could come up with that were actually really happening. Everything else was what she made up around it.

Lucas Perry: Yeah. So there’s this sense in which we’re telling ourselves stories on top of what may be just kind of simple, brute facts about the world. And then she’s just suffering so much over the stories that she’s telling herself.

Maria Arpa: Yeah, so the story attached to someone leaving the dirty plates in the sink is, she’s doing it on purpose, doing it to get at me. Those are the sorts of things we might tell ourselves. Or we might be the opposite and say, “She’s just so selfish.” And I really love what you just said. Of course, the person that I’m causing the most grief and pain to is myself. I’m cutting myself off from my own channel of love.

So that’s how we get to the observation, and it’s a really important part of the NVC process, because it helps us to separate what we are telling ourselves from what is actually happening in front of us. And the way that I can tell if it’s an observation or an evaluation is that I could record it on a video camera and show it to you, and you would see the same thing.

Lucas Perry: Yeah, that makes sense. You’re trying to describe things as more factual without the story elements. This happened, then this happened, rather than, “My asshole roommate decided to leave her dirty shit everywhere because she just doesn’t care and she sucks.”

Maria Arpa: Yeah, so that’s the first step. And then what we’re looking to do is check in with ourselves on how do we feel. This is a really important step, because in nonviolent communication, what we propose is that our feelings are the red warning light on the dashboard of the car that tells you to pull over and look under the hood, you would say, we would say bonnet, and to check what else is going on. So feelings are a gateway. They’re our doorway. So they are our barometer.

So it’s really important to develop a really good vocabulary around feelings. And it’s really important to get to the feeling itself, whether it’s sadness, or anger, or upset, or despair or grief, or joy, or happiness, it’s really important to develop a vocabulary of feelings because if I ask someone how they feel, and I get, “I feel like,” no feeling is coming after the word “like.” I feel like jumping off a cliff. I feel like just going to bed and never getting up again. I feel like running away. That’s not a feeling. People can use those kinds of metaphors for us to try and guess the feelings, but actually what I want is the feeling.

Lucas Perry: Right, those are overly constructed. They need to be deconstructed into core feelings, which is a skill in itself that one learns. So you could say for example, “I feel abandoned,” but saying, “I feel abandoned,” needs to be deconstructed. Being abandoned is being afraid and feeling lonely.

Maria Arpa: So if we say, “I feel abandoned,” and I’m particularly referring to the “-ed” at the end, or, “I feel disappointed,” rather than disappointment, then what I’m doing, by the back door, is actually accusing somebody of doing it to me.

Lucas Perry: Yeah, that’s right. It’s a belief about the world actually, that other people have abandoned you or are capable of abandoning you.

Maria Arpa: Yeah, exactly. So there’s a skill in the language. However, if you go to the cnvc.org website, we have a free feelings list and a needs list that’s downloadable for anybody that wants to go and get it. And that helps you to really get closer to the language.

Lucas Perry: Okay, so I do have an objection here that I don’t want to spend too much time on, but I’m curious what your reaction is. So I think that what you would call the “abandoned” word is a faux feeling, is that the right word you guys use? There’s a sense in which it needs to be further deconstructed, which you mentioned, because it’s a belief about the world. Yet is there not also some reality that we need to respect and engage with, where abusers or toxic people may actually be doing the kind of thing that seems like merely a belief about the world?

Maria Arpa: That’s where I would get back to the observation. Because those things do happen. I work a lot in domestic violence. I understand this. And there are two things, and one that we’ll go on to later. There’s getting back to the observation, because if I heard you say, “I feel abandoned,” what I would want to do is go back to figure out what’s the observation that brings you to that sense. Because actually, if you’re telling yourself you’ve been abandoned, or if somebody has abandoned you, and we can see that in an observation, then I’m guessing you’re feeling a huge amount of misery, grief, and despair, or loneliness, those would be the feelings.

And then later on, when we get to it, I’ll talk about the use of protective force. Because it isn’t all happy, dippy, and let’s all get on our hippie barge and have a great life. Without putting too fine a point on it, shit has happened, shit is happening, and shit’s always going to happen. And that’s the way of the world. What I’m talking about is how we respond to it.

Lucas Perry: Yeah, that’s right. You can try and NVC Hitler, and when you realize it’s not going to work, that’s when you mobilize your armies.

Maria Arpa: That’s a very interesting thing, you could try to work with Hitler, because actually, I don’t know if you’ve seen, I have a copy of it somewhere, Gandhi actually wrote a letter to Hitler.

Lucas Perry: Yeah, it didn’t work.

Maria Arpa: It didn’t work. And actually, if you look at the letter, it’s a shame because there was nothing in there for me that I recognized as NVC that may have generated at least a response.

Lucas Perry: Alright, so we have feelings, and we want to be sure to deconstruct them into simple feelings, which is a skill that one develops. And the thing here that you said that the feelings are like the warning to check the engine of the car, which is a metaphor to say feelings are a signal giving you information about needs being unmet, or at least even the impression or ignorance or delusion that you think your needs are not being met. Whether it’s actually your needs not being met, or a kind of ignorance to your needs being met, either way, they are a signal of that kind of perception.

Maria Arpa: Yes, absolutely. They’re a signal for something. And so when we talk about feelings, what I’m trying to do is capture the real emotion here and name it.

Lucas Perry: And so then there’s a sense that when you communicate needs to other people, they cannot be argued with and they’re also universally shared. So you can recognize the unmet needs of another person as a reflection or a copy and paste of your own needs.

Maria Arpa: So, this is a really interesting part of the conversation when we get to needs because that sits in something called needs-based theory. And Marshall Rosenberg does not have the monopoly on needs-based theory. I mean, most people will have heard of Maslow’s hierarchy of needs. There’s a Chilean economist called Manfred Max-Neef, who boiled all the needs down to just nine and said that everything else is just ways, or satisfiers, to try and meet those needs.

For me, needs-based theory is an art, not a science. And so again, you could go on the cnvc.org website, and you can pull off a list of needs, and you’ll recognize them. Now, when I say it’s an art not a science, on there could be, say, the need for order and structure. Okay, so let’s say I have a need for order and structure.

Lucas Perry: That seems like it needs to be deconstructed.

Maria Arpa: Yes. So I would then say, “Maybe that is a strategy to get to a deeper need of inner peace, but at the moment, that seems to be the very present need for me. I come downstairs, my desk looks like a bomb’s hit it, I’ve got calls to get on, and I just don’t feel like I can get my day started until I’ve created a sense of order around myself.”

It is a simple need in that moment, but the idea is that when we look at the fundamental needs like air, and movement, and shelter, and nutrition, and water, those are universal. I mean, I don’t think anyone could disagree with that. And then we get into more spiritual needs and social needs. Things like discovery and creativity and respect, what a big word, respect is. And the way I like to look at it is, you see, all the arguments we ever have can never be over needs, because I can recognize that need in myself, as well as in others, but they’re over the strategies that we’re trying to use to meet the need that may be at a cost to someone else’s needs, or to my own deeper needs.

So a really good example is if you take our need for air. There’s only one strategy to meet our need for air and that’s to breathe. How many arguments do people have over the need for air and the strategy of breathing?

Lucas Perry: Zero.

Maria Arpa: Right. Now, let’s take a really big word that gets bandied about everywhere, respect. How many arguments do we have over the strategies we’re using to meet a need for respect?

Lucas Perry: A million. And another million.

Maria Arpa: Exactly, exactly. And so the arguments are only ever about strategy. And once you’ve understood it, and practiced it, and embodied that, and you can see the world through that lens, everything changes. And that’s why I can do what I do.

Lucas Perry: Yeah, well, so let’s stick with air. So some people have a strategy for meeting their needs by polluting the air with things. So there’s some strategy to meet needs where the air gets worse, and everyone has this more basic need to breathe clean air, and so the government has to step in and make regulations so that some more basic need is protected. But then so there’s this sense that strategies may harm other people’s needs, and there’s a sense in which sometimes the strategies are incompatible. But there’s this assumption that I think is brought in that the world is sufficiently abundant to meet everyone’s needs and that’s a way, I think, of subverting or getting around this contradictory strategy problem where it would suggest that, okay, oil companies, we can meet your needs some other way, as long as you change your strategy and we’ll help you do that so that we have clean air. Does this make sense?

Maria Arpa: It makes total sense, and I’ve got two sort of parallel answers, maybe even three. So the first one: there are people in the world who don’t mind, or maybe they do mind secretly, it doesn’t matter, who will pollute the air for profit. And we’ve reached that point because we have been using an adversarial system with each other, which means that as long as I can turn someone into the enemy, I can justify doing whatever I want. So we create bad people and good people.

So in this adversarial system, one of the things we can do is justify what we’re doing by holding up other people as being in the way. So we’ve created that system, and actually what we’re finding is that the system is failing. I don’t know, I don’t want to predict things, I’m not an economist or a politician, but it seems to me that the system is failing rapidly, more and more. More harm is being visited on the planet than is necessary, and lots of people are waking up to that.

So now we’re hitting some kind of tipping point where, in giving people things like the internet and all this stuff to self-soothe, actually, a lot of people got educated and started to ask better-quality questions about the world they’re living in. And I think there’s a bit of an age difference between us, the wrong way for my end, but people of your generation are definitely asking better-quality questions, and they’re less willing to be fobbed off.

So now we’ve got to figure out, how do we change things? And while I understand that from time to time, we need to go out and we need to actually put our foot down and make a protest and make a stand and say, “We’re not putting up with this,” and use protective force, and nonviolent resistance, and civil disobedience, while we need to do those things, we will never change things if we’re only operating at the incident level. If you try to do everything and fix it at the incident level without somebody working long-term on the system… People need to organize, and work out how people like you could get into positions of power.

I mean, I did a lovely piece of work with a Somalian community many years ago, and they’d arrived in the UK as refugees, and when they first arrived, they thought they were only going to be around for a few years and that the war would sort itself out and they’d all go back home. So they kept to themselves and they were very excluded and left out of society, and some of the sons were getting into trouble with the police because they hadn’t really worked out how to live in this society, and after they realized that actually they weren’t going back, “We’re here, this is our home,” what they’ve realized is they needed to start organizing. They needed to become teachers and doctors and lawyers and actually start to help their own community in that way. And I found that very moving and very empowering, and I loved doing the work with them. And the work we were doing was literally around the mothers and the sons. So that’s changing things at a system level.

Lucas Perry: Okay, so the final point here is about making requests. And I think this is a good way to pivot into talking about how you can’t simply make requests to make systemic change, because the power structures are sufficiently embedded and the incentives are structured such that, “Hey, excuse me, please stop having all that power and money, my needs are not being met,” isn’t going to work.

So let’s talk about the efficacy of NVC and how it’s effectively used. I think it’s quite obvious how NVC is excellent for personal relationships, where there’s enough emotional investment in one another, and for authentic care and community where everyone’s clearly invested in actually trying to NVC if they can. Then the question becomes, for bigger problems in the world like existential risk, whether NVC can be effective in social movements, or with policy makers, or with politicians, or with industry or other powerful actors whose incentives aren’t aligned with our needs and who function impersonally. What is your reaction to NVC as applied to systemic problems and impersonal, large, powerful actors who have historically never cared about individual needs?

Maria Arpa: That’s a really interesting question because in my experience of the world, nothing happens without some kind of relationship between people. I mean, you can talk about powerful actors that don’t care, but bring me a powerful actor that doesn’t care and let me have a conversation with them. So for me, I agree that there’s a place for NVC in a group of people who care. There’s also a place for NVC in making the conversation irresistible, finding that place in somebody, because if we work on the basis that there are human beings in the world that have no self-love, or no love at all, if we work on the basis that there are human beings that walk the planet that are just all selfish and dangerous and nothing else, then of course, we’re doomed.

But I don’t believe that, you see, I believe that we are all selfish, greedy, kind, and considerate. And I know this from doing this work in prisons, that often what’s happened is the kind and considerate has just gone to sleep, or it’s paralyzed, or it’s frozen, but it is there to be woken up. And that’s the power of this work, when the person has sufficiently embodied it, has practiced this, and really understands that this involves seeing the world through a different lens. That actually, my role in the work I do, and I work in the front line of some of the worst things that go on in society, my role is to wake up the part of a person that is kind and considerate, and nurture it and bring it to life and grow it and work with it. And that doesn’t happen in one conversation. I don’t do that because I want something, I do that because I genuinely care about how that person is destroying themselves.

I can give you an example of somebody I met in prison who had been imprisoned for being part of a very, very violent gang, been in violent gangs for most of his life, done a lot of time in prison, and the judge called him evil, and greedy, or whatever. And he came on one of my trainings in around 2013. And he kept coming to talk to me in the breaks, it was like he really wanted some kind of connection or some affirmation or something, and he said, “I did a restorative justice training last month, and I really have to think about the harm I’ve done to my victims.” And I said, “You also have to think about the harm you’ve done to yourself.” And that was the first moment of engagement. And actually, now this man will be out of prison in I think 2022. He has put himself through a therapeutic prison for six years. I’ve never seen a life change to such an extent or such a degree. We’re thinking about employing him when he comes out of prison.

And that’s the thing: how do you engage a person to look at themselves and at how they may be destroying themselves in the pursuit of whatever it is they think they need? So, bring me somebody who is a powerful actor, who doesn’t care about anyone else, and we’ll open the conversation. That’s how I see it. The reason that I can do this and I can have these conversations is I don’t have an agenda for another human being. I simply want to understand what is going on, what the motivations are, what the needs are, and work out with that person, is that strategy actually working for you? And if you’re meeting your need for power or growth or structure or whatever, is it costing you in some other need that’s actually killing you slowly?

Lucas Perry: Yeah, I mean, I think that, for example, at risk of becoming too esoteric, non-dual wisdom traditions, I think, would see this kind of violent satisfaction of one’s own needs as also a form of, first of all, self-harm, because your needs extend beyond what the conceptual egoistic mind would expect to be your needs. I’m thinking of someone who owns a cigarette company, and who’s selling cigarettes and knows that he’s basically helping to lie about the science of it, and also promoting that kind of harm. There’s a sense in which it’s spiritually corrupting, and leads to the violation of other needs that you have when you engage in the satisfaction of your needs through these toxic methodologies.

Maria Arpa: Absolutely, and it’s a kind of addiction, it’s a kind of habit, or obsession. One of the things that I’m really interested in is, at the end of the day, when we get to the request part of NVC, the real request is change. Whatever it is I’m asking for, whether I’m making a request of myself or the person in front of me, I’m requesting change. And change isn’t easy for most people. People need to go through a change process. And so it’s not just about the use of NVC as an isolated tool that is going to change the world, it is about contextualizing the use of NVC within other structures and systems like change processes, understanding group dynamics, understanding psychology, and all of those things, and then it has its place.

Lucas Perry: Yeah, it has a place amongst many other tools in which it becomes quite effective, I imagine. I suppose my only reaction here then is, you have this perspective, like, “Bring me someone in one of these positions of power, or who has sufficient leverage on something that looks to be extremely impersonal, and let’s have a conversation,” those conversations don’t happen. And no one could bring you that person really, and the person probably wouldn’t want to even talk to you, or anyone really who they know is coming at them from some kind of paradigm like this.

Maria Arpa: Oh, I don’t know about that. I mean, in the work I do, it’s a very small world, I’m not trying to affect global change. I would love to, but I’m not. But in the prison work I was doing, we managed to get the prisons minister to come and see the work, and I managed to then have a meeting with him. And I managed to convince him on one or two things that had an effect at the time. So I don’t know that these things don’t happen. I think it’s about the courage and determination of the people to get those meetings, not coming from having an agenda for that person, but coming from really wanting to understand what the thinking is.

Again, in my experience, having been around the block a few times, the people making policies would be absolutely horrified if they saw how those policies are being delivered on the ground. There’s a huge gap between people sitting somewhere making a policy, and then how it gets translated down hierarchical systems, and then how it gets delivered. I like to think that policy makers aren’t sitting around the table going, “How can we make life worse for everybody, because we hate everybody.” Policy makers are sometimes very misguided and detached and unable to connect, but I don’t think policy makers are sitting there going, “We hate everybody, let’s just make life difficult.” They really genuinely believe they’re solving problems. But the issue with solving problems is that we’re addicted to strategy before understanding the needs.

Lucas Perry: We’re addicted to strategy before understanding needs.

Maria Arpa: Yeah. Our whole mentality is, “Problem? Fix it.”

Lucas Perry: So I mean, the idea here then, is that the recognition of needs, along with some other assumptions that we can talk about shortly, relaxes this adversarial communicative paradigm into a needs-based one where you take people on good faith and recognize the universality of human needs, and there’s this authentic care and empathy which is born not of something you’re fabricating, but of something that, in participating in it, actually serves a need you already have for authentic human connection, or maybe that boils down to love. And so NVC can be an expression of love, in which case NVC becomes something spiritual. And then this kind of process is what leads to a reexamination of strategy.

Maria Arpa: Yeah, so the idea is that because we have a “problem, fix it” mentality, we are skipping over the main part, which is to sit with the pain of not knowing. So what we do is we jump to strategy, whether that’s in our daily lives, “I feel bad, I’ll go and get a haircut or buy myself a new wardrobe,” or, “I’ve got a problem, and it’s going to create a big PR problem, so I’m going to do this,” and what we’re missing is the richness of understanding that when you do that, you’re acting out of fear. You’re jumping, because you’ve got triggered or stimulated in some way, and you’re acting out of fear to protect yourself from the feelings that you don’t believe you’re going to be able to cope with.

And what I’m saying is that we understand, we get to the observation, we identify, is this an issue? Is it not an issue? And then we go within, in a group, and we sit with the pain, the mourning, of the mistakes we’ve made, or the problem we haven’t solved, or the world we’ve created, whatever it is, and it’s in sitting together with that, and being willing to say, “I don’t know what the answer is right now, or today. Maybe I just need to breathe,” in being able to do that, we reach our creativity.

So we’re coming out of a place of absolute creativity and love, not jumping out of fear. And there’s a tremendous difference in operating in the world in this way. But it requires us to be willing to be vulnerable, and I think that’s what I think you’re talking about when you talk about people being detached. They’re so far away from their vulnerability, and when people are so far away from their vulnerability, they can do terrible things to other people or themselves.

Lucas Perry: Yeah, I mean, this is a sense of vulnerability in which, it’s a vulnerability of the recognition and sensitivity of your needs, but there’s a kind of stable foundation and security in that vulnerability. It’s a kind of vulnerability of self-knowing, it seems.

Maria Arpa: It’s vulnerability, plus trust.

Lucas Perry: It seems to me then, NVC’s place in the repertoire of an effective altruist, or someone interested in systemic change or existential risk, is that it becomes a tool in your toolkit for having a kind of discourse with people that may be surprising. I definitely believe in the capacity for enlightened or awakened people to exist as an example of what might be possible. And so if you come at someone with NVC who’s never experienced NVC before, I agree with you that that is where, “Oh, just have the conversation with the person,” might lead to some kind of transformative change. Because if you exist as a transformative example of what is possible, then there is the capacity for other people to recognize the goodness in you that is something that they would want and that leads to peace and freedom. NVC is obviously not the perfect solution to conversation, or the perfect solution to the problem of strategy, for example, and I guess, broadly, strategy can also be understood as game theory, where you’re going to have lots of different actors with different risk tolerances and incentives, but it is a much, much better way of communicating, full stop.

Maria Arpa: I notice I feel a slight discomfort when you call NVC a tool, because I don’t see it as a tool, I see it as a way of life.

Lucas Perry: Yeah, I hear that.

Maria Arpa: When I’m in that frame, because I look at the person I was 20 years ago, and I look at the person I am now and I see the transformation, but it’s because of the embodiment of something. It’s because it’s really helped me to look at all aspects of my life. It’s helped me to understand things that I wasn’t understanding, it helped me to wake up and become functional, and mindful, and all of those things, but that’s who I am now. I mean, I’m not saying that I’m some perfect person, and of course, occasionally, the shadow’s always there, but I’ve learned not to act on my shadow. I’ve learned to play with it. But when I am that embodiment or that person, then I’m bringing a new perspective into any conversation I have. And sometimes people find that disarming in an engaging way.

Lucas Perry: Yeah, that’s right. It can be disarming and engaging. I like that you use the word waking up. We just had a podcast with a former Buddhist monk and we talked a lot about awakening. And I agree with you that calling it a tool is an instrumentalisation of it, which lends itself to the problem-solving mindset, which is kind of adversarial with relation to the object which is being problem solved, which in this case, could be a person. So if it becomes a kind of non-conceptual embodied knowing or being, then there is the spiritual transformation and growth that is born of adopting this upgraded way of being. If you download the software of NVC, things will run much better.

Maria Arpa: So then I wanted to comment on the strategy. NVC is a way of unlocking something, okay. Now once I’ve unlocked it, and once I’ve got to the part where we’re now looking at strategies that will satisfy needs, now, we might need a different way of conversing. Now it might be very robust, it might be from the point of negotiation, and that negotiation may be very gentle and sensitive, but it can also be very boundaried. And so yeah, NVC for me is the way to unlock something, to bring people into a consciousness that what we’re going to do is, what’s the point of making strategies if we don’t understand the needs we’re trying to meet, and then using those needs as the measurement for whether the strategy is going to satisfy or not.

Lucas Perry: Okay. And I think I do also want to put a flag here, and you can tell me if this is wrong, that even those negotiations can fail. And that comes back to this kind of assumption that the world has sufficient abundance, that everyone’s needs can be met. So I mean, my perspective is I think that the negotiations can fail, and that when they fail, then something else is needed.

Maria Arpa: So if the negotiation has failed, in my experience, it’s because somebody wanted something, even if it was just speed, that wasn’t available. And so a really big deal for me is understanding where we want to get to, having that shared vision that we’re all trying to get to this place, and working towards it at the speed and tolerances of the whole group, and yet not allowing it to go at the slowest person’s pace. And that’s an art. There’s a real skill to, “We’re not going at the slowest person’s pace, but we’re also not going to take people out of their tolerances.”

Lucas Perry: But it seems like often with so many different people, tolerances are all over the place and can be strictly incompatible.

Maria Arpa: So that means we didn’t do enough work, and our shared vision isn’t right, and maybe we need to go back and look at the deeper needs. One of the things I talk about in this work is you’re never going to undo 30 years of poor communication in one conversation. It’s a process, and what I’m looking for is progress. And sometimes progress is literally just the agreement that the person will have another conversation with me, rather than slam the door in my face.

I’ve done neighbor disputes where they haven’t responded to the letters or the phone calls, and I have knocked on someone’s door, and I’ve got 30 seconds before they slam the door in my face and tell me, in no uncertain terms, to whatever. And so for me at that moment, them just giving me two minutes, and then just getting to the agreement that I’m not going to try and do any business with you right now, just getting to the agreement that you will have another conversation with me, is progress.

And so it’s really about expectations and how quickly we think we can undo things or change things. And change processes are complex. How many times did you wake up and say, “I want to get fit or eat healthier food or lose weight or stop smoking or drink less,” or whatever it was, and then did you execute it straight away? No, you fluctuated. You probably relapsed, and relapse is a really important part of change. But then do we give up? Do we say, “Well that’s it, it’s over, we can’t negotiate,” or do we say, “Well, okay, that didn’t work. What else could we try?”

So in my world, what I’ve understood is that the art, or the trick, to life is not constantly searching to get your needs met. The trick to life is understanding that I have many needs, and on any day, week, month, or year, some get met and some go unmet, and I’m okay with that. It’s just looking at it on balance. Because if the aim of the game is to go, “Yeah, my needs for this, this, this, and this are all not being met, so therefore I’m going to just make it my mission to get my needs met,” you’re still in the adversarial paradigm.

So I have lots of needs that go unmet, and you know what, it’s fine. It doesn’t mean I can’t express gratitude for what I do have. It doesn’t mean I don’t love everybody and everything in the way it is. It’s fine. I have no expectation that all my needs will get met.

Lucas Perry: Yeah, so you’re talking here about some of your experience, which I think boils down to some axioms or assumptions that one makes in NVC that I think are quite wholesome and beneficial. And I’ll just read off a bunch of them here and if you have any reactions or want to highlight any of them, then you can.

So the first is that all human beings have the capacity for compassion and empathy. I believe in that. Behavior stems from trying to meet needs, so everything that we do is an expression of just trying to meet needs. You said earlier there are no bad or good people, just people trying to meet needs. Needs are universal, shared, and never in conflict. I think that one’s maybe 99.9% true. I don’t know how psychopaths, like Jeffrey Dahmer, fit in there.

Maria Arpa: Well, I mean, I’ve worked with people in prisons who have been labeled as psychopaths, and I have worked on that very clear basis that people are selfish, greedy, kind, and considerate, but that the kind and considerate is either not on show, not available, has been put in a box, or is paralyzed, and I have woken up the kind and considerate.

Lucas Perry: You don’t think that there are people that are sufficiently neurodivergent and neuroatypical, that they don’t fit into these frameworks? It seems clearly physically possible.

Maria Arpa: It only runs out when I run out of patience, love, and tolerance to try. It only ends at that point, when I run out of patience, love, and tolerance to try, and there might be many reasons why I would say, “I’m no longer going to try,” don’t get me wrong. We’re not asking everybody to just carry on regardless. But yeah, when I say I’ve had enough and I don’t want to do this anymore, that’s when it ends. The trouble is, I think we do that far too quickly with most people.

Lucas Perry: Yeah. All right. And so kind of moving a bit along here. The world has enough resources for meeting everyone’s basic needs, we mentioned this.

Maria Arpa: I do want to just comment on the idea that the world is abundant and has enough abundance to meet everybody’s needs.

Lucas Perry: Yeah.

Maria Arpa: The issue is, if it hasn’t, what’s the conversation we’re going to have? Or do we just want to inflict more and more unnecessary human suffering on each other? If, as is predicted, there’s going to be climate change on a scale that renders parts of the planet uninhabitable and there’s going to be mass migration, what are we going to do? Are we going to just keep killing them? Are we going to have a race to the bottom?

Lucas Perry: Are we going to leave it to power dynamics?

Maria Arpa: Yeah, or are we going to say, “Actually, things are getting tighter now, so we need to figure out how to collaborate so that we don’t kill each other.”

Lucas Perry: And then, so I was saying, if feelings point to needs being met or unmet, I would argue this is more like feelings point to needs being met or unmet, or the perception that they are being met or unmet.

Maria Arpa: So I just say, being able to identify the feeling and name the emotion is an alarm system. It’s our body’s natural alarm system, and we can use that to our advantage.

Lucas Perry: And I’ll just finish with the last one here that I think is related to the spiritual path, which says that the most direct path to peace is through self connection. So through self connection to needs, one becomes, I think, increasingly empathetic and compassionate, which leads to a deepening and an expression of NVC, which leads to more peace.

Maria Arpa: Yeah, the first marriage is this one. The first marriage is the one between I and I, and if that one ain’t working, nothing else is going to work.

Lucas Perry: All right. So can we engage in an NVC exercise now as an example of the format, so moving from observations to feelings to needs to request?

Maria Arpa: Okay, so I think it would be really useful if you could tell me about a situation that’s on your mind right now. And Marshall Rosenberg would say, “Can you tell me in 40 words or less what’s alive in you right now?”

Lucas Perry: So let’s NVC about this morning, then? I was late, and I kept you waiting for me, and I also showed up to the Zoom call late because… I guess the because doesn’t matter. Unless that’s part of the chronology of what happened. But yeah, I showed up to the Zoom call late and then my microphone wasn’t working, so I had to get another microphone. And I feel… how do I feel? I feel bad that… I mean, bad isn’t the right word, though, right?

Maria Arpa: Mm-hmm (affirmative). But you just say it, just say it, and then we’ll go over it together.

Lucas Perry: Yeah, so I guess I regret and I feel bad that I wasn’t fully prepared, and then it took me 15, 20 minutes to get things started. And this probably relates to some need I have that the podcasts go well, but also that I not damage my relationship with guests, which relates to some kind of need about… I mean, this probably goes all the way down to something like a need for love or self-love, very quickly, it would seem. And yeah, it’s unclear that we’ll have another conversation like this anytime soon, so it’s unclear to me what the kinds of requests are, though maybe there’s some request for understanding and compassion for my failure to completely show up and arrive for the interview perfectly. How’s that for a start?

Maria Arpa: Yeah, it’s really very sweet, and so I’d love to just tell you some of what I’ve heard, first. So I heard you say that you’re feeling bad because you turned up late and unprepared for our interview, and that feeling bad or regretful is linked to some kind of need, at the end of the day you went there, and you said, “It’s probably a need for self-love.” And it’s hard to know what a request would be like, but I guess what I heard you say is you’d like to request some understanding.

Lucas Perry: Yeah, that’s right.

Maria Arpa: Before I respond to you, I would really love to just break down how you did observation, feeling, need, request, and then work with you a little on each of those. Would that be okay?

Lucas Perry: Yeah, that sounds good.

Maria Arpa: So the biggest judgment in your observation was the word late. It takes a bit of understanding, but who defines late?

Lucas Perry: Yeah, it started 15 minutes after the agreed upon time, is more of a concrete observation.

Maria Arpa: Well, a concrete observation actually, is that we got online at the time we agreed, and we didn’t start the interview until 15 minutes later.

Lucas Perry: Well, I was five minutes late to getting to the Zoom call.

Maria Arpa: Okay. Well, yeah, that word late again.

Lucas Perry: Sorry, I arrived five minutes…

Maria Arpa: After the agreed time.

Lucas Perry: Thank you.

Maria Arpa: Because late can be a huge weapon for self-punishment. So the observation is, you came on the call five minutes after the agreed time, and we didn’t begin the interview until 15 minutes after the agreed time. So unprepared, late, and all of those things, they’re what you’re telling yourself. It’s part of the story, because from my perspective, since the example you gave includes me in that narrative, we didn’t have an agreement about what to expect at 10:00 AM your time, 3:00 PM my time. So how do I know you were unprepared? I’ve never done this before with you.

Lucas Perry: Okay.

Maria Arpa: Does that make sense? Can you see that you put a huge amount of judgment into what you thought was your observation, when in actual fact, who knows?

Lucas Perry: Yeah, so introducing other observations now here, is the observation that, I believe I pushed this back twice.

Maria Arpa: Once.

Lucas Perry: I pushed this back once?

Maria Arpa: Mm-hmm (affirmative).

Lucas Perry: Okay, so I pushed this back once. And then there was also this long period where I did not get back to you about scheduling after we had an initial conversation. So the feelings are things that need to be deconstructed. They’re around messiness, or disorganization, or not doing a good job, which are super synthetic and evaluative, and would need to be super deconstructed, but that’s what I have right now.

Maria Arpa: So on some level, you didn’t meet your own standards.

Lucas Perry: No, I did not meet my own standards.

Maria Arpa: Right. So on some level, you didn’t meet your own standards, and that’s giving rise to a number of superficial feelings, like you’re feeling bad and guilty, and all of those things. And I can only guess, I’ve told myself, but perhaps you’re feeling regret, possibly some shame, and I don’t know why the word loneliness comes up for me. Isolation or loneliness, disconnection, something around, that you’ve screwed up, and now you have to sit in there and now there’s some shame and some regret and some embarrassment. Embarrassment, that’s it. I’m telling myself the feeling is embarrassment.

Lucas Perry: I’m trying to focus and see how accurate these are.

Maria Arpa: Yeah, and I could be completely wrong. I can only guess.

Lucas Perry: Yeah, I mean, you’re trying to guide me towards explaining my own feelings.

Maria Arpa: Yeah, so does anything resonate with embarrassment, or shame, or regret, or mourning?

Lucas Perry: Mostly regret. This kind of loneliness thing is interesting, because I think it’s related to the feeling of… If there was sufficient understanding and compassion and self-love, then these feelings wouldn’t arise because there would be understanding. And so the loneliness is born out of the self-rejection of the events as they transpired. And so there’s this need for wholeness, which is a kind of self-love and understanding and compassion and knowledge. It’s just an aligned state of being, and I’ve become unaligned by creating this evaluative story that’s full of judgment. Because all this could happen and I could feel totally fine, right? That roughly captures the feelings.

Maria Arpa: Okay. And then it really resonated for me when I heard you say that there’s this need for wholeness, and definitely for understanding and love, and a deep need for mutuality.

Lucas Perry: Yeah. There’s a sense that I can’t fully meet you when I haven’t fully accepted myself for what I have done. Or if there’s a kind of self consciousness around the events and how it has impacted you.

Maria Arpa: Yeah, so I’m telling myself that we made an agreement, and actually, part of it is a story that you’re telling yourself, and part of it has some reality in it, that you didn’t meet the terms of our agreement.

Lucas Perry: Yeah.

Maria Arpa: And then what that’s doing to you is, when you didn’t meet the terms of the agreement, I’m telling myself that now what happens to you is you worry, I think that’s the word. Ah, that’s it, maybe the feeling’s worry, or anxiety, that then any connection that we might have made is disconnecting or breaking, or we’re losing mutuality, because I may be now looking at you differently for not having met the terms of the agreement.

Lucas Perry: Yeah, and there’s also a level of self-rejection. And someone who is self-rejecting is in contradiction and cannot fully connect with someone else. If you find yourself unlovable or you do not love yourself, then it’s, like, impossible to love someone else. So I think there’s a sense also, then, that if you’re creating a bad narrative and that’s leading to a sense of self-rejection, then there’s an inability to fully be with the other person. So then I think that’s why you were pointing to the sense of loneliness, because this kind of self-rejection leads to isolation and an inability to fully meet the person as they are.

Maria Arpa: Yeah, so we went over the observation, we got to some of the feelings, we got to some of the needs, now do you have a request? And I think I heard you say you have a request for understanding.

Lucas Perry: Yeah, understanding and compassion, probably mostly from myself, but also from you.

Maria Arpa: Yeah, so I’m wondering if you’d like me to respond to that request now. Would that be helpful for you?

Lucas Perry: Yeah, sure.

Maria Arpa: So I guess when I hear your request for understanding and compassion, and that you’re also recognizing you need to give it to yourself, and that’s a relief for me that you know you need to give it to yourself, and yet on some level, we do have a situation over an agreement that was broken. I would love for you to be able to hear where I am. And I’m just wondering, would you be willing to hear where I am in that, to support you in your request?

Lucas Perry: Yeah, and probably some need for love in the sense of, I mean, there are different kinds of love. So whatever kind of love exists between you and I, as human beings that is coworker love, or colleague love, or whatever kind of relationship we have, I don’t know how to explain our relationship, but whatever kind of love is appropriate in that context.

Maria Arpa: So I guess where I’m coming from, is, I feel deeply privileged and honored to be asked to do this podcast. I’ve heard some of your other podcasts, and I think they’re masterpieces. So to be invited to do this at all and for us to have met and for you to have actually said, “Yeah, let’s go ahead and do this,” went a long way for me to believe in myself as well.

So you may be having your own moment of self-punishment, and so did I. “What happens if he doesn’t like me at the end of our interview, and doesn’t want to do it?” And in terms of our agreement, as far as I’m concerned, you got online roughly at the time we said, and I have no idea, we didn’t have a tacit agreement about when the interview would then start, and so in terms of being alongside you while you make preparations and whatever, actually, it helped me to see you as human. So it actually increased my love for you.

Lucas Perry: Oh, nice.

Maria Arpa: Because I saw in that first meeting, you were kind of interviewing me and seeing if there was suitability for a podcast. And of course, I know I know my stuff, right. That’s not the issue. But there was a sense of me wanting to be on best behavior. But now I come to this call, and there you were just being human and expressing it, and I was able to say a few things to you. And I felt that 15 minutes was very connecting, it was very connecting for me. And so I just wonder when you hear that, does it change anything for you?

Lucas Perry: Yeah, I feel much better. I feel more capacity to connect with you, and I appreciate the honesty about the transition in how you felt with regards to the first time we talked, where, because I couldn’t find any of your content online, I didn’t really know anything about you, so we had this first conversation where you felt almost as if there was a kind of evaluative relationship going on, which was then, I guess, dissolved by having this conversation in particular, but also by the beginning of the podcast where I was being human and my microphone wasn’t working, and my electricity was out this morning and things weren’t working out. So yeah, I appreciate that. I feel much better and warmer, with more capacity for self-love and connection. So thanks.

Maria Arpa: Yeah.

Lucas Perry: I think that means the NVC was successful.

Maria Arpa: Yeah. And then just to add one thing, during the interview, I heard you say something like, “I don’t know what my requests would be because the opportunity for us to connect again like this,” or whatever, you said something about that we probably wouldn’t speak to each other again in a hurry. I actually felt really sad when I heard that. I felt such sadness, “Oh, no, I’ve connected with Lucas now.” So I hope that there’ll be other opportunities to just chat or stay in touch or whatever, because there’s something about you that I feel really resonates. And I love where you’re coming from, and I love what you’re trying to do. It’s really important.

Lucas Perry: Well, thank you, that’s really sweet. I appreciate it.

Maria Arpa: Thank you. And look at that, we’re bang on time.

Lucas Perry: So that means you escaped question eight, which is my criticisms of NVC.

Maria Arpa: We could come back and add that on another time, but I can’t do it now. If you want to do another bit, we can do another bit. I’m really happy to do that.

Lucas Perry: Yeah. Great. So as we wrap up, if people want to follow you or to learn more about NVC, or the Center for Nonviolent Communication, where are the best places to follow you or get more information or check out more of Marshall’s work?

Maria Arpa: So obviously, CNVC has a website, which is cnvc.org, the Center for Nonviolent Communication. That’s your first port of call. Marshall’s books are published by PuddleDancer Press. I know Meiji reasonably well, and he’s a really wonderful guy, so buy some books from PuddleDancer Press, because Marshall’s books are amazing. There are 700 NVC trainers across the world, and you can find those on the website if you go to the right bit and search, so you can find someone local in your area; they all work differently and specialize in different things. If you put NVC into Facebook, you will find countless NVC pages. And if you’re looking for me, Google my name, Maria Arpa, and I will come up. Thank you.

Lucas Perry: All right. Thanks, Maria.

Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

 Topics discussed in this episode include:

  • The projects of awakening and growing the wisdom with which to manage technologies
  • What might be possible from embarking on the project of waking up
  • Facets of human nature that contribute to existential risk
  • The dangers of the problem solving mindset
  • Improving the effective altruism and existential risk communities

 

Timestamps: 

0:00 Intro

3:40 Albert Einstein and the quest for awakening

8:45 Non-self, emptiness, and non-duality

25:48 Stephen’s conception of awakening, and making the wise more powerful vs the powerful more wise

33:32 The importance of insight

49:45 The present moment, creativity, and suffering/pain/dukkha

58:44 Stephen’s article, Embracing Extinction

1:04:48 The dangers of the problem solving mindset

1:26:12 Improving the effective altruism and existential risk communities

1:37:30 Where to find and follow Stephen

 

Citations:

Stephen’s website

Stephen’s teachings and courses

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, we have a special episode for you with Stephen Batchelor. Stephen is a secular and skeptical Buddhist teacher and practitioner with many years under his belt in a variety of different Buddhist traditions. You’ve probably heard often on this podcast about the dynamics of the race between the power of our technology and the wisdom with which we manage it. This podcast is primarily centered around the wisdom portion of this dynamic and how we might cultivate wisdom, and how that relates to the growing power of our technology. Stephen and I get into discussing the cultivation of wisdom, what awakening might entail or look like, and also his views on embracing existential risk and existential threats. As for a little bit more background, we can think of ourselves as contextualized in a world of existential threats that are primarily created due to the kinds of minds that people have and how we behave. Particularly how we decide to use industry and technology and science and the kinds of incentives and dynamics that are born of that. And so cultivating wisdom here in this conversation is seeking to try and understand how we might better gain insight into and grow beyond the worst parts of human nature. Things like hate, greed, and delusion, which motivate and help to cultivate the manifestation of existential risks. The flipside of understanding the ways in which hate, greed, and delusion motivate and lead to the manifestation of existential risk is also uncovering and being interested in the project of human awakening and developing into our full potential. So, this just means that whatever idealized kind of version you think you might want to be or that you might strive to be, there is a path to getting there, and this podcast is primarily interested in that path and how that path relates to living in a world of existential threat and how we might relate to existential risk and its mitigation. This podcast contains a bit of Buddhist jargon. I do my best in this podcast to define the words to the best of my ability. I’m not an expert, but I think that these definitions will help to bring a bit of context and understanding to some of the conversation. 

Stephen Batchelor is a contemporary Buddhist teacher and writer, best known for his secular or agnostic approach to Buddhism. Stephen considers Buddhism to be a constantly evolving culture of awakening rather than a religious system based on immutable dogmas and beliefs. Through his writings, translations and teaching, Stephen engages in a critical exploration of Buddhism’s role in the modern world, which has earned him both condemnation as a heretic and praise as a reformer. And with that, let’s get into our conversation with Stephen Batchelor. 

Thanks again so much for coming on. I’ve been really excited and looking forward to this conversation. I just wanted to start it off here with a quote by Albert Einstein that I thought would set the mood and the context. “A human being is a part of the whole called by us universe, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest, a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures, and the whole of nature and its beauty. Nobody is able to achieve this completely. But the striving for such achievement is in itself a part of the liberation and a foundation of inner security.”

This quote to me is compelling because, one, it comes from someone who is celebrated as one of the greatest scientists that have ever lived. In that sense, it’s a calling for the spiritual journey, it seems, from someone like that, for people who are skeptical of something like the project of awakening or whatever a secular Dharma might be or look like. I think it sets up the project well. I mean, he talks about here how this idea of separation is a kind of optical delusion of his consciousness. He sets it up as the problem of trying to arrive at experiential truth and this project of self-improvement. It’s in the spirit of this, I think, of seeking to become and live an engaged and fulfilled life, that I am interested and motivated in having this conversation with you.

With that in mind, the problem, it seems, that we have currently in the 21st century is what Max Tegmark and others have called the race between the power of our technology and the wisdom with which we manage it. I’m basically interested in discussing and exploring how to grow wisdom and about how to grow into and develop full human potential so that we can manage powerful things like technology.

Stephen Batchelor: I love the quote. I think I’ve heard it before. I’ve come across a number of similar statements that Einstein has made over the years of his life. I’