Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation.
Topics discussed in this episode include:
- How Big Tobacco used its wealth to obfuscate the harm of tobacco and appear socially responsible
- The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
- How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
- How to combat the problem of ethics-washing in Big Tech
1:55 How Big Tech actively distorts the academic landscape and what counts as big tech
6:00 How Big Tobacco has shaped industry research on the health effects of tobacco
12:17 The four tactics of Big Tobacco and Big Tech
13:34 Big Tech and Big Tobacco working to appear socially responsible
22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities
32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists
51:53 Big Tech and Big Tobacco finding and funding their skeptics and critics to give the impression of social responsibility
1:00:24 Big Tech and being authentically socially responsible
1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems
1:16:56 Ethics-washing as systemic
1:17:30 Action items for combating ethics-washing
1:19:42 Has Mohamed received criticism for this paper?
1:20:07 Final thoughts from Mohamed
We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.
Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Mohamed Abdalla on his paper The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. We explore how big tobacco has used and still uses its wealth and influence to obfuscate the harm of tobacco by funding certain kinds of research, conferences, and organizations, as well as influencing scientists, all to shape public opinion in order to avoid regulation and maximize profits. Mohamed explores in his paper and in this podcast how big technology companies engage in many of the same behaviors and tactics as big tobacco in order to protect their bottom line and appear to be socially responsible.
Some of the opinions presented in the podcast may be controversial or inflammatory to some of our normal audience. The Future of Life Institute supports hearing a wide range of perspectives without taking a formal stance on them as an institution. If you’re interested to know more about FLI’s work in AI policy, you can head over to our policy page on our website at futureoflife.org/ai-policy, link in the description.
Mohamed Abdalla is a PhD student in the Natural Language Processing Group in the Department of Computer Science at the University of Toronto and a Vanier scholar, advised by Professor Frank Rudzicz and Professor Graeme Hirst. He holds affiliations with the Vector Institute for Artificial Intelligence, the Centre for Ethics, and ICES, formerly known as the Institute for Clinical and Evaluative Sciences.
And with that, let’s get into our conversation with Mohamed Abdalla.
So we’re here today to discuss a recent paper of yours titled The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. To start things off here, I’m curious if you could paint in broad brush strokes how you view big tech as actively distorting the academic landscape to suit its needs, and how these efforts are modeled and similar to what big tobacco has done. And if you could also expand on what you mean by big tech, I think that would also be helpful for setting up the conversation.
Mohamed Abdalla: Yeah. So let’s define what big tech is. I think that’s the easiest of what we’re going to tackle. Although in itself, it’s actually not a very easy label to pin down. It’s unclear what makes a company big and what makes a company tech. So for example, is Yahoo still a big company, or would it count as big tech? Is Disney a tech company? Because they clearly have a lot of technical capabilities, but I think most people would not consider them to be big tech. So what we did was we basically had a lot of conversations with a lot of researchers in our department, and we asked for a list of companies that they viewed as big tech. And we ended up with a list of 14 companies. Most of them, we believe, will be agreeable: Google, Facebook, Microsoft, Apple, Amazon, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. This is a very restrictive set of what we believe the big tech companies are. For example, a clear missing one here is Oracle. There are a lot of other big companies that are missing, but we’re not presenting this as a prescriptive or definitive list of what the big tech companies are.
But adding more companies to this list would only help us strengthen the conclusions we’ve drawn in our paper because they will show how much more influence these companies have. So by limiting it to a small group, we’re actually taking a pessimistic view on the maximum amount of influence that they have. That’s what we define as big tech.
Then the question comes, what do we mean by they have an outsized influence or how they go about influencing policy? And we will get into specific examples here, but I think the best way of demonstrating why there should be cause for concern is through a very simple analogy.
So imagine if there was a health policy conference, which had tens of thousands of researchers. And among the topics they discussed was how to deal with the negative effects of increased tobacco usage. And its largest funding bodies were all big tobacco companies. Would this be socially acceptable? Would the field of health policy accept this? No. In fact, there are guidelines such as Article 5.3 of the World Health Organization’s Framework Convention on Tobacco Control, which states that if you are developing public health policies with respect to tobacco control, you are required to act to protect these policies from commercial and other vested interests of the tobacco industry. So they are aware of the fact that industrial funding has a very large negative effect on the types of research, the types of conclusions, and how strong the conclusions are that can be drawn from research.
But if we flip that around. So instead of health policy, you have machine learning policy or AI policy. Instead of big tobacco, you replace it with big tech. And instead of the negative effects of increased tobacco usage, the ethical concerns of increased AI deployment. Would this be accepted? And this is not even a hypothetical, because all of the big machine learning conferences count these big tech companies among their top funding bodies. If you look at NeurIPS, or you look at FAccT, the Fairness, Accountability, and Transparency Conference, their platinum sponsors or gold sponsors, whatever their highest level is depending on the conference, are all of these companies. Even if one wants to say it’s okay, because these companies are not the same as big tobacco, this should be justified. There is no justification for why we allow big tech to have such influence. I haven’t proven that this influence exists yet in my speaking so far. But there is precedent to believe that industrial funding warps research. And there’s been no critical thought about whether or not big tech, as computer science’s industrial funding, warps research. And I argue in the paper that it does.
Lucas Perry: All right. So with regards to understanding how industry involvement in the research and development of a field can have a lot of influence, you take big tobacco as a historical example, learning from the strategies and the playbook of a historical industry to see if big tech might be doing the same things. So can you explain, in the broadest brush strokes, how big tobacco became involved in shaping industry research on the health effects of tobacco, and how big tech is using all or most of the same moves to shape the research landscape so that big tech has a public image of trust and accountability when that may not be the case?
Mohamed Abdalla: The history is that shortly after World War II, in the mid-1950s, big tobacco saw a pronounced decrease in demand for its product. What they believed caused, or at least in part caused, this drop in demand was a Reader’s Digest article that was published called Cancer by the Carton. And it discussed the scientific links between smoking and lung cancer.
It was later revealed after litigation that big tobacco actually knew about these links, but they also knew that admitting it would result in increased legislation and decreased profits. So they didn’t want to publicly agree with the conclusions of the article, despite having internal research which showed that this indeed was the case.
After this article was published, it was read by a lot of people, and people were getting scared about the health effects of smoking. They employed a PR firm. And their first strategy was to publish a full-page ad in the New York Times that was seen by, I think, approximately 43 million people or so, which is a large percentage of the population. And they would go on to state, and I quote, “We accept an interest in people’s health as a basic responsibility, paramount to every other consideration in our business.” So despite having internal research that showed that these links were conclusive, in their full-page ad, not only did they state that they believed these links were not conclusive, they also lied and said that people’s health was paramount to every other consideration, including profit. So clear, blatant lies, but dressed up really nicely. And it reads really well.
Another action that they were instructed to take by the PR firm was to fund academic research that would not only draw questions about the conclusiveness of these links, but also sort of add noise, cause controversy, and slow down legislation. And I’ll go into the points specifically. But the idea to fund academia was actually the PR firm’s idea. And it was under the instruction of the PR firm that they funded academia. And despite their publicly stated goal of funding independent research because they wanted the truth and they cared about health, it was revealed, again after litigation, that internal documents existed that showed that their true purpose was to sow doubt into the research that showed conclusive links between smoking and cancer.
Lucas Perry: Okay. And why would an industry want to do this? Why would they want to lie about the health implications of their products?
Mohamed Abdalla: Well, because there’s a profit motive behind every company’s actions. And that is basically unchanged to this day. While they may say a lot of nice-sounding stuff, it’s just a simple fact of life that the strongest driving factor behind any company’s decisions is the profit motive. Especially if they’re publicly traded, they have a legal obligation to their shareholders to maximize profit. It’s not like they’re evil per se. They’re just working within the system that they’re in.
We see sort of the exact same thing with big tech. People can argue about when the decline of opinion regarding these big tech companies started. Especially if you’re American centric. Since I’m in Canada, you’re in the States. I think that the Cambridge Analytica scandal with Facebook can be seen as sort of a highlight, although Google has its own thing with Project Maven or Project Dragonfly.
And the Pew Research Center shows that the number of people who view big tech as having a net positive social change in the world started decreasing around the mid-2010s. And I’m only going to quote Facebook here, well, Mark Zuckerberg specifically, in his testimony to Congress. But in Congress, he would state, “It’s clear that we didn’t do enough and we didn’t focus enough on preventing abuse.” And he stressed that he admitted fault and that they would double down on preventing abuse, and that this was simply that they didn’t think about how people could do harm. And again, this statement did not mention the leaked internal emails, which stated that they were aware of companies breaking their policies. And they explicitly knew that Cambridge Analytica was breaking their scraping policies and chose to do nothing about it.
Even recently, there have been leaks. So BuzzFeed News leaked Sophie Zhang’s resignation letter, which basically stated that unless they thought they were going to catch flak in terms of PR, they would not act to moderate a lot of the negative events that were happening on their platforms.
So this is a clear profit incentive thing, and there’s no reason to think that these companies are different. So then the question is how much of their funding of AI ethics or AI ethicists is driven by a benevolent desire to see goodness in the world? And I’m sure there are people that work there that have this sort of desire. But these are the four reasons for which a company can fund academia, and the four criticisms you can make of such funding: how much of it is used to reinvent yourself as socially responsible, influence the events and decisions made by funded universities, influence the research questions of individual scientists, and discover receptive academics?
So while we can assume that some people there may actually have the social good in mind and may want to improve society, we need to also consider that these were the reasons that big tobacco funded academia. And we need to check: does big tech also fund academia? And are the effects of their funding academia the same as the effects of big tobacco funding academia? And if so, we as academics, or the public body, or government, or whoever, need to take steps to minimize the undue influence.
Lucas Perry: So I do want to spend the main body of this conversation on how big tobacco and big tech engage in these four points that you just illustrated. One, reinventing themselves in the public image to be seen as socially responsible, even when they may not actually be trying to be socially responsible. It’s about the image and the perception. Two, trying to influence the events and decisions made by funded universities. And three, influencing the research questions and plans of individual scientists. This helps funnel research into areas that will make you look benevolent or socially responsible. Or you can funnel people away from topics that will lead to regulation.
And then the last one is to discover receptive academics who can be leveraged. If you’re in the oil industry and you can find a few scientists who have some degree of reputability and are willing to cast doubt on the science of climate change, then you’ve found some pretty good allies in your fight for your industry.
Mohamed Abdalla: Yep, exactly.
Lucas Perry: So before we jump into that, do you want to go through each of these points and say what big tobacco did in each of them and what big tech did in each of them? Or do you want to just start by saying everything that big tobacco did?
Mohamed Abdalla: Since there’s four points and there’s a lot of evidence for each of these points, I think it’s probably better to do for the first point, here’s what big tobacco did. And then here’s what big tech did.
Lucas Perry: Let’s go ahead and start then with this first point. From the perspective of big tobacco and big tech, what have they done to reinvent themselves in the public image as socially responsible? And again, you can just briefly touch on why it’s their incentive to do that, and not actually be socially responsible.
Mohamed Abdalla: So the benefit of framing yourself as socially responsible, without having to actually take any actions to become socially responsible or to earn that label, is basically increased consumer confidence, increased consumer counts, and a decreased chance of legislation. If the general public, and thereby the average politician, believes that you are socially responsible and that you do care about the public good, you are less likely to be regulated as an industry. So that’s a big driving factor for trying to appear socially responsible. And we actually see this in both industries. I’ll cover it later, but a lot of the stuff that’s been leaked basically shows that, spoiler, a lot of the AI research being done, especially in AI ethics, is seen as a way to either delay or prevent the legislation of AI. Because they’re afraid that it will eat into their profit, which is against their profit motive, which is why they do a lot of the stuff that they do.
So first, I’ll go over what big tobacco did. And then we’ll try to draw parallels to what big tech did. To appear socially responsible, it was suggested by their PR firm, Hill+Knowlton Strategies, that they fund academics and create research centers. The biggest one that they created was the CTR, the Council for Tobacco Research. And when they created the CTR, they created it in a very academically appealing way. What I mean by that is that the CTR was advised by distinguished scientists who served on its scientific advisory boards. They went out of their way to recruit these scientists so that the research center gained academic respectability and was trusted not only by the lay person, but by academics in general. That they were a respectable organization despite being funded by big tobacco.
And then what they would do is fund research questions. They’d act essentially as a pseudo granting body and provide grants to researchers who were working on specific questions decided by this council. At surface level, it seems okay. I’m not 100% sure how it works in the States, but at least in Canada we have research funding bodies. So we have NSERC, the Natural Sciences and Engineering Research Council, and CIHR, the Canadian Institutes of Health Research, which decide who gets grants from the government’s research money. And we have them for all the different fields. And in theory, the grants should be awarded in terms of validity of research, potential impact. A lot of the academically relevant considerations.
But what ended up being shown, after litigation again, was that at least in the case of big tobacco, there were more lawyers than scientists involved in the distribution of money. And the lawyers were aware of what would and would not likely hurt the bottom line of these companies. So, quoting previous work, their internal documents showed that they would simply refuse to fund any proposal that acknowledged that nicotine was addictive or that smoking was dangerous.
They basically went out of their way to fund research that was sort of unrelated to tobacco use, so that they got good PR while minimizing the risk that said research would harm their profit motive. And during any sort of litigation, for example during a cigarette product liability trial, the lawyers presented a list of all the universities and medical schools supported by this Council for Tobacco Research as proof that they cared about social responsibility and people’s wellbeing. They used this money as proof.
Basically, at first glance, all of their external-facing actions made it seem that they cared about the wellbeing of people. But it was later revealed through internal documents that this was not the case. And this was basically a very calculated move to prevent legislation, beat litigation, and serve other self-serving goals in order to maximize profit.
In big tech, we see similar things happening. In 2016, the Partnership on AI to Benefit People and Society was established to, “Study and formulate best practices on AI technologies and to study AI and its influences on people and society.” Again, a seemingly very virtuous goal. And a lot of people signed up for this. A lot of non-profit organizations, academic bodies, and a lot of industries signed up for this. But it was later leaked that despite sounding rosy, the reality on the ground was a little bit darker. So reports from those involved, in a piece published at The Intercept, demonstrated how neither prestigious academic institutions such as MIT, nor civil liberty organizations like the ACLU, had much power in the direction of the partnership. So they ended up serving a legitimating function for big tech’s goals. Basically, railroading other institutions while having their brand on your work helps you appear socially responsible. But if you don’t actually give them power, it’s only the appearance of social responsibility that you’re getting. You’re not actually being forced to be socially responsible.
There are other examples relating to litigation specifically. During his testimony to Congress, Mark Zuckerberg stated that in order to tackle these problems, they’ll work with independent academics. And these independent academics would be given oversight over their company. It’s unclear how an academic that is chosen by Facebook, theoretically compensated by Facebook, and could be fired by Facebook would be independent of Facebook after being chosen, receiving compensation, and knowing that they can lose that compensation if they do something to anger Facebook.
Another example, almost word for word from big tobacco’s showing off to jurors, is that Google boasts that it releases more than X research papers on topics in responsible AI in a year to demonstrate social responsibility. This is despite arm’s length involvement with military-minded startups. So to build on that, Alphabet, Google’s parent company, faced a lot of internal backlash over Project Maven, which basically involved working on image recognition algorithms for drones. They faced a lot of backlash. So publicly, they appeared to have stopped. They promised to stop working with the military. However, internally, Gradient Ventures, which is basically the venture capital arm of Alphabet, still funds, provides researchers to, and provides data to military startups. So despite their promise not to work with the military, despite their research in responsible AI, they still work in areas that don’t necessarily fit the label of being socially responsible.
Lucas Perry: It seems there’s also this dynamic here where in both tobacco and in tech, it’s cheaper to pretend to be socially responsible than to actually be socially responsible. In the case of big tobacco, that would’ve actually meant dismantling the entire industry and maybe bringing on e-cigarettes 10 years before that actually happened. Yet in the case of big tech, it would seem to mean hampering short-term profit margins and putting a halt to recommender algorithms and systems that are already deployed, that are having a dubious effect on American democracy and the wellbeing of the tens of millions of human brains that are getting fed garbage content by these algorithms.
So this first point seems pretty clear to me. I’m not sure if you have anything else that you’d like to add here. Public perception is just important. And if you can get policymakers and government to also think that you’re playing socially responsible, they’ll hold off on regulating you in the ways that you don’t want to be regulated.
Mohamed Abdalla: Yeah, that’s exactly it. Yeah.
Lucas Perry: Are these moves that are made by big tobacco and big tech also reflected in other contentious industries like in the oil industry or other greenhouse gas emitting energy industries? Are they making generally the same type of moves?
Mohamed Abdalla: Yeah, 100%. So this is simply industry’s reaction to any sort of possible legislation. Whether it’s big tobacco and smoking legislation, big tech and some sort of legislation on AI, oil companies and legislation on greenhouse gas emissions or clean energy, and so on and so forth. Even a lot of the food industry. I’m not sure what the proper term for it is, but a lot of nutritional science research is heavily corrupted by funding, whether from Kellogg’s, or the meat industry, or the dairy industry. So that’s what industry does. They have a profit motive, and this is a profitable action to take. So it’s everywhere.
Lucas Perry: Yeah. So I mean, when the truth isn’t in your favor, and your incentive is profit, then obfuscating the truth is your goal.
Mohamed Abdalla: Exactly.
Lucas Perry: All right. So moving on to the second point then, how is it that big tobacco and big tech in your words, work to influence the events and decisions made by funded universities? And why does influencing the decisions made by funded universities even matter for large industries like big tobacco and big tech?
Mohamed Abdalla: So there are multiple reasons to influence events. Influencing events can also mean a variety of actions: you could hold events, you could stop holding events, or you could change how the events being held operate. By events here, at least in the academic sense, I mean conferences. And although they’re not always necessarily funded by universities, they are academic events. So why would you want to do this? Let’s talk about big tobacco first and show by example what they gained from doing this.
First, I’ll just go over some examples. So at my home university, the University of Toronto, Imperial Tobacco, which is one of the companies that belongs in big tobacco, withheld its funding from U of T’s Faculty of Law conference as retribution for the fact that U of T law students were influential in having criminal charges laid against Shoppers Drug Mart for selling tobacco to a minor. As one of their spokespersons said, they were biting the hand that feeds them. If university events such as this annual U of T law conference rely on funding from industry in general, then industry has an outsized say in what you as an institution will do, or what people working for you can and cannot do. Because you’ll be scared of losing that consistent money.
Lucas Perry: I see. So you feed as many people as you can. Knowing that if they ever bite you, the retraction of your money or what you’re feeding them is an incentive for them to not hurt you?
Mohamed Abdalla: Exactly. But it’s not even the retraction, it’s the threat of retraction. If in the back of your mind 50% of your funding comes from whatever industry, can you afford to live without 50% of your funding? Most people would say no, and that causes worry and will cause you to self-censor. And that’s not a good thing in academia.
Lucas Perry: And replacing that funding is not always easy?
Mohamed Abdalla: It’s very difficult.
Lucas Perry: So not only are you getting the public image of being socially responsible by investing in some institutions or conferences, which appear to be socially responsible or which contain socially responsible workshops and portions. But then you also have in the back of the mind of the board that organizes these conferences, the worry and the knowledge that, “We can’t take a position on this because we’d lose our funding. Or this is too spicy, so we know we can’t take a position on this.” So you’re getting both of these benefits. The constraining of what they may do within the context of what may be deemed ethically and socially responsible, which may be to go against your industry in some strong way. But you also gain the appearance of just flat out being socially responsible while suppressing what would be socially responsible free discourse.
Mohamed Abdalla: 100%. And to build on that a little bit, since we brought up boards, the people that decide what happens. An easier way of influencing what happens is to actually plant or recruit friendly actors within academia. There is a history, at least at my home university again, where a former president and dean of law of U of T was the director of a big tobacco company. And someone on the board of Women’s College Hospital, which is a teaching hospital affiliated with my university, was the president and chief spokesperson for the Canadian Tobacco Manufacturers’ Council. So although there is no proof that they necessarily went out of their way to change the events held by the university, if a large percentage of your net worth is held in tobacco stocks, even if you’re a good human being, just because you’re a human being, you will have some sort of incentive to not hurt your own wellbeing. And that can influence the events that your university holds, the types of speakers that you invite, the types of stances that you allow your university to take.
Lucas Perry: So you talked about how big tobacco was doing this at your home university. How do you believe that big tech is engaging in this?
Mohamed Abdalla: The first thing that we’ll cover is the funding of large machine learning and AI conferences, in addition to academic innovation or whatever reason they may say they’re funding these conferences for. And I believe that a large portion of this is because of academic innovation. You can see that the amount of funding that they provide also helps give them a say, or at least a place in the back of the organizers’ minds. NeurIPS, which is the biggest machine learning AI conference, has always had at least two big tech sponsors at the highest tier of funding since 2015. And in recent years, the number of big tech companies has exceeded five. This also carries over to workshops, where over the past five years, only a single ethics-related workshop did not have at least one organizer belonging to big tech. And that was 2018’s robust AI in financial services workshop, which instead featured the heads of AI branches at big banks, which is not necessarily better. It’s not to say that those working in these companies should not have any say. But to have no venue that doesn’t rely on big tech in some sort of way, or is not influenced by big tech in some sort of way, is worrying.
Lucas Perry: Or whose existence is tied up in the incentives of big tech. Because whatever big tech’s incentives are, that’s generating the profit which is funding you. So you’re protecting that whole system when you accept money from it. And then your incentives become aligned with the incentives of the company that is suppressing socially responsible work.
Mohamed Abdalla: Yeah. Fully agree. In the next section where we talk about the individual researchers, I’ll go more into this. There’s a very reasonable framing of this issue where big tech isn’t purposely doing this. And industry is not purposely being influenced, but the influence is taking place. But basically exactly as you said. Even if these companies are not making any explicit demands or requests from the conference organizers, it’s only human nature to assume that these organizers would be worried or uncomfortable doing anything that would hurt their sponsors. And this type of self-censorship is worrying.
The example that I just showed was for NeurIPS, which is largely a technical conference. So Google does have incentive to fund technical research, because a really good optimization algorithm will help their industry, their work, their products. But even when it comes to conferences that are not technical in their goal, so for example the FAccT conference, the Fairness, Accountability, and Transparency Conference, there has never been a year without big tech funding at the highest level. Google has sponsored three out of three years, Microsoft two out of three years, and Facebook two out of three years.
FAccT has a statement regarding sponsorship and financial support where they say that you have to disclose this funding. But it’s unclear how disclosure alone helps combat direct and indirect industrial pressures. A reaction that I often get is basically that those who are involved are very careful to disclose the potential conflicts of interest. But that is not a critical understanding of how the conflict of interest actually works. Disclosing a conflict of interest is not a solution. It’s just simply highlighting the fact that a problem exists.
In the public health sphere, researchers push for resources to be devoted to sequestration, which is the elimination of relationships between commercial industry and professionals in all cases where it's remotely feasible. That policy recognizes that simply disclosing is not actually debiasing yourself.
Lucas Perry: That’s right. So you said that succinctly, disclosing is not debiasing. Another way of saying that is it’s basically just saying, “Hey, my incentives are misaligned here.” Full-stop.
Mohamed Abdalla: Exactly.
Lucas Perry: And then like okay, everyone knows that now, and that’s better than not. But your incentives are still misaligned towards the general good.
Mohamed Abdalla: Yeah, exactly. And it's unclear why we think that AI ethicists are different from other forms of ethicists in their incorruptibility. Or maybe it's the view that we would be able to post-hoc adjust for the biases of researchers, but that's simply unfounded by the research. So yeah, one of the ways that they influence these events is simply by funding the events and funding the people organizing them.
But there’s also examples where some companies in big tech knowingly are manipulating the events. And I’ll quote here from my paper. “As part of a campaign by Google executives to shift the antitrust conversation, Google sponsored and planned a conference to influence policy makers going so far as to invite a token Google critic capable of giving some semblance of balance.” So it’s clear that these executives know what they’re doing, and they know that by influencing events, they will influence policy, which will influence legislation, and in turn litigation. So it’s clear from the leaks that are happening that this is not simply needless worrying, but this is an active goal of industry in order to maximize their profit.
There is some work on big tobacco that has not been done on big tech. I don't think it can be done on big tech, and I'll speak about why. But basically, when it comes to influencing events, there is research showing that events sponsored by big tobacco, such as symposiums or workshops about secondhand smoking, are not only skewed but also poorer quality compared to events not sponsored by big tobacco. Whether the influence is subconscious or conscious, the causation might not be perfectly clear, but the results are. And they show that if you're funded by big tobacco, your research about the effects of secondhand smoking, or the effects of smoking, is more skewed and poorer quality.
We can’t do this sort of research in big tech because there isn’t an event that isn’t sponsored by big tech. So we can’t even do this sort of research. And that should be worrying. If we know in other fields that sponsorship leads to lower quality of work, why are we not trying to have them divest from funding events directly anyway?
Lucas Perry: All right. Yeah. So this very clearly links up with the first example you gave at the beginning of our conversation about imagine having a health care conference and all of the main investors are big cigarette companies. Wouldn’t we have a problem with that?
So these large industries, which are having detrimental effects on society and civilization, first have the incentive to portray a public image of social responsibility without actually being socially responsible. And then they also have an incentive to influence events and decisions made by funded universities so as to, one, align the incentives of those universities and events with their own, because their funding is dependent on these industries. And two, to therefore constrain what can and may be said at these conferences in order to protect that funding. So the next one here is: how is it that big tobacco and big tech influence the research questions and plans of individual scientists?
Mohamed Abdalla: So in the case of big tobacco, we know, especially from leaked documents, that they actively sought to fund research that placed the blame for lung cancer on anything other than smoking. So there's the classic example claiming that owning a bird is more likely to increase your chance of getting lung cancer, and that's the reason why you got lung cancer instead of smoking.
Lucas Perry: Yeah, I thought that was hilarious. They were like, “Maybe it’s pets. I think it’s pets that are creating cancer.”
Mohamed Abdalla: Yeah, exactly. And when they choose to fund this research question, not only do they get the positive PR, but they also get the ability to say that the science is not conclusive. "Because look, here are these academics in your universities who think that it might not be smoking causing the cancer. So let's hold off on litigation until this type of research is done." So with this steering of funds, instead of exploring the effects of tobacco on lung cancer, researchers would study just the basic science of cancer instead. And this would limit the amount of negative PR that the companies get. So that's one reason for doing it. But number two is that it allows them to sow doubt and say that there's confusion, or that we haven't arrived at some sort of consensus.
So that's one of the ways that they did it: finding researchers who they termed critics or skeptics. And they would fund them and amplify their voices. And they had a specific pot of money set aside for certain people, especially if they were smokers. So they actively sought out people who were smokers, because they felt that they'd be more sympathetic to these companies. They would purposely steer funds towards these people, and that would change the research sphere.
There are also very egregious actions that they took. For example, Professor Stanton Glantz. He's at UCSF I think, the University of California, San Francisco. They would take out ads against him in the newspapers where they would print lies purporting to point out flaws in his studies. And these flaws aren't really flaws; it's just a twisting of the truth. It's basically: if you go against us, we're going to attack you. We're going to make it very hard for you to get further funding. You're going to have a lot of bad PR. It's disincentivizing anyone else from doing critical research against them.
They would work with elected politicians as well to block funding of scientists with opposing viewpoints. So it's not like they didn't have their fingers in government as well. During litigation, an email was uncovered saying that the HHS, which is the U.S. Department of Health and Human Services, appropriations continuing resolution would include language to prohibit funding for Glantz, Stanton Glantz, the same scientist. So it's clear that through intimidation, but also through acting as a funding body, they're able to change what researchers work on.
Big tech works in essentially the same way. The first thing to note here is that when it comes to ethical AI or AI ethics, big tech in general has a very specific conception of what it means for an algorithm to be ethical, perhaps inspired by their insular culture, where it's a very echo-y place where everyone sort of agrees with each other, a sort of agreed-upon culture. There is previous work, "Owning Ethics," that discusses how there are three main values of Silicon Valley that factor into defining AI ethics: meritocracy, trust in the market, and I forgot the last one. And basically, their definition is simply different from the one that the rest of the world, or the rest of the country, generally has.
So Silicon Valley has a very specific view of what AI ethics is or should be. And that is not necessarily shared by everyone outside of Silicon Valley. That is not in itself necessarily a bad thing. But when they act as a pseudo granting body, that is, when they provide grants or money for researchers, it becomes an issue. Because if, for example, you are a professor, one of your biggest roles is simply to bring in money to do research. And your research question may not agree with the underlying logics of Silicon Valley's funding bodies, whoever makes these decisions.
Lucas Perry: Like if you question the assumption about trust in the market?
Mohamed Abdalla: Yeah, exactly. Or you question meritocracy, and whether that's how we should be basing our societal values. Even if the people granting the money are not lawyers like they were in big tobacco, even if they're research scientists, the fact that they're at a high enough level to be choosing who gets money likely means that they've been there for a while. And there's an increased chance that they personally believe or agree with the views that their companies hold. Not necessarily always the case, but the probability is a lot higher.
So if you believe in what your company believes in, and there is a researcher who is working from a totally different set of foundations, whose assumptions do not match your assumptions, then by human nature you're less likely to believe in this research. You're less likely to believe that it's successful or on the true path. So you're less likely to fund it. And that requires no malicious intent on your side. It's just that you are part of an industry that has a specific view. And if a researcher does not share this view, you're not going to give them that money.
And you switch it over to the researcher side. If I want to get tenure, I’m not a professor. So I can’t even get tenure. But if I want to get hired, I have to show that I can bring in money. If I’m a professor and I want to get tenure, I have to show that I can bring in even more money. And if I see that these companies are giving away vast sums of money, it is in my best interest to ask a research question that will be funded by them. Because what good am I if I get no money and I don’t get hired? Or what good am I if I get no money and I don’t get tenure?
So what will end up happening there is that it’s a cyclical thing where researchers view that these companies fund specific types of research or researchers that are based in fundamental assumptions that they may not necessarily agree with. And in order to maximize their opportunities to get this money, they will change their research question. Whether it’s complete, or slight adjustment, or changing the assumptions to match what will get them the money. And the cycle will just keep happening until there’s no difference.
Lucas Perry: So there’s less opportunity for researchers and institutions that fundamentally disagree with some axiom that these industries hold over ethics and accountability, or whatever else?
Mohamed Abdalla: Exactly. 100%. And the important thing to note here is that for this to happen, no one needs to be acting maliciously. The people in big tech probably believe in what they're pushing for. At least I like to make this assumption. And I think it makes for the easiest sell, especially for those within the computer science department, because there's a lot of pushback to this type of thought. Even if the people deciding who gets the money are completely disinterested researchers with very agreeable goals, who love society and want the general good, the fact that they are in a position to be deciding who gets the money means that they're likely higher up in these companies. You don't get to be higher up and stay at these companies long enough unless you agree with the viewpoint.
Lucas Perry: And that viewpoint, though, in a market has to be in some sense aligned with the impersonal global corporate objective of maximizing the bottom line. There's this values filtration process internal to a company, where maybe you'll have all the people who are against Project Maven, but none of them are high enough. Right?
Mohamed Abdalla: Exactly.
Lucas Perry: You need to sift those people out for the higher positions, because the higher positions are the ones which have to be aligned with the bottom line, with maximizing profits for shareholders. Those people could authentically think that maximizing the profit of some big industrial company is a good thing, because they really trust in the market and how it serves people.
Mohamed Abdalla: I think there are people that actually believe this. So I know you say it kind of disbelievingly, but I think that people actually believe this.
Lucas Perry: Yeah, people really do believe this. I don’t actually think about this stuff a lot. But yeah. I mean, it makes sense to me. We buy all your stuff. So you’re serving me, I’m making a transaction with you. But this fact about this values sifting towards the top to be aligned with the profit maximization, those are the values that will remain for then deciding the funding of researchers at institutions. So no one has to be evil in the process. You just have to be following the impersonal incentives of a global capitalist industry.
Mohamed Abdalla: Yeah. I do not aim to shame anybody involved from either side. Certain executives I shame, and certain attorneys I shame. But I work under the assumption that all computer science AI ethicists, researchers, whatever you want to call them, are well-intentioned. And the way the system is set up, even well-intentioned researchers can have a negative impact on the research being done, and can have a limiting impact on the types of questions being considered.
And I hope by now you agree, at least theoretically, that by acting as a pseudo granting body, there's a chance for this influence to occur. But then in my work, what I did was I actually counted how many people were looking to big tech as a pseudo granting body. So I looked at the CVs of all computer science faculty at four schools: the University of Toronto, the Massachusetts Institute of Technology, Stanford, and Berkeley. Two private schools, two public schools. Two east coast universities, two west coast. And for each CV that I could find, I looked to answer a certain number of questions. Whether or not a specific faculty member works on AI. Whether or not they work on the ethics of AI, which I very loosely defined as having at least one paper about any sort of societal impact of AI. Whether or not they have ever received faculty funding from big tech, that is, grants or awards from companies. Whether they have received graduate funding from big tech, so was any portion of this faculty member's graduate education funded by big tech? And whether or not they are or were employed by big tech. So, have they at any time had any sort of previous or current financial relationship with big tech?
What the research shows is that of all computer science faculty, 52%, so at least half, view big tech as a funding body. That means as a professor, they have received a grant or an award from big tech to do research. And universities are technically here not to maximize profit for these companies, but to do science and, in theory, public-good kinds of things. Yet at least half of the researchers are looking to these companies as granting bodies.
If you narrow that down to computer science faculty that work in AI, that percentage goes up to 58%. If you limit it to computer science faculty who work in the ethics of AI or who are AI ethicists, it remains at 58%. Which means that 58% of the people looking to answer the really hard questions about AI and society, whether it’s short-term or long-term, view these companies as a funding body. Which in turn, as we discussed, opens them up to influence whether it’s subconscious or conscious.
Lucas Perry: So then if you’re Mohamed Abdalla and you come out with a paper like you came out with, is the thought here that it’s very much less likely than in the future you will receive grants from big tech?
Mohamed Abdalla: So it’s unclear. There’s a meta game to play here as well. A classic example here is Michael Moore. The filmmaker, political activist. I’m not sure the title you want to give him. But a lot of his films are funded by Fox or some subsidiary of Fox.
Lucas Perry: Yeah. But they’re all leftist views.
Mohamed Abdalla: Exactly. So as in the example that I gave previously, where Google would invite a token critic to their conferences to give some semblance of balance, simply disagreeing with them will not exclude you from their funding. It's just that they will likely limit the number of people who are publicly disagreeing with them by choosing who to fund. Again, it seems too self-serving to say, "I'm a martyr. I've sacrificed myself." I don't view that as the case, although I did get some feedback saying, "Maybe you shouldn't push this until you get a job," kind of thing. But what I'm pushing for is that it shouldn't be researchers deciding who they get money from. This is a higher level issue.
Let's go into a pure hypothetical. And for the listeners, this is not what I actually believe. But let us consider big tech to be evil, right? And publicly minded researchers who refuse to take money from big tech as good. Again, good here is not being used in the prescriptive sense, just in our hypothetical. If all of the good researchers refuse to take money from these evil corporations, then what you're going to end up with is that these researchers will not get jobs, will not get promoted. Their viewpoints will die out. But also, the people who are not good will have no problem taking this money, and they will be less likely to challenge these evil corporations. So from a game theoretic perspective, from a pure utility perspective, it makes sense for you as a good researcher to take this bad money.
So that's why I state in the paper that whatever our fix to this is, it can't be done at the individual researcher level. You have to assume that all researchers are good, but you have to come up with a system level solution. Whether that's legislation from governments, a funding body solution, or a collection of institutions that come up with an institutional policy that applies to all of these top schools, or to all computer science departments all over the world, whoever we can get to agree together. So that's basically what I'm pushing for. But there are also ways that you can influence research questions without directly funding them. And the way that you do this is by repeated exposure to your ideas of ethics, or your ideas of what is fair and what is not fair, however you want to phrase it.
I got a lot of puzzled looks when I told people that I also looked at whether or not a professor was funded during graduate school by these companies. And there is some rightful questioning there. Because am I saying, or am I assuming, that the fact that they got a scholarship from, let's say, Microsoft during their PhD is going to impact their research question 20 years down the line when they're a professor? I do not think that's actually how this works. But the reason for asking this was to show how much exposure these faculty members were given to big tech's values, or Silicon Valley values, however you want to say it.
Even if companies are not actively going out of their way to give money to researchers to affect their research questions, if every single person who becomes a faculty member at all of these prestigious schools has at one point done some term, whether that's a four-month internship, a one-year stint, or a multiple-year stint, in big tech in Silicon Valley, it's only human to worry that repeated exposure to such views will impact whatever views you end up developing yourself. Especially if you're not going into this environment trying to critically examine their views, you're just likely to adopt them internally, subconsciously, before you have to think about it.
And what we show here is that 84% of all computer science faculty have had some sort of financial connection with big tech, whether that's receiving funding as a graduate student or a faculty member, or being previously employed.
Lucas Perry: We know what the incentives of these industries are all about. So why would they even be interested in funding the graduate work of someone, if it wasn’t going to groom them in some sense? Are there tax reasons?
Mohamed Abdalla: I’m not 100% sure how it works in the United States. It exists in Canada, but I’m not sure if it does in the U.S.
Lucas Perry: Okay.
Mohamed Abdalla: So there are multiple reasons for doing this. There is of course, as usual, the PR aspect of it: we are helping students pay off their student loans in the States, I guess. There's also the fact that if you fund someone's graduate education, you're building connections, making them easier to hire, possibly.
Lucas Perry: Oh yeah. You win the talent war.
Mohamed Abdalla: Yeah, exactly. If you win a Microsoft Fellowship, I think you also get an internship at Microsoft, which makes you more likely to work for Microsoft. So it's also a semi-hiring thing. There are a lot of reasons for them to do this, and I can't say that influence is the only reason. But consider that if you limit it to CS faculty who work in AI ethics, 97% of them have had some sort of financial connection to big tech. 97% of them have had exposure to Silicon Valley's dominant views of ethics. What percentage of this 97 is going to subconsciously accept these views, or adopt these views because they haven't been presented with another view, or haven't been given the opportunity to consider another critical view that disagrees with their fundamental assumptions? It's not to say that it's impossible; it's just to ask, should they be having such a large influence?
Lucas Perry: So we've spent a lot of time here then on this third point, on influencing the research questions and plans of individual scientists. It seems, largely again, that by giving money, you can align researchers' incentives with your own. You can direct research toward the kinds of questions you care about. You can give the impression of social responsibility when actually you're constraining and funneling research interest and research activity into places which are beneficial to you. You're also, I think you're arguing here, exposing researchers to your values and your community.
Mohamed Abdalla: Yeah. Not everyone's view of ethics is fully formed when they get a lot of exposure to big tech. And this is worrying, because if you're a blank slate, you're much more easily written upon. So they're more likely to impart their views on you. And if 97% of the people are exposed, it's only safe to assume that some percentage will absorb this. And then that will artificially inflate the number of people who agree with big tech's viewpoints, and therefore further push academia, or the academic conversation, into alignment with something they find favorable.
Lucas Perry: All right. So the last point here then is how big tobacco and big tech discover receptive academics who can be leveraged. So this is the point about finding someone who may be a skeptic or critic in a community of some widely held scientific view, and then funding and propping them up so that they introduce some level of fake or constructed, and artificial doubt and skepticism and obfuscation of the issue. So would you like to unpack how big tobacco and big tech have done this?
Mohamed Abdalla: When it comes to big tobacco, we did cover this a tiny bit before. For example, when we talked about how they would fund research that questions whether it's actually keeping birds as pets that causes lung cancer. And so long as this research is still being funded and has not been published in a journal, they can, honestly speaking, say that it is not yet conclusive. There is research being done, and there are other possible causes being studied.
This is despite having internal research showing that it's not true. If you go from a pure logic standpoint, where "conclusive" is defined as there existing no oppositional research, they've satisfied the conditions such that it is not conclusive, and there is fake doubt.
Lucas Perry: Yeah. You’re making logically accurate statements, but they’re epistemically dishonest.
Mohamed Abdalla: Yeah. And that's basically what they do when they leverage these academics to sow doubt. But they knew, especially in Europe a little after this, that there was a lot of concern regarding the funding of academics by big tobacco. So they would purposefully search for European scientists who had no previous connection to them, who they could leverage to testify. And this was part of a larger project that they called the White Coat Project, which resulted in infiltrations of governing bodies, heads of academia, and editorial boards to help with litigation and legislation. And that's actually why I named my paper The Grey Hoodie Project. It's an homage to the White Coat Project. But since computer scientists don't actually wear white coats, we're more likely to wear grey hoodies. That's where the name of the paper comes from. So that's how big tobacco did it.
When it comes to big tech, we have clear evidence that they have done the same, although it's not clear at what scope, because there haven't been enough leaks yet. This is not something that's usually public-facing. But Eric Schmidt, previously the CEO of Google, was, and I quote from an Intercept article, "advised on which academic AI ethicists his private foundation should fund." I think Eric Schmidt has very particular views regarding the place of big tech and its impact on society that likely would not be agreed with by the majority of AI ethicists. However, if they find an ethicist that they agree with, and they amplify him and give him hundreds of thousands of dollars a year, Schmidt is basically pushing his viewpoint on the rest of the community by way of funding.
In another example, Eric Schmidt again asked that his foundation fund a certain professor, and this professor later served as an expert consultant to the Pentagon's innovation board. And Eric Schmidt is now in some military advisement role in the U.S. government. That's a clear example of how those in big tech are looking to leverage receptive academics. We don't have a lot of examples from other companies. But given that it is happening and this one got leaked, do we have to wait until other ones get leaked to worry about this?
There's an interesting example that I personally view as quite weak. I don't like this example, but the irony will show in a little while. There is a professor at George Mason University who had written academic research that was funded indirectly by Google. And his research criticized the antitrust scrutiny of Google shortly before he joined the FTC, the Federal Trade Commission. And after he joined the FTC, they dropped their antitrust suits. They've picked them up now again. This claim basically raises the question of whether Google funded him because of his criticism of the antitrust scrutiny of Google. That is one possible reason they chose to fund him. There's another unstated question in this example: did he choose to criticize antitrust scrutiny of Google because they fund him? So which direction does this flow? It's possible that neither direction flows. But when he joined the FTC, did they drop their case because they had essentially hired a compromised academic?
I do not believe, and I have no proof of, any of this. But Google's response to this suggestive question was that this exposé was pushed by the Campaign for Accountability. And Google said that this evidence should not be acceptable, because this nonprofit, the Campaign for Accountability, is largely funded by Oracle, which is another tech company.
So if you abstract this away, what Google is saying is that for claims made regarding societal impacts, or legislation, or anything to do with AI ethics, if the researcher making them is funded by a big tech company, we should be very skeptical about what they're saying. Because they're essentially saying that you should not trust this because it's funded by Oracle, it's largely backed by Oracle. You abstract that away: it's largely backed by big tech. Does that not apply to everything that Google does, or everything that big tech in general does? So it is clear that they themselves know that industry money has a corrupting influence on the type of research being done. And that just supports my entire piece.
Lucas Perry: Yeah. I mean in some sense, none of this is mysterious. They couldn't not be doing this. We know what industry wants and does, and they're full of smart people. So if someone from industry who is participating in this were listening to this conversation, they would be like, "You've woken up to the obvious. Good job." And that's not to downplay the insight of your work. It also makes me think of lobbying.
Mohamed Abdalla: 100%.
Lucas Perry: We could figure out all of the machinations of lobbying and it would be like, “Well yeah, they couldn’t not be doing this, given their incentives.”
Mohamed Abdalla: So I fully agree. If you come into this knowing all of the incentives, what they’re doing is the logical move. I fully agree that this is obvious, right?
Lucas Perry: I don’t think it’s obvious. I think it naturally follows from first principles, but I feel like I learned a lot from your paper. Not everyone knows this. I would say not even many people know this.
Mohamed Abdalla: I guess "obvious" wasn't the correct word. But I was going to say that the points I raise show that there's a clear concern here, and I think that once people hear the points, they're more likely to believe this. But there are people in academia who disagree. A common criticism I get is that people know who pays them. So they say that it's unfair to assume that someone funded by a company cannot be critical of that company or of big tech in general, and that several researchers who work at these companies are critical of their employers' technology. The point of my work is to lay this out flat, to show that it doesn't matter if people know who pays them; the academic literature shows that this has a negative effect, and therefore disclosure isn't enough. I don't want to name the person who made this criticism, but they're pretty high up. The idea that a conflict of interest is okay simply because it's disclosed seems to be a uniquely computer science phenomenon.
Lucas Perry: Yeah. It’s a weird claim to be able to say, “I’m so smart and powerful, and I have a PhD, and giving me dirty money or money that carries with it certain incentives, I’m just free of that.”
Mohamed Abdalla: Yeah. Or it's the incorrectly perceived ability to self-correct for these biases. That's the current that I'm trying to fight against, because the mainstream current in academia is sort of like, "Yeah, but we know who pays us, so we're going to adjust for it." And although the conclusions I draw are intuitive, I think, with big tobacco everyone has an intuitively negative gut feeling, so it's very easy for them to agree. It's a little bit more difficult to convince them that even if you believe big tech is a force for good, you should still be worried.
Lucas Perry: I also think that the word here that is better than obvious is it’s self-evident once it’s been explained. It’s not obvious. Because if it were obvious, then you wouldn’t have needed to write this paper. And I already would’ve known about this, and everyone would have. So if you were just to wrap up and summarize in a few bullet points here this last point on discovering receptive academics and leveraging them, how would you do that?
Mohamed Abdalla: I kind of summarized this for policymakers. When policymakers try to make policy, they tend to converse with three main parties: industry, academics, and the public. And they believe that getting this wide range of viewpoints will help them arrive at the best compromise to help society move the way it should. However, given the very mindful way that big tech is leveraging academics, a policymaker will talk to industry; they'll talk to the very specific researchers who are handpicked by industry, and therefore basically in agreement with industry; and they will talk to the public. So two thirds of the voices they hear are industry-aligned voices, as opposed to previously one third. And that's something that I cover in the paper.
And that's the reason why you want to leverage receptive academics: it shapes the majority of what a policymaker hears. They're really busy people, and they don't have the time to do the research themselves. If two out of every three people are pushing policy or views that align with big tech's profit motive, then the policymaker is more likely to believe that viewpoint. Whereas with an independent academia, if the right decision is to agree with big tech, you assume they would agree; if the right decision is to disagree, you assume they would disagree. But if industry leverages the academics, this is less likely to happen. Therefore, academia is not playing its proper role when it comes to policy-making.
Lucas Perry: All right. So I think this pretty clearly lays out how industries in general, whether it be big tobacco, big tech, oil companies, greenhouse-gas-emitting energy companies, you even brought up the food industry, really anyone that has the bottom line as their incentive, use these strategies. These strategies are just naturally born of the impersonal incentive structure of a corporation or industry.
This next question is maybe a bit more optimistic. All these organizations are made up of people, and these people are all I guess more or less good or more or less altruistic. And you expect that if we don’t go extinct, these industries always get caught, right? Big tobacco got caught, oil industries are in the midst of getting caught. And next we have big tech. And I mean, the dynamics are also a little bit different because cigarettes and oil can be booted. But we’re kind of married to the technology of big tech forever. Literally.
Mohamed Abdalla: I would agree with that.
Lucas Perry: Yeah. So the strategy for those two seems to be: obfuscate the issue for as long as possible so your industry exists as long as possible, and then you will die. There is no socially responsible version of your industry. That's not going to happen with big tech. I mean, technology is here to stay. So does big tech have any actual incentives for genuine social responsibility, or are they just playing the optimal game from their end, where you obfuscate for as long as possible and you bias all of the events and the researchers as much as possible? Eventually, there'll be enough podcasts like this and minds changed that they can't do that any longer without incurring a large social cost in opinion, and perhaps in the market. So is it always simply the case that promoting the facade of being socially responsible is cheaper and better than actually becoming socially responsible?
Mohamed Abdalla: So there's a thing that I have to say, because the people that I worked with who still work in health policy regarding tobacco would be hurt if I didn't say it. Big tobacco is still heavily investing in academia, and they're still heavily pushing research and certain viewpoints. And although the general perception has shifted regarding big tobacco, they're not done yet. So although I do agree with your conclusion that it is only a matter of time until they're done, to think that the fight is over is simply not true. There are still a lot of health policy folks who are pushing as hard as they can to completely get rid of them. Even within the United States and Europe, the companies create new institutions that do other research. They've become maybe a little bit more subtle about it. But declaring victory, I think, is where things are headed, it just has not yet happened. So there's still work to be done.
Regarding whether or not big tech has an actual incentive to do good, I like to assume the best of people. I assume that Mark Zuckerberg actually founded Facebook because he actually cared about connecting people. I believe that in his heart of hearts, he does have at least generally speaking, a positive goal for society. He doesn’t want to necessarily do bad or be wrecking democracies across the world. So I don’t think that’s his goal, right?
So I think that starting from that viewpoint is helpful because, one, it will make you heard. But also, it shows how this is a largely systemic issue. Because despite his well-intentioned goals, which we're assuming exist, and I actually do believe at some level that's true, the incentives in the system in which he operates add a caveat to everything he says that we aren't putting there.
So for example, when Facebook says they care about social responsibility, or that they will take steps to minimize the amount of fake news, whatever that means: all of the statements made by any company in any industry, because of the fact that we're in a capitalist system, carry the implicit condition that they hold only insofar as they do not hamper profits, right? So when Facebook wants to deal with fake news, they will turn to automated AI algorithms. And they say we're doing this because it's impossible to moderate the number of stories that we get.
From a strictly numeric perspective, this is true. But what they're not saying is that it is not possible for them to use humans to moderate all of these stories while staying profitable. So that is to say, the starting point of their action may be positive, but the fact that it has to be warped to fit the profit motive ends up largely negating, if not completely negating, the effects of the actions they take.
So for example, you can take Facebook's content moderation in the continent of Africa. They used to have none, and until recently they had only one content moderation center in the entire continent. Given the number of languages spoken in that continent alone, how many people do you have to hire in that one moderation center? How many people per language are you hiring? Sophie Zhang's resignation letter basically showed that the company was aware of all of these issues and had employees, especially at the lower levels, who were passionate about the social good. So it's clear that they are trying to do social good. But the fact that everything is conditioned on whether or not it will result in money hurts the end result of their action. So I believe, and I agree with you, that this industry is different, and I do believe that they have an incentive for the social good. But unless this incentive is forced upon everyone else, they are hurting themselves if they refuse to take profit that they could take, if that makes sense.
If you choose not to do something because, while it is socially good, it will hurt your profits, some other company is going to do that thing. And they will take the profits and erode your market share until you can find a way to account for social good in the stock price.
Lucas Perry: People value you more now that you are being good.
Mohamed Abdalla: Yeah. But I don’t think we’re at a stage where that’s possible or that’s even well-defined what it means. So I agree that even if this research is well-intentioned, the road to hell is paved with good intentions.
Lucas Perry: Yeah. Good intentions lead to bad incentives.
Mohamed Abdalla: Or the good intentions are required to be forced through the lens of the bad incentive. They have to be aligned with the bad incentive to actually manifest. Otherwise, they will always get blocked.
Lucas Perry: Yeah. By that you mean the things which are good for society must be aligned with the bad incentives of maximizing profit share, or they will not manifest.
Mohamed Abdalla: Exactly. And that's the issue when it comes to funding academia. Because it is possible to change society's viewpoint on, one, what is possible, but two, what is preferable, to match the profit incentives of these companies. So you could mold the answers to questions like: what is ethical AI, and what does it cover? What sort of legislation is feasible? What sort of legislation is desirable? In what contexts does it apply or not apply? In what jurisdictions, and so on and so forth. These are all still open questions. And it is in the interest of these companies to help mold these answers such that they have to change as little as possible.
Lucas Perry: So when we have benefits that are not accruing from industry, or where we have negative externalities or negative effects from the incentives of industry leading to detrimental outcomes for society, the thing that we have for remedying that is regulation. And I would guess that libertarian attitudes are more common at big tech companies than in the general population. Which in this sense I would summarize as socially liberal or left-leaning, but against regulation, so valuing the free market. So there's this value resistance. We talked about how the people at the top are going to be sifted through. You're not going to have people at the top of big tech companies who really love regulation, or think that regulation is really good for making a beautiful world, because regulation is always hampering the bottom line.
Yet it's the tool that we have for trying to mitigate negative externalities and negative outcomes from industry maximizing its bottom line. So what do you suggest that we do? Is it just that we need good regulation, some meaningful regulatory system and effective policy? Because otherwise, nothing will happen. They'll just keep following their incentives, and they have so much power that they'll keep doing the same thing. And the only way to break that is regulation.
Mohamed Abdalla: So I agree. The solution is basically regulation. The question is, how do we go about getting there? Or what specific rules do we want to use or laws do we want to create? And I don’t actually answer any of this in my work. I answer a question that comes before the legislation or the regulation. Which is basically, I propose that AI ethics should be a different department from computer science. So that in the same way that bioethics is no longer in the same department as biology or medicine, AI ethics should be its own separate department. And in that way, anyone working in this department is not allowed to have any sort of relationship with these companies.
Lucas Perry: You call that sequestration.
Mohamed Abdalla: It’s not my own term. But yeah, that’s what it’s called.
Lucas Perry: Yeah. Okay. So this is where you’re just removing all of the incentives. Whether you’re declaring conflict of interest or not, you’re just removing the conflict of interest.
Mohamed Abdalla: Yes. Putting myself on the spot here, it’s very difficult to assume that I myself have not been corrupted by repeated exposure. As much as I try to view myself as a critical thinker, the research shows repeated exposure will influence what you think and what you believe. I’ve interned at Google for example, and they have a very large amount of internal propaganda pointed at their employees.
So I can’t barge in here saying that I am a clean slate or, “I’m a clean person. You should listen to my policies.” But I think that academia should try to create an environment where it is possible. Or dare I say, encouraged to be a clean person where clean means no financial involvement with these companies.
That said, there are a lot of steps that can be taken when it comes to regulation. Slightly unrelated, but not entirely unrelated, is fixing the tax code in the U.S. and Canada and around the world. A large reason why a lot of computer science faculty and computer scientists in general look to industry for funding is because governments have been cutting the amount of money available for research funding, or at least not increasing it in line with the amount of research being done in these fields. And why do governments not have as much money? This is probably in part because these companies are not paying their fair share in taxes, which is how a lot of researchers get their funding. That's one way of doing it. If you want to go into specifics, it's more difficult and much harder to sell specific policies. I don't think regulation of specific technologies would be effective, because the technologies change very fast.
I think creating a governmental body whose role it is to sue these companies when they do things that violate our social norms is probably the way to go about it. But I don't know. It's hard for me to say. It's a difficult question that I don't have an answer for. We don't even know who to ask for legislation, because every computer scientist is sort of corrupted. And then it's like, "Okay, do we not use computer scientists at all? Do we rely only on economists and moral philosophers to write this sort of legislation?" I don't know.
Lucas Perry: So I want to talk a little bit about transformative AI, and the role that this transition plays in that. There's a meme that I think needs to be combated: the race between China and America on AI, with the end goal being AI systems that are increasingly powerful.
So there's some sense that any kind of regulation used to try to fix any of these negative externalities from these incentives is just shooting ourselves in the foot, while the evil other is racing to beat us.
Mohamed Abdalla: That’s the Eric Schmidt argument.
Lucas Perry: So we can’t be implementing these kinds of regulations in the face of the geopolitical and international problem of racing to ever more powerful AI systems. So you already said this is the Eric Schmidt argument. What is your reaction to this kind of argument?
Mohamed Abdalla: There are multiple possible reactions. And I don't like to state which one I believe in personally, but I'd like to walk through them. First off, let us assume that the U.S. and China are racing for an artificial general intelligence, AGI. Would you not then increase government funding and nationalize this research, such that it belongs to the government and not to a multinational corporation? In the same way that if, for example, Google, Facebook, Microsoft, Alibaba, or Huawei were in a race to develop nukes, would you say, "Leave these companies alone so they can develop nuclear weapons, and once they develop a nuke, we'll be able to take it"? Or would you not nationalize these companies? Or if not nationalize them, then basically require that they work only for the U.S., with no interests in any other country. That is a form of legislation or regulation.
Governments would have to have a much bigger say in the type of research being done, who's doing it, and what can be done. For example, in the aerospace industry, you can't employ non-U.S. citizens. Is this what you're pushing for in artificial intelligence research? Because if not, then you're conceding that AGI is not likely to happen. But if you do believe that this is likely to happen, then you would be pushing for some sort of regulation. You could argue about which regulation. What I don't find coherent is the viewpoint that we should leave these companies alone to compete with the Chinese companies because they're going to create this thing we need to beat the Chinese to. If you believe that this is going to happen, you'd still be in support of regulation. It would just be different regulation.
Lucas Perry: I mean obviously I can’t speak for Eric Schmidt. But the kinds of regulation that stops the Chinese from stealing the AGI secrets is good regulation. And then anything else that slows the power of our technology is bad regulation.
Mohamed Abdalla: Yes. But for example, when Donald Trump banned the H-1B visa. Or not banned, he put a limit or a pause on it. I'm not sure of the exact thing that happened.
Lucas Perry: Yes. He’s made it harder for international students to be here and to do work here.
Mohamed Abdalla: Yes, exactly. That is the type of regulation that you would have if you believed AI was a threat, if we really are racing the Chinese. If you believed that, you would be for that sort of regulation, because you wouldn't want these companies training foreign nationals in the development of this technology. Yet this is not what these companies are going for. They are not agreeing with the legislation or regulation that limits the number of foreign workers they can bring in.
Lucas Perry: Yeah. Because they just want all the talent.
Mohamed Abdalla: Exactly. But if they believed that this was a matter of national security, would they not support this? You can't make the national security argument, "Don't regulate us, because we need to develop as much as we can, as fast as we can," while also pushing against the regulation that argument implies: if this was truly dangerous, if we did truly need to leave you unregulated internally, we should limit who can work for you in the same way that we do for rocketry. Who can work on rockets, who can work at NASA? They have to be U.S. citizens.
Lucas Perry: Why is that contradictory?
Mohamed Abdalla: Because they're saying, "Don't regulate us in terms of what we can work on," but they're also saying, "Don't regulate us in terms of who can work for us." If what you're working on is a matter of national security and you care about national security, then by definition, you want to limit who can work on it. If you say there should be no limit on who can work for us, then you are basically admitting either that this is not a matter of national security, or that it's profits over everything else. Google, Facebook, Microsoft: when possible legislation comes up, the Eric Schmidt argument gets played. And it's like, "If you legislate us, if you regulate us, you are slowing down our progress towards this technology."
But if any sort of regulation of the development of tech will slow down the arrival of AGI, which we're assuming the Department of Defense cares about, then what you're saying is that these companies are essentially striving towards AGI, so should they not be protected from foreign workers infiltrating? This is where the companies hold two opposing viewpoints, depending on who they're talking to: no, don't regulate us, because we're working towards AGI and you don't want to stop us; but at the same time, don't regulate immigration, because we need these workers. But if what you were working on is sensitive, then you shouldn't even be able to take these workers.
Lucas Perry: Because it would be a national security risk.
Mohamed Abdalla: Exactly. When a lot of your researchers come from another country and they’re likely to go back to that country or at least have friends, have conversations with other countries.
Lucas Perry: Or just be an agent.
Mohamed Abdalla: Yeah, exactly. So if this is actually your worry that this regulation will slow down the development of AGI, how can you at the same time be trying to hire foreign nationals?
Lucas Perry: All right. So let’s do some really rapid fire here.
Mohamed Abdalla: Okay.
Lucas Perry: Is there anything else that you wanted to add to this argument about incentives and companies actually just being good? And we are walking through this Eric Schmidt argument.
Mohamed Abdalla: Yeah. So the thing I want to highlight is that this is a system level problem. So it’s not a problem with any specific company, despite some being in the news more than others. It’s also not a problem with any specific researchers or institutions. This is a systemic issue. And since it’s a high level problem, the solution needs to be at a high level as well. Whether it’s at the institutional level or national level, some sort of legislation, it’s not something that research individually can solve.
Lucas Perry: Okay. So let's just blast through the action items here for solving this problem. You argue that everyone should post their funding information online, including historical funding information. This increases transparency on conflicts of interest. But as we discussed earlier, the conflicts actually just need to be removed. You also argue that universities should publish documents highlighting their position on big tech funding for researchers.
Mohamed Abdalla: Yeah. Basically I want them to critically consider the risks associated with accepting such funding. I don’t think that it’s a consideration that most people are taking seriously. And if they are forced to publicly establish a position, they’ll have to defend it. And that will I believe lead to better results.
Lucas Perry: Okay. And then you argue that more discussion on the future of AI ethics and the role of industry in this space is needed. Can't argue with that. And that computer science should explore how to actively court antagonistic thinkers.
Mohamed Abdalla: Yeah. I think there’s a lot of stuff that people don’t say because it’s either not in the zeitgeist, or it’s weird, or seems an attack on a lot of researchers.
Lucas Perry: Stigmatized.
Mohamed Abdalla: Yeah, exactly. So instead of trying to find people who simply list on their CV that they care about AI ethics or AI fairness, you should find people who are willing to disagree with you. If they're able to raise points worth disagreeing with, it doesn't matter if you don't agree with their viewpoint.
Lucas Perry: Yeah. I mean usually, the people that are saying the most disruptive things are the most avant garde and are sometimes bringing in the revolution that we need. You also encourage academia to consider the splintering of AI ethics into different department from computer science. This would be analogous to how bioethics is separated from medicine and biology. We talked about this already as sequestration. Are there any other ways that you think that the field of bioethics can help inform the development of AI ethics on academic integrity?
Mohamed Abdalla: If I'm being honest, I'm not an expert on bioethics or the history of the field of bioethics. I only know it in relation to how it has dealt with the tobacco industry. But I think, largely, more historical knowledge needs to be used by people deciding what we as computer scientists do. There are a lot of lessons learned by other disciplines that we're not using, and they've basically been in a mirror situation. So we should be using this knowledge. I don't have an answer, but I think that there's more to learn.
Lucas Perry: Have you received any criticism from academics in response to your research, following the publication that you want to discuss or address?
Mohamed Abdalla: For this specific publication, no. But it may be because of the COVID pandemic. I have raised these points previously, and I have received some pushback, but not for this specific piece. This piece was covered in WIRED, and there are some criticisms of the piece in the WIRED article, but I've addressed them in this talk.
Lucas Perry: All right. So as we wrap up here, do you have anything else that you’d like to just wrap up on? Any final thoughts for listeners?
Mohamed Abdalla: I just want to stress, if listeners have made it this far without hating me, that this work is not meant to call into question the integrity of researchers, whether they're in academia or in industry. And I think these are critical conversations to be had now. It may be too late for the initial round of AI legislation, but for the future, it's good. And for longer-term problems, I think it's even more important.
Lucas Perry: Yeah, there's some meme going around that one of the major problems in the world is good people running the software of bad ideas on their brains. And I think similar to that is all of the good people who are caught up in bad incentives. So this is just echoing your non-critical, non-judgmental stance: the universality of the human condition is that we all get caught up in these systemic negative incentive structures that lead to behavior that is harmful for the whole.
So thank you so much for coming on. I really learned a lot in this conversation, and I really appreciate that you wrote this article. I think it's important, and I'm glad that we're having this thinking early and can hopefully try to make the transformation of big tech into something more positive happen faster than it has historically with other industries. So if people want to follow you, look into more of your work, or get in contact with you, where are the best places to do that?
Mohamed Abdalla: I'm not on any social media, so email is the best way to contact me. It's on my website. If you search my name and add the University of Toronto at the end of it, I should be near the top. It's cs.toronto.edu/msa. And that's where all my work is also posted.
Lucas Perry: All right. Thanks so much, Mohamed.
Mohamed Abdalla: Thank you so much.