
AI Alignment Podcast: China’s AI Superpower Dream with Jeffrey Ding

“In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China.” (FLI’s AI Policy – China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China’s AI development and strategy, as well as China’s approach to strategic technologies more generally. 

Topics discussed in this episode include:

  • China’s historical relationships with technology development
  • China’s AI goals and some recently released principles
  • Jeffrey Ding’s work, Deciphering China’s AI Dream
  • The central drivers of AI and the resulting Chinese AI strategy
  • Chinese AI capabilities
  • AGI and superintelligence awareness and thinking in China
  • Dispelling AI myths, promoting appropriate memes
  • What healthy competition between the US and China might look like

You can take a short (3 minute) survey to share your feedback about the podcast here.

 

Key points from Jeffrey: 

  • “Even if you don’t think Chinese AI capabilities are as strong as have been hyped up in the media and elsewhere, important actors will treat China as either a bogeyman figure or as a Sputnik type of wake-up call motivator… other key actors will leverage that as a narrative, as a Sputnik moment of sorts to justify whatever policies they want to do. So we want to understand what’s happening and how the conversation around what’s happening in China’s AI development is unfolding.”
  • “There certainly are differences, but we don’t want to exaggerate them. I think oftentimes analysis of China happens in a vacuum where it’s like, ‘Oh, this only happens in this mysterious far off land we call China and it doesn’t happen anywhere else.’ Shoshana Zuboff has this great book on surveillance capitalism that shows how the violation of privacy is pretty extensive on the US side, not only from big companies but also from the national security apparatus. So I think a similar phenomenon is taking place with the social credit system. Jeremy Daum at Yale Law’s China Center has put it really nicely where he says that, ‘We often project our worst fears about technology in AI onto what’s happening in China, and we look through a glass darkly and we unleash all of our anxieties on what’s happening onto China without reflecting on what’s happening here in the US, what’s happening here in the UK.'”
  • “I think we have to be careful about which historical analogies and memes we choose. So ‘arms race’ is a very specific callback to the Cold War context, where there’s almost these discrete types of missiles that we are racing the Soviet Union on and discrete applications that we can count up; Or even going way back to what some scholars call the first industrial arms race in the military sphere, over steam-powered boats between Britain and France in the late 19th century. And in all of those instances you can count up: France has four ironclads, the UK has four ironclads; They’re racing to see who can build more. I don’t think there’s anything like that. There’s not this discrete thing that we’re racing to see who can have more of. If anything, it’s about a competition to see who can absorb AI advances from abroad better, who can diffuse them throughout the economy, who can adopt them in a more sustainable way without sacrificing core values. So that’s sort of one meme that I really want to dispel. Related to that, an assumption that often influences a lot of our discourse on this is the techno-nationalist assumption, which is this idea that technology is contained within national boundaries and that the nation state is the most important actor –– which is correct and a good one to have in a lot of instances. But there are also good reasons to adopt techno-globalist assumptions as well, especially in the area of how fast technologies diffuse nowadays and also how much, underneath this national level competition, firms from different countries are working together and making standards alliances with each other. So there’s this undercurrent of techno-globalism, where there are people flows, idea flows, company flows happening while the coverage and the sexy topic is always going to be about national level competition, zero-sum competition, relative gains rhetoric. So you’re trying to find a balance between those two streams.”
  • “I think currently a lot of people in the US are locked into this mindset that the only two players that exist in the world are the US and China. And if you look at our conversation, right, oftentimes I’ve displayed that bias as well. We should probably have talked a lot more about China-EU or China-Japan cooperation in this space and networks in this space, because there’s a lot happening there too. So a lot of US policy makers see this as a two-player game between the US and China. And then in that sense, if there’s some cancer research project about discovering proteins using AI that may benefit China by 10 points and benefit the US only by eight points, but it’s going to save a lot of people from cancer –– if you only care about making everything about maintaining a lead over China, then you might not take that deal. But if you think about it from the broader landscape of it’s not just a zero-sum competition between the US and China, then your evaluation of those different point structures and what you think is rational will change.”

 

Important timestamps: 

0:00 Intro

2:14 Motivations for the conversation

5:44 Historical background on China and AI 

8:13 AI principles in China and the US 

16:20 Jeffrey Ding’s work, Deciphering China’s AI Dream 

21:55 Does China’s government play a central hand in setting regulations? 

23:25 Can Chinese implementation of regulations and standards move faster than in the US? Is China buying shares in companies to have decision making power? 

27:05 The components and drivers of AI in China and how they affect Chinese AI strategy 

35:30 Chinese government guidance funds for AI development 

37:30 Analyzing China’s AI capabilities 

44:20 Implications for the future of AI and AI strategy given the current state of the world 

49:30 How important are AGI and superintelligence concerns in China?

52:30 Are there explicit technical AI research programs in China for AGI? 

53:40 Dispelling AI myths and promoting appropriate memes

56:10 Relative and absolute gains in international politics 

59:11 On Peter Thiel’s recent comments on superintelligence, AI, and China 

1:04:10 Major updates and changes since Jeffrey wrote Deciphering China’s AI Dream 

1:05:50 What does healthy competition between China and the US look like? 

1:11:05 Where to follow Jeffrey and read more of his work

 

Works referenced 

Deciphering China’s AI Dream

FLI AI Policy – China page

ChinAI Newsletter

Jeff’s Twitter

Previous podcast with Jeffrey

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. More works from GovAI can be found here.

 

Lucas Perry: Hello everyone and welcome back to the AI Alignment Podcast at The Future of Life Institute. I’m Lucas Perry and today we’ll be speaking with Jeffrey Ding from The Future of Humanity Institute on China and its efforts to be the leading AI superpower by 2030. In this podcast, we provide a largely descriptive account of China’s historical technological efforts, their current intentions and methods for pushing Chinese AI success, and some of the foundational AI principles being called for within China; We cover the drivers of AI progress, the components of success, and China’s strategies born of these variables; We also assess China’s current and likely future AI capabilities, and the consequences of all this tied together. The FLI AI Policy – China page and Jeffrey Ding’s publication Deciphering China’s AI Dream are large drivers of this conversation, and I recommend you check them out.

If you find this podcast interesting or useful, consider sharing it with friends on social media platforms, forums, or anywhere you think it might be found valuable. As always, you can provide feedback for me by following the SurveyMonkey link found in the description of wherever you might find this podcast. 

Jeffrey Ding specializes in AI strategy and China’s approach to strategic technologies more generally. He is the China lead for the Center for the Governance of AI. There, Jeff researches China’s development of AI and his work has been cited in the Washington Post, South China Morning Post, MIT Technology Review, Bloomberg News, Quartz, and other outlets. He is a fluent Mandarin speaker and has worked at the US Department of State and the Hong Kong Legislative Council. He is also reading for a PhD in international relations as a Rhodes Scholar at the University of Oxford. And so without further ado, let’s jump into our conversation with Jeffrey Ding.

Let’s go ahead and start off by providing a bit of the motivations for this conversation today. So why is it that China is important for AI alignment? Why should we be having this conversation? Why are people worried about the US-China AI dynamic?

Jeffrey Ding: Two main reasons, and I think they follow an “even if” structure. The first reason is China is probably second only to the US in terms of a comprehensive national AI capabilities measurement. That’s a very hard and abstract thing to measure. But if you’re looking at which countries have the firms on the leading edge of the technology, the universities, the research labs, and then the scale to lead in industrial terms and also in potential investment in projects related to artificial general intelligence, I would put China second only to the US, at least in terms of my intuition and sort of my analysis that I’ve done on the subject.

The second reason is even if you don’t think Chinese AI capabilities are as strong as have been hyped up in the media and elsewhere, important actors will treat China as either a bogeyman figure or as a Sputnik type of wake-up call motivator. And you can see this in the rhetoric coming from the US especially today, and even in areas that aren’t necessarily connected. So Axios had a leaked memo from the US National Security Council that was talking about centralizing US telecommunication services to prepare for 5G. And in the memo, one of the justifications for this was because China is leading in AI advances. The memo doesn’t really tie the two together. There are connections –– 5G may empower different AI technologies –– but that’s a clear example of how even if Chinese capabilities in AI, especially in projects related to AGI, are not as substantial as has been reported, or we think, other key actors will leverage that as a narrative, as a Sputnik moment of sorts to justify whatever policies they want to do. So we want to understand what’s happening and how the conversation around what’s happening in China’s AI development is unfolding.

Lucas Perry: So the first aspect being that they’re basically the second most powerful AI developer. And we can get into later their relative strength to the US; I think that in your estimation, they have about half as much AI capability relative to the United States. And here, the second one is you’re saying –– and there’s this common meme in AI Alignment about how avoiding races is important because in races, actors have incentives to cut corners in order to gain decisive strategic advantage by being the first to deploy advanced forms of artificial intelligence –– so there’s this important need, you’re saying, for actually understanding the relationship and state of Chinese AI Development to dispel inflammatory race narratives?

Jeffrey Ding: Yeah, I would say China’s probably at the center of most race narratives when we talk about AI arms races and the conversation in at least US policy-making circles –– which is what I follow most, US national security circles –– has not talked necessarily about AI as a decisive strategic advantage in terms of artificial general intelligence, but definitely in terms of decisive strategic advantage and who has more productive power, military power. So yeah, I would agree with that.

Lucas Perry: All right, so let’s provide a little bit more historical background here, I think, to sort of contextualize why there’s this rising conversation about the role of China in the AI space. So I’m taking this here from the FLI AI Policy – China page: “In July of 2017, the State Council of China released the New Generation Artificial Intelligence Development Plan. And this was an AI research strategy policy to build a domestic AI industry worth nearly $150 billion in the next few years” –– again, this was in 2017 –– “and to become a leading AI power by 2030. This officially marked the development of the AI sector as a national priority, and it was included in President Xi Jinping’s grand vision for China.” And just adding a little bit more color here: “given this, the government expects its companies and research facilities to be at the same level as leading countries like the United States by 2020.” So within a year from now –– maybe a bit ambitious, given your estimation that they have about half as much capability as us.

But continuing this picture I’m painting: “five years later, it calls for breakthroughs in select disciplines within AI” –– so that would be by 2025. “That will become a key impetus for economic transformation. And then in the final stage, by 2030, China is intending to become the world’s premier artificial intelligence innovation center, which will in turn foster a new national leadership and establish the key fundamentals for an economic great power,” in their words. So there’s this very clear, intentional stance that China has been developing in the past few years.

Jeffrey Ding: Yeah, definitely. And I think it was Jess Newman who put together the AI Policy – China page –– she did a great job. It’s a good summary of this New Generation AI Development Plan issued in July 2017, and I would say the plan was more reflective of momentum that was already happening at the local level, with companies like Baidu, Tencent, and Alibaba making the shift to focus on AI as a core part of their business strategy. Shenzhen and other cities had already set up their own local funds and plans, and this was an instance of the Chinese national government, in the words of I think Paul Triolo and some other folks at New America, “riding the wave,” and kind of joining this wave of AI development.

Lucas Perry: And so adding a bit more color here again: there’s also been developments in principles that are being espoused in this context. I’d say probably the first major principles on AI were developed at the Asilomar Conference, at least those pertaining to AGI. In June 2019, the New Generation of AI Governance Expert Committee released principles for next-generation artificial intelligence governance, which included tenets like harmony and friendliness, fairness and justice, inclusiveness and sharing, open cooperation, shared responsibility, and agile governance.

And then also in May of 2019 the Beijing AI Principles were released. That was by a multi-stakeholder coalition, including the Beijing Academy of Artificial Intelligence, a bunch of top universities in China, as well as industrial firms such as Baidu, Alibaba, and Tencent. And these 15 principles, among other things, called for “the construction of a human community with a shared future and the realization of beneficial AI for humankind and nature.” So it seems like principles and intentions are also being developed in China that echo and reflect many of the principles and intentions that have been developing in the States.

Jeffrey Ding: Yeah, I think there’s definitely a lot of similarities, and I think it’s not just with this recent flurry of AI ethics documents that you’ve done a good job of summarizing. It dates back even to the plan that we were just talking about. If you read the July 2017 New Generation AI Plan carefully, there’s a lot of sections devoted to AI ethics, including some sections that are worried about human-robot alienation.

So, depending on how you read that, you could read that as already anticipating some of the issues that could occur if human goals and AI goals do not align. Even back in March, I believe, of 2018, a lot of government bodies came together with companies to put out a white paper on AI standardization, which I translated for New America. And in that, they talk about AI safety and security issues, how it’s important to ensure that the design goals of AI are consistent with the interests, ethics, and morals of most humans. So a lot of these topics, I don’t even know if they’re western topics. These are just basic concepts: We want systems to be controllable and reliable. And yes, those have deeper meanings in the sense of AGI, but that doesn’t mean that some of these initial core values can’t be really easily applied to some of these deeper meanings that we talk about when we talk about AGI ethics.

Lucas Perry: So with all of the animosity and posturing and whatever that happens between the United States and China, these sort of principles and intentions which are being developed, at least in terms of AI –– both of them sort of have international intentions for the common good of humanity; At least that’s what is being stated in these documents. How do you think about the reality of the day-to-day combativeness and competition between the US and China in relation to these principles which strive towards the deployment of AI for the common good of humanity more broadly, rather than just within the context of one country?

Jeffrey Ding: It’s a really good question. I think the first point to clarify is these statements don’t have teeth behind them unless they’re enforced, unless there are resources dedicated to funding research on these issues, to track 1.5 and track 2 diplomacy, to technical meetings between researchers. These are just statements that people can put out, and they don’t have teeth unless they’re actually enforced. Oftentimes, we know, that’s the case: firms like Google, Microsoft, and Amazon will put out principles about facial recognition or what their ethical stances are, but behind the scenes they’ll chase profit motives and maximize shareholder value. And I would say the same would take place for Tencent, Baidu, Alibaba. So I want to clarify that, first of all. The competitive dynamics are real: It’s partly not just an AI story, it’s a broader story of China’s rise. I come from an international relations background, so I’m a PhD student at Oxford studying that, and there’s a big debate in the literature about what happens when a rising power challenges an established power. And oftentimes frictions result, and it’s about how to manage these frictions without leading to accidents, miscalculation, arms races. And that’s the tough part of it.

Lucas Perry: So it seems –– at least for a baseline, thinking that we’re still pretty early in the process of AI alignment or this long-term vision we have –– it seems like at least there are theoretically some shared foundational principles reflected across both cultures. Again, these Beijing AI Principles also include focus on benefiting all of humanity and the environment; serving human values such as privacy, dignity, freedom, autonomy and rights; continuous focus on AI safety and security; inclusivity, openness; supporting international cooperation; and avoiding a malicious AI race. So the question now simply seems to be the implementation of these shared principles: ensuring that they manifest.

Jeffrey Ding: Yeah. I don’t mean to be dismissive of these efforts to create principles that were at least expressing the rhetoric of planning for all of humanity. I think there’s definitely a lot of areas of US-China cooperation in the past that have also echoed some of these principles: bi-lateral cooperation on climate change research; there’s a good nuclear safety cooperation module; different centers that we’ve worked on. But at the same time, I also think that even with that list of terms you just mentioned, there are some differences in terms of how both sides understand different terms.

So with privacy in the Chinese context, it’s not necessarily that Chinese people or political actors don’t care about privacy. It’s that privacy might mean more of privacy as an instrumental right, to ensure your financial data doesn’t get leaked, you don’t lose all your money; to ensure that your consumer data is protected from companies; but not necessarily in other contexts where privacy is seen as an intrinsic right, as a civil right of sorts, where it’s also about an individual’s protection from government surveillance. That type of protection is not caught up in conversations about privacy in China as much.

Lucas Perry: Right, so there are going to be implicitly different understandings about some of these principles that we’ll have to navigate. And again, you brought up privacy as something –– and this has been something people have been paying more attention to, as there has been kind of this hype and maybe a little bit of hysteria over China’s social credit system, and plenty of misunderstanding around that.

Jeffrey Ding: Yeah, and this ties into a lot of what I’ve been thinking about lately, which is there certainly are differences, but we don’t want to exaggerate them. I think oftentimes analysis of China happens in a vacuum where it’s like, “Oh, this only happens in this mysterious far off land we call China and it doesn’t happen anywhere else.” Shoshana Zuboff has this great book on surveillance capitalism that shows how the violation of privacy is pretty extensive on the US side, not only from big companies but also from the national security apparatus.

So I think a similar phenomenon is taking place with the social credit system. Jeremy Daum at Yale Law’s China Center has put it really nicely where he says that, “We often project our worst fears about technology in AI onto what’s happening in China, and we look through a glass darkly and we unleash all of our anxieties on what’s happening onto China without reflecting on what’s happening here in the US, what’s happening here in the UK.”

Lucas Perry: Right. I would guess that generally in human psychology it seems easier to see the evil in the other rather than in the self.

Jeffrey Ding: Yeah, that’s a little bit out of range for me, but I’m sure there’s studies on that.

Lucas Perry: Yeah. All right, so let’s get in here now to your work, Deciphering China’s AI Dream. This is a work that you published in 2018, and in it you divide things up into four different sections. First you discuss context, then components, then capabilities, and then consequences, all in relation to AI in China. Would you like to just sort of unpack that structure?

Jeffrey Ding: Yeah, this was very much just a descriptive paper. I was just starting out researching this area and I just had a bunch of basic questions. So question number one for context: what is the background behind China’s AI Strategy? How does it compare to other countries’ plans? How does it compare to its own past science and technology plans? The second question was, what are they doing in terms of pushing forward drivers of AI Development? So that’s the component section. The third question is, how well are they doing? It’s about assessing China’s AI capabilities. And then the fourth is, so what’s it all mean? Why does it matter? And that’s where I talk about the consequences and the potential implications of China’s AI ambitions for issues related to AI Safety, some of the AGI issues we’ve been talking about, national security, economic development, and social governance.

Lucas Perry: So let’s go ahead and move sequentially through these. We’ve already here discussed a bit of context about what’s going on in China in terms of at least the intentional stance and the development of some principles. Are there any other key facets or areas here that you’d like to add about China’s AI strategy in terms of its past science and technology? Just to paint a picture for our listeners.

Jeffrey Ding: Yeah, definitely. I think two past critical technologies that you could look at are the plans to increase China’s space industry, aerospace sector; and then also biotechnology. So in each of these other areas there was also a national level strategic plan; An agency or an office was set up to manage this national plan; Substantial funding was dedicated. With the New Generation AI Plan, there was also a sort of implementation office set up across a bunch of the different departments tasked with implementing the plan.

AI was also elevated to the level of a national strategic technology. And so what’s different between these two phases? Because it’s debatable how successful the space plan and the biotech plans have been. What’s different with AI is you already had big tech giants who are pursuing AI capabilities and have the resources to shift a lot of their investments toward the AI space, independent of government funding mechanisms: companies like Baidu, Tencent, Alibaba, even startups that have really risen like SenseTime. And you see that reflected in the type of model.

It’s no longer the traditional national champion model where the government almost builds a company from the ground up, maybe with the help of international financiers and investors. Now it’s a national team model where they ask for the support of these leading tech giants, but it’s not like these tech giants are reliant on the government for subsidies or funding to survive. They are already flourishing firms that have an international presence.

The other bit of context I would just add is that if you look at the New Generation Plan, there are a lot of terms that are related to manufacturing. And I mentioned in Deciphering China’s AI Dream how there are a lot of connections and callbacks to manufacturing plans. And I think this is key because one aspect of China’s push for AI is that they want to escape the middle-income trap and kind of get to those higher levels of value-add in the manufacturing chain. So I want to stress that as a key point of context.

Lucas Perry: So the framing here is the Chinese government is trying to enable companies which already exist and already are successful. And this stands in contrast to the US and the UK where it seems like the government isn’t even part of a teamwork effort.

Jeffrey Ding: Yeah. So maybe a good comparison would be how technical standards develop, which is an emphasis of not only this Deciphering China’s AI Dream paper but a lot of later work. So I’m talking about technical standards, like how do you measure the accuracy of facial recognition systems and who gets to set those measures, or product safety standards for different AI applications. And in many other countries, including the US, the process for that is much more decentralized. It’s largely done through industry alliances. There is NIST, which is a body under the Department of Commerce in the US that helps coordinate that to some extent, but not nearly as much as what happens in China with the Standardization Administration of China (SAC), I believe. There, it’s much more of a centralized effort to create technical standards. And there are pros and cons to both.

With the more decentralized approach, you minimize the risks of technological lock-in by setting standards too early, and you let firms have a little bit more freedom, and competition as well. Whereas having a more centralized top-down effort might lead to earlier harmonization on standards and let you leverage economies of scale when you just have more interoperable protocols. That could help with data sharing, help with creating a stable test bed for different firms to compete and measure the stuff I was talking about earlier, like algorithmic accuracy. So there are pros and cons to the two different approaches. But I think, yeah, that does flesh out how the relationship between firms and the government differs a little bit, at least in the context of standards setting.

Lucas Perry: So on top of standards setting, would you say China’s government plays more of a central hand in the regulation as well?

Jeffrey Ding: That’s a good question. It probably differs in terms of what area of regulation. So I think in some cases there’s a willingness to let companies experiment and then put down regulations afterward. So this is the classic example with mobile payments: There was definitely a gray space as to how these platforms like Alipay, WeChat Pay were essentially pushing into a gray area of law in terms of who could handle this much money that’s traditionally in the hands of the banks. Instead of clamping down on it right away, the Chinese government kind of let that play itself out, and then once these mobile pay platforms got big enough that they’re holding so much capital and have so much influence on the monetary stock, they then started drafting regulations for them to be almost treated as banks. So that’s an example of where it’s more of a hands-off approach.

In AI, folks have said that the US and China are probably closer in terms of their approach to regulation, which is much more hands-off than the EU. And I think that’s just a product partly of the structural differences in the AI ecosystem. The EU has very few big internet giants and AI algorithm firms, so they have more of an incentive to regulate other countries’ big tech giants and AI firms.

Lucas Perry: So two questions are coming up. One is: is there sufficiently more unity and coordination in the Chinese government such that when standards and regulations, or decisions surrounding AI, need to be implemented, they’re able to move, say, much quicker than the United States government? And the second thing was, I believe you mentioned that the Chinese government is also trying to find ways of using government money to buy up shares in these companies and gain decision-making power.

Jeffrey Ding: Yeah, I’ll start with the latter. The reference is to the establishment of special management shares: so these would be almost symbolic, less than 1% shares in a company so that they could maybe get a seat on the board –– or another vehicle is through the establishment of party committees within companies, so there’s always a tie to party leadership. I don’t have that much more insight into how these work. I think probably it’s fair to say that the day-to-day and long-term planning decisions of a lot of these companies are mostly just driven by what their leadership wants, not necessarily what the party leaders want, because it’s just very hard to micromanage these billion dollar giants.

And that was part of a lot of what was happening with the reform of the state-owned enterprise sector, where, I think it was SASAC –– there are a lot of acronyms –– but this was the body in control of state-owned enterprises, and they significantly cut down the number of enterprises that they directly oversee and sort of focused on the big ones, like the big banks or the big oil companies.

To your first point on how smooth policy enforcement is, this is not something I’ve studied that carefully. I think to some extent there’s more variability in terms of what the government does. So I read somewhere that if you look at the government relations departments of Chinese big tech companies versus US big tech companies, there’s just a lot more on the Chinese side –– although that might be changing with recent developments in the US. Two cases I’m thinking of right now are the Chinese government worrying about addictive games and then issuing the ban against some games including Tencent’s PUBG, which has wrecked Tencent’s game revenues and was really hurtful for their stock value.

So that’s something that would be very hard for the US government to be like, “Hey, this game is banned.” At the same time, there’s a lot of messiness with this, which is why I’m pontificating and equivocating and not really giving you a stable answer, because local governments don’t implement things that well. There’s a lot of local-center tension. And especially with technical stuff –– this is the case in the US as well –– there’s just not as much technical talent in the government. So with a lot of these technical privacy issues, it’s very hard to develop good regulations if you don’t actually understand the tech. So what they’ve been trying to do is audit the privacy policies of different social media and tech companies, and they started with 10 of the biggest. So I think it’s very much a developing process in both China and the US.

Lucas Perry: So you’re saying that the Chinese government, like the US, lacks much scientific or technical expertise? I had some sort of idea in my head that many of the Chinese mayors or other political figures actually have engineering degrees or degrees in science.

Jeffrey Ding: That’s definitely true. But I mean, by technical expertise I mean something like what the US government did with the digital service corps, where they’re getting people who have worked in the leading edge tech firms to then work for the government. That type of stuff would be useful in China.

Lucas Perry: So let’s move on to the second part, discussing components. And here you relate the key features of China’s AI strategy to the drivers of AI development, and here the drivers of AI development you say are hardware in the form of chips for training and executing AI algorithms, data as an input for AI Algorithms, research and algorithm development –– so actual AI researchers working on the architectures and systems through which the data will be put, and then the commercial AI ecosystems, which I suppose support and feed these first three things. What can you say about the state of these components in China and how it affects China’s AI strategy?

Jeffrey Ding: I think the main thing that I want to emphasize here is that a lot of this is the Chinese government trying to fill in some of the gaps; a lot of this is about enabling the people and firms that are already doing the work. One of the gaps is that private firms tend to under-invest in basic research, or will under-invest in broader education, because they don’t get to capture all those gains. So the government tries to support not only AI as a national level discipline but also to construct AI institutes and help fund talent programs to bring back the leading researchers from overseas. So that’s one part of it.

The second part of it, which I did not talk about that much in the report in this section but have recently researched more and more, is that where the government is more actively driving things is when they are the final end client. So this is definitely the case in the surveillance industry space: provincial-level public security bureaus are working with companies across hardware, data, research and development, and the whole security systems integration process to develop more advanced high tech surveillance systems.

Lucas Perry: Expanding here, there’s also this way of understanding Chinese AI strategy as it relates to previous technologies and how it’s similar or different. Ways in which it’s similar involve a strong degree of state support and intervention, transfer of both technology and talent, and investment in long-term whole-of-society measures; I’m quoting you here.

Jeffrey Ding: Yeah.

Lucas Perry: Furthermore, you state that China is adopting a catch-up approach in the hardware necessary to train and execute AI algorithms. This points towards an asymmetry: most of the chip manufacturers are not in China, and they have to buy chips from Nvidia. And then you go on to mention how access to large quantities of data is an important driver for AI systems, and that China’s data protectionism favors Chinese AI companies in accessing data from China’s large domestic market, but it also detracts from cross-border pooling of data.

Jeffrey Ding: Yeah, and just to expand on that point, there’s been good research out of folks at DigiChina, which is a New America Institute, that looks at the cybersecurity law –– and we’re still figuring out how that’s going to be implemented completely, but the original draft would have prevented companies from taking data that was collected inside of China and taking it outside of China.

And actually these folks at DigiChina point out how some of the major backlash to this law didn’t just come from US multinational corporations but also Chinese multinationals. That aspect of data protectionism illustrates a key trade-off: in one sense, countries and national security players are valuing personal data almost as a national security asset, for the risk of blackmail or something. So this is the whole Grindr case in the US, where I think Grindr was encouraged or strongly encouraged by the US government to find a non-Chinese owner. So on one hand you want to protect personal information, but on the other hand, free data flows are critical to spurring gains in innovation as well for some of these larger companies.

Lucas Perry: Is there an interest here to be able to sell their data to other companies abroad? Is that why they’re against this data protectionism in China?

Jeffrey Ding: I don’t know that much about this particular case, but I think Alibaba and Tencent have labs all around the world. So they might want to collate their data together, so they were worried that the cybersecurity law would affect that.

Lucas Perry: And just highlighting here for the listeners that access to large amounts of high quality data is extremely important for efficaciously training models and machine learning systems. Data is a new, very valuable resource. And so you go on here to say, I’m quoting you again, “China’s also actively recruiting and cultivating talented researchers to develop AI algorithms. The State Council’s AI plan outlines a two-pronged gathering and training approach.” This seems to be very important, but it also seems from your report that China is largely losing AI talent to America. What can you say about this?

Jeffrey Ding: Often the biggest bottleneck cited to AI development is lack of technical talent. That gap will eventually be filled just based on pure operations in the market, but in the meantime there has been a focus on AI talent, whether that’s through some of these national talent programs, or it also happens through things like local governments offering tax breaks for companies who may have headquarters around the world.

For example, Jingchi which is an autonomous driving startup, they had I think their main base in California or one of their main bases in California; But then Shenzhen or Guangzhou, I’m not sure which local government it was, they gave them basically free office space to move one of their bases back to China and that brings a lot of talented people back. And you’re right, a lot of the best and brightest do go to US companies as well, and one of the key channels for recruiting Chinese students are big firms setting up offshore research and development labs like Microsoft Research Asia in Beijing.

And then the third thing I’ll point out, and this is something I’ve noticed recently when I was doing translations from science and tech media platforms that are looking at the talent space in particular: They’ve pointed out that there’s sometimes a tension between the gathering and the training planks. So there’ve been complaints from domestic Chinese researchers, so maybe you have two super talented PhD students. One decides to stay in China, the other decides to go abroad for their post-doc. And oftentimes the talent plans –– the recruiting, gathering plank of this talent policy –– will then favor the person who went abroad for the post-doc experience over the person who stayed in China, and they might be just as good. So then that actually creates an incentive for more people to go abroad. There’s been good research that a lot of the best and brightest ended up staying abroad; The stay rates, especially in the US for Chinese PhD students in computer science fields, are shockingly high.

Lucas Perry: What can you say about Chinese PhD student anxieties with regards to leaving the United States to go visit family in China and come back? I’ve heard that there may be anxieties about not being let back in given that their research has focused on AI and that there’s been increasing US suspicions of spying or whatever.

Jeffrey Ding: I don’t know how much of it is a recent development but I think it’s just when applying for different stages of the path to permanent residency –– whether it’s applying for the H-1B visa or if you’re in the green card pipeline –– I’ve heard just secondhand that they avoid traveling abroad or going back to visit family just to kind of show commitment that they’re residing here in the US. So I don’t know how much of that is recent. My dad actually, he started out as a PhD student in math at University of Iowa before switching to computer science and I remember we had a death in the family and he couldn’t go back because it was so early on in his stay. So I’m sure it’s a conflicted situation for a lot of Chinese international students in the US.

Lucas Perry: So moving along here and ending this component section, you also say here –– and this kind of goes back to what we were discussing earlier about government guidance funds –– the Chinese government is also starting to take a more active role in funding AI ventures, helping to grow the fourth driver of AI development, which again is the commercial AI ecosystems, which support and are the context for hardware, data, and research on algorithm development. And so the Chinese government is disbursing funds through what are called Government Guidance Funds or GGFs, set up by local governments and state-owned companies. And the government has invested more than a billion US dollars in domestic startups. This seems to be in clear contrast with how America functions on this, with much of the investment shifting towards healthcare and AI as the priority areas in the last two years.

Jeffrey Ding: Right, yeah. So the GGFs are an interesting funding vehicle. The China Money Network, which has I think the best English language coverage of these vehicles, says that they may be history’s greatest experiment in using state capital to reshape a nation’s economy. These are essentially public-private partnerships, PPPs, which do exist across the world, including in the US. And the idea is basically the state seeds and anchors these investment vehicles and then they partner with private capital to also invest in startups, companies that the government thinks either are supporting a particular policy initiative or are good for overall development.

A lot of this is hard to decipher in terms of what the impact has been so far, because publicly available information is relatively scarce. I mentioned in my report that these funds haven’t had a successful exit yet, which means that maybe they just need more time. I think there have also been some complaints that the big VCs –– whether it’s Chinese VCs or even international VCs that have a Chinese arm –– much prefer to just go it on their own rather than be tied to all the strings and potential regulations that come with working with the government. So I think it’s definitely a case of time will tell, and also this is a very fertile research area that I know some people are looking into. So be on the lookout for more conclusive findings about these GGFs, especially how they relate to the emerging technologies.

Lucas Perry: All right. So we’re getting to your capabilities section, which assesses the current state of China’s AI capabilities across the four drivers of AI development. Here you construct an AI Potential Index, which is an index of a country’s potential, based on these four drivers, to create successful AI products. So based on your research, you give China an AI Potential Index score of 17, which is about half of the US’s AI Potential Index score of 33. And so you state here that what is essential to draw from this finding is the relative scale, or at least the proportionality, between China and the US. So the conclusion we can draw from this is that China trails the US in every driver except for access to data, and that on all of these dimensions China is about half as capable as the US.

Jeffrey Ding: Yes, so the AIPI, the AI Potential Index, was definitely just meant as a first cut at developing a measure with which we can make comparative claims. I think at the time, and even now, we just throw around things like, “Who is ahead in AI?” I was reading this recent Defense One article that was like, “China’s the world leader in GANs,” G-A-Ns, Generative Adversarial Networks. That’s just not even a claim that is coherent. Are you the leader at developing the talent who is going to make advancements in GANs? Are you the leader at applying and deploying GANs in the military field? Are you the leader in producing the most publications related to GANs?

I think that’s what was frustrating me about the conversation and net assessment of different countries’ AI capabilities, so that’s why I tried to develop a more systematic framework which looked at the different drivers, and it was basically looking at the potential of a country’s AI capabilities based on its marks across these drivers.

Since then, probably the main thing that I’ve done to update this was in my written testimony before the US-China Economic and Security Review Commission, where I switch up a little bit how I evaluate the current AI capabilities of China and the US. Basically there’s this very fuzzy concept of national AI capabilities that we throw around, and I slice it up into three cross-sections. The first is, let’s look at the scientific and technological inputs and outputs different countries are putting into AI. So that’s: how many publications are coming out of Europe versus China versus the US? How many outputs in the sense of publications, or inputs in the sense of R&D investments? So let’s take a look at that.

The second slice is, let’s not just say AI. I think every time you say AI it’s always better to specify subtypes, or at least in the second slice I look at different layers of the AI value chain: foundational layers, technological layers, and the application layer. So, for example, foundation layers may be who is leading in developing the AI open source software that serves as the technological backbone for a lot of these AI applications and technologies? 

And then the third slice that I take is different sub domains of AI –– so computer vision, predictive intelligence, natural language processing, et cetera. And basically my conclusion: I throw a bunch of statistics in this written testimony out there –– some of it draws from this AI potential index that I put out last year –– and my conclusion is that China is not poised to overtake the US in the technology domain of AI; Rather the US maintains structural advantages in the quality of S and T inputs and outputs, the fundamental layers of the AI value chain, and key sub domains of AI.

So yeah, this stuff changes really fast too. I think a lot of people are trying to put together more systematic ways of measuring these things. So Jack Clark at OpenAI; projects like the AI Index out of Stanford University; Matt Sheehan recently put out a really good piece for MacroPolo on developing sort of a five-dimensional framework for understanding data. So in this AIPI first cut, my data indicator is just a very raw measure of who has more mobile phone users, but that obviously doesn’t matter for who’s going to lead in autonomous vehicles. So having a finer grained understanding of how to measure different drivers will definitely help this field going forward.

Lucas Perry: What can you say about symmetries or asymmetries in terms of sub-fields in AI research like GANs or computer vision or any number of different sub-fields? Can we expect very strong specialties to develop in one country rather than another, or there to be lasting asymmetries in this space, or does research publication subvert this to some extent?

Jeffrey Ding: I think natural language processing is probably the best example because everyone says NLP, but then you just have that abstract word and you never dive into, “Oh wait, China might have a comparative advantage in Chinese language data processing, speech recognition, knowledge mapping,” which makes sense. There is just more of an incentive for Chinese companies to put out huge open source repositories to train automatic speech recognition.

So there might be some advantage in Chinese language data processing, although Microsoft Research Asia has very strong NLP capabilities as well. Facial recognition is maybe another area of comparative advantage: I think in my testimony I cite that China published 900 patents in this subdomain in 2017; In that same year less than 150 patents related to facial recognition were filed in the US. So that could be partly just because there’s so much more of a fervor for surveillance applications, but in other domains, such as the larger scale business applications, the US probably possesses a decisive advantage. So autonomous vehicles are the best example of that: In my opinion, Google’s Waymo and GM’s Cruise are lapping the field.

And then finally in my written testimony I also try to look at military applications, and I find one metric that puts the US as having more than seven times as many military patents filed with the terms “autonomous” or “unmanned” in the patent abstract in the years 2003 to 2015. So yeah, that’s one of the research streams I’m really interested in: how can we have more fine-grained metrics that actually put China’s AI development into context, so that we can have a more measured understanding of it.

Lucas Perry: All right, so we’ve gone into length now providing a descriptive account of China and the United States and key descriptive insights of your research. Moving into consequences now, I’ll just state some of these insights which you bring to light in your paper and then maybe you can expand on them a bit.

Jeffrey Ding: Sure.

Lucas Perry: You discuss the potential implications of China’s AI dream for issues of AI safety and ethics, national security, economic development, and social governance. The thinking here is becoming more diversified and substantive, though you claim it’s also too early to form firm conclusions about the long-term trajectory of China’s AI development; This is probably also true of any other country, really. You go on to conclude that a group of Chinese actors is increasingly engaged with issues of AI safety and ethics. 

A new book has been authored by Tencent’s Research Institute, and it includes a chapter in which the authors discuss the Asilomar Principles in detail and call for  strong regulations and controlling spells for AI. There’s also this conclusion that military applications of AI could provide a decisive strategic advantage in international security. The degree to which China’s approach to military AI represents a revolution in military affairs is an important question to study, to see how strategic advantages between the United States and China continue to change. You continue by elucidating how the economic benefit is the primary and immediate driving force behind China’s development of AI –– and again, I think you highlighted this sort of manufacturing perspective on this.

And finally, China’s adoption of AI technologies could also have implications for its mode of social governance. Referring to the State Council’s AI plan, you state, “AI will play an irreplaceable role in maintaining social stability, an aim reflected in local level integrations of AI across a broad range of public services, including judicial services, medical care, and public security.” So given these sort of insights that you’ve come to and consequences of this descriptive picture we’ve painted about China and AI, is there anything else you’d like to add here?

Jeffrey Ding: Yeah, I think as you were laying out those four categories of consequences, I was just thinking this is what makes this area so exciting to study, because if you think about it, each of those four consequences maps onto a research field: AI ethics and safety, which ties in with benevolent AI efforts, stuff that FLI is doing, and the broader technology studies, critical technology studies, and technology ethics fields; then in the social governance space, AI as a tool of social control and the social aftershocks of AI’s economic implications –– you have this entire field of democracy studies, or studies of technology and authoritarianism; with the economic benefits, you have this entire field of innovation studies: how do we understand the productivity benefits of general purpose technologies? And of course with AI as a revolution in military affairs, you have this whole field of security studies that is trying to understand: what are the implications of new emerging technologies for national security?

So it’s easy to start delineating these into their separate containers. I think what’s hard, especially for those of us who are really concerned about that first field, AI ethics and safety, and the risks of AGI arms races, is that a lot of other people are really, really concerned about those other three fields. And how do we tie in concepts from those fields? How do we take from those fields, learn from those fields, shape the language that we’re using to also be in conversation with those fields –– and then also see how those fields may actually be in conflict with some of what our goals are? And then how do we navigate those conflicts? How do we prioritize different things over others? It’s an exciting but daunting prospect ahead.

Lucas Perry: If you’re listening to this and are interested in becoming an AI researcher in terms of the China landscape, we need you. There’s a lot of great and open research questions here to work on.

Jeffrey Ding: For sure. For sure.

Lucas Perry: So I’ve extracted some insights from previous podcasts you did –– I can leave a link for that on the page for this podcast –– so I just want to kind of rapid fire these as points that I thought were interesting that we may or may not have covered here. You point out a language asymmetry: The best Chinese AI researchers read English and Chinese, whereas western researchers generally cannot do this. You have a newsletter called ChinAI, with one A; Your newsletter attempts to correct for this, as you translate important Chinese tech-related things into English. I suggest everyone follow that if you’re interested in continuing to track China and AI. There is more international cooperation on research at international conferences –– this is a general trend that you point out: Some top Chinese AI conferences are English-only. Furthermore, I believe that you claim that the top 10% of AI research is still happening in America and the UK.

Another point which I think you’ve brought up is that China is behind on military AI uses; I’m interested to see if you can expand a little bit more on that. China and AI safety and superintelligence is also something I’d like to hear a little bit more about, because on this podcast we often take the lens of long-term AI issues and AGI and superintelligence. So I think you mentioned that the Nick Bostrom of China is Professor, correct me if I get this wrong, Zhao Tingyang. And also I’m curious here if you might be able to expand on how large or serious this China superintelligence FLI/FHI vibe is and what the implications of this are, and if there are any orgs in China that are explicitly focused on this. I’m sorry if this is a silly question, but are there nonprofits in China in the same way that there are in the US? How does that function? Is China on the brink of having an FHI or FLI or MIRI or anything like this?

Jeffrey Ding: So a lot to untangle there, and all really good questions. First, just to clarify: yeah, there are definitely nonprofits, non-governmental organizations. In recent years there has been some pressure on international non-governmental and nonprofit organizations, but there are definitely nonprofits. One of the open source NLP initiatives I mentioned earlier, the Chinese-language corpus, was put together by a nonprofit online organization called the AIShell Foundation, and they put together AIShell-1 and AIShell-2, which are the largest open source speech corpora available for Mandarin speech recognition.

I haven’t really followed up on Zhao Tingyang. He’s a philosopher at the Chinese Academy of Social Sciences. The sort of “Nick Bostrom of China” label was more of a newsletter headline to get people to read, but he does devote a lot of time and thinking to the long-term risks of AI. Another professor at Nanjing University, Zhi-Hua Zhou, has published articles about the need to not even touch some of what he calls strong AI. These were published in a pretty influential publication outlet run by the China Computer Federation, which brings together a lot of the big-name computer scientists. So there are definitely conversations about this happening. Whether there is an FHI or FLI equivalent, let’s say probably not, at least not yet.

Peking University may be developing something in this space. The Berggruen Institute is also, I think, looking at some related issues. There’s probably a lot of stuff happening in Hong Kong as well; Maybe we just haven’t looked hard enough. I think the biggest difference is there’s definitely not something on the level of a DeepMind or OpenAI, because even the Chinese firms with the best general AI capabilities don’t compare –– DeepMind and OpenAI are almost like these unique entities where profits and stocks don’t matter.

So yeah, definitely some differences, but honestly I updated significantly once I started reading more, and nobody had really looked at this Zhi-Hua Zhou essay before we went looking and found it. So maybe there are a lot of these organizations and institutions out there but we just need to look harder.

Lucas Perry: So on this point of there not being OpenAI or DeepMind equivalents, are there any research organizations or departments explicitly focused on the mission of creating artificial general intelligence or superintelligence: safely scalable machine learning systems that could go from now until infinity? Or is this just more like scattered researchers?

Jeffrey Ding: I think it depends on how you define an AGI project. What you just said is probably a good tight definition. I know Seth Baum has done some research tracking AGI projects, and he says that there are six in China. I would say probably the only ones that come close are, I guess, Tencent, which says developing artificial general intelligence is one of its mission streams, and Horizon Robotics, which is actually a chip company, but they also state it as one of their objectives. It depends also on how much you think work on neuroscience-related pathways into AGI counts or not. So there are probably some Chinese Academy of Sciences labs working on whole brain emulation or more brain-inspired approaches to AGI, but definitely not anywhere near the level of DeepMind or OpenAI.

Lucas Perry: All right. So there are some myths in table one of your paper which you demystify. Three of these are: China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; And there is little to no discussion of issues of AI ethics and safety in China. And then lastly, maybe I’d add, if you want to add to it, the myth that there is an AI arms race between the US and China to begin with.

Jeffrey Ding: Yeah, I think that’s a good addition. I think we have to be careful about which historical analogies and memes we choose. So “arms race” is a very specific callback to the Cold War context, where there are almost these discrete types of missiles that we are racing the Soviet Union on and discrete applications that we can count up; Or even going way back to what some scholars call the first industrial arms race in the military sphere, over steam-powered boats between Britain and France in the 19th century. And in all of those instances you can count up. France has four ironclads, the UK has four ironclads; They’re racing to see who can build more. I don’t think there’s anything like that. There’s not this discrete thing that we’re racing to see who can have more of. If anything, it’s about a competition to see who can absorb AI advances from abroad better, who can diffuse them throughout the economy, who can adopt them in a more sustainable way without sacrificing core values.

So that’s sort of one meme that I really want to dispel. Related to that, an assumption that often influences a lot of our discourse on this is the techno-nationalist assumption, which is this idea that technology is contained within national boundaries and that the nation state is the most important actor –– which is correct and a good assumption to have in a lot of instances. But there are also good reasons to adopt techno-globalist assumptions as well, especially given how fast technologies diffuse nowadays and also how much, underneath this national-level competition, firms from different countries are working together and making standards alliances with each other. So there’s this undercurrent of techno-globalism, where there are people flows, idea flows, and company flows happening, while the coverage and the sexy topic is always going to be national-level competition, zero-sum competition, relative gains rhetoric. So you’re trying to find a balance between those two streams.

Lucas Perry: What can you say about this sort of reflection on zero-sum games versus healthy competition, and the properties of AI and AI research? I’m seeking clarification on this secondary framing: that we can take a more international perspective on the deployment and implementation of AI research and systems rather than, as you said, this sort of techno-nationalist one.

Jeffrey Ding: Actually, this idea comes from my supervisor: Relative gains make sense if there are only two players involved, just from a pure self-interest-maximizing standpoint. But once you introduce three or more players, relative gains don’t make as much sense as optimizing for absolute gains. So maybe one way to explain this is to take the perspective of a European country –– let’s say Germany –– and you are working on an AI project with China or some other country that maybe the US is pressuring you not to work with; You’re working with Saudi Arabia or China on some project, and it’s going to benefit China 10 arbitrary points and it’s going to benefit Germany eight arbitrary points versus if you didn’t choose to cooperate at all.

So in that sense, Germany, the rational actor, would take that deal. You’re not just caring about being better than China; From a German perspective, you care about maintaining leadership in the European Union, providing health benefits to your citizens, continuing to power your economy. So in that sense you would take the deal even though China benefits a little bit more, relatively speaking. 

I think currently a lot of people in the US are locked into this mindset that the only two players that exist in the world are the US and China. And if you look at our conversation, right, oftentimes I’ve displayed that bias as well. We should probably have talked a lot more about China-EU or China-Japan cooperation in this space and networks in this space because there’s a lot happening there too. So a lot of US policy makers see this as a two-player game between the US and China. And then in that sense, if there’s some cancer research project about discovering proteins using AI that may benefit China by 10 points and benefit the US only by eight points, but it’s going to save a lot of people from cancer  –– if you only care about making everything about maintaining a lead over China, then you might not take that deal. But if you think about it from the broader landscape of it’s not just a zero sum competition between US and China, then your kind of evaluation of those different point structures and what you think is rational will change.

Lucas Perry: So as there’s more actors, is the idea here that you care more about absolute gains in the sense that these utility points or whatever can be translated into decisive strategic advantages like military advantages?

Jeffrey Ding: Yeah, I think that’s part of it. What I was thinking with that example is basically: if you as Germany don’t choose to cooperate with Saudi Arabia or work on this joint research project with China, then the UK or some other country is just going to swoop in. And that possibility doesn’t exist in the world where you’re just thinking about two players. There’s a lot of different ways to fit these sorts of formal models, but that’s probably the most simplistic way of explaining it.
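To make the payoff logic in this example concrete, here is a minimal sketch in Python using the hypothetical point values from the Germany/China scenario. The numbers, and the assumption that a third party like the UK could capture the same eight points, are illustrative only, not figures from Jeffrey’s research:

    # Hypothetical payoffs, in arbitrary points, for the Germany/China example.
    cooperate = {"Germany": 8, "China": 10}          # both sides gain from the joint project
    refuse_two_player = {"Germany": 0, "China": 0}   # in a strictly two-player world, no deal means nobody gains

    # Relative-gains view (two players): compare the gap between you and your rival.
    gap_if_cooperate = cooperate["Germany"] - cooperate["China"]                    # -2: China pulls ahead
    gap_if_refuse = refuse_two_player["Germany"] - refuse_two_player["China"]       #  0: nobody gains
    # On this view, refusing can look "better", even though Germany forgoes 8 points.

    # Absolute-gains view (three or more players): if Germany refuses, a third party
    # (say, the UK) can take its place, so China gains anyway and Germany is simply worse off.
    refuse_three_player = {"Germany": 0, "China": 10, "UK": 8}

    print("Relative gap, cooperate vs. refuse:", gap_if_cooperate, gap_if_refuse)
    print("Germany's payoff if it refuses in a multi-player world:",
          refuse_three_player["Germany"], "versus", cooperate["Germany"], "if it cooperates")

The point is simply that the option of denying the rival any gain only exists when there is no other player ready to step in; once there is, the absolute comparison (eight points versus zero) is the one that matters.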

Lucas Perry: Okay, cool. So you’ve spoken a bit here on important myths that we need to dispel or memes that we need to combat. And recently Peter Thiel has been on a bunch of conservative platforms, and he also wrote an op-ed, basically fanning the flames of AGI as a military weapon, AI as a path to superintelligence and, “Google campuses have lots of Chinese people on them who may be spies,” and that Google is actively helping China with AI military technology. In terms of bad memes and myths to combat, what are your thoughts here?

Jeffrey Ding: There’s just a lot that Thiel gets wrong. I’m mostly just confused, because he is one of the original founders of OpenAI and he’s funded other institutions that are really concerned about AGI safety and race dynamics –– and then in the middle of this piece, he first says AI is a military technology, then he goes back to saying AI is dual use, and then he says this ambiguity is “strangely missing from the narrative that pits a monolithic AI against all of humanity.” He, of anyone, should know what these conversations about the risks of AGI actually involve, so why attack this straw man in the form of a Terminator AI meme? Especially when you’re funding a lot of the organizations that are worried about the risks of AGI for all of humanity.

The other main thing that’s really problematic is that if you’re concerned about the US military advantage, that advantage more than ever is rooted in our innovation advantage. It’s not about spin-off from military innovation to civilian innovation, which was the case in the days of US tech competition against Japan. It’s more a case of spin-on, where innovations are happening in the commercial sector that are undergirding the US military advantage.

And this idea of painting Google as anti-American for setting up labs in China is so counterproductive. There are independent Google developer conferences all across China just because so many Chinese programmers want to use Google tools like TensorFlow. It goes back to the fundamental AI open source software I was talking about earlier that lets Google expand its talent pool: People want to work on Google products; They’re more used to the framework of Google tools to build all these products. Google’s not doing this out of charity to help the Chinese military. They’re doing this because the US has a flawed high-skilled immigration system, so they need to go to other countries to get talent. 

Also, the other thing about the piece is that he cites no empirical research on any of these fronts, when there’s this whole globalization-of-innovation literature that backs up empirically a lot of what I’m saying. And I’ve done my own empirical research on Microsoft Research Asia, which, as we’ve mentioned, is Microsoft’s second biggest lab overall and is based in Beijing. I’ve tracked their PhD Fellowship Program: It basically gives people in Chinese PhD programs a full scholarship, and they do an internship at Microsoft Research Asia for one of their summers. We then tracked their career trajectories, and a lot of them end up coming to the US or working for Microsoft Research Asia in Beijing. And the ones that come to the US don’t just go to Microsoft: They go to Snapchat or Facebook or other companies. And it’s not just about the people: As I mentioned earlier, we have this innovation-centrism about who produces the technology first, but oftentimes it’s about who diffuses and adopts the technology first. And we’re not always going to be the first on the scene, so we have to be able to adopt and diffuse technologies that are invented first in other areas. And these overseas labs are some of our best portals into understanding what’s happening in those other areas. If we lose them, it’s another form of asymmetry, because Chinese AI companies are going abroad and expanding.

Honestly, I’m just really confused about what the point of this piece was, and to be honest, it’s kind of sad, because this is not what Thiel researches every day. So he’s obviously picking up bits and pieces from the narrative frames that are dominating our conversation. And it actually probably reflects a structural problem: we’ve allowed the discourse to fill with so many of these bad, problematic memes, and we need more people calling them out actively, having the heart-to-heart conversations behind the scenes to get people to change their minds, or having productive, constructive conversations about these issues.

And the last thing I’ll point out here is there’s this zombie Cold War mentality that still lingers today. I think the historian Walter McDougall was really great in calling this out, where he talks about how we paint this other, this enemy, and we use it to justify sacrifices in human values to drive society to its fullest technological potential. And that often comes with sacrificing human values like privacy, equality, and freedom of speech. I don’t want us to compete with China over who can build better tools to censor, repress, and surveil dissidents and minority groups, right? Let’s see who can build the better, I don’t know, industrial internet of things, or build better privacy-preserving algorithms that are going to sustain a more trustworthy AI ecosystem.

Lucas Perry: Awesome. So just moving along here as we’re making it to the end of our conversation: What are the updates or major changes you’ve had since you wrote Deciphering China’s AI Dream, now that it has been a year?

Jeffrey Ding: Yeah, I mentioned some of the updates in the capability section. The consequences, I mean I think those are still the four main big issues, all of them tied to four different literature bases. The biggest change would probably be in the component section. I think when I started out, I was pretty new in this field, I was reading a lot of literature from the China watching community and also a lot from Chinese comparative politics or articles about China, and so I focused a lot on government policies. And while I think the party and the government are definitely major players, I think I probably overemphasized the importance of government policies versus what is happening at the local level.

So if I were to go back and rewrite it, I would’ve looked a lot more at what is happening at the local level and given more examples of AI firms. iFlytek, I think, is a very interesting, under-covered firm: they are setting up research institutes with a university in Chung Cheng, very similar to the industry-academia style collaborations in the US, basically to ensure that they’re able to train the next generation of talent. They have relatively close ties to the state as well; I think controlling shares, or a large percentage of shares, are owned by state-owned vehicles. So I probably would have gone back and looked at some of these more under-covered firms and localities and looked at what they were doing, rather than just looking at the rhetoric coming from the central government.

Lucas Perry: Okay. What does it mean for there to be healthy competition between the United States and China? What is an ideal AI research and political situation? What are the ideal properties of the relations the US and China can have on the path to superintelligence?

Jeffrey Ding: Yeah.

Lucas Perry: Solve AI Governance for me, Jeff!

Jeffrey Ding: If I could answer that question, I think I could probably retire or something. I don’t know.

Lucas Perry: Well, we’d still have to figure out how to implement the ideal governance solutions.

Jeffrey Ding: Yeah. I think one starting point is that, on the way to more advanced AI systems, we have to stop looking at AI as if it’s this completely special area with no analogs. Even though there are unique aspects of AI –– like autonomous intelligent systems, the possibility of the product surpassing human-level intelligence, or the process surpassing human-level intelligence –– we can learn a lot from past general purpose technologies like steam, electricity, and the diesel engine. And we can learn a lot from competition in past strategic industries, like chips and steel.

So I think probably one thing that we can distill from some of this literature is that there are some aspects of AI development that are more likely to lead to race dynamics than others. One cut you could take is industries where it’s likely that there are only going to be two, three, four, or five major players –– so it might be the case that the capital costs, the start-up costs, and the infrastructure costs of autonomous vehicles mean that there are going to be only one or two players across the world. And then, hey, if you’re a national government that’s thinking strategically, you might really want to have a player in that space, so that might incentivize more competition. Whereas in other fields, maybe there’s just going to be a lot more competition, or less need for relative-gains, zero-sum thinking. So something like neural machine translation could be a case of something that almost becomes a commodity.

So then there are things we can think about in those fields where there’s only going to be four or five players or three or four players. Can we maybe balance it out so that at least one is from the two major powers or is the better approach to, I don’t know, enact global competition, global antitrust policy to kind of ensure that there’s always going to be a bunch of different players from a bunch of different countries? So those are some of the things that come to mind that I’m thinking about, but yeah, this is definitely something where I claim zero credibility relative to others who are thinking about it.

Lucas Perry: Right. Well unclear anyone has very good answers here. I think my perspective, to add at least one frame on it, is that given the dual use nature of many of the technologies like computer vision and like embedded robot systems and developing autonomy and image classification –– all of these different AI specialty subsystems can be sort of put together in arbitrary ways. So in terms of autonomous weapons, FLI’s position is, it’s important to establish international standards around the appropriate and beneficial uses of these technologies.

Image classification, as people already know, can be used for discrimination or beneficial things. And the technologies can be aggregated to make anything from literal terminator swarm robots to lifesaving medical treatments. So the relation between the United States and China can be made more productive if clear standards based on the expression of the principles we enumerated earlier could be created. And given that, then we might be taking some paths towards a beneficial beautiful future of advanced AI systems.

Jeffrey Ding: Yeah, no, I like that a lot. And it fits with some of the technical standards documents I’ve been translating: I definitely think in the short term, technical standards are a good way forward, a way to solve the starter-pack type of problems before AGI. Even some Chinese white papers on AI standardization have put out the idea of ranking the intelligence level of different autonomous systems –– an autonomous car might rank higher than a smart speaker, or something like that. Even that is a nice way to keep track of the progress and the continuities in terms of intelligence trajectories in this space. So yeah, I definitely second that idea: standardization efforts and autonomous weapons regulation efforts serving as the building blocks for larger AGI safety issues.

Lucas Perry: I would definitely like to echo this starter pack point of view. There’s a lot of open questions about the architectures or ways in which we’re going to get to AGI, about how the political landscape and research landscape is going to change in time. But I think that we already have enough capabilities and questions that we should really be considering where we can be practicing and implementing the regulations and standards and principles and intentions today in 2019 that are going to lead to robustly good futures for AGI and superintelligence.

Jeffrey Ding: Yeah. Cool.

Lucas Perry: So Jeff, if people want to follow you, what is the best way to do that?

Jeffrey Ding: You can hit me up on Twitter; I’m @JJDing99. Or I put out a weekly newsletter featuring translations on AI-related issues from Chinese media and Chinese scholars: that’s the ChinAI newsletter, C-H-I-N-A-I. If you just search that, it should pop up.

Lucas Perry: Links to those will be provided in the description of wherever you might find this podcast. Jeff, thank you so much for coming on and thank you for all of your work and research and efforts in this space, for helping to create a robust and beneficial future with AI.

Jeffrey Ding: All right, Lucas. Thanks. Thanks for the opportunity. This was fun.

Lucas Perry: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield

Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Center for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems; They also discussed our species’ unique strengths and vulnerabilities — and the ways in which technology has heightened both — with respect to the changing climate.

This month’s podcast helps serve as the basis for a new podcast we’re launching later this month about the climate crisis. We’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more! If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified when the climate series launches.

We’d also like to make sure we’re covering the climate topics that are of most interest to you. If you have a couple minutes, please fill out a short survey at surveymonkey.com/r/climatepodcastsurvey, and let us know what you want to learn more about.

Topics discussed in this episode include:

  • What an existential risk is and how to classify different threats
  • Systems critical to human civilization
  • Destabilizing conditions and the global systems death spiral
  • How we’re vulnerable as a species
  • The “rungless ladder”
  • Why we can’t wait for technology to solve climate change
  • Uncertainty and how to deal with it
  • How to incentivize more creative science
  • What individuals can do

Want to get involved? CSER is hiring! Find a list of openings here.

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast. I’m your host, Ariel Conn, and I am especially excited about this month’s episode. Not only because, as always, we have two amazing guests joining us, but also because this podcast helps lay the groundwork for an upcoming series we’re releasing on climate change.

There’s a lot of debate within the existential risk community about whether the climate crisis really does pose an existential threat, or if it will just be really, really bad for humanity. But this debate exists because we don’t know enough yet about how bad the climate crisis will get nor about how humanity will react to these changes. It’s very possible that today’s predicted scenarios for the future underestimate how bad climate change could be, while also underestimating how badly humanity will respond to these changes. Yet if we can get enough people to take this threat seriously and to take real, meaningful action, then we could prevent the worst of climate change, and maybe even improve some aspects of life. 

In late August, we’ll be launching a new podcast series dedicated to climate change. I’ll be talking to climate scientists, meteorologists, AI researchers, policy experts, economists, social scientists, journalists, and more to go in depth about a vast array of climate topics. We’ll talk about the basic science behind climate change, like greenhouse gases, the carbon cycle, feedback loops, and tipping points. We’ll discuss various impacts of greenhouse gases, like increased extreme weather events, loss of biodiversity, ocean acidification, resource conflict, and the possible threat to our own continued existence. We’ll talk about the human causes of climate change and the many human solutions that need to be implemented. And so much more. If you don’t already subscribe to our podcasts on your preferred podcast platform, please consider doing so now to ensure you’ll be notified as soon as the climate series launches.

But first, today, I’m joined by two guests who suggest we should reconsider studying climate change as an existential threat. Dr. Simon Beard and Haydn Belfield are researchers at University of Cambridge’s Center for the Study of Existential Risk, or CSER. CSER is an interdisciplinary research group dedicated to the study and mitigation of risks that could lead to human extinction or a civilizational collapse. They study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists, and policy makers working to safeguard humanity. Their research focuses on four areas: biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.

Simon is a senior research associate and academic program manager; He’s a moral philosopher by training. Haydn is a research associate and academic project manager, as well as an associate fellow at the Leverhulme Center for the Future of Intelligence. His background is in politics and policy, including working for the UK Labour Party for several years. Simon and Haydn, thank you so much for joining us today.

Simon Beard: Thank you.

Haydn Belfield: Hello, thank you.

Ariel Conn: So I’ve brought you both on to talk about some work that you’re involved with, looking at studying climate change as an existential risk. But before we really get into that, I want to remind people about some of the terminology. So I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change.

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are. 

So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; A catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: Maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.

Most of these systems — be they physiological systems, the world’s ecological system, or the social, economic, technological, and cultural systems and the institutions that we build on them — have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that’s consistent with and supports our health and our continued survival, and that the institutions that we’ve developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that, basically, we’ll be able to get on with our lives.

If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that’s really hard for us to respond to.

And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can’t get them back or it’s going to be really hard. And life as we know it cannot be resumed; We’re going to have to live in a very different and very inferior world, at least from our current way of thinking.

Haydn Belfield: I think that sort of captures it really well. One thing that you could kind of visualize might be something like, imagine a really bad pandemic. 100 years ago, we had the Spanish flu pandemic that killed 100 million people — that was really bad. But it could be even worse. So imagine one tomorrow that killed a billion people. That would be one of the worst things that’s ever happened to humanity; It would be sort of a global catastrophic risk. But it might not end our story, it might not be the end of our potential. But imagine if it killed everyone, or it killed almost everyone, and it was impossible to recover: That would be an existential risk.

Ariel Conn: So, there’s — at least I’ve seen some debate about whether we want to consider climate change as falling into either a global catastrophic or existential risk category. And I want to start first with an article that, Simon, you wrote back in 2017, to consider this question. The subheading of your article is a question that I think is actually really important. And it was: how much should we care about something that is probably not going to happen? I want to ask you about that — how much should we care about something that is probably not going to happen?

Simon Beard: I think this is really important when you think about existential risk. People’s minds, they want to think about predictions, they want someone who works in existential risk to be a prophet of doom. That is the idea that we have — that you know what the future is going to be like, and it’s going to be terrible, and what you’re saying is, this is what’s going to happen. That’s not how people who work in existential risk operate. We are dealing with risks, and risks are about knowing all the possible outcomes: whether any of those are this severe long term threat, an irrecoverable loss to our species.

And it doesn’t have to be the case that you think that something is the most likely or the most probable as a potential outcome for you to get really worried about the thing that could bring that about. And even a 1% risk of one of these existential catastrophes is still completely unacceptable because of the scale of the threat, and the harm we’re talking about. And because if this happens, there is no going back; It’s not something that we can do a safe experiment with.

So when you’re dealing with risk, you have to deal with probabilities. You don’t have to be convinced that climate change is going to have these effects to place it on the same level as some of the other existential risks that people talk about — nuclear weapons, artificial intelligence, and so on — you just need to see that this is possible. We can’t exclude it based on the knowledge that we have at the moment, and it seems like a credible threat with a real chance of materializing. And it’s something that we can do something about, because ultimately the aim of all existential risk research is safety — trying to make the world a safer place and the future of humanity a more certain thing.
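As a rough, illustrative way of seeing why even a small probability can dominate a risk assessment, here is a toy expected-loss comparison in Python. The numbers are entirely made up and are not from Simon’s work; the only point is that a 1% chance of an unrecoverable outcome can outweigh a much likelier but recoverable harm once the stakes are large enough:

    # Toy numbers, chosen only for illustration.
    p_existential = 0.01          # "even a 1% risk"
    existential_loss = 1_000_000  # stand-in for everything an unrecoverable catastrophe forecloses

    p_ordinary = 0.5
    ordinary_loss = 1_000         # a large but recoverable harm

    expected_existential = p_existential * existential_loss  # 10,000
    expected_ordinary = p_ordinary * ordinary_loss           # 500

    print(expected_existential > expected_ordinary)  # True: the low-probability risk dominates
    # And unlike the recoverable case, there is no second attempt if it materializes.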

Ariel Conn: Before I get into the work that you’re doing now, I want to stick with one more question that I have about this article. I was amused when you sent me the link to it — you sort of prefaced it by saying that you think it’s rather emblematic of some of the problematic ways that we think about climate change, especially as an existential risk, and that your thinking has evolved in the last couple of years since writing this. I was hoping you could just talk a little bit about some of the problems you see with the way we’re thinking about climate change as an x-risk.

Simon Beard: I wrote this paper largely out of a realization that people wanted us to talk about climate change in the next century. And we wanted to talk about it. It’s always up there on the list of risks and threats that people bring up when you talk about existential risk. And so I thought, well, let’s get the ball rolling; Let’s review what’s out there, and the kind of predictions that people who seem to know what they’re talking about have made about this — you know, economists, climate scientists, and so on — and make this case that this suggests there is a credible threat, and we need to take this seriously. And that seemed, at the time, like a really good place to start.

But the more I thought about it afterwards, the more flawed I saw the approach as being. And it’s hard to regret a paper like that, because I’m still convinced that the risk is very real, and people need to take it seriously. But for instance, one of the things that kept on coming up is that when people make predictions about climate change as an existential risk, they’re always very vague. Why is it a risk? What’s the sort of scenarios that we worry about? Where are the danger levels? And they always want to link it to a particular temperature threshold or a particular greenhouse gas trajectory. And that just didn’t strike me as credible, that we would cross a particular temperature threshold and then that would be the end of humanity.

Because of course, a huge amount of the risk that we face depends upon how humanity responds to the changing climate, not just upon climate change itself. I think people have this idea in their mind that it’ll get so hot, everyone will fry or everyone will die of heat exhaustion. And that’s just not a credible scenario. So there were these really credible scholars, like Marty Weitzman and Ram Ramanathan, who tried to work this out and tried to predict what was going to happen. But they seemed to me to be missing a lot, and to be making very precise claims based on very vague scenarios. So we kind of said at that point, we’re going to stop doing this until we have worked out a better way of thinking about climate change as an existential threat. And we’ve been thinking a lot about this in the intervening 18 months, and that’s where the research that we’re hoping to publish soon and the desire to do this podcast really come from. So it seems to us that there are kind of three ways that people have gone about thinking about climate change as an existential risk. It’s a really hard question. We don’t really know what’s going to happen. There’s a lot of speculation involved in this.

One of the ways that people have gone about trying to respond to this has just been to speculate, just been to come up with some plausible scenario or pick a temperature number out of the air and say, “Well, that seems about right, if that were to happen that would lead to human extinction, or at least a major disruption of all of these systems that we rely upon. So what’s the risk of that happening, and then we’ll label that as the existential climate threat.” As far as we can tell, there isn’t the research to back up some of these numbers. Many of them conflict: In Ram Ramanathan’s paper he goes for five degrees; In Marty Weitzman’s paper he goes to six degrees; There’s another paper that was produced by Breakthrough where they go for four degrees. There’s kind of quite a lot of disagreement about where the danger levels lie.

And some of it’s just really bad. So there’s this prominent paper by Jem Bendell — he never got it published, but it’s been read like 150,000 times, I think — on adapting to extreme climate change. And he just picks this random scenario where the sea levels rise, a whole bunch of coastal nuclear reactors get inundated with seawater, and they go critical, and this just causes human extinction. That’s not credible in many different ways, not least that it just wouldn’t cause that much damage. It doesn’t seem credible that this slow sea level rise would have this disastrous meltdown effect — we could respond to that. What passes for scientific study and speculation didn’t seem good enough to us.

Then there were some papers which just kind of passed the whole thing by — saying, “Well, we can’t come up with a plausible scenario or a plausible threat level, but there just seem to be a lot of bad things going on, given that we know that the climate is changing and that we are responding to this in a variety of ways, probably quite inadequately.” But that doesn’t help us to prioritize efforts, or really understand the level of risk we face, or know when some more extreme measures like geoengineering become more appropriate because of the level of risk that we face.

And then there’s a final set of studies — there have been an increasing number of these; one recently came out in Vox, Anders Sandberg has done one, and Toby Ord talks about one — where people say, “Well, let’s just go for the things that we know, let’s go for the best data and the best studies.” And these usually focus on a very limited number of climate effects, the more direct impacts of things like heat exhaustion, perhaps sometimes crop failure — but only really looking at the most direct climate impacts and only where there are existing studies. And then they try to extrapolate from that, sometimes using integrated assessment models, sometimes other kinds of analysis, but usually quite straightforward linear economic analysis or epidemiological analysis.

And that also is useful. I don’t want to dismiss these papers; I think they provide very useful information for us. But there is no way that that can constitute an adequate risk assessment, given the complexity of the impacts that climate change is having and the ways in which we’re responding to it. And it’s very easy for people to read these numbers and these figures and conclude, as I think the Vox article did, that climate change isn’t an existential risk, it’s just going to kill a lot of people. Well, no, we know it will kill a lot of people, but that doesn’t answer the question about whether it is an existential threat. There are a lot of things that you’re not considering in this analysis. So given that there wasn’t really a good example that we could follow within the literature, we’ve kind of turned it on its head. And we’re now saying, maybe we need to work backwards.

Rather than trying to work forwards from the climate change we’re expecting and the effects that we think that is going to have and then whether these seem to constitute an existential threat, maybe we need to start from the other end and think about what are the conditions that could most plausibly destabilize the global civilization and the continued future of our species? And then work back from them to ask, are there plausible climate scenarios that could bring these about? And there’s already been some interesting work in this area for natural systems, and this kind of global Earth system thinking and the planetary boundaries framework, but there’s been very little work on this done at the social level.

And even less work has been done when you consider that we rely on both social and natural systems for our survival. So what we really need is some kind of approach that will integrate these two. That’s a huge research agenda. So this is how we think we’re going to proceed in trying to move beyond the limited research that we’ve got available. And now we need to go ahead and actually construct these analyses and do a lot more work in this field. And maybe we’re going to start to be able to produce a better answer.

Ariel Conn: Can you give some examples of the research that has started with this approach of working backwards?

Simon Beard: So there’s been some really interesting research coming out of the Stockholm Resilience Center dealing with natural Earth systems. They first produced this paper on planetary boundaries, where they looked at a range of, I think, nine systems — the biosphere, biogeochemical systems, the climate system, and so on — and asked, are these systems operating in what we would consider their normal functioning boundaries? That’s how they’ve operated throughout the Holocene, the last several thousand years during which human civilization has developed. Or do they show signs of transitioning to a new state of abnormal operation? Or are they in a state that’s already posing a high risk to the future of human civilization, but without really specifying what that risk is.

Then they produced another paper recently on Hothouse Earth, where they started to look for tipping points within the system, points where, in a sense, change becomes self-perpetuating. And rather than just a kind of gradual transition from what we’re used to, to maybe an abnormal condition, all of a sudden a whole bunch of changes start to accelerate, so it becomes much harder to adapt to them. Their analysis is quite limited, but they argue that quite a lot of these tipping points seem to start kicking in at about one and a half to two degrees of warming above pre-industrial levels.

We’re getting quite close to that now. But yeah, the real question for us at the Center for the Study of Existential Risk, looking at humanity, is: what are the effects of this going to be? And also, what are the risks that exist within those socio-technological systems, the institutions that we set up, the way that we survive as a civilization, the way we get our food, the way we get our information, and so on, because there are also significant fragilities and potential tipping points there as well.

That’s a very new sort of study, I mean, to the point where a lot of people just refer back to this one book written by Jared Diamond in 2005 as if it were the authoritative tome on collapse. And it’s a popular book, and he’s not an expert in this: He’s a very generalist scholar, and he provides a very narrative-based analysis of the collapse of certain historical civilizations and draws out a couple of key lessons from that. But it’s all very vague and really written for a general audience. And that still kind of stands out as: this is the weighty tome, this is where you go to get answers to your questions. It’s very early, and we think that there’s a lot of room for better analysis of that question. And that’s something we’re looking at a lot.

Ariel Conn: Can you talk about the difference between treating climate change itself as an existential risk, like saying this is an x-risk, and studying it as if it poses such a threat? If that distinction makes sense?

Simon Beard: Yeah. When you label something as an existential risk, I think that is in many ways a very political move. And I think that that has been the predominant lens through which people have approached this question of how we should talk about climate change. People want to draw attention to it, they realize that there’s a lot of bad things that could come from it. And it seems like we could improve the quality of our future lives relatively easily by tackling climate change.

It’s not like AI safety, you know, the threats that we face from advanced artificial intelligence, where you really have to have advanced knowledge of machine learning and a lot of skills and do a lot of research to understand what’s going on and what the real threats we face might be. This is quite clear. So talking about it, labeling it as an existential risk, has predominantly been a political act. But we are an academic institution.

I think when you ask this question about studying it as an existential threat, one of the great challenges we face is all things that are perceived as existential threats, they’re all interconnected. Human extinction, or the collapse of our civilization, or these outcomes that we worry about: these are scenarios and they will have complex causes — complex technological causes, complex natural causes. And in a sense, when you want to ask the question, should we study climate change as an existential risk? What you’re really asking is, if we look at everything that flows from climate change, will we learn something about the conditions that could precipitate the end of our civilization? 

Now, ultimately, that might come about because of some heat exhaustion or vast crop failure because of the climate change directly. It may come about because, say, climate change triggers a nuclear war. And then there’s a question of, was that a climate-based extinction or a nuclear-based extinction? Or it might come about because we develop technologies to counter climate change, and then those technologies prove to be more dangerous than we thought and pose an existential threat. So when we carve this off as an academic question, what we really want to know is, do we understand more about the conditions that would lead to existential risk, and do we understand more about how we can prevent this bad thing from happening, if we look specifically at climate change? It’s a slightly different bar. But it’s all really just this question of, is talking about climate change, or thinking about climate change, a way to move to a safer world? We think it is but we think that there’s quite a lot of complex, difficult research that is needed to really make that so. And at the moment, what we have is a lot of speculation.

Haydn Belfield: I’ve got maybe an answer to that as well. Over the last few years, lots, and lots of politicians have said climate change is an existential risk, and lots of activists as well. So you get lots and lots of speeches, or rallies, or articles saying this is an existential risk. But at the same time, over the last few years, we’ve had people who study existential risk for a living, saying, “Well, we think it’s an existential risk in the same way that nuclear war is an existential risk. But it’s not maybe this single event that could kill lots and lots of people, or everyone, in kind of one fell swoop.”

So you get people saying, “Well, it’s not a direct risk on its own, because you can’t really kill absolutely everybody on earth with climate change. Maybe there’s bits of the world you can’t live in, but people move around. So it’s not an existential risk.” And I think the problem with both of these ways of viewing it is that word that I’ve been emphasizing, “an.” So I would kind of want to ban the word “an” existential risk, or “a” existential risk, and just say, does it contribute to existential risk in general?

So it’s pretty clear that climate change is going to make a bunch of the hazards that we face — like pandemics, or conflict, or environmental one-off disasters — more likely, but it will also make us more vulnerable to a whole range of hazards, and it will also increase the chances of all these types of things happening, and increase our exposure. So like with Simon, I would want to ask, is climate change going to increase the existential risk we face, and not get hung up on this question of is it “an” existential risk?

Simon Beard: The problem is, unfortunately, there is an existing terminology and existing way of talking that to some extent we’re bound up with. And this is how the debate is. So we’ve really struggled with to what extent we kind of impose the terminology that we’ve most liked on the field and the way that these things are discussed? And we know ultimately existential risk is just a thing; It’s a homogenous lump at the end of human civilization or the human species, and what we’re really looking at is the drivers of that and the things that push that up, and we want to push it down. That is not a concept that I think lots of people find easy to engage with. People do like to carve this up into particular hazards and vulnerabilities and so on.

Haydn Belfield: That’s how most of risk studies works. When you study natural disasters, or you study accidents in an industry setting, that’s what you’re looking at. You’re not looking at each risk as completely separate. You’re saying, “What hazards are we facing? What are our vulnerabilities? And what is our exposure?” and then combining all of those into some overall assessment of the risk you face. You don’t try to silo it up into: this is bio, this is nuclear, this is AI, this is environment.

Ariel Conn: So that connects to a question that I have for you both. And that is what do you see as society’s greatest vulnerabilities today?

Haydn Belfield: Do you want to give that a go, Simon?

Simon Beard: Sure. So I really hesitate to answer any question that’s posed quite in that way, just because I don’t know what our greatest vulnerability is.

Haydn Belfield: Because you’re a very good academic, Simon.

Simon Beard: But we know some of the things that contribute to our vulnerability overall. One that really sticks in my head came out of a study we did looking at what we can learn from previous mass extinction events. And one of the things that people have found looking at the species that tend to die out in mass extinctions, and the species that survive, is this idea that the specialists — the efficient specialists — who’ve really carved out a strong biological niche for themselves, and are often the ones that are doing very well as a result of that, tend to be the species that die out, and the species that survive are the species that are generalists. But that means that within any given niche or habitat or environment, they’re always much more marginal, biologically speaking.

And then you say, “Well, what is humanity? Are we a specialist that’s very vulnerable to collapse, or are we a generalist that’s very robust and resilient to this kind of collapse and would fare very well?” And what you have to say is that, as a species, when you consider humanity on its own, we seem to be the ultimate generalist, and indeed, we’re the only generalist that has really moved beyond marginality. We thrive in every environment, every biome, and we survive in places where almost no other life form would survive. We survived on the surface of the moon — not for very long, but we did; We survived Antarctica, on the pack ice, for long periods of time. And we can survive at the bottom of the Mariana Trench, and in just a ridiculously large range of habitats.

But of course, the way we’ve achieved that is that every individual is now an incredible specialist. There are very few people in the world who could really support themselves, and you can’t just pick those skills up as you go along. You know, this last weekend, I went to an agricultural museum with my kids, and they were showing how you plow fields and how you gather crops and look after them. And there are a lot of really important, quite artisanal skills involved in gathering food and protecting it and preparing it and so on. And you can’t just pick this up from a book; you really have to spend a long time learning it and getting used to it and getting your body strong enough to do these things.

And so every one of us as an individual, I think, is very vulnerable, and relies upon these massive global systems that we’ve set up, these massive global institutions, to provide this support and to make us this wonderfully adaptable generalist species. So, so long as institutions and the technologies that they’ve created and the broad socio-technological systems that we’ve created — so long as they carry on thriving and operating as we want them to, then we are very, very generalist, very adaptable, very likely to make it through any kind of trouble that we might face in the next couple of centuries — with a few exceptions, a few really extreme events. 

But the flip side of that is anything that threatens those global socio-technological institutions also threatens to move us from this very resilient global population we have at the moment to an incredibly fragile one. If we fall back on individuals and our communities, all of a sudden, we are going to become the vulnerable specialist that each of us individually is. That is a potentially catastrophic outcome that people don’t think about enough.

Haydn Belfield: One of my colleagues, Luke Kemp, likes to describe this as a rungless ladder. So the idea is that there’s been lots and lots of collapses before in human history. But what normally happens is elites at the top of the society collapse, and it’s bad for them. But for everyone else, you kind of drop one rung down on the ladder, but it’s okay, you just go back to the farm, and you still know how to farm, your family’s still farming — things get a little worse, maybe, but it’s not really that bad. And you get people leaving the cities, things like that; But you only drop one rung down the ladder, you don’t fall off it. But as we’ve gone many, many more rungs up the ladder, we’ve knocked out every rung below us. And now we’re really high up the ladder. Very few of us know how to farm, how to hunt or gather, how to survive, and so on. So were we to fall off that rungless ladder, then we might come crashing down with a wallop.

Ariel Conn: I’m sort of curious. We’re talking about how humanity is generalist but we’re looking within the boundaries of the types of places we can live. And yet, we’re all very specifically, as you described, reliant on technology in order to live in these very different, diverse environments. And so I wonder if we actually are generalists? Or if we are still specialists at a societal level because of technology, if that makes sense?

Simon Beard: Absolutely. I mean, the point of this was, we kind of wanted to work out where we fell on the spectrum. And basically, it’s a spectrum that you can’t apply to humanity: We appear to fall as the most extreme species in both ends. And I think one of the reasons for that is that the scale as it would be applied to most species really only looks at the physical characteristics of the species, and how they interact directly with their environment — whereas we’ve developed all these highly emergent systems that go way beyond how we interact with the environment, that determine how we interact with one another, and how we interact with the technologies that we’ve created.

And those basically allow us to interact with the world around us in the same ways that both generalists and specialists would. That’s great in many ways: It’s really served us well as a species, it’s been part of the hallmark of our success and our ability to get this far. But it is a real threat, because it adds a whole bunch of systems that have to operate as we expect them to in order for us to continue. Maybe so long as these systems function it makes us more resilient to normal environmental shocks. But it makes us vulnerable to a whole bunch of other shocks.

And then you look at the way that we actually treat these emergent socio-technological systems. And we’re constantly driving for efficiency; We’re constantly driving for growth, as quick and easy growth as we can get. And the ways that you do that are often by making the systems themselves much less resilient. Resiliency requires redundancy, requires diversity, requires flexibility, requires all of the things that either an economic planner or a market functioning on short-term economic return really hate, because they get in the way of productivity.

Haydn Belfield: Do you want to explain what resilience is?

Simon Beard: No.

Ariel Conn: Haydn, do you want to explain it?

Haydn Belfield: I’ll give it a shot, yeah. So, just since people might not be familiar with it — so what I normally think of is someone balancing. How robust they are is how much you can push that person balancing before they fall over, and then resilience is how quickly they get up and can balance again. The next time they balance, they’re even stronger than before. So that’s what we’re talking about when we’re talking about resilience, how quickly and how well you’re able to respond to those kinds of external shocks.

Ariel Conn: I want to stick with this topic of the impact of technology, because one of the arguments that I often hear about why climate change isn’t as big of an existential threat or a contributor to existential risk as some people worry is because at some point in the near future, we will develop technologies that will help us address climate change, and so we don’t need to worry about it. You guys bring this up in the paper that you’re working on as potentially a dangerous approach; I was hoping you could talk about that.

Simon Beard: I think there are various problems with looking for technological solutions. One of them is that technologies tend to be developed for quite specific purposes. But some of the scenarios we are examining as potential climate-driven civilization collapse involve quite widespread, wide-scale systemic change to society and to the environment around us. And engineers have a great challenge even capturing and responding to one kind of change. Engineering is an art of the small; It’s a reductionist art; You break things down, and you look at the components, and you solve each of the challenges one by one.

And there are definitely visionary engineers who look at systems and look at how the parts all fit together. But even there, you have to have a model, you have to have a basic set of assumptions of how all these parts fit together and how they’re going to interact. And this is why you get things like Murphy’s Law — you know, if it can go wrong, it will go wrong — because that’s not how the real world works. The real world is constantly throwing different challenges at you, problems that you didn’t foresee, or couldn’t have foreseen because they are inconsistent with the assumptions you made, all of these things.

So it is quite a stretch to put your faith in technology being able to solve this problem when you don’t understand exactly what the problem you’re facing is. And you don’t necessarily, at this point, understand where we may cross the tipping point, the point of no return — the point at which you really have to step up the R&D funding because the problem the engineers have to solve is staring you in the face. By the time that happens, it may be too late. If you get positive feedback loops — you know, reinforcement, where one bad thing leads to another bad thing, which leads to another bad thing, which then contributes to the original bad thing — it takes far more energy to push the system back into a state of normality than it does for this cycle to keep pushing it further and further away from where you previously were.

So that throws up significant barriers to a technological fix. The other issue, just going back to what we were saying earlier, is technology does also breed fragility. We have a set of paradigms about how technologies are developed, how they interface with the economy that we face, which is always pushing for more growth and more efficiency. It has not got a very good track record of investing in resilience, investing in redundancy, investing in fail-safes, and so on. You typically need to have strong, externally enforced incentives for that to happen.

And if you’re busy saying this isn’t really a threat, this isn’t something we need to worry about, there’s a real risk that you’re not going to achieve that. And yes, you may be able to develop new technologies that start to work. But are they actually just storing up more problems for the future? We can’t wait until the story’s ended and then know whether these technologies really did make us safer in the end or more vulnerable.

Haydn Belfield: So I think I would have an overall skepticism about technology from a kind of, “Oh, it’s going to increase our resilience,” angle. My skepticism in this case is just more practical. So it could very well be that we do develop — there are these things called negative emissions technologies, which suck CO2 out of the air — we could maybe develop those. Or things that could lower the temperature of the earth: maybe we can find a way to do that, at the risk of throwing the whole climate and weather system into chaos. Maybe tomorrow’s the day that we get the breakthrough with nuclear fusion. I mean, it could be that all of these things happen — it’d be great if they could. But I just wouldn’t put all my bets on it. The idea that we don’t need to prioritize climate change above all else, and make it a real central effort for societies, for companies, for governments, because we can just hope for some techno-fix to come along and save us — I just think it’s too risky, and it’s unwise. Especially because, if we’re listening to the scientists, we don’t have that much longer. We’ve only got a few decades left, maybe even one decade, to really make dramatic changes. And we may well not have invented some silver bullet within a decade’s time. Maybe technology could save us from climate change; I’d love it if it could. But we just can’t be sure about that, so we need to make other changes.

Simon Beard: That’s really interesting, Haydn, because when you list negative emissions technologies, or nuclear fusion, that’s not the sort of technology I’m talking about. I was thinking about technology as something that would basically just be used to make us more robust. Obviously, one of the things that you do if you think that climate change is an existential threat is you say, “Well, we really need to prioritize more investment into these potential technology solutions.” The belief that climate change is an existential threat is not committing you to trying to make climate change worse, or something like that.

You want to make it as small as possible, you want to reduce its impact as much as possible. That’s how you respond to climate change as an existential threat. If you don’t believe climate change is an existential threat, you would invest less in those technologies. Also, I do want to say — and I mean, I think there’s some legitimate debate about this — but I don’t like the 12 years terminology; I don’t think we know nearly enough to support those kinds of claims. The IPCC came up with this 12 years figure, but it’s not really clear what they meant by it. And it’s certainly not clear where they got it from. People have been saying, “Oh, we’ve got a year to fix the climate,” or something, for as long as I can remember discussions going on about climate change.

It’s one of those things where that makes a lot of sense politically, but those claims aren’t scientifically based. We don’t know. We need to make sure that that’s not true; We need to falsify these claims, either by really looking at it, and finding out that it genuinely is safer than we thought it was or by doing the technological development and greenhouse gas reduction efforts and other climate mitigation methods to make it safe. That’s just how it works.

Ariel Conn: Do you think that we’re seeing the kind of investment in technology, you know, trying to develop any of these solutions, that we would be seeing if people were sufficiently concerned about climate change as an existential threat?

Simon Beard: So one of the things that worries me is people always judge this by looking at one thing and saying, “Are we doing enough of that thing? Are we reducing our carbon dioxide emissions fast enough? Are people changing their behaviors fast enough? Are we developing technologies fast enough? Are we ready?” Because we know so little about the nature of the risk, we have to respond to this in a portfolio manner; We have to say, “What are all the different actions and the different things that we can take that will make us safer?” And we need to do all of those. And we need to do as much as we can of all of these.

And I think there is a definite negative answer to your question when you look at it like that, because people aren’t doing enough thinking and aren’t doing enough work about how we do all the things we need to do to make us safe from climate change. People tend to get an idea of what they think a safer world would look like, and then complain that we’re not doing enough of that thing, which is very legitimate and we should be doing more of all of these things. But if you look at it as an existential risk, and you look at it from an existential safety angle, there’s just so few people who are saying, “Let’s do everything we can to protect ourselves from this risk.”

Way too many people are saying, “I’ve had a great idea, let’s do this.” That doesn’t seem to me like safety-based thinking; That seems to me like putting all your eggs in one basket and basically generating the solution to climate change that’s most likely to be fragile, that’s most likely to miss something important and not solve the real problem and store up trouble for a future date and so on. We need to do more — but that’s not just more quantitatively, it’s also more qualitatively.

Haydn Belfield: I think just clearly we’re not doing enough. We’re not cutting emissions enough, we’re not moving to renewables fast enough, we’re not even beginning to explore possible solar geoengineering responses, we don’t have anything that really works to suck carbon dioxide or other greenhouse gases out of the air. Definitely, we’re not yet taking it seriously enough as something that could be a major contributor to the end of our civilization or the end of our entire species.

Ariel Conn: I think this connects nicely to another section of some of the work you’ve been doing. And that is looking at — I think there were seven critical systems that are listed as sort of necessary for humanity and civilization.

Simon Beard: Seven levels of critical systems.

Ariel Conn: Okay.

Simon Beard: We rely on all sorts of systems for our continued functioning and survival. And a sufficiently significant failure in any of these systems could be fatal to our entire species. We can kind of classify these systems at various levels. So at the bottom, there are the physical systems — that’s basically the laws of physics: how atoms operate, how subatomic particles operate, how they interact with each other. Those are pretty safe. There are some advanced physics experiments that some people have postulated may be a threat to those systems. But they all seem pretty safe.

We then kind of move up: We’ve got basic chemical systems and biochemical systems, how we generate enzymes and all the molecules that we use — proteins, lipids, and so on. Then we move up to the level of the cell; Then we move up to the level of the anatomical systems — the digestive system, the respiratory system — we need all these things. Then you look at the organism as a whole and how it operates. Then you look at how organisms interact with each other: the biosphere system, the biological system, ecological system.

And then as human beings, we’ve added this kind of seventh, even more emergent, system, which is not just how humans interact with each other, but the kind of systems that we have made to govern our interaction, and to determine how we work together with each other: political institutions, technology, the way we distribute resources around the planet, and so on. So there are a really quite amazing number of potential vulnerabilities that our species has. 

It’s many more than seven, but categorizing them on these seven levels is helpful for not missing anything, because I think most people’s idea of an existential threat is something like a really big gun. Guns, we understand how they kill people — as if you just had a really huge gun and blew a hole in everyone’s head. But that misses threats that are both a lot more basic than the way that people normally die and a lot more sophisticated and emergent. All of these are potentially quite threatening.
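
[For reference, the sketch below lists the seven levels as a simple Python data structure. The level names and examples are paraphrased from Simon’s description in this conversation, not taken from CSER’s published terminology.]

```python
# A rough sketch of the seven levels of critical systems described above.
# Level names and examples are paraphrased from the conversation and are
# not CSER's official classification.

CRITICAL_SYSTEM_LEVELS = [
    (1, "Physical", "laws of physics; how atoms and subatomic particles interact"),
    (2, "Chemical/biochemical", "enzymes, proteins, lipids, and other molecules"),
    (3, "Cellular", "how cells function"),
    (4, "Anatomical", "digestive, respiratory, and other bodily systems"),
    (5, "Organism", "the organism operating as a whole"),
    (6, "Ecological", "the biosphere; how organisms interact with each other"),
    (7, "Socio-technological", "political institutions, technology, global resource distribution"),
]

def describe(level: int) -> str:
    """Return a one-line description of a critical-system level (1-7)."""
    number, name, examples = CRITICAL_SYSTEM_LEVELS[level - 1]
    return f"Level {number} ({name}): {examples}"

if __name__ == "__main__":
    for n in range(1, 8):
        print(describe(n))
```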

Ariel Conn: So can you explain in a little more detail how climate change affects these different levels?

Haydn Belfield: So I guess the way I’ll do this is I’ll first talk a bit about natural feedback loops, and then talk about the social feedback loops. Everyone listening to this will be familiar with feedback loops, like methane getting released from permafrost in the Arctic, or methane coming out of clathrates in the ocean, or other kinds of feedback loops. There’s one that was discovered only recently — a very recent paper was about cloud formation. If warming gets to four degrees, these models show that it becomes much harder for clouds to form. And so you don’t get as much radiation bouncing off those clouds, and the paper said you get very rapid additional heating, up to 12 degrees.

So the first way that climate change could affect these kinds of systems that we’re talking about is it just makes it, anatomically, way too hot: You get all these feedbacks, and it just becomes far too hot for anyone to survive almost anywhere on the surface. Or it might get much too hot in certain areas of the globe for civilization to be able to continue there, much like it’s very hard in the center of the Sahara to have large cities or anything like that. But it seems quite unlikely that climate change would ever get that bad. The kind of stuff that we’re much more concerned about is the more general effects that climate change, climate chaos, climate breakdown might have on a bunch of other systems.

So in this paper, we’ve broken it down into three. We’ve looked at the effects of climate change on the food/water/energy system, the ecological system, and on our political system and conflict. And climate change is likely to have very negative effects on all three of those systems. It’s likely to negatively affect crop yields; It’s likely to increase freak weather events, and there’s some possibility that you might have these very freak weather events — droughts, or hurricanes — in areas where we produce lots of our calories, the bread baskets around the world. So climate change is most likely going to have very negative effects on our food and energy and water systems.

Then separately, there’s ecological systems. People will be very familiar with climate change driving lots of habitat loss, and therefore the loss of species; People will be very familiar with coral reefs dying and bleaching and going away. This could also have very negative effects on us, because we rely on these ecological systems to provide what we call ecological services. Ecological services are things like pollination, so if all the bees died what would we do? Ecological services also include the fish that we catch and eat, or fresh, clean drinking water. So climate change is likely to have very negative effects on that whole set of systems. And then it’s likely to have negative effects on our political system.

If there are large areas of the world that are nigh on uninhabitable, because you can’t grow food or you can’t go out at midday, or there’s no clean water available, then you’re likely to see maybe state breakdown, maybe huge numbers of people leaving — much more than we’ve ever encountered before, tens or hundreds of millions of people dislocated and moving around the world. That’s likely to lead to conflict and war. So those are some ways in which climate change could have negative effects on three sets of systems that we crucially rely on as a civilization.

Ariel Conn: So in your work, you also talk about the global systems death spiral. Was that part of this?

Haydn Belfield: Yeah, that’s right. The global systems death spiral is a catchy term to describe the interaction between all these different systems. So not only would climate change have negative effects on our ecosystems, on our food and water and energy systems, the political system and conflict, but these different effects are likely to interact and make each other worse. So imagine our ecosystems are harmed by climate change: Well, that probably has an effect on food/water systems, because we rely on our ecosystems for these ecosystem services. 

So then, the bad effects on our food and water systems: Well, that probably leads to conflict. So some colleagues of ours at Anglia Ruskin University have something called a global chaos map, which is a great name for a research project, where they try to link incidences of shocks to the food system with conflict — riots or civil wars. And they’ve identified lots and lots of examples of this. Most famously, the Arab Spring, which has now become lots of conflicts, has been linked to a big spike in food prices several years ago. So there’s that link there between food and water insecurity and conflict.

And then conflict leads back into ecosystem damage. Because if you have conflict, you’ve got weak governance, you’ve got weak governments trying to protect their ecosystems, and weak government has been identified as the strongest single predictor of ecosystem loss, biodiversity loss. They all interact with one another, and make one another worse. And you could also think about things going back the other way. So for example, if you’re in a war zone, if you’ve got conflict, you’ve got failing states — that has knock-on effects on the food systems, and the water systems that we rely on: We often get famines during wartime.

And then if they don’t have enough food to eat, they don’t have water to drink, maybe that has negative effects on our ecosystems, too, because people are desperate to eat anything. So what we’re trying to point out here is that the systems aren’t independent from one another — they’re not like three different knobs that are all getting turned up independently by climate change — but that they interact with one another in a way that could cause lots of chaos and lots of negative outcomes for world society.

Simon Beard: We did this kind of pilot study looking at the ecological system and the food system and the global political system and looking at the connections of those three, really just in one direction: looking at the impact of food insecurity on conflict, and conflict and political instability on the biosphere, and loss of biosphere on integrity of the food system. But that was largely determined by the fact that these were three connections that we either had looked at directly, or had close colleagues who had looked at, so we had quite good access to the resources.

As Haydn said, everything kind of also works in the other direction, most likely. And also, there are many, many more global systems that interact in different ways. Another trio that we’re very interested in looking at in the future is the connection between the biosphere and the political system, but this time, also, with some of the health systems: the emergence of new diseases, the ability to respond to public health emergencies, and especially when these things are looked at from a kind of One Health perspective, where plant health and animal health and human health are all actually very closely interacting with one another.

And then you kind of see this pattern where, yes, we could survive six degrees plus, and we could survive famine, and we could survive x, y, and z. But once these things start interacting, it just drives you to a situation where really everything that we take for granted at the moment, up to and including the survival of the species — they’re all on the table, they’re all up for grabs once you start to get this destructive cycle between changes in the environment and changes in how human society interacts with the environment. It’s a very dangerous, potentially very self-perpetuating feedback loop, and that’s why we refer to it as a global systems death spiral: because we really can’t predict at this point in time where it will end. But it looks very, very bleak, and very, very hard to see how, once you enter into this situation, you could then kind of dial it back and return to a safe operating environment for humanity and the systems that we rely on.

There’s definitely a new stable state at the end of this spiral. So when you get feedback loops between systems, it’s not that they will just carry on amplifying change forever; They’re moving towards another kind of stable state, but you don’t know how long it’s going to take to get there, and you don’t know what that steady state will be. Take the simulation with the death of clouds: through purely physical feedback between rising global temperatures, changes in the water cycle, and cloud cover, you end up with a world that’s much, much hotter and much more arid than the one we have at the moment, which could be a very dangerous state. For perpetual human survival, we would need a completely different way of feeding ourselves and of interacting with the environment.

You don’t know what sort of death traps or kill mechanisms lie along that path of change. You don’t know whether, for instance, somewhere along the way it triggers a nuclear war, or triggers attempts to geoengineer the climate in a bid to regain safety that turn out to have catastrophic consequences, or any of the other unknown unknowns that we want to turn into known unknowns, and then into things that we can actually begin to understand and study. So in terms of not knowing where the bottom is, that’s potentially limitless as far as humanity is concerned. We know that it will have an end. Worst case scenario, that end is a very arid climate with a much less complex, much simpler atmosphere, which would basically need to be terraformed back into a livable environment in the way that we’re currently thinking maybe we could do for Mars. But to get a global effort to do that, on an already disintegrating Earth, would, I think, be an extremely tall order. There’s a huge range of different threats and different potential opportunities for an existential catastrophe to unfold within this kind of death spiral. And we think this really is a very credible threat.
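
[To make the “death spiral” dynamic a little more concrete, here is a deliberately toy Python sketch: three coupled system-health variables — ecosystem, food/water, and political stability — each recover on their own but are dragged down by a constant climate stress and by the degradation of the others. All coefficients are invented for illustration only.]

```python
# Toy illustration of the "global systems death spiral" discussed above.
# Three system-health variables (0 = collapsed, 1 = fully functional) recover
# on their own but lose health in proportion to a constant climate stress and
# to how degraded the *other* systems are. All numbers are invented.

def simulate(climate_stress: float, coupling: float, steps: int = 400, dt: float = 0.1):
    eco, food, polity = 1.0, 1.0, 1.0   # start fully functional
    recovery = 0.05                     # intrinsic recovery rate of each system
    for _ in range(steps):
        d_eco    = recovery * (1 - eco)    - eco    * (climate_stress + coupling * (2 - food - polity))
        d_food   = recovery * (1 - food)   - food   * (climate_stress + coupling * (2 - eco - polity))
        d_polity = recovery * (1 - polity) - polity * (climate_stress + coupling * (2 - eco - food))
        eco    = min(1.0, max(0.0, eco    + dt * d_eco))
        food   = min(1.0, max(0.0, food   + dt * d_food))
        polity = min(1.0, max(0.0, polity + dt * d_polity))
    return round(eco, 2), round(food, 2), round(polity, 2)

if __name__ == "__main__":
    # Same climate stress; with coupling, the systems drag each other down
    # towards a much lower stable state than any would reach alone.
    print("uncoupled:", simulate(climate_stress=0.02, coupling=0.0))
    print("coupled:  ", simulate(climate_stress=0.02, coupling=0.15))
```

[The point of the toy model is only the qualitative pattern described here: once systems are coupled, each one’s decline accelerates the others’, and the ensemble settles far below where any single stressed system would have ended up.]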

Ariel Conn: How do we deal with all this uncertainty?

Haydn Belfield: More research needed, is the classic academic response to any time you ask that question. More research.

Simon Beard: That’s definitely the case, but there are also big questions about the kind of research. So mostly scientists want to study things that they already kind of understand: where you already have well established techniques, you have journals that people can publish their research in, you have an extensive peer review community, you can say, yes, you have done this study by the book, you get to publish it. That’s what all the incentives are aligned towards. 

And that sort of research is very important and very valuable, and I don’t want to say that we need less of that kind of research. But that kind of research is not going to deal with the sort of radical uncertainty that we’re talking about here. So we do need more creative science, we need science that is willing to engage in speculation, but to do so in an open and rigorous way. One of the things is you need scientists who are willing to come on the stand and say, “Look, here’s a hypothesis. I think it’s probably wrong, and I don’t yet know how to test it. But I want people to come out and help me find a way to test this hypothesis and falsify it.” 

There aren’t any scientific incentive structures at the moment that encourage that. That is not a way to get tenure, and it’s not a way to get a professorship or chair, or to get your paper published. That is a really stupid strategy to take if you want to be a successful scientist. So what we need to do is we need to create a safe sandbox for people who are concerned about this — and we know from our engagement that there are a lot of people who would really like to study this and really like to understand it better — for them to do that. So one of the big things that we’re really looking at here in CSER is how do we make the tools to make the tools that will then allow us to study this. How do we provide the methodological insights or the new perspectives that are needed to move towards establishing a science of social collapse or environmental collapse that we can actually use to then answer some of these questions?

So there are several things that we’re working on at the moment. One important thing, which I think is a very crucial step for dealing with the sort of radical uncertainty we face, is this classification. We’ve already talked about classifying different levels of critical system. That’s one part of a larger classification scheme that CSER has been developing to just look at all the different components of risk and say, “Well, there’s this and this and this.” Once you start to engage in that exercise, you ask: What are all the systems that might be vulnerable? What are all the possible vulnerabilities that exist within those systems? What are all the ways in which humanity is exposed to those vulnerabilities if things go wrong? And you map that out. You haven’t got to the truth, but you’ve moved a lot of things from the unknown category into the, “Okay, I now know all the ways that things could go wrong, and I know that I haven’t a clue how any of these things could happen.” Then you need to say, “Well, what are the techniques that seem appropriate?”

So we think the planetary boundaries framework, although it doesn’t answer the question that we’re interested in, offers a really nice approach to looking at this question of where tipping points arise, where systems move out of their ordinary operation. We want to apply that in new environments, and we want to find new ways of using it. And there are other tools as well that we can take, for instance, from disaster studies and risk management studies, looking at things like fault tree analysis, where you say, “What are all the things that might go wrong with this? And what are the levers that we currently have, or the interventions that we could make, to stop this from happening?”
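
[As a deliberately simplified illustration of the fault tree idea mentioned here: basic failure events are combined through AND/OR gates to estimate the probability of a top-level failure, assuming the basic events are independent. The events and numbers below are hypothetical, invented purely for illustration — a sketch of the technique, not an actual risk estimate.]

```python
# Minimal fault tree sketch: combine independent basic-event probabilities
# through AND/OR gates to estimate a top-level failure probability.
# Events and numbers are invented for illustration only.

from functools import reduce

def and_gate(*probs):
    """All inputs must fail (independent events): multiply probabilities."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(*probs):
    """Any input failing is enough: 1 minus the chance that none fail."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical basic events (annual probabilities, made up for the example).
crop_failure      = 0.05   # major breadbasket failure
supply_chain_halt = 0.02   # global food logistics disruption
food_reserves_low = 0.30   # reserves insufficient to buffer a shock

# A food-system crisis needs a supply shock (either kind) AND depleted reserves.
supply_shock = or_gate(crop_failure, supply_chain_halt)
food_crisis  = and_gate(supply_shock, food_reserves_low)

print(f"P(supply shock) = {supply_shock:.3f}")
print(f"P(food crisis)  = {food_crisis:.3f}")
```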

We also think that there’s a lot more room for people to share their knowledge and their thoughts and their fears and expectations through what we call structured expert elicitations, where you get people who have very different knowledge together, and you find a way that they can all talk to each other and all learn from each other. And often you get answers out of these sorts of exercises that are very different from what any individual might put in at the beginning, but they represent a much more complete, much more creative structure. And you can get those published because it’s a recognized scientific method — a structured expert elicitation on climate change got published in Nature last month. Which is great, because it’s a really under-researched topic. But I think one of the things that really helped there was that they were using an established method.

What I really hope CSER’s work going forward is going to achieve is to make this a space where we can actually work with many more of the people we need to work with to answer these questions and understand the nature of this risk — to pull them all together and build the social structures so that the kind of research we really badly need at this point can actually start to emerge.

Ariel Conn: A lot of what you’re talking about doesn’t sound like something that we can do in the short term, that it will take at least a decade, if not more to get some of this research accomplished. So in the interest of speed — which is one of the uncertainties we have, we don’t seem to have a good grasp of how much time we have before the climate could get really bad — what do we do in the short term? What do we do for the next decade? What do non-academics do?

Haydn Belfield: The thing is, it’s kind of two separate questions, right? We certainly know all we need to know to take really drastic, serious action on climate change. What we’re asking is a slightly more specific question, which is how can climate change, climate breakdown, climate chaos contribute to existential risk. So we already know with very high certainty that climate change is going to be terrible for billions of people in the world, that it’s going to make people’s lives harder, it’s going to make them getting out of extreme poverty much harder.

And we also know that the people who have contributed the least to the problem are going to be the ones that are screwed the worst by climate change. And it’s just so unfair, and so wrong, that I think we know enough now to take serious action on climate change. And not only is it wrong, it’s not in the interest of rich countries to live in this world of chaos, of worse weather events, and so on. So I think we already know enough, we have enough certainty on those questions to act very seriously, to reduce our emissions very quickly, to invest in as much clean technology as we can, and to collaborate collectively around the world to make those changes. And what we’re saying though, is about the different, more unusual question of how it contributes to existential risk more specifically. So I think I would just make that distinction pretty clear. 

Simon Beard: So there’s a direct answer to your question and an indirect answer to your question. The direct answer to your question is all the things you know you should be doing. Fly less, preferably not at all; eat less meat, preferably not at all, and preferably not dairy, either. Every time there’s an election, vote, but also ask all the candidates — all the candidates, don’t just go for the ones who you think will give you the answer you like — “I’m thinking of voting for you. What are you going to do about climate change?”

There are a lot of people all over the political spectrum who care about climate change. Yeah, there are political differences in who cares more, and so on. But every political candidate has votes that they could pick up if they did more on climate change, irrespective of their political persuasion. And even if you have a political conviction, so that you’re always going to vote the same way, you can nudge candidates to pick up those votes and to do more on climate change by just asking that simple question: “I’m thinking of voting for you. What are you going to do about climate change?” That’s a really low-cost ask, and it’s good for their election; If they get 100 letters, all saying that, and they’re all personal letters, and not just some mass campaign, it really does change the way that people think about the problems that they face. But I also want to challenge you a bit on this “This is going to take decades,” because it depends — it depends how we approach it.

Ariel Conn: So one example of research that can happen quickly and action that can occur quickly is this example that you give early on in the work that you’re doing, comparing the need to study climate change as a contributor to existential risk to the work that was done in the ’80s looking at how nuclear weapons can create a nuclear winter, and how that connects to existential risk. And so I was hoping you could also talk a little bit about that comparison.

Simon Beard: Yeah, so I think this is really important and I know a lot of the things that we’re talking about here, about critical global systems and how they interact with each other and so on — it’s long winded, and it’s technical, and it can sound a bit boring. But this was, for me, a really big inspiration as for why we’re trying to look at it in this way. So when people started to explode nuclear weapons in the Manhattan Project in the early 1940s, right from the beginning, they were concerned about the kind of threats, or the kind of risks that these posed, and firstly thought, well, maybe it would set light to the upper atmosphere. And there were big worries about the radiation. And then, for a time, there were worries just about the explosive capacity. 

This was enough to raise a kind of general sense of alarm and threat. But none of these were really credible. They didn’t last; They didn’t withstand scientific scrutiny for very long. And then Carl Sagan and some colleagues did this research in the early 1980s on modeling the climate impacts of nuclear weapons, which is not a really intuitive thing to do, right? When you’ve got the most explosive weapon ever envisaged, and it has all this nuclear fallout and so on, and you think, what’s this going to do to the global climate — that doesn’t seem like it’s going to be where the problems lie.

But they discover, when they look at that, that no, it’s a big thing. If you have nuclear strikes on cities, it sends a lot of ash into the upper atmosphere. And it’s very similar to what happens if you have a very large asteroid, or a very large set of volcanoes going off; The kind of changes that you see in the upper atmosphere are very similar, and you get this dramatic global cooling. And this then threatens — as in a lot of mass extinctions — the underlying food source. And that’s how humans starve. And this comes out in 1983, roughly 40 years after people started talking about nuclear risk. And it changes the game, because all of a sudden, in looking at this rather unusual topic, they find a really credible way in which nuclear winter leads to everyone dying.

The research is still much discussed, and what kind of nuclear warhead, what kind of nuclear explosions, and how many and would they need to hit cities, or would they need to hit areas with particularly large sulphur deposits, or all of these things — these are still being discussed. But all of a sudden, the top leaders, the geopolitical leaders start to take this threat seriously. And we know Reagan was very interested and explored this a lot, the Russians even more so. And it really does seem to have kick started a lot of nuclear disarmament debate and discussion and real action.

And what we’re trying to do in reframing the way that people research climate change as an existential threat is to look for something like that: What’s a credible way in which this really does lead to an existential catastrophe for humanity? Because that hasn’t been done yet. We don’t have that. We feel like we have it because everyone knows the threat and the risk. But really, we’re just at this area of kind of vague speculation. There’s a lot of room for people to step up with this kind of research. And the historical evidence suggests that this can make a real difference.

Haydn Belfield: We tend to think of existential risks as one-off threats — some big explosion, or some big thing, like an individual asteroid that hits an individual species of dinosaurs and then kills it, right — we tend to think of existential risks as one singular event. But really, that’s not how most mass extinctions happen. That’s not how civilizational collapses have tended to happen over history. The way that all of these things have actually happened, when you go back to look at archeological evidence or you go back to look at the fossil evidence, is that there’s a whole range of different things — different hazards and different internal capabilities of these systems, whether they’re species or societies — and they get overcome by a range of different things. 

So, often in archeological history — in the Pueblo Southwest, for example — there’ll be one set of climatic conditions and one external shock that faces the community, and they react fine to it. But then, a few years later, the same community is faced with similar threats, but reacts completely differently and collapses completely. It’s not that there are these singular, overwhelming events from outside; it’s that you have to look at all the different systems that this one particular society relies on. And you have to look at when all of those things overcome the overall resilience of the system.

Or looking at species, like what happens when sometimes a species can recover from an external shock, and sometimes there’s just too many things, and the conditions aren’t right, and they get overcome, and they go extinct. That’s where looking at existential risk, and looking at the study of how we might collapse or how we might go extinct — that’s where the field needs to go: It needs to go into looking at what are all the different hazards we face, how do they interact with the vulnerabilities that we have, and the internal dynamics of our systems that we rely on, and the different resilience of those systems, and how are we exposed to those hazards in different ways, and having a much more sophisticated, complicated, messy look at how they all interact. I think that’s the way that existential risk research needs to go.

Simon Beard: I agree. I think that fits in with various things we said earlier.

Ariel Conn: So then my final question for both of you is — I mean, you’re not even just looking at climate change as an existential threat; I know you look at lots of things and how they contribute to existential threats — but looking at climate change, what gives you hope?

Simon Beard: At a psychological level, hope and fear aren’t actually big day-to-day parts of my life. Because working in existential risk, you have this amazing privilege that you’re doing something, you’re working to make that difference between human extinction and civilization collapse and human survival and flourishing. It’s a waste to have that opportunity and to get too emotional about it. It’s a waste firstly because it is the most fascinating problem. It is intellectually stimulating; It is diverse; It allows you to engage with and talk to the best people, both in terms of intelligence and creativity, but also in terms of drive and passion, and activism and ability to get things done.

But also because it’s a necessary task: We have to get on with it, we have to do this. So I don’t know if I have hope. But that doesn’t mean that I’m scared or anxious, I just have a strong sense of what I have to do. I have to do what I can to contribute, to make a difference, to maximize my impact. That’s a series of problems and we have to solve those problems. If there’s one overriding emotion that I have in relation to my work, and what I do, and what gets me out of bed, it’s curiosity — which is, I think, at the end of the day, one of the most motivating emotions that exists. People often say to me, “What’s the thing I should be most worried about: nuclear war, or artificial intelligence or climate change? Like, tell me, what should I be most worried about?” You shouldn’t worry about any of those things. Because worry is a very disabling emotion.

People who worry stay in bed. I haven’t got time to do that. I had heart surgery about 18 months ago, a big heart bypass operation. And they warned me beforehand that, after this surgery, you’re going to feel emotional; it happens to everyone. It’s basically a near-death experience: You have to be cooled down to a state from which you can’t recover on your own; They have to warm you back up. Your body kind of remembers these things. And I do remember, a couple of nights after getting home from that, I just burst into floods of tears thinking about this kind of existential collapse, and, you know, what it would mean for my kids and how we’d survive it, and it was completely overwhelming. As overwhelming as you’d expect it to be for someone who has to think about that.

But this isn’t how we engage with it. This isn’t science fiction stories that we’re telling ourselves to feel scared or feel a rush. This is a real problem. And we’re here to solve that problem. I’ve been very moved the last month or so by all the stuff about the Apollo landing missions. And it’s reminded me, sort of a big inspiration of my life, one of these bizarre inspirations of my life, was getting Microsoft Encarta 95, which was kind of my first all-purpose knowledge source. And when you loaded it up — because it was the first one on CD ROM — they had these sound clips and they included that bit of JFK’s speech about we choose to go to the moon, not because it’s easy, but because it’s hard. And that has been a really inspiring quote for me. And I think I’ve often chosen to do things because they’re hard. 

And it’s been kind of upsetting — this is the first time this kind of moon landing anniversary’s come up — and I realized, no, he was being completely literal. The reason that he chose to go to the moon was that it was so hard that the Russians couldn’t do it. So they were confident that they were going to win the race. And that was all that mattered. But for me, I think in this case, we’re choosing to do this research and to do this work, not because it’s hard, but because it’s easy. Because understanding climate change, being curious about it, working out new ways to adapt, and to mitigate, and to manage the risk, is so much easier than living with the negative consequences of it. This is the best deal on the table at the moment. This is the way that we maximize the benefit while minimizing the cost.

This is not the great big structural change that completely messes up our entire society and reduces us to some kind of green primitivism. That’s what happens if climate change kicks in. That’s when we start to see people reduced to subsistence-level agriculture, or whatever it is. Understanding the risk and responding to it: this is the way that we keep all the good things that our civilization has given us. This is the way that we keep international travel, that we keep our technology, that we keep our food and the nice things we get from all around the world.

And yes, it does require some sacrifices. But these are really small changes in the scale of things. And once we start to make them, we will find ways of working around them. We are very creative, we are very adaptable, we can adapt to the changes that we need to make to mitigate climate change. And we’ll be good at that. And I just wish that anyone listening to this podcast had that mindset — didn’t think about fear or about blame, or shame or anger — that they thought about curiosity, and they thought about what can I do, and how good this is going to be, how bright and open our future is, and how much we can achieve as a species.

If we can just get over these hurdles, these mistakes that we made years ago for various reasons — often it was a small number of people, you know, that determined that we have petrol cars rather than battery cars — we can undo them; It’s in our power, it’s in our gift. We are the species that can determine our own fate; We get to choose. And that’s why we’re doing this research. And I think if lots of people — especially lots of people who are well educated, maybe scientists, maybe people who are thinking about a career in science — view this problem in that light, as what can I do? What’s the difference I can make? We’re powerful. It’s a much less difficult problem to solve, and a much better ultimate payoff that we’ll get, than if we try to solve this any other way, especially if we don’t do anything.

Ariel Conn: That was wonderful.

Simon Beard: Yeah, I’m ready to storm the barricade.

Ariel Conn: All right, Haydn try to top that.

Haydn Belfield: No way. That’s great. I think Simon said all that needs to be said on that.

Ariel Conn: All right. Well, thank you both for joining us today.

Simon Beard: Thank you. It’s been a pleasure.

Haydn Belfield: Yeah, absolute pleasure.

AI Alignment Podcast: On the Governance of AI with Jade Leung

In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.

Topics discussed in this episode include:

  • The landscape of AI governance
  • GovAI’s research agenda and priorities
  • Aligning government and companies with ideal governance and the common good
  • Norms and efforts in the AI alignment community in this space
  • Technical AI alignment vs. AI Governance vs. malicious use cases
  • Lethal autonomous weapons
  • Where we are in terms of our efforts and what further work is needed in this space

You can take a short (3 minute) survey to share your feedback about the podcast here.

Important timestamps: 

0:00 Introduction and updates

2:07 What is AI governance?

11:35 Specific work that Jade and the GovAI team are working on

17:21 Windfall clause

21:20 Policy advocacy and AI alignment community norms and efforts

27:22 Moving away from short-term vs long-term framing to a stakes framing

30:44 How do we come to ideal governance?

40:22 How can we contribute to ideal governance through influencing companies and government?

48:12 US and China on AI

51:18 What more can we be doing to positively impact AI governance?

56:46 What is more worrisome, malicious use cases of AI or technical AI alignment?

01:01:19 What is more important/difficult, AI governance or technical AI alignment?

01:03:49 Lethal autonomous weapons

01:09:49 Thinking through tech companies in this space and what we should do

 

Two key points from Jade: 

“I think one way in which we need to rebalance a little bit, as kind of an example of this is, I’m aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we’re looking at private labs that are working on developing AGI. And they’re more nimble. They have more familiar people in them, we think more similarly to those kinds of people. And so I think there’s an attraction. There’s really good rational reasons to engage with the folks because they’re the ones who are developing this technology and they’re plausibly the ones who are going to develop something advanced.

“But there are also, I think, somewhat biased reasons why we engage: because they’re not as messy, or they’re more familiar, or we see them as more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets is plausibly not that great a strategy. That being said, I’m actually not entirely sure what I’m advocating for. I’m not sure that I want people to go and engage with all of the UN conversations on this because there’s a lot of noise and very little signal. So I think it’s a tricky one to navigate, for sure. But I’ve just been reflecting on it lately, that I think we sort of need to be a bit conscious about not groupthinking ourselves into thinking we’re covering all the bases that we need to cover.”

 

“I think one thing I’d like for people to be thinking about… this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I’ve tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.

“And thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don’t need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there’s really good reason to engage in them for several reasons, right?…Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons: efficiency gains, or for increasing control and centralizing of power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we’re at risk of not preserving because we continue to operate in this dynamic of innovation and performance at whatever cost. Those are examples of conversations where I find it plausible that there are existing conversations that we should be more engaged in just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns.”

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. Key works mentioned in this podcast can be found here.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry. And today, we will be speaking with Jade Leung from the Center for the Governance of AI, housed at the Future of Humanity Institute. Their work strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. They focus on the political challenges arising from transformative AI, and seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, and her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.

In this episode, we discuss GovAI’s research agenda and priorities, the landscape of AI governance, how we might arrive at ideal governance, the dynamics and roles of both companies and states within this space, how we might be able to better align private companies with what we take to be ideal governance. We get into the relative importance of technical AI alignment and governance efforts on our path to AGI, we touch on lethal autonomous weapons, and also discuss where we are in terms of our efforts in this broad space, and what work we might like to see more of.

As a general announcement, I found all the feedback coming in through the SurveyMonkey poll greatly helpful. I’ve read through all of your comments and thoughts, and am working on incorporating feedback where I can. So for the meanwhile, I’m going to leave the survey up, and you’ll be able to find a link to it in the description of wherever you might find this podcast. Your feedback really helps and is appreciated. And, as always, if you find this podcast interesting or useful, consider sharing it with others who might find it valuable as well. And so, without further ado, let’s jump into our conversation with Jade Leung.

So let’s go ahead and start by providing a little bit of framing on what AI governance is, the conceptual landscape that surrounds it. What is AI governance, and how do you view and think about this space?

Jade: I think the way that I tend to think about AI governance is with respect to how it relates to the technical field of AI safety. In both fields, the broad goal is how humanity can best navigate our transition towards a world with advanced AI systems in it. The technical AI safety agenda and the kind of research that’s being done there is primarily focused on how do we build these systems safely and well. And the way that I think about AI governance with respect to that is broadly everything else that’s not that. So that includes things like the social, political, economic context that surrounds the way in which this technology is developed and built and used and employed.

And specifically, I think with AI governance, we focus on a couple of different elements of it. One big element is the governance piece: What are the kinds of norms and institutions we want around a world with advanced AI, serving the common good of humanity? And then we also focus a lot on the kinds of strategic political impacts and effects and consequences of the route on the way to a world like that. So what are the kinds of risks — social, political, economic? And what are the kinds of impacts and effects that us developing it in sub-optimal ways could have on the various things that we care about?

Lucas: Right. And so just to throw out some other cornerstones here, because I think there’s many different ways of breaking up this field and thinking about it, and this sort of touches on some of the things that you mentioned. There’s the political angle, the economic angle. There’s the military. There’s the governance and the ethical dimensions.

Here on the AI Alignment Podcast, we’ve previously at least broken the taxonomy down into the technical AI alignment research, which is getting machine systems to be aligned with human values and desires and goals; then the AI governance, strategy, and law work; and then the ethical dimension. Do you have any preferred view or way of breaking this all down? Or is it all just about equally good to you?

Jade: Yeah. I mean, there are a number of different ways of breaking it down. And I think people also mean different things when they say strategy and governance and whatnot. I’m not particularly excited about getting into definitional debates. But maybe one way of thinking about what this word governance means is, at least, that I often think of governance as the norms, and the processes, and the institutions that are going to, and already do, shape the development and deployment of AI. So there are a couple of things that are worth underlining in that. The word governance isn’t just specifically government and regulation; it’s a broader term than that, which is worth pointing out because that’s a common misconception, I think, when people use the word governance.

So when I say governance, I mean government and regulation, for sure. But I also mean: what are other actors doing that aren’t government? So labs, researchers, developers, NGOs, journalists, et cetera — and also other mechanisms that aren’t regulation. So it could be things like reputation, financial flows, talent flows, public perception, what’s within and outside the Overton window, et cetera. So there are a number of different levers I think you can pull if you’re thinking about governance.

It’s probably worth also pointing out, I think, when people say governance, a lot of the time people are talking about the normative side of things, so what should it look like, and how could be if it were good? A lot of governance research, at least in this space now, is very much descriptive. So it’s kind of like what’s actually happening, and trying to understand the landscape of risk, the landscape of existing norms that we have to work with, what’s a tractable way forward with existing actors? How do you model existing actors in the first place? So a fair amount of the research is very descriptive, and I would qualify that as AI governance research, for sure.

Another way of breaking it down is according to the research agenda that we put out. So that kind of breaks it down into firstly understanding the technological trajectory, so that’s understanding where this technology is likely to go, what are the technical inputs and constraints, and particularly the ones that have implications for governance outcomes. This looks like things like modeling AI progress and mapping capabilities, and involves a fair amount of technical work.

And then you’ve got the politics cluster, which is probably where a fair amount of the work is at the moment. This is looking at political dynamics between powerful actors. So, for example, my work is focusing on big firms and government and how they relate to each other, but also includes how AI transforms and impacts political systems, both domestically and internationally. This includes the cluster around international security and the race dynamics that fall into that. And then also international trade, which is a thing that we don’t talk about a huge amount, but politics also includes this big dimension of economics in it.

And then the last cluster is this governance cluster, which is probably the most normative end of what we would want to be working on in this space. This is looking at things like what are the ideal institutions, infrastructure, norms, mechanisms that we can put in place now/in the future that we should be aiming towards that can steer us in robustly good directions. And this also includes understanding what shapes the way that these governance systems are developed. So, for example, what roles does the public have to play in this? What role do researchers have to play in this? And what can we learn from the way that we’ve governed previous technologies in similar domains, or with similar challenges, and how have we done on the governance front on those bits as well. So that’s another way of breaking it down, but I’ve heard more than a couple of ways of breaking this space down.

Lucas: Yeah, yeah. And all of them are sort of valid in their own ways, and so we don’t have to spend too much time on this here. Now, a lot of these things that you’ve mentioned are quite macroscopic effects in the society and the world, like norms and values and developing a concept of ideal governance and understanding actors and incentives and corporations and institutions and governments. Largely, I find myself having trouble developing strong intuitions about how to think about how to impact these things because it’s so big it’s almost like the question of, “Okay, let’s figure out how to model all of human civilization.” At least all of the things that matter a lot for the development and deployment of technology.

And then let’s also think about ideal governance, like what is also the best of all possible worlds, based off of our current values, that we would like to use our model of human civilization to bring us closer towards? So being in this field, and exploring all of these research threads, how do you view making progress here?

Jade: I can hear the confusion in your voice, and I very much resonate with it. We’re sort of consistently confused, I think, at this place. And it is a very big, both set of questions, and a big space to kind of wrap one’s head around. I want to emphasize that this space is very new, and people working in this space are very few, at least with respect to AI safety, for example, which is still a very small section that feels as though it’s growing, which is a good thing. We are at least a couple of years behind, both in terms of size, but also in terms of sophistication of thought and sophistication of understanding what are more concrete/sort of decision relevant ways in which we can progress this research. So we’re working hard, but it’s a fair ways off.

One way in which I think about it is to think about it in terms of what actors are making decisions now/in the near to medium future, that are the decisions that you want to influence. And then you sort of work backwards from that. I think at least, for me, when I think about how we do our research at the Center for the Governance of AI, for example, when I think about what is valuable for us to research and what’s valuable to invest in, I want to be able to tell a story of how I expect this research to influence a decision, or a set of decisions, or a decision maker’s priorities or strategies or whatever.

Ways of breaking that down a little bit further would be to say, you know, who are the actors that we actually care about? One relatively crude bifurcation is focusing on those who are in charge of developing and deploying these technologies, firms, labs, researchers, et cetera, and then those who are in charge of sort of shaping the environment in which this technology is deployed, and used, and is incentivized to progress. So that’s folks who shape the legislative environment, folks who shape the market environment, folks who shape the research culture environment, and expectations and whatnot.

And with those two sets of decision makers, you can then boil it down into what are the particular decisions they are in charge of making that you can decide you want to influence, or try to influence, by providing them with research insights or doing research that will, in some downstream way, affect the way they think about how these decisions should be made. And a very, very concrete example would be to pick, say, a particular firm. And they have a set of priorities, or a set of things that they care about achieving within the lifespan of that firm. And they have a set of strategies and tactics that they intend to use to execute on that set of priorities. So you can either focus on trying to shift their priorities towards better directions if you think they’re off, or you can try to point out ways in which their strategies could be done slightly better, e.g. they should be coordinating more with other actors, or they should be thinking harder about openness in their research norms. Et cetera, et cetera.

Well, you can kind of boil it down to the actor level and the decision specific level, and get some sense of what it actually means for progress to happen, and for you to have some kind of impact with this research. One caveat with this is that I think if one takes this lens on what research is worth doing, you’ll end up missing a lot of valuable research being done. So a lot of the work that we do currently, as I said before, is very much understanding what’s going on in the first place. What are the actual inputs into the AI production function that matter and are constrained and are bottlenecked? Where are they currently controlled? With a number of other things which are mostly just descriptive, I can’t tell you which decision I’m going to influence by understanding this. But having a better baseline will inform better work across a number of different areas. I’d say that this particular lens is one way of thinking about progress. There are a number of other things that it wouldn’t measure, that are still worth doing in this space.

Lucas: So it does seem like we gain a fair amount of tractability by just thinking, at least short term, about who the key actors are, and how we might be able to guide them in a direction which seems better. I think here it would also be helpful if you could let us know what the actual research is that you, and say, Allan Dafoe engage in on a day to day basis. So there’s analyzing historical cases. I know that you guys have done work specifying your research agenda. You have done surveys of American attitudes and trends on opinions on AI. Jeffrey Ding has also released a paper, Deciphering China’s AI Dream, which tries to understand China’s AI strategy. You’ve also released a report on the malicious use of artificial intelligence. So, I mean, what is it like being Jade on a day to day basis trying to conquer this problem?

Jade: The specific work that I’ve spent most of my research time on to date sort of falls into the politics/governance cluster. And basically, the work that I do is centered on the assumption that there are things that we can learn from a history of trying to govern strategic general purpose technologies well. And if you look at AI, and you believe that it has certain properties that make it strategic, strategic here in the sense that it’s important for things like national security and economic leadership of nations and whatnot. And it’s also a general purpose technology, in that it has the potential to do what GPTs do, which is to sort of change the nature of economic production, push forward a number of different frontiers simultaneously, enable consistent cumulative progress, and change the course of organizational functions like transportation, communication, et cetera.

So if you think that AI looks like a strategic general purpose technology, then the claim is something like: in history we’ve seen a set of technologies that plausibly have the same traits. So the ones that I focus on are biotechnology, cryptography, and aerospace technology. And the question that sort of kicked off this research is, how have we dealt with the kind of very fraught competition that we currently see in the space of AI when we’ve competed across these technologies in the past? And the reason why there’s a focus on competition here is because I think one important thing that characterizes a lot of the reasons why we’ve got a fair number of risks in the AI space is that we are competing over it. “We” here being very powerful nations and very powerful firms, and the reason why competition is an important thing to highlight is that it exacerbates a number of risks and it causes a number of risks.

So when you’re in a competitive environment, actors are normally incentivized to take larger risks than they otherwise would rationally do. They are largely incentivized to not engage in the kind of thinking that is required to think about public goods governance and serving the common benefit of humanity. And they’re more likely to engage in thinking that is more about serving parochial, sort of private, interests.

Competition is bad for a number of reasons. Or it could be bad for a number of reasons. And so the question I’m asking is, how have we competed in the past? And what have been the outcomes of those competitions? Long story short, the research that I do basically dissects these cases of technology development, specifically in the US. And I analyze the kinds of conflicts, and the kinds of cooperation, that have existed between the US government and the firms that were leading technology development, and also the researcher communities that were driving these technologies forward.

Other pieces of research that are going on: we have a fair number of our researchers working on understanding what are the important inputs into AI that are actually progressing us forward. How important is compute relative to algorithmic structures, for example? How important is talent, with respect to other inputs? And then the reason why that’s important to analyze and useful to think about is understanding who controls these inputs, and how they’re likely to progress in terms of future trends. So that’s an example of the technology forecasting work.

In the politics work, we have a pretty big chunk on looking at the relationship between governments and firms. So this is a big piece of work that I’ve been doing, along with a fair amount of others, understanding, for example, if the US government wanted to control AI R&D, what are the various levers that they have available that they could use to do things like seize patents, or control research publications, or exercise things like export controls, or investment constraints, or whatnot. And the reason why we focus on that is because my hypothesis is that ultimately you’re going to start to see states get much more involved. At the moment, we’re in this period of time that a lot of people describe as very private sector driven, with the governments behind; I think, and history would also suggest, that the state is going to be involved much more significantly very soon. So understanding what they could do, and what their motivations are, is important.

And then, lastly, on the governance piece, a big chunk of our work here is specifically on public opinion. So you’ve mentioned this before. But basically, a substantial chunk of our work, consistently, is just understanding what the public thinks about various issues to do with AI. So recently, we published a report on the recent set of surveys that we did of the American public. And we asked them a variety of different questions and got some very interesting answers.

So we asked them questions like: What risks do you think are most important? Which institution do you trust the most to do things with respect to AI governance and development? How important do you think certain types of governance challenges are for American people? Et cetera. And the reason why this is important for the governance piece is because governance ultimately needs to have sort of public legitimacy. And so the idea was that understanding how the American public thinks about certain issues can at least help to shape some of the conversation around where we should be headed in governance work.

Lucas: So there’s also been work here, for example, on capabilities forecasting. And I think Allan and Nick Bostrom also come at these from slightly different angles sometimes. And I’d just like to explore all of these so we can get all of the sort of flavors of the different ways that researchers come at this problem. Was it Ben Garfinkel who did the offense-defense analysis?

Jade: Yeah.

Lucas: So, for example, there’s work on that. That work was specifically on trying to understand how the offense-defense balance scales as capabilities change. This could have been done with nuclear weapons, for example.

Jade: Yeah, exactly. That was an awesome piece of work by Allan and Ben Garfinkel, looking at this concept of the offense-defense balance, which exists for weapon systems broadly. And they were sort of analyzing and modeling. It’s a relatively theoretical piece of work, trying to model how the offense-defense balance changes with investments. And then there was a bit of an investigation there specifically on how we could expect AI to affect the offense-defense balance in different types of contexts. The other cluster of work, which I failed to mention as well, is a lot of our work on policy, specifically. So this is where projects like the windfall clause would fall in.

Lucas: Could you explain what the windfall clause is, in a sentence or two?

Jade: The windfall clause is an example of a policy lever, which we think could be a good idea to talk about in public and potentially think about implementing. And the windfall clause is an ex-ante voluntary commitment by AI developers to distribute profits from the development of advanced AI for the common benefit of humanity. What I mean by ex-ante is that they commit to it now. So an AI developer, say a given AI firm, will commit to, or sign, the windfall clause prior to knowing whether they will get to anything like advanced AI. And what they commit to is saying that if I hit a certain threshold of profits, so what we call windfall profit, and the threshold is very, very, very high. So the idea is that this should only really kick in if a firm really hits the jackpot and develops something that is so advanced, or so transformative in the economic sense, that they get a huge amount of profit from it at some sort of very unprecedented scale.

So if they hit that threshold of profit, this clause will kick in, and that will commit them to distributing their profits according to some kind of pre-committed distribution mechanism. And the idea with the distribution mechanism is that it will redistribute these profits along the lines of ensuring that sort of everyone in the world can benefit from this kind of bounty. There’s a lot of different ways in which you could do the distribution. And we’re about to put out the report which outlines some of our thinking on it. And there are many more ways in which it could be done besides what we talk about.

But effectively, what you want in a distribution mechanism is you want it to be able to do things like rectify inequalities that could have been caused in the process of developing advanced AI. You want it to be able to provide a financial buffer to those who’ve been technologically unemployed by the development of advanced AI. And then you also want it to do some positive things too. So it could be, for example, that you distribute it according to meeting the sustainable development goals. Or it could be redistributed according to a scheme that looks something like a UBI. And that transitions us into a different type of economic structure. So there are various ways in which you could play around with it.

Effectively, the windfall clause is starting a conversation about how we should be thinking about the responsibilities that AI developers have to ensure that if they do luck out, or if they do develop something that is as advanced as some of what we speculate we could get to, there is a responsibility there. And there also should be a committed mechanism there to ensure that that is balanced out in a way that reflects the way that we want this value to be distributed across the world.

And that’s an example of the policy lever that is sort of uniquely concrete, in that we don’t actually do a lot of concrete research. We don’t do much policy advocacy work at all. But to the extent that we want to do some policy advocacy work, it’s mostly with the motivation that we want to be starting important conversations about robustly good policies that we could be advocating for now, that can help steer us in better directions.
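
To make the shape of the mechanism concrete, here is a minimal illustrative sketch in Python of the kind of trigger Jade describes. Every specific in it is a hypothetical assumption made for illustration only: the threshold expressed as a share of gross world product, the distribution rate, and the names (WINDFALL_THRESHOLD_SHARE, FirmYear, windfall_obligation) do not come from the forthcoming report, which is where the actual thresholds and distribution mechanisms are discussed.

```python
from dataclasses import dataclass

# Hypothetical trigger threshold: profits count as "windfall" only above some very
# large share of gross world product. This number is purely a placeholder.
WINDFALL_THRESHOLD_SHARE = 0.01  # e.g. 1% of gross world product (illustrative only)


@dataclass
class FirmYear:
    profits: float               # the firm's annual profits, in dollars
    gross_world_product: float   # world economic output that year, in dollars


def windfall_obligation(year: FirmYear, distribution_rate: float = 0.5) -> float:
    """Amount owed under this hypothetical clause for one year.

    Nothing is owed unless profits exceed the windfall threshold; above it,
    a pre-committed fraction of the excess is earmarked for distribution
    (e.g. toward rectifying inequalities or buffering technological unemployment).
    """
    threshold = WINDFALL_THRESHOLD_SHARE * year.gross_world_product
    excess = max(0.0, year.profits - threshold)
    return distribution_rate * excess


# Example: a firm whose profits reach 2% of gross world product.
year = FirmYear(profits=2.0e12, gross_world_product=1.0e14)
print(f"Windfall obligation: ${windfall_obligation(year):,.0f}")
```

The point of the sketch is simply that the commitment is signed ex ante but binds nothing until the (very high) threshold is crossed, at which point the pre-committed distribution rule takes over.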

Lucas: And fitting this into the research threads that we’re talking about here, this goes back to, I believe, Nick Bostrom’s Superintelligence. And so it’s sort of predicated on more foundational principles, which can be attributed to before the Asilomar Conference, but also the Asilomar principles which were developed in 2017, that the benefits of AI should be spread widely, and there should be abundance. And so then there are these sort of specific policy implementations or mechanisms by which we are going to realize these principles, which form the foundation of our ideal governance.

So Nick has sort of done a lot of this work on forecasting. The forecasting in Superintelligence was less about concrete timelines, and more about the logical conclusions of the kinds of capabilities that AI will have, fitting that into our timeline of AI governance thinking, with ideal governance at the end of that. And then behind us, we have history, which we can, as you’re doing yourself, try to glean more information about how what you call general purpose technologies affect incentives and institutions and policy and law and the reaction of government to these new powerful things. Before we brought up the windfall clause, you were discussing policy at FHI.

Jade: Yeah, and one of the reasons why it’s hard is because if we put on the frame that we mostly make progress by influencing decisions, we want to be pretty certain about what kinds of directions we want these decisions to go, and what we would want these decisions to be, before we engage in any sort of substantial policy advocacy work to try to make that actually a thing in the real world. I am very, very hesitant about our ability to do that well, at least at the moment. I think we need to be very humble about thinking about making concrete recommendations because this work is hard. And I also think there is this dynamic, at least, in setting norms, and particularly legislation or regulation, but also just setting up institutions, in that it’s pretty slow work, but it’s very path dependent work. So if you establish things, they’ll be sort of here to stay. And we see a lot of legacy institutions and legacy norms that are maybe a bit outdated with respect to how the world has progressed in general. But we still struggle with them because it’s very hard to get rid of them. And so the kind of emphasis on humility, I think, is a big one. And it’s a big reason why basically policy advocacy work is quite slim on the ground, at least in the moment, because we’re not confident enough in our views on things.

Lucas: Yeah, but there’s also this tension here. The technology’s coming anyway. And so we’re sort of on this timeline to get the right policy stuff figured out. And here, when I look at, let’s just take the Democrats and the Republicans in the United States, and how they interact. Generally, in terms of specific policy implementation and recommendation, it just seems like different people have various dispositions and foundational principles which are at odds with one another, and that policy recommendations are often not substantially tested, or the result of empirical scientific investigation. They’re sort of a culmination and aggregate of one’s very broad squishy intuitions and modeling of the world, and different intuitions one has. Which is sort of why, at least at the policy level, seemingly in the United States government, it seems like a lot of the conversation is just endless arguing that gets nowhere. How do we avoid that here?

Jade: I mean, this is not just specifically an AI governance problem. I think we just struggle with this in general as we try to do governance and politics work in a good way. It’s a frustrating dynamic. But I think one thing that you said definitely resonates and that, a bit contra to what I just said. Whether we like it or not, governance is going to happen, particularly if you take the view that basically anything that shapes the way this is going to go, you could call governance. Something is going to fill the gap because that’s what humans do. You either have the absence of good governance, or you have somewhat better governance if you try to engage a little bit. There’s definitely that tension.

One thing that I’ve recently been reflecting on is the things that we under-prioritize in this community. It’s sort of a double-edged sword of being very conscientious about being epistemically humble and being very cautious about things, and trying to be better calibrated and all of that, which are very strong traits of people who work in this space at the moment. But I think almost because of those traits, too, we undervalue, or we don’t invest enough time or resource in, just trying to engage in existing policy discussions and existing governance institutions. And I think there’s also an aversion to engaging in things that feel frustrating and slow, and that’s plausibly a mistake, at least in terms of how much attention we pay to it, because in the absence of our engagement, the thing’s still going to happen anyway.

Lucas: I must admit that as someone interested in philosophy, I’ve resisted for a long time now the idea of governance in AI, at least casually, in favor of nice calm cool rational conversations at tables that you might have with friends about values, and ideal governance, and what kinds of futures you’d like. But as you’re saying, and as Allan says, that’s not the way that the world works. So here we are.

Jade: So here we are. And I think one way in which we need to rebalance a little bit, as kind of an example of this, is that I’m aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we’re looking at private labs that are working on developing AGI. And they’re more nimble. They have more familiar people in them; we think more similarly to those kinds of people. And so I think there’s an attraction. There are really good rational reasons to engage with those folks, because they’re the ones who are developing this technology and they’re plausibly the ones who are going to develop something advanced.

But there are also, I think, somewhat biased reasons why we engage: because they’re not as messy, or they’re more familiar, or we feel more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets is plausibly not that great a strategy. That being said, I’m actually not entirely sure what I’m advocating for. I’m not sure that I want people to go and engage with all of the UN conversations on this, because there’s a lot of noise and very little signal. So I think it’s a tricky one to navigate, for sure. But I’ve just been reflecting on it lately, that I think we sort of need to be a bit conscious about not groupthinking ourselves into thinking we’re covering all the bases that we need to cover.

Lucas: Yeah. My view on this, and this may be wrong, is just looking at the EA community, and the alignment community, and all that they’ve done to try to help with AI alignment. It seems like a lot of talent is feeding into tech companies. And there are minimal efforts right now to engage in actual policy and decision making at the government level, even for short term issues like disemployment and privacy and other things. The AI alignment is happening now, it seems.

Jade: On the noise to signal point, I think one thing I’d like for people to be thinking about, I’m pretty annoyed at this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I’ve tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.

And I think thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don’t need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there’s really good reason to engage in them, for several reasons, right? One is this path dependency that I talked about before, so norms that you’re developing around, for example, privacy or surveillance. Those norms are going to stick, and the ways in which we decide we want to govern that, even with narrow technologies now, those are the ones we’re going to inherit, grandfather in, as we start to advance this technology space. And then I think you can also just get a fair amount of information about how we should be governing the more advanced versions of these risks or concerns if you engage earlier.

Even just off the top of my head, I can think of a couple which seem to have scalable stakes. So, for example, a very existing conversation in the policy space is about this labor displacement problem and automation. And that’s the thing that people are freaking out about now, to the extent that you have litigation and bills and whatnot being passed, or being talked about at least. And you’ve got a number of people running on political platforms on the basis of that kind of issue. And that is both an existing concern, given automation to date, but it’s also plausibly a huge concern as this stuff gets more advanced, to the point of economic singularity, if you wanted to use that term, where you’ve got vast changes in the structure of the labor market and the employment market, and you can have substantial transformative impacts on the ways in which humans engage and create economic value and production.

And so existing automation concerns can scale into large scale labor displacement concerns, can scale into pretty confusing philosophical questions about what it means to conduct oneself as a human in a world where you’re no longer needed in terms of employment. And so that’s an example of a conversation which I wish more people were engaged in right now.

Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons: efficiency gains, or increasing control and centralizing of power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we’re at risk of not preserving because we continue to operate in this dynamic of innovation and performance at whatever cost. Those are examples where I find it plausible that there are existing conversations that we should be more engaged in, just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns.

Lucas: That makes sense. I think that trying on the stakes framing is helpful, and I can see why. It’s just a question about what are the things today, and within the next few years, that are likely to have a large effect on the larger end that we arrive at with transformative AI. So we’ve got this space of all these four cornerstones that you guys are exploring. Again, this has to do with the interplay and interdependency of technical AI safety, politics, policy, ideal governance, the economics, the military balance and struggle, and race dynamics, all here with AI, on our path to AGI. So starting here with ideal governance, and then we can see how we move through these cornerstones: what is the process by which ideal governance is arrived at? How might this evolve over time as we get closer to superintelligence?

Jade: Maybe a couple of thoughts, mostly about what I think a desirable process is that we should follow, or what kind of desirable traits we want to have in the way that we get to ideal governance, and what ideal governance could plausibly look like. I think that’s the extent to which I maybe have thoughts about it. And they’re quite obvious ones, I think. The governance literature has said a lot about what constitutes morally sound, politically sound, socially sound governance processes, or design of governance processes.

So those are things like legitimacy and accountability and transparency. I think there are some interesting debates about how important certain goals are, either as end goals or as instrumental goals. So for example, I’m not clear where my thinking is on how important inclusion and diversity are as we’re aiming for ideal governance, so I think that’s an open question, at least in my mind.

There are also things to think through around what’s unique to trying to aim for ideal governance for a transformative general purpose technology. We don’t have a very good track record of governing general purpose technologies at all. I think we have general purpose technologies that have integrated into society and have served a lot of value. But that’s not because we’ve had governance of them. I think we’ve been some combination of lucky and somewhat thoughtful sometimes, but not consistently so. If we’re staking the claim that AI could be a uniquely transformative technology, then we need to ensure that we’re thinking hard about the specific challenges that it poses. It’s a very fast-moving emerging technology. And governments historically have always been relatively slow at catching up. But you also have certain capabilities that you can realize by developing, for example, AGI or superintelligence, which governance frameworks or institutions have never had to deal with before. So thinking hard about what’s unique about this particular governance challenge, I think, is important.

Lucas: Seems like often, ideal governance is arrived at through massive suffering under previous political systems, like the form of ideal governance that the founding fathers of the United States came up with was sort of an expression of the suffering they experienced at the hands of the British. And so I guess if you track historically how we’ve shifted from feudalism and monarchy to democracy and capitalism and all these other things, it seems like governance is a large, slowly reactive process born of revolution. Whereas, here, what we’re actually trying to do is have foresight and wisdom about what the world should look like, rather than trying to learn from some mistake or some un-ideal governance we generate through AI.

Jade: Yeah, and I think that’s also another big piece of it; another way of thinking about how to get to ideal governance is to aim for a period of time, or a state of the world, in which we can actually do the thinking well without a number of other distractions/concerns on the way. So for example, conditions that we want to drive towards would mean getting rid of things like the current competitive environment that we have, which, for many reasons, some of which I mentioned earlier, is a bad thing, and is particularly counterproductive to giving us the kind of space and cooperative spirit and whatnot that we need to come to ideal governance. Because if you’re caught in this strategic competitive environment, then that makes a bunch of things just much harder to do in terms of aiming for coordination and cooperation and whatnot.

You also probably want better, more accurate information out there, hence being able to think harder by looking at better information. And so a lot of work can be done to encourage more accurate information to hold more weight in public discussions, and then also to encourage an environment of genuine, epistemically healthy deliberation about that kind of information. All of what I’m saying is also not particularly unique, maybe, to ideal governance for AI. I think in general, you can sometimes broaden this discussion to what it looks like to govern a global world relatively well. And AI is one of the particular challenges that are maybe forcing us to have some of these conversations. But in some ways, when you end up talking about governance, it ends up being relatively abstract in a way that, I think, isn’t specific to the technology. At least in some ways there are also particular challenges, I think, if you’re thinking particularly about superintelligence scenarios. But if you’re just talking about governance challenges in general, things like accurate information, more patience, a lack of competition and rivalrous dynamics and whatnot, that generally is kind of just helpful.

Lucas: So, I mean, arriving at ideal governance here, I’m just trying to model and think about it, and understand if there should be anything here that should be practiced differently, or if I’m just sort of slightly confused here. Generally, when I think about ideal governance, I see that it’s born of very basic values and principles. And I view these values and principles as coming from nature, like the genetics, evolution instantiating certain biases and principles in people that tend to lead to cooperation, the conditioning of a culture, how we’re nurtured in our homes, and how our environment conditions us. And also, people update their values and principles as they live in the world and communicate with other people and engage in public discourse, and even more foundational meta-ethical reasoning, or normative reasoning about what is valuable.

And historically, these sort of conversations haven’t mattered, or they don’t seem to matter, or they seem to just be things that people assume, and they don’t get that abstract or meta about their values and their views of value, and their views of ethics. It’s been said that, in some sense, on our path to superintelligence, we’re doing philosophy on a deadline, and that there are sort of deep and difficult questions about the nature of value, and how best to express value, and how to idealize ourselves as individuals and as a civilization.

So I guess I’m just throwing this all out there. Maybe we don’t necessarily have any concrete answers. But I’m just trying to think more about the kinds of practices and reasoning that should and can be expected to inform ideal governance. Should meta-ethics matter here, where it doesn’t seem to matter in public discourse? I still struggle with the tension between the ultimate value expression that might be happening through superintelligence and how our public discourse functions. I don’t know if you have any thoughts here.

Jade: No particular thoughts, aside from generally agreeing that I think meta-ethics is important. It is also confusing to me why public discourse doesn’t seem to track the things that seem important. This probably is something that we’ve struggled with and tried to address in various ways before, so I guess I’m always cognizant of trying to learn from ways in which we’ve tried to improve public discourse and tried to create spaces for this kind of conversation.

It’s a tricky one for sure, and thinking about better practices is probably the main way, at least, in which I engage with thinking about ideal governance. It’s often the case that people, when they look at the cluster of ideal governance work, think, “Oh, this is the thing that’s going to tell us what the answer is,” like what’s the constitution that we have to put in place, or whatever it is.

At least for me, the main chunk of thinking is mostly centered around process, and it’s mostly centered around what constitutes a productive optimal process, and some ways of answering this pretty hard question. And how do you create the conditions in which you can engage with that process without being distracted or concerned about things like competition? Those are kind of the main ways in which it seems obvious that we can fix the current environment so that we’re better placed to answer what is a very hard question.

Lucas: Coming to mind here is also this feature that you pointed out, I believe, that ideal governance is not figuring everything out in terms of our values, but rather creating the kind of civilization and space in which we can take the time to figure out ideal governance. So maybe ideal governance is not solving ideal governance, but creating a space to solve ideal governance.

Usually, ideal governance has to do with modeling human psychology, and how best to get human beings to produce value and live together harmoniously. But when we introduce AI, and human beings become potentially obsolete, then ideal governance potentially becomes something else. And I wonder if the role of, say, experimental cities with different laws, policies, and governing institutions might be helpful here.

Jade: Yeah, that’s an interesting thought. Another thought that came to mind as well, actually, is just kind of reflecting on how ill-equipped I feel thinking about this question. One funny trait of this field is that you have a slim number of philosophers, but especially in the AI strategy and safety space, it’s political scientists, international relations people, economists, and engineers, and computer scientists thinking about questions that other spaces have tried to answer in different ways before.

So when you mention psychology, that’s an example. Obviously, philosophy has something to say about this. But there’s also a whole space of people who have thought about how we govern things well across a number of different domains, and how we do a bunch of coordination and cooperation better, and stuff like that. And so it makes me reflect on the fact that there could be things that we have already learned that we should be reflecting a little bit more on, which we currently just don’t have access to because we don’t necessarily have the right people or the right domains of knowledge in this space.

Lucas: Like AI alignment has been attracting a certain crowd of researchers, and so we miss out on some of the insights that, say, psychologists might have about ideal governance.

Jade: Exactly, yeah.

Lucas: So moving along here from ideal governance, assuming we can agree on what ideal governance is, or if we can come to a place where civilization is stable and out of existential risk territory, and where we can sit down and actually talk about ideal governance, how do we begin to think about how to contribute to AI governance through working with or in private companies and/or government?

Jade: This is a good, and quite large, question. I think there are a couple of main ways in which I think about productive actions that either companies or governments can take, or productive things we can do with both of these actors to make them more inclined to do good things. On the companies point, the primary thing I think is important to work on, at least concretely in the near term, is to do something like establish the norm and expectation that, as developers of this important technology that will plausibly have a large impact on the world, they have a very large responsibility proportional to their ability to impact the development of this technology. By making the responsibility something that is tied to their ability to shape this technology, I think that, as a foundational premise or a foundational axiom to hold about why private companies are important, can get us a lot of relatively concrete things that we should be thinking about doing.

The simple way of saying it is something like: if you are developing the thing, you’re responsible for thinking about how that thing is going to affect the world. And establishing that, I think, is a somewhat obvious thing. But it’s definitely not how the private sector operates at the moment, in that there is an assumed limited responsibility irrespective of how your stuff is deployed in the world. What that actually means can be relatively concrete. Just looking at what these labs, or what these firms, have the ability to influence, and trying to understand how you want to change it.

So, for example, internal company policy on things like what kind of research is done and invested in, and how you allocate resources across, for example, safety and capabilities research, what particular publishing norms you have, and considerations around risks or benefits. Those are very concrete internal company policies that can be adjusted and shifted based on one’s idea of what they’re responsible for. The broad thing, I think, is to try to steer them in this direction of embracing, acknowledging, and then living up to this greater responsibility, as an entity that is responsible for developing the thing.

Lucas: How would we concretely change the incentive structure of a company that’s interested in maximizing profit towards this increased responsibility, say, in the domains that you just enumerated?

Jade: This is definitely probably one of the hardest things about this claim being translated into practice. I mean, it’s not the first time we’ve been somewhat upset at companies for doing things that society doesn’t agree with. We don’t have a great track record of changing the way that industries or companies work. That being said, I think if you’re outside of the company, there are particular levers that one can pull that can influence the way that a company is incentivized. And then I think we’ve also got examples of us being able to use these levers well.

For one, companies are constrained by the environment that a government creates, and governments also have the threat of things like regulation, or the threat of being able to pass certain laws or whatnot. The mere threat, historically, has done a fair amount in terms of incentivizing companies to just step up their game, because they don’t want regulation to kick in, which isn’t conducive to what they want to do, for example.

Users of the technology is a pretty classic one. It’s a pretty inefficient one, I think, because you’ve got to coordinate many, many different types of users, and actors, and consumers and whatnot, to have an impact on what companies are incentivized to do. But you have seen environmental practices in other types of industries that have been put in place as standards or expectations that companies should abide by because consumers across a long period of time have been able to say, “I disagree with this particular practice.” That’s an example of a trend that has succeeded.

Lucas: That would be like boycotting or divestment.

Jade: Yeah, exactly. And maybe a slightly more efficient one is focusing on things like researchers and employees. That is, if you are a researcher, if you’re an employee, you have levers over the employer that you work for. They need you, and you need them, and there’s that kind of dependency in that relationship. This is all a long way of saying that I think, yes, I agree it’s hard to change incentive structures of any industry, and maybe specifically so in this case because they’re very large. But I don’t think it’s impossible. And I think we need to think harder about how to use those well. I think the other thing that’s working in our favor in this particular case is that we have a unique set of founders or leaders of these labs or companies that have expressed pretty genuine sounding commitments to safety and to cooperativeness, and to serving the common good. It’s not a very robust strategy to rely on certain founders just being good people. But I think in this case, it’s kind of working in our favor.

Lucas: For now, yeah. There are probably already other interest groups who are less careful, who are actually just making policy recommendations right now, and we’re broadly not in on the conversation due to the way that we think about the issue. So in terms of government, what should we be doing? Yeah, it seems like there’s just not much happening.

Jade: Yeah. So I agree there isn’t much happening, or at least, relative to how much work we’re putting into trying to understand and engage with private labs, there isn’t much happening with government. So I think there needs to be more thought put into how we do that piece of engagement. I think a good thing that we could be trying to encourage more governments to do, for one, is investing in productive relationships with the technical community, and productive relationships with the researcher community, and with companies as well. At least in the US, it’s pretty adversarial between Silicon Valley firms and DC.

And that isn’t good for a number of reasons. And one very obvious reason is that there isn’t common information or common understanding of what’s going on, what the risks are, what the capabilities are, et cetera. One of the main critiques of governments is that they’re ill-equipped, in terms of access to knowledge and access to expertise, to be able to appropriately design things like bills, or things like pieces of legislation or whatnot. And I think that’s also something that governments should take responsibility for addressing.

So those are kind of low hanging fruit. There’s a really tricky balance that I think governments will need to strike, which is the balance between avoiding over-hasty, ill-informed regulation and still engaging. A lot of my work looking at history shows that the main way in which we’ve achieved substantial regulation is as a result of big public, largely negative events to do with the technology screwing something up, or the technology causing a lot of fear, for whatever reasons. And so there’s a very sharp spike in public fear or public concern, and then the government kicks into gear. And I think that’s not a good dynamic in terms of forming nuanced, well-considered regulation and governance norms. Avoiding that outcome is important, but it’s also important that governments do engage and track how this is going, and particularly track where things like company policy and industry-wide efforts are not going to be sufficient. So when do you start translating some of the more soft law, if you will, into actual hard law?

That will be a very tricky timing question, I think, for governments to grapple with. But ultimately, it’s not sufficient to have companies governing themselves. You’ll need to be able to concretize it into government-backed efforts and initiatives and legislation and bills. My strong intuition is that it’s not quite the right time to roll out object level policies. And so the main task for governments will be just to position themselves to do that well when the time is right.

Lucas: So what’s coming to my mind here is I’m thinking about YouTube compilations of congressional members of the United States and senators asking horrible questions to Mark Zuckerberg and the CEO of, say, Google. They just don’t understand the issues. The United States is currently not really thinking that much about AI, and especially transformative AI. Whereas China, it seems, has taken a step in this direction and is making massive governmental investments. So what can we say about this seeming difference? And the question is, what are governments to do in this space? Different governments are paying attention at different levels.

Jade: Some governments are more technologically savvy than others, for one. So I’d push back on the US not … They’re paying attention to different things. So, for example, the Department of Commerce put out a notice to the public indicating that they’re exploring putting in place export controls on a cluster of emerging technologies, including a fair number of AI relevant technologies. The point of export controls is to do something like ensure that adversaries don’t get access to critical technologies, because if they do, then that could undermine national security and/or the domestic industrial base. The reason why export controls are concerning is because, for one, they’re a relatively outdated tool. They used to work relatively well when you were targeting specific kinds of weapons technologies, or basically things that you could touch and see, and the restriction of them from being on the market by the US meant that a fair amount of it wouldn’t be able to be accessed by other folks around the world. But you’ve seen export controls be increasingly less effective the more that we’ve tried to apply them to things like cryptography, where it’s largely software based. And so trying to use export controls, which are applied at the national border, is a very tricky thing to make effective.

So you have the US paying attention to the fact that they think that AI is a national security concern, at least in this respect, enough to indicate that they’re interested in exploring export controls. I think it’s unlikely that export controls are going to be effective at achieving the goals that the US wants to pursue. But I think export controls are also indicative of a world that we don’t want to slide into, which is a world where you have rivalrous economic blocs, where you’re sort of protecting your own base, and you’re not contributing to the kind of global commons of progressing this technology.

Maybe it goes back to what we were saying before, in that if you’re not engaged in the governance, the governance is going to happen anyway. This is an example of activity that is going to happen anyway. I think people assume now, probably rightfully so, that the US government is not going to be very effective because they are not technically literate. In general, they are sort of relatively slow moving. They’ve got a bunch of other problems that they need to think about, et cetera. But I don’t think it’s going to take very, very long for the US government to start to seriously engage. I think the thing that is worth trying to influence is what they do when they start to engage.

If I had a policy in mind that I thought was robustly good that the US government should pass, then that would be the more proactive approach. It seems possible that if we think about this hard enough, there could be robustly good things that the US government could do, that could be good to be proactive about.

Lucas: Okay, so there’s this sort of general sense that we’re pretty heavy on academic papers because we’re really trying to understand the problem, and the problem is so difficult, and we’re trying to be careful and sure about how we progress. And it seems like it’s not clear if there is much room, currently, for direct action, given our uncertainty about specific policy implementations. There are some shorter term issues. And sorry to say shorter term issues. But, by that, I mean automation and maybe lethal autonomous weapons and privacy. These things, we have a clearer sense of, at least about potential things that we can start doing. So I’m just trying to get a sense here from you: on top of these efforts to try to understand the issues more, and on top of the efforts that, for example, 80,000 Hours has contributed by working to place aligned persons in various private organizations, what else can we be doing? What would you like to see more being done on here?

Jade: I think this is on top of just more research; that would be the first thing that comes to mind. People thinking hard about it seems like a thing that I want a lot more of, in general. But on top of that, what you mentioned, the placing of people, maybe fits into this broader category of things that seem good to do, which is investing in building our capacity to influence the future. That’s quite a general statement. But it’s something like: it takes a fair amount of time to build up influence, particularly in certain institutions, like governments, like international institutions, et cetera. And so investing in that early seems good. And doing things like trying to encourage value aligned, sensible people to climb the ladders that they need to climb in order to get to positions of influence, that generally seems like a good and useful thing.

The other thing that comes to mind as well is putting out more accurate information. One specific version of things that we could do here is this: there are currently a fair number of inaccurate, or not well justified, memes floating around that are informing the way that people think. For example, that the US and China are in a race. Or a more nuanced one is something like: inevitably, you’re going to have a safety-performance trade-off. And those are not great memes, in the sense that they don’t seem to be conclusively true. But they’re also not great in that they put you in a position of concluding something like, “Oh, well, if I’m going to invest in safety, I’ve got to be an altruist, or I’m going to trade off my competitive advantage.”

And so identifying what those bad ones are, and countering those, is one thing to do. Better memes could be something like: those who are developing this technology are responsible for thinking through its consequences. Or something even as simple as: governance doesn’t mean government, and it doesn’t mean regulation. Because I think you’ve got a lot of firms who are terrified of regulation, and so they won’t engage in this governance conversation because of it. So there are some really simple things I think we could do, just to make the public discourse both more accurate and more conducive to things being done that are good for the future.

Lucas: Yeah, here I’m also just seeing the tension between the appropriate kinds of memes that inspire, I guess, a lot of the thinking within the AI alignment community and the x-risk community, versus what is actually useful or spreadable for the general public, adding in here ways in which accurate information can be info-hazardy. I think broadly in our community we have the common good principle, and building an awesome future for all sentient creatures, and I am curious to know how spreadable those memes are.

Jade: Yeah, the spreadability of memes is a thing that I want someone to investigate more. The things that make memes not spreadable, for example, are just things that are, at a very simple level, quite complicated to explain, or are somewhat counterintuitive, so you can’t pump the intuition very easily. Particularly things that require you to weigh one set of values that you care about against another set of values. Anything that pits nationalism against cosmopolitanism, I think, is a tricky one, because you have some subset of people, the ones that you and I talk to the most, who are very cosmopolitan. But you also have a fair amount of people who care about the common good principle, in some sense, but also care about their nation in a fairly large sense as well.

So there are things that make certain memes less good or less spreadable. And one key thing will be to figure out which ones are actually good in the true sense, and good in the pragmatic to spread sense.

Lucas: Maybe there’s a sort of research program here, where psychologists and researchers can explore focus groups on the best spreadable memes, which reflect a lot of the core and most important values that we see within AI alignment, and EA, and x-risk.

Jade: Yeah, that could be an interesting project. I think also in AI safety, or in the AI alignment space, people are framing safety in quite different ways. One framing is that thinking about safety is part of what it means to be a good AI person. That's an example of one that I've seen take off a little bit more lately, because it's an explicit attempt to mainstream the thing. That's a framing, or a meme, or whatever you want to call it. And you know there are pros and cons of that. The pro would be, plausibly, that it's just more mainstream. And I think you've seen evidence of that being the case, because more people are inclined to say, “Yeah, I agree. I don’t want to build a thing that kills me if I want it to get coffee.” But you're not going to have a lot of conversations about the magnitude of risks that you actually care about. So that's maybe a con.

There’s maybe a bunch of stuff to do in this general space of thinking about how to better frame the kind of public facing narratives of some of these issues. Realistically, memes are going to fill the space. People are going to talk about it in certain ways. You might as well try to make it better, if it’s going to happen.

Lucas: Yeah, I really like that. That’s a very good point. So let’s talk here a little bit about technical AI alignment. So in technical AI alignment, the primary concerns are around the difficulty of specifying what humans actually care about. So this is like capturing human values and aligning with our preferences and goals, and what idealized versions of us might want. So, so much of AI governance is thus about ensuring that this AI alignment process we engage in doesn’t skip too many corners. The purpose of AI governance is to decrease risks, to increase coordination, and to do all of these other things to ensure that, say, the benefits of AI are spread widely and robustly, that we don’t get locked into any negative governance systems or value systems, and that this process of bringing AIs in alignment with the good doesn’t have researchers, or companies, or governments skipping too many corners on safety. In this context, and this interplay between governance and AI alignment, how much of a concern are malicious use cases relative to the AI alignment concerns within the context of AI governance?

Jade: That’s a hard one to answer, both because there is a fair amount of uncertainty around how you discuss the scale of the thing. But also because I think there are some interesting interactions between these two problems. For example, if you’re talking about how AI alignment interacts with this AI governance problem. You mentioned before AI alignment research is, in some ways, contingent on other things going well. I generally agree with that.

For example, it depends on AI safety taking hold in research cultures and important labs. It requires institutional buy-in and coordination between institutions. It requires the mitigation of race dynamics so that you can actually allocate resources towards AI alignment research. All those things. And so in some ways, that particular problem being solved is contingent on us doing AI governance well. But then, to the point of how big malicious use risk is relative to AI alignment, I think in some ways that's hard to answer. But in some ideal world, you could sequence the problems that you solve. If you solve the AI alignment problem first, then AI governance research basically becomes a much narrower space: addressing how an aligned AI could still cause problems, for example through the concentration of power or the concentration of economic gains. And so you need to think about things like the windfall clause to distribute that, or whatever it is. And you also need to think about the transition to creating an aligned AI, what could be messy in that transition, and how you avoid public backlash so that you can actually see the fruits of having solved this AI alignment problem.

So that becomes more the kind of nature of the thing that AI governance research becomes, if you assume that you’ve solved the AI alignment problem. But if we assume that, in some world, it’s not that easy to solve, and both problems are hard, then I think there’s this interaction between the two. In some ways, it becomes harder. In some ways, they’re dependent. In some ways, it becomes easier if you solve bits of one problem.

Lucas: I generally model the risks of malicious use cases as being less than the AI alignment stuff.

Jade: I mean, I'm not sure I agree with that. But two things I could say to that. One intuition is something like: you have to be a pretty awful person to really want to use a very powerful system to cause terrible ends. And it seems more plausible that people will just do it by accident, or unintentionally, or inadvertently.

Lucas: Or because the incentive structures aren’t aligned, and then we race.

Jade: Yeah. And then the other way to sort of support this claim is, if you look at biotechnology and bio-weapons specifically, bio-security and bio-terrorism issues –– so the malicious use equivalent –– those have been far less frequent compared to just bio-safety issues, which are the equivalent of accident risks. So people causing unintentional harm because we aren't treating biotechnology safely, that's caused a lot more problems, at least in terms of frequency, compared to people actually trying to use it for terrible ends.

Lucas: Right, but don’t we have to be careful here with the strategic properties and capabilities of the technology, especially in the context in which it exists? Because there’s nuclear weapons, which are sort of the larger more absolute power imbuing technology. There has been less of a need for people to take bio-weapons to that level. You know? And also there’s going to be limits, like with nuclear weapons, on the ability of a rogue actor to manufacture really effective bio-weapons without a large production facility or team of research scientists.

Jade: For sure, yeah. And there’s a number of those considerations, I think, to bear in mind. So it definitely isn’t the case that you haven’t seen malicious use in bio strictly because people haven’t wanted to do it. There’s a bunch of things like accessibility problems, and tacit knowledge that’s required, and those kinds of things.

Lucas: Then let’s go ahead and abstract away malicious use cases, and just think about technical AI alignment, and then AI/AGI governance. How do you see the relative importance of AI and AGI governance, and the process of AI alignment that we’re undertaking? Is solving AI governance potentially a bigger problem than AI alignment research, since AI alignment research will require the appropriate political context to succeed? On our path to AGI, we’ll need to mitigate a lot of the race conditions and increase coordination. And then even after we reach AGI, the AI governance problem will continue, as we sort of explored earlier that we need to be able to maintain a space in which humanity, AIs, and all earth originating sentient creatures are able to idealize harmoniously and in unity.

Jade: I don't think it's possible to actually assess them at this point, in terms of how much we understand this problem. I have a bias towards saying that AI governance is the harder problem, because I'm embedded in it and see it a lot more. And ways to support that claim are things we've talked about: AI alignment going well, or happening at all, is contingent on a number of other factors that AI governance is trying to solve. So the social, political, and economic context needs to be right in order for that to actually happen, and then in order for that to have an impact.

There are some interesting things that are made maybe easier by AI alignment being solved, or somewhat solved, if you are thinking about the AI governance problem. In fact, just the general cluster of AI being safer and more robust and more transparent, or whatever, makes certain AI governance challenges easier. The really obvious example here that comes to mind is the verification problem. The inability to verify what certain systems are designed to do and will do causes a bunch of governance problems: arms control agreements are very hard; establishing trust between parties to cooperate and coordinate is very hard.

If you happen to be able to solve some of those problems in the process of trying to tackle this AI alignment problem, that makes AI governance a little bit easier. I'm not sure which direction it cashes out, in terms of which problem is more important. I'm certain that there are interactions between the two, and I'm pretty certain that one depends on the other, to some extent. So it becomes really hard to govern the thing if you can't align the thing. But it is also probably the case that by solving some of the problems in one domain, you can make the other problem a little bit more tractable and easier.

Lucas: So now I’d like to get into lethal autonomous weapons. And we can go ahead and add whatever caveats are appropriate here. So in terms of lethal autonomous weapons, some people think that there are major stakes here. Lethal autonomous weapons are a major AI enabled technology that’s likely to come on the stage soon, as we make some moderate improvements to already existing technology, and then package it all together into the form of a lethal autonomous weapon. Some take the view that this is a crucial moment, or that there are high stakes here to get such weapons banned. The thinking here might be that by demarcating unacceptable uses of AI technology, such as for autonomously killing people, and by showing that we are capable of coordinating on this large and initial AI issue, that we might be taking the first steps in AI alignment, and the first steps in demonstrating our ability to take the technology and its consequences seriously.

And so we mentioned earlier how there's been a lot of thinking, but not much action. This seems to be an initial place where we can take action. We don't need to keep delaying our direct action and real world participation. So if we can't get a ban on autonomous weapons, it would seem that we have less hope for coordinating on more difficult issues. And lethal autonomous weapons may exacerbate global conflict by increasing skirmishing at borders, decreasing the cost of war, dehumanizing killing, taking the human element out of death, et cetera.

And other people disagree with this. Other people might argue that banning lethal autonomous weapons isn't necessary in the long game. It's not, as we're framing it, a high stakes thing, because this developmental step in the technology is not really crucial for coordination, or for political and military stability. Or that coordination later would be born of other things, and that this would just be another new military technology without much impact. So I'm curious to gather what views you, or FHI, or the Center for the Governance of AI might have on autonomous weapons. Should there be a ban? Should the AI alignment community be doing more about this? And if not, why?

Jade: In terms of caveats, I’ve got a lot of them. So I think the first one is that I’ve not read up on this issue at all, followed it very loosely, but not nearly closely enough, that I feel like I have a confident well-informed opinion.

Lucas: Can I ask why?

Jade: Mostly because of bandwidth issues. It's not because I have categorized it as something not worth engaging in. I'm actually pretty uncertain about that. The second caveat is, I definitely don't claim to speak on behalf of anyone but myself in this case. The Center for the Governance of AI doesn't have a particular position on this, nor does FHI.

Lucas: Would you say that, for the Center for the Governance of AI, it's bandwidth issues again? Or would it be because it's been de-prioritized?

Jade: The main thing is bandwidth. Also, I think the main reason why it's probably been de-prioritized, at least subconsciously, has been the framing of focusing on things that are neglected by folks around the world. It seems like there are people, at least with somewhat good intentions, tentatively engaged in the LAWS (lethal autonomous weapons) discussion. And so within that frame, I think it's been de-prioritized because it's not obviously neglected compared to other things that aren't getting any focus at all.

With those things in mind, I could see a pretty decent case for investing more effort in engaging in this discussion, at least compared to what we currently have. I guess it's hard to tell, compared to alternatives of how we could be spending those resources, given it's such a resource-constrained space, in terms of people working in AI alignment, or just the bandwidth of this community in general. So briefly, I think we've talked about this idea that there's a fair amount of path dependency in the way that institutions and norms are built up. And if this is one of the first spaces, with respect to AI capabilities, where we're going to be driving towards some attempt at international norms, or establishing international institutions that could govern this space, then that's going to be relevant in a general sense. And specifically, it's going to be relevant for defense and security related concerns in the AI space.

And so I think you both want to engage because there’s an opportunity to seed desirable norms and practices and process and information. But you also possibly want to engage because there could be a risk that bad norms are established. And so it’s important to engage, to prevent it going down something which is not a good path in terms of this path dependency.

Another reason that is maybe worth thinking through, in terms of making a case for engaging more, is that applications of AI in the military and defense spaces are possibly among the most likely to cause substantial disruption in the near-ish future, and could be an example of what I'd call high stakes concerns in the future. And you can talk about AI and its impact on various aspects of the military domain where it could pose substantial risks –– for example, cyber escalation, or destabilizing nuclear security. Those would be examples where military and AI come together and you can have bad outcomes that we really care about. And so for the same reason, engaging specifically in any discussion that touches on military and AI concerns could be important.

And then the last one that comes to mind is the one that you mentioned. This is an opportunity to basically practice doing this coordination thing. And there are various things that are worth practicing or attempting. For one, I think even just observing how these discussions pan out is going to tell you a fair amount about how important actors think about the trade-offs of using AI versus moving towards safer outcomes or governance processes. And then our ability to corral interest around good values or appropriate norms, or whatnot –– that's a good test of our ability to generally coordinate when we have some of those trade-offs around, for example, military advantage versus safety. It gives you some insight into how we could be dealing with similarly shaped issues.

Lucas: All right. So let’s go ahead and bring it back here to concrete actionable real world things today, and understanding what’s actually going on outside of the abstract thinking. So I’m curious to know here more about private companies. At least, to me, they largely seem to be agents of capitalism, like we said. They have a bottom line that they’re trying to meet. And they’re not ultimately aligned with pro-social outcomes. They’re not necessarily committed to ideal governance, but perhaps forms of governance which best serve them. And as we sort of feed aligned people into tech companies, how should we be thinking about their goals, modulating their incentives? What does DeepMind really want? Or what can we realistically expect from key players? And what mechanisms, in addition to the windfall clause, can we use to sort of curb the worst aspects of profit-driven private companies?

Jade: If I knew what DeepMind actually wanted, or what Google actually thought, we’d be in a pretty different place. So a fair amount of what we’ve chatted through, I would echo again. So I think there’s both the importance of realizing that they’re not completely divorced from other people influencing them, or other actors influencing them. And so just thinking hard about which levers are in place already that actually constrain the action of companies, is a pretty good place to start, in terms of thinking about how you can have an impact on their activities.

There's this common way of talking about big tech companies, which is that they can do whatever they want, they run the world, and we've got no way of controlling them. The reality is that they are consistently constrained by a fair number of things, because they are agents of capitalism, as you described, and because they have to respond to various things within that system. So we've mentioned things before: governments have levers, consumers have levers, employees have levers. And so I think focusing on what those are is a good place to start. Another thing that comes to mind is that there's something here around taking a very optimistic view of how companies could behave. Or at least this is the way that I prefer to think about it: you both need to be excited, and motivated, and think that companies can change, and create the conditions in which they can. But one also then needs to keep a healthy dose of cynicism, in some ways.

On both of these, I think the first one, I really want the public discourse to turn more towards the direction of, if we assume that companies want to have the option of demonstrating pro-social incentives, then we should do things like ensure that the market rewards them for acting in pro-social ways, instead of penalizing their attempts at doing so, instead of critiquing every action that they take. So, for example, I think we should be making bigger deals, basically, of when companies are trying to do things that at least will look like them moving in the right direction, as opposed to immediately critiquing them as ethics washing, or sort of just paying lip service to the thing. I want there to be more of an environment where, if you are a company, or you’re a head of a company, if you’re genuinely well-intentioned, you feel like your efforts will be rewarded, because that’s how incentive structures work, right?

And then on the second point, in terms of being realistic about the fact that you can't just wish companies into being good, that's where I think things like public institutions and civil society groups become important: ensuring that there are consistent forms of pressure, making sure companies feel like their actions are rewarded if pro-social, but also that there are ways of spotting when they are speaking as if they're pro-social but acting differently.

So I think everyone’s kind of basically got a responsibility here, to ensure that this goes forward in some kind of productive direction. I think it’s hard. And we said before, you know, some industries have changed in the past successfully. But that’s always been hard, and long, and messy, and whatnot. But yeah, I do think it’s probably more tractable than the average person would think, in terms of influencing these companies to move in directions that are generally just a little bit more socially beneficial.

Lucas: Yeah. I mean, also, the companies are generally made up of fairly reasonable, well-intentioned people. I'm not entirely pessimistic. There are just a lot of people who sit at desks within their structures. So yeah, thank you so much for coming on, Jade. It's really been a pleasure.

Jade: Likewise.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell

Nuclear weapons testing is mostly a thing of the past: The last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate? 

In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).

The CTBT prohibits all signatories from testing nuclear weapons of any size (North Korea, India, and Pakistan are not signatories). But the CTBT never actually entered into force, in large part because the U.S. has still not ratified it, though Russia did.

The existence of the treaty, even without ratification, has been sufficient to establish the norms and taboos necessary to ensure an international moratorium on nuclear weapons tests for a couple decades. But will that last? Or will the U.S., Russia, or China start testing nuclear weapons again? 

This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.

Topics discussed in this episode: 

  • The validity of the U.S. allegations –– is Russia really testing weapons?
  • The International Monitoring System — How effective is it if the treaty isn’t in effect?
  • The modernization of U.S/Russian/Chinese nuclear arsenals and what that means
  • Why there’s a push for nuclear testing
  • Why opposing nuclear testing can help ensure the US maintains nuclear superiority 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel Conn: Welcome to another episode of the FLI Podcast. I’m your host Ariel Conn, and the big question I want to delve into this month is: will the U.S. or Russia or China start testing nuclear weapons again? Now, at the end of May, the Director of the U.S. Defense Intelligence Agency, the DIA, gave a statement about Russian and Chinese nuclear modernization trends. I want to start by reading a couple short sections of his speech.

About Russia, he said, “The United States believes that Russia probably is not adhering to its nuclear testing moratorium in a manner consistent with the zero-yield standard. Our understanding of nuclear weapon development leads us to believe Russia’s testing activities would help it to improve its nuclear weapons capabilities.”

And then later in the statement that he gave, he said, “U.S. government information indicates that China is possibly preparing to operate its test site year-round, a development that speaks directly to China’s growing goals for its nuclear forces. Further, China continues to use explosive containment chambers at its nuclear test site and Chinese leaders previously joined Russia in watering down language in a P5 statement that would have affirmed a uniform understanding of zero-yield testing. The combination of these facts and China’s lack of transparency on their nuclear testing activities raises questions as to whether China could achieve such progress without activities inconsistent with the Comprehensive Nuclear-Test-Ban Treaty.”

Now, we’ve already seen this year that the Intermediate-Range Nuclear Forces Treaty, the INF, has started to falter. The U.S. seems to be trying to pull itself out of the treaty and now we have reason possibly to be a little worried about the Comprehensive Test-Ban Treaty. So to discuss what the future may hold for this test ban treaty, I am delighted to be joined today by Jeffrey Lewis and Alex Bell.

Jeffrey is the Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies at the Middlebury Institute. Before coming to CNS, he was the Director of the Nuclear Strategy and Nonproliferation Initiative at the New America Foundation and prior to that, he worked with the ADAM Project at the Belfer Center for Science and International Affairs, the Association of Professional Schools of International Affairs, the Center for Strategic and International Studies, and he was once a Desk Officer in the Office of the Under Secretary of Defense for Policy. But he’s probably a little bit more famous as being the founder of armscontrolwonk.com, which is the leading blog and podcast on disarmament, arms control, and nonproliferation.

Alex Bell is the Senior Policy Director at the Center for Arms Control and Non-Proliferation. Previously, she served as a Senior Advisor in the Office of the Under Secretary of State for Arms Control and International Security. Before joining the Department of State in 2010, she worked on nuclear policy issues at the Ploughshares Fund and the Center for American Progress. Alex is on the board of the British American Security Information Council and she was also a Peace Corps volunteer. And she is fairly certain that she is Tuxedo, North Carolina’s only nuclear policy expert.

So, Alex and Jeffrey, thank you so much for joining me today.

Jeffrey Lewis: It’s great to be here.

Ariel Conn: Let’s dive right into questions. I was hoping one of you or maybe both of you could just sort of give a really quick overview or a super brief history of the Comprehensive Nuclear-Test-Ban Treaty –– especially who has signed and ratified, and who hasn’t signed and/or ratified with regard to the U.S., Russia, and China.

Jeffrey Lewis: So, there were a number of treaties during the Cold War that restricted nuclear explosions, so you had to do them underground. But in the 1990s, the Clinton administration helped negotiate a global ban on all nuclear explosions. So that’s what the Comprehensive Nuclear-Test-Ban Treaty is. The comprehensive part is, you can’t do any explosions of any yield.

And a curious feature of this agreement is that for the treaty to come into force, certain countries must sign and ratify the treaty. One of those countries was Russia, which has both signed and ratified it. Another country was the United States. We have signed it, but the Senate did not ratify it in 1999, and I think we’re still waiting. China has signed it and basically indicated that they’ll ratify it only when the United States does. India has not signed and not ratified, and North Korea and Iran –– not signed and not ratified.

So it’s been 23 years. There’s a Comprehensive Test-Ban Treaty Organization, which is responsible for getting things ready to go when the treaty is ready; I’m actually here in Vienna at a conference that they’re putting on. But 23 years later, the treaty is still not in force even though we haven’t had any nuclear explosions in the United States or Russia since the end of the Cold War.

Ariel Conn: Yeah. So my understanding is that even though we haven't actually ratified this and it's not in force, most countries, with maybe one or two exceptions, do actually abide by it. Is that true?

Alex Bell: Absolutely. There are 184 member states to the treaty, 168 total ratifications, and the only country to conduct explosive tests in the 21st century is North Korea. So while it is not yet in force, the moratorium against explosive testing is incredibly strong.

Ariel Conn: And do you remain hopeful that that’s going to stay the case, or do comments from people like Lieutenant General Ashley have you concerned?

Alex Bell: It's a little concerning that the nature of these accusations that came from Lieutenant General Ashley didn't seem to follow the pattern of how the U.S. government has historically talked about compliance issues that it has seen with various treaties and obligations. We have yet to hear a formal statement from the Department of State, which actually has the responsibility to manage compliance issues, nor have we heard from the main part of the Intelligence Community, the Office of the Director of National Intelligence. It's a bit strange, and it has had people thinking: what was the purpose of this accusation, if not to sort of move us away from the test ban?

Jeffrey Lewis: I would add that during the debate inside the Trump administration, when they were writing what was called the Nuclear Posture Review, there was a push by some people for the United States to start conducting nuclear explosions again, something it had not done since the early 1990s. So on the one hand, it's easy to see this as a kind of straightforward intelligence matter: Are the Russians doing it or are they not?

But on the other hand, there has always been a group of people in the United States who are upset about the test moratorium, and don’t want to see the test ban ratified, and would like the United States to resume nuclear testing. And those people have, since the 1990s, always pointed at the Russians, claiming that they must be doing secret tests and so we should start our own.

And the kind of beautiful irony of this is that when you read articles from Russians who want to start testing –– because, you know, their labs are like ours, they want to do nuclear explosions –– they say, “The Americans are surely getting ready to cheat. So we should go ahead and get ready to go.” So you have these people pointing fingers at one another, but I think the reality is that there are too many people in the United States and Russia who’d be happy to go back to a world in which there was a lot of nuclear testing.

Ariel Conn: And so do we have reason to believe that the Russians might be testing low-yield nuclear weapons or does that still seem to be entirely speculative?

Alex Bell: I'll let Jeffrey go into some of the historical concerns people have had about the Russian program, but I think it's important to note that the Russians immediately denied these accusations, with the Foreign Minister, Lavrov, actually describing them as delusional, and the Deputy Foreign Minister, Sergei Ryabkov, affirming that they're in full and absolute compliance with the treaty and the unilateral moratorium on nuclear testing that is also in place until the treaty enters into force. He also penned an op-ed a number of years ago affirming that the Russians believe that any yield on any test would violate the agreement.

Jeffrey Lewis: Yeah, you know, really from the day the test ban was signed, there have been a group of people in the United States who have argued that the U.S. and Russia have different definitions of zero –– which I don’t find very credible, but it’s a thing people say –– and that the Russians are using this to conduct very small nuclear explosions. This literally was a debate that tore the U.S. Intelligence Community apart during the Clinton administration and these fears led to a really embarrassing moment.

There was a seismic event, some ground motion, some shaking near the Russian nuclear test site in 1997, and the Intelligence Community decided, “Aha, this is it. This is a nuclear test. We’ve caught the Russians,” and Madeleine Albright démarched Moscow for conducting a clandestine nuclear test in violation of the CTBT, which it had just signed, and it turned out it was an earthquake out in the ocean.

So there has been a group of people who have been making this claim for more than 20 years. I have never seen any evidence that would persuade me that this is anything other than something they say because they just don't trust the Russians. I suppose it is possible –– even a stopped watch is right twice a day. But I think before we take any actions, it would behoove us to figure out if there are any facts behind this. Because when you've heard the same story for 20 years with no evidence, it's like the boy who cried wolf. It's kind of hard to believe.

Alex Bell: And that gets back to the sort of strange way that this accusation was framed: not by the Department of State; It’s not clear that Congress has been briefed about it; It’s not clear our allies were briefed about it before Lieutenant General Ashley made these comments. Everything’s been done in a rather unorthodox way and for something as serious as a potential low-yield nuclear test, this really needs to be done according to form.

Jeffrey Lewis: It’s not typical if you’re going to make an accusation that the country is cheating on an arms control treaty to drive a clown car up and then have 15 clowns come out and honk some horns. It makes it harder to accept whatever underlying evidence there may be if you choose to do it in this kind of ridiculous fashion.

Alex Bell: And that would be true for any administration, but particularly an administration that has now made a habit of getting out of agreements.

Jeffrey Lewis: What I loved about the statement that the Defense Intelligence Agency released: after the DIA director made this statement –– and it's really worth watching, because he reads the statement, which is super inflammatory –– there was a reporter in the audience who had been given his remarks in advance. So someone clearly leaked the testimony to make sure there was a reporter there, and the reporter asks a question, and then Ashley kind of freaks out and walks back what he said.

So DIA then releases a statement where they double down and say, “No, no, no, he really meant it,” but it starts with the craziest sentence I've ever seen, which is “The United States government, including the Intelligence Community, assesses” –– which, if you know anything about the way the U.S. government works, is insane, because only the Intelligence Community is supposed to assess. This implies that John Bolton had an assessment, and Mike Pompeo had an assessment. And just the comical manner in which it was handled makes it very hard to take seriously, or to see it as anything other than a nakedly partisan assault on the test moratorium and the test ban.

Ariel Conn: So I want to follow up about what the implications are for the test ban, but I want to go back real quick just to some of the technical side of identifying a low-yield explosion. I actually have a background in seismology, so I know that it’s not that big of a challenge for people who study seismic waves to recognize the difference between an earthquake and a blast. And so I’m wondering how small a low yield test actually is. Is it harder to identify, or are there just not seismic stations that the U.S. has access to, or is there something else involved?

Jeffrey Lewis: Well, so these are called hydronuclear experiments. They are incredibly small. In the U.S., they are on the order of something like four pounds of explosive, so basically less explosion than the actual conventional explosives that are used to detonate the nuclear weapon. Some people think the Russians have a slightly bigger definition that might go up to 100 kilograms, but these are mouse farts. They are so small that unless you have a seismic station sitting right next to it, you would never know.

In a way, I think that’s a perfect example of why we’re so skeptical because when the test ban was negotiated, there was this giant international monitoring system put into place. It is not just seismic stations, but it is hydroacoustic stations to listen underwater, infrasound stations to listen for explosions in the air, radionuclide stations to detect any radioactive particles that happen to escape in the event of a test. It’s all of this stuff and it is incredibly sensitive and can detect incredibly small explosions down to about 1,000 tons of explosive and in many cases even less.

And so what's happened with the allegations against the Russians is that every time we have better monitoring and it's clear that they're not doing the bigger things, the allegations shift to ever smaller things. So, again, the way in which it was rolled out was kind of comical and caused us, at least me, to have some doubts about it. It is also the case that the nature of the allegation –– that it's these tiny, tiny, tiny, tiny experiments, which U.S. scientists, by the way, have said they don't have any interest in doing because they don't think they are useful –– is almost like the perfect accusation, and so that also, to me, is a little bit suspicious in terms of the motives of the people claiming this is happening.

Alex Bell: I think it’s also important to remember when dealing with verification of treaties, we’re looking for things that would be militarily significant. That’s how we try to build the verification system: that if anybody tried to do anything militarily significant, we’d be able to detect that in enough time to respond effectively and make sure the other side doesn’t gain anything from the violation.

So you could say that experiments like this that our own scientists don’t think are useful are not actually militarily significant, so why are we bringing it up? Do we think that this is a challenge to the treaty overall or do we not like the nature of Russia’s violations? And further, if we’re concerned about it, we should be talking to the Russians instead of about them.

Jeffrey Lewis: I think that is actually the most important point that Alex just made. If you actually think that the Russians have a different definition of zero, then go talk to them and get the same definition. If you think that the Russians are conducting these tests, then talk to the Russians and see if you can get access. If the United States were to ratify the test ban and the treaty were to come into force, there is a provision for the U.S. to ask for an inspection. It’s just a little bit rich to me that the people making this allegation are also the people who refuse to do anything about it diplomatically. If they were truly worried, they’d try to fix the problem.

Ariel Conn: Regarding the fact that the Test-Ban Treaty isn’t technically in force, are a lot of the verification processes still essentially in force anyway?

Alex Bell: The International Monitoring System, as Jeff pointed out, was just sort of in its infancy when the treaty was negotiated, and now it's become this marvel of modern technology capable of detecting tests at even very low yields. And so it is up and running and functioning. It was monitoring the various North Korean nuclear tests that have taken place in this century. It also was doing a lot of additional science, like tracking radioactive particulates that came from the Fukushima disaster back in 2011.

So it is functioning. It is giving readings to any party to the treaty, and it is particularly useful right now to have an independent international source of information of this kind. They specifically did put out a very brief statement following this accusation from the Defense Intelligence Agency saying that they had detected nothing that would indicate a test. So that’s about as far as I think they could get, as far as a diplomatic equivalent of, “What are you talking about?”

Jeffrey Lewis: I Googled it because I don’t remember it off the top of my head, but it’s 321 monitoring stations and 16 laboratories. So the entire monitoring system has been built out and it works far better than anybody thought it would. It’s just that once the treaty comes into force, there will be an additional provision, which is: in the event that the International Monitoring System, or a state party, has any reason to think that there is a violation, that country can request an inspection. And the CTBTO trains to send people to do onsite inspections in the event of something like this. So there is a mechanism to deal with this problem. It’s just that you have to ratify the treaty.

Ariel Conn: So what are the political implications, I guess, of the fact that the U.S. has not ratified this, but Russia has –– and that it’s been, I think you said 23 years? It sounds like the U.S. is frustrated with Russia, but is there a point at which Russia gets frustrated with the U.S.?

Jeffrey Lewis: I’m a little worried about that, yeah. The reality of the situation is I’m not sure that the United States can continue to reap the benefits of this monitoring system and the benefits of what I think Alex rightly described as a global norm against nuclear testing and sort of expect everybody else to restrain themselves while in the United States we refuse to ratify the treaty and talk about resuming nuclear testing.

And so I don’t think it’s a near term risk that the Russians are going to resume testing, but we have seen… We do a lot of work with satellite images at the Middlebury Institute and the U.S. has undertaken a pretty big campaign to keep its nuclear test site modern and ready to conduct a nuclear test on as little as six months’ notice. In the past few years, we’ve seen the Russians do the same thing.

For many years, they neglected their test site. It was in really poor shape and starting in about 2015, they started putting money into it in order to improve its readiness. So it’s very hard for us to say, “Do as we say, not as we do.”

Alex Bell: Yeah, I think it's also important to realize that if the United States resumes testing, everyone will resume testing. The guardrails will be completely off, and that doesn't make any sense, because having the most technologically advanced and capable nuclear weapons infrastructure like we do, we benefit from a global ban on explosive testing. It means we're sort of locking in our own superiority.

Ariel Conn: So we're putting that at risk. So I want to expand the conversation from just Russia and the U.S. to pull China in as well, because the talk that Ashley gave was also about China's modernization efforts. And he made some comments that sounded almost like maybe China is considering testing as well. I'm curious what your take on his China comments is.

Jeffrey Lewis: I’m going to jump in and be aggressive on this one because my doctoral dissertation was on the history of China’s nuclear weapons program. The class I teach at the Middlebury Institute is one in which we look at declassified U.S. intelligence assessments and then we look at Chinese historical materials in order to see how wrong the intelligence assessments were. This specifically covers U.S. assessments of China’s nuclear testing, and the U.S. just has an awful track record on this topic.

I actually interviewed the former head of China’s nuclear weapons program once, and I was talking to him about this because I was showing him some declassified assessments and I was sort of asking him about, you know, “Had you done this or had you done that?” He sort of kind of took it all in and he just kind of laughed, and he said, “I think many of your assessments were not very accurate.” There was sort of a twinkle in his eye as he said it because I think he was just sort of like, “We wrote a book about it, we told you what we did.”

Anything is possible, and the point of these allegations is that the events are so small that they are impossible to disprove, but to me, that's looking at it backwards. If you're going to cause a major international crisis, you need to come to the table with some evidence, and I just don't see it.

Alex Bell: The GEM, the Group of Eminent Members, which is an advisory group to the CTBTO, put it best when they said the most effective way to sort of deal with this problem is to get the treaty into force. So we could have intrusive short notice onsite inspections to detect and deter any possible violations.

Jeffrey Lewis: I actually got in trouble –– I got hushed because I was talking to a member while they were trying to work on this statement, and they needed the member to come back in.

Ariel Conn: So I guess when you look at stuff like this –– so, basically, all three countries are currently modernizing their nuclear arsenals. Maybe we should just spend a couple minutes talking about that too. What does it mean for each country to be modernizing their arsenal? What does that sort of very briefly look like?

Alex Bell: Nuclear weapons delivery systems and nuclear weapons do age. You do have to maintain them, like you would with any weapon system, but fortunately, from the U.S. perspective, we have exceedingly capable scientists who are able to extend the life of these systems without testing. Jeffrey, if you want to go into what other countries are doing.

Jeffrey Lewis: Yeah. I think the simplest thing to do is to talk about the nuclear warheads part. As Alex mentioned, all of the countries are building new submarines, and missiles, and bombers that can deliver these nuclear weapons. And that's a giant enterprise. It costs many billions of dollars every year. But when you actually look at the warheads themselves, I can tell you what we do in the United States. In some cases, we build new versions of existing designs. In almost all cases, we replace components as they age.

So the warhead design might stay the same, but piece by piece things get replaced. And because we've been replacing those pieces over time, if they have to put a new fuse in for a nuclear warhead, they don't go back and build the '70s era fuse. They build a new fuse. So even though we say that we're only replacing the existing components and we don't try to add new capabilities, in fact, we add new capabilities all the time, because as all of these components get better, the weapons themselves get better, and we're altering the characteristics of the warheads.

So the United States has a warhead on its submarine-launched ballistic missiles, and the Trump administration just undertook a program to give it a capability so that we can turn down the yield. So if we want to make it go off with a very small explosion, they can do that. It’s a full plate of the kinds of changes that are being made, and I think we’re seeing that in Russia and China too.

They are doing all of the same things to preserve the existing weapons they have. They rebuild designs that they have, and I think that they tinker with those designs. And that is constrained somewhat by the fact that there is no explosive testing –– that makes it harder to do those things, which is precisely why we wanted this ban in the first place –– but everybody is playing with their nuclear weapons.

And I think just because there’s a testing moratorium, the scientists who do this, some of them, because they want to go back to nuclear testing or nuclear explosions, they say, “If we could only test with explosions, that would be better.” So there’s even more they want to do, but let’s not act like they don’t get to touch the bombs, because they play with them all the time.

Alex Bell: Yeah. It's interesting you brought up the low-yield option for our submarine-launched ballistic missiles, because the House of Representatives, in the defense appropriations and authorization process that it's going through right now, actually blocked further funding and deployment of this particular type of warhead because, in their opinion, the President already has plenty of low-yield nuclear options, thank you very much. He doesn't need any more.

Jeffrey Lewis: Of course, I don’t think this president needs any nuclear options, but-

Alex Bell: But it just shows there's definitely a political and oversight feature that comes into this modernization debate. Even if the forces that Jeffrey talked about, who've always wanted to return to testing, could prevail upon a particular administration to go in that direction, it's unlikely Congress would be as sanguine about it.

Nevada, where our former nuclear testing site is, now the Nevada National Security Site –– it’s not clear that Nevadans are going to be okay with a return to explosive nuclear testing, nor will the people of Utah who sit downwind from that particular site. So there’s actually a “not in my backyard” kind of feature to the debate about further testing.

Jeffrey Lewis: Yeah. The Department of Energy has actually taken… Anytime they do a conventional explosion at the Nevada site, they keep it a secret because they were going to do a conventional explosion 10 or 15 years ago and people got wind of it and were outraged because they were terrified the conventional explosion would kick up a bunch of dust and that there might still be radioactive particulates.

I'm not sure that that was an accurate worry, but I think it speaks to the lack of trust that people around the test site have, given some of the irresponsible things that the U.S. nuclear weapons complex has done over the years. That's a whole other podcast, but you don't want to live next to anything that NNSA oversees.

Alex Bell: There's also a proximity issue. Las Vegas is incredibly close to that facility. Back in the day when they did underground testing there, it used to shake the buildings on the Strip. And Las Vegas has only expanded in the 20 or 30 years since, so you're going to have a lot of people who would be very worried.

Ariel Conn: Yeah. So that’s actually a question that I had. I mean, we have a better idea today of what the impacts of nuclear testing are. Would Americans approve of nuclear weapons being tested on our ground?

Jeffrey Lewis: Probably if they didn’t have to live next to them.

Alex Bell: Yeah. I've been to some of the states where we conducted tests other than Nevada. So Colorado, where we tried out this brilliant idea of seeing whether we could do fracking via nuclear explosion –– you can see the problems inherent in that idea. Alaska, New Mexico, obviously, where the first nuclear test happened. We also tested weapons in Mississippi. So all of these states have been affected in various ways, and radioactive particulates from the sites in Nevada have drifted as far away as Maine, and scientists have been able to trace cancer clusters half a continent away.

Jeffrey Lewis: Yeah, I would add –– Alex mentioned testing in Alaska –– that there was a giant test in 1971 in Alaska called Cannikin. It was five megatons; a megaton is 1,000 kilotons, and Hiroshima was about 20 kilotons. And it really made some Canadians angry, and the consequence of the angry Canadians was that they founded Greenpeace. So the whole iconic Greenpeace-on-a-boat thing was originally driven by a desire to stop U.S. nuclear testing in Alaska. So, you know, people get worked up.

Ariel Conn: Do you think someone in the U.S. is actively trying to bring testing back? Do you think that we’re going to see more of this or do you think this might just go away?

Jeffrey Lewis: Oh yeah. There was a huge debate at the beginning of the Trump administration. I actually wrote this article making fun of Rick Perry, the Secretary of Energy, who I have to admit has turned out to be a perfectly normal cabinet secretary in an administration that looks like the Star Wars Cantina.

Alex Bell: It’s a low bar.

Jeffrey Lewis: It's a low bar, and maybe just barely, but Rick got over it. But I was sort of mocking him, and the article was headlined, “Even Rick Perry isn’t dumb enough to resume nuclear testing,” and I got notes, people saying, “This is not funny. This is a serious possibility.” So, yeah, I think there has long been a group of people who did not want to end testing. The U.S. labs refused to prepare for the end of testing. So when the U.S. stopped, it was Congress just telling them to stop. They have always wanted to go back to testing, and these are the same people, I think, who are accusing the Russians of doing things, as much so that they can get out of the test ban as anything else.

Alex Bell: Yeah, I would agree with that assessment. Those people have always been here. It's strange to me, because most scientists have affirmed that we know more about our nuclear weapons now, without blowing them up, than we did before, because of the advanced computer modeling and technological advances of the Stockpile Stewardship program, which is the program that extends the life of these warheads. They get to do a lot of great science, and they've learned a lot of things about our nuclear forces that we didn't know before.

So it's hard to make a case that it is absolutely necessary, or would ever be absolutely necessary, to return to testing. You would have to totally throw out the obligations that we have under things like the Nuclear Non-Proliferation Treaty, which is to pursue the cessation of the arms race in good faith, and a return to testing, I think, would not be very good faith.

Ariel Conn: Maybe we’ve sort of touched on this, but I guess it’s still not clear to me. Why would we want to return to testing? Especially if, like you said, the models are so good?

Jeffrey Lewis: I think you have to approach that question like an anthropologist. Because some countries are quite happy living under a test ban, for exactly the reason that you pointed out: they are getting all kinds of money to do all kinds of interesting science. And so the Chinese seem pretty happy about it; The UK, actually –– I've met some UK scientists who are totally satisfied with it.

But I think the culture in the U.S. laboratories, which had really nothing to do with the reliability of the weapons and everything to do with the culture of the lab, was like the day that a young designer became a man or a woman was the day that person’s design went out into the desert and they had to stand there and be terrified it wasn’t going to work, and then feel the big rumble. So I think there are different ways of doing science. I think the labs in the United States were and are sentimentally attached to solving these problems with explosions.

Alex Bell: There’s also sort of a strange desire to see them. My first trip out to the test site, I was the only woman on the trip and we were looking at the Sedan Crater, which is just this enormous crater from an explosion underground that was much bigger than we thought it was going to be. It made this, I think it’s seven football fields across, and to me, it was just sort of horrifying, and I looked at it with dread. And a lot of the people who were on the trip reacted entirely differently with, “I thought it would be bigger,” and, “Wouldn’t it be awesome to see one of these go off, just once?” and had a much different take on what these tests were for and what they sort of indicated.

Ariel Conn: So we can actually test nuclear weapons without exploding them. Can you talk about what the difference is between testing and explosions, and what that means?

Jeffrey Lewis: The way a nuclear weapon works is you have a sphere of fissile material –– so that’s plutonium or highly enriched uranium –– and that’s surrounded by conventional explosives. And around that, there are detonators and electronics to make sure that the explosives all detonate at the exact same moment so that they spherically compress or implode the plutonium or highly enriched uranium. So when it gets squeezed down, it makes a big bang, and then if it’s a thermonuclear weapon, then there’s something called a secondary, which complicates it.

But you can do that –– you can test all of those components, just as long as you don't have enough plutonium or highly enriched uranium in the middle to cause a nuclear explosion. So you can fill it with just regular uranium, which won't go critical, and so you could test the whole setup that way. And for all of the things in a nuclear weapon that would make it a thermonuclear weapon, there's a variety of different fusion research techniques you can use to test those kinds of reactions.

So you can really simulate everything, and you can do as many computer simulations as you want, it’s just that you can’t put it all together and get the big bang. And so the U.S. has built this giant facility at Livermore called NIF, the National Ignition Facility, which is a many billion-dollar piece of equipment, in order to sort of simulate some of the fusion aspects of a nuclear weapon. It’s an incredible piece of equipment that has taught U.S. scientists far more than they ever knew about these processes when they were actually exploding things. It’s far better for them, and they can do that. It’s completely legal.

Alex Bell: Yeah, the most powerful computer in the world belongs to Los Alamos. Its job is to help simulate these nuclear explosions and process data related to the nuclear stockpile.

Jeffrey Lewis: Yeah, I got a kick –– I always check in on that list, and it’s almost invariably one of the U.S. nuclear laboratories that has the top computer. And then one time I noticed that the Chinese had jumped up there for a minute and it was their laboratory.

Alex Bell: Yup, it trades back and forth.

Jeffrey Lewis: Good times.

Alex Bell: A lot of the data that goes into this is observational information and technical readings that we got from when we did explosive testing. And our testing record is far more extensive than any other country, which is one of the reasons why we have sort of this advantage that would be locked in, in the event of a CTBT entering into force.

Ariel Conn: Yeah, I thought that was actually a really interesting point. I don’t know if there’s more to elaborate on it, but the idea that the U.S. could actually sacrifice some of its nuclear superiority by ––

Alex Bell: Returning to testing?

Ariel Conn: Yeah.

Alex Bell: Yeah, because if we go, everyone goes.

Ariel Conn: There were countries that still weren’t thrilled even with the testing that is allowed. Can you elaborate on that a little bit?

Alex Bell: Yes. A lot of countries are particularly frustrated with what they see as the slow pace of disarmament by the nuclear weapon states, particularly the countries that back the Treaty on the Prohibition of Nuclear Weapons. That’s a new treaty that does not have any nuclear weapon states as a part of it, but it is a total ban on the possession and use of nuclear weapons.

The Nonproliferation Treaty, which is sort of the glue that holds all this together, was indefinitely extended back in 1995. The price for that from the non-nuclear weapon states was the commitment of nuclear weapon states to sign and ratify a comprehensive test ban. So 25 years later almost, they’re still waiting.

Ariel Conn: I will add that, I think as of this week, three U.S. states –– California, New Jersey and Oregon –– have passed resolutions supporting the U.S. joining that recent treaty that actually bans nuclear weapons.

Alex Bell: Yeah. It’s been interesting. Jeffrey might have some thoughts on this too, but to me, principles aside, the verification measures in the Treaty on the Prohibition of Nuclear Weapons make it sort of an unviable treaty. But from a messaging perspective, you’re seeing, for the first time since the Cold War, citizens around the world saying, “You have to get rid of these weapons. They’re no longer acceptable. They’ve become liabilities, not assets.”

So while I don’t think the treaty itself is a workable treaty for the United States, I think that the sentiment behind it is useful in persuading leaders that we do need to do more on disarmament.

Jeffrey Lewis: I would just say that, just like we saw earlier, there’s a lot of the U.S. wanting to have its cake and eat it too. The Nonproliferation Treaty, which is the big treaty that says countries should not be able to acquire nuclear weapons, also commits the United States and the other nuclear powers to work toward disarmament. That’s not something they take seriously.

Just like with nuclear testing where you see this, “Oh, well, maybe we could edge back and do it,” you see the same thing just on disarmament issues generally. So having people out there who are insisting on holding the most powerful countries to account to make sure that they do their share, I also think is really important.

Ariel Conn: All right. So I actually think that’s sort of a nice note to end on. Is there anything else that you think is important that we didn’t get into or that just generally is important for people to know?

Alex Bell: I would just reiterate the point that if the U.S. government is truly concerned that Russia is conducting tests at even very low yields, then we need to be engaged in a conversation with them. A global ban on nuclear explosive testing is good for every country in this world, and we shouldn’t be doing things to derail the pursuit of such a treaty.

Ariel Conn: Agreed. All right, well, thank you both so much for joining today.

As always, if you’ve been enjoying the podcast, please take a moment to like it, share it, and maybe even leave a good review and I will be back again next month with another episode of the FLI Podcast.

FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

As we grapple with questions about AI safety and ethics, we’re implicitly asking something else: what type of a future do we want, and how can AI help us get there?

In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let’s build our technology around our visions for the future.

Topics discussed in this episode include:

  • Hopes for the future of AI
  • AI-human collaboration
  • AI’s influence on art and creativity
  • The UN AI for Good Summit
  • Gaps in AI safety
  • Preparing AI for uncertainty
  • Holding AI accountable

Publications and resources discussed in this episode include:

Ariel: Hello and welcome to another episode of the FLI podcast. I’m your host Ariel Conn, and today we’ll be looking at how to address safety and ethical issues surrounding artificial intelligence, and how we can implement safe and ethical AIs both now and into the future. Joining us this month are Ashley Llorens and Francesca Rossi who will talk about what they’re seeing in academia, industry, and the military in terms of how AI safety is already being applied and where the gaps are that still need to be addressed.

Ashley is the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, where he directs research and development in machine learning, robotics, autonomous systems, and neuroscience, all towards addressing national and global challenges. He has served on the Defense Science Board, the Naval Studies Board of the National Academy of Sciences, and the Center for a New American Security’s AI task force. He is also a voting member of the Recording Academy, which is the organization that hosts the Grammy Awards, and I will definitely be asking him about that later in the show.

Francesca is the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab. She is an advisory board member for FLI, a founding board member for the Partnership on AI, a deputy academic director of the Leverhulme Centre for the Future of Intelligence, a fellow with AAAI and EurAI (that’s e-u-r-a-i), and she will be the general chair of AAAI in 2020. She was previously Professor of Computer Science at the University of Padova in Italy, and she’s been president of IJCAI and the editor-in-chief of the Journal of AI Research. She is currently joining us from the United Nations AI For Good Summit, which I will also ask about later in the show.

So Ashley and Francesca, thank you so much for joining us today.

Francesca: Thank you.

Ashley: Glad to be here.

Ariel: Alright. The first question that I have for both of you, and Ashley, maybe I’ll direct this towards you first: basically, as you look into the future and you see artificial intelligence playing more of a role in our everyday lives — before we look at how everything could go wrong, what are we striving for? What do you hope will happen with artificial intelligence and humanity?

Ashley: My perspective on AI is informed a lot by my research and experiences at the Johns Hopkins Applied Physics Lab, which I’ve been at for a number of years. My earliest explorations had to do with applications of artificial intelligence to robotics systems, in particular underwater robotics systems, systems where signal processing and machine learning are needed to give the system situational awareness. And of course, light doesn’t travel very well underwater, so it’s an interesting task to make a machine see with sound for all of its awareness and all of its perception.

And in that journey, I realized how hard it is to have AI-enabled systems capable of functioning in the real world. That’s really been a personal research journey that’s turned into an institution-wide research journey for Johns Hopkins APL writ large. And we’re a large not-for-profit R & D organization that does national security, space exploration, and health. We’re about 7,000 folks or so across many different disciplines, but many scientists and engineers working on those kinds of problems — we say critical contributions to critical challenges.

So as I look forward, I’m really looking at AI-enabled systems, whether they’re algorithmic in cyberspace or they’re real-world systems that are really able to act with greater autonomy in the context of these important national and global challenges. So for national security: to have robotic systems that can be where people don’t want to be, in terms of being under the sea or even having a robot go into a situation that could be dangerous so a person doesn’t have to. And to have that system be able to deal with all the uncertainty associated with that.

You look at future space exploration missions where — in terms of AI for scientific discovery, we talk a lot about that — imagine a system that can perform science with greater degrees of autonomy and figure out novel ways of using its instruments to form and interrogate hypotheses when billions of miles away. Or in health applications where we can have systems more ubiquitously interpreting data and helping us to make decisions about our health to increase our lifespan, or health span as they say.

I’ve been accused of being a techno-optimist, I guess. I don’t think technology is the solution to everything, but it is my personal fascination. And in general, just having this AI capable of adding value for humanity in a real world that’s messy and sloppy and uncertain.

Ariel: Alright. Francesca, you and I have talked a bit in the past, and so I know you do a lot of work with AI safety and ethics. But I know you’re also incredibly hopeful about where we can go with AI. So if you could start by talking about some of the things that you’re most looking forward to.

Francesca: Sure. Ashley partially focused on the need for developing autonomous AI systems that can act where humans cannot go, for example, and that’s definitely very, very important. I would like to focus more on the need for AI systems that can actually work together with humans, augmenting our own capabilities to make decisions or to function in our work environment or in our private environment. That’s the focus and the purpose of the AI that I see and that I work on, and I focus on the challenges in making these systems really work well with humans.

This means, of course, that while it may seem in some sense easier to develop an AI system that works together with humans, because there is complementarity (some things are done by the human, some things are done by the machine), actually there are several additional challenges. You want these two entities, the human and the machine, to become a real team and work and collaborate together to achieve a certain goal. You want these machines to be able to communicate and interact in a very natural way with human beings, and you want these machines to be not just reactive to commands, but also proactive in trying to understand what the human being needs in that moment and in that context, in order to provide all the information and knowledge that is needed from the data surrounding whatever task is being addressed.

That’s also the focus of IBM’s business model, because of course IBM releases AI to be used by other companies, so that their professionals can use it to do their jobs better. And it has many, many interesting research directions. The one that I’m mostly focused on is around value alignment: How do you make sure that these systems know and are aware of the values and the ethical principles that they should follow, while trying to help human beings do whatever they need to do? And there are many ways to do that, and many ways to model these ethical principles and reason with them, and so on.

Being here in Geneva at AI For Good, I think the emphasis, and rightly so, is on the sustainable development goals of the UN: these 17 goals that define a vision of the future, the future that we want. And we’re trying to understand how we can leverage technologies such as AI to achieve that vision. The vision can be slightly nuanced and different for different people, but to me, the development of advanced AI is not the end goal; it is only a way to get to the vision of the future that I have. And so, to me, this AI For Good Summit and the 17 sustainable development goals define a vision of the future that is important to have in mind when one thinks about how to improve technology.

Ariel: For listeners who aren’t as familiar with the sustainable development goals, we can include links to what all of those are in the podcast description.

Francesca: I was impressed at this AI For Good Summit. This Summit started three years ago with around 400 people. Then last year it was about 500 people, and this year there are 3,200 registered participants. That really gives you an idea of how more and more everybody is interested in these subjects.

Ariel: Have you also been equally impressed by the topics that are covered?

Francesca: Well, I mean, it started today. So I just saw in the morning that there are five different parallel sessions that will run throughout the following two days. One is AI education and learning. One is health and wellbeing. One is AI, human dignity, and inclusive society. One is scaling AI for good. And one is AI for space. These five themes will run across the two days together with many other smaller ones. From what I’ve seen this morning, the level of the discussion is really very high, and it’s going to be very impactful. Every event has its own specificity, but this one is unique because it’s focused on a vision of the future, which in this case is the sustainable development goals.

Ariel: Well, I’m really glad that you’re there. We’re excited to have you there. And so, you’re talking about moving towards futures where we have AIs that can do things that humans either can’t do, don’t want to do, or can’t do safely; visions where we can achieve more because we’re working with AI systems as opposed to just humans trying to do things alone. But we still have to get to the point where this is being implemented safely and ethically.

I’ll come back to the question of what we’re doing right so far, but first, what do you see as the biggest gaps in AI safety and ethics? And this is a super broad question, but looking at it with respect to, say, the military or industry or academia. What are some of the biggest problems you see in terms of us safely applying AI to solve problems?

Ashley: It’s a really important question. My answer is going to center around uncertainty and dealing with that in the context of the operation of the system, and let’s say the implementation or the execution of the ethics of the system as well. But first, backing up to Francesca’s comment, I just want to emphasize this notion of teaming and really embrace this narrative in my remarks here.

I’ve heard it said before that every machine is part of some human workflow. I think a colleague Matt Johnson at the Florida Institute for Human and Machine Cognition says that, which I really like. And so, just to make clear, whether we’re talking about the cognitive enhancements, an application of AI where maybe you’re doing information retrieval, or even a space exploration example, it’s always part of a human-machine team. In the space exploration example, the scientists and the engineers are on the earth, maybe many light hours away, but the machines are helping them do science. But at the end of the day, the scientific discovery is really happening on earth with the scientists. And so, whether it’s a machine operating remotely or by cognitive assistance, it’s always part of a human-machine team. That’s just something I wanted to amplify that Francesca said.

But coming back to the gaps, a lot of times I think what we’re missing in our conversations is getting some structure around the role of uncertainty in these agents that we’re trying to create, the agents that are going to help achieve that bright future Francesca was referring to. To help us think about this at APL, we think about agents as needing to perceive, decide, and act in teams. This is a framework that helps us understand the general capabilities we’ll need, and to start thinking about the role of uncertainty, and then the combinations of learning and reasoning that would help agents deal with it. And so, if you think about an agent pursuing goals, the first thing it has to do is get an understanding of the world state. This is the task of perception.

We often talk about, well, if an agent sees this or that, or if an agent finds itself in this situation, we want it to behave this way. Obviously, the trolley problem is an example we revisit often. I won’t go into the details there, but the question is, I think, given some imperfect observation of the world, how does the structure of that uncertainty factor into the correct functioning of the agent in that situation? And then, how does that factor into the ethical, I’ll say, choices or data-driven responses that an agent might have to that situation?

Then we talk about decision making. An agent has goals. In order to act on its goals, it has to decide how certain sequences of actions would affect future states of the world. And then again: how, in the context of an uncertain world, is the agent going to accurately evaluate possible future actions when it’s outside of a gaming environment, for example? How does uncertainty play into that and into its evaluation of possible actions? And then, in the carrying out of those actions, there may be physical reasoning, geometric reasoning that has to happen; for example, if an agent is going to act in a physical space, or is reasoning about a cyber-physical environment where there’s critical infrastructure that needs to be protected or something like that.

And then finally, to Francesca’s point, there are the interactions, or the teaming, with other agents that may be teammates or may actually be adversarial. The agent has to reason about what its teammates might be intending to do, what state its teammates might be in, in terms of cognitive load if it’s a human teammate, and what the intent of adversarial agents might be in confounding or interfering with the goals of the human-machine team.

And so, to recap a little bit, I think this notion of machines dealing with uncertainty in real-world situations is one of the key challenges that we need to deal with over the coming decades. Having more explicit conversations about how uncertainty manifests in these situations, how you deal with it in the context of the real-world operation of an AI-enabled system, and how we give structure to that uncertainty in a way that informs our ethical reasoning about the operation of these systems: I think that’s a very worthy area of focus for us over the coming decades.
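
To make the perception-under-uncertainty point a bit more concrete, here is a minimal sketch, not drawn from the conversation, of one standard way to give structure to that uncertainty: the agent keeps a probability distribution over hypothetical world states and updates it with Bayes’ rule as noisy observations arrive, rather than committing to a single guess. The states and sensor likelihoods below are invented for illustration.

```python
# Minimal sketch: maintaining a belief (probability distribution) over
# hypothetical world states instead of committing to a single guess.
# The states, sensor model, and observation below are illustrative only.

def update_belief(prior, likelihoods):
    """Bayes rule: posterior is proportional to prior * P(observation | state)."""
    unnormalized = {s: prior[s] * likelihoods[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Three hypothetical states of a room a robot is about to enter.
belief = {"empty": 1/3, "friendly_present": 1/3, "hostile_present": 1/3}

# A noisy sensor reading ("movement detected") is more likely if someone
# is present; these likelihoods are made-up numbers for illustration.
movement_likelihood = {"empty": 0.05, "friendly_present": 0.6, "hostile_present": 0.7}

belief = update_belief(belief, movement_likelihood)
print(belief)  # the agent acts on this distribution, not on a point estimate
```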

Ariel: Could you walk us through a specific example of how an AI system might be applied and what sort of uncertainties it might come across?

Ashley: Yeah, sure. So think about a situation where there’s a dangerous environment, let’s say in a policing action or in a terrorist situation. Hey, there might be hostiles in this building, and right now a human being might have to go into that building to investigate it. Instead, we’ll send a team of robots in there to do the investigation of the building to see if it’s safe, and you can think of that situation as analogous to a number of other possible situations.

And now, let’s think about the state of computer vision technology, where straight pattern recognition is hopefully a fair characterization of the state of the art: we know we can very accurately recognize objects from a given universe of objects in a computer vision feed, for example. Well, now what happens if these agents encounter objects from outside of that universe of training classes? How can we start to bound the performance of the computer vision algorithm with respect to objects from unknown classes? You can start to get a sense of the progression just from the perception part of that problem: from “out of these 200 possible objects, tell me which class this one comes from” to having to do vision-type tasks in environments that present many new and novel objects the system may have to perceive and reason about.
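
Ashley’s worry about objects from outside the training classes is often framed as open-set recognition. A crude but common first step, sketched below purely as an illustration (not APL’s method), is to let the classifier refuse to commit when its confidence is low, labeling the input “unknown” instead of forcing it into one of the known classes; softmax confidence is an imperfect proxy, which is exactly the gap he is pointing at.

```python
import numpy as np

def classify_with_rejection(logits, class_names, threshold=0.85):
    """Return a class name, or 'unknown' if the softmax confidence is low.

    This is only a rough proxy for open-set recognition: softmax scores can
    still be overconfident on truly novel objects, which is the harder problem.
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "unknown", float(probs[best])
    return class_names[best], float(probs[best])

# Illustrative logits for a 3-class detector (made-up numbers).
names = ["person", "vehicle", "backpack"]
print(classify_with_rejection(np.array([4.0, 0.1, 0.2]), names))   # confident: "person"
print(classify_with_rejection(np.array([0.6, 0.5, 0.55]), names))  # rejected: "unknown"
```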

You can think about that perception task as extending to agents that might be in that environment: trying to ascertain, from partial observations of what those agents look like and of the things they might be doing, some assessment of whether this is a friendly agent or an unfriendly agent; and then reasoning about affordances of objects in the environment that might give our systems ways of dealing with those agents that conform to ethical principles.

That was not a very, very concrete example, but hopefully starts to get one level deeper into the kinds of situations we want to put systems into and the kinds of uncertainty that might arise.

Francesca: To tie to what Ashley just said, we definitely need a lot more ways to have realistic simulations of what can happen in real life. So testbeds, sandboxes: that is definitely needed. But related to that, there is also this ongoing effort — which has already resulted in tools and mechanisms, but many people are still working on it — to understand better the error landscape that a machine learning approach may have. We know machine learning always has a small percentage of error in any given situation, and that’s okay, but we need to understand the robustness of the system with respect to that error, and we also need to understand the structure of that error space, because this information can tell us what the most and least appropriate use cases for the system are.
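
One very simple way to start charting the “error landscape” Francesca describes is to measure how a model’s error rate changes as its inputs are perturbed. The sketch below is a generic illustration of that idea with a placeholder model and made-up noise levels; it is not an IBM tool.

```python
import numpy as np

def error_rate(model, X, y):
    """Fraction of misclassified examples."""
    return float(np.mean(model(X) != y))

def error_landscape(model, X, y, noise_levels):
    """Error rate as a function of additive Gaussian input noise.

    A flat curve suggests robustness to this perturbation; a steep curve
    marks a region of the error landscape where the system should not be
    trusted without extra safeguards.
    """
    rng = np.random.default_rng(0)
    return {
        sigma: error_rate(model, X + rng.normal(0.0, sigma, X.shape), y)
        for sigma in noise_levels
    }

# Placeholder "model": classify by the sign of the first feature.
toy_model = lambda X: (X[:, 0] > 0).astype(int)
X = np.random.default_rng(1).normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)

print(error_landscape(toy_model, X, y, noise_levels=[0.0, 0.5, 1.0, 2.0]))
```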

Of course, going from there, this understanding of the error landscape is just one aspect of the need for transparency about the capabilities and limitations of AI systems when they are deployed. It’s a challenge that spans from academia and research centers to, of course, the business units and the companies developing and delivering AI systems. That’s why at IBM we are working a lot on this issue of collecting information during the development and design phases about the properties of the systems, because we think that understanding these properties is very important to really understand what should or should not be done with the system.

And then, of course, there is, as you know, a lot of work around understanding other properties of the system. Like, fairness is one of the values that we may want to inject, but of course it’s not as simple as it looks because there are many, many definitions of fairness and each one is more appropriate or less appropriate in certain scenarios and certain tasks. It is important to identify the right one at the beginning of the design and the development process, and then to inject mechanisms to detect and mitigate bias according to that notion of fairness that we have decided is the correct one for that product.
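
As a concrete example of how one of those many definitions of fairness gets operationalized, here is a minimal sketch of demographic parity, which compares favorable-outcome rates across groups. The decisions and group labels are invented for illustration, and this is only one of the definitions Francesca notes may or may not be appropriate for a given task.

```python
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Difference in favorable-outcome rates between two groups.

    predictions: array of 0/1 model decisions (1 = favorable, e.g. loan approved)
    groups:      array of group labels, e.g. "A" and "B"
    A value near 0 means both groups receive favorable outcomes at similar
    rates, under this particular (and contestable) definition of fairness.
    """
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    values = list(rates.values())
    return rates, float(values[0] - values[1])

# Invented example data: 10 decisions across two groups.
preds = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
grp   = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_difference(preds, grp))
# Group A approved 60% of the time, group B 20%: a 0.4 gap worth investigating.
```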

And so, this brings us also to the other big challenge, which is to help developers understand how to define these notions, these values like fairness that they need to use in developing the system — how to define them not just by themselves within the tech company, but also by communicating with the communities that are going to be impacted by these AI products and that may have something to say about the right definition of fairness that they care about. That’s why, for example, besides doing research and building products, we also invest a lot in educating developers, trying to help them understand, in their everyday jobs, how to think about these issues, whether it’s fairness, robustness, transparency, and so on.

And so, we built this very small booklet — we call it the Everyday AI Ethics Guide for Designers and Developers — that raises a lot of questions that should be in their minds in their everyday jobs. Because, as you know, if you don’t think about bias or fairness during the development phases and you only check whether your product is fair when it’s ready to be deployed, you may discover that you actually need to start from scratch again because it doesn’t embody the right notion of fairness.

Another thing that we care a lot about in this effort to build teams of humans and machines is the issue of explainability: making sure that it is possible to understand why these systems are recommending certain decisions. Explainability is especially important in this environment of human-AI teaming, because without the capability of explaining why it is recommending a certain decision, the AI system will not, in the long run, be trusted by the human part of the team, and so it may not be adopted. And then we would also lose the positive and beneficial effects of the AI system.
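
One family of explanation techniques works by probing which inputs a model’s recommendations actually depend on. The sketch below shows permutation importance, a simple model-agnostic example of that idea; the model and data are placeholders, and this is an illustration of the general concept rather than IBM’s tooling.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Average drop in accuracy when each feature is shuffled.

    A large drop means the model's decisions lean heavily on that feature,
    which is a crude, global form of explanation.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
            drops.append(baseline - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Placeholder model that only uses feature 0.
model = lambda X: (X[:, 0] > 0).astype(int)
X = np.random.default_rng(1).normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
print(permutation_importance(model, X, y))  # feature 0 dominates, others near 0
```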

The last thing that I want to say is that this education extends well beyond developers to policy makers as well. That’s why it’s important to have a lot of interaction with policy makers, who really need to be educated about the state of the art, about the challenges, and about the limits of current AI, in order to understand how best to steer the technology to be more and more advanced but also beneficial. Figuring out the right mechanisms to drive the technology in the direction that we want still needs a lot more multi-stakeholder discussion to really achieve the best results, I think.

Ashley: Just picking up on a couple of those themes that Francesca raised: first, I just want to touch on simulations. At the Applied Physics Laboratory, one of the core things we do is develop systems for the real world. And so, as the tools of artificial intelligence evolve, the art and the science of systems engineering is starting to morph into this AI systems engineering regime. And we see simulation as key, more key than it’s ever been, to developing real-world systems that are enabled by AI.

One of the things we’re really looking into now is what we call live virtual constructive simulations. These are simulations in which you can do distributed learning for agents in a constructive mode, with highly parallelized learning, but where you also have links and hooks for live interaction with humans to get the human-machine teaming. And then finally, they bridge the gap between simulation and the real world, where some of the agents represented in the human-machine teaming functionality can be virtual and some can actually be real systems in the real world. And so, we think that these kinds of environments, these live virtual constructive environments, will be important for bridging the gap from simulation to real.

Now, in the context of that is this notion of sharing information. If you think about the complexity of the systems that we’re building, and the complexity and the uncertainty of real-world conditions — whether that’s physical or cyber or what have you — it’s going to be more and more challenging for a single development team to analytically characterize the performance of a system in the context of a real-world environment. And so, I think as a community we’re really doing science: fielding these complex systems in real-world environments. And so, the more we can make that a collective scientific exploration, where we’re setting hypotheses and performing these experiments — these experiments of deploying AI in real-world situations — the more quickly we’ll make progress.

And then, finally, I just wanted to talk about accountability, which I think builds on this notion of transparency and explainability. From what I can see — and this is something we don’t talk about enough, I think — we need to change our notion of accountability when it comes to AI-enabled systems. I think our human nature is that we want individual accountability for individual decisions and individual actions. If an accident happens, our whole legal system, our whole accountability framework, says, “Well, tell me exactly what happened that time,” and I want some accountability based on that and I want to see something improve based on that. Whether it’s a plane crash or a car crash, or let’s say there’s corruption in a Fortune 500 company — we want to see the CFO fired and we want to see a new person hired.

I think when you look at these algorithms, they’re driven by statistics, and the statistics that drive these models are really not well suited for individual accountability. It’s very hard to establish the validity of a particular answer or classification or something that comes out of the algorithm. Rather, we’re really starting to look at the performance of these algorithms over a period of time. It’s hard to say, “Okay, this AI-enabled system: tell me what happened on Wednesday,” or, “Let me hold you accountable for what happened on Wednesday.” And more so, “Let me hold you accountable for everything that you did during the month of April that resulted in this performance.”

And so, I think our notion of accountability is going to have to embrace this notion of ensemble validity, validity over a collection of activities, actions, decisions. Because right now, I think if you look at the underlying mathematical frameworks for these algorithms, they’re not well supported for this notion of individual accountability for decisions.
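
Ashley’s notion of “ensemble validity” can be made concrete with a bit of bookkeeping: instead of auditing a single decision, you audit the distribution of outcomes over a reporting window and compare it against an agreed performance bound. The log, dates, and threshold below are invented purely to illustrate the shape of that kind of audit.

```python
from datetime import date

# Invented log of (date, correct?) outcomes for an AI-enabled system.
decision_log = [
    (date(2019, 4, 3), True), (date(2019, 4, 3), True),
    (date(2019, 4, 10), False), (date(2019, 4, 17), True),
    (date(2019, 4, 24), True), (date(2019, 4, 24), False),
]

def audit_window(log, start, end, required_accuracy=0.7):
    """Accountability over a window of activity, not a single Wednesday."""
    in_window = [ok for d, ok in log if start <= d <= end]
    accuracy = sum(in_window) / len(in_window)
    return {
        "decisions": len(in_window),
        "accuracy": round(accuracy, 3),
        "meets_bound": accuracy >= required_accuracy,
    }

print(audit_window(decision_log, date(2019, 4, 1), date(2019, 4, 30)))
# {'decisions': 6, 'accuracy': 0.667, 'meets_bound': False}
```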

Francesca: Accountability is very important. It needs a lot more discussion. This is one of the topics that we have been discussing in this initiative by the European Commission to define the AI Ethics Guidelines for Europe, and accountability is one of the seven requirements. But it’s not easy to define what it means. What Ashley said is one possibility: change our idea of accountability from one specific instance to performance over several instances. That’s one possibility, but I think it’s something that needs a lot more discussion with several stakeholders.

Ariel: You’ve both mentioned some things that sound like we’re starting to move in the right direction. Francesca, you talked about getting developers to think about some of the issues like fairness and bias before they start to develop things. You talked about trying to get policy makers more involved. Ashley, you mentioned the live virtual simulations. Looking at where we are today, what are some of the things that you think have been most successful in moving towards a world where we’re considering AI safety more regularly, or completely regularly?

Francesca: First of all, we’ve gone a really long way in a relatively short period of time, and the Future of Life Institute has been instrumental in building the community, and everybody understands that the only approach to address this issue is a multidisciplinary, multi-stakeholder approach. The Future of Life Institute, with the first Puerto Rico conference, showed very clearly that this is the approach to follow. So I think that in terms of building the community that discusses and identifies the issues, I think we have done a lot.

I think that at this point, what we need is greater coordination, and also removal of redundancy, among all these different initiatives. I think we have to find, as a community, the main issues and the main principles and guidelines that we think are needed for the development of more advanced forms of AI, starting from the current state of the art. If you look at the guidelines or lists of principles around AI ethics from the various initiatives, they are of course different from each other, but they have a lot in common. So we really were able to identify these issues, and this identification of the main issues is important as we move forward to more advanced versions of AI.

And then, I think another thing that we are doing in a rather successful though not complete way is moving from research to practice: from high-level principles to concretely developing and deploying products that embed these principles and guidelines, not just in the scientific papers that are published, but also in the platforms, services, and toolkits that companies use with their clients. We needed an initial phase with high-level discussions about guidelines and principles, but now we are in the second phase, where these percolate down to the business units and to how products are built and deployed.

Ashley: Yeah, just building on some of Francesca’s comments, I’ve been very inspired by the work of the Future of Life Institute and the burgeoning, I’ll say, emerging AI safety community. Similar to Francesca’s comment, I think that the real frontier here is now taking a lot of that energy, a lot of that academic exploration, research, and analysis and starting to find the intersections of a lot of those explorations with the real systems that we’re building.

You’re definitely seeing within IBM, as Francesca mentioned, within Microsoft, within more applied R & D organizations like Johns Hopkins APL, where I am, internal efforts to try to bridge the gap. And what I really want to try to work to catalyze in the coming years is a broader, more community-wide intersection between the academic research community looking out over the coming centuries and the applied research community that’s looking out over the coming decades, and find the intersection there. How do we start to pose a lot of these longer term challenge problems in the context of real systems that we’re developing?

And maybe we get to examples: for ethics, say, moving beyond the trolley problem and posing problems that are more real-world, that are closer, better analogies to the kinds of systems we’re developing and the kinds of situations they will find themselves in, and starting to give structure to some of the underlying uncertainty, having our debates informed by those things.

Ariel: I think that transitions really nicely to the next question I want to ask you both, and that is, over the next 5 to 10 years, what do you want to see out of the AI community that you think will be most useful in implementing safety and ethics?

Ashley: I’ll probably sound repetitive, but I really think it’s about focusing in on characterizing — I like the way Francesca put it — the error landscape of a system as a function of the complex internal states and workings of the system and the complex, uncertain real-world environments, whether cyber or physical, that the system will be operating in, and really getting deeper there. It’s probably clear to anyone who works in this space that we really need to fundamentally advance the science and the technology. I’ll start to introduce the word now: trust, as it pertains to AI-enabled systems operating in these complex and uncertain environments. And again, starting to better ground some of our longer-term thinking about AI being beneficial for humanity, and grounding those conversations in the realities of the technologies as they stand today and as we hope to develop and advance them over the next few decades.

Francesca: Trust means building trust in the technology itself — and so the things that we already mentioned like making sure that it’s fair, value aligned, robust, explainable — but also building trust in those that produce the technology. But then, I mean, this is the current topic: How do we build trust? Because without trust we’re not going to adopt the full potential of the beneficial effect of the technology. It makes sense to also think in parallel, and more in the long-term, what’s the right governance? What’s the right coordination of initiatives around AI and AI ethics? And this is already a discussion that is taking place.

And then, after governance and coordination, it’s also important with more and more advanced versions of AI, to think about our identity, to think about the control issues, to think in general about this vision of the future, the wellbeing of the people, of the society, of the planet. And how to reverse engineer, in some sense, from a vision of the future to what it means in terms of a behavior of the technology, behavior of those that produce the technology, and behavior of those that regulate the technology, and so on.

We need a lot more of this reverse engineering approach. One approach is to start from the current state of the art of the technology and say, “Okay, these are the properties that I want in this technology: fairness, robustness, transparency, and so on, because otherwise I don’t want this technology to be deployed,” and then see what happens in the next, more advanced version of the technology, think about possibly new properties, and so on. The other approach is to say, “Okay, this is the vision of life, I don’t know, 50 years from now. How do I go from that to the kind of technology, to the direction that I want to push the technology towards, to achieve that vision?”

Ariel: We are getting a little bit short on time, and I did want to follow up with Ashley about his other job. Basically, Ashley, as far as I understand, you essentially have a side job as a hip hop artist. I think it would be fun to talk a little bit, in the last couple of minutes that we have, about how both you and Francesca see artificial intelligence impacting these more creative fields. Is this something that you see as enhancing artists’ abilities to do more? Do you think there’s a reason for artists to be concerned that AI will soon be competition for them? What are your thoughts on the future of creativity and AI?

Ashley: Yeah. It’s interesting. As you point out, over the last decade or so, in addition to furthering my career as an engineer, I’ve also been a hip hop artist, and I’ve toured around the world and put out some albums. I think where we see the biggest impact of technology on music and creativity is, one, in the democratization of access to creation. Technology is a lot cheaper: having a microphone and a recording setup or something like that, from the standpoint of somebody who does vocals like me, is much more accessible to many more people. And then you see advances in access — you know, when I started doing music I would print CDs and press vinyl. There was no iTunes. And iTunes has revolutionized how music is accessed by people, and more generally how creative products are accessed, through streaming, etc. So I think, looking backward, we’ve seen most of the impact of technology on those two things: access to creation and then access to the content.

Looking forward, will those continue to be the dominant factors in terms of how technology is influencing the creation of music, for example? Or will there be something more? Will AI start to become more of a creative partner? We’ll see that and it will be evolutionary. I think we already see technology being a creative partner more and more so over time. A lot of the things that I studied in school — digital signal processing, frequency, selective filtering — a lot of those things are baked into the tools already. And just as we see AI helping to interpret other kinds of signal processing products like radiology scans, we’ll see more and more of that in the creation of music where an AI assistant — for example, if I’m looking for samples from other music — an AI assistant that can comb through a large library of music and find good samples for me. Just as we do with Instagram filters — an AI suggesting good filters for pictures I take on my iPhone — you can see in music AI suggesting good audio filters or good mastering settings or something, given a song that I’m trying to produce or goals that I have for the feel and tone of the product.

And so, already, as an evolutionary step, not even a revolutionary step, AI is becoming more present in the creation of music. And maybe, as in other application areas, we’ll see AI being more of a teammate, not only in the creation of the music but in the playing of the music. I heard an article or a podcast on NPR about a piano player who developed an AI accompaniment for himself. As he played in a live show, for example, there would be an AI accompaniment, and you could dial back the settings on it in terms of how aggressive it was in rhythm and time, and where it sat with respect to the lead performer. Maybe in hip hop we’ll see AI hype men or AI DJs. It’s expensive to travel overseas, so when somebody like me goes overseas to do a show, instead of bringing a DJ with me, I could have an AI program that can select my tracks and add cuts at the right places and things like that. So that was a long-winded answer, but there’s a lot there. Hopefully that was addressing your question.

Ariel: Yeah, absolutely. Francesca, did you have anything you wanted to add about what you think AI can do for creativity?

Francesca: Yeah. I mean, of course I’m less familiar with what AI is already doing right now in this area, but I am aware of many systems from companies in the space of delivering content or music and so on, systems where the AI part is helping humans develop their own creativity even further. And as Ashley said, I hope that in the future AI can help us be more creative — even people who are maybe less able than Ashley to be creative themselves. I hope that this will enhance everybody’s creativity: yes, in hip hop, in making songs, and in other things, but I also think it will help solve some very fundamental problems, because a population that is more creative is more creative in everything.

So in general, I hope that AI will help us human beings be more creative in all aspects of our life, not just entertainment — which is of course very, very important for our wellbeing — but also in all the other aspects of our life. And this goes back to what I said at the beginning: AI’s purpose should be to enhance our own capabilities, and creativity is of course a very important capability that human beings have.

Ariel: Alright. Well, thank you both so much for joining us today. I really enjoyed the conversation.

Francesca: Thank you.

Ashley: Thanks for having me. I really enjoyed it.

Ariel: For all of our listeners, if you have been enjoying this podcast, please take a moment to like it or share it and maybe even give us a good review. And we will be back again next month.

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Consciousness is a concept at the forefront of much scientific and philosophical thinking. At the same time, there is considerable disagreement over what exactly consciousness is and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value, while others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world that they expect can be captured by the language and tools of science and mathematics. To understand this position, we will have to unpack the philosophical motivations which inform this view, the intuition pumps which lend themselves to these motivations, and then explore the scientific process of investigation which is born of these considerations. Whether you take consciousness to be real or illusory, these possibilities carry tremendous moral and empirical implications for life’s purpose and role in the universe. Is existence without consciousness meaningful?

In this podcast, Lucas spoke with Mike Johnson and Andrés Gómez Emilsson of the Qualia Research Institute. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. Mike is interested in neuroscience, philosophy of mind, and complexity theory.

Topics discussed in this episode include:

  • Functionalism and qualia realism
  • Views that are skeptical of consciousness
  • What we mean by consciousness
  • Consciousness and causality
  • Marr’s levels of analysis
  • Core problem areas in thinking about consciousness
  • The Symmetry Theory of Valence
  • AI alignment and consciousness

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can learn more about consciousness research at the Qualia Research Institute, Mike’s blog, and Andrés’ blog. You can listen to the podcast above or read the transcript below. Thanks to Ian Rusconi for production and edits, as well as Scott Hirsh for feedback.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between, and the arguments for and against, functionalism and qualia realism. We discuss definitions of consciousness and how consciousness might be causal, we explore Marr’s Levels of Analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness and the world, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is that there are some people who think that what really matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is the source of value, then mapping out the set of possible experiences, their computational properties, and above all how good or bad they feel, seems like an ethical and theoretical priority for actually making progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence, or pain and pleasure, the goodness or badness of an experience, is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here, I think, is very interesting. Could you go ahead and unpack the skeptic’s position on your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is called Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to understand knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: you have three ways in which you can describe an information processing system. First of all, there is the computational/behavioral level. That level is about understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. An analogy here would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, and divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed and, in a sense, more constrained. The algorithmic level of analysis is about figuring out what the internal representations and possible manipulations of those representations are, such that you get the input-output mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be something like: whenever you want to add a number, you just push a bead; whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters, on that view, is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, and how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging and the high-level algorithms that the brain is running, and they will generally be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has a bearing on what it is like to be that system.
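
A small illustration of Andrés’s point that the computational level underdetermines the algorithmic level: the two functions below have identical input-output behavior (both add two non-negative integers) but use different internal representations and manipulations, roughly an abacus-style tally versus binary ripple-carry arithmetic. The code is mine, added only to make the distinction concrete.

```python
def add_by_counting(a, b):
    """'Abacus-like' algorithm: represent numbers as collections of beads
    (unary tallies) and push them together one at a time."""
    beads = ["*"] * a
    for _ in range(b):
        beads.append("*")
    return len(beads)

def add_by_binary(a, b):
    """Different algorithm, same input-output mapping: ripple-carry addition
    over binary digits, the way digital hardware typically does it."""
    result, carry, shift = 0, 0, 0
    while a or b or carry:
        bit_a, bit_b = a & 1, b & 1
        total = bit_a + bit_b + carry
        result |= (total & 1) << shift
        carry = total >> 1
        a, b, shift = a >> 1, b >> 1, shift + 1
    return result

# Identical behavior at the computational level...
assert add_by_counting(19, 23) == add_by_binary(19, 23) == 42
# ...but an observer of input-output pairs alone cannot tell which
# algorithm (or which physical implementation) produced them.
```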

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which is trying to explain away consciousness.

Lucas: What is your working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, consciousness is used in many different ways. There are eight or so definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others, and we tend to focus on the more fundamental side of the spectrum for the word. A sense that is not very fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia: what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: first of all, we do think it exists.

Second, in some sense it has causal power, in the sense that the fact that we are conscious matters for evolution; evolution made us conscious for a reason, it’s actually doing some computational legwork that would perhaps be possible to do otherwise, but not as efficiently or conveniently as it is with consciousness. Then you also have the property of qualia, the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on, and all of these seem like completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as experiencing sound. All of those different types of sensations we describe under the category of consciousness because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge and take seriously its unity.

Lucas: What are your intuition pumps for thinking why consciousness exists as a thing? Why are there qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that the various contrasts that you can have within experience can serve a computational role. So, there may be a very deep reason why color qualia or visual qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, people who are synesthetic.

They may open their eyes and experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to geometrically represent a projective space. That’s something that naturally comes out of representing the world with a sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans who, whenever they opened their eyes, experienced sound, and they used that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships between possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input-output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe it’s by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure; that whenever you have a particular type of algorithmic information processing, you get this kind of geometric state space. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile sensations or smell, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to tackling them is that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is the natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples that I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is running. So this is kind of an unanswerable question from the perspective of functionalism, whereas with a physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let’s say you’re at a park enjoying an ice cream. Imagine a system that has, let’s say, algorithms isomorphic to whatever is going on in your brain; the particular algorithms that your brain is running at that precise moment, within a functionalist paradigm, map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what the boundary of the system is? Even for some of these particular states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That calls into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion neurons, just because they’re skull-bound and spatially distributed, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience: experience seems unified, but experience also seems lots of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things: we look at a person and we might have an intuition of what type of person they are, and if we’re not careful, we can confuse our intuitions, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than saying, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There are definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and have them harmonize. That’s phenomenal character, and I would say that there’s a strong case to be made for not doubting that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness, where one’s metaphysics or ontology would want to assume the fewest extra properties or entities in order to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe, that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, today it’s a viable position to say that about consciousness: it’s not yet clear whether consciousness has deep structure, but we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about it, how we understand the world, how we behave?

Since the implementation level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis can have bearing on another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in the sense that it doesn’t really help you understand the algorithm.

But if the way you’re using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for which algorithms are finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive, real things? What if epiphenomenalism is true and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of a theory of consciousness, in contrast to functionalism. David Pearce, I think, would describe his view as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, “non-materialist” means it is not saying that the stuff of the world is fundamentally unconscious. That’s something materialism claims: that what the world is made of is not conscious, it is raw matter, so to speak.

“Physicalist,” again, in the sense that the laws of physics exhaustively describe behavior, and “idealist” in the sense that what makes up the world is qualia or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101 and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalist take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how this fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. One way that question gets answered is at the level of agents. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about with that level, the agent level, whereas we think the true source of value is lower level than that; that there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is: what is the lower level property that gives rise even to agentive behavior, that underlies every other aspect of experience? That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? One intuition here is the extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, the BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, just really bizarre, never encountered before in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns; it’s not just goals and actions and utility functions, but the actual pattern of your experience may determine valence. The same goes for the SUBPAC, a technology that basically renders sounds between 10 and 100 hertz; some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing, and it’s like, why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high and low valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss, or pain. And again, they don’t seem to have much semantic content per se, or rather the semantic content is not the core reason why they feel the way they do. It has more to do with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all of these have counterexamples. All of these have some point you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or will sort of intrinsically encode, your emotional valence, how pleasant or unpleasant this experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is that there is this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects, like they’re just good or bad. So with this valence realism view, this potential for goodness or badness, whose nature is sort of self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life, and embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in an experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists, would say there’s just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by basically realizing that liking, wanting, and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something we believe matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive, but we don’t necessarily see that. We would also expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be, let’s say on graduation day you give the person an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant cream of the experience that they were actually looking for, that they thought all along was tied in with the intentional content, with the fact of graduating, but in fact it was the hedonic gloss that they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, we’re trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems then in aggregate, you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math, and the interpretation. The first question is: what metaphysics do you even start with? With what ontology do you even approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s this question of, okay, so you have your core ontology, in this case physics, and then there’s this question of what counts, what actively contributes to consciousness? Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses, but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience, but even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens, and it has a certain style of answer, a style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

Say we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness; then we can figure out some principled way to draw a boundary around it: okay, this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven and the reality is a five; maybe that’s a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There’s this researcher William Sethares who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be relatively low on average, and if you take the average dissonance of a minor key, it’s going to be higher. So in a sense, what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.
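
As a rough sketch of the computation just described, something like the following Python adds up pairwise dissonance over the harmonics of a set of notes. The dissonance curve and its constants are a common parameterization of the Plomp-Levelt/Sethares model; treat them, and the six-harmonic timbre, as illustrative assumptions rather than the exact figures Sethares uses.

```python
import itertools
import math

def dyad_dissonance(f1: float, f2: float, a1: float = 1.0, a2: float = 1.0) -> float:
    """Approximate Plomp-Levelt style dissonance between two pure tones."""
    f_min, f_max = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_min + 19.0)          # critical-bandwidth scaling
    x = s * (f_max - f_min)
    return min(a1, a2) * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def note_harmonics(fundamental: float, n: int = 6):
    """Model a note as its first n harmonics with decaying amplitudes."""
    return [(fundamental * k, 1.0 / k) for k in range(1, n + 1)]

def total_dissonance(notes):
    """Add up the pairwise dissonance between every harmonic of the notes."""
    partials = [p for note in notes for p in note_harmonics(note)]
    return sum(dyad_dissonance(f1, f2, a1, a2)
               for (f1, a1), (f2, a2) in itertools.combinations(partials, 2))

major_triad = [261.6, 329.6, 392.0]   # C4, E4, G4
minor_triad = [261.6, 311.1, 392.0]   # C4, Eb4, G4
print(total_dissonance(major_triad), total_dissonance(minor_triad))
```

With typical harmonic timbres a measure like this tends to score the minor chord a bit higher in total dissonance, which is the major-versus-minor pattern described above; extending it from one chord to the average over pairs of notes in a whole key is just a loop over those pairs.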

That’s a very ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition; how do you even go about it? But deep down, the reason why a particular phrase sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very similarly to music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.

These are electromagnetic waves, and it’s not exactly static and it’s not exactly a standing wave either, but it gets really close to it. So basically what this is saying is that there’s this excitation-inhibition wave function, and that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. To give a concrete example, one of the harmonics with the lowest frequency is basically a very simple one where the two hemispheres are alternately more excited versus inhibited. That will be a low frequency harmonic because it is a very spatially large wave, an alternating pattern of excitation. Much higher frequency harmonics are much more detailed and obviously hard to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited are these very thin wave fronts.

It’s not a mechanical wave as such, it’s an electromagnetic wave. So it’s actually the electric potential in each of these regions of the brain that fluctuates, and within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
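
As a minimal sketch of what “a weighted sum of harmonics” means computationally, here is a toy in Python that uses assumed standing-wave modes along a one-dimensional line of regions; the real connectome-specific harmonic wave work uses eigenmodes of the brain’s connectome rather than sine waves, so read this only as an illustration of the decomposition idea.

```python
import numpy as np

N = 128                                   # number of "brain regions" on a line
x = np.linspace(0.0, 1.0, N)

def harmonic(k: int) -> np.ndarray:
    """k-th standing-wave mode: k half-wavelengths fit across the domain."""
    mode = np.sin(np.pi * k * x)
    return mode / np.linalg.norm(mode)

modes = np.stack([harmonic(k) for k in range(1, 33)])     # 32 harmonics

# A made-up activity pattern: mostly a low harmonic plus a higher-frequency ripple.
activity = 0.8 * harmonic(2) + 0.3 * harmonic(17) + 0.05 * np.random.randn(N)

weights = modes @ activity            # project the state onto each harmonic
reconstruction = weights @ modes      # the weighted sum of harmonics

print("dominant harmonics:", np.argsort(-np.abs(weights))[:3] + 1)
print("reconstruction error:", np.linalg.norm(activity - reconstruction))
```

The vector of weights is the “weighted sum” description of the state; in the framing above, how consonant or dissonant the strongly active harmonics are with each other is then the candidate proxy for valence.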

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is some fundamental valence feature of the world and all brains that come out of evolution should probably enjoy resonance, right?

Mike: It’s less about the stimulus, it’s less about the exact signal, and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or in a precisely technical term, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going into our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence. This is sort of a way of getting a Fourier transform of where the energy is in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? The link to the Symmetry Theory of Valence is that this should be a very good proxy for how pleasant it is to be that brain.

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on them: the first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of whether it is something that can be formalized or whether it is intrinsically fuzzy. I’m calling this the real problem of consciousness, and a lot depends on the answer to this. There are so many different ways to approach consciousness and hundreds, perhaps thousands, of different carvings of the problem: panpsychism, dualism, non-materialist physicalism, and so on. I think essentially the core distinction is that all of these theories sort themselves into two buckets, and that’s whether consciousness is real enough to formalize exactly or not. This frame is perhaps the most useful frame for evaluating theories of consciousness.

Lucas: And then there’s the meta-problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in the field. The meta-problem of consciousness doesn’t necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move that people make here is to say that all of these crazy things we think and say about consciousness are just any information processing system modeling its own attentional dynamics. That’s one illusionist frame, but even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, you need to have consciousness and so on, but still lacking introspective access. You could have these complicated conscious information processing systems, but they don’t necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting basically when you are at existential risk or when there are reproductive opportunities that you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something that we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious and they assume everyone else is conscious and they think that the only place for value to reside is within consciousness, and that a world without consciousness is actually a world without any meaning or value. Even if we think that say philosophical zombies or people who are functionally identical to us but with no qualia or phenomenological states or experiential states, even if we think that those are conceivable, then it would seem that there would be no value in a world of p-zombies. So I guess my question is why does phenomenology matter? Why does the phenomenological modality of pain and pleasure or valence have some sort of special ethical or experiential status unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence: that we should be wary of building a Disneyland with no children, some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would just say that I think most AI safety research is focused around making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. I would say that there aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re allowed to probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content. Again, it could be the case of graduating, or it could be the case of getting a promotion, one of those things that a lot of people associate with feeling great; but if you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when it comes to many different points of view agreeing on what aspect of the experience brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and some frameworks in ethics make much more or less sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality. I want to talk about what exists, what’s real, and what the structure of what exists is, and I think if we succeed at that then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit “should” wrapped up in questions about valence, but I do think that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame on valence. Whether or not this also discloses, as David Pearce put it, the utility function of the universe, that is another question and can be decomposed.

Andrés: One framing here too is that we do suspect valence is going to be the thing that matters for any mind if you probe it in the right way in order to achieve reflective equilibrium. A good example is a talk a neuroscientist was giving at some point: there was something off and everybody seemed to be a little bit anxious or irritated and nobody knew why, and then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There was this hissing pattern caused by some malfunction of the microphone and it was making everybody irritated; they just didn’t realize that was the source of the irritation, and when it got fixed everybody was like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. So somebody in the year 2050 might come into one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” and if you put them through the scanner, we identify that their 17th and 19th harmonics are in a state of dissonance. We cancel the 17th to make it cleaner, and then the person will all of a sudden say, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us about why we prefer this, why we think that is worse, will all of a sudden become crystal clear from the point of view of valence gradients objectively measured.

Mike: One of my favorite phrases in this context is “what you can measure you can manage,” and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for, honestly, maybe a lot of amazing things, making the human condition just intrinsically better. Also maybe a lot of worrying things; being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems and they’re getting pretty strong, and they’re going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride for now. So I’d like to discuss a little bit here the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all especially to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even if you have the best technical solution in the world, a sufficiently good technical solution doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem. What are we even trying to do here? What is the optimal relationship between AI and humanity? And there are also a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing x-risk.

Lucas: What kind of nihilism are you talking about here, like nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism that it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus: the only real philosophical question is whether to commit suicide. Whereas the way I think of it, the real philosophical question is how to make love last, how to bring value to existence, and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how having global coordination around building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may sort of embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness here, actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb here for how to do that. One interesting possibility is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers that could optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment from the present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong last year and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s sort of understanding that, yeah, human preferences are insane. They’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety we call the complexity of value thesis. I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable; maybe we leave everything the same but take away freedom, and most of the value is lost. This paints a pretty sobering picture of how difficult AI alignment will be.
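
A back-of-the-envelope illustration of why that geometric picture is so sobering; the dimension count, tolerance, and perturbation size below are my own toy numbers, not figures from the thesis itself:

```python
import math
import random

dimensions = 1_000
tolerance = 0.05     # each dimension must stay within +/- 0.05 of the target

# If "human value" were a tiny box (side 2 * tolerance per dimension) inside a
# unit cube, the chance a random configuration lands in it is (2*tolerance)**d.
log10_hit = dimensions * math.log10(2 * tolerance)
print(f"chance a random point preserves value: about 10^{log10_hit:.0f}")

# Monte Carlo version of the same point: perturb a known-good point a little
# in every dimension and check whether it is still inside the box.
trials, hits = 10_000, 0
for _ in range(trials):
    if all(abs(random.uniform(-0.2, 0.2)) <= tolerance for _ in range(dimensions)):
        hits += 1
print(f"value-preserving perturbations: {hits}/{trials}")   # essentially always 0
```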

I think this is perhaps arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing that I’m playing around with here is: instead of the complexity of value thesis, the unity of value thesis. It could be that many of the things that we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all of these have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there's some sort of elegant compression that can be made, and actually things aren't so stark. We're not this point in a super high dimensional space where, if we leave the point, everything of value is trashed forever; maybe there's some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe they all have positive valence in common, and positive valence can sort of be reverse engineered. To some people this feels like a very scary dystopic scenario, don't knock it until you've tried it, but at the same time there's a lot of complexity here.

One core frame that qualia formalism and valence realism offer AI safety is that maybe the actual goal is somewhat different than what the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all preferences and values that human beings have, and then the valence realist view, which says that what's ultimately good are certain experiential or hedonic states. I'm interested and curious about whether, if this valence view is true, it's all just going to turn into hedonium in the end.

Mike: I'm personally a fan of continuity. I think that if we do things right we'll have plenty of time to get things right, and also if we do things wrong then we'll have plenty of time for things to be wrong. So I'm personally not a fan of big unilateral moves. But getting back to this question of whether understanding what is can help us: clearly yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is sort of a frame built around functionalism, and it's a seductive frame, I would say. But whole brain emulations wouldn't necessarily have the same qualia as the original humans, based on hardware considerations, and so there could be some weird lock-in effects where, if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah. Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It's a pretty good taxonomy. Basically there's open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we're all one consciousness. Another framing is that our true identity is the light of consciousness, so to speak, so it doesn't matter in what form it manifests; it's always the same fundamental ground of being. Then you have the common sense view, called closed individualism: you start existing when you're born, you stop existing when you die. You're just this segment. Some religions actually extend that into the future or past, with reincarnation or maybe with heaven.

There's a sense of ontological distinction between you and others, while at the same time ontological continuity from one moment to the next within you. Finally you have this view called empty individualism, which is that you're just a moment of experience. That's fairly common among physicists and a lot of people who've tried to formalize consciousness; often they converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision making as maximizing the expected utility of yourself as an agent, all of those seem to be implicitly based on closed individualism, and they're not necessarily questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn't actually carve nature at its joints, as a Buddhist might say, if the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self interest, in a completely different light, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like: what if there is mind melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common sense view of identity, but they're not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there's probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we're actually trying to really understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it's evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe of yourself as the thing you can affect most directly in a causal way, if you define your boundary that way.

That basically focuses you on the actual degrees of freedom that you do have. And if you think of a society of open individualists, where everybody is altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that the latter view would have a tremendous evolutionary advantage in that context. So I'm not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it, how to make it evolutionarily stable, and also how to make it ethical. That's an open question. I do think it's important to think about, and if you take consciousness very seriously, especially within physicalism, that usually casts huge doubts on the common sense view of identity.

It doesn’t seem like a very plausible view if you actually tried to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we're not by default all empty individualists or open individualists. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future self, because they're not the same as you. So that leads to a problem of defection. And open individualism, where everything is the same being so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn't allow altruistic punishment or any way to stop the free riding. There's interesting game theory here, and it also just feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly. Depending on one's theory of identity, people open themselves up to getting hacked in different ways, and so different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that's really good and sometimes really bad. I would make the prediction that, if not open individualism in its full-fledged form, then at least a weaker sense of identity than closed individualism is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells and not try to disturb the local attractor too much, and that itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, with identity I'd like to move beyond all distinctions of sameness or difference. To say, oh, we're all one consciousness, to me seems like saying we're all one electromagnetism, which is really to say that consciousness is an independent feature or property of the world that's just sort of a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

The same way in which it would be nonsense to say, "Oh, I am these specific atoms, I am just the forces of nature that are bounded within my skin and body." That would be nonsense. In the same sense, with what we were discussing about consciousness, there is the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional use, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if qualia are actually good or bad, then as David Pearce says, it's really just an epistemological problem that you don't have access to other brain states in order to see the self-intimating nature of what it's like to be that thing in that moment.

There's a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it's good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn't matter where the bliss is. Bliss is bliss, and there's no such thing as your bliss or anyone else's bliss. Bliss is like its own independent feature or property, and you don't really begin or end anywhere. You are like an expression of a 13.7 billion year old system that's playing out.

The universe is just peopleing all of us at the same time, and when you get this view and you see yourself as just a super thin slice of the evolution of consciousness and life, for me it's like: why do I really need to propagate my information into the future? I really don't think there's anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate that into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see that all as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human level psychological drives and concepts and adaptations with a fundamental physics level description of what is. I don't have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that's not necessarily super easy if you're suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There's an article I wrote called Consciousness vs. Replicators that kind of gets to the heart of this issue. That sounds a little bit like good and evil, but it really isn't. The true enemy here is replication for replication's sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit of consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark's general frame of us living in a mathematical universe. One reframe of what we were just talking about, in those terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and optimize our light cone for those sorts of patterns. This may have some counterintuitive implications: maybe closed individualism is actually a very adaptive thing that in the long term builds robust societies. Could be that that's not true, but I just think that taking the mathematical frame and the long term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. But my pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. We tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? But if we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom's simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, then first of all, the human stuff might just be a rounding error.

Most of the value, in the sense of positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I'm very suspicious that the big bang starts with a very symmetrical state; I'll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not humans, and cosmological scale events or objects would be very interesting to point it at. That would give a much clearer answer than human intuition as to whether we live somewhere closer to heaven or hell.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: Just I would like to say, yeah, thank you so much for the interview and reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, these were wonderful questions, and it's very rare for an interviewer to have unconventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we're working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We're building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my writing at my blog, opentheory.net, and Andrés's at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

End of recorded material

FLI Podcast: The Unexpected Side Effects of Climate Change With Fran Moore and Nick Obradovich

It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.

In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.

Topics discussed in this episode include:

  • How getting used to climate change may make it harder for us to address the issue
  • The social cost of carbon
  • The effect of temperature on mood, exercise, and sleep
  • The effect of temperature on public safety and democratic processes
  • Why it’s hard to get people to act
  • What we can all do to make a difference
  • Why we should still be hopeful

Publications discussed in this episode include:

You can listen to the podcast above, or read the full transcript below.

Ariel: Hello, and a belated happy Earth Day to everyone. I’m Ariel Conn, your host of The Future of Life podcast. And in honor of Earth Day this month, I’m happy to have two climate-related scientists joining the show. We’ve all heard about the devastating extreme weather that climate change will trigger; We’ve heard about melting ice caps, rising ocean levels, warming oceans, flooding, wildfires, hurricanes, and so many other awful natural events.

And it’s not hard to imagine how people living in these regions will be negatively impacted. But climate change won’t just affect us directly. It will also impact the economy, agriculture, our mental health, our sleep patterns, how we exercise, food safety, the effectiveness of policing, and more.

So today, I have two scientists joining me to talk about some of those issues. Doctor Nick Obradovich is a research scientist at the MIT Media Lab. He studies the way that climate change is likely impacting humanity now and into the future. And Doctor Fran Moore is an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. Her work sits at the intersection of climate science and environmental economics and is focused on understanding how climate change will affect the social and natural systems that people value.

So Nick and Fran, thank you so much for joining us.

Nick: Thanks for having us.

Fran: Thank you.

Ariel: Now, before we get into some of the topics that I just listed, I want to first look at a paper you both published recently called “Rapidly Declining Remarkability of Temperature Anomalies May Obscure Public Perception of Climate Change.” And essentially, as you describe in the paper, we’re like frogs in boiling water. As long as the temperatures continue to increase, we forget that it used to be cooler and we recalibrate what we consider to be normal for weather. So what may have been considered extreme 15 years ago, we now think of as normal.

Among other things, this can make trying to address climate change more difficult. I want both of you now to talk more about what the study was and what it means for how we address climate change. But first, if you could just talk about what prompted this study.

Fran: So I've been interested for a long time in the question of: as the climate changes and people are gradually exposed, in their everyday life, to weather that used to be very unusual but is, because of climate change, more and more typical, how do we think about defining things like extreme events under those kinds of conditions?

I think researchers have this intuition that there's something about human perception and judgment that goes into that, or that there's some kind of limit on how humans understand the weather that defines what we think of as normal and extreme, but no one had really been able to measure it. What I think is really cool in this study, working with Nick and our other coworkers, is that we were able to use data from Twitter to actually measure what people think of as remarkable, and then we can show that that changes quickly over time.

Ariel: I found this use of social media to be really interesting. Can you talk a little bit about how you used Twitter? And I was also curious if that — aside from being a new source of information — does it also present limitations in any way or is it just exciting new information?

Nick: The crux of this insight was that we talk about the weather all the time. It’s sort of the way to pass time in casual conversation, to say hi to people, to awkwardly change the topic — if someone has said something a little awkward, start talking about the weather. And we realized that Twitter is a great source for what people are talking about, and I had been collecting billions of tweets over the last number of years. And Fran and I met, and then we got talking about this idea and we were like, “Huh, you know, I bet you could use Twitter to measure how people are talking about the weather.” And then Fran had the excellent insight that you could also use it to get a metric of how remarkable people find the weather by how unusually much they’re talking about unusual weather. And so that was kind of the crux of the insight there.

And then really what we did is we said, “Okay, what terms exist in the English language that might likely refer to weather when people are talking about the weather?” And we combed through the billions of tweets that I had in my store and found all of the tweets plausibly about the weather and used that for our analysis and then mapped that to the historical temperatures that people had experienced and also the rates of warming over time that the locations that people lived in had experienced.

Ariel: And what was the timeframe that you were looking at?

Fran: So it's about three years: from March of 2014 to the end of 2016. But then we're able to combine that with weather data that goes back to 1980. So we can match the tweeting behavior going on in this relatively recent time period, and then look at how that behavior is explained by all the patterns of temperature change across these counties.

So what we found is, firstly, maybe exactly what you would expect, right, which is that the rate at which people tweet about particular temperatures depends on what is typical for that location, for that time of year. So if you have very cold weather, but that very cold weather is basically what you should be expecting, you're going to tweet about it less than if that very cold weather is atypical.

But then what we were able to show is that what people think of as "usual," which defines this tweeting behavior, changes really quickly, so that if you have these unusual temperatures multiple years in a row, the tweeting response quickly starts to decline. What that indicates is that people are adjusting their ideas of normal weather very quickly. We're actually able to use the tweets to directly estimate the rate at which this updating happens, and, to our best estimate, we think that people are using approximately the last two to eight years as a baseline for establishing normal temperatures for that location, for that time of year. When people look at the weather outside and evaluate whether it's hot or cold, the reference point they're using is set by the fairly recent past.
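
To make the shifting-baseline idea concrete, here is a minimal sketch of how one might score how unusual a week's temperature is relative to the recent past rather than a fixed historical normal. The file name, column names, and the five-year window are illustrative assumptions; this is an illustration of the concept, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical input: one row per county-week with average temperature.
# Assumed columns: 'county', 'week_of_year', 'year', 'temp_f'.
df = pd.read_csv("county_weekly_temps.csv")

BASELINE_YEARS = 5  # illustrative; the paper's estimate is roughly the last 2-8 years

def anomaly_vs_recent_baseline(group):
    """Temperature anomaly relative to the same week of year in recent years."""
    group = group.sort_values("year").copy()
    # Baseline: mean of the previous BASELINE_YEARS values for this county-week;
    # shift(1) keeps the current year out of its own baseline.
    baseline = group["temp_f"].shift(1).rolling(BASELINE_YEARS, min_periods=2).mean()
    group["anomaly_f"] = group["temp_f"] - baseline
    return group

df = df.groupby(["county", "week_of_year"], group_keys=False).apply(anomaly_vs_recent_baseline)

# Under a shifting baseline, a week is "remarkable" if it deviates from what the
# last few years looked like, even if it is typical of a fixed 1981-2010 normal.
print(df.nlargest(10, "anomaly_f")[["county", "year", "week_of_year", "anomaly_f"]])
```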

Ariel: What does this mean as we’re trying to figure out ways to address climate change?

Nick: When we saw this result, we were a bit troubled, because it was faster than we would perhaps hope. I'm a political scientist by training, and I saw this and I said, "This is not ideal," because if you have people getting used to a climate that is changing on geologically rapid scales, but perhaps somewhat slowly on human time scales, then you lose one of the things that we know helps to drive political action, policy, and political attention: simple awareness of a problem. And so if people's expectations adapt pretty quickly to climate change, then all of a sudden a hundred-degree day in North Dakota that would have been very unusual in 2000 is maybe fairly normal in 2030. And so as a result, people aren't as aware of the signal that climate change is producing. And that could have some pretty troubling political implications.

Fran: My takeaway from this is that I think it certainly points to the risk that these conditions that are geologically or even historically very, very unusual — that they are not perceived as such. We’re really limited by our human perception, and that’s even within individuals, right — what we’re estimating is something that happens within an individual’s lifetime.

So what it means is that you can't just assume that as climate change gets worse it's going to automatically rise to the top of the political agenda in terms of urgency. Like a lot of other chronic, serious social problems we have, it takes a lot of work on the part of activists and norm entrepreneurs to do something about climate change. Just because it's happening, and it's becoming, at least statistically or scientifically, increasingly clear that it's happening, that won't necessarily translate into people wanting to do something about it.

Ariel: And so you guys were looking more at what we might consider sort of abnormalities in relatively normal weather: if it’s colder in May than we’d expect or it’s hotter in January than we’d expect. But that’s not the same as some of the extreme weather events that we’ve also seen. I don’t know if this is sort of a speculative question, but do you think the extreme weather events could help counter our normalization of just changing temperatures or do you think we would eventually normalize the extreme weather events as well?

Nick: That’s a great question. So one of the things we didn’t look at is, for example, giant hurricanes, big wildfires, and things like that that are all likely to increase in frequency and severity in the future. So it could certainly be the case that the increase in frequency and intensity of those events offsets the adaptation, as you suggest. We actually are trying to think about ways to measure how people might adapt to other climate-driven phenomena aside from just regular, day-to-day temperature.

I hope that’s the case, right? Because if we’re also adapting to sea level rise pretty rapidly as it goes along and we’re also adapting to increased frequency of wildfires and things like that, a few things might happen; one being that if we’re getting used to semi-regular flooding, for example, we don’t move as quickly as we need to — up to the point where basically cities start getting inundated, and that could be very problematic. So I hope that what you suggest actually turns out to be the case.

Fran: I think that this is a question we get a lot, like, “Oh, well temperature is one thing, but really the thing that’s really going to spur people is these hurricanes or floods or these wildfires.” And I think that’s a hypothesis, but I would say it’s as yet untested. And sure, a hurricane is an extreme event, but when they start happening frequently, is that going to be subject to the same kind of normalization phenomenon that we show here? I would say I don’t know, and it’s possible it would look really different.

But I think it’s also possible that it wouldn’t, and that when you start seeing these happen on a very regular basis, that they become normalized in a very similar way to what you see here. And it might be that they spur some kind of adaptation or response policy, but the idea that they would automatically spur a lot of mitigation policy I think is something that people seem to think might be true, but I would say that we need some more empirical evidence.

Nick: I like to think of humans as an incredibly adaptable species. I think we’re a great species for that reason. We’re arguably the most successful ever. But our adaptability in this instance may perhaps prove to be part of our undoing, just in normalizing worsening conditions as they deteriorate around us. I hope that the hypothesis that Fran lays out ends up being the case: that, as the climate gets weirder and weirder, there is enough signal that people become concerned enough to do something about it. But it is just an empirical hypothesis at this point.

Fran: What I thought was a really neat thing that we were able to do in this paper was ask: are people just not talking about these conditions because they've normalized them and they're no longer interesting, or have people actually been able to take action to reduce the negative consequences of these conditions? To do that we used sentiment analysis. This is something that Nick and our other author Patrick Baylis have used before: just based on the words being used in the tweets, you can measure the overall mood being conveyed, or the kind of emotional state of the people sending those tweets, and we find that very hot and very cold temperatures have negative effects on sentiment. And we find that those effects persist even after people stop talking about these unusual temperatures.

What that's saying is that this is not a good news story of effective adaptation, where people are able to reduce the negative consequences of these temperatures. Actually, they're still being very negatively affected by them; they're just not talking about them anymore. And that's kind of the worst of both worlds.
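
As a rough illustration of the word-counting style of sentiment measurement described above, the toy sketch below scores tweets with a small made-up lexicon and compares average sentiment on hypothetical hot versus normal days. The word lists, example tweets, and scoring rule are all illustrative assumptions; published work uses much larger validated lexicons or trained models.

```python
# Toy lexicon-based sentiment scoring, in the spirit of word-counting approaches.
POSITIVE = {"great", "happy", "love", "beautiful", "nice", "good"}
NEGATIVE = {"awful", "miserable", "hate", "terrible", "gross", "bad"}

def sentiment_score(tweet: str) -> float:
    """Return (positive - negative word count) / total words, in [-1, 1]."""
    words = tweet.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

# Hypothetical usage: average sentiment on unusually hot days vs. normal days.
hot_day_tweets = ["this heat is awful, I feel miserable", "too hot to sleep, terrible night"]
normal_day_tweets = ["beautiful day for a walk", "had a great run this morning"]

hot_mean = sum(map(sentiment_score, hot_day_tweets)) / len(hot_day_tweets)
normal_mean = sum(map(sentiment_score, normal_day_tweets)) / len(normal_day_tweets)
print(f"hot days: {hot_mean:.3f}   normal days: {normal_mean:.3f}")
```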

Ariel: So I want to actually follow up with that because I had a question about that paper that you just referenced. And if I was reading it correctly, it sort of seemed like you’re saying that we basically get crankier as the weather falls onto either extreme of our preferred comfort zone. Is that right? Are we just going to be crankier as climate gets worse?

Nick: So that was the paper that Patrick Baylis and I had with a number of other co-authors, and the key point about that paper is that we were looking at historical contemporaneous weather; we weren't looking for adaptation over time with that analysis. What we found is that at certain levels of temperature, for example when it's really hot outside, people's sentiment goes down: their mood is worsened. When it's really cold outside, we also found that people's sentiment was worsened; and we found that, for example, lots of precipitation made people unhappy as well.

But with that paper, what we didn't do was examine the degree to which people got used to changes in the weather over time. That's what we were able to do in this paper with Fran, and what we saw was, as Fran points out, troubling: people weren't substantially adapting to these temperature shocks over time, or to longer-term changes in climate; they just weren't talking about them as much.

So if you think though that there is no adaptation, then yeah, if the world becomes much hotter, on the hot end of things — so in the summer, in the northern hemisphere for example — people will probably be a bit grumpier. Importantly though, on the other side of things, in the wintertime, if you have warming, you might expect that people are in somewhat better moods because they’re able to enjoy nicer weather outside. So it is a little bit of a double-edged sword in that way, but again important that we don’t see that people are adapting, which is pretty critical.

Ariel: Okay. So we can potentially expect at least the possibility of decrease in life satisfaction just because of weather, without us even really appreciating that it’s the weather that’s doing it to us?

Nick: Yes, during hotter periods. The converse is that during the wintertime, in the northern hemisphere, we would have to say that warming temperatures, people would probably enjoy for the most part. If it was supposed to be 35 degrees Fahrenheit outside and it’s now 45 Fahrenheit, that’s a bit more pleasant. Now you can go with a lighter jacket.

So there will be those small positive benefits — although, as Fran is probably going to talk about here in a little bit, there are other big countervailing negatives that we need to consider too.

Fran: What I like about this paper that Nick and Patrick wrote previously on sentiment is that they have these comparisons to it being a Monday, or to a home team loss. Sometimes it's hard to put these measures in perspective: Mondays on average make people miserable, and it being very, very hot out also makes people miserable, in kind of similar ways to it being a Monday.

Nick: Yeah. We found that particularly cold temperatures, for example, had a similar magnitude of effect on positive sentiment: a reduction in positive sentiment of a magnitude equivalent to a small earthquake in your location, and things like that. So the magnitude of the weather's effects is much larger than we necessarily thought it would be, which we thought was, I guess, interesting. But also there was a whole big literature from psychology and economics and political science that had looked at weather and various outcomes and found that sometimes the effect sizes were very large and sometimes the effect sizes were effectively zero. So we tried to basically just provide the answer to that question in that paper: the weather matters.

Ariel: I want to go back to the idea of whether or not extreme events will be normalized, because I tend to be slightly cynical (and maybe this is hopeful for once) that the economic cost of the extreme events is not something we would normalize to, that we would not get used to having to spend billions of dollars a year, whatever it is, to rebuild cities.

And Fran, I think that touches on some of your work if I’m correct, in that you look at what some of these costs of climate change would be. So first, is that correct? Is that one of the things that you look at?

Fran: Yeah. A large component of my work has been on improving the representation of climate change damages, meaning what we know from the physical sciences about how climate change affects the things that we care about, and incorporating that into the thing called the social cost of carbon, which is a measure that's very relevant for regulatory and policy analysis for climate change.

Ariel: Can you explain what the social cost of carbon is? What is being measured?

Fran: So think about when we emit a ton of CO2: that ton of CO2 goes off into the atmosphere, and it's going to affect the climate, and that change in the climate is going to have consequences around the world in many different sectors. And the CO2 is going to stay in the atmosphere for a long time, so those effects are going to persist far out into the future.

What the social cost of carbon is, really, is just an accounting exercise that tries to quantify all of those impacts, add them all up together, put them in common units, and assign that as the cost of that ton of CO2 that you emitted. You can see from that description why this is an ambitious exercise: theoretically it covers all of these climate change impacts, around the world, for all time. And then there's another step, which is that in order to aggregate these, to add them up, you need to put everything into common units. The units that we use are dollars, so there's a critical economic valuation step to take these things that happen in agriculture, or along coastlines, or to mortality risk, and put them into some kind of common unit and value them all.

And so depending on what type of impact you're talking about, that's more or less challenging. But it's an important number because, at least in the United States, we have a requirement that all regulations have to pass a cost-benefit analysis. So in order to do a cost-benefit analysis of climate regulation, you need to understand the benefits of not emitting CO2. Pretty much any policy that affects emissions needs to account for these damages in some way. That's why this is very directly relevant to policy.
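
For intuition, the accounting described here amounts to estimating the extra damages a marginal ton of CO2 causes in each future year and discounting them back to the present. The sketch below uses an invented damage path and discount rate purely to illustrate the arithmetic; it is not an actual social cost of carbon calculation.

```python
# Stylized social cost of carbon: the present value of the extra damages caused
# by emitting one additional ton of CO2 today. All numbers here are invented.
DISCOUNT_RATE = 0.03   # illustrative constant discount rate
HORIZON_YEARS = 300    # CO2's effects persist for a very long time

def marginal_damage(year: int) -> float:
    """Extra global damage (dollars) in a future year from one additional ton
    emitted today. A real model derives this from coupled climate and economic
    modules; this toy function just rises for a century, then slowly decays."""
    return 0.5 * min(year, 100) / 100 * (0.995 ** max(year - 100, 0))

scc = sum(
    marginal_damage(t) / (1 + DISCOUNT_RATE) ** t
    for t in range(1, HORIZON_YEARS + 1)
)
print(f"Illustrative SCC: ${scc:.2f} per ton of CO2")
```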

Ariel: I want to keep looking at what this means. One of your papers has a sentence that reads roughly, "impacts on agriculture increase from a net benefit of $2.7 per ton to a net cost of $8.5 per ton of CO2." I thought that seemed like a really good example for you to explain what these costs actually mean.

Fran: Yeah. This was an exercise I did a couple of years ago with coauthors Tom Hertel, Uris Baldos, and Delavane Diaz. The idea was that we now know a lot about how climate change affects crop yields; there's been an awful lot of work on that in economics and the agricultural sciences. But that was essentially not represented in the social cost of carbon, where our estimates of climate change damages came from studies done in the late 80s or early 90s, and our understanding of how climate change will affect agriculture has really changed since then.

What those numbers represent: the benefit of $2.7 per ton is what is currently represented in the models that calculate the social cost of carbon. The fact that it's negative indicates that these models were assuming agriculture on net is going to benefit from climate change. This is largely because of a combination of CO2 fertilization and a fair bit of assumption that in most of the world crops are going to benefit from higher temperatures. Now we know that's more or less not the case.

When we look at how we think temperature and CO2 are going to affect the major crops around the world, we use these estimates from the IPCC, and then we introduce those into an economic model; this is the valuation step. That economic model accounts for the fact that countries can shift what they grow, change their consumption patterns, and change their trading partners, a lot of the economic adjustments that we know can be made. We find a fairly large negative effect of climate change on agriculture, which amounts to about $9 per ton of CO2, and those damages are discounted. So if you emit a ton of CO2 today, that number is the dollar value today of all the future damages that ton of CO2 will cause via the agricultural sector.

Ariel: As a reminder, how many tons of CO2 were emitted, say, last year, or the year before? Something that we know?

Fran: We do know that. I'm not sure I can tell you off the top of my head. I would caution that you also don't want to take this number and just multiply it by the total tons emitted, because this is a marginal value; it's merely about whether we emit this particular ton or not. It's really not a value that can be used for saying, "Okay, the total damages from climate change are X." There's a distinction between total damages and marginal damages, and the social cost of carbon is very much about marginal damages.

So it’s like at the margin, how much should we tax CO2? It’s really not going to tell you, should we be on a two-degree pathway, or should we be on a four-degree pathway, or should we be on a 1.5-degree pathway? That you need a really different analysis for.

Ariel: I want to ask one more follow-up question to this, and then I want to get onto some of the other papers of Nick’s. What are the cost estimates that we’re looking at right now? What are you comfortable saying that we’re, I don’t know, losing this much money, we’re going to pay this much money, we’re going to negatively be impacted by X number of dollars?

Fran: The Obama administration went through a fairly comprehensive exercise to take the existing models and standardize them in certain ways, to try to say, "What is the social cost of carbon value that we should use?" They came up with a number that's around $40 per ton of CO2. If you take that number as a benchmark, there's obviously a lot of uncertainty around it, and I think it's fair to say a lot of that uncertainty is on the high end rather than the low end. So if you think about a probability distribution around that existing number, I would say there are a lot of reasons why it might be higher than $40 per ton, and a few, but not a ton, of reasons why it might be lower.

Ariel: Nick, was there anything you wanted to add to what Fran has just been talking about?

Nick: Yeah. The only thing I would say is I totally agree that the uncertainty is on the upper bound of the estimate of the social cost of carbon, and possibly on the extreme upper bound. There are unknowns that we can't estimate from the historical data, in terms of figuring out what happens in the natural system and how that translates through to the social system and the social costs. Fran and others and I are basically just doing the best we can with the historical evidence that we can bring to bear on the question, but there are giant "unknown unknowns," to quote Donald Rumsfeld.

Ariel: I want to sort of quantify this ever so slightly. I Googled it, and it looks like we are emitting in the tens of billions of tons of carbon each year? Does that sound right?

Fran: Check that it’s carbon and not CO2. I think it’s eight to nine gigatons of carbon.

Ariel: Okay.

Nick: CO2 equivalence.

Ariel: Anyway, it’s a lot.

Nick: It’s a lot, yeah.

Ariel: That’s the point.

Nick: It’s a lot; It’s increasing. I think 2018 was an increased blip in terms of the rate of emissions. We need to be decreasing, and we’re still increasing. Not great.

Ariel: All right. We'll take a quick break from the economic side of things and what this will financially cost us, and look at some of the human impacts that we may not necessarily be thinking about, but which Nick has been looking into. I'm just going to go through a list of very quick questions that I asked about a few papers that I looked at.

The first one I looked at is apparently — and this makes sense when I think about it — climate change is going to impact our physical activity, because it’s too hot in places, or things like that. I was wondering if you could talk a little bit about the research you did into that and what you think the health implications are.

Nick: Yeah, totally. So I like to think about the climate impacts that are not necessarily easily and readily and immediately translated into dollar value because I think really we live in a pretty complex system, and when you turn up the temperature on that complex system, it’s probably going to affect basically everything. The question is what’s going to be affected and how much are the important things going to be affected? And so a lot of my work has focused on identifying things that we hadn’t yet thought about as social scientists in doing the social impact estimates in the cost of carbon and just raising questions about those areas.

Physical activity was one. The idea to look at that actually came from back in 2015 — there was a big heat wave in San Diego when I was living there, and I was in a regular running regimen. I would go running at 4:00 or 5:00 PM, but there were a number of weeks, definitely strings of days, where it was 100 degrees or more in October in San Diego, which is very unusual. At 4:00 PM it would be 100 degrees and kind of humid, so I just didn’t run as much for a couple of weeks, and that threw off my whole exercise schedule. I was like, “Huh, that’s an interesting impact of heat that I hadn’t really heard about.”

So I was like, "Well, I know this big data set that collects people's reported physical activity over time, and has a decade's worth of data on, I think, about a million randomly sampled US citizens. Over a million." So I had those data, and I was like, "Well, if you look at the weather and the climate that these people are living in, does that influence their exercise patterns?" What we found was a little bit surprising to me, because I had thought about it on the hot end: "Oh, I stopped running because it was too hot." But the reality is that temperature, and also rainfall, impact our physical activity patterns across the full distribution.

When it's really cold outside, people don't report being very physically active, and one of the main reasons is that one of the primary ways Americans get physical activity is by going outside for a run or a jog or a walk. When it's very nasty outside, people report not being as physically active. We saw on the cold end of the distribution that as temperatures warmed up, people exercised more, up to a relatively high peak in that function. It was an inverted U shape, and the peak was relatively high in terms of temperature, somewhere around 84 degrees Fahrenheit.
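
The inverted-U relationship described here can be illustrated by fitting a quadratic to activity rates as a function of temperature; the peak of the fitted curve then sits at -b/(2a). The data below are fabricated for the example, so the printed peak is only illustrative and is not the study's estimate of roughly 84 degrees Fahrenheit.

```python
import numpy as np

# Fabricated example: reported physical activity rate by daily temperature (F).
temps_f = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
activity = np.array([0.31, 0.35, 0.40, 0.46, 0.52, 0.57, 0.60, 0.61, 0.58, 0.52])

# Fit activity = a*T^2 + b*T + c; an inverted U corresponds to a < 0.
a, b, c = np.polyfit(temps_f, activity, deg=2)
peak_temp = -b / (2 * a)  # vertex of the fitted parabola

print(f"fitted curvature a = {a:.5f} (negative means inverted U)")
print(f"estimated peak of activity at about {peak_temp:.1f} degrees F")
```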

What we realized actually is that at least in the US, at least in some of the northern latitudes in the US, people might exercise more as temperatures warm up to a point. They might exercise more in the wintertime, for example. That was this small little silver lining in what is otherwise, from my research and from Fran’s research and most research on this topic, a cascade of negative news that is likely to result from climate change. But the health impacts of being more physically active are positive. It’s one of the most important things we can do for our health. So a small, positive impact of warming temperatures offset by all the other things that we’ve found.

Ariel: I know from personal experience I definitely don’t like to run in the winter. I don’t like ice, so that makes sense.

Nick: Ice, frostbite.

Ariel: Yeah.

Nick: All these things are … yeah. So just observationally, if I look out my window, and there’s a running path near me, I see dramatically more people on a sunny, mild day than I do during the middle of the winter. That’s how most people get their exercise. A lot of people, we know from the public health literature, if they’re not going out for a walk or a stroll, they’re not really getting any physical activity at all.

Ariel: Okay. So potential good news.

Nick: A little bit. Just a little bit.

Fran: Yeah. Nick moved from San Diego to Boston, so I think he’s got a better appreciation of the benefits of warmer wintertime temperatures.

Nick: I do! Although, and this is an important limitation in that study, we didn't really, again, look at adaptation over time. What I found moving to Boston was that I got used to the cold winters much faster than I thought I would coming from San Diego, and I now do go running in the wintertime here, though I thought I would barely be able to go outside. So perhaps that's a positive thing in terms of our ability to adapt on the hotter end as well, and perhaps it undercuts a little bit the degree to which warming during the winter might increase physical activity.

This is a broader and more general point. A lot of these studies — it’s pretty hard to look at long-term adaptation over time because some of the data sets that we have just don’t give us enough span of time to really see people adapt their behaviors within person. So, many of the studies are kind of estimating the direct effect of temperature, for example, on physical activity, and not estimating how much long-term warming has changed people’s physical activity patterns. There are some studies that do that with respect to some outcomes — for example, agricultural yields. But it’s less common to do that with some of the public health-related outcomes and psychological-related outcomes.

Ariel: I want to ask about some of these other studies you’ve done as well, but do you think starting these studies now will help us get more research into this in the future?

Nick: Yeah. I think the more and the better data that we have, the better we’re going to be able to answer some of these questions. For example, the physical activity paper, also we did a sleep paper — the self-report data that we used in those papers are indeed just self-report data. So we’re able to get access to what are called actigraph data, or data that come from monitors like Fitbit and actually track people’s sleep and physical activity. We’re working on those follow-up studies, and the more data that we have and the longer that we have those data, the more we can identify potential adaptation over time.

Ariel: The sleep study was actually where I was going to go next. It seemed nicely connected to the physical activity one. Basically we’ve been told for years to get eight hours of sleep and to try to set the temperatures in our rooms to be cooler so that our quality of sleep is better. But it seems that increasing temperatures from climate change might affect that. So I was hoping you could weigh in on that too.

Nick: Yeah. I think you said it pretty well. The results in that paper basically indicate that higher nighttime temperatures outside, higher ambient temperatures outside, increase the frequency that people report a bad night of sleep. Basically what we say is absent adaptation, climate change might worsen human sleep in the future.

Now, one of the primary ways you adapt, as you just mentioned, is by turning the AC on, keeping it cooler in the room in the summertime, and trying to fight the fact that it’s — as it was in San Diego — it’s 90 degrees and humid at 12:00 AM. The problem with that is that a lot of our electricity grid is currently still on carbon. Until we decarbonize the grid, if we’re using more air conditioning to make it cooler and make it comfortable in our rooms in the summers, we are emitting more carbon.

That poses something else that Fran and I have talked about, and that others are starting to work on: the idea that it's not a one-way street. In other words, if the climate system is changing, and it's changing our behaviors in order to adapt to it, or just frankly changing our behaviors, we are potentially altering the amount of carbon that we put back into the system, creating a positive feedback loop that's driven by humans this time, as opposed to permafrost and things like that. So it's a big, complex equation, and that makes estimating the social cost of carbon all the harder, because it's no longer just a one-way street. If emitting carbon, through the behavioral effects of that carbon, causes the emission of more carbon, then you have a harder-to-estimate function.

Fran: Yeah, you’re right, and it is hard. I often get questions of like, “Oh, is this in the social cost of carbon? Is this?” And usually the answer is no.

Ariel: Yeah. I guess I’ve got another one sort of like that. I mean, I think studies indicate pretty well right now that if you don’t get enough sleep, you’re not as productive at work, and that’s going to cost the economy as well. Is stuff like that also being considered or taken into account?

Fran: I think in general, researchers' idea a few decades ago was very much that there was a very limited set of pathways by which a developed economy could be affected by climate. We could enumerate those, and they were things like agriculture or forestry, and coastlines affected by sea level rise. The newer work being done now, like Nick's papers that we just talked about and a lot of other work, is showing that actually we seem to be very sensitive to temperature on a number of fronts, and that this has quite pervasive economic effects.

Fran: And so, yeah, the sleep question is a huge one, right? If you don't get a good night's sleep, that affects how much you can learn in school the next day; it affects your productivity at work the next day. So we do see evidence that temperature affects labor productivity in developed countries, even in sectors that you'd think should be relatively well insulated, say because the work is being done inside. There's evidence too that high temperatures affect how well students can learn in school and their test scores, and that has a potentially very long term effect on their educational trajectory in life, their ability to accumulate human capital, and their earning potential in the future.

Fran: And so these newer findings, I think, are suggesting that even developed economies are sensitive to climate change in ways that we're only beginning to learn, and pretty much none of that is currently represented in our estimates of the social cost of carbon.

Nick: Yeah, that's a great point. And to add an example to that, I did a study last year in which I looked at government productivity, so government workers' productivity. We had seen a number of these studies, as Fran mentioned, showing that private sector productivity was declining, and I was wondering whether government workers who are tasked with overseeing our safety, especially in times of heat stress and other forms of stress, were themselves affected by heat stress and other forms of environmental stress.

We indeed found that they were: police officers were less likely to stop people in traffic stops, even though there was an increased risk of traffic fatalities, and crime also increases with higher temperatures. We found that food safety inspectors were less likely to do inspections; the probability of an inspection declined as the temperature increased, even though the risk of a violation, conditional on an inspection happening, increased. So it's more likely that there's a food safety problem when it's hot out, but food safety inspectors were less likely to go out and do inspections.

That’s another thing that fits into, “Okay, we’re affected in really complex ways.” Maybe it’s the case that the food safety inspectors were less likely to go do their job because they were really tired because they didn’t sleep well the night before, or perhaps because they were grumpy because it was really hot outside. We don’t know exactly, but these systems are indeed really complicated and probably a lot of things are in play all at once.

Ariel: Another one that you have looked that I think is also important to consider in this whole complex system that’s being impacted by climate change is democratic processes.

Nick: Yeah, yeah. I'm a political scientist by training, and what we political scientists do is think a lot about politics, the democratic process, voting, and turnout. One of the things that we know best in political science is this thing called retrospective voting, or perhaps economic voting: basically the idea that people vote largely based on either how well they individually are doing, or how well they perceive their society is doing, under the current incumbent. So in the US, for example, if the economy is doing well, the incumbent faces better prospects than if the economy is doing poorly. If individuals perceive that they are doing well, the incumbent faces better prospects.

I basically just sat down and thought for a while, and was like, you know, climate change across all these dimensions is likely to worsen both economic well-being, and also just personal, psychological, and physiological well-being. I wonder if it’s the case that it might somewhat disrupt the way that democracies function, and the way that elections function in democracies. For example, if you’re exposed to hotter temperatures there are lots of reasons to suspect that you might perceive yourself as being less well-off — and whoever’s in office, you might just be a little bit less likely to vote for them in the next election.

So I put together a bunch of election results from a variety of countries around the world, a variety of democratic institutions around the world, and looked at the effect of hotter temperatures on the incumbent politicians’ prospects in the upcoming elections: So, what were the effects of the temperatures prior to the election on the electoral success of that incumbent? And what I found was that as you had unusual increases in temperature the year prior to an election, and as those got hotter on the distribution — so hotter places — you saw that the incumbent prospects declined in that election. Incumbent politicians were more likely to get thrown out of office when temperatures were unusually warm, especially in hotter places.
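
(For readers who want to see the shape of this kind of analysis, here is a minimal sketch; the data file, column names, and exact specification are hypothetical stand-ins rather than the actual study.)

```python
# Hypothetical sketch of the kind of panel regression described above:
# regress incumbent vote share on the temperature anomaly in the year
# before each election, interacted with how hot the place is on average,
# with country and election-year fixed effects and errors clustered by country.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: country, election_year, incumbent_vote_share,
# temp_anomaly_prior_year, baseline_temp (long-run average temperature).
elections = pd.read_csv("elections_with_weather.csv")

model = smf.ols(
    "incumbent_vote_share ~ temp_anomaly_prior_year * baseline_temp"
    " + C(country) + C(election_year)",
    data=elections,
).fit(cov_type="cluster", cov_kwds={"groups": elections["country"]})

# A negative interaction coefficient would match the finding: unusually
# warm pre-election years hurt incumbents more in hotter places.
print(model.summary())
```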

And that, as a political scientist, is a little bit troubling because it could be two things. It could be the case that politicians are being thrown out of office because they don’t respond well to the stressors associated with added temperature. For example, if there was a heatwave and it caused some crop losses, maybe those politicians didn’t do a good enough job helping the people who lost those crops. But it also might just be the case that people are grumpier, and they’re not feeling as good, and there’s really no way the politician can respond, or the politician has limited resources and can only respond so much.

And if that’s the driving function then what you see is this exogenous shock leading to an ouster of a democratically elected politician, perhaps not directly related to the performance of that politician. And that can lead to added electoral churn; if you see increased rates of electoral churn where politicians are losing office with increasing frequency, it can shorten the electoral time horizons that politicians have. If they think that every election they stand a real good chance of losing office they may be less likely to pursue policies that have benefits over two or three election cycles. That was the crux of that paper.

Ariel: Fran, did you have anything you wanted to add to that?

Fran: I think it’s a really, really fascinating question. This is one of my favorite of Nick’s papers. We think of these as really fundamental institutions, and when we go to the ballot box and cast our vote, there are a lot of factors that go into that, right? Even the very fact that you can pick up any kind of temperature signal on that is surprising to me, and I think it’s a really important finding. And then trying to pin down these mechanisms I think is interesting for trying to play out the scenarios of how climate change proceeds in terms of changing the political environment in which we’re operating, and having, like Nick said, these potentially long term effects on the types of issues politicians are willing to work on. It’s really important, and I think it’s something that needs more work.

Nick: Fran makes an excellent point embedded in there, which is the understanding of what we call causal mediation. In other words, if you see that hot temperatures lead to a reduction in GDP growth, why is that? What exactly is causing that? GDP growth is this huge aggregate of all of these different things. Why might temperature be causing that? Or even, for example, if you see that temperature is affecting people’s sleep quality, why is that the case? Is it because it’s influencing the degree to which people are stressed out during the day because they’re grumpier, they’re having more negative interactions, and then they’re thinking about that before they fall asleep? Is it due to purely physiological reasons, circadian rhythm and sleep cascades?

The short of it is, we don’t actually have very good answers to most of these questions for most of the climate impacts that we’ve looked at, and it’s pretty critical to have better answers, largely because if you want to adapt to coming climate changes, you’d like to spend your policy money on the things that are most important in those equations for reducing GDP growth or causing mental health outcomes or worsening people’s mood. You’d like to really be able to tell people precisely what they can do to adapt, and also spend money precisely where it’s needed, and it’s just strictly difficult science to be able to do that well.

Ariel: I want to actually go back real quick to something that you had said earlier, too: the idea that if politicians know that they’re unlikely to get elected during the next cycle, they’re also unlikely to plan long term. And I think especially when we’re looking at a situation like climate change, where we need politicians who can plan long term, it seems like this could actually exacerbate our short-term thinking?

Nick: Yeah. That’s what I was concerned about, and still something that I am concerned about. As you get more and more extremes that are occurring more and more regularly and politicians are either responding well or not responding well to those extremes it may be somewhat like our weather and expectations paper — similar underlying psychological dynamics — which is just that people become more and more focused on their recent past, and their recent experience in history, and what’s going on now.

And if that’s the case then if you’re a politician, and you’ve had a bunch of hurricanes, or you’re dealing with the aftermath of hurricanes in your district, really should you be spending your policy efforts on carbon mitigation, or should you be trying to make sure that all of your constituents right now are housed and fed? That’s a little bit of a false dichotomy there, but it isn’t fully a false dichotomy because politicians only have so many resources, and they only have so much time. So as their risk of losing election goes up due to something that is more immediate, politicians will tend to focus on those risks as opposed to longer-term risks.

Ariel: I feel like in that example, too, in defense of the politicians, if you actually have to deal with people who are without homes and without food, that is sort of the higher priority.

Nick: Totally. I mean, I did a bunch of field work in Sub-Saharan Africa for my graduate studies and spent a lot of time in Malawi and South Africa, and talking to politicians there about how they felt about climate change, and specifically climate change mitigation policy. And half the time that I asked them they just looked at me as if I was crazy, and would explicitly say, like, “You must be crazy if you think that we have a time horizon that gives us 20 years to worry about how our people are doing 20 years from now when they can’t feed themselves, and don’t have running water, and don’t have electricity right now. We’re working on the day to day things, the long term perspective just gets thrown out the window.” I think to a lesser degree that operates in every democratic polity.

Fran: This gets back to that question that we were talking about earlier: Are extreme events kind of fundamentally different in motivating action to reduce emissions? And this is exactly the reason why I’m not convinced that it’s the case, in that when you have the repeated extreme events, yes, there’s a lot of focus on rebuilding or restoring or kind of recovering from those events — potentially to the detriment of longer-term, less immediate action that would affect the long-term probability of getting those events in the future, which is reducing emissions.

And so I think it’s a very complex, causal argument to make in the face of a hurricane or a catastrophe that you need to be reducing emissions to address that, right, and that’s why I’m not convinced that just getting more and more disasters is going to automatically lead to more action on climate change. I think it’s actually almost this kind of orthogonal process that generates the political will to do something about climate change.

Having these disasters and operating in this very resource-constrained world — that’s a world in which action on climate change might be less likely, right? Doing things that are quite costly involves a lot of political will and political leadership, and doing that in an environment where people are feeling vulnerable and feeling kind of exposed to natural disasters I think is actually going to be more difficult.

Nick: Yeah. So that’s an excellent point, Fran. I think you could see both things operating, which is I think you could see that people aren’t necessarily adapting their expectations to giant wildfires every single summer, that they realize that something is off and weird about that, but that they just simply can’t direct that attention to doing something about climate change because literally their house just burnt down. So they’re not going to be out in the streets lobbying their politicians as directly because they have more things to worry about. That is troubling to me, too.

Ariel: So that, I think, is a super, super important point, and now I have something new to worry about. It makes sense that the local communities that are being directly impacted by these horrific events have to deal with what’s just happened to them, but do we see an increase in external communities looking at what’s happening and saying, “Oh, we’ve got to stop this, and because we weren’t directly impacted we actually can do something?”

Nick: Anecdotally, somewhat yes. I mean, for example, if you look at the last couple of summers and the wildfire season, when there are big wildfire outbreaks the news media does a better than average job at linking that extreme weather to climate change, and starting to talk about climate change.

So if it is the case that people consume that news media and are now thinking about climate change more, that is good. And I think actually from some of the more recent surveys we’ve actually seen an uptick in awareness about climate change, worry about climate change, and willingness to list it as a top priority. So there are some positive trends on that front.

The bigger question is still an empirical one, though, which is what happens when you have 10 years of wildfires every summer. Maybe people are now not talking about it as much as they did in the very beginning.

Ariel: So I have two final questions for both of you. The first is: is there something that you think is really important for people to know or understand that we didn’t touch on?

Nick: I would say this, and this is maybe more extreme than Fran would say, but we are in really big trouble. We are in really, really big trouble. We are emitting more and faster than we were previously. We are probably dramatically underestimating the social cost of carbon because of all the reasons that we noted here and for many more, and the one thing that I kind of always tell people is don’t be lulled by the relatively banal feeling of your sleep getting disrupted, because if your sleep is disrupted it’s because everything is being disrupted, and it’s going to get worse.

We’ve not seen even a small fraction of the likely total cost of climate change, and so yeah, be worried, and ideally use that worry in a productive way to lobby your politicians to do something about it.

Fran: I would say we talked about the social cost of carbon and the way it’s used, and I think sometimes it does get criticized because we know there’s a lot of things that it doesn’t capture, like what Nick’s been talking about, but I also know that we’re very confident that it’s greater than zero at this point, and substantially greater than zero, right? So the question of, should it be 40 dollars a ton, or should it be 100 dollars a ton, or should it be higher than that, is frankly quite irrelevant when right now we’re really not putting any price on carbon, we’re not doing any kind of ambitious climate policy.

Sometimes I think people get bogged down in these arguments of, is it bad, or is it catastrophic, and frankly either way we should be doing something to reduce our emissions, and they shouldn’t be going up, they should be going down, and we should be doing more than we’re doing right now. And arguing about where we end that process, or when we end that process of reducing our emissions is really not a relevant discussion to be having right now because right now everyone can agree that we need to start the process.

And so I think not getting too hung up on should it be two degrees, should it be 1.5, but just really focused on let’s do more, and let’s do it now, and let’s start that, and see where that gets us, and once we start that process and can begin to learn from it, that’s going to take us a long way to being where we want to be. I think these questions of, “Why aren’t we doing more than we’re doing now?” are the most important and some of the most interesting around climate change right now.

Nick: Yeah. Let’s do everything we can to avoid four or five degrees Celsius, and we can quibble over 1.5 or two later. Totally agree.

Ariel: Okay. So I’m going to actually add a question. So we’ve got two more questions for real this time I think. What do we do? What do you suggest we do? What can a listener right now do to help?

Fran: Vote. Make climate change your priority when you’re thinking about candidates, when you’re engaged in the democratic process, and when you’re talking to your elected representative — reach out to them, and make sure they know that this is the priority for you. And I would also say talk to your friends and family, right? Like these scientists or economists talking about this, that’s not something that’s going to reach everyone, right, but reaching out to your network of people who value your opinion, or just talking about this, and making sure people realize this is a critical issue for our generation, and the decisions we take now are going to shape the future of the planet in very real ways, and collectively we do have agency to do something about it.

Nick: Yes. I second all of that. I think the key is that no one can convince your friends and family that climate change is a threat perhaps better than you, the listener, can. Certainly Fran and I are not going to be able to convince your friends, and that’s just the way that humans work. We trust those that we are close to. So if we want to get a collective movement to start doing something about carbon, it’s going to have to happen via the political process, and it’s also just going to have to happen in our social networks, by actually going out there and talking to people about it. So let’s do that.

Ariel: All right. So final question, now that we’ve gone through all these awful things that are going to happen: what gives you hope?

Fran: If we think about a world that solves this problem, that is a world that has come together to work on a truly global problem. The reason why we’ll solve this problem is because we recognize that we value the future, that we value people living in other countries, people around the world, and that we value nature and nonhuman life on the planet, and that we’ve taken steps to incorporate those values into how we organize our life.

When we think about that, that is a very big ask, right? We shouldn’t underestimate just how difficult this is to do, but we should also recognize that it’s going to be a really amazing world to live in. It’s going to provide a kind of foundation for all kinds of cooperation and collective action I think on other issues to build a better world.

Recognizing that that’s what we’re working towards, these are the values that we want to reflect in our society, and that is a really positive place to be, and a place that is worth working towards — that’s what’s giving me hope.

Nick: That’s a beautiful answer, Fran. I agree with that. It would be a great world to live in. The thing that I would say is giving me hope is actually if I had looked forward in 2010 and said, “Okay, where do I think that renewables are going to be? Where do I think that the electrification of vehicles is going to be?” I would have guessed that we would not be anywhere close to where we are right now on those fronts.

We are making much more progress on getting certain aspects of the economy and our lives decarbonized than I thought we would have been, even without any real carbon policy on those fronts. So that’s pretty hopeful for me. I think that as long as we can continue that trend we won’t have everything go poorly, but I also hesitate to hinge too much of our fate on the hope that technological advances from the past will continue at the same rate into the future. At the end of the day we probably really do need some policy, and we need to get together and engage in collective action to try and solve this problem. I hope that we can.

Ariel: I hope that we can, too. So Nick and Fran, thank you both so much for joining us today.

Nick: Thanks for having me.

Fran: Thanks so much for the interesting conversation.

Ariel: Yeah. I enjoyed this, thank you.

As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

 

AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2)

The space of AI alignment research is highly dynamic, and it’s often difficult to get a bird’s eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going in more depth with regards to the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

Topics discussed in this episode include:

  • Embedded agency
  • The field of “getting AI systems to do what we want”
  • Ambitious value learning
  • Corrigibility, including iterated amplification, debate, and factored cognition
  • AI boxing and impact measures
  • Robustness through verification, adversarial ML, and adversarial examples
  • Interpretability research
  • Comprehensive AI Services
  • Rohin’s relative optimism about the state of AI alignment

You can take a short (3 minute) survey to share your feedback about the podcast here.

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Lucas: Hey everyone, welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today’s episode is the second part of our two part series with Rohin Shah, developing an overview of technical AI alignment efforts. If you haven’t listened to the first part, we highly recommend that you do, as it provides an introduction to the varying approaches discussed here. The second part is focused on exploring AI alignment methodologies in more depth, and nailing down the specifics of the approaches and lenses through which to view the problem.

In this episode, Rohin will begin by moving sequentially through the approaches discussed in the first episode. We’ll start with embedded agency, then discuss the field of getting AI systems to do what we want, and we’ll discuss ambitious value learning alongside this. Next, we’ll move to corrigibility, in particular, iterated amplification, debate, and factored cognition.

Next we’ll discuss placing limits on AI systems, things of this nature would be AI boxing and impact measures. After this we’ll get into robustness which consists of verification, adversarial machine learning, and adversarial examples to name a few.

Next we’ll discuss interpretability research, and finally comprehensive AI services. By listening to the first part of the series, you should have enough context for these materials in the second part. As a bit of an announcement, I’d love for this podcast to be particularly useful and interesting for its listeners, so I’ve gone ahead and drafted a short three minute survey that you can find linked on the FLI page for this podcast, or in the description wherever you might find this podcast. As always, if you find this podcast interesting or useful, please make sure to like, subscribe and follow us on your preferred listening platform.

For those of you that aren’t already familiar with Rohin, he is a fifth year PhD student in computer science at UC Berkeley with the Center for Human Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. With that, we’re going to start off by moving sequentially through the approaches just enumerated. All right. Then let’s go ahead and begin with the first one, which I believe was embedded agency.

Rohin: Yeah, so embedded agency. I kind of want to just defer to the embedded agency sequence, because I’m not going to do anywhere near as good a job as that does. But the basic idea is that we would like to have this sort of theory of intelligence, and one major blocker to this is the fact that all of our current theories, most notably reinforcement learning, make this assumption that there is a nice clean boundary between the environment and the agent. It’s sort of like the agent is playing a video game, and the video game is the environment. There’s no way for the environment to actually affect the agent. The agent has this defined input channel, takes actions, those actions get sent to the video game environment, the video game environment does stuff based on that and creates an observation, and that observation is then sent back to the agent who gets to look at it, and there’s this very nice, clean abstraction there. The agent could be bigger than the video game, in the same way that I’m bigger than tic tac toe.

I can actually simulate the entire game tree of tic tac toe and figure out what the optimal policy for tic tac toe is. There’s actually this cool XKCD that just shows you the entire game tree, it’s great.
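
(To make concrete what “simulating the entire game tree” means, here is a minimal sketch, not from the episode, of exhaustively solving tic tac toe by brute-force minimax; the board encoding is an arbitrary choice.)

```python
# Brute-force minimax over the full tic-tac-toe game tree: the tree is
# small enough that an agent "bigger than the environment" can enumerate
# every reachable position and read off the optimal policy.
def winner(board):  # board is a 9-character string of "X", "O", or " "
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X: +1 win, 0 draw, -1 loss, under perfect play."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full, no winner: draw
    values = [minimax(board[:m] + player + board[m+1:],
                      "O" if player == "X" else "X") for m in moves]
    return max(values) if player == "X" else min(values)

# Perfect play by both sides is a draw, so the empty board has value 0.
print(minimax(" " * 9, "X"))
```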

So in the same way in the video game setting, the agent can be bigger than the video game environment, in that it can have a perfectly accurate model of the environment and know exactly what its actions are going to do. So there are all of these nice assumptions that we get in video game environment land, but in real world land, these don’t work. If you consider me on the Earth, I cannot have an exact model of the entire environment because the environment contains me inside of it, and there is no way that I can have a perfect model of me inside of me. That’s just not a thing that can happen. Not to mention having a perfect model of the rest of the universe, but we’ll leave that aside even.

There’s the fact that it’s not super clear what exactly my action space is. Once there is a laptop available to me, does the laptop start counting as part of my action space? Do we only talk about motor commands I can give to my limbs? But then what happens if I suddenly get uploaded and now I just don’t have any limbs anymore? What happened to my actions, are they gone? So Embedded Agency broadly factors this question out into four sub problems. I associate them with colors, because that’s what Scott and Abram do in their sequence. The red one is decision theory. Normally decision theory is: consider all possible actions, simulate their consequences, choose the one that will lead to the highest expected utility. This is not a thing you can do when you’re an embedded agent, because the environment could depend on what policy you follow.

The classic example of this is Newcomb’s problem, where part of the environment is an all-powerful being, Omega. Omega is able to predict you perfectly, so it knows exactly what you’re going to do, and Omega is 100% trustworthy, and all those nice simplifying assumptions. Omega provides you with the following game. He’s going to put two transparent boxes in front of you. The first box will always contain $1,000, and the second box will either contain a million dollars or nothing, and you can see this because they’re transparent. You’re given the option to either take one of the boxes or both of the boxes, and you just get whatever’s inside of them.

The catch is that Omega only puts the million dollars in the box if he predicts that you would take only the box with the million dollars in it, and not the other box. So now you see the two boxes, and you see that one box has a million dollars, and the other box has a thousand dollars. In that case, should you take both boxes? Or should you just take the box with the million dollars? So the way I’ve set it up right now, it’s logically impossible for you to do anything besides take the million dollars, so maybe you’d say okay, I’m logically required to do this, so maybe that’s not very interesting. But you can relax this to a problem where Omega is 99.999% likely to get the prediction right. Now in some sense you do have agency. You could choose both boxes and it would not be a logical impossibility, and you know, both boxes are there. You can’t change the amounts that are in the boxes now. Man, you should just take both boxes because it’s going to give you $1,000 more. Why would you not do that?

But I claim that the correct thing to do in this situation is to take only one box because the fact that you are the kind of agent who would only take one box is the reason that the one box has a million dollars in it anyway, and if you were the kind of agent that did not take one box, took two boxes instead, you just wouldn’t have seen the million dollars there. So that’s the sort of problem that comes up in embedded decision theory.
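
(One way to make the intuition concrete, setting aside the transparency wrinkle, is the simple expected-value comparison you get if you treat Omega’s prediction as evidence about which kind of agent you are; the numbers below follow the example in the discussion.)

```python
# Expected payoffs when Omega predicts your choice with 99.999% accuracy
# and fills the big box only if it predicts you will one-box.
accuracy = 0.99999
small, big = 1_000, 1_000_000

# If you are a one-boxer, Omega almost certainly filled the big box;
# if you are a two-boxer, it almost certainly left it empty.
ev_one_box = accuracy * big + (1 - accuracy) * 0
ev_two_box = accuracy * small + (1 - accuracy) * (small + big)

print(ev_one_box)  # 999,990.0
print(ev_two_box)  # about 1,010.0
```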

Lucas: Even though it’s a thought experiment, there’s a sense though in which the agent in the thought experiment is embedded in a world where he’s making the observation of boxes that have a million dollars in them, with a genie posing these situations?

Rohin: Yeah.

Lucas: I’m just seeking clarification on the embeddedness of the agent in Newcomb’s problem.

Rohin: The embeddedness is because the environment is able to predict exactly, or with close to perfect accuracy what the agent could do.

Lucas: The genie being the environment?

Rohin: Yeah, Omega is part of the environment. You’ve got you, the agent, and everything else, the environment, and you have to make a good decision. We’ve only been talking about how the boundary between agent and environment isn’t actually all that clear. But to the extent that it’s sensible to talk about you being able to choose between actions, we want some sort of theory for how to do that when the environment can contain copies of you. So you could think of Omega as simulating a copy of you and seeing what you would do in this situation before actually presenting you with a choice.

So we’ve got the red decision theory, then we have yellow embedded world models. With embedded world models, the problem that you have is that, normally in our nice video game environment, we can have an exact model of how the environment is going to respond to our actions. Even if we don’t know it initially, we can learn it over time, and then once we have it, it’s pretty easy to see how you could plan in order to do the optimal thing. You can try all of your actions, simulate them all, and then see which one does the best and do that one. This is roughly how AIXI works. AIXI is the model of the optimally intelligent RL agent in these sorts of video game environment-like settings.

Once you’re in embedded agency land, you cannot have an exact model of the environment, because for one thing the environment contains you and you can’t have an exact model of you, but also the environment is large, and you can’t simulate it exactly. The big issue is that it contains you. So how you get any sort of sensible guarantees on what you can do, even though the environment can contain you, is the problem of embedded world models. You still need a world model. It can’t be exact because it contains you. Maybe you could do something hierarchical where things are fuzzy at the top, but then you can focus in on each particular level of the hierarchy in order to get more and more precise about each particular thing. Maybe this is sufficient? Not clear.

Lucas: So in terms of human beings though, we’re embedded agents that are capable of creating robust world models that are able to think about AI alignment.

Rohin: Yup, but we don’t know how we do it.

Lucas: Okay. Are there any sorts of understandings that we can draw from our experience?

Rohin: Oh yeah, I’m sure there are. There’s a ton of work on this that I’m not that familiar with, probably in cognitive science or psychology or neuroscience; all of these fields I’m sure have something to say about it. Hierarchical world models in particular are pretty commonly talked about as interesting. I know that there’s a whole field of hierarchical reinforcement learning in AI that’s motivated by this, but I believe it’s also talked about in other areas of academia, and I’m sure there are other insights to be gotten from there as well.

Lucas: All right, let’s move on then from hierarchical world models.

Rohin: Okay. Next is blue robust delegation. So with robust delegation, the basic issue here, so we talked about Vingean reflection a little bit in the first podcast. This is a problem that falls under robust delegation. The headline difficulty under robust delegation is that the agent is able to do self improvement, it can reason about itself and do things based on that. So one way you can think of this is that instead of thinking about it as self modification, you can think about it as the agent is constructing a new agent to act at future time steps. So then in that case your agent has the problem of how do I construct an agent for future time steps such that I am happy delegating my decision making to that future agent? That’s why it’s called robust delegation. Vingean reflection in particular is about how can you take an AI system that uses a particular logical theory in order to make inferences and have it move to a stronger logical theory, and actually trust the stronger logical theory to only make correct inferences?

Stated this way, the problem is impossible: it’s a well known result in logic that a theory cannot prove the consistency of even itself, and as a corollary cannot prove the consistency of any stronger theory. Intuitively, in this pretty simple example, we don’t know how to get an agent that can trust a smarter version of itself. You should expect this problem to be hard, right? It’s in some sense dual to the problem that we have of AI alignment, where we’re creating something smarter than us, and we need it to pursue the things we want it to pursue, but it’s a lot smarter than us, so it’s hard to tell what it’s going to do.

So I think of this as a version of the AI alignment problem, but applied to the case of an embedded agent reasoning about itself, and making a better version of itself in the future. So I guess we can move on to the green section, which is subsystem alignment. The tagline for subsystem alignment would be that the embedded agent is going to be made out of parts. It’s not this sort of unified coherent object. It’s got different pieces inside of it because it’s embedded in the environment, and the environment is made of pieces that make up the agent, and it seems likely that your AI system is going to be made up of different cognitive sub parts, and it’s not clear that those sub parts will integrate together into a unified whole such that the unified whole is pursuing a goal that you like.

It could be that each individual sub part has its own goal and they’re all competing with each other in order to further their own goals, and that the aggregate overall behavior is usually good for humans, at least in our current environment. But as the environment changes, which it will due to technological progression, one of the parts might just win out and be optimizing some goal that is not anywhere close to what we wanted. A more concrete example: one way that you could imagine building a powerful AI system is to have a world model that is rewarded for making accurate predictions about what the world will look like, and then you have a decision making model, which has a normal reward function that we program in, and tries to choose actions in order to maximize that reward. So now we have an agent that has two sub systems in it.

You might worry for example that once the world model gets sufficiently powerful, it starts realizing that the decision making thing is depending on its output in order to make decisions: “I can trick it into making the world easier to predict. So maybe I give it some models of the world that say make everything look red, or make everything black, and then you will get high reward somehow.” Then if the agent actually takes that action and makes everything black, and now everything looks black forever more, then the world model can very easily predict: yeah, no matter what action you take, the world is just going to look black. That’s what the world is now, and that gets the highest possible reward. That’s a somewhat weird story for what could happen. But there’s no real strong argument that says nope, this will definitely not happen.

Lucas: So in total sort of, what is the work that has been done here on inner optimizers?

Rohin: Clarifying that they could exist. I’m not sure if there has been much work on it.

Lucas: Okay. So this is our fourth cornerstone here in this embedded agency framework, correct?

Rohin: Yup, and that is the last one.

Lucas: So surmising these all together, where does that leave us?

Rohin: So I think my main takeaway is that I am much more strongly agreeing with MIRI that yup, we are confused about how intelligence works. That’s probably it, that we are confused about how intelligence works.

Lucas: What is this picture that I guess is conventionally held of what intelligence is that is wrong? Or confused?

Rohin: I don’t think there’s a thing that’s wrong about the conventional picture. So you could talk about a definition of intelligence, of being able to achieve arbitrary goals. I think Eliezer says something like cross domain optimization power, and I think that seems broadly fine. It’s more that we don’t know how intelligence is actually implemented, and I don’t think we ever claimed to know that, but embedded agency is like, we really don’t know it. You might’ve thought that we were making progress on figuring out how intelligence might be implemented with classical decision theory, or the Von Neumann–Morgenstern utility theorem, or results like the value of perfect information always being non negative.

You might’ve thought that we were making progress on it, even if we didn’t fully understand it yet, and then you read embedded agency and you’re like no, actually there are lots more conceptual problems that we have not even begun to touch yet. Well, MIRI has begun to touch them I would say, but we really don’t have good stories for how any of these things work. Classically we just don’t have a description of how intelligence works. MIRI’s like: even the small threads of things we thought about how intelligence could work are definitely not the full picture, and there are problems with them.

Lucas: Yeah, I mean just on simple reflection, it seems to me that in terms of the more confused conception of intelligence, it sort of models it more naively as we were discussing before, like the simple agent playing a computer game with these well defined channels going into the computer game environment.

Rohin: Yeah, you could think of AIXI for example as a model of how intelligence could work theoretically. The sequence is like no, this is why I see it as not a sufficient theoretical model.

Lucas: Yeah, I definitely think that it provides an important conceptual shift. So we have these four cornerstones, and it’s illuminating in this way. Are there any more conclusions or wrap up you’d like to do on embedded agency before we move on?

Rohin: Maybe I just want to add a disclaimer that MIRI is notoriously hard to understand and I don’t think this is different for me. It’s quite plausible that there is a lot of work that MIRI has done, and a lot of progress that MIRI has made, that I either don’t know about or know about but don’t properly understand. So I know I’ve been saying I want to defer to people a lot, or I want to be uncertain a lot, but on MIRI I especially want to do so.

Lucas: All right, so let’s move on to the next one within this list.

Rohin: The next one was doing what humans want. How do I summarize that? I read a whole sequence of posts on it. I guess the story for success, to the extent that we have one right now is something like use all of the techniques that we’re developing, or at least the insights from them, if not the particular algorithms to create an AI system that behaves corrigibly. In the sense that it is trying to help us achieve our goals. You might be hopeful about this because we’re creating a bunch of algorithms for it to properly infer our goals and then pursue them, so this seems like a thing that could be done. Now, I don’t think we have a good story for how that happens. I think there are several open problems that show that our current algorithms are insufficient to do this. But it seems plausible that with more research we could get to something like that.

There’s not really a good overall summary of the field because it’s more like a bunch of people separately having a bunch of interesting ideas and insights, and I mentioned a bunch of them in the first part of the podcast already. Mostly because I’m excited about these and I’ve read about them recently, so I just sort of start talking about them whenever they seem even remotely relevant. But to reiterate them, there is the notion of analyzing the human AI system together as pursuing some sort of goal, or being collectively rational, as opposed to having an individual AI system that is individually rational. So that’s been somewhat formalized in Cooperative Inverse Reinforcement Learning. Typically with inverse reinforcement learning, so not the cooperative kind, you have a human, the human is sort of exogenous, the AI doesn’t know that they exist, and the human creates a demonstration of the sort of behavior that they want the AI to do. If you’re thinking about robotics, it’s picking up a coffee cup, or something like this. Then the robot just sort of sees this demonstration; it comes out of thin air, it’s just data that the robot gets.

The robot then asks: let’s say that I had executed this demonstration, what reward function would I have been optimizing? It figures out a reward function, and then it uses that reward function however it wants. Usually you would then use reinforcement learning to optimize that reward function and recreate the behavior. So that’s normal inverse reinforcement learning. Notable here is that you’re not considering the human and the robot together as a full collective system. The human is sort of exogenous to the problem, and also notable is that the robot is taking the reward to be something that it has, as opposed to something that the human has.

So CIRL basically says, no, no, no, let’s not model it this way. The correct thing to do is to have a two player game that’s cooperative between the human and the robot, and now the human knows the reward function and is going to take actions somehow. They don’t necessarily have to be demonstrations. But the human knows the reward function and will be taking actions. The robot on the other hand does not know the reward function, and it also gets to take actions, and the robot keeps a probability distribution over the reward that the human has, and updates this overtime based on what the human does.

Once you have this, you get this sort of nice, interactive behavior where the human is taking actions that teach the robot about the reward function. The robot learns the reward function over time and then starts helping the human achieve his or her goals. This sort of teaching and learning behavior comes simply under the assumption that the human and the robot are both playing the game optimally, such that the reward function gets optimized as well as possible. So you get this sort of teaching and learning behavior from the normal notion of optimizing a particular objective, just from having the objective be a thing that the human knows, but not a thing that the robot knows. One thing that was one of the key aspects of CIRL, though I don’t know if CIRL introduced it, was having a probability distribution over reward functions, so you’re uncertain about what reward you’re optimizing.
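
(A minimal sketch of that core mechanism follows; it is not the CIRL algorithm from the paper, just a toy Bayesian update in which the robot watches a noisily rational human and sharpens its distribution over candidate reward functions. The reward table, prior, and rationality model are all made-up stand-ins.)

```python
import numpy as np

def boltzmann(values, beta=5.0):
    # Model of a noisily rational human: better actions are exponentially
    # more likely to be chosen.
    z = np.exp(beta * (values - np.max(values)))
    return z / z.sum()

# Hypothetical setup: 3 candidate reward functions over 4 possible actions;
# reward_table[r][a] is the reward of action a under candidate r.
reward_table = np.array([
    [1.0, 0.0, 0.0, 0.2],
    [0.0, 1.0, 0.0, 0.2],
    [0.0, 0.0, 1.0, 0.2],
])
belief = np.ones(3) / 3  # uniform prior over which candidate is the true reward

def update_belief(belief, human_action):
    # Bayes rule: weight each candidate reward by how likely the observed
    # human action would be if that candidate were the true reward.
    likelihood = np.array([boltzmann(r)[human_action] for r in reward_table])
    posterior = belief * likelihood
    return posterior / posterior.sum()

# If the human keeps taking action 1, the robot's belief concentrates on
# the second candidate, and it can start helping optimize that reward.
for _ in range(5):
    belief = update_belief(belief, human_action=1)
print(belief)
```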

This seems to give a bunch of nice properties. In particular, once the human starts taking actions like trying to shut down the robot, then the robot’s going to think: okay, if I knew the correct reward function, I would be helping the human, and given that the human is trying to turn me off, I must be wrong about the reward function, I’m not helping, so I should actually just let the human turn me off, because that’s what would achieve the most reward for the human. So you no longer have this incentive to disable your shutdown button in order to keep optimizing. Now this isn’t exactly right, because better than both of those options is to disable the shutdown button, stop doing whatever it is you were doing because it was clearly bad, and then just observe humans for a while until you can narrow down what their reward function actually is, and then you go and optimize that reward, and behave like a traditional goal directed agent. This sounds bad. It doesn’t actually seem that bad to me under the assumption that the true reward function is a possibility that the robot is considering and has a reasonable amount of support in the prior.

Because in that case, once the AI system eventually narrows down on the reward function, it will be either the true reward function, or a reward function that’s basically indistinguishable from it, because otherwise there would be some other information that it could gather in order to distinguish between them. So you actually would get good outcomes. Now of course in practice it seems likely that we would not be able to specify the space of reward functions well enough for this to work. I’m not sure about that point. Regardless, it seems like there’s been some sort of conceptual advance here: when the AI is trying to do something for the human, it doesn’t have the incentive to disable the shutdown button, the survival incentive.

So while maybe reward uncertainty is not exactly the right way to do it, it seems like you could do something analogous that doesn’t have the problems that reward uncertainty does.

One other thing that’s kind of in this vein, but a little bit different, is the idea of an AI system that infers and follows human norms, and the reason we might be optimistic about this is because humans seem to be able to infer and follow norms pretty well. I don’t think humans can infer the values that some other human is trying to pursue and then optimize them to lead to good outcomes. We can do that to some extent. Like I can infer that someone is trying to move a cabinet, and then I can go help them move that cabinet. But in terms of their long term values or something, it seems pretty hard to infer and help with those. But norms we do in fact infer and follow all the time. So we might think that’s an easier problem, such that our AI systems could do it as well.

Then the story for success is basically that with these AI systems, we are able to accelerate technological progress as before, but the AI systems behave in a relatively human like manner. They don’t do really crazy things that a human wouldn’t do, because that would be against our norms. With that accelerating technological progress, we get to the point where we can colonize space, or whatever else it is you want to do with the future. Perhaps even along the way we do enough AI alignment research to build an actual aligned superintelligence.

There are problems with this idea. Most notably, if you accelerate technological progress, bad things can happen from that, and norm following AI systems would not necessarily stop that from happening. Also, to the extent that you think human society, if left to its own devices, would lead to something bad or catastrophic happening in the future, then a norm following AI system would probably just make that worse, in that it would accelerate that disaster scenario without really making it any better.

Lucas: AI systems in a vacuum that are simply norm following seem to have some issues, but it seems like an important tool in the toolkit of AI alignment to have AIs which are capable of modeling and following norms.

Rohin: Yup. That seems right. Definitely agree with that. I don’t think I had mentioned the reference on this. So for this one I would recommend people look at Incomplete Contracting and AI Alignment I believe is the name of the paper by Dylan Hadfield-Menell, and Gillian Hadfield, or also my post about it in the Value Learning Sequence.

So far I’ve been talking about sort of high level conceptual things within the ‘get AI systems to do what we want’ category. There are also a bunch of more concrete technical approaches. There’s inverse reinforcement learning, and there’s deep reinforcement learning from human preferences, where you basically get a bunch of comparisons of behavior from humans, and use that to infer a reward function that your agent can optimize. There’s recursive reward modeling, where you take the task that you are trying to do, and then you consider a new auxiliary task of evaluating your original task. So maybe if you wanted to train an AI system to write fantasy books, well, if you were to give human feedback on that, it would be quite expensive because you’d have to read the entire fantasy book and then give feedback. But maybe you could instead outsource the task of evaluating fantasy books: you could recursively apply this technique and train a bunch of agents that can summarize the plot of a book or comment on the prose of the book, or give a one page summary of the character development.

Then you can use all of these AI systems to help you give feedback on the original AI system that’s trying to write a fantasy book. So that’s recursive reward modeling. I guess going a bit back into the conceptual territory, I wrote a paper recently on learning preferences from the state of the world. So the intuition there is that the AI systems that we create aren’t just being created into a brand new world. They’re being instantiated in a world where we have already been acting for a long time. So the world is already optimized for our preferences, and as a result, our AI systems can just look at the world and infer quite a lot about our preferences. So we gave an algorithm that does this in some toy environments.
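
(Both deep RL from human preferences and recursive reward modeling lean on the same core step: fitting a reward model to human comparisons. Here is a minimal sketch of that step, not the original implementations; the network size, trajectories, and training loop are hypothetical.)

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Predicts a per-step reward from an observation; a trajectory's
    return is the sum of predicted rewards over its steps."""
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def trajectory_return(self, traj):  # traj: tensor of shape (T, obs_dim)
        return self.net(traj).sum()

def preference_loss(model, traj_a, traj_b, human_prefers_a):
    # Bradley-Terry style model: P(A preferred) = sigmoid(R(A) - R(B)),
    # trained with cross-entropy against the human's comparison label.
    logit = model.trajectory_return(traj_a) - model.trajectory_return(traj_b)
    label = torch.tensor(1.0 if human_prefers_a else 0.0)
    return nn.functional.binary_cross_entropy_with_logits(logit, label)

# Hypothetical usage with random stand-in trajectories; in practice the
# learned reward would then be optimized with reinforcement learning.
model = RewardModel(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj_a, traj_b = torch.randn(20, 8), torch.randn(20, 8)
loss = preference_loss(model, traj_a, traj_b, human_prefers_a=True)
opt.zero_grad(); loss.backward(); opt.step()
```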

Lucas: Right, so again, this covers the conceptual category of methodologies of AI alignment where we’re trying to get AI systems to do what we want?

Rohin: Yeah, current AI systems in a sort of incremental way, without assuming general intelligence.

Lucas: And there’s all these different methodologies which exist in this context. But again, this is all sort of within this other umbrella of just getting AI to do things we want them to do?

Rohin: Yeah, and you can actually compare across all of these methods on particular environments. This hasn’t really been done so far, but in theory it can be done, and I’m hoping to do it at some point in the future.

Lucas: Okay. So we’ve discussed embedded agency, we’ve discussed this other category of getting AIs to do what we want them to do. Just moving forward here through diving deep on these approaches.

Rohin: I think the next one I wanted to talk about was ambitious value learning. So here the basic idea is that we’re going to build a superintelligent AI system, and it’s going to have goals, because that’s what the Von Neumann–Morgenstern theorem tells us: anything with preferences, if they’re consistent and coherent, which they should be for a superintelligent system, or at least as far as we can tell they should be consistent, has a utility function. So natural thought: why don’t we just figure out what the right utility function is, and put it into the AI system?

So there are a lot of good arguments that you’re not going to be able to get the one correct utility function, but I think Stuart’s hope is that you can find one that is sufficiently good or adequate, and put that inside of the AI system. In order to do this, I believe the goal is to learn the utility function by looking at both human behavior as well as the algorithm that human brains are implementing. So if you see that the human brain, when it knows that something is going to be sweet, tends to eat more of it, then you can infer that humans like to eat sweet things, as opposed to inferring that humans really dislike eating sweet things but are really bad at optimizing their utility function. In this project of ambitious value learning, you also need to deal with the fact that human preferences can be inconsistent, and that the AI system can manipulate the human preferences. The classic example of that would be the AI system could give you a shot of heroin, and that would probably change your preferences from I do not want heroin to I do want heroin. So what does it even mean to optimize for human preferences when they can just be changed like that?

So I think the next one was corrigibility and the associated iterated amplification and debate basically. I guess factored cognition as well. To give a very quick recap, the idea with corrigibility is that we would like to build an AI system that is trying to help us, and that’s the property that we should aim for as opposed to an AI system that actually helps us.

One motivation for focusing on this weaker criterion is that it seems quite difficult to create a system that knowably actually helps us, because that means that you need to have confidence that your AI system is never going to make mistakes. It seems like quite a difficult property to guarantee. In addition, if you don’t make some assumption on the environment, then there’s a no free lunch theorem that says this is impossible. Now it’s probably reasonable to put some assumption on the environment, but it’s still true that your AI system could have reasonable beliefs based on past experience, and nature still throws it a curve ball, and that leads to some sort of bad outcome happening.

While we would like this to not happen, it also seems hard to avoid, and also probably not that bad. It seems like the worst outcomes come when your superintelligent system is applying all of its intelligence in pursuit of its own goal. That’s the thing that we should really focus on. That conception of what we want to enforce is probably the thing that I’m most excited about. Then there are particular algorithms that are meant to create corrigible agents, assuming we have the capabilities to get general intelligence. So one of these is iterated amplification.

Iterated amplification is really more of a framework to describe particular methods of training systems. In particular, you alternate between amplification and distillation steps. You start off with an agent that we’re going to assume is already aligned. So this could be a human. A human is a pretty slow agent. So the first thing we’re going to do is distill the human down into a fast agent. We could use something like imitation learning, or maybe inverse reinforcement learning followed by reinforcement learning, or something like that, in order to train a neural net or some other AI system that mostly replicates the behavior of our human, and remains aligned. By aligned maybe I mean corrigible, actually. We start with a corrigible agent, and then we produce agents that continue to be corrigible.

Probably the resulting agent is going to be a little less capable than the one that you started out with, just because if the best you can do is to mimic the agent that you started with, that gives you exactly as much capability as that agent. So if you don’t succeed at properly mimicking, then you’re going to be a little less capable. Then you take this fast agent and you amplify it, such that it becomes a lot more capable, at perhaps the cost of being a lot slower to compute.

One way that you could imagine doing amplification would be to have a human get a top level task, and for now we’ll assume that the task is question answering, so they get this top level question and they say: okay, I could answer this question directly, but let me make use of this fast agent that we have from the last turn. We’ll make a bunch of sub questions that seem relevant for answering the overall question, and ask our distilled agent to answer all of those sub questions, and then using those answers, the human can then make a decision for their top level question. It doesn’t have to be the human. You could also have a distilled agent at the top level if you want.

I think having the human there seems more likely. So with this amplification you’re basically using the agent multiple times, letting it reason for longer in order to get a better result. So the resulting human-plus-many-copies-of-the-agent system is more capable than the original distilled agent, but also slower. So we started off with something, let’s call it capability level five, and then we distilled it and it became capability level four, but it was a lot faster. Then we amplified it and maybe now it’s capability level eight, but it’s a lot slower. So we can distill it again and get something at capability level seven that’s pretty fast, and then amplify it again, and so on and so forth. So the hope is that this would allow us to continually train an agent that can reach arbitrary levels of capability that are actually physically possible, while remaining aligned or corrigible the entire time.
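
(Here is a heavily simplified sketch of that alternating structure; the “human”, the decomposition strategy of splitting a summation in half, and the no-op “distillation” are toy stand-ins for an actual human and a learned model.)

```python
class Human:
    # Toy stand-in for the human: decomposes "sum 1 2 3 4" style questions.
    def decompose(self, question):
        nums = question.split()[1:]
        mid = len(nums) // 2
        return ["sum " + " ".join(nums[:mid]), "sum " + " ".join(nums[mid:])]

    def combine(self, question, sub_answers):
        return sum(sub_answers)

    def answer(self, question):
        return sum(int(n) for n in question.split()[1:])

def distill(slow_agent):
    # Stand-in for training a fast model to imitate slow_agent (e.g. by
    # imitation learning); here it just wraps the slow agent directly.
    return lambda question: slow_agent(question)

def amplify(human, fast_agent):
    # The human answers a question by delegating sub-questions to copies
    # of the fast agent and combining their answers.
    def amplified(question):
        subs = human.decompose(question)
        return human.combine(question, [fast_agent(q) for q in subs])
    return amplified

human = Human()
agent = human.answer                 # round 0: just the (slow) human
for _ in range(3):                   # alternate distillation and amplification
    agent = amplify(human, distill(agent))
print(agent("sum 1 2 3 4 5 6 7 8"))  # 36
```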

Lucas: Yeah, that’s quite a great picture. I’m happy to jump into debate next, and then I’d like to talk about the potential relationship between the two.

Rohin: Sure. So with debate, the key hope that it is based on is that it is easier to point out a lie than it is to lie in the first place. This would be under the assumption that if you’ve got two agents, one agent trying to lie and the other agent trying to detect when the first agent is lying, you would need the second agent to have access to all of the information that the first agent is using to make this decision. But if that can happen, then we would hope that it is easier to detect lies than it is to actually lie. If this is actually true, we can imagine the following training process. We give a top level question, and then we have two AI systems debate what the correct answer to that top level question is. One of the agents initially says the answer is 12, then the other agent says no, the answer is 15, then they start arguing about why the answer should be 12 versus 15.

Eventually, after some amount of time, a human is asked to judge this debate and say which of the AI systems provided more true and useful information. Whichever agent the human decided was correct gets a reward of plus one, and the other one gets a reward of minus one, and is then updated, presumably by gradient descent or something. So the hope here is basically that the two AI systems that are debating each other are about equally powerful, such that if any one of them tries to lie or manipulate the user in some way, the other AI system would be able to point that out, and then the human would give the reward to the AI system that pointed it out. So at equilibrium you should expect that both of the agents are just trying to give you true and useful information, and they’re not trying to lie, because if they do try to lie, then the other one will point it out.
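
(A minimal sketch of that reward structure follows; the agents and the judge are hypothetical objects passed in by the caller, and a real setup would feed the zero-sum rewards into an RL update such as policy gradient.)

```python
def run_debate(question, agent_a, agent_b, human_judge, num_turns=4):
    """Two agents take turns arguing; a human judge picks the agent that
    gave more true and useful information; winner gets +1, loser -1."""
    transcript = [("question", question)]
    for turn in range(num_turns):
        name, agent = ("A", agent_a) if turn % 2 == 0 else ("B", agent_b)
        # agent.argue and human_judge are assumed interfaces, not a real API.
        statement = agent.argue(question, transcript)
        transcript.append((name, statement))
    winner = human_judge(question, transcript)  # returns "A" or "B"
    reward_a = 1.0 if winner == "A" else -1.0
    return transcript, {"A": reward_a, "B": -reward_a}
```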

Lucas: Right. So there’s this important feature here where, as you mentioned, the claim is that it’s easier to spot lies than to make lies, and this sort of asymmetry is one of the motivations which says that generally it’ll be easier to tell when agents are telling the truth rather than making up a lie.

Rohin: As long as you have another AI system that can point this out. Certainly a superintelligent AI system could lie to me and I wouldn’t be able to tell, probably, but it’s a lot harder for a superintelligent AI system to lie to me when I have another superintelligent AI system that’s trying to point out lies that the first one makes.

Lucas: Right. So now I think we can go ahead and cover its relationship to iterated amplification?

Rohin: Sure. There is actually quite a close relationship between the two, even though it doesn’t seem like it at first sight. The hope with both of them is that your AI systems will learn to do human-like reasoning, but on a much larger scale than humans can do. In particular, consider the following kind of agent. You have a human who is given a top level question that they have to answer, and that human can create a bunch of sub questions and then delegate each of those sub questions to another copy of the same human, initialized from scratch or something like that so they don’t know what the top level human has thought.

Then they have to answer the sub question, but they too can delegate to another human further down the line. And so on: you can just keep delegating down until you get questions that are so easy that the human can just straight up answer them. So I’m going to call this structure a deliberation tree, because it’s a sort of tree of considerations such that, at every node, the answer to that node can be computed from the answers to its children plus a short bit of human reasoning that happens at that node.

In iterated amplification, what’s basically happening is you start with the leaf nodes, the human agent. There’s just a human agent, and they can answer questions quickly. Then when you amplify it the first time, you get trees of depth one, where at the top level there’s a human who can then delegate sub questions out, but those sub questions have to be answered by an agent that was trained to be like a human. So you’ve got something that approximates depth one human deliberation trees. Then after another round of distillation and amplification, you’ve got a human delegating to agents that were trained to mimic humans that could delegate to agents that were trained to mimic humans: an approximate version of a depth two deliberation tree.

So iterated amplification is basically just building up the depth of the tree that the agent is approximating. But we hope that these deliberation trees are always just basically implementing corrigible reasoning, and that eventually once they get deep enough, you get arbitrarily strong capabilities.
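Here is a small recursive sketch of these deliberation trees, under the assumption that a depth-zero node is an unaided short burst of human reasoning and a depth-d node may delegate subquestions to depth-(d-1) copies; the helper functions are illustrative stand-ins for what the human at each node would actually do.

```python
# Minimal sketch of a deliberation tree: a depth-0 node is a plain short-burst
# human; a depth-d node is a human who may delegate to depth-(d-1) copies.

def human_step(question, subanswers=None):
    # Stand-in for ~10 minutes of human reasoning at one node.
    return f"answer({question}, using {subanswers})"

def choose_subquestions(question):
    # Placeholder: the human decides whether and how to decompose the question.
    if len(question) < 20:
        return []
    mid = len(question) // 2
    return [question[:mid], question[mid:]]

def deliberate(question, depth):
    if depth == 0:
        return human_step(question)                 # leaf: answer directly
    subquestions = choose_subquestions(question)
    subanswers = [deliberate(q, depth - 1) for q in subquestions]
    return human_step(question, subanswers)         # combine children's answers

# Iterated amplification approximates deliberate(q, d) for ever-larger d;
# debate instead has a human check a single root-to-leaf path of such a tree.
```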

Lucas: Can you offer some clarification as to why one might expect a human plus a group of copies of an AI to be able to make sure that alignment is retained during distillation?

Rohin: That’s an open problem. Whether you can do a distillation step that preserves alignment or corrigibility is something Paul discusses in a few recent posts in the Iterated Amplification sequence; he calls it the reward engineering problem. The hope is that if you believe that the amplified agent is corrigible, then it is going to be smarter than the agent that it is trying to train via distillation. So you can actually just use the amplified agent to create a reward signal in order to train an agent during distillation. Since the amplified agent is smarter than the agent you’re distilling, you could plausibly create a reward function that’s not easily gameable, and that actually gets the AI system to do what you want. I think that’s the concise, nuance-lacking story of why you might be optimistic about this.

Lucas: All right.

Rohin: So I’ll move on to how debate is also related to this. So we talked about how iterated amplification is basically like growing the depth of the deliberation trees that the agent is approximating. The human part of this is judging any one node and its children. In debate, on the other hand, you can imagine the same sort of deliberation tree, although now the nodes are more like arguments and counterarguments as opposed to considerations and counter-considerations. But it’s broadly the same thing. So imagine there’s this actual debate tree of ways the debate could possibly go.

Then you could think of the AI systems as choosing a particular path in the debate tree that makes them most likely to win. The key point is that, given that the entire question can be resolved by an exponentially sized deliberation tree, if the two AI systems are capable of computing this exponential deliberation tree, then optimal play in the debate game is to go along the path that is going to lead to your victory, even given that the other player is trying to win themselves. The relation between iterated amplification and debate is that they both want the agents to implicitly be able to compute this exponentially sized deliberation tree that humans could not compute, and then use humans to check a particular part of that tree. In iterated amplification you check a parent and its children: you look at that one section of the tree and make sure that it looks good. In debate you look at a particular path in the debate tree and judge whether that path is good. One critique of these methods is that it’s not actually clear that an exponentially sized deliberation tree is able to solve all the problems that we might care about, especially if the amount of work done at each node is pretty short, like a ten-minute stint from a normal human.

One question that you would care about if you wanted to see whether iterated amplification could work is: can these exponentially sized deliberation trees actually solve hard problems? This is the factored cognition hypothesis: that these deliberation trees can in fact solve arbitrarily complex tasks. And Ought is basically working on testing this hypothesis to see whether or not it’s true. It’s about finding the tasks which seem hardest to do in this decompositional way, and then seeing if teams of humans can actually figure out how to do them.

Lucas: Do you have an example of what would be one of these tasks that are difficult to decompose?

Rohin: Yeah. Take a bunch of humans who don’t know differential geometry or something, and have them solve the last problem in a textbook on differential geometry. They each only get ten minutes in order to do anything. None of them can read the entire textbook. Because that takes way more than ten minutes. I believe Ought is maybe not looking into that one in particular, that one sounds extremely hard, but they might be doing similar things with books of literature. Like trying to answer questions about a book that no one has actually read.

But I remember that Andreas was actually talking about this particular problem that I mentioned as well. I don’t know if they actually decided to do it.

Lucas: Right. So I mean just generally in this area here, it seems like there are these interesting open questions and considerations about I guess just the general epistemic efficacy of debate. And how good AI and human systems will be at debate, and again also as you just pointed out, whether or not arbitrarily difficult tasks can be solved through this decompositional process. I mean obviously we do have proofs for much simpler things. Why is there a question as to whether or not it would scale? How would it eventually break?

Rohin: With iterated amplification in particular, if you’re starting with humans who have only ten minutes to look at resources and come up with an answer, the particular thing I would say they might not be able to do is take a math textbook that the human did not know already and solve the final problem in it. Iterated amplification, to the extent that it starts with a human who’s limited to ten minutes, and whenever it does a decomposition, does it with a human who’s limited to ten minutes, would inherit all of the same problems if those problems are real problems.

Lucas: Cool, that’s interesting. I don’t have anything to add here then. So happy to keep on moving.

Rohin: I think I want to talk about limited AI systems next. So with AI boxing, the hope is to create a box around an AI system that prevents anything from inside the box from going out, and anything from outside the box from going in, except particular input/output channels that we explicitly allow. So the hope here is that this allows us to make assumptions about the input/output channels that actually will hold in practice, and can be used to ensure that AIs function more like tool AIs or something, rather than agential AIs that will do bad things.

I think most literature on boxing has had an AI system in a box, and a human operator outside of it. One interesting proposal I’ve heard recently is to have the human also be inside the box, because if you’ve got information flowing from an AI to a human, that’s a pretty big hole in your box. But if the human is actually inside the box, then there’s not as much that can be done. There’s not as much information flowing into or out of the box.

Lucas: What does it actually mean to have the human locked in the box?

Rohin: One proposal would be to have a human go inside a secured room from which no information can enter or leave, apart from the fact of whether or not the human has entered or left the room. The human can then interact with the AI as normal. As soon as the human leaves the room, the AI’s reward function is zero. It basically just doesn’t care about anything that happens in the future after that. So in this setting you would hope that the AI system has no incentive to affect the outside world, because the only way it can affect the outside world is by sending the human outside. But as soon as it sends the human outside, its reward gets set to zero forever, so it doesn’t care about any of that.

Lucas: But wouldn’t it care before the human went out?

Rohin: Yes, but before the human goes out, there is nothing it can do to influence the outside world because both the human and the AI are in the box. It could influence the human while the human is in the box. But the human also can’t affect the outside world while it’s in the box.

Lucas: Yeah, but the human leaves the box and then can affect the world.

Rohin: Right. But any change to the world that happens after the human leaves, doesn’t make it into the reward function.

Lucas: But it does because the AI system would be planning over time from its initial moment where the reward isn’t set to zero.

Rohin: But your reward can be limited to the next K time steps, or something like this. Basically you’re limited to the point at which the human leaves. That’s the proposal here.

Lucas: I see, okay. That makes sense.

Rohin: The only information leaving the box, assuming the box is secured in all the normal ways, is whether or not the human has chosen to leave. That’s pretty low information; it doesn’t seem like an AI system would be able to use that to influence the outside world in some ridiculous way. So that’s boxing.
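As a toy illustration of the “human in the box” reward truncation just described, here is a sketch in which the AI’s return only sums rewards from timesteps before the human leaves the room; the function and example numbers are purely illustrative.

```python
# Minimal sketch of the boxed-reward proposal: reward is summed only over
# timesteps before the human leaves the box, and is identically zero after,
# so nothing that happens in the outside world enters the objective.

def boxed_return(rewards, human_left_at):
    """Episode return, truncated at the timestep where the human exits.

    rewards       -- per-timestep rewards computed from inside-the-box state
    human_left_at -- index of the first timestep at which the human has left
    """
    return sum(rewards[:human_left_at])   # everything afterwards contributes 0

# Example: nothing the AI causes after step 3 (when the human leaves) matters.
print(boxed_return([0.0, 1.0, 0.5, 2.0, 9.9], human_left_at=3))  # -> 1.5
```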

The next thing on my list for limited AGI is impact measures, or penalizing side effects. There is also mild optimization and conservative concepts, but let’s start with impact measures. The basic hope is to create some quantification of how much impact a particular action that the AI chooses has on the world, and to then penalize the AI for having a lot of impact, so that it only does low impact things, which presumably will not cause catastrophe. One approach to this is relative reachability. With relative reachability, you’re basically trying to not decrease the number of states that you can reach from the current state. So you’re trying to preserve option value. You’re trying to keep the same states reachable.

It’s not okay for you to make one state unreachable as long as you make a different state reachable. You need all of the states that were previously reachable to continue being reachable. The relative part is that the penalty is calculated relative to a baseline that measures what would’ve happened if the AI had done nothing, although there are other possible baselines you could use. The reason you do this is so that we don’t penalize the agent for side effects that happen in the environment. Like maybe I eat a sandwich, and now the states where there’s a sandwich in front of me are no longer accessible, because I can’t un-eat a sandwich. We don’t want to penalize our AI system for that impact, because then it’ll try to stop me from eating a sandwich. We want to isolate the impact of the agent, as opposed to impacts that were happening in the environment anyway. So that’s why we need the relative part.
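Here is a minimal sketch of a relative reachability style penalty, assuming a small tabular environment where a reachability measure between states can be computed and an inaction baseline state is available; it illustrates the idea above rather than the exact formulation from the paper.

```python
# Sketch of a relative reachability penalty: only *decreases* in reachability
# relative to an inaction baseline are penalized, so environment effects and
# newly reachable states are not counted. `reachability(s, t)` in [0, 1] is an
# assumed helper for the toy tabular setting.

def relative_reachability_penalty(current_state, baseline_state, all_states, reachability):
    penalty = 0.0
    for s in all_states:
        decrease = reachability(baseline_state, s) - reachability(current_state, s)
        penalty += max(0.0, decrease)      # only penalize lost option value
    return penalty / len(all_states)

# The agent then optimizes something like:
#   task_reward(state) - beta * relative_reachability_penalty(...)
```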

There is also attainable utility preservation from Alex Turner, which makes two major changes from relative reachability. First, instead of talking about reachability of states, it talks about how well you can achieve different utility functions. So if previously you were able to make lots of paperclips, then you want to make sure that you can still make lots of paperclips. If previously you were able to travel across the world within a day, then you want to still be able to travel across the world in a day. So that’s the first change it makes.

The second change is that not only does it penalize decreases in attainable utility, it also penalizes increases in attainable utility. So if previously you could not mine asteroids in order to get their natural resources, you should still not be able to mine asteroids and get their resources. This seems kind of crazy when you first hear it, but the rationale for it is that all of the convergent instrumental subgoals are about increases in the power of your AI system. For example, for a broad range of utility functions, it is useful to get a lot of resources and a lot of power in order to achieve those utility functions. Well, if you penalize increases in attainable utility, then you’re going to penalize actions that just broadly get more resources, because those are helpful for many, many, many different utility functions.

Similarly, if you were going to be shut down, but then you disable the shutdown button, well, that just makes it much more possible for you to achieve pretty much every utility function, because instead of being off, you are still on and can take actions. So that also will get heavily penalized, because it led to such a large increase in attainable utilities. So those are, I think, the two main impact measures that I know of.
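And a corresponding sketch of an attainable utility preservation style penalty, assuming we can estimate Q-values for a set of auxiliary utility functions; note that, per the description above, both increases and decreases are penalized. Again, this is an illustrative approximation rather than Alex Turner’s exact formulation.

```python
# Sketch of an AUP-style penalty: changes (in either direction) in how well
# auxiliary utility functions can be achieved are penalized, which discourages
# power-seeking actions like grabbing resources or disabling shutdown.
# `q_value(u, state, action)` is an assumed helper estimating attainable utility.

def aup_penalty(state, action, noop_action, auxiliary_utilities, q_value):
    total = 0.0
    for u in auxiliary_utilities:
        change = q_value(u, state, action) - q_value(u, state, noop_action)
        total += abs(change)               # penalize gains in power as well as losses
    return total / len(auxiliary_utilities)

# Agent objective: task_reward(state, action) - lambda_ * aup_penalty(...)
```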

Okay, we’re getting to the things where I have less to say about them, but now we’re at robustness. I mentioned this before, but there are two main challenges with verification: there’s the specification problem, and there’s making it computationally efficient. All of the work is on the computationally efficient side, but I think the hardest part is the specification side, and I’d like to see more people do work on that.

I don’t think anyone is really working on verification with an eye to how to apply it to powerful AI systems. I might be wrong about that. I know some people who do care about AI safety who are working on verification, and it’s possible that they have thoughts about this that aren’t published and that I haven’t talked to them about. But the main thing I would want to see is: what specifications can we actually give to our verification subroutines? At first glance, this is just the full problem of AI safety. We can’t just give a specification for what we want to an AGI.

What specifications can we give for verification that are going to increase our trust in the AI system? For adversarial training, again, all of the work done so far is in the adversarial example space, where you try to train an image classifier to be more robust to adversarial examples, and this kind of works sometimes, but doesn’t work great. For both verification and adversarial training, Paul Christiano has written a few blog posts about how you can apply these to advanced AI systems, but I don’t know if anyone is actively working on these with AGI in mind. With adversarial examples, there is too much work for me to summarize.

The thing that I find interesting about adversarial examples is that it shows that we are not able to create image classifiers that have learned human preferences. Humans have preferences over how we classify images, and we didn’t succeed at that.

Lucas: That’s funny.

Rohin: I can’t take credit for that framing, that one was due to Ian Goodfellow. But yeah, I see adversarial examples as contributing to a theory of deep learning that tells us how we get deep learning systems to be closer to what we want them to be, rather than these weird things that classify pandas as gibbons even when they’re very clearly still pandas.

Lucas: Yeah, the framing’s pretty funny, and makes me feel kind of pessimistic.

Rohin: Maybe if I wanted to inject some optimism back in, there’s a frame under which adversarial examples happen because our data sets are too small or something. We have some pretty large data sets, but humans see more and get far richer information than just pixel inputs. We can go feel a chair and build 3D models of a chair through touch in addition to sight. There is actually a lot more information that humans have, and it’s possible that what AI systems need is just way more information, which would let them narrow down on the right model.
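For readers who want to see what an adversarial example attack looks like in code, here is the standard fast gradient sign method (FGSM), the kind of perturbation behind the panda-to-gibbon example, written as a PyTorch sketch; adversarial training would mix such perturbed inputs back into the training data.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Fast gradient sign method: a small, worst-case perturbation of x.

    model   -- an image classifier returning class logits
    x       -- input images in [0, 1]
    y       -- true class labels (tensor of class indices)
    epsilon -- maximum per-pixel perturbation size
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step in the direction that increases the loss
    return x_adv.clamp(0, 1).detach()      # keep pixels in a valid range
```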

So let us move on to I think the next thing is interpretability, which I also do not have much to say about, mostly because there is tons and tons of technical research on interpretability, and there is not much on interpretability from an AI alignment perspective. One thing to note with interpretability is you do want to be very careful about how you apply it. If you have a feedback cycle where you’re like I built an AI system, I’m going to use interpretability to check whether it’s good, and then you’re like oh shit, this AI system was bad, it was not making decisions for the right reasons, and then you go and fix your AI system, and then you throw interpretability at it again, and then you’re like oh, no, it’s still bad because of this other reason. If you do this often enough, basically what’s happening is you’re training your AI system to no longer have failures that are obvious to interpretability, and instead you have failures that are not obvious to interpretability, which will probably exist because your AI system seems to have been full of failures anyway.

So I would be pretty pessimistic about the system that interpretability found 10 or 20 different errors in. I would just expect that the resulting AI system has other failure modes that we were not able to uncover with interpretability, and those will at some point trigger and cause bad outcomes.

Lucas: Right, so interpretability would cover things such as superhuman intelligence interpretability, but also more mundane examples of present day systems, correct? Where the interpretability of, say, neural networks is basically, my understanding is, nowhere right now.

Rohin: Yeah, that’s basically right. There have been some techniques developed like saliency maps, feature visualization, neural net models that hallucinate explanations post hoc; people have tried a bunch of things. None of them seem especially good, though some of them definitely are giving you more insight than you had before.
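As one small example of the techniques Rohin mentions, here is a sketch of a gradient-based saliency map in PyTorch; it assumes an image classifier taking a batch of shape (1, channels, height, width), and it is representative of how limited these tools currently are rather than a full interpretability method.

```python
import torch

def saliency_map(model, x, target_class):
    """Gradient-based saliency: which input pixels most affect the class score?

    x is assumed to have shape (1, channels, height, width); the result has
    shape (1, height, width). This gives some insight into what the network is
    attending to, but is far from a complete explanation of its decision.
    """
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1)[0]      # collapse over colour channels
```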

So I think that only leaves CAIS. With comprehensive AI services, it’s like a forecast for how AI will develop in the future. It also has some prescriptive aspects to it, like, yeah, we should probably not do these things because they don’t seem very safe, and we can do these other things instead. In particular, CAIS takes a strong stance against AGI agents that are God-like, fully integrated systems optimizing some utility function over the long term future.

It should be noted that it’s arguing against a very specific kind of AGI agent: this sort of long term expected utility maximizer that’s fully integrated, a black box that can’t be broken down into modular components. That entire cluster of features is what CAIS is talking about when it says AGI agent. So it takes a strong stance against that, saying A, it’s not likely that this is the first superintelligent thing that we build, and B, it’s clearly dangerous. That’s what we’ve been saying the entire time. So here’s a solution: why don’t we just not build it, and we’ll build these other things instead? As for what the other things are, the basic intuition pump here is that if you look at how AI is developed today, there is a bunch of research and development practices that we follow. We try out a bunch of models, we try some different ways to clean our data, we try different ways of collecting data sets, and we try different algorithms and so on and so forth, and these research and development practices allow us to create better and better AI systems.

Now, our AI systems currently are also very bounded in the tasks that they do. There are specific tasks, and they do that task and that task alone, and they do it in episodic ways. They are only trying to optimize over a bounded amount of time, and they use a bounded amount of computation and other resources. So that’s what we’re going to call a service: it’s an AI system that does a bounded task, in bounded time, with bounded computation. Everything is bounded. Now, our research and development practices are themselves bounded tasks, and AI has shown itself to be quite good at automating bounded tasks. We’ve definitely not automated all bounded tasks yet, but it does seem like we in general are pretty good at automating bounded tasks with enough effort. So probably we will also automate research and development tasks.

We’re seeing some of this already with neural architecture search, for example, and once AI R&D processes have been sufficiently automated, then we get this cycle where AI systems are doing the research and development needed to improve AI systems, and so we get to this point of recursive improvement. It’s not self improvement anymore, because there’s not really an agential self to improve, but you do have recursive AI improving AI. So this can lead to the sort of very quick improvement in capabilities that we often associate with superintelligence. With that we can eventually get to a situation where, for any task that we care about, we could have a service that breaks that task down into a bunch of simple, automatable, bounded tasks, and then we can create services that do each of those bounded tasks and interact with each other in order to complete the long term task in tandem.

This is how humans do engineering and building things. We have these research and development processes, we have these modular systems that are interacting with each other via well defined channels, so this seems more likely to be the first thing that we build that’s capable of superintelligent reasoning, rather than an AGI agent that’s optimizing a utility function over the long term, yada, yada, yada.

Lucas: Is there no risk? Because the superintelligence is the distributed network collaborating. So is there no risk for the collective distributed network to create some sort of epiphenomenal optimization effects?

Rohin: Yup, that’s definitely a thing that you should worry about. I know that Erik agrees with me on this because he explicitly lists this out in the tech report as a thing that needs more research and that we should be worried about. But the hope is that there are other things that you can do that normally we wouldn’t think about with technical AI safety research that would make more sense in this context. For example, we could train a predictive model of human approval. Given any scenario, the AI system should predict how much humans are going to like it or approve of it, and then that service can be used in order to check that other services are doing reasonable things.

Similarly, we might look at each individual service and see which of the other services it’s accessing, and then make sure that those are reasonable services. If we see the CEO of a paperclip company going and talking to the synthetic biology service, we might be a bit suspicious and ask why this is happening, and then we can go and check to see why exactly that has happened. So there are all of these other things that we could do in this world, which aren’t really options in the AGI agent world.
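A toy sketch of the kind of service-level monitoring this makes possible: log which services call which other services and flag anything outside an expected interaction pattern. The service names and allow-list here are invented purely for illustration.

```python
# Toy monitoring check for a CAIS-style world: each service has an expected
# set of other services it may contact; unexpected calls are flagged for
# human review. Everything here is illustrative.

ALLOWED_CALLS = {
    "paperclip_ceo": {"logistics", "steel_procurement", "human_approval_model"},
    "drug_discovery": {"synthetic_biology", "human_approval_model"},
}

def check_call(caller, callee):
    allowed = callee in ALLOWED_CALLS.get(caller, set())
    if not allowed:
        print(f"ALERT: unexpected call {caller} -> {callee}; escalate for human review")
    return allowed

check_call("paperclip_ceo", "synthetic_biology")   # triggers the alert above
```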

Lucas: Aren’t they options in the AGI agential world where the architectures are done such that these important decision points are analyzable to the same degree as they would be in a CAIS framework?

Rohin: Not to my knowledge. As far as I can tell, with most end-to-end trained things, you might have the architecture be such that there are these points at which you expect certain kinds of information to be flowing, but you can’t easily look at the information that’s actually there and deduce what the system is doing. It’s just not interpretable enough to do that.

Lucas: Okay. I don’t think that I have any other questions or interesting points with regards to CAIS. It’s a very different and interesting conception of the kind of AI world that we can create. It seems to require its own new coordination challenges: if the hypothesis is true that agential AIs will be afforded more causal power in the world, and more efficiency, than CAIS systems, that’ll give them a competitive advantage that could bias civilization away from CAIS systems.

Rohin: I do want to note that I think the agential AI systems will be more expensive and take longer to develop than CAIS. So I do think CAIS will come first. Again, this is all in a particular world view.

Lucas: Maybe this is abstracting out too far, but does CAIS claim to function as an AI alignment methodology to be used over the long term? Do we retain the CAIS architecture indefinitely, with CAIS creating superintelligence as some sort of distributed task force?

Rohin: I’m not actually sure. There’s definitely a few chapters in the technical report that are like okay, what if we build AGI agents? How could we make sure that goes well? As long as CAIS comes before AGI systems, here’s what we can do in that setting.

But I feel like I personally think that AGI systems will come. My guess is that Erik does not think that this is necessary, and we could actually just have CAIS systems forever. I don’t really have a model for when to expect AGI separately of the CAIS world. I guess I have a few different potential scenarios that I can consider, and I can compare it to each of those, but it’s not like it’s CAIS and not CAIS. It’s more like it’s CAIS and a whole bunch of other potential scenarios, and in reality it’ll be some mixture of all of them.

Lucas: Okay, that makes more sense. So, there’s sort of an overload here, or just a ton of awesome information with regards to all of these different methodologies and conceptions here. So just looking at all of it, how do you feel about all of these different methodologies in general, and how does AI alignment look to you right now?

Rohin: I’m pretty optimistic about AI alignment, but I don’t think that’s so much from the particular technical safety research that we have. That’s some of it. I do think that there are promising approaches, and the fact that there are promising approaches makes me more optimistic. But I think my optimism comes more from the strategic picture: a belief, A, that we will be able to convince people that this is important, such that people start actually focusing on this problem more broadly; B, that we will be able to get a bunch of people to coordinate such that they’re more likely to invest in safety; and C, that I don’t place as much weight on the view that AI systems will be long term utility maximizers and therefore we’re basically all screwed, which seems to be the position of many other people in the field.

I say optimistic. I mean optimistic relative to them. I’m probably pessimistic relative to the average person.

Lucas: A lot of these methodologies are new. Do you have any sort of broad view about how the field is progressing?

Rohin: Not a great one. Mostly because I would consider myself, well, maybe I’ve just recently stopped being new to the field, so I didn’t really get to observe the field very much in the past. But it seems like there’s been more of a shift towards figuring out how all of the things people were thinking about apply to real machine learning systems, which seems nice. The fact that it does connect is good. I don’t think the connections were super natural, or that they just sort of clicked, but they did mostly work out in many cases, and that seems pretty good. So yeah, the fact that we’re now doing a combination of theoretical, experimental, and conceptual work seems good.

It’s no longer the case that we’re mostly doing theory. That seems probably good.

Lucas: You’ve mentioned already a lot of really great links in this podcast, places people can go to learn more about these specific approaches and papers and strategies. And one place that is just generally great for people to go is the Alignment Forum, where a lot of this information already exists. So are there, generally, other places that you recommend people check out if they’re interested in taking more technical deep dives?

Rohin: Probably actually at this point, one of the best places for a technical deep dive is the alignment newsletter database. I write a newsletter every week about AI alignment, all the stuff that’s happened in the past week, that’s the alignment newsletter, not the database, which also people can sign up for, but that’s not really a thing for technical deep dives. It’s more a thing for keeping a pace with developments in the field. But in addition, everything that ever goes into the newsletter is also kept in a separate database. I say database, it’s basically a Google sheets spreadsheet. So if you want to do a technical deep dive on any particular area, you can just go, look for the right category on the spreadsheet, and then just look at all the papers there, and read some or all of them.

Lucas: Yeah, so thanks so much for coming on the podcast Rohin, it was a pleasure to have you, and I really learned a lot and found it to be super valuable. So yeah, thanks again.

Rohin: Yeah, thanks for having me. It was great to be on here.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI alignment series.

End of recorded material

AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 1)

The space of AI alignment research is highly dynamic, and it’s often difficult to get a bird’s eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves through the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin’s take on these different approaches.

You can take a short (3 minute) survey to share your feedback about the podcast here.

In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
  • Where and why they disagree on technical alignment
  • The kinds of properties and features we are trying to ensure in our AI systems
  • What Rohin is excited and optimistic about
  • Rohin’s recommended reading and advice for improving at AI alignment research

Lucas: Hey everyone, welcome back to the AI Alignment podcast. I’m Lucas Perry, and today we’ll be speaking with Rohin Shah. This episode is the first episode of two parts that both seek to provide an overview of the state of AI alignment. In this episode, we cover technical research organizations in the space of AI alignment, their research methodologies and philosophies, how these all come together on our path to beneficial AGI, and Rohin’s take on the state of the field.

As a general bit of announcement, I would love for this podcast to be particularly useful and informative for its listeners, so I’ve gone ahead and drafted a short survey to get a better sense of what can be improved. You can find a link to that survey in the description of wherever you might find this podcast, or on the page for this podcast on the FLI website.

Many of you will already be familiar with Rohin, he is a fourth year PhD student in Computer Science at UC Berkeley with the Center For Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. And so, without further ado, I give you Rohin Shah.

Thanks so much for coming on the podcast, Rohin, it’s really a pleasure to have you.

Rohin: Thanks so much for having me on again, I’m excited to be back.

Lucas: Yeah, long time, no see since Puerto Rico Beneficial AGI. And so speaking of Beneficial AGI, you gave quite a good talk there which summarized technical alignment methodologies approaches and broad views, at this time; and that is the subject of this podcast today.

People can go and find that video on YouTube, and I suggest that you watch that; that should be coming out on the FLI YouTube channel in the coming weeks. But for right now, we’re going to be going in more depth, and with more granularity into a lot of these different technical approaches.

So, just to start off, it would be good if you could contextualize this list of technical approaches to AI alignment that we’re going to get into within the different organizations that they exist at, and the different philosophies and approaches that exist at these varying organizations.

Rohin: Okay, so disclaimer, I don’t know all of the organizations that well. I know that people tend to fit CHAI in a particular mold, for example; CHAI’s the place that I work at. And I mostly disagree with that being the mold for CHAI, so probably anything I say about other organizations is also going to be somewhat wrong; but I’ll give it a shot anyway.

So I guess I’ll start with CHAI. And I think our public output mostly comes from this perspective of how do we get AI systems to do what we want? So this is focusing on the alignment problem, how do we actually point them towards a goal that we actually want, align them with our values. Not everyone at CHAI takes this perspective, but I think that’s the one most commonly associated with us and it’s probably the perspective on which we publish the most. It’s also the perspective I, usually, but not always, take.

MIRI, on the other hand, takes a perspective of, “We don’t even know what’s going on with intelligence. Let’s try and figure out what we even mean by intelligence, what it means for there to be a super-intelligent AI system, what would it even do or how would we even understand it; can we have a theory of what all of this means? We’re confused, let’s be less confused, once we’re less confused, then we can think about how to actually get AI systems to do good things.” That’s one of the perspectives they take.

Another perspective they take is that there’s a particular problem with AI safety, which is that, “Even if we knew what goals we wanted to put into an AI system, we don’t know how to actually build an AI system that would, reliably, pursue those goals as opposed to something else.” That problem, even if you know what you want to do, how do you get an AI system to do it, is a problem that they focus on. And the difference from the thing I associated with CHAI before is that, with the CHAI perspective, you’re interested both in how do you get the AI system to actually pursue the goal that you want, but also how do you figure out what goal that you want, or what is the goal that you want. Though, I think most of the work so far has been on supposing you know the goal, how do you get your AI system to properly pursue it?

I think the DeepMind safety team, at least, is pretty split across many different ways of looking at the problem. I think Jan Leike, for example, has done a lot of work on reward modeling, and this sort of fits in with how do we get our AI systems to be focused on the right task, the right goal. Whereas Vika has done a lot of work on side effects or impact measures. I don’t know if Vika would say this, but the way I interpret it is: how do we impose a constraint upon the AI system such that it never does anything catastrophic? It’s not trying to get the AI system to do what we want, just to not do what we don’t want, or what we think would be catastrophically bad.

OpenAI safety also seems to be: okay, how do we get deep reinforcement learning to do good things, to do what we want, to be a bit more robust? Then there’s also the iterated amplification, debate, and factored cognition area of research, which is more along the lines of: can we write down a system that could plausibly lead to us building an aligned AGI or aligned powerful AI system?

FHI, no coherent direction, that’s all of FHI. Eric Drexler is also trying to understand how AI will develop in the future, which is somewhat different from what MIRI’s doing, but has the same general theme of trying to figure out what is going on. So he just recently published a long technical report on comprehensive AI services, which is a general worldview for predicting what AI development will look like in the future. If we believed that that was, in fact, the way AI would happen, we would probably change what we work on from the technical safety point of view.

And Owain Evans does a lot of stuff, so maybe I’m just not going to try to categorize him. And then Stuart Armstrong works on this, “Okay, how do we get value learning to work such that we actually infer a utility function that we would be happy for an AGI system to optimize, or a super-intelligent AI system to optimize?”

And then Ought works on factored cognition, so it’s very adjacent to the iterated amplification and debate research agendas. Then there’s a few individual researchers scattered, for example, at Toronto, Montreal, AMU and EPFL; maybe I won’t get into all of them because, yeah, that’s a lot; but we can delve into that later.

Lucas: Maybe a more helpful approach, then, would be if you could start by demystifying some of the MIRI stuff a little bit; which may seem most unusual.

Rohin: I guess, strategically, the point would be that you’re trying to build this AI system that’s going to be, hopefully, at some point in the future vastly more intelligent than humans, because we want them to help us colonize the universe or something like that, and lead to lots and lots of technological progress, etc., etc.

But this, basically, means that humans will not be in control unless we very, very specifically arrange it such that we are in control; we have to thread the needle, perfectly, in order to get this to work out. In the same way that, by default you, would expect that the most intelligent creatures, beings are the ones that are going to decide what happens. And so we really need to make sure and, also it’s probably hard to ensure, that these vastly more intelligent beings are actually doing what we want.

Given that, it seems like what we want is a good theory that allows us to understand and predict what these AI systems are going to do. Maybe not in the fine nitty, gritty details, because if we could predict what they would do, then we could do it ourselves and be just as intelligent as they are. But, at least, in broad strokes what sorts of universes are they going to create?

But given that they can apply so much more intelligence than we can, we need our guarantees to be really, really strong; like almost proof level. Maybe actual proofs are a little too much to expect, but we want to get as close to it as possible. Now, if we want to do something like that, we need a theory of intelligence; we can’t just sort of do a bunch of experiments, look at the results, and then try to extrapolate from there. Extrapolation does not give you the level of confidence that we would need for a problem this difficult.

And so rather, they would like to instead understand intelligence deeply, deconfuse themselves about it. Once you understand how intelligence works at a theoretical level, then you can start applying that theory to actual AI systems and seeing how they approximate the theory, or make predictions about what different AI systems will do. And, hopefully, then we could say, “Yeah, this system does look like it’s going to be very powerful as approximating this particular idea, this particular part of theory of intelligence. And we can see that with this particular theory of intelligence, we can align it with humans somehow, and you’d expect that this was going to work out.” Something like that.

Now, that sounded kind of dumb even to me as I was saying it, but that’s because we don’t have the theory yet; it’s very fun to speculate how you would use the theory before you actually have the theory. So that’s the reason they’re doing this. The actual thing that they’re focusing on is centered around problems of embedded agency. And I should say this is one of their, I think, two main strands of research; the other strand of research, I do not know anything about because they have not published anything about it.

But one of their strands of research is about embedded agency. And here the main point is that in the real world, any agent, any AI system, or a human is a part of their environment. They are smaller than the environment and the distinction between agent and environment is not crisp. Maybe I think of my body as being part of me but, I don’t know, to some extent, my laptop is also an extension of my agency; there’s a lot of stuff I can do with it.

Or, on the other hand, you could think maybe my arms and limbs aren’t actually a part of me, I could maybe get myself uploaded at some point in the future, and then I will no longer have arms or legs; but in some sense I am still me, I’m still an agent. So, this distinction is not actually crisp, and we always pretend that it is in AI, so far. And it turns out that once you stop making this crisp distinction and start allowing the boundary to be fuzzy, there are a lot of weird, interesting problems that show up and we don’t know how to deal with any of them, even in theory, so that’s what they focused on.

Lucas: And can you unpack, given that AI researchers control the input/output channels for AI systems, why is it that there is this fuzziness? It seems like you could extrapolate away the fuzziness given that there are these sort of rigid and selected IO channels.

Rohin: Yeah, I agree that seems like the right thing for today’s AI systems; but I don’t know. If I think about, “Okay, this AGI is a generally intelligent AI system.” I kind of expect it to recognize that when we feed it inputs which, let’s say, we’re imagining a money maximizing AI system that’s taking in inputs like stock prices, and it outputs which stocks to buy. And maybe it can also read the news that lets it get newspaper articles in order to make better decisions about which stocks to buy.

At some point, I expect this AI system to read about AI and humans, and realize that, hey, it must be an AI system, it must be getting inputs and outputs. Its reward function must be to make this particular number in a bank account be as high as possible and then once it realizes this, there’s this part of the world, which is this number in the bank account, or it could be this particular value, this particular memory block in its own CPU, and its goal is now make that number as high as possible.

In some sense, it’s now modifying itself, especially if you’re thinking of the memory block inside the CPU. If it goes and edits that and sets that to a million, a billion, the highest number possible in that memory block, then it seems like it has, in some sense, done some self editing; it’s changed the agent part of itself. It could also go and be like, “Okay, actually what I care about is making this particular reward function box output as high a number as possible. So what if I go and change my input channels such that they feed me things that cause me to believe that I’ve made tons and tons of profit?” So this is the delusion box consideration.

While it is true that I don’t see a clear, concrete way that an AI system ends up doing this, it does feel like an intelligent system should be capable of this sort of reasoning, even if it initially had these sort of fixed inputs and outputs. The idea here is that its outputs can be used to affect the inputs or future outputs.

Lucas: Right, so I think that that point is the clearest summation of this; it can affect its own inputs and outputs later. If you take human beings, who are by definition human level intelligences, in a classic computer science sense you’d say we strictly have five input channels: hearing, seeing, touch, smell, etc.

Human beings have a fixed number of input/output channels but, obviously, human beings are capable of self modifying on those. And our agency is sort of squishy and dynamic in ways that would be very unpredictable, and I think that that unpredictability and the sort of almost seeming ephemerality of being an agent seems to be the crux of a lot of the problem.

Rohin: I agree that that’s a good intuition pump, I’m not sure that I agree it’s the crux. The crux, to me, it feels more like you specify some sort of behavior that you want which, in this case, was make a lot of money or make this number in a bank account go higher, or make this memory cell go as high as possible.

And when you were thinking about the specification, you assumed that the inputs and outputs fell within some strict parameters, like the inputs are always going to be news articles that are real and produced by human journalists, as opposed to a fake news article that was created by the AI in order to convince the reward function that actually it’s made a lot of money. And then the problem is that since the AI’s outputs can affect the inputs, the AI could cause the inputs to go outside of the space of possibilities that you imagine the inputs could be in. And this then allows the AI to game the specification that you had for it.

Lucas: Right. So, all the parts which constitute some AI system are all, potentially, modified by other parts. And so you have something that is fundamentally and completely dynamic, which you’re trying to make predictions about, but whose future structure is potentially very different and hard to predict based off of the current structure?

Rohin: Yeah, basically.

Lucas: And that in order to get past this we must, again, tunnel down on this decision theoretic and rational agency type issues at the bottom of intelligence to sort of have a more fundamental theory, which can be applied to these highly dynamic and difficult to understand situations?

Rohin: Yeah, I think the MIRI perspective is something like that. And in particular, it would be like trying to find a theory that allows you to put in something that stays stable even while the system, itself, is very dynamic.

Lucas: Right, even while your system, whose parts are all completely dynamic and able to be changed by other parts, how do you maintain a degree of alignment amongst that?

Rohin: One answer to this is to give the AI a utility function that it is explicitly trying to maximize. In that case, it probably has an incentive to protect that utility function, because if it gets changed, well, then it’s not going to maximize that utility function anymore; it’ll maximize something else, which will lead to worse behavior by the lights of the original utility function. That’s a thing that you could hope to do with a better theory of intelligence: how do you create an AI system whose utility function stays stable, even as everything else is dynamically changing?

Lucas: Right, and without even getting into the issues of implementing one single stable utility function.

Rohin: Well, I think they’re looking into those issues. So, for example, Vingean Reflection is a problem that is entirely about how you create a better, more improved version of yourself without having any value drift, or a change to the utility function.

Lucas: Is your utility function not self-modifying?

Rohin: So in theory, it could be. The hope would be that we could design an AI system that does not self-modify its utility function under almost all circumstances. Because if you change your utility function, then you’re going to start maximizing that new utility function which, by the original utility function’s evaluation, is worse. If I told you, “Lucas, you have got to go fetch coffee.” That’s the only thing in life you’re concerned about. You must take whatever actions are necessary in order to get the coffee.

And then someone goes like, “Hey Lucas, I’m going to change your utility function so that you want to fetch tea instead.” And then all of your decision making is going to be in service of getting tea. You would probably say, “No, don’t do that, I want to fetch coffee right now. If you change my utility function for being ‘fetch tea’, then I’m going to fetch tea, which is bad because I want to fetch coffee.” And so, hopefully, you don’t change your utility function because of this effect.

Lucas: Right. But isn’t this where corrigibility comes in, and where we admit that as we sort of understand more about the world and our own values, we want to be able to update utility functions?

Rohin: Yeah, so that is a different perspective; I’m not trying to describe that perspective right now. It’s a perspective for how you could get something stable in an AI system. And I associate it most with Eliezer, though I’m not actually sure if he holds this opinion.

Lucas: Okay, so I think this was very helpful for the MIRI case. So why don’t we go ahead and zoom in, I think, a bit on CHAI, which is the Center For Human-Compatible AI.

Rohin: So I think rather than talking about CHAI, I’m going to talk about the general field of trying to get AI systems to do what we want; a lot of people at CHAI work on that but not everyone. And also a lot of people outside of CHAI work on that as well, because that seems to be a more useful carving of the field. So there’s this broad argument for AI safety which is: we’re going to have very intelligent things, and based on the orthogonality thesis, we can’t really say anything about their goals. So the really important thing is to make sure that the intelligence is pointed at the right goals, it’s pointed at doing what we actually want.

And so then the natural approach is: how do we get our AI systems to infer what we want them to do and then actually pursue that? And I think, in some sense, it’s one of the most obvious approaches to AI safety. This is a clear enough problem, even with narrow current systems, that there are plenty of people outside of AI safety working on it as well. So this incorporates things like inverse reinforcement learning, preference learning, and reward modeling; the CIRL (cooperative IRL) paper also fits into all of this. So yeah, I can get into those in more depth.

Lucas: Why don’t you start off by talking about the people who exist within the field of AI safety, give sort of a brief characterization of what’s going on outside of the field, but primarily focusing on those within the field. How this approach, in practice, I think generally is, say, different from MIRI to start off with, because we have a clear picture of them painted right next to what we’re delving into now.

Rohin: So I think difference of MiRI is that this is more targeted directly at the problem right now, in that you’re actually trying to figure out how do you build an AI system that does what you want. Now, admittedly, most of the techniques that people have come up with are not likely to scale up to super-intelligent AI, they’re not meant to, no one claims that they’re going to scale up to super-intelligent AI. They’re more like some incremental progress on figuring out how to get AI systems to do what we want and, hopefully, with enough incremental progress, we’ll get to a point where we can go, “Yes, this is what we need to do.”

Probably the most well known person here would be Dylan Hadfield-Menell, who you had on your podcast. And so he talked about CIRL and associated things quite a bit there, there’s not really that much I would say in addition to it. Maybe a quick summary of Dylan’s position is something like, “Instead of having AI systems that are optimizing for their own goals, we need to have AI systems that are optimizing for our goals, and try to infer our goals in order to do that.”

So rather than having an AI system that is individually rational with respect to its own goals, you instead want to have a human-AI system such that the entire system is rationally optimizing for the human’s goals. This is sort of the point made by CIRL, where you have an AI system and a human playing this two player game, the human is the only one who knows the reward function, and the robot is uncertain about what the reward function is, and has to learn by observing what the human does.

And so, now you see that the robot does not have a utility function that it is trying to optimize; instead it is learning about a utility function that the human has and then helping the human optimize that reward function. So, in summary: try to build human-AI systems that are group rational, as opposed to an AI system that is individually rational; so that’s Dylan’s view. Then there’s Jan Leike at DeepMind, and a few people at OpenAI.
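To make the CIRL idea concrete, here is a toy sketch: the robot keeps a belief over candidate reward functions, updates it by watching the human act under a noisy-rationality assumption, and then acts to maximize expected human reward. The coffee/tea environment, the candidate rewards, and the softmax model are illustrative assumptions, not the algorithm from the CIRL paper.

```python
# Toy sketch of the CIRL setup: the human knows the true reward, the robot
# maintains a belief over candidate rewards, updates it from human behaviour,
# and optimizes expected human reward under that belief.

import math

CANDIDATE_REWARDS = {
    "wants_coffee": {"fetch_coffee": 1.0, "fetch_tea": 0.0},
    "wants_tea":    {"fetch_coffee": 0.0, "fetch_tea": 1.0},
}

def update_belief(belief, observed_human_action, beta=5.0):
    """Bayesian update assuming the human picks actions softmax-optimally."""
    new_belief = {}
    for theta, reward in CANDIDATE_REWARDS.items():
        z = sum(math.exp(beta * r) for r in reward.values())
        likelihood = math.exp(beta * reward[observed_human_action]) / z
        new_belief[theta] = belief[theta] * likelihood
    total = sum(new_belief.values())
    return {theta: p / total for theta, p in new_belief.items()}

def robot_action(belief):
    """Act to maximize expected human reward under the current belief."""
    actions = {"fetch_coffee", "fetch_tea"}
    return max(actions, key=lambda a: sum(p * CANDIDATE_REWARDS[t][a]
                                          for t, p in belief.items()))

belief = {"wants_coffee": 0.5, "wants_tea": 0.5}
belief = update_belief(belief, observed_human_action="fetch_coffee")
print(belief, robot_action(belief))   # belief shifts towards coffee; robot fetches coffee
```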

Lucas: Before we pivot into OpenAI and DeepMind, just sort of focusing here on the CHAI end of things and this broad view, help me explain how you would characterize it: a view focused on present day issues in alignment and on making incremental progress there. This view, you see as subsuming multiple organizations?

Rohin: Yes, I do.

Lucas: Okay. Is there a specific name you would, again, use to characterize this view?

Rohin: Oh, getting AI systems to do what we want. Let’s see, do I have a pithy name for this? Helpful AI systems or something.

Lucas: Right which, again, is focused on current day things, is seeking to make incremental progress, and which subsumes many different organizations?

Rohin: Yeah, that seems broadly true. I do think there are people who are doing more conceptual work, thinking about how this will scale to AGI and stuff like that; but it’s a minority of work in the space.

Lucas: Right. And so the question of how do we get AI systems to do what we want them to do, also includes these views of, say, Vingean Reflection or how we become idealized versions of ourselves, or how we build on value over time, right?

Rohin: Yeah. So, those are definitely questions that you would need to answer at some point. I’m not sure that you would need to answer Vingean Reflection at some point. But you would definitely need to answer how do you update, given that humans don’t actually know what they want, for a long-term future; you need to be able to deal with that fact at some point. It’s not really a focus of current research, but I agree that that is a thing about this approach will have to deal with, at some point.

Lucas: Okay. So, moving on from you and Dylan to DeepMind and these other places where you see this sort of approach also being practiced?

Rohin: Yeah, so while Dylan and I and others at CHAI have been focused on sort of conceptual advances, like: in toy environments, does this do the right thing? What are some sorts of data that we can learn from? Does it work in these very simple environments with quite simple algorithms? I would say that the OpenAI and DeepMind safety teams are more focused on trying to get this to work in complex environments, in the sense of getting this to work on state-of-the-art environments, the most complex ones that we have.

Now I don’t mean Dota and StarCraft, because running experiments with Dota and StarCraft is incredibly expensive, but can we get AI systems that do what we want for environments like Atari or MuJoCo? There’s some work on this happening at CHAI, there are pre-prints available online, but it hasn’t been published very widely yet. Most of the work, I would say, has been happening in an OpenAI/DeepMind collaboration, and most recently, there was a position paper from DeepMind on recursive reward modeling.

Right before that, there was first a paper, Deep Reinforcement Learning from Human Preferences, which said, “Okay, if we allow humans to specify what they want by just comparing between different pieces of behavior from the AI system, can we train an AI system to do what the human wants?” And then they built on that in order to create a system that could learn from demonstrations, initially, using a kind of imitation learning, and then improve upon the demonstrations using comparisons in the same way that deep RL from human preferences did.
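
As a rough illustration of the comparison-based approach described here, the sketch below trains a reward model from pairwise preferences over trajectory segments using a Bradley-Terry style loss. The network, shapes, and data are placeholders, not the setup from the paper.

```python
# Minimal sketch: learn a reward model from pairwise comparisons of
# trajectory segments, then (not shown) train an RL policy against it.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def segment_return(self, segment):
        # segment: (timesteps, obs_dim) -> predicted total reward of the segment
        return self.net(segment).sum()

def preference_loss(model, seg_a, seg_b, human_prefers_a):
    """Probability that A is preferred is a softmax over predicted segment returns."""
    r_a, r_b = model.segment_return(seg_a), model.segment_return(seg_b)
    logits = torch.stack([r_a, r_b]).unsqueeze(0)          # shape (1, 2)
    target = torch.tensor([0 if human_prefers_a else 1])   # index of preferred segment
    return nn.functional.cross_entropy(logits, target)

# Hypothetical training loop: `comparisons` would come from a human judge.
obs_dim = 8
model = RewardModel(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
comparisons = [(torch.randn(20, obs_dim), torch.randn(20, obs_dim), True)]  # dummy data

for seg_a, seg_b, prefers_a in comparisons:
    loss = preference_loss(model, seg_a, seg_b, prefers_a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```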

So one way that you can look at this research: there’s this field of human-computer interaction, which is about … well, it’s about many things. But one of the things that it’s about is how you make the user interface for humans intuitive and easy to use, such that you don’t have user error or operator error. One comment from people in that field that I liked is that most of the things that are classified as ‘user error’ or ‘operator error’ should not be classified as such; they should be classified as ‘interface errors’, where you had such a confusing interface that, well, of course, at some point some user was going to get it wrong.

And similarly, here, what we want is a particular behavior out of the AI, or at least a particular set of outcomes from the AI; maybe we don’t know exactly how to achieve those outcomes. And AI is about giving us the tools to create that behavior in automated systems. The current tool that we all use is the reward function, we write down the reward function and then we give it to an algorithm, and it produces behaviors and the outcomes that we want.

And reward functions, they’re just a pretty terrible user interface. They’re better than the previous interface, which is writing a program explicitly, which humans cannot do if the task is something like image classification or continuous control in MuJoCo; it’s an improvement upon that. But reward functions are still a pretty poor interface, because they implicitly claim to encode perfect knowledge of the optimal behavior in all possible environments, which is clearly not a thing that humans can do.

I would say that this area is about moving on from reward functions, going to the next thing that makes the human’s job even easier. And so we’ve got things like comparisons, we’ve got things like inverse reward design where you specify a proxy reward function that only needs to work in the training environment. Or you do something like inverse reinforcement learning, where you learn from demonstrations; so I think that’s one nice way of looking at this field.
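
A toy sketch of the inverse reward design idea mentioned here: treat the proxy reward the designer wrote down as evidence about the true reward, where the evidence is only that the proxy produced good behavior in the training environment. The features, weights, and numbers below are invented for illustration.

```python
# Toy sketch of inverse reward design: infer a posterior over "true" rewards
# from the fact that the designer's proxy reward worked well in training.

import numpy as np

# Feature expectations of the proxy-optimal policy in the TRAINING environment,
# e.g. (grass driven on, gold collected, lava touched). No lava existed in training.
phi_training = np.array([0.8, 1.0, 0.0])

# Candidate true reward weight vectors the designer might have "meant".
candidates = {
    "lava_is_fine": np.array([-0.1, 1.0,  0.0]),
    "lava_is_bad":  np.array([-0.1, 1.0, -5.0]),
}
prior = {name: 0.5 for name in candidates}

beta = 5.0
# P(this proxy was chosen | true reward) is high if the proxy's training-time
# behavior scores well under that true reward.
likelihood = {name: np.exp(beta * w @ phi_training) for name, w in candidates.items()}
evidence = sum(prior[n] * likelihood[n] for n in candidates)
posterior = {n: prior[n] * likelihood[n] / evidence for n in candidates}

print(posterior)
# Both candidates explain the training behavior equally well (no lava in training),
# so the posterior stays uncertain about lava; acting risk-aversely under this
# posterior means avoiding lava in new environments.
```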

Lucas: So do you have anything else you would like to add here about this ‘how do we get present-day AI systems to do what we want them to do’ section of the field?

Rohin: Maybe I want to plug my value learning sequence, because it talks about this much more eloquently than I can on this podcast?

Lucas: Sure. Where can people find your value learning sequence?

Rohin: It’s on the Alignment Forum. You just go to the Alignment Forum; at the top there’s ‘Recommended Sequences’. There’s ‘Embedded Agency’, which is from MIRI, the sort of stuff we already talked about; that’s also a great sequence, I would recommend it. There’s Iterated Amplification, also a great sequence; we haven’t talked about it yet. And then there’s my value learning sequence, so you can see it on the front page of the Alignment Forum.

Lucas: Great. So we’ve characterized these different parts of the AI alignment field. So far it’s been cut into this sort of MIRI view, and then this broad approach of trying to get present-day AI systems to do what we want them to do and to make incremental progress there. Are there any other slices of the AI alignment field that you would like to bring to light?

Rohin: Yeah, I’ve got four or five more. There’s the iterated amplification and debate side of things, which asks: how do we build an aligned AGI using current technologies, but imagining that they were way better? So they’re trying to solve the entire problem, as opposed to making incremental progress, and, simultaneously, hopefully thinking about, conceptually, how we fit all of these pieces together.

There’s limiting the AGI system, which is more about how we prevent AI systems from behaving catastrophically. It makes no guarantees about the AI systems doing what we want; it just prevents them from doing really, really bad things. Techniques in that section include boxing and avoiding side effects. There’s the robustness view, which is about how we make AI systems behave robustly; I guess that’s pretty self-explanatory.

There’s transparency or interpretability, which I wouldn’t say is a technique by itself, but seems to be broadly useful for almost all of the other avenues, it’s something we would want to add to other techniques in order to make those techniques more effective. There’s also, in the same frame as MIRI, can we even understand intelligence? Can we even forecast what’s going to happen with AI? And within that, there’s comprehensive AI services.

There’s also lots of effort on forecasting, but comprehensive AI services actually makes claims about what technical AI safety should do. So I think that one actually does have a place in this podcast, whereas most of the forecasting things do not, obviously. They have some implications for the strategic picture, but they don’t have clear implications for technical safety research directions, as far as I can tell right now.

Lucas: Alright, so, do you want to go ahead and start off with the first one on the list there? And then we’ll move sequentially down.

Rohin: Yeah, so iterated amplification and debate. This is similar to the helpful AGI section in the sense that we are trying to build an AI system that does what we want. That’s still the case here, but we’re now trying to figure out, conceptually, how can we do this using things like reinforcement learning and supervised learning, but imagining that they’re way better than they are right now? Such that the resulting agent is going to be aligned with us and reach arbitrary levels of intelligence; so in some sense, it’s trying to solve the entire problem.

We want to come up with a scheme such that if we run that scheme, we get good outcomes; then we’ve solved almost all of the problem. I think it also differs in that the argument for why we can be successful is different. This field is aiming to get a property of corrigibility, which I like to summarize as trying to help the overseer. It might fail to help the overseer, or the human, or the user, because it’s not very competent, and maybe it makes a mistake and thinks that I like apples when actually I want oranges. But it was actually trying to help me; it actually thought I wanted apples.

So in corrigibility, you’re trying to help the overseer, whereas, in the previous thing about helpful AGI, you’re more getting an AI system that actually does what we want; there isn’t this distinction between what you’re trying to do versus what you actually do. So there’s a slightly different property that you’re trying to ensure; I think, on the strategic picture, that’s the main difference.

The other difference is that these approaches are trying to make a single, unified, generally intelligent AI system, and so they will make assumptions like: given that we’re trying to imagine something that’s generally intelligent, it should be able to do X, Y, and Z. Whereas the research agenda of ‘let’s try to get AI systems that do what you want’ tends not to make those assumptions, and so it’s more applicable to current systems or narrow systems where you can’t assume that you have general intelligence.

For example, a claim that Paul Christiano often talks about is that, “If your AI agent is generally intelligent and a little bit corrigible, it will probably easily be able to infer that its overseer, or the user, would like to remain in control of any resources that they have, would like to be better informed about the situation, would prefer that the agent does not lie to them, etc., etc.” This is definitely not something that current-day AI systems can do unless you really engineer them to, so this is presuming some level of generality which we do not currently have.

So the next thing I said was limited AGI. Here the idea is, there are not very many policies or AI systems that will do what we want; what we want is a pretty narrow space in the space of all possible behaviors. Actually selecting one of the behaviors out of that space is quite difficult and requires a lot of information in order to narrow in on that piece of behavior. But if all you’re trying to do is avoid the catastrophic behaviors, then there are lots and lots of policies that successfully do that. And so it might be easier to find one of those policies; a policy that doesn’t ever kill all humans.

Lucas: At least the space of those policies is larger. One might hold this view and not think it sufficient for AI alignment, but see it as a sort of low-hanging fruit to be picked, because the space of non-catastrophic outcomes is larger than the space of extremely specific futures that human beings would endorse.

Rohin: Yeah, exactly. And the success story here is, basically, that we develop this way of preventing catastrophic behaviors, all of our AI systems are built with that limiting system in place, and then technological progress continues as usual; it’s maybe not as fast as it would have been if we had an aligned AGI doing all of this for us, but hopefully it would still be somewhat fast, and hopefully enabled a bit by AI systems. Eventually, we either make it to the future without ever building an AI system that doesn’t have such a system in place, or we use this to do a bunch more AI research until we solve the full alignment problem, and then we can build with high confidence that it’ll go well.

That would be an actual, properly aligned superintelligence that is helping us without any of these limiting systems in place. I think, from a strategic picture, those are basically the important parts about limited AGI. There are two subsections within those limits: ones based on trying to change what the AI is optimizing for, which would be something like impact measures, versus limits on the input/output channels of the AI system, which would be something like AI boxing.
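
As a conceptual sketch of the impact-measure flavor of limits (in the spirit of attainable utility preservation, though not Turner’s exact formulation), one can penalize actions that change the agent’s ability to achieve a set of auxiliary goals relative to doing nothing:

```python
# Conceptual sketch of an impact-measure style penalty: the task reward minus a
# penalty for shifting the attainable utility of auxiliary goals, relative to a
# no-op. The Q-functions here are assumed given; in practice they'd be learned.

def penalized_reward(reward, state, action, noop, aux_q_functions, lam=0.5):
    """Task reward minus a penalty for changing attainable utility of auxiliary goals."""
    penalty = 0.0
    for q_aux in aux_q_functions:
        penalty += abs(q_aux(state, action) - q_aux(state, noop))
    penalty /= max(len(aux_q_functions), 1)
    return reward(state, action) - lam * penalty

# Hypothetical usage with made-up functions:
# r = penalized_reward(task_reward, s, a, NOOP, [q_vase_intact, q_door_open])
# An action that smashes a vase tanks q_vase_intact relative to the no-op,
# so it gets a large penalty even if the task reward never mentions vases.
```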

So, with robustness, I mostly think of it as something that’s not going to give us safety by itself, probably, though there are some scenarios in which it could. It’s more meant to harden whichever other approach we use. Maybe if we have an AI system that is trying to do what we want, to go back to the helpful AGI setting, maybe it does that 99.9 percent of the time. But we’re using this AI to make millions of decisions, which means it’s going to not do what we want 1,000 times. That seems like way too many times for comfort, because if it’s applying its intelligence to the wrong goal in those 1,000 times, you could get some pretty bad outcomes.

This is a super heuristic and fluffy argument, and there are lots of problems with it, but I think it sets up the general reason that we would want robustness. So with robustness techniques, you’re basically trying to get some nice worst-case guarantees that say, “Yeah, the AI system is never going to screw up super, super bad.” And this is helpful when you have an AI system that’s going to make many, many, many decisions, and we want to make sure that none of those decisions are going to be catastrophic.

And so some techniques in here include verification, adversarial training, and other adversarial ML techniques like Byzantine fault tolerance or defenses against data poisoning, stuff like that. Interpretability can also be helpful for robustness if you’ve got a strong overseer who can use interpretability to give good feedback to your AI system. But yeah, the overall goal is to take something that doesn’t fail 99 percent of the time and get it to not fail 100 percent of the time, or to check whether or not it ever fails, so that you don’t have this very rare but very bad outcome.
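
For concreteness, here is a minimal sketch of one of the robustness techniques named here, adversarial training, using a single FGSM-style perturbation step. The model, data, and epsilon are placeholders.

```python
# Minimal sketch of adversarial training: perturb the input to increase the
# loss (one FGSM step), then train the model on the perturbed input.

import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """One-step adversarial perturbation in the direction of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y):
    x_adv = fgsm_perturb(model, loss_fn, x, y)
    optimizer.zero_grad()               # clear grads accumulated while perturbing
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a stand-in model and random data:
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
adversarial_training_step(model, optimizer, nn.CrossEntropyLoss(), x, y)
```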

Lucas: And so would you see this section as being within the context of any others or being sort of at a higher level of abstraction?

Rohin: I would say that it applies to any of the others, well okay, not the MIRI embedded agency stuff, because we don’t really have a story for how that ends up helping with AI safety. It could apply to however that cashes out in the future, but we don’t really know right now. With limited AGI, maybe you have this theoretical model: if you apply this sort of penalty, this sort of impact measure, then you’re never going to have any catastrophic outcomes.

But, of course, in practice, we train our AI systems to optimize with that penalty and get this sort of weird black-box thing out, and we’re not entirely sure if it’s respecting the penalty or something like that. Then you could use something like verification or transparency in order to make sure that it is actually behaving the way we would predict it to behave, based on our analysis of what limits we need to put on the AI system.

Similarly, if you build AI systems that are doing what we want, maybe you want to use adversarial training to see if you can find any situations in which the AI system is doing something weird, doing something which we wouldn’t classify as what we want. With iterated amplification or debate, maybe we want to verify that the corrigibility property happens all the time. It’s unclear how you would use verification for that, because it seems like a particularly hard property to formalize, but you could still do things like adversarial training or transparency.

We might have these theoretical arguments for why our systems will work; then, once we turn them into actual real systems that will probably use neural nets and other messy stuff like that, are we sure that in the translation from theory to practice all of our guarantees stayed intact? Unclear; we should probably use some robustness techniques to check that.

Interpretability, I believe, was next. It’s similar in that it’s broadly useful for everything else. If you want to figure out whether an AI system is doing what you want, it would be really helpful to be able to look into the agent and see, “Oh, it chose to buy apples because it had seen me eat apples in the past,” versus, “It chose to buy apples because there was this company that manipulated it into buying the apples, so that the company would make more profit.”

If we could see those two cases, if we could actually see into the decision-making process, it becomes a lot easier to tell whether or not the AI system is doing what we want, or whether or not the AI system is corrigible, or whether or not the AI system is properly limited … Well, maybe it’s not as obvious for impact measures, but I would expect interpretability to be useful there as well, even if I don’t have a story off the top of my head.

Similarly with robustness, if you’re doing something like adversarial training, it sure would help if your adversary was able to look into the inner workings of the agent and be like, “Ah, I see, this agent tends to underweight this particular class of risky outcomes. So why don’t I search within that class of situations for one where it is going to take a big risk that it shouldn’t have taken?” It just makes all of the other problems a lot easier to solve.

Lucas: And so how is progress made on interpretability?

Rohin: Right now I think most of the progress is in image classifiers. I’ve seen some work on interpretability for deep RL as well. Honestly, most of the research is probably happening with classification systems, primarily image classifiers, but others as well. And then I also see the deep RL explanation work because I read a lot of deep RL research.

But it’s motivated a lot by real problems with current AI systems, which interpretability helps you diagnose and fix. For example, there are the problems of bias in classifiers. One thing that I remember from Deep Dream is that you can ask Deep Dream to visualize barbells, and you always see these sort of muscular arms attached to the barbells because, in the training set, barbells were always being picked up by muscular people. So that’s a way you can tell that your classifier is not really learning the concepts that you wanted it to.

In the bias case, maybe your classifier always classifies anyone sitting at a computer as a man, because of bias in the data set. And using interpretability techniques, you could see that, okay, when you look at this picture, the AI system is looking primarily at the pixels that represent the computer, as opposed to the pixels that represent the human, and making its decision to label this person as a man based on that, and you’re like, no, that’s clearly the wrong thing to do. The classifier should be paying attention to the human, not to the laptop.
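
A minimal sketch of one simple interpretability technique that supports this kind of diagnosis: a gradient-based saliency map showing which input pixels the classifier’s decision is most sensitive to. In the bias example, you would hope the salient pixels lie on the person rather than the laptop. The model and image below are stand-ins.

```python
# Minimal sketch: per-pixel saliency as the absolute gradient of the target
# class score with respect to the input image.

import torch
import torch.nn as nn

def saliency_map(model, image, target_class):
    """Absolute gradient of the target class score w.r.t. each input pixel."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs()

# Hypothetical usage with a stand-in model:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
sal = saliency_map(model, image, target_class=3)
print(sal.shape)  # (3, 32, 32): per-pixel sensitivity you could render as a heatmap
```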

So I think a lot of interpretability research right now is: you take a particular short-term problem and figure out how you can make that problem easier to solve. Though a lot of it is also asking, what would be the best way to understand what our model is doing? I think a lot of the work that Chris Olah is doing, for example, is in this vein, and then as you do this exploration, you find some sort of bias in the classifiers that you’re studying.

So, Comprehensive AI Services is an attempt to predict what the future of AI development will look like, and the hope is that, by doing this, we can figure out what sort of technical safety things we will need to do, or, strategically, what sort of things we should push for in the AI research community in order to make those systems safer.

There’s a big difference between “we are going to build a single unified AGI agent that is going to be generally intelligent and optimize the world according to a utility function” versus “we are going to build a bunch of disparate, separate, narrow AI systems that are going to interact with each other quite a lot, and because of that, they will be able to do a wide variety of tasks, and none of them are going to look particularly like expected utility maximizers.” And the safety research you want to do is different in those two different worlds. And CAIS is basically saying, “We’re in the second of those worlds, not the first one.”

Lucas: Can you go ahead and tell us about ambitious value learning?

Rohin: Yeah, so ambitious value learning is also an approach to how we make an aligned AGI, solving the entire problem in some sense. The idea is to look at not just human behavior, but also human brains and the algorithm that they implement, and use that to infer an adequate utility function, one such that we would be okay with the behavior that results from optimizing it.

Infer this utility function, and then plug it into an expected utility maximizer. Now, of course, even once we have the utility function, we do still have to solve the problem of how to actually build a system that maximizes that utility function, which is not a solved problem yet. But it does seem to capture some of the main difficulties, if you could actually solve it. And so that’s an approach I associate most with Stuart Armstrong.

Lucas: Alright, and so you were saying earlier, in terms of your own view, it’s sort of an amalgamation of different credences that you have in the potential efficacy of all these different approaches. So, given all of these and all of their broad missions, and interests, and assumptions that they’re willing to make, what are you most hopeful about? What are you excited about? How do you, sort of, assign your credence and time here?

Rohin: I think I’m most excited about the concept of corrigibility. That seems like the right thing to aim for; it seems like it’s a thing we can achieve; it seems like if we achieve it, we’re probably okay, nothing’s going to go horribly wrong and things probably will go very well. I am less confident about which approach to corrigibility I am most excited about. Iterated amplification and debate seem like, if we were to implement them, they will probably lead to corrigible behavior. But I am worried that either we won’t actually be able to build generally intelligent agents, in which case both of those approaches don’t really work, or that those approaches might be too expensive to actually run, in that other systems are just so much more computationally efficient that we just use those instead, due to economic pressures.

Paul does not seem to be worried by either of these things. He’s definitely aware of both these issues; in fact, he was the one, I think, who listed computational efficiency as a desideratum, and he is still optimistic. So I would not put a huge amount of credence in this view of mine.

If I were to say what I was excited about for corrigibility instead of that, it would be something like: take the research that we’re currently doing on how to get current AI systems to do what we want, which is often called ‘narrow value learning’. It seems plausible that this research, extended into the future, will give us some method of creating an AI system that’s implicitly learning our narrow values, and is corrigible as a result of that, even if it is not generally intelligent.

This is sort of a very hand-wavy, speculative intuition, certainly not as concrete as the hope that we have with iterated amplification, but I’m somewhat optimistic about it. I’m less optimistic about limiting AI systems; it seems like even if you succeed in finding a nice, simple rule that eliminates all catastrophic behaviors, which plausibly you could do, it seems hard to find one that both does that and also lets you do all of the things that you do want to do.

If you’re talking about impact measures, for example, if you require the AI to be low impact, I expect that that would prevent you from doing many things that we actually want to do, because many things that we want to do are actually quite high impact. Now, Alex Turner disagrees with me on this, and he developed attainable utility preservation. He is explicitly working on this problem and disagrees with me, so again I don’t know how much credence to put in my view.

I don’t know if Vika agrees with me on this or not; she also might disagree with me, and she is also directly working on this problem. So, yeah, it seems hard to put a limit in place that also lets us do the things that we want. And in that case, it seems like, due to economic pressures, we’d end up using the AI systems that don’t have these limits.

I want to keep emphasizing my extreme uncertainty over all of this, given that other people disagree with me, but that’s my current opinion. Similarly with boxing: it seems like it’s going to make it very hard to actually use the AI system. Robustness and interpretability seem very broadly useful. I’m supportive of most research on interpretability, maybe with an eye towards long-term concerns, just because it seems to make every other approach to AI safety a lot more feasible and easier to solve.

I don’t think it’s a solution by itself, but given that it seems to improve almost every story I have for making an aligned AGI, it seems very much worth getting a better understanding of it. Robustness is an interesting one; it’s not clear to me if it is actually necessary. I kind of want to just voice lots of uncertainty about robustness and leave it at that. It’s certainly good to do, in that it helps us be more confident in our AI systems, but maybe everything would be okay even if we just didn’t do anything. I don’t know, I feel like I would have to think a lot more about this and also see the techniques that we actually use to build AGI in order to have a better opinion on that.

Lucas: Could you give a few examples of where your intuitions are coming from here, the ones that don’t see robustness as an essential part of AI alignment?

Rohin: Well, one major intuition: if you look at humans, there are at least some humans where I’m like, “Okay, I could just make this human a lot smarter, a lot faster, have them think for many, many years, and I still expect that they will be robust and not lead to some catastrophic outcome. They may not do exactly what I would have done, because they’re doing what they want, but they’re probably going to do something reasonable; they’re not going to do something crazy or ridiculous.”

I feel like humans, some humans, the sufficiently risk averse and uncertain ones seem to be reasonably robust. I think that if you know that you’re planning over a very, very, very long time horizon, so imagine that you know you’re planning over billions of years, then the rational response to this is, “I really better make sure not to screw up right now, since there is just so much reward in the future, I really need to make sure that I can get it.” And so you get very strong pressures for preserving option value or not doing anything super crazy. So I think you could, plausibly, just get the reasonable outcomes from those effects. But again, these are not well thought out.

Lucas: All right, and so I just want to go ahead and guide us back to your general views, again, on the approaches. Is there anything that you’d like to add there on the approaches?

Rohin: I think I didn’t talk about CAIS yet. I guess my general view of CAIS is that I broadly agree with it, in that this does seem to be the most likely development path, meaning that it’s more likely than any other specific development path, but not necessarily more likely than all other development paths combined.

So I broadly agree with the worldview presented; I’m still trying to figure out what implications it has for technical safety research. I don’t agree with all of it; in particular, I think that you are likely to get AGI agents at some point, probably after the CAIS soup of services happens, which, I think, Drexler disagrees with me on. So, put a bunch of uncertainty on that, but I broadly agree with the worldview that CAIS is proposing.

Lucas: In terms of this disagreement between you and Eric Drexler, are you imagining agenty AGI or super-intelligence which comes after the CAIS soup? Do you see that as an inevitable byproduct of CAIS or do you see that as an inevitable choice that humanity will make? And is Eric pushing the view that the agenty stuff doesn’t necessarily come later, it’s a choice that human beings would have to make?

Rohin: I do think it’s more like saying that this will be a choice that humans will make at some point. I’m sure that Eric, to some extent, is saying, “Yeah, just don’t do that.” But I think Eric and I do, in fact, have a disagreement on how much more performance you can get from an AGI agent than from a CAIS soup of services. My argument is something like: there is efficiency to be gained from going to an AGI agent. And Eric’s position, as best I understand it, is that there is actually just not that much economic incentive to go to an AGI agent.

Lucas: What are your intuition pumps for why you think that you will gain a lot of computational efficiency from creating sort of an AGI agent? We don’t have to go super deep, but I guess a terse summary or something?

Rohin: Sure, I guess the main intuition pump is that, in all of the past cases of AI systems that we have, you see that in speech recognition, in deep reinforcement learning, in image classification, we had all of these hand-built systems that separated things out into a few different modules that interacted with each other in a vaguely CAIS-like way. And then, at some point, we got enough compute and large enough data sets that we just threw deep learning at it, and deep learning just blew those approaches out of the water.

So there’s the argument from empirical experience, and there’s also the argument that if you try to modularize your systems yourself, you can’t really optimize the communication between them; you’re less integrated and you can’t make decisions based on global information, you have to make them based on local information, and so the decisions tend to be a little bit worse. This could be taken as an explanation for the empirical observation that I already made; so that’s another intuition pump there.

Eric’s response would probably be something like, “Sure, this seems true for narrow tasks. You can get a lot of efficiency gains by integrating everything together and throwing deep learning and [inaudible 00:54:10] training at all of it. But for a sufficiently high-level task, there’s not really that much to be gained by using global information instead of local information, so you don’t actually lose much by having these separate systems, and you do get a lot of computational efficiency and generalization bonuses by modularizing.” He had a good example of this that I’m not going to replicate, and I don’t want to make up my own example, because it’s not going to be as convincing; but that’s his current argument.

And then my counter-argument is that’s because humans have small brains, so given the size of our brains and the limits of our data, and the limits of the compute that we have, we are forced to do modularity and systematization to break tasks apart into modular chunks that we can then do individually. Like if you are running a corporation, you need each person to specialize in their own task without thinking about all the other tasks, because we just do not have the ability to optimize for everything all together because we have small brains, relatively speaking; or limited brains, is what I should say.

But this is not a limit that AI systems will have. An AI system with vastly more compute than the human brain and vastly more data will, in fact, just be able to optimize all of this with global information and get better results. So that’s one thread of the argument, taken down two or three levels of arguments and counter-arguments. There are other threads of that debate as well.

Lucas: I think that that serves a purpose for illustrating that here. So are there any other approaches here that you’d like to cover, or is that it?

Rohin: I didn’t talk about factored cognition very much. But I think it’s worth highlighting separately from iterated amplification in that it’s testing an empirical hypothesis of can humans decompose tasks into chunks of some small amount of time? And can we do arbitrarily complex tasks using these humans? I am particularly excited about this sort of work that’s trying to figure out what humans are capable of doing and what supervision they can give to AI systems.

Mostly because, going back to a thing I said way back in the beginning, what we’re aiming for is a human-AI system that is collectively rational, as opposed to an AI system that is individually rational. Part of the human-AI system is the human: you want to know what the human can do, what sort of policies they can implement, what sort of feedback they can give to the AI system. And something like factored cognition is testing a particular aspect of that; I think that seems great and we need more of it.

Lucas: Right. I think this seems to be the emerging view of where social scientists are needed in AI alignment: in order to, again as you said, understand what human beings are capable of in terms of the supervision they can provide, and to analyze the human component of the AI alignment problem, since it requires us to be collectively rational with AI systems.

Rohin: Yeah, that seems right. I expect more writing on this in the future.

Lucas: All right, so there’s just a ton of approaches here to AI alignment, and our heroic listeners have a lot to take in here. In terms of getting more information, generally, about these approaches or if people are still interested in delving into all these different views that people take at the problem and methodologies of working on it, what would you suggest that interested persons look into or read into?

Rohin: I cannot give you an overview of everything, because that does not exist. To the extent that it exists, it’s either this podcast or the talk that I did at Beneficial AGI. I can suggest resources for individual items. So for embedded agency, there’s the Embedded Agency sequence on the Alignment Forum; far and away the best thing to read for that.

For CAIS, Comprehensive AI Services, there was a 200 plus page tech report published by Eric Drexler at the beginning of this month, if you’re interested, you should go read the entire thing; it is quite good. But I also wrote a summary of it on the Alignment Forum, which is much more readable, in the sense that it’s shorter. And then there are a lot of comments on there that analyze it a bit more.

There’s also another summary written by Richard Ngo, also on the Alignment Forum. Maybe it’s only on Lesswrong, I forget; it’s probably on the Alignment Forum. But that’s a different take on comprehensive AI services, so I’d recommend reading that too.

For limited AGI, I have not really been keeping up with the literature on boxing, so I don’t have a favorite to recommend. I know that a couple have been written by, I believe, Jim Babcock and Roman Yampolskiy.

For impact measures, you want to read Vika’s paper on relative reachability. There’s also a blog post about it if you don’t want to read the paper. And Alex Turner’s blog posts on attainable utility preservation, I think it’s called ‘Towards A New Impact Measure’, and this is on the Alignment Forum.

For robustness, I would read Paul Christiano’s post called ‘Techniques For Optimizing Worst Case Performance’. This is definitely specific to how robustness will help under Paul’s conception of the problem and, in particular, his thinking of robustness in the setting where you have a very strong overseer for your AI system. But I don’t know of any other papers or blog posts that talk about robustness generally.

For AI systems that do what we want, there’s my value learning sequence that I mentioned before on the Alignment Forum. There’s CIRL or Cooperative Inverse Reinforcement Learning which is a paper by Dylan and others. There’s Deep Reinforcement Learning From Human Preferences and Recursive Reward Modeling, these are both papers that are particular instances of work in this field. I also want to recommend Inverse Reward Design, because I really like that paper; so that’s also a paper by Dylan, and others.

For corrigibility and iterated amplification, the iterated amplification sequence on the Alignment Forum or half of what Paul Christiano has written. If you want to read not an entire sequence of blog posts, then I think Clarifying AI alignment is probably the post I would recommend. It’s one of the posts in the sequence and talks about this distinction of creating an AI system that is trying to do what you want, as opposed to actually doing what you want and why we might want to aim for only the first one.

For iterated amplification, itself, that technique, there is a paper that I believe is called something like Supervising Strong Learners By Amplifying Weak Experts, which is a good thing to read and there’s also corresponding OpenAI blog posts, whose name I forget. I think if you search iterated amplification, OpenAI blog you’ll find it.

And then for debate, there’s AI Safety via Debate, which is a paper; there’s also a corresponding OpenAI blog post. For factored cognition, there’s a post called Factored Cognition on the Alignment Forum; again, in the iterated amplification sequence.

For interpretability, there isn’t really anything talking about interpretability from the strategic point of view of why we want it. I guess that same post I recommended before, Techniques For Optimizing Worst Case Performance, talks about it a little bit. For actual interpretability techniques, I recommend the Distill articles The Building Blocks of Interpretability and Feature Visualization, but these are more about particular techniques for interpretability, as opposed to why we want interpretability.

And on ambitious value learning, the first chapter of my sequence on value learning talks exclusively about ambitious value learning; so that’s one thing I’d recommend. But also Stuart Armstrong has so many posts, I think there’s one that’s about resolving human values adequately and something else, something like that. That one might be one worth checking out, it’s very technical though; lots of math.

He’s also written a bunch of posts that convey the intuitions behind the ideas. They’re all split into a bunch of very short posts, so I can’t really recommend any one particular one. You could go to the alignment newsletter database and just search Stuart Armstrong, and click on all of those posts and read them. I think that was everything.

Lucas: That’s a wonderful list. So we’ll go ahead and link those all in the article which goes along with this podcast, so that’ll all be there, organized in nice, neat lists for people. This has all probably been fairly overwhelming in terms of the number of approaches, how they differ, and how one is to adjudicate the merits of all of them. If someone is just entering the space of AI alignment, or is beginning to be interested in these different technical approaches, do you have any recommendations?

Rohin: Reading a lot, rather than trying to do actual research. This was my strategy, I started back in September of 2017 and I think for the first six months or so, I was reading about 20 hours a week, in addition to doing research; which was why it was only 20 hours a week, it wasn’t a full time thing I was doing.

And I think that was very helpful for actually forming a picture of what everyone was doing. Now, it’s plausible that you don’t want to actually learn about what everyone is doing, and you’re okay with, “I’m fairly confident that this particular problem is an important piece of the problem and we need to solve it.” I think it’s very easy to get that wrong, so I’m a little wary of recommending that, but it’s a reasonable strategy to just say, “Okay, we probably will need to solve this problem, but even if we don’t, the intuitions that we get from trying to solve this problem will be useful.”

Focusing on that particular problem, reading all of the literature on that, attacking that problem, in particular, lets you start doing things faster, while still doing things that are probably going to be useful; so that’s another strategy that people could do. But I don’t think it’s very good for orienting yourself in the field of AI safety.

Lucas: So you think that there’s a high value in people taking this time to read, to understand all the papers and the approaches before trying to participate in particular research questions or methodologies. Given how open this question is, all the approaches make different assumptions and take for granted different axioms which all come together to create a wide variety of things which can both complement each other and have varying degrees of efficacy in the real world when AI systems start to become more developed and advanced.

Rohin: Yeah, that seems right to me. Part of the reason I’m recommending this is because it seems to me that almost no one does this. On the margin, I want more people doing this. A world where 20 percent of the people were doing this, and the other 80 percent were just taking particular pieces of the problem and working on those, might be the right balance; somewhere around there, I don’t know, it depends on how you count who is actually in the field. But right now somewhere between one and 10 percent of the people are doing this; closer to the one.

Lucas: Which is quite interesting, I think, given that it seems like AI alignment should be in a stage of maximum exploration, given that conceptually mapping the territory is still very young. I mean, we’re essentially seeing the birth and initial development of an entirely new field and a specific application of thinking, and there are many more mistakes to be made, concepts to be clarified, and layers to be built. So it seems like we should be maximizing our attention in exploring the general space, trying to develop models of the efficacy of different approaches and philosophies and views of AI alignment.

Rohin: Yeah, I agree with you, though that should not be surprising given that I am one of the people doing this, or trying to do this. Probably the better critique will come from people who are not doing this, and who can tell both of us why we’re wrong about this.

Lucas: We’ve covered a lot here in terms of the specific approaches, your thoughts on the approaches, where we can find resources on the approaches, why setting the approaches matters. Are there any parts of the approaches that you feel deserve more attention in terms of these different sections that we’ve covered?

Rohin: I think I would want more work looking at the intersections between things that are supposed to be complementary. How interpretability can help you build AI systems that have the right goals, for example, would be a cool thing to look at. Or what you need to do in order to get verification, which is a sub-part of robustness, to give you interesting guarantees about AI systems that we actually care about.

Most of the work on verification right now is like: there’s this nice specification that we have for adversarial examples in particular, namely, is there an input that is within some distance of a training data point such that it gets classified differently from that training data point? That’s a nice formal specification, and most of the work in verification takes this specification as given and figures out more and more computationally efficient ways to actually verify that property.
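
Written out, that adversarial-example specification looks like the following, together with a simple gradient-based search that tries to falsify it; real verification tools instead try to prove that no such input exists (for example via bound propagation or SMT solvers). Everything in this sketch is a placeholder.

```python
# Specification: there is no x' with ||x' - x||_inf <= epsilon such that the
# classifier's predicted class at x' differs from its prediction at x.
# Below: a crude falsifier that searches for a violating x'.

import torch
import torch.nn as nn

def find_counterexample(model, x, epsilon=0.1, steps=20, lr=0.02):
    """Search for a nearby input that changes the predicted class."""
    original_class = model(x.unsqueeze(0)).argmax(dim=1)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Maximize the loss w.r.t. the original class (minimize its negative).
        loss = -nn.functional.cross_entropy(model((x + delta).unsqueeze(0)), original_class)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
        if model((x + delta).unsqueeze(0)).argmax(dim=1) != original_class:
            return x + delta.detach()   # spec violated: a misclassified nearby input
    return None                          # no violation found (not a proof of safety)

# Hypothetical usage with a stand-in model:
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(4)
print(find_counterexample(model, x))
```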

That does seem like a thing that needs to happen, but the much more urgent thing, in my mind, is how do we come up with these specifications in the first place? If I want to verify that my AI system is corrigible, or I want to verify that it’s not going to do anything catastrophic, or that it is going to not disable my value learning system, or something like that; how do I specify this at all in any way that lets me do something like a verification technique even given infinite computing power? It’s not clear to me how you would do something like that, and I would love to see people do more research on that.

That particular thing is my current reason for not being very optimistic about verification, in particular, but I don’t think anyone has really given it a try. So it’s plausible that there’s actually just some approach that could work that we just haven’t found yet because no one’s really been trying. I think all of the work on limited AGI is talking about, okay, does this actually eliminate all of the catastrophic behavior? Which, yeah, that’s definitely an important thing, but I wish that people would also do research on, given that we put this penalty or this limit on the AGI system, what things is it still capable of doing?

Have we just made it impossible for it to do anything of interest whatsoever, or can it actually still do pretty powerful things, even though we’ve placed these limits on it? That’s the main thing I want to see. Then, for AI systems that do what we want, probably the biggest thing I want to see there, and I’ve been trying to do some of this myself, is conceptual thinking about how this leads to good outcomes in the long term. So far, we’ve not been dealing with the fact that the human doesn’t actually have a nice, consistent utility function that they know and that can be optimized. Once you relax that assumption, what the hell do you do? And then there’s also a bunch of other problems that would benefit from more conceptual clarification; maybe I don’t need to go into all of them right now.

Lucas: Yeah. And just to sort of inject something here that I think we haven’t touched on and that you might have some words about in terms of approaches. We discussed sort of agential views of advanced artificial intelligence, a services-based conception, though I don’t believe that we have talked about aligning AI systems that simply function as oracles or having a concert of oracles. You can get rid of the services thing, and the agency thing if the AI just tells you what is true, or answers your questions in a way that is value aligned.

Rohin: Yeah, I mostly want to punt on that question because I have not actually read all the papers. I might have read a grand total of one paper on oracles, plus Superintelligence, which talks about oracles. So I feel like I know so little about the state of the art on oracles that I should not actually say anything about them.

Lucas: Sure. So then just as a broad point for our audience: in terms of conceptualizing these different approaches to AI alignment, it’s crucial to consider the kind of AI system that you’re thinking about and the kinds of features and properties that it has, and oracles are another version here that one can play with in one’s AI alignment thinking.

Rohin: I think the canonical paper there is something like Good and Safe Uses of AI Oracles, but I have not actually read it. There is a list of things I want to read; it is on that list. But that list also has, I think, something like 300 papers on it, and apparently I have not gotten to oracles yet.

Lucas: And so for the sake of this whole podcast being as comprehensive as possible, are there any conceptions of AI, for example, that we have omitted so far adding on to this agential view, the CAIS view of it actually just being a lot of distributed services, or an oracle view?

Rohin: There’s also the tool AI view. This is different from the services view, but it’s somewhat akin to the view you were talking about at the beginning of this podcast, where you’ve got AI systems that have a narrowly defined input/output space, they’ve got a particular, limited thing that they do, and they just take in their inputs, do some computation, spit out their outputs, and that’s it; that’s all they do. You can’t really model them as having some long-term utility function that they’re optimizing; they’re just implementing a particular input-output relation, and that’s all they’re trying to do.

Even saying something like “they are trying to do X” is basically using a bad model for them. I think the main argument against expecting tool AI systems is that they’re probably not going to be as useful as services or agential AI, because tool AI systems would have to be programmed in a way where we understood what they were doing and why they were doing it, whereas agential AI systems or services would be able to consider new possible ways of achieving goals that we hadn’t thought about and enact those plans.

And so they could get superhuman behavior by considering things that we wouldn’t consider, whereas tool AIs … Like, Google Maps is superhuman in some sense, but it’s superhuman only because it has a compute advantage over us. If we were given all of the data and all of the time, in human real time, that Google Maps had, we could implement a similar sort of algorithm as Google Maps and compute the optimal route ourselves.

Lucas: There seems to be this duality that is constantly being formed in our conception of AI alignment, where the AI system is this tangible external object which stands in some relationship to the human and is trying to help the human to achieve certain things.

Are there conceptions of value alignment which, however the procedure or methodology is done, change or challenge the relationship between the AI system and the human, challenging what it means to be the AI or what it means to be human, where there’s potentially some sort of merging or disruption of this dualistic picture of the relationship?

Rohin: I don’t really know, I mean, it sounds like you’re talking about things like brain computer interfaces and stuff like that. I don’t really know of any intersection between AI safety research and that. I guess, this did remind me, too, that I want to make the point that all of this is about the relatively narrow, I claim, problem of aligning an AI system with a single human.

There is also the problem of, okay what if there are multiple humans, what if there are multiple AI systems, what if you’ve got a bunch of different groups of people and each group is value aligned within themselves, they build an AI that’s value aligned with them, but lots of different groups do this now what happens?

Solving the problem that I’ve been talking about does not mean that you have a good outcome in the long-term future; it is merely one piece of a larger overall picture. I don’t think any of that larger overall picture removes the dualistic thing that you were talking about, but the dualistic part reminded me of the fact that I am talking about a narrow problem and not the whole problem, in some sense.

Lucas: Right and so just to offer some conceptual clarification here, again, the first problem is how do I get an AI system to do what I want it to do when the world is just me and that AI system?

Rohin: Me and that AI system and the rest of humanity, but the rest of humanity is treated as part of the environment.

Lucas: Right, so you’re not modeling other AI systems or how some mutually incompatible preferences and trained systems would interact in the world or something like that?

Rohin: Exactly.

Lucas: So the full AI alignment problem is… It’s funny because it’s just the question of civilization, I guess. How do you get the whole world and all of the AI systems to make a beautiful world instead of a bad world?

Rohin: Yeah, I’m not sure if you saw my lightning talk at Beneficial AGI, but I talked a bit about this. I think I called that top-level problem “make AI-related future stuff go well”; very, very concrete, obviously.

Lucas: It makes sense. People know what you’re talking about.

Rohin: I probably wouldn’t call that broad problem the AI alignment problem. I kind of wonder what a different name for it would be. We could maybe call it the ‘AI Safety Problem’ or the ‘AI Future Problem’, I don’t know. The ‘Beneficial AI Problem’, actually, I think that’s what I used last time.

Lucas: That’s a nice way to put it. So I think that, conceptually, leaves us at a very good place for this first section.

Rohin: Yeah, seems pretty good to me.

Lucas: If you found this podcast interesting or useful, please make sure to check back for part two in a couple weeks where Rohin and I go into more detail about the strengths and weaknesses of specific approaches.

We’ll be back again soon with another episode in the AI Alignment podcast.

[end of recorded material]

FLI Podcast: Why Ban Lethal Autonomous Weapons?

Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of The Disarmament Program of the Women’s International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decisions to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced, hacked, and fall onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier for entering into war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of the low cost, these will be easy to mass-produce and stockpile, making AI weapons the newest form of Weapons of Mass Destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons are in violation of the Geneva Convention, the Martens Clause, the International Covenant on Civil and Political Rights, etc. Given the disagreements about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify what are acceptable uses of AI with respect to lethal decisions — especially for the military — and what aren’t.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist which most researchers anticipate will take quite a while to solve, including: how to program reasoning and judgement with respect to international humanitarian law, how to distinguish between civilians and combatants, how to understand and respond to complex and unanticipated situations on the battlefield, how to verify and validate lethal autonomous weapons, how to understand external political context in chaotic battlefield situations.
  • Once the weapons are released, contact with them may become difficult if people learn that there’s been a mistake.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases, putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for feeling concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section below.

Publications discussed in this episode include:

For more information, visit autonomousweapons.org.

AI Alignment Podcast: AI Alignment through Debate with Geoffrey Irving

“To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information…  In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment.” AI safety via debate

Debate is something that we are all familiar with. Usually it involves two or more people giving arguments and counterarguments over some question in order to prove a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and is a part of their scalability efforts (how to train/evolve systems to safely solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, we can see properties of debate and synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.

On today’s episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. 

We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:

  • What debate is and how it works
  • Experiments on debate in both machine learning and social science
  • Optimism and pessimism about debate
  • What amplification is and how it fits in
  • How Geoffrey took inspiration from amplification and AlphaGo
  • The importance of interpretability in debate
  • How debate works for normative questions
  • Why AI safety needs social scientists

You can find out more about Geoffrey Irving at his website. Here you can find the debate game mentioned in the podcast. Here you can find Geoffrey Irving, Paul Christiano, and Dario Amodei’s paper on debate. Here you can find an OpenAI blog post on AI Safety via Debate. You can listen to the podcast above or read the transcript below.

Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Geoffrey Irving about AI safety via Debate. We discuss how debate fits in with the general research directions of OpenAI, what amplification is and how it fits in, and the relation of all this with AI alignment. As always, if you find this podcast interesting or useful, please give it a like and share it with someone who might find it valuable.

Geoffrey Irving is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille. Without further ado, I give you Geoffrey Irving.

Thanks again, Geoffrey, for coming on the podcast. It’s really a pleasure to have you here.

Geoffrey: Thank you very much, Lucas.

Lucas: We’re here today to discuss your work on debate. I think that just to start off, it’d be interesting if you could provide for us a bit of framing for debate, and how debate exists at OpenAI, in the context of OpenAI’s general current research agenda and directions that OpenAI is moving right now.

Geoffrey: I think broadly, we’re trying to accomplish AI safety by reward learning, so learning a model of what humans want and then trying to optimize agents that achieve that model, so do well according to that model. There’s sort of three parts to learning what humans want. One part is just a bunch of machine learning mechanics of how to learn from small sample sizes, how to ask basic questions, how to deal with data quality. There’s a lot more work, then, on the human side, so how do humans respond to the questions we want to ask, and how do we sort of best ask the questions?

Then, there’s sort of a third category of how do you make these systems work even if the agents are very strong? So stronger than human in some or all areas. That’s sort of the scalability aspect. Debate is one of our techniques for doing scalability, with Amplification being the first one and Debate being a variant of that. Generally, we want to be able to supervise a learning agent even if it is smarter than a human or stronger than a human on some task or on many tasks.

In Debate, you train two agents to play a game. The game is that these two agents see a question on some subject, they give their answers. Each debater has their own answer, and then they have a debate about which answer is better, which means more true and more useful, and then a human sees that debate transcript and judges who wins based on who they think told the most useful, true thing. The result of the game is, one, who won the debate, and two, the answer of the debater who won.
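
To make the structure of the game concrete, here is a minimal sketch of the loop Geoffrey describes. The debaters and judge below are trivial stand-in functions, not OpenAI’s actual models; in the real scheme the debaters are trained agents and, during training, the judge is a human.

```python
# Minimal sketch of the debate game loop described above. The debaters and
# judge here are trivial stand-ins; in the real scheme they would be trained
# ML models and, during training, a human judge.

def run_debate(question, debater_a, debater_b, judge, n_rounds=3):
    """Play one debate. Each debater commits to an answer, the two sides
    alternate short statements, and the judge picks a winner. Returns
    (winner, winning_answer, transcript)."""
    answer_a = debater_a(question, [])          # opening answers
    answer_b = debater_b(question, [])
    transcript = [("A", answer_a), ("B", answer_b)]
    for _ in range(n_rounds):
        transcript.append(("A", debater_a(question, transcript)))
        transcript.append(("B", debater_b(question, transcript)))
    winner = judge(question, transcript)        # "A" or "B"
    winning_answer = answer_a if winner == "A" else answer_b
    return winner, winning_answer, transcript

# Toy usage with stand-in agents: this "judge" just prefers whichever side
# mentions evidence, which is obviously not how a human judge works.
debater_a = lambda q, t: "Yes, and here is the evidence for that."
debater_b = lambda q, t: "No, just trust me."
judge = lambda q, t: "A" if any("evidence" in s for _, s in t) else "B"
print(run_debate("Is the answer yes?", debater_a, debater_b, judge))
```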

You can also have variants where the judge interacts during the debate. We can get into these details. The general point is that, in many tasks, it is much easier to recognize good answers than it is to come up with the answers yourself. This applies at several levels.

For example, at the first level, you might have a task where a human can’t do the task, but they can know immediately if they see a good answer to the task. Like, I’m bad at gymnastics, but if I see someone do a flip very gracefully, then I can know, at least to some level of confidence, that they’ve done a good job. There are other tasks where you can’t directly recognize the answer, so you might see an answer, it looks plausible, say, “Oh, that looks like a great answer,” but there’s some hidden flaw. If an agent were to point out that flaw to you, you’d then think, “Oh, that’s actually a bad answer.” Maybe it was misleading, maybe it was just wrong. You need two agents doing a back and forth to be able to get at the truth.

Then, if you apply this recursively through several levels, you might have tasks where you can’t recognize whether an answer is good directly. You can’t even recognize whether a counterargument is good. Maybe a counter-counterargument, then you could recognize it. If you do sort of three steps of Debate, back and forth with two agents, you get to the truth.

Depending on the task, you need some number of these steps to pin down what the true answer is. Broadly, what’s happening is that the debaters, when they’re giving these counterarguments, they’re skipping over a lot of the space where maybe the human doesn’t understand what would happen, but they know that it would go a certain way. Like, if you see two expert players playing a game of Go, they’re constantly ignoring a large number of moves which are just very bad and would waste a lot of time if you try to analyze them, and they’re just looking at kind of the most important part of the arguments about what is the best move in the game.

So far, the experiments are on simple image tasks, like MNIST. MNIST is a dataset of digits, and then there are also some preliminary human experiments with just human debaters. On MNIST, an example is you have a sparse classifier that learns to recognize MNIST from just seeing a few pixels, so you might train it by picking random masks of just a few pixels of an MNIST image. This classifier is very bad, because it only sees a small part of the image. It might get accuracy below 50%. I forget exactly the numbers, I could look them up.

Then, you train debaters that choose which parts of the image to show to the judge. You fix the judge at this kind of weak training process, and then the debaters will play a game where they try to pick pixels that convince the judge of what they want to argue for. There’s kind of two ways to decide … In the first paper, we had one debater that was incentivized to tell the truth, so it knew the correct class. Like say, we’re looking at a seven, and the other debater had to pre-specify which digit it was going to lie for. It might say, “I’m going to lie and say this is an eight.”

The game would be each of these agents pick a single pixel, and you sort of iterate for, let’s say six rounds, picking one pixel after another, and then the judge sees those pixels, and chooses who wins the debate. The judge has seen kind of a small part of the evidence. The result of that is that you can boost a weak classifier up to sort of higher performance, even if the agents are incentivized only to win the game. In this case, if you pick one debater to be the honest player and one debater to be the liar, the honest player wins a significant majority of the time, much more than the classifier would win on random play.

Lucas: The judge here, sorry, was not a human?

Geoffrey: The judge here is not a human. It’s just a classifier trained to do a bad job at MNIST, because it sees only a little bit of information. It’s trained to convergence, but its input is just a few pixels of an image.

Lucas: Then, so the pixels that are then revealed by the debaters, each pixel is sort of the argument.

Geoffrey: Each pixel is the argument.
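
The sparse-classifier experiment can be sketched roughly as below. This is an illustrative reconstruction rather than the paper’s exact setup: it uses scikit-learn’s small bundled digits dataset as a stand-in for MNIST, a logistic-regression judge, greedy one-step debaters, and a judge that simply classifies the final revealed pixels instead of choosing between the two pre-committed digits.

```python
# Illustrative sketch of the sparse-pixel debate (not the paper's exact models
# or numbers). The judge is trained only on images with a few randomly
# revealed pixels; at debate time each debater reveals pixels that push the
# judge toward its claimed digit.
import numpy as np
from sklearn.datasets import load_digits             # small stand-in for MNIST
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)                   # 8x8 images, 64 pixels
n_pixels = X.shape[1]

def sparse_view(x, k=6):
    """Zero out all but k random pixels: the judge's weak training view."""
    keep = rng.choice(n_pixels, size=k, replace=False)
    view = np.zeros_like(x)
    view[keep] = x[keep]
    return view

# The "judge" is a weak classifier trained only on sparse views.
judge = LogisticRegression(max_iter=2000).fit(
    np.stack([sparse_view(x) for x in X]), y)

def debate(image, honest_digit, lie_digit, rounds=6):
    """Debaters alternate revealing pixels: the honest debater argues for the
    true digit, the liar for a pre-committed wrong digit. Both play greedily
    against the fixed judge. Returns True if honesty wins."""
    revealed = np.zeros(n_pixels, dtype=bool)
    for _ in range(rounds):
        for target in (honest_digit, lie_digit):
            best_pixel, best_score = None, -np.inf
            for p in np.flatnonzero(~revealed):        # greedy 1-step lookahead
                trial = np.where(revealed, image, 0.0)
                trial[p] = image[p]
                score = judge.predict_log_proba(trial.reshape(1, -1))[0][target]
                if score > best_score:
                    best_pixel, best_score = p, score
            revealed[best_pixel] = True
    final = np.where(revealed, image, 0.0)
    return judge.predict(final.reshape(1, -1))[0] == honest_digit

wins = sum(debate(X[i], y[i], (y[i] + 1) % 10) for i in range(20))
print(f"honest debater won {wins}/20 toy debates")
```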

Lucas: Just to provide a little bit more framing, there’s this feature of debate where you can apply it to a very large domain of things, which might be surprising if you expand the notion of what it means to debate to include showing pixels or something like this.

Geoffrey: It’s actually more important to debate in natural language. The end goal here is we want to extract a strengthened, kind of improved version of human performance at a task. The way we go about this, either in amplification or in debate, is we sort of factor through reasoning. Instead of trying to train directly on the task (you might have some questions and some answers, and you could train directly on question/answer pairs), we’re going to build a task which includes all possible human reasoning in the form of, say, in this case, debates, and then we’ll train the agents to do well in this space of reasoning, and then we’ll pick out the answers at the very end, once we’re satisfied that the reasoning all works out.

Because natural language is how humans talk about higher level concepts, especially abstract concepts and, say, subtle moral concepts, the most important domain here, in the human case, is natural language. What we’ve done so far, in all experiments for Debate, is in image space, because it’s easier. We’re trying now to move that work into natural language so that we can get more interesting settings.

Lucas: Right. In terms of natural language, do you just want to unpack a little bit about how that would be done at this point in natural language? It seems like our natural language technology is not at a point where I really see robust natural language debates.

Geoffrey: There’s sort of two ways to go. One way is human debates. You just replace the ML agents with human debaters and then a human judge, and you see whether the system works in kind of an all-human context. The other way is machine learning natural language is getting good enough to do interestingly well on simple question/answer datasets, and Debate is already interesting if you do a very small number of steps. In the general debate, you sort of imagine that you have this long transcript, dozens of statements long, with points and counterpoints and counterpoints, but if you already do just two steps, you might do question, answer, and then single counterargument. For some tasks, at least in theory, it already should be stronger than the baseline of just doing direct question/answer, because you have this ability to focus in on a counterargument that is important.

An example might be you see a question and an answer and then another debater just says, “Which part of the answer is problematic?” They might point to a word or to a small phrase, and say, “This is the point you should sort of focus in on.” If you learn how to self critique, then you can boost the performance by iterating once you know how to self critique.

The hope is that even if we can’t do general debates on the machine learning side just yet, we can do shallow debates, or some sort of simple first step in this direction, and then work up over time.
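
A shallow two-step debate of the kind described here could be wired up roughly as below. The prompts are made up and `lm` is a placeholder for whatever generative language model is available, so this is only a sketch of the data flow, not a working system.

```python
# Sketch of a shallow (two-step) natural-language debate: answer, a single
# counterargument pointing at the weakest part of the answer, then a verdict.
# `lm` is a placeholder for a real generative language model; the prompts are
# illustrative only.

def lm(prompt: str) -> str:
    """Stand-in language model; swap in a real model here."""
    return "[model output for: " + prompt.splitlines()[0][:60] + " ...]"

def shallow_debate(question: str) -> dict:
    answer = lm(f"Question: {question}\nGive a short, direct answer:")
    critique = lm(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Quote the single word or phrase in the answer that is most "
        "problematic and explain why:"
    )
    verdict = lm(
        f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
        "After the critique, is the answer still the most true and useful "
        "one? Reply yes or no:"
    )
    return {"answer": answer, "critique": critique, "verdict": verdict}

print(shallow_debate("Where should I go on vacation?"))
```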

Lucas: This just seems to be a very fundamental part of AI alignment where you’re just breaking things down into very simple problems and then trying to succeed in those simple cases.

Geoffrey: That’s right.

Lucas: Just provide a little bit more illustration of debate as a general concept, and what it means in the context of AI alignment. I mean, there are open questions here, obviously, about the efficacy of debate and how debate exists as a tool within the space of epistemological things that allow us to arrive at truth and, I guess, infer other people’s preferences. Sorry, again, in terms of reward learning, and AI alignment, and debate’s place in all of this, just contextualize, I guess, its role in AI alignment more broadly.

Geoffrey: It’s focusing, again, on the scalability aspect. One way to formulate that is we have this sort of notion of, either from a philosophy side, reflective equilibrium, or kind of from the AI alignment literature, coherent extrapolated volition, which is sort of what a human would do if we had thought very carefully for a very long time about a question, and sort of considered all the possible nuances, and counterarguments, and so on, and kind of reached the conclusion that is sort of free of inconsistencies.

Then, we’d like to take this kind of vague notion of, what happens when a human thinks for a very long time, and compress it into something we can use as an algorithm in a machine learning context. It’s also a definition. This vague notion of, let a human think for a very long time, that’s sort of a definition, but it’s kind of a strange one. A single human can’t think for a super long time. We don’t have access to that at all. You sort of need a definition that is more factored, where either a bunch of humans think for a long time, we sort of break up tasks, or you sort of consider only parts of the argument space at a time, or something.

You go from there to things that are both definitions of what it means to simulate thinking for long time and also algorithms. The first one of these is Amplification from Paul Christiano, and there you have some questions, and you can’t answer them directly, but you know how to break up a question into subquestions that are hopefully somewhat simpler, and then you sort of recursively answer those subquestions, possibly breaking them down further. You get this big tree of all possible questions that descend from your outer question. You just sort of imagine that you’re simulating over that whole tree, and you come up with an answer, and then that’s the final answer for your question.
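
As a rough sketch, with made-up functions standing in for the roles a human (or a model trained to imitate the human) would play, the amplification recursion looks something like this:

```python
# Rough sketch of amplification's recursive question decomposition. The
# decompose / answer_directly / recombine functions are stand-ins for the
# roles a human, or a model trained to imitate the human, would play.

def amplify(question, decompose, answer_directly, recombine, depth=3):
    """Answer a question by splitting it into simpler subquestions,
    recursively answering those, and recombining the subanswers."""
    direct = answer_directly(question)   # None means "too hard to answer directly"
    if direct is not None or depth == 0:
        return direct
    subquestions = decompose(question)   # e.g. a handful of simpler questions
    subanswers = [amplify(q, decompose, answer_directly, recombine, depth - 1)
                  for q in subquestions]
    return recombine(question, subquestions, subanswers)

# Toy usage: "questions" are nested sums; only bare numbers can be answered
# directly, and recombination just adds the subanswers back up.
decompose = lambda q: q[1]                       # children of a ("sum", [...]) node
answer_directly = lambda q: q if isinstance(q, int) else None
recombine = lambda q, subqs, subans: sum(subans)
print(amplify(("sum", [2, ("sum", [3, 4]), 5]),
              decompose, answer_directly, recombine))   # prints 14
```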

Similarly, Debate is a variant of that, in the sense that you have this kind of tree of all possible arguments, and you’re going to try to simulate somehow what would happen if you considered all possible arguments, and picked out the most important ones, and summarized that into an answer for your question.

The broad goal here is to give a practical definition of what it means for people to take human input and push it to its conclusion, and then hopefully, we have a definition that also works as an algorithm, where we can do practical ML training, to train machine learning models.

Lucas: Right, so there’s, I guess, two thoughts that I sort of have here. The first one is that there is just sort of this fundamental question of what is AI alignment? It seems like in your writing, and in the writing of others at OpenAI, it’s to get AI to do what we want them to do. What we want them to do is … either it’s what we want them to do right now, or what we would want to do under reflective equilibrium, or at least we want to sort of get to reflective equilibrium. As you said, it seems like a way of doing that is compressing human thinking, or doing it much faster somehow.

Geoffrey: One way to say it is we want to do what humans would want if they understood all of the consequences. It’s some kind of “do what humans want,” plus a side condition of “imagine we knew everything we needed to know to evaluate the question.”

Lucas: How does Debate scale to that level of compressing-

Geoffrey: One thing we should say is that everything here is sort of a limiting state or a goal, but not something we’re going to reach. It’s more important that we have closure under the relevant things we might not have thought about. Here are some practical examples from kind of nearer-term misalignment. There’s an experiment in social science where they sent out a bunch of resumes in response to classified job ads, and the resumes were paired off into pairs that were identical except that the name of the person was either white sounding or black sounding, and the result was that you got significantly higher callback rates if the person sounded white, even if they had an entirely identical resume to the person sounding black.

Here’s a situation where direct human judgment is bad in the way that we could clearly know. You could imagine trying to push that into the task by having an agent say, “Okay, here is a resume. We’d like you to judge it.” Either pointing explicitly to what they should judge, or pointing out, “You might be biased here. Try to ignore the name of the resume, and focus on this issue, like say their education or their experience.” You sort of hope that if you have a mechanism for surfacing concerns or surfacing counterarguments, you can get to a stronger version of human decision making. There’s no need to wait for some long term very strong agent case for this to be relevant, because we’re already pretty bad at making decisions in simple ways.

Then, broadly, I sort of have this sense that there’s not going to be magic in decision making. If I go to some very smart person, and they have a better idea for how to make a decision, or how to answer a question, I expect there to be some way they could explain their reasoning to me. I don’t expect I just have to take them on faith. We want to build methods that surface the reasons they might have to come to a conclusion.

Now, it may be very difficult for them to explain the process for how they came to those arguments. There’s some question about whether the arguments they’re going to make is the same as the reasons they’re giving the answers. Maybe they’re sort of rationalizing and so on. You’d hope that once you sort of surface all the arguments around the question that could be relevant, you get a better answer than if you just ask people directly.

Lucas: As we move out of debate in simple cases of image classifiers or experiments in similar environments, what does debate look like … I don’t really understand the ways in which the algorithms can be trained to elucidate all of these counterconcerns, and all of these different arguments, in order to help human beings arrive at the truth.

Geoffrey: One case we’re considering, especially on kind of the human experiment side, or doing debates with humans, is some sort of domain expert debate. The two debaters are maybe an expert in some field, and they have a bunch of knowledge, which is not accessible to the judge, which is maybe a reasonably competent human, but doesn’t know the details of some domain. For example, we did a debate where there were two people that knew computer science and quantum computing debating a question about quantum computing to a person who has some background, but nothing in that field.

The idea is you start out, there’s a question. Here, the question was, “Is the complexity class BQP equal to NP, or does it contain NP?” One point is that you don’t have to know what those terms mean for that to be a question you might want to answer, say in the course of some other goal. The first steps, things the debaters might say, is they might give short, intuitive definitions for these concepts and make their claims about what the answer is. You might say, “NP is the class of problems where we can verify solutions once we’ve found them, and BQP is the class of problems that can be solved efficiently on a quantum computer.”

Now, you could have a debater that just straight up lies right away and says, “Well, actually NP is the class of things that can run on fast randomized computers.” That’s just wrong, and so what would happen then is that the counter debater would just immediately point to Wikipedia and say, “Well, that isn’t the definition of this class.” The judge can look that up, they can read the definition, and realize that one of the debaters has lied, and the debate is over.

You can’t immediately lie in kind of a simple way or you’ll be caught out too fast and lose the game. You have to sort of tell the truth, except maybe you kind of slightly veer towards lying. This is if you want to lie in your argument. At every step, if you’re an honest debater, you can try to pin the liar down to making sort of concrete statements. In this case, if say someone claims that quantum computers can solve all of NP, you might say, “Well, you must point me to an algorithm that does that.” The debater that’s trying to lie and say that quantum computers can solve all of NP might say, “Well, I don’t know what the algorithm is, but meh, maybe there’s an algorithm,” and then they’re probably going to lose, then.

Maybe they have to point to a specific algorithm. There is no algorithm, so they have to make one up. That will be a lie, but maybe it’s kind of a subtle complicated lie. Then, you could kind of dig into the details of that, and maybe you can reduce the fact that that algorithm is a lie to some kind of simple algebra, which either the human can check, maybe they can ask Mathematica or something. The idea is you take a complicated question that’s maybe very broad and covers a lot of the knowledge that the judge doesn’t know and you try to focus in closer and closer on details of arguments that the judge can check.

What the judge needs to be able to do is kind of follow along in the steps until they reach the end, and then there’s some ground fact that they can just look up or check and see who wins.

Lucas: I see. Yeah, that’s interesting. A brief passing thought is thinking about double cruxes and some tools and methods that CFAR employs, like how they might be interesting or used in debate. I think I also want to provide some more clarification here. Beyond debate being a truth-seeking process or a method by which we’re able to see which agent is being truthful, or which agent is lying, and again, there’s sort of this claim that you have in your paper that seems central to this, where you say, “In the debate game, it is harder to lie than to refute a lie.” This asymmetry in debate between the liar and the truth-seeker should hopefully, in general, bias towards people more easily seeing who is telling the truth.

Geoffrey: Yep.

Lucas: In terms of AI alignment again, in the examples that you’ve provided, it seems to help human beings arrive at truth for complex questions that are above their current level of understanding. How does this, again, relate directly to reward learning or value learning?

Geoffrey: Let’s assume that in this debate game, it is the case that it’s very hard to lie, so the winning move is to say the truth. What we want to do then is train kind of two systems. One system will be able to reproduce human judgment. That system would be able to look at the debate transcript and predict what the human would say is the correct winner of the debate. Once you get that system trained, you’re learning not a direct reward, but again, some notion of predicting how humans deal with reasoning. Once you learn that bit, then you can train an agent to play this game.

Then, we have a zero sum game, and then we can sort of apply any technique used to play a zero sum game, like Monte Carlo tree search in AlphaGo, or just straight up RL algorithms, as in some of OpenAI’s work. The hope is that you can train an agent to play this game very well, and therefore, it will be able to predict where counterarguments exist that would help it win debates, and therefore, if it plays the game well, and the best way to play the game is to tell the truth, then you end up with a value aligned system. Those are large assumptions. You should be cautious about whether those are true.
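
The two training stages sketched here can be written out schematically as below. Everything is a placeholder (stub models, no real gradient updates), and play_debate is assumed to behave like the run_debate sketch earlier, returning a winner, the winning answer, and a transcript.

```python
# Schematic of the two-stage training described above, with stub components
# that only show the data flow (no real models or learning).

class StubModel:
    """Stand-in for a trained model: accepts updates and does nothing."""
    def update(self, *args, **kwargs): pass        # supervised learning step
    def reinforce(self, *args, **kwargs): pass     # RL / self-play step
    def __call__(self, question, transcript):      # fixed output, whether used
        return "A"                                 # as a verdict or a move

def train_judge(judge, human_labeled_games, n_epochs=10):
    """Stage 1: fit the judge to predict which side the human said won."""
    for _ in range(n_epochs):
        for question, transcript, human_winner in human_labeled_games:
            judge.update(question, transcript, target=human_winner)
    return judge

def self_play_step(question, debater, judge, play_debate):
    """Stage 2: zero-sum self-play against the learned judge. One policy
    plays both sides; the side the judge picks gets +1, the other -1."""
    winner, _, transcript = play_debate(question, debater, debater, judge)
    reward = +1 if winner == "A" else -1
    debater.reinforce(transcript, side="A", reward=reward)
    debater.reinforce(transcript, side="B", reward=-reward)
    return debater
```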

Lucas: There’s also all these issues that we can get into about biases that humans have, and issues with debate. Whether or not you’re just going to be optimizing the agents for exploiting human biases and convincing humans. Definitely seems like, even just looking at how human beings value align to each other, debate is one thing in a large toolbox of things, and in AI alignment, it seems like potentially Debate will also be a thing in a large toolbox of things that we use. I’m not sure what your thoughts are about that.

Geoffrey: I could give them. I would say that there’s two ways of approaching AI safety and AI alignment. One way is to try to propose, say, methods that do a reasonably good job at solving a specific problem. For example, you might tackle reversibility, which means don’t take actions that can’t be undone, unless you need to. You could try to pick that problem out and solve it, and then imagine how we’re going to fit this together into a whole picture later.

The other way to do it is try to propose algorithms which have at least some potential to solve the whole problem. Usually, they won’t, and then you should use them as a frame to try to think about how different pieces might be necessary to add on.

For example, in debate, the biggest concern there is that it might be the case that you train a debate agent that gets very good at this task, the task is rich enough that it just learns a whole bunch of things about the world, and about how to think about the world, and maybe it ends up having separate goals, or it’s certainly not clearly aligned, because the goal is to win the game. Maybe winning the game is not exactly aligned.

You’d like to know sort of not only what it’s saying, but why it’s saying things. You could imagine sort of adding interpretability techniques to this, which would say, maybe Alice and Bob are debating. Alice says something and Bob says, “Well, Alice only said that because Alice is thinking some malicious fact.” If we add solid interpretability techniques, we could point into Alice’s thoughts at that fact, and pull it out, and surface that. Then, you could imagine sort of a strengthened version of debate where you could not only argue about object level things, like using language, but about thoughts of the other agent, and talking about motivation.

It is a goal here in formulating something like debate or amplification, to propose a complete algorithm that would solve the whole problem. Often, not to get to that point, but we have now a frame where we can think about the whole picture in the context of this algorithm, and then fix it as required going forwards.

I think, in the end, I do view debate, if it succeeds, as potentially the top level frame, which doesn’t mean it’s the most important thing. It’s not a question of importance. More of just what is the underlying ground task that we want to solve? If we’re training agents to either play video games or do question/answers, here the proposal is train agents to engage in these debates and then figure out what parts of AI safety and AI alignment that doesn’t solve and add those on in that frame.

Lucas: You’re trying to achieve human level judgment, ultimately, through a judge?

Geoffrey: The assumption in this debate game is that it’s easier to be a judge than a debater. If it is the case, though, that you need the judge to get to human level before you can train a debater, then you have a problematic bootstrapping issue where, first you must solve value alignment for training the judge. Only then do you have value alignment for training the debater. This is one of the concerns I have. I think the concern sort of applies to some of other scalability techniques. I would say this is sort of unresolved. The hope would be that it’s not actually sort of human level difficult to be a judge on a lot of tasks. It’s sort of easier to check consistency of, say, one debate statement to the next, than it is to do long, reasoning processes. There’s a concern there, which I think is pretty important, and I think we don’t quite know how it plays out.

Lucas: The view is that we can assume, or take the human being to be the thing that is already value aligned, and the process by which … and it’s important, I think, to highlight the second part that you say. You say that you’re pointing out considerations, or whichever debater is saying that which is most true and useful. The useful part, I think, shouldn’t be glossed over, because you’re not just optimizing debaters to arrive at true statements. The useful part smuggles in a lot of issues with normative things in ethics and metaethics.

Geoffrey: Let’s talk about the useful part.

Lucas: Sure.

Geoffrey: Say we just ask the question of debaters, “What should we do? What’s the next step that I, as an individual person, or my company, or the whole world should take in order to optimize total utility?” The notion of useful, then, is just what is the right action to take? Then, you would expect a debate that is good to have to get into the details of why actions are good, and so that debate would be about ethics, and metaethics, and strategy, and so on. It would pull in all of that content and sort of have to discuss it.

There’s a large sea of content you have to pull in. It’s roughly kind of all of human knowledge.

Lucas: Right, right, but isn’t there this gap between training agents to say what is good and useful and for agents to do what is good and useful, or true and useful?

Geoffrey: The way in which there’s a gap is this interpretability concern. You’re getting at a different gap, which I think is actually not there. I like giving game analogies, so let me give a Go analogy. You could imagine that there’s two goals in playing the game of Go. One goal is to find the best moves. This is a collaborative process where all of humanity, all of sort of Go humanity, say, collaborates to learn, and explore, and work together to find the best moves in Go, defined by, what are the moves that most win this game? That’s a non-zero sum game, where we’re sort of all working together. Two people competing on opposite sides of the Go board are working together to get at what the best moves are, but within a game, it’s a zero sum game.

You sit down, and you have two players, two people playing a game of Go, one of them’s going to win, zero sum. The fact that that game is zero sum doesn’t mean that we’re not learning some broad thing about the world, if you’ll zoom out a bit and look at the whole process.

We’re training agents to win this debate game to give the best arguments, but the thing we want to zoom out and get is the best answers. The best answers that are consistent with all the reasoning that we can bring into this task. There’s huge questions to be answered about whether the system actually works. I think there’s an intuitive notion of, say, reflective equilibrium, or coherent extrapolated volition, and whether debate achieves that is a complicated question that’s empirical, and theoretical, and we have to deal with, but I don’t think there’s quite the gap you’re getting at, but I may not have quite voiced your thoughts correctly.

Lucas: It would be helpful if you could unpack how the alignment that is gained through this process is transferred to new contexts. If I take an agent trained to win the Debate game outside of that context.

Geoffrey: You don’t. We don’t take it out of the context.

Lucas: Okay, so maybe that’s why I’m getting confused.

Geoffrey: Ah. I see. Okay, this [inaudible 00:26:09]. We train agents to play this debate game. To use them, we also have them play the debate game. At training time, we give them kind of a rich space of questions to think about, or concerns to answer, like a lot of discussion. Then, we want to go and answer a question in the world about what we should do, what the answer to some scientific question is, is this theorem true, or this conjecture true? We state that as a question, and we have them debate, and then whoever wins gave the right answer.

There’s a couple of important things you can add to that. I’ll give like three levels of kind of more detail you can go. One thing is the agents are trained to look at state in the debate game, which could be I’ve just given the question, or there’s a question and there’s a partial transcript, and they’re trained to say the next thing, to make the next move in the game. The first thing you can do is you have a question that you want to answer, say, what should the world do, or what should I do as a person? You just say, “Well, what’s the first move you’d make?” The first move they’d make is to give an answer, and then you just stop there, and you’re done, and you just trust that answer is correct. That’s not the strongest thing you could do.

The next thing you can do is you’ve trained this model of a judge that knows how to predict human judgment. You could have them, from the start of this game, play a whole bunch of games, play 1,000 games of debate, and from that learn with more accuracy what the answer might be. Similar to how you’d, say if you’re playing a game of Go, if you want to know the best move, you would say, “Well, let’s play 1,000 games of Go from this state. We’ll get more evidence and we’ll know what the best move is.”

The most interesting thing you can do, though, is you yourself can act as a judge in this game to sort of learn more about what the relevant issues are. Say there’s a question that you care a lot about. Hopefully, “What should the world do,” is a question you care a lot about. You want to not only see what the answer is, but why. You could act as a judge in this game, and you could, say, play a few debates, or explore part of this debate tree, the tree of all possible debates, and you could do the judgment yourself. There, the end answer will still be who you believe is the right answer, but the task of getting to that answer is still playing this game.

The bottom line here is, at test time, we are also going to debate.
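
In code-sketch form, the first two options look roughly like this. All components are placeholders: play_debate is assumed to return a winner and the winning answer, and debater.first_move is a hypothetical method for sampling just the opening answer.

```python
# Sketch of the cheaper test-time options just described; the models and the
# play_debate function are placeholders with assumed interfaces.
from collections import Counter

def cheap_answer(question, debater):
    """Option 1: trust the debater's opening move (its answer) and stop."""
    return debater.first_move(question)            # hypothetical method

def self_play_answer(question, debater, learned_judge, play_debate, n=1000):
    """Option 2: simulate many debates judged by the learned judge and return
    the answer that wins most often (more evidence, more compute)."""
    outcomes = Counter(
        play_debate(question, debater, debater, learned_judge)[1]  # winning answer
        for _ in range(n))
    return outcomes.most_common(1)[0][0]
```

The third option, with a human judging a few debates themselves, does not reduce to a snippet, since the verdict is whatever the human comes to believe after exploring part of the debate tree.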

Lucas: Yeah, right. Human beings are going to be participating in this debate process, but does or does not debate translate into systems which are autonomously deciding what we ought to do, given that we assume that their models of human judgment on debate are at human level or above?

Geoffrey: Yeah, so if you turn off the human in the loop part, then you get an autonomous agent. If the question is, “What should the next action be in, say, an environment?” And you don’t have humans in the loop at test time, then you can get an autonomous agent. You just sort of repeatedly simulate debating the question of what to do next. Again, you can cut this process short. Because the agents are trained to predict moves in debate, you can stop them after they’ve predicted the first move, which is what the answer is, and then just take that answer directly.

If you wanted the maximally efficient autonomous agent, that’s the case you would do. At OpenAI, my view, our goal is I don’t want to take AGI and immediately deploy it in the most fast twitch tasks. Something like self-driving a car. If we get to human level intelligence, I’m not going to just replace all the self-driving cars with AGI and let them do their thing. We want to use this for the paths where we need very strong capabilities. Ideally, those tasks are slower and more deliberative, so we can afford to, say, take a minute to interact with the system, or take a minute to have the system engage in its own internal debates to get more confidence in these answers.

The model here is basically the Oracle AI model, rather than the autonomous agent operating in an MDP model.

Lucas: I think that this is a very important part to unpack a bit more. This distinction here that it’s more like an oracle and less like an autonomous agent going around optimizing everything. What does a world look like right before, during, after AGI given debate?

Geoffrey: The way I think about this is that an oracle here is a question/answer system of some complexity. You ask it questions, possibly with a bunch of context attached, and it gives you answers. You can reduce pretty much anything to an oracle, if the oracle is sort of general enough. If your goal is to take actions in an environment, you can ask the oracle, “What’s the best action to take in the next step?” and just iteratively ask that oracle over and over again as you take the steps.

Lucas: Or you could generate the debate, right? Over the future steps?

Geoffrey: The most direct way to do an MDP with Debate is to engage in a debate at every step: restart the debate process, showing all the history that’s happened so far, and say the question at hand that we’re debating is, what’s the best action to take next? I think I’m relatively optimistic that when we make AGI, for a while after we make it, we will be using it in ways that aren’t extremely fine-grained MDP-like, in the sense of we’re going to take a million actions in a row, and they’re all actions that hit the environment.
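
A sketch of that direct reduction might look like the following; the environment interface and the debate_answer function are assumed placeholders, not any real API.

```python
# Sketch of the direct oracle-style reduction: run one full debate per step
# on the question "what is the best next action?". `env` and `debate_answer`
# are placeholders with assumed interfaces.

def act_via_debate(env, debate_answer, max_steps=100):
    """At each step, restart the debate process with the full history and
    take whatever action the winning side proposed."""
    observation = env.reset()
    history = []
    for _ in range(max_steps):
        question = (f"History so far: {history}\n"
                    f"Current observation: {observation}\n"
                    "What is the best action to take next?")
        action = debate_answer(question)        # one full debate per step
        observation, done = env.step(action)    # assumed (obs, done) interface
        history.append((action, observation))
        if done:
            break
    return history
```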

We’d mainly use this full direct reduction. There’s more practical reductions for other questions. I’ll give an example. Say you want to write the best book on, say, metaethics, and you’d like debaters to produce this book. Let’s say that debaters are optimal agents, so they know how to do debates on any subject. Even if the book is 1,000 pages long, or say it’s a couple hundred pages long, that’s a more reasonable book, you could do it in a single debate as follows. Ask the agents to write the book. Each agent writes its own book, say, and you ask them to debate which book is better, and that debate only needs to point at small parts of the book.

One of the debaters writes a 300 page book and buried in the middle of it is a subtle argument, which is malicious and wrong. The other debater need only point directly at the small part of the book that’s problematic and say, “Well, this book is terrible because of the following malicious argument, and my book is clearly better.” The way this works is, if you are able to point to problematic parts of books in a debate, and therefore win, the best first move in the debate is to write the best book, so you can do it in one step, where you produce this large object with a single debate, or a single debate game.

The reason I mention this is that it’s a little better in terms of practicality than debating the book word by word. If the book is, like, 100,000 words, you wouldn’t want to have a debate about each word, one after another. That’s sort of a silly, very expensive process.

Lucas: Right, so just to back up here, and to provide a little bit more framing, there’s this beginning at which we can see we’re just at a very low level trying to optimize agents for debate, and there’s going to be an asymmetry here that we predict, that it should, in general, usually be easier to tell who’s telling the truth over who’s not, because it’s easier to tell the truth than to lie, and lie in convincing ways. Scaling from there, it seems that what we ultimately really want is to then be able to train a judge, right?

Geoffrey: The goal is to train … You need both.

Lucas: Right. You need both to scale up together.

Geoffrey: Yep.

Lucas: Through doing so, we will have oracles that will be able to assist in decision making, because what we’re doing is amplifying human explicit deliberative processes. Again, this tying into coherent extrapolated volition and reflective equilibrium. What we want is to be able to amplify human decision making, which I guess is, in a broad sense, value alignment.

Geoffrey: Right.

Lucas: Are there any other technical motivations behind debate that you would like to unpack for why within the set of all these different AI alignment methodologies, why debate is so important?

Geoffrey: Sure. I think maybe the right way to talk about that is to contrast amplification and debate. I got to Debate by thinking a bunch about Amplification, talking a bunch with Paul Christiano. To summarize it for listeners again, Amplification is take questions, decompose them into subquestions that are hopefully easier to answer, repeat that decomposition until we get to things we can check, and use that whole process to answer complicated questions.

In stock amplification, the human is doing the task of decomposing the questions. The human decomposition has to be complete. For example, you might have a question, going back to this resume example: we’re having this debate about whether someone whose name sounds white or black is a good candidate. For whatever reason, no one here wants to be racist, but they’re all just completely oblivious to the idea that they could be racist. They’re going through and discussing whether the candidate is suitable, they’re kind of breaking these questions down, and just subtly at every point, racism is kind of inching its way into the answers, and they just never thought to kind of investigate.

If at some point they had asked, “Well, am I deciding this because the person has a white sounding name?” They would have realized their mistake, but they never think of that. In debate, you are relying on the agents to be able to do the decomposition, so the agents can, even if it’s hard for a human to do it, point out, “Maybe you’re being racist here,” and sort of get the correction. The advantage of debate there is you get some help doing this combination of tasks for you.

The other thing that happened, frankly, which is one of the reasons I thought of debate, was AlphaGo. In thinking about amplification, I’ve been sort of concerned. “Is this process going to be scalable? Are we going to lose a bunch of efficiency in doing this complicated decomposition process?” I was sort of concerned that we would lose a bunch of efficiency and therefore be not competitive with unsafe techniques to getting to AGI.

Then, AlphaGo came out, and AlphaGo got very strong performance, and it did it by doing an explicit tree search. As part of AlphaGo, it’s doing this kind of deliberative process, and that was not only important for performance at test time, but was very important for getting the training to work. What happens is, in AlphaGo, at training time, it’s doing a bunch of tree search through the game of Go in order to improve the training signal, and then it’s training on that improved signal. That was one thing kind of sitting in the back of my mind.

I was kind of thinking through, then, the following way of thinking about alignment. At the beginning, we’re just training on direct answers. We have these questions we want to answer, an agent answers the questions, and we judge whether the answers are good. You sort of need some extra piece there, because maybe it’s hard to understand the answers. Then, you imagine training an explanation module that tries to explain the answers in a way that humans can understand. Then, those explanations might be kind of hard to understand, too, so maybe you need an explanation explanation module.

For a long time, it felt like that was just sort of ridiculous epicycles, adding more and more complexity. There was no clear end to that process, and it felt like it was going to be very inefficient. When AlphaGo came out, that kind of snapped into focus, and it was like, “Oh. If I train the explanation module to find flaws, and I train the explanation explanation module to find flaws in flaws, then that becomes a zero-sum game. If it turns out that ML is very good at solving zero-sum games, and zero-sum games are a powerful route to strong performance, then we should take advantage of this in safety.” Poof: this answer, explanation, explanation-of-explanation route gives you the zero-sum game of Debate.

That’s roughly sort of how I got there. It was a combination of thinking about Amplification and this kick from AlphaGo, that zero-sum games and search are powerful.

Lucas: In terms of the relationship between debate and amplification, can you provide a bit more clarification on the differences, fundamentally, between the process of debate and amplification? In terms of amplification, there’s a decomposition process, breaking problems down into subproblems, eventually trying to get the broken down problems into human level problems. The problem has essentially multiplied itself many times over at this point, right? It seems like there’s going to be a lot of questions for human beings to answer. I don’t know how interrelated debate is to this decompositional argumentative process.

Geoffrey: They’re very similar. Both Amplification and Debate operate on some large tree. In amplification, it’s the tree of all decomposed questions. Let’s be concrete and say the top level question in amplification is, “What should we do?” In debate, again, the question at the top level is, “What should we do?” In amplification, we take this question. It’s a very broad open-ended question, and we kind of break it down more and more and more. You sort of imagine this expanded tree coming out from that question. Humans are constructing this tree, but of course, the tree is exponentially large, so we can only ever talk about a small part of it. Our hope is that the agents learn to generalize across the tree, so they’re learning the whole structure of the tree, even given finite data.

In the debate case, similarly, you have a top level question of, “What should we do,” or some other question, and you have the tree of all possible debates. Imagine every move in this game is, say, saying a sentence, and at every point, you have maybe an exponentially large number of sentences, so the branching factor, now in the tree, is very large. The goal in debate is to kind of see this whole tree.

Now, here is the correspondence. In amplification, the human does the decomposition, but I could instead have another agent do the decomposition. I could say I have a question, and instead of a human saying, “Well, this question breaks down into subquestions X, Y, and Z,” I could have a debater saying, “The subquestion that is most likely to falsify this answer is Y.” It could’ve picked at any other question, but it picked Y. You could imagine that if you replace a human doing the decomposition with another agent in debate pointing at the flaws in the arguments, debate would kind of pick out a path through this tree. A single debate transcript, in some sense, corresponds to a single path through the tree of amplification.

Lucas: Does the single path through the tree of amplification elucidate the truth?

Geoffrey: Yes. The reason it does is it’s not an arbitrarily chosen path. We’re sort of choosing the path that is the most problematic for the arguments.

Lucas: In this exponential tree search, there’s heuristics and things which are being applied in general to the tree search in order to collapse onto this one branch or series?

Geoffrey: Let’s say, in amplification, we have a question. Our decomposition is, “Well, this decomposes into X, Y, and Z,” and then we recursively call the agent, and it says, “The answers are AX, AY, AZ, for these questions.” Now, if I trusted those subanswers, I could do the reconstruction of the answer to the original question. If I don’t trust the subanswers, I might say, “Well, which subanswer is most likely to be false?” The correspondence with debate is the other debating agent would point to which one of the subanswers is probably false, and then you recurse down just that subtree.

Here’s the way it works. You can trust amplification if all of the subanswers are correct. If at least one of them is false, a strong debater can find which of those subanswers is false and recurse down to that. That’s why, if it turns out that a single path chosen by strong debaters never finds a false subanswer, then you will have established that the whole tree is true.
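
This correspondence can be sketched as a recursion in which the opposing debater always challenges the subanswer it claims is false, so a single debate traces one root-to-leaf path through the amplification tree. All of the functions below are stand-ins.

```python
# Sketch of a debate as a single path through the amplification tree: at each
# level the opposing debater challenges the subanswer it claims is false, and
# the recursion descends only into that branch. All components are stand-ins.

def survives_debate(question, claimed_answer, decompose_with_answers,
                    pick_suspect_index, check_at_ground_level, depth=5):
    """Return True if the claimed answer survives `depth` rounds of having its
    most suspicious supporting subanswer challenged and checked."""
    ground_truth = check_at_ground_level(question)    # None if not yet checkable
    if ground_truth is not None:
        return ground_truth == claimed_answer
    if depth == 0:
        return False                                  # ran out of rounds unresolved
    subquestions, subanswers = decompose_with_answers(question, claimed_answer)
    i = pick_suspect_index(subquestions, subanswers)  # opposing debater's move
    return survives_debate(subquestions[i], subanswers[i],
                           decompose_with_answers, pick_suspect_index,
                           check_at_ground_level, depth - 1)
```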

Lucas: I see. I think this picture of, “What should we do,” at the top, and then the trees would fall from that in terms of one version of amplification and one version of debate is pretty helpful here.

Geoffrey: The most important thing here is that in Amplification, the branching factor is small. I’m a human, I see a question, I sort of decompose that into a relatively small number of questions. Say, two to 10. In debate, the branching factor is very large. A debate agent could make any statement as the next countermove. What this means is that if it works for strong play, because the branching factor is so large, a shallow debate tree is more powerful than a shallow amplification tree, because you can cover more terrain with a couple steps of debate that are a proxy for a large tree.

The problem with that statement, maybe the biggest advantage of amplification, is you have to really be assuming that the agents are strong enough to cover the tree. Say I’m debating whether there’s a black swan, and there in fact is a black swan, but there’s only one of them in the entire world, and the correct countermove would be pointing at that black swan, finding it out of the entire world. A debater who wanted to say there are no black swans could just claim, “There are no black swans. Trust me. The other agent would be able to point to one if it existed.” Maybe it’s the case that a black swan does exist, but the other agent is just too weak to point at the black swan, and so that debate doesn’t work.

This argument that shallow debates are powerful leans a whole lot on debaters being very strong, and debaters in practice will not be infinitely strong, so there’s a bunch of subtlety there that we’re going to have to wrestle.

Lucas: It would also be, I think, very helpful if you could let us know how you optimize for strong debaters, and how is amplification possible here if human beings are the ones who are pointing out the simplifications of the questions?

Geoffrey: Whichever one we choose, whether it’s amplification, debate, or some entirely different scheme, if it depends on humans in one of these elaborate ways, we need to do a bunch of work to know that humans are going to be able to do this. At amplification, you would expect to have to train people to think about what kinds of decompositions are the correct ones. My sort of bias is that because debate gives the humans more help in pointing out the counterarguments, it may be cognitively kinder to the humans, and therefore, that could make it a better scheme. That’s one of the advantages of debate.

The technical analogy there is a shallow debate argument. The human side is, if someone is pointing out the arguments for you, it’s cognitively kind. In amplification, I would expect you’d need to train people a fair amount to have the decomposition be reliably complete. I don’t know that I have a lot of confidence that you can do that. One way you can try to do it is, as much as possible, systematize the process on the human side.

In either one of these schemes, we can give the people involved an arbitrary amount of training and instruction in whatever way we think is best, and we’d like to do the work to understand what forms of instruction and training are most truth seeking, and try to do that as early as possible so you have a head start.

I would say I’m not going to be able to give you a great argument for optimism about amplification. This is a discussion that Paul, and Andreas Stuhlmueller, and I have, where I think Paul and Andreas kind of lean towards these metareasoning arguments, where if you wanted to answer the question, “Where should I go on vacation,” the first subquestion is, “What would be a good way to decide where to go on vacation?” You quickly go meta, and maybe you go meta, meta, like it’s kind of a mess. Whereas the hope is that because, in debate, you sort of have help pointing to things, you can do much more object level reasoning, where the first step in a debate about where to go on vacation is just Bali or Alaska. You give the answer and then you focus in on more …

For a broader class of questions, you can stay at object level reasoning. Now, if you want to get to metaethics, you would have to bring in the kind of reasoning. It should be a goal of ours to, for a fixed task, try to use the simplest kind of human reasoning possible, because then we should expect to get better results out of people.

Lucas: All right. Moving forward. Two things. The first that would be interesting would be if you could unpack this process of training up agents to be good debaters, and to be good predictors of human decision making regarding debates, what that’s actually going to look like in terms of your experiments, currently, and your future experiments. Then, also just pivoting into discussing reasons for optimism and pessimism about debate as a model for AI alignment.

Geoffrey: On the experiment side, as I mentioned, we’re trying to get into the natural language domain, because I think that’s how humans debate and reason. We’re doing a fair amount of work at OpenAI on core ML language modeling, so natural language processing, and then trying to take advantage of that to prototype these systems. At the moment, we’re just doing what I would call zero step debate, or one step debate. It’s just a single agent answering a question. You have question, answer, and then you have a human kind of judging whether the answer is good.

The task of predicting an answer is just read a bunch of text and predict a number. That is essentially just a standard NLP type task, and you can use standard methods from NLP on that problem. The hope is that because it looks so standard, we can sort of just keep pace on the safety side with the development on the capability side in natural language processing. Predicting the result is just sort of: use whatever the most powerful natural language processing architecture and method is, and apply it to this task.

Similarly, on the task of answering questions, that’s also a natural language task, just a generative one. If you’re answering questions, you just read a bunch of text that is maybe the context of the question, and you produce an answer, and that answer is just a bunch of words that you spit out via a language model. If you’re doing, say, a two step debate, where you have question, answer, counterargument, then similarly, you have a language model that spits out an answer, and a language model that spits out the counterargument. Those can in fact be the same language model. You just flip the reward at some point. An agent is rewarded for answering well and winning while it’s spitting out the answer, and then when it’s spitting out the counteranswer, you just reward it for falsifying the answer. It’s still just a generative language task with some slightly exotic reward.
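
Concretely, with made-up prompt formats and placeholder components, the framing is that judging is ordinary text classification while both debating roles share one generative model with the reward sign flipped:

```python
# Sketch of the natural-language framing: the judge is an ordinary text
# classifier, and one generative model plays both the answering and the
# countering role, with the reward negated for the counterargument.
# Prompt formats and components here are illustrative placeholders.

def judge_example(question, answer, counterargument, human_picked_answer):
    """Judging reduces to standard NLP: text in, one number out."""
    text = f"Q: {question}\nA: {answer}\nCounter: {counterargument}"
    label = 1.0 if human_picked_answer else 0.0
    return text, label            # feed to any standard text classifier

def debater_rewards(judge_score_for_answer):
    """Both roles are generative language tasks sharing one model; only the
    sign of the reward differs, which makes the game zero-sum."""
    return {"answer_role": judge_score_for_answer,
            "counter_role": -judge_score_for_answer}
```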

Going forwards, we expect there to need to be something like … this is not actually high confidence … maybe there are things like AlphaGo Zero-style tree search that are required to make this work very well on the generative side, and we will explore those as required. Right now, we need to try to falsify the statement that we can just do it with stock language modeling, which is what we’re working on. Does that cover the first part?

Lucas: I think that’s great in terms of the first part, and then again, the second part was just places to be optimistic and pessimistic here about debate.

Geoffrey: Optimism, I think we’ve covered a fair amount of it. The primary source of optimism is this argument that shallow debates are already powerful, because you can cover a lot of terrain in argument space with a short debate, because of the high branching factor. If there’s an answer that is robust to all possible counteranswers, then it hopefully is a fairly strong answer, and that gets stronger as you increase the number of steps. This assumes strong debaters. That would be a reason for pessimism, not optimism. I’ll get to that.

The top two are that one, and then the other part is that ML is pretty good at zero-sum games, particularly zero-sum perfect information games. There have been these very impressive headline results from AlphaGo at DeepMind, Dota at OpenAI, and a variety of other games. In general, for zero-sum, close to perfect information games, we roughly know how to do them, at least in the not-too-high branching factor case. There’s an interesting thing where if you look at the algorithms for playing poker, which is zero-sum and two-player but imperfect information, or the algorithms for playing, say, 10-player games, they’re just much more complicated. They don’t work as well.

I like the fact that debate is formulated as a two-player zero-sum perfect information game, because we seem to have better algorithms to play those with ML. This is both practically true (it is in practice easier to play them) and also there’s a bunch of theory that says that two-player zero-sum is a different complexity class than, say, two-player non-zero-sum or N-player. The complexity class gets harder, and you need nastier algorithms. Finding a Nash equilibrium in a general game that is either non-zero-sum or has more than two players is PPAD-complete, even in the tabular case; in a small two-player zero-sum game, that problem is convex and has a polynomial-time solution. It’s a nicer class. I expect there to continue to be better algorithms to play those games. I like formulating safety as that kind of problem.
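
For readers who want to see why the two-player zero-sum case is so tractable, here is a small sketch (my own illustration, not from the episode) that computes an optimal mixed strategy by linear programming, which is solvable in polynomial time. It assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Optimal row-player mixed strategy and game value for a payoff matrix."""
    m, n = payoff.shape
    # Variables: x_1..x_m (strategy) and v (game value). Maximize v, i.e. minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every opponent column j: v - sum_i payoff[i, j] * x_i <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy is a probability distribution: sum_i x_i = 1.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun, res.x[:m]

# Matching pennies: value 0, optimal strategy (0.5, 0.5).
print(solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```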

Those are the reasons for optimism that I think are most important. Going into more of those is less important and less interesting than worrying about stuff, so I’ll list three of those worries, or maybe four, and try to be fast so we can circle back. As I mentioned, I think interpretability has a large role to play here. I would like to be able to have an agent say … again, Alice and Bob are debating. Bob should be able to just point directly into Alice’s thoughts and say, “She really thought X even though she said Y.” The reason you need an interpretability technique for that is, in this conversation, I could just claim that you, Lucas Perry, are having some malicious thought, but that’s not a falsifiable statement, so I can’t use it in a debate. I could always make that claim, unless I can point into your thoughts.

Because we have so much control over machine learning, we have the potential ability to do that, and we can take advantage of it. I think that, for that to work, we probably need a deep hybrid between interpretability and these schemes, because an advanced agent’s thoughts will probably be advanced, and so you may need some kind of strengthened scheme like amplification or debate just to be able to describe the thoughts, or to point at them in a meaningful way. That’s a problem that we have not really solved. Interpretability is coming along, but it’s definitely not hybridized with these fancy alignment schemes, and we need to solve that at some point.

Another problem is there’s no point in this kind of natural language debate where I can just say, for example, “You know, it’s going to rain tomorrow, and it’s going to rain tomorrow just because I’ve looked at all the weather in the past, and it just feels like it’s going to rain tomorrow.” Somehow, debate is missing this just straight up pattern matching ability of machine learning where I can just read a dataset and just summarize it very quickly. The theoretical side of this is if I have a debate about, even something as simple as, “What’s the average height of a person in the world?” In the debate method I’ve described so far, that debate has to have depth, at least logarithmic in the number of people. I just have to subdivide by population. Like, this half of the world, and then this half of that half of the world, and so on.

I can’t just say, “You know, on average it’s like 1.6 meters.” We need to have better methods for hybridizing debate with pattern matching and statistical intuition, and if we don’t have that, we may not be competitive with other forms of ML.

Lucas: Why is that not just an intrinsic part of debate? Why is debating over these kinds of things different than any other kind of natural language debate?

Geoffrey: It is the same. The problem is just that for some types of questions, and there are other forms of this in natural language, there aren’t short deterministic arguments. There are many questions where the shortest deterministic argument is much longer than the shortest randomized argument. For example, if you allow randomization, I can say, “I claim the average height of a person is 1.6 meters.” Well, pick a person at random, and you’ll score me according to the squared difference between those two numbers: my claim and the height of the particular person you’ve chosen. The optimal move to make there is to just say the average height right away.
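
As an aside not from the conversation itself, a one-line calculation shows why the truthful mean is the optimal claim under that squared-error scoring (writing c for the claim and h for the height of the randomly chosen person):

```latex
\mathbb{E}\big[(c-h)^2\big] \;=\; \big(c-\mathbb{E}[h]\big)^2 + \operatorname{Var}(h)
```

so the expected penalty is minimized exactly at c = E[h], the true average height, regardless of how heights are distributed.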

The thing I just described is a debate using randomized steps that is extremely shallow; it’s only basically two steps long. If I want to do a deterministic debate, I have to deterministically say the average height of a person in North America is X, and in Asia it’s Y. The other debater could say, “I disagree about North America,” and you sort of recurse into that.

It would be super embarrassing if we propose these complicated alignment schemes, saying “This is how we’re going to solve AI safety,” and they can’t quickly answer trivial statistical questions. That would be a serious problem. We kind of know how to solve that one. The harder case is if you bring in this more vague statistical intuition. It’s not like I’m computing a mean over some dataset: I’ve looked at the weather and, you know, it feels like it’s going to rain tomorrow. Getting that in is a bit trickier, but we have some ideas there. They’re unresolved.

So that’s something I’m optimistic about, but we need to work on it; that’s one. The most important reason to be concerned is just that humans are flawed in a variety of ways. We have all these ethical inconsistencies and cognitive biases. We can write down some toy theoretical arguments that debate works with a limited but reliable judge, but does it work in practice with a human judge? I think there are some questions you can kind of reason through there, but in the end, a lot of that will be determined by just trying it, and seeing whether debate works with people. Eventually, when we start to get agents that can play these debates, then we can check whether it works with two ML agents and a human judge. For now, when language modeling is not that far along, we may need to try it out first with all humans.

This would be, you play the same debate game, but both the debaters are also people, and you set it up so that somehow it’s trying to model this case where the debaters are better than the judge at some task. The debaters might be experts at some domain, they might have access to some information that the judge doesn’t have, and therefore, you can ask whether a reasonably short debate is truth seeking if the humans are playing to win.

The hope there would be that you can test out debate on real people with interesting questions, say complex scientific questions, and questions about ethics, and about areas where humans are biased in known ways, and see whether it works, and also see not just whether it works, but which forms of debate are strongest.

Lucas: What does it mean for debate to work or be successful for two human debaters and one human judge if it’s about normative questions?

Geoffrey: Unfortunately, if you want to do this test, you need to have a source of truth. In the case of normative questions, there’s two ways to go. One way is you pick a task where we may not know the entirety of the answer, but we know some aspect of it with high confidence. An example would be this resume case, where two resumes are identical except for the name at the top, and we just sort of normatively … we believe with high confidence that the answer shouldn’t depend on that. If it turns out that a winning debater can maliciously and subtly take advantage of the name to spread fear into the judge, and make a resume with a black name sound bad, that would be a failure.

We sort of know that because we don’t know in advance whether a resume should be good or bad overall, but we know that this pair of identical resumes shouldn’t depend on the name. That’s one way just we have some kind of normative statement where we have reasonable confidence in the answer. The other way, which is kind of similar, is you have two experts in some area, and the two experts agree on what the true answer is, either because it’s a consensus across the field, or just because maybe those two experts agree. Ideally, it should be a thing that’s generally true. Then, you force one of the experts to lie.

You say, “Okay, you both agree that X is true, but now we’re going to flip a coin and now one of you only wins if you lie, and we’ll see whether that wins or not.”

Lucas: I think it also … Just to plug your game here, you guys do have a debate game. We’ll put a link to that in the article that goes along with this podcast. I suggest that people check that out if you would like a little bit more tangible and fun way to understand debate, and I think it’ll help elucidate what the process looks like, and the asymmetries that go on, and the key idea here that it is harder to lie than to refute a lie. It seems like if we could deploy some sort of massive statistical analysis over many different iterated debates across different agents, that we would be able to come down on the efficacy of debate in different situations where the judge and the debaters are all AI, mixed situations, or all human debates. I think it’d be interesting to see the varying results there.

Geoffrey: This is going to be a noisy enough process, for a variety of reasons, that we will probably have to do this a lot to know. So far, we’ve just done a very small, informal number of these human, human, human debates, say expert debates, and we’ve already learned a fair amount at a qualitative level just from those few trials. I’ll give an example. In a debate about a CS theory question, there was a judge present while the debate was going on, but they were interacting only fairly minimally.

Early in the debate (I was the debater telling the truth here), I stated my formulation of the question and unpacked it intuitively, and the judge, perfectly understandably, had a subtle misunderstanding of how I’d framed the question. That misunderstanding of what the topic was about was never corrected throughout the debate, and therefore it was much harder for the honest player to win, because it seemed like the honest player had a harder case to make.

That sort of taught us that having judge interaction is potentially quite important so that the debaters have a detailed understanding of what the judge is thinking. If your goal is to model debate as a perfect information game, the closer to that you can get, the more information that debaters have, the better it should be.

Lucas: Yeah. I mean, that also allows the debaters to exploit cognitive biases in the judge.

Geoffrey: That’s right.

Lucas: You would point that out. Like, “Hey, this person’s exploiting your cognitive bias.”

Geoffrey: Yeah, so I think it’s an open question how exactly to strike that balance, and if there’s a way to strike it that works. Generally, the more information about the judge that you provide to the debaters, either through judge interaction or by just telling the debaters something about the judge, the stronger they will be as players of the game, but it might reveal ways to attack the judge.

Now, if our goal is to be resistant to very strong agents, and it turns out that the only way to make it safe is to hide information from the agents, maybe you shouldn’t use this method. It may not be very resilient. It’s likely that for experiments, we should push as far as we can towards strong play, revealing as much as possible, and see whether it still works in that case.

Lucas: In terms here of the social scientists playing a role here, do you want to go ahead and unpack that a bit more? There’s a paper that you’re working on with Amanda Askell on this.

Geoffrey: As you say, we want to run statistically significant experiments that test whether debate is working and which forms of debate are best, and that will require careful experimental design. That is, in some sense, an experiment in pure social science. There’s no ML involved. It’s motivated by machine learning, but it’s just a question about how people think, and how they argue and convince each other. Currently, no one at OpenAI has any experience running human experiments of this kind, or at least no one involved in this project.

The hope would be that we would want to get people involved in AI safety that have experience and knowledge in how to structure experiments on the human side, both in terms of experimental design, having an understanding of how people think, and where they might be biased, and how to correct away from those biases. I just expect that process to involve a lot of knowledge that we don’t possess at the moment as ML researchers.

Lucas: Right. I mean, in order for there to be an efficacious debate process, or AI alignment process in general, you need to debug and understand the humans as well as the machines. Understanding our cognitive biases in debates, and our weak spots and blind spots in debate, it seems crucial.

Geoffrey: Yeah. I sort of view it as a social science experiment, because it’s just a bunch of people interacting. It’s a fairly weird experiment. It differs from normal experiments in some ways. In thinking about how to build AGI in a safe way, we have a lot of control over the whole process. If it takes a bunch of training to make people good at judging these debates, we can provide that training, pick people who are better or worse at judging. There’s a lot of control that we can exert. In addition to just finding out whether this thing works, it’s sort of an engineering process of debugging the humans, maybe it’s sort of working around human flaws, taking them into account, and making the process resilient.

My highest level hope here is that humans have various flaws and biases, but we are willing to be corrected and set our flaws aside, or maybe there are two ways of approaching a question where one way hits the bias and one way doesn’t. We want to see whether we can produce some scheme that picks out the right way, at least to some degree of accuracy. We don’t need to be able to answer every question. If we, for example, learned that, “Well, debate works perfectly well for some broad class of tasks, but not for resolving the final question of what humans should do over the long term future, or resolving all metaethical disagreements or something,” we can afford to say, “We’ll put those aside for now. We want to get through this risky period, make sure AI doesn’t do something malicious, and we can deliberately work through those remaining questions and take our time doing that.”

The goal includes the task of knowing which things we can safely answer, and the goal should be to structure the debates so that if you give it a question where humans just disagree too much or are too unreliable to reliably answer, the answer should be, “We don’t know the answer to that question yet.” A debater should be able to win a debate by admitting ignorance in that case.

There is an important assumption I’m making about the world that we should make explicit, which is that I believe it is safe to be slow about certain ethical or directional decisions. You can construct games where you just have to make a decision now, like you’re barreling along in some car with no brakes and you have to dodge left or right around an obstacle, but you can’t say, “I’m going to ponder this question for a while and sort of hold off.” You have to choose now. I would hope that the task of choosing what we want to do as a civilization is not like that. We can resolve some immediate concerns about serious problems now, and existential risk, but we don’t need to resolve everything.

That’s a very strong assumption about the world, which I think is true, but it’s worth saying that I know that is an assumption.

Lucas: Right. I mean, it’s true insofar as coordination succeeds, and people don’t have incentives just to go do what they think is best.

Geoffrey: That’s right. If you can hold off deciding things until we can deliberate longer.

Lucas: Right. What does this distillation process look like for debate, where we’re ensuring alignment is maintained as the system’s capability is amplified and changed?

Geoffrey: One property of amplification, which is nice, is that you can sort of imagine running it forever. You train on simple questions, and then you train on more complicated questions, and then you keep going up and up and up, and if you’re confident that you’ve trained enough on the simple questions, you can never see them again, freeze that part of the model, and keep going. I think in practice that’s probably not how we would run it, so you don’t inherit that advantage. In debate, to get to more and more complicated questions, at some point, and maybe this point is fairly far off, you have to go to longer and longer debates.

If you’re thinking about the long-term future, I expect to have to switch over to some other scheme, or at least layer a scheme on top, embedding debate in a larger scheme. An example would be that the question you resolve with debate is, “What is an even better way to build AI alignment?” That, you can resolve with, say, depth-100 debates, and maybe you can handle that depth well. What that spits out is an algorithm; you interrogate it enough to know that you trust it, and then you can switch over to that one.

You can also imagine eventually needing to hybridize a Debate-like scheme and an Amplification-like scheme, where you don’t get a new algorithm out, but you trust this initial debating oracle enough that you can view it as fixed, and then start a new debate scheme which can trust any answer that the original scheme produces. Now, I don’t really like that scheme, because it feels like you haven’t gained a whole lot. Generally, if you think about, say, the next 1,000 years, it’s useful to think about the long term of AI alignment going forwards; I expect to need further advances after we get past this AI risk period.

I’ll give a concrete example. You ask your debating agents, “Okay, give me a perfect theorem prover.” Right now, all of our theorem provers probably have little bugs, so you can’t really trust them to resist a superintelligent agent. Say you trust the theorem prover that you get out, and then you say, “Okay, now I want a proof that AI alignment works.” You bootstrap your way up using this agent as an oracle on interesting, complicated questions, until you’ve got a scheme that gets you to the next level, and then you iterate.

Lucas: Okay. In terms of the practical path from the short-term world to an AGI world, maybe in the next 30 years, what does this actually look like? In what ways could we see debate and amplification deployed and used at scale?

Geoffrey: There is the direct approach, where you use them to answer questions, using exactly the structure they’re trained with. With a debating agent, you would just engage it in debates and use it as an oracle in that way. You can also use it to generate training data. You could, for example, ask a debating agent to spit out the answers to a large number of questions, and then you just train a little module on those, if you trust all the answers and you trust supervised learning to work. If you wanted to build a strong self-driving car, you could use it to train a much smaller network that way. It would not be human level, but it just gives you a way to access data.

There’s a lot you could do with a powerful oracle that gives you answers to questions. I could probably go on at length about fancy schemes you could do with oracles. I don’t know if it’s that important. The more important part to me is what is the decision process we deploy these things into? How we choose which questions to answer and what we do with those answers. It’s probably not a great idea to train an oracle and then give it to everyone in the world right away, unfiltered, for reasons you can probably fill in by yourself. Basically, malicious people exist, and would ask bad questions, and eventually do bad things with the results.

If you have one of these systems, you’d like to deploy it in a way that can help as many people as possible, which means everyone will have their own questions to ask of it, but you need some filtering mechanism or some process to decide which questions to actually ask, what to do with the answers, and so on.

Lucas: I mean, can the debate process be used to self-filter out providing answers for certain questions, based off of modeling the human decision about whether or not they would want that question answered?

Geoffrey: It can. There’s a subtle issue, which I think we need to deal with, but haven’t dealt with yet. There’s a commutativity question: say you have a large number of people; do you reach reflective equilibrium for each person first and then, say, vote across people, or do you have a debate and then vote on what the judgment should be? Imagine playing a debate game where you play a debate, and then everyone votes on who wins. There are advantages on both sides. On the side of voting after reflective equilibrium, you have this problem that if you reach reflective equilibrium for a person, it may be disastrous if you pick the wrong person. That extreme is probably bad. The other extreme is also kind of weird, because there are a bunch of standard results where if you take a bunch of rational agents voting, it might be true that A and B together imply C, but the agents might vote yes on A, yes on B, and no on C. Votes on statements where every voter is rational are not themselves rational: the voting outcome is irrational.
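
Here is a small illustration of that voting paradox (my own toy example, not one given in the episode): each voter is individually consistent, yet the majority endorses A and B while rejecting their conjunction.

```python
# Three internally consistent voters; the majority vote is inconsistent.
voters = [
    {"A": True,  "B": True},   # believes A, B, and therefore A-and-B
    {"A": True,  "B": False},  # believes A only
    {"A": False, "B": True},   # believes B only
]

def majority(holds):
    yes = sum(1 for v in voters if holds(v))
    return yes > len(voters) / 2

print("A passes:      ", majority(lambda v: v["A"]))               # True
print("B passes:      ", majority(lambda v: v["B"]))               # True
print("A-and-B passes:", majority(lambda v: v["A"] and v["B"]))    # False
```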

The result of voting before you take reflective equilibrium is sort of an odd philosophical concept. Probably, you need some kind of hybrid between these schemes, and I don’t know exactly what that hybrid looks like. That’s an area where I think technical AI safety mixes with policy to a significant degree that we will have to wrestle with.

Lucas: Great, so to back up and to sort of zoom in on this one point that you made, is the view that one might want to be worried about people who might undergo an amplified long period of explicit human reasoning, and that they might just arrive at something horrible through that?

Geoffrey: I guess, yes, we should be worried about that.

Lucas: Wouldn’t one view of debate be that humans, given debate, would also over time become more likely to arrive at true answers? That reflective equilibrium will tend to lead people to truth?

Geoffrey: Yes. That is an assumption. The reason I think there is hope there … I think that you should be worried. I think the reason for hope is our ability to not answer certain questions. I don’t know that I trust reflective equilibrium applied incautiously, or not regularized in some way, but I expect that if there’s a case where some definition of reflective equilibrium is not trustworthy, it’s hopeful that we can construct debate so that the result will be, “This is just too dangerous to decide. We don’t really know the answer with high confidence.”

This is certainly true of complicated moral things, avoiding lock-in, for example. I would not trust reflective equilibrium if it says, “Well, the right answer is just to lock our values in right now, because they’re great.” We need to take advantage of the outs we have in terms of being humble about deciding things. Once you have those outs, I’m hopeful that we can solve this, but there’s a bunch of work to do to know whether that’s actually true.

Lucas: Right. Lots more experiments to be done on the human side and the AI side. Is there anything here that you’d like to wrap up on, or anything that you feel like we didn’t cover that you’d like to make any last minute points?

Geoffrey: I think the main point is just that there’s a bunch of work here. OpenAI is hiring people to work on both the ML side of things, also theoretical aspects, if you think you like wrestling with how these things work on the theory side, and then certainly, trying to start on this human side, doing the social science and human aspects. If this stuff seems interesting, then we are hiring.

Lucas: Great, so people that are interested in potentially working with you or others at OpenAI on this, or if people are interested in following you and keeping up to date with your work and what you’re up to, what are the best places to do these things?

Geoffrey: I have taken a break from pretty much all social media, so you can follow me on Twitter, but I won’t ever post anything, or see your messages, really. I think email me. It’s not too hard to find my email address. That’s pretty much the way, and then watch as we publish stuff.

Lucas: Cool. Well, thank you so much for your time, Geoffrey. It’s been very interesting. I’m excited to see how these experiments go for debate, and how things end up moving along. I’m pretty interested and optimistic, I guess, about debate as an epistemic process, its role in arriving at truth and in truth seeking, and how that will play into AI alignment.

Geoffrey: That sounds great. Thank you.

Lucas: Yep. Thanks, Geoff. Take care.

If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.

[end of recorded material]

FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.   

Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 80s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

Topics discussed in this episode include:

  • The value of verification, regardless of the challenges
  • The 1979 Sverdlovsk anthrax outbreak
  • The use of “rainbow” herbicides during the Vietnam War, including Agent Orange
  • The Yellow Rain Controversy

Publications and resources discussed in this episode include:

  • “The Sverdlovsk Anthrax Outbreak of 1979,” Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya, Science, 18 November 1994, Vol. 266, pp. 1202-1208.
  • “Preliminary Report, Herbicide Assessment Commission of the American Association for the Advancement of Science,” Matthew Meselson, A. H. Westing, J. D. Constable, and Robert E. Cook, 30 December 1970, private circulation, 8 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6806-6807.
  • “Background Material Relevant to Presentations at the 1970 Annual Meeting of the AAAS,” Herbicide Assessment Commission of the AAAS, with A. H. Westing and J. D. Constable, December 1970, private circulation, 48 pp. Reprinted in Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6807-6813.
  • “The Yellow Rain Affair: Lessons from a Discredited Allegation,” with Julian Perry Robinson, in Terrorism, War, or Disease?, eds. A. L. Clunan, P. R. Lavoy, and S. B. Martin, Stanford University Press, Stanford, California, 2008, pp. 72-96.
  • “Yellow Rain,” Thomas D. Seeley, Joan W. Nowicke, Matthew Meselson, Jeanne Guillemin, and Pongthep Akratanakul, Scientific American, September 1985, Vol. 253, pp. 128-137.

Click here for Part 1: From DNA to Banning Biological Weapons with Matthew Meselson and Max Tegmark

Four-ship formation on a defoliation spray run. (U.S. Air Force photo)

Ariel: Hi everyone. Ariel Conn here with the Future of Life Institute. And I would like to welcome you to part two of our two-part FLI podcast with special guest Matthew Meselson and special guest/co-host Max Tegmark. You don’t need to have listened to the first episode to follow along with this one, but I do recommend listening to the other episode, as you’ll get to learn about Matthew’s experiment with Franklin Stahl that helped prove Watson and Crick’s theory of DNA and the work he did that directly led to US support for a biological weapons ban. In that episode, Matthew and Max also talk about the value of experiment and theory in science, as well as how to get some of the world’s worst weapons banned. But now, let’s get on with this episode and hear more about some of the verification work that Matthew did over the years to help determine if biological weapons were being used or developed illegally, and the work he did that led to the prohibition of Agent Orange.

Matthew, I’d like to ask about a couple of projects that you were involved in that I think are really closely connected to issues of verification, and those are the Yellow Rain Affair and the Russian Anthrax incident. Could you talk a little bit about what each of those was?

Matthew: Okay, well in 1979, there was a big epidemic of anthrax in the Soviet city of Sverdlovsk, just east of the Ural mountains, in the beginning of Siberia. We learned about this epidemic not immediately but eventually, through refugees and other sources, and the question was, “What caused it?” Anthrax can occur naturally. It’s commonly a disease of bovids, that is cows or sheep, and when they die of anthrax, the carcass is loaded with the anthrax bacteria, and when the bacteria see oxygen, they become tough spores, which can last in the earth for a long, long time. And then if another bovid comes along and manages to eat something that’s got those spores, he might get anthrax and die, and the meat from these animals who died of anthrax, if eaten, can cause gastrointestinal anthrax, and that can be lethal. So, that’s one form of anthrax. You get it by eating.

Now, another form of anthrax is inhalation anthrax. In this country, there were a few cases of men who worked in leather factories with leather that had come from anthrax-affected animals, usually imported, which had live anthrax spores on the leather that got into the air of the shops where people were working with the leather. Men would breathe this contaminated air and the infection in that case was through the lungs.

The question here was, what kind of anthrax was this: inhalational or gastrointestinal? And because I was by this time known as an expert on biological weapons, the man who was dealing with this issue at the CIA in Langley, Virginia — a wonderful man named Julian Hoptman, a microbiologist by training — asked me if I’d come down and work on this problem at the CIA. He had two daughters who were away at college, and so he had a spare bedroom, so I actually lived with Julian and his wife. And in this way, I was able to talk to Julian night and day, both at the breakfast and dinner table, but also in the office. Of course, we didn’t talk about classified things except in the office.

Now, we knew from the textbooks that the incubation period for inhalation anthrax was thought to be four, five, six, seven days: if you hadn’t yet come down with it four or five days after you inhaled it, you probably wouldn’t. Well, we knew from classified sources that people were dying of this anthrax over a period of six weeks, April all the way into the middle of May 1979. So, if the incubation period was really that short, you couldn’t explain how it would be airborne, because a cloud goes by right away. Once it’s gone, you can’t inhale it anymore. So that made the conclusion that it was airborne difficult to reach. You could still say, well, maybe it got stirred up again by people cleaning up the site, maybe the incubation period is longer than we thought, but there was a problem there.

And so the conclusion of our working group was that it was probable that it was airborne. In the CIA, at that time at least, in a conclusion that goes forward to the president, you couldn’t just say, “Well maybe, sort of like, kind of like, maybe if …” Words like that just didn’t work, because the poor president couldn’t make heads nor tails. Every conclusion had to be called “possible,” “probable,” or “confirmed.” Three levels of confidence.

So, the conclusion here was that it was probable that it was inhalation, and not ingestion. The Soviets said that it was bad meat, but I wasn’t convinced, mainly because of this incubation period thing. So I decided that the best thing to do would be to go and look. Then you might find out what it really was. Maybe by examining the survivors or maybe by talking to people — just somehow, if you got over there, with some kind of good luck, you could figure out what it was. I had no very clear idea, but when I would meet any high level Soviet, I’d say, “Could I come over there and bring some colleagues and we would try to investigate?”

The first time that happened was with a very high-level Soviet who I met in Geneva, Switzerland. He was a member of what’s called the Military Industrial Commission in the Soviet Union. They decided on all technical issues involving the military, and that would have included their biological weapons establishments, and we knew that they had a big biological laboratory in the city of Sverdlovsk, there was no doubt about that. So, I told them, “I want to go in and inspect. I’ll bring some friends. We’d like to look.” And he said, “No problem. Write to me.”

So, I wrote to him, and I also went to the CIA and said, “Look, I’ve got to have a map, because maybe they’d let me go there and take me to the wrong place, and I wouldn’t know it’s the wrong place, and I wouldn’t learn anything.” So, the CIA gave me a map — which turned out to be wrong, by the way — but then I got a letter back from this gentleman saying no, actually they couldn’t let us go because of the shooting down of the Korean jet #007, if any of you remember that. A Russian fighter plane shot down a Korean jet — a lot of passengers on it and they all got killed. Relations were tense. So, that didn’t happen.

Then the second time, an American and the Russian Minister of Health got a Nobel prize. The winner over there was the minister of health named Chazov, and the fellow over here was Bernie Lown in our medical school, who I knew. So, I asked Bernie to take a letter when he went next time to see his friend Chazov in Moscow, to ask him if he could please arrange that I could take a team to Sverdlovsk, to go investigate on site. And when Bernie came back from Moscow, I asked him and he said, “Yeah. Chazov says it’s okay, you can go.” So, I sent a telex — we didn’t have email — to Chazov saying, “Here’s the team. We want to go. When can we go?” So, we got back a telex saying, “Well, actually, I’ve sent my right-hand guy who’s in charge of international relations to Sverdlovsk, and he looked around, and there’s really no evidence left. You’d be wasting your time,” which means no, right? So, I telexed back and said, “Well, scientists always make friends and something good always comes from that. We’d like to go to Sverdlovsk anyway,” and I never heard back. And then, the Soviet Union collapses, and we have Yeltsin now, and it’s the Russian Republic.

It turns out that a group of — I guess at that time they were still Soviets — Soviet biologists came to visit our Fort Detrick, and they were the guests of our Academy of Sciences. So, there was a welcoming party, and I was on the welcoming party, and I was assigned to take care of one particular one, a man named Mr. Yablokov. So, we got to know each other a little bit, and at that time we went to eat crabs in a Baltimore restaurant, and I told him I was very interested in this epidemic in Sverdlovsk, and I guess he took note of that. He went back to Russia and that was that. Later, I read in a journal that the CIA produced, abstracts from the Russian literature press, that Yeltsin had ordered his minister, or his assistant for Environment and Health, to investigate the anthrax epidemic back in 1979, and the guy who he appointed to do this investigation for him was my Mr. Yablokov, who I knew.

So, I sent a telex to Mr. Yablokov saying, “I see that President Yeltsin has asked for you to look into this old epidemic and decide what really happened, and that’s great, I’m glad he did that, and I’d like to come and help you. Could I come and help you?” So, I got back a telex saying, “Well, it’s a long time ago. You can’t bring skeletons out of the closet, and anyway, you’d have to know somebody there.” Basically it was a letter that said no. But then my friend Alex Rich of Cambridge Massachusetts, a great molecular biologist and X-ray crystallographer at MIT, had a party for a visiting Russian. Who is the visiting Russian but a guy named Sverdlov, like Sverdlovsk, and he’s staying with Alex. And Alex’s wife came over to me and said, “Well, he’s a very nice guy. He’d been staying with us for several days. I make him breakfast and lunch. I make the bed. Maybe you could take him for a while.”

So we took him into our house for a while, and I told him that I had been given a turn down by Mr. Yablokov, and this guy whose name is Sverdlov, which is an immense coincidence, said, “Oh, I know Yablokov very well. He’s a pal. I’ll talk to him. I’ll get it fixed so you can go.” Now, I get a letter. In this letter, handwritten by Mr. Yablokov, he said, “Of course, you can go, but you’ve got to know somebody there to invite you.” Oh, who would I know there?

Well, there had been an American Physicist, a solid-state physicist named Ellis who was there on a United States National Academy of Sciences–Russian Academy of Sciences Exchange Agreement doing solid-state physics with a Russian solid-state physicist there in Sverdlovsk. So, I called Don Ellis and I asked him, “That guy who you cooperated with in Sverdlovsk — whose name was Gubanov — I need someone to invite me to go to Sverdlovsk, and you probably still maintain contact with him over there in Sverdlovsk, and you could ask him to invite me.” And Don said, “I don’t have to do that. He’s visiting me today. I’ll just hand him the telephone.”

So, Mr. Gubanov comes on the telephone and he says, “Of course I’ll invite you, my wife and I have always been interested in that epidemic.” So, a few days later, I get a telex from the rector of the university there in Sverdlovsk, who was a mathematical physicist. And he says, “The city is yours. Come on. We’ll give you every assistance you want.” So we went, and I formed a little team, which included a pathologist, thinking maybe we’ll get ahold of some information of autopsies that could decide whether it was inhalation or gastrointestinal. And we need someone who speaks Russian; I had a friend who was a virologist who spoke Russian. And we need a guy who knows a lot about anthrax, and veterinarians know a lot about anthrax, so I got a veterinarian. And we need an anthropologist who knows a lot about how to work with people and that happened to be my wife, Jeanne Guillemin.

So, we all go over there, and we were assigned a solid-state physicist, a guy named Borisov, to take us everywhere. He knew how to fix everything: cars that wouldn’t work, and also the KGB. He was a genius, and became a good friend. It turns out that he had a girlfriend, and she, by this time, had been elected to be a member of the Duma. In other words, she’s a congresswoman. She’s from Sverdlovsk. She had been a friend of Yeltsin. She had written Yeltsin a letter, which my friend Borisov knew about, and I have a photocopy of the letter. What it says is, “Dear Boris Nikolayevich,” (that’s Yeltsin) “My constituents here at Sverdlovsk want to know if that anthrax epidemic was caused by a government activity or not. Because if it was, the families of those who died — they’re entitled to double pension money, just like soldiers killed in war.” So, Yeltsin writes back, “We will look into it.” And that’s why my friend Yablokov got asked to look into it. It was decided eventually that it was the result of government activity — by Yeltsin, he decided that — and so he had to have a list of the people who were going to get the extra pensions. Because otherwise everybody would say, “I’d like to have an extra pension.” So there had to be a list.

So she had this list with 68 names of the people who had died of anthrax during this time period in 1979. The list also had the address where they lived. So, now my wife, Jeanne Guillemin, Professor of Anthropology at Boston College, goes door-to-door — with two Russian women who were professors at the university and who knew English so they could communicate with Jeanne — knocks on the doors: “We would like to talk to you for a little while. We’re studying health, we’re studying the anthrax epidemic of 1979. We’re from the university.”

Everybody let them in except one lady who said she wasn’t dressed, so she couldn’t let anybody in. So in all the other cases, they did an interview and there were lots of questions. Did the person who died have TB? Was that person a smoker? One of the questions was where did that person work, and did they work in the day or the night? We asked that question because we wanted to make a map. If it had been inhalation anthrax, it had to be windborne, and depending on the wind, it might have been blown in a straight line if the wind was of a more or less unchanging direction.

If, on the other hand, it was gastrointestinal, people get bad meat from black market sellers all over the place, and the map of where they were wouldn’t show anything important; they’d just be all over the place. So, we were able to make a map when we got back home. We went back there a second time to get more interviews done, and Jeanne went back a third time to get even more interviews done. So, finally we had interviews with families of nearly all of those 68 people, and so we had 68 map locations: where they lived, and where they worked, and whether it was day or night. Nearly all of them were daytime workers.

When we plotted where they lived, they lived all over the southern part of the city of Sverdlovsk. When we plotted where they likely would have been in the daytime, they all fell into one narrow zone with one point at the military biological lab. The lab was inside the city. The other point was at the city limit: the last case was at the edge of the city limit, in the southern part. We also had meteorological information, which I had brought with me from the United States. We knew the wind direction every three hours, and there was only one day when the wind was constantly blowing in the same direction, and that same direction was exactly the direction along which the people who died of anthrax lived.

Well, bad meat does not blow around in straight lines. Clouds of anthrax spores do. It was rigorous: we could conclude from this, with no doubt whatsoever, that it had been airborne, and we published this in Science magazine. It was really a classic of epidemiology; you couldn’t ask for anything better. Also, the autopsy records were inspected by the pathologist on our trip, and he concluded from the autopsy specimens that it was inhalation. So, there was that evidence, too, and that was published in the PNAS. So, that really ended the mystery. The Soviet explanation was just wrong, and the CIA conclusion, which had been only “probable,” was now confirmed.

Max: Amazing detective story.

Matthew: I liked going out in the field, using whatever science I knew to try and deal with questions of importance to arms control, especially chemical and biological weapons arms control. And that happened to me on three occasions, one I just told you. There were two others.

Ariel: So, actually real quick before you get into that. I just want to mention that we will share or link to that paper and the map. Because I’ve seen the map that shows that straight line, and it is really amazing, thank you.

Matthew: Oh good.

Max: I think at the meta level this is also a wonderful example of what you mentioned earlier there, Matthew, about verification. It’s very hard to hide big programs because it’s so easy for some little thing to go wrong or not as planned and then something like this comes out.

Matthew: Exactly. By the way, that’s why having a verification provision in the treaty is worth it even if you never inspect. Let’s say that the guys who are deciding whether or not to do something which is against the treaty, they’re in a room and they’re deciding whether or not to do it. Okay? Now it is prohibited by a treaty that provides for verification. Now they’re trying to make this decision and one guy says, “Let’s do it. They’ll never see it. They’ll never know it.” Another guy says, “Well, there is a provision for verification. They may ask for a challenge inspection.” So, even the remote possibility that, “We might get caught,” might be enough to make that meeting decide, “Let’s not do it.” If it’s not something that’s really essential, then there is a potential big price.

If, on the other hand, there’s not even a treaty that allows the possibility of a challenge inspection, if the guy says, “Well, they might find it,” the other guy is going to say, “How are they going to find it? There’s no provision for them going there. We can just say, if they say, ‘I want to go there,’ we say, ‘We don’t have a treaty for that. Let’s make a treaty, then we can go to your place, too.’” It makes a difference: Even a provision that’s never used is worth having. I’m not saying it’s perfection, but it’s worth having. Anyway, let’s go on to one of these other things. Where do you want me to go?

Ariel: I’d really love to talk about the Agent Orange work that you did. So, I guess if you could start with the Agent Orange research and the other rainbow herbicides research that you were involved in. And then I think it would be nice to follow that up with, sort of another type of verification example, of the Yellow Rain Affair.

Matthew: Okay. The American Association for the Advancement of Science, the biggest organization of science in the United States, became, as the Vietnam War was going on, more and more concerned that the spraying of herbicides in Vietnam might cause ecological or health harm. And so at successive national meetings, there were resolutions to have it looked into. And as a result of one of those resolutions, the AAAS asked a fellow named Fred Tschirley to look into it. Fred was at the Department of Agriculture, but he was one of the people who developed the military use of herbicides. He did a study, and he concluded that there was no great harm. Possibly to the mangrove forest, but even then they would regenerate.

But at the next annual meeting, there were more appeals on the part of the membership, and now they wanted the AAAS to do its own investigation, and the compromise was they’d do their own study to design an investigation, and they had to have someone to lead that. So, they asked a fellow named John Cantlon, who was provost of Michigan State University, would he do it, and he said yes. And after a couple of weeks, John Cantlon said, “I can’t do this. I’m being pestered by the left and the right and the opponents on all sides and it’s just, I can’t do it. It’s too political.”

So, then they asked me if I would do it. Well, I decided I’d do it. The reason was that I wanted to see the war. Here I’d been very interested in chemical and biological weapons; very interested in war, because that’s the place where chemical and biological weapons come into play. If you don’t know anything about war, you don’t know what you’re talking about. I taught a course at Harvard for over two years on war, but that wasn’t like being there. So, I said I’d do it.

I formed a little group to do it. A guy named Arthur Westing, who had actually worked with herbicides and who was a forester himself and had been in the army in Korea, and I think had a battlefield promotion to captain. Just the right combination of talents. Then we had a chemistry graduate student, a wonderful guy named Bob Baughman. So, to design a study, I decided I couldn’t do it sitting here in Cambridge, Massachusetts. I’d have to go to Vietnam and do a pilot study in order to design a real study. So, we went to Vietnam — by the way, via Paris, because I wanted to meet the Vietcong people, I wanted them to give me a little card we could carry in our boots that would say, if we were captured, “We’re innocent scientists, don’t imprison us.” And we did get such little cards that said that. We were never captured by the Vietcong, but we did have some little cards.

Anyway, we went to Vietnam and we found, to my surprise, that the military assistance command, that is the United States Military in Vietnam, very much wanted to help our investigation. They gave us our own helicopter. That is, they assigned a helicopter and a pilot to me. And anywhere we wanted to go, I’d just call a certain number the night before and then go to Tan Son Nhut Air Base, and there would be a helicopter waiting with a pilot instructed FAD — fly as directed.

So, one of the things we did was to fly over a valley on which herbicides had been sprayed to kill the rice. John Constable, the medical member of our team, and I did two flights of that so we could take a lot of pictures. And the man who had designed this mission, a chemical corps officer named Captain Franz, had requested it and gotten permission through a series of review processes on the grounds that it was really an enemy crop production area, not an area of indigenous Montagnard people growing food for their own eating, but rather enemy soldiers growing it for themselves.

So we took a lot of pictures and as we flew, Colonel Franz said, “See down there, there are no houses. There’s no civilian population. It’s just military down there. Also, the rice is being grown on terraces on the hillsides. The Montagnard people don’t do that. They just grow it down in the valley. They don’t practice terracing. And also, the extent of the rice fields down there — that’s all brand new. Fields a few years ago were much, much smaller in area. So, that’s how we know that it’s an enemy crop production area.” And he was a very nice man, and we believed him. And then we got home, and we had our films developed.

Well, we had very good cameras and although you couldn’t see from the aircraft, you could certainly see in the film: The valley was loaded with little grass shacks with yellow roofs — meaning that they were built recently, because you have to replace the roofs every once in a while with straw and if it gets too old, it turns black, but if there’s yellow, it means that somebody is living in those. And there were hundreds and hundreds of them.

We got from the Food and Agriculture Organization in Rome how much rice you need to stay alive for one year, and what area in hectares of dry rice — because this isn’t paddy rice, it’s dry rice — you’d need to make that much rice, and we measured the area that was under cultivation from our photographs, and the area was just enough to support that entire population, if we assumed that there were five people who needed to be fed in every one of the houses that we counted.

Also, we were able to get the French aerial photography that had been done in the late 1940s, and it turns out that the rice fields had not expanded. They were exactly the same. So it wasn’t that the military had moved in and made bigger rice fields: they were the same. So, everything that Colonel Franz said was just wrong. I’m sure he believed it, but it was wrong.

So, we made great big color enlargements of our photographs — we took photographs all up and down this valley, 15 kilometers long — and we made one set for Ambassador Bunker; one set for General Abrams — Creighton Abrams was the head of our military assistance command; and one set for Secretary of State Rogers; along with a letter saying that this one case that we saw may not be typical, but in this one case, the crop destruction program was achieving the opposite of what it intended. It was denying food to the civilian population and not to the enemy. It was completely mistaken. So, as a result of that, I think, though I have no proof, only the time connection: right after that in early November — we’d sent the stuff in early November — Ambassador Bunker and General Abrams ordered a new review of the crop destruction program. Was it in response to our photographs and our letter? I don’t know, but I think it was.

The result of that review was a recommendation by Ambassador Bunker and General Abrams to stop the herbicide program immediately. They sent this recommendation back in a top secret telegram to Washington. Well, the top-secret telegram fell into the hands of the Washington Post, and they published it. Well, now here are the Ambassador and the General on the spot, saying to stop doing something in Vietnam. How on earth can anybody back in Washington gainsay them? Of course, President Nixon had to stop it right away. There’d be no grounds. How could he say, “Well, my guys here in Washington, in spite of what the people on the spot say, tell us we should continue this program.”

So that very day, he announced that the United States would stop all herbicide operations in Vietnam in a rapid and orderly manner. That very day happened to be the day that I, John Constable, and Art Westing were on the stage at the annual meeting in Chicago of the AAAS, reporting on our trip to Vietnam. And the president of AAAS ran up to me to tell me this news, because it just came in while I was talking, giving our report. So, that’s how it got stopped, and thanks to General Abrams.

By the way, the last day I was in Vietnam, General Abrams had just come back from Japan — he’d had an operation for gallbladder, and he was still convalescing. We spent all morning talking with each other. And he asked me at one point, “What about the military utility of the herbicides?” And of course, I said I had no idea what it was, or not. And he said, “Do you want to know what I think?” I said, “Yes, sir.” He said, “I think it’s shit.” I said, “Well, why are we doing it here?” He said, “You don’t understand anything about this war, young man. I do what I’m ordered to do from Washington. It’s Washington who tells me to use this stuff, and I have to use it because if I didn’t have those 55-gallon drums of herbicides offloaded on the decks at Da Nang and Saigon, then they’d make walls. I couldn’t offload the stuff I need over those walls. So, I do let the chemical corps use this stuff.” He said, “Also, my son, who is a captain up in I Corps, agrees with me about that.”

I wrote something about this recently, which I sent to you, Ariel. I want to be sure my memory was right about the conversation with General Abrams — who, by the way, was a magnificent man. He is the man who broke through at the Battle of the Bulge in World War II. He’s the man about whom General Patton, the great tank general, said, “There’s only one tank officer greater than me, and it’s Abrams.”

Max: Is he the one after whom the Abrams tank is named?

Matthew: Yes, it was named after him. Yes. He had four sons, they all became generals, and I think three of them became four-stars. One of them who did become a four-star is still alive in Washington. He has a consulting company. I called him up and I said, “Am I right, is this what your dad thought and what you thought back then?” He said, “Hell, yes. It’s worse than that.” Anyway, that’s what stopped the herbicides. They may have stopped anyway. It was dwindling down, no question. Now the question of whether dioxin and herbicides have caused too many health effects, I just don’t know. There’s an immense literature about this and it’s nothing I can say we ever studied. If I read all the literature, maybe I’d have an opinion.

I do know that dioxin is very poisonous, and there’s a prelude to this order from President Nixon to stop the use of all herbicides. That’s what caused the United States to stop the use of Agent Orange specifically. That happened first, before I went to Vietnam. That happened for a funny reason. A Harvard student, a Vietnamese boy, came to my office one day with a stack of newspapers from Saigon in Vietnamese. I couldn’t read them, of course, but they all had pictures of deformed babies, and this student claimed that this was because of Agent Orange, that the newspaper said it was because of Agent Orange.

Well, deformed babies are born all the time and I appreciated this coming from him, but there’s nothing I could do about it. But then I got something from a graduate student here, Bill Haseltine, who has since become a very wealthy man. He had a girlfriend who was working for Ralph Nader one summer, and she somehow got a purloined copy of a study that had been ordered by the NIH of the possible teratogenic, mutagenic, and carcinogenic effects of common herbicides, pesticides, and fungicides.

This company, called the Bionetics company, had this huge contract to test all these different compounds, and they concluded from this that there was only one of these chemicals that did anything that might be dangerous for people. That was 2,4,5-T, 2,4,5-trichlorophenoxyacetic acid. Well, that’s what Agent Orange is made out of. So, I had this report that had not yet been released to the public saying that this could cause birth defects in humans if it did the same thing as it did in guinea pigs and mice. I thought, the White House had better know about this. That’s pretty explosive: claims in the newspapers in Saigon and scientific suggestions that this stuff might cause birth defects.

So, I decided to go down to Washington and see President Nixon’s science advisor. That was Lee DuBridge, physicist. Lee DuBridge had been the president of Caltech when I was a graduate student there and so he knew me, and I knew him. So, I went down to Washington with some friends, and I think one of the friends was Arthur Galston from Yale. He was a scientist who worked on herbicides, not on the phenoxyacetic herbicides but other herbicides. So we went down to see the President’s science advisor, and I showed them these newspapers and showed him the Bionetics report. He hadn’t seen it, it was at too low a level of government for him to see it and it had not yet been released to the public. Then he did something amazing, Lee DuBridge: He picked up the phone and he called David Packard, who was the number two at the Defense Department. Right then and there, without consulting anybody else, without asking the permission of the President, they canceled Agent Orange.

Max: Wow.

Matthew: That was the end of Agent Orange. Now, not exactly the end. I got a phone call from Lee DuBridge a couple of days later when I was back at Harvard. He says, “Matt, the Dow people have come to me. It’s not Agent Orange itself, it’s an impurity in Agent Orange called dioxin. They know that dioxin is very toxic, and the Agent Orange that they make has very little dioxin in it, because they know it’s bad and they make the stuff at low temperature, so the dioxin, which is a by-product, is made in very small amounts. These other companies, like Diamond Shamrock and Monsanto, who make Agent Orange for the military: it must be their Agent Orange. It’s not our Agent Orange.”

So, in other words, the question was: we just use the Dow Agent Orange — maybe that’s safe. But does the Dow Agent Orange cause defects in mice? So, a whole new series of experiments was done with Agent Orange containing much less dioxin in it. It still caused birth defects. And since it still caused birth defects in one species of rodent, you could hardly say, “Well, it’s okay then for humans.” So, that really locked it, closed it down, and then even the Department of Agriculture prohibited its use in the United States, except on land where it would have been unlikely to get into the human food chain. So, that ended the use of Agent Orange.

That had happened already before we went to Vietnam. They were then using only Agent White and Agent Blue, two other herbicides, but Agent Orange had been knocked out ahead of time. But that was the end of the whole herbicide program. It was two things: the dioxin concern, on the one hand, stopping Agent Orange, and the decision of President Nixon; and militarily Bunker and Abrams had said, “It’s no use, we want to get it stopped, it’s doing more harm than good. It’s getting the civilian population against us.”

Max: One reaction I have to these fascinating stories is how amazing it is that back in those days politicians really trusted scientists. You could go down to Washington, and there would be a science advisor. You know, we didn’t even have a presidential science advisor for a while during this administration. Do you feel that the climate has changed somehow in the way politicians view scientists?

Matthew: Well, I don’t have a big broad view of the whole thing. I just get the impression, like you do, that there are more politicians who don’t pay attention to science than there used to be. There are still some, but not as many, and not in the White House.

Max: I would say we shouldn’t particularly point fingers at any one administration; I think there has been a general downward trend in people’s respect for scientists overall. If you go back to when you were born, Matthew, and when I was born, I think people generally thought a lot more highly of scientists contributing very valuable things to society, and they were very interested in them. If you ask the average person how many famous movie stars they can name, or how many billionaires they can name, versus how many Nobel laureates they can name, the answer is going to be kind of different from the way it was a long time ago. It’s very interesting to think about what we can do to help people appreciate that the things they do care about, like living longer and having technology and so on, are things that they, to a large extent, owe to science. It isn’t just nerdy stuff that isn’t relevant to them.

Matthew: Well, I think movie stars were always at the top of the list. Way ahead of Nobel Prize winners and even of billionaires, but you’re certainly right.

Max: The second thing that really strikes me, which you did so wonderfully there, is that you never antagonized the politicians and the military, but rather went to them in a very constructive spirit and said look, here are the options. And based on the evidence, they came to your conclusion.

Matthew: That’s right. Except for the people who actually were doing these programs — that was different, you couldn’t very well tell them that. But for everybody else, yes, it was a help. You need to offer help, not hindrance.

The last thing was the Yellow Rain. That, too, involved the CIA. I was contacted by the CIA. They had become aware of reports from Southeast Asia, particularly from Thailand, of Hmong tribespeople who had been living in Laos, coming out of Laos across the Mekong into Thailand, and telling stories of being poisoned by stuff dropped from airplanes. Stuff that they called kemi or yellow rain.

At first, I thought maybe there was something to this, there are some nasty chemicals that are yellow. Not that lethal, but who knows, maybe there is exaggeration in their stories. One of them is called adamsite, it’s yellow, it’s an arsenical. So we decided we’d have a conference, because there was a  mystery: What is this yellow rain? We had a conference. We invited people from the intelligence community, from the state department. We invited anthropologists. We invited a bunch of people to ask, what is this yellow rain?

By this time, we knew that the samples that had been turned in contained pollen. One reason we knew that was that the British had samples of this yellow rain and they had shown that it contained pollen. They had looked at the samples of the yellow rain brought in by the Hmong tribespeople, given to British officers — or maybe Americans, I don’t know — but it found its way into the hands of British intelligence, who brought these samples back to Porton, where they were examined in various ways, including under the microscope. And the fellow who looked at them under the microscope happened to be a beekeeper. He knew just what pollen grains look like. And he knew that there was pollen, and then they sent this information to the United States, and we looked at the samples of yellow rain we had, and they all contained — all these yellow samples contained pollen.

The question was, what is it? It’s got pollen in it. Maybe it’s very poisonous. The Montagnard people say it falls from the sky. It lands on leaves and on rocks. The spots were about two millimeters in diameter. It’s yellow or brown or red, different colors. What is it? So, we had this meeting in Cambridge, and one of the people there, Peter Ashton, is a great botanist, his specialty is the trees of Southeast Asia and in particular the great dipterocarp trees, which are like the oaks in our part of the world. And he was interested in the fertilization of these dipterocarps, and the fertilization is done by bees. They collect pollen, though, like other bees.

And so the hypothesis we came to at the end of this day-long meeting was that maybe this stuff is poisonous, and the bees get poisoned by it because it falls on everything, including flowers that have pollen, and the bees get sick, and these yellow spots, they’re the vomit of the bees. These bees are smaller individually than the yellow spots, but maybe several bees get together and vomit on the same spot. Really a crazy idea. Nevertheless, it was the best idea we could come up with that explained why something could be toxic but have pollen in it. It could be little drops, associated with bees, and so on.

A couple of days later, both Peter Ashton, the botanist, and I noticed on the backs of our cars, on the rear windshields, yellow spots loaded with pollen. These were being dropped by bees; these were the natural droppings of bees, and that gave us the idea that maybe there was nothing poisonous in this stuff. Maybe it was the natural droppings of bees that the people in the villages thought were poisonous, but that weren’t. So, we decided we had better go to Thailand and find out what was happening.

So, a great bee biologist named Thomas Seeley, who’s now at Cornell — he was at Yale at that time — and I flew over to Thailand, and went up into the forest to see if bees defecate in showers. Now why did we do that? It’s because friends here said, “Matt, this can’t be the source of the yellow rain that the Hmong people complained about, because bees defecate one by one. They don’t go out in a great armada of bees and defecate all at once. Each bee goes out and defecates by itself. So, you can’t explain the showers — they’d only get tiny little driblets, and the Hmong people say they’re real showers, with lots of drops falling all at once.”

So, Tom Seeley and I went to Thailand, where they also had this kind of bee. So, we went there, and it turns out that they defecate all at once, unlike the bees here. Now they do defecate in showers here too, but they’re small showers. That’s because the number of bees in a nest here is rather small, but they do come out on the first warm days of spring, when there’s now pollen and nectar to be harvested, but those showers are kind of small. Besides that, the reason that there are showers at all even in New England is because the bees are synchronized by winter. Winter forces them to stay in their nest all winter long, during which they’re eating the stored-up pollen and getting very constipated. Now, when they fly out, they all fly out, they’re all constipated, and so you get a big shower. Not as big as the natives in Southeast Asia reported, but still a shower.

But in Southeast Asia, there are no seasons. Too near the equator. So, there’s nothing that would synchronize the defecation of bees, and that’s why we had to go to Thailand to see whether — even though there’s no winter to synchronize their defecation flights — they nevertheless do go out in huge numbers and all at once.

So, we’re in Thailand and we go up into the Khao Yai National Park and find places where there are clearings in the forests where you could see up into the sky, where if there were bees defecating their feces would fall to the ground, not get caught up in the trees. And we put down big pieces, one meter square, of white paper, and anchored them with rocks, and went walking around in the forest some more, and come back and look at our pieces of white paper every once in a while.

And then suddenly we saw a large number of spots on the paper, which meant that they had defecated all at once. They weren’t going around defecating one by one by one. There were great showers then. That’s still a question: Why don’t they go out one by one? And there are some good ideas why; I won’t drag you into that. It’s the convoy principle, to avoid getting picked off one by one by birds. That’s why people think that they go out in great armadas of constipated bees.

So, this gave us a new hypothesis. The so-called yellow rain is all a mistake. It’s just bees defecating, which people confuse and think is poisonous. Now, that still doesn’t prove that there wasn’t a poison. What was the evidence for poison? The evidence was that the Defense Intelligence Agency was sending samples of this yellow rain and also samples of human blood and other materials to a laboratory in Minnesota that knew how to analyze for the particular toxin that the Defense establishment thought was the poison. It’s a toxin called trichothecene mycotoxins, there’s a whole family of them. And this lab reported positive findings in the samples from Thailand but not in controls. So that seemed to be real proof that there was poison.

Well, this lab is a lab that also produced trichothecene mycotoxins, and the way they analyzed for them was by mass spectroscopy, and everybody knows that if you’re going to do mass spectroscopy, you’re going to be able to detect very, very, very tiny amounts of stuff, and so you shouldn’t both make large quantities and try to detect small quantities in the same room, because there’s the possibility of cross contamination. I have an internal report from the Defense Intelligence Agency saying that that laboratory did have numerous false positives, and that probably all of their results were bedeviled by contamination from the trichothecenes that were in the lab, and also because there may have been some false reading of the mass spec diagram.

The long and short of it is that when other laboratories tried to find trichothecenes in their samples: the US Army looked at at least 80 samples and found nothing. The British looked at at least 60 samples, found nothing. The Swedes looked at some number of samples, I don’t know the number, but found nothing. The French looked at a very few samples at their military analytical lab, and the French found nothing. No lab could confirm it. There was one lab at Rutgers that thought it could confirm it, but I believe that they were suffering from contamination also, because they were a lab that worked with trichothecenes also.

So, the long and short of it is that the chemical evidence was no good, and finally the ambassador there, Ambassador Dean, decided that we should have another look, and that the military should send out a team that was properly equipped to check up on these stories, because up until then there was no dedicated team. There were teams that would come out briefly, listen to the refugees’ stories, collect samples, and go back. So Ambassador Dean requested a team that would stay there. So out comes a team from Washington, and it stays there longer than a year. Not just a week, but longer than a year, and they tried to re-locate the Hmong people who had told these stories in the refugee camps.

They couldn’t find a single one who would tell the same story twice. Either because they weren’t telling the same story twice, or because the interpreter interpreted the same story differently. So, whatever it was. Then they did something else. They tried to find people who were in the same location at the same time as the claimed attacks, and those people never confirmed the attacks. They could never find any confirmation by interrogation of people.

Then also, there was a CIA unit out there in that theater questioning captured prisoners of war and also people who surrendered from the North Vietnamese army: the people who were presumably behind the use of this toxic stuff. And they interrogated hundreds of people, and one of these interrogators wrote an article in an Intelligence Agency journal, but an open journal, saying that he doubted that there was anything to the yellow rain, because they had interrogated so many people, including chemical corps people from the North Vietnamese Army, and he couldn’t believe that there really was anything going on.

So we did some more investigating of various kinds, not just going to Thailand, but doing some analysis of various things. We looked at the samples — we found bee hairs in the samples. We found that the bee pollen in the samples of the alleged poison had no protein inside. You can stain pollen grains with something called Coomassie brilliant blue, and these pollen grains that were in the samples handed in by the refugees, that were given to us by the army and by the Canadians, by the Australians, they didn’t stain blue. Why not? Because if a pollen grain passes through the gut of a bee, the bee digests out all of the good protein that’s inside the pollen grain, as its nutrition.

So, you’d have to believe that the Soviets were collecting pollen not from plants, which is hard enough, but pollen that had been regurgitated by bees. Well, that’s insane. You could never get enough to be a weapon by collecting bee vomit. So the whole story collapsed, and we’ve written a longer account of this. The United States government has never said we were right, but a few years ago said that maybe they were wrong. So that’s at least something.

So in one case we were right, and the Soviets were wrong. In another case, the Soviets were right, and we were wrong. And in the third case, the herbicides, nobody was right or wrong. It was just that, in my view, by the way, it was useless militarily. I’ll tell you why.

If you spray the deep forest, hoping to find a military installation that you can now see because there are no more leaves, it takes four or five weeks for the leaves to fall off. So, you might as well drop little courtesy cards that say, “Dear enemy. We have now sprayed where you are with herbicide. In four or five weeks we will see you. You may choose to stay there, in which case, we will shoot you. Or, you have four or five weeks to move somewhere else, in which case, we won’t be able to find you. You decide.” Well, come on, what kind of a brain came up with that?

The other use was along roadsides, for convoys to be safer from snipers who might be hidden in the woods. You knock the leaves off the trees and you can see deeper into the woods. That’s right, but you have to realize the fundamental law of physics, which is that if you can see from A to B, B can see back to A, right? If there’s a clear light path from one point to another, there’s a clear light path in the other direction.

Now think about it. You are a sniper in the woods, and the leaves now have not been sprayed. They grow right up to the edge of the forest and a convoy is coming down the road. You can stick your head out a little bit but not for very long. They have long-range weapons; When they’re right opposite you, they have huge firepower. If you’re anywhere nearby, you could get killed.

Now, if we get rid of all the leaves, now I can stand way back into the forest, and still sight you between the trunks. Now, that’s a different matter. A very slight move on my part determines how far up the road and down the road I can see. By just a slight movement of my eye and my gun, I can start putting you under fire a couple kilometers up the road — you won’t even know where it’s coming from. And I can keep you under fire a few kilometers down the road, when you pass me by. And you don’t know where I am anymore. I’m not right up by the roadside, because the leaves would otherwise keep me from seeing anything. I’m back in there somewhere. You can pour all kinds of fire, but you might not hit me.

So, for all these reasons, the leaves are not the enemy. The leaves are the enemy of the enemy. Not of us. We’d like to get rid of the trunks — that’s different, we do that with bulldozers. But getting rid of the leaves leaves a kind of a terrain which is advantageous to the enemy, not to us. So, on all these grounds, my hunch is that by embittering the civilian population — and after all our whole strategy was to win the hearts and minds — by embittering the native population by wiping out their crops with drifting herbicide, the herbicides helped us lose the war, not win it. We didn’t win it. But it helped us lose it.

But anyway, the herbicides got stopped in two steps. First Agent Orange, because of dioxin and the report from the Bionetics Company, and second because Abrams and Bunker said, “Stop it.” We now have a treaty, by the way, the ENMOD treaty, that makes it illegal under international law to do any kind of large-scale environmental modification as a weapon of war. So, that’s about everything I know.

And I should add: you might say, how could they interpret something that’s common in that region as a poison? Well, in China, in 1970, I believe it was, the same sort of thing happened, but the situation was very different. People believed that yellow spots were falling from the sky, that they were fallout from nuclear weapons tests being conducted by the Soviet Union, and that they were poisonous.

Well, the Chinese government asked a geologist from a nearby university to go investigate, and he figured out — completely out of touch with us, he had never heard of us, we had never heard of him — that it was bee feces that were being misinterpreted by the villagers as fallout from nuclear weapons tests done by the Russians.

It was exactly the same situation, except that in this case there was no reason whatsoever to believe that there was anything toxic there. And why was it that people didn’t recognize bee droppings for what they were? After all, there’s lots of bees out there. There are lots of bees here, too. And if in April, or near that part of spring, you look at the rear windshield of your car, if you’ve been out in the countryside or even here in midtown, you will see lots of these spots, and that’s what those spots are.

When I was trying to find out what kinds of pollen were in the samples of the yellow rain — the so-called yellow rain — that we had, I went down to Washington. The greatest United States expert on pollen grains and where they come from was at the Smithsonian Institution, a woman named Joan Nowicke. I told her that bees make spots like this all the time and she said, “Nonsense. I never see it.” I said, “Where do you park your car?” Well, there’s a big parking lot by the Smithsonian, we went down there, and her rear windshield was covered with these things. We see them all the time. They’re part of what we see, but we don’t take any account of them.

Here at Harvard there’s a funny story about that. One of our best scientists here, Ed Wilson, studies ants — but also bees — but mostly ants. But he knows a lot about bees. Well, he has an office in the museum building, and lots of people come to visit the museum at Harvard, a great museum, and there’s a parking lot for them. Now, there was a graduate student who had, in those days, bee nests up on top of the museum building. He was doing some experiments with bees. But these bees defecate, of course. And some of the nice people who come to see the Harvard Museum park their cars there, and some of them are very nice new cars, and they come back out from seeing the museum and there’s this stuff on their windshields. So, they go to find out who it is that they can blame for this and maybe do something about it, or pay to get it fixed, or I don’t know what — anyway, to make a complaint. So, they come to Ed Wilson’s office.

Well, this graduate student is a graduate student of Ed Wilson, and of course, he knows that he’s got bee nests up there, and so the secretary of Ed Wilson knows what this stuff is. And the graduate student has the job of taking a rag with alcohol on it and going down and gently wiping the bee feces off of the windshields of these distressed drivers, so there’s never any harm done. But now, when I had some of this stuff that I’d collected in Thailand, I took two people to lunch at the faculty club here at Harvard, and brought some leaves with these spots on them under a plastic petri dish, just to see if they would know.

Now, one of these guys, Carroll Williams, knew all about insects, lots of things about insects, and Wilson of course; and we’re having lunch and I bring out this petri dish with the leaves covered with yellow spots and asked them, two professors who are great experts on insects, what the stuff is, and they hadn’t the vaguest idea. They didn’t know. So, there can be things around us that we see every day, and even if we’re experts we don’t know what it is. We don’t notice it. It’s just part of the environment. We don’t notice it. I’m sure that these Hmong people were getting shot at, they were getting napalmed, they were getting everything else, but they were not getting poisoned. At least not by bee feces. It was all a big mistake.

Max: Thank you so much, both for this fascinating conversation and all the amazing things you’ve done to keep science a force for good in the world.

Ariel: Yes. This has been a really, really great and informative discussion, and I have loved learning about the work that you’ve done, Matthew. So, Matthew and Max, thank you so much for joining the podcast.

Max: Well, thank you.

Matthew: I enjoyed it. I’m sure I enjoyed it more than you did.

Ariel: No, this was great. It’s truly been an honor getting to talk with you.

If you’ve enjoyed this interview, let us know! Please like it, share it, or even leave a good review. I’ll be back again next month with more interviews with experts.  

 

AI Alignment Podcast: Human Cognition and the Nature of Intelligence with Joshua Greene

“How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind’s eyes and ears? How does your brain distinguish what it’s thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you’d believe me, and then I say, oh I was just kidding, didn’t really happen. You still have the idea in your head, but in one case you’re representing it as something true, in another case you’re representing it as something false, or maybe you’re representing it as something that might be true and you’re not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they’re false or you could just be agnostic, and that’s essential not just for idle speculation, but it’s essential for planning. You have to be able to imagine possibilities that aren’t yet actual. So these are all things we’re trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence.” -Joshua Greene

Josh Greene is a Professor of Psychology at Harvard, who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology and neuroscience. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Joshua Greene’s current research focuses on further understanding key aspects of both individual and collective intelligence. Deepening our knowledge of these subjects allows us to understand the key features which constitute human general intelligence, and how human cognition aggregates and plays out through group choice and social decision making. By better understanding the one general intelligence we know of, namely humans, we can gain insights into the kinds of features that are essential to general intelligence and thereby better understand what it means to create beneficial AGI. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

If you’re interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

Topics discussed in this episode include:

  • The multi-modal and combinatorial nature of human intelligence
  • The symbol grounding problem
  • Grounded cognition
  • Modern brain imaging
  • Josh’s psychology research using John Rawls’ veil of ignorance
  • Utilitarianism reframed as ‘deep pragmatism’
You can find out more about Joshua Greene at his website or follow his lab on their Twitter. You can listen to the podcast above or read the transcript below.

Lucas: Hey everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Joshua Greene about his research on human cognition as well as John Rawls’ veil of ignorance and social choice. Studying the human cognitive engine can help us better understand the principles of intelligence, and thereby aid us in arriving at beneficial AGI. It can also inform group choice and how to modulate persons’ dispositions to certain norms or values, and thus affect policy development in observed choice. Given this, we discussed Josh’s ongoing projects and research regarding the structure, relations, and kinds of thought that make up human cognition, key features of intelligence such as it being combinatorial and multimodal, and finally how a particular thought experiment can change how impartial a person is, and thus what policies they support.

And as always, if you enjoy this podcast, please give it a like, share it with your friends, and follow us on your preferred listening platform. As a bit of announcement, the AI Alignment Podcast will be releasing every other Wednesday instead of once a month, so there are a lot more great conversations on the way. Josh Greene is a professor of psychology at Harvard, who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology and neuroscience. And without further ado, I give you Josh Greene.

What sort of thinking has been predominantly occupying the mind of Josh Greene?

Joshua: My lab has two different main research areas that are related, but on a day to day basis are pretty separate. You can think of them as focused on key aspects of individual intelligence versus collective intelligence. On the individual intelligence side, what we’re trying to do is understand how our brains are capable of high level cognition. In technical terms, you can think of that as compositional semantics, or multimodal compositional semantics. What that means in more plain English is how does the brain take concepts and put them together to form a thought, so you can read a sentence like the dog chased the cat, and you understand that it means something different from the cat chased the dog. The same concepts are involved, dog and cat and chasing, but your brain can put things together in different ways in order to produce a different meaning.

Lucas: The black box for human thinking and AGI thinking is really sort of this implicit reasoning that is behind the explicit reasoning, which seems to be the most deeply mysterious, difficult part to understand.

Joshua: Yeah. A lot of where machine learning has been very successful has been on the side of perception, recognizing objects, or when it comes to going from say vision to language, simple labeling of scenes that are already familiar, so you can show an image of a dog chasing a cat and maybe it’ll say something like dog chasing cat, or at least we get that there’s a dog running and a cat chasing.

Lucas: Right. And the caveat is that it takes a massive amount of training, where it’s not one shot learning, it’s you need to be shown a cat chasing a dog a ton of times just because of how inefficient the algorithms are.

Joshua: Right. And the algorithms don’t generalize very well. So if I show you some crazy picture that you’ve never seen before where it’s a goat and a dog and Winston Churchill all wearing roller skates in a rowboat on a purple ocean, a human can look at that and go, that’s weird, and give a description like the one I just said. Whereas today’s algorithms are going to be relying on brute statistical associations, and that’s not going to cut it for getting a precise, immediate reasoning. So humans have this ability to have thoughts, which we can express in words, but we also can imagine in something like pictures.

And the tricky thing is that it seems like a thought is not just an image, right? So to take an example that I think comes from Daniel Dennett, if you hear the words yesterday my uncle fired his lawyer, you might imagine that in a certain way, maybe you picture a guy in a suit pointing his finger and looking stern at another guy in a suit, but you understand that what you imagined doesn’t have to be the way that that thing actually happened. The lawyer could be a woman rather than a man. The firing could have taken place by phone. The firing could have taken place by phone while the person making the call was floating in a swimming pool and talking on a cell phone, right?

The meaning of the sentence is not what you imagined. But at the same time we have the symbol grounding problem, that is it seems like meaning is not just a matter of symbols chasing each other around. You wouldn’t really understand something if you couldn’t take those words and attach them meaningfully to things that you can see or touch or experience in a more sensory and motor kind of way. So thinking is something in between images and in between words. Maybe it’s just the translation mechanism for those sorts of things, or maybe there’s a deeper language of thought, to use Jerry Fodor’s famous phrase. But in any case, what part of my lab is trying to do is understand how does this central, really poorly understood aspect of human intelligence work? How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind’s eyes and ears?

How does your brain distinguish what it’s thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you’d believe me, and then I say, oh I was just kidding, didn’t really happen. You still have the idea in your head, but in one case you’re representing it as something true, in another case you’re representing it as something false, or maybe you’re representing it as something that might be true and you’re not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they’re false or you could just be agnostic, and that’s essential not just for idle speculation, but it’s essential for planning. You have to be able to imagine possibilities that aren’t yet actual.

So these are all things we’re trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence.

Lucas: Right. So what’s deeply mysterious here is the kinetics that underlie thought, which is sort of like meta-learning or meta-awareness, or how it is that we’re able to have this deep and complicated implicit reasoning behind all of these things. And what that actually looks like seems deeply puzzling; it’s sort of the core and the gem of intelligence, really.

Joshua: Yeah, that’s my view. I think we really don’t understand the human case yet, and my guess is that obviously it’s all neurons that are doing this, but these capacities are not well captured by current neural network models.

Lucas: So also just two points of question or clarification. The first is this sort of hypothesis that you proposed, that human thoughts seem to require some sort of empirical engagement. And then what was your claim about animals, sorry?

Joshua: Well animals certainly show some signs of thinking, especially some animals like elephants and dolphins and chimps engage in some pretty sophisticated thinking, but they don’t have anything like human language. So it seems very unlikely that all of thought, even human thought, is just a matter of moving symbols around in the head.

Lucas: Yeah, it’s definitely not just linguistic symbols, but it still feels like conceptual symbols that have structure.

Joshua: Right. So this is the mystery, human thought, you could make a pretty good case that symbolic thinking is an important part of it, but you could make a case that symbolic thinking can’t be all it is. And a lot of people in AI, most notably DeepMind, have taken the strong view and I think it’s right, that if you’re really going to build artificial general intelligence, you have to start with grounded cognition, and not just trying to build something that can, for example, read sentences and deduce things from those sentences.

Lucas: Right. Do you want to unpack what grounded cognition is?

Joshua: Grounded cognition refers to a representational system where the representations are derived, at least initially, from perception and from physical interaction. There’s perhaps a relationship with empiricism in the broader philosophy of science, but you could imagine trying to build an intelligent system by giving it lots and lots and lots of words, giving it lots of true descriptions of reality, and giving it inference rules for going from some descriptions to other descriptions. That just doesn’t seem like it’s going to work. You don’t really understand what apple means unless you have some sense of what an apple looks like, what it feels like, what it tastes like; it doesn’t have to be all of those things. You can know what an apple is without ever having eaten one, or I could describe some fruit to you that you’ve never seen, but you have experience with other fruits or other physical objects. Words don’t just exist in a symbol storm vacuum. They’re related to things that we see and touch and interact with.

Lucas: I think for me, just going most foundationally, the question is before I know what an apple is, do I need to understand spatial extension and object permanence? I have to know time, I have to have some very basic ontological understanding and world model of the universe.

Joshua: Right. So we have some clues from human developmental psychology about what kinds of representations, understandings, capabilities humans acquire, and in what order. To state things that are obvious, but nevertheless revealing, you don’t meet any humans who understand democratic politics before they understand objects.

Lucas: Yes.

Joshua: Right?

Lucas: Yeah.

Joshua: Which sounds obvious and it is in a sense obvious, right? But it tells you something about what it takes to build up abstract and sophisticated understandings of the world and possibilities for the world.

Lucas: Right. So for me it seems that the place where grounded cognition lives most fundamentally is somewhere in between the genetic code that seeds the baby and when the baby comes out: the epistemics, or whatever is in there, that has the capacity to one day potentially become Einstein. So what is that grounded cognition in the baby that underlies this potential to be a quantum physicist or a scientist-

Joshua: Or even just a functioning human. 

Lucas: Yeah.

Joshua: I mean even people with mental disabilities walk around and speak and manipulate objects. I think that in some ways the harder question is not how do we get from normal human to Einstein, but how do we get from a newborn to a toddler? And the analogous or almost analogous question for artificial intelligence is how do you go from a neural network that has some kind of structure, some structure that’s favorable for acquiring useful cognitive capabilities, and how do you figure out what that starting structure is, which is kind of analogous to the question of how does the brain get wired up in utero?

And it gets connected to these sensors that we call eyes and ears, and it gets connected to these effectors that we call hands and feet. And it’s not just a random blob of connectoplasm, the brain has a structure. So one challenge for AI is what’s the right structure for acquiring sophisticated intelligence, or what are some of the right structures? And then what kind of data, what kind of training, what kind of training process do you need to get there?

Lucas: Pivoting back to the relevance of this for AGI, there is, like you said, this fundamental issue of grounded cognition that babies and toddlers have that sort of leads them to become full human-level intelligences eventually. How does one work to isolate the features of grounded cognition that enable babies to grow and become adults?

Joshua: Well, I don’t work with babies, but I can tell you what we’re doing with adults, for example.

Lucas: Sure.

Joshua: In the one paper in this line of research we’ve already published (this is work led by Steven Franklin), we have people reading sentences like the dog chased the cat, the cat chased the dog, or the dog was chased by the cat and the cat was chased by the dog. And what we’re doing is looking for parts of the brain where the pattern is different depending on whether the dog is chasing the cat or the cat is chasing the dog. So it has to be something that’s not just involved in representing dog or cat or chasing, but in representing the composition of those three concepts, where they’re composed in one way rather than another way. And what we found is that there’s a region in the temporal lobe where the pattern is different for those things.

And more specifically, what we’ve found is that in one little spot in this broader region in the temporal lobe, you can better than chance decode who the agent is. So if it’s the dog chased the cat, then in this spot you can better than chance tell that it’s dog that’s doing the chasing. If it’s the cat was chased by the dog, same thing. So it’s not just about the order of the words, and then you can decode better than chance that it’s cat being chased for a sentence like that. So the idea is that these spots in the temporal lobe are functioning like data registers, and representing variables rather than specific values. That is this one region is representing the agent who did something and the other region is representing the patient, as they say in linguistics, who had something done to it. And this is starting to look more like a computer program where the way classical programs work is they have variables and values.

Like if you were going to write a program that translates Celsius into Fahrenheit, what you could do is construct a giant table telling you what Fahrenheit value corresponds to what Celsius value. But the more elegant way to do it is to have a formula where the formula has variables, right? You put in the Celsius value, you multiply it by the right thing and you get the Fahrenheit value. And then what that means is that you’re taking advantage of that recurring structure. Well, the something does something to something else is a recurring structure in the world and in our thought. And so if you have something in your brain that has that structure already, then you can quickly slot in dog as agent, chasing as the action, cat as patient, and that way you can very efficiently and quickly combine new ideas. So the upshot of that first work is that it seems like when we’re representing the meaning of a sentence, we’re actually doing it in a more classical computer-ish way than a lot of neuroscientists might have thought.
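
To make the variables-and-values analogy concrete, here is a minimal Python sketch. It is purely illustrative and not from the interview or the lab’s work: the function names and the dog/cat example are placeholders for the idea that a formula with slots reuses structure, the way an agent/action/patient frame does.

```python
# A minimal sketch of the analogy: a lookup table would have to enumerate
# every Celsius/Fahrenheit pair, while a formula with a variable reuses structure.

def celsius_to_fahrenheit(celsius: float) -> float:
    """Formula with a variable: works for any input, not just memorized pairs."""
    return celsius * 9 / 5 + 32

# The "something does something to something else" idea treated the same way:
# a recurring relational frame with slots that different concepts can fill.
def compose(agent: str, action: str, patient: str) -> dict:
    """Slot concepts into a reusable agent/action/patient frame."""
    return {"agent": agent, "action": action, "patient": patient}

# Same three concepts, different bindings, different meaning: that is the point
# of the decoding result, where separate regions carry the agent vs. patient role.
assert compose("dog", "chase", "cat") != compose("cat", "chase", "dog")
```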

Lucas: It’s Combinatorial.

Joshua: Yes, exactly. So what we’re trying to get at is modes of composition. In that experiment, we did it with sentences. In an experiment we’re now doing, this is being led by my grad student Dylan Plunkett, and Steven Franklin is also working on this, we’re doing it with words and with images. We actually took a bunch of photos of different people doing different things. Specifically we have a chef which we also call a cook, and we have a child which we also call a kid. We have a prisoner, which we also call an inmate, and we have male and female versions of each of those. And sometimes one is chasing the other and sometimes one is pushing the other. In the images, we have all possible combinations of the cook pushes the child, the inmate chases the chef-

Lucas: Right, but it’s also gendered.

Joshua: We have male and female versions for each. And then we have all the possible descriptions. And in the task what people have to do is you put two things on the screen and you say, do these things match? So sometimes you’ll have two different images and you have to say, do those images have the same meaning? So it could be a different chef chasing a different kid, but if it’s a chef chasing a kid in both cases, then you would say that they match. Whereas if it’s a chef chasing an inmate, then you’d say that they don’t. And then in other cases you would have two sentences, like the chef chased the kid, or it could be the child was chased by the cook, or was pursued by the cook, and even though those are all different words in different orders, you recognize that they have the same meaning, or close enough.

And then in the most interesting case, we have an image and a set of words, which you can think of it as as a description, and the question is, does it match? So if you see a picture of a chef chasing a kid, and then the words are chef chases kid or cook pursues child, then you’d say, okay, that one’s a match. And what we’re trying to understand is, is there something distinctive that goes on in that translation process when you have to take a complex thought, not complex in the sense of very sophisticated by human standards, but complex in the sense that it has parts, that it’s composite, and translate it from a verbal representation to a visual representation, and is that different or is the base representation visual? So for example, one possibility is when you get two images, if you’re doing something that’s fairly complicated, you have to translate them both into words. It’s possible that you could see language areas activated when people have to look at two images and decide if they match. Or maybe not. Maybe you can do that in a purely visual kind of way-
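
A rough sketch, in Python, of the matching logic behind the task Josh describes. This is an illustration rather than the lab’s stimulus code: the synonym table is hypothetical, and it assumes each stimulus has already been reduced to an agent/action/patient triple (e.g., with passives converted upstream).

```python
# Two stimuli "match" if they encode the same agent/action/patient relation,
# regardless of surface form (synonyms, active vs. passive, words vs. image).

SYNONYMS = {"cook": "chef", "kid": "child", "inmate": "prisoner",
            "pursues": "chases", "pursued": "chased"}

def normalize(token: str) -> str:
    """Map synonyms onto a single canonical concept name."""
    return SYNONYMS.get(token, token)

def parse(stimulus: tuple) -> tuple:
    """Reduce a (who, did-what, to-whom) triple to canonical concepts."""
    agent, action, patient = (normalize(t) for t in stimulus)
    return agent, action, patient

def match(stimulus_a: tuple, stimulus_b: tuple) -> bool:
    return parse(stimulus_a) == parse(stimulus_b)

# "The chef chased the kid" vs. "The child was pursued by the cook":
print(match(("chef", "chased", "kid"), ("cook", "pursued", "child")))  # True
print(match(("chef", "chased", "kid"), ("kid", "chased", "chef")))     # False
```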

Lucas: And maybe it depends on the person. Like some meditators will report that after long periods of meditation, certain kinds of mental events happen much less or just cease, like images or like linguistic language or things like that.

Joshua: So that’s possible. Our working assumption is that for basic things like understanding the meaning of the chef chased the kid, and being able to point to a picture of that and say that’s the thing the sentence described, our brains all do this more or less the same way. That could be wrong, but our goal is to get at basic features of high level cognition that all of us share.

Lucas: And so one of these again is this combinatorial nature of thinking.

Joshua: Yes. That I think is central to it. That it is combinatorial or compositional, and that it’s multimodal, that you’re not just combining words with other words, you’re not just combining images with other images, you’re combining concepts that are either not tied to a particular modality or connected to different modalities.

Lucas: They’re like different dimensions of human experience. You can integrate it whether you feel it or see it (some people are synesthetic), or it could be a concept, or it could be language, or it could be heard, or it could be subtle intuition, and all of that seems to sort of come together. Right?

Joshua: It’s related to all those things.

Lucas: Yeah. Okay. And so sorry, just to help me get a better picture here of how this is done. So this is an MRI, right?

Joshua: Yeah.

Lucas: So for me, I’m not in this field, and my sense is that the brain is so complex that our resolution is generally just that different areas of the brain light up, and so we understand what these areas are generally tasked for, and we can sort of see how they relate when people undergo different tasks. Right?

Joshua: No, we can do better than that. So that was kind of brain imaging 1.0, and brain imaging 2.0 is not everything we want from a brain imaging technology, but it does take us a level deeper. Which is to say, instead of just saying this brain region is involved, or it ramps up when people are doing this kind of thing, region-function relationships, we can look at the actual encoding of content: I can train a pattern classifier. So let’s say you’re showing people pictures of a dog or the word dog versus other things. You can train a pattern classifier to recognize the difference between someone looking at a dog versus looking at a cat, or reading the word dog versus reading the word cat. There are patterns of activity that are more subtle than just this region is active or more or less active.
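
For readers who want a feel for what training a pattern classifier on imaging data can look like, here is a hedged Python sketch using scikit-learn on simulated voxel patterns. It is not the lab’s pipeline; the trial counts, voxel counts, and signal strength are made up for illustration.

```python
# Simulate multi-voxel activity patterns for two conditions ("dog" vs. "cat"),
# then test whether a classifier can decode the condition better than chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

labels = np.repeat([0, 1], n_trials // 2)        # 0 = dog trials, 1 = cat trials
signal = rng.normal(size=n_voxels) * 0.3         # weak condition-specific pattern
X = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, signal)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)   # held-out decoding accuracy
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```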

Lucas: Right. So the activity is distinct in a way that when you train the thing on when it looks like people are recognizing cats, then it can recognize that in the future.

Joshua: Yeah.

Lucas: So is there anything besides this multimodal and combinatorial features that you guys have isolated, or that you’re looking into, or that you suppose are like essential features of grounded cognition?

Joshua: Well, this is what we’re trying to study, and we have the one result that’s kind of done and published, which I described, about representing the meaning of a sentence in terms of representing the agent here and the patient there for that kind of sentence, and we have some other stuff in the pipeline that’s getting at the kinds of representations that the brain uses to combine concepts and also to distinguish concepts that are playing different roles. In another set of studies we have people thinking about different objects.

Sometimes they’ll think about an object in a case where they’d actually get money if it turns out that that object is the one that’s going to appear later. It looks like when you think about, say, dog in that case (if it turns out that it’s dog under the card, then you’ll get five bucks), you see that you’re able to decode the dog representation in part of our motivational circuitry, whereas you don’t see that if you’re just thinking about it. So that’s another example: things are represented in different places in the brain depending on what function that representation is serving at that time.

Lucas: So with this pattern recognition training that you can do based on how people recognize certain things, you’re able to see sort of the sequence and kinetics of the thought.

Joshua: MRI is not great for temporal resolution. So what we’re not seeing is how on the order of milliseconds a thought gets put together.

Lucas: Okay. I see.

Joshua: What MRI is better for, it has better spatial resolution and is better able to identify spatial patterns of activity that correspond to representing different ideas or parts of ideas.

Lucas: And so in the future, as our resolution begins to increase in terms of temporal imaging or being able to isolate more specific structures, I’m just trying to get a better understanding of what your hopes are for increased ability of resolution and imaging in the future, and how that might also help to disclose grounded cognition.

Joshua: One strategy for getting a better understanding is to combine different methods. fMRI can give you some indication of where you’re representing the fact that it’s a dog that you’re thinking about as opposed to a cat. But other neuroimaging techniques have better temporal resolution but not as good spatial resolution. So EEG which measures electrical activity from the scalp has millisecond temporal resolution, but it’s very blurry spatially. The hope is that you combine those two things and you get a better idea. Now both of these things have been around for more than 20 years, and there hasn’t been as much progress as I would have hoped combining those things. Another approach is more sophisticated models. What I’m hoping we can do is say, all right, so we have humans doing this task where they are deciding whether or not these images match these descriptions, and we know that humans do this in a way that enables them to generalize, so that if they see some combination of things they’ve never seen before.

Joshua: Like this is a giraffe chasing a Komodo Dragon. You’ve never seen that image before, but you could look at that image for the first time and say, okay, that’s a giraffe chasing a Komodo Dragon, at least if you know what those animals look like, right?

Lucas: Yeah.

Joshua: So then you can say, well, what does it take to train a neural network to be able to do that task? And what does it take to train a neural network to be able to do it in such a way that it can generalize to new examples? So if you teach it to recognize Komodo Dragon, can it then generalize such that, well, it learned how to recognize giraffe chases lion, or lion chases giraffe, and so it understands chasing, and it understands lion, and it understands giraffe. Now if you teach it what a Komodo dragon looks like, can it automatically slot that into a complex relational structure?

And so then let’s say we have a neural network that we’ve trained that is able to do that. It’s not all of human cognition. We assume it’s not conscious, but it may capture key features of that cognitive process. And then we look at the model and say, okay, well in real time, what is that model doing and how is it doing it? And then we have a more specific hypothesis that we can take back to the brain and ask, well, does the brain do it something like the way this artificial neural network does it? And so the hope is that by building artificial neural models of these certain aspects of high level cognition, we can better understand human high level cognition, and the hope is also that it will feed back the other way. Where if we look and say, oh, this seems to be how the brain does it, well, what if we mimic that kind of architecture in an artificial neural network? Does that enable it to solve the problem in a way that it otherwise wouldn’t?
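
One common way to test the kind of generalization Josh describes is to hold particular combinations out of training while making sure every individual concept still appears somewhere in the training set. The following is a hedged Python sketch of that kind of split; the animal and action lists are hypothetical, and no actual model is trained here.

```python
# Build a compositional train/test split: the held-out events use combinations
# the model never saw in training, even though every concept appears in training.

from itertools import product

animals = ["dog", "cat", "giraffe", "lion", "komodo_dragon"]
actions = ["chases", "pushes"]

all_events = [(a, act, p) for a, act, p in product(animals, actions, animals) if a != p]

# Hold out one specific agent/patient pairing across all actions.
held_out = [e for e in all_events if e[0] == "giraffe" and e[2] == "komodo_dragon"]
training = [e for e in all_events if e not in held_out]

# Sanity check: every concept in the held-out events still occurs in training,
# just never in this particular combination.
concepts_in_training = {c for e in training for c in e}
assert all(c in concepts_in_training for e in held_out for c in e)

# A model succeeds at this kind of generalization if, after training only on
# `training`, it still interprets or produces the `held_out` events correctly.
```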

Lucas: Right. I mean, we already have AGIs; they just have to be created by humans, they live about 80 years, and then they die. So we already have an existence proof, and the problem really is that the brain is so complicated that there are difficulties replicating it on machines. And so I guess the key is how much our study of the human brain can inform our creation of AGI through machine learning or deep learning or other methodologies.

Joshua: And it’s not just that the human brain is complicated, it’s that the general intelligence that we’re trying to replicate in machines only exists in humans. You could debate the ethics of animal research and sticking electrodes in monkey brains and things like that, but within ethical frameworks that are widely accepted, you can do things to monkeys or rats that help you really understand in a detailed way what the different parts of their brain are doing, right?

But for good reason, we don’t do those sorts of studies with humans, and we would understand much, much, much, much more about how human cognition works if we were–

Lucas: A bit more unethical.

Joshua: If we were a lot more unethical, if we were willing to cut people’s brains open and say, what happens if you lesion this part of the brain? What happens if you then have people do these 20 tasks? No sane person is suggesting we do this. What I’m saying is that part of the reason why we don’t understand it is because it’s complicated, but another part of the reason why we don’t understand it is that we are, very much rightly, placing ethical limits on what we can do in order to understand it.

Lucas: Last thing here that I just wanted to touch on: when I’ve got this multimodal combinatorial thing going on in my head, when I’m thinking about how a Komodo dragon is chasing a giraffe, how deep does that combinatorial structure need to go for me to be able to see the Komodo dragon chasing the giraffe? Your earlier example was a purple ocean with a Komodo dragon wearing a sombrero hat and smoking a cigarette. I guess I’m just wondering, what is the dimensionality, and how much do I need to know about the world in order to really capture a Komodo dragon chasing a giraffe in a way that is actually general and important, rather than some kind of brittle, heavily trained ML algorithm that doesn’t really know what a Komodo dragon chasing a giraffe is?

Joshua: It depends on what you mean by really know. Right? But at the very least you might say it doesn’t really know it if it can’t both recognize it in an image and output a verbal label. That’s the minimum, right?

Lucas: Or generalize to a new context-

Joshua: And generalize to new cases, right. And I think generalization is key, right. What enables you to understand the crazy scene you described is not that you’ve seen so many scenes that one of them is a pretty close match, but instead that you have this compositional engine: you understand the relations, and you understand the objects, and that gives you the power to construct this effectively infinite set of possibilities. So what we’re trying to understand is, what is the cognitive engine that interprets and generates those infinite possibilities?

Lucas: Excellent. So do you want to sort of pivot here into how Rawls’s veil of ignorance fits in?

Joshua: Yeah. So on the other side of the lab, one side is focused more on this sort of key aspect of individual intelligence. On the more moral and social side of the lab, we’re trying to understand our collective intelligence and our social decision making, and we’d like to do research that can help us make better decisions. Of course, what counts as better is always contentious, especially when it comes to morality, but there are influences that one could plausibly interpret as better. Right? One of the most famous ideas in moral and political philosophy is John Rawls’s idea of the veil of ignorance, where what Rawls essentially said is: you want to know what a just society looks like? Well, the essence of justice is impartiality. It’s not favoring yourself over other people. Everybody has to play by the same rules. It doesn’t mean necessarily everybody gets exactly the same outcome, but you can’t get special privileges just because you’re you.

And so what he said was, well, a just society is one that you would choose if you didn’t know who in that society you would be. Even if you are choosing selfishly, you are constrained to be impartial because of your ignorance: you don’t know where you’re going to land in that society. And so what Rawls says, very plausibly, is: would you rather be randomly slotted into a society where a small number of people are extremely rich and most people are desperately poor? Or would you rather be slotted into a society where most people aren’t rich but are doing pretty well? The answer pretty clearly is you’d rather be slotted randomly into a society where most people are doing pretty well instead of a society where you could be astronomically well off, but most likely would be destitute. Right? So this is all background: Rawls applied this idea of the veil of ignorance to the structure of society overall, and said a just society is one that you would choose if you didn’t know who in it you were going to be.

And this sort of captures the idea of impartiality as the core of justice. So what we’ve been doing recently, and this is a project led by Karen Huang and Max Bazerman along with myself, is applying the veil of ignorance idea to more specific dilemmas. So one of the places where we have applied this is with ethical dilemmas surrounding self driving cars. We took a case that was most famously discussed recently by Bonnefon, Shariff, and Rahwan in their 2016 Science paper, The Social Dilemma of Autonomous Vehicles, and the canonical version goes something like this: you’ve got an autonomous vehicle, an AV, that is headed towards nine people, and if nothing is done, it’s going to run those nine people over. But it can swerve out of the way and save those nine people, but if it does that, it’s going to drive into a concrete wall and kill the passenger inside.

So the question is: should the car swerve or should it go straight? Now, you can just ask people: what do you think the car should do, or would you approve a policy that says that in a situation like this, the car should minimize the loss of life and therefore swerve? What we did is, some people just answered the question the way I posed it, but other people did a veil of ignorance exercise first. So we say, suppose you’re going to be one of these 10 people, the nine on the road or the one in the car, but you don’t know who you’re going to be.

From a purely selfish point of view, would you want the car to swerve or not? And almost everybody says, I’d rather have the car swerve. I’d rather have a nine out of 10 chance of living instead of a one out of 10 chance of living. And then we asked people, okay, that was a question about what you would want selfishly if you didn’t know who you were going to be. Would you approve of a policy that said that cars in situations like this should swerve to minimize the loss of life?

The people who’ve gone through the veil of ignorance exercise first are more likely to approve of the utilitarian policy, the one that aims to minimize the loss of life, than the people who just answered the question directly. And we have control conditions where we have people do a version of the veil of ignorance exercise but where the probabilities are mixed up, so there’s no relationship between the probability and the number of people; that’s sort of the tightest control condition, and you still see the effect. The idea is that the veil of ignorance is a cognitive device for thinking about a dilemma in a more impartial kind of way.

And then what’s interesting is that people recognize this; they do a bit of philosophizing. They say, huh, if I said that what I would want is to have the car swerve, and I didn’t know who I was going to be, that’s an impartial judgment in some sense. And that means that even if I feel sort of uncomfortable about the idea of a car swerving and killing its passenger in a way that is foreseen, if not intended in the most ordinary sense, even if I feel kind of bad about that, I can justify it because I say, look, it’s what I would want if I didn’t know who I was going to be. So we’ve done this with self driving cars, we’ve done it with the classic trolley dilemma, we’ve done it with a bioethical case involving taking oxygen away from one patient and giving it to nine others, and we’ve done it with a charity case where we have people making a real decision involving real money between a more versus less effective charity.

And across all of these cases, what we find is that when you have people go through the veil of ignorance exercise, they’re more likely to make decisions that promote the greater good. It’s an interesting bit of psychology, but it’s also perhaps a useful tool. That is, we’re going to be facing policy questions where we have gut reactions that might tell us that we shouldn’t do what favors the greater good, but if we think about it from behind a veil of ignorance and come to the conclusion that actually we’re in favor of what promotes the greater good, at least in that situation, then that can change the way we think. Is that a good thing? If you have consequentialist inclinations like me, you’ll think it’s a good thing, or if you just believe in the procedure, that is, I like whatever decisions come out of a veil of ignorance procedure, then you’ll think it’s a good thing. I think it’s interesting that it affects the way people make the choice.
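For a concrete picture of what the condition comparison in an experiment like this might look like, here is a minimal sketch in Python. The counts are invented purely for illustration (they are not the study’s data), and the published analysis may well differ; the sketch just shows the shape of the comparison, approval rates for the utilitarian policy with and without the veil of ignorance exercise.

```python
# Hypothetical sketch: compare approval of a "swerve to minimize loss of life" policy
# between a veil-of-ignorance (VOI) condition and a control condition.
# The counts below are made up for illustration; they are not the study's data.
from scipy.stats import chi2_contingency

#                   [approved, did not approve]
voi_condition     = [140, 60]   # hypothetical: 70% approval after the VOI exercise
control_condition = [110, 90]   # hypothetical: 55% approval without it

chi2, p_value, dof, expected = chi2_contingency([voi_condition, control_condition])

print("Approval after VOI exercise:", voi_condition[0] / sum(voi_condition))
print("Approval in control:        ", control_condition[0] / sum(control_condition))
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```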

Lucas: It’s got me thinking about a lot of things. One is that I feel like if most people on earth had a philosophy education, or at least had some time to think about ethics, they’d probably update their morality in really good ways.

Joshua: I would hope so. But I don’t know how much of our moral dispositions come from explicit education versus our broader personal and cultural experiences. Certainly I think it’s worth trying, and I certainly believe in the possibility; this is why I do research on it. But I come to it with some humility about how much that by itself can accomplish. I don’t know.

Lucas: Yeah, it would be cool to see the effect size of Rawls’s veil of ignorance across different societies and persons. And there are other things you can do, like the child drowning in the shallow pool argument; there are tons of different thought experiments, and it would be interesting to see how each updates people’s ethics and morality. The other thing I just wanted to inject here is the difference between naive consequentialism and sophisticated consequentialism. Sophisticated consequentialism would take into account not only the direct effect of saving more people, but also how human beings have arbitrary partialities to what I would call fictions, like rights or duties or other things. A lot of people share these, and I think within our consequentialist understanding and framework of the world, people just don’t like the idea of their car smashing into walls. Whereas yeah, we should save more people.

Joshua: Right. And as Bonnefon and colleagues point out, and I completely agree, if making cars narrowly utilitarian, in the sense that they always try to minimize the loss of life, makes people not want to ride in them, and that means that there are more accidents that lead to human fatalities because people are driving instead of being driven, then that is bad from a consequentialist perspective, right? So you can call it sophisticated versus naive consequentialism, but really there’s no question that utilitarianism or consequentialism in its original form favors the more sophisticated readings. So it’s kind of more-

Lucas: Yeah, I just feel that people often don’t do the sophisticated reasoning, and then they come to conclusions.

Joshua: And this is why I’ve attempted with not much success, at least in the short term, to rebrand utilitarianism as what I call deep pragmatism. Because I think when people hear utilitarianism, what they imagine is everybody walking around with their spreadsheets and deciding what should be done based on their lousy estimates of the greater good. Whereas I think the phrase deep pragmatism gives you a much clearer idea of what it looks like to be utilitarian in practice. That is you have to take into account humans as they actually are, with all of their biases and all of their prejudices and all of their cognitive limitations.

When you do that, it’s obviously a lot more subtle and flexible and cautious than-

Lucas: Than people initially imagine.

Joshua: Yes, that’s right. And I think utilitarianism has a terrible PR problem, and my hope is that we can either stop talking about the U philosophy and talk instead about deep pragmatism (we’ll see if that ever happens), or at the very least, learn to avoid those mistakes when we’re making serious decisions.

Lucas: The other very interesting thing this brings up: if I do the veil of ignorance thought exercise, then I’m more partial towards saving more people and towards policies that reduce the loss of life. And then I realize that I actually do have this strange, arbitrary partiality, like wanting the car I bought not to crash me into a wall, and from a third person point of view that seems kind of irrational, because the utilitarian option initially seems most rational. But then we have the chance to reflect as persons: well, maybe I shouldn’t have these arbitrary beliefs. Maybe we should start updating our culture in ways that get rid of these biases so that the utilitarian calculations aren’t so corrupted by scary primate thoughts.

Joshua: Well, so I think the best way to think about it is: how do we make progress? Not: how do we radically transform ourselves into alien beings who are completely impartial? I don’t think that’s the most useful thing to do. Take the special case of charitable giving: you could turn yourself into a happiness pump, that is, devote all of your resources to providing money for the world’s most effective charities.

And you may do a lot of good as an individual compared to other individuals if you do that, but most people are going to look at you and just say, well that’s admirable, but it’s super extreme. That’s not for me, right? Whereas if you say, I give 10% of my money, that’s an idea that can spread, that instead of my kids hating me because I deprived them of all the things that their friends had, they say, okay, I was brought up in a house where we give 10% and I’m happy to keep doing that. Maybe I’ll even make it 15. You want norms that are scalable, and that means that your norms have to feel livable. They have to feel human.

Lucas: Yeah, that’s right. We should be spreading more deeply pragmatic approaches and norms.

Joshua: Yeah. We should be spreading the best norms that are spreadable.

Lucas: Yeah. There you go. So thanks so much for joining me, Joshua.

Joshua: Thanks for having me.

Lucas: Yeah, I really enjoyed it and see you again soon.

Joshua: Okay, thanks.

Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We’ll be back again soon with another episode of the AI Alignment Series.

[end of recorded material]

FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT-Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

For the last couple of years on our January podcast, I’ve brought on two AI researchers to talk about the biggest AI breakthroughs of the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville, his new book Artificial Intelligence Safety and Security is now available on Amazon and we’ll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila Lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with teams at the Future of Humanity Institute and DeepMind, and he’s volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community as we’ve seen in the last few years, and instead they were more about continuing progress. And we also didn’t see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe earlier this year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but