
FLI Podcast: Beyond the Arms Race Narrative: AI & China with Helen Toner & Elsa Kania

Published
August 30, 2019

Discussions of Chinese artificial intelligence frequently center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond the arms race narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. 

Topics discussed in this episode include:

  • The rise of AI in China
  • The escalation of tensions between the U.S. and China in the AI realm
  • Chinese AI development plans and policy initiatives
  • The AI arms race narrative and the problems with it 
  • Civil-military fusion in China vs. U.S.
  • The regulation of Chinese-American technological collaboration
  • AI and authoritarianism
  • Openness in AI research and when it is (and isn’t) appropriate
  • The relationship between privacy and advancement in AI 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

Transcript

Ariel Conn: Hi everyone and welcome to another episode of the FLI podcast! I’m your host, Ariel Conn. Now, by sheer coincidence, Lucas and I both brought on guests to cover the same theme this month, and that is AI and China. Fortunately, AI and China is a huge topic with a lot to cover. For this episode, I’m pleased to have Helen Toner and Elsa Kania join the show. We will be discussing things like the Beijing AI Principles, why the AI arms race narrative is problematic, civil-military fusion in China versus in the U.S., the use of AI in human rights abuses, and much more.

Helen is Director of Strategy at Georgetown’s Center for Security and Emerging Technology. She previously worked as a Senior Research Analyst at the Open Philanthropy Project, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Centre for the Governance of AI. Helen holds a Bachelor of Science and a Diploma in Languages from the University of Melbourne.

Elsa is a Research Fellow, also at Georgetown's CSET, and a PhD student in Harvard University's Department of Government. Her research focuses on Chinese military innovation and technological development.

Elsa and Helen, thank you so much for joining us.

Helen Toner: Great to be here.

Elsa Kania: Glad to be here.

Ariel Conn: So, I have a lot of questions for you about what's happening in China with AI, and how that's impacting U.S.-China relations. But before I dig into all of that, I want to actually start with some of the more recent news, which is the Beijing AI Principles that came out recently. I was actually surprised, because they seem to be some of the strongest principles about artificial intelligence that I've seen, and I was wondering if you both could comment on your own reactions to those principles.

Elsa Kania: I was encouraged to see these principles released, and I think it is heartening to see greater discussion of AI ethics in China. At the same time, I'm not convinced that these are necessarily strong, in the sense that it's not clear what the mechanism for enforcement would be. This is not unique to China, but I think often the articulation of principles can be a means of burnishing an image, whether of a company or a country, with regard to its intentions in AI.

Although it's encouraging to hear a commitment to use AI to do good, for humanity, and to control risks, these are very abstract statements, and some of them are rather starkly at odds with the realities of how we know AI is being abused by the Chinese government today for purposes that reinforce the coercive capacity of the state: including censorship and surveillance, most prominently in Xinjiang, where facial recognition has been racially targeted against ethnic minorities, against the backdrop of the incarceration of upwards of a million Uyghurs, by some estimates.

So, I think it's hard not to feel a degree of cognitive dissonance when reading these principles. And again, I applaud those involved in the process for their efforts and for continuing to move this conversation forward in China. But I'm skeptical that this espoused commitment to certain ethics will necessarily constrain the Chinese government from using AI in ways that it appears deeply committed to, for reasons of concern about social stability and state security.

Ariel Conn: So one question that I have is, did the Chinese government actually sign on to these principles? Or is it other entities that are involved?

Elsa Kania: So the Beijing AI Principles were launched in some association with China's Ministry of Science and Technology. Certainly the Chinese government, initially in its New Generation AI Development Plan back in the summer of 2017, had committed to trying to lead and engage with issues of legal, ethical, and regulatory frameworks for artificial intelligence. And I think it is telling that these have been released in English; to some degree, part of the audience for these principles is international, against the backdrop of a push for the Chinese government to promote international cooperation in AI.

And the launch of a number of world AI conferences and attempts to really engage with the international community, again, are encouraging in some respects — but there can also be a level of inconsistency. And I think a major asymmetry is the fact that these principles, and many initiatives in AI ethics in China, are shaped by the government's involvement. It's hard to imagine the sort of open exchange among civil society and different stakeholders that we've seen in the United States, and globally, happening in China, given the role of the government. I think it's telling at the same time that the preamble to the Beijing AI Principles talks about the construction of a human community with a shared future, which is a staple of Xi Jinping’s propaganda, and a concept that really encapsulates Chinese ambitions to shape the future course of global governance.

So again, I'm heartened to see greater discussion of AI ethics in China. But the environment in which these conversations are happening — as well as, of course, the constraints on any meaningful enforcement, or on any alteration of the government's current trajectory in AI — makes me skeptical in some respects. I hope that I am wrong, and I hope that we will see this call to use AI for humanity, and to be diverse and inclusive, start to shape the conversation. So, it will be interesting to see whether we see indicators of results, or impact, from these principles going forward.

Helen Toner: Yeah. I think that's exactly right. And in particular, the release of these principles made clear a limitation of this kind of document in general. This was one of a series of such principles that have been released by a number of different organizations. And seeing principles that look so good on paper, in contrast with some of the behavior that Elsa described from the Chinese government, really puts into stark relief the limitations of well-meaning, nice-sounding ideas that have no enforcement mechanism.

Ariel, you asked whether the Chinese government had signed onto these, and as Elsa described, there was certainly government involvement here. But just because some part of the Chinese government has given its blessing to the principles does not imply that there is any kind of enforcement mechanism, or any kind of teeth, to a document of this kind.

Elsa Kania: And certainly that's not unique to China. And I think there have been questions of whether corporate AI principles, whether from American or Chinese companies, are essentially intended for public relations purposes, or will actually shape the company's decision making. So, I think it's really important to move these conversations forward on ethics. At the same time, it will be interesting to see how principles translate into practice, or perhaps in some cases don't.

Ariel Conn: So I want to backtrack a little bit to where some of the discussion about China's development of AI started, at least from more Western perspectives. My understanding is that seeing AlphaGo beat Lee Sedol led to something of a rallying cry — I don't know if that's quite the right phrase — but that it sort of helped trigger the Chinese government to say, "We need to be developing this a lot stronger and faster." Is that the case? Or what's been the trajectory of AI development in China?

Elsa Kania: I think it depends on how far back you want to go historically.

Ariel Conn: That's fair.

Elsa Kania: I think in recent history certainly AlphaGo was a unique moment — both as an indication of how rapidly AI was progressing, given that experts had not anticipated that an AI could win at the game of Go for another 10, perhaps 15, years — and also in the context of how the Chinese government, and even the Chinese military, saw this as an indication of the capabilities of American artificial intelligence, including the relevance of these capacities to tactics, strategizing, and command decision-making in a military context.

At the same time of course I think another influence in 2016 appears to have been the U.S. government's emphasis on AI at the time, including a plan for research and development that may have received more attention in Beijing than it did in Washington in some respects, because this does appear to have been one of the factors that inspired China's New Generation AI Development Plan, launched the following year. 

But I think if we're looking at the history of AI in China, we can trace it back much further: even some linkages to the early history of cybernetics and systems engineering. And there are honestly some quite interesting episodes early on, because during the Cold War, artificial intelligence could be a topic that had some ideological undertones and underpinnings — including how the Soviet Union saw AI in system science, and some of the critiques of this as revisionism.

And then there is even an interesting detour in the 80s or so: Qian Xuesen, a prominent strategic scientist in China's nuclear weapons program, saw AI as entangled with an interest in parapsychology — including "exceptional human body functions" such as the capacity to recognize characters with your ears. There was a craze for ESP in China in the 80s, which actually received some attention in the scientific literature as well: there was an interesting conflation of artificial intelligence and special functions that became the subject of some ideological debate, in which Qian Xuesen was essentially an advocate of ESP, in ways that undermined early AI development in China.

Other academic rivals in the Chinese Academy of Sciences argued in favor of AI as an emerging scientific discipline, relative to the pseudoscience that human special functions turned out to be, and this became a debate of some ideological importance as well, against the backdrop of questions about arbitrating what science was, and how the Chinese Communist Party tried to shape science.

I think that does go to illustrate that although a lot of the headlines about China's rise in AI are much more recent, not only state support for research but also the significant increase in publications far predate this attention, and really can be traced to some degree to the 90s, and especially from the mid-2000s onward.

Helen Toner: I'll just add as well that if we're thinking about what caused this surge in Western interest in Chinese AI, a really important part of the backdrop is the shift in U.S. defense thinking away from terrorism and non-state actors as the primary threat to U.S. security, and towards near-peer adversaries — so primarily China and Russia — which is a recent change in U.S. doctrine. And I think that is also an important factor in understanding why Chinese interest and success in AI has become such a conspicuous part of the discussion.

Elsa Kania: There's also really been a recalibration of assessments of the state of technology and innovation in China, from what was often outright skepticism and dismissal of China's ability to innovate to, at times, a course correction towards the opposite extreme: anxieties that China may be beating us in the "race for AI" or 5G; even quantum computing has provoked a lot of concern. So, I think on one hand it is long overdue that U.S. policymakers and the American national security community take seriously what are quite real and rapid advances in science and technology in China.

At the same time, I think sometimes this reaction has resulted in more inflated assessments that have provoked concerns about the notion of an arms race, which I think is a really wrong and misleading framing when we're talking about a general-purpose technology that has such a range of applications, and for which the economic and societal impacts may be more significant than the military applications in the near term, which I say as an analyst who focuses on military issues.

Ariel Conn: I want to keep going with this idea of the fear that’s sort of been developing in the U.S. in response to China's developments. And I guess I first started seeing it a lot more when China released their Next Generation Artificial Intelligence Plan — I believe that's the one that said by 2030 they wanted to dominate in AI.

Helen Toner: That's right.

Ariel Conn: So I'd like to hear both of your thoughts on that. But I'm also sort of interested in — to me it seemed like that plan came out in part as a response to what they were seeing from the US, and then the U.S. response to this is to — maybe panic is a little bit extreme, but possibly overreact to the Chinese plan — and maybe they didn't overreact, that might be incorrect. But it seems like we're definitely seeing an escalation occurring.

So let's start by just talking about what that plan said, and then I want to dive into this idea of the escalation, and maybe how we can look at that problem, or address it, or consider it.

Elsa Kania: So, I'd certainly been looking at a lot of different plans and policy initiatives for the 13th Five-Year Plan period, which is 2016 to 2020, and I noticed when this New Generation AI Development Plan came out. Initially it was only available in Chinese; a couple of us, after we'd come across it, organized to work on a translation, and to this day that's still the only unofficial English translation of the plan available. So far as I can tell, the Chinese government itself never actually translated it. In that regard, the plan does not appear to have been intended for an international audience in the way that, for instance, the Beijing AI Principles were.

So, I think that some of the rhetoric in the plan that rightly provoked concerns — calling for China to lead the world in AI and be a premier global innovation center for artificial intelligence — is striking, but is consistent with S&T plans that often call for China to seize the strategic commanding heights of innovation, and future advantage. So I think that a lot of the signaling about the strategic importance of AI to some degree was intended for an internal audience, and certainly we've seen a powerful response in terms of plans and policies launched across all elements of the Chinese government, and at all levels of government including a number of cities and provinces.

I do think it was highly significant in reflecting how the Chinese government saw AI as really a critical strategic technology to transform the Chinese economy, and society, and military — though that's discussed in less detail in the plan.

But there is also an open acknowledgement in the plan that China still sees itself as well behind the U.S. in some respects. So, I think the ambitions, and the resources and policy support across all levels of government that this plan has catalyzed, are extremely significant and do merit some concern. But as for some of the rhetoric about an AI race, or arms race — clearly there is competition in this domain, but the plan should also be placed in the context of an overall drive by the Chinese government to escape the middle-income trap and sustain economic growth at a time when it's slowing, looking to AI as an important instrument to advance these national objectives.

Helen Toner: I also think there is something kind of amusing that happened where, as Elsa said earlier, it seems like one driver of the creation of this plan was that China saw the U.S. government under the Obama administration in 2016 run a series of events and then put together a white paper about AI, and a federal R&D plan. And China's response to this was to think, "Oh, we should really put together our own strategy, since the U.S. has one." And then somehow, with the change in administrations and the time that had elapsed, there suddenly became this narrative of, "Oh no, China has an AI strategy and the U.S. doesn't have one; so now we have to have one because they have one." And that was a little bit farcical, to be honest, and I think it has now died down since President Trump released what I believe is called the American AI Initiative. But it was amusing to watch while it was happening.

Elsa Kania: I hope that the concerns over the state of AI in China can motivate productive responses. I agree that sometimes the debate has focused too much on the notion of what it would mean to have an AI strategy, or on the plan as one of the most tangible manifestations of these ambitions. But I do think there are reasons for concern that the U.S. has really not recognized the competitive challenge, and sometimes still seems to take for granted American leadership in emerging technologies for which the landscape does remain much more contested.

Helen Toner: For sure.

Ariel Conn: Do you feel like we're starting to see de-escalation then — that people are starting to change their rhetoric about making sure someone's ahead, or who's ahead, or all that type of lingo? Or do you think we are still seeing the escalation that is being reported in the press?

Helen Toner: I think there is still a significant amount of concern. Perhaps one shift that we've seen a little bit — and Elsa, I'd be curious if you agree — is that around the time that the Next Generation Plan was released, and attention was starting to turn to China, there began to be a bit of a narrative of, "Not only is China trying to catch up with the U.S. and making progress in catching up with the U.S., but perhaps it has already surpassed the U.S. and is already clearly ahead in AI research globally." That's an extremely difficult thing to measure, but I think some of the arguments made to that effect were not as well backed up as they could have been.

Maybe one thing that I've observed over the last six or 12 months is a little bit of a rebalancing in thinking. It's certainly true that China is investing very heavily in this and is trying really hard, and it's certainly true that they are seeing some results from that; but it's not at all clear that they have already caught up with the U.S. in any meaningful way, or are surpassing it. Of course, it depends how you slice up the space, and whether you're looking more at fundamental research, or applied research, and so on. But that might be one shift we've seen a little bit.

Elsa Kania: I agree. I think there has continued to be a recalibration of assessments, and even a rethinking of the notion of what leading in AI even means. I used to be asked all the time who was winning the race, or even arms race, for AI. And often I would respond by breaking down the question, asking, "Well, what do you mean by who?" Because the answer will differ depending on whether we're talking about American and Chinese companies, or how we think about aggregating China and the United States as wholes when it comes to AI research — particularly considering the level of integration and interdependence between the American and Chinese innovation ecosystems. What do we mean by winning in this context? How do we think about the metrics, or even the desired end states? Is this a race to develop something akin to artificial general intelligence? Or is this a rivalry to see which nation can best leverage AI for economic and societal development across the board?

And then again, why do we continue to talk about this as a race? I think that is a metaphor and framing that does readily come to mind and can be catchy. And as someone who looks at the military dimension of this quite frequently, I often find myself explaining why I don't think "arms race" is an appropriate conceptualization either: this is a technology that will have a range of applications across different elements of the military enterprise — and that does have great promise for providing decisive advantage in the future of warfare — and yet we're not talking about a single capability or weapon system, but rather something that is much more general purpose, and that is fairly nascent in its development.

So, AI does factor into this overall U.S.-China military competition, which is much more complex and amorphous than the notion of an arms race to develop killer robots would imply. Certainly there is autonomous weapons development underway in the U.S. and China today, and I think that is quite concerning from the perspective of the future military balance, of how the U.S. and Chinese militaries might be increasing the risks of a crisis, and of how to mitigate those concerns and reinforce strategic stability.

So hopefully there is starting to be greater questioning of some of these more simplistic framings, often in headlines, often in some of the more sensationalist statements out there. I don't believe China is yet an AI superpower, but clearly China is an AI powerhouse.

Ariel Conn: Somewhat recently there was an op-ed by Peter Thiel in which he claims that China's tech development is naturally a part of the military. There's also this idea, which I think comes from China, of military-civil fusion. And I was wondering if you could go into the extent to which China's AI development is naturally a part of their military, and the extent to which companies and research institutes are able to differentiate their work from military applications.

Elsa Kania: All right. So, the article in question did not provide a very nuanced discussion of these issues. To start, I would say that it is hardly surprising that the Chinese military is apparently enthusiastic about leveraging artificial intelligence. China's new national defense white paper, titled "China's National Defense in the New Era," talked about advances in technologies like big data, cloud computing, artificial intelligence, and quantum information as significant at a time when the character of warfare is evolving — from what is known as today's informatized warfare towards future intelligentized warfare, in which some of these emerging technologies, namely artificial intelligence, could be integrated into the system of systems for future conflict.

And the Chinese military is pursuing this notion of military intelligentization, which essentially involves looking to leverage AI for a range of military applications. At the same time, I see military-civil fusion, as a concept and strategy, as remaining quite aspirational in some respects.

There’s also a degree of irony, I'd argue, that much of what China is attempting to achieve through military-civil fusion is inspired by dynamics and processes that they have seen be successful in the American defense innovation ecosystem. I think sometimes there is this tendency to talk about military-civil fusion as this exotic or uniquely Chinese approach, when in fact there are certain aspects of it that are directly mimicking, or responding to, or learning from what the U.S. has had within our ecosystem for a much longer history. And China's trying to create this more rapidly and more recently. 

So, the delta of increase, perhaps, and the level of integration between defense, academic, and commercial developments, may be greater. But I think the actual results so far are more limited. And again, it is significant, and there are reasons for concern. We are seeing a greater and greater blurring of boundaries between defense and commercial research, but the fusion itself remains much more aspirational than an accomplished state of play.

Helen Toner: I'll add as well, returning to that specific op-ed: when Thiel mentioned military-civil fusion, he actually linked to an article on the subject by a colleague of Elsa's and mine, Lorand Laskai, and Lorand straight up said that Thiel had clearly not read the article, based on the way that he described military-civil fusion.

Ariel Conn: Well, that's reassuring.

Elsa Kania: We are seeing militaries around the world, the U.S. and China among them, looking to build bridges to the private sector and deepening cooperation with commercial enterprises. And I think it's worth thinking about the factors that could provide a potential advantage for militaries that are looking to increase their capacity as organizations to leverage these technologies — this is an important dimension of that. And I think we are seeing some major progress in China in terms of new partnerships, including initiatives at the local level, new parks, and new joint laboratories. But I do think, as with the overall status of China's AI plan, there's a lot of activity and a lot of investment; the results are harder to ascertain at this point.

And again, I think it also does speak to questions of ethics in the sense that we have in the U.S. seen very open debate about companies and concerns, particularly of their employees, about whether they should or should not be working with the military or government on different projects. And I remain skeptical that we could see comparable debates or conversations happening in China, or that a Chinese company would outright say no to the government. I think certainly some companies may resist on certain points, or at the margins, especially when they have commercial interests that differ from the priorities of the government. But I do think the political economy of this ecosystem as a whole is very distinct.

And again I'm skeptical that if the employees of a Chinese company had moral qualms about working with the Chinese military, they'd have the freedom to organize, and engage in activism to try to change that.

Ariel Conn: I'd like to go into that a little bit more, because there are definitely concerns that get raised that we have companies in the U.S. that are rejecting contracts with the U.S. government for fear that their work will be militarized, while at the same time — as you said — companies in China may not have that luxury. But then there are also instances where you have, say, Google doing research in China: so does that mean that Google is essentially working with the Chinese military and not the U.S. military? I think there's a lot of misunderstanding about what the situation actually is there. I was wondering if you could both go into that a little bit.

Helen Toner: Yeah. I think this is a refrain that comes up a lot in DC: "Well, look at how Google withdrew from its contract to work on Project Maven," which is a Department of Defense initiative looking at tagging overhead imagery, "so clearly U.S. companies aren't willing to work with the U.S. government. But on the other hand, they are still working in China. And as we all know, research in China is immediately used by the Chinese military, so therefore they're aiding the Chinese military even though they're not willing to aid the U.S. military." And I do think this is a highly oversimplified description, and pretty incorrect.

So, a couple of elements here. One is that the Google Project Maven decision seems to have been pretty unique; we haven't really seen it repeated by other companies. Google continues to work with the U.S. military and the U.S. government in other ways — for example on DARPA projects, among others — and other U.S. companies, including really world-leading companies, are also very willing to work with the U.S. government. A big example right now is Amazon and Microsoft bidding on this JEDI contract, which is to provide cloud computing services to the Pentagon. So, I think on the one hand, this claim that U.S. companies are unwilling to work with the U.S. military is a vast overgeneralization.

And then on the other hand, I think I would point back to what Elsa was saying about the state of military-civil fusion in China, and the extent to which it makes sense or doesn't make sense to say that any research done in China is immediately going to be incorporated into Chinese military technologies. I definitely wouldn't say there is nothing to be concerned about here. But I think that the simplified refrain is not very productive.

Elsa Kania: With regard to some of these controversies, I do continue to believe that having these open debates, and the freedom that American companies and researchers have, is a strength of our system. I don't think we should envy the state of play in China, where we have seen the Chinese Communist Party become more and more intrusive with regard to its impositions upon the tech sector, and I think there may be costs in terms of the long-term trajectory of innovation in China.

And with regard to the particular activities of American companies in China, certainly there have been some cases where companies have engaged in projects, or with partners, that I think are quite problematic. And one of the most prominent examples of that recently has been Google's involvement in Dragonfly — creating a censored search engine — which was thoroughly condemned, including because of its apparent inconsistency with their principles. So, I do think there are concerns not only of values but also of security when it comes to American companies and universities that are engaged in China, and it's never quite a black and white issue or distinction.

So for instance in the case of Google, their research presence in China does remain fairly limited. There have been a couple of cases where papers published in collaboration between a Google researcher and a Chinese colleague involved topics that are quite sensitive and, in my opinion, evidently not the best topics on which to be collaborating — such as target recognition. There have also been concerns over research on facial recognition, given the known abuse of that technology by the Chinese government.

I also think that when American companies or universities partner or coauthor with Chinese counterparts — especially those that are linked to, or are outright elements of, the Chinese military, such as the National University of Defense Technology, which has been quite active in overseas collaborations — there should be some red lines. I don't think the answer is "no American companies or universities should do any work on AI in China." I think that would actually be damaging to American innovation, and some of the criticisms of Google have been unfair in that regard. A more nuanced conversation about the risks, and how to get policy right, is really critical going forward.

Ariel Conn: So I want to come back to this idea of openness in a minute, but first I want to stick with some pseudo-military concerns. Maybe this is more reflective of what I'm reading, but I seem to see a lot more concern being raised about military applications of AI in China, while concerns about the use of AI in human rights issues are obviously just starting to come to the surface. In light of some recent events, especially what we're seeing in Hong Kong and with the Uyghurs, should we be worrying more about how China is using AI for what we perceive as human rights abuses?

Elsa Kania: That is something that greatly concerns me, particularly when it comes to the gravity of the atrocities in Xinjiang. Certainly there are very low-tech coercive elements to how the Chinese government is essentially trying to re-engineer an entire population, in ways that experts have described as tantamount to cultural genocide, including the creation of concentration camps — and beyond that, the pervasiveness of biometrics and surveillance enabled by facial recognition, and the creation of new software programs to better aggregate big data about individuals. I think all of that paints a very dark picture of ways in which artificial intelligence can enable authoritarianism, and can reinforce the Chinese government's capability to repress its own population in ways that in some cases can become pervasive in day-to-day life.

And I'd say that, having been to Beijing recently, surveillance is kind of like air pollution. It is pervasive, in terms of the cameras you see out on the streets. It is inescapable in a sense, and it is something that the average person or citizen in China can do very little about. Of course this is not quite a perfect panopticon yet; elements of it remain a work in progress. But I do think that the overall trajectory of these developments is deeply worrying in terms of human rights abuses, and yet it's not as much of a feature of conversations about AI ethics in China. It does overshadow some of the more positive aspects of what the Chinese government is doing with AI, like in health care and education, though that is also very much a reality.

And when it comes to the Chinese military's interest in AI, it is quite a complex landscape of research, development, and experimentation. To my knowledge, the Chinese military does not yet appear to be deploying all that much in the way of AI: again, very active efforts and long-term development of weapons systems — including cruise missiles, hypersonics, a range of unmanned systems across all domains with growing degrees of autonomy, unmanned underwater vehicles and submarines, prominently demonstrated progress in swarming, scavenger robots in space as a covert counter-space capability, and human-machine integration and interaction.

But I think that the translation of some of these initial stages of military innovation into future capabilities will be challenging for the PLA in some respects. There could be ways in which the Chinese military has advantages relative to the U.S., given apparent enthusiasm and support from top-level leadership at the level of Xi Jinping himself, and several prominent generals, who have been advocating for and supporting investments in these future capabilities.

But I do think that we're really just at the start of seeing what AI will mean for the future of military affairs, and future of warfare. But when it comes to developments underway in China, particularly in the Chinese defense industry, I think the willingness of Chinese companies to export drones, robotic systems — many of which again have growing levels of autonomy, or at least are advertised as such — is also concerning from the perspective of other militaries that will be acquiring these capabilities and could use them in ways that violate human rights. 

But I do think there are concerns about how the Chinese military would use its own capabilities, about the export of some of these weapons systems going forward, and about the potential use of made-in-China technologies by non-state actors and terrorist organizations, as we've already seen with ISIS, or Daesh, using drones made by DJI in Syria, including as improvised explosive devices. So there is no shortage of reasons for concern, but I'll stop there for now.

Ariel Conn: Helen, did you have anything you wanted to add?

Helen Toner: I think Elsa said it well. I would just reiterate that I think the ways that we're starting to see China incorporating AI into its larger surveillance state, and methods of domestic control, are extremely concerning.

Ariel Conn: There's debate I think about how open AI companies and researchers should be about their technology. But we sort of have a culture of openness in AI. And so I'm sort of curious: how is that being treated in China? Does it seem like that can actually help mitigate some of the negative applications that we see of AI? Or does it help enable the Chinese or anyone else to develop AI in non-beneficial ways that we are concerned about? What's the role of openness in this?

Elsa Kania: I think openness is vital to innovation, and I hope that can be sustained, even as we are seeing greater concerns about the misuse or transfer of these technologies. The level of openness and integration between the American and Chinese innovation ecosystems is useful in the sense that it provides a level of visibility, awareness, and a sort of shared understanding of the state of research. But at the same time there are reasons, whether from the perspective of ethics or security, to have some thought-through parameters on that openness; better guidelines or frameworks for how to engage will be important in order to sustain that openness and engagement.

I think it will be important to have better guardrails, and to think about where openness is warranted and where there should be, at the very least, common sense, and hopefully some rigorous consideration of these concerns. Another dimension of openness is thinking about when to release, publish, or make available certain research, or even the tools underlying those advances, and when it's better to keep more information proprietary. And I think the greater concern there, beyond the U.S.-China relationship, may be the potential for misuse or exploitation of these technologies by non-state actors, terrorist organizations, or even high-end criminal organizations. The openness of the AI field is really critical. But to sustain it, it will be important to think very carefully through some of these potential negative externalities across the board.

Helen Toner: One element that makes it extra complicated here, in terms of openness and collaboration between U.S. and Chinese researchers, is that so much of the work going on is really quite basic research — work on computer vision, or on speech recognition, or things of that nature. And that kind of research can be used for so many things, including both harmful, oppressive applications as well as many much more acceptable applications. It's really difficult to work out how to think about openness in that context.

So, one thing I would love to see is more information being made available to researchers. For example, I do think that any researcher who is working with a Chinese individual, or company, or organization should be aware of what is going on in Xinjiang, and should be aware of the governance practices that are common in China. And it would be great if there were more information available on specific institutions, and how they're connected to various practices, and so on. That would be a good step towards helping non-Chinese researchers understand what kinds of situations they might be getting themselves involved in.

Ariel Conn: Do you get the sense that AI researchers are considering how some of their work can be applied in these situations where human rights abuses are taking place? I mean, I think we're starting to see that more, but I guess maybe how much do you feel like you're seeing that vs. how much more do you think AI researchers need to be making themselves aware?

Helen Toner: I think there's a lot of interest and care among many AI researchers in how their work will be used, and in making the world a better place, and so on. And I think things like Google's withdrawal from Project Maven, and also the pressure that was put on Google when it was leaked that it was working on a censored search engine to be used in China, are both evidence of the level of caring that is there. But I do think that there could be more awareness of specific issues that are going on in China. The situation in Xinjiang is gradually becoming more widely known, but I wouldn't be surprised if plenty of AI researchers hadn't come across it. I think it's a matter of pairing that interest in how their work might be used with information about what is going on, and what might happen in the future.

Ariel Conn: One of the things that I've also read, and I think both of you addressed this in works of yours that I was looking at: there's this concern that China obviously has a lot more people, their privacy policies aren't as strict, and so they have a lot more access to big data, and that that could be a huge advantage for them. Reading some of your work, it sounded like maybe that wasn't quite the advantage that people worry about, at least yet. And I was hoping you could explain a little bit about technological difficulties that they might be facing even if they do have more data.

Helen Toner: For sure. I think there are quite a few different ways in which this argument is weaker than it might appear at first. There are many reasons to be concerned about the privacy implications of China's data practices. Certainly, having spent time in China, it's very clear that the instant messages you're sending, for example, are not only being read by you; that's certainly concerning from that perspective. But if we're talking about whether data will give them an advantage in developing AI, I think there are a few different reasons to be a little bit skeptical.

One reason, which I think you alluded to, is simply whether they can make use of this data that they're collecting. There was some reporting, I believe last year, coming out of Tencent, talking about ways in which data was very siloed inside the company, and working with data in that state is notoriously difficult. The joke among data scientists is that when you're trying to solve some problem with data, you spend the first 90% of your time just cleaning and structuring the data, and only the last 10% actually solving the problem. So, that's the sort of logistical or practical issue that you mentioned.

Other issues are things like: the U.S. doesn't have as large a population as China, but U.S. companies have much greater international reach, so they often have as many, if not more, users compared with Chinese companies. Even more important, I think, are two further issues. One is that for most AI applications, the kind of data that will be useful in training a given model needs to be relevant to the problem that model is solving. So, if you have lots of data about Chinese customers’ purchases on Taobao, which is the Chinese Amazon, then you're going to be really good at predicting what kinds of purchases Chinese consumers will make on Taobao. But that's not going to help you with, for example, the kind of overhead imagery analysis that Project Maven was targeting, and things like this.

So one really fundamental problem is this matter of data primarily being useful for training systems that are solving problems closely related to the data that you have. And then a second really fundamental issue is how important it is or isn't to have pre-gathered data in order to train a given model. Something that I think is left out of a lot of conversations on this issue is the fact that many types of models — notably, reinforcement learning models — can often be trained on what is referred to as synthetic data, which basically means data that you generate during the experiment, as opposed to requiring a pre-gathered dataset that you train your model on.

So, an example of this would be AlphaGo, which we mentioned before. The original AlphaGo was first trained on human games, and then fine-tuned from there. But AlphaGo Zero, which was released subsequently, did not actually need any pre-collected data; instead it just used computation to simulate games and play against itself, and thereby learned how to play the game even better than AlphaGo, which was trained on human data. So, I think there are all manner of reasons to be a little bit skeptical of this story that China has some fundamental advantage in access to data.
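To make that concrete, here is a minimal sketch, in Python, of the self-play idea behind synthetic data: the training examples are manufactured by computation during the run, not gathered in advance. Everything here is a hypothetical stand-in (the toy Game class, self_play_episode, random_policy); AlphaGo Zero itself combined self-play with deep neural networks and Monte Carlo tree search, which this sketch omits.

```python
import random

class Game:
    """Toy stand-in for a two-player game environment."""
    def __init__(self):
        self.history = []
        self.done = False

    def legal_moves(self):
        return [] if self.done else [0, 1, 2]

    def play(self, move):
        self.history.append(move)
        self.done = len(self.history) >= 9  # arbitrary fixed game length

    def outcome(self):
        # Placeholder result from the first player's perspective.
        return random.choice([-1, 0, 1])

def random_policy(game):
    """Trivial policy; a real system would use a learned model here."""
    return random.choice(game.legal_moves())

def self_play_episode(policy):
    """Play one game against itself; return (state, move, result) examples."""
    game, examples = Game(), []
    while not game.done:
        move = policy(game)
        examples.append((list(game.history), move))
        game.play(move)
    z = game.outcome()
    return [(state, move, z) for state, move in examples]

# Each iteration manufactures fresh training data from computation alone;
# no pre-gathered human dataset is involved.
dataset = []
for _ in range(100):
    dataset.extend(self_play_episode(random_policy))
print(len(dataset))  # 900 examples, all generated by self-play
```

In a real system the policy would be a neural network retrained on this growing dataset between rounds of self-play, which is what lets the model improve beyond any human-derived starting point.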

Elsa Kania: Those are all great points, and I would just add that I think this is particularly true when we look at the apparent disparities in access to data between China's commercial ecosystem and the Chinese military. As Helen mentioned, much of the data generated by China's mobile ecosystem will have very little relevance if you are looking to build advanced weapons systems. The critical question going forward, or the much more relevant concern, will be the Chinese military's capacity as an organization to improve its management and employment of its own data, while also gaining access to other relevant sources of data and looking to leverage simulations, even war gaming, as techniques to generate more data of relevance to training AI systems for military purposes.

So, the notion that data is the new oil is at best a massive oversimplification, given that this is a much more complex landscape. Access to, use of, and even labeling of data become very practical matters that militaries, among other bureaucracies, will have to grapple with as they think about how to develop AI trained for the missions they have in mind.

Ariel Conn: So, does it seem fair to say then that it's perfectly reasonable for Western countries to maintain, and possibly even develop, stricter privacy laws and still remain competitive?

Helen Toner: I think absolutely. The idea that one would need to reduce privacy controls in order to collect some necessary volume of data to be competitive in AI fundamentally misunderstands how AI research works, and I think it also misunderstands the ways that Western companies will stay competitive. It's not an accident that WeChat, for example, the most popular messaging app in China, has really struggled to spread beyond China and the Chinese diaspora. I would posit that a significant part of that is the fact that it's clear that messages on that app are going to the Chinese government. So, I think U.S. and other Western companies should be wary of sacrificing the kinds of features and functionality that are based in the values that we hold dear.

Elsa Kania: I'd just add that I think there's often this framing of a dichotomy between privacy and advancement in AI — and as Helen said, I think that there are ways to reconcile our priorities and our values in this context. And I think the U.S. government can also do much more when it comes to better leveraging data that it does have available, and making it more open for research purposes while focusing on privacy in the process. Exploitation of data should not come at the expense of privacy or be seen as at odds with advancement.

Helen Toner: I'll also add that we're seeing advancements in various technologies that make it possible to utilize data without invading the privacy of the holder of that data. These are things like differential privacy, multi-party computation, and a number of other related techniques that make it possible to securely and privately make use of data without exposing the individual data of any particular user.
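As a concrete illustration of the first of those techniques, here is a minimal sketch of differential privacy in Python, under standard textbook assumptions: a counting query changes by at most one when any single person's record is added or removed, so adding Laplace noise with scale 1/epsilon makes the released count epsilon-differentially private. The function names (laplace_noise, dp_count) are hypothetical, not from any particular library.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1, so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy: the released number
    barely depends on whether any one individual is in the data.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Usage: estimate how many of 10,000 users opted in, without the
# released figure revealing whether any particular user did.
users = [{"opted_in": random.random() < 0.3} for _ in range(10_000)]
print(dp_count(users, lambda u: u["opted_in"], epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; real deployments also track a cumulative privacy budget across queries, which this sketch leaves out.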

Ariel Conn: I feel like that in and of itself is another podcast topic.

Helen Toner: I agree.

Ariel Conn: The last question I have is: what do you think is most important for people to know and consider when looking at Chinese AI development and the Western concerns about it?

Elsa Kania: The U.S. in many respects does remain in a fairly advantageous position. However, I worry we may erode our own advantages if we don't recognize what they are. And I think it does come down to the fact that the openness of the American innovation ecosystem, including our welcome to students and scholars from all over the world, has been critical to progress in science in the United States, and it's really vital to sustain that. Between the United States and China today, the critical determinant of competitive advantage going forward will be talent. There are many ways in which China continues to struggle and is lagging behind in its access to human capital, though there are some major policy initiatives underway from the Chinese Ministry of Education, and significant expansions of the use of AI in and for education.

So, I think that as we think about relative trajectories in the long term, it will be important to think about talent, and how this is playing out in a very complex and often very integrated landscape between the U.S. and China. And I've said it before, and I'll say it again: I think in the United States it is encouraging that the Department of Defense has a strategy for AI and is thinking very carefully about the ethics and opportunities it provides. I hope that the U.S. Department of Education, and that states and cities across the U.S., will also start to think more about what AI can do in terms of opportunities, in terms of more personalized and modernized approaches to education in the 21st century.

Because I think again, although I'm someone who as an analyst looks more at the military elements of this question, talent and education are foundational to everything. And some of what the Chinese government is doing to explore the potential of AI in education is something I wish the U.S. government would consider pursuing equally actively, though with greater concern for privacy and for the well-being of students. I don't think we should necessarily envy or look to emulate many elements of China's approach, but on talent and education it's really critical for the U.S. to think about that as a main frontier of competition, and to sustain openness to students and scientists from around the world. That requires thinking about some of these tricky issues of immigration, which have become politicized to an unfortunate degree, risking damage to our overall innovation ecosystem, not to mention the well-being and opportunities of those who can get caught in the crossfire of geopolitics and politics.

Helen Toner: I'd echo what Elsa said. I think in a nutshell what I would recommend for those interested in thinking about China's prospects in AI is to be less concerned about how much data they have access to, or about the Chinese government and its plans being a well-oiled machine that works perfectly on the first try — and to pay more attention to, on the one hand, the willingness of the Chinese Communist Party to use extremely oppressive measures, and on the other hand, to pay more attention to the question of human capital and talent in AI development, and to focus more on how the U.S. can do better at attracting and retaining top talent — which has historically been something the U.S. has done really well, but for a variety of reasons has perhaps started to slide a little bit in recent years.

Ariel Conn: All right. Well, thank you both so much for joining this month. This was really interesting for me.

Elsa Kania: Thank you so much. Enjoyed the conversation, and certainly much more to discuss on these fronts.

Helen Toner: Thanks so much for having us.
