
FLI August 2019 Newsletter

September 10, 2019
Revathi Kumar

AI in China & More

FLI Podcast: Beyond the Arms Race Narrative: AI & China with Helen Toner & Elsa Kania

Discussions of Chinese artificial intelligence frequently center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond the arms race narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. Listen here.

AI Alignment Podcast: China’s AI Superpower Dream with Jeffrey Ding

"In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China." (FLI’s AI Policy – China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China’s AI development and strategy, as well as China’s approach to strategic technologies more generally. Listen here.

Coming Soon: Not Cool, a Climate Podcast

We're thrilled to announce the upcoming launch of our new podcast series, Not Cool with Ariel Conn, which will go live on Tuesday, Sept 3. Through interviews with climate experts from around the world, Not Cool will dive deep into the climate crisis, exploring its causes and impacts and examining solutions.

We'll talk about what we know and what’s still uncertain. We’ll break down some of the basic science behind climate change, from the carbon cycle to tipping points and extreme weather events. We’ll look at the challenges facing us, and at the impacts on human health, national security, biodiversity, and more. And we’ll learn about carbon finance, geoengineering, adaptation, and how individuals and local communities can take action.

Let's talk about what's happening. Let’s build momentum. And let’s make change. Because climate change is so not cool.

Podcast Survey

How can we make our podcasts a better resource for you? Please take this short survey and let us know what works, what doesn't, and what you'd like to see in the future!

Take the Survey

You can find all the FLI Podcasts here and all the AI Alignment Podcasts here. Or listen on SoundCloud, iTunes, Google Play, and Stitcher.

Recent Articles

As we move towards a more automated world, tech companies are increasingly faced with decisions about how they want — and don’t want — their products to be used. Perhaps most critically, the sector is in the process of negotiating its relationship to the military, and to the development of lethal autonomous weapons in particular. Some companies, including industry leaders like Google, have committed to abstaining from building weapons technologies; others have wholeheartedly embraced military collaboration. In a new report titled "Don’t Be Evil," Dutch advocacy group PAX evaluated the involvement of 50 leading tech companies in the development of military technology. Read our summary of their findings here.

As data-driven learning systems continue to advance, it would be easy enough to define "success" according to technical improvements, such as increasing the amount of data algorithms can synthesize and thereby improving the accuracy of the patterns they identify. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly. Read our overview of current value alignment research trends here.

What We’ve Been Up to This Month

Max Tegmark was a keynote panelist at a Johns Hopkins University Applied Physics Laboratory event entitled The Future of Humans and Machines: Assuring Artificial Intelligence. Ariel Conn and Jared Brown collaborated with APL/JHU on planning the event and also attended.

Richard Mallah co-chaired and presented on the AI safety landscape at AISafety 2019, a workshop at the International Joint Conference on AI (IJCAI) in Macau.

FLI in the News

VENTURE BEAT: Partnership on AI’s Terah Lyons talks ethics washing, moonshots, and power

