
FLI April, 2017 Newsletter

May 3, 2017

Scientists on Climate Change and Nuclear Weapons

Podcast: Climate Change with Brian Toon and Kevin Trenberth

With April hosting Earth Day, the March for Science, and the People's Climate March, it seemed like a natural month to highlight climate change for our monthly podcast. Too often, the media focus their attention on climate-change deniers, and as a result, when scientists speak with the press, it's almost always a discussion of whether climate change is real. Unfortunately, that can make it harder for those who recognize that climate change is a legitimate threat to fully understand the science and impacts of rising global temperatures.

Ariel visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder's Dr. Brian Toon to talk about what climate change is, what its effects could be, and how we can prepare for the future.

Dr. Trenberth is a distinguished senior scientist in the Climate Analysis Section at the National Center for Atmospheric Research (NCAR), where he studies climate variability and change. He has been prominent in most of the Intergovernmental Panel on Climate Change (IPCC) Scientific Assessments of Climate Change, and is one of the most cited scientists in his field.

Dr. Toon is a professor in the Department of Atmospheric and Oceanic Sciences at the University of Colorado in Boulder, Colorado. He has investigated the climate changes that killed the dinosaurs, climate changes after large volcanic eruptions, climate changes that follow a nuclear war, as well as the distant past and far future climates of Earth.

Read highlights from the interview here or listen to the podcast here.

Scientists Against Nukes

Last month, a group of the world's foremost scientists gathered at the United Nations headquarters in support of a global ban on nuclear weapons. They were all signatories of the open letter in support of the UN nuclear weapons negotiations. Here are their informal statements to the public. Speakers include Freeman Dyson, Alexander Glaser, Jonathan King, Zia Mian, Joseph Taylor, and Frank von Hippel.

Don't forget to check us out on SoundCloud and iTunes and follow us on YouTube!

Spring Conference at MIT, Saturday, May 6

Are you near Boston on Saturday, May 6? Come join us at MIT as scientists and politicians talk about the new nuclear threats facing us today:

The growing hostility between the US and Russia -- and with North Korea and Iran -- adds new urgency to this conference, as do plans to spend a trillion dollars replacing US nuclear weapons with new ones better suited for launching a first strike.

This one-day event includes lunch as well as food for thought from a great speaker lineup, including Iran-deal broker Ernie Moniz (MIT, former Secretary of Energy), California Congresswoman Barbara Lee, Lisbeth Gronlund (Union of Concerned Scientists), Joe Cirincione (Ploughshares), our former congressman John Tierney, MA state reps Denise Provost and Mike Connolly, and Cambridge Mayor Denise Simmons. The focus will be on concrete steps we can take to reduce the risks.

The Asilomar AI Principles

We've published more interviews with AI experts. Learn what they think about the Asilomar AI Principles and why they decided to sign.

John C. Havens Interview
By Ariel Conn

Havens is the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking H(app)iness - Why Your Personal Data Counts and How Tracking It Can Change the World, and previously founded both The H(app)athon Project and Transitional Media.

Patrick Lin Interview
By Ariel Conn

Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is an associate philosophy professor. He regularly gives invited briefings to industry, media, and government; and he teaches courses in ethics, political philosophy, philosophy of technology, and philosophy of law.

Susan Schneider Interview
By Ariel Conn

Schneider is a philosopher and cognitive scientist at the University of Connecticut, YHouse (NY) and the Institute for Advanced Study in Princeton, NJ.

This Month’s Most Popular Articles

90% of All the Scientists That Ever Lived Are Alive Today
By Eric Gastfriend

"This simple statistic captures the power of the exponential growth in science that has been taking place over the past century. It is attributable to Derek de Solla Price, the father of scientometrics (i.e., the science of studying science), in his 1961 book Science Since Babylon. If science is growing exponentially, then the major technological advancements and upheavals of the past 200 years are only the tip of the iceberg."

This article was originally posted over a year ago, but this month, it got picked up on Reddit and quickly became the most read article on the site. If you haven't read it yet, it's a good one!

Ensuring Smarter-than-human Intelligence Has a Positive Outcome
By Nate Soares

Soares recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators' goals. You can watch a video of the talk or read the modified transcript he put together.

Op-ed: Poll Shows Strong Support for AI Regulation Though Respondents Admit Limited Knowledge of AI
By Matt Scherer

On April 11, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues. In the poll, 2,200 Americans answered 39 poll questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and international regulation (67%) of AI. Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international, those numbers were 35% and 5%, respectively). However, nearly half of respondents also indicated they had very limited knowledge of what AI actually is.

What we’ve been up to this month

Richard spoke on the landscape of technical AI safety research and led a round-table discussion on near-term perspectives at the GMIC conference in Beijing, where Stephen Hawking gave the conference's primary keynote on the motivation for the field. Richard also spoke on AI safety at the Swarm Agents Club, an association of startups and academics in Beijing, and at the China Digital Collective forum in Shanghai.

Back in February, Viktoriya attended the FHI workshop on bad actors and AI, and this month she attended the International Conference on Learning Representations (ICLR).


