
FLI February, 2018 Newsletter

Published
March 3, 2018
Author
Revathi Kumar


The Challenge of Value Aligned AI

Podcast: AI and the Value Alignment Problem
with Meia Chita-Tegmark and Lucas Perry


What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can’t even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration?

Ariel spoke with FLI’s Meia Chita-Tegmark and Lucas Perry on this month’s podcast about the value alignment problem: the challenge of aligning the goals and actions of AI systems with the goals and intentions of humans.

Topics discussed in this episode include:

  • A recent value alignment workshop in Long Beach
  • The role of psychology in value alignment
  • The possibility of creating suffering risks (s-risks)
  • How AGI can inform human values

You can listen to this podcast on SoundCloud and iTunes.





UPDATE: 2018 Grants Competition

We have received 181 applications in response to our AGI grants competition, and review is now in full swing. Thank you to all of the talented researchers who applied!

ICYMI: This Month’s Most Popular Articles

Artificial Intelligence




From privacy concerns and algorithmic bias to “black box” decision making, and on to broader questions of value alignment, recursive self-improvement, and existential risk from superintelligence, there is no shortage of AI safety issues. But with limited funding and too few researchers, trade-offs in research are inevitable, and researchers must prioritize among these causes.






How to Prepare for the Malicious Use of AI
By Jessica Cussins

How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? This is the question posed by a 100-page report released last week, written by authors from the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, and the Center for a New American Security.






Transparent and Interpretable AI: An Interview With Percy Liang
By Sarah Marquart

To really know whether a technique is effective, “there is no substitute for applying it to real life,” says Liang. “This goes for language, vision, and robotics.” An autonomous vehicle may perform well in all testing conditions, but there is no way to accurately predict how it will perform in an unpredictable natural disaster.


Climate Change




The last time CO2 levels were this high, global surface temperatures were 6 °C higher, oceans were 100 feet higher, and modern humans didn’t exist. Unless the international community makes massive strides towards the Paris Agreement goals, atmospheric CO2 could rise to 560 ppm by 2050 — double the concentration in 1958, and a sign of much more global warming to come.





By Kirsten Gronlund

Dubbed “the evil twin of global warming,” ocean acidification is a growing crisis that poses a threat to both water-dwelling species and human communities that rely on the ocean for food and livelihood.


What We’ve Been Up to This Month


Max Tegmark gave the 2018 Beyond Annual Lecture at Arizona State University this month, where he argued that ensuring AI continues to benefit humanity as it eclipses our intelligence will require planning and hard work. He explored the challenges we need to overcome, as well as the exciting opportunities ahead.

Ariel Conn participated in this year’s ASU Origins Project Workshop, Artificial Intelligence and Autonomous Weapons Systems: Technology, Warfare, and Our Most Destructive Machines, which brought together experts in AI, autonomous weapons, and nuclear weapons, including former Defense Secretary William Perry.

Jessica Cussins participated in the Global Governance of Artificial Intelligence Roundtable in Dubai, which was held for the first time as part of the annual World Government Summit. This was organized by the AI Initiative from the Future Society at the Harvard Kennedy School and H.E. Omar bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence.

Lucas Perry participated in an Individual Outreach Forum with the Centre for Effective Altruism (CEA) to focus on finding the most capable and effective people and helping them work on the world’s most pressing problems.

FLI in the News

IEEE SPECTRUM: Debating Slaughterbots and the Future of Autonomous Weapons
“People can look at the same technology and disagree about how it will shape the future,” explains Paul Scharre as he shares a final perspective on the Slaughterbots debate.

BOSS MAGAZINE: Boston Dynamics Debuts Spotmini and People Are Freaked Out
“A superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history,” according to Max Tegmark.

SCIENCE AT AMNH: Being Human in the Age of Artificial Intelligence with Max Tegmark and Neil deGrasse Tyson
Artificial intelligence is growing at an astounding rate, but are we ready for the consequences? Cosmologist and MIT physics professor Max Tegmark guides us through the state of artificial intelligence today and the many paths we might take in further developing this technology. Hayden Planetarium director Neil deGrasse Tyson moderates.

If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.
