
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Published 1 July, 2020

It's well established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will exploit the degrees of freedom afforded by the misspecified objective and push them to extreme values. This may allow for better optimization on the goals in the objective function, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Can misalignment also arise between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, and to evaluate three proposals for building safe advanced AI.

Topics discussed in this episode include:

  • Inner and outer alignment
  • How and why inner alignment can fail
  • Training competitiveness and performance competitiveness
  • Evaluating imitative amplification, AI safety via debate, and microscope AI

 

Timestamps: 

0:00 Intro 

2:07 How Evan got into AI alignment research

4:42 What is AI alignment?

7:30 How Evan approaches AI alignment

13:05 What are inner alignment and outer alignment?

24:23 Gradient descent

36:30 Testing for inner alignment

38:38 Wrapping up on outer alignment

44:24 Why is inner alignment a priority?

45:30 How inner alignment fails

01:11:12 Training competitiveness and performance competitiveness

01:16:17 Evaluating proposals for building safe advanced AI via inner and outer alignment, as well as training and performance competitiveness

01:17:30 Imitative amplification

01:23:00 AI safety via debate

01:26:32 Microscope AI

01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment

01:34:45 Where to follow Evan and find more of his work

 

Works referenced: 

Risks from Learned Optimization in Advanced Machine Learning Systems

An overview of 11 proposals for building safe advanced AI 

Evan's work at the Machine Intelligence Research Institute

Twitter

GitHub

LinkedIn

Facebook

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

