
AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Published
16 April, 2020

Topics discussed in this episode include:

  • Rohin's and Buck's optimism and pessimism about different approaches to aligned AI
  • Traditional arguments for AI as an x-risk
  • Modeling agents as expected utility maximizers
  • Ambitious value learning and specification learning/narrow value learning
  • Agency and optimization
  • Robustness
  • Scaling to superhuman abilities
  • Universality
  • Impact regularization
  • Causal models, oracles, and decision theory
  • Discontinuous and continuous takeoff scenarios
  • Probability of AI-induced existential risk
  • Timelines for AGI
  • Information hazards

Timestamps: 

0:00 Intro

3:48 Traditional arguments for AI as an existential risk

5:40 What is AI alignment?

7:30 Back to a basic analysis of AI as an existential risk

18:25 Can we model agents in ways other than as expected utility maximizers?

19:34 Is it skillful to try to model human preferences as a utility function?

27:09 Suggestions for alternatives to modeling humans with utility functions

40:30 Agency and optimization

45:55 Embedded decision theory

48:30 More on value learning

49:58 What is robustness and why does it matter?

01:13:00 Scaling to superhuman abilities

01:26:13 Universality

01:33:40 Impact regularization

01:40:34 Causal models, oracles, and decision theory

01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios

01:53:18 What is the probability of AI-induced existential risk?

02:00:53 Likelihood of continuous and discontinuous takeoff scenarios

02:08:08 What would you both do if you had more power and resources?

02:12:38 AI timelines

02:14:00 Information hazards

02:19:19 Where to follow Buck and Rohin and learn more

 

Works referenced: 

AI Alignment 2018-19 Review

Takeoff Speeds by Paul Christiano

Discontinuous progress investigation by AI Impacts

An Overview of Technical AI Alignment with Rohin Shah (Part 1)

An Overview of Technical AI Alignment with Rohin Shah (Part 2)

Alignment Newsletter

Intelligence Explosion Microeconomics

AI Alignment: Why It's Hard and Where to Start

AI Risk for Computer Scientists

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

