
Samuel Albanie

Position: Assistant Professor
Organisation: University of Cambridge
Biography

Why do you care about AI Existential Safety?

As technology grows more powerful, so do the consequences of its failure. Over the last few decades, significant research advances in both hardware and software have yielded meaningful gains in AI capabilities. Absent hard physical limits that preclude further gains, I believe it is prudent to consider what happens if progress continues, and to allocate research effort towards mitigating the risks of that eventuality.

Please give at least one example of your research interests related to AI existential safety:

  1. Foundation models have yielded striking gains across a broad suite of tasks spanning text, vision and code. However, the self-supervised pretraining objectives of these models often do not precisely match the end objective desired by the user. Given this disparity, I'm interested in how these models can be induced to be maximally helpful to human end users. Currently, I'm approaching this problem through the lens of natural language prompting: communicating tasks through utterances.
  2. I'm also interested in better understanding the potential for machine learning models to learn manipulation strategies (see e.g. https://arxiv.org/abs/1701.04895).