Yixin Wang

Position: Assistant Professor
Organisation: University of Michigan

Biography

Why do you care about AI Existential Safety?

While we as researchers chase the next frontier of machine learning performance, our daily lives are increasingly affected by the wide adoption of machine learning algorithms. In a research paper, prediction errors may amount to a 1% drop in reported performance; but once the algorithms are deployed, that 1% drop has profound implications for the lives of the 1% (and often far more) of individuals it affects. For this reason, I care about the reliability and safety of machine learning algorithms, and about aligning them with the tasks for which they are deployed.

Please give at least one example of your research interests related to AI existential safety:

I study the reliability of machine learning algorithms from causal and probabilistic perspectives. From the causal perspective, I study what leads machine learning models to make particular predictions. Understanding the causality underlying both the data-generating process and the algorithm's decision-making process could help us better align algorithms with the actual tasks for which they are deployed. From the probabilistic perspective, I am interested in building algorithms that know what they don't know: when the data cannot answer our questions of interest, the algorithms should ideally acknowledge their ignorance rather than guess at random.
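As a minimal illustration of the "know what you don't know" idea, the sketch below shows a toy classifier wrapper that abstains from predicting whenever its predictive entropy is too high. This is only an illustrative sketch, not code from the research described above; the function name, threshold, and use of normalized Shannon entropy are assumptions chosen purely for illustration.

```python
# Illustrative sketch: abstain from predicting when uncertainty is high.
# All names and thresholds here are assumptions for illustration only.
import numpy as np

def predict_or_abstain(probs, entropy_threshold=0.5):
    """Return the predicted class index, or None to acknowledge ignorance.

    probs: class probabilities for one example (non-negative, sums to 1).
    entropy_threshold: maximum normalized entropy at which we still predict.
    """
    probs = np.asarray(probs, dtype=float)
    # Normalized Shannon entropy: 0 = fully confident, 1 = uniform (maximal ignorance).
    entropy = -np.sum(probs * np.log(probs + 1e-12)) / np.log(len(probs))
    if entropy > entropy_threshold:
        return None  # acknowledge ignorance rather than guess at random
    return int(np.argmax(probs))

# A confident prediction versus an abstention:
print(predict_or_abstain([0.95, 0.03, 0.02]))  # -> 0
print(predict_or_abstain([0.40, 0.35, 0.25]))  # -> None (too uncertain)
```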
