Fazl Barez

Organisation
University of Edinburgh & University of Oxford
Why do you care about AI Existential Safety?

My aim is to have the largest positive impact on the world, for both existing humans and future generations. I believe the greatest existential threat to humanity is unaligned AI that is far more capable than humans, and this threat requires more attention and investment. I therefore want to work on projects that make AI safe, aligned, robust, and trustworthy, both for people alive today and for those in the future.

Please give at least one example of your research interests related to AI existential safety:

One example of my research interests related to AI existential safety is the mechanistic understanding of the underlying functions of ML systems. By analyzing the design and performance of these systems, I aim to identify the key factors that contribute to their safety, trustworthiness, and reliability. My work involves simulating scenarios in which these systems are tasked with performing various functions, which yields valuable insights into their design mechanisms and ultimately helps improve their alignment with human values and goals.
