
Linas Marius Nasvytis

Organisation
University of Oxford
Biography

Why do you care about AI Existential Safety?

It’s increasingly clear that AI systems will have a profound impact on human lives. But given the complexity of the real world, it is essential to ensure that these systems are well aligned with our intentions, and that their behavior remains robust across the range of situations they may encounter in an environment. To this end, I aim to develop more cooperative AI systems and to work on their alignment with human preferences.

Please give at least one example of your research interests related to AI existential safety:

My main research interests in AI existential safety revolve around cooperative AI and value alignment. On the one hand, I am interested in how AI systems could reliably learn the preferences of a single individual, given the noisiness of human decision-making, and how this learning process could be made more efficient through better incorporation of human feedback. On the other hand, even if we can successfully align an agent with the preferences of a single human, it is equally important to develop algorithms that allow multiple agents to jointly pursue their goals, especially when those agents have been aligned with conflicting interests. To this end, I am interested in studying multi-agent systems and agent incentives in the context of AI alignment. Lastly, I believe it is important to ensure that the behavior of AI systems is robust to small perturbations in their environment.
