Elad Hazan
Why do you care about AI Existential Safety?
For the same reasons as those in William MacAskill's book "What We Owe the Future".
Please give at least one example of your research interests related to AI existential safety:
I believe that it is very hard to compete with a superior intelligence, but it may be feasible to design mechanisms that keep it aligned. I'm interested in regret minimization in games as it pertains to safety and alignment.