Reuth Mirsky
Tufts University
Why do you care about AI Existential Safety?
I think we should look ahead: not only by designing safe AI (an AI that does nothing can also be considered “safe”), but also by designing agents with the autonomy to reason and make informed decisions about how to act, so that they are truly safe and beneficial to humans.
Please give at least one example of your research interests related to AI existential safety:
I proposed the Seeing-Eye Robot Challenge (Mirsky and Stone, 2022): developing a robot that can serve as a guide dog and disobey unsafe commands, such as a command to cross the road when a car is coming.