Riley Harris

Position
DPhil Student
Organisation
Oxford University
Biography

Why do you care about AI Existential Safety?

Insofar as AI presents risks of extinction, disempowerment, permanent totalitarianism, and other significant reductions in the expected value of humanity's future, these risks are worth taking seriously.

Please give at least one example of your research interests related to AI existential safety:

Here are the abstracts of two papers I’m working on:

Existential risk from artificial intelligence in the absence of a singularity: Several arguments indicate that future developments in artificial intelligence systems pose a risk of human extinction. These arguments rely on the idea that an AI system will be able to recursively self-improve and quickly become orders of magnitude more intelligent than humans, an idea that has recently come under scrutiny. I show that these arguments need only rely on much weaker claims about the capabilities of future AI systems: if future systems reach broadly human-level performance, there are plausible paths to an existential catastrophe.

Safe or useful? An impossibility theorem for AI safety: Some hope that we will be able to shut down advanced AI systems if they come to present a significant risk to society. I show that systems that allow us to shut them down will fail to be useful, because they choose stochastically dominated options. On the other hand, systems that are useful will not let us turn them off, because they satisfy time-step dominance (Thornley, MS). This suggests that future AI systems will resist shutdown.
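
For readers unfamiliar with the decision-theoretic language in this abstract, the following is a minimal sketch of the standard textbook definition of (first-order) stochastic dominance, which the argument appeals to; it is not taken from the paper itself, and Thornley's time-step dominance criterion is a distinct condition not reproduced here.

% Standard definition of first-order stochastic dominance between prospects A and B,
% stated relative to a utility scale u (hypothetical notation for illustration only).
\[
  A \succ_{\mathrm{SD}} B
  \;\iff\;
  \forall x:\; \Pr\bigl[u(A) \ge x\bigr] \ge \Pr\bigl[u(B) \ge x\bigr]
  \;\text{ and }\;
  \exists x:\; \Pr\bigl[u(A) \ge x\bigr] > \Pr\bigl[u(B) \ge x\bigr].
\]
% An option is "stochastically dominated" when some available alternative dominates it in
% this sense; the abstract's claim is that shutdownable systems sometimes choose such options.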
