Jacy Anthis

Organisation
UC Berkeley
Why do you care about AI Existential Safety?

AI is unlike other technologies: rather than simply building powerful tools for us to use, we aim to create systems that learn and develop on their own, with emergent capabilities that we will not control by default. We need much more work on building safe and beneficial AI, rather than plunging headfirst into an era of ever more powerful systems. The future turns on the extent and manner in which we are able to embed social values in these unprecedentedly powerful systems.

Please give at least one example of your research interests related to AI existential safety:

The increasing capabilities and autonomy of AI systems, particularly large language models, are leading to radically new forms of human-AI interaction. I argue that we can understand these dynamics by building on longstanding scaffolding in human-computer interaction theory, including computers as social actors, moral attribution, mind perception, and anthropomorphism. Specifically, I characterize “digital minds” as systems that have, or appear to have, mental faculties (e.g., agency, intent, reasoning, emotion) that circumscribe interaction. We can mitigate existential risk by operationalizing these faculties and by studying the complex systems that emerge between digital minds and humans, as well as among digital minds themselves.
