
Karina Vold

Position
Assistant Professor
Organisation
University of Toronto
Biography

Why do you care about AI Existential Safety?

As AI systems become more autonomous and more integrated into critical sectors like healthcare, finance, and security, concerns arise about unintended consequences, including catastrophic and existential risks. I'm interested in studying the longer-term ethical and safety risks that could emerge from future advanced AI systems, including conscious AI systems, agentic systems, artificial general intelligence (AGI), and powerful narrow AI systems. I'm also interested in how AI technologies are affecting human cognition: how we think, how we make decisions, our memory capacities, our consciousness, our autonomy, and so on. I've written on all of these topics (please see my personal website, www.kkvd.com, my Google Scholar page, and my Research Lab website, www.periscope.org). Ultimately, I *care* about AI existential safety because I care about humanity and the precariousness of the human experience and the human condition. As a child, these were some of the questions that kept me up at night. As an academic philosopher, they are the questions that preoccupy my work.

Please give at least one example of your research interests related to AI existential safety:

One example of my work on AI x-risk is the chapter I co-wrote for The Oxford Handbook of Digital Ethics, titled "How Does Artificial Intelligence Pose an Existential Risk?" Using the argumentative and rigorous methods of philosophy, the paper tries to make as explicit as possible the reasons for thinking that AI poses an existential risk at all. We articulate what exactly constitutes an existential risk and how, exactly, AI poses such a threat. In particular, we critically explore three commonly cited reasons for thinking that AI poses an existential threat to humanity: the control problem, the possibility of global disruption from an AI race dynamic, and the possible weaponization of AI. The paper was written to convince and inform the philosophical community, as well as the academic community more broadly, which back in 2019 (when the paper was written) still needed some convincing on this point (and perhaps still does!). It was also written to serve as a pedagogical tool for scholars looking to teach units on the existential risk of AI to university and college students. To my knowledge, it has received over 30 citations and has been used in classrooms around the world.

My other related research interests include how to use AI to enhance human oversight capabilities, how to assess AI capabilities, how strong the instrumental convergence thesis really is (and what evidence supports it), the limits of machine intelligence (I regularly teach a course on this topic, which has received some media attention), and our future with AI: how humans can learn from AI systems and not be 'left behind'.
