Michele Campolo

Organisation
School for Advanced Studies, University of Udine
Biography

Why do you care about AI Existential Safety?

Given the great potential that technological development, and AI in particular, could unlock for humanity, reducing existential risk is essential. Studying the risks posed by AI helps us understand possible future scenarios and increases the chances of a positive outcome.

Please give one or more examples of research interests relevant to AI existential safety:

At AI Safety Camp 2020 I worked in a team on the topic of goal-directedness. In the AI Alignment community, it has been argued that goal-directed agents might pose a greater risk than agents whose behaviour is not strongly driven by external goals. We investigated this claim and produced a literature review on goal-directedness, published on the Alignment Forum.

At CEEALAR I spent two years studying AI Alignment and the literature on AGI, to better understand the main characteristics of systems that possess general intelligence, their potential benefits, and the role they might play in catastrophic scenarios.

I am currently working on an AI Alignment project inspired by philosophical ideas from metaethics. Even though the relation between metaethics and AI Alignment has been recognised by many thinkers, some of whom were also guests on the AI Alignment Podcast by FLI, there are few projects on this topic in the field of AI Safety. When completed, the project might show a new path to AI Alignment, enabling the design of artificial intelligence that is aligned not only with human values, but with all sentient life.
