
Brian Green

Organisation
Santa Clara University
Biography

Why do you care about AI Existential Safety?

I care about AI existential safety because human life, culture, and civilization are worth protecting. I have dedicated my life to helping people make better decisions, especially in the realm of technology and AI. Humankind is rapidly growing in power, and this growth poses a fundamentally ethical problem: how should we use this power? What are its proper and improper uses? Artificial intelligence takes the most human of capacities – our intelligence – and casts it into the world as machines and programs that can act autonomously, in increasingly general ways. Power gives us the capacity to act, intelligence tells us which actions might be desirable, and ethics can help tell us which desirable actions might actually be good. As a technology ethicist at the Markkula Center for Applied Ethics, I have worked directly and extensively with the Partnership on AI, the World Economic Forum, the Vatican, fellow academics, and corporations collectively worth well over $4 trillion. My goal is to equip people with tools for thinking about ethical problems, so they can make better decisions about technology in general, and AI in particular, and through their decisions create a better future.

Please give one or more examples of research interests relevant to AI existential safety:

I have very broad interests in AI existential safety and I engage the topic through four main paths. The first path is direct: I work on issues immediately relevant to problems and solutions in AI risk and safety. The second path is training current AI practitioners, future practitioners, and those involved in educating practitioners in practical ethical tools for technology. This includes questions across the full spectrum of safety, from short term to long term. The third path is an adaptation strategy toward risk rather than a mitigation strategy, and involves creating refuges from existential risks, both on and off Earth. My fourth path is through cultural institutions such as the Vatican and other organizations that need to learn more about AI and the dangers and opportunities it poses to the world. I have worked extensively with the Pontifical Council for Culture, which seeks to promote a broad cross-cultural dialogue on AI.
