
Chad DeChant

Organisation
Columbia University
Biography

Why do you care about AI Existential Safety?

Many technologies can pose a threat to humanity: the use of nuclear weapons and the misuse of biotechnology, for example. But these and most other technological dangers are relatively well understood, and their use or misuse remains under the direct control of their creators. AI could pose a different kind of threat to the extent that it usurps human decision-making and agency. It will also magnify and accelerate more traditional threats, particularly through its use in autonomous weapons systems.

Please give one or more examples of research interests relevant to AI existential safety:

My current research focuses on enabling AI agents to accurately report, summarize, and answer questions about their past actions in natural language. The first step in controlling AI is knowing what it is doing, and unless we are going to follow and constantly supervise every AI agent at all times, we will have to rely on those agents themselves to tell us what they are doing. This may seem basic, but I believe insufficient attention has been paid to developing this ability, which should be one of the foundations of AI safety. If we could rely on AI agents to accurately report, in an understandable manner, what they have done and what they plan to do, that would go a long way toward addressing many AI safety concerns.
