
Aidan Kierans

Organisation
University of Connecticut
Biography

Why do you care about AI Existential Safety?

Artificial intelligence will be the most important technology of the 21st century. As with any powerful technology, there is a risk it could be misused or mismanaged. I developed my passion for AI research while exploring how I could best use my technical and analytical skills in the public interest. To this end, I earned concurrent bachelor’s degrees in computer science and philosophy and have devoted much of my free time to extracurricular research in both fields. The knowledge and skills I have developed along the way leave me more confident than ever that AI poses non-negligible and unacceptable existential and catastrophic risks. I aim to chip away at these risks.

Please give one or more examples of research interests relevant to AI existential safety:

I am interested in methods for measuring and producing intelligence and honesty in AI. My current research project, titled “Quantifying Misalignment Between Agents,” defines and models “misalignment” in relation to agents and their goals. Building on this, I would like to develop qualitative methods for characterizing an agent’s intelligence and honesty, and then quantitative benchmarks for honest AI. Drawing on the methods and knowledge from my undergraduate epistemology research, I would investigate what it means for an AI system to hold beliefs and how we can ensure that those beliefs are expressed in good faith. Answering these questions would move us closer to capable, trustworthy AI.
