
Timothy Kintu

Position
Research Associate
Organisation
Infectious Diseases Institute
Biography

Why do you care about AI Existential Safety?

I care about AI existential safety because it reflects a commitment to ensuring that powerful technologies remain beneficial, equitable, and anchored in human values as they evolve. Ignoring the broader consequences of AI systems, especially at this nascent stage of development, could lead to outcomes we struggle to control. For me, this extends directly to healthcare: if diagnostic tools or treatment algorithms become more capable than any human team, we must ensure they truly serve all patients, especially those already marginalized by healthcare inequities, such as people in sub-Saharan Africa.

Please give at least one example of your research interests related to AI existential safety:

The project I’m currently working on is the formative development of an AI-driven chatbot for adolescents and young people living with HIV in Uganda. The chatbot aims to offer peer support, health education, and accurate medical information. In designing it, one of the issues I’m actively exploring is how to incorporate fail-safes and ethical guardrails to prevent biased or misleading outputs, especially given that we are working with a socially vulnerable group. Before deployment, I want to ensure the system can handle delicate health inquiries without propagating misinformation or harmful content, an issue that aligns with the wider AI safety concerns of reward hacking and unintended consequences.
