
Hrvoje Kukina

Organisation: MIT
Biography

Why do you care about AI Existential Safety?

I care deeply about AI existential safety because I’ve seen firsthand how powerful AI can be in fields like medicine and technology. My work developing reinforcement learning models for sepsis treatment showed me both the incredible benefits and the serious risks of AI. Leading AI projects and mentoring startups have made me realize how quickly these systems can become central to people’s lives. As AI grows, so does the responsibility to ensure it doesn’t harm or exploit people. I worry about how these systems can reinforce biases or act unpredictably without proper oversight. I’ve also seen that the more advanced AI becomes, the harder it is for humans to fully understand or control it. My studies in machine learning and data science have convinced me that careful, responsible development is the only way forward. Above all, I want to make sure AI helps humanity without causing harm.

Please give at least one example of your research interests related to AI existential safety:

One example of my work related to AI existential safety is a project applying distributional reinforcement learning to improve sepsis treatment. The models I developed outperformed human doctors in certain scenarios, which was both exciting and a bit unsettling: it showed me how quickly AI can become remarkably capable, and how important it is to ensure that these systems are safe and reliable. I shared this work at international conferences and published it in respected journals, always highlighting the importance of transparency and oversight. In my teaching and mentorship roles, I encourage students and teams to see not just the potential of AI, but also the responsibility that comes with it. I also work on the mathematical foundations of these algorithms to understand their behavior and potential pitfalls, because I believe a solid theoretical grasp is essential for building safe AI. Beyond the technical aspects, I've seen how important it is to think about the broader impacts of these systems on society. Ultimately, I see this research as part of a bigger effort to ensure that AI advances in ways that truly benefit humanity and avoid unintended harms.
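For readers unfamiliar with the technique named above, here is a minimal illustrative sketch of the core idea behind distributional reinforcement learning. It is not the sepsis model itself (whose data, states, and actions are not described here): every value below, including the atom range, reward, and discount, is an invented toy assumption. The sketch follows the categorical (C51-style) projection of Bellemare et al., in which the agent learns a full probability distribution over returns rather than a single expected value Q(s, a).

```python
import numpy as np

# Illustrative only: a categorical (C51-style) distributional Bellman update.
# All numbers here (atom range, reward, discount) are invented toy values,
# not the sepsis model described in the profile above.

N_ATOMS = 51
V_MIN, V_MAX = -10.0, 10.0
atoms = np.linspace(V_MIN, V_MAX, N_ATOMS)  # fixed support of the return distribution
dz = (V_MAX - V_MIN) / (N_ATOMS - 1)

def categorical_projection(next_probs, reward, gamma):
    """Project the shifted/scaled target distribution back onto the fixed atoms.

    next_probs: probabilities over `atoms` for the next state's return.
    Returns the projected target probabilities (same shape).
    """
    target = np.zeros(N_ATOMS)
    # Apply the distributional Bellman operator: T Z = r + gamma * Z'
    tz = np.clip(reward + gamma * atoms, V_MIN, V_MAX)
    # Each shifted atom lands between two fixed atoms; split its probability
    # mass proportionally between them (the "projection" step).
    b = (tz - V_MIN) / dz
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(N_ATOMS):
        if lower[j] == upper[j]:  # landed exactly on an atom
            target[lower[j]] += next_probs[j]
        else:
            target[lower[j]] += next_probs[j] * (upper[j] - b[j])
            target[upper[j]] += next_probs[j] * (b[j] - lower[j])
    return target

if __name__ == "__main__":
    # Toy next-state distribution: all mass on a return of 0.
    next_probs = np.zeros(N_ATOMS)
    next_probs[N_ATOMS // 2] = 1.0
    projected = categorical_projection(next_probs, reward=1.0, gamma=0.99)
    # The mean of the projected distribution recovers the usual scalar target.
    print("expected return:", np.dot(atoms, projected))  # ~= 1.0 + 0.99 * 0.0
```

In the full algorithm, a network's predicted distribution is trained against this projected target with a cross-entropy loss; the projection step shown here is what distinguishes the distributional approach from standard Q-learning, which tracks only the expected return.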
