
Uroš Ćemalović

Organisation
Institute of European Studies
Biography

Why do you care about AI Existential Safety?

Two major focuses of my academic and research interests are environmental protection and intellectual property. AI is increasingly involved in both, raising some fundamental concerns but also offering unexpected potential for fast and significant improvements. In both cases, the ways AI is used and developed are, fundamentally, questions of existential safety. In other words, almost all my personal and professional interests revolve around two major issues: how to preserve the environment and how to make the best possible use of human intellectual potential, by establishing better policies and introducing an improved regulatory framework. Both have been heavily impacted by the advent of AI. This is why I care about AI existential safety, and it is why I wish to cooperate with other academics and researchers, from various fields and of different nationalities, working on AI-related issues and, especially, on AI existential safety.

Please give at least one example of your research interests related to AI existential safety:

I will give three different but mutually dependent examples of how my recent and ongoing (2021-2025) research activities and interests relate to AI existential safety. The first concerns education on climate change and AI existential safety, the second the regulatory and ethical concerns raised by copyright infringements committed by AI, and the third the threats and potential that AI presents for the energy transition.

  1. Education on climate change and AI existential safety – Research project “AI and education on climate change mitigation – from King Midas problem to a golden opportunity?” – Grantee of the Future of Life Institute (May-September 2024). When I started working on this assignment, I was only partially aware of how and to what extent the use of AI can represent an existential threat (but also a huge potential) for global endeavors to mitigate climate change and, even more, to provide meaningful education on this issue (ECCM – education on climate change mitigation). While existing, general-purpose AI-assisted educational tools are, on the one hand, adaptable to quick societal changes and a rapidly changing environment, on the other they can easily be used for deceptive and manipulative purposes, conveying potentially devastating conspiracy theories. My major conclusion was that what is really needed are new, innovative AI-assisted educational tools adapted to ECCM. Apart from their cross-disciplinarity, their ability to better illustrate the complexity of sustainability-related issues and their efficiency in combating prejudices, misbeliefs and conspiracy theories, these new AI-based educational tools have to respond to the challenges related to AI existential safety.
  2. Human creativity, intellectual property and AI existential safety – Research project “Regulatory and ethical evaluation of the outputs of AI-based text-to-image software solutions” – Project Leader, Innovation Fund of the Republic of Serbia (October 2023 – May 2025). The aptitude of AI-based tools to “create” various pieces of art has deeply worried not only artists, but many of us. As a lawyer with a Ph.D. in intellectual property law (University of Strasbourg, France, 2010), I was particularly concerned for individual artists and smaller creative communities. For them, and not only for them, the issue of intellectual property rights (IPRs) is an issue of existential safety. How can we create the best possible regulatory framework, one in which genuinely human creativity is protected and remunerated, while also allowing the regulated development and use of AI in creative industries? (At least some) answers are not expected before May 2025, when this project is due to end.
  3. Energy transition and AI existential safety – Research project “Energy transition through cross-border inter-municipal cooperation” – Grantee of the Institute for Philosophy and Social Theory (July 2022 – June 2023). In this project, I examined how to use AI in the energy transition in a safe and ethical way, and in a supranational context.
