
The Anh Han

Position: Professor of Computer Science
Organisation: Teesside University

Biography

Why do you care about AI Existential Safety?

AI technologies can pose significant global risks to our civilisation, which may even be existential, if they are not safely developed and appropriately regulated. In my research group, we have developed computational models (both analytic and simulated) that capture key factors of an AI development race, revealing which strategic behaviours regarding safety compliance are likely to emerge under different conditions and hypothetical scenarios of the race, and how incentives can be used to steer the race in a more positive direction. This research is part of an FLI-funded AI Safety grant (https://futureoflife.org/2018-ai-grant-recipients/#Han).
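
To give a flavour of this kind of modelling, here is a minimal replicator-dynamics sketch of a two-strategy race between SAFE developers (who pay a cost for precautions) and UNSAFE developers (who move faster but risk disaster). It is an illustration only: the payoff structure and all parameter values below are hypothetical, not the model analysed in the publications listed further down.

```python
# A minimal, illustrative replicator-dynamics model of an AI race.
# Two strategies: SAFE (index 0) pays a safety cost c; UNSAFE (index 1)
# skips precautions, gaining a speed advantage s but risking a disaster
# (probability p) that wipes out the benefit b. All values are hypothetical.

import numpy as np

b, c, s, p = 4.0, 1.0, 1.5, 0.4  # benefit, safety cost, speed gain, disaster risk

# payoff[i, j]: payoff to a player using strategy i against strategy j.
payoff = np.array([
    [b / 2 - c,                 b / (1 + s) - c],  # SAFE vs SAFE, SAFE vs UNSAFE
    [(1 - p) * s * b / (1 + s), (1 - p) * b / 2],  # UNSAFE vs SAFE, UNSAFE vs UNSAFE
])

x, dt = 0.5, 0.01  # initial fraction of SAFE developers; integration step
for _ in range(20_000):
    f_safe   = x * payoff[0, 0] + (1 - x) * payoff[0, 1]
    f_unsafe = x * payoff[1, 0] + (1 - x) * payoff[1, 1]
    f_mean   = x * f_safe + (1 - x) * f_unsafe
    x = min(max(x + dt * x * (f_safe - f_mean), 0.0), 1.0)  # replicator equation

print(f"Long-run fraction of SAFE developers: {x:.3f}")
```

With these particular numbers the unsafe strategy dominates and safe development dies out, which is exactly the kind of race-to-the-bottom outcome that motivates the question of regulation.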

To develop suitable and realistic models, it is important to capture the different scenarios and contexts of AI safety development (e.g., the relationship between safety technologies, AI capability, and the level of risk posed by AI systems), so as to identify suitable regulatory actions. On the other hand, our behavioural modelling work indicates, for example, what level of risk is acceptable without triggering unnecessary regulation (i.e., over-regulation).

I believe it is important to be part of this community, both to learn about AI safety research and to inform my own research agenda on modelling AI development races and competition.

Please give one or more examples of research interests relevant to AI existential safety:

My relevant research interest is in understanding the dynamics of cooperation and competition in AI safety development behaviours (e.g., by companies or governments), and how incentives, such as rewarding safety-compliant behaviours and punishing non-compliant ones, can improve safety behaviour (see the sketch below).
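
Continuing the hypothetical sketch above, the snippet below adds an external institution that rewards SAFE players and fines UNSAFE ones, and checks when safe development becomes the dominant strategy. Again, the payoff structure and parameter values are illustrative assumptions, not results from the papers listed below.

```python
# Extends the hypothetical race payoffs above with institutional incentives:
# SAFE players receive a reward, UNSAFE players pay a fine. Illustrative only.

import numpy as np

def race_payoffs(b=4.0, c=1.0, s=1.5, p=0.4, reward=0.0, fine=0.0):
    m = np.array([
        [b / 2 - c,                 b / (1 + s) - c],
        [(1 - p) * s * b / (1 + s), (1 - p) * b / 2],
    ])
    m[0, :] += reward  # positive incentive for safety-compliant behaviour
    m[1, :] -= fine    # negative incentive (punishment) for non-compliance
    return m

def safe_is_dominant(m):
    # SAFE strictly dominates if it earns more against every opponent.
    return m[0, 0] > m[1, 0] and m[0, 1] > m[1, 1]

for fine in (0.0, 0.5, 1.0):
    print(f"fine={fine:.1f} -> SAFE dominant: {safe_is_dominant(race_payoffs(fine=fine))}")
```

In this toy setting a sufficiently large fine flips the race so that complying with safety becomes the dominant strategy; the models in the publications below study more carefully when such reward and sanction schemes succeed or fail.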

Some of my relevant publications in this direction:

1) T. A. Han, L. M. Pereira, F. C. Santos and T. Lenaerts. To Regulate or Not: A Social Dynamics Analysis of an Idealised AI Race. Journal of Artificial Intelligence Research, Vol. 69, pages 881-921, 2020.
Link to publication:
https://jair.org/index.php/jair/article/view/12225

2) T. A. Han, L. M. Pereira, T. Lenaerts and F. C. Santos. Mediating artificial intelligence developments through negative and positive incentives. PLOS ONE, 16(1): e0244592, 2021.
Link to publication:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0244592

3) T. A. Han, L. M. Pereira and T. Lenaerts. Modelling and Influencing the AI Bidding War: A Research Agenda. AAAI/ACM Conference on AI, Ethics, and Society (AIES), pages 5-11, Honolulu, Hawaii, 2019.
Link to publication:
https://dl.acm.org/doi/abs/10.1145/3306618.3314265

4) An article in The Conversation: https://theconversation.com/ai-developers-often-ignore-safety-in-the-pursuit-of-a-breakthrough-so-how-do-we-regulate-them-without-blocking-progress-155825

5) A preprint showing the impact of network structure on AI race dynamics and safety behaviour outcomes
Link: https://arxiv.org/abs/2012.15234

6) A preprint presenting our analysis of a new proposal for AI regulation and governance through voluntary safety commitments
Link: https://arxiv.org/abs/2104.03741
