
Anna Katariina Wisakanto

Organisation
Chalmers University
Biography

Why do you care about AI Existential Safety?

Risks from global systems, that is, systemic challenges that could cause or exacerbate a global catastrophe or a threat to society, are under-researched relative to what is (currently) taken to be at stake. One risk posed by a global assembly of artificial decision-making is that such interconnected, mutually reinforcing systems shape our thinking. This impact can be assessed along two dimensions. First, individually, it could erode our capacity to make value judgments with our moral faculties intact. Second, collectively, we might come to perceive a reality that no longer corresponds to the natural world, and certain arbitrary ideas might become pervasive. As a consequence, advantageous cultural development in society may slow down, potentially creating an exclusive environment in which the key ideas needed to assess existential risks or ensure human prosperity do not persevere.

Please give at least one example of your research interests related to AI existential safety:

Risks from global systems: the impact of automated decision-making on human decision-making and thinking.
