Grant
Lethal Autonomous Weapons, Artificial Intelligence and Meaningful Human Control
$136,918.00
Heather Roff, University of Denver
There is growing concern over the deployment of autonomous weapons systems and over how the pairing of artificial intelligence (AI) with weapons will change the future of conflict. The United Nations recently took up the subject of autonomous weapons, and many governments and key international organizations argue that such systems require meaningful human control to be acceptable. But what is human control, and how do we ensure that it is meaningful? This project helps the international community, scholars and practitioners by providing answers to those questions and by helping to protect the essential elements of human control over the application of force. Bringing together computer scientists, roboticists, ethicists, lawyers and diplomats, the project will produce a conceptual framework that can shape new research and international policy for the future. Moreover, it will create a freely downloadable dataset on existing and emerging semi-autonomous weapons. Through this data, we can gain clarity on how and where autonomous functions are already deployed and on how such functions are kept under human control. A focus on current and emerging technologies makes it clear that the relationship between AI and weapons is not a problem for the distant future, but a pressing issue now.
The project addresses the relationships between artificial intelligence (AI), weapons systems and society. In particular, it provides a framework for meaningful human control (MHC) of autonomous weapons systems. In international discussions, a number of governments and organizations have adopted MHC as a tool for approaching the problems and potential solutions raised by autonomous weapons, but the content of MHC has been left open. While this openness has been useful for policy reasons, the international community, academics and practitioners are calling for further work on the issue. This project responds to that call by bringing together a multidisciplinary and multi-stakeholder team to address key questions: what values are associated with MHC, what rules should inform the design of such systems (both in software and hardware), and how existing and currently developing weapons systems shape possible relationships between human control, autonomy and AI. To achieve impact across academic, industry and policy arenas, we will produce academic publications, policy briefs and an open-access database on ‘semi-autonomous’ weapons, and will sponsor multi-sector stakeholder discussions on how human values can be maintained as these systems develop. Furthermore, the organization Article 36 will channel outputs directly into the international diplomatic community to achieve impact in international legal and policy forums.
Published by the Future of Life Institute on 1 February 2023