Paul Matthew Salmon

Position: Professor
Organisation: University of the Sunshine Coast
Biography

Why do you care about AI Existential Safety?

As a Human Factors practitioner, I have spent my career working to understand and optimise human health and wellbeing across a broad set of domains. Since Bainbridge's work on the ironies of automation, AI safety has formed a central component of my work. I am passionate about AI safety because, whilst AI could potentially revolutionise human health and wellbeing on a global scale, I am also keenly aware of the many risks and unwanted emergent properties that could arise. It is my firm belief that the key to managing these risks, some of which are existential, is the application of Human Factors theory, methods, and knowledge to support the design of safe, ethical, and usable AI. Accordingly, a large component of my work to date has focused on AI safety, and my current research interests centre on the use of Human Factors to manage the risks associated with AI and AGI.

Please give at least one example of your research interests related to AI existential safety:

I am currently the chief investigator on an Australian Research Council funded Discovery project involving the application of sociotechnical systems theory and Human Factors methods to manage the risks associated with AGI. The research applies prospective modelling techniques to identify the risks associated with two 'envisioned world' AGI systems: an uncrewed combat aerial vehicle system (The Executor) and a road transport management AGI (MILTON). Based on the risks identified, we are currently developing recommendations for a suite of controls required to ensure that risks are effectively managed and that the benefits of AGI can be realised. The outputs of the research will provide designers, organisations, regulators, and governments with the information required to support the development and implementation of the controls needed to ensure that AGI systems operate safely and efficiently.

I am also the chief investigator on a Trusted Autonomous Systems funded program of work that involves the development of a Human Factors framework to support the design of safe, ethical, and usable AI in defence.
