
The Future of Artificial Intelligence

Published: February 1, 2015
Author: Seán Ó hÉigeartaigh


Seán Ó hÉigeartaigh is the Executive Director of the Centre for the Study of Existential Risk, based at the University of Cambridge.

AI leaders in academia and industry, together with legal, economic, and risk experts worldwide, recently signed an open letter calling for the robust and beneficial development of artificial intelligence. The letter follows a recent private conference organised by the Future of Life Institute and funded by Jaan Tallinn, a co-founder of both FLI and CSER, in which AI leaders and interdisciplinary researchers explored the future opportunities and societal challenges posed by artificial intelligence.

The conference resulted in a set of research priorities aimed at making progress on the technical, legal, and economic challenges posed by this rapidly developing field.

This conference, the research preceding it, and the support for the concerns raised in the letter may make this a pivotal moment in the development of this transformative field. But why is this happening now?

Why now?

An exciting new wave of progress in artificial intelligence is underway, driven by the success of a set of new approaches – “hot” areas include deep learning and other statistical learning methods. Advances in related fields such as probability theory, decision theory, neuroscience, and control theory are also contributing. These have kick-started rapid improvements on problems where progress had long been slow: image and speech recognition, perception and movement in robotics, and the performance of autonomous vehicles are just a few examples. As a result, impacts on society that once seemed far away now seem pressing.

Is society ready for the opportunities – and challenges – of AI?

Artificial intelligence is a general-purpose technology – one that will shape the development of many other technologies. As a result, it will affect society deeply and in many different ways. The near- and long-term benefits will be great: it will increase the world’s economic prosperity and enhance our ability to make progress on many important problems. In particular, any area where progress depends on analysing and using huge amounts of data – climate change, health research, biotechnology – could be accelerated.

However, even impacts that are positive in the long run can pose many near-term challenges. What happens when swathes of the labour market become automated? Can our legal systems assign blame when there is an accident involving a self-driving car? Does the use of autonomous weapons in war conflict with basic human rights?

It’s no longer enough to ask “can we build it?” Now that it looks like we can, we have to ask: “How can we build it to provide the most benefit? And how must we update our own systems – legal, economic, ethical – so that the transition is smooth, and we make the most of the positives while minimising the negatives?” These questions need careful analysis, with technical AI experts, legal experts, economists, policymakers, and philosophers working together. And since this affects society at large, the public also needs to be represented in the discussions and the decisions that are made.

Safe, predictable design of powerful systems

There are also deep technical challenges as these systems become more powerful and more complex. We have already seen unexpected behaviour from systems that weren’t thought through carefully enough – for example, the role of algorithms in the 2010 financial flash crash. It is essential that powerful AI systems don’t become black boxes operating in ways that we can’t entirely understand or predict. This will require better ways to make systems transparent and easier to verify, better security so that systems can’t be hacked, and a deeper understanding of logic and decision theory so that we can predict the behaviour of our systems in the different situations they will act in. There are open questions to be answered: can we design these powerful systems with perfect confidence that they will always do exactly what we want them to do? And if not, how do we design them with limits that guarantee only safe actions?

Shaping the development of a transformative technology

The societal and technical challenges posed by AI are hard, and they will become harder the longer we wait. Meeting them will need insights and cooperation not only from the best minds in computer science, but also from experts in all the domains that AI will impact. By making progress now, we will lay the foundations we need for the bigger changes that lie ahead.

Some commentators have raised the prospect of human-level general artificial intelligence. As Stephen Hawking and others have said, this would be the most transformative and potentially risky invention in human history, and it will need to be approached very carefully. Luckily, according to most experts and surveys, we’re at least decades away, and possibly even centuries. But we need that time. We need to start work on today’s challenges – how to design AI so that we can understand and control it, and how to change our societal systems so that we gain the great benefits AI offers – if we’re to be remotely ready for that. We can’t assume we’ll get it right by default.

The benefits of this technology cannot be overstated. Developed correctly, AI will allow us to make better progress on the hard scientific problems we will face in the coming decades, and it might prove crucial to a more sustainable life for our world’s 7 billion inhabitants. It will change the world for the better – if we take the time to think and plan carefully. This is the motivation that has brought AI researchers, and experts from all the disciplines AI impacts, together to sign this letter.

This content was first published at futureoflife.org on February 1, 2015.
