Here are the July and August global catastrophic risk news summaries, written by Robert de Neufville of the Global Catastrophic Risk Institute. The July summary covers the Iran deal, Russia’s new missile early warning system, dangers of AI, new Ebola cases, and more. The August summary covers the latest confrontation between North and South Korea, the world’s first low-enriched uranium storage bank, the “Islamic Declaration on Global Climate Change”, global food system vulnerabilities, and more.
This event occurred on September 1, 2015.
When one of the world’s leading experts in Artificial Intelligence makes a speech suggesting that a third of existing British jobs could be made obsolete by automation, it is time for think tanks and the policymaking community to take notice. This observation – by Associate Professor of Machine Learning at the University of Oxford, Michael Osborne – was one of many thought-provoking comments made at a special event on the policy implications of the rise of AI we held this week with the Cambridge Centre for the Study of Existential Risk.
The event formed part of Policy Exchange’s long-running research programme looking at reforms that are needed in policies relating to welfare and the workplace – as well as other long-term challenges facing the country. We gathered together the world’s leading authorities on AI to consider the rise of this new technology, which will form one of the greatest challenges facing our society in this century. The speakers were the following:
• Huw Price, Bertrand Russell Professor of Philosophy and a Fellow of Trinity College at the University of Cambridge, and co-founder of the Centre for the Study of Existential Risk.
• Stuart Russell, Professor at the University of California at Berkeley and author of the standard textbook on AI.
• Nick Bostrom, Professor at Oxford University, author of “Superintelligence” and Founding Director of the Future of Humanity Institute.
• Michael Osborne, Associate Professor at Oxford University and co-director of the Oxford Martin programme on Technology and Employment.
• Murray Shanahan, Professor at Imperial College London, scientific advisor to the film Ex Machina, and author of “The Technological Singularity”.
In this bulletin, we ask two of our speakers to share their views on the potential of, and the risks from, future AI:
Michael Osborne: Machine Learning and the Future of Work
Machine learning is the study of algorithms that can learn and act. Why use a machine when we already have over seven billion humans to choose from? One reason is that algorithms are significantly cheaper, and becoming ever more so. But just as important, algorithms can often do better than humans, avoiding the biases that taint human decision-making.
There are big benefits to be gained from the rise of the algorithms. Big data is already leading to programs that can handle increasingly sophisticated tasks, such as translation. Computational health informatics will transform health monitoring, allowing us to release patients from their hospital beds much earlier and freeing up resources in the NHS. Self-driving cars will allow us to cut down on the 90% of traffic accidents caused by human error, while the data generated by their constant monitoring of their surroundings will have big consequences for mapping, insurance, and the law.
Nevertheless, there will be big challenges from the disruption automation creates. New technologies derived from mobile machine learning and robotics threaten employment in logistics, sales, and clerical occupations. Over the next few decades, 47% of jobs in America, and 35% of jobs in the UK, are at high risk of automation. Worse, it is the already vulnerable who are most at risk, while high-skilled jobs are relatively resistant to computerisation. New jobs are emerging to replace the old, but only slowly – only 0.5% of the US workforce is employed in new industries created in the 21st century.
Policy makers are going to have to do more to ensure that we can all share in the great prosperity promised by technology.
Stuart Russell: Killer Robots, the End of Humanity, and All That: Policy Implications
Everything civilisation offers is the product of intelligence. If we can use AI to amplify our intelligence, the benefits to humanity are potentially immeasurable.
The good news is that progress is accelerating. Solid theoretical foundations, more data and computing, and huge investments from private industry have created a virtuous cycle of iterative improvement. On the current trajectory, further real-world impact is inevitable.
Of course, not all impact is good. As technology companies unveil ever more impressive demos, newspapers have been full of headlines warning of killer robots, or the loss of half of all jobs, or even the end of humanity. But how credible exactly are these nightmare scenarios? The short answer is we should not panic, but there are real risks that are worth taking seriously.
In the short term, the most immediate concern is lethal autonomous weapons: weapon systems that can select and fire upon targets on their own. According to defence experts, including the Ministry of Defence, these are probably feasible now, and they have already been the subject of three UN meetings in 2014 and 2015. In the future, they are likely to be relatively cheap to mass produce, potentially making them much harder to control or contain than nuclear weapons. A recent open letter from 3,000 AI researchers argued for a total ban on the technology to prevent the start of a new arms race.
Looking further ahead, however, what if we succeed in creating an AI system that can make decisions as well as, or even significantly better than, humans? The first thing to say is that we are several conceptual breakthroughs away from constructing such a general artificial intelligence, as compared to the more specific algorithms needed for an autonomous weapon or self-driving car.
It is highly unlikely that we will be able to create such an AI within the next five to ten years, but then conceptual breakthroughs are by their very nature hard to predict. The day before Leo Szilard conceived of neutron-induced chain reactions, the key to nuclear power, Lord Rutherford was claiming that, “anyone who looks for a source of power in the transformation of the atoms is talking moonshine.”
The danger from such a “superintelligent” AI is that it would not by default share the same goals as us. Even if we could agree amongst ourselves on what the best human values were, we do not understand how to reliably formalise them into a program. If we accidentally give a superintelligent AI the wrong goals, it could prove very difficult to stop.
For example, for many benign-sounding final goals we might give a computer, two plausible intermediate goals are to acquire as many physical resources as possible and to prevent itself from being switched off. We might think we are just asking the machine to calculate as many digits of pi as possible, but it could judge that the best way to do so is to turn the whole Earth into a supercomputer.
In short, we are moving with increasing speed towards what could be the biggest event in human history. Like global warming, there are significant uncertainties involved and the pain might not come for another few decades – but equally like global warming, the sooner we start to look at potential solutions the more likely we are to be successful. Given the complexity of the problem, we need much more technical research on what an answer might look like.