This event occurred on September 1, 2015.
When one of the world’s leading experts in Artificial Intelligence makes a speech suggesting that a third of existing British jobs could be made obsolete by automation, it is time for think tanks and the policymaking community to take notice. This observation – by Michael Osborne, Associate Professor of Machine Learning at the University of Oxford – was one of many thought-provoking comments made at a special event on the policy implications of the rise of AI, which we held this week with the Cambridge Centre for the Study of Existential Risk.
The event formed part of Policy Exchange’s long-running research programme looking at reforms that are needed in policies relating to welfare and the workplace – as well as other long-term challenges facing the country. We gathered together the world’s leading authorities on AI to consider the rise of this new technology, which will form one of the greatest challenges facing our society in this century. The speakers were the following:
• Huw Price, Bertrand Russell Professor of Philosophy and a Fellow of Trinity College at the University of Cambridge, and co-founder of the Centre for the Study of Existential Risk.
• Stuart Russell, Professor at the University of California at Berkeley and author of the standard textbook on AI.
• Nick Bostrom, Professor at Oxford University, author of “Superintelligence” and Founding Director of the Future of Humanity Institute.
• Michael Osborne, Associate Professor at Oxford University and co-director of the Oxford Martin programme on Technology and Employment.
• Murray Shanahan, Professor at Imperial College London, scientific advisor to the film Ex Machina, and author of “The Technological Singularity”.
In this bulletin, we ask two of our speakers to share their views about the potential and risks from future AI:
Michael Osborne: Machine Learning and the Future of Work
Machine learning is the study of algorithms that can learn and act. Why use a machine when we already have over six billion humans to choose from? One reason is that algorithms are significantly cheaper, and becoming ever more so. But just as important, algorithms can often do better than humans, avoiding the biases that taint human decision making.
There are big benefits to be gained from the rise of the algorithms. Big data is already leading to programs that can handle increasingly sophisticated tasks, such as translation. Computational health informatics will transform health monitoring, allowing us to release patients from their hospital beds much earlier and freeing up resources in the NHS. Self-driving cars will allow us to cut down on the 90% of traffic accidents caused by human error, while the data generated by their constant monitoring will have big consequences for mapping, insurance, and the law.
Nevertheless, there will be big challenges from the disruption automation creates. New technologies derived from mobile machine learning and robotics threaten employment in logistics, sales and clerical occupations. Over the next few decades, 47% of jobs in America are at high risk of automation, and 35% of jobs in the UK. Worse, it will be the already vulnerable who are most at risk, while high-skilled jobs are relatively resistant to computerisation. New jobs are emerging to replace the old, but only slowly – only 0.5% of the US workforce is employed in new industries created in the 21st century.
Policy makers are going to have to do more to ensure that we can all share in the great prosperity promised by technology.
Stuart Russell: Killer Robots, the End of Humanity, and All That: Policy Implications
Everything civilisation offers is the product of intelligence. If we can use AI to amplify our intelligence, the benefits to humanity are potentially immeasurable.
The good news is that progress is accelerating. Solid theoretical foundations, more data and computing, and huge investments from private industry have created a virtuous cycle of iterative improvement. On the current trajectory, further real-world impact is inevitable.
Of course, not all impact is good. As technology companies unveil ever more impressive demos, newspapers have been full of headlines warning of killer robots, or the loss of half of all jobs, or even the end of humanity. But how credible exactly are these nightmare scenarios? The short answer is we should not panic, but there are real risks that are worth taking seriously.
In the short term, lethal autonomous weapons – weapon systems that can select and fire upon targets on their own – deserve serious attention. According to defence experts, including the Ministry of Defence, these are probably feasible now, and they have already been the subject of three UN meetings in 2014–15. In the future, they are likely to be relatively cheap to mass produce, potentially making them much harder to control or contain than nuclear weapons. A recent open letter from 3,000 AI researchers argued for a total ban on the technology to prevent the start of a new arms race.
Looking further ahead, however, what if we succeed in creating an AI system that can make decisions as well as, or even significantly better than, humans? The first thing to say is that we are several conceptual breakthroughs away from constructing such a general artificial intelligence, as compared to the more specific algorithms needed for an autonomous weapon or self-driving car.
It is highly unlikely that we will be able to create such an AI within the next five to ten years, but then conceptual breakthroughs are by their very nature hard to predict. The day before Leo Szilard conceived of neutron-induced chain reactions, the key to nuclear power, Lord Rutherford was claiming that, “anyone who looks for a source of power in the transformation of the atoms is talking moonshine.”
The danger from such a “superintelligent” AI is that it would not by default share the same goals as us. Even if we could agree amongst ourselves what the best human values were, we do not understand how to reliably formalise them into a program. If we accidentally give a superintelligent AI the wrong goals, it could prove very difficult to stop.
For example, for many benign-sounding final goals we might give the computer, two plausible intermediate goals for an AI are to gain as many physical resources as possible and to refuse to allow itself to be terminated. We might think that we are just asking the machine to calculate as many digits of pi as possible, but it could judge that the best way to do so is to turn the whole Earth into a supercomputer.
In short, we are moving with increasing speed towards what could be the biggest event in human history. Like global warming, there are significant uncertainties involved and the pain might not come for another few decades – but equally like global warming, the sooner we start to look at potential solutions the more likely we are to be successful. Given the complexity of the problem, we need much more technical research on what an answer might look like.
This event was held January 2-5, 2015 in San Juan, Puerto Rico.
We organized our first conference, The Future of AI: Opportunities and Challenges. This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. To facilitate candid and constructive discussions, no media were present and the Chatham House Rule applied: nobody’s talks or statements will be shared without their permission.
This event was held Thursday, November 6, 2014 in Harvard auditorium Jefferson Hall 250.
Our Earth is 45 million centuries old. But this century is the first in which one species – ours – can determine the biosphere’s fate. Threats from the collective “footprint” of 9 billion people seeking food, resources and energy are widely discussed. But less well studied is the potential vulnerability of our globally-linked society to the unintended consequences of powerful technologies – not only nuclear, but (even more) biotech, advanced AI, geo-engineering and so forth.
This event was held Thursday, September 4th, 2014 in Harvard auditorium Emerson 105.
What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? In his new book – Superintelligence: Paths, Dangers, Strategies – Professor Bostrom explores these questions, laying the foundation for understanding the future of humanity and intelligent life.
The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks. Please watch the video below for a fascinating discussion about what we can do now to improve the chances of reaping the benefits and avoiding the risks, moderated by Alan Alda and featuring George Church (synthetic biology), Ting Wu (personal genetics), Andrew McAfee (second machine age, economic bounty and disparity), Frank Wilczek (near-term AI and autonomous weapons) and Jaan Tallinn (long-term AI and singularity scenarios).
- Alan Alda is an Oscar-nominated actor, writer, director, and science communicator, whose contributions range from M*A*S*H to Scientific American Frontiers.
- George Church is a professor of genetics at Harvard Medical School, initiated the Personal Genome Project, and invented DNA array synthesizers.
- Andrew McAfee is Associate Director of the MIT Center for Digital Business and co-author of the New York Times bestseller The Second Machine Age.
- Jaan Tallinn is a founding engineer of Skype and philanthropically supports numerous research organizations aimed at reducing existential risk.
- Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.
- Ting Wu is a professor of Genetics at Harvard Medical School and Director of the Personal Genetics Education project.