When Should Machines Make Decisions?

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
When is it okay to let a machine make a decision instead of a person? Most of us allow Google Maps to choose the best route to a new location. Many of us are excited to let self-driving cars take us to our destinations while we work or daydream. But are you ready to let your car choose your destination for you? The car might recognize that your ultimate objective is to eat or to shop or to run some errand, but most of us have specific stores or restaurants in mind, and we may not want the vehicle making those decisions for us.
What about more challenging decisions? Should weapons be allowed to choose who to kill? If so, how do they make that choice? And how do we address the question of control when artificial intelligence becomes much smarter than people? If an AI knows more about the world and our preferences than we do, would it be better if the AI made all of our decisions for us?
Questions like these are not easy to address. In fact, two of the AI experts I interviewed responded to this Principle with comments like, "Yeah, this is tough," and "Right, that's very, very tricky."
And everyone I talked to agreed that this question of human control taps into some of the most challenging problems facing the design of AI.
"I think this is hugely important," said Susan Craw, a Research Professor at Robert Gordon University Aberdeen. "Otherwise you'll have systems wanting to do things for you that you don't necessarily want them to do, or situations where you don't agree with the way that systems are doing something."
What does human control mean?
Joshua Greene, a psychologist at Harvard, cut right to the most important questions surrounding this Principle.
"This is an interesting one because it's not clear what it would mean to violate that rule," Greene explained. "What kind of decision could an AI system make that was not in some sense delegated to the system by a human? AI is a human creation. This principle, in practice, is more about what specific decisions we consciously choose to let the machines make. One way of putting it is that we don't mind letting the machines make decisions, but whatever decisions they make, we want to have decided that they are the ones making those decisions.
"In, say, a navigating robot that walks on legs like a human, the person controlling it is not going to decide every angle of every movement. The humans won't be making decisions about where exactly each foot will land, but the humans will have said, 'I'm comfortable with the machine making those decisions as long as it doesn't conflict with some other higher level command.'"
Roman Yampolskiy, an AI researcher at the University of Louisville, suggested that we might be even closer to giving AI decision-making power than many realize.
"In many ways we have already surrendered control to machines," Yampolskiy said. "AIs make over 85% of all stock trades, control operation of power plants, nuclear reactors, electric grid, traffic light coordination and in some cases military nuclear response aka 'dead hand.' Complexity and speed required to meaningfully control those sophisticated processes prevent meaningful human control. We are simply not quick enough to respond to ultrafast events, such as those in algorithmic trading and more and more seen in military drones. We are also not capable enough to keep thousands of variables in mind or to understand complicated mathematical models. Our reliance on machines will only increase, but as long as they make good decisions (decisions we would make if we were smart enough, had enough data and enough time) we are OK with them making such decisions. It is only in cases where machine decisions diverge from ours that we would like to be able to intervene. Of course, figuring out cases in which we diverge is exactly the unsolved Value Alignment Problem."
Greene also elaborated on this idea: "The worry is when you have machines that are making more complicated and consequential decisions than 'where to put the next footstep.' When you have a machine that can behave in an open-ended, flexible way, how do you delegate anything without delegating everything? When you have someone who works for you and you have some problem that needs to be solved and you say, 'Go figure it out,' you don't specify, 'But don't murder anybody in the process. Don't break any laws and don't spend all the company's money trying to solve this one small-sized problem.' There are assumptions in the background that are unspecified and fairly loose, but nevertheless very important.
"I like the spirit of this principle. It's a specification of what follows from the more general idea of responsibility, that every decision is either made by a person or specifically delegated to the machine. But this one will be especially hard to implement once AI systems start behaving in more flexible, open-ended ways."
Trust and Responsibility
AI is often compared to a child, both in terms of how much a system has learned and how it learns. And just as we would be with a child, we're hesitant to give a machine too much control until it has proved it can be trusted to be safe and accountable. Artificial intelligence systems may have earned our trust when it comes to maps, financial trading, and the operation of power grids, but some question whether this trend can continue as AI systems become even more complex or when safety and well-being are at greater risk.
John Havens, the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, explained, "Until universally systems can show that humans can be completely out of the loop and more often than not it will be beneficial, then I think humans need to be in the loop."
"However, the research I've seen also shows that right now is the most dangerous time, where humans are told, 'Just sit there, the system works 99% of the time, and we're good.' That's the most dangerous situation," he added, in reference to recent research that has found people stop paying attention if a system, like a self-driving car, rarely has problems. The research indicates that when problems do arise, people struggle to refocus and address the problem.
"I think it still has to be humans delegating first," Havens concluded.
In addition to the issues already mentioned with decision-making machines, Patrick Lin, a philosopher at California Polytechnic State University, doesn't believe it's clear who would be held responsible if something does go wrong.
"I wouldn't say that you must always have meaningful human control in everything you do," Lin said. "I mean, it depends on the decision, but also I think this gives rise to new challenges. … This is related to the idea of human control and responsibility. If you don't have human control, it could be unclear who's responsible … the context matters. It really does depend on what kind of decisions we're talking about, that will help determine how much human control there needs to be."
Susan Schneider, a philosopher at the University of Connecticut, also worried about how these problems could be exacerbated if we achieve superintelligence.
"Even now it's sometimes difficult to understand why a deep learning system made the decisions that it did," she said, adding later, "If we delegate decisions to a system that's vastly smarter than us, I don't know how we'll be able to trust it, since traditional methods of verification seem to break down."
What do you think?
Should humans be in control of a machine's decisions at all times? Is that even possible? When is it appropriate for a machine to take over, and when do we need to make sure a person is "awake at the wheel," so to speak? There are clearly times when machines are better equipped to safely address a situation than humans, but is that all that matters? When are you comfortable with a machine making decisions for you, and when would you rather remain in control?
This article is part of a series on the 23 Asilomar AI Principles. The Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, "Of course, it's just a start. … a work in progress." The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the discussions about previous principles here.