Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…

Published:
June 9, 2016
Author:
Matt Scherer

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination.  Matt Scherer runs the Law and AI blog.


Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. “drivers” are already being tested on the roads. Because these systems operate with less human supervision and control than earlier technologies, the rising prevalence of autonomous A.I. raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm in the course of its operations.

An increasingly hot topic in the still-small world of people interested in the legal issues surrounding A.I. is whether an autonomous A.I. system should be treated like a “person” in the eyes of the law. In other words, should we give A.I. systems some of the rights and responsibilities normally associated with natural persons (i.e., humans)? If so, precisely what rights should be granted to A.I. systems and what responsibilities should be imposed on them? Should human actors be assigned certain responsibilities in terms of directing and supervising the actions of autonomous systems? How should legal responsibility for an A.I. system’s behavior be allocated between the system itself and its human owner, operator, or supervisor?

Because it seems unlikely that Congress will be passing A.I. liability legislation in the near future, it likely will fall to the court system to answer these questions. In so doing, American courts will likely use the tried-and-tested common law approach of analogizing A.I. systems to something(s) in other areas of the law.

So, what are the potential analogues that could serve as a model for how the legal system treats A.I.?

Corporate personhood provides what is perhaps the most obvious model for A.I. personhood. Corporations have, at a minimum, a right to enter into contracts as well as to buy, own, and sell property. A corporation’s shareholders are only held liable for the debts of and injuries caused by the corporation if the shareholders engage in misconduct of some kind — say, by knowingly failing to invest enough money in the corporation to cover its debts, or by treating the corporation’s financial assets as a personal piggy bank. We could bestow a similar type of limited legal personhood on A.I. systems: give them property rights and the ability to sue and be sued, and only leave their owners on the hook under limited sets of circumstances. Of course, the tensions created by corporate personhood would likely be repeated with A.I. systems. Should personhood for an A.I. system include a right to free speech and direct liability for criminal acts?

Alternatively, we could treat the relationship between an A.I. system and its owner as akin to the relationship between an animal and its owner. Under traditional common law, if a “wild” animal that is considered dangerous by nature is kept as a pet, the animal’s owner is “strictly liable” for any harm that the animal causes. That means that if Farmer Jones’ pet wolf Fang escapes and kills two of Farmer Smith’s chickens, Farmer Jones is legally responsible for compensating Farmer Smith for the lost chickens, even if Fang had always been perfectly tame previously.

For domestic animals kept as pets, however, the owner generally must have some knowledge of that specific animal’s “dangerous propensities” before liability attaches. If Fang were a Chihuahua instead of a wolf, Farmer Smith might be out of luck unless he could show that Fang had previously shown flashes of violence. Perhaps certain A.I. systems that seem particularly risky, like autonomous weapon systems, could be treated like wild animals, while systems that seem particularly innocuous or that have a proven safety record could be treated like domestic animals.

If we want to anthropomorphize the legal treatment of A.I. systems, we could treat them like employees and their owners like employers. American employers generally have a duty to exercise care in the hiring and supervision of employees. We might similarly require owners to exercise care when buying an A.I. system to serve in a particular role and to ensure that a system receives an adequate level of supervision, particularly if the system’s owner knows that it poses a particular risk.

And if we really want to anthropomorphize A.I. systems, we could analogize them to children and impose parent-like responsibilities on their owners. As with children, we could recognize only very limited rights for new A.I. systems, but grant them additional rights as they “mature” — at least as long as they are not naughty. And as with parents, we could hold a system’s owner civilly — and perhaps even criminally — liable if the system causes harm while in the “care” of the owner.

To close on a completely different note, perhaps A.I. systems should be treated like prisoners. Prisoners start out as ordinary citizens from the perspective of the law, but they lose civil rights and are required to take on additional responsibilities after they commit criminal acts. A recklessly forward-thinking approach to A.I. personhood might similarly start with the assumption that A.I. systems are people too, and give them the full panoply of civil rights that human beings enjoy. If a system breaks the law, however, society would reserve the right to punish it, whether by placing it on a form of “probation” requiring additional supervision, “boxing it in” by limiting its freedom to operate, or even by imposing the digital equivalent of the death penalty. Of course, these punishments would prove difficult to impose if the system is cloud-based or is otherwise inseparably distributed across multiple jurisdictions.


Which of these analogies appeals to you most will likely depend on how skeptical you are of A.I. technologies and whether you believe it is morally and ethically acceptable to recognize “personhood” in artificial systems. In the end, legal systems will undoubtedly come up with unique ways of handling cases involving A.I.-caused harm. But these five digital analogues may provide us with a glimpse of how this emerging area of law could develop.

This content was first published at futureoflife.org on June 9, 2016.
