
Op-Ed: If AI Systems Can Be “Persons,” What Rights Should They Have?

Published: July 20, 2016
Author: Matt Scherer


The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so. There has, of course, been much push-back on that front. Many people both inside and outside the legal world ask whether we have given corporations too many rights and whether we treat them a little too much like people. So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers. That is in part because the concept of “corporate personhood” has proven so malleable over the years. Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have. Really, I can think of only one ground rule: “personhood” in a legal sense requires, at a minimum, the right to sue and the ability to be sued. Beyond that, the meaning of “personhood” has proven pretty flexible. That means that, for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.

With corporations, property rights and the responsibilities that come with them lie at the core of corporate personhood. Corporations can enter into contracts, but they can also be held liable if they breach a contract. They can hold patents, but they can also be sued for patent infringement. Should comparable rights to property, whether real estate, intellectual property, or other assets, be granted to AI systems? If so, should AI systems be able to buy or sell such property without a human’s say-so?

If the goal of AI personhood is to encourage more ambitious investments in AI, then the answer to these questions is probably “yes.” If an AI system has its own assets, there will be an obvious source of funds from which victims could be compensated if the AI system causes harm. But many people will likely be uncomfortable with AI systems not only executing financial and property transactions (which electronic systems already do) but owning the underlying property and financial assets themselves.

What about the right to engage in free speech and to make political expenditures? Corporations are obviously not capable of “speech” in the literal sense, but the Supreme Court’s (in)famous Citizens United decision firmly established that corporations have the right to engage in political expression by spending money on political advocacy. And AI systems, unlike corporations, are actually capable of literal (if simulated) speech. The antics of Tay the Racist Chatbot back in March demonstrated that AI-generated speech can be offensive and even hurtful. Should governments be able to restrict the things that AI systems can and cannot say? Or should AI systems have a human-like right to express themselves free from government interference?

Or how about this: should an AI system be entitled to religious freedom protections? It might sound strange to describe a robot as having religious beliefs, but the U.S. Supreme Court has held that closely held corporations (i.e., corporations that have a small number of owners and are not traded on the stock market) are entitled to some measure of religious protection, at least in the sense that such corporations cannot be compelled to engage in activities that would violate the religious beliefs of their owners and managers.

Should a similar principle apply to AI systems?  For example, should a pharmaceutical AI system have the right to refuse to dispense contraceptives?  And who should have the right to set an AI system’s religious beliefs?  The people who designed the system?  Or the end user?  Should AI designers be allowed to program AI systems with religious beliefs that the end user cannot alter?

It is worth pausing here to note a key practical distinction between corporations and AI systems. A corporation is a theoretical construct, something that effectively exists only on paper. AI systems, by contrast, actually exist in the physical world. A corporation has no ability to do anything without human agents acting on its behalf. An autonomous AI system is, pretty much by definition, not subject to that inherent limitation. The whole point of an autonomous vehicle, weapon, or electronic trading system is that it can act without a human specifically telling it to do so.

Unfortunately, it is not clear to me what the legal implications of AI systems’ autonomy and physical existence should be. On one hand, the autonomy of AI systems and their ability to directly manipulate the physical world raise accountability concerns for AI personhood that far exceed the already-significant accountability concerns that surround corporations. With a corporation, we can always reassure ourselves that humans are pulling the levers, even if the corporation is its own “person” in the eyes of the law. No such reassurance will be available if we recognize AI-based persons. That might suggest that we should place greater limits on the scope of AI personhood than has been the case for corporate personhood.

On the other hand, the potential for greater autonomy and physical presence makes AI systems seem more human-like than corporations. That might suggest that we should grant AI systems more rights than corporations enjoy. It will be interesting to see which of these two views prevails if legal systems decide to recognize some form of personhood for AI systems.

The next entry in this series will examine another potential analogue for AI systems: animals.


