
Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?

Published: June 20, 2016
Author: Matt Scherer

This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. It is the first of two posts examining corporate personhood as a potential model for “AI personhood.” Future posts will consider how AI could also be analogized to pets, wild animals, employees, children, and prisoners.


Could the legal concept of “corporate personhood” serve as a model for how legal systems treat AI? Ever since the US Supreme Court’s Citizens United decision, corporate personhood has been a controversial topic in American political and legal discourse. Count me in the group that thinks Citizens United was a horrible decision and that the law treats corporations a little too much like ‘real’ people. But I think the fundamental concept of corporate personhood is still sound. Moreover, the historical reasons that led to the creation of “corporate personhood” (namely, the desire to encourage ambitious investments and the new technologies that come with them) hold lessons for how we may eventually decide to treat AI.

An Overview of Corporate Personhood

For the uninitiated, here is a brief and oversimplified review of how and why corporations came to be treated like “persons” in the eyes of the law. During late antiquity and the Middle Ages, a company generally had no separate legal existence apart from its owner (or, in the case of partnerships, owners). Because a company was essentially an extension of its owners, those owners were personally liable for the company’s debts and other liabilities. In practice, this meant that a plaintiff who successfully sued a company could go after all of an owner’s personal assets.

This unlimited liability meant that entrepreneurs were unlikely to invest in a company unless they could exercise a great deal of control over how that company operated. That, in turn, meant that companies rarely had more than a handful of owners, which made it very difficult to raise enough money for capital-intensive ventures. When the rise of colonial empires and (especially) the Industrial Revolution created a need for larger companies capable of taking on more ambitious projects, companies’ lack of separate legal existence and their owners’ exposure to unlimited liability proved frustrating obstacles to economic growth.

The modern corporation was created to resolve these problems, primarily through two key features: legal personhood and limited liability.  “Personhood” means that under the law, corporations are treated like artificial persons, with a legal existence separate from their owners (shareholders).  Like natural persons (i.e., humans), corporations have the right to enter into contracts, own and dispose of assets, and file lawsuits–all in their own name.  “Limited liability” means that the owners of a corporation only stand to lose the amount of money, or capital, that they have invested in the corporation.  Plaintiffs cannot go after a corporate shareholder’s personal assets unless the shareholder engaged in unusual misconduct. Together, these features give a corporation a legal existence that is largely separate from its creators and owners.
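To make the limited-liability arithmetic concrete, here is a minimal sketch in Python (the figures and function names are hypothetical, purely for illustration): under the old regime an owner’s exposure runs to the entire judgment, while a shareholder’s worst-case loss is capped at the capital invested.

```python
def partnership_exposure(judgment: float) -> float:
    # Unlimited liability: a plaintiff can reach the owner's personal
    # assets, so the owner's exposure is the entire judgment.
    return judgment

def shareholder_exposure(investment: float, judgment: float) -> float:
    # Limited liability: absent unusual misconduct, a shareholder can
    # lose at most the capital invested in the corporation.
    return min(investment, judgment)

# Hypothetical figures: a $10,000 stake and a $5,000,000 judgment.
print(partnership_exposure(5_000_000))          # 5000000 -- personal assets at risk
print(shareholder_exposure(10_000, 5_000_000))  # 10000 -- loss capped at the stake
```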

The underlying theory of corporate personhood is based on economic and (supposedly) social utility. Corporations were granted contract and property rights to encourage investment and to facilitate large financial and property transactions, both of which were made easier by treating a corporation as an entity that is legally separate from its owner(s). Over time, corporations have accreted additional rights and responsibilities to promote other economic and social goals. The basic premise is that a corporation is a legal entity that the state treats like a “person” with respect to certain rights and responsibilities–and a person is, generally speaking, only responsible for his or her own conduct.

Would “AI Personhood” Be a Good Idea?

What lessons might corporate personhood hold for AI?  Well, just as the traditional rules of business liability hindered the creation of sophisticated business entities, the traditional rules of products liability might hinder the creation of sophisticated AI systems.  AI personhood, like corporate personhood, might prove an elegant solution to that potential problem.

Here’s why liability concerns might create a desire for AI personhood. As it stands, my anecdotal impression from conversations with other lawyers is that courts would apply the ordinary rules of products liability to AI systems that cause harm. Those liability rules (at least in the United States) are in some ways reminiscent of the unlimited liability rules that traditionally applied to the owners of non-corporate companies. In most instances, a company is strictly liable for any harm caused by a defective product that it designed, manufactured, or sold. The “strict” part of strict liability means that the company can be held liable even if it was in no way negligent, just as a co-owner of a pre-corporate company could be liable even for debts and contracts that he played no role in creating.

Strict liability makes sense and seems perfectly fair (to me, at least) for technologies whose mode of operation is mostly predictable. But with the advent of machine learning, how a learning AI system operates will be at least partially a function of its ‘experiences’ after it reaches a consumer, a period when the companies that designed, manufactured, and sold the AI system can no longer control the system’s operation.
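To see concretely why post-sale ‘experiences’ matter, here is a minimal, purely illustrative sketch in Python (the TinyLearner class and its data are hypothetical, not any real product): two copies of a simple learning system leave the factory with identical parameters, update themselves on whatever their respective owners teach them, and end up answering the same query differently.

```python
class TinyLearner:
    """A minimal perceptron-style online learner (illustrative only)."""

    def __init__(self):
        self.w = [0.0, 0.0]  # identical "factory" parameters in every unit

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score > 0 else 0

    def learn(self, x, label):
        # Update weights from a post-sale "experience" that the
        # manufacturer never saw and could not have predicted.
        error = label - self.predict(x)
        self.w = [wi + error * xi for wi, xi in zip(self.w, x)]

# Two units ship with identical behavior...
unit_a, unit_b = TinyLearner(), TinyLearner()

# ...but their owners expose them to different experiences after purchase.
unit_a.learn([1.0, 0.0], 1)  # owner A labels this input as class 1
unit_b.learn([0.0, 1.0], 1)  # owner B supplies a different lesson

# The same query now draws different answers from "identical" products.
print(unit_a.predict([1.0, 0.0]))  # -> 1
print(unit_b.predict([1.0, 0.0]))  # -> 0
```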

Because even the most careful designers and manufacturers will not be able to control or predict what an AI system will experience after it leaves their care, companies will have a strong incentive to limit the learning capabilities of AI systems, thus reducing those systems’ ability to do things their creators would not predict. That might sound like a good thing, just as keeping companies small and closely monitored by their owners sounds like a good thing in the business world. But the ability of AI systems to do things that their designers would not have predicted is actually part of what makes AI systems so enticing, as I argued in my recently published article:

Because A.I. systems are not inherently limited by the preconceived notions, rules of thumb, and conventional wisdom upon which most human decision-makers rely, A.I. systems have the capacity to come up with solutions that humans may not have considered, or that they considered and rejected in favor of more intuitively appealing options. It is precisely this ability to generate unique solutions that makes the use of A.I. attractive in an ever-increasing variety of fields.

Additionally, machine learning offers remarkable promise in terms of creating systems that can be tailored to the specific needs of each user in fields ranging from health care to consumer goods.

Consequently, to ensure that we can unlock the full potential of AI, we may eventually reach a point where it would make sense to recognize AI systems as having their own separate legal existence, with a corporation-like limitation of liability for the designers, manufacturers, and owners of AI systems.

What might AI personhood look like? What rights and responsibilities should AI systems have under the law? Under what circumstances might the designers, manufacturers, and owners of AI systems be held liable for AI-caused harm if the AI system itself is a distinct person? And what are the drawbacks to recognizing this type of artificial “personhood”? The next installment in this series will examine those questions.

This content was first published at futureoflife.org on June 20, 2016.
