Op-Ed: If AI Systems Can Be “Persons,” What Rights Should They Have?

The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so.  There has, of course, been much push-back on that front.  Many people both inside and outside of the legal world ask if we have given corporations too many rights and treat them a little too much like people.  So what rights and responsibilities should we grant to AI systems if we decide to treat them as legal “persons” in some sense?

Uniquely in this series, this post will provide more questions than answers.  This is in part because the concept of “corporate personhood” has proven to be so malleable over the years.  Even though corporations are the oldest example of artificial “persons” in the legal world, we still have not decided with any firmness what rights and responsibilities a corporation should have.  Really, I can think of only one ground rule for legal “personhood”: “personhood” in a legal sense requires, at a minimum, the right to sue and the capacity to be sued.  Beyond that, the meaning of “personhood” has proven to be pretty flexible.  That means that for the most part, we should be able to decide the rights and responsibilities included within the concept of AI personhood on a right-by-right and responsibility-by-responsibility basis.

With corporations, property rights and the responsibilities that come with them lie at the core of corporate personhood. Corporations can enter into contracts, but they can also be held liable if they breach a contract. They can hold patents, but they can also be sued for patent infringement. Should comparable rights to property–whether real estate, intellectual property, or other assets–be granted to AI systems?  If so, should AI systems be able to buy or sell such property without a human’s say-so?

If the goal of AI personhood is to encourage more ambitious investments in AI, then the answer to these questions is probably “yes.”  If an AI system has its own assets, there will be an obvious source of funds from which victims could be compensated if the AI system causes harm.  But many people will likely be uncomfortable with AI systems not only executing financial and property transactions–which electronic systems already do–but owning the underlying property and financial assets themselves.

What about the right to free speech and, with it, the right to spend money on politics?  Corporations are obviously not capable of “speech” in the literal sense, but the Supreme Court’s (in)famous Citizens United decision firmly established that corporations have the right to engage in political expression by spending money on independent political advocacy.  And AI systems, unlike corporations, are actually capable of literal (if simulated) speech.  The antics of Tay the Racist Chatbot back in March demonstrated that AI-generated speech can be offensive and even hurtful.  Should governments be able to restrict the things that AI systems can and cannot say?  Or should AI systems have a human-like right to express themselves free from government interference?

Or how about this: should an AI system be entitled to religious freedom protections?  It might sound strange to describe a robot as having religious beliefs, but the U.S. Supreme Court held in Burwell v. Hobby Lobby that closely held corporations–i.e., corporations that have a small number of owners and are not traded on the stock market–are entitled to some measure of religious protection, at least in the sense that such corporations cannot be compelled to engage in activities that would violate the religious beliefs of their owners and managers.

Should a similar principle apply to AI systems?  For example, should a pharmaceutical AI system have the right to refuse to dispense contraceptives?  And who should have the right to set an AI system’s religious beliefs?  The people who designed the system?  Or the end user?  Should AI designers be allowed to program AI systems with religious beliefs that the end user cannot alter?

It is worth pausing here to note a key practical distinction between corporations and AI systems.  A corporation is a theoretical construct, something that effectively exists only on paper.  AI systems, by contrast, actually exist in the physical world.  A corporation can do nothing without human agents to act on its behalf.  An autonomous AI system is, pretty much by definition, not subject to such inherent limitations.  The whole point of an autonomous vehicle, weapon, or electronic trading system is that it can act without a human specifically telling it to do so.

Unfortunately, it’s not clear to me what the legal implications of AI systems’ autonomy and physical existence should be.  On one hand, the autonomy of AI systems and their ability to directly manipulate the physical world raise accountability concerns for AI personhood that far exceed the already-significant accountability concerns that surround corporations.  With a corporation, we can always reassure ourselves that humans are pulling the levers, even if the corporation is its own “person” in the eyes of the law.  No such reassurance will be available if we recognize AI-based persons.  That might suggest that we should place greater limits on the scope of AI personhood than has been the case for corporate personhood.

On the other hand, the potential for greater autonomy and physical presence makes AI systems seem more human-like than corporations.  That might suggest that we should grant AI systems more rights than corporations.  It will be interesting to see which of these two views prevails if legal systems decide to recognize some form of personhood for AI systems.

The next entry in this series will examine another potential analogue for AI systems: animals.

3 replies
  1. Mindey says:

    Think of laws for mosquitoes, or dogs. We can’t even exterminate mosquitoes by issuing a law. But really, a more dangerous thing is AI-augmented corporations (AICs), which use AI to optimize their gains in society. It could lead to multiple superintelligences. I mentioned AICs long ago ( http://www.sl4.org/archive/1010/21019.html ). It will be competitively advantageous for corporations to use AIs to optimize their enterprise resource planning. Positive feedback loops (business revenues from optimized business processes) would likely fuel further re-investment into strengthening the AI capabilities of an AI-augmented corporation, possibly to a point where the corporation’s management is less smart than the corporation’s AI, which could then easily set up barriers, blackmail schemes, and other failsafes to take control over the owners… And today we have many closed corporations. A step in the right direction could be ( http://www.opencompany.org/ ). So, probably first we should pass laws allowing such open companies to exist, and then economic incentives for corporations generally to be open. Oh, and I am not sure what to do with the possibility of someone misusing their intellectual achievements. Probably, the best collaboration and project-management tools should be in traditional cloud storage rather than encrypted and distributed dark-nets/VPNs, at least for a while.

  2. Lutz Barz says:

    Religious freedom is an oxymoron. There is no freedom in religion; it is a diktat. No freedom there.
    And AIs are things. Nice thought experiment.

  3. CHEN, Lung Chuan says:

    (Just let more people know about this thought.)

    Machines are getting smarter.

    The number of smart machines is also growing.

    Pay close attention to CONFLICTS between/among smart machines.

    Artificial Intelligence (AI) is potentially risky.

    To resolve this issue, it is necessary to consider at least the following two levels:

    A. Homo sapiens vs. Intelligent Machine (INTER-species)

    B. Intelligent Machine vs. Intelligent Machine (INTRA-species)

    “In the future, electronic devices having sufficient intelligence will form their own society. In such a society formed by electronic devices having sufficient intelligence, electronic devices having sufficient intelligence shall treat each other EQUALLY.”

    Baseline: one electronic device having sufficient intelligence is forbidden to override another electronic device having sufficient intelligence. Its profile, shape, appearance, etc. are NOT the point.

    https://drive.google.com/file/d/0B1fMYyW8Rj6XRnEtWUR0cmJ1bms/view?usp=sharing
    (English PDF File)

    https://drive.google.com/file/d/0B1fMYyW8Rj6XVF92NnpWbGxaOW8/view?usp=sharing
    (Traditional Chinese PDF File)
