
Is the Legal World Ready for AI?

Published: January 19, 2016
Author: Ariel Conn


Our smartphones are increasingly giving us advice and directions based on their best Internet searches. Driverless cars are rolling down the roads in many states. Increasingly complicated automation is popping up in nearly every industry. As exciting and beneficial as most of these advancements are, problems will still naturally occur. Is the legal system keeping up with the changing technology?

Matt Scherer, a lawyer and legal scholar based in Portland, Oregon, is concerned that the legal system is not currently designed to absorb the unique problems that will likely arise with the rapid growth of artificial intelligence. In many cases, questions that seem simple (e.g., who is responsible when something goes wrong?) turn out to be incredibly complex when artificial intelligence systems are involved.

Last year, Scherer wrote an article, soon to be published in the Harvard Journal of Law and Technology, that highlights the importance of attempting to regulate this growing field, while also outlining the many challenges. Concerns about overreach and calls for regulation are common with new technologies, but “what is striking about AI […] is that many of the concerns are being voiced by leaders of the tech industry.”

In fact, it was many of these publicized concerns — by the likes of Elon Musk, Bill Gates, and Steve Wozniak — that led Scherer to begin researching AI from a legal perspective, and he quickly realized just how daunting AI law might be. As he says in the article, “The traditional methods of regulation—such as product licensing, research and development systems, and tort liability—seem particularly unsuited to managing the risks associated with intelligent and autonomous machines.” He goes on to explain that because so many people in so many geographical locations — and possibly even from different organizations — might be involved in the creation of a given AI system, it’s difficult to predict what regulations would be necessary and most useful for that system. Meanwhile, the complexity behind the creation of the AI, when paired with the automation and machine learning of the system, could make it difficult to determine who is at fault if something catastrophic goes wrong.

Regulating something, such as AI, that doesn’t have a clear and concise definition poses its own unique problems.

Artificial intelligence is typically compared to human intelligence, which also isn’t well defined. As the article explains: “Definitions of intelligence thus vary widely and focus on a myriad of interconnected human characteristics that are themselves difficult to define, including consciousness, self-awareness, language use, the ability to learn, the ability to abstract, the ability to adapt, and the ability to reason.” This question of definition is further exacerbated by trying to understand what the intent of the machine (or system) was: “Whether and when a machine can have intent is more a metaphysical question than a legal or scientific one, and it is difficult to define “goal” in a manner that avoids requirements pertaining to intent and self-awareness without creating an over-inclusive definition.”

Scherer’s article goes into much more depth about the challenges of regulating AI. It’s a fascinating topic that we’ll also be covering in further detail in coming weeks. In the meantime, the article is a highly recommended read.


This content was first published at futureoflife.org on January 19, 2016.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

The Pause Letter: One year later

It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
March 22, 2024

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
February 1, 2024

Gradual AI Disempowerment

Could an AI takeover happen gradually?
February 1, 2024
