About Matt Scherer
Matt Scherer is an attorney and legal scholar based in Portland, Oregon, and the editor of LawAndAI.com. He is also the author of Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, which is being published in the Spring 2016 issue of the Harvard Journal of Law and Technology. When he's not writing about robots, Matt practices in the thoroughly human field of employment law at Buchanan Angeli Altschul & Sullivan LLP.
Entries by Matt Scherer
Op-ed: Poll Shows Strong Support for AI Regulation Though Respondents Admit Limited Knowledge of AI
April 13, 2017 / in AI, recent news / by Matt Scherer
On April 11, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues. In the poll, 2,200 Americans answered 39 poll questions about AI (plus a number of questions on other issues). The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and
Op-Ed: If AI Systems Can Be “Persons,” What Rights Should They Have?
July 20, 2016 / 3 Comments / in AI, recent news / by Matt Scherer
The last segment in this series noted that corporations came into existence and were granted certain rights because society believed it would be economically and socially beneficial to do so. There has, of course, been much push-back on that front. Many people both inside and outside of the legal world ask if we have given
Op-ed: On Robot-delivered Bombs
July 11, 2016 / in AI, recent news / by Matt Scherer
“In An Apparent First, Police Used A Robot To Kill.” So proclaimed a headline on NPR’s website, referring to the method Dallas police used to end the standoff with Micah Xavier Johnson. Johnson, an army veteran, shot 12 police officers Thursday night, killing five of them. After his attack, he holed himself up in a
The Challenge of Diversity in the AI World
June 27, 2016 / 3 Comments / in AI, recent news / by Matt Scherer
Let me start this post with a personal anecdote. At one of the first AI conferences I attended, literally every single one of the 15 or so speakers who presented on the conference’s first day was a man. Finally, about three-quarters of the way through the two-day conference, a quartet of presentations on the social and economic
Digital Analogues (Part 2): Would corporate personhood be a good model for “AI personhood”?
June 20, 2016 / in AI, recent news / by Matt Scherer
This post is part of the Digital Analogues series, which examines the various types of persons or entities to which legal systems might analogize artificial intelligence (AI) systems. This post is the first of two examining corporate personhood as a potential model for “AI personhood.” Future posts will examine how AI could also be analogized
Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like…
June 9, 2016 / in AI, recent news / by Matt Scherer
This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University’s Center for Science and the Imagination. Matt Scherer runs the Law and AI blog. Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and are being designed with an ever-increasing ability to operate free of
Too smart for our own good?
April 28, 2016 / 3 Comments / in AI, recent news / by Matt Scherer
Source: Dilbert Comic Strip on 1992-02-11 | Dilbert by Scott Adams
Two stories this past week caught my eye. The first is Nvidia’s unveiling of the new, AI-focused Tesla P100 computer chip. Introduced at April’s annual GPU Technology Conference, the P100 is the largest computer chip in history in terms of the number of transistors, “the product of
Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?
March 27, 2016 / 1 Comment / in AI, recent news / by Matt Scherer
By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.” (* Btw, I’m officially old–I had to consult Urban Dictionary to confirm that I was correctly understanding what “fam” and
Who’s to Blame (Part 6): Potential Legal Solutions to the AWS Accountability Problem
March 11, 2016 / in AI, recent news / by Matt Scherer
The law abhors a vacuum. So it is all but certain that, sooner or later, international law will come up with mechanisms for fixing the autonomous weapon system (AWS) accountability problem. How might the current AWS accountability gap be filled? The simplest solution—and the one advanced by Human Rights Watch (HRW) and the not-so-subtly-named Campaign to Stop Killer Robots (CSKR)—is to
Who’s to Blame (Part 5): A Deeper Look at Predicting the Actions of Autonomous Weapons
March 2, 2016 / in AI, recent news / by Matt Scherer
Source: Dilbert Comic Strip on 2011-03-06 | Dilbert by Scott Adams
An autonomous weapon system (AWS) is designed and manufactured in a collaborative project between American and Indian defense contractors. It is sold to numerous countries around the world. This model of AWS is successfully deployed in conflicts in Latin America, the Caucasus, and Polynesia without
Who’s to Blame (Part 4): Who’s to Blame if an Autonomous Weapon Breaks the Law?
February 24, 2016 / 1 Comment / in AI, recent news / by Matt Scherer
The previous entry in this series examined why it would be very difficult to ensure that autonomous weapon systems (AWSs) consistently comply with the laws of war. So what would happen if an attack by an AWS resulted in the needless death of civilians or otherwise constituted a violation of the laws of war? Who would be
Who’s to Blame (Part 3): Could Autonomous Weapon Systems Navigate the Law of Armed Conflict?
February 17, 2016 / in AI, recent news / by Matt Scherer
“Robots won’t commit war crimes. We just have to program them to follow the laws of war.” This is a rather common response to the concerns surrounding autonomous weapons, and it has even been advanced as a reason that robot soldiers might be less prone to war crimes than human soldiers. But designing such autonomous weapon
Who’s to Blame (Part 2): What is an “autonomous” weapon?
February 10, 2016 / 1 Comment / in AI, recent news / by Matt Scherer
The following is the second in a series about the limited legal oversight of autonomous weapons. The first segment can be found here.
Source: Peanuts by Charles Schulz, January 31, 2016 Via @GoComics
Before turning in greater detail to the legal challenges that autonomous weapon systems (AWSs) will present, it is essential to define what “autonomous”
Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons
February 3, 2016 / in AI, recent news / by Matt Scherer
The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza. In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system. Designed in a huge collaboration involving defense companies headquartered in the United States, Israel, and