October 8, 2015

BARELY LEGAL | A Convention to Police Artificial Intelligence


By TREVOR WHITE

From Frankenstein to the Terminator, some of history’s greatest horror stories speak to the fear of engineering something inhuman that evolves beyond our control. Fortunately, our society has laws, and robots must abide by them. Corporations have free speech, and inanimate objects or property can be named as parties to a lawsuit; by comparison, taking a Google Car to court sounds pretty plausible. But modern law is ill-equipped to address the diversity of technologies dabbling in artificial intelligence (A.I.). What’s a judge to decide when a robot inflicts injury, accidentally or intentionally, while doing something its owner didn’t request?

Some experts believe A.I. will never gain independence, so an operator or manufacturer is always responsible. Others believe superintelligent computers, especially “killer robots” that can self-select targets, should be banned before they become uncontrollable. However, I believe a compromise is possible: What the law needs is an international treaty that shifts liability onto certain machines as they approach autonomy. By merging scattered laws, nations can unite behind a “Convention on A.I. Liability.”

The first step is establishing a legal definition for “artificial intelligence.” Thinking and intelligence aren’t synonymous; many systems you might call “A.I.,” like Siri, can recall information or react to stimuli but can’t make independent decisions. Drawing on recent scholarship, I suggest a higher threshold for “intelligence”: To be considered intelligent, a robot should have (1) complex communication and interaction skills; (2) a sense of self, goals and/or creativity; and (3) the ability to coexist “communally” (rationally and reasonably). The Convention could consider a robot with all three traits a “person,” and thus individually liable. Meanwhile, a robot with only one or two traits is like a pet Rottweiler: If it “bites,” the owner’s on the hook, but only if he or she should’ve known it was dangerous.

For the four most prominent categories of A.I.-enabled machines, some additional standards are reasonable whether or not they are deemed “people”:

Surgical Robots: An A.I. must display the same skill required of a human to be certified as a physician, whether it practices locally or remotely. Anything less, and it is an “employee” or tool, immune to malpractice claims.

Lethal Autonomous Weapons Systems: Many parties to the Convention on Conventional Weapons (which forbids excessively injurious weapons and counts the United States, China and Israel among its 121 signatories) have encouraged an amendment covering LAWS. Building on that, governments could punish errant drones under the Convention like soldiers disobeying rules of engagement. If the issue is a drone that followed orders, the state will be strictly liable for its pilots’ strikes, though it can still invoke sovereign immunity unless plaintiffs allege a tort committed in the court’s territory.

Commercial Drones: Amazon has actually suggested a solid scheme that would layer the altitudes where drones can fly: Slow drones stay low, faster and smarter ones occupy the level above, and commercial airspace begins after a hundred-foot buffer. The FAA and similar agencies could impose additional no-fly zones on a case-by-case basis.

General Consumer Goods: Ideally, the burden is on the plaintiff to prove (1) that a defect (e.g., a hacking vulnerability) rendered the product unreasonably dangerous, and (2) that a reasonable alternative design would’ve prevented the injury. Failing that, “enterprise liability,” a doctrine that holds complex chains of distribution liable as a team, is a reliable alternative. A self-driving car, for example, may take a dozen companies to design its software and hardware, so any number of them could be implicated in a fatal glitch. Alternatively, robots could be sold with a built-in insurance cash pool. If a third party’s interference or the purchaser’s own warranty breach caused the injury, though, the fault is theirs alone. Product liability differs considerably between countries, but NATO, the WTO and other bodies would ideally help limit reservations between nations.

Granted, this quartet cannot neatly cover every question surrounding robot-related injuries. For one, intangible bots and “neural networks” (the latter able to learn, brain-like, by analyzing Internet data) might randomly buy drugs or tweet a death threat, as happened to two unfortunate, unrelated programmers in Europe this past year. The above “bite” standard could work for bots, but “negligent coding” would be a slippery rule. And should a “robocalypse” really happen, legal responses grow fuzzier; Convention parties could convene an international court or war crimes tribunal in the aftermath, but assuming deactivation or destruction is the only punishment robots can conceptualize, it’d likely be a waste of resources. Besides, one theory posits that a superintelligent A.I. could always manipulate people out of destroying it. Personally, I find the alternative suggestion of transferring a rogue A.I. to an isolated digital jail more credible, although how to imprison a computer program is beyond this column’s scope.

Whatever the real-life outcome, the United States and other countries already have the potential to hold intelligent machines — or their owners — accountable. Like robots themselves, their laws simply need some assembly, rewiring and upgrading. Hopefully, the world’s nations will download the “software update” that is this proposed Convention before the machines start making their own laws.

Trevor White is a student at Cornell Law School. Responses can be sent to [email protected]. Barely Legal appears alternate Fridays this semester.