Is your autonomous vehicle Sally the sports car or blood-thirsty Christine?


When automobiles first started to appear alongside horse-drawn buggies, horses were the initial victims of the technology. They would not be struck by the slow-moving vehicles so much as be frightened into runaways. Sometimes the horses themselves suffered injury; other times it was property damage and pedestrian injury as the terrified steeds trampled everything in their paths.

As cars got faster and more numerous, pedestrians began to fall direct victim to moving vehicles, and it wasn’t long before rules of the road, and product and tort liability laws, imposed order to avoid carnage. Still, even today we have an ever-growing number of distracted and inept drivers turning our crowded highways into a dystopic version of real-life Frogger.  

Enter the autonomous vehicle. All the benefits of driving, without having to drive! Proponents of driverless cars believe that autonomous technology will make cars safer and lead to a 90% reduction in accident frequency by 2050, as more than 90% of car crashes are caused by driver error.

There is certainly no shortage of news stories about injuries and fatalities resulting from drunk or distracted driving, and other accidents caused by drivers’ behavior. Text your friends or binge-watch Black Mirror; with an autonomous vehicle, it’s OK. Or is it? All goes well until your AV decides that the pedestrian in front of you isn’t really there, or mistakes the debris trailing a garbage truck for lane guidance, and steers you into a concrete barrier.

Some companies are closer than others to completely driverless vehicles, but edge-case driving situations remain a challenge, as we sadly learned not long ago when an AV struck a pedestrian walking her bicycle across a dark highway in Arizona. Although there was a driver present who could have taken the controls, she didn’t. One can hardly blame her for her inattention, for the whole point of autonomous driving technology is to allow, if not encourage, drivers to disengage from the task.

The “autonomous vehicle paradox” of inducing drivers to disengage because they are rarely needed is confounding. At least in the interim, until autonomous systems can reliably achieve better than a 98% safety rate (roughly the rate of human drivers), they will need to be supplemented by a human driver for emergencies and other unexpected situations.

What happens and who is at fault when an accident occurs during or even after this transition period? Before the advent of autonomous vehicle technology, car accidents would typically invoke one of two legal theories: driver negligence or manufacturers’ products liability. The legal theory of negligence seeks to hold people accountable for their actions and leads to financial compensation from drivers, or more commonly their insurance companies, for the drivers’ conduct behind the wheel. Products liability theories, on the other hand, are directed at companies that make and sell the injury-causing products, such as defective air bags, ignition switches, tires, or the cars themselves. Applying current legal theories to autonomous vehicle accident situations presents many challenges.

Suppose artificial intelligence (AI), or whatever makes a car autonomous, fails to detect or correct for a slippery curve. Perhaps a coolant leak from some car ahead covers the road with antifreeze, which can be seen by the human behind the wheel, yet is all but invisible to the AI system. If the AV has manual override controls and an accident occurs, is the driver at fault for not taking the controls to avoid the crash? Is the car manufacturer at fault for not sensing the road condition or correcting for it? If both, how should fault be apportioned?

If a conventional vehicle were involved, the case against the driver might depend on proof that their behavior fell below an applicable standard of care. Not having one’s hands on the steering wheel would most likely be considered negligent behavior in such a car, as would being distracted by texting on a smartphone. But the self-driving feature of an autonomous vehicle by its very nature encourages driver inattention and lack of engagement with the controls. So would we be willing to find the driver at fault in the above instance for not taking over?


As to the manufacturer of a conventional vehicle, liability might depend on whether a system or part was defective. A conventional vehicle in good condition, with no suspension, brake or steering defects, would likely allow the manufacturer to escape the brunt of liability in the above scenario. The manufacturer of an autonomous vehicle with human override controls, however, might try to shift at least some portion of fault to the driver, but would or should society allow that? The driver might argue he or she reasonably relied upon the AV, but should the manufacturer instead be held responsible where the hazard was visible and driver intervention could have avoided the accident?

The outcome might differ if the vehicle was completely autonomous and no human possibly could have intervened, but that vehicle may be years away.

When such an AV comes to market, would, or should it be considered “defective” if it fails to detect or correct for the unexpectedly slippery surface? And if so, would it be considered defective merely because the failures occurred, or would proof also require some showing of errors in the AI software? Given that AI algorithms can evolve on their own and be dependent on millions of miles or hours of training data, how would one prove a “defect” in the software? Would it be fair to hold the programmer or software supplier accountable if the algorithm at the time of the accident differed substantially from the original, and the changes were effected by the AI algorithm having “taught” itself?

Another issue is the “hive mind.” One way AI could learn is by processing the collective experiences of other connected AVs, a process at one time used by Tesla. But if a significant proportion of other AVs upload erroneous data that is acted upon, what then?
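
To make the concern concrete, here is a minimal, purely hypothetical Python sketch (the update rule, numbers and variable names are all invented, not drawn from Tesla or any real fleet) of how a shared, fleet-learned estimate could be dragged off course when a meaningful share of vehicles upload bad data:

```python
# Hypothetical "hive mind" update: a fleet-wide estimate is refreshed by
# blending in the average of observations uploaded by connected vehicles.
# All values and names are invented for illustration.

def update_fleet_estimate(current_estimate: float, uploads: list[float],
                          learning_rate: float = 0.1) -> float:
    """Blend the fleet's current estimate with the average of new uploads."""
    if not uploads:
        return current_estimate
    batch_average = sum(uploads) / len(uploads)
    return current_estimate + learning_rate * (batch_average - current_estimate)

# Honest fleet: most cars report a safe following distance of about 2.0 seconds.
good_uploads = [2.0, 2.1, 1.9, 2.0]
print(update_fleet_estimate(2.0, good_uploads))   # stays near 2.0

# Corrupted fleet: a significant share of cars upload erroneous (or spoofed) data.
bad_uploads = [2.0, 0.5, 0.4, 0.6]
print(update_fleet_estimate(2.0, bad_uploads))    # drifts toward an unsafe value
```

Even with a conservative update rule, repeated batches of bad uploads would steadily pull the shared estimate away from reality, which is exactly the failure mode the question above contemplates.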

In light of these issues, and as technology moves toward complete control of the vehicle with increasingly less human intervention, we may see the law evolve to place more emphasis on products liability theories, and perhaps strict liability rather than negligence. It is not far-fetched to imagine that the price tag of a future AV will include not only R&D and component costs, but also an “insurance” component to cover the costs of accidents. Such an evolution would be consistent with the decreasing role of the human driver, though it sits uneasily with a car manufacturer’s limited ability to exert full control over an AI system’s learning process, not to mention the driving environment.

In the present interim period when at least some human intervention is required, car manufacturers have expressed different views on liability. Some, like Volvo, have publicly stated that they will accept full responsibility whenever one of their vehicles is involved in an accident while in autonomous mode. But others, like Tesla, are attempting to shift liability to drivers when accidents happen by requiring some modicum of driver engagement, even in autonomous mode.

For example, to activate the capability to pass other cars in autonomous mode, drivers of Teslas once had to trigger the turn signal (Tesla recently announced a new version that would dispense with this requirement). Having drivers perform this seemingly insignificant but deliberate action could help auto manufacturers shift legal liability to the driver. Performing that simple action not only tells the car to pass, but suggests the driver has made a decision that the maneuver is safe and therefore is willing to, or should, accept responsibility for the consequences if it is not.  
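
As a rough illustration of how that kind of confirmation step might be implemented, here is a hypothetical sketch (the function names, fields and logging scheme are invented and do not describe Tesla’s actual software) in which an autonomous pass is executed only after an explicit driver input, and that input is recorded:

```python
# Hypothetical sketch: an autonomous passing maneuver gated on an explicit
# driver action (e.g., a turn-signal tap), with the confirmation logged.
# Names and structure are invented for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ManeuverLog:
    entries: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append((datetime.now(timezone.utc), event))

def attempt_pass(driver_signaled: bool, lane_clear: bool, log: ManeuverLog) -> bool:
    """Execute an autonomous pass only if the driver explicitly requested it."""
    if not driver_signaled:
        log.record("pass suppressed: no driver confirmation")
        return False
    log.record("driver confirmed pass via turn signal")
    if not lane_clear:
        log.record("pass aborted: adjacent lane not clear")
        return False
    log.record("pass executed")
    return True

log = ManeuverLog()
attempt_pass(driver_signaled=True, lane_clear=True, log=log)
for timestamp, event in log.entries:
    print(timestamp.isoformat(), event)
```

The logged confirmation is the interesting part from a liability standpoint: it creates a record that a human, not just the machine, decided the maneuver was safe.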


The underlying technology itself further complicates determining who is at fault. As alluded to above, one aspect of AI, better characterized as “machine learning,” is that its behavior is more or less a “black box”: it is developed from millions of varied inputs and cannot be understood in the way a strictly math-based algorithm can.

Put another way, we might be incapable of knowing exactly how the machine decided to act as it did. In such an instance, if the AI box was negligently trained, or “trained” on a simulator rather than based on real-world driving, could the author of the simulator instead be held accountable for the box’s failure to handle the edge case scenario that resulted in the accident? 

What about the ethics of the AI programming or training? A recent study found that current AI systems are perhaps 20% less likely to identify pedestrians if they are people of color. Was that due to the AI training on an insufficiently diverse subject base, or is there some other explanation? A recent survey conducted by MIT concluded that people ascribe a hierarchy to whose lives might be spared in edge cases where a crash is unavoidable and the question is not whether, but which, lives will perish. According to survey participants, human lives should be spared over those of animals; the lives of many should be spared over those of a few; and the young should be spared at the expense of the aged.

Interestingly, people also thought there should be a preference for someone pushing a stroller and observing traffic laws. The bottom line is that if an AV is programmed according to such ethics, your odds of being hit by one might increase significantly if you are a lone person jaywalking across a busy highway. In the moral hierarchy of the study, being a cat, dog or criminal earns the lowest level of protection, though one must wonder how a vehicle could distinguish a human criminal from a non-criminal (a real-time connection to prison records?). And what happens if, say, animal activist hackers alter the programming to prefer saving animals over people?
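
If one took the survey’s reported hierarchy literally and wrote it down as code, a purely hypothetical scoring function might look like the sketch below; the categories, weights and names are invented to mirror the preferences described above and do not reflect any real AV’s programming:

```python
# Purely hypothetical "protection score" mirroring the survey's reported
# preferences: humans over animals, many over few, young over old, with
# bonuses for strollers and law-abiding behavior and a penalty for jaywalking.
# Weights are invented for illustration only.

def protection_score(is_human: bool, count: int, is_young: bool,
                     pushing_stroller: bool, obeying_traffic_laws: bool) -> float:
    score = 0.0
    score += 10.0 if is_human else 1.0               # humans over animals
    score += 2.0 * count                             # the many over the few
    score += 3.0 if is_young else 0.0                # the young over the old
    score += 2.0 if pushing_stroller else 0.0        # stroller bonus
    score += 2.0 if obeying_traffic_laws else -2.0   # lawful pedestrians preferred
    return score

# A parent with a stroller in the crosswalk vs. a lone jaywalker on a highway.
print(protection_score(True, 2, True, True, True))    # higher score
print(protection_score(True, 1, False, False, False)) # lower score
```

Whether anything resembling such a ranking should ever sit in a vehicle’s decision loop is, of course, precisely the ethical question the survey raises.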

If the MIT survey is to be believed, such a hierarchy and variability exist today, only tucked away in the subconscious of human drivers rather than in machines. Think about that the next time you cross the street.

Written by David Riggs