Autonomous Vehicles, Technological Progress, and the Scope Problem in Products Liability

2019 ◽  
Vol 12 (2) ◽  
pp. 157-212
Author(s):  
Alexander B. Lemann

Abstract: Autonomous vehicles are widely expected to save tens of thousands of lives each year by making car crashes attributable to human error – currently the overwhelming majority of fatal crashes – a thing of the past. How the legal system should attribute responsibility for the (hopefully few) crashes autonomous vehicles cause is an open and hotly debated question. Most tort scholars approach this question by asking what liability rule is most likely to achieve the desired policy outcome: promoting the adoption of this lifesaving technology without destroying manufacturers’ incentives to optimize it. This approach has led to a wide range of proposals, many of which suggest replacing standard rules of products liability with some new system crafted specifically for autonomous vehicles and creating immunity or absolute liability or something in between. But, I argue, the relative safety of autonomous vehicles should not be relevant in determining whether and in what ways manufacturers are held liable for their crashes. The history of products liability litigation over motor vehicle design shows that the tort system has been hesitant to indulge in such comparisons, as it generally declines both to impose liability on older, more dangerous cars simply because they lack the latest safety features and to grant immunity to newer, safer cars simply because of their superior aggregate performance. These are instances in which products liability law fails to promote efficient outcomes and instead provides redress for those who have been wronged by defective products. Applying these ideas to the four fatalities that have so far been caused by autonomous vehicles suggests that just as conventional vehicles should not be considered defective in relying on a human driver, autonomous vehicles should not be immune when their defects cause injury.


2018 ◽  
Vol 11 (1) ◽  
pp. 71-143 ◽  
Author(s):  
Donald G. Gifford

Abstract: Waves of technological change explain the most important transformations of American tort law. In this Article, I begin by examining historical instances of this linkage. Following the Industrial Revolution, for example, machines, rather than humans and animals, powered production. Operating with greater force, locomotives and other machines inflicted far more severe injuries. These dramatic technological changes prompted the replacement of the preexisting strict liability tort standard with the negligence regime. Similarly, later technological changes caused the enactment of workers’ compensation statutes, the implementation of automobile no-fault systems in some states and routinized automobile settlement practices in others that resemble a no-fault system, and the adoption of “strict” products liability. From this history, I derive a model explaining how technological innovation alters (1) the frequency of personal injuries, (2) the severity of such injuries, (3) the difficulty of proving claims, and (4) the new technology’s social utility. These four factors together determine the choice among three liability standards: strict liability, negligence, and no-fault liability with limited damages. I then apply this model to the looming technological revolution in which autonomous vehicles, robots, and other Artificial Intelligence machines will replace human decision-making as well as human force. I conclude that the liability system governing autonomous vehicles is likely to be one similar to the workers’ compensation system, in which the victim is relieved of the requirement of proving which party acted tortiously and caused the accident.


AI Magazine ◽  
2017 ◽  
Vol 38 (4) ◽  
pp. 27-34 ◽  
Author(s):  
Paul Bello ◽  
Will Bridewell

For decades, AI researchers have built agents capable of carrying out tasks that require human-level or human-like intelligence. During this time, questions of how these programs compare in kind to humans have surfaced and led to beneficial interdisciplinary discussions, but conceptual progress has been slower than technological progress. Within the past decade, the term agency has taken on new import as intelligent agents have become a noticeable part of our everyday lives. Research on autonomous vehicles and personal assistants has expanded into private industry, with new and increasingly capable products surfacing as a matter of routine. This wider use of AI technologies has raised questions about legal and moral agency at the highest levels of government (National Science and Technology Council 2016) and drawn the interest of other academic disciplines and the general public. Within this context, the notion of an intelligent agent in AI is too coarse and in need of refinement. We suggest that the space of AI agents can be subdivided into classes, where each class is defined by an associated degree of control.


2021 ◽  
Vol 2 ◽  
pp. 34-40
Author(s):  
Peter Jucha ◽  
Tatiana Corejova

Technological progress becomes more significant every year, and people are witnessing a stream of innovations that are becoming part of their daily lives. Technology advances at great speed because people's needs and requirements are increasingly difficult to meet, and innovations are developed to help fulfill them. Not everyone, however, views technological progress and innovation positively. The aim of the paper is to evaluate the opinions of students at a selected higher education institution on new technologies and innovations: their general attitude towards technological innovation as well as their views on the use of specific technologies such as robots, drones, and autonomous vehicles. The students' responses as to whether they would welcome these innovations, and whether they would be satisfied to see them widely used in the future, vary. Some would benefit from such innovations, others would not; some students object because people could lose their jobs, and others simply do not believe in such innovations. The results of the paper provide an evaluation of all the answers given by the students.


2018 ◽  
Author(s):  
W. Bradley Wendel

The trolley problem is a well-known thought experiment in moral philosophy, used to explore issues such as rights, deontological reasons, intention, and the doctrine of double effect. Recently, it has featured prominently in popular discussions of decision making by autonomous vehicle systems. For example, a Mercedes-Benz executive stated that, if faced with the choice between running over a child who had unexpectedly darted into the road and steering suddenly, causing a rollover accident that would kill the driver, an automated Mercedes would opt to kill the child. This paper considers not the ethical issues raised by such dilemmas, but the liability of vehicle manufacturers for injuries that foreseeably result from the design of autonomous systems. Some of the recent commentary on the liability of autonomous vehicle manufacturers suggests unfamiliarity with modern products liability law, particularly the design-defect standard in the Third Restatement of Torts. A superficial understanding of products liability principles – for example, believing it is a regime of strict liability in any meaningful sense – can lead to serious errors in the application of this area of law to autonomous vehicles. It is also a mistake to believe that the economic approach to negligence liability, as developed by Posner and Calabresi, accurately characterizes modern products liability principles. Under the Third Restatement approach, a court or jury will consider whether a product embodies a reasonable balance of safety and utility, and “reasonable” can be interpreted in accordance with ordinary community ethical standards. Thus, some of the issues that are central to resolving trolley problems in moral philosophy may actually recur in design-defect litigation.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Sean Bollman

Driverless automobiles may redefine public safety and efficiency, while turning the automobile industry on its head. These innovative machines will pose a challenge to regulatory schemes ranging from transportation and insurance to products liability and internet law. Major companies such as BMW, Audi, Uber, and Google have already begun placing this rapidly developing technology in consumers' hands. The rift that this innovation will create in other industries, coupled with the safety and privacy concerns surrounding its design, will be the catalyst for contentious legislative and legal debates. This Note will explore the ways in which industry flexibility, state and federal involvement, and clearer regulations may be carefully balanced to help the driverless car industry stay on the road. Part one will address the development and historical challenges of driverless vehicles, while parts two and three will examine potential solutions to these challenges.

