Meaningful human control as reason-responsiveness: the case of dual-mode vehicles

2019 ◽  
Vol 22 (2) ◽  
pp. 103-115 ◽  
Author(s):  
Giulio Mecacci ◽  
Filippo Santoni de Sio

Abstract
In this paper, in line with the general framework of value-sensitive design, we aim to operationalize the general concept of “Meaningful Human Control” (MHC) in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions investigated by Santoni de Sio and Van den Hoven (2018): the so-called ‘tracking’ condition. Our investigation is conducted in relation to one specific subclass of automated systems: dual-mode driving systems (e.g. Tesla ‘autopilot’). First, we connect and compare meaningful human control with a concept of control very popular in engineering and traffic psychology (Michon 1985), and we explain to what extent tracking resembles and differs from it. This will help clarify the extent to which the idea of meaningful human control is connected to, but also goes beyond, current notions of control in engineering and psychology. Second, we take the systematic analysis of practical reasoning as traditionally presented in the philosophy of human action (Anscombe, Bratman, Mele) and adapt it to offer a general framework in which different types of reasons and agents are identified according to their relation to an automated system’s behaviour. This framework is meant to help explain what reasons and what agents (should) play a role in controlling a given system, thereby enabling policy makers to produce usable guidelines and engineers to design systems that properly respond to selected human reasons. In the final part, we discuss a practical example of how our framework could be employed in designing automated driving systems.

Author(s):  
Robert Audi

This book provides an overall theory of perception and an account of knowledge and justification concerning the physical, the abstract, and the normative. It has the rigor appropriate for professionals but explains its main points using concrete examples. It accounts for two important aspects of perception on which philosophers have said too little: its relevance to a priori knowledge—traditionally conceived as independent of perception—and its role in human action. Overall, the book provides a full-scale account of perception, presents a theory of the a priori, and explains how perception guides action. It also clarifies the relation between action and practical reasoning; the notion of rational action; and the relation between propositional and practical knowledge. Part One develops a theory of perception as experiential, representational, and causally connected with its objects: as a discriminative response to those objects, embodying phenomenally distinctive elements; and as yielding rich information that underlies human knowledge. Part Two presents a theory of self-evidence and the a priori. The theory is perceptualist in explicating the apprehension of a priori truths by articulating its parallels to perception. The theory unifies empirical and a priori knowledge by clarifying their reliable connections with their objects—connections many have thought impossible for a priori knowledge as about the abstract. Part Three explores how perception guides action; the relation between knowing how and knowing that; the nature of reasons for action; the role of inference in determining action; and the overall conditions for rational action.


Author(s):  
Lucero Rodriguez Rodriguez ◽  
Carlos Bustamante Orellana ◽  
Jayci Landfair ◽  
Corey Magaldino ◽  
Mustafa Demir ◽  
...  

As technological advancements and lowered costs make self-driving cars available to more people, it becomes important to understand the dynamics of human-automation interaction for safety and efficacy. We used a dynamical approach to examine data from a previous study on simulated driving with an automated driving assistant. To maximize effect size in this preliminary study, we focused the current analysis on the two lowest- and two highest-performing participants. Our visual comparisons focused on utilization of the automated system and the impact of perturbations. Low-performing participants toggled between modes and maintained reliance either on automation or on themselves for longer periods of time. High-performing participants, by contrast, used the automation briefly and consistently throughout the driving task. Participants who displayed an early understanding of automation capabilities opted for tactical use. Further exploration of individual differences and automation usage styles will help in understanding the optimal human-automation team dynamic and in increasing safety and efficacy.


Systems ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 10 ◽  
Author(s):  
Clifford D. Johnson ◽  
Michael E. Miller ◽  
Christina F. Rusnock ◽  
David R. Jacques

Levels of Automation (LOA) provide a method for describing the authority granted to automated system elements to make individual decisions. However, these levels are technology-centric and provide little insight into overall system operation. The current research discusses an alternate classification scheme, referred to as the Level of Human Control Abstraction (LHCA). LHCA is an operator-centric framework that classifies a system’s state based on the required operator inputs. The framework consists of five levels, each requiring less granularity of human control: Direct, Augmented, Parametric, Goal-Oriented, and Mission-Capable. An analysis of several existing systems was conducted. This analysis illustrates the presence of each of these levels of control and shows that many existing systems support states which facilitate multiple LHCAs. It is suggested that as the granularity of human control is reduced, the level of required human attention and the required cognitive resources decrease. Thus, it is suggested that designing systems that permit the user to select among LHCAs during system control may facilitate human-machine teaming and improve the flexibility of the system.
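The five LHCA levels form an ordering by decreasing granularity of human control, which can be captured in a minimal sketch (the numeric ordering is an assumption drawn from the abstract’s framing; the names `LHCA` and `requires_less_granular_control` are hypothetical, not from the original paper):

```python
from enum import IntEnum

# Sketch: the five LHCA levels, ordered from most to least granular
# human control, per the abstract's Direct -> Mission-Capable listing.
class LHCA(IntEnum):
    DIRECT = 1          # operator issues low-level control inputs
    AUGMENTED = 2       # inputs are filtered/assisted by automation
    PARAMETRIC = 3      # operator sets parameters the system maintains
    GOAL_ORIENTED = 4   # operator specifies goals, system plans actions
    MISSION_CAPABLE = 5 # operator specifies the mission only

def requires_less_granular_control(a: LHCA, b: LHCA) -> bool:
    """True if level `a` demands less granular human control than `b`."""
    return a > b
```

Under the abstract’s suggestion, attention and cognitive demands would decrease as this ordering increases, so a system exposing multiple levels could let the operator trade control granularity against workload.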


Author(s):  
J.J. Ch. Meyer

Agent technology is a rapidly growing subdiscipline of computer science on the borderline of artificial intelligence and software engineering that studies the construction of intelligent systems. It is centered around the concept of an (intelligent/rational/autonomous) agent. An agent is a software entity that displays some degree of autonomy; it performs actions in its environment on behalf of its user but in a relatively independent way, taking initiatives to perform actions on its own by deliberating over its options to achieve its goal(s). The field of agent technology emerged out of philosophical considerations about how to reason about courses of action, and human action in particular. In analytical philosophy there is an area concerned with so-called practical reasoning, in which one studies so-called practical syllogisms, which constitute patterns of inference regarding actions. By way of an example, a practical syllogism may have the following form (Audi, 1999, p. 728): Would that I exercise. Jogging is exercise. Therefore, I shall go jogging. Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of “theoretical reasoning,” on closer inspection it appears that this syllogism does not express a purely logical deduction. (The conclusion does not follow logically from the premises.) It rather constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, namely, his/her beliefs (“jogging is exercise”) and his/her desires or goals (“would that I exercise”). So, practical reasoning is “reasoning directed toward action—the process of figuring out what to do,” as Wooldridge (2000, p. 21) puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation (see Figure 1).
The philosopher Michael Bratman has argued that humans (and, more generally, resource-bounded agents) also use the notion of an intention when deliberating their next action (Bratman, 1987). An intention is a desire that the agent is committed to and will try to fulfill until it believes it has achieved it or has some other rational reason to abandon it. Thus, we could say that agents, given their beliefs and desires, choose some desire as their intention, and “go for it.” This philosophical theory has been formalized through several studies, in particular the work of Cohen and Levesque (1990); Rao and Georgeff (1991); and Van der Hoek, Van Linder, and Meyer (1998), and has led to the so-called Belief-Desire-Intention (BDI) model of intelligent or rational agents (Rao & Georgeff, 1991). Since the beginning of the 1990s researchers have turned to the problem of realizing artificial agents. We will return to this hereafter.
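The deliberation cycle described above — beliefs and desires in, an intention and an action out — can be sketched in a few lines using the jogging syllogism (a minimal illustration only; the data structures and function names are hypothetical, not the BDI formalizations of Cohen and Levesque or Rao and Georgeff):

```python
# Minimal sketch of BDI-style deliberation: choose a desire as the
# intention, then turn the intention into an action (the practical
# syllogism). All names here are illustrative assumptions.

def deliberate(beliefs, desires):
    """Adopt as intention the first desire whose means the agent
    believes to be available."""
    for desire in desires:
        if desire["means"] in beliefs:
            return desire
    return None  # no achievable desire -> no intention

def plan(intention):
    """Commit to the intention by producing a concrete action."""
    return f"do:{intention['means']}"

# "Would that I exercise" (desire) + "jogging is exercise" (belief)
# -> "I shall go jogging" (decision, not logical deduction).
beliefs = {"jogging"}
desires = [{"goal": "exercise", "means": "jogging"}]

intention = deliberate(beliefs, desires)
action = plan(intention)  # "do:jogging"
```

Note that the “conclusion” is computed by commitment to an intention, not by logical entailment from the premises, which mirrors the point made about practical syllogisms above.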


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Serio Angelo Maria Agriesti ◽  
Marco Ponti ◽  
Giovanna Marchionni ◽  
Paolo Gandini

Abstract Introduction In the near future, automated vehicles will drive on public roads together with traditional vehicles. Even though most of academia agrees on that statement, the possible interferences between the two different kinds of driver are still to be analyzed and the real impacts on the traffic flow to be understood. Objectives The aim of this paper is to study one of the L3 automated systems most likely to be deployed on public roads in the short term: Highway Chauffeur. The analysis of this system is carried out on a roadwork scenario to assess the positive impacts arising from a joint implementation of the automated system and the C-ITS Use Case signaling the closure of a lane. In fact, the main contribution of this paper is the assessment of the possible benefits in travel times and driving regime arising from the joint implementation of the Highway Chauffeur system and of C-ITS messages, both for the vehicles equipped with both technologies and for the surrounding traffic. Methods The assessment is achieved through traffic simulations carried out with the VISSIM software and a Python script developed by the authors. The overall process is described and the obtained results are provided, commented on, and compared to define the implementation of the C-ITS Use Case that could maximize the benefits of L3 driving. Results The results showed how triggering the take-over maneuver in advance improves bottleneck efficiency (the same speed values reached between 80 and 100% market penetration for an approximately 700 m range of the C-ITS message are reached at 50% market penetration with a 1500 m range). Besides, an increased speed of up to 30 km/h at the bottleneck is recorded, depending on the market penetration and the message range. Finally, the delay upstream of the roadworks entrance is reduced by 6% and arises at around 700 m, without the need to deploy the message up to 1500 m.
Conclusions The paper investigates the impacts of take-over maneuvers and of automated driving while considering different operational parameters such as the message range. The results highlight the potential of the Use Case while providing interesting figures that frame the trends related to the different implementations. Finally, the tool developed to carry out the presented analysis is reported and made available so that, hopefully, the Use Case may be explored further and a precise impact assessment may be carried out with different prototypes of AVs and on different infrastructures.
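As a back-of-envelope illustration of why message range matters, the time budget available for the take-over maneuver scales linearly with range at a given speed (a hypothetical calculation: the ranges come from the abstract, the 100 km/h speed and the linear model are assumptions of ours, not the authors’ VISSIM setup):

```python
# Sketch: seconds between receiving the lane-closure message and
# reaching the closure, for a vehicle travelling at constant speed.
def takeover_time_budget(message_range_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return message_range_m / speed_ms   # time to cover the range

# At an assumed 100 km/h, a 700 m message range leaves roughly 25 s
# for the driver to resume control; 1500 m roughly doubles that.
t_700 = takeover_time_budget(700, 100)
t_1500 = takeover_time_budget(1500, 100)
```

This is consistent with the abstract’s finding that a longer message range lets lower market penetrations match the bottleneck speeds otherwise reached only at high penetration.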


1970 ◽  
pp. 11
Author(s):  
Kenneth Hudson

You are all familiar with the philosophical notion that nothing exists until and unless there is a word to describe and define it. Let us suppose, for instance, that my language and yours had no word for 'weather'. We would all have experienced sunshine, rain, frost, snow and wind, and we would know that these conditions came and went, but we should only be able to think of them as separate phenomena, without any general concept, expressed by the term 'weather', to bind them together as natural happenings beyond human control. The presence of the word 'weather' in the language changes our attitude to its individual components. They are all 'weather', so that we are able to have weather forecasts and weather reports, instead of merely rain forecasts and snow reports.


Author(s):  
Giulio Bianchi Piccinini ◽  
Esko Lehtonen ◽  
Fabio Forcolin ◽  
Johan Engström ◽  
Deike Albers ◽  
...  

Objective This paper aims to describe and test novel computational driver models, predicting drivers’ brake reaction times (BRTs) to different levels of lead vehicle braking, during driving with cruise control (CC) and during silent failures of adaptive cruise control (ACC). Background Validated computational models predicting BRTs to silent failures of automation are lacking but are important for assessing the safety benefits of automated driving. Method Two alternative models of driver response to silent ACC failures are proposed: a looming prediction model, assuming that drivers embody a generative model of ACC, and a lower gain model, assuming that drivers’ arousal decreases due to monitoring of the automated system. Predictions of BRTs issued by the models were tested using a driving simulator study. Results The driving simulator study confirmed the predictions of the models: (a) BRTs were significantly shorter with an increase in kinematic criticality, both during driving with CC and during driving with ACC; (b) BRTs were significantly delayed when driving with ACC compared with driving with CC. However, the predicted BRTs were longer than those observed, necessitating a fitting of the models to the data from the study. Conclusion Both the looming prediction model and the lower gain model predict the BRTs well for the ACC driving condition. However, the looming prediction model has the advantage of being able to predict average BRTs using the exact same parameters as the model fitted to the CC driving data. Application Knowledge resulting from this research can be helpful for assessing the safety benefits of automated driving.
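The looming signal such models assume drivers respond to is the rate of optical expansion of the lead vehicle, which follows directly from geometry (a minimal sketch; the variable names and numeric values are illustrative assumptions, not the paper’s fitted parameters):

```python
import math

# Sketch of the looming cue: the lead vehicle of width w at distance d
# subtends a visual angle theta = 2*atan(w / (2d)); its time derivative
# grows sharply as the gap closes, which a threshold model can use to
# predict brake onset.

def optical_angle(width_m: float, distance_m: float) -> float:
    """Visual angle subtended by the lead vehicle, in radians."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def looming(width_m: float, distance_m: float, closing_speed_ms: float) -> float:
    """Rate of optical expansion d(theta)/dt in rad/s (positive when closing).

    Derivative of theta w.r.t. time: w * v_closing / (d^2 + w^2/4).
    """
    return width_m * closing_speed_ms / (distance_m**2 + width_m**2 / 4.0)

# Same closing speed, smaller gap -> stronger looming signal, so a
# threshold-crossing model predicts earlier brake onset at higher
# kinematic criticality, matching result (a) above.
near = looming(1.8, 20.0, 5.0)
far = looming(1.8, 60.0, 5.0)
```

A full generative model of ACC would additionally predict the expected looming under normal ACC operation and react to the discrepancy during a silent failure; the sketch above covers only the raw cue.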

