Calibration of Trust in Automated Driving: A Matter of Initial Level of Trust and Automated Driving Style?

Author(s):  
J. B. Manchon ◽  
Mercedes Bueno ◽  
Jordan Navarro

Objective Automated driving is becoming a reality, and such technology raises new concerns about human–machine interaction on the road. This paper aims to investigate factors influencing trust calibration and evolution over time. Background Numerous studies have shown that trust is a determinant of automation use and misuse, particularly in the automated driving context. Method Sixty-one drivers participated in an experiment aiming to better understand the influence of initial level of trust (Trustful vs. Distrustful) on drivers’ behaviors and trust calibration during two sessions of simulated automated driving. The automated driving style was manipulated as positive (smooth) or negative (abrupt) to investigate early human–machine interactions. Trust was assessed over time through questionnaires. Drivers’ visual behaviors and take-over performances during an unplanned take-over request were also investigated. Results Results showed an increase in trust over time for both Trustful and Distrustful drivers, regardless of the automated driving style. Trust was also found to fluctuate over time depending on the specific events handled by the automated vehicle. Take-over performances were influenced neither by the initial level of trust nor by the automated driving style. Conclusion Trust in automated driving increases rapidly when drivers experience such a system. The initial level of trust appears crucial to further trust calibration and modulates the effect of automation performance. Long-term trust evolution suggests that experience modifies drivers’ mental models of automated driving systems. Application In the automated driving context, trust calibration is a decisive question for guiding proper use of such systems and for road safety.


Information ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 21
Author(s):  
Johannes Ossig ◽  
Stephanie Cramer ◽  
Klaus Bengler

In human-centered research on automated driving, it is common practice to describe vehicle behavior using terms and definitions related to non-automated driving. However, some of these definitions are not suitable for this purpose. This paper presents an ontology for automated vehicle behavior that takes into account a large number of existing definitions and previous studies. The ontology is characterized by its applicability across various levels of automated driving and by a clear conceptual distinction between characteristics of vehicle occupants, the automation system, and the conventional characteristics of a vehicle (illustrated in the sketch below). In this context, the terms ‘driveability’, ‘driving behavior’, ‘driving experience’, and especially ‘driving style’, which are commonly associated with non-automated driving, play an important role. To clarify the relationships between these terms, the ontology is integrated into a driver-vehicle system. Finally, the ontology developed here is used to derive recommendations for the future design of automated driving styles and, more generally, for further human-centered research on automated driving.
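To make that conceptual separation concrete, here is a purely illustrative toy sketch (an assumption on our part, not the paper's formal ontology): occupant characteristics, the automation system, and conventional vehicle characteristics are modeled as distinct entities, and 'driving style' belongs to the agent performing the driving task, which in automated driving is the automation system.

```python
# Purely illustrative sketch, not the paper's formal ontology: it encodes
# the separation the abstract describes between occupant characteristics,
# the automation system, and conventional vehicle characteristics.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Occupant:
    """Characteristics of a vehicle occupant (not of the vehicle itself)."""
    driving_experience_years: float

@dataclass
class AutomationSystem:
    """The agent performing the driving task in automated driving."""
    sae_level: int        # applicable across levels of automation
    driving_style: str    # e.g., "smooth" or "dynamic" (hypothetical values)

@dataclass
class Vehicle:
    """Conventional vehicle characteristics, independent of who drives."""
    mass_kg: float
    occupants: List[Occupant]
    automation: Optional[AutomationSystem]  # None for non-automated driving

# Example: a level-3 vehicle whose automation drives with a smooth style.
car = Vehicle(1500.0, [Occupant(10.0)], AutomationSystem(3, "smooth"))
```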


Author(s):  
Johannes Kraus ◽  
David Scholz ◽  
Dina Stiegemeier ◽  
Martin Baumann

Objective This paper presents a theoretical model and two simulator studies on the psychological processes during early trust calibration in automated vehicles. Background The positive outcomes of automation can only reach their full potential if a calibrated level of trust is achieved. In this process, information on system capabilities and limitations plays a crucial role. Method In two simulator experiments, trust was repeatedly measured during an automated drive. In Study 1, all participants in a two-group experiment experienced a system-initiated take-over, and the occurrence of a system malfunction was manipulated. In Study 2, in a 2 × 2 between-subjects design, system transparency was manipulated as an additional factor. Results Trust was found to increase progressively during the first interactions. In Study 1, take-overs led to a temporary decrease in trust, as did malfunctions in both studies. Interestingly, trust was reestablished in the course of interaction after take-overs and malfunctions. In Study 2, the high-transparency condition did not show a temporary decline in trust after a malfunction. Conclusion Trust is calibrated based on the information provided prior to and during the initial drive with an automated vehicle. The experience of take-overs and malfunctions leads to a temporary decline in trust that recovers in the course of error-free interaction. The temporary decrease can be prevented by providing transparent information prior to system interaction. Application Transparency, including about potential limitations of the system, plays an important role in this process and should be considered in the design of tutorials and human–machine interaction (HMI) concepts of automated vehicles.


Author(s):  
Manwen Zhang ◽  
Xinglin Tao ◽  
Ran Yu ◽  
Yangyang He ◽  
Xinpan Li ◽  
...  

Flexible sensors, which can transduce various stimuli (e.g., strain, pressure, temperature) into electrical signals, are in high demand owing to the development of human–machine interaction. However, it is still a...


2012 ◽  
Vol 18 (3) ◽  
Author(s):  
David Ellison

This article examines the production and reception of incidental machine noise, specifically the variably registered sounds emanating from automata in the eighteenth and nineteenth centuries. The argument proposed here is that the audience for automata performances demonstrated a capacity to screen out mechanical noise that may have otherwise interfered with the narrative theatricality of their display. In this regard the audience may be said to resemble auditors at musical performances who learned to suppress the various noises associated with the physical mechanics of performance, and the faculty of attention itself. For William James among others, attention demands selection among competing stimuli. As the incidental noise associated with automata disappears from sensibility over time, its capacity to signify in other contexts emerges. In the examples traced here, such noise is a means of distinguishing a specifically etherealised human-machine interaction. This is in sharp distinction from other more degrading forms of relationship such as the sound of bodies labouring at machines. In this regard, the barely detected sound of the automata in operation may be seen as a precursor to the white noise associated with modern, corporate productivity.


2014 ◽  
Vol 47 (3) ◽  
pp. 6344-6349 ◽  
Author(s):  
Chouki Sentouh ◽  
Jean-Christophe Popieul ◽  
Serge Debernard ◽  
Serge Boverie

2019 ◽  
Vol 16 (04) ◽  
pp. 1950017
Author(s):  
Sheng Liu ◽  
Yangqing Wang ◽  
Fengji Dai ◽  
Jingxiang Yu

Motion detection and object tracking play important roles in unsupervised human–machine interaction systems. Nevertheless, human–machine interaction becomes invalid when the system fails to detect scene objects correctly due to occlusion and a limited field of view. Thus, robust long-term tracking of scene objects is vital. In this paper, we present a 3D motion detection and long-term tracking system with simultaneous 3D reconstruction of dynamic objects. To achieve high-precision motion detection, the proposed method provides an optimization framework with a novel motion pose estimation energy function, by which the 3D motion pose of each object can be estimated independently. We also develop an accurate object-tracking method that combines 2D visual information and depth. We incorporate a novel boundary-optimization segmentation based on 2D visual information and depth to improve the robustness of tracking significantly. In addition, we introduce a new fusion and updating strategy in the 3D reconstruction process, which brings higher robustness to 3D motion detection. Experimental results show that, on synthetic sequences, the root-mean-square error (RMSE) of our system is much smaller than that of Co-Fusion (CF); our system performs extremely well in 3D motion detection accuracy. In cases of occlusion or out-of-view motion on real scene data, CF suffers from tracking loss or object-label changes; by contrast, our system maintains robust tracking and keeps the correct labels for each dynamic object. Therefore, our system is robust to occlusion and out-of-view application scenarios.
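For reference, the RMSE metric cited above is the standard root-mean-square error between estimated and ground-truth trajectories. A minimal sketch in Python, with hypothetical (N, 3) position arrays (the paper does not publish its exact evaluation code), might look like this:

```python
import numpy as np

def trajectory_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE between two (N, 3) trajectories of estimated vs. true positions."""
    # Per-frame Euclidean distance between corresponding positions.
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    # Root of the mean squared per-frame error.
    return float(np.sqrt(np.mean(errors ** 2)))

# Example: a 100-frame trajectory with small random deviations.
truth = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)
estimate = truth + np.random.default_rng(1).normal(scale=0.01, size=(100, 3))
print(trajectory_rmse(estimate, truth))
```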


Author(s):  
John D. Lee ◽  
Shu-Yuan Liu ◽  
Joshua Domeyer ◽  
Azadeh DinparastDjadid

Objective: This study examines how driving styles of fully automated vehicles affect drivers’ trust, using a statistical technique, the two-part mixed model, that considers the frequency and magnitude of drivers’ interventions. Background: Adoption of fully automated vehicles depends on how people accept and trust them, and the vehicle’s driving style might have an important influence. Method: A driving simulator experiment exposed participants to a fully automated vehicle with three driving styles (aggressive, moderate, and conservative) across four intersection types (with and without a stop sign, and with and without crossing-path traffic). Drivers indicated their dissatisfaction with the automation by depressing the brake or accelerator pedal. A two-part mixed model examined how automation style, intersection type, and the distance between the automation’s driving style and the person’s own driving style affected the frequency and magnitude of pedal depression. Results: The conservative automated driving style increased the frequency and magnitude of accelerator pedal inputs; conversely, the aggressive style increased the frequency and magnitude of brake pedal inputs. The two-part mixed model showed a similar pattern for the factors influencing driver response, except that the distance between driving styles affected how often the brake pedal was pressed but had little effect on how much it was pressed. Conclusion: Eliciting brake and accelerator pedal responses provides a temporally precise indicator of drivers’ trust in automated driving styles, and the two-part model considers both the discrete and continuous characteristics of this indicator. Application: We offer a measure and method for assessing driving styles.
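The two-part idea can be illustrated in a few lines: one submodel handles whether an intervention occurs at all (a binary outcome), and a second handles how large it is when it does. The sketch below is a simplified fixed-effects illustration with simulated data and hypothetical variable names; the study itself fit a two-part mixed model with random effects for repeated measures, which this sketch omits.

```python
# Simplified two-part model: part 1 models whether a pedal input occurred
# (logistic regression); part 2 models its magnitude among nonzero cases
# (linear regression). The simulated data mirror the reported pattern:
# style distance raises how often the pedal is pressed, not how much.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    # Hypothetical predictor: distance between automation and driver style.
    "style_distance": rng.uniform(0, 2, n),
})
p = 1 / (1 + np.exp(-(-1.0 + 1.2 * df["style_distance"])))
pressed = rng.random(n) < p
magnitude = np.where(
    pressed, 5 + 0.1 * df["style_distance"] + rng.normal(0, 1, n), 0
)
df["pressed"], df["magnitude"] = pressed.astype(int), magnitude

X = sm.add_constant(df[["style_distance"]])

# Part 1: frequency of intervention (binary outcome).
logit_fit = sm.Logit(df["pressed"], X).fit(disp=False)

# Part 2: magnitude of intervention, conditional on occurrence.
nz = df["pressed"] == 1
ols_fit = sm.OLS(df.loc[nz, "magnitude"], X[nz]).fit()

print(logit_fit.params, ols_fit.params, sep="\n")
```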


Information ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 410
Author(s):  
Yannick Forster ◽  
Frederik Naujoks ◽  
Andreas Keinath

Empirical validation and verification procedures require sophisticated development of research methodology. Therefore, researchers and practitioners in human–machine interaction and the automotive domain have developed standardized test protocols for user studies. These protocols are used to evaluate human–machine interfaces (HMIs) for driver distraction or automated driving. A system or HMI is validated with regard to certain criteria that it can either pass or fail. One important aspect is the number of participants to include in the study and the corresponding number of potential failures concerning the pass/fail criteria of the test protocol. By applying binomial tests, the present work provides recommendations on how many participants should be included in a user study, shedding light on the degree to which inference from a sample with a specific pass/fail ratio to a population is permitted. The calculations take into account different sample sizes and different numbers of observations within a sample that fail the criterion of interest. The analyses show that required sample sizes grow large as the degree of controllability assumed for the population rises. The required sample size for verifying a specific level of controllability (e.g., 85%) also increases if failures with regard to the safety criteria are observed. In conclusion, the present work outlines potential sample sizes and the valid inferences about populations given the number of observed failures in a user study.
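As a rough illustration of the binomial logic described above (a sketch under assumed parameters, not the paper's published tables), one can compute the smallest sample size n for which observing a given number of failures still permits concluding, at a chosen significance level, that controllability in the population exceeds a criterion such as 85%:

```python
# Find the smallest n such that observing at most `fails` failures lets us
# reject H0: p <= p0 (one-sided exact binomial test) and conclude that true
# controllability exceeds p0. Parameter values are illustrative assumptions.
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def required_sample_size(p0: float = 0.85, fails: int = 0,
                         alpha: float = 0.05, n_max: int = 1000) -> int:
    """Smallest n for which `fails` failures still verify controllability > p0."""
    for n in range(fails + 1, n_max + 1):
        # p-value of observing n - fails or more successes under H0: p = p0.
        if binom_sf(n - fails, n, p0) <= alpha:
            return n
    raise ValueError("No n <= n_max satisfies the criterion.")

if __name__ == "__main__":
    for f in range(3):
        print(f"{f} observed failures -> n >= {required_sample_size(fails=f)}")
```

With these assumed values, zero observed failures require about 19 participants, and each additional observed failure pushes the required sample size higher, matching the trend the abstract reports.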

