AFFECTIVE COMPUTING AND AUGMENTED REALITY FOR CAR DRIVING SIMULATORS

2017 ◽  
Vol 12 ◽  
pp. 13 ◽  
Author(s):  
Dragoș Datcu ◽  
Leon Rothkrantz

Car simulators are essential for training and for analyzing the behavior, responses, and performance of the driver. Augmented Reality (AR) is the technology that enables virtual images to be overlaid on views of the real world. Affective Computing (AC) is the technology that enables computer systems to read emotions by analyzing body gestures, facial expressions, speech, and physiological signals. The key aspect of this research is the investigation of novel interfaces that help build situational awareness and emotional awareness, to enable affect-driven remote collaboration in AR for car driving simulators. The problem addressed is how to build situational awareness (using AR technology) and emotional awareness (using AC technology), and how to integrate these two distinct technologies [4] into a unified affective framework for training in a car driving simulator.

2021 ◽  
Vol 12 ◽  
Author(s):  
Meng Zhang ◽  
Klas Ihme ◽  
Uwe Drewitz ◽  
Meike Jipp

Facial expressions are one of the implicit measurements commonly used for in-vehicle affective computing. However, the time courses and underlying mechanisms of facial expressions have so far received little attention. According to the Component Process Model of emotions, facial expressions are the result of an individual's appraisals, which are supposed to happen in sequence. Therefore, a multidimensional and dynamic analysis of drivers' fear using facial expression data could profit from a consideration of these appraisals. A driving simulator experiment with 37 participants was conducted, in which fear and relaxation were induced. It was found that the facial expression indicators of the high novelty and low power appraisals were significantly activated after a fear event (high novelty: Z = 2.80, p < 0.01, r_contrast = 0.46; low power: Z = 2.43, p < 0.05, r_contrast = 0.50). Furthermore, after the fear event, the activation of high novelty occurred earlier than that of low power. These results suggest that multidimensional analysis of facial expressions is a suitable approach for the in-vehicle measurement of drivers' emotions. Furthermore, a dynamic analysis of drivers' facial expressions that considers the effects of appraisal components can add valuable information for the in-vehicle assessment of emotions.
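For readers unfamiliar with the reported statistics, the sketch below shows how such a before/after comparison of appraisal-indicator activation could be computed with a Wilcoxon signed-rank test and an r = Z/sqrt(N) effect size. The data and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: paired comparison of a facial appraisal indicator
# before vs. after a fear event, with a Wilcoxon signed-rank test and effect size.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_participants = 37

# Hypothetical per-participant mean activation of a "high novelty" indicator
# before (baseline) and after the fear event.
baseline = rng.normal(0.20, 0.05, n_participants)
after_event = baseline + rng.normal(0.08, 0.06, n_participants)

# Wilcoxon signed-rank test for paired, not necessarily normal samples.
stat, p_value = wilcoxon(after_event, baseline)

# Effect size r = |Z| / sqrt(N), with Z from the normal approximation of the
# Wilcoxon statistic (sufficient for illustration).
n = n_participants
mu = n * (n + 1) / 4
sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = abs((stat - mu) / sigma)
r_contrast = z / np.sqrt(n)

print(f"Z = {z:.2f}, p = {p_value:.4f}, r = {r_contrast:.2f}")
```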


2016 ◽  
Vol 78 (6-9) ◽  
Author(s):  
Rozmi Ismail ◽  
Mohamad Hanif Md Saad ◽  
Mohd Jailani Mohd Nor ◽  
Redzwan Rosli

Driving simulators have existed since the 1960s, but for a long time they remained too expensive to be used widely for training purposes. As IT-related technology has improved, the cost of driving simulators has fallen in recent years. Today, driving simulators are used as training devices in basic driver training. In developed countries, the use of simulators in driving education is not new. In Malaysia, however, the use of simulators in training is quite new compared to other Asian countries. This paper presents the results of a preliminary study on the readiness of Malaysian drivers to use a car driving simulator in training and education prior to driver licensing. A survey method was used to gather information on drivers' perceptions of the introduction of driving simulators in selected driving institutes. The results of this study showed that all respondents agreed on the importance and usefulness of the driving simulator in improving the driver training process. Based on these results, it is suggested that driving simulators should be incorporated into the driving curriculum in order to produce competent drivers.


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5328
Author(s):  
Clarence Tan ◽  
Gerardo Ceballos ◽  
Nikola Kasabov ◽  
Narayan Puthanmadam Subramaniyam

Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the leave-one-subject-out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used for solving the emotion recognition problem with multimodal data, and we provide directions for future research on SNNs for affective computing. In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one pass of training, which makes it suitable for practical and online applications. These features are not offered by other methods for this problem.
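The abstract does not show the NeuCube API, so the sketch below only illustrates the evaluation protocol it describes: feature-level fusion of multimodal features followed by LOSO cross-validation, with a generic scikit-learn classifier standing in for the evolving SNN. All data, dimensions, and subject counts are hypothetical.

```python
# Illustrative sketch: feature-level fusion + leave-one-subject-out evaluation
# for binary valence classification. A generic classifier replaces the SNN.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def feature_level_fusion(*modalities):
    """Concatenate per-trial feature vectors from each modality."""
    return np.hstack(modalities)

rng = np.random.default_rng(1)
n_trials = 240
subjects = np.repeat(np.arange(24), 10)        # hypothetical subject IDs per trial

# Hypothetical per-trial feature matrices for each modality.
ecg_feats = rng.normal(size=(n_trials, 8))
gsr_feats = rng.normal(size=(n_trials, 4))     # skin conductance
face_feats = rng.normal(size=(n_trials, 16))
y = rng.integers(0, 2, n_trials)               # binary valence labels

X = feature_level_fusion(ecg_feats, gsr_feats, face_feats)

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO mean accuracy: {np.mean(accuracies):.2%}")
```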


Author(s):  
Thomas Ludwig ◽  
Oliver Stickel ◽  
Peter Tolmie ◽  
Malte Sellmer

Ten years ago, Castellani et al. (Journal of Computer Supported Cooperative Work, vol. 18, no. 2–3, pp. 199–227, 2009) showed that using just an audio channel for remote troubleshooting can lead to a range of problems, and they already envisioned a future in which augmented reality (AR) could solve many of these issues. In the meantime, AR technologies have found their way into our everyday lives, and using such technologies to support remote collaboration has been widely studied within the fields of Human-Computer Interaction and Computer-Supported Cooperative Work. In this paper, we contribute to this body of research by reporting on an extensive empirical study of troubleshooting and expertise sharing within a Fab Lab and the potential relevance of articulation work to their realization. Based on the findings of this study, we derived design challenges that led to an AR-based concept, implemented as a HoloLens application called shARe-it. This application is designed to support remote troubleshooting and expertise sharing through different communication channels and AR-based interaction modalities. Early testing of the application revealed that novel interaction modalities such as AR-based markers and drawings play only a minor role in remote collaboration due to various limiting factors. Instead, the transmission of a shared view, and especially arriving at a shared understanding of the situation as a prerequisite for articulation work, continue to be the decisive factors in remote troubleshooting.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Drivers' gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers' gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator. One uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. The modules are able to detect the region, out of the seven into which the driving scene was divided, at which a driver is gazing in every processed frame of the route. Four methods for gaze estimation were implemented and compared, all of which learn the relation between gaze displacement and head movement. Two are simpler and based on points that try to capture this relation, and two are based on classifiers such as an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as the better hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism of the virtual reality experience provided by the Oculus.
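As a rough illustration of the classifier-based methods mentioned above, the sketch below trains an MLP and an SVM to map head-movement features to one of seven gaze regions. The feature layout and data are assumptions made for the example, not the authors' implementation.

```python
# Illustrative sketch: classifying head-movement features into seven gaze regions
# with an MLP and an SVM, in the spirit of the classifier-based methods described.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_frames = 5000

# Hypothetical per-frame head-pose features (yaw, pitch, roll and their
# displacements from a calibrated straight-ahead pose).
X = rng.normal(size=(n_frames, 6))
y = rng.integers(0, 7, n_frames)   # seven gaze regions of the driving scene

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [
    ("MLP", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)),
    ("SVM", SVC(kernel="rbf", C=1.0)),
]:
    clf = make_pipeline(StandardScaler(), model)
    clf.fit(X_tr, y_tr)
    print(f"{name} gaze-region accuracy: {clf.score(X_te, y_te):.2%}")
```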


2011 ◽  
Vol 460-461 ◽  
pp. 704-709
Author(s):  
Shu Tao Zheng ◽  
Zheng Mao Ye ◽  
Jun Jin ◽  
Jun Wei Han

Vehicle driving simulators are widely employed for training and entertainment because they are safe, economical, and efficient. An amphibious vehicle driving simulator was used to simulate an amphibious vehicle on land and in water. Because of the motion differences between aircraft and amphibious vehicles, it is necessary to design a suitable 6-DOF motion system based on the flight simulator motion system standard and the vehicle motion parameters. FFT-based digital signal processing and power spectral density (PSD) analysis were used to examine the relationship between them. Finally, based on the analysis results, a set of reasonable 6-DOF motion system parameters was given to realize the driving simulator motion cueing used to reproduce vehicle acceleration.
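A minimal sketch of the kind of FFT/PSD analysis described, assuming a Welch power spectral density estimate over a synthetic acceleration trace; the dominant frequency band indicates the bandwidth a 6-DOF motion platform would need to reproduce. Signal shape and parameters are illustrative only.

```python
# Illustrative sketch: Welch PSD of a synthetic vehicle acceleration signal to
# identify the frequency content the motion-cueing system must reproduce.
import numpy as np
from scipy.signal import welch

fs = 100.0                          # sampling rate of the acceleration log, Hz
t = np.arange(0, 60, 1 / fs)

# Hypothetical longitudinal acceleration: low-frequency manoeuvring plus
# higher-frequency terrain/wave excitation and measurement noise.
accel = (0.8 * np.sin(2 * np.pi * 0.2 * t)
         + 0.3 * np.sin(2 * np.pi * 2.5 * t)
         + 0.05 * np.random.default_rng(3).normal(size=t.size))

freqs, psd = welch(accel, fs=fs, nperseg=1024)

# The band holding most of the power bounds the bandwidth that the washout
# filters and actuators of the 6-DOF platform have to cover.
dominant = freqs[np.argmax(psd)]
print(f"Dominant excitation frequency: {dominant:.2f} Hz")
```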


Author(s):  
I. Murph ◽  
M. McDonald ◽  
K. Richardson ◽  
M. Wilkinson ◽  
S. Robertson ◽  
...  

Within distracting environments, it is difficult to maintain attentional focus on complex tasks. Cognitive aids can support attention by adding relevant information to the environment, for example via augmented reality (AR). However, there may also be a benefit in removing elements from the environment, such as irrelevant alarms, displays, and conversations. De-emphasis of distracting elements is a type of AR called Diminished Reality (DR). Although de-emphasizing distractions may help focus on a primary task, it may also reduce situational awareness (SA) of other activities that may become relevant. In the current study, participants will assemble a medical ventilator during a simulated emergency while experiencing varying levels of DR. Participants will also be probed to assess secondary SA. We anticipate that participants will have better accuracy and completion times in the full DR conditions, but that their SA will suffer. Applications include the design of future DR systems and improved training methods.

