Comparing Expert Driving Behavior in Real World and Simulator Contexts

2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Hiran B. Ekanayake ◽  
Per Backlund ◽  
Tom Ziemke ◽  
Robert Ramberg ◽  
Kamalanath P. Hewagamage ◽  
...  

Computer games are increasingly used for purposes beyond mere entertainment, and current high-tech simulators can provide quite naturalistic contexts for purposes such as traffic education. One of the critical concerns in this area is the validity or transferability of acquired skills from a simulator to the real-world context. In this paper, we present our work in which we compared driving in the real world with driving in the simulator at two levels: first, using performance measures alone, and second, combining psychophysiological measures with performance measures. For our study, we gathered data using questionnaires as well as by logging vehicle dynamics, environmental conditions, video data, and users' psychophysiological measurements. For the analysis, we used several approaches, such as scatter plots to visualize driving tasks across contexts and vigilance estimators derived from electroencephalographic (EEG) data, to characterize the differences between driving in the two contexts. We believe that both the experimental procedures and the findings of this experiment are important to the field of serious games with respect to evaluating the fitness of driving simulators and measuring driving performance.
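The abstract does not say how the vigilance estimators were derived from the EEG signals; a common spectral proxy is the ratio of slow-band to fast-band power. The Python sketch below is only an illustrative assumption using that ratio; the sampling rate and band limits are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.signal import welch

def vigilance_index(eeg, fs=256.0):
    """Illustrative vigilance proxy: (theta + alpha) power divided by beta power.

    Not the estimator used in the study (which is unspecified in the abstract);
    band limits and sampling rate are assumptions for this sketch.
    """
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])

    return (band_power(4, 8) + band_power(8, 13)) / band_power(13, 30)
```

Under this particular operationalization, a higher ratio would be read as lower vigilance (more drowsiness); the same index could then be compared between simulator and real-world drives.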

10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background: Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capture real-world fall signals. However, it is difficult to directly derive visualization and interpretation of body movements from the fall signals, and corresponding video data is rarely available.

Objective: The re-enactment method uses available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach.

Methods: Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events such as stumbling to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary. The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences varying in speed and timing.

Results: A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events were heterogeneous concerning chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person’s center of mass during fall events based on the available sensor information.

Conclusions: Re-enactment is a promising method to understand and visualize the biomechanics of inertial sensor-recorded real-world falls when performed in a suitable setup, especially if video data is not available.
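As a rough illustration of the similarity analysis, the sketch below implements the classic dynamic time warping recurrence in Python for two 1-D acceleration signals. The signal names are hypothetical, and the paper's actual implementation and preprocessing are not described in the abstract.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Lets the re-enacted and real-world signals differ in speed and timing
    while still being matched sample to sample.
    """
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

# Hypothetical usage: real_fall and reenacted are 1-D numpy arrays of
# acceleration samples; a smaller distance means the signals are more similar.
# score = dtw_distance(real_fall, reenacted)
```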


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world to the extent that, at a professional level, demands on the working executive require more hours either in the office or on business trips, while at a social level the population (especially the younger generation) is glued to the computer, either playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system which allows pets to play new mixed reality computer games with humans via custom-built technologies and applications. During game-play the real pet chases a physical movable bait in the real world within a predefined area; an infrared camera tracks the pet's movements and translates them into the virtual world of the system, mapping them to the movement of a virtual pet avatar chasing a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world; this in turn drives the movement of the physical bait in the real world, which moves as the human avatar does. This way of playing computer games gives rise to a new form of mixed reality interaction between the pet owner and her pet, thereby bringing technology and its influence on leisure and social activities to the next level.
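To make the real/virtual coupling concrete, here is a minimal Python sketch of the two coordinate mappings such a system needs: tracked pet position to virtual pet avatar, and virtual human avatar back to a bait target position. The arena dimensions, world extents, and function names are hypothetical assumptions; the abstract does not describe Metazoa Ludens at this level of detail.

```python
# Hypothetical arena size (cm) and virtual-world extent (world units).
ARENA_CM = (200.0, 150.0)
WORLD_UNITS = (10.0, 7.5)

def pet_to_virtual(pet_xy_cm):
    """Map the infrared-tracked pet position (cm) to virtual-world coordinates."""
    x, y = pet_xy_cm
    return (x / ARENA_CM[0] * WORLD_UNITS[0],
            y / ARENA_CM[1] * WORLD_UNITS[1])

def avatar_to_bait(avatar_xy_world):
    """Map the human avatar's virtual position back to a bait target (cm)
    for the physical bait actuator to move toward."""
    x, y = avatar_xy_world
    return (x / WORLD_UNITS[0] * ARENA_CM[0],
            y / WORLD_UNITS[1] * ARENA_CM[1])
```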


Author(s):  
Kim R. Hammel ◽  
Donald L. Fisher ◽  
Anuj K. Pradhan

Driving simulators and eye tracking technology are increasingly being used to evaluate advanced telematics. Many such evaluations are easily generalizable only if drivers' scanning in the virtual environment is similar to their scanning behavior in real-world environments. In this study we developed a virtual driving environment designed to replicate the environmental conditions of a previous, real-world experiment (Recarte & Nunes, 2000). Our aim was to compare the data collected under three different cognitive loading conditions in an advanced, fixed-base driving simulator with that collected in the real world. In the study reported here, a head-mounted eye tracker recorded eye movement data while participants drove the virtual highway in half-mile segments. There were three loading conditions: no loading, verbal loading, and spatial loading. Each of the 24 participants drove in all three conditions. We found that the patterns characterizing eye movement data collected in the simulator were virtually identical to those characterizing eye movement data collected in the real world. In particular, the number of speedometer checks and the functional field of view decreased significantly in the verbal loading conditions, with even greater effects for the spatial loading conditions.
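As an illustration of the two measures compared across contexts, the Python sketch below counts speedometer checks from fixation coordinates and approximates the functional field of view as a fixation-dispersion radius. The area-of-interest bounds and the dispersion-based operationalization are assumptions for the sketch; the original studies may define these measures differently.

```python
import numpy as np

def speedometer_checks(fix_x, fix_y, aoi=(-5.0, 5.0, -25.0, -15.0)):
    """Count fixations inside a speedometer area of interest (gaze angles, deg).

    The AOI bounds are hypothetical and would depend on the cab geometry.
    """
    x0, x1, y0, y1 = aoi
    x, y = np.asarray(fix_x), np.asarray(fix_y)
    return int(np.sum((x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)))

def functional_field_of_view(fix_x, fix_y, coverage=0.95):
    """Approximate the functional field of view as the radius (deg) around the
    fixation centroid that contains `coverage` of all fixations."""
    x, y = np.asarray(fix_x), np.asarray(fix_y)
    r = np.hypot(x - x.mean(), y - y.mean())
    return float(np.percentile(r, coverage * 100))
```

Computing these per condition (no loading, verbal, spatial) in both the simulator and the real vehicle would allow the kind of pattern comparison the study reports.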


2016 ◽  
Author(s):  
Felix D. Schönbrodt ◽  
Jens B. Asendorpf

Computer games are advocated as a promising tool for bridging the gap between the controllability of a lab experiment and the mundane realism of a field experiment. At the same time, many authors stress the importance of observing real behavior instead of asking participants about possible or intended behaviors. In this article we introduce an online virtual social environment which is inhabited by autonomous agents, including the virtual spouse of the participant. Participants can freely explore the virtual world and interact with any other inhabitant, allowing the expression of spontaneous and unprompted behavior. We investigated the usefulness of this game for the assessment of interactions with a virtual spouse and their relations to intimacy and autonomy motivation as well as relationship satisfaction with the real-life partner. Both the intimacy motive and satisfaction with the real-world relationship showed significant correlations with aggregated in-game behavior, which indicates that some transfer between the real world and the virtual world took place. In addition, a process analysis of interaction quality revealed that relationship satisfaction and the intimacy motive had different effects on the initial status and the time course of interaction quality. Implications for psychological assessment using virtual social environments are discussed.
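One common way to operationalize a process analysis of "initial status and time course" is a mixed-effects growth model; the sketch below uses statsmodels for that purpose. The variable and file names are hypothetical, and this is not necessarily the analysis used in the article.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x time bin, with
# columns pid, time, quality (in-game interaction quality), intimacy
# (intimacy motive score), and satisfaction (relationship satisfaction).
df = pd.read_csv("interaction_quality.csv")

# Intercept terms capture initial status, time terms capture the time course;
# interactions test whether motives/satisfaction moderate either component.
model = smf.mixedlm(
    "quality ~ time * (intimacy + satisfaction)",
    data=df,
    groups=df["pid"],  # random intercept per participant
)
result = model.fit()
print(result.summary())
```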


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 57
Author(s):  
Ryan Feng ◽  
Yu Yao ◽  
Ella Atkins

Autonomous vehicles require fleet-wide data collection for continuous algorithm development and validation. The smart black box (SBB) intelligent event data recorder has been proposed as a system for prioritized high-bandwidth data capture. This paper extends the SBB by applying anomaly detection and action detection methods for generalized event-of-interest (EOI) detection. An updated SBB pipeline is proposed for the real-time capture of driving video data. A video dataset is constructed to evaluate the SBB on real-world data for the first time. SBB performance is assessed by comparing the compression of normal and anomalous data and by comparing our prioritized data recording with a FIFO strategy. The results show that SBB data compression can increase the anomalous-to-normal memory ratio by ∼25%, while the prioritized recording strategy increases the anomalous-to-normal count ratio compared to a FIFO strategy. We compare the real-world dataset SBB results to a baseline SBB given ground-truth anomaly labels and conclude that improved general EOI detection methods will greatly improve SBB performance.
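The contrast between prioritized recording and FIFO can be sketched with a toy buffer that keeps the highest-priority clips under a fixed memory budget. This is only a conceptual illustration in Python; the SBB's actual prioritization, compression, and buffering logic is more involved, and the class and parameter names are hypothetical.

```python
import heapq

class PrioritizedRecorder:
    """Keep the highest-priority clips within a fixed memory budget.

    Priority would come from an anomaly/action detection score. A FIFO
    recorder, by contrast, evicts the oldest clip regardless of its score.
    """

    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.used = 0.0
        self.heap = []  # min-heap of (priority, insertion_order, size_mb, clip_id)
        self.seq = 0

    def record(self, clip_id, size_mb, priority):
        heapq.heappush(self.heap, (priority, self.seq, size_mb, clip_id))
        self.seq += 1
        self.used += size_mb
        # Evict lowest-priority clips until the budget is respected.
        while self.used > self.budget and self.heap:
            _, _, size, _ = heapq.heappop(self.heap)
            self.used -= size
        return [clip_id for _, _, _, clip_id in self.heap]  # clips currently kept
```

With such a buffer, clips flagged as anomalous tend to survive eviction, which is the intuition behind the improved anomalous-to-normal ratios reported above.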


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2527
Author(s):  
Minji Jung ◽  
Heekyung Yang ◽  
Kyungha Min

The advancement and popularity of computer games make game scene analysis one of the most interesting research topics in the computer vision community. Among the various computer vision techniques, we employ object detection algorithms for the analysis, since they can both recognize and localize objects in a scene. However, applying existing object detection algorithms to game scenes does not guarantee the desired performance, since the algorithms are trained using datasets collected from the real world. To achieve the desired performance for analyzing game scenes, we built a dataset by collecting game scenes and retrained the object detection algorithms that had been pre-trained with real-world datasets. We selected five object detection algorithms, namely YOLOv3, Faster R-CNN, SSD, FPN, and EfficientDet, and eight games from various genres including first-person shooting, role-playing, sports, and driving. PascalVOC and MS COCO were employed for the pre-training of the object detection algorithms. We demonstrated the improvement in performance that comes from our strategy in two aspects: recognition and localization. The improvement in recognition performance was measured using mean average precision (mAP) and the improvement in localization using intersection over union (IoU).
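For reference, the localization metric used above, intersection over union, has a standard computation for two axis-aligned boxes; the Python sketch below assumes the (x1, y1, x2, y2) corner format (the papers' own evaluation code may use a different box convention).

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0
```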


1986 ◽  
Vol 30 (14) ◽  
pp. 1403-1404
Author(s):  
Marshall B. Jones ◽  
Robert S. Kennedy ◽  
Janet J. Turnage

The literature of applied psychology rarely, if ever, allows an unambiguous answer to a particular problem. Almost always there is a hiatus between what is known and what one wants to know. If the tasks are the same, the personnel, performance measures, temporal relations, or environmental conditions are different. Oftentimes nothing is quite the same as what has been studied in the literature. Inevitably, these gaps are closed by “expert judgment.” People who are experienced in the field extrapolate from what has been studied to the real-world case at hand. This inevitability is not, however, the end of the matter. Expert judgment can be utilized in many different ways, and some ways are better than others. The principal issues are: precisely what the experts are to be asked, how their consensus is to be determined, and how that consensus is to be used relative to the real-world problem at hand. This discussion describes one way of answering these questions, called “isoperformance.” The key feature of this approach is the design of an “ideal experiment,” which then functions as a framework for both what is known in the literature and expert judgment.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4456 ◽  
Author(s):  
Park ◽  
Wen ◽  
Sung ◽  
Cho

Nowadays, deep learning methods based on a virtual environment are widely applied to research and technology development for autonomous vehicles' smart sensors and devices. Learning various driving environments in advance is important for handling unexpected situations that can occur in the real world and for continuing to drive without accident. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering various possible real-world situations. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle or through a scenario analysis process conducted by experts. However, these two approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a scenario generation method based on deep learning to create scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method extracts multiple events from a video taken on a real road using deep learning and generates the multiple events in a virtual simulator. First, a faster region-based convolutional neural network (Faster R-CNN) extracts bounding boxes of each object in a driving video. Second, the high-level event bounding boxes are calculated. Third, long-term recurrent convolutional networks (LRCN) classify each type of extracted event. Finally, all multiple event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments using real driving video data and a virtual simulator were conducted. The results show that the deep learning model achieves an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in a virtual simulator for the smart sensors and devices of an autonomous vehicle.
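A highly simplified Python sketch of the four-step pipeline described above follows: per-frame detection, grouping detections into high-level event clips, event classification, and replay in the simulator. The detector, classifier, simulator interface, and the gap-based grouping rule are hypothetical stand-ins, not the paper's actual components.

```python
def group_into_events(per_frame_boxes, max_gap=5):
    """Group consecutive frames containing detections into event clips.

    A simplified stand-in for the paper's high-level event bounding boxes:
    an event is a run of frames with detections, allowing short gaps.
    """
    events, current, gap = [], [], 0
    for idx, boxes in enumerate(per_frame_boxes):
        if boxes:
            current.append((idx, boxes))
            gap = 0
        elif current:
            gap += 1
            if gap > max_gap:
                events.append(current)
                current, gap = [], 0
    if current:
        events.append(current)
    return events

def generate_scenario(frames, detect_objects, classify_event, simulator):
    """Turn a driving video (list of frames) into a simulator scenario.

    detect_objects, classify_event, and simulator are caller-supplied
    stand-ins for the Faster R-CNN detector, the LRCN classifier, and the
    virtual simulator interface.
    """
    per_frame_boxes = [detect_objects(frame) for frame in frames]  # step 1
    clips = group_into_events(per_frame_boxes)                     # step 2
    scenario = [classify_event(clip) for clip in clips]            # step 3
    simulator.play(scenario)                                       # step 4
    return scenario
```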


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

In the last few chapters, we have discussed important Java 3D objects that are basically static. Starting from this chapter, we will be looking at universes and objects that are dynamic in nature. Specifically, we will discuss issues of animation and interaction in this and the next chapter, respectively. As demonstrated by popular interactive computer games, animation and interaction are crucial in making a Java 3D world more interesting. Technically, animation is associated with changes in graphical objects and images as time passes without any direct user action, while interaction corresponds to any such change in response to an action or input from the user (Tso, Tharp, Zhang, & Tai, 1999). In any virtual reality or game application, animation and interaction are often crucial and critical. Through animation, the user is able to get a more realistic feel for the 3D objects by looking at them from different angles and perspectives. Through interaction with these objects, the user becomes more integrated into the virtual 3D world, in the same way as we sense our own reality in the real world. Under Java 3D, the “behavior” class is used to define and control both animation and interaction. However, note that the behavior class is an abstract class and cannot be used directly (Stromer, Quon, Gordon, Turinsky, & Sensen, 2005). Instead, there are three classes that extend the behavior class and are commonly used: the “interpolator,” the “billboard,” and the “level of detail (LOD)” classes. Furthermore, we can create a new behavior class by extending the behavior class to fit any special need. In this chapter, we will discuss the important interpolator classes using a number of illustrative examples, followed by a more detailed discussion of the billboard and LOD classes.


2015 ◽  
Vol 13 (2) ◽  
pp. 174-192 ◽  
Author(s):  
Robert Sparrow ◽  
Rebecca Harrison ◽  
Justin Oakley ◽  
Brendan Keogh

In the cultural controversy surrounding “violent video games,” the manufacturers and players of games often insist that computer games are a form of harmless entertainment that is unlikely to influence the real-world activities of players. Yet games and military simulations are used by military organizations across the world to teach the modern arts of war, from how to shoot a gun to teamwork, leadership skills, military values, and cultural sensitivity. We survey a number of ways of reconciling these apparently contradictory claims and argue that none of them are ultimately successful. Thus, either military organizations are wrong to think that games and simulations have a useful role to play in training anything other than the most narrowly circumscribed physical skills or some recreational digital games do, in fact, have the power to influence the real-world behavior and dispositions of players in morally significant ways.

