Visual Behavior Modeling of Hazard Identification Assessment From Eye-Tracking Data

Author(s):  
Abner Cardoso Da Silva ◽  
Alberto Barbosa Raposo ◽  
Cesar Augusto Sierra Franco

Easier access to virtual reality head-mounted displays has fostered the use of this technology in research. In parallel, the integration of these devices with eye-trackers has opened new perspectives for visual attention analysis in virtual environments. Various research and application fields have found in such technologies a viable way to train and assess individuals by reproducing, at low cost, situations that are not easily recreated in real life. In this context, our proposal aims to develop a model that measures characteristics of safety professionals' gaze behavior during the hazard detection process.
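Gaze-behavior characteristics of the kind the proposal describes are usually computed over fixations rather than raw samples. As a hedged illustration (not the authors' model), the standard dispersion-threshold idea groups consecutive gaze samples into fixations; the thresholds and the sample format below are assumptions:

```python
# Illustrative dispersion-threshold fixation detection on eye-tracking
# samples. Threshold values and the (t, x, y) sample format are
# assumptions for this sketch, not taken from the paper.

def detect_fixations(samples, max_dispersion=0.05, min_duration=0.1):
    """samples: list of (t, x, y) gaze points (seconds, normalized coords).
    Returns a list of (start_t, end_t, cx, cy) fixations."""
    fixations = []
    i = 0
    n = len(samples)
    while i < n:
        j = i
        # Grow the window while its spatial dispersion stays small.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            j += 1
        window = samples[i:j + 1]
        if window[-1][0] - window[0][0] >= min_duration:
            # Long enough: record the centroid as one fixation.
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

From the resulting fixations, measures such as time-to-first-fixation on a hazard region or total dwell time can be derived.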

2014 ◽  
Vol 4 (2) ◽  
pp. 1
Author(s):  
Vitor Reus ◽  
Márcio Mello ◽  
Luciana Nedel ◽  
Anderson Maciel

Head-mounted displays (HMDs) allow a personal and immersive view of virtual environments and can be used with almost any desktop computer. Most HMDs have embedded inertial sensors for tracking the user's head rotations. These low-cost sensors offer high quality and availability. However, even though they are very sensitive and precise, inertial sensors work with incremental information and thus easily introduce errors into the system. Most notably, head tracking suffers from drift. In this paper we present important limitations that still prevent the wide use of inertial sensors for tracking. For instance, to compensate for drift, users of HMD-based immersive VEs move away from their suitable pose. We also propose a software solution for two problems: preventing the occurrence of drift in incremental sensors, and preventing the user from moving their body relative to another tracking system that uses absolute sensors (e.g., MS Kinect). We analyze and evaluate our solutions experimentally, including user tests. Results show that our comfortable pose function is effective in eliminating drift, and that it can be inverted and applied to prevent the user from moving their body out of the absolute sensor's range. The efficiency and accuracy of this method make it suitable for a number of applications in immersive VR.
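The abstract does not spell out the comfortable pose function, but the general idea of countering incremental-sensor drift can be sketched as follows; the blend-toward-reference form and the gain value are illustrative assumptions, not the paper's actual algorithm:

```python
# Illustrative drift correction for an incremental (gyro-based) yaw
# estimate: after each integration step, pull the heading slightly toward
# a "comfortable pose" reference so small errors cannot accumulate.
# The gain and rates are arbitrary assumptions for this sketch.

def corrected_yaw(yaw, gyro_rate, dt, comfortable_yaw=0.0, gain=0.1):
    """One update step: integrate the gyro rate (drift-prone), then
    blend toward the comfortable pose to bound accumulated error."""
    yaw = yaw + gyro_rate * dt                       # incremental update
    yaw = yaw + gain * dt * (comfortable_yaw - yaw)  # drift correction
    return yaw

# With a constant small sensor bias and no real motion, the corrected yaw
# settles near bias / gain instead of growing without bound.
yaw = 0.0
for _ in range(10000):
    yaw = corrected_yaw(yaw, gyro_rate=0.001, dt=0.016)
```

Without the correction term, the same bias over 10,000 frames would accumulate to 0.16 rad; with it, the error stays bounded around 0.01 rad.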


Author(s):  
Derek Harter ◽  
Shulan Lu ◽  
Pratyush Kotturu ◽  
Devin Pierce

We present an immersive virtual environment being developed to study questions of risk perception and their impact on effective training. Immersion is known to affect the quality of training in virtual environments and the successful transfer of skills to real-world situations. However, the level of perceived immersiveness that an environment invokes is an ill-defined concept, and different types of immersion are known to have greater or lesser influence on training outcomes. We concentrate on how immersiveness affects perceived risk in virtual environments, and how risk impacts training effectiveness. Simulated risk can invoke an alief of danger in subjects using a virtual environment. Alief is a concept useful in virtual training that describes situations where the person experiencing a simulated scenario knows it is not real but suspends disbelief (willingly or unwillingly). This suspension of disbelief (alief) can cause the person to experience the same sorts of autonomic reactions as if they were experiencing the situation in real life (think, for example, of the fear invoked by amusement park rides). Alief of risk or danger has been proposed as one phenomenon that can influence training outcomes, like the experience of immersion, when training in virtual environments. In this paper we present work on developing a low-cost virtual environment for the manipulation of immersion and risk for cognitive studies. In this environment we provide several alternative input modalities, from mouse to Wii Remote interactivity, to control a virtual avatar's hand and arm for performing risky everyday tasks. Immersion can be manipulated in several ways, as can the type of task and the risk associated with it. Typical tasks include kitchen preparation work (using knives or hot items) and woodworking or metalworking tasks (involving the manipulation of dangerous tools).
This paper describes the development and technologies used to create the virtual environment, and how we vary risk perception and immersion of users for various cognitive tasks. The capabilities and manipulations of immersiveness and risk are presented together with some findings on using Wii motes as input devices in several ways for virtual environments. The paper concludes with some preliminary results of varying perceived risk on cognitive task performance in the developed environment.


2019 ◽  
Vol 6 (2) ◽  
pp. 3-19 ◽  
Author(s):  
Thiago V. V. Batista ◽  
Liliane dos Santos Machado ◽  
Ana Maria Gondim Valença ◽  
Ronei Marcos de Moraes

One of the strategies used in recent years to increase the commitment and motivation of patients undergoing rehabilitation is the use of graphical systems, such as virtual environments and serious games. In addition to contributing to motivation, these systems can simulate real-life activities and provide means to measure and assess user performance. The use of natural interaction devices, originally conceived for the game market, has allowed the development of low-cost and minimally invasive rehabilitation systems. With the advent of natural interaction devices based on electromyography, the user's electromyographic data can also be used to build these systems. This paper presents the development of a serious game focused on aiding the rehabilitation of patients with hand motor problems, aiming to solve problems related to cost, adaptability, and patient motivation in this type of application. The game uses an electromyography device to recognize the gestures being performed by the user. A gesture recognition system was developed to detect new gestures, complementing the device's own recognition system, which is responsible for interpreting the signals. An initial evaluation of the game was conducted with professional physiotherapists.
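As a hedged sketch of the kind of gesture recognizer described (the feature choice, gesture set, and classifier are illustrative assumptions, not the paper's exact pipeline), simple time-domain EMG features can be matched against per-gesture templates:

```python
# Illustrative EMG gesture recognition: per-channel mean absolute value
# and zero-crossing count as features, classified by nearest centroid
# against pre-recorded per-gesture templates. All names and numbers here
# are assumptions for this sketch.

def emg_features(window):
    """window: list of per-channel sample lists; returns a flat feature vector."""
    feats = []
    for channel in window:
        mav = sum(abs(s) for s in channel) / len(channel)   # signal amplitude
        zc = sum(1 for a, b in zip(channel, channel[1:]) if a * b < 0)
        feats.extend([mav, float(zc)])
    return feats

def classify(feats, templates):
    """templates: dict gesture_name -> feature vector; nearest centroid wins."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(templates, key=lambda g: dist(feats, templates[g]))
```

In practice the templates would be recorded per patient during a calibration phase, which is one way such a system can adapt to varying motor ability.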


2021 ◽  
Vol 2 ◽  
Author(s):  
Philipp Maruhn

Virtual Reality is commonly applied as a tool for analyzing pedestrian behavior in a safe and controllable environment. Most such studies use high-end hardware such as Cave Automatic Virtual Environments (CAVEs), although, more recently, consumer-grade head-mounted displays have also been used to present these virtual environments. The aim of this study is first of all to evaluate the suitability of a Google Cardboard as a low-cost alternative, and then to test subjects in their home environment. Testing in a remote setting would ultimately allow more diverse subject samples to be recruited, while also facilitating experiments in different regions, for example, investigations of cultural differences. A total of 60 subjects (30 female and 30 male) were provided with a Google Cardboard. Half of the sample performed the experiment in a laboratory at the university, the other half at home without an experimenter present. The participants were instructed to install a mobile application on their smartphones, which guided them through the experiment, contained all the necessary questionnaires, and presented the virtual environment in conjunction with the Cardboard. In the virtual environment, the participants stood at the edge of a straight road, on which two vehicles approached with gaps of 1–5 s and at speeds of either 30 or 50 km/h. Participants were asked to press a button to indicate whether they considered the gap large enough to be able to cross safely. Gap acceptance and the time between the first vehicle passing and the button being pressed were recorded and compared with data taken from other simulators and from a real-world setting on a test track. A Bayesian approach was used to analyze the data. Overall, the results were similar to those obtained with the other simulators. The differences between the two Cardboard test conditions were marginal, but equivalence could not be demonstrated with the evaluation method used.
It is worth mentioning, however, that in the home setting with no experimenter present, significantly more data points had to be treated or excluded from the analysis.
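The study's actual Bayesian model is not specified in this abstract; as a minimal hedged illustration of the approach, the acceptance probability for a given gap size can be treated with a Beta-Binomial update (the counts below are made up):

```python
# Minimal Bayesian sketch: a Beta(1, 1) prior on the probability of
# accepting a gap of a given size, updated with observed accept/reject
# counts. The example counts are illustrative, not the study's data.

def beta_posterior(accepts, rejects, a0=1.0, b0=1.0):
    """Conjugate Beta-Binomial update; returns (alpha, beta, posterior mean)."""
    a = a0 + accepts
    b = b0 + rejects
    return a, b, a / (a + b)

# e.g. at a 3 s gap: suppose 14 of 20 participants accepted (made-up numbers)
a, b, mean = beta_posterior(accepts=14, rejects=6)
```

Comparing such posteriors across conditions (lab vs. home, Cardboard vs. other simulators) is one way marginal differences and remaining uncertainty can be quantified.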


2020 ◽  
Author(s):  
Andrew Fang ◽  
Jonathan Kia-Sheng Phua ◽  
Terrence Chiew ◽  
Daniel De-Liang Loh ◽  
Lincoln Ming Han Liow ◽  
...  

BACKGROUND During the Coronavirus Disease 2019 (COVID-19) outbreak, community care facilities (CCFs) were set up as temporary out-of-hospital isolation facilities to contain the surge of cases in Singapore. Confined living spaces within CCFs posed an increased risk of communicable disease spread among residents. OBJECTIVE This inspired our healthcare team managing a CCF operation to design a low-cost communicable disease outbreak surveillance system (CDOSS). METHODS Our CDOSS was designed with the following considerations: (1) comprehensiveness, (2) efficiency through passive reconnoitering of electronic medical record (EMR) data, (3) ability to provide spatiotemporal insights, (4) low cost, and (5) ease of use. We used Python to develop a lightweight application, the Python-based Communicable Disease Outbreak Surveillance System (PyDOSS), that was able to perform syndromic surveillance and fever monitoring. With minimal user actions, its data pipeline generates daily control charts and geospatial heat maps of cases from raw EMR data and logged vital signs. PyDOSS was successfully implemented as part of our CCF workflow. We also simulated a gastroenteritis (GE) outbreak to test the effectiveness of the system. RESULTS PyDOSS was used throughout the entire duration of the operation; the output was reviewed daily by senior management. No disease outbreaks were identified during our medical operation. In the simulated GE outbreak, PyDOSS effectively detected the outbreak within 24 hours and provided information about cluster progression that could aid contact tracing. The code for a stock version of PyDOSS has been made publicly available. CONCLUSIONS PyDOSS is an effective surveillance system that was successfully implemented in a real-life medical operation.
Because the system was developed with open-source technology and the code has been made freely available, the cost of developing and operating a CDOSS is significantly reduced, and it may be useful for similar temporary medical operations or in resource-limited settings.
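The released PyDOSS code may differ, but the daily control chart idea can be sketched as a Shewhart c-chart over syndromic case counts, with the Poisson-style upper control limit mean + 3·sqrt(mean) flagging anomalous days (the baseline window length and counts below are assumptions):

```python
# Hedged sketch of a daily control chart over case counts: each day is
# compared against a rolling baseline's upper control limit
# mean + 3 * sqrt(mean), the 3-sigma bound for Poisson-distributed counts.
# Window length and data are illustrative assumptions.

from math import sqrt

def c_chart_alerts(daily_counts, baseline_days=7):
    """Return indices of days whose count exceeds the rolling baseline's UCL."""
    alerts = []
    for i in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[i - baseline_days:i]
        mean = sum(baseline) / baseline_days
        ucl = mean + 3 * sqrt(mean)     # upper control limit
        if daily_counts[i] > ucl:
            alerts.append(i)
    return alerts
```

In a pipeline like the one described, such alerts would be generated from the day's EMR-derived counts and reviewed alongside the geospatial heat maps.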


Author(s):  
José Capmany ◽  
Daniel Pérez

Programmable Integrated Photonics (PIP) is a new paradigm that aims at designing common integrated optical hardware configurations which, by suitable programming, can implement a variety of functionalities that, in turn, can be exploited as basic operations in many application fields. Programmability enables, by means of external control signals, both chip reconfiguration for multifunction operation and chip stabilization against non-ideal operation caused by fluctuations in environmental conditions and fabrication errors. Programming also allows activating parts of the chip that are not essential for the implementation of a given functionality but can help reduce noise levels through the diversion of undesired reflections. After some years in which the Application Specific Photonic Integrated Circuit (ASPIC) paradigm has completely dominated the field of integrated optics, there is increasing interest in PIP, driven by a surge of emerging applications that are and will be calling for true flexibility and reconfigurability as well as low-cost, compact, and low-power-consuming devices. This book aims to provide a comprehensive introduction to this emerging field, covering aspects that range from the basics of the underlying technologies and photonic building blocks to the design alternatives and principles of complex programmable photonic circuits, their limiting factors, techniques for characterization and performance monitoring/control, and their salient applications in both the classical and quantum information fields. The book focuses mainly on the distinctive features of programmable photonics as compared to more traditional ASPIC approaches.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users usually use a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces decrease the level of immersion because the manipulation method does not resemble actual actions in reality, which often makes the traditional interface method inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps which are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard- and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in HMD-based virtual environments.
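The paper's actual gesture set and algorithm are not given in this abstract; as an illustrative sketch of how such recognition from depth-camera skeleton joints can work, simple thresholds on hand position relative to the shoulder map poses to navigation commands (gesture names, thresholds, and the joint format are assumptions):

```python
# Illustrative threshold-based gesture recognition from skeleton joints
# delivered by a depth camera, each given as an (x, y, z) position in
# meters. The gesture set and threshold are assumptions for this sketch:
# a hand raised well above the shoulder moves forward; a hand extended
# sideways rotates the view.

def recognize_gesture(right_hand, right_shoulder, threshold=0.25):
    dx = right_hand[0] - right_shoulder[0]   # lateral offset
    dy = right_hand[1] - right_shoulder[1]   # vertical offset
    if dy > threshold:
        return "move_forward"
    if dx > threshold:
        return "rotate_right"
    return "idle"
```

Using joint positions relative to the body, rather than absolute coordinates, keeps such a recognizer independent of where the user stands in the sensor's field of view.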


Author(s):  
Qutaiba I. Ali ◽  
Issam Jafar

Aims: The aim of the Green Communication Infrastructure (GCI) project is to realize a self-sustainably powered communication infrastructure suitable for smart city application fields. Background: This paper presents the efforts made to realize such a self-powered communication infrastructure. The proposed GCI comprises different kinds of wireless fixed (or even mobile) nodes performing different activities according to application demands. An important class of these nodes is the Wireless Solar Router (WSR). Objective: Work on this project began in 2009 with the aim of demonstrating the essential steps that must be taken to build such a framework and of highlighting the value of adopting renewable energy resources in building mission-critical systems. Further objectives of this project are to provide a reasonably priced, reliable, secure, and easy-to-install communication infrastructure. Method: The GCI was implemented after passing through two design levels: the device level and the system level. Result: The suggested system is highly applicable and serves a wide range of smart city application fields, and hence many people and organizations can utilize it. Conclusion: The availability of a reliable, secure, low-cost, easy-to-install, and self-powered communication infrastructure is essential nowadays. Communities in developing countries or rural areas particularly need such a system in order to communicate with the rest of the world, which will positively affect their social and economic situation.


Author(s):  
Martin Gebert ◽  
Wolfgang Steger ◽  
Ralph Stelzer

Virtual Reality (VR) visualization of product data in engineering applications requires a largely manual process of translating various product data into a 3D representation. Modern game engines allow low-cost, high-end visualization using the latest stereoscopic Head-Mounted Displays (HMDs) and input controllers. Thus, using them for VR tasks in the engineering industry is especially appealing. As standardized formats for 3D product representations do not currently meet the requirements that arise from engineering applications, this paper suggests an Enhanced Scene Graph (ESG) that carries arbitrary product data derived from various engineering tools. The ESG contains formal descriptions of geometric and non-geometric data that are functionally structured. A VR visualization can be derived directly from the formal description in the ESG. The generic elements of the ESG offer flexibility in the choice of both engineering tools and renderers that create the virtual scene. Furthermore, the ESG allows storing user annotations, thereby sending feedback from the visualization directly to the engineers involved in the product development process. Individual user interfaces for VR controllers can be assigned and their controls mapped, guaranteeing intuitive scene interaction. The use of the ESG promises significant value to the visualization process as particular tasks are being automated and greatly simplified.
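The paper's concrete ESG schema is not given in this abstract; as a hypothetical sketch of the structure it describes, each node could carry optional geometry, arbitrary non-geometric product data, and user annotations fed back to the engineers:

```python
# Hypothetical sketch of an Enhanced Scene Graph node (the paper does not
# define a concrete schema here): optional geometry reference, arbitrary
# non-geometric product data, user annotations from the VR session, and
# child nodes. All field names are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ESGNode:
    name: str
    geometry: Optional[str] = None                   # e.g. a mesh file reference
    metadata: dict = field(default_factory=dict)     # non-geometric product data
    annotations: list = field(default_factory=list)  # user feedback from VR
    children: list = field(default_factory=list)

    def annotate(self, text):
        """Attach feedback created in the visualization for the engineers."""
        self.annotations.append(text)

    def walk(self):
        """Depth-first traversal, e.g. for a renderer building the scene."""
        yield self
        for child in self.children:
            yield from child.walk()

root = ESGNode("assembly", metadata={"material": "steel"})
bolt = ESGNode("bolt_M6", geometry="bolt_m6.obj")
root.children.append(bolt)
bolt.annotate("clearance too small for tool access")
```

Keeping the geometry reference and product data generic like this is what lets different engineering tools populate the graph and different renderers consume it.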

