Eye Tracking Interaction on Unmodified Mobile VR Headsets Using the Selfie Camera

2021 ◽  
Vol 18 (3) ◽  
pp. 1-20
Author(s):  
Panagiotis Drakopoulos ◽  
George-Alex Koulieris ◽  
Katerina Mania

Input methods for interaction in smartphone-based virtual and mixed reality (VR/MR) are currently based on uncomfortable head tracking controlling a pointer on the screen. User fixations are a fast and natural input method for VR/MR interaction. Previously, eye tracking in mobile VR suffered from low accuracy, long processing time, and the need for hardware add-ons such as anti-reflective lens coating and infrared emitters. We present an innovative mobile VR eye tracking methodology utilizing only the eye images from the front-facing (selfie) camera through the headset’s lens, without any modifications. Our system first enhances the low-contrast, poorly lit eye images by applying a pipeline of customised low-level image enhancements suppressing obtrusive lens reflections. We then propose an iris region-of-interest detection algorithm that is run only once. This increases the iris tracking speed by reducing the iris search space in mobile devices. We iteratively fit a customised geometric model to the iris to refine its coordinates. We display a thin bezel of light at the top edge of the screen for constant illumination. A confidence metric calculates the probability of successful iris detection. Calibration and linear gaze mapping between the estimated iris centroid and physical pixels on the screen result in low-latency, real-time iris tracking. A formal study confirmed that our system’s accuracy is similar to eye trackers in commercial VR headsets in the central part of the headset’s field-of-view. In a VR game, gaze-driven user completion time was as fast as with head-tracked interaction, without the need for consecutive head motions. In a VR panorama viewer, users could successfully switch between panoramas using gaze.
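The calibration step described above, a linear gaze mapping from the estimated iris centroid to physical screen pixels, can be sketched as two independent least-squares affine fits, one per axis. This is a minimal illustrative sketch; the function names and the three-point calibration data are assumptions, not taken from the paper:

```python
def fit_affine(src, dst):
    """Least-squares fit of dst = a*src + b for paired 1-D samples."""
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(src, dst))
    var = sum((s - mean_s) ** 2 for s in src)
    a = cov / var
    b = mean_d - a * mean_s
    return a, b

def calibrate(iris_pts, screen_pts):
    """Fit independent x/y affine maps from iris centroids to screen pixels."""
    ax, bx = fit_affine([p[0] for p in iris_pts], [q[0] for q in screen_pts])
    ay, by = fit_affine([p[1] for p in iris_pts], [q[1] for q in screen_pts])
    return lambda x, y: (ax * x + bx, ay * y + by)

# Example: three calibration targets, then map a new iris centroid.
gaze = calibrate([(10, 12), (20, 12), (15, 20)],
                 [(0, 0), (1000, 0), (500, 800)])
print(gaze(15, 12))  # ≈ (500.0, 0.0), the top-centre of the screen
```

A per-axis affine map is the simplest instance of the linear mapping the abstract mentions; a real calibration would use more targets and could add cross-axis terms.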

Author(s):  
Dongxian Yu ◽  
Jiatao Kang ◽  
Zaihui Cao ◽  
Neha Jain

To address the difficulty of reliably detecting traffic signs under the interference of various complex factors, and the weak robustness of current detection technology, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. First, to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Second, to improve the extraction of regions of interest, a region-of-interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is defined, and candidate regions are selected through the ROI detector. Then, an effective Histogram of Oriented Gradients (HOG) descriptor is introduced as the detection feature of traffic signs, and a Support Vector Machine (SVM) classifies each candidate as a traffic sign or background. Finally, a context-aware filter and a traffic-light filter are used to reject false traffic signs and improve detection accuracy. On the GTSDB database, three kinds of traffic signs (indicative, prohibitory, and danger) are tested, and the results show that the proposed algorithm has higher detection accuracy and robustness than current traffic sign recognition technology.
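The HOG descriptor used as the detection feature accumulates gradient magnitudes into orientation histograms, cell by cell. The sketch below computes one unsigned-orientation cell in pure Python; it is an illustration of the technique, not the paper's implementation (which would use a full block-normalised descriptor):

```python
import math

def hog_cell(patch, bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell.

    `patch` is a 2-D list of grey values. Gradients use central
    differences on interior pixels; each pixel votes its gradient
    magnitude into one of `bins` orientation bins over [0, 180).
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang / 180 * bins) % bins] += mag
    return hist
```

On a patch containing only a vertical edge, all the votes land in the 0° bin, which is what makes the histogram a useful shape feature for the SVM stage.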


2006 ◽  
Vol 45 (7) ◽  
pp. 077201 ◽  
Author(s):  
Huibao Lin

2018 ◽  
Vol 7 (2.22) ◽  
pp. 35
Author(s):  
Kavitha M ◽  
Mohamed Mansoor Roomi S ◽  
K Priya ◽  
Bavithra Devi K

The Automatic Teller Machine (ATM) plays an important role in modern economic society. ATM centers are often located in remote areas and are at high risk due to rising rates of crime and robbery. These centers rely on surveillance techniques for protection, yet even with surveillance installed, robbers fool the security system by hiding their faces with a mask or helmet. Hence, an automatic mask detection algorithm is required to raise an alert when the ATM is at risk. In this work, a Gaussian Mixture Model (GMM) is applied for foreground detection to extract the region of interest (ROI), i.e., the human being. The face region is acquired from the foreground through torso partitioning and by applying the Viola-Jones algorithm within this search space. Facial parts such as the eye pair, nose, and mouth are extracted, and a state model is developed to detect a mask.
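The final state model over detected face parts can be as simple as a decision rule on which part detectors fired. This toy sketch illustrates the idea; the rule and state names are assumptions, not the paper's actual model:

```python
def mask_state(eye_pair, nose, mouth):
    """Toy decision rule over part detections within a detected face.

    A face with nose and mouth visible is unmasked; hidden lower-face
    parts suggest a mask; hidden eyes as well suggest a full covering
    such as a helmet with a closed visor.
    """
    if nose and mouth:
        return 'unmasked'
    if eye_pair:
        return 'masked'
    return 'fully_covered'

# e.g. a cloth mask hides the nose and mouth but leaves the eyes visible:
state = mask_state(eye_pair=True, nose=False, mouth=False)  # 'masked'
```

In a deployed system this rule would be the last stage after GMM foreground extraction and Viola-Jones part detection, triggering the alert whenever the state is not `'unmasked'`.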


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Vincent Majanga ◽  
Serestina Viriri

Recent advances in medical imaging analysis, especially the use of deep learning, are helping to identify, detect, classify, and quantify patterns in radiographs. At the center of these advances is the ability to explore hierarchical feature representations learned from data. Deep learning is becoming the most sought-after technique, leading to enhanced performance in the analysis of medical applications and systems. Deep learning techniques have achieved great performance results in dental image segmentation. Segmentation of dental radiographs is a crucial step that helps the dentist diagnose dental caries. The performance of these deep networks is, however, restrained by various challenging features of dental carious lesions. Segmentation of dental images becomes difficult due to a vast variety of topologies, the intricacies of medical structures, and poor image quality caused by conditions such as low contrast, noise, and irregular, fuzzy edge borders, which result in unsuccessful segmentation. The dental segmentation method used is based on thresholding and connected component analysis. Images are preprocessed using a Gaussian blur filter to remove noise and corrupted pixels. Images are then enhanced using erosion and dilation morphology operations. Finally, segmentation is done through thresholding, and connected components are identified to extract the region of interest (ROI) of the teeth. The method was evaluated on an augmented dataset of 11,114 dental images: it was trained on 10,090 training images and tested on 1,024 test images. The proposed method achieved 93% for both precision and recall.
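The thresholding and connected-component step can be sketched in pure Python with a BFS flood fill that returns one bounding box per foreground component. The helper below is illustrative only; the paper's pipeline operates on full radiographs after blur and morphology, not on toy grids:

```python
from collections import deque

def threshold_and_label(img, thresh):
    """Binarise a 2-D grey image, then label 4-connected foreground
    components with BFS and return each component's bounding box
    as (x0, y0, x1, y1) in row-major discovery order."""
    h, w = len(img), len(img[0])
    binary = [[1 if img[y][x] >= thresh else 0 for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                seen[sy][sx] = True
                q = deque([(sy, sx)])
                x0 = x1 = sx
                y0 = y1 = sy
                while q:
                    y, x = q.popleft()
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes
```

Each returned box is a candidate tooth ROI; in the described method these would be filtered and passed on for caries diagnosis.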


2021 ◽  
Vol 38 (1) ◽  
pp. 215-220
Author(s):  
Bin Wu ◽  
Chunmei Wang ◽  
Wei Huang ◽  
Da Huang ◽  
Hang Peng

Classroom teaching, as the basic form of teaching, provides students with an important channel for acquiring information and skills. The academic performance of students can be evaluated and predicted objectively based on data about their classroom behaviors. Considering the complexity of the classroom environment, this paper first proposes a moving-target detection algorithm for student behavior recognition in class. Based on region of interest (ROI) and face tracking, the authors propose two algorithms to recognize the standing behavior of students in class. Moreover, a recognition algorithm was developed for hand raising in class based on skin color detection. Through experiments, the proposed algorithms were proved effective in recognizing student classroom behaviors.
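Skin color detection for hand-raising recognition is often a per-pixel rule applied to candidate regions. The sketch below uses a classic RGB skin heuristic; this particular rule is an assumption for illustration, as the paper does not specify its detector:

```python
def is_skin(r, g, b):
    """Classic RGB skin rule (uniform-daylight variant): a pixel is
    skin-coloured if red dominates and the channels are spread out."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    """Fraction of skin-coloured pixels in a candidate hand region."""
    hits = sum(is_skin(*p) for p in pixels)
    return hits / len(pixels)
```

A raised-hand region above the head would be accepted when `skin_ratio` exceeds a tuned threshold; per-pixel RGB rules are fast but sensitive to lighting, which is why such systems often combine them with motion or ROI cues as described above.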

