Automotive augmented reality 3D head-up display based on light-field rendering with eye-tracking

2020 ◽ Vol 28 (20) ◽ pp. 29788
Author(s): Jin-Ho Lee, Igor Yanusik, Yoonsun Choi, Byongmin Kang, Chansol Hwang, ...

Author(s): Seok Lee, Juyong Park, Jingu Heo, Byungmin Kang, Dongwoo Kang, ...

2021 ◽ Vol 52 (1) ◽ pp. 369-372
Author(s): Chen Gao, Yifan (Evan) Peng, Haifeng Li, Xu Liu

Sensors ◽ 2021 ◽ Vol 21 (6) ◽ pp. 2234
Author(s): Sebastian Kapp, Michael Barz, Sergey Mukhametov, Daniel Sonntag, Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye-tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
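As an illustration of the two metrics reported above, the following Python/NumPy sketch computes angular accuracy as the mean angular offset between gaze samples and the known target direction, and precision as the RMS of sample-to-sample angular distances. These are common definitions for the reported quantities, but the toolkit's exact formulas are not given in the abstract; all function names and the synthetic data are illustrative.

import numpy as np

def angle_deg(u, v):
    # Angle in degrees between two 3D gaze/target direction vectors.
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def accuracy_deg(gaze_dirs, target_dir):
    # Accuracy: mean angular offset of each gaze sample from the target.
    return float(np.mean([angle_deg(g, target_dir) for g in gaze_dirs]))

def precision_rms_deg(gaze_dirs):
    # Precision: RMS of angular distances between successive samples.
    d = [angle_deg(gaze_dirs[i], gaze_dirs[i + 1])
         for i in range(len(gaze_dirs) - 1)]
    return float(np.sqrt(np.mean(np.square(d))))

# Toy data: 120 gaze samples scattered ~1 degree around a target straight ahead.
rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 1.0])
samples = target + rng.normal(scale=0.015, size=(120, 3))
print(accuracy_deg(samples, target), precision_rms_deg(samples))

Grouping samples by fixation target before applying these functions yields the per-distance and per-angle evaluation described above; the standard deviation of the angular offsets is another common choice for precision.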


Information ◽ 2021 ◽ Vol 12 (6) ◽ pp. 226
Author(s): Lisa-Marie Vortmann, Leonid Schwenke, Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this attended target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
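To make the pipeline concrete, the sketch below pairs a shallow convolutional net over 3-second EEG windows with probability-level late fusion of an eye-tracking classifier's output. The abstract does not specify the architecture, so the layer layout and all hyperparameters here (32 channels, an assumed 250 Hz sampling rate, filter sizes, equal fusion weights) are assumptions for illustration, written in PyTorch.

import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    # Minimal shallow ConvNet: temporal conv -> spatial conv -> pool -> linear.
    def __init__(self, n_channels=32, n_samples=750, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 40, kernel_size=(1, 25)),           # temporal filtering
            nn.Conv2d(40, 40, kernel_size=(n_channels, 1)),  # spatial filtering
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size once
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.head = nn.Linear(n_feat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.head(self.features(x))

def late_fusion(eeg_logits, eye_probs, w=0.5):
    # Weighted average of per-modality class probabilities.
    return w * torch.softmax(eeg_logits, dim=1) + (1 - w) * eye_probs

# Toy usage: four 3 s windows at the assumed 250 Hz (750 samples), 32 channels.
model = ShallowEEGNet()
x = torch.randn(4, 1, 32, 750)
eye_probs = torch.softmax(torch.randn(4, 2), dim=1)  # stand-in eye-tracking output
print(late_fusion(model(x), eye_probs).shape)  # torch.Size([4, 2])

Fusing at the probability level keeps the two modality models independent, so the eye-tracking classifier can be retrained or replaced without touching the EEG net.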


Author(s): Goran Petrovic, Aneez Kadermohideen Shahulhameed, Sveta Zinger, Peter H. N. De With

Fast track article for IS&T International Symposium on Electronic Imaging 2021: Stereoscopic Displays and Applications XXXII proceedings.

