eye location
Recently Published Documents


TOTAL DOCUMENTS: 102 (five years: 7)
H-INDEX: 10 (five years: 0)

2021 ◽  
Vol 7 (9) ◽  
pp. 162
Author(s):  
Sorin Valcan ◽  
Mihail Gaianu

Labeling is a costly and time-consuming process for generating the datasets used to train neural networks across many functionalities and projects. Its impact is especially large in automotive driver monitoring, where much of the budget is spent on image labeling. This paper presents an algorithm for generating ground truth data for 2D eye location in infrared images of drivers. The algorithm is implemented with many detection restrictions, which makes it very accurate but not necessarily very consistent. The resulting dataset is not modified by any human intervention and is used to train neural networks, which we expect to achieve very good accuracy and much better consistency for eye detection than the initial algorithm. This paper shows that very high-quality ground truth data for training neural networks can be generated automatically, which is still an open topic in the automotive industry.
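The abstract describes a restricted detector whose outputs become labels only when every check passes, trading recall for label accuracy. Below is a minimal, hypothetical sketch of that acceptance logic; all names and thresholds (EyeCandidate, accept_for_ground_truth, min_score, min_interocular_px, max_jump_px) are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: a candidate eye pair is written to the ground-truth
# set only if it passes a stack of strict consistency checks; otherwise the
# frame is left unlabeled. Thresholds are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EyeCandidate:
    center: Tuple[int, int]   # (x, y) in image coordinates
    score: float              # detector confidence in [0, 1]

def accept_for_ground_truth(
    left: Optional[EyeCandidate],
    right: Optional[EyeCandidate],
    prev_left: Optional[Tuple[int, int]],
    prev_right: Optional[Tuple[int, int]],
    min_score: float = 0.95,
    min_interocular_px: int = 40,
    max_jump_px: int = 15,
) -> bool:
    """Return True only when every restriction holds."""
    if left is None or right is None:
        return False
    if left.score < min_score or right.score < min_score:
        return False
    # Plausible eye geometry: eyes roughly horizontal, far enough apart.
    dx = right.center[0] - left.center[0]
    dy = abs(right.center[1] - left.center[1])
    if dx < min_interocular_px or dy > 0.25 * dx:
        return False
    # Temporal consistency: no implausible jump between consecutive frames.
    for cur, prev in ((left.center, prev_left), (right.center, prev_right)):
        if prev is not None:
            if abs(cur[0] - prev[0]) > max_jump_px or abs(cur[1] - prev[1]) > max_jump_px:
                return False
    return True
```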


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Yancheng Ling ◽  
Ruifa Luo ◽  
Xiaoxian Dong ◽  
Xiaoxiong Weng

2020 ◽  
Vol 5 (4) ◽  
pp. 182-186
Author(s):  
Jubin Xing ◽  
Zhonghe Ke ◽  
Liping Liu ◽  
Chenhong Li ◽  
Xiaoling Gong ◽  
...  

2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Samuel James Vine

Directing ocular fixations towards a target assists the planning and control of visually guided actions. In far aiming tasks, the quiet eye, an instance of pre-movement gaze anchoring, has been extensively studied as a key performance variable. However, theories of the quiet eye have yet to establish the exact functional role of the location and duration of the fixation. The present work used immersive virtual reality to manipulate two key parameters of the quiet eye, location (experiment 1) and duration (experiment 2), to test competing theoretical predictions about their importance. Across two pre-registered experiments, novice participants (n = 127) completed a series of golf putts while their eye movements, putting accuracy, and putting kinematics were recorded. In experiment 1, participants' pre-movement fixation was cued to locations on the ball, near the ball, and far from the ball. In experiment 2, long and short quiet eye durations were induced using auditory tones as cues to movement phases. Linear mixed-effects models indicated that the manipulations of location and duration had little effect on performance or movement kinematics. The findings suggest that, for novices, the spatial and temporal parameters of the final fixation may not be critical for movement pre-programming and may instead reflect attentional control or movement inhibition functions.
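As a concrete illustration of the analysis style the abstract names, here is a minimal linear mixed-effects sketch using statsmodels, with a random intercept per participant. The column names (participant, cue_location, radial_error) and the synthetic data are assumptions for illustration only, not the authors' variables or results.

```python
# Minimal sketch of a linear mixed-effects model with a random intercept
# per participant; data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 20, 30
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "cue_location": rng.choice(["on_ball", "near_ball", "far_from_ball"],
                               size=n_participants * n_trials),
})
# Synthetic putting error with a per-participant offset baked in.
subject_effect = rng.normal(0, 5, n_participants)[df["participant"]]
df["radial_error"] = 30 + subject_effect + rng.normal(0, 10, len(df))

# Fixed effect of cue location, random intercept for participant.
model = smf.mixedlm("radial_error ~ C(cue_location)", df,
                    groups=df["participant"])
print(model.fit().summary())
```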


Eye movements are closely integrated with cognitive processes, which makes them a useful basis for research into human behaviour and a window onto several cognitive processes of the brain. This research uses a low-resolution webcam to develop an eye tracker and saccade measurement tool that substantially lowers hardware costs. A robust algorithm, implemented in open-source software (Python) and tailored to the quality of the webcam, records the time series of the eye location. In addition, several algorithms are proposed to extract high-level saccadic measurements from the raw gaze outputs. A pilot study was performed on ten healthy participants and Multiple Sclerosis (MS) patients. Experimental results demonstrate that the proposed system is quick, simple, and efficient for eye tracking and saccade measurement, and the developed tool can be used by clinicians for the diagnosis and identification of neurological disorders.
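The abstract does not spell out the saccade-extraction algorithm, but a common approach for raw gaze time series is velocity thresholding. The sketch below is one plausible implementation under assumed parameters (a 30 fps webcam and an illustrative velocity threshold); it is not necessarily the authors' method.

```python
# Velocity-threshold saccade extraction from a gaze time series:
# samples whose speed exceeds a threshold are grouped into saccades.
import numpy as np

def detect_saccades(x, y, fs=30.0, vel_thresh=100.0):
    """Extract saccades from per-frame gaze coordinates.

    x, y : gaze coordinates per frame (e.g., pixels from a webcam tracker)
    fs   : sampling rate in Hz (a 30 fps webcam is assumed here)
    vel_thresh : speed threshold in the same spatial unit per second
    Returns a list of (start_idx, end_idx, peak_speed) tuples.
    """
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    speed = np.hypot(vx, vy)
    fast = speed > vel_thresh
    saccades, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i                      # saccade onset
        elif not f and start is not None:
            saccades.append((start, i - 1, float(speed[start:i].max())))
            start = None
    if start is not None:                  # saccade still open at end of trace
        saccades.append((start, len(fast) - 1, float(speed[start:].max())))
    return saccades
```

From these intervals, standard saccadic measurements such as count, amplitude, duration, and peak velocity follow directly.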


2019 ◽  
Vol 15 (10) ◽  
pp. 2744
Author(s):  
Wu Lihua ◽  
Bai Xu ◽  
Zheng Dianshuang ◽  
Gai Jianxin

Zootaxa ◽  
2018 ◽  
Vol 4459 (3) ◽  
pp. 507
Author(s):  
YUSUKE HIBINO ◽  
RYOICHI TABATA

A new catfish, Silurus tomodai, is described based on 37 specimens [132–514 mm standard length (SL)] collected from streams of Mie, Aichi, Gifu, Shizuoka and Nagano prefectures of central Honshu Island, Japan. Although S. tomodai is closely related to S. lithophilus (Tomoda, 1961) based on partial mitochondrial DNA sequences, the former can be distinguished from the latter by the position of the dorsal fin (predorsal-fin length 28.5–32.1% vs. 30.1–33.7% SL), a shorter head (18.5–21.2% vs. 19.5–22.2% SL) that is more broadly rounded in ventral view, a more slender body (depth at 10th anal-fin ray 12.9–18.3% vs. 15.7–18.8% SL, and 86.8–100.3% vs. 97.2–109.7% of body depth at anal-fin origin), a longer mandibular barbel [20.4–47.7% vs. 10.7–35.3% of head length (HL)], shorter anal-fin rays (10th anal-fin ray length 32.2–38.3% vs. 37.3–45.3% HL), the eye slightly protruding laterally beyond the head profile in dorsal view, and the shape of the medial depression on the anterior face of the mesethmoid (deep and narrow vs. shallow and wide). Silurus tomodai differs from S. asotus, a species widely distributed in Japan including central Honshu Island, in the shape of the vomerine-tooth band (typically separated into two distinct lenticular patches vs. continuous), the shape and size of the teeth (small and slightly recurved vs. relatively large and recurved), snout length (34.7–38.9% vs. 33.0–36.5% HL), lower-jaw length (110–124% vs. 124–138% of snout length), interorbital width (53.0–61.3% vs. 46.3–52.8% HL), eye location (vertical through anterior margin of pupil usually posterior vs. anterior to terminus of lips), inter-mandibular barbel width (24.7–32.5% vs. 21.7–26.7% HL), vertebral count (62–65 vs. 58–63), pigmentation on the underside of the head (usually mottled with dark pigmentation vs. uniformly white, rarely dark with a pale band along the posteroventral margin of the lower jaw), and egg color (yellow vs. light green).


2018 ◽  
Vol 146 (7) ◽  
pp. 2089-2101 ◽  
Author(s):  
Kenneth R. Knapp ◽  
Christopher S. Velden ◽  
Anthony J. Wimmers

Abstract Intense tropical cyclones (TCs) generally produce a cloud-free center with calm winds, called the eye. The Automated Rotational Center Hurricane Eye Retrieval (ARCHER) algorithm is used to analyze Hurricane Satellite (HURSAT) B1 infrared satellite imagery data for storms occurring globally from 1982 to 2015. HURSAT B1 data provide 3-hourly observations of TCs. The result is a 34-yr climatology of eye location and size. During that time period, eyes are identified in about 13% of all infrared images and slightly more than half of all storms produced an eye. Those that produce an eye have (on average) 30 h of eye scenes. Hurricane Ioke (2006) had the most eye images (98, which is 12 complete days with an eye). The median wind speed of a system with an eye is 97 kt (50 m s⁻¹) [cf. 35 kt (18 m s⁻¹) for those without an eye]. Eyes are much more frequent in the Northern Hemisphere (particularly in the western Pacific) but eyes are larger in the Southern Hemisphere. The regions where eyes occur are expanding poleward, thus expanding the area at risk of TC-related damage. Also, eye scene occurrence can provide an objective measure of TC activity in place of those based on maximum wind speeds, which can be affected by available observations and forecast agency practices.
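Note that the per-image and per-storm rates answer different questions: 13% of images show an eye, yet over half of storms produce one at some point. A hedged sketch of how such summaries could be computed from a per-image table is below; the column names (storm_id, has_eye, wind_kt) and toy values are hypothetical, not the HURSAT/ARCHER schema.

```python
# Toy per-image table illustrating the aggregation logic behind the
# reported statistics; values and column names are placeholders.
import pandas as pd

df = pd.DataFrame({
    "storm_id": ["A", "A", "A", "B", "B", "C"],
    "has_eye":  [True, True, False, False, False, True],
    "wind_kt":  [95, 100, 60, 35, 40, 110],
})

frac_images_with_eye = df["has_eye"].mean()                       # study reports ~13%
frac_storms_with_eye = df.groupby("storm_id")["has_eye"].any().mean()  # just over half
median_wind = df.groupby("has_eye")["wind_kt"].median()           # 97 kt vs. 35 kt reported
print(frac_images_with_eye, frac_storms_with_eye, median_wind, sep="\n")
```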


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Bin Li ◽  
Hong Fu

An accurate and efficient eye detector is essential for many computer vision applications. In this paper, we present an efficient method to estimate eye location in facial images. First, a group of candidate regions is quickly proposed from regional extreme points; then, a set of convolutional neural networks (CNNs) is adopted to determine the most likely eye region and classify the region as left or right eye; finally, the center of the eye is located with further CNNs. In experiments on GI4E, BioID, and our own datasets, our method attained a detection accuracy comparable to existing state-of-the-art methods; meanwhile, it was faster and more robust to image variations, including changes in external lighting, facial occlusion, and changes in image modality.
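A structural sketch of the three-stage cascade the abstract outlines is given below: extreme-point candidate proposal, a CNN that selects and classifies the eye region, and a second CNN that regresses the eye center. The network sizes, thresholds, and the choice of dark local minima as extreme points are illustrative assumptions, not the paper's models.

```python
# Hypothetical three-stage eye-detection cascade: candidate proposal from
# regional extreme points, CNN region classification, CNN center regression.
import numpy as np
from scipy.ndimage import minimum_filter
import torch
import torch.nn as nn

def propose_candidates(gray, patch=32, k=20):
    """Dark regional extreme points (pupils are dark) as candidate boxes."""
    is_min = gray == minimum_filter(gray, size=15)
    ys, xs = np.nonzero(is_min)
    order = np.argsort(gray[ys, xs])[:k]          # keep the darkest k extrema
    half = patch // 2
    return [(x - half, y - half, patch, patch)
            for x, y in zip(xs[order], ys[order])]

class SmallCNN(nn.Module):
    """Placeholder CNN; out_dim=3 for {left, right, not eye},
    out_dim=2 for (x, y) center regression."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, out_dim),
        )
    def forward(self, x):
        return self.net(x)

classifier, locator = SmallCNN(out_dim=3), SmallCNN(out_dim=2)

def detect_eye(gray):
    """Run the cascade on one grayscale face image (H, W) scaled to [0, 1]."""
    best = None
    with torch.no_grad():
        for (x0, y0, w, h) in propose_candidates(gray):
            crop = gray[max(y0, 0):y0 + h, max(x0, 0):x0 + w]
            if crop.shape != (32, 32):
                continue                          # skip boxes clipped by the border
            t = torch.from_numpy(np.ascontiguousarray(crop)).float()[None, None]
            probs = classifier(t).softmax(dim=1)[0]
            label = int(probs.argmax())           # 0 = left, 1 = right, 2 = not eye
            if label != 2 and (best is None or float(probs[label]) > best[0]):
                cx, cy = locator(t)[0].tolist()   # center within the 32x32 patch
                best = (float(probs[label]), label, (x0 + cx, y0 + cy))
    return best  # (confidence, left/right label, (x, y)) or None
```

The cascade design keeps the expensive CNNs off most of the image: only the handful of proposed patches are ever passed through the networks.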

