Do Expert Fencers Engage the Same Visual Perception Strategies as Beginners?

2021 ◽  
Vol 78 (1) ◽  
pp. 49-58
Author(s):  
Mateusz Witkowski ◽  
Ewa Tomczak ◽  
Łukasz Bojkowski ◽  
Zbigniew Borysiuk ◽  
Maciej Tomczak

Abstract: An effective visual perception strategy helps a fencer react quickly to an opponent’s actions. This study aimed to examine and compare the visual perception strategies used by high-performance foil fencers (experts) and beginners. In an eye-tracking experiment, we analysed which areas beginning and expert fencers paid attention to during duels. Novices paid attention to all examined areas of interest, comprising the guard, foil (blade and tip), armed hand, lower torso, and upper torso of their opponents. Experts, however, paid significantly less attention to the foil, picking up information from other areas, mainly the upper torso and the armed hand. These results indicate that expert fencers indeed engage different visual perception strategies than beginners. The present findings highlight that beginner fencers should be taught, already in the early stages of their careers, how to pick up information from various body areas of their opponents.
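For illustration, a minimal sketch of the kind of analysis such a study implies: aggregating per-AOI dwell time into a share of total viewing time for each fencer and comparing experts with novices. The AOI names follow the abstract; the fixation-record format and the values are assumptions, not the authors' data or code.

```python
# Minimal sketch (not the authors' code): share of dwell time per area of
# interest (AOI) for one fencer, given (aoi_name, duration_ms) fixation records.
from collections import defaultdict

AOIS = ["guard", "foil", "armed_hand", "lower_torso", "upper_torso"]

def dwell_share(fixations):
    """Return each AOI's fraction of total dwell time for one fencer."""
    totals = defaultdict(float)
    for aoi, duration_ms in fixations:
        totals[aoi] += duration_ms
    grand = sum(totals.values()) or 1.0
    return {aoi: totals[aoi] / grand for aoi in AOIS}

# Hypothetical records: an expert spreading attention away from the foil,
# a novice dwelling mostly on it.
expert = [("upper_torso", 420), ("armed_hand", 310), ("foil", 90)]
novice = [("foil", 380), ("guard", 200), ("upper_torso", 150)]
print(dwell_share(expert))
print(dwell_share(novice))
```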

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4173
Author(s):  
Salma Samiei ◽  
Pejman Rasti ◽  
Paul Richard ◽  
Gilles Galopin ◽  
David Rousseau

Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after the acquisition of images. In this article, we assess the value of various egocentric vision approaches for performing joint acquisition and automatic image annotation rather than the conventional two-step process of acquisition followed by manual annotation. The approach is illustrated with apple detection in challenging field conditions. We demonstrate the possibility of high performance in automatic apple segmentation (Dice 0.85), apple counting (an 88% probability of good detection and a 0.09 true-negative rate), and apple localization (a shift error of fewer than 3 pixels) with eye-tracking systems. This is obtained simply by applying the areas of interest captured by the egocentric devices to standard, non-supervised image segmentation. We especially stress the time savings of using such head-mounted eye-tracking devices to jointly perform image acquisition and automatic annotation: a gain of more than 10-fold compared with classical image acquisition followed by manual annotation is demonstrated.
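The Dice score reported for segmentation (0.85) is a standard overlap measure between a predicted and a reference mask; a minimal sketch of how it is computed from two binary masks follows (the NumPy formulation is an assumption for illustration, not the authors' code).

```python
# Minimal sketch: Dice coefficient between two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two partially overlapping 5x5 masks.
a = np.zeros((5, 5), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 5), dtype=bool); b[2:5, 2:5] = True
print(round(dice(a, b), 3))
```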


2019 ◽  
Vol 57 (3) ◽  
pp. 321-326
Author(s):  
Emily Karp ◽  
Andrew Scott ◽  
Katherine Martin ◽  
Hanan Zavala ◽  
Siva Chinnadurai ◽  
...  

Objective: To develop a protocol to measure children’s perception of secondary cleft lip deformity (SCLD) using objective eye-tracking technology. Design: Cross-sectional study; data were collected in May and June 2018. Setting: Single tertiary care pediatric hospital with a well-established cleft team. Participants: Participants were recruited from a general pediatric otolaryngology clinic. Sixty participants from 4 age groups (5-6, 10, 13, and 16 years) were enrolled on a voluntary basis. Intervention: Pediatric participants viewed images of children’s faces while wearing eye-tracking glasses. Ten images with unilateral SCLD and 2 control images with no facial scarring were viewed while gaze was assessed. Main Outcome and Measure: Successful gaze fixation was recorded across all age groups. Results: This article illustrates the types of data generated from glasses-based eye tracking in children. All children, regardless of age, spent more time with their gaze on SCLD images (mean = 4.23 seconds; standard deviation [SD] = 1.41 seconds) than on control images (mean = 3.97 seconds; SD = 1.42 seconds). Younger age groups spent less time looking at specific areas of interest in SCLD images. Conclusion: In this pilot study, we successfully used eye-tracking technology in children to demonstrate gaze preference and a trend toward visual perception of SCLD changing with age. This protocol will allow for future studies with larger and more diverse populations. A better understanding of how SCLD is perceived by children and adolescents has the potential to guide future interventions for SCLD and other facial deformities in pediatric patients.
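A minimal sketch of the descriptive comparison the abstract reports (mean and SD of dwell time on SCLD versus control images); the per-trial record format and the numbers below are illustrative assumptions, not study data.

```python
# Minimal sketch (not the study's code): summarising gaze dwell time by image type.
import statistics as stats

trials = [
    # (participant_id, image_type, dwell_seconds) -- hypothetical values
    (1, "scld", 4.1), (1, "control", 3.8),
    (2, "scld", 4.6), (2, "control", 4.0),
    (3, "scld", 4.0), (3, "control", 4.1),
]

def summarise(image_type):
    dwells = [d for _, t, d in trials if t == image_type]
    return stats.mean(dwells), stats.stdev(dwells)

for image_type in ("scld", "control"):
    mean, sd = summarise(image_type)
    print(f"{image_type}: mean = {mean:.2f} s, SD = {sd:.2f} s")
```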


2020 ◽  
Vol 57 (12) ◽  
pp. 1392-1401
Author(s):  
Mark P. Pressler ◽  
Emily L. Geisler ◽  
Rami R. Hallac ◽  
James R. Seaward ◽  
Alex A. Kane

Introduction and Objectives: Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when deformity is perceived. Materials and Methods: Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants’ gaze patterns were analyzed, and participants were asked whether each image looked “normal” or “abnormal.” Results: Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree that an image looked “abnormal” until 90% deformity, from any viewing angle. Conclusion: Eye tracking can be used as a proxy for the attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity. Participants did not generally agree there was “abnormality” until deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
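The morphing step described above (blending a normal and a trigonocephalic shape in 10% increments) can be sketched as a linear interpolation of corresponding 3D vertices; the simple vertex-array representation below is an assumption, not the authors' photogrammetric pipeline.

```python
# Minimal sketch: generating a severity spectrum by blending corresponding
# vertices of a "normal" and a "deformed" 3D shape in 10% increments.
import numpy as np

def blend_meshes(normal_verts: np.ndarray, deformed_verts: np.ndarray, step=0.1):
    """Yield (severity, vertices) for severities 0%, 10%, ..., 100%."""
    for k in range(11):
        t = k * step
        yield t, (1.0 - t) * normal_verts + t * deformed_verts

# Toy example with 3 corresponding vertices in 3D.
normal = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deformed = normal + np.array([[0.2, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.3]])
for severity, verts in blend_meshes(normal, deformed):
    print(f"{severity:.0%}", verts[0])
```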


Author(s):  
Liqin Wu ◽  
Cuihua Xi

Switch cost and cost site have been controversial issues in code-switching studies. This research conducted an eye-tracking experiment on eight bilingual subjects to measure switch cost and cost site in comprehending intra-sentential code-switching (Chinese–English) and unilingual (pure Chinese) stimuli. The English words and their Chinese translations or equivalents were taken as the key words in either a unilingual or an intra-sentential code-switching paragraph. These key words were defined as areas of interest (AOIs) of the same height and spanned three word-frequency levels. After the experiment, the subjects completed a comprehension test to confirm that they had genuinely understood the English words. Their performances in the two reading contexts were compared with a paired-samples t-test, and their eye movement data were analysed with a 2 × 3 repeated-measures ANOVA. The results revealed that: 1) the subjects’ scores in the intra-sentential code-switching contexts were higher than those in the unilingual ones, i.e., reading efficiency increased in the intra-sentential code-switching contexts; 2) word frequency had little effect on word recognition speed in the intra-sentential code-switching contexts, i.e., the least frequently used words did not necessarily take the subjects more time, or vice versa; 3) even when a switch cost occurred (on rare occasions), it was not necessarily located at the switching site, and low-frequency words in alternating languages did impair performance even when the switch occurred at a sentence boundary.
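A minimal sketch of the paired-samples t-test used to compare comprehension scores across the two reading conditions; the scores below are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch: paired-samples t-test on per-subject comprehension scores.
from scipy import stats

# Hypothetical scores for the same eight subjects in the two conditions.
codeswitch_scores = [8, 9, 7, 9, 8, 9, 7, 8]
unilingual_scores = [7, 8, 7, 8, 7, 8, 6, 7]

t_stat, p_value = stats.ttest_rel(codeswitch_scores, unilingual_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```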


2021 ◽  
Vol 12 ◽  
Author(s):  
Molly Winston ◽  
Kritika Nayar ◽  
Emily Landau ◽  
Nell Maltman ◽  
John Sideris ◽  
...  

Atypical visual attention patterns have been observed among carriers of the fragile X mental retardation gene (FMR1) premutation (PM), with some similarities to the visual attention patterns observed in autism spectrum disorder (ASD) and among clinically unaffected relatives of individuals with ASD. Patterns of visual attention could constitute biomarkers that help to inform the neurocognitive profile of the PM and that potentially span diagnostic boundaries. This study examined patterns of eye movement across an array of fixation measurements from three distinct eye-tracking tasks in order to investigate potentially overlapping profiles of visual attention among PM carriers, ASD parents, and parent controls. Logistic regression analyses were conducted to examine whether variables constituting a PM-specific looking profile were able to effectively predict group membership. Participants included 65 female PM carriers, 188 ASD parents, and 84 parent controls. Analyses of fixations across the eye-tracking tasks, and their corresponding areas of interest, revealed a distinct visual attention pattern in carriers of the FMR1 PM, characterized by increased fixations on the mouth when viewing faces, more intense focus on bodies in socially complex scenes, and decreased fixations on salient characters and faces while narrating a wordless picture book. This set of variables successfully differentiated individuals with the PM from controls (Sensitivity = 0.76, Specificity = 0.85, Accuracy = 0.77) as well as from ASD parents (Sensitivity = 0.70, Specificity = 0.80, Accuracy = 0.72), but did not show a strong distinction between ASD parents and controls (Accuracy = 0.62), indicating that this set of variables comprises a profile unique to PM carriers. Regarding predictive power, fixations toward the mouth when viewing faces differentiated PM carriers from both ASD parents and controls, whereas fixations toward other social stimuli did not differentiate PM carriers from ASD parents, highlighting some overlap in visual attention patterns that could point toward shared neurobiological mechanisms. The results demonstrate a profile of visual attention that appears strongly associated with the FMR1 PM in women and may constitute a meaningful biomarker.
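A minimal sketch of a logistic-regression group classifier evaluated with sensitivity, specificity, and accuracy, as reported above; the features and data below are synthetic stand-ins, not the study's variables or results.

```python
# Minimal sketch (not the study's code): logistic regression on fixation-derived
# features, scored with sensitivity, specificity, and accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Synthetic features: e.g. [mouth fixation share, body fixation share, face fixation share]
X = rng.normal(size=(60, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=60) > 0).astype(int)

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:", (tp + tn) / len(y))
```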


Author(s):  
Diego Jesus Serrano-Carrasco ◽  
Antonio Jesus Diaz-Honrubia ◽  
Pedro Cuenca

Abstract: With the advent of smartphones and tablets, video traffic on the Internet has increased enormously. With this in mind, in 2013 the High Efficiency Video Coding (HEVC) standard was released with the aim of reducing the bit rate (at the same quality) by 50% with respect to its predecessor. However, new content with greater resolutions and requirements appears every day, making it necessary to further reduce the bit rate. Perceptual video coding has recently been recognized as a promising approach to achieving high-performance video compression, and eye-tracking data can be used to create and verify these models. In this paper, we present a new algorithm for the bit rate reduction of screen-recorded sequences based on the visual perception of videos. An eye-tracking system is used during the recording to locate the fixation point of the viewer. Then, the area around that point is encoded with the base quantization parameter (QP) value, which increases when moving away from it. The results show that up to 31.3% of the bit rate may be saved when compared with the original HEVC-encoded sequence, without a significant impact on the perceived quality.
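A minimal sketch of the general idea of distance-dependent QP assignment (an assumption about the approach, not the paper's exact algorithm): blocks near the fixation point keep the base QP, and the QP offset grows with distance from the gaze.

```python
# Minimal sketch: map a coding block's distance from the fixation point to a QP.
import math

def block_qp(block_center, fixation, base_qp=27, max_offset=10, radius=200.0):
    """Return base_qp near the gaze, rising to base_qp + max_offset far from it."""
    dx = block_center[0] - fixation[0]
    dy = block_center[1] - fixation[1]
    dist = math.hypot(dx, dy)
    offset = min(max_offset, int(max_offset * dist / radius))
    return base_qp + offset

print(block_qp((100, 100), (110, 90)))   # near the fixation point -> base QP
print(block_qp((900, 500), (110, 90)))   # far from it -> higher QP, coarser coding
```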

