Scanning behavior: Recently Published Documents

Total documents: 125 (24 in the last five years)
H-index: 23 (3 in the last five years)

2021, Vol. 12
Author(s): Fabrice Damon, Nawel Mezrai, Logan Magnier, Arnaud Leleu, Karine Durand, et al.

A recent body of research has emerged on the interactions between olfaction and other sensory channels in processing social information. The current review examines the influence of body odors on face perception, a core component of human social cognition. First, we review studies reporting how body odors interact with the perception of invariant facial information (i.e., identity, sex, attractiveness, trustworthiness, and dominance). Although we mainly focus on the influence of axillary body odor, we also review findings about specific steroids present in axillary sweat (i.e., androstenone, androstenol, androstadienone, and estratetraenol). We next survey the literature on how body odors influence the perception of transient face properties, notably discussing the role of body odors in facilitating or hindering the perception of emotional facial expressions, in relation to competing frameworks of emotion. Finally, we discuss the developmental origins of these olfaction-to-vision influences, as an emerging literature indicates that odor cues strongly influence face perception in infants. Body odors with high social relevance, such as the odor emanating from the mother, have a widespread influence on various aspects of face perception in infancy, including the categorization of faces among other objects, face scanning behavior, and facial expression perception. We conclude by suggesting that the weight of olfaction may be especially strong in infancy, shaping social perception, especially in slow-maturing senses such as vision, and that this early tutoring function of olfaction extends across all developmental stages, conveying key information that helps disambiguate a complex social environment for social interactions until adulthood.


PLoS ONE, 2021, Vol. 16 (9), e0257054
Author(s): Marie J. Zahn, Kristin L. Laidre, Peter Stilz, Marianne H. Rasmussen, Jens C. Koblitz

Echolocation signals of wild beluga whales (Delphinapterus leucas) were recorded in 2013 using a vertical, linear 16-hydrophone array at two locations in the pack ice of Baffin Bay, West Greenland. Individual whales were localized for 4 minutes 42 seconds of the 1 hour 4 minutes of recordings. Clicks centered on the recording equipment (i.e., on-axis clicks) were isolated to calculate sonar parameters. We report the first sonar beam estimate from in situ recordings of wild belugas, with an average -3 dB asymmetrical vertical beam width of 5.4° and a wider ventral beam. This narrow beam width is consistent with estimates from captive belugas; however, our results indicate that beluga sonar beams may not be symmetrical and may differ between wild and captive contexts. The mean apparent source level for on-axis clicks was 212 dB pp re 1 μPa, and whales were shown to vertically scan the array from a distance of 120 meters. Our findings support the hypothesis that highly directional sonar beams and high source levels are an evolutionary adaptation for Arctic odontocetes to reduce unwanted surface echoes from sea ice (i.e., acoustic clutter) and effectively navigate through leads in the pack ice (e.g., to find breathing holes). These results provide the first baseline beluga sonar metrics from free-ranging animals using a hydrophone array and are important for acoustic programs throughout the Arctic, particularly for acoustic classification between belugas and narwhals (Monodon monoceros).
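
The -3 dB beamwidth reported above is the angular span over which a click's level stays within 3 dB of its on-axis peak. The sketch below is a minimal illustration of that calculation, assuming received levels have already been estimated at known vertical angles across the array; the angles and levels here are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical beam pattern: vertical angle relative to the acoustic axis (deg)
# and apparent source level (dB pp re 1 uPa) estimated at each hydrophone.
angles = np.array([-15.0, -10.0, -5.0, -2.0, 0.0, 2.0, 5.0, 10.0, 15.0])
levels = np.array([188.0, 196.0, 205.0, 210.0, 212.0, 209.5, 206.0, 198.0, 190.0])

def beamwidth_3db(angles, levels, n=10001):
    """Angular span (deg) where the interpolated pattern stays within 3 dB of its peak."""
    grid = np.linspace(angles.min(), angles.max(), n)
    pattern = np.interp(grid, angles, levels)
    above = grid[pattern >= pattern.max() - 3.0]
    return above.min(), above.max(), above.max() - above.min()

lo, hi, width = beamwidth_3db(angles, levels)
print(f"-3 dB beamwidth: {width:.1f} deg ({lo:.1f} to {hi:.1f} deg)")
# Unequal |lo| and |hi| relative to the peak direction indicate an asymmetrical beam.
```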


Author(s): Steven W. Savage, Lily Zhang, Garrett Swan, Alex R. Bowers

Objective: We conducted a driving simulator study to investigate scanning and hazard detection before entering an intersection. Background: Insufficient scanning has been suggested as a factor contributing to intersection crashes. However, little is known about the relative importance of the head and eye movement components of that scanning in peripheral hazard detection. Methods: Eleven older (mean age 67 years) and 18 younger (mean age 27 years) current drivers drove in a simulator while their head and eye movements were tracked. They completed two city drives (42 intersections per drive), with motorcycle hazards appearing at 16 four-way intersections per drive. Results: Older subjects missed more hazards (10.2% vs. 5.2%). Failing to make a scan with a substantial head movement was the primary reason for missed hazards. When hazards were detected, older drivers had longer reaction times (2.6 s vs. 2.3 s) but drove more slowly; thus, safe response rates did not differ between the two groups (older 83%; younger 82%). Safe responses were associated with larger (28.8° vs. 20.6°) and more numerous (9.4 vs. 6.6) gaze scans. Scans containing a head movement were stronger predictors of safe responses than scans containing only eye movements. Conclusion: Our results highlight the importance of making large scans with a substantial head movement before entering an intersection; eye-only scans played little role in detecting and responding safely to peripheral hazards. Application: Driver training programs should address the importance of making large scans with a substantial head movement before entering an intersection.
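
The reported link between scan characteristics and safe responses is the kind of relationship typically examined with a logistic model. The sketch below is a hypothetical illustration of such an analysis, not the authors' code: the data are simulated, and the predictor names (scan magnitude, presence of a head movement, age group) are assumptions based on the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated per-hazard data (hypothetical): size of the largest gaze scan (deg),
# whether that scan included a head movement, and the driver's age group.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "scan_deg": rng.normal(25.0, 8.0, n),
    "head_move": rng.integers(0, 2, n),
    "age_group": rng.choice(["older", "younger"], n),
})
# Outcome loosely mimicking the reported pattern: larger scans and scans with
# head movements make a safe response more likely.
lin = -2.5 + 0.07 * df["scan_deg"] + 1.0 * df["head_move"]
df["safe"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

model = smf.logit("safe ~ scan_deg + head_move + C(age_group)", data=df).fit(disp=False)
print(model.summary())
```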


PLoS ONE, 2021, Vol. 16 (8), e0244118
Author(s): Karl Marius Aksum, Lars Brotangen, Christian Thue Bjørndal, Lukas Magnaguagno, Geir Jordet

Visual perception in football (“soccer” in the U.S.) is increasingly becoming a key area of interest for researchers and practitioners. This exploratory case study investigated a subset of visual perception, namely visual exploratory scanning. The aim of this study was to examine the scanning of four elite football midfield players in an 11 vs. 11 real-game environment using mobile eye-tracking technology. More specifically, we measured the duration and information content (number of teammates and opponents) of the players’ scanning behavior. The results showed that the players’ scanning duration was influenced by the ball context and the action undertaken with the ball at the moment of scan initiation. Furthermore, fixations were found in only 2.3% of the scans. Additionally, the results revealed that the stop point is the most information-rich part of a scan and that the players had more opponents than teammates inside their video frame during scans. Practical applications and further research recommendations are presented.
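
As a rough illustration of the per-scan summaries described above, the sketch below aggregates hypothetical scan events (start/stop times and the number of teammates and opponents visible in the video frame at the stop point). The event structure and column names are assumptions for illustration, not the study's coding scheme.

```python
import pandas as pd

# Hypothetical eye-tracking annotations: one row per visual exploratory scan.
scans = pd.DataFrame({
    "start_s": [12.4, 15.1, 18.9, 22.0],
    "stop_s": [12.8, 15.5, 19.5, 22.3],
    "teammates_in_frame": [1, 0, 2, 1],
    "opponents_in_frame": [2, 1, 3, 2],
    "ball_context": ["teammate_possession"] * 2 + ["opponent_possession"] * 2,
})
scans["duration_s"] = scans["stop_s"] - scans["start_s"]

# Mean scan duration by ball context, and players visible at the stop point.
print(scans.groupby("ball_context")["duration_s"].mean())
print("Mean opponents vs. teammates in frame:",
      scans["opponents_in_frame"].mean(), scans["teammates_in_frame"].mean())
```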


2021, Vol. 13 (2)
Author(s): Gregor Hardiess, Caecilie Weissert

In our exploratory study, we ask how naive observers without a distinct religious background approach biblical art that combines image and text. For this purpose, we chose the book ‘New biblical figures of the Old and New Testament’, published in 1569, as the source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the reception of such biblical art, we selected four relevant images from the book and measured the eye movements of participants in order to characterize and quantify their scanning behavior in terms of (i) looking at text (text usage), (ii) text vs. image interaction measures (semantic or contextual relevance of text), and (iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, the semantics of the texts are later used to guide eye movements through the image, supporting the formation of the narrative.
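
Text-vs-image measures of this kind are typically computed from fixations assigned to areas of interest (AOIs). The sketch below, assuming a fixation list already labeled by AOI, computes total dwell time per AOI and the time to first fixation on text; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical fixation record for one participant and one engraving:
# fixation onset (ms), duration (ms), and the area of interest it landed in.
fix = pd.DataFrame({
    "onset_ms": [0, 230, 520, 900, 1400, 2100],
    "duration_ms": [210, 270, 350, 450, 600, 300],
    "aoi": ["text", "text", "image", "image", "text", "image"],
})

dwell = fix.groupby("aoi")["duration_ms"].sum()              # total dwell per AOI
ttff_text = fix.loc[fix["aoi"] == "text", "onset_ms"].min()  # time to first fixation on text

print(dwell)
print(f"Time to first fixation on text: {ttff_text} ms")
```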


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Jiahe Song, Kang-Bok Lee, Zhongyun Zhou, Lin Jia, Casey Cegielski, et al.

Purpose: The purpose of this study is to investigate the relationship between social media and sensing capability for supply chain management (SCM) from an environmental scanning perspective. The authors consider upstream supply and downstream customer markets as two aspects of social media-enabled environmental scanning (SMES). The moderating effects of three uncertainties are explored. Design/methodology/approach: The data were collected from 178 supply chain professionals through a survey. Generalized estimating equations (GEE) were used to analyze the data. Findings: SMES in both supply and customer markets enhances sensing capability. Interestingly, the results reveal that incremental SMES effort in the supply market has an accelerating effect on sensing, whereas incremental effort in the customer market has a decelerating effect. Also, uncertainties, especially demand- and technology-related uncertainty, exert a series of interacting effects that depend on SMES levels. Research limitations/implications: This research contributes to the literature on operations and supply chains regarding social media strategies and dynamic capabilities. It opens the black box of environmental scanning behavior on social media and adds new knowledge on the dynamic influence of such behavior on organizational sensing capability for SCM. In addition, it strengthens the understanding of supply chain uncertainty as a moderator. Originality/value: This research is the first to empirically uncover the effect of social media on sensing capability for SCM through the lens of environmental scanning. The results support the use of social networking for improving supply and demand sensing.
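
The abstract names generalized estimating equations as the analysis method. Below is a minimal sketch of a GEE fit with statsmodels; the variable names (SMES supply/customer, demand uncertainty, sensing capability), the grouping variable, the quadratic and interaction terms, and the simulated data are all assumptions for illustration, not the authors' dataset or exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated survey-style data (hypothetical): scanning of supply and customer
# markets on social media, demand uncertainty, and perceived sensing capability.
rng = np.random.default_rng(1)
n = 178
df = pd.DataFrame({
    "smes_supply": rng.uniform(1, 7, n),
    "smes_customer": rng.uniform(1, 7, n),
    "demand_uncertainty": rng.uniform(1, 7, n),
    "industry": rng.integers(0, 10, n),  # cluster/grouping variable for the GEE
})
df["sensing"] = (0.4 * df["smes_supply"] + 0.3 * df["smes_customer"]
                 + 0.1 * df["smes_customer"] * df["demand_uncertainty"]
                 + rng.normal(0, 1, n))

# GEE with an exchangeable working correlation within industry clusters; the
# quadratic terms allow accelerating/decelerating effects of SMES on sensing.
model = smf.gee(
    "sensing ~ smes_supply + I(smes_supply**2) + smes_customer + I(smes_customer**2)"
    " + demand_uncertainty + smes_customer:demand_uncertainty",
    groups="industry",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())
```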


PLoS ONE, 2020, Vol. 15 (12), e0240201
Author(s): Laura Mikula, Sergio Mejía-Romero, Romain Chaumillon, Amigale Patoine, Eduardo Lugo, et al.

Driving is an everyday task involving a complex interaction between visual and cognitive processes. As such, an increase in the cognitive and/or visual demands can lead to a mental overload which can be detrimental for driving safety. Accumulating evidence suggests that eye and head movements are relevant indicators of visuo-cognitive demands and attention allocation. This study aims to investigate the effects of visual degradation on eye-head coordination as well as visual scanning behavior during a highly demanding task in a driving simulator. A total of 21 emmetropic participants (21 to 34 years old) performed dual-task driving in which they were asked to maintain a constant speed on a highway while completing a visual search and detection task on a navigation device. Participants did the experiment with optimal vision and with contact lenses that introduced a visual perturbation (myopic defocus). The results indicate modifications of eye-head coordination and the dynamics of visual scanning in response to the induced visual perturbation. More specifically, the head was more involved in horizontal gaze shifts when the visual needs were not met. Furthermore, the evaluation of visual scanning dynamics, based on time-based entropy, which measures the complexity and randomness of scanpaths, revealed that eye and gaze movements became less exploratory and more stereotyped when vision was not optimal. These results provide evidence for a reorganization of both eye and head movements in response to increasing visuo-cognitive demands during a driving task. Altogether, these findings suggest that eye and head movements can provide relevant information about visuo-cognitive demands associated with complex tasks. Ultimately, eye-head coordination and visual scanning dynamics may be good candidates for estimating drivers’ workload and better characterizing risky driving behavior.
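
Scanpath entropy quantifies how spread out and unpredictable gaze behavior is; lower values indicate more stereotyped scanning. One common formulation is Shannon entropy over fixated areas of interest and over the transitions between them. The sketch below, using a hypothetical AOI sequence, computes those two quantities and is not necessarily the exact time-based measure used in the study.

```python
import numpy as np
from collections import Counter

# Hypothetical sequence of fixated areas of interest during a drive
# (e.g., road ahead, navigation device, mirror, speedometer).
seq = ["road", "nav", "road", "mirror", "road", "nav", "road", "speed", "road", "nav"]

def shannon_entropy(counts):
    """Shannon entropy (bits) of a Counter of observations."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

# Stationary entropy: distribution of fixations over AOIs.
stationary = shannon_entropy(Counter(seq))

# Entropy of the AOI-to-AOI transition distribution.
transition = shannon_entropy(Counter(zip(seq[:-1], seq[1:])))

print(f"Stationary gaze entropy: {stationary:.2f} bits")
print(f"Gaze transition entropy: {transition:.2f} bits")
```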


Author(s): Alexandre Marois, Daniel Lafond, Alexandre Williot, François Vachon, Sébastien Tremblay

Security surveillance entails many cognitive challenges (e.g., task interruption, vigilance decrements, cognitive overload). To help surveillance operators overcome these difficulties and perform more efficient visual search, gaze-based intelligent systems can be developed. The present study aimed to test the impact of the Scantracker system, which pinpoints neglected cameras while detecting and correcting attentional tunneling and vigilance decrements, on human scanning behavior and surveillance performance. Participants took part in a surveillance simulation, monitoring cameras and searching for ongoing incidents, and half of them were supported by the Scantracker. Although behavioral surveillance performance was not improved, participants supported by the Scantracker showed more efficient gaze-based measures of surveillance. Moreover, some of these measures were associated with performance, suggesting that scan pattern improvements might lead indirectly to more efficient incident detection. Overall, these results speak to the potential of using gaze-aware intelligent systems to support surveillance operators.
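
Flagging neglected cameras ultimately reduces to tracking how long each feed has gone without being fixated. The sketch below illustrates that bookkeeping with a hypothetical gaze-on-camera log and an assumed neglect threshold; the Scantracker's actual detection logic (including attentional tunneling and vigilance monitoring) is not described in the abstract and is not reproduced here.

```python
# Hypothetical gaze log: (timestamp in seconds, camera the operator is looking at).
gaze_log = [(0.0, "cam1"), (2.5, "cam2"), (4.0, "cam1"), (9.0, "cam3"), (12.0, "cam1")]
cameras = ["cam1", "cam2", "cam3", "cam4"]
now = 15.0
neglect_threshold_s = 10.0  # assumed cutoff for flagging a camera as neglected

# Record the most recent fixation on each camera.
last_seen = {cam: None for cam in cameras}
for t, cam in gaze_log:
    last_seen[cam] = t

for cam in cameras:
    neglect = now - last_seen[cam] if last_seen[cam] is not None else float("inf")
    flag = "NEGLECTED" if neglect >= neglect_threshold_s else "ok"
    print(f"{cam}: {neglect:.1f} s since last fixation -> {flag}")
```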


