Deep Convolutional Symmetric Encoder–Decoder Neural Networks to Predict Students’ Visual Attention

Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2246
Author(s):  
Tomasz Hachaj ◽  
Anna Stolińska ◽  
Magdalena Andrzejewska ◽  
Piotr Czerski

Prediction of visual attention is a new and challenging subject, and to the best of our knowledge, little research has been devoted to anticipating students’ cognition while they solve tests. The aim of this paper is to propose, implement, and evaluate a machine learning method capable of predicting saliency maps of students who participate in a learning task in the form of quizzes, based on quiz questionnaire images. Our proposal utilizes several deep encoder–decoder symmetric schemas trained on a large set of saliency maps generated with eye tracking technology. Eye tracking data were acquired from students who solved various tasks in the sciences and natural sciences (computer science, mathematics, physics, and biology). The proposed deep convolutional encoder–decoder network is capable of producing accurate predictions of students’ visual attention when solving quizzes. Our evaluation showed that predictions are moderately positively correlated with actual data, with a coefficient of 0.547 ± 0.109, and that the network achieved better results in terms of correlation with real saliency maps than state-of-the-art methods. Visual analyses of the saliency maps obtained also correspond with our experience and expectations in this field. Both the source code and data from our research can be downloaded in order to reproduce our results.
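The headline evaluation number above (0.547 ± 0.109) is a mean ± standard deviation of per-image correlations between predicted and ground-truth saliency maps. A minimal sketch of that metric, assuming saliency maps are 2D NumPy arrays (the function names and data layout here are illustrative assumptions, not the authors' released code):

```python
import numpy as np

def saliency_correlation(pred: np.ndarray, actual: np.ndarray) -> float:
    """Pearson correlation coefficient (the CC saliency metric) between
    a predicted and an actual saliency map."""
    p = pred.ravel().astype(float)
    a = actual.ravel().astype(float)
    # Normalize each map to zero mean and unit (population) variance;
    # the mean of the product of z-scores is then the Pearson r.
    p = (p - p.mean()) / p.std()
    a = (a - a.mean()) / a.std()
    return float(np.mean(p * a))

def evaluate(preds, actuals):
    """Mean +/- std of per-image correlations, the form in which the
    paper reports its result (e.g. 0.547 +/- 0.109)."""
    ccs = [saliency_correlation(p, a) for p, a in zip(preds, actuals)]
    return float(np.mean(ccs)), float(np.std(ccs))
```

Because the maps are z-scored first, the metric is invariant to the overall scale and offset of either map, which is why it is a common choice for comparing saliency predictions against fixation-derived heat maps.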

2015 ◽  
Vol 43 (6) ◽  
pp. 561-574 ◽  
Author(s):  
Patricia Huddleston ◽  
Bridget K. Behe ◽  
Stella Minahan ◽  
R. Thomas Fernandez

Purpose – The purpose of this paper is to elucidate the role that visual measures of attention to product, information and price display signage have on purchase intention. The authors assessed the effect of visual attention to the product, information or price sign on purchase intention, as measured by likelihood to buy. Design/methodology/approach – The authors used eye-tracking technology to collect data from Australian and US garden centre customers, who viewed eight plant displays in which the signs had been altered to show either price or supplemental information (16 images total). The authors compared the role of visual attention to the price and information signs, and the role of visual attention to the product when either sign was present, on likelihood to buy. Findings – Overall, providing product information on a sign without price elicited higher likelihood to buy than providing a sign with price. The authors found a positive relationship between visual attention to price on the display sign and likelihood to buy, but an inverse relationship between visual attention to information and likelihood to buy. Research limitations/implications – The study was conducted on a minimally packaged product, live plants, which may reduce the ability to generalize findings to other product types. Practical implications – An understanding of the attention-capturing power of merchandise display elements, especially signs, has practical significance. The findings will assist retailers in creating more effective and efficient display signage content, for example, featuring the product information more prominently than the price. 
Originality/value – The study is one of the first to use eye-tracking in a macro-level, holistic investigation of the attention-capturing value of display signage information and its relationship to likelihood to buy. Researchers, for the first time, now have the ability to empirically test the degree to which attention and decision-making are linked.


2021 ◽  
Vol 12 ◽  
Author(s):  
Kendra Gimhani Kandana Arachchige ◽  
Wivine Blekic ◽  
Isabelle Simoes Loureiro ◽  
Laurent Lefebvre

Numerous studies have explored the benefit of iconic gestures in speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating the gesture is required for information extraction. Four types of gestures (i.e., semantically and syntactically incongruent iconic gestures, meaningless configurations, and congruent iconic gestures) were presented in a sentence context in three different listening conditions (i.e., clear, partly degraded, or fully degraded speech). Using eye tracking technology, participants’ gaze was recorded while they watched video clips, after which they were invited to answer simple comprehension questions. Results first showed that different types of gestures attract attention differently and that the more speech was degraded, the less attention participants paid to gestures. Furthermore, semantically incongruent gestures appeared to particularly impair comprehension even though they were not fixated, while congruent gestures appeared to improve comprehension despite also not being fixated. These results suggest that covert attention is sufficient to convey information that will be processed by the listener.


2018 ◽  
Vol 10 (9) ◽  
pp. 3038 ◽  
Author(s):  
Tsai Wang ◽  
Chia Tsai ◽  
Ta Tang

The beautiful, natural environment in a tourist hotel’s marketing images can evoke relaxing and soothing emotions. However, can tourist hotels use nature as a servicescape to make their performing arts services more attractive? Based on attention restoration and servicescape theory, this study explores and compares the influence of tourist hotels’ performing arts images with nature- or built-based servicescapes on advertising effectiveness (i.e., customer visual attention and behavioral intention). To analyze visual attention on the marketing images, this study uses eye-tracking technology to record customer visual trajectories. The experiment used a total of 113 participants. The sample size of the nature-based servicescape group was 59 (mean age = 39.04), and that of the built-based servicescape group was 54 (mean age = 40.17). A tourist hotel’s (Volando Urai Spring Spa & Resort) marketing images were chosen as stimuli. All participants were randomly assigned to the nature-based or the built-based servicescape group. In each experimental group, all the images were presented in random order to reduce any order effects. By using eye-tracking analysis, the experimental findings were as follows: (1) A nature-based servicescape can arouse more visual attention from customers than a built-based servicescape can; (2) Marketing images with performing arts activities in nature-based servicescapes attract the visual attention of customers; (3) Nature-based servicescapes stimulate higher behavioral intentions in consumers than built-based servicescapes do.


2018 ◽  
Vol 61 (5) ◽  
pp. 1157-1170 ◽  
Author(s):  
Jiali Liang ◽  
Krista Wilkinson

Purpose A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Method Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. Results The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. Conclusion The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. Supplemental Material https://doi.org/10.23641/asha.6066545
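The two outcome measures named above, latency to fixate and time spent fixating, fall out directly from a 60 Hz point-of-gaze stream once an area of interest (AOI) is defined. A minimal illustration, assuming gaze samples as (x, y) tuples and a rectangular AOI (these names and the rectangle representation are assumptions for the sketch, not the study's actual analysis code):

```python
from typing import List, Optional, Tuple

SAMPLE_HZ = 60  # sampling rate reported in the study
Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]  # x0, y0, x1, y1

def in_aoi(pt: Point, aoi: Rect) -> bool:
    """True if a gaze sample falls inside the rectangular AOI."""
    x, y = pt
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def latency_to_first_view(gaze: List[Point], aoi: Rect) -> Optional[float]:
    """Seconds until gaze first lands on the AOI (None if it never does)."""
    for i, pt in enumerate(gaze):
        if in_aoi(pt, aoi):
            return i / SAMPLE_HZ
    return None

def time_spent_fixating(gaze: List[Point], aoi: Rect) -> float:
    """Total seconds of gaze samples falling inside the AOI."""
    return sum(in_aoi(pt, aoi) for pt in gaze) / SAMPLE_HZ
</antml>```

At 60 Hz each sample represents 1/60 s, so sample counts convert to durations by dividing by the sampling rate; real pipelines additionally filter the raw stream into fixations before computing these measures.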


2021 ◽  
Vol 26 (3) ◽  
pp. 718-726
Author(s):  
Jin Hui Lee ◽  
Ji Young Na ◽  
Su Hyang Lee ◽  
Bong Won Yi

Objectives: This study aims to investigate patterns of visual attention on a target object in VSDs (Visual Scene Displays) when they are designed with or without an action showing the object in use. We used eye-tracking technology to evaluate how the depicted use of an object in still photographs influenced the visual attention of adults without disabilities, examining visual attention to the contents of the VSDs. Methods: 25 college students participated in the study. Eye-tracking technology recorded point-of-gaze while participants viewed 20 photographs. Data from eye-tracking provided information on where participants visually fixated and paid more attention on the presented VSDs, including a target object. Results: Both total fixation duration and average fixation count differed significantly between conditions. Participants visually fixated on the target object longer and more often when the object was being used in the presented VSDs. For time to first fixation on the AOI (Area Of Interest), only a subset of participants yielded comparable data because gaze patterns differed between subjects; within this subset, the average time to first fixation was faster when the object was in use for 6 out of 10 objects. Conclusion: This study supports the inclusion of an action of object usage in VSDs, suggesting that the act of object usage can partially influence the visual attention pattern of a user.


2021 ◽  
Vol 13 (4) ◽  
pp. 85-99
Author(s):  
Kristýna Mudrychová ◽  
Martina Houšková Beránková ◽  
Tereza Horáková ◽  
Milan Houška ◽  
Jitka Mudrychová

This study focused on agricultural waste disposal (AWD) textual materials. Two kinds of educational texts are compared: texts designed traditionally, with no purposeful knowledge structure, and knowledge-structured texts that include the textual form of knowledge units. Eye-tracking technology is employed to retrieve the values of critical indicators specifying the way the texts are read. We analysed users' visual attention and looking behaviour during the reading process. Thirty-three students worked with 45 pieces of educational text accompanied by a didactic test. Statistical analyses show no statistically significant differences in any indicator of studying the texts, nor in the users' success rate in the didactic test. Users can thus work with the knowledge-structured texts as effectively as with traditionally designed texts. The positive implication for AWD is that users can process knowledge-structured texts with comparable results.


2015 ◽  
Author(s):  
Miroslawa Sajka ◽  
Roman Rosiek
Eye-tracking technology was used to analyze the participants’ visual attention while solving a multiple-choice science problem. The research encompassed 103 people of varying levels of knowledge, from pupils to scientists. In general, respondents devoted more time to analyzing the answer fields they chose. However, the trend is reversed for people with high scientific expertise, a critical attitude, and extreme motivation to solve the problem. The trend also depends on the problem-solving strategy and on conviction about the correctness of the answer. Key words: eye-tracking, mathematics and physics education, problem solving, new technology in didactics of science.


Author(s):  
Trixie A Katz ◽  
Danielle D Weinberg ◽  
Claire E Fishman ◽  
Vinay Nadkarni ◽  
Patrice Tremoulet ◽  
...  

Objective: A respiratory function monitor (RFM) may improve positive pressure ventilation (PPV) technique, but many providers do not use RFM data appropriately during delivery room resuscitation. We sought to use eye-tracking technology to identify RFM parameters that neonatal providers view most commonly during simulated PPV. Design: Mixed methods study. Neonatal providers performed RFM-guided PPV on a neonatal manikin while wearing eye-tracking glasses to quantify visual attention on displayed RFM parameters (ie, exhaled tidal volume, flow, leak). Participants subsequently provided qualitative feedback on the eye-tracking glasses. Setting: Level 3 academic neonatal intensive care unit. Participants: Twenty neonatal resuscitation providers. Main outcome measures: Visual attention: overall gaze sample percentage; total gaze duration, visit count and average visit duration for each displayed RFM parameter. Qualitative feedback: willingness to wear eye-tracking glasses during clinical resuscitation. Results: Twenty providers participated in this study. The mean gaze sample captured was 93% (SD 4%). Exhaled tidal volume waveform was the RFM parameter with the highest total gaze duration (median 23%, IQR 13–51%), highest visit count (median 5.17 per 10 s, IQR 2.82–6.16) and longest visit duration (median 0.48 s, IQR 0.38–0.81 s). All participants were willing to wear the glasses during clinical resuscitation. Conclusion: Wearable eye-tracking technology is feasible to identify gaze fixation on the RFM display and is well accepted by providers. Neonatal providers look at exhaled tidal volume more than any other RFM parameter. Future applications of eye-tracking technology include use during clinical resuscitation.
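The gaze measures in this abstract (gaze sample percentage, total gaze duration, visit count, and average visit duration per AOI) can all be derived from a per-sample stream of AOI labels, where a "visit" is a contiguous run of samples on the same AOI. A minimal sketch under that assumption; the function name, label encoding, and use of a label stream are illustrative, not the study's actual pipeline:

```python
from itertools import groupby
from typing import Optional, Sequence

def gaze_metrics(labels: Sequence[Optional[str]], aoi: str, hz: float) -> dict:
    """Summary gaze measures for one AOI from a per-sample label stream.

    labels: AOI name for each gaze sample, or None when gaze was lost
    (lost samples reduce the gaze sample percentage).
    hz: eye-tracker sampling rate in samples per second.
    """
    n = len(labels)
    valid = sum(lab is not None for lab in labels)
    # Each contiguous run of samples on the target AOI is one visit.
    visits = [sum(1 for _ in run) for lab, run in groupby(labels) if lab == aoi]
    total_s = sum(visits) / hz
    return {
        "gaze_sample_pct": 100.0 * valid / n if n else 0.0,
        "total_gaze_duration_s": total_s,
        "visit_count": len(visits),
        "avg_visit_duration_s": total_s / len(visits) if visits else 0.0,
    }
```

Distinguishing visit count from total duration matters here: a parameter can accumulate long total gaze time through a few long visits or through many brief glances, and the abstract reports both to separate those viewing patterns.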

