Custom emoji based emotion recognition system for dynamic business webpages

2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Fatima Isiaka ◽  
Zainab Adamu

Purpose
One of the contributions of artificial intelligence (AI) to modern technology is emotion recognition, which is mostly based on facial expression and modification of its inference engine. Facial recognition schemes are mostly built to understand user expression on an online business webpage or marketing site, but have limited ability to recognise elusive expressions. The basic emotions are expressed when interrelating and socialising with other people online. Understanding user expression, especially subtle expressions, is often a tedious task. An emotion recognition system can be used to optimise and reduce the complexity of understanding users' subconscious thoughts and reasoning through their pupil changes.

Design/methodology/approach
This paper demonstrates the use of a personal computer (PC) webcam to read in eye movement data, including pupil changes, as part of distinct user attributes. A custom eye movement algorithm (CEMA) captures user activity and records the data, which serves as input to an inference engine (an artificial neural network (ANN)) that predicts the user's emotional response, conveyed as emoticons on the webpage.

Findings
The error in performance shows that the ANN is highly adaptable to user behaviour prediction and can be used for the system's modification paradigm.

Research limitations/implications
One drawback of the analytical tool is its inability, in some cases, to place some of the emoticons within the boundaries of the visual field; this limitation is to be tackled in subsequent runs with standard techniques.

Originality/value
The originality of the proposed model is its ability to predict basic user emotional response based on changes in pupil size between average recorded baseline boundaries, and to convey the emoticons chronologically with the gaze points.
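The core idea of the abstract above — comparing a pupil-size sample against recorded baseline boundaries and mapping the result to an emoticon — can be sketched as follows. All thresholds, labels, and emoticon strings here are illustrative assumptions, not values or code from the paper:

```python
# Hypothetical sketch: classify a pupil-diameter sample against a recorded
# baseline band and map the result to a basic-emotion emoticon.
# Thresholds (in mm) and emotion labels are made-up placeholders.

def classify_pupil(sample_mm, baseline_lo, baseline_hi):
    """Return a coarse emotion label from pupil diameter vs. baseline band."""
    if sample_mm > baseline_hi:
        return "aroused"      # dilation beyond baseline: heightened arousal
    if sample_mm < baseline_lo:
        return "disengaged"   # constriction below baseline: low engagement
    return "neutral"          # within the recorded baseline boundaries

EMOTICONS = {"aroused": ":-O", "disengaged": ":-|", "neutral": ":-)"}

def emoticon_for(sample_mm, baseline_lo=3.0, baseline_hi=4.5):
    """Emoticon to overlay at the current gaze point for this sample."""
    return EMOTICONS[classify_pupil(sample_mm, baseline_lo, baseline_hi)]
```

In the paper's pipeline this rule-based step would be replaced by the trained ANN inference engine; the sketch only shows the baseline-band logic the abstract describes.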

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Wencheng Su ◽  
Zhangping Lu ◽  
Yinglin Sun ◽  
Guifeng Liu

Purpose
Wayfinding efficiency is an extremely influential factor in improving users' experience of library interiors. However, little research has studied the different functions of the various wayfinding signages for university library users through a mobile visual experiment. To fill this gap, the purpose of this paper is to explore the relationship between university library signage system design and patrons' wayfinding behaviour features.

Design/methodology/approach
In this article, an eye movement tracking method was introduced to record eye movement data during participants' wayfinding in the library interior, targeting the cognition and psychology of library users toward the wayfinding signage system. The visual guiding usability of landmarks, informational signages and directional signages was quantitatively tested, and fixation on the signage system was compared between orientation-strategy users and route-strategy users. This study also investigated the effects of library users' spatial anxiety and environmental familiarity on their fixation on the areas of interest of the wayfinding signage system, using differential tests and regression.

Findings
This paper observed that informational signage had the best visual navigating competence. The differences in fixation duration and searching duration between patrons using various wayfinding strategies were significant. The informational signage was most attended by the route-strategy users, and the orientation-strategy users rarely focused on the directional signage. Participants with high anxiety tended to ignore the visually auxiliary function of the landmarks but paid attention to the directional signage, whereas participants with low anxiety could capture landmarks that route-strategy users could not easily find. Participants less familiar with the environment were more sensitive to the landmarks.
Furthermore, this paper offers optimisation measures for university library wayfinding signage systems, from the perspectives of improving the understandability of informational signage, the physical specification design of directional signage and a wayfinding assistant system with automatic landmark technology.

Originality/value
This article adds to the relatively sparse literature on experimental studies of university library user wayfinding in China. The experimental findings also have important practical implications for academic libraries' wayfinding system evaluation. The whole process can be seen as a repeatable, standard framework and methodology for inspecting a university library's wayfinding signage system usability and user wayfinding behaviour performance.
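The group comparison described above (fixation duration on a signage area of interest, orientation-strategy vs. route-strategy users) can be illustrated with a minimal Welch's t statistic, a common choice for such differential tests. The data and the use of Welch's test are assumptions for illustration, not the paper's actual numbers or analysis code:

```python
# Illustrative sketch: compare mean fixation duration (ms) on an
# informational-signage AOI between route-strategy and orientation-strategy
# users with Welch's t statistic. All data values are made up.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

route = [412, 388, 451, 430, 395]        # fixation durations, route strategy
orientation = [301, 276, 322, 289, 310]  # fixation durations, orientation strategy
t = welch_t(route, orientation)          # positive t: route users fixate longer
```

A full analysis would also compute degrees of freedom and a p-value (e.g. via `scipy.stats.ttest_ind(..., equal_var=False)`); the sketch shows only the core statistic.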


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing the eye movement data obtained from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the typified mean difference, d, between the age groups in all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 in all measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool to measure behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses assessed for explaining the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
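The typified mean difference d used in the meta-analysis above is, in its simplest per-study form, a standardized mean difference (Cohen's d with a pooled standard deviation). A minimal sketch, with made-up fixation-duration data and the sign convention from the abstract (positive d favors the young group):

```python
# Illustrative sketch: per-study standardized mean difference (Cohen's d)
# between young and old readers on one eye movement measure.
# Data values are invented for demonstration only.
from statistics import mean, stdev

def cohens_d(young, old):
    """Typified mean difference with pooled SD; positive favors young readers
    on measures where lower values mean better performance."""
    n1, n2 = len(young), len(old)
    s1, s2 = stdev(young), stdev(old)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(old) - mean(young)) / pooled

young_fix = [210, 225, 198, 230]  # mean fixation durations, ms
old_fix = [265, 280, 250, 270]
d = cohens_d(young_fix, old_fix)
```

A full meta-analysis would then pool these per-study d values (e.g. inverse-variance weighting) to obtain the combined effect sizes the abstract reports.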


2014 ◽  
Author(s):  
Bernhard Angele ◽  
Elizabeth R. Schotter ◽  
Timothy Slattery ◽  
Tara L. Chaloukian ◽  
Klinton Bicknell ◽  
...  

Author(s):  
Ayush Kumar ◽  
Prantik Howlader ◽  
Rafael Garcia ◽  
Daniel Weiskopf ◽  
Klaus Mueller

2021 ◽  
Vol 1757 (1) ◽  
pp. 012021
Author(s):  
Yuqiong Wang ◽  
Zehui Zhao ◽  
Zhiwei Huang

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. Therefore, it is not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior using saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether our approach of embedding saliency features within the visualization helps us understand the visual attention of an observer.
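One building block implied by the abstract above — determining which saliency feature is prominent at the observer's fixation locations — can be sketched by sampling per-feature saliency maps at gaze coordinates. The tiny feature grids and feature names below are invented placeholders, not the paper's data or method:

```python
# Hypothetical sketch: find the saliency feature (e.g. intensity, color)
# whose map scores highest when sampled at fixation coordinates.
# The 2x2 "maps" and fixation points are toy values for illustration.

def sample_at(feature_map, x, y):
    """Read a saliency value from a row-major grid at pixel (x, y)."""
    return feature_map[y][x]

def dominant_feature(feature_maps, fixations):
    """Return the feature whose map sums highest over all fixations."""
    totals = {name: sum(sample_at(m, x, y) for x, y in fixations)
              for name, m in feature_maps.items()}
    return max(totals, key=totals.get)

maps = {
    "intensity": [[0.1, 0.2], [0.3, 0.9]],
    "color":     [[0.4, 0.1], [0.2, 0.3]],
}
fixations = [(1, 1), (0, 1)]  # (x, y) gaze points on the stimulus
```

In the proposed visualization these per-feature scores would drive the visual clues drawn with the gaze data, rather than being reported as a single label.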


2017 ◽  
Vol 13 (4) ◽  
pp. 408-418 ◽  
Author(s):  
Mustafa S. Aljumaily ◽  
Ghaida A. Al-Suhail

Purpose
Recently, much research has been devoted to studying the possibility of using the wireless signals of Wi-Fi networks in human-gesture recognition. These studies focus on classifying gestures regardless of who is performing them, and only a few previous works make use of the wireless channel state information in identifying humans. This paper aims to recognize different humans and their multiple gestures in an indoor environment.

Design/methodology/approach
The authors designed a gesture recognition system consisting of channel state information data collection, preprocessing, feature extraction and classification, to identify both the human and the gesture in the vicinity of a Wi-Fi-enabled device. The Wi-Fi device driver was modified to collect the channel state information and process it in real time.

Findings
The proposed system proved to work well for different humans and different gestures, with an accuracy ranging from 87 per cent for multiple humans and multiple gestures to 98 per cent for individual humans' gesture recognition.

Originality/value
This paper used new preprocessing and filtering techniques, proposed new features to be extracted from the data and introduced a new classification method that has not been used in this field before.
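The collect → preprocess → extract features → classify pipeline described above can be sketched in miniature. The moving-average filter, the two statistical features, and the nearest-centroid classifier are generic stand-ins chosen for illustration; the paper's actual filtering, features, and classifier are not specified here:

```python
# Illustrative pipeline sketch (assumed, not the authors' code): denoise a
# CSI amplitude stream with a moving average, extract simple statistical
# features, and classify with a nearest-centroid rule over labeled templates.
from statistics import mean, pstdev

def moving_average(sig, k=3):
    """Causal moving average over a window of up to k samples."""
    return [mean(sig[max(0, i - k + 1):i + 1]) for i in range(len(sig))]

def features(sig):
    """Two toy features of the smoothed signal: mean and spread."""
    smooth = moving_average(sig)
    return (mean(smooth), pstdev(smooth))

def nearest_centroid(sample, templates):
    """templates: {label: [feature tuples]}. Return the closest label."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {lab: tuple(mean(col) for col in zip(*feats))
                 for lab, feats in templates.items()}
    f = features(sample)
    return min(centroids, key=lambda lab: dist(f, centroids[lab]))

# Toy usage: two gesture templates with clearly separated amplitude levels.
templates = {"wave": [features([5, 6, 5, 6, 5])],
             "push": [features([1, 1, 2, 1, 1])]}
```

Per-human identification would follow the same pattern with human labels in the template dictionary; a real system would use richer CSI features (e.g. per-subcarrier statistics) and a stronger classifier.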

