An Eye-Tracking Study of Receptive Verb Knowledge in Toddlers

2018, Vol 61 (12), pp. 2917-2933
Author(s):  
Matthew James Valleau ◽  
Haruka Konishi ◽  
Roberta Michnick Golinkoff ◽  
Kathy Hirsh-Pasek ◽  
Sudha Arunachalam

Purpose We examined receptive verb knowledge in 22- to 24-month-old toddlers with a dynamic video eye-tracking test. The primary goal of the study was to examine the utility of eye-gaze measures that are commonly used to study noun knowledge for studying verb knowledge. Method Forty typically developing toddlers participated. They viewed 2 videos side by side (e.g., girl clapping, same girl stretching) and were asked to find one of them (e.g., “Where is she clapping?”). Their eye gaze, recorded by a Tobii T60XL eye-tracking system, was analyzed as a measure of their knowledge of the verb meanings. Noun trials were included as controls. We examined correlations between eye-gaze measures and scores on the MacArthur–Bates Communicative Development Inventories (CDI; Fenson et al., 1994), a standard parent report measure of expressive vocabulary, to see how well various eye-gaze measures predicted CDI score. Results A common measure of knowledge—a 15% increase in looking time to the target video from a baseline phase to the test phase—did correlate with CDI score but had to be operationalized differently for verbs than for nouns. A 2nd common measure, latency of 1st look to the target, correlated with CDI score for nouns, as in previous work, but did not for verbs. A 3rd measure, fixation density, correlated with CDI score for both nouns and verbs, although the correlations went in opposite directions. Conclusions The dynamic nature of videos depicting verb meanings produces differences in eye gaze compared with the static images used to depict nouns. An eye-tracking assessment of verb knowledge is worthwhile to develop, but the dependent measures used may need to differ from those used for static images and nouns.
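A minimal Python sketch may clarify how the three eye-gaze measures could be computed from per-sample gaze annotations. The 60-Hz sampling rate, the run-based fixation proxy, and the toy data windows are illustrative assumptions, not the study's actual analysis pipeline:

```python
# Illustrative sketch only: samples are per-frame gaze labels
# ("target", "distractor", or None for track loss) at an assumed 60 Hz.
SAMPLE_HZ = 60

def proportion_to_target(samples):
    looks = [s for s in samples if s in ("target", "distractor")]
    return sum(s == "target" for s in looks) / len(looks) if looks else 0.0

def knows_word(baseline, test, threshold=0.15):
    # Measure 1: looking to the target rises by >= 15 percentage points
    # from the baseline phase to the test phase.
    return proportion_to_target(test) - proportion_to_target(baseline) >= threshold

def first_look_latency(test):
    # Measure 2: latency (in seconds) of the first look to the target
    # after test-phase onset; None if the child never looks at it.
    for i, s in enumerate(test):
        if s == "target":
            return i / SAMPLE_HZ
    return None

def fixation_count(samples):
    # Measure 3 (rough proxy for fixation density): contiguous runs of
    # looking at the same region count as a single fixation.
    runs, prev = 0, None
    for s in samples:
        if s is not None and s != prev:
            runs += 1
        prev = s
    return runs

baseline = ["target", "distractor", "distractor", "target"]  # 50% to target
test = ["distractor", "target", "target", "target"]          # 75% to target
print(knows_word(baseline, test))    # True: 75% - 50% >= 15 points
print(first_look_latency(test))      # first target sample is index 1
print(fixation_count(test))          # two runs: distractor, then target
```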

Author(s):  
Federico Cassioli ◽  
Laura Angioletti ◽  
Michela Balconi

Abstract Human–computer interaction (HCI) is particularly interesting because fully immersive technology may be approached differently by users, depending on the complexity of the interaction, users’ personality traits, and the inclination of their motivational systems. This study therefore investigated the relationship between psychological factors and attention towards specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS), and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system, applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of differing complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. The data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of the number of fixations. Moreover, a slower time to first fixation was found in a multifaceted interaction (bathroom) compared with simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward responsiveness scores and fixation duration. The findings point to a two-way process in which both the complexity of the tech-interaction and the subject’s personality traits shape the user’s visual exploration behavior. This research contributes to understanding user responsiveness, adding first insights that may help create more human-centered technology.


2021, Vol 8 (1)
Author(s):  
Alexandros Karargyris ◽  
Satyananda Kashyap ◽  
Ismini Lourentzou ◽  
Joy T. Wu ◽  
Arjun Sharma ◽  
...  

Abstract We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence research. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: the CXR image, transcribed radiology report text, the radiologist’s dictation audio, and eye-gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human–machine interaction can benefit from these data. We report deep learning experiments that utilize the attention maps produced from the eye-gaze data to show the potential utility of this dataset.


2021, Vol 2120 (1), pp. 012030
Author(s):  
J K Tan ◽  
W J Chew ◽  
S K Phang

Abstract The field of Human-Computer Interaction (HCI) has developed tremendously over the past decade. Smartphones and modern computers, which use touch, voice, and typing as input, are already the norm in society. To further increase the variety of interaction, the human eyes are a good candidate for another form of HCI. The information that the human eyes convey is extremely useful; hence, various methods and algorithms for eye-gaze tracking have been implemented in multiple sectors. However, some eye-tracking methods require infrared rays to be projected into the user's eye, which under extreme exposure could potentially cause enzyme denaturation. Therefore, to avoid the potential harm of infrared-based methods, this paper proposes an image-based eye-tracking system using the Viola-Jones algorithm and the Circular Hough Transform (CHT) algorithm. The proposed method uses visible light instead of infrared rays to control the mouse pointer with the user's eye gaze. This research aims to implement the proposed algorithm so that people with hand disabilities can interact with computers using their eye gaze.
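The pupil-localization step can be illustrated with a minimal circular Hough transform: each edge point votes for every candidate center lying one radius away, and the accumulator peak is taken as the circle center. This is a simplified, pure-Python sketch with a fixed known radius and synthetic edge points, not the paper's implementation (which pairs CHT with Viola-Jones detection on camera images):

```python
import math

def hough_circle_center(edge_points, radius, shape):
    """Simplified circular Hough transform with a known radius:
    accumulate votes for candidate centers, return the peak."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for (y, x) in edge_points:
        for k in range(90):  # sample candidate centers around each edge point
            t = 2 * math.pi * k / 90
            a = round(y - radius * math.sin(t))
            b = round(x - radius * math.cos(t))
            if 0 <= a < h and 0 <= b < w:
                acc[a][b] += 1
    # Index of the accumulator cell with the most votes.
    return max(((acc[a][b], (a, b)) for a in range(h) for b in range(w)))[1]

# Synthetic "pupil edge": points on a circle centered at (32, 32), r = 10.
r = 10
pts = [(round(32 + r * math.sin(2 * math.pi * k / 60)),
        round(32 + r * math.cos(2 * math.pi * k / 60))) for k in range(60)]
cy, cx = hough_circle_center(pts, r, (64, 64))
print((cy, cx))  # peak lands at (or within a pixel of) the true center
```

In a real visible-light pipeline the edge points would come from an edge detector applied to the eye region found by Viola-Jones, and the radius would be searched over a plausible pupil-size range.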


2021, Vol 12
Author(s):  
Irene Cadime ◽  
Ana Lúcia Santos ◽  
Iolanda Ribeiro ◽  
Fernanda Leopoldina Viana

This study presents the validation analysis of the European Portuguese version of the MacArthur-Bates Communicative Development Inventory III (CDI-III-PT). The CDI-III-PT is a parental report measure that allows researchers to assess the expressive vocabulary and syntactic abilities of children aged 2;6–4;0. In this study, we present a version comprising a lexical subscale, which follows the Swedish adaptation, and an original syntactic subscale that allowed us to include language-specific structures. Reports on 739 children were collected; in addition, a standardized language measure was administered to a sub-sample of these children, and reports from preschool teachers were collected for another sub-sample. The results indicate high internal consistency for the lexical and syntactic subscales. As for sociodemographic variables often found to predict language development as measured by this type of instrument, the results indicate that age and maternal education are significant predictors of the scores, and that first-born children attain higher vocabulary scores than later-born children, but no significant gender differences were found. The CDI-III-PT scores are positively correlated with those obtained on the standardized language measure, supporting their validity. High agreement between the reports of parents and teachers was also found. These findings indicate that the CDI-III-PT has adequate psychometric properties and can be a useful tool for research and clinical practice. The age-based norms now provided can be used to evaluate whether a child is performing poorly compared with their peers.


2020, Vol 12 (2), pp. 43
Author(s):  
Mateusz Pomianek ◽  
Marek Piszczek ◽  
Marcin Maciejewski ◽  
Piotr Krukowski

This paper describes research on the stability of the MEMS mirror for use in eye-tracking systems. MEMS mirrors are the main element in scanning methods, one of the approaches to eye tracking. By varying the mirror pitch, the system can scan the area of the eye with a laser and collect the reflected signal. However, this method assumes that the inclinations are constant in each period; instability in the pitch causes errors. The aim of this work is to examine the error level caused by pitch instability at different operating points.
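The link between pitch instability and scan error can be illustrated with simple reflection geometry: by the law of reflection, tilting a mirror by θ deflects the reflected beam by 2θ, so a small pitch error is doubled and then magnified by the working distance. The numbers below are invented for illustration and are not taken from the paper:

```python
import math

def scan_position(theta_rad, distance_mm):
    # A mirror tilted by theta deflects the reflected beam by 2*theta,
    # so the spot lands at distance_mm * tan(2*theta) off the optical axis.
    return distance_mm * math.tan(2 * theta_rad)

theta = math.radians(5.0)     # nominal mirror pitch (assumed)
jitter = math.radians(0.05)   # pitch instability (assumed)
d = 50.0                      # mirror-to-eye distance in mm (assumed)
err = scan_position(theta + jitter, d) - scan_position(theta, d)
print(round(err, 3))  # spot displacement in mm caused by the pitch jitter
```

Even a 0.05° pitch error displaces the laser spot by roughly a tenth of a millimeter at this distance, which is significant at the scale of pupil features.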


2020
Author(s):  
Ana Maria Gonzalez-Barrero ◽  
Esther Schott ◽  
Krista Byers-Heinlein

Vocabulary size is one of the most important early metrics of language development. Assessing vocabulary in bilingual children is complex because bilinguals learn words in two languages, including translation equivalents (cross-language synonyms). We collected expressive vocabulary data from English and French monolinguals (n = 220) and English–French bilinguals (n = 184) aged 18–33 months, via parent report using the MacArthur-Bates Communicative Development Inventories, and quantified bilinguals’ vocabulary size using different approaches to counting translation equivalents. Our results showed that traditional approaches yield larger (word vocabulary) or smaller (concept vocabulary) estimates of bilinguals’ vocabulary knowledge relative to monolinguals. We propose a new metric, the bilingual adjusted vocabulary, that yields similar vocabulary sizes for monolinguals and bilinguals across different ages. Uniquely, this approach counts translation equivalents differently depending on the child’s age. This developmentally informed bilingual vocabulary measure reveals differences in word learning abilities across ages and provides a new approach to measuring vocabulary in bilingual toddlers.
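The difference between the two traditional counting approaches can be sketched in a few lines of Python. The word lists and concept labels below are invented for illustration, and the age-dependent adjustment of the proposed bilingual adjusted vocabulary is not specified in the abstract, so it is not implemented here:

```python
# Hypothetical CDI-style lexicons: each produced word maps to a concept
# label; translation equivalents share the same concept (e.g. dog/chien).
english = {"dog": "DOG", "milk": "MILK", "ball": "BALL"}
french = {"chien": "DOG", "lait": "MILK", "chat": "CAT"}

def word_vocabulary(*lexicons):
    # Counts every produced word, so each translation equivalent
    # pair contributes twice (inflates relative to monolinguals).
    return sum(len(lex) for lex in lexicons)

def concept_vocabulary(*lexicons):
    # Counts each underlying concept once, collapsing translation
    # equivalents (deflates relative to monolinguals).
    return len({concept for lex in lexicons for concept in lex.values()})

print(word_vocabulary(english, french))     # 6 words produced in total
print(concept_vocabulary(english, french))  # 4 distinct concepts
```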


2021, pp. 112972982098736
Author(s):  
Kaji Tatsuru ◽  
Yano Keisuke ◽  
Onishi Shun ◽  
Matsui Mayu ◽  
Nagano Ayaka ◽  
...  

Purpose: Real-time ultrasound (RTUS)-guided central venipuncture using the short-axis approach is complicated and likely to result in losing sight of the needle tip. We therefore focused on eye gaze, using an eye-tracking system to evaluate differences in gaze behavior between medical students and experienced participants. Methods: Ten medical students (MS group), five residents (R group), and six pediatric surgeon fellows (F group) performed short-axis RTUS-guided venipuncture simulation using a modified vessel training system. Eye gaze was captured by the tracking system (Tobii Eye Tracker 4C) and recorded. The evaluation endpoints were the task completion time, the total time and number of occurrences of the eye-tracking marker outside the US monitor, and the success rate of venipuncture. Results: There were no significant differences in the task completion time or the total time of the tracking marker outside the US monitor. The number of occurrences of the eye-tracking marker outside the US monitor in the MS group was significantly higher than in the F group (MS group: 9.5 ± 3.4, R group: 6.0 ± 2.9, F group: 5.2 ± 1.6; p = 0.04). The success rate of venipuncture in the R group tended to be better than in the F group. Conclusion: More experienced operators let their gaze fall outside the US monitor fewer times than less experienced ones, and eye gaze was associated with the success rate of RTUS-guided venipuncture. Repeated training that takes eye gaze into account seems pivotal for mastering RTUS-guided venipuncture.


1999, Vol 42 (2), pp. 482-496
Author(s):  
Donna J. Thal ◽  
Laureen O'Hanlon ◽  
Mary Clemmons ◽  
LaShon Fralin

Previous research has documented the validity of parent report for measuring vocabulary and grammar in typically developing toddlers. In this project, two studies examined the validity of parent report for measuring language in children with specific language delay who are older than the normative group but whose language levels fall within the range measured by the instruments. In Study 1, scores on the MacArthur Communicative Development Inventory: Words and Sentences were compared to behavioral measures of vocabulary and grammar production in 39- to 49-month-old children with language delay. Results indicated moderately high to high concurrent validity correlations in both domains (.67–.86). In Study 2, scores on the MacArthur Communicative Development Inventory: Words and Gestures were compared to behavioral measures of vocabulary comprehension and production and gesture production in 24- to 32-month-old children with language delay. Results indicated a moderately high concurrent validity correlation for vocabulary production (.66). Parent report of comprehension and gesture scores did not correlate significantly with their behavioral counterparts, but gesture scores were moderately highly correlated with language comprehension (.65).


2016, Vol 47 (1), pp. 44-58
Author(s):  
Pamela A. Hadley ◽  
Matthew Rispoli ◽  
Ning Hsu

Purpose The goals of this study were to quantify longitudinal expectations for verb lexicon growth and to determine whether verb lexicon measures were better predictors of later grammatical outcomes than noun lexicon measures. Method Longitudinal parent-report measures from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2007) from ages 21 to 30 months from an archival database were used to model growth in common noun and verb lexicon size for 45 typically developing toddlers. Communicative Development Inventory growth coefficients and 24-month measures of lexical diversity from spontaneous language samples were used to predict 30-month grammatical outcomes on the Index of Productive Syntax (Scarborough, 1990). Results Average verb growth was characterized by 50.57 verbs at 24 months, with linear growth of 8.29 verbs per month and deceleration overall. Children with small verb lexicons or slow linear growth at 24 months accelerated during this developmental period. Verb lexicon measures were better predictors of grammatical outcomes than noun lexicon measures, accounting for 47.8% of the variance in Index of Productive Syntax scores. Lexical verb diversity in spontaneous speech emerged as the single best predictor. Conclusion Measures of verb lexicon size and diversity should be included as part of early language assessment to guide clinical decision making with young children at risk for language impairment.
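Read as a quadratic growth curve centered at 24 months (an interpretation; the abstract does not state the model's functional form), the reported coefficients correspond to:

```latex
\hat{V}(t) = 50.57 + 8.29\,(t - 24) + \beta_2\,(t - 24)^2, \qquad \beta_2 < 0,
```

where t is age in months, 50.57 is the expected verb lexicon size at 24 months, 8.29 is the instantaneous monthly growth rate at that age, and the negative quadratic coefficient (not reported in the abstract) captures the overall deceleration.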


2020, Vol 57 (9), pp. 1117-1124
Author(s):  
Adriane Baylis ◽  
Linda D. Vallino ◽  
Juliana Powell ◽  
David J. Zajac

Objective: To determine vocabulary and lexical selectivity characteristics of children with and without repaired cleft palate at 24 months of age, based on parent report. Participants: Forty-nine children with repaired cleft palate, with or without cleft lip (CP±L; 25 males; 21 cleft lip and palate, 28 CP only), 29 children with a history of otitis media (OM) and ventilation tubes (21 males), and 25 typically developing (TD) children (13 males). Main Outcome Measure(s): Parent-reported expressive vocabulary was determined using the MacArthur Communicative Development Inventory: Words and Sentences. Results: Vocabulary size was reduced for children with repaired CP±L compared to children in the TD group (P = .025) but not the OM group (P = .403). Mean percentage of words beginning with sonorants did not differ across groups (P = .383). Vocabulary size predicted sonorant use for all groups (P = .001). Conclusions: Children with repaired CP±L exhibit similar lexical selectivity relative to word-initial sounds compared to noncleft TD and OM peers at 24 months of age, based on parent report.

