Auditory–Visual Matching of Conspecifics and Non-Conspecifics by Dogs and Human Infants

Animals, 2019, Vol 9 (1), pp. 17. Author(s): Anna Gergely, Eszter Petró, Katalin Oláh, József Topál

We tested whether dogs and 14–16-month-old infants are able to integrate intersensory information when presented with conspecific and heterospecific faces and vocalisations. The looking behaviour of dogs and infants was recorded with a non-invasive eye-tracking technique while they were concurrently presented with a dog and a female human portrait accompanied by acoustic stimuli of female human speech and a dog’s bark. Dogs showed evidence of both con- and heterospecific intermodal matching, while infants’ looking preferences indicated effective auditory–visual matching only when presented with the audio and visual stimuli of the non-conspecifics. The results of the present study provided further evidence that domestic dogs and human infants have similar socio-cognitive skills and highlighted the importance of comparative examinations of intermodal perception.

2021. Author(s): Diane Caroline Meziere, Lili Yu, Erik Reichle, Titus von der Malsburg, Genevieve McArthur

Research on reading comprehension assessments suggests that they measure overlapping but not identical cognitive skills. In this paper, we examined the potential of eye-tracking as a tool for assessing reading comprehension. We administered three widely used reading comprehension tests with varying task demands to 79 typical adult readers while monitoring their eye movements. In the York Assessment for Reading Comprehension (YARC), participants were given passages of text to read silently, followed by comprehension questions. In the Gray Oral Reading Test (GORT-5), participants were given passages of text to read aloud, followed by comprehension questions. In the sentence comprehension subtest of the Wide Range Achievement Test (WRAT-4), participants were given sentences with a missing word to read silently and had to provide the missing word (i.e., a cloze task). Linear models predicting comprehension scores from eye-tracking measures yielded different patterns of results across the three tests. Models with eye-tracking measures always explained significantly more variance than baseline models containing only reading speed, with R-squared 4 times higher for the YARC, 3 times higher for the GORT, and 1.3 times higher for the WRAT. Importantly, despite some similarities between the tests, no common good predictor of comprehension could be identified across the tests. Overall, the results suggest that reading comprehension tests do not measure the same cognitive skills to the same extent, and that participants adapted their reading strategies to the tests’ varying task demands. Finally, this study suggests that eye-tracking may provide a useful alternative for measuring reading comprehension.
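A minimal sketch of the model-comparison logic described above, assuming per-participant data with a comprehension score, reading speed, and a few eye-tracking measures; the file name and all column names (comprehension, reading_speed, mean_fixation_duration, regression_rate) are hypothetical placeholders, not the authors' actual variables or code.

# Compare a baseline model (reading speed only) with a model that adds
# eye-tracking predictors, reporting R-squared and a nested-model F-test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("comprehension_eyetracking.csv")  # hypothetical data file

baseline = smf.ols("comprehension ~ reading_speed", data=df).fit()
full = smf.ols(
    "comprehension ~ reading_speed + mean_fixation_duration + regression_rate",
    data=df,
).fit()

print(f"baseline R2 = {baseline.rsquared:.3f}, full R2 = {full.rsquared:.3f}")
print(f"R2 ratio (full / baseline) = {full.rsquared / baseline.rsquared:.1f}x")

# Does adding the eye-tracking measures explain significantly more variance
# than reading speed alone?
f_stat, p_value, df_diff = full.compare_f_test(baseline)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")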


2018, Vol 29 (9), pp. 1405-1413. Author(s): Christine M. Johnson, Jessica Sullivan, Jane Jensen, Cara Buck, Julie Trexel, ...

In this study, paradigms that test whether human infants make social attributions to simple moving shapes were adapted for use with bottlenose dolphins. The dolphins observed animated displays in which a target oval would falter while moving upward, and then either a “prosocial” oval would enter and help or caress it or an “antisocial” oval would enter and hinder or hit it. In subsequent displays involving all three shapes, when the pro- and antisocial ovals moved offscreen in opposite directions, the dolphins reliably predicted—based on anticipatory head turns when the target briefly moved behind an occluder—that the target oval would follow the prosocial one. When the roles of the pro- and antisocial ovals were reversed toward a new target, the animals’ continued success suggests that such attributions may be dyad specific. Some of the dolphins also directed high-arousal behaviors toward these displays, further suggesting that the displays were socially interpreted.


2020, Vol 11. Author(s): Maria Richter, Mariella Paul, Barbara Höhle, Isabell Wartenburger

One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning) by the speaker. To successfully decode the speaker’s meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here, we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts would predict that CG information is considered immediately and would hence not expect to find costs of CG integration. Late integration accounts would predict a rather late and effortful integration of CG information during the parsing process that might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered. Our data therefore support accounts that posit an early anticipation of referents in CG but a rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing and discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.


2019, Vol 142 (3). Author(s): E. Kwon, J. D. Ryan, A. Bazylak, L. H. Shu

Divergent thinking, an aspect of creativity, is often studied by measuring performance on the Alternative Uses Test (AUT). There is, however, a gap in creativity research concerning how visual stimuli on the AUT are perceived. Memory and attention researchers have used eye-tracking studies to reveal insights into how people think and how they perceive visual stimuli. Thus, the current work uses eye tracking to study how eye movements are related to creativity. Participants orally listed alternative uses for twelve objects, each visually presented for 2 min in four different views. Using eye tracking, we specifically explored where and for how long participants fixated on visual presentations of objects during the AUT. Eye movements before and while naming alternative uses were analyzed. Results revealed that naming new instances and categories of alternative uses correlated more strongly with visual fixation toward multiple views than toward single views of objects. Alternative uses in new, previously unnamed categories were also more likely to be named following increased visual fixation toward blank space. These and other findings reveal the cognitive-thinking styles and eye-movement behaviors associated with naming new ideas. Such findings may be applied to enhance divergent thinking during design.
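The analysis described above is correlational; the sketch below illustrates that kind of analysis, assuming per-participant totals of fixation time on multiple-view regions, single-view regions, and blank space, along with counts of newly named use categories. The file name and column names are illustrative only, not the authors' data or code.

# Correlate fixation time on each display region with the number of new
# alternative-use categories named on the AUT (hypothetical column names).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("aut_fixations.csv")  # hypothetical per-participant data

regions = ["multi_view_fixation_s", "single_view_fixation_s", "blank_space_fixation_s"]
for region in regions:
    r, p = pearsonr(df[region], df["new_categories_named"])
    print(f"{region}: r = {r:.2f}, p = {p:.3f}")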


2016, Vol 29 (8), pp. 749-771. Author(s): Min Hooi Yong, Ted Ruffman

Dogs respond to human emotional expressions. However, it is unknown whether dogs can match emotional faces to voices in an intermodal matching task or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52 domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously, while they listened to a human voice expressing an emotion that matched one of them. Consistent with most matching studies, neither dogs nor infants looked longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus), with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might have an aversion to sad faces, or alternatively, heightened interest in angry and happy faces.

