Western scrub-jays conceal auditory information when competitors can hear but cannot see

2009 ◽  
Vol 5 (5) ◽  
pp. 583-585 ◽  
Author(s):  
Gert Stulp ◽  
Nathan J. Emery ◽  
Simon Verhulst ◽  
Nicola S. Clayton

Western scrub-jays (Aphelocoma californica) engage in a variety of cache-protection strategies to reduce the chances of cache theft by conspecifics. Many of these strategies revolve around reducing the visual information available to potential thieves. This study aimed to determine whether the jays also reduce auditory information during caching. Each jay was given the opportunity to cache food in two trays, one filled with small pebbles that made considerable noise when cached in (the ‘noisy’ tray), the other containing soil that made little detectable noise when cached in (the ‘quiet’ tray). When the jays could be heard, but not seen, by a competitor, they cached proportionally fewer food items in the ‘noisy’ substrate than when they cached alone in the room or when they could be both seen and heard by competitors. These results suggest that western scrub-jays know when to conceal auditory information, namely when a competitor cannot see but can hear the caching event.

PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4451 ◽  
Author(s):  
Katharina F. Brecht ◽  
Ljerka Ostojić ◽  
Edward W. Legg ◽  
Nicola S. Clayton

Previous research has suggested that videos can be used to experimentally manipulate social stimuli. In the present study, we used the California scrub-jays’ cache protection strategies to assess whether video playback can be used to simulate conspecifics in a social context. In both the lab and the field, scrub-jays are known to exhibit a range of behaviours to protect their caches from potential pilferage by a conspecific, for example by hiding food in locations out of the observer’s view or by re-caching previously made caches once the observer has left. Here, we presented scrub-jays with videos of a conspecific observer as well as two non-social conditions during a caching period and assessed whether they would cache out of the observer’s “view” (Experiment 1) or would re-cache their caches once the observer was no longer present (Experiment 2). In contrast to previous studies using live observers, the scrub-jays’ caching and re-caching behaviour was not influenced by whether the observer was present or absent. These findings suggest that there might be limitations in using video playback of social agents to mimic real-life situations when investigating corvid decision making.


2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.


2017 ◽  
Vol 30 (7-8) ◽  
pp. 653-679 ◽  
Author(s):  
Nida Latif ◽  
Agnès Alsius ◽  
K. G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. On any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by individuals engaged in a natural conversation with another person. These utterances occurred either right before a turn exchange (i.e., when the current talker would finish and the other would begin) or at points where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only and visual-only versions of our stimuli and identified whether a turn exchange would occur or not. We demonstrated that although participants could identify turn exchanges with unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges in which the talker, the listener or both were visible. Participants suffered a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.


2017 ◽  
Vol 13 (7) ◽  
pp. 20170242 ◽  
Author(s):  
Laura A. Kelley ◽  
Nicola S. Clayton

Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize the sensory cues available to potential pilferers, such as caching in shaded areas or in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example by hiding coloured food in a similarly coloured substrate. We tested whether California scrub-jays (Aphelocoma californica) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of the food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present) or heard but not seen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache-protection strategy that also appears to be sensitive to social context. We conclude that studies of cache-protection strategies should consider the perceptual capabilities of both the cacher and potential pilferers.


2019 ◽  
Vol 31 (8) ◽  
pp. 1110-1125 ◽  
Author(s):  
Maria V. Stuckenberg ◽  
Erich Schröger ◽  
Andreas Widmann

Predictions about forthcoming auditory events can be established on the basis of preceding visual information. Sounds that are incongruent with predictive visual information have been found to elicit an enhanced negative ERP in the latency range of the auditory N1 compared with physically identical sounds preceded by congruent visual information. This so-called incongruency response (IR) is interpreted as reflecting a reduced prediction error for predicted sounds at a sensory level. The main purpose of this study was to examine the impact of probability manipulations on the IR. We manipulated the probability with which particular congruent visual–auditory pairs were presented (83/17 vs. 50/50 condition), yielding two conditions with different strengths of association between visual and auditory information. A visual cue was presented either above or below a fixation cross and was followed by either a high- or low-pitched sound. In 90% of trials, the visual cue correctly predicted the subsequent sound. In one condition, one of the sounds was presented more frequently (83% of trials), whereas in the other condition both sounds were presented with equal probability (50% of trials). Therefore, in the 83/17 condition, one congruent combination of visual cue and corresponding sound was presented more frequently than the other combinations, presumably leading to a stronger visual–auditory association. A significant IR for unpredicted compared with predicted but otherwise identical sounds was observed only in the 83/17 condition, not in the 50/50 condition, where both congruent cue–sound combinations were presented with equal probability. We also tested whether the processing of the prediction violation depends on the task relevance of the visual information by contrasting a visual–auditory matching task with a pitch discrimination task. The task affected only behavioral performance, not the prediction error signals. The results suggest that the generation of visual-to-auditory sensory predictions is facilitated by a strong association between the visual cue and the predicted sound (83/17 condition) but is not influenced by the task relevance of the visual information.
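The 83/17 vs. 50/50 probability manipulation described above can be illustrated with a small trial-generation sketch. This is a hypothetical reconstruction for clarity only: the cue positions, pitch labels and the specific cue-to-sound mapping are assumptions, not details reported in the study. Only the stated probabilities (90% congruent trials; one sound on 83% vs. 50% of trials) come from the abstract.

```python
import random

def make_trials(n, condition, seed=0):
    """Generate (cue, sound, congruent) trials for the hypothetical design.

    Assumed mapping: cue 'above' predicts 'high', 'below' predicts 'low'.
    In 90% of trials the cue correctly predicts the sound (congruent).
    In the '83/17' condition the 'high' sound occurs on ~83% of trials;
    in the '50/50' condition both sounds are equally likely.
    """
    rng = random.Random(seed)
    p_high = 0.83 if condition == "83/17" else 0.50
    trials = []
    for _ in range(n):
        # Draw the sound according to the condition's base rate.
        sound = "high" if rng.random() < p_high else "low"
        # 90% of trials are congruent: the cue predicts the sound actually played.
        congruent = rng.random() < 0.90
        predicted = sound if congruent else ("low" if sound == "high" else "high")
        cue = "above" if predicted == "high" else "below"
        trials.append((cue, sound, congruent))
    return trials

# Sketch of a session in each condition:
biased = make_trials(10_000, "83/17", seed=1)
equal = make_trials(10_000, "50/50", seed=1)
```

In the 83/17 condition one congruent cue–sound pairing dominates the session, which is the asymmetry the authors suggest builds the stronger visual–auditory association.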


Perception ◽  
1998 ◽  
Vol 27 (1) ◽  
pp. 69-86 ◽  
Author(s):  
Michel-Ange Amorim ◽  
Jack M Loomis ◽  
Sergio S Fukusima

An unfamiliar configuration lying in depth and viewed from a distance is typically seen as foreshortened. The hypothesis motivating this research was that a change in an observer's viewpoint even when the configuration is no longer visible induces an imaginal updating of the internal representation and thus reduces the degree of foreshortening. In experiment 1, observers attempted to reproduce configurations defined by three small glowing balls on a table 2 m distant under conditions of darkness following ‘viewpoint change’ instructions. In one condition, observers reproduced the continuously visible configuration using three other glowing balls on a nearer table while imagining standing at the distant table. In the other condition, observers viewed the configuration, it was then removed, and they walked in darkness to the far table and reproduced the configuration. Even though the observers received no additional information about the stimulus configuration in walking to the table, they were more accurate (less foreshortening) than in the other condition. In experiment 2, observers reproduced distant configurations on a nearer table more accurately when doing so from memory than when doing so while viewing the distant stimulus configuration. In experiment 3, observers performed both the real and imagined perspective change after memorizing the remote configuration. The results of the three experiments indicate that the continued visual presence of the target configuration impedes imaginary perspective-change performance and that an actual change in viewpoint does not increase reproduction accuracy substantially over that obtained with an imagined change in viewpoint.


1984 ◽  
Vol 59 (1) ◽  
pp. 227-232 ◽  
Author(s):  
Luciano Mecacci ◽  
Dario Salmaso

Visual evoked potentials were recorded from 6 adult male subjects in response to single vowels and consonants in printed and script forms. Analysis showed that vowels in printed form evoked responses with shorter latency (component P1 at about 133 msec) and larger amplitude (component P1–N1) than the other letter–typeface combinations. No hemispheric asymmetries were found. The results partially agree with the behavioral data on the visual information processing of letters.

