A generalized sense of number

2014 ◽  
Vol 281 (1797) ◽  
pp. 20141791 ◽  
Author(s):  
Roberto Arrighi ◽  
Irene Togoli ◽  
David C. Burr

Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on the apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that the numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal, coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.

2018 ◽  
Vol 29 (9) ◽  
pp. 1405-1413 ◽  
Author(s):  
Christine M. Johnson ◽  
Jessica Sullivan ◽  
Jane Jensen ◽  
Cara Buck ◽  
Julie Trexel ◽  
...  

In this study, paradigms that test whether human infants make social attributions to simple moving shapes were adapted for use with bottlenose dolphins. The dolphins observed animated displays in which a target oval would falter while moving upward, and then either a “prosocial” oval would enter and help or caress it or an “antisocial” oval would enter and hinder or hit it. In subsequent displays involving all three shapes, when the pro- and antisocial ovals moved offscreen in opposite directions, the dolphins reliably predicted, based on anticipatory head turns when the target briefly moved behind an occluder, that the target oval would follow the prosocial one. When the roles of the pro- and antisocial ovals were reversed toward a new target, the animals’ continued success suggested that such attributions may be dyad specific. Some of the dolphins also directed high-arousal behaviors toward these displays, further supporting the conclusion that the displays were socially interpreted.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

Abstract
To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. Here we asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, paving the way to studying multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint and the other delayed. In Experiment 1, judgments of the hand-movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration, similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration, and likely also recalibration, is shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
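The two accounts contrasted in Experiment 2 can be expressed as toy models of cue weighting. This is a minimal illustrative sketch, not the authors' fitted model: the endpoint positions and weights below are hypothetical.

```python
# Toy comparison of two accounts of visuo-proprioceptive integration:
# (a) winner-takes-all: only the temporally proximal (synchronous) visual
#     stimulus biases the proprioceptive endpoint estimate;
# (b) superposition: both visual stimuli contribute, each with its own weight.
# All positions (arbitrary units) and weights are hypothetical illustrations.

def winner_takes_all(prop, vis_sync, vis_delayed, w=0.6):
    # Only the synchronous stimulus enters integration; the delayed one is ignored.
    return (1 - w) * prop + w * vis_sync

def superposition(prop, vis_sync, vis_delayed, w_sync=0.5, w_delayed=0.2):
    # Influences of both visual stimuli superpose on the proprioceptive cue.
    w_prop = 1 - w_sync - w_delayed
    return w_prop * prop + w_sync * vis_sync + w_delayed * vis_delayed

# Hypothetical endpoints: hand at 0, synchronous flash at +1, delayed at -1.
prop, vis_sync, vis_delayed = 0.0, 1.0, -1.0
print(winner_takes_all(prop, vis_sync, vis_delayed))  # bias toward sync cue only
print(superposition(prop, vis_sync, vis_delayed))     # net bias reflects both cues
```

The diagnostic difference is that superposition predicts a smaller net bias whenever the two visual cues pull in opposite directions, which is the kind of signature the proprioceptive biases in Experiment 2 were used to test.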


2018 ◽  
Vol 373 (1740) ◽  
pp. 20170043 ◽  
Author(s):  
Marco Zorzi ◽  
Alberto Testolin

The finding that human infants and many other animal species are sensitive to numerical quantity has been widely interpreted as evidence for evolved, biologically determined numerical capacities across unrelated species, thereby supporting a ‘nativist’ stance on the origin of number sense. Here, we tackle this issue within the ‘emergentist’ perspective provided by artificial neural network models, and we build on computer simulations to discuss two different ways of thinking about the innateness of number sense. The first, illustrated by artificial life simulations, shows that numerical abilities can be supported by domain-specific representations emerging from evolutionary pressure. The second assumes that numerical representations need not be genetically pre-determined but can emerge from the interplay between innate architectural constraints and domain-general learning mechanisms, instantiated in deep learning simulations. We show that deep neural networks endowed with basic visuospatial processing exhibit remarkable performance in numerosity discrimination before any experience-dependent learning, whereas unsupervised sensory experience with visual sets leads to subsequent improvement of number acuity and reduces the influence of continuous visual cues. The emergent neuronal code for numbers in the model includes both numerosity-sensitive (summation coding) and numerosity-selective response profiles, closely mirroring those found in monkey intraparietal neurons. We conclude that a form of innatism based on architectural and learning biases is a fruitful approach to understanding the origin and development of number sense. This article is part of a discussion meeting issue ‘The origins of numerical abilities’.
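The two response profiles named in the abstract can be illustrated with a minimal sketch. The tuning parameters below are hypothetical, not the model's; the log-scale Gaussian tuning follows the profile commonly reported for numerosity-selective intraparietal neurons.

```python
import math

# Two idealized unit types in an emergent code for number
# (illustrative parameters, not those of the paper's model):
# - summation coding: response grows monotonically with numerosity;
# - numerosity-selective: response peaks at a preferred numerosity,
#   with Gaussian tuning on a logarithmic numerosity axis.

def summation_unit(n, gain=0.1):
    # Monotonic in numerosity: larger sets drive larger responses.
    return gain * n

def selective_unit(n, preferred=8, width=0.4):
    # Peaked tuning curve, symmetric on a log scale around `preferred`.
    return math.exp(-((math.log(n) - math.log(preferred)) ** 2)
                    / (2 * width ** 2))

for n in (2, 4, 8, 16, 32):
    print(n, round(summation_unit(n), 2), round(selective_unit(n), 3))
```

A population of selective units with different preferred numerosities supports discrimination between nearby numerosities, while summation units carry a coarse magnitude signal; the abstract reports that both profiles emerge in the deep network.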


2021 ◽  
Vol 12 ◽  
Author(s):  
LomaJohn T. Pendergraft ◽  
John M. Marzluff ◽  
Donna J. Cross ◽  
Toru Shimizu ◽  
Christopher N. Templeton

Social interaction among animals can occur under many contexts, such as during foraging. Our knowledge of the regions within an avian brain associated with social interaction is limited to the regions activated by a single context or sensory modality. We used 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to examine American crow (Corvus brachyrhynchos) brain activity in response to conditions associated with communal feeding. Using a paired approach, we exposed crows to either a visual stimulus (the sight of food), an auditory stimulus (the sound of conspecifics vocalizing while foraging) or both stimuli presented simultaneously, and compared brain activity in each condition to activity in response to a control stimulus (an empty stage). We found two regions, the nucleus taenia of the amygdala (TnA) and a medial portion of the caudal nidopallium, that showed increased activity in response to the multimodal combination of stimuli but not in response to either stimulus presented unimodally. We also found significantly increased activity in the lateral septum and medially within the nidopallium in response to both the audio-only and the combined audio/visual stimuli. We did not find any differences in activation in response to the visual stimulus by itself. We discuss how these regions may be involved in the processing of multimodal stimuli in the context of social interaction.


2020 ◽  
Author(s):  
HiJee Kang ◽  
Ryszard Auksztulewicz ◽  
Chi Hong Chan ◽  
Drew Cappotto ◽  
Vani Gurusamy Rajendran ◽  
...  

Abstract
Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, temporal patterns implicitly learned within one modality can also be recognised in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers (N = 24; 12 females, 12 males) using electroencephalography (EEG) while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials contained a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment, inducing implicit learning. Replicating previous behavioural findings, we showed that participants benefit from temporal information learned in one modality and can apply this information to stimuli presented in another modality. Using an analysis of EEG response learning curves, we showed that learning temporal structures within modalities modulates single-trial EEG response amplitudes and that these effects could be localised to modality-specific cortical regions. Furthermore, learning transfer across modalities was associated with modulations of single-trial EEG response amplitudes, as well as beta-band power in the right inferior frontal gyrus. The neural effects of learning transfer were similar whether temporal information learned in audition was transferred to visual stimuli or vice versa. Thus, both modality-specific mechanisms for learning temporal information and general mechanisms mediating learning transfer across modalities have distinct physiological signatures observable in the EEG.

Significance statement
Temporal patterns governing sensory stimuli can be extracted and used to optimise perceptual processing. However, it is unclear what brain mechanisms mediate the learning of temporal information within a sensory modality and how the effects of learning can be applied to another modality. Here, we presented auditory and visual stimuli to human participants while recording their brain activity using electroencephalography (EEG). We observed behavioural benefits and neural signatures of subconscious temporal pattern learning within a sensory modality, as well as transfer of patterns from one modality to another (audition to vision and vice versa). Interestingly, the neural correlates of temporal learning within modalities relied on modality-specific brain regions, while learning transfer affected activity in frontal regions, suggesting distinct mechanisms.


2019 ◽  
Author(s):  
Elisa Filevich ◽  
Christina Koß ◽  
Nathan Faivre

Abstract
Confidence judgements are a central tool for research in metacognition. In a typical task, participants first perform perceptual (first-order) decisions and then rate their confidence in these decisions. The relationship between confidence and first-order accuracy is taken as a measure of metacognitive performance. Confidence is often assumed to stem from decision-monitoring processes alone, but processes that co-occur with the first-order decision may also play a role in confidence formation. In fact, across a broad range of tasks, trials with quick reaction times in the first-order task are often judged with relatively higher confidence than those with slow responses. This robust finding suggests that confidence could be informed by a readout of reaction times in addition to decision-monitoring processes. To test this possibility, we assessed the contribution of response-related signals to confidence and, in particular, to metacognitive performance (i.e., a measure of the adequacy of these confidence judgements). In a factorial design, we measured the effect of making an overt (vs. covert) decision, as well as the effect of pairing a motor action with the stimulus about which the first-order decision is made. Against our expectations, we found no differences in overall confidence or metacognitive performance when first-order responses were covert as opposed to overt. Further, actions paired with the visual stimuli led to higher confidence ratings but did not affect metacognitive performance. These results suggest that some of the relationships between first-order decisional signals and confidence might indeed be correlational, attributable to an upstream cognitive process common to both.
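One standard way to quantify metacognitive performance of the kind the abstract describes is a type-2 ROC area: how well confidence ratings discriminate correct from incorrect first-order decisions. The sketch below uses hypothetical trial data; published work often uses related measures such as meta-d′, which this simple statistic only approximates.

```python
# Minimal sketch of a type-2 sensitivity measure: the probability that a
# randomly chosen correct trial carries higher confidence than a randomly
# chosen incorrect trial (ties count half). Trial data are hypothetical.

def type2_auroc(confidence, correct):
    """Type-2 ROC area from per-trial confidence ratings and accuracy."""
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    pairs = [(h, m) for h in hits for m in misses]
    score = sum(1.0 if h > m else 0.5 if h == m else 0.0 for h, m in pairs)
    return score / len(pairs)

conf = [4, 3, 1, 2, 4, 1]      # confidence ratings on a 1-4 scale
acc  = [1, 1, 0, 0, 1, 0]      # first-order accuracy per trial
print(type2_auroc(conf, acc))  # 1.0: confidence perfectly tracks accuracy
```

A value of 0.5 means confidence carries no information about accuracy; the study's question is whether manipulations of response-related signals (overt vs. covert responses, paired actions) shift this kind of measure, and the abstract reports that they did not.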


2015 ◽  
Vol 15 (12) ◽  
pp. 797
Author(s):  
Vladislav Ayzenberg ◽  
Matthew Longo ◽  
Stella Lourenco

2019 ◽  
Author(s):  
AT Zai ◽  
S Cavé-Lopez ◽  
M Rolland ◽  
N Giret ◽  
RHR Hahnloser

Abstract
Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. What aspects of substitution determine whether subjects accept an artificial sense and whether it benefits their voluntary action repertoire? To obtain an evolutionary perspective on the affective valence implied in sensory substitution, we introduce an animal model of deaf songbirds. As a substitute for auditory feedback, we provide binary visual feedback. Deaf birds respond appetitively to song-contingent visual stimuli and skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to a vocal repertoire. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a bias for actions that elicit feedback meeting animals’ manipulation drive, which has implications beyond rehabilitation.


2018 ◽  
Vol 31 (3-4) ◽  
pp. 213-225 ◽  
Author(s):  
Jenni Heikkilä ◽  
Petra Fagerlund ◽  
Kaisa Tiippana

In the course of normal aging, memory functions show signs of impairment. Studies of memory in the elderly have previously focused on a single sensory modality, although multisensory encoding has been shown to improve memory performance in children and young adults. In this study, we investigated how audiovisual encoding affects auditory recognition memory in older (mean age 71 years) and younger (mean age 23 years) adults. Participants memorized auditory stimuli (sounds, spoken words) presented either alone or with semantically congruent visual stimuli (pictures, text) during encoding. Subsequent recognition memory for auditory stimuli was better for stimuli initially presented together with visual stimuli than for auditory stimuli presented alone during encoding. This facilitation was observed in both older and younger participants, although overall memory performance was poorer in older participants. However, the pattern of facilitation was influenced by age: when encoding spoken words, the gain was greater for older adults; when encoding sounds, the gain was greater for younger adults. These findings show that semantically congruent audiovisual encoding improves memory performance in late adulthood, particularly for auditory verbal material.

