Temporal-Order Discrimination for Selected Auditory and Visual Stimulus Dimensions

1998 ◽  
Vol 41 (2) ◽  
pp. 300-314 ◽  
Author(s):  
Dennis J. McFarland ◽  
Anthony T. Cacace ◽  
Gavin Setzen

Thresholds for the discrimination of temporal order were determined for selected auditory and visual stimulus dimensions in 10 normal-adult volunteers. Auditory stimuli consisted of binary pure tones varying in frequency or sound pressure level, and visual stimuli consisted of binary geometric forms varying in size, orientation, or color. We determined the effect of psychophysical method and the reliability of performance across stimulus dimensions. Using a single-track adaptive procedure, Experiment 1 showed that temporal-order thresholds (TOTs) varied with stimulus dimension, being lowest for auditory frequency, intermediate for size, orientation, and auditory level, and longest for color. Test performance improved over sessions, and the profile of thresholds across stimulus dimensions had modest reliability. Experiment 2 used a double-interleaved adaptive procedure, and TOTs were ordered similarly to those in Experiment 1. However, TOTs were significantly lower for initially ascending versus descending tracks. With this method, the reliability of the profile across stimulus dimensions and tracks was relatively low. In Experiment 3, psychometric functions were obtained for each of the stimulus dimensions, and thresholds were defined as the interpolated 70.7% correct point. The relative ordering of TOTs was similar to that obtained in the first two experiments. Non-monotonicities were found in some of the psychometric functions, the most prominent being for the color dimension. A cross-experiment comparison of results demonstrates that TOTs and their reliability are significantly influenced by the psychophysical method. Taken together, these results support the notion that the temporal resolution of ordered stimuli involves perceptual mechanisms specific to a given sensory modality or submodality.
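The single-track adaptive procedure and the 70.7%-correct threshold point referenced above are consistent with a transformed up-down (two-down/one-up) staircase, which converges on approximately 70.7% correct (Levitt, 1971). A minimal Python sketch of one such track follows; the step sizes, starting value, and the `respond` callback are illustrative assumptions, not the authors' implementation.

```python
def two_down_one_up(respond, start_ms=200.0, step_ms=10.0,
                    floor_ms=1.0, max_reversals=12):
    """Transformed up-down (two-down/one-up) staircase sketch.

    Converges on the ~70.7%-correct point. `respond(soa_ms)` stands in
    for a trial: it should return True when the temporal-order judgment
    at that stimulus onset asynchrony is correct. Illustrative only.
    """
    soa = start_ms
    run_of_correct = 0
    last_direction = None          # +1 = stepped down (harder), -1 = up
    reversals = []

    while len(reversals) < max_reversals:
        if respond(soa):
            run_of_correct += 1
            if run_of_correct < 2:
                continue           # need two correct in a row to step down
            run_of_correct = 0
            direction, new_soa = +1, max(floor_ms, soa - step_ms)
        else:
            run_of_correct = 0     # a single error steps up immediately
            direction, new_soa = -1, soa + step_ms
        if last_direction is not None and direction != last_direction:
            reversals.append(soa)  # record the SOA at each reversal
        last_direction = direction
        soa = new_soa

    # Threshold: mean SOA over the final reversals (a common convention)
    tail = reversals[-8:]
    return sum(tail) / len(tail)
```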

2021 ◽  
pp. 1-12
Author(s):  
Anna Borgolte ◽  
Ahmad Bransi ◽  
Johanna Seifert ◽  
Sermin Toto ◽  
Gregor R. Szycik ◽  
...  

Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
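A simultaneity judgement task of the kind described above varies the SOA (and thereby the order of the two stimuli) and tallies the proportion of "simultaneous" responses per SOA. A minimal sketch follows; the SOA levels, trial counts, and the `judge_simultaneous` callback are assumptions for illustration, not values from the study.

```python
import random
from collections import defaultdict

# Negative SOA = vision leads, positive = audition leads. These levels
# are illustrative assumptions, not the values used in the study above.
SOAS_MS = [-300, -200, -100, -50, 0, 50, 100, 200, 300]

def run_sj_block(judge_simultaneous, trials_per_soa=20):
    """Proportion of 'simultaneous' responses at each SOA.

    `judge_simultaneous(soa_ms)` stands in for the subject and returns
    True when the audiovisual pair is judged simultaneous.
    """
    order = SOAS_MS * trials_per_soa
    random.shuffle(order)                  # randomized trial order
    counts = defaultdict(int)
    for soa in order:
        if judge_simultaneous(soa):
            counts[soa] += 1
    return {soa: counts[soa] / trials_per_soa for soa in SOAS_MS}
```

Comparing the resulting curves for negative (vision-leads) versus positive (audition-leads) SOAs is one way to express the group difference reported above.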


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. Here we asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration, similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration (and likely also recalibration) is shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
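The two accounts contrasted in Experiment 2 can be written as toy predictions for the visually induced bias of the proprioceptive endpoint estimate; the weights below are free parameters chosen for illustration, not values estimated in the paper.

```python
def bias_superposition(prop, v_sync, v_delayed, w_sync=0.5, w_delayed=0.2):
    """Superposition account: both visual stimuli pull the
    proprioceptive estimate toward themselves, each with its own
    weight. Positions are 1-D coordinates; weights are illustrative."""
    return w_sync * (v_sync - prop) + w_delayed * (v_delayed - prop)

def bias_winner_takes_all(prop, v_sync, v_delayed, w=0.5):
    """Winner-takes-all account: only the temporally proximal
    (synchronous) stimulus enters integration; the delayed stimulus
    contributes nothing."""
    return w * (v_sync - prop)
```

The difference the reported proprioceptive biases discriminate between is whether the delayed stimulus contributes a smaller but nonzero pull (superposition) or exactly zero (winner-takes-all).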


1979 ◽  
Vol 10 (2) ◽  
pp. 126-131
Author(s):  
Glee C. Hoskins ◽  
Carolyn S. Collins

Second-, fourth-, sixth-, and eighth-grade children’s comprehension of temporal order was measured in center-embedded and right-embedded relative clause sentences that described a specific order of events through the variation of verb tenses in each clause. The results indicate differences in the comprehension of these sentences according to the type of embedding and the subject’s age. A variety of errors suggested that subjects used several strategies. However, an order-of-mention strategy appeared to be the major technique used to determine the temporal order of events described in each sentence. The results have educational implications regarding children’s test performance, reading comprehension, and ability to follow directions.


1998 ◽  
Vol 10 (5) ◽  
pp. 581-589 ◽  
Author(s):  
Elisabetta Làdavas ◽  
Giuseppe di Pellegrino ◽  
Alessandro Farnè ◽  
Gabriele Zeloni

Current interpretations of extinction suggest that the disorder is due to an unbalanced competition between ipsilesional and contralesional representations of space. The question addressed in this study is whether the competition between left and right representations of space in one sensory modality (i.e., touch) can be reduced or exacerbated by the activation of an intact spatial representation in a different modality that is functionally linked to the damaged representation (i.e., vision). This hypothesis was tested in 10 right-hemisphere lesioned patients who suffered from reliable tactile extinction. We found that a visual stimulus presented near the patient's ipsilesional hand (i.e., visual peripersonal space) inhibited the processing of a tactile stimulus delivered on the contralesional hand (cross-modal visuotactile extinction) to the same extent as did an ipsilesional tactile stimulation (unimodal tactile extinction). It was also found that a visual stimulus presented near the contralesional hand improved the detection of a tactile stimulus applied to the same hand. In striking contrast, less modulatory effects of vision on touch perception were observed when a visual stimulus was presented far from the space immediately around the patient's hand (i.e., extrapersonal space). This study clearly demonstrates the existence of a visual peripersonal space centered on the hand in humans and its modulatory effects on tactile perception. These findings are explained by referring to the activity of bimodal neurons in premotor and parietal cortex of macaque, which have tactile receptive fields on the hand and corresponding visual receptive fields in the space immediately adjacent to the tactile fields.


2010 ◽  
Vol 18 (1) ◽  
pp. 87-98 ◽  
Author(s):  
Lisa A. Barella ◽  
Jennifer L. Etnier ◽  
Yu-Kai Chang

Research on the acute effects of exercise on cognitive performance by older adults is limited by a focus on nonhealthy populations. Furthermore, the duration of cognitive improvements after exercise has not been examined. Thus, this study was designed to test the immediate and delayed effects of acute exercise on cognitive performance of healthy older adults. Cognitive performance was assessed using the Stroop task. Participants were randomly assigned to an exercise (20 min of walking) or control (sitting quietly) condition. The Stroop task was administered at baseline and at 12 time points after treatment. Acute exercise resulted in better Stroop test performance immediately postexercise; however, the effects were limited to the color test. No effects of exercise on performance were observed for the Stroop interference or inhibition tests. Findings suggest that acute exercise performed by healthy older adults has short-term benefits for speed of processing but does not affect other types of cognitive functioning.


2020 ◽  
Vol 29 (3S) ◽  
pp. 564-576 ◽  
Author(s):  
Alessia Paglialonga ◽  
Edoardo Maria Polo ◽  
Marco Zanet ◽  
Giulia Rocco ◽  
Toon van Waterschoot ◽  
...  

Purpose The aim of this study was to develop and evaluate a novel, automated speech-in-noise test viable for widespread in situ and remote screening. Method Vowel–consonant–vowel sounds, recorded by a professional male native English speaker, were used in a multiple-choice consonant discrimination task. A novel adaptive staircase procedure was developed, based on the estimated intelligibility of stimuli rather than on theoretical binomial models. Test performance was assessed in a population of 26 young adults (YAs) with normal hearing and in 72 unscreened adults (UAs), including native and nonnative English listeners. Results The proposed test provided accurate estimates of the speech recognition threshold (SRT) compared to a conventional adaptive procedure. Consistent outcomes were observed in YAs across test/retest and controlled/uncontrolled conditions, and in UAs across native and nonnative listeners. The SRT increased with increasing age, hearing loss, and self-reported hearing handicap in UAs. Test duration was similar in YAs and UAs irrespective of age and hearing loss. The test–retest repeatability of SRTs was high (Pearson correlation coefficient = .84), and the pass/fail outcomes of the test were reliable in repeated measures (Cohen's κ = .8). The test was accurate in identifying ears with pure-tone thresholds > 25 dB HL (accuracy = 0.82). Conclusion This study demonstrated the viability of the proposed test in listeners of varying language backgrounds in terms of accuracy, reliability, and short test time. Further research is needed to validate the test in a larger population across a wider range of languages and degrees of hearing loss and to identify optimal classification criteria for screening purposes.
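The pass/fail reliability reported above (Cohen's κ = .8) is a standard chance-corrected agreement statistic; a minimal sketch of its computation for paired test/retest screening outcomes follows. Variable names are illustrative and this is not the study's analysis code.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for paired binary pass(1)/fail(0) outcomes,
    e.g. test vs. retest screening results. Illustrative sketch."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each measurement's marginal pass rate
    p_a, p_b = sum(labels_a) / n, sum(labels_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:            # degenerate case: all labels identical
        return 1.0
    return (observed - expected) / (1 - expected)
```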


2021 ◽  
Author(s):  
Vincent van de Ven

We experience our daily lives as an ongoing series of impressions, but we cognitively process those impressions as segmented events. Segmentation is based on contextual changes, such as the spatial environment, moment in time, or social surrounding, which serve as event boundaries that assign experiences from different contexts to different event models. However, event segmentation affects perceptual and mnemonic processing depending on the temporal proximity of experiences to event boundaries. Most event segmentation studies have used unisensory visual or auditory contexts, of which the visual modality is overrepresented. In this study, we directly compared the effect of unisensory vs. multisensory boundaries on event segmentation. Participants encoded lists of visual objects while unisensory (audio or visual) or multisensory (audiovisual) context changes occurred at a regular interval. We assessed the effect of contextual changes on an encoding task and on two memory tasks probing perceptual recognition and the temporal order of encoded objects. We found that audio and audiovisual contexts resulted in longer encoding times than the visual context. Contextual changes impaired recognition memory for boundary items and impaired temporal-order memory for item pairs crossing a boundary, but these effects did not differ between unisensory and multisensory contexts. Our findings suggest that the sensory modality of event boundaries modulated perceptual but not mnemonic event processing, and they provide further understanding of how we segment our experiences in perception and memory.
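The across-boundary analysis described above requires classifying each tested item pair by whether an event boundary fell between the two encoding positions. A hedged scoring sketch follows; the data structures are illustrative assumptions, not the study's actual pipeline.

```python
def crosses_boundary(pos1, pos2, boundaries):
    """True if any event boundary index falls between the two encoding
    positions (boundary b is taken to occur just before item b)."""
    lo, hi = sorted((pos1, pos2))
    return any(lo < b <= hi for b in boundaries)

def order_accuracy_by_boundary(trials, boundaries):
    """Split temporal-order accuracy into within-event vs
    across-boundary pairs. Each trial is (pos1, pos2, correct)."""
    within, across = [], []
    for pos1, pos2, correct in trials:
        bucket = across if crosses_boundary(pos1, pos2, boundaries) else within
        bucket.append(correct)
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {"within": mean(within), "across": mean(across)}
```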


1998 ◽  
Vol 9 (2) ◽  
pp. 135-138 ◽  
Author(s):  
Jüri Allik ◽  
Kairi Kreegipuu

It has long been known that a dark visual stimulus is seen later than a bright one, with a delay of up to several tens of milliseconds. Systematic studies of various phenomena demonstrating this delay have revealed that perceptual latency decreases monotonically as stimulus intensity increases. Because latencies measured by psychological methods and cortical evoked responses are very similar to electroretinogram latencies, it has become a common belief that there is little in the intensity-dependent latency function that cannot be explained by retinal processes. In this study, we report evidence that there is no single absolute visual delay common to the whole visual system; rather, the delay varies considerably across perceptual subsystems. The relative visual latency was found to be considerably shorter in a task involving detecting the direction of movement than in other perceptual tasks that presume visual awareness of the onset or temporal order of visual events.
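The monotonic intensity-latency relation summarized above is commonly modeled by Piéron's law; the formulation below is a standard textbook form, not an equation given in the abstract itself.

```latex
% Pieron's law: perceptual latency L decreases monotonically with
% stimulus intensity I. L_0 is an asymptotic residual latency, k a
% scale factor, and \beta > 0 an exponent. On the account reported
% above, these parameters would be fitted separately per perceptual
% subsystem rather than once for the whole visual system.
L(I) \;=\; L_0 + k \, I^{-\beta}, \qquad \beta > 0
```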


2021 ◽  
Vol 12 ◽  
Author(s):  
LomaJohn T. Pendergraft ◽  
John M. Marzluff ◽  
Donna J. Cross ◽  
Toru Shimizu ◽  
Christopher N. Templeton

Social interaction among animals can occur under many contexts, such as during foraging. Our knowledge of the regions within the avian brain associated with social interaction is limited to regions activated by a single context or sensory modality. We used 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to examine American crow (Corvus brachyrhynchos) brain activity in response to conditions associated with communal feeding. Using a paired approach, we exposed crows to a visual stimulus (the sight of food), an audio stimulus (the sound of conspecifics vocalizing while foraging), or both audio and visual stimuli presented simultaneously, and compared their brain activity with that in response to a control stimulus (an empty stage). We found two regions, the nucleus taenia of the amygdala (TnA) and a medial portion of the caudal nidopallium, that showed increased activity in response to the multimodal combination of stimuli but not in response to either stimulus presented unimodally. We also found significantly increased activity in the lateral septum and medially within the nidopallium in response to both the audio-only and the combined audio/visual stimuli. We did not find any differences in activation in response to the visual stimulus by itself. We discuss how these regions may be involved in the processing of multimodal stimuli in the context of social interaction.

