The Role of Vehicle Characteristics in Drivers' Perception of Automobile Velocity

1978, Vol 22 (1), pp. 110-114
Author(s): Michael L. Matthews, Lawrence R. Cousins

Velocity production in the absence of speedometer information is investigated as a function of car size. In the first experiment, three vehicles of different sizes were supplied by the experimenters; in the second, a different sample of drivers used their own vehicles. In both experiments, subjects performed under normal and auditory-attenuated conditions. Results indicated greater production accuracy in small cars than in large cars, and a tendency for drivers of small cars to make greater use of auditory information.

Author(s): Christina M. Vanden Bosch der Nederlanden, J. Eric T. Taylor, Jessica A. Grahn

To understand and enjoy music, it is important to be able to hear the beat and move your body to the rhythm. However, impaired rhythm processing has a broader impact on perception and cognition beyond music-specific tasks. We also experience rhythms in our everyday interactions: in the lip and jaw movements of someone we watch speaking, in the syllabic structure of words on the radio, and in the movements of our limbs as we walk. Impairments in the ability to perceive and produce rhythms are related to poor language outcomes, such as dyslexia, and can provide an index of a primary symptom in movement disorders, such as Parkinson's disease. The chapter summarizes a growing body of literature examining the neural underpinnings of rhythm perception and production. It highlights the importance of auditory-motor relationships in finding and producing a beat in music by reviewing evidence from a number of methodologies. These approaches illustrate how rhythmic auditory information capitalizes on auditory-motor interactions to influence motor excitability, and how beat perception emerges as a function of nonlinear oscillatory dynamics of the brain. Together these studies highlight the important role of rhythm in human development, evolutionary comparisons, multi-modal perception, mirror neurons, language processing, and music.


2021
Author(s): James McGregor, Abigail Grassler, Paul I. Jaffe, Amanda Louise Jacob, Michael Brainard, ...

Songbirds and humans share the ability to adaptively modify their vocalizations based on sensory feedback. Prior studies have focused primarily on the role that auditory feedback plays in shaping vocal output throughout life. In contrast, it is unclear whether and how non-auditory information drives vocal plasticity. Here, we first used a reinforcement learning paradigm to establish that non-auditory feedback can drive vocal learning in adult songbirds. We then assessed the role of a songbird basal ganglia-thalamocortical pathway critical to auditory vocal learning in this novel form of vocal plasticity. We found that both this circuit and its dopaminergic inputs are necessary for non-auditory vocal learning, demonstrating that this pathway is not specialized exclusively for auditory-driven vocal learning. The ability of this circuit to use both auditory and non-auditory information to guide vocal learning may reflect a general principle for the neural systems that support vocal plasticity across species.


2020, pp. 002383091989888
Author(s): Luma Miranda, Marc Swerts, João Moraes, Albert Rilliard

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced a sentence (“Como você sabe”), either as a statement (meaning “As you know.”) or as an echo question (meaning “As you know?”). Experiments were set up including the two different intonation contours. Stimuli were presented in conditions with clear and degraded audio, as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements and questions both prosodically and visually, with auditory cues being dominant over visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, whereas it degraded interpretation when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, including when applied to prosodic patterns.


1990, Vol 13 (2), pp. 201-233
Author(s): Risto Näätänen

Abstract: This article examines the role of attention and automaticity in auditory processing as revealed by event-related potential (ERP) research. An ERP component called the mismatch negativity, generated by the brain's automatic response to changes in repetitive auditory input, reveals that physical features of auditory stimuli are fully processed whether or not they are attended. It also suggests that there exist precise neuronal representations of the physical features of recent auditory stimuli, perhaps the traces underlying acoustic sensory (“echoic”) memory. A mechanism of passive attention switching in response to changes in repetitive input is also implicated.

Conscious perception of discrete acoustic stimuli might be mediated by some of the mechanisms underlying another ERP component (N1), one sensitive to stimulus onset and offset. Frequent passive attentional shifts might account for the effect cognitive psychologists describe as “the breakthrough of the unattended” (Broadbent 1982), that is, that even unattended stimuli may be semantically processed, without assuming automatic semantic processing or late selection in selective attention.

The processing negativity supports the early-selection theory and may arise from a mechanism for selectively attending to stimuli defined by certain features. This stimulus selection occurs in the form of a matching process in which each input is compared with the “attentional trace,” a voluntarily maintained representation of the task-relevant features of the stimulus to be attended. The attentional mechanism described might underlie the stimulus-set mode of attention proposed by Broadbent. Finally, a model of automatic and attentional processing in audition is proposed that is based mainly on the aforementioned ERP components and some other physiological measures.


2019, Vol 80 (02), pp. 111-119
Author(s): Kelsey Dumanch, Gayla Poling

Objectives: To provide an introduction to the role of audiological evaluations, with special reference to patients with skull base disease.
Design: Review article with a case-based overview of the current state of the practice of diagnostic audiology, highlighting the multifaceted clinical toolbox and the value of mechanism-based audiological evaluations that contribute to otologic differential diagnosis.
Setting: Current state of the practice of diagnostic audiology.
Main Outcome Measures: Understanding of audiological evaluation results in clinical practice and the value of contributions to interdisciplinary teams in identifying and quantifying dysfunction along the auditory pathway and its subsequent effects.
Results: Accurate auditory information is best captured with a test battery that consists of various assessment crosschecks and mechanism-driven assessments.
Conclusion: Audiologists utilize a comprehensive clinical toolbox to gather information for the assessment, diagnosis, and management of numerous pathologies. This information, in conjunction with a thorough medical review, provides mechanism-specific contributions to the otologic and lateral skull base differential diagnosis.


2007, Vol 21 (3-4), pp. 251-264
Author(s): Carles Escera, M.J. Corral

It has been proposed that the functional role of the mismatch negativity (MMN) generating process is to issue a call for focal attention toward any auditory change that violates the preceding acoustic regularity. This paper reviews the evidence supporting such a functional role and outlines a model of how the attentional system controls the flow of bottom-up auditory information with regard to ongoing task demands in order to organize goal-oriented behavior. Specifically, data obtained in auditory-auditory and auditory-visual distraction paradigms demonstrate that the unexpected occurrence of deviant auditory stimuli or novel sounds captures attention involuntarily, disrupting current task performance. These data indicate that such distraction takes place in three successive stages associated, respectively, with MMN, P3a/novelty-P3, and the reorienting negativity (RON), and that the latter two are modulated by the demands of the task at hand.


2017, Vol 30 (7-8), pp. 653-679
Author(s): Nida Latif, Agnès Alsius, K. G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. On any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by individuals engaged in a natural conversation with a partner. These utterances occurred either right before a turn exchange (i.e., when the current talker would finish and the other would begin) or at a point where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only, and visual-only versions of the stimuli and identified whether or not a turn exchange would occur. We demonstrated that although participants could identify turn exchanges from unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges in which the talker, the listener, or both were visible. Participants suffered a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.


2015, Vol 9 (3-4), p. 161
Author(s): Rebecca S. Schaefer

Music is created in the listener as it is perceived and interpreted; its meaning derives from our unique sense of it, which likely drives the range of interpersonal differences found in music processing. Person-specific mental representations of music are thought to unfold on multiple levels as we listen, spanning from an entire piece of music to regularities detected across notes. As we track incoming auditory information, predictions are generated at different levels for different musical aspects, leading to specific percepts and behavioral outputs and illustrating a tight coupling of cognition, perception, and action. This coupling, together with the prominent role of prediction in music processing, fits well with recently described ideas about the role of predictive processing in cognitive function, which appear especially well suited to account for the role of mental models in musical perception and action. Investigating the cerebral correlates of constructive music imagination offers an experimentally tractable approach to clarifying how mental models of music are represented in the brain. I suggest here that the mental representations underlying imagery are multimodal, informed and modulated by the body and its inputs and outputs, while perception and action are informed and modulated by predictions based on mental models.


Author(s): Anita Senthinathan, Scott Adams, Allyson D. Page, Mandar Jog

Purpose: Hypophonia (low speech intensity) is the most common speech symptom experienced by individuals with Parkinson's disease (IWPD). Previous research suggests that, in IWPD, there may be abnormal integration of sensory information for the motor production of speech intensity. In the current study, the intensity of auditory feedback was systematically manipulated (altered in both positive and negative directions) during sensorimotor conditions known to modulate speech intensity in everyday contexts, in order to better understand the role of auditory feedback in speech intensity regulation.
Method: Twenty-six IWPD and 24 neurologically healthy controls were asked to complete the following tasks: converse with the experimenter, start vowel production, and read sentences at a comfortable loudness, while hearing their own speech intensity randomly altered. Altered intensity feedback conditions included 5-, 10-, and 15-dB reductions and increases in feedback intensity. Speech tasks were completed in no noise and in background noise.
Results: IWPD displayed a reduced response to the altered intensity feedback compared to control participants. This reduced response was most apparent when participants were speaking in background noise. Specific task-based differences were observed, such that the reduced response by IWPD was most pronounced during the conversation task.
Conclusions: The current study suggests that IWPD have abnormal processing of auditory information for speech intensity regulation, and that this disruption particularly impacts their ability to regulate speech intensity during speech tasks with clear communicative goals (i.e., conversational speech) and when speaking in background noise.
