Rapid ocular responses are a robust marker for bottom-up driven auditory salience

2018 ◽  
Author(s):  
Sijia Zhao ◽  
Nga Wai Yum ◽  
Lucas Benjamin ◽  
Elia Benhamou ◽  
Shigeto Furukawa ◽  
...  

Despite the prevalent use of alerting sounds in alarms and human-machine interface systems, we have only a rudimentary understanding of what determines auditory salience - the automatic attraction of attention by sound - and the brain mechanisms that underlie this process. A major roadblock to understanding has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that microsaccade inhibition - an oculomotor response arising within 300 ms of sound onset - is a robust marker of the salience of brief sounds. Specifically, we show that a 'crowd-sourced' (N=911) subjective salience ranking correlated robustly with the superior colliculus (SC) mediated ocular freezing response measured in naive, passively listening participants (replicated in two groups of N=15 each). More salient sounds evoked earlier microsaccadic inhibition, consistent with a faster orienting response. These results establish that microsaccade-indexed activity within the SC is a practical, objective measure of auditory salience.
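As an illustration of how an inhibition latency of this kind might be extracted from eye-tracking data, the following minimal Python sketch pools microsaccade onset times across trials and reports the first post-onset bin whose rate drops below a fraction of the pre-onset baseline. The bin width, analysis window, and threshold fraction are illustrative assumptions; this is not the paper's actual analysis pipeline.

```python
import numpy as np

def inhibition_latency(ms_times, n_trials, onset=0.0,
                       bin_width=0.05, window=(-0.5, 1.0),
                       drop_fraction=0.5):
    """Estimate microsaccadic-inhibition latency.

    ms_times: microsaccade onset times (s), pooled across trials,
              relative to sound onset at t = 0.
    Returns the centre of the first post-onset bin whose rate falls
    below `drop_fraction` of the pre-onset baseline, or None.
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts, _ = np.histogram(ms_times, bins=edges)
    rate = counts / (n_trials * bin_width)        # microsaccades / s
    centres = edges[:-1] + bin_width / 2
    baseline = rate[centres < onset].mean()       # pre-onset rate
    hit = np.flatnonzero((centres >= onset) &
                         (rate < drop_fraction * baseline))
    return centres[hit[0]] if hit.size else None

# Synthetic example: uniform background rate with a dip 150-400 ms
# after onset (purely illustrative data, not from the study).
rng = np.random.default_rng(1)
t = rng.uniform(-0.5, 1.0, 600)
t = t[~((t > 0.15) & (t < 0.40) & (rng.random(t.size) < 0.8))]
print(inhibition_latency(t, n_trials=100))
```

Under this scheme, earlier inhibition latencies for more salient sounds would show up directly as smaller return values.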

Author(s):  
Caroline A. Miller ◽  
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adult-like receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adult-like morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy. Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate-buffered saline followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.


1990 ◽  
Author(s):  
B. Bly ◽  
P. J. Price ◽  
S. Park ◽  
S. Tepper ◽  
E. Jackson ◽  
...  

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 687
Author(s):  
Jinzhen Dou ◽  
Shanguang Chen ◽  
Zhi Tang ◽  
Chang Xu ◽  
Chengqi Xue

With the development and promotion of driverless technology, researchers are focusing on designing various types of external interfaces to build road users' trust in this new technology. In this paper, we investigated the effectiveness of a multimodal external human–machine interface (eHMI) for driverless vehicles in a virtual environment, focusing on a two-way road scenario. Three phases of the driverless-vehicle-to-pedestrian interaction process were considered: identifying, decelerating, and parking. Twelve eHMIs were proposed, combining three visual features (smile, arrow, and none), three audible features (human voice, warning sound, and none), and two physical features (yielding and not yielding). We conducted a study to identify a more efficient and safer eHMI for driverless vehicles interacting with pedestrians. The outcomes show that, in the yielding case, interaction efficiency and pedestrian safety were better with the multimodal eHMI designs than with single-modal ones. The visual modality in the eHMI of driverless vehicles had the greatest impact on pedestrian safety; within it, the “arrow” was more intuitive to identify than the “smile”.
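To make the factorial design concrete, the sketch below enumerates the full 3 × 3 × 2 feature space. Note that the full crossing yields 18 cells while the abstract reports twelve eHMIs, so six cells were evidently not tested; which six is not stated, so all 18 are listed here for illustration only.

```python
from itertools import product

visual = ["smile", "arrow", "none"]
audible = ["human voice", "warning sound", "none"]
physical = ["yielding", "not yielding"]

# Full crossing: 3 * 3 * 2 = 18 cells. The study reports twelve
# eHMIs, i.e. a subset of these cells; the exact subset is not
# given in the abstract, so no exclusion rule is applied here.
for i, (v, a, p) in enumerate(product(visual, audible, physical), 1):
    print(f"cell {i:2d}: visual={v:5s} audible={a:13s} physical={p}")
```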


Author(s):  
Saverio Trotta ◽  
Dave Weber ◽  
Reinhard W. Jungmaier ◽  
Ashutosh Baheti ◽  
Jaime Lien ◽  
...  

2020 ◽  
pp. 1-12
Author(s):  
Linuo Wang

Injuries and hidden dangers in training have a considerable impact on athletes' careers. In particular, the brain regions that control motor function strongly affect an athlete's competitive ability, so scientific methods are needed to recognize brain function. In this paper, we study the structure of a motor brain-computer interface and improve it relative to traditional methods. Supported by machine learning and SVM techniques, this study uses a DSP filter to convert the preprocessed EEG signal X into a time series, and classifies the data by the distance between time series. To address the inconsistency of DSP algorithms, a multi-layer joint learning framework based on a logistic regression model is proposed, and a sports brain-machine interface system based on machine learning and SVM is constructed. In addition, a control experiment was designed to verify the performance of the proposed method. The results show that the method has practical value and can be applied to sports.
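For orientation, a minimal sketch of a generic EEG-classification pipeline of this family is shown below: band-pass filtering, a simple log-variance feature per channel, and an SVM classifier. The filter band, feature choice, and synthetic data are illustrative assumptions; the paper's specific DSP filter, time-series distance, and multi-layer joint learning framework are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bandpass(eeg, lo=8.0, hi=30.0, fs=250.0, order=4):
    """Band-pass filter each channel; eeg is (trials, channels, samples)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def log_variance_features(eeg):
    """Log-variance per channel: a common, simple motor-EEG feature."""
    return np.log(eeg.var(axis=-1))

# Synthetic stand-in data: 100 trials, 8 channels, 2 s at 250 Hz.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 8, 500))
y = rng.integers(0, 2, 100)              # two motor classes

X = log_variance_features(bandpass(X_raw))
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```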


Primates ◽  
2021 ◽  
Author(s):  
Rie Asano

A central property of human language is its hierarchical structure. Humans can flexibly combine elements to build hierarchical structures expressing rich semantics. Hierarchical structure is also considered to play a key role in many other human cognitive domains. In music, auditory-motor events are combined into hierarchical pitch and/or rhythm structures expressing affect. How did such a hierarchical structure-building capacity evolve? This paper investigates this question from a bottom-up perspective based on a set of action-related components as a shared basis underlying the cognitive capacities of nonhuman primates and humans. In particular, I argue that the evolution of the hierarchical structure-building capacity for language and music becomes tractable for comparative evolutionary study once we focus on the gradual elaboration of shared brain architecture: the cortico-basal ganglia-thalamocortical circuits for hierarchical control of goal-directed action and the dorsal pathways for hierarchical internal models. I suggest that this gradual elaboration of the action-related brain architecture in the context of vocal control and tool-making went hand in hand with the amplification of working memory, and made the brain ready for hierarchical structure building in language and music.


2021 ◽  
Vol 11 (8) ◽  
pp. 3397
Author(s):  
Gustavo Assunção ◽  
Nuno Gonçalves ◽  
Paulo Menezes

Human beings have developed remarkable abilities to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key to refining the perception of sound-source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to describe its underlying neurophysiological process. Drawing inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability has a wide range of applications, from teleconferencing systems to social robotics. The detection approach first routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
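As a rough illustration of such a fusion layer, the PyTorch sketch below combines two unimodal embeddings via an outer product, so that every audio unit interacts with every visual unit, followed by a linear readout. The embedding sizes and the outer-product form are assumptions for illustration; the paper's actual collicular layer and subnetwork architectures are not specified in the abstract.

```python
import torch
import torch.nn as nn

class CollicularFusion(nn.Module):
    """Toy fusion layer loosely inspired by superior-colliculus
    cross-mapping: unimodal embeddings interact via an outer product
    (a batch of d_audio x d_visual interaction maps), then a linear
    readout scores the active-speaker decision. Dimensions and the
    outer-product form are illustrative assumptions."""

    def __init__(self, d_audio=128, d_visual=128, n_classes=2):
        super().__init__()
        self.readout = nn.Linear(d_audio * d_visual, n_classes)

    def forward(self, e_audio, e_visual):
        # (batch, d_a) x (batch, d_v) -> (batch, d_a, d_v) cross-map
        cross = torch.einsum("ba,bv->bav", e_audio, e_visual)
        return self.readout(cross.flatten(1))

# Usage with stand-in unimodal embeddings:
audio_emb = torch.randn(4, 128)   # e.g., from an audio subnetwork
visual_emb = torch.randn(4, 128)  # e.g., from a face-crop subnetwork
logits = CollicularFusion()(audio_emb, visual_emb)
print(logits.shape)               # torch.Size([4, 2])
```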

