Receptive Fields
Recently Published Documents

TOTAL DOCUMENTS: 2951 (FIVE YEARS: 419)
H-INDEX: 147 (FIVE YEARS: 9)

Author(s):  
Line Sofie Loken ◽  
Helena Backlund Wasling ◽  
Håkan Olausson ◽  
Francis McGlone ◽  
Johan Wessberg

Unmyelinated C-tactile (CT) afferents are abundant in the hairy skin of the arm and have been suggested to signal features of social affective touch. Here we recorded from unmyelinated low-threshold mechanosensitive afferents in the peroneal and radial nerves, with the most distal receptive fields located on the proximal phalanx of the third finger for the superficial branch of the radial nerve, and near the lateral malleolus for the peroneal nerve. We found that the physiological properties with regard to conduction velocity and mechanical threshold, as well as the tuning to brush velocity, were similar in CT units across the antebrachial (n=27), radial (n=8) and peroneal (n=4) nerves. Moreover, we found that while CT afferents are readily found during microneurography of the arm nerves, they appear to be much sparser in the lower leg than C nociceptors. We went on to explore the chemical sensitivity of CT afferents and found that they could not be activated by topical application of either the cooling agent menthol or the pruritogen histamine to their receptive fields. In light of previous studies showing the combined effects that temperature and mechanical stimuli have on these neurons, these findings add to the growing body of research suggesting that CT afferents constitute a unique class of sensory afferents with highly specialized mechanisms for transducing gentle touch.


2022 ◽  
Vol 15 ◽  
Author(s):  
Chongwen Wang ◽  
Zicheng Wang

Facial action unit (AU) detection is an important task in affective computing and has attracted extensive attention in computer vision and artificial intelligence. Previous studies of AU detection usually encode complex regional feature representations with manually defined facial landmarks and learn to model the relationships among AUs via graph neural networks. Although some progress has been achieved, existing methods still struggle to capture the exclusive and concurrent relationships among different combinations of facial AUs. To circumvent this issue, we propose a new progressive multi-scale vision transformer (PMVT) that captures the complex relationships among different AUs for a wide range of expressions in a data-driven fashion. PMVT is based on a multi-scale self-attention mechanism that can flexibly attend to a sequence of image patches to encode the critical cues for AUs. Compared with previous AU detection methods, the benefits of PMVT are twofold: (i) PMVT does not rely on manually defined facial landmarks to extract regional representations, and (ii) PMVT encodes facial regions with adaptive receptive fields, thus flexibly representing different AUs. Experimental results show that PMVT improves AU detection accuracy on the popular BP4D and DISFA datasets, with consistent improvements over other state-of-the-art AU detection methods. Visualization results show that PMVT automatically identifies the discriminative facial regions for robust AU detection.
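
No code accompanies the abstract; the following is a minimal, hypothetical sketch of multi-scale patch self-attention, illustrating how attending over patch tokens of several sizes gives adaptive receptive fields. The patch sizes, embedding width, head count, and number of AUs are assumptions, not the authors' PMVT configuration.

```python
# Hypothetical sketch only: multi-scale patch self-attention for AU detection.
# Patch sizes, embedding width, and the number of AUs are assumptions and do
# not reproduce the authors' PMVT architecture.
import torch
import torch.nn as nn

class MultiScalePatchAttention(nn.Module):
    def __init__(self, in_ch=3, dim=128, patch_sizes=(8, 16, 32), num_aus=12):
        super().__init__()
        # One patch-embedding convolution per scale; larger patches give the
        # attention layer a larger effective receptive field.
        self.embeds = nn.ModuleList(
            [nn.Conv2d(in_ch, dim, kernel_size=p, stride=p) for p in patch_sizes]
        )
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, num_aus)  # one logit per facial action unit

    def forward(self, x):
        tokens = []
        for embed in self.embeds:
            t = embed(x)                                  # (B, dim, H/p, W/p)
            tokens.append(t.flatten(2).transpose(1, 2))   # (B, N_p, dim)
        tokens = torch.cat(tokens, dim=1)                 # mix tokens of all scales
        attended, _ = self.attn(tokens, tokens, tokens)   # patches attend across scales
        return self.head(attended.mean(dim=1))            # multi-label AU logits

logits = MultiScalePatchAttention()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 12])
```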


2022 ◽  
Author(s):  
Divyansh Gupta ◽  
Wiktor Mlynarski ◽  
Olga Symonova ◽  
Jan Svaton ◽  
Maximilian Joesch

Visual systems have adapted to the structure of natural stimuli. In the retina, the center-surround receptive fields (RFs) of retinal ganglion cells (RGCs) appear to efficiently encode natural sensory signals. Conventionally, it has been assumed that natural scenes are isotropic and homogeneous, so RF properties are expected to be uniform across the visual field. However, natural scene statistics such as luminance and contrast are not uniform and vary significantly with elevation. Here, by combining theory and novel experimental approaches, we demonstrate that this inhomogeneity is exploited by RGC RFs across the entire retina to increase coding efficiency. We formulated three predictions derived from efficient coding theory: (i) optimal RFs should strengthen their surround from the dimmer ground to the brighter sky, (ii) RFs should simultaneously decrease their center size, and (iii) RFs centered at the horizon should show a marked surround asymmetry due to a stark contrast drop-off. To test these predictions, we developed a new method to image high-resolution RFs of thousands of RGCs in individual retinas. We found that RF properties match the theoretical predictions and consistently change shape from the dorsal to the ventral retina, with a distinct shift in the RF surround at the horizon. These effects are observed across RGC subtypes, which were thought to represent visual space homogeneously, indicating that functional retinal streams share common adaptations to visual scenes. Our work shows that the RFs of mouse RGCs exploit the non-uniform, panoramic structure of natural scenes at a previously unappreciated scale to increase coding efficiency.
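
As a worked illustration of the center-surround idea behind predictions (i) and (ii), the sketch below builds a difference-of-Gaussians receptive field whose center size and surround strength vary with elevation. The parameter ramps are arbitrary assumptions for illustration, not the fitted values from the study.

```python
# Illustrative difference-of-Gaussians (DoG) receptive field whose center size
# and surround strength vary with elevation, mimicking predictions (i)-(ii).
# The parameter ramps are assumptions, not the study's fitted values.
import numpy as np

def dog_rf(x, y, sigma_c, sigma_s, surround_gain):
    """Center-surround RF: excitatory Gaussian center minus a weighted surround."""
    center = np.exp(-(x**2 + y**2) / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-(x**2 + y**2) / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround_gain * surround

xs = ys = np.linspace(-3, 3, 101)
X, Y = np.meshgrid(xs, ys)
# Elevation from ground (-1) to sky (+1): center shrinks, surround strengthens.
for elevation in (-1.0, 0.0, 1.0):
    sigma_c = 1.0 - 0.3 * elevation        # smaller center toward the sky
    surround_gain = 0.5 + 0.3 * elevation  # stronger surround toward the sky
    rf = dog_rf(X, Y, sigma_c, sigma_s=2.0, surround_gain=surround_gain)
    print(f"elevation={elevation:+.1f}  integrated RF weight={rf.sum():.3f}")
```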


2022 ◽  
Author(s):  
Sarah Khalife ◽  
Susan T. Francis ◽  
Denis Schluppeck ◽  
Rosa-Maria Sanchez-Panchuelo ◽  
Julien Besle

The majority of fMRI studies investigating somatotopic body representations in the human cortex have used either block or phase-encoding stimulation designs. Event-related (ER) designs allow for more natural and flexible stimulation sequences, while enabling the independent estimation of responses to different body parts at the same cortical location. Here we compared an efficiency-optimized fast ER design (2 s inter-stimulus interval, ISI) to a slow ER design (8 s ISI) for mapping voxelwise fingertip tuning properties in the sensorimotor cortex of 6 participants at 7 Tesla. The fast ER design resulted in similar, but more robust, estimates than the slow ER design. Concatenating the fast and slow ER data, we demonstrate in each individual brain the existence of two separate somatotopically organized representations of the fingertips: one in S1 on the post-central gyrus and the other at the border of the motor and pre-motor cortices on the pre-central gyrus. In both the post-central and pre-central representations, fingertip tuning width increases progressively from the narrowly tuned Brodmann areas 3b and 4a, respectively, towards parietal and frontal regions that respond equally to all fingertips.
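
One simple way to quantify voxelwise fingertip tuning width is to fit a Gaussian tuning curve to a voxel's responses across the five fingertips. The sketch below uses synthetic response amplitudes and is not the authors' analysis pipeline.

```python
# Sketch: fit a Gaussian tuning curve to one voxel's responses to the five
# fingertips to quantify tuning width. Synthetic data; not the authors' pipeline.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(finger, preferred, width, amplitude):
    return amplitude * np.exp(-(finger - preferred) ** 2 / (2 * width ** 2))

fingers = np.arange(1, 6)                        # thumb (1) to little finger (5)
responses = np.array([0.2, 0.9, 1.4, 0.8, 0.3])  # synthetic voxel response amplitudes
params, _ = curve_fit(gaussian_tuning, fingers, responses, p0=(3.0, 1.0, 1.0))
preferred, width, amplitude = params
print(f"preferred fingertip ~ {preferred:.1f}, tuning width ~ {width:.2f}")
```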


2022 ◽  
pp. 1-11
Author(s):  
Qin Zhou ◽  
Zuqiang Su ◽  
Lanhui Liu ◽  
Xiaolin Hu ◽  
Jianhang Yu

This study presents a fault diagnosis method for rolling bearings based on a multi-scale deep subdomain adaptation network (MSDSAN). The proposed MSDSAN, an improvement on the deep subdomain adaptation network (DSAN), is an unsupervised transfer learning method. MSDSAN reduces the subdomain distribution discrepancy between domains rather than the marginal distribution discrepancy, so that better domain-invariant fault features are derived and misalignment between domains is avoided. To avoid the loss of fault information caused by feature extraction with fixed receptive fields, a selective kernel convolution module is introduced into the feature extraction of MSDSAN, in which multiple receptive fields are applied to ensure an optimal receptive field for each working condition. Moreover, contribution rates are adaptively assigned to all receptive fields, and the disturbing information extracted by inappropriate receptive fields is further eliminated. As a result, more comprehensive and effective fault information is derived for bearing fault diagnosis. Bearing fault diagnosis experiments were performed to verify the superiority of the proposed method, and the results demonstrate that MSDSAN achieves better transfer effects and higher accuracy than state-of-the-art methods under varying working conditions.
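
The selective kernel convolution idea (several parallel convolutions with different receptive fields, fused by adaptively learned contribution rates) can be sketched as follows. Channel counts, kernel sizes, and the gating network are illustrative assumptions rather than the MSDSAN configuration reported in the paper.

```python
# Sketch of a selective-kernel 1-D convolution block: parallel branches with
# different kernel sizes (receptive fields) fused with adaptively learned
# contribution rates. Channel counts and kernel sizes are assumptions, not the
# MSDSAN configuration from the paper.
import torch
import torch.nn as nn

class SelectiveKernelConv1d(nn.Module):
    def __init__(self, channels=32, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernel_sizes]
        )
        # Small gating network that assigns a contribution rate to each branch.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(),
            nn.Linear(channels // 2, len(kernel_sizes)),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):                        # x: (batch, channels, length)
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, L)
        pooled = feats.sum(dim=1).mean(dim=-1)   # global descriptor, (B, C)
        weights = self.gate(pooled)              # contribution rates, (B, K)
        return (feats * weights[:, :, None, None]).sum(dim=1)      # fused, (B, C, L)

out = SelectiveKernelConv1d()(torch.randn(4, 32, 1024))
print(out.shape)  # torch.Size([4, 32, 1024])
```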


2022 ◽  
Vol 15 ◽  
Author(s):  
Anthony Beh ◽  
Paul V. McGraw ◽  
Ben S. Webb ◽  
Denis Schluppeck

Loss of vision across large parts of the visual field is a common and devastating complication of cerebral strokes. In the clinic, this loss is quantified by measuring the sensitivity threshold across the field of vision using static perimetry. These methods rely on the ability of the patient to report the presence of lights in particular locations. While perimetry provides important information about the intactness of the visual field, the approach has some shortcomings. For example, it cannot distinguish where in the visual pathway the key processing deficit is located. In contrast, brain imaging can provide important information about the anatomy, connectivity, and function of the visual pathway following stroke. In particular, functional magnetic resonance imaging (fMRI) and analysis of population receptive fields (pRF) can reveal mismatches between clinical perimetry and maps of cortical areas that still respond to visual stimuli after stroke. Here, we demonstrate how information from different brain imaging modalities (visual field maps derived from fMRI, lesion definitions from anatomical scans, and white matter tracts from diffusion-weighted MRI data) provides a more complete picture of vision loss. For any given location in the visual field, the combination of anatomical and functional information can help identify whether vision loss is due to the absence of gray matter tissue or likely due to white matter disconnection from other cortical areas. We present a combined imaging acquisition and visual stimulus protocol, together with a description of the analysis methodology, and apply it to datasets from four stroke survivors with homonymous field loss (two with hemianopia, two with quadrantanopia). For researchers trying to understand recovery of vision after stroke and clinicians seeking to stratify patients into different treatment pathways, this approach combines multiple, convergent sources of data to characterize the extent of the stroke damage. We show that such an approach gives a more comprehensive measure of residual visual capacity in two particular respects: which locations in the visual field should be targeted and which visual attributes are most suited for rehabilitation.
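
A minimal sketch of the pRF forward model underlying such analyses is shown below: a 2-D Gaussian pRF applied to a binary stimulus-aperture movie predicts a voxel's response time course. HRF convolution and parameter fitting are omitted, and the grid size and pRF parameters are arbitrary assumptions.

```python
# Sketch of a pRF forward model: a 2-D Gaussian pRF applied to a binary
# stimulus-aperture movie predicts a voxel's response time course. HRF
# convolution and fitting are omitted; all parameters are arbitrary assumptions.
import numpy as np

def prf_prediction(apertures, x0, y0, sigma):
    """apertures: (time, height, width) binary masks of the visual stimulus."""
    n_t, h, w = apertures.shape
    ys, xs = np.mgrid[0:h, 0:w]
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    prf /= prf.sum()
    # Overlap between the stimulus aperture and the pRF at each time point.
    return apertures.reshape(n_t, -1) @ prf.ravel()

apertures = np.zeros((10, 64, 64))
for t in range(10):                  # a bar sweeping across the visual field
    apertures[t, :, 6 * t:6 * t + 8] = 1.0
print(prf_prediction(apertures, x0=32, y0=32, sigma=5).round(3))
```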


2021 ◽  
Author(s):  
Kyle P Blum ◽  
Max D Grogan ◽  
Yufei Wu ◽  
Alex J Harston ◽  
Lee E Miller ◽  
...  

Proprioception is one of the least understood senses, yet it is fundamental for the control of movement. Even basic questions of how limb pose is represented in the somatosensory cortex remain unclear. We developed a variational autoencoder with topographic lateral connectivity (topo-VAE) to compute a putative cortical map from a large set of natural movement data. Although not fitted to neural data, our model reproduces two sets of observations from monkey centre-out reaching: (1) the shape and velocity dependence of proprioceptive receptive fields in hand-centered coordinates, despite the model having no knowledge of arm kinematics or hand coordinate systems, and (2) the distribution of neuronal preferred directions (PDs) recorded from multi-electrode arrays. The model makes several testable predictions: (1) encoding across the cortex has a blob-and-pinwheel-type geometry of PDs, and (2) few neurons will encode just a single joint. Topo-VAE provides a principled basis for understanding sensorimotor representations and the theoretical basis of neural manifolds, with applications to the restoration of sensory feedback in brain-computer interfaces and to the control of humanoid robots.
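
A minimal sketch of the general idea, a VAE whose latent units lie on a 2-D grid with a lateral smoothness penalty encouraging topographic organization, is given below. This is an assumed simplification, not the authors' topo-VAE implementation, and the input dimensionality and penalty weight are placeholders.

```python
# Assumed simplification: a VAE whose latent units lie on a 2-D grid, with a
# lateral smoothness penalty encouraging topographic organization. Not the
# authors' topo-VAE; input dimensionality and penalty weight are placeholders.
import torch
import torch.nn as nn

GRID = 8                                  # latent units arranged on an 8x8 grid
LATENT = GRID * GRID

class GridVAE(nn.Module):
    def __init__(self, n_inputs=39):      # placeholder input dimensionality
        super().__init__()
        self.enc = nn.Linear(n_inputs, 2 * LATENT)  # outputs mean and log-variance
        self.dec = nn.Linear(LATENT, n_inputs)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar, z

def lateral_smoothness(z):
    """Penalize differences between neighbouring latent units on the grid."""
    zmap = z.view(-1, 1, GRID, GRID)
    dx = (zmap[..., :, 1:] - zmap[..., :, :-1]).pow(2).mean()
    dy = (zmap[..., 1:, :] - zmap[..., :-1, :]).pow(2).mean()
    return dx + dy

x = torch.randn(16, 39)                   # a batch of proprioceptive input vectors
recon, mu, logvar, z = GridVAE()(x)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss = (recon - x).pow(2).mean() + kl + 0.1 * lateral_smoothness(z)
print(float(loss))
```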


2021 ◽  
Author(s):  
Ran Wang ◽  
Xupeng Chen ◽  
Amirhossein Khalilian-Gourtani ◽  
Leyao Yu ◽  
Patricia Dugan ◽  
...  

Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and posterior cortical networks, but the degree and timing of their recruitment and dynamics remain unknown. We present a novel deep learning architecture that translates neural signals recorded directly from the cortex into an interpretable representational space from which speech can be reconstructed. We leverage state-of-the-art learnt decoding networks to disentangle feedforward versus feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward- and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of the speech circuitry, together with decoding advances that have important implications for neural prosthetics.
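
One standard (linear) way to quantify the timing of feedforward versus feedback processing is a temporal receptive field estimated by ridge regression between a neural channel and an acoustic feature across leads and lags. The sketch below is illustrative only and is not the deep learning decoder described in the abstract.

```python
# Generic temporal receptive field sketch (ridge regression), relating a neural
# channel to an acoustic feature across leads and lags. Illustrative only; this
# is not the deep learning decoder described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_t = 2000
acoustic = rng.standard_normal(n_t)                  # e.g. a speech feature
neural = np.roll(acoustic, -10) + 0.5 * rng.standard_normal(n_t)  # neural leads by 10 samples

lags = np.arange(-30, 31)                            # negative = neural precedes the acoustic
X = np.stack([np.roll(neural, -lag) for lag in lags], axis=1)
ridge = 1.0
w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ acoustic)
print("peak lag (samples):", lags[int(np.argmax(np.abs(w)))])  # expected: -10 (feedforward-like)
```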


2021 ◽  
Author(s):  
Thomas Trevelyan James Sainsbury ◽  
Giovanni Diana ◽  
Martin Patrick Meyer

Visual neurons can have their tuning properties contextually modulated by the presence of visual stimuli in the area surrounding their receptive field, especially when those stimuli contain natural features. However, stimuli presented at specific egocentric locations may have greater behavioural relevance, raising the possibility that the extent of contextual modulation varies with position in visual space. To explore this possibility we utilised the small size and optical transparency of the larval zebrafish to describe the form and spatial arrangement of contextually modulated cells throughout an entire tectal hemisphere. We found that the spatial tuning of tectal neurons to a prey-like stimulus sharpens when the stimulus is presented in the context of a naturalistic visual scene. These neurons are confined to a spatially restricted region of the tectum and have receptive fields centred within a region of visual space in which the presence of prey preferentially triggers hunting behaviour. Our results demonstrate that the circuits supporting behaviourally relevant modulation of tectal neurons are not uniformly distributed. These findings add to the growing body of evidence that the tectum shows regional adaptations for behaviour.


2021 ◽  
Author(s):  
James O’Keeffe ◽  
Vivek Nityananda ◽  
Jenny Read

We present a simple model which can account for the stereoscopic sensitivity of praying mantis predatory strikes. The model consists of a single "disparity sensor": a binocular neuron sensitive to stereoscopic disparity and thus to distance from the animal. The model is based closely on the known behavioural and neurophysiological properties of mantis stereopsis. The monocular inputs to the neuron reflect temporal change and are insensitive to contrast sign, making the sensor insensitive to interocular correlation. The monocular receptive fields have an excitatory centre and an inhibitory surround, making them tuned to size. The disparity sensor combines the inputs from the two eyes linearly, applies a threshold and then an exponent output nonlinearity. The activity of the sensor represents the model mantis's instantaneous probability of striking. We integrate this over the stimulus duration to obtain the expected number of strikes in response to moving targets of different stereoscopic distance, size and vertical disparity. We optimised the parameters of the model so as to bring its predictions into agreement with our empirical data on mean strike rate as a function of stimulus size and distance. The model proves capable of reproducing the relatively broad tuning to size and narrow tuning to stereoscopic distance seen in mantis striking behaviour. The model also displays realistic responses to vertical disparity. Most surprisingly, although the model has only a single centre-surround receptive field in each eye, it displays qualitatively the same interaction between size and distance as we observed in real mantids: the preferred size increases as prey distance increases beyond the preferred distance. We show that this occurs because of a stereoscopic "false match" between the leading edge of the stimulus in one eye and its trailing edge in the other; further work will be required to determine whether such false matches occur in real mantises. This is the first image-computable model of insect stereopsis, and it reproduces key features of both neurophysiology and striking behaviour.
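
The pipeline described above can be sketched directly: rectified temporal change filtered by a centre-surround receptive field in each eye, linear binocular summation, a threshold, and an exponent nonlinearity giving an instantaneous strike probability that is integrated over the stimulus. All parameter values below are assumptions for illustration, not the optimised values from the paper.

```python
# Sketch of the described pipeline with assumed parameter values (RF sizes,
# threshold, exponent, gain), not the optimised model from the paper.
import numpy as np
from scipy.signal import fftconvolve

def centre_surround(size=31, sigma_c=2.0, sigma_s=6.0):
    xs = np.arange(size) - size // 2
    X, Y = np.meshgrid(xs, xs)
    r2 = X**2 + Y**2
    c = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    s = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return c - s   # excitatory centre, inhibitory surround

def monocular_drive(frames, rf):
    """Contrast-sign-invariant temporal change, filtered by a centre-surround RF."""
    change = np.abs(np.diff(frames, axis=0))                  # rectified temporal change
    filtered = fftconvolve(change, rf[None, :, :], mode="same")
    return filtered[:, frames.shape[1] // 2, frames.shape[2] // 2]  # sampled at the sensor centre

def expected_strikes(left, right, threshold=0.01, exponent=2.0, gain=50.0):
    rf = centre_surround()
    drive = monocular_drive(left, rf) + monocular_drive(right, rf)  # linear binocular sum
    p = gain * np.clip(drive - threshold, 0, None) ** exponent      # instantaneous strike probability
    return p.sum()                                                  # integrate over the stimulus

# A dark target drifting across both eyes, horizontally offset to create disparity.
t, h, w = 40, 64, 64
left = np.ones((t, h, w))
right = np.ones((t, h, w))
for i in range(t):
    left[i, 28:36, i:i + 8] = 0.0
    right[i, 28:36, i + 6:i + 14] = 0.0
print("expected strikes:", round(expected_strikes(left, right), 3))
```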

