The Neural Basis of Dim-Light Vision in Echolocating Bats

2019
Vol 94 (Suppl. 1-4)
pp. 61-70
Author(s):
Susanne Hoffmann
Alexandra Bley
Mariana Matthes
Uwe Firzlaff
Harald Luksch

Echolocating bats evolved a sophisticated biosonar imaging system that allows for a life in dim-light habitats. However, especially for far-range operations such as homing, bats can support biosonar by vision. Large eyes and a retina that mainly consists of rods are assumed to be the optical adjustments that enable bats to use visual information at low light levels. In addition to optical mechanisms, many nocturnal animals evolved neural adaptations such as elongated integration times or enlarged spatial sampling areas to further increase the sensitivity of their visual system by temporal or spatial summation of visual information. The neural mechanisms that underlie the visual capabilities of echolocating bats have, however, not yet been investigated. To shed light on spatial and temporal response characteristics of visual neurons in an echolocating bat, Phyllostomus discolor, we recorded extracellular multiunit activity in the retino-recipient superficial layers of the superior colliculus (SC). We discovered that response latencies of these neurons were generally in the mammalian range, whereas neural spatial sampling areas were unusually large compared to those measured in the SC of other mammals. From this, we suggest that echolocating bats likely use spatial but not temporal summation of visual input to improve visual performance under dim-light conditions. Furthermore, we hypothesize that bats compensate for the loss of visual spatial precision, a byproduct of spatial summation, by integrating spatial information provided by both the visual and the biosonar systems. Given that knowledge about neural adaptations to dim-light vision is mainly based on studies done in non-mammalian species, our novel data provide a valuable contribution to the field and demonstrate the suitability of echolocating bats as a nocturnal animal model for studying the neurophysiological aspects of dim-light vision.
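The trade-off suggested above, in which spatial summation raises sensitivity at the cost of localization precision, can be sketched numerically. The values below are entirely hypothetical, not measurements from the study: pooling the noisy responses of N receptors improves the signal-to-noise ratio roughly by a factor of sqrt(N), while the unit's spatial sampling area grows N-fold.

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 1.0     # hypothetical mean photon-driven response per receptor
noise_sd = 5.0   # receptor noise dominates at low light levels
n_trials = 20000

def pooled_snr(pool_size):
    """SNR of a unit that averages `pool_size` receptor responses per trial."""
    responses = signal + noise_sd * rng.standard_normal((n_trials, pool_size))
    pooled = responses.mean(axis=1)  # spatial summation step
    return pooled.mean() / pooled.std()

snr_1 = pooled_snr(1)    # single receptor: SNR ~ 1/5
snr_25 = pooled_snr(25)  # pool of 25: SNR ~ 5x higher, but a 25x larger
                         # sampling area, i.e. coarser spatial precision
```

The sqrt(N) gain is why enlarged spatial sampling areas are an effective dim-light adaptation despite the loss of spatial resolution.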

Author(s):
Andrew J. Kolarik
Brian C. J. Moore
Silvia Cirstea
Rajiv Raman
Sarika Gopalakrishnan
...

Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. For normally sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. These results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.


2016
Author(s):
Janina Brandes
Farhad Rezvani
Tobias Heed

Visual spatial information is paramount in guiding bimanual coordination, but anatomical factors, too, modulate performance in bimanual tasks. Vision conveys not only abstract spatial information, but also informs about body-related aspects such as posture. Here, we asked whether, accordingly, visual information induces body-related, or merely abstract, perceptual-spatial constraints in bimanual movement guidance. Human participants made rhythmic, symmetrical and parallel, bimanual index finger movements with the hands held in the same or different orientations. Performance was more accurate for symmetrical than for parallel movements in all postures, and also when homologous muscles were concurrently active, such as when parallel movements were performed with differently rather than identically oriented hands. Thus, both perceptual and anatomical constraints were evident. We manipulated visual feedback with a mirror between the hands, replacing the image of the left hand with that of the right and creating the visual impression of bimanual symmetry independent of the right hand's true movement. Symmetrical mirror feedback impaired parallel, but improved symmetrical, bimanual performance compared with regular hand view. Critically, these modulations were independent of hand posture and muscle homology. Thus, vision appears to contribute exclusively to spatial, but not to body-related, anatomical movement coding in the guidance of bimanual coordination.


2021
Author(s):
Margaret M. Henderson
Rosanne L. Rademaker
John T. Serences

Working memory (WM) provides flexible storage of information in service of upcoming behavioral goals. Some models propose specific fixed loci and mechanisms for the storage of visual information in WM, such as sustained spiking in parietal and prefrontal cortex during the maintenance of features. An alternative view is that information can be remembered in a flexible format that best suits current behavioral goals. For example, remembered visual information might be stored in sensory areas for easier comparison to future sensory inputs (i.e. a retrospective code) or might be remapped into a more abstract, output-oriented format and stored in motor areas (i.e. a prospective code). Here, we tested this hypothesis using a visual-spatial working memory task where the required behavioral response was either known or unknown during the memory delay period. Using fMRI and multivariate decoding, we found that there was less information about remembered spatial positions in early visual and parietal regions when the required response was known versus unknown. Further, a representation of the planned motor action emerged in primary somatosensory, primary motor, and premotor cortex on the same trials where spatial information was reduced in early visual cortex. These results suggest that the neural networks supporting WM can be strategically reconfigured depending on the specific behavioral requirements of canonical visual WM paradigms.
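The multivariate decoding approach mentioned above can be illustrated with synthetic data. Everything below is hypothetical: random vectors stand in for voxel response patterns, and a simple nearest-centroid rule stands in for the study's decoder. The point is only that above-chance classification of held-out trials indicates that information about the remembered position is present in the activity patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50
positions = [0, 1, 2, 3]                        # four remembered positions
templates = rng.standard_normal((4, n_voxels))  # idealized pattern per position

def simulate_trials(n_per_pos, noise):
    """Generate noisy trial patterns around each position's template."""
    X, y = [], []
    for p in positions:
        X.append(templates[p] + noise * rng.standard_normal((n_per_pos, n_voxels)))
        y += [p] * n_per_pos
    return np.vstack(X), np.array(y)

def decode(train_X, train_y, test_X):
    """Nearest-centroid decoder: assign each test pattern to the closest
    class-mean training pattern."""
    centroids = np.stack([train_X[train_y == p].mean(axis=0) for p in positions])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

train_X, train_y = simulate_trials(40, noise=1.0)
test_X, test_y = simulate_trials(40, noise=1.0)
accuracy = (decode(train_X, train_y, test_X) == test_y).mean()
# accuracy well above chance (0.25) = position information is decodable;
# a drop in accuracy would correspond to reduced spatial information
```

In the study, a reduction in decoding accuracy in early visual and parietal regions when the response was known is what licenses the claim of "less information about remembered spatial positions" there.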


2021
Vol 11 (1)
Author(s):
Loes Ottink
Marit Hoogendonk
Christian F. Doeller
Thea M. Van der Geest
Richard J. A. Van Wezel

In this study, we compared cognitive map formation for small-scale models of city-like environments presented in visual or tactile/haptic modalities. Previous research often addresses only a limited number of aspects of cognitive maps; we combined several of these aspects to obtain a more complete view. We therefore assessed different types of spatial information, considered egocentric as well as allocentric perspectives, and compared haptic map learning with visual map learning. In total, 18 sighted participants (9 in a haptic condition, 9 in a visuo-haptic condition) learned three tactile maps of city-like environments. The maps differed in complexity and had five marked locations associated with unique items. After learning each map, participants estimated distances between item pairs, rebuilt the map, recalled locations, and navigated two routes. All participants overall performed well on the spatial tasks. Interestingly, only on the complex maps did participants perform worse in the haptic condition than in the visuo-haptic condition, suggesting no distinct advantage of vision on the simple map. These results support ideas of modality-independent representations of space. Although the picture is less clear for the more complex maps, our findings indicate that participants using only haptic information, or a combination of haptic and visual information, both form a quite accurate cognitive map of a simple tactile city-like map.


2001
Vol 31 (5)
pp. 915-922
Author(s):
S. KÉRI
O. KELEMEN
G. BENEDEK
Z. JANKA

Background. The aim of this study was to assess visual information processing and cognitive functions in unaffected siblings of patients with schizophrenia or bipolar disorder, and in control subjects with a negative family history. Methods. The siblings of patients with schizophrenia (N = 25), bipolar disorder (N = 20) and the control subjects (N = 20) were matched for age, education, IQ, and psychosocial functioning, as indexed by the Global Assessment of Functioning scale. Visual information processing was measured using two visual backward masking (VBM) tests (target location and target identification). The evaluation of higher cognitive functions included spatial and verbal working memory, the Wisconsin Card Sorting Test, letter fluency, and short/long delay verbal recall and recognition. Results. The relatives of schizophrenia patients were impaired in the VBM procedure, most pronouncedly at short interstimulus intervals (14, 28, 42 ms) and in the target location task. Marked dysfunctions were also found in the spatial working memory task and in the long delay verbal recall test. In contrast, the siblings of patients with bipolar disorder exhibited spared performance, with the exception of a deficit in the long delay recall task. Conclusions. Dysfunctions of sensory-perceptual analysis (VBM) and working memory for spatial information distinguished the siblings of schizophrenia patients from the siblings of individuals with bipolar disorder. A verbal recall deficit was present in both groups, suggesting a common impairment of the fronto-hippocampal system.


2021
Vol 33 (3)
pp. 506-511
Author(s):
Sheikh Mohd Saleem
Chaitnya Aggarwal
Om Prakash Bera
Radhika Rana
Gurmandeep Singh
...

"Geographic information system (GIS) collects various kinds of data based on the geographic relationship across space." Data stored in a GIS are used to visualize, analyze, and interpret geographic information about an area, an ongoing project, site planning, business, health economics, and health-related surveys. GIS has evolved from ancient disease maps to 3D digital maps and continues to grow even today. Visual-spatial mapping of data has given us insight into diseases ranging from diarrhea and pneumonia to non-communicable diseases such as diabetes mellitus, hypertension, and cardiovascular disease, and into risk factors such as obesity and overweight. All the while, this information has highlighted health-related issues and knowledge about them worldwide in a contemporary manner. Researchers, scientists, and administrators use GIS for research project planning, execution, and disease management. Cases of diseases in a specific area or region, the number of hospitals, roads, waterways, and health catchment areas are examples of spatially referenced data that can be captured and easily presented using GIS. Currently, we are facing an epidemic of non-communicable diseases, and a powerful tool like GIS can be used efficiently in such a situation. GIS can provide a powerful and robust framework for effectively monitoring such diseases and identifying their leading causes. GIS, which provides a spatial viewpoint on disease spectrum, pattern, and distribution, is of particular importance in this area and helps better understand disease transmission dynamics and spatial determinants. The use of GIS in public health will be a practical approach for surveillance, monitoring, planning, optimization, and service delivery of health resources to the people at large. The GIS platform can link environmental and spatial information with the disease itself, which makes it an asset in disease control efforts all over the globe.


Author(s):
Laura M. DALE
André THEWIS
Ioan ROTAR
Juan A. FERNANDEZ PIERNA
Christelle BOUDRY
...

Nowadays in agriculture, new analytical tools based on spectroscopic technologies are being developed. Near Infrared Spectroscopy (NIRS) is a well-known technology in the agricultural sector that allows the acquisition of chemical information from samples, with a large number of advantages: it is easy to use, allows fast and simultaneous analysis of several components, is non-polluting, non-invasive, and non-destructive, and can be implemented online or in the field. Recently, NIRS has been combined with imaging technologies, creating the Near Infrared Hyperspectral Imaging system (NIR-HSI). This technology provides spectral and spatial information from an object simultaneously. The main difference between NIR-HSI and NIRS is that with the former many spectra can be recorded simultaneously from a large area of an object, whereas with NIRS only one spectrum is recorded from a small area. In this work, both technologies are presented with special focus on the main spectrum and image analysis methods. Several qualitative and quantitative applications of NIRS and NIR-HSI to agricultural products are listed. Developments in NIRS and NIR-HSI will enhance progress in agriculture by providing high-quality and safe agricultural products, better plant and grain selection techniques, and improved productivity in the compound feed industry, among others.
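The structural difference between the two measurements can be pictured with array shapes. The dimensions below are purely illustrative: NIRS yields a single spectrum for the sampled area, while NIR-HSI yields a full spectrum at every pixel, so spatial chemical maps become possible.

```python
import numpy as np

# NIRS: one averaged spectrum from a small sampled area
n_bands = 200                      # e.g. number of NIR wavelength bands
nirs_spectrum = np.zeros(n_bands)  # shape: (bands,)

# NIR-HSI: a hypercube, one full spectrum per pixel of the imaged area
height, width = 64, 64
hsi_cube = np.zeros((height, width, n_bands))  # shape: (rows, cols, bands)

# each pixel carries its own spectrum, enabling per-location chemistry...
pixel_spectrum = hsi_cube[10, 20]
# ...and averaging over the spatial axes collapses the cube back to a
# single NIRS-like spectrum, discarding the spatial information
mean_spectrum = hsi_cube.mean(axis=(0, 1))
```

This is why NIR-HSI supports applications such as mapping compound distribution across a grain, which a single-spectrum NIRS measurement cannot resolve.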


2009
Vol 21 (4)
pp. 821-836
Author(s):
Benjamin Straube
Antonia Green
Susanne Weis
Anjan Chatterjee
Tilo Kircher

In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates the neural basis of memory for bimodal speech and gesture representations. In this fMRI study, 12 participants were presented with video clips showing an actor performing meaningful metaphoric gestures (MG), unrelated, free gestures (FG), and no arm and hand movements (NG) accompanying sentences with an abstract content. After the fMRI session, the participants performed a recognition task. Behaviorally, the participants showed the highest hit rate for sentences accompanied by meaningful metaphoric gestures. Despite comparable old/new discrimination performances (d′) for the three conditions, we obtained distinct memory-related left-hemispheric activations in the inferior frontal gyrus (IFG), the premotor cortex (BA 6), and the middle temporal gyrus (MTG), as well as significant correlations between hippocampal activation and memory performance in the metaphoric gesture condition. In contrast, unrelated speech and gesture information (FG) was processed in areas of the left occipito-temporal and cerebellar region and the right IFG just like the no-gesture condition (NG). We propose that the specific left-lateralized activation pattern for the metaphoric speech–gesture sentences reflects semantic integration of speech and gestures. These results provide novel evidence about the neural integration of abstract speech and gestures as it contributes to subsequent memory performance.


Author(s):
Shilin Wang
Wing Hong Lau
Alan Wee-Chung Liew
Shu Hung Leung

Recently, lip image analysis has received much attention because the visual information extracted has been shown to provide significant improvement for speech recognition and speaker authentication, especially in noisy environments. Lip image segmentation plays an important role in lip image analysis. This chapter describes different lip image segmentation techniques, with emphasis on segmenting color lip images. In addition to providing a review of different approaches, we describe in detail the state-of-the-art classification-based techniques recently proposed by our group for color lip segmentation: "spatial fuzzy c-means clustering" (SFCM) and "fuzzy c-means with shape function" (FCMS). These methods integrate color information along with different kinds of spatial information into a fuzzy clustering structure, and demonstrate superiority in segmenting color lip images with naturally low contrast in comparison with many traditional image segmentation techniques.
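The idea of blending spatial information into fuzzy clustering can be sketched in a few lines. This is a toy 1-D illustration, not the authors' exact SFCM formulation: standard fuzzy c-means membership updates are followed by a simple neighbourhood term (here, a three-pixel average of memberships), which pushes spatially adjacent pixels toward the same cluster even when contrast is low.

```python
import numpy as np

def spatial_fcm(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Toy spatially-regularized fuzzy c-means on a 1-D intensity sequence."""
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=float)
    n = x.size
    u = rng.random((n_clusters, n))
    u /= u.sum(axis=0)                   # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # cluster centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # pixel-center distances
        u = 1.0 / (d ** (2 / (m - 1)))                    # standard FCM update
        u /= u.sum(axis=0)
        # spatial term: blend each membership with its immediate neighbours
        padded = np.pad(u, ((0, 0), (1, 1)), mode="edge")
        h = (padded[:, :-2] + u + padded[:, 2:]) / 3
        u = u * h
        u /= u.sum(axis=0)
    return centers, u

# usage: two noisy intensity bands (e.g. "lip" vs. "skin" pixels)
pixels = [0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79, 0.81]
centers, u = spatial_fcm(pixels)
labels = u.argmax(axis=0)  # hard labels from fuzzy memberships
```

In the published SFCM and FCMS the spatial (and shape) terms are more elaborate and operate on 2-D color images, but the structure, fuzzy memberships modulated by a neighbourhood function, is the same.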

