multimodal integration
Recently Published Documents

TOTAL DOCUMENTS: 181 (five years: 42)
H-INDEX: 25 (five years: 5)

2021, pp. 88-92
Author(s): Vittorio Gallese, David Freedberg, Maria Alessandra Umiltà

In this chapter, the authors summarize their research in the experimental aesthetics of visual art and cinema, motivated by the following assumptions: (1) vision is more complex than the mere activation of the “visual brain”; (2) our visual experience of the world is the outcome of multimodal integration processes, with the motor system as a key player; (3) aesthetic experience must be framed within the broader notion of intersubjectivity, as artworks are mediators of the relationship between the subjectivities of artists/creators and beholders; and (4) empathy is an important ingredient of our response to works of art. Building on the results of this research, which privileges embodiment and the performative quality of perception and cognition, the authors outline preliminary suggestions for a future research agenda. Embodied simulation, a model of perception and cognition, can provide a new take on these issues, fostering a newly grounded dialogue between neuroscience and the humanities.


2021, Vol 15
Author(s): Lutz Kettler, Hicham Sid, Carina Schaub, Katharina Lischka, Romina Klinger, ...

AP-2 is a family of transcription factors involved in many aspects of development, cell differentiation, and the regulation of cell growth and death. AP-2δ is a member of this family; it shows specific gene expression patterns in the adult mouse brain and is required for the development of parts of the inferior colliculus (IC), as well as of the cortex, dorsal thalamus, and superior colliculus. The midbrain is one of the central areas in the brain where multimodal integration, i.e., integration of information from different senses, occurs. Previous data showed that AP-2δ-deficient mice are viable but, owing to increased apoptosis at the end of embryogenesis, lack part of the posterior midbrain. Despite the absence of the IC in AP-2δ-deficient mice, these animals retain at least some higher auditory functions. Neuronal responses to tones in the neocortex suggest an alternative auditory pathway that bypasses the IC. While such data are available in mammals, little is known about AP-2δ in chickens, an avian model for sound localization and the development of auditory circuits in the brain. Here, we identified and localized AP-2δ expression in the chicken midbrain during embryogenesis. Our data confirmed the presence of AP-2δ in the IC and optic tectum (TeO), specifically in shepherd’s crook neurons, which are an essential component of the midbrain isthmic network and are involved in multimodal integration. AP-2δ expression in the chicken midbrain may be related to the integration of both auditory and visual afferents in these neurons. In the future, these insights may allow for a more detailed study of the circuitry and computational rules of auditory and multimodal networks.


Author(s): Heidi Haavik, Nitika Kumari, Kelly Holt, Imran Khan Niazi, Imran Amjad, ...

Abstract Purpose There is growing evidence that vertebral column function and dysfunction play a vital role in neuromuscular control. This invited review summarises the evidence about how vertebral column dysfunction, known as a central segmental motor control (CSMC) problem, alters neuromuscular function, and how spinal adjustments (high-velocity, low-amplitude or HVLA thrusts directed at a CSMC problem) and spinal manipulation (HVLA thrusts directed at segments of the vertebral column that may not have clinical indicators of a CSMC problem) alter neuromuscular function. Methods The current review elucidates the peripheral mechanisms by which CSMC problems, spinal adjustment, or spinal manipulation alter the afferent input from the paravertebral tissues. It summarises the contemporary model that provides a biologically plausible explanation for CSMC problems, the manipulable spinal lesion. This review also summarises the contemporary, biologically plausible understanding of how spinal adjustments enable more efficient production of muscular force. The evidence showing how spinal dysfunction, spinal manipulation, and spinal adjustments alter central multimodal integration and motor control centres will be covered in a second invited review. Results Many studies have shown that spinal adjustments increase voluntary force and prevent fatigue, effects that mainly occur due to altered supraspinal excitability and multimodal integration. The literature suggests physical injury, pain, inflammation, and acute or chronic physiological or psychological stress can alter the vertebral column’s central neural motor control, leading to a CSMC problem. Many gaps in the literature are identified, along with suggestions for future studies. Conclusion Spinal adjustments of CSMC problems impact motor control in a variety of ways, including increasing muscle force and preventing fatigue. These changes in neuromuscular function most likely occur due to changes in supraspinal excitability. The current contemporary model of the CSMC problem, and our understanding of the mechanisms of spinal adjustments, provide a biologically plausible explanation for how the vertebral column’s central neural motor control can become dysfunctional, leading to a self-perpetuating central segmental motor control problem, and for how HVLA spinal adjustments can improve neuromuscular function.


Author(s): Chloé Berland, Dana M. Small, Serge Luquet, Giuseppe Gangarossa

Author(s): Jean-Michel Mongeau, Lorian E Schweikert, Alexander L Davis, Michael S Reichert, Jessleen K Kanwal

SYNOPSIS Locomotion is a hallmark of organisms that has enabled adaptive radiation into an extraordinarily diverse class of ecological niches, and allows animals to move across vast distances. Sampling from multiple sensory modalities enables animals to acquire rich information to guide locomotion. Locomotion without sensory feedback is haphazard; therefore, sensory and motor systems have evolved complex interactions to generate adaptive behavior. Notably, sensory-guided locomotion acts over broad spatial and temporal scales to permit goal-seeking behavior, whether to localize food by tracking an attractive odor plume or to search for a potential mate. How does the brain integrate multimodal stimuli over different temporal and spatial scales to effectively control behavior? In this review, we classify locomotion into three ordinally ranked hierarchical layers that act over distinct spatiotemporal scales: stabilization, motor primitives, and higher-order tasks, respectively. We discuss how these layers present unique challenges and opportunities for sensorimotor integration. We focus on recent advances in invertebrate locomotion due to the accessibility of neural and mechanical signals from the whole brain, limbs, and sensors. Throughout, we emphasize neural-level description of computations for multimodal integration in genetic model systems, including the fruit fly, Drosophila melanogaster, and the yellow fever mosquito, Aedes aegypti. We find that summation (e.g., gating) and weighting, which are inherent computations of spiking neurons, underlie multimodal integration across spatial and temporal scales, suggesting collective strategies to guide locomotion.
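The closing observation, that weighting and summation underlie multimodal integration, can be illustrated with a minimal sketch. This is not from the review itself: the function name and the inverse-variance weighting scheme are illustrative assumptions, borrowed from the standard cue-combination abstraction.

```python
# Minimal sketch (not from the review): reliability-weighted summation
# of two sensory cue estimates, a common abstraction of multimodal
# integration. Weights are inversely proportional to each cue's noise
# variance; "gating" corresponds to pushing a weight toward zero.

def integrate(cues, variances):
    """Combine cue estimates by normalized inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    return sum(w * c for w, c in zip(weights, cues))

# Example: a reliable visual heading estimate (variance 1.0) and a noisy
# odor-based estimate (variance 4.0); the combined estimate is pulled
# toward the more reliable visual cue.
combined = integrate(cues=[10.0, 20.0], variances=[1.0, 4.0])
print(combined)  # 12.0
```

Under this abstraction, a weighted sum is exactly what a spiking neuron computes over its synaptic inputs, which is why the review can treat summation and weighting as the shared computational substrate across scales.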


Vision, 2021, Vol 5 (1), pp. 14
Author(s): Hilary C. Pearson, Jonathan M. P. Wilbiks

Previous studies have focused on topics such as multimodal integration and object discrimination, but there is limited research on the effect of multimodal learning on memory. Perceptual studies have shown facilitative effects of multimodal stimuli on learning; the current study aims to determine whether this effect persists with memory cues. The purpose of this study was to investigate the effect that audiovisual memory cues have on memory recall, as well as whether the use of multiple memory cues leads to higher recall. The goal was to orthogonally evaluate the effect of the number of self-generated memory cues (one or three) and the modality of the self-generated memory cue (visual: written words; auditory: spoken words; or audiovisual). A recall task was administered in which participants were presented with their self-generated memory cues and asked to determine the target word. There was a significant main effect of number of cues, but no main effect of modality. A secondary goal of this study was to determine which types of memory cues result in the highest recall. Self-reference cues resulted in the highest accuracy score. This study has applications to improving academic performance by using the most efficient learning techniques.


2021
Author(s): Christian Xerri, Yoh’i Zennou-Azogui

Perceptual representations are built through multisensory interactions underpinned by dense anatomical and functional neural networks that interconnect primary and associative cortical areas. There is compelling evidence that primary sensory cortical areas do not work in isolation, but play a role in early processes of multisensory integration. In this chapter, we first review previous and recent literature showing how multimodal interactions between primary cortices may contribute to refining perceptual representations. Second, we discuss findings providing evidence that, following peripheral damage to a sensory system, multimodal integration may promote sensory substitution in deprived cortical areas and favor compensatory plasticity in the spared sensory cortices.


2021, Vol 9, pp. 1563-1579
Author(s): Sandro Pezzelle, Ece Takmaz, Raquel Fernández

Abstract This study carries out a systematic intrinsic evaluation of the semantic representations learned by state-of-the-art pre-trained multimodal Transformers. These representations are claimed to be task-agnostic and have been shown to help on many downstream language-and-vision tasks. However, the extent to which they align with human semantic intuitions remains unclear. We experiment with various models and obtain static word representations from the contextualized ones they learn. We then evaluate them against the semantic judgments provided by human speakers. In line with previous evidence, we observe a generalized advantage of multimodal representations over language-only ones on concrete word pairs, but not on abstract ones. On the one hand, this confirms the effectiveness of these models in aligning language and vision, which results in better semantic representations for concepts that are grounded in images. On the other hand, the models are shown to follow different representation learning patterns, which sheds some light on how and when they perform multimodal integration.
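The evaluation pipeline the abstract describes, scoring word pairs with model similarities and correlating them with human judgments, can be sketched as follows. The embeddings and ratings below are invented for illustration; the paper's actual inputs are static representations derived from pre-trained multimodal Transformers and established human-judgment benchmarks.

```python
import math

# Hedged sketch of an intrinsic evaluation: correlate model word-pair
# similarities with human similarity judgments. Embeddings and ratings
# below are made up for illustration; they are not the paper's data.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spearman(xs, ys):
    """Spearman correlation: Pearson correlation of ranks (no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((a - my) ** 2 for a in ry))
    return cov / (sx * sy)

# Toy "static" embeddings, e.g. as if averaged from contextualized ones.
emb = {
    "dog": [1.0, 0.2], "cat": [0.9, 0.3],
    "car": [0.1, 1.0], "truck": [0.2, 0.9],
}
pairs = [("dog", "cat"), ("dog", "car"), ("car", "truck")]
human = [9.0, 1.5, 8.5]  # hypothetical human similarity ratings

model_sims = [cosine(emb[a], emb[b]) for a, b in pairs]
print(round(spearman(model_sims, human), 3))  # 0.5
```

A higher rank correlation means the model's similarity ordering over word pairs better matches the human ordering; the paper's concrete-versus-abstract contrast amounts to running this comparison separately on the two subsets of pairs.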


2020, Vol 57 (2), pp. 65-78
Author(s): Gordana Čupković, Silvana Dunat

This paper deals with multimodal metaphors as the basis of parodic integration in selected videos and album covers by rap artist Krešo Bengalka and his band Kiša metaka. The case studies of parodic integration are marked by a spectacle that significantly contributes to the blend. The study focuses on multimodal integration and disintegration and on the reversal of the conventional way of representing both the relation between the interior and exterior and the relation between the static and the dynamic.

