multimodal networks
Recently Published Documents


TOTAL DOCUMENTS: 58 (five years: 11)
H-INDEX: 13 (five years: 1)

2021 · Vol 15
Author(s): Lutz Kettler, Hicham Sid, Carina Schaub, Katharina Lischka, Romina Klinger, ...

AP-2 is a family of transcription factors involved in many aspects of development, cell differentiation, and regulation of cell growth and death. AP-2δ is a member of this family; in the mouse, its specific expression pattern is required for the development of parts of the inferior colliculus (IC), and it is also expressed in the cortex, dorsal thalamus, and superior colliculus of the adult brain. The midbrain is one of the central areas of the brain where multimodal integration, i.e., the integration of information from different senses, occurs. Previous data showed that AP-2δ-deficient mice are viable but, owing to increased apoptosis at the end of embryogenesis, lack part of the posterior midbrain. Despite the absence of the IC, these animals retain at least some higher auditory functions; neuronal responses to tones in the neocortex suggest an alternative auditory pathway that bypasses the IC. While substantial data are available for mammals, little is known about AP-2δ in the chicken, an avian model for sound localization and the development of auditory circuits in the brain. Here, we identified and localized AP-2δ expression in the chicken midbrain during embryogenesis. Our data confirmed the presence of AP-2δ in the IC and the optic tectum (TeO), specifically in shepherd’s crook neurons, which are an essential component of the midbrain isthmic network and are involved in multimodal integration. AP-2δ expression in the chicken midbrain may be related to the integration of auditory and visual afferents in these neurons. In the future, these insights may allow a more detailed study of the circuitry and computational rules of auditory and multimodal networks.


Author(s): Roberto Francescon, Filippo Campagnaro, Emanuele Coccolo, Alberto Signori, Federico Guerra, ...

2021 · Vol 6 (2) · pp. 2822-2829
Author(s): Daniel Gehrig, Michelle Ruegg, Mathias Gehrig, Javier Hidalgo-Carrio, Davide Scaramuzza

Author(s): Juan A. Mesa, Francisco A. Ortega, Miguel A. Pozo, Ramón Piedra-de-la-Cuadra

2020
Author(s): Mareike J. Hülsemann, Björn Rasch

Our thoughts, plans and intentions can influence physiological sleep, but the underlying mechanisms are unknown. According to the theoretical framework of “embodied cognition”, the semantic content of cognitive processes is represented by multimodal networks in the brain which also include body-related functions. Such multimodal representation could offer a mechanism that explains mutual influences between cognition and sleep. In the current study we tested whether sleep-related words are represented in multimodal networks by examining the effect of congruent vs. incongruent body positions on word processing during wakefulness.

We experimentally manipulated the body position of 66 subjects (50 females, 16 males, 19-40 years old) between standing upright and lying down. Sleep- and activity-related words were presented around the individual speech recognition threshold to increase task difficulty. Our results show that word processing is facilitated in congruent body positions (sleep words: lying down; activity words: standing upright) compared with incongruent body positions, as indicated by a reduced N400 of the event-related potential (ERP) in the congruent condition at the lowest volume. In addition, early sensory components of the ERP (N180 and P280) were enhanced, suggesting that words were also acoustically better understood when the body position was congruent with the semantic meaning of the word. However, the differences in ERPs did not translate to differences at the behavioural level.

Our results support the prediction of embodied processing of sleep- and activity-related words. Body position potentially induces a pre-activation of multimodal networks, thereby enhancing access to the semantic concepts of words related to the current body position. This mutual link between semantic meaning and body-related function could be a key element in explaining influences of cognitive processing on sleep.
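The N400 congruency effect described above is typically quantified as the mean ERP amplitude within a latency window, compared between conditions. A minimal sketch of that comparison, assuming a single channel and a hypothetical 350-550 ms window (the synthetic waveforms below are illustrative, not the study's data):

```python
import numpy as np

def mean_amplitude(erp: np.ndarray, times: np.ndarray,
                   window=(0.35, 0.55)) -> float:
    """Mean ERP amplitude (in µV) inside a latency window, e.g. the N400 range."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())

# Toy single-channel ERPs sampled at 100 Hz, from -0.2 s to 0.8 s post-stimulus.
times = np.arange(-0.2, 0.8, 0.01)
congruent = -2.0 * np.exp(-((times - 0.45) ** 2) / 0.005)    # reduced N400
incongruent = -5.0 * np.exp(-((times - 0.45) ** 2) / 0.005)  # larger N400

# A negative difference indicates a larger (more negative) N400 for
# incongruent body positions, the direction reported in the abstract.
effect = mean_amplitude(incongruent, times) - mean_amplitude(congruent, times)
```

In practice such window means would be computed per subject and condition from averaged epochs and then compared statistically; the window bounds here are assumptions for illustration.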


2020 · Vol 2020 · pp. 1-12
Author(s): Ying-Hwey Nai, Bernice W. Teo, Nadya L. Tan, Koby Yi Wei Chua, Chun Kit Wong, ...

Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy. However, manual segmentation of the prostate is subjective and time-consuming. Many monomodal deep learning networks have been developed for automatic whole gland (WG) prostate segmentation from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice score coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 for WG segmentation, respectively. Multimodal HighRes3DNet and ScaleNet yielded statistically higher DSC than monomodal DenseVNet in PZ and CG only, indicating that multimodal networks added value by generating better segmentation between the PZ and CG regions but did not improve WG segmentation. No significant difference was observed between monomodal and multimodal networks at the apex and base of the WG, indicating that segmentation in these regions was more affected by the general network architecture. The number of training images was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet yielded a DSC above 0.65 even for special cases, such as post-TURP (transurethral resection of the prostate) or abnormal prostates, whereas HighRes3DNet’s performance fluctuated with no clear trend despite being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variation in segmentation compared with the matched monomodal network.
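The DSC used throughout the abstract above is the standard overlap metric between a predicted and a ground-truth binary mask, 2|A∩B| / (|A|+|B|). A minimal sketch (the tiny 2D masks below are toy stand-ins for 3D prostate-zone segmentations):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice score coefficient (DSC) between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2x3 masks: 2 overlapping voxels, 3 voxels in each mask -> DSC = 2*2/6.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

A DSC of 1.0 means perfect overlap and 0.0 means no overlap, so the reported WG scores around 0.85-0.89 indicate close agreement with the manual reference.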


2020 · Vol 32 (14) · pp. 10209-10228
Author(s): John Arevalo, Thamar Solorio, Manuel Montes-y-Gómez, Fabio A. González