The Role of Unimodal Feedback Pathways in Gender Perception During Activation of Voice and Face Areas

2021 ◽  
Vol 15 ◽  
Author(s):  
Clement Abbatecola ◽  
Peggy Gerardin ◽  
Kim Beneyton ◽  
Henry Kennedy ◽  
Kenneth Knoblauch

Cross-modal effects provide a model framework for investigating hierarchical inter-areal processing, particularly under conditions where unimodal cortical areas receive contextual feedback from other modalities. Here, using complementary behavioral and brain imaging techniques, we investigated the functional networks participating in face and voice processing during gender perception, a high-level feature of voice and face perception. Within the framework of a signal detection decision model, maximum likelihood conjoint measurement (MLCM) was used to estimate the contributions of the face and voice to gender comparisons between pairs of audio-visual stimuli in which the face and voice were independently modulated. Top-down contributions were varied by instructing participants to make judgments based on the gender of either the face, the voice, or both modalities (N = 12 for each task). Estimated face and voice contributions to the judgments of the stimulus pairs were not independent; both contributed to all tasks, but their respective weights varied over a 40-fold range due to top-down influences. Models that best described the modal contributions required the inclusion of two different top-down interactions: (i) an interaction that depended on gender congruence across modalities (i.e., the difference between the face and voice genders within each stimulus); (ii) an interaction that depended on the gender magnitude within each modality. The significance of these interactions was task dependent: the gender-congruence interaction was significant for the face and voice tasks, while the gender-magnitude interaction was significant for the face and stimulus tasks. Subsequently, we used the same stimuli and related tasks in a functional magnetic resonance imaging (fMRI) paradigm (N = 12) to explore the neural correlates of these perceptual processes, analyzed with Dynamic Causal Modeling (DCM) and Bayesian Model Selection. Results revealed changes in effective connectivity between the unimodal Fusiform Face Area (FFA) and Temporal Voice Area (TVA) that paralleled the face and voice behavioral interactions observed in the psychophysical data. These findings reveal the role of multiple parallel unimodal feedback pathways in perception.
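
As a concrete illustration of the MLCM fitting step, the sketch below fits additive face and voice gender scales to paired-comparison responses by maximum likelihood. It is a minimal Python rendering of the general approach, not the authors' analysis code (the standard method is implemented, e.g., in the R package MLCM); the 5-level design and variable names are assumptions.

```python
# Minimal sketch of the additive MLCM decision model (illustrative, not the
# authors' code). Observers judge which of two face-voice stimuli appears
# more masculine; the response probability is the cumulative normal of the
# difference of summed scale values.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(psi, pairs, responses, n_levels):
    """psi stacks the face scale values, then the voice scale values."""
    psi_face, psi_voice = psi[:n_levels], psi[n_levels:]
    f1, v1, f2, v2 = pairs.T                  # each trial: (f1, v1) vs (f2, v2)
    delta = (psi_face[f1] + psi_voice[v1]) - (psi_face[f2] + psi_voice[v2])
    p = norm.cdf(delta)                       # P("first judged more masculine")
    p = np.clip(p, 1e-6, 1 - 1e-6)            # guard the log at the extremes
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def fit_mlcm(pairs, responses, n_levels=5):
    # In practice one level of each scale is pinned to 0 for identifiability;
    # this sketch skips that detail.
    x0 = np.zeros(2 * n_levels)
    res = minimize(neg_log_likelihood, x0, args=(pairs, responses, n_levels))
    return res.x[:n_levels], res.x[n_levels:]
```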

2020 ◽  
Author(s):  
Clement Abbatecola ◽  
Kim Beneyton ◽  
Peggy Gerardin ◽  
Henry Kennedy ◽  
Kenneth Knoblauch

Multimodal integration provides an ideal framework for investigating top-down influences in perceptual integration. Here, we investigate mechanisms and functional networks participating in face-voice multimodal integration during gender perception by using complementary behavioral (Maximum Likelihood Conjoint Measurement) and brain imaging (Dynamic Causal Modeling of fMRI data) techniques. Thirty-six subjects were instructed to judge pairs of face-voice stimuli according to the gender of the face (face task), the voice (voice task), or the stimulus (stimulus task; no specific modality instruction given). Face and voice contributions to the tasks were not independent, as both modalities significantly contributed to all tasks. The top-down influences in each task could be modeled as a differential weighting of the contributions of each modality, with an asymmetry favoring the auditory modality in the magnitude of the effect. Additionally, we observed two independent interaction effects in the decision process, reflecting both the coherence of the gender information across modalities and the magnitude of the gender difference from neutral. In a second experiment, we used functional MRI to investigate the modulation of effective connectivity between the Fusiform Face Area (FFA) and the Temporal Voice Area (TVA), two cortical areas implicated in face and voice processing. Twelve participants were presented with multimodal face-voice stimuli and instructed to attend to the face, the voice, or any gender information. We found specific changes in effective connectivity between these areas in the same conditions that generated behavioral interactions. Taken together, we interpret these results as converging evidence supporting the existence of multiple parallel hierarchical systems in multimodal integration.
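
The differential-weighting account with the two interaction terms can be written as a single decision variable passed through a cumulative normal. The sketch below simulates such an observer; all weights are invented for illustration and are not the fitted values from the study.

```python
# Sketch of the task-dependent weighting account with the two interaction
# terms (congruence across modalities, magnitude from neutral).
import numpy as np
from scipy.stats import norm

def p_first_more_masculine(stim1, stim2, w_face, w_voice, w_congr=0.3, w_mag=0.2):
    """stim = (psi_face, psi_voice): gender scale values, neutral = 0.
    Congruence term: face-voice disagreement within each stimulus.
    Magnitude term: total distance of a stimulus from gender-neutral."""
    d = (w_face * (stim1[0] - stim2[0]) + w_voice * (stim1[1] - stim2[1])
         - w_congr * (abs(stim1[0] - stim1[1]) - abs(stim2[0] - stim2[1]))
         + w_mag * ((abs(stim1[0]) + abs(stim1[1]))
                    - (abs(stim2[0]) + abs(stim2[1]))))
    return norm.cdf(d)

# Task instructions change only the modality weights (illustrative values):
tasks = {"face": (1.0, 0.1), "voice": (0.05, 1.2), "stimulus": (0.5, 0.8)}
for name, (wf, wv) in tasks.items():
    print(name, p_first_more_masculine((1.0, -0.5), (0.2, 0.2), wf, wv))
```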


2019 ◽  
Author(s):  
Kyesam Jung ◽  
Jiyoung Kang ◽  
Seungsoo Chung ◽  
Hae-Jeong Park

Multi-photon calcium imaging (CaI) is an important tool for assessing activity among neural populations within a column of the sensory cortex. However, the complex asymmetrical interactions among neural populations, termed effective connectivity, cannot be assessed directly by measuring the activity of each neuron with CaI and instead call for computational modeling. To estimate effective connectivity among neural populations, we propose a dynamic causal model (DCM) for CaI that combines a convolution-based dynamic neural state model with a dynamic calcium ion concentration model for CaI signals. After conducting a simulation study to evaluate DCM for CaI, we applied it to experimental CaI data measured in layer 2/3 of a barrel cortical column that responds differentially to hit and error whisking trials in mice. We first identified neural populations and constructed computational models with intrinsic connectivity of neural populations within layer 2/3 of the barrel cortex and extrinsic connectivity with latent external modes. Bayesian model inversion and comparison showed that a top-down model with latent inhibitory and excitatory external modes explains the observed CaI signals during hit and error trials better than any other model, whether with a single external mode or without any latent modes. The best model also showed differential intrinsic and extrinsic effective connectivity between hit and error trials (corresponding to bottom-up and top-down processes) in the functional hierarchical architecture. Both simulation and experimental results suggest the usefulness of DCM for CaI for exploring the hierarchical interactions among neural populations observed in CaI.
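
The generative idea, a neural state model driving a calcium concentration model that is read out through a saturating fluorescence nonlinearity, can be caricatured in a few lines. This first-order Euler sketch is a simplification of the convolution-based formulation in the paper; the connectivity, kinetics, and Hill-type observation parameters are illustrative assumptions.

```python
# Caricature of the DCM-for-CaI generative model: linear neural dynamics ->
# first-order calcium kinetics -> saturating fluorescence observation.
import numpy as np

def simulate(A, C, u, dt=0.01, tau_ca=0.5, k=1.0, kd=0.3):
    """A: intrinsic connectivity (n x n); C: input weights (n x m); u: inputs (T x m)."""
    T, n = u.shape[0], A.shape[0]
    x = np.zeros(n)            # neural states
    ca = np.zeros(n)           # intracellular calcium concentration
    f = np.zeros((T, n))       # observed fluorescence
    for t in range(T):
        x = x + dt * (A @ x + C @ u[t])          # neural dynamics
        ca = ca + dt * (-ca / tau_ca + k * x)    # calcium kinetics
        f[t] = ca / (ca + kd)                    # Hill-type saturating readout
    return f

# Two mutually coupled populations, one external driving input:
A = np.array([[-1.0, 0.4], [0.6, -1.0]])
C = np.array([[1.0], [0.0]])
u = np.zeros((1000, 1)); u[100:200] = 1.0
fluo = simulate(A, C, u)
```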


2021 ◽  
Author(s):  
Ismail Bouziane ◽  
Moumita Das ◽  
Cesar Caballero-Gaudes ◽  
Dipanjan Ray

Background: Functional neuroimaging research on anxiety has traditionally focused on brain networks associated with the complex psychological aspects of anxiety. In this study, we instead target the somatic aspects of anxiety. Motivated by the growing recognition that top-down cortical processing plays crucial roles in perception and action, we investigate effective connectivity among hierarchically organized sensorimotor regions and its association with (trait) anxiety. Methods: We selected 164 participants from the Human Connectome Project based on psychometric measures. We used their resting-state functional MRI data and Dynamic Causal Modeling (DCM) to assess effective connectivity within and between key regions in the exteroceptive, interoceptive, and motor hierarchies. Using hierarchical modeling of between-subject effects in DCM with Parametric Empirical Bayes, we first established the architecture of effective connectivity in sensorimotor networks and then investigated its association with fear somatic arousal (FSA) and fear affect (FA) scores. To probe the robustness of our results, we implemented a leave-one-out cross-validation analysis. Results: At the group level, the top-down connections in exteroceptive cortices were inhibitory in nature, whereas in interoceptive and motor cortices they were excitatory. With increasing FSA scores, the pattern of top-down effective connectivity was enhanced in all three networks, an observation that accords well with anxiety phenomenology. Anxiety-associated changes in effective connectivity were of sufficiently large effect size to predict whether somebody has mild or severe somatic anxiety. Interestingly, the enhancement in top-down processing in sensorimotor cortices was associated with FSA but not FA scores, establishing a (relative) dissociation between the somatic and cognitive dimensions of anxiety. Conclusions: Overall, enhanced top-down effective connectivity in sensorimotor cortices emerges as a promising and quantifiable candidate marker of trait somatic anxiety. These results pave the way for a novel approach to investigating the neural underpinnings of anxiety, based on the recognition of anxiety as an embodied phenomenon and the emerging interest in top-down cortical processing.
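
The leave-one-out logic is simple to state: refit the group model without one subject, then ask whether that subject's anxiety level is predictable from their connectivity parameters. The sketch below shows the scheme on synthetic data with a logistic classifier; the paper performs the analogous analysis within SPM's Parametric Empirical Bayes framework, so every name and number here is a stand-in.

```python
# Leave-one-out sketch: predict mild vs. severe somatic anxiety for a
# held-out subject from (synthetic) effective-connectivity parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n_subjects, n_connections = 164, 6
X = rng.normal(size=(n_subjects, n_connections))             # stand-in DCM parameters
y = (X[:, 0] + rng.normal(size=n_subjects) > 0).astype(int)  # mild/severe label

correct = 0
for train, test in LeaveOneOut().split(X):
    model = LogisticRegression().fit(X[train], y[train])
    correct += model.predict(X[test])[0] == y[test][0]
print(f"LOO accuracy: {correct / n_subjects:.2f}")
```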


2018 ◽  
Vol 29 (9) ◽  
pp. 3590-3605 ◽  
Author(s):  
Jodie Davies-Thompson ◽  
Giulia V Elli ◽  
Mohamed Rezk ◽  
Stefania Benetti ◽  
Markus van Ackeren ◽  
...  

The brain has separate specialized computational units to process faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, in which such integration depends on the (emotional) salience of the stimuli.
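
A common ROI-level test of multisensory integration, in the spirit of the bimodal-versus-unimodal comparison reported here, asks whether the bimodal response exceeds the larger of the two unimodal responses (the "max criterion"). The sketch below runs that paired test on synthetic subject-wise beta estimates; it illustrates the criterion, not the paper's exact statistics.

```python
# "Max criterion" test for multisensory integration in an ROI: is the
# bimodal response larger than the stronger unimodal response?
# Betas are synthetic stand-ins for subject-wise GLM estimates from rpSTS.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n_subjects = 20
beta_face = rng.normal(1.0, 0.5, n_subjects)
beta_voice = rng.normal(0.8, 0.5, n_subjects)
beta_bimodal = rng.normal(1.5, 0.5, n_subjects)

max_unimodal = np.maximum(beta_face, beta_voice)
t, p = ttest_rel(beta_bimodal, max_unimodal)
print(f"bimodal > max(unimodal): t = {t:.2f}, p = {p:.4f}")
```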


2018 ◽  
Author(s):  
Hyehyeon Kim ◽  
Gayoung Kim ◽  
Sue-Hyun Lee

Top-down signals can influence our visual perception by providing guidance on information processing. In particular, top-down control between two basic frameworks, "individuation" and "grouping," is critical for information processing during face perception. Individuation of faces supports identity recognition, while grouping subserves higher category-level face perception, such as race or gender. However, it remains elusive how top-down control between individuation and grouping affects cortical representations during face perception. Here we performed an fMRI experiment to investigate whether representations across early and high-level visual areas can be altered by top-down control between individuation and grouping processes during face perception. Focusing on neural response patterns across the early visual cortex (EVC) and a face-selective area (the fusiform face area, FFA), we found that the discriminability of individual faces from the response patterns was strong in the FFA but weak in the EVC during the individuation task, whereas the EVC but not the FFA showed significant face discrimination during the grouping tasks. These findings suggest that the representation of face information across the early and high-level visual cortex is flexible, depending on the top-down control of the perceptual framework between individuation and grouping.
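
Pattern discriminability of this kind is typically quantified by cross-validated decoding of face identity from ROI voxel patterns. The following sketch shows that step on synthetic data; in a real pipeline it would follow ROI definition and trial-wise GLM estimation, and all sizes here are assumptions.

```python
# Cross-validated decoding of face identity from (synthetic) ROI voxel
# patterns, as a stand-in for the discriminability analysis.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials_per_face, n_faces, n_voxels = 20, 4, 100
# Each face identity gets a weak but consistent voxel-pattern signature.
signatures = rng.normal(0, 1, (n_faces, n_voxels))
X = np.vstack([sig + rng.normal(0, 3, (n_trials_per_face, n_voxels))
               for sig in signatures])
y = np.repeat(np.arange(n_faces), n_trials_per_face)

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"identity decoding accuracy: {acc:.2f} (chance = {1/n_faces:.2f})")
```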


Author(s):  
Kumari Anshu ◽  
Loveleen Gaur ◽  
Arun Solanki

Chatbots have emerged as a significant answer to swiftly growing customer-care demands and as one of the biggest technological disruptions of recent times. Simply put, a chatbot is a software agent that facilitates natural-language interaction between computers and humans: a simulated, intelligent dialogue agent functional in a range of consumer-engagement circumstances, and one of the simplest means of enabling interaction between retailers and customers.

• Purpose – Most research in this field concerns the technical aspects of chatbots; recent research pays little attention to the impact chatbots create on users' experience. Through this work, the author examines the customer-oriented impact that chatbots have on shoppers. The purpose of this study is to develop and empirically test a framework that identifies the customer-oriented attributes of chatbots and the impact of these attributes on customers.

• Objectives – The study intends to bridge the gap between concepts and actual attributes and applications of chatbots. The following research objectives address the aspects of chatbots affecting different characteristics of consumer shopping behavior: a) identify the attributes of chatbots that bear an impression on consumer shopping behavior; b) evaluate the impact of chatbots on consumer shopping behavior that leads to chatbot usage and adoption among customers.

• Design/Methodology/Approach – For the analysis, the author administered factor analysis and multiple regression using SPSS version 23 to identify the attributes of chatbots and assess their impact on shoppers (a sketch of this pipeline follows this entry). A self-administered questionnaire was developed from the review of literature and evaluated by industry experts in retailing and by academicians. Primary information was gathered from respondents using this questionnaire, which comprises items on a 5-point Likert scale, where 1 stands for strongly disagree and 5 for strongly agree. Data were collected from 126 respondents, of whom 111 were finally considered for study and analysis.

• Findings – The empirical results identify various attributes of chatbots: trust, usefulness, satisfaction, readiness to use, and accessibility. Chatbots were also found to genuinely shape customers' shopping experience, which can help businesses increase sales and create repurchase intention among customers.

• Originality/Value – Recent research on chatbots pays little attention to the impact they create on the customers who actually interact with them on a regular basis. This paper extends the understanding of the customer-oriented attributes of artificially intelligent chatbots. The author develops a model framework, proposes the identified attributes, and empirically tests their impact on shoppers.
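
As referenced in the methodology item above, the pipeline can be reproduced outside SPSS: an exploratory factor analysis on the Likert items followed by a multiple regression of a behavior measure on the factor scores. The data, item counts, and factor interpretation below are synthetic stand-ins, not the study's dataset.

```python
# Sketch of the reported analysis pipeline: factor analysis on Likert items,
# then multiple regression of a shopping-behavior score on factor scores.
# (The paper used SPSS 23; everything here is synthetic and illustrative.)
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_respondents, n_items, n_factors = 111, 15, 5   # e.g. trust, usefulness, ...
items = pd.DataFrame(rng.integers(1, 6, size=(n_respondents, n_items)),
                     columns=[f"item_{i}" for i in range(n_items)])

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(items)                 # respondent factor scores

outcome = rng.normal(size=n_respondents)         # stand-in behavior measure
X = sm.add_constant(scores)
print(sm.OLS(outcome, X).fit().summary())
```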


2011 ◽  
Vol 55-57 ◽  
pp. 77-81
Author(s):  
Hui Ming Huang ◽  
He Sheng Liu ◽  
Guo Ping Liu

In this paper, we propose an efficient method for color face image segmentation based on color information and a saliency map. The method consists of three stages. First, skin-colored regions are detected using a Bayesian model of human skin color, yielding a chroma chart that shows the likelihood of skin color at each pixel. This chroma chart is then segmented into skin regions that satisfy the homogeneity property of human skin. In the third stage, a visual attention model is employed to localize the face region according to the saliency map, with the bottom-up approach utilizing both intensity and color feature maps from the test image. Experimental evaluation shows that the proposed method segments the face area effectively and performs well for subjects in both simple and complex backgrounds, as well as under varying illumination conditions and skin-color variances.
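
The first stage can be sketched as a Gaussian Bayesian model of skin chroma: each pixel's CbCr value is mapped to a skin likelihood, producing the chroma chart described above. The mean and covariance below are rough literature-style values used as illustrative assumptions, not the paper's fitted model.

```python
# Skin-likelihood sketch: Gaussian model of skin chroma in CbCr space,
# producing a per-pixel "chroma chart" (illustrative parameters).
import numpy as np
import cv2

SKIN_MEAN = np.array([120.0, 155.0])                 # (Cb, Cr), assumed values
SKIN_COV_INV = np.linalg.inv(np.array([[80.0, 20.0],
                                       [20.0, 60.0]]))

def chroma_chart(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cb, cr = ycrcb[..., 2], ycrcb[..., 1]            # OpenCV order: Y, Cr, Cb
    d = np.stack([cb - SKIN_MEAN[0], cr - SKIN_MEAN[1]], axis=-1)
    mahal = np.einsum("...i,ij,...j->...", d, SKIN_COV_INV, d)
    return np.exp(-0.5 * mahal)                      # likelihood in [0, 1]

# likelihood = chroma_chart(cv2.imread("face.jpg"))
# skin_mask = likelihood > 0.5
```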

