Emotional state dependence facilitates automatic imitation of visual speech

2019, Vol 72 (12), pp. 2833-2847
Author(s): Jasmine Virhia, Sonja A Kotz, Patti Adank

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus-Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the emotional valence of the distracter (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, thus implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.
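For readers who want to see how the congruency (automatic imitation) effect in an SRC paradigm is typically quantified, the following is a minimal Python sketch; the column names and the toy data are hypothetical, not taken from the study.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial with the participant's
# vocal response time (ms) and whether the visual distracter was congruent
# or incongruent with the spoken prompt.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "congruency":  ["congruent", "incongruent"] * 4,
    "rt_ms":       [512, 569, 498, 541, 530, 602, 525, 588],
})

# Mean RT per participant and congruency condition.
mean_rt = trials.groupby(["participant", "congruency"])["rt_ms"].mean().unstack()

# Automatic imitation (congruency) effect: incongruent minus congruent RT.
# Larger positive values indicate stronger automatic imitation.
mean_rt["imitation_effect_ms"] = mean_rt["incongruent"] - mean_rt["congruent"]
print(mean_rt)
```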

2021, Vol 17 (1), pp. 44-52
Author(s): Dicle Çapan, Simay Ikier

Directed Forgetting (DF) studies show that it is possible to exert cognitive control to intentionally forget information. The aim of the present study was to investigate how aware individuals are of the control they have over what they remember and forget when the information is emotional. Participants were presented with positive, negative, and neutral photographs, and each photograph was followed by either a Remember or a Forget instruction. Then, for each photograph, participants provided Judgments of Learning (JOLs) by indicating their likelihood of recognizing that item on a subsequent test. In the recognition phase, participants were asked to indicate all old items, irrespective of instruction. Remember items had higher JOLs than Forget items for all item types, indicating that participants believe they can intentionally forget even emotional information, which was not the case based on the actual recognition results. The DF effect, calculated by subtracting recognition for Forget items from recognition for Remember items, was only significant for neutral items. Emotional information disrupted cognitive control, eliminating the DF effect. Response times for JOLs showed that evaluating emotional information, especially negative information, takes longer and is thus more difficult. For both Remember and Forget items, JOLs reflected sensitivity to the emotionality of the items, with emotional items receiving higher JOLs than neutral ones. Actual recognition was better only for negative items, not for positive ones. JOLs also underestimated actual recognition performance. Discrepancies in metacognitive judgments due to emotional valence, as well as the reasons for this underestimation, are discussed.
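As a concrete illustration of the DF effect described above, here is a minimal Python sketch that computes it as recognition for Remember items minus recognition for Forget items, separately per valence. The data frame and its values are hypothetical.

```python
import pandas as pd

# Hypothetical hit rates (proportion of old items correctly recognised)
# per instruction (Remember / Forget) and item valence.
hits = pd.DataFrame({
    "valence":     ["neutral", "neutral", "positive", "positive", "negative", "negative"],
    "instruction": ["remember", "forget"] * 3,
    "hit_rate":    [0.78, 0.61, 0.74, 0.72, 0.80, 0.79],
})

wide = hits.pivot(index="valence", columns="instruction", values="hit_rate")

# DF effect: Remember minus Forget recognition; values near zero suggest
# that the Forget instruction failed to reduce memory for those items.
wide["df_effect"] = wide["remember"] - wide["forget"]
print(wide)
```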


2018
Author(s): Jasmine Virhia, Sonja A. Kotz, Patti Adank

This experiment provides evidence that the emotional valence of our own speech production affects the extent to which we are able to disregard conflicting, distracting visual speech information. The emotional valence of the distracting information itself does not affect the extent to which we can ignore this information. Our results imply that our own emotional mood affects how much we automatically imitate our conversation partner, but that the emotional status of our interlocutor is less important. The results support theoretical accounts suggesting that imitation in everyday life is governed by general cognitive mechanisms. However, these accounts should be extended to include predictions regarding the emotional valence of both interaction partners.


1994, Vol 26 (02), pp. 436-455
Author(s): W. Henderson, B. S. Northcote, P. G. Taylor

It has recently been shown that networks of queues with state-dependent movement of negative customers, and with state-independent triggering of customer movement, have product-form equilibrium distributions. Triggers and negative customers are entities which, when arriving at a queue, force a single customer to be routed through the network or to leave the network, respectively. They are 'signals' which affect and control network behaviour. The provision of state-dependent intensities introduces queues other than single-server queues into the network. This paper considers networks with state-dependent intensities in which signals can be either a trigger or a batch of negative customers (the batch size being determined by an arbitrary probability distribution). It is shown that such networks still have a product-form equilibrium distribution. Natural methods for state space truncation and for the inclusion of multiple customer types in the network can be viewed as special cases of this state dependence. A further generalisation allows for the possibility of signals building up at nodes.
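To make the product-form property concrete, the generic shape of such an equilibrium distribution is sketched below in LaTeX; the notation is illustrative and not taken from the paper.

```latex
% Generic product-form equilibrium for a network of N queues: the stationary
% probability of the joint state (n_1, ..., n_N) factorises into per-queue
% terms, up to a normalising constant C.
\[
  \pi(n_1, n_2, \ldots, n_N) \;=\; C \prod_{i=1}^{N} \pi_i(n_i)
\]
% Here \pi_i(n_i) depends only on the local state of queue i; with
% state-dependent intensities, it is built from that queue's arrival and
% service rate functions.
```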


Author(s): Marius Ötting, Roland Langrock, Antonello Maruotti

We investigate the potential occurrence of change points—commonly referred to as “momentum shifts”—in the dynamics of football matches. For that purpose, we model minute-by-minute in-game statistics of Bundesliga matches using hidden Markov models (HMMs). To allow for within-state dependence of the variables, we formulate multivariate state-dependent distributions using copulas. For the Bundesliga data considered, we find that the fitted HMMs comprise states which can be interpreted as a team showing different levels of control over a match. Our modelling framework enables inference related to causes of momentum shifts and team tactics, which is of much interest to managers, bookmakers, and sports fans.
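The modelling idea can be illustrated with a minimal hidden Markov model likelihood computation. The Python sketch below uses independent Poisson state-dependent distributions for two in-game count variables rather than the copula-based formulation used in the paper; all parameter values and observations are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

# Two hidden states, e.g. "team dominating" vs. "match balanced".
gamma = np.array([[0.95, 0.05],    # transition probability matrix
                  [0.10, 0.90]])
delta = np.array([0.5, 0.5])       # initial state distribution

# State-dependent Poisson means for two minute-by-minute counts
# (e.g. shots and passes into the final third) -- hypothetical values.
lam = np.array([[0.30, 4.0],       # state 1
                [0.05, 1.5]])      # state 2

# Hypothetical observed counts for a few minutes of play: shape (T, 2).
obs = np.array([[1, 5], [0, 3], [0, 1], [0, 2], [1, 6]])

def log_likelihood(obs, gamma, delta, lam):
    """Scaled forward algorithm for an HMM with Poisson emissions."""
    n_states = gamma.shape[0]
    # Emission probabilities per time step and state, assuming the two
    # counts are conditionally independent given the state (no copula here).
    probs = np.array([
        [poisson.pmf(o, lam[j]).prod() for j in range(n_states)]
        for o in obs
    ])
    alpha = delta * probs[0]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        loglik += np.log(c)
        alpha = (alpha / c) @ gamma * probs[t]
    return loglik + np.log(alpha.sum())

print(log_likelihood(obs, gamma, delta, lam))
```

In a full analysis, the transition and state-dependent parameters would be estimated by maximising this likelihood, and the most likely state sequence (the "momentum" trajectory) could then be decoded.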


Electronics, 2021, Vol 10 (9), pp. 1051
Author(s): Si Jung Kim, Teemu H. Laine, Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from the perception of entities in a virtual world. Users' immersion levels can vary when they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, are related to three elements of presence effects (attention, enjoyment, and memory). A VR story was created and used as an immersive stimulus in an experiment; it was presented through a head-mounted display (HMD) equipped with an eye tracker that collected the participants' eye gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. A set of pre- and post-questionnaires was used as a subjective measure to support the evidence of relationships between the presence effects and user characteristics. The results showed that user characteristics such as gender, immersion level, and emotional valence affected the participants' level of presence; however, there was no evidence that attention was associated with enjoyment or memory.


2014, Vol 35 (2), pp. 135-141
Author(s): Adele Kuckartz Pergher, Roberto Carlos Lyra da Silva

This was an observational, descriptive, exploratory case study with the objective of measuring the team's stimulus-response time to invasive blood pressure (IBP) monitoring alarms and analysing the implications of this time for patient safety. From January to March 2013, 60 hours of structured observation were conducted, recording the alarms triggered by IBP monitors in an adult ICU at a military hospital in the city of Rio de Janeiro. A total of 76 IBP alarms were recorded (1.26 alarms/hour), of which 21 (28%) were attended to and 55 (72%) were considered fatigued. The average response time to the alarms was 2 min 45 sec. Staffing deficits and the physical layout were factors determining the delay in response to the alarms. Increased response times to these alarms may compromise the safety of haemodynamically unstable patients, especially in situations such as shock or the use of vasoactive drugs.
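A quick arithmetic check of the reported rates (a minimal sketch; the counts are taken directly from the abstract):

```python
# Figures reported in the abstract.
observation_hours = 60
total_alarms = 76
attended = 21
fatigued = 55

print(f"Alarms per hour: {total_alarms / observation_hours:.2f}")  # ~1.27 (reported as 1.26)
print(f"Attended: {attended / total_alarms:.0%}")                  # ~28%
print(f"Fatigued: {fatigued / total_alarms:.0%}")                  # ~72%
```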


2020, Vol 11
Author(s): Xiaohong Liu, Hongliang Zhou, Chenguang Jiang, Yanling Xue, Zhenhe Zhou, ...

Alcohol dependence (AD) is associated with cognitive control deficits. The event-related potential (ERP) P300 reflects cognitive control-related processing. The aim of this study was to investigate whether cognitive control deficits are a trait biomarker or a state biomarker in AD. Participants included 30 AD patients and 30 healthy controls (HCs). All participants' P300 responses, evoked by a three-stimulus auditory oddball paradigm, were measured in a normal state (time 1, i.e., just after the last alcohol intake) and after abstinence (time 2, i.e., just after 4 weeks of abstinence). For P3a and P3b amplitudes, the group × time point interaction was significant; the simple effect of group was significant at both time 1 and time 2, and the simple effect of time point was significant in the AD group but not in the HC group. These results indicate that, compared with HCs, AD patients show reduced P3a/P3b amplitudes, and that although P3a/P3b amplitudes improved after 4 weeks of alcohol abstinence, they remained lower than those of HCs. No significant differences were observed for P3a and P3b latencies. These findings indicate that AD patients present cognitive control deficits reflected in P3a/P3b, and that cognitive control deficits in AD are both trait- and state-dependent. These findings help clarify the psychological and neural processes underlying AD and suggest that improving cognitive control function may benefit the treatment of AD.
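The group × time point analysis described above corresponds to a mixed (between × within) ANOVA on the P3 amplitudes. Below is a minimal Python sketch of such an analysis using the pingouin library; the data frame, column names, and values are hypothetical, and the original study may have used different software.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one P3b amplitude per subject and time point,
# with group coding AD patients vs. healthy controls (HC).
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["AD"] * 6 + ["HC"] * 6,
    "time":    ["t1", "t2"] * 6,
    "p3b_amp": [3.1, 4.0, 2.8, 3.7, 3.3, 4.2, 5.9, 6.1, 6.2, 6.0, 5.7, 6.3],
})

# Mixed ANOVA: 'group' is between-subjects, 'time' is within-subjects.
aov = pg.mixed_anova(data=df, dv="p3b_amp", within="time",
                     subject="subject", between="group")
print(aov)

# A significant group x time interaction would then be followed up with
# simple-effects tests (e.g. pairwise comparisons within each group and
# between groups at each time point).
```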


2020, Vol 10 (1)
Author(s): Raphaël Thézé, Mehdi Ali Gadiri, Louis Albert, Antoine Provost, Anne-Lise Giraud, ...

Natural speech is processed in the brain as a mixture of auditory and visual features. An example of the importance of visual speech is the McGurk effect and related perceptual illusions that result from mismatching auditory and visual syllables. Although the McGurk effect has widely been applied to the exploration of audio-visual speech processing, it relies on isolated syllables, which severely limits the conclusions that can be drawn from the paradigm. In addition, the extreme variability and the quality of the stimuli usually employed prevent comparability across studies. To overcome these limitations, we present an innovative methodology using 3D virtual characters with realistic lip movements synchronized with computer-synthesized speech. We used commercially accessible and affordable tools to facilitate reproducibility and comparability, and the set-up was validated on 24 participants performing a perception task. Within complete and meaningful French sentences, we paired a labiodental fricative viseme (i.e., /v/) with a bilabial occlusive phoneme (i.e., /b/). This audiovisual mismatch is known to induce the illusion of hearing /v/ in a proportion of trials. We tested the rate of the illusion while varying the magnitude of background noise and the audiovisual lag. Overall, the effect was observed in 40% of trials. The proportion rose to about 50% with added background noise and up to 66% when controlling for phonetic features. Our results conclusively demonstrate that computer-generated speech stimuli are a judicious choice, and that they can supplement natural speech with higher control over stimulus timing and content.
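As an illustration of how the illusion rate per condition can be computed from trial-level responses, here is a minimal Python sketch; the column names and the toy data are hypothetical, not taken from the study.

```python
import pandas as pd

# Hypothetical trial-level responses: each row records the noise and lag
# condition and whether the participant reported hearing the visually
# dubbed phoneme (the McGurk-like illusion).
trials = pd.DataFrame({
    "noise":    ["none", "none", "added", "added", "added", "none"],
    "lag_ms":   [0, 0, 0, 150, 150, 150],
    "illusion": [True, False, True, True, False, False],
})

# Proportion of illusory percepts per noise x lag condition.
rates = trials.groupby(["noise", "lag_ms"])["illusion"].mean()
print(rates)
```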

