Detection of alcoholic impact on visual event related potentials using beta band spectral entropy, repeated measures ANOVA and k-NN classifier

Author(s):  
N. Sriraam ◽  
T.K. Padma Shri
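
No abstract accompanies this entry, but the title names the full pipeline: beta-band spectral entropy features fed to a k-NN classifier. A minimal sketch under stated assumptions (single-channel ERP epochs, a 256 Hz sampling rate, conventional 13-30 Hz beta edges, k = 5, and hypothetical labels; none of these parameters are taken from the paper):

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

FS = 256             # sampling rate in Hz (assumed)
BETA = (13.0, 30.0)  # conventional beta-band edges in Hz

def beta_spectral_entropy(epoch):
    """Shannon entropy of the normalized PSD restricted to the beta band."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    band = (freqs >= BETA[0]) & (freqs <= BETA[1])
    p = psd[band] / psd[band].sum()          # normalize to a probability mass
    return -np.sum(p * np.log2(p + 1e-12))   # epsilon guards log(0)

def classify(train_epochs, labels, test_epochs):
    """k-NN on the single spectral-entropy feature
    (hypothetical labels: 1 = alcoholic, 0 = control)."""
    feats = [[beta_spectral_entropy(e)] for e in train_epochs]
    knn = KNeighborsClassifier(n_neighbors=5).fit(feats, labels)
    return knn.predict([[beta_spectral_entropy(e)] for e in test_epochs])
```
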
2015 ◽  
Vol 26 (04) ◽  
pp. 384-392
Author(s):  
Yael Henkin ◽  
Yifat Yaar-Soffer ◽  
Lihi Givon ◽  
Minka Hildesheimer

Background: Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. Purpose: To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. Research Design: The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Study Sample: Fifteen young adults (21–32 yr; 6 females) with normal hearing sensitivity. Data Collection and Analysis: By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Results: Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of monaural responses for N1-BIC, P3-BIC, and LNC-BIC, respectively. Binaural interaction manifested in a decrease of the binaural response compared to the sum of monaural responses. Furthermore, listening condition affected P3 latency only, whereas laterality effects manifested in enhanced N1 amplitudes at the left (T3) vs. right (T4) scalp electrode and in a greater left–right amplitude difference in the right compared to left listening condition. Conclusions: The current AERP data provide evidence for the occurrence of cortical BICs during perceptual and postperceptual stages, presumably reflecting ongoing integration of information presented to the two ears at the final stages of auditory processing. Increasing binaural interaction with the progression of the auditory processing sequence (N1 to LNC) may support the notion that cortical BICs reflect inherited interactions from preceding stages of upstream processing together with discrete cortical neural activity involved in binaural processing. Clinically, an objective measure of cortical binaural processing has the potential to become an appealing neural correlate of binaural behavioral performance.
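
The derivation above is explicit enough to sketch: BIC(t) = [R(t) + L(t)] − B(t), with maximal interaction taken as the BIC peak over the summed monaural amplitude. A minimal numpy version, with array names and the peak-search window as illustrative assumptions:

```python
import numpy as np

def derive_bic(mon_right, mon_left, binaural):
    """BIC waveform: sum of the monaural responses minus the binaural one.
    All inputs are grand-average waveforms on a common time base."""
    return (mon_right + mon_left) - binaural

def maximal_interaction(bic, mon_right, mon_left, window):
    """Peak |BIC| inside `window` (a pair of sample indices), expressed as a
    percentage of the summed monaural amplitude at that same latency."""
    start, stop = window
    summed = mon_right + mon_left
    i = start + int(np.argmax(np.abs(bic[start:stop])))
    return 100.0 * np.abs(bic[i]) / np.abs(summed[i])
```

Applied per component, a percentage of this kind is what the 51%, 55%, and 75% figures for N1-BIC, P3-BIC, and LNC-BIC express.
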


Author(s):  
Michela Balconi ◽  
Serafino Tutino

The aim of this study is to explore the iconic representation of frozen metaphors. Starting from the dichotomy between pragmatic models, for which metaphor is a semantic anomaly, and direct access models, in which metaphor is processed like literal language, the cognitive and linguistic processes involved in metaphor comprehension were analyzed using behavioural data (RTs) and neuropsychological indexes (ERPs). Thirty-six subjects listened to 160 sentences equally divided across the variables of content (metaphorical vs. literal) and congruousness (semantically anomalous vs. non-anomalous). The ERP analysis showed two negative deflections (the N3-N4 complex), indicating distinct cognitive processes involved in sentence comprehension. Repeated measures ANOVA, applied to peak amplitude and latency, suggested N4 as an index of semantic anomaly (incongruous stimuli), localized more posteriorly (Pz), whereas N3 was sensitive to the content variable: metaphorical sentences elicited a larger, more posteriorly distributed (Oz) deflection than literal ones. Combining these results with the behavioural data (no differences for metaphorical vs. literal), it appears that metaphorical and literal decoding differ not in the cognitive complexity of decoding (direct or indirect access) but in representational format, which is more iconic for metaphor (as N3 suggests).
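
A hedged sketch of the analysis route described above: per-subject peak extraction in a latency window, then a repeated-measures ANOVA on the measured amplitudes. Here statsmodels' AnovaRM stands in for whatever software the authors used, and the column names and search windows are assumptions:

```python
import numpy as np
from statsmodels.stats.anova import AnovaRM

def negative_peak(erp, times, t_min, t_max):
    """Most negative point within [t_min, t_max] -> (amplitude, latency)."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.where(mask)[0][np.argmin(erp[mask])]
    return erp[idx], times[idx]

def rm_anova(df):
    """df: pandas DataFrame in long format, one row per
    subject x content x congruousness cell, with columns
    'subject', 'content', 'congruousness', 'amplitude'."""
    return AnovaRM(df, depvar="amplitude", subject="subject",
                   within=["content", "congruousness"]).fit()
```
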


2021 ◽  
Vol 11 (3) ◽  
pp. 378
Author(s):  
Laura Martínez-Tejada ◽  
Alex Puertas-González ◽  
Natsue Yoshimura ◽  
Yasuharu Koike

In this article we present a study of electroencephalography (EEG) traits for emotion recognition, using a videogame as the stimulus tool and considering two kinds of emotion-related information: participants' arousal-valence self-assessment answers, and game events that represented positive and negative emotional experiences in the videogame context. We performed a statistical analysis using Spearman's correlation between the EEG traits and the emotional information. We found that EEG traits correlated strongly with arousal and valence scores; moreover, the common EEG traits with strong correlations belonged to the theta band of the central channels. We then implemented a regression algorithm with feature selection to predict arousal and valence scores using EEG traits, achieving better results for arousal regression than for valence regression. The EEG traits selected for arousal and valence regression belonged to the time domain (standard deviation, complexity, mobility, kurtosis, skewness) and the frequency domain (power spectral density, PSD, and differential entropy, DE, from the theta, alpha, beta, gamma, and full EEG frequency spectrum). Regarding game events, we found that EEG traits related to the theta, alpha, and beta bands had strong correlations. In addition, distinctive event-related potentials were identified in the presence of both types of game events. Finally, we implemented a classification algorithm to discriminate between positive and negative events using EEG traits to identify emotional information. We obtained good classification performance using only two traits related to the frequency domain, one on the theta band and one on the full EEG spectrum.
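
The traits listed above are standard enough to sketch. Below: Hjorth mobility/complexity, band-limited differential entropy (shown for theta, the band the study highlights), and Spearman's correlation against the self-assessment scores. The filter order and band edges are conventional assumptions, not the authors' exact settings:

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.signal import butter, filtfilt

def hjorth(x):
    """Hjorth mobility and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return mobility, complexity

def differential_entropy(x, fs, lo=4.0, hi=8.0):
    """DE of a band-limited, approximately Gaussian signal:
    0.5 * ln(2*pi*e*variance). Default band is theta (4-8 Hz)."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    xf = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xf))

def trait_score_correlation(traits, scores):
    """Spearman rho between one trait (per trial) and the matching
    arousal or valence self-assessments."""
    rho, p = spearmanr(traits, scores)
    return rho, p
```
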


2021 ◽  
pp. 030573562097869
Author(s):  
Alice Mado Proverbio ◽  
Francesca Russo

We used electrophysiological recordings to investigate how music-induced emotions are recognized and combined with the emotional content of written sentences. Twenty-four sad, joyful, and frightening musical tracks were presented to 16 participants reading 270 short sentences conveying a sad, joyful, or frightening emotional meaning. Audiovisual stimuli could be emotionally congruent or incongruent with each other; participants were asked to pay attention and respond to filler sentences containing cities' names, while ignoring the rest. The amplitude values of event-related potentials (ERPs) were subjected to repeated measures ANOVAs. Distinct electrophysiological markers were identified for the processing of stimuli inducing fear (N450, either linguistic or musical), for language-induced sadness (P300) and for joyful music (positive P2 and LP potentials). The music/language emotional discordance elicited a large N400 mismatch response (p = .032). Its strongest intracranial source was the right superior temporal gyrus (STG), a region devoted to multisensory integration of emotions. The results suggest that music can communicate emotional meaning as distinctively as language.
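
A minimal sketch of the congruity contrast reported above: an incongruent-minus-congruent difference wave, with its mean amplitude in an N400 window tested across participants. The 350-550 ms window and array layout are assumptions for illustration, not the authors' parameters:

```python
import numpy as np
from scipy.stats import ttest_1samp

def n400_mismatch(congruent, incongruent, times, win=(0.350, 0.550)):
    """Per-subject mean amplitude of the incongruent-minus-congruent
    difference wave in an assumed N400 window (seconds).

    congruent, incongruent: (n_subjects, n_times) averaged ERPs."""
    mask = (times >= win[0]) & (times <= win[1])
    return (incongruent - congruent)[:, mask].mean(axis=1)

def test_mismatch(congruent, incongruent, times):
    amp = n400_mismatch(congruent, incongruent, times)
    return ttest_1samp(amp, 0.0)  # is the mismatch response reliable?
```
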


Author(s):  
Alice M Proverbio

Abstract A well-established neuroimaging literature predicts a right-sided asymmetry in the activation of face-devoted areas such as the fusiform gyrus (FG) and its resulting M/N170 response during face processing. However, the face-related response sometimes appears to be bihemispheric. A few studies have argued that bilaterality depended on the sex composition of the sample. To shed light on this matter, two meta-analyses were conducted starting from a large initial database of 250 ERP (Event-related potentials)/MEG (Magnetoencephalography) peer-reviewed scientific articles. Paper coverage was from 1985 to 2020. Thirty-four articles met the inclusion criteria of a sufficiently large and balanced sample size with strictly right-handed and healthy participants aged 18–35 years and N170 measurements in response to neutral front view faces at left and right occipito/temporal sites. The data of 817 male (n = 414) and female (n = 403) healthy adults were subjected to repeated-measures analyses of variance. The results of statistical analyses from the data of 17 independent studies (from Asia, Europe and America) seem to robustly indicate the presence of a sex difference in the way the two cerebral hemispheres process facial information in humans, with a marked right-sided asymmetry of the bioelectrical activity in males and a bilateral or left-sided activity in females.
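
One way to operationalize the asymmetry this meta-analysis turns on is a lateralization index over left and right occipito-temporal N170 amplitudes, compared between male and female samples. In this sketch the index definition is an assumption, and a simple two-sample test stands in for the repeated-measures analyses of variance the paper actually reports:

```python
import numpy as np
from scipy.stats import ttest_ind

def lateralization_index(left_amp, right_amp):
    """(|right| - |left|) / (|right| + |left|); positive = right-lateralized."""
    l, r = np.abs(left_amp), np.abs(right_amp)
    return (r - l) / (r + l)

def compare_sexes(male_li, female_li):
    """Two-sample comparison of per-study (or per-subject) indices."""
    return ttest_ind(male_li, female_li)
```
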


2019 ◽  
Vol 5 (1) ◽  
pp. 16-22
Author(s):  
Mohammad Ali Nazari ◽  
Javad Salehi Fadardi ◽  
Zohreh Gholami Doborjeh ◽  
Taktom Amanzadeh Oghaz ◽  
...  

Background: In the study of human behavior, peering directly into the brain and assessing distinct activity patterns can make evoked neural responses more interpretable through accurate brain analysis. Objectives: We investigated the role of Event Related Potentials (ERPs) in consumers' pre-comprehension processing of marketing logos. Materials & Methods: In the framework of an experimental design, twenty-six right-handed volunteers (13 men, 13 women) participated in 2013 at the University of Tabriz. An individual task presenting familiar vs. unfamiliar logos was designed. Stimuli were displayed on a monitor controlled by a PC using the Mitsar® stimulus presentation system PsyTask. ERP data were analyzed by repeated measures ANOVA. Results: Our results showed that when subjects were presented with familiar logos, a higher peak amplitude of the N1 component was observed in the right hemisphere. These variations in the averages of early ERP components over the occipital lobe can be attributed to pre-perceptual brain activity. Conclusion: Investigating early ERP components may be useful in predicting consumers' preference, particularly in the neuromarketing field.
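
A hedged sketch of the key contrast: N1 peak amplitude for familiar vs. unfamiliar logos over each hemisphere. The search window, electrode grouping, and data layout are illustrative assumptions:

```python
import numpy as np

N1_WIN = (0.080, 0.150)  # assumed N1 search window in seconds

def n1_peak(erp, times, win=N1_WIN):
    """Most negative amplitude inside the N1 window."""
    mask = (times >= win[0]) & (times <= win[1])
    return erp[mask].min()

def familiarity_effect(erps, times):
    """erps[condition][hemisphere]: (n_subjects, n_times) averages over,
    e.g., occipital sites. Returns the familiar-minus-unfamiliar N1
    difference per hemisphere (more negative = larger N1 for familiar)."""
    return {hemi: np.mean([n1_peak(e, times) for e in erps["familiar"][hemi]])
                  - np.mean([n1_peak(e, times) for e in erps["unfamiliar"][hemi]])
            for hemi in ("left", "right")}
```
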


Author(s):  
Michela Balconi ◽  
Alba Carrera

Emotion decoding constitutes a case of multimodal processing of cues from multiple channels. Previous behavioural and neuropsychological studies indicated that, when we have to decode emotions on the basis of multiple perceptual cues, cross-modal integration takes place. The present study investigates the simultaneous processing of emotional tone of voice and emotional facial expression by event-related potentials (ERPs) across a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N = 30) were required to process the stimuli and to indicate their comprehension (via a response pad). ERP variations and behavioural data (response times, RTs) were submitted to repeated measures analysis of variance (ANOVA). We considered two time intervals (150-250 and 250-350 ms post-stimulus) in order to explore the ERP variations. ANOVA showed two ERP effects with different cognitive functions: a negative deflection (N2) with a more anterior distribution (Fz), and a positive deflection (P2) with a more posterior distribution. N2 may be considered a marker of emotional content (sensitive to the type of emotion), whereas P2 may represent a marker of cross-modal integration, since it varied as a function of the congruous/incongruous condition, showing a higher peak for congruous than for incongruous stimuli. Finally, RTs were reduced in the congruous condition for some emotions (i.e., sadness), with an inverted effect for others (i.e., fear, anger, and surprise).
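
The two analysis intervals are given explicitly (150-250 and 250-350 ms post-stimulus); a minimal sketch of mean-amplitude extraction per window and condition, with the data layout assumed:

```python
import numpy as np

WINDOWS = {"early": (0.150, 0.250), "late": (0.250, 0.350)}  # seconds

def window_means(erp, times):
    """Mean amplitude of one averaged waveform in each analysis window."""
    return {name: erp[(times >= lo) & (times <= hi)].mean()
            for name, (lo, hi) in WINDOWS.items()}

def condition_table(epochs, times):
    """epochs[condition]: (n_subjects, n_times) subject averages.
    Returns the group-mean amplitude per condition and window."""
    table = {}
    for cond, subj_erps in epochs.items():
        per_subj = [window_means(e, times) for e in subj_erps]
        table[cond] = {name: float(np.mean([m[name] for m in per_subj]))
                       for name in WINDOWS}
    return table
```
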


2020 ◽  
Vol 31 (08) ◽  
pp. 566-577
Author(s):  
Sharon E. Miller ◽  
Yang Zhang

Abstract Background Cortical auditory event-related potentials are a potentially useful clinical tool to objectively assess speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts and whether these differences result in distinct neural responses with and without hearing aid amplification remain unclear. Purpose To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast. Research Design A repeated-measures, within subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification. Study Sample Ten adult listeners with normal hearing participated in the study. Data Collection and Analysis Cortical auditory event-related potentials were elicited to an /ɑs/–/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64-electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance. Results The P2' component of the acoustic change complex significantly differed for the syllable-final fricative contrast with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions. Conclusions Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricative significantly differed with and without the hearing aid.
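
A hedged sketch of the aided-vs-unaided comparison: measure the P2' of the acoustic change complex (the cortical response time-locked to the vowel-to-fricative transition) in each condition and test the difference across the ten listeners. The change-point latency and search window are assumptions, not the authors' values:

```python
import numpy as np
from scipy.stats import ttest_rel

CHANGE_T = 0.300          # assumed vowel-to-fricative transition latency (s)
P2P_WIN = (0.150, 0.250)  # assumed P2' search window relative to the change

def p2_prime(erp, times):
    """Most positive amplitude in the P2' window after the acoustic change."""
    lo, hi = CHANGE_T + P2P_WIN[0], CHANGE_T + P2P_WIN[1]
    return erp[(times >= lo) & (times <= hi)].max()

def aided_effect(unaided, aided, times):
    """unaided, aided: (n_subjects, n_times) subject-average waveforms."""
    u = np.array([p2_prime(e, times) for e in unaided])
    a = np.array([p2_prime(e, times) for e in aided])
    return ttest_rel(a, u)  # paired across listeners
```
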


Author(s):  
Sharon E. Miller ◽  
Jessica Graham ◽  
Erin Schafer

Purpose Auditory sensory gating is a neural measure of inhibition and is typically measured with a click or tonal stimulus. This electrophysiological study examined if stimulus characteristics and the use of speech stimuli affected auditory sensory gating indices. Method Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance. Results Significant gating of P1–N1–P2 peaks was observed for all stimulus types. N1–P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response observed for natural speech compared to nonspeech and synthetic speech. Conclusions Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The results of the study indicate the amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
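
The abstract quantifies gating as suppression of P1-N1-P2 responses; a conventional index for such paired-stimulus paradigms is the S2/S1 amplitude ratio. A minimal sketch with peak windows as conventional assumptions (the authors' exact gating metric is not stated here):

```python
import numpy as np

def peak_to_peak(erp, times, n1_win=(0.080, 0.150), p2_win=(0.150, 0.250)):
    """N1-P2 peak-to-peak amplitude of one averaged waveform."""
    n1 = erp[(times >= n1_win[0]) & (times <= n1_win[1])].min()
    p2 = erp[(times >= p2_win[0]) & (times <= p2_win[1])].max()
    return p2 - n1

def gating_ratio(s1_erp, s2_erp, times):
    """S2/S1 amplitude ratio; values near 0 indicate strong gating."""
    return peak_to_peak(s2_erp, times) / peak_to_peak(s1_erp, times)
```
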

