Phase Locking
Recently Published Documents

TOTAL DOCUMENTS: 2258 (last five years: 369)
H-INDEX: 78 (last five years: 10)

2022 · Vol 148 · pp. 107775 · Author(s): Jinhu Long, Hongxiang Chang, Yuqiu Zhang, Tianyue Hou, Qi Chang, ...

2022 · Vol 74 · pp. 103492 · Author(s): Bhavya Vasudeva, Runfeng Tian, Dee H. Wu, Shirley A. James, Hazem H. Refai, ...

2022 · Vol 155 · pp. 111721 · Author(s): Stefano Lepri, Arkady Pikovsky

PLoS ONE · 2022 · Vol 17 (1) · pp. e0262417 · Author(s): Cédric Simar, Robin Petit, Nichita Bozga, Axelle Leroy, Ana-Maria Cebolla, ...

Objective: Different visual stimuli are classically used to trigger visual evoked potentials comprising well-defined components linked to the content of the displayed image. These evoked components result from averaging ongoing EEG signals, in which additive and oscillatory mechanisms contribute to the component morphology. Event-related potentials thus often reflect a mixed situation (power variation and phase-locking), making basic and clinical interpretations difficult. Moreover, the grand-average methodology produces artificial constructs that do not reflect individual peculiarities. This has motivated new approaches based on single-trial analysis, as recently used in the brain-computer interface field. Approach: We hypothesize that EEG signals may include specific information about the visual features of the displayed image and that such distinctive traits can be identified by state-of-the-art classification algorithms based on Riemannian geometry. The same classification algorithms are also applied to the dipole sources estimated by sLORETA. Main results and significance: We show that our classification pipeline can effectively discriminate between the display of different visual items (checkerboard versus 3D navigational image) in single EEG trials across multiple subjects. The present methodology reaches a single-trial classification accuracy of about 84% and 93% for inter-subject and intra-subject classification, respectively, using surface EEG. Interestingly, the classification algorithms trained on sLORETA source estimates fail to generalize across subjects (63%), which may be due to either the average head model used by sLORETA or the subsequent spatial filtering failing to extract discriminative information, but they reach an intra-subject classification accuracy of 82%.
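A minimal sketch of such a Riemannian-geometry classification pipeline is shown below. It is not the authors' code: it estimates a covariance matrix per single-trial epoch, maps it into a log-Euclidean tangent space, and classifies the vectorized matrices with a linear model. The array shapes, two-class labels, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def trial_covariances(X):
    """X: (n_trials, n_channels, n_samples) single-trial EEG epochs."""
    n_trials, n_ch, _ = X.shape
    covs = np.empty((n_trials, n_ch, n_ch))
    for i, epoch in enumerate(X):
        covs[i] = np.cov(epoch) + 1e-10 * np.eye(n_ch)  # small ridge for numerical stability
    return covs

def log_euclidean_features(covs):
    """Map SPD covariance matrices into a log-Euclidean tangent space and vectorize them."""
    feats = [np.real(logm(c))[np.triu_indices(c.shape[0])] for c in covs]
    return np.array(feats)

# Hypothetical data: 200 trials, 32 channels, 512 samples, two stimulus classes
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 512))
y = rng.integers(0, 2, 200)  # 0 = checkerboard, 1 = 3D navigational image (labels assumed)

features = log_euclidean_features(trial_covariances(X))
scores = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

The log-Euclidean mapping is used here only because it is simple and self-contained; a full Riemannian pipeline would typically project onto the tangent space at the geometric mean of the covariance matrices.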


2022 · Vol 12 (1) · Author(s): Monica Wagner, Silvia Ortiz-Mantilla, Mateusz Rusiniak, April A. Benasich, Valerie L. Shafer, ...

Acoustic structures associated with native-language phonological sequences are enhanced within auditory pathways for perception, although the underlying mechanisms are not well understood. To elucidate processes that facilitate perception, time–frequency (T–F) analyses of EEGs obtained from native speakers of English and Polish were conducted. Participants listened to same and different nonword pairs within counterbalanced attend and passive conditions. Nonwords contained the onsets /pt/, /pət/, /st/, and /sət/, which occur in both Polish and English, with the exception of /pt/, which never occurs word-initially in English. Measures of spectral power and inter-trial phase locking (ITPL) in the low-gamma (LG) and theta frequency bands were analyzed from two bilateral, auditory source-level channels created through source localization modeling. Results revealed significantly larger spectral power in LG for the English listeners to the unfamiliar /pt/ onsets from the right hemisphere at early cortical stages, during the passive condition. Further, ITPL values revealed distinctive responses in high and low theta to acoustic characteristics of the onsets, which were modulated by language exposure. These findings, language-specific processing in LG and acoustic-level and language-specific processing in theta, support the view that multiscale temporal processing in the LG and theta frequency bands facilitates speech perception.
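Inter-trial phase locking of the kind measured here can be computed directly from band-limited single-trial signals. The sketch below is illustrative only (the filter band, sampling rate, and array shapes are assumptions, not the authors' pipeline): it band-passes each trial in the low-gamma range, extracts instantaneous phase with the Hilbert transform, and quantifies phase consistency across trials.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpl(trials, fs, band):
    """Inter-trial phase locking for one channel.

    trials : (n_trials, n_samples) array of single-trial EEG
    fs     : sampling rate in Hz
    band   : (low, high) band in Hz, e.g. (30, 50) for low gamma
    Returns an (n_samples,) array of ITPL values in [0, 1].
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    phase = np.angle(hilbert(filtered, axis=-1))
    # length of the mean unit phasor across trials: 1 = perfect phase locking
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Hypothetical example: 100 trials, 1 s at 500 Hz, low-gamma band
rng = np.random.default_rng(1)
trials = rng.standard_normal((100, 500))
print(itpl(trials, fs=500, band=(30, 50)).max())
```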


2022 · Vol 15 · Author(s): Zhaobo Li, Xinzui Wang, Weidong Shen, Shiming Yang, David Y. Zhao, ...

Purpose: Tinnitus is a common but poorly understood auditory disorder. This study examined whether connectivity features in electroencephalography (EEG) signals can be used as biomarkers for fast, efficient diagnosis of chronic tinnitus. Methods: Resting-state EEG signals were recorded from tinnitus patients with different tinnitus locations. Four connectivity features [the phase-locking value (PLV), phase lag index (PLI), Pearson correlation coefficient (PCC), and transfer entropy (TE)] and two time-frequency domain features were extracted from the EEG signals, and four machine learning algorithms, comprising two support vector machine (SVM) models, a multi-layer perceptron (MLP), and a convolutional neural network (CNN), were applied to the selected features to classify the possible tinnitus sources. Results: Classification accuracy was highest when the SVM or MLP algorithm was applied to the PCC feature sets, achieving final average classification accuracies of 99.42% and 99.1%, respectively. Classification based on the PLV feature also performed particularly well. The MLP ran fastest, with an average computing time of only 4.2 s, making it the most suitable method when real-time diagnosis is required. Conclusion: Connectivity features of resting-state EEG signals can characterize differences in tinnitus location. The PCC and PLV features are the most suitable biomarkers for objective diagnosis of tinnitus, and these results may help clinicians in the initial diagnosis of tinnitus.
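Three of the connectivity features used in this study have simple closed-form estimators. The following sketch (array shapes and band-limiting are assumptions, and it is not the authors' code) computes PLV, PLI, and PCC matrices for one multichannel epoch; transfer entropy is omitted because it requires a more involved estimator.

```python
import numpy as np
from scipy.signal import hilbert

def connectivity_features(epoch):
    """epoch: (n_channels, n_samples) band-limited EEG segment.
    Returns (plv, pli, pcc) channel-by-channel connectivity matrices."""
    phase = np.angle(hilbert(epoch, axis=-1))
    n_ch = epoch.shape[0]
    plv = np.zeros((n_ch, n_ch))
    pli = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            dphi = phase[i] - phase[j]
            plv[i, j] = np.abs(np.mean(np.exp(1j * dphi)))      # phase-locking value
            pli[i, j] = np.abs(np.mean(np.sign(np.sin(dphi))))  # phase lag index
    pcc = np.corrcoef(epoch)                                     # Pearson correlation
    return plv, pli, pcc

# Hypothetical 16-channel, 2 s epoch at 250 Hz
rng = np.random.default_rng(2)
plv, pli, pcc = connectivity_features(rng.standard_normal((16, 500)))
print(plv.shape, pli.shape, pcc.shape)
```

The resulting matrices (or their upper-triangular entries) can then be flattened and fed to any of the classifiers mentioned above.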


2021 · Author(s): Melisa Menceloglu, Marcia Grabowecky, Satoru Suzuki

Prior research has identified a variety of task-dependent networks that form through inter-regional phase-locking of oscillatory activity as neural correlates of specific behaviors. Despite ample knowledge of task-specific functional networks, general rules governing global phase relations have not been investigated. To discover such general rules, we focused on phase modularity, measured as the degree to which global phase relations in EEG comprised distinct synchronized clusters interacting with one another at large phase lags. Synchronized clusters were detected with a standard community-detection algorithm, and the level of phase modularity was quantified by the index q. Our findings suggest that phase modularity is functionally consequential since (1) the temporal distribution of q was invariant across a broad range of frequencies (3-50 Hz examined) and behavioral conditions (resting with the eyes closed or watching a silent nature video), and (2) neural interactions (measured as power correlations) in beta-to-gamma bands consistently increased in high-modularity states. Notably, we found that the mechanism controlling phase modularity is remarkably simple. A network comprising anterior-posterior long-distance connectivity coherently shifted phase relations from low angles (|Δθ| < π/4) in low-modularity states (bottom 5% in q) to high angles (|Δθ| > 3π/4) in high-modularity states (top 5% in q), accounting for fluctuations in phase modularity. This anterior-posterior network likely plays a fundamental functional role as it controls phase modularity across a broad range of frequencies and behavioral conditions. These results may motivate future investigations into the functional roles of phase modularity as well as the anterior-posterior network that controls it.
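One minimal way to compute a phase-modularity index of this kind (a sketch only; the graph construction, edge weights, and community-detection algorithm are assumptions and may differ from the authors' pipeline) is to build a weighted graph from pairwise phase synchrony across channels, detect communities, and evaluate Newman's modularity q with networkx.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def phase_modularity(phases):
    """phases: (n_channels, n_samples) instantaneous phases for one time window.
    Returns modularity q of the phase-synchrony graph and the detected communities."""
    n_ch = phases.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            # pairwise phase-locking value as edge weight
            w = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
            G.add_edge(i, j, weight=w)
    communities = greedy_modularity_communities(G, weight="weight")
    q = modularity(G, communities, weight="weight")
    return q, communities

# Hypothetical 32-channel window of instantaneous phases
rng = np.random.default_rng(3)
q, comms = phase_modularity(rng.uniform(-np.pi, np.pi, size=(32, 250)))
print("q =", q, "communities:", len(comms))
```

Computing q in sliding windows then yields the kind of time-resolved modularity fluctuations analyzed in the study.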


2021 · Author(s): Hongxiang Chang, Rongtao Su, Jinhu Long, Qi Chang, Pengfei Ma, ...

2021 · Vol 15 · Author(s): Dominik Kessler, Catherine E. Carr, Jutta Kretzberg, Go Ashida

Information processing in the nervous system critically relies on temporally precise spiking activity. In the auditory system, various degrees of phase-locking can be observed from the auditory nerve to cortical neurons. The classical metric for quantifying phase-locking is the vector strength (VS), which captures the periodicity in neuronal spiking. More recently, another metric, called the correlation index (CI), was proposed to quantify the temporally reproducible response characteristics of a neuron. The CI is defined as the peak value of a normalized shuffled autocorrelogram (SAC). Both VS and CI have been used to investigate how temporal information is processed and propagated along the auditory pathways. While previous analyses of physiological data in cats suggested covariation of these two metrics, general characterization of their connection has never been performed. In the present study, we derive a rigorous relationship between VS and CI. To model phase-locking, we assume Poissonian spike trains with a temporally changing intensity function following a von Mises distribution. We demonstrate that VS and CI are mutually related via the so-called concentration parameter that determines the degree of phase-locking. We confirm that these theoretical results are largely consistent with physiological data recorded in the auditory brainstem of various animals. In addition, we generate artificial phase-locked spike sequences, for which recording and analysis parameters can be systematically manipulated. Our analysis results suggest that mismatches between empirical data and the theoretical prediction can often be explained with deviations from the von Mises distribution, including skewed or multimodal period histograms. Furthermore, temporal relations of spike trains across trials can contribute to higher CI values than predicted mathematically based on the VS. We find that, for most applications, a SAC bin width of 50 μs seems to be a favorable choice, leading to an estimated error below 2.5% for physiologically plausible conditions. Overall, our results provide general relations between the two measures of phase-locking and will aid future analyses of different physiological datasets that are characterized using these metrics.
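Both metrics related in this study are straightforward to compute from spike times. The sketch below is a simplified illustration (parameter values and the demo data are assumptions): the vector strength is the length of the mean phase vector at the stimulus frequency, and the correlation index is the central peak of a shuffled autocorrelogram normalized by the expected chance coincidence count.

```python
import numpy as np

def vector_strength(spike_times, freq):
    """VS: length of the mean unit vector of spike phases at the stimulus frequency."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def correlation_index(trials, bin_width, duration):
    """CI: central peak of the shuffled autocorrelogram (SAC).

    trials    : list of 1-D arrays of spike times (s), one per stimulus repetition
    bin_width : coincidence window / SAC bin width (s)
    duration  : analysis window duration (s)
    """
    n = len(trials)
    rate = sum(len(t) for t in trials) / (n * duration)  # mean firing rate
    coincidences = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # only across-trial (shuffled) pairs enter the SAC
            d = np.abs(trials[i][:, None] - trials[j][None, :])
            coincidences += np.count_nonzero(d < bin_width / 2)
    norm = n * (n - 1) * rate**2 * bin_width * duration  # expected chance count
    return coincidences / norm

# Demo: 20 trials of 100 uniformly random spikes in 1 s (unlocked, so VS ~ 0 and CI ~ 1)
rng = np.random.default_rng(4)
trials = [np.sort(rng.uniform(0, 1, 100)) for _ in range(20)]
print(vector_strength(np.concatenate(trials), 500.0),
      correlation_index(trials, bin_width=50e-6, duration=1.0))
```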


2021 · Author(s): Mate Aller, Heidi Solberg Okland, Lucy J MacGregor, Helen Blank, Matthew H. Davis

Speech perception in noisy environments is enhanced by seeing the facial movements of communication partners. However, the neural mechanisms by which auditory and visual speech are combined are not fully understood. We explore phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 female) who reported words from single spoken sentences. We manipulated the acoustic clarity and the visual speech signals such that critical speech information was present in the auditory modality, the visual modality, or both. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only (AO) speech matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus (STG) did not show above-chance partial coherence with visual speech signals during AV conditions, but did show partial coherence in visual-only (VO) conditions. Hence, visual speech enabled stronger phase locking to auditory signals in visual areas, whereas phase locking to visual speech in auditory regions occurred only during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.
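Coherence between a speech envelope and a neural time course, of the kind analyzed here, can be estimated with standard spectral tools. The sketch below is a toy example (sampling rate, band, and the synthetic signals are assumptions, and it computes ordinary rather than partial coherence): it measures phase-locked coupling between an amplitude envelope and one sensor/source time course in the 2-6 Hz syllable-rate range.

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                      # assumed common sampling rate (Hz) after downsampling
t = np.arange(0, 60, 1 / fs)    # 60 s of data

rng = np.random.default_rng(5)
envelope = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.5 * rng.standard_normal(t.size)
brain = np.roll(envelope, 10) + rng.standard_normal(t.size)  # delayed, noisy copy of the envelope

# magnitude-squared coherence via Welch's method
f, coh = coherence(envelope, brain, fs=fs, nperseg=512)
band = (f >= 2) & (f <= 6)
print("mean 2-6 Hz coherence:", coh[band].mean())
```

Partialling out the correlated visual signal, as done in the study, additionally requires the cross-spectra among all three signals, which standard partial-coherence formulas (or MEG toolboxes) provide.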

