Automated Pre-Processing and Automated Post-Processing in EEG/MEG Brain Source Analysis

2021 ◽  
Author(s):  
Seyed-Youns Sadat-Nejad

Analyzing Electroencephalography (EEG) and Magnetoencephalography (MEG) brain source signals allows for a better understanding and diagnosis of various brain-related activities and injuries. Because these measurements are highly complex and have low spatial resolution, different techniques have been employed to enhance the quality of the obtained results. The objective of this work is to employ state-of-the-art approaches and to develop algorithms with higher analysis reliability. As a pre-processing step, subspace denoising and artifact removal approaches are considered, yielding a method that automates and improves the estimation of the Number of Components (NoC) for artifacts such as Eye Blinking (EB). Using synthetic EEG-like simulations and real MEG data, it is shown that the proposed method estimates the NoC more reliably than the conventional manual method. For Independent Component Analysis (ICA)-based approaches, the method proposed in this thesis estimates the number of components with an accuracy of 98.7%. The thesis is also devoted to improving source localization techniques, which aim to estimate the locations within the brain of the sources that elicit the time-series measurements. In this context, after obtaining practical insight into the performance of popular L2-regularization-based approaches, a post-processing thresholding method is introduced. The proposed method improves the spatial resolution of L2-regularization inverse solutions, especially Standardized Low-Resolution Electromagnetic Tomography (sLORETA), a well-known and widely used inverse solution. As part of the proposed method, a novel noise variance estimator is introduced that combines the kurtosis statistic with the entropy of the data (noise). This new noise variance estimation technique gives the proposed method superior performance compared to existing ones. The algorithm is validated on synthetic EEG data using well-established validation metrics. It is shown that the proposed solution improves the resolution of conventional methods, performing thresholding/denoising automatically and without loss of any critical information.
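For readers unfamiliar with ICA-based artifact counting, the sketch below shows a generic way to estimate the number of eye-blink components: decompose the recording with FastICA and count components that correlate strongly with a blink-dominated frontal channel. This is a minimal illustration, not the thesis's algorithm; the correlation threshold, reference-channel index, and component count are assumptions.

```python
# Illustrative sketch (not the thesis's algorithm): count eye-blink
# ICA components by correlating each component with a frontal
# reference channel. Threshold and channel index are assumptions.
import numpy as np
from sklearn.decomposition import FastICA

def estimate_blink_noc(eeg, frontal_idx=0, corr_thresh=0.7, n_components=20):
    """eeg: array of shape (n_channels, n_samples)."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg.T).T        # (n_components, n_samples)
    reference = eeg[frontal_idx]                # blink-dominated channel
    corrs = [abs(np.corrcoef(s, reference)[0, 1]) for s in sources]
    return int(np.sum(np.array(corrs) >= corr_thresh))

# Example with random data (real usage would pass filtered EEG):
rng = np.random.default_rng(0)
print(estimate_blink_noc(rng.standard_normal((32, 5000))))
```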


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Liyao Song ◽  
Quan Wang ◽  
Ting Liu ◽  
Haiwei Li ◽  
Jiancun Fan ◽  
...  

Abstract Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imagery (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) that generates a sharp HR brain image from an LR input. Because the Charbonnier loss handles outliers well and the Gradient Difference Loss (GDL) sharpens an image, we combined the two to improve the robustness of the model and enhance the texture information of the SR results. Two adult brain MRI datasets, Kirby 21 and NAMIC, were used to train the model and verify its effectiveness. To further verify the generalizability and robustness of the proposed model, we collected eight clinical 2D fetal brain MRI datasets for evaluation. The experimental results show that the proposed deep residual-learning network achieved superior performance and higher efficiency than the other compared methods.
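The combined loss is straightforward to express in code. Below is a minimal PyTorch sketch of a Charbonnier term plus a gradient-difference term; the weighting `lambda_gdl`, the `eps` constant, and the first-order GDL formulation are assumptions, since the abstract does not give the exact values.

```python
# Minimal sketch of a Charbonnier + Gradient Difference Loss (GDL);
# the weighting and constants below are assumptions, not the paper's values.
import torch

def charbonnier(pred, target, eps=1e-3):
    # Smooth L1-like penalty that is robust to outliers.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def gradient_difference(pred, target):
    # Penalize mismatched image gradients to sharpen edges.
    dy_p, dx_p = pred.diff(dim=-2).abs(), pred.diff(dim=-1).abs()
    dy_t, dx_t = target.diff(dim=-2).abs(), target.diff(dim=-1).abs()
    return torch.mean((dy_p - dy_t).abs()) + torch.mean((dx_p - dx_t).abs())

def sr_loss(pred, target, lambda_gdl=0.1):
    return charbonnier(pred, target) + lambda_gdl * gradient_difference(pred, target)

# Usage: pred and target are (batch, channel, height, width) tensors.
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
sr_loss(pred, target).backward()
```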


2020 ◽  
Author(s):  
Liyao Song ◽  
Quan Wang ◽  
Ting Liu ◽  
Haiwei Li ◽  
Jiancun Fan ◽  
...  

Abstract Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imagery (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) that generates a sharp HR brain image from an LR input. Given that the Charbonnier loss handles outliers well and the Gradient Difference Loss (GDL) sharpens an image, we combine the two to improve the robustness of the model and enhance the texture information of the SR results. Two adult brain MRI datasets, Kirby 21 and NAMIC, were used to train the model and verify its effectiveness. To further verify the generalizability and robustness of the proposed model, we collected eight clinical fetal brain MRI datasets for evaluation. The experimental results show that the proposed deep residual-learning network achieved superior performance and higher efficiency than the other compared methods.


2021 ◽  
pp. 1-1
Author(s):  
Ming-Wei Wu ◽  
Yan Jin ◽  
Yan Li ◽  
Tianyu Song ◽  
Pooi Yuen Kam

Author(s):  
Radhika Theagarajan ◽  
Shubham Nimbkar ◽  
Jeyan Arthur Moses ◽  
Chinnaswamy Anandharamakrishnan

2021 ◽  
Vol 1 ◽  
pp. 11-20
Author(s):  
Owen Freeman Gebler ◽  
Mark Goudswaard ◽  
Ben Hicks ◽  
David Jones ◽  
Aydin Nassehi ◽  
...  

Abstract Physical prototyping during early-stage design is typically an iterative process. Commonly, a single prototype is used throughout the process, with its form being modified as the design evolves. If the form of the prototype is not captured at each iteration, understanding how specific design changes affect the satisfaction of requirements is challenging, particularly retrospectively.

In this paper, two systems for digitising physical artefacts, structured light scanning (SLS) and photogrammetry (PG), are investigated as means of capturing iterations of physical prototypes. First, a series of test artefacts is presented and procedures for operating each system are developed. Next, the artefacts are digitised using both SLS and PG, and the resulting models are compared against a master model of each artefact. Results indicate that both systems can reconstruct the majority of each artefact's geometry to within 0.1 mm of the master; however, SLS demonstrated superior performance overall, both in completion time and in model quality. Additionally, the quality of the PG models was far more dependent on the effort and expertise of the user than that of the SLS models.
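As an illustration of the kind of comparison described above, the sketch below computes the fraction of scanned points lying within 0.1 mm of a master model, approximating point-to-surface distance by nearest-neighbour distance to a sampled master point cloud. The metric, sampling approach, and tolerance handling are assumptions, not the authors' pipeline.

```python
# Illustrative sketch: fraction of scanned points within 0.1 mm of the
# master model, using a nearest-neighbour approximation of point-to-surface
# distance. This is a generic metric, not the paper's exact procedure.
import numpy as np
from scipy.spatial import cKDTree

def coverage_within(scan_pts, master_pts, tol_mm=0.1):
    """scan_pts, master_pts: (N, 3) arrays of XYZ coordinates in mm."""
    tree = cKDTree(master_pts)
    dists, _ = tree.query(scan_pts)      # distance to nearest master point
    return float(np.mean(dists <= tol_mm))

# Example with synthetic data: a noisy copy of the master point cloud.
rng = np.random.default_rng(0)
master = rng.uniform(0, 50, size=(10000, 3))
scan = master + rng.normal(scale=0.03, size=master.shape)
print(f"{coverage_within(scan, master):.1%} of points within 0.1 mm")
```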


2001 ◽  
Vol 1 (4) ◽  
pp. 282-290 ◽  
Author(s):  
F. C. Langbein ◽  
B. I. Mills ◽  
A. D. Marshall ◽  
R. R. Martin

Current reverse engineering systems can generate boundary representation (B-rep) models from 3D range data. Such models suffer from inaccuracies caused by noise in the input data and in the algorithms. The quality of reverse-engineered geometric models can be improved by finding candidate shape regularities in such a model and constraining the model to meet a suitable subset of them, in a post-processing step called beautification. This paper discusses algorithms to detect such approximate regularities in terms of similarities between feature objects, which describe properties of faces, edges and vertices, and of small groups of these elements, in a B-rep model with only planar, spherical, cylindrical, conical and toroidal faces. For each group of similar feature objects, the algorithms also seek special feature objects that may represent the group, e.g. an integer value approximating the radius of similar cylinders. Experiments show that the regularities found include the desired regularities as well as spurious ones, whose number can be limited by an appropriate choice of tolerances.
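To make the cylinder-radius example concrete, the sketch below groups approximately equal radii and proposes an integer representative for each group where one fits within tolerance. The single-linkage grouping and tolerance values are assumptions for illustration only, not the paper's detection algorithms.

```python
# Illustrative sketch: group approximately equal cylinder radii and
# propose an integer "special value" for each group where one fits.
# Tolerance and grouping strategy are assumptions, not the paper's method.
import numpy as np

def find_radius_regularities(radii, tol=0.05):
    radii = np.sort(np.asarray(radii, dtype=float))
    groups, current = [], [radii[0]]
    for r in radii[1:]:
        if r - current[-1] <= tol:       # single-linkage grouping
            current.append(r)
        else:
            groups.append(current)
            current = [r]
    groups.append(current)
    out = []
    for g in groups:
        mean = float(np.mean(g))
        nearest_int = round(mean)
        special = nearest_int if abs(mean - nearest_int) <= tol else mean
        out.append((g, special))
    return out

# Radii 9.98 and 10.03 group together and snap to the integer 10.
for group, rep in find_radius_regularities([9.98, 10.03, 12.55]):
    print(group, "->", rep)
```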

