Anatomical Harmonics Basis Based Brain Source Localization with Application to Epilepsy

Author(s):  
Amita Giri ◽  
Lalan Kumar ◽  
Nilesh Kurwale ◽  
Tapan K. Gandhi

Abstract Brain Source Localization (BSL) using Electroencephalogram (EEG) has been a useful noninvasive modality for the diagnosis of epileptogenic zones and the study of event-related potentials and brain disorders. The inverse solution of BSL is limited by high computational cost and localization error. Its performance is further limited by the head-shape assumption and the corresponding harmonics basis functions. In this work, a BSL method based on anatomical harmonics bases, namely Spherical Harmonics (SH) and, more particularly, Head Harmonics (H2), is presented. The spatio-temporal four-shell head model is formulated in the SH domain. The performance of the spatial-subspace-based Multiple Signal Classification (MUSIC) and Recursively Applied and Projected (RAP)-MUSIC methods is compared with the proposed SH and H2 counterparts on simulated data. SH and H2 domain processing effectively resolves the problem of high computational cost without sacrificing inverse source localization accuracy. The proposed H2 MUSIC was additionally validated for epileptogenic zone localization on clinical EEG data. The proposed framework offers clinicians an effective solution for automated and time-efficient seizure localization.
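For readers unfamiliar with the subspace scan that MUSIC performs, the sketch below implements a generic sensor-space MUSIC localizer in Python with a random placeholder lead-field matrix; it illustrates only the subspace-correlation metric that the paper reformulates in the SH/H2 domain, not the authors' harmonics-based implementation.

```python
# Generic sensor-space MUSIC scan (illustration only; not the SH/H2-domain
# implementation of the paper). The lead-field matrix L is a random
# placeholder for a real EEG forward model.
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_samp, n_grid = 64, 1000, 500            # hypothetical sizes
L = rng.standard_normal((n_chan, n_grid))         # placeholder lead fields
true_src = 123
X = np.outer(L[:, true_src], np.sin(0.1 * np.arange(n_samp)))
X += 0.05 * rng.standard_normal((n_chan, n_samp)) # sensor noise

C = X @ X.T / n_samp                              # spatial covariance
eigval, eigvec = np.linalg.eigh(C)                # eigenvalues in ascending order
n_sources = 1
En = eigvec[:, : n_chan - n_sources]              # noise subspace

# MUSIC pseudo-spectrum: large where a lead field is orthogonal to the noise subspace
Ln = L / np.linalg.norm(L, axis=0)
music = 1.0 / np.sum((En.T @ Ln) ** 2, axis=0)
print("estimated source index:", int(np.argmax(music)), "| true:", true_src)
```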

2020 ◽  
Vol 65 (6) ◽  
pp. 673-682
Author(s):  
Pegah Khosropanah ◽  
Eric Tatt-Wei Ho ◽  
Kheng-Seang Lim ◽  
Si-Lei Fong ◽  
Minh-An Thuy Le ◽  
...  

Abstract Epilepsy surgery is an important treatment modality for medically refractory focal epilepsy. The outcome of surgery usually depends on how accurately the epileptogenic zone (EZ) is localized during pre-surgical evaluation. Good localization can be achieved with various electrophysiological and neuroimaging approaches, but each approach has its own merits and limitations. Electroencephalography (EEG) Source Imaging (ESI) is an emerging model-based computational technique for localizing the cortical sources of electrical activity within the brain volume in three dimensions. ESI-based pre-surgical evaluation gives an overall clinical yield of 73–91%, depending on the choice of head model, inverse solution, and EEG electrode density. It is a cost-effective, non-invasive method that provides valuable additional information in pre-surgical evaluation because of its high localizing value, specifically in MRI-negative cases, extratemporal or basal temporal lobe epilepsy, multifocal lesions such as tuberous sclerosis, or cases with multiple hypotheses. Unfortunately, less than 1% of surgical centers in developing countries use this method as part of pre-surgical evaluation. This review promotes ESI as a useful clinical tool, especially for patients with lesion-negative MRI, to determine the EZ cost-effectively and with high accuracy under optimized conditions.


2019 ◽  
Vol 9 (18) ◽  
pp. 3758 ◽  
Author(s):  
Xiang Li ◽  
Xiaojie Wang ◽  
Chengli Zhao ◽  
Xue Zhang ◽  
Dongyun Yi

Locating the source of a diffusion-like process is a fundamental and challenging problem in complex networks; solving it can help inhibit the outbreak of epidemics among humans, suppress the spread of rumors on the Internet, prevent cascading failures of power grids, and more. However, our ability to accurately locate the diffusion source is strictly limited by incomplete node information and the inevitable randomness of the diffusion process. In this paper, we propose an efficient optimization approach via maximum likelihood estimation to locate the diffusion source in complex networks with limited observations. By modeling the informed times of the observers, we derive an optimal source localization solution for arbitrary trees and then extend it to general graphs via proper approximations. Numerical analyses on both synthetic and real networks indicate that our method is superior to several benchmark methods in terms of average localization accuracy, high-precision localization, and approximate area localization. In addition, its low computational cost enables our method to be widely applied to the source localization problem in large-scale networks. We believe that our work can provide valuable insights into the interplay between information diffusion and source localization in complex networks.
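As a rough illustration of observer-based source localization, the following Python sketch scores every candidate node by how well shortest-path distances explain the observers' relative informed times on a synthetic tree; it is a simplified least-squares stand-in for the paper's maximum-likelihood estimator, and all sizes and noise parameters are arbitrary.

```python
# Simplified observer-based source localization on a synthetic tree: each
# candidate source is scored by how well shortest-path distances explain the
# observers' relative informed times. A stand-in for, not a reproduction of,
# the paper's maximum-likelihood estimator.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.balanced_tree(r=2, h=6)                   # synthetic tree network
true_source = 3
mu, sigma = 1.0, 0.2                             # per-edge delay mean / std (assumed)

# Simulate informed times at a few randomly chosen observer nodes
dist_true = nx.shortest_path_length(G, source=true_source)
observers = rng.choice(list(G.nodes), size=10, replace=False)
t_obs = np.array([dist_true[o] * mu + sigma * rng.standard_normal() for o in observers])

def score(candidate):
    """Negative squared error between observed and predicted relative delays."""
    d = nx.shortest_path_length(G, source=candidate)
    pred = np.array([d[o] for o in observers], dtype=float) * mu
    # relative times, so the unknown start time of the diffusion cancels out
    return -np.sum(((t_obs - t_obs[0]) - (pred - pred[0])) ** 2)

best = max(G.nodes, key=score)
print("estimated source:", best, "| true source:", true_source)
```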


2017 ◽  
Vol 29 (1) ◽  
pp. 26-36 ◽  
Author(s):  
Ryu Takeda ◽  
◽  
Kazunori Komatani

[Figure: Sound source localization and problem] We focus on the problem of localizing soft/weak voices recorded by small humanoid robots such as NAO. Sound source localization (SSL) for such robots requires fast processing and noise robustness owing to their restricted resources and the internal noise close to the microphones. Multiple signal classification using generalized eigenvalue decomposition (GEVD-MUSIC) is a promising method for SSL. It achieves noise robustness by whitening the robot's internal noise using prior noise information. However, whitening increases the computational cost and creates a direction-dependent bias in the localization score, which degrades localization accuracy. We have thus developed a new implementation of GEVD-MUSIC based on steering vector transformation (TSV-MUSIC). Applying a transformation equivalent to whitening to the steering vectors in advance reduces the real-time computational cost of TSV-MUSIC. Moreover, normalizing the transformed vectors cancels the direction-dependent bias and improves localization accuracy. Experiments using simulated data showed that TSV-MUSIC had the highest accuracy of the methods tested. An experiment using real recorded data showed that TSV-MUSIC outperformed GEVD-MUSIC and other MUSIC methods in terms of localization accuracy by about 4 points under low signal-to-noise-ratio conditions.
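The numpy sketch below conveys the steering-vector-transformation idea behind TSV-MUSIC: the steering vectors are pre-whitened with the noise covariance and normalized offline, after which an ordinary MUSIC scan is run on whitened data. The array geometry, noise statistics, and signal model are synthetic placeholders, not the authors' robot setup.

```python
# Rough numpy sketch of the TSV-MUSIC idea: pre-whiten and normalize the
# steering vectors offline, then run an ordinary MUSIC scan on whitened data.
# Geometry, noise covariance and signal model below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_mic, n_dir, n_frames = 8, 72, 400
A = rng.standard_normal((n_mic, n_dir)) + 1j * rng.standard_normal((n_mic, n_dir))

M = rng.standard_normal((n_mic, n_mic))
K = M @ M.T / n_mic + np.eye(n_mic)              # internal-noise covariance (assumed)
L_chol = np.linalg.cholesky(K)
W = np.linalg.inv(L_chol)                        # whitening matrix, W K W^T = I

# Offline step: transform and normalize the steering vectors once
A_t = W @ A
A_t /= np.linalg.norm(A_t, axis=0)

# Simulate one source plus correlated internal noise
true_dir = 30
s = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
noise = L_chol @ (rng.standard_normal((n_mic, n_frames))
                  + 1j * rng.standard_normal((n_mic, n_frames)))
X = np.outer(A[:, true_dir], s) + 0.5 * noise

# Online step: whiten the data, build the noise subspace, scan the transformed vectors
Y = W @ X
R = Y @ Y.conj().T / n_frames
_, V = np.linalg.eigh(R)
En = V[:, :-1]                                   # noise subspace (one source assumed)
spec = 1.0 / np.sum(np.abs(En.conj().T @ A_t) ** 2, axis=0)
print("estimated direction index:", int(np.argmax(spec)), "| true:", true_dir)
```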


2021 ◽  
Vol 15 ◽  
Author(s):  
Takayoshi Moridera ◽  
Essam A. Rashed ◽  
Shogo Mizutani ◽  
Akimasa Hirata

Electroencephalography (EEG) is a method for monitoring electrophysiological activity on the scalp, which represents the macroscopic activity of the brain. However, it is challenging to identify EEG source regions inside the brain from data measured by a scalp-attached network of electrodes. The accuracy of EEG source localization depends significantly on the type of head model and the inverse problem solver. In this study, we adopted different models with a resolution of 0.5 mm to account for thin tissues/fluids, such as the cerebrospinal fluid (CSF) and dura. In particular, a spatially dependent conductivity (segmentation-free) model created using deep learning was developed and used for a more realistic representation of electrical conductivity. We then adopted a multi-grid-based finite-difference method (FDM) for the forward problem analysis and a sparsity-based algorithm to solve the inverse problem. This enabled us to perform efficient source localization using a high-resolution model at a reasonable computational cost. Results indicated that the abrupt spatial changes in conductivity inherent in conventional segmentation-based head models may cause source localization errors to accumulate. Accurate modeling of the CSF, whose conductivity is the highest in the head, was an important factor affecting localization accuracy. Moreover, computational experiments with different noise levels and electrode setups demonstrated the robustness of the proposed method with the segmentation-free head model.
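As a toy stand-in for the sparsity-based inverse step described above (not the authors' actual solver), the following sketch recovers a few active sources from a random lead-field matrix with ISTA, i.e., L1-regularized least squares.

```python
# Toy sparse inverse solve via ISTA (L1-regularized least squares) on a random
# lead-field matrix; a generic stand-in for the sparsity-based inverse
# algorithm mentioned above, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(3)
n_elec, n_vox = 64, 2000
L = rng.standard_normal((n_elec, n_vox)) / np.sqrt(n_elec)     # toy forward model
j_true = np.zeros(n_vox)
j_true[rng.choice(n_vox, 3, replace=False)] = [5.0, -4.0, 3.0] # sparse sources
v = L @ j_true + 0.01 * rng.standard_normal(n_elec)            # scalp potentials

lam = 0.1                                       # L1 penalty weight (assumed)
step = 1.0 / np.linalg.norm(L, 2) ** 2          # gradient step size
j = np.zeros(n_vox)
for _ in range(500):                            # ISTA iterations
    z = j - step * (L.T @ (L @ j - v))          # gradient step
    j = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("indices of recovered active sources:", np.nonzero(np.abs(j) > 0.5)[0])
```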


2021 ◽  
Author(s):  
Emmanuelle Blanc ◽  
Jérôme Enjalbert ◽  
Pierre Barbillon

Background and Aims: Functional-structural plant models are increasingly being used by plant scientists to address a wide variety of questions. However, the calibration of these complex models is often challenging, mainly because of their high computational cost. In this paper, we applied an automatic method to the calibration of WALTer, a functional-structural wheat model that simulates the plasticity of tillering in response to competition for light.
Methods: We used a Bayesian calibration method to estimate the values of 5 parameters of the WALTer model by fitting the model outputs to tillering dynamics data. The method presented in this paper is based on the Efficient Global Optimisation algorithm. It involves the use of Gaussian process metamodels to generate fast approximations of the model outputs. To account for the uncertainty associated with the metamodel approximations, an adaptive design was used. The efficacy of the method was first assessed using simulated data. The calibration was then applied to experimental data.
Key Results: The method presented here performed well on both simulated and experimental data. In particular, the use of an adaptive design proved to be a very efficient way to improve the quality of the metamodel predictions, especially by reducing the uncertainty in areas of the parameter space that were of interest for the fitting. Moreover, we showed that a diversity of field data is necessary to calibrate the parameters.
Conclusions: The method presented in this paper, based on an adaptive design and Gaussian process metamodels, is an efficient approach for the calibration of WALTer and could be of interest for the calibration of other functional-structural plant models.
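The toy Python sketch below mimics the calibration loop described above: a Gaussian-process metamodel of a synthetic one-dimensional misfit function is refined adaptively by adding, at each step, the point that maximizes Expected Improvement (the core of the Efficient Global Optimisation algorithm). The objective is an arbitrary stand-in, not the WALTer model.

```python
# Toy adaptive-design calibration loop: a Gaussian-process metamodel of a
# synthetic 1-D misfit function is refined by adding the point maximizing
# Expected Improvement (the EGO idea). The objective is a stand-in, not WALTer.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def misfit(theta):                      # placeholder "model vs. data" error
    return np.sin(3 * theta) + 0.5 * (theta - 0.6) ** 2

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(5, 1))      # small initial design
y = misfit(X).ravel()
grid = np.linspace(0, 1, 400).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
for _ in range(15):                     # adaptive design: add one point per step
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # Expected Improvement
    x_new = grid[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, misfit(x_new[0]))

print("calibrated parameter estimate:", X[np.argmin(y), 0])
```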


2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting the N pixels of each window has temporal complexity O(N·log N), even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel that needs only about 9 comparisons per pixel by exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence is also examined, which calculates four medians per step.
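The sketch below illustrates the spatial-coherence idea in Python: each vertical 3-pixel slice is sorted once and shared by the three horizontally adjacent windows that contain it, and the window median is then obtained from the three sorted triples. It favors clarity over the exact ~9-comparison operation count of the paper.

```python
# Simplified 3x3 median filter using spatial coherence: each sorted vertical
# triple is computed once per row and shared by the three windows that
# contain it. Clarity is favored over the paper's exact comparison count.
import numpy as np

def median3x3(img):
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        col = np.sort(img[y - 1:y + 2, :], axis=0)   # sorted vertical triples, reused
        lo, me, hi = col[0], col[1], col[2]
        for x in range(1, w - 1):
            a = lo[x - 1:x + 2].max()                # max of the three minima
            b = np.median(me[x - 1:x + 2])           # median of the three medians
            c = hi[x - 1:x + 2].min()                # min of the three maxima
            out[y, x] = np.median([a, b, c])         # equals the median of all 9 pixels
    return out

noisy = np.random.default_rng(5).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(median3x3(noisy).shape)
```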


1995 ◽  
Vol 32 (2) ◽  
pp. 95-103
Author(s):  
José A. Revilla ◽  
Kalin N. Koev ◽  
Rafael Díaz ◽  
César Álvarez ◽  
Antonio Roldán

One factor in determining the transport capacity of coastal interceptors in Combined Sewer Systems (CSS) is the reduction of Dissolved Oxygen (DO) in coastal waters caused by the overflows. The study of the evolution of DO in coastal zones is complex, and the high computational cost of mathematical models makes the required probabilistic analysis impractical. Alternative methods are therefore needed that build on such mathematical modelling but apply it to only a limited number of cases. In this paper, two alternative methods are presented for studying the oxygen deficit resulting from CSS overflows. In the first, the statistical analysis focuses on the causes of the deficit (the volume discharged); the second concentrates on the effects (the concentrations of oxygen in the sea). Both methods have been applied in a study of the coastal interceptor at Pasajes Estuary (Guipúzcoa, Spain) with similar results.


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 891
Author(s):  
Aurea Grané ◽  
Alpha A. Sow-Barry

This work provides a procedure for constructing and visualizing profiles, i.e., groups of individuals with similar characteristics, for weighted and mixed data by combining two classical multivariate techniques: multidimensional scaling (MDS) and the k-prototypes clustering algorithm. The well-known drawback of classical MDS on large datasets is circumvented by selecting a small random sample of the dataset, whose individuals are clustered by means of an adapted version of the k-prototypes algorithm and mapped via classical MDS. Gower's interpolation formula is then used to project the remaining individuals onto the resulting configuration. Throughout the process, Gower's distance is used to measure the proximity between individuals. The methodology is illustrated on a real dataset obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE), which was carried out in 19 countries and represents over 124 million aged individuals in Europe. The performance of the method was evaluated through a simulation study, whose results indicate that the new proposal overcomes the high computational cost of classical MDS with low error.
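A minimal numeric-only sketch of the sampling-plus-interpolation idea is given below: classical MDS is computed on a small random subsample and the remaining individuals are projected with Gower's add-a-point interpolation formula. For brevity it uses Euclidean distances on synthetic numeric data and omits the k-prototypes clustering; the paper itself uses Gower's distance on weighted mixed data.

```python
# Sketch of "MDS on a subsample + Gower interpolation" with numeric data and
# Euclidean distances (the paper uses Gower's distance on mixed data and adds
# k-prototypes clustering, both omitted here for brevity).
import numpy as np

rng = np.random.default_rng(6)
data = rng.standard_normal((2000, 8))                    # toy numeric dataset
idx = rng.choice(len(data), size=200, replace=False)
S, rest = data[idx], np.delete(data, idx, axis=0)

# Classical MDS on the subsample
D2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)      # squared distances
J = np.eye(len(S)) - np.ones((len(S), len(S))) / len(S)
B = -0.5 * J @ D2 @ J                                    # double centering
eigval, eigvec = np.linalg.eigh(B)
k = 2
lam, V = eigval[-k:][::-1], eigvec[:, -k:][:, ::-1]
X = V * np.sqrt(lam)                                     # 2-D configuration of the sample

# Gower's add-a-point formula: x_new = 0.5 * Lambda^{-1} X^T (b - d_new^2)
b = np.sum(X ** 2, axis=1)
d2_new = ((rest[:, None, :] - S[None, :, :]) ** 2).sum(-1)
X_rest = 0.5 * (b - d2_new) @ X / lam
print(X.shape, X_rest.shape)     # sample configuration and interpolated individuals
```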


Author(s):  
Seyede Vahide Hashemi ◽  
Mahmoud Miri ◽  
Mohsen Rashki ◽  
Sadegh Etedali

This paper carries out sensitivity analyses to study the effect of each design variable on the performance of a self-centering buckling-restrained brace (SC-BRB) and the corresponding buckling-restrained brace (BRB) without shape memory alloy (SMA) rods. Furthermore, reliability analyses of the BRB and SC-BRB are performed. Considering the high computational cost of the simulation methods, three metamodels, namely Kriging, radial basis function (RBF), and polynomial response surface method (PRSM), are utilized to construct the surrogate models. For this aim, nonlinear dynamic analyses are conducted on both the BRB and SC-BRB using OpenSees software. The results showed that the SMA area, SMA length ratio, and BRB core area have the greatest effect on the failure probability of the SC-BRB. It is concluded that Kriging-based Monte Carlo Simulation (MCS) gives the best performance in estimating the limit state function (LSF) of the BRB and SC-BRB in the reliability analysis procedures. Considering the effects of changing the maximum cyclic loading on the failure probability computation and comparing the failure probabilities for different LSFs, it is also found that the reliability indices of the SC-BRB were always higher than the corresponding reliability indices determined for the BRB, which confirms the performance superiority of the SC-BRB over the BRB.
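The generic sketch below shows what Kriging-based MCS looks like in practice: a Gaussian-process (Kriging) surrogate is fitted to a small design of experiments on a toy limit state function and then sampled heavily by Monte Carlo to estimate the failure probability. The limit state function is arbitrary and unrelated to the SC-BRB model.

```python
# Generic Kriging-based MCS sketch: a Gaussian-process surrogate fitted to a
# small design of experiments on a toy limit state function, then sampled
# heavily to estimate the failure probability. Not the SC-BRB model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):                                 # toy LSF: failure when g(x) < 0
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(7)
X_train = rng.normal(size=(80, 2))                  # small design of experiments
g_train = limit_state(X_train)

krig = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6, normalize_y=True)
krig.fit(X_train, g_train)

X_mc = rng.normal(size=(100_000, 2))                # cheap Monte Carlo on the surrogate
pf = np.mean(krig.predict(X_mc) < 0.0)
print("estimated failure probability:", pf)
```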

