Computational modeling of seizure spread on a cortical surface

2020 ◽  
Author(s):  
Viktor Sip ◽  
Maxime Guye ◽  
Fabrice Bartolomei ◽  
Viktor Jirsa

Abstract In the field of computational epilepsy, neural field models have helped to explain some large-scale features of seizure dynamics. These insights, however, remain at a general level, without translation to clinical settings via personalization of the model with patient-specific structure. In particular, a link has been suggested between epileptic seizures spreading across the cortical surface and the so-called theta-alpha activity (TAA) pattern seen in intracranial electrographic signals, yet this link has not been demonstrated at the patient-specific level. Here we present a single-patient computational study linking seizure spread across the patient-specific cortical surface with a specific instance of the TAA pattern recorded in the patient. Using the realistic geometry of the cortical surface, we simulate seizure dynamics in The Virtual Brain platform and show that the simulated electrographic signals qualitatively agree with the recorded signals. Furthermore, a comparison with simulations performed on surrogate surfaces reveals that the best quantitative fit is obtained for the real surface. The work illustrates how patient-specific cortical geometry can be utilized in The Virtual Brain for personalized model building, and the importance of such an approach.
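The notion of a seizure front spreading across the cortical surface can be illustrated with a toy graph computation: if recruitment propagates along mesh edges at a constant speed, each vertex's recruitment time is a shortest-path distance from the onset site. This is only a minimal sketch (plain Dijkstra on a hypothetical four-vertex mesh patch), not the neural field model actually simulated in The Virtual Brain; the mesh, speed, and source vertex are invented for illustration.

```python
import heapq

def recruitment_times(adjacency, speed, source):
    """Earliest recruitment time of each vertex when a seizure front
    spreads from `source` at constant `speed` along mesh edges.
    `adjacency` maps vertex -> list of (neighbor, edge_length)."""
    times = {v: float("inf") for v in adjacency}
    times[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, v = heapq.heappop(heap)
        if t > times[v]:
            continue  # stale queue entry; a shorter time was found
        for u, length in adjacency[v]:
            t_u = t + length / speed
            if t_u < times[u]:
                times[u] = t_u
                heapq.heappush(heap, (t_u, u))
    return times

# Toy 4-vertex patch of a surface mesh (edge lengths in mm).
mesh = {
    0: [(1, 2.0), (2, 2.0)],
    1: [(0, 2.0), (3, 2.0)],
    2: [(0, 2.0), (3, 2.0)],
    3: [(1, 2.0), (2, 2.0)],
}
print(recruitment_times(mesh, speed=1.0, source=0))
```

With a propagation speed of 1 mm per time unit, the vertex opposite the source is recruited at t = 4.0 via either path.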

2021 ◽  
Vol 17 (2) ◽  
pp. e1008689
Author(s):  
Viktor Sip ◽  
Meysam Hashemi ◽  
Anirudh N. Vattikonda ◽  
Marmaduke M. Woodman ◽  
Huifang Wang ◽  
...  

Surgical interventions in epileptic patients aimed at the removal of the epileptogenic zone have success rates of only 60–70%. This failure can be partly attributed to insufficient spatial sampling by the implanted intracranial electrodes during the clinical evaluation, leading to an incomplete picture of the spatio-temporal seizure organization in regions that are not directly observed. Utilizing the partial observations of the seizure spreading through the brain network, complemented by the assumption that epileptic seizures spread along structural connections, we infer whether and when the unobserved regions are recruited into the seizure. To this end we introduce a data-driven model of seizure recruitment and propagation across a weighted network, which we invert using a Bayesian inference framework. Using a leave-one-out cross-validation scheme on a cohort of 45 patients, we demonstrate that the method improves the predictions of the states of the unobserved regions compared to an empirical estimate that does not use the structural information, while performing on a par with an empirical estimate that does take the structure into account. Furthermore, a comparison with the performed surgical resection and the surgery outcome indicates a link between the inferred excitable regions and the actual epileptogenic zone. The results emphasize the importance of the structural connectome in the large-scale spatio-temporal organization of epileptic seizures and introduce a novel way to integrate the patient-specific connectome and intracranial seizure recordings in a whole-brain computational model of seizure spread.
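The structural assumption at the heart of the abstract — that unobserved regions are driven toward seizure by their connections to already-seizing regions — can be illustrated by flagging a hidden region as recruited when its summed connection weight to observed seizing regions crosses a threshold. This is a deliberately simplified surrogate, not the Bayesian network model inverted in the study; the region names, weights, and threshold are hypothetical.

```python
def predicted_recruited(weights, recruited_obs, threshold):
    """Predict recruitment of an unobserved region from its structural
    connection weights to observed regions: it is flagged as recruited
    when the summed weight to seizing regions reaches `threshold`."""
    drive = sum(w for region, w in weights.items() if region in recruited_obs)
    return drive >= threshold

# Hypothetical connection weights from one hidden region to four
# observed regions.
w = {"A": 0.8, "B": 0.5, "C": 0.1, "D": 0.05}
print(predicted_recruited(w, recruited_obs={"A", "B"}, threshold=1.0))  # True
print(predicted_recruited(w, recruited_obs={"C", "D"}, threshold=1.0))  # False
```

The study's actual model additionally infers recruitment *times* and regional excitabilities via Bayesian inversion, rather than applying a fixed threshold.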


2020 ◽  
Author(s):  
Viktor Sip ◽  
Meysam Hashemi ◽  
Anirudh N Vattikonda ◽  
Marmaduke Woodman ◽  
Huifang Wang ◽  
...  

Surgical interventions in epileptic patients aimed at the removal of the epileptogenic zone have success rates of only 60–70%. This failure can be partly attributed to insufficient spatial sampling by the implanted intracranial electrodes during the clinical evaluation, leading to an incomplete picture of the spatio-temporal seizure organization in regions that are not directly observed. Utilizing the partial observations of the seizure spreading through the brain network, complemented by the assumption that epileptic seizures spread along structural connections, we infer whether and when the unobserved regions are recruited into the seizure. To this end we introduce a data-driven model of seizure recruitment and propagation across a weighted network, which we invert using a Bayesian inference framework. Using a leave-one-out cross-validation scheme on a cohort of fifty patients, we demonstrate that the method can improve the predictions of the states of the unobserved regions compared to an empirical estimate. Furthermore, a comparison with the performed surgical resection and the surgery outcome indicates a link between the inferred excitable regions and the actual epileptogenic zone. The results emphasize the importance of the structural connectome in the large-scale spatio-temporal organization of epileptic seizures and introduce a novel way to integrate the patient-specific connectome and intracranial seizure recordings in a whole-brain computational model of seizure spread.


2008 ◽  
Vol 11 (2) ◽  
pp. 56-60 ◽  
Author(s):  
Jill K. Duthie

Abstract Clinical supervisors in university-based clinical settings are challenged by numerous tasks to promote the development of self-analysis and problem-solving skills in the clinical student (American Speech-Language-Hearing Association, ASHA, 1985). The Clinician Directed Hierarchy is a clinical training tool that assists the clinical teaching process by directing the student clinician’s focus to a specific level of intervention. At each of five levels of intervention, the clinician develops an understanding of the client’s speech/language target behaviors and matches clinical support accordingly. Additionally, principles and activities of generalization are highlighted for each intervention level. Preliminary findings suggest this is a useful training tool for university clinical settings. An essential goal of effective clinical supervision is the provision of support and guidance in the student clinician’s development of independent clinical skills (Larson, 2007). The student clinician is challenged with identifying client behaviors in the therapeutic process and learning to match his or her instructions, models, prompts, reinforcement, and use of stimuli appropriately according to the client’s needs. In addition, the student clinician must be aware of techniques in the intervention process that will promote generalization of new communication behaviors. Throughout the intervention process, clinicians are charged with identifying appropriate target behaviors, quantifying the progress of the client’s acquisition of the targets, and making adjustments within and between sessions as necessary. Central to the development of clinical skills is the feedback provided by the clinical supervisor (Brasseur, 1989; Moss, 2007). Particularly in the early stages of clinical skills development, the supervisor is challenged with addressing numerous aspects of clinical performance and awareness, while ensuring the client’s welfare (Moss).
To address the management of clinician and client behaviors while developing an understanding of the clinical intervention process, the University of the Pacific has developed and begun to implement the Clinician Directed Hierarchy.


2019 ◽  
Author(s):  
Mohammad Atif Faiz Afzal ◽  
Mojtaba Haghighatlari ◽  
Sai Prasad Ganesh ◽  
Chong Cheng ◽  
Johannes Hachmann

We present a high-throughput computational study to identify novel polyimides (PIs) with exceptional refractive index (RI) values for use as optic or optoelectronic materials. Our study utilizes an RI prediction protocol based on a combination of first-principles and data modeling developed in previous work, which we employ on a large-scale PI candidate library generated with the ChemLG code. We deploy the virtual screening software ChemHTPS to automate the assessment of this extensive pool of PI structures in order to determine the performance potential of each candidate. This rapid and efficient approach yields a number of highly promising lead compounds. Using the data mining and machine learning program package ChemML, we analyze the top candidates with respect to prevalent structural features and feature combinations that distinguish them from less promising ones. In particular, we explore the utility of various strategies that introduce highly polarizable moieties into the PI backbone to increase its RI. The derived insights provide a foundation for rational and targeted design that goes beyond traditional trial-and-error searches.
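The screening loop described above — predict an RI for every library member, then keep and rank the high-RI candidates — can be sketched in a few lines. The predictor here is a made-up stand-in (counting sulfur atoms and aromatic carbons in a SMILES string as a crude proxy for polarizability), not the first-principles/data-modeling protocol or the ChemHTPS machinery from the paper; the cutoff and the tiny library are likewise illustrative.

```python
def screen(candidates, predict_ri, cutoff=1.6):
    """Rank a candidate library by predicted refractive index and keep
    only entries at or above `cutoff`, highest RI first."""
    scored = [(predict_ri(smiles), smiles) for smiles in candidates]
    return sorted((ri, s) for ri, s in scored if ri >= cutoff)[::-1]

def toy_ri(smiles):
    """Hypothetical stand-in predictor: reward sulfur atoms and aromatic
    carbons, two moieties known to raise polarizability (and hence RI)."""
    return 1.5 + 0.05 * smiles.count("S") + 0.01 * smiles.count("c")

library = ["c1ccccc1", "c1ccc(S)cc1", "CCO", "Sc1ccc(S)cc1"]
for ri, smiles in screen(library, toy_ri):
    print(f"{smiles}: predicted RI ~ {ri:.2f}")
```

A real campaign would swap `toy_ri` for the validated prediction protocol and stream the ChemLG-generated library through the same filter-and-rank pattern.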


SLEEP ◽  
2020 ◽  
Author(s):  
Luca Menghini ◽  
Nicola Cellini ◽  
Aimee Goldstone ◽  
Fiona C Baker ◽  
Massimiliano de Zambotti

Abstract Sleep-tracking devices, particularly within the consumer sleep technology (CST) space, are increasingly used in both research and clinical settings, providing new opportunities for large-scale data collection in highly ecological conditions. Due to the fast pace of the CST industry combined with the lack of a standardized framework to evaluate the performance of sleep trackers, their accuracy and reliability in measuring sleep remain largely unknown. Here, we provide a step-by-step analytical framework for evaluating the performance of sleep trackers (including standard actigraphy), as compared with gold-standard polysomnography (PSG) or other reference methods. The analytical guidelines are based on recent recommendations for evaluating and using CST from our group and others (de Zambotti and colleagues; Depner and colleagues), and include raw data organization as well as critical analytical procedures, including discrepancy analysis, Bland–Altman plots, and epoch-by-epoch analysis. Analytical steps are accompanied by open-source R functions (available at https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html). In addition, an empirical sample dataset is used to describe and discuss the main outcomes of the proposed pipeline. The guidelines and the accompanying functions are aimed at standardizing the testing of CST performance, not only to increase the replicability of validation studies, but also to provide ready-to-use tools to researchers and clinicians. All in all, this work can help to increase the efficiency, interpretation, and quality of validation studies, and to improve the informed adoption of CST in research and clinical settings.
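The core of the discrepancy analysis the pipeline describes — mean bias and 95% limits of agreement between a tracker and PSG — can be computed in a few lines. This sketch is in Python rather than the pipeline's R functions, and the six nights of total-sleep-time values are fabricated for illustration.

```python
from statistics import mean, stdev

def bland_altman(device, reference):
    """Bland–Altman summary: mean bias and 95% limits of agreement
    between paired device and reference (e.g. PSG) measurements."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Total sleep time (minutes) for 6 nights: tracker vs PSG (invented data).
tracker = [402, 388, 421, 395, 410, 399]
psg = [394, 391, 413, 389, 404, 401]
bias, lo, hi = bland_altman(tracker, psg)
print(f"bias {bias:.1f} min, limits of agreement [{lo:.1f}, {hi:.1f}]")
```

A positive bias indicates the tracker overestimates sleep relative to PSG; the limits of agreement bound where roughly 95% of individual-night discrepancies are expected to fall, assuming approximately normal differences.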


Materials ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1021
Author(s):  
Bernhard Dorweiler ◽  
Pia Elisabeth Baqué ◽  
Rayan Chaban ◽  
Ahmed Ghazy ◽  
Oroa Salem

As comparative data on the precision of 3D-printed anatomical models are sparse, the aim of this study was to evaluate the accuracy of 3D-printed models of vascular anatomy generated by two commonly used printing technologies. Thirty-five 3D models of large (aortic, wall thickness of 2 mm, n = 30) and small (coronary, wall thickness of 1.25 mm, n = 5) vessels printed with fused deposition modeling (FDM) (rigid, n = 20) and PolyJet (flexible, n = 15) technology were subjected to high-resolution CT scans. From the resulting DICOM (Digital Imaging and Communications in Medicine) dataset, an STL file was generated, and wall thickness as well as surface congruency were compared with the original STL file using dedicated 3D engineering software. The mean wall thickness was 2.11 mm (+5%) for the large-scale aortic models and 1.26 mm (+0.8%) for the coronary models, resulting in an overall mean wall-thickness deviation of +5% across all 35 3D models when compared to the original STL file. The mean surface deviation was found to be +120 µm for all models, with +100 µm for the aortic and +180 µm for the coronary 3D models, respectively. Both printing technologies were found to conform with the currently set standards of accuracy (<1 mm), demonstrating that accurate 3D models of large and small vessel anatomy can be generated by both FDM and PolyJet printing technology using rigid and flexible polymers.
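The percentage figures quoted above follow directly from the nominal and measured wall thicknesses. A small sketch of the computation (note that the exact aortic value comes out near +5.5%, which the abstract reports rounded to +5%):

```python
def deviation_pct(measured, nominal):
    """Relative wall-thickness deviation of a printed model versus the
    nominal STL value, in percent."""
    return 100.0 * (measured - nominal) / nominal

# Values from the study: 2 mm aortic wall printed at 2.11 mm mean,
# 1.25 mm coronary wall printed at 1.26 mm mean.
print(f"aortic:   {deviation_pct(2.11, 2.0):+.1f}%")
print(f"coronary: {deviation_pct(1.26, 1.25):+.1f}%")
```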


2020 ◽  
Vol 41 (S1) ◽  
pp. s521-s522
Author(s):  
Debarka Sengupta ◽  
Vaibhav Singh ◽  
Seema Singh ◽  
Dinesh Tewari ◽  
Mudit Kapoor ◽  
...  

Background: The rising trend of antibiotic resistance imposes a heavy burden on healthcare, both clinically and economically (US$55 billion), with an estimated 23,000 annual deaths in the United States as well as increased length of stay and morbidity. Machine-learning–based methods have, of late, been used to leverage patients' clinical histories and demographic information to predict antimicrobial resistance. We developed a machine-learning model ensemble that maximizes the accuracy of such a drug sensitivity versus resistivity classification system compared to the existing best-practice methods. Methods: We first performed a comprehensive analysis of the association between infecting bacterial species and patient factors, including patient demographics, comorbidities, and certain healthcare-specific features. We leveraged the predictable nature of these complex associations to infer patient-specific antibiotic sensitivities. Various base learners, including k-NN (k-nearest neighbors) and gradient boosting machine (GBM), were used to train an ensemble model for confident prediction of antimicrobial susceptibilities. Base-learner selection and model performance evaluation were performed carefully using a variety of standard metrics, namely accuracy, precision, recall, F1 score, and Cohen's κ. Results: We validated the performance on the MIMIC-III database, which harbors deidentified clinical data of 53,423 distinct patient admissions between 2001 and 2012 to the intensive care units (ICUs) of the Beth Israel Deaconess Medical Center in Boston, Massachusetts. From ~11,000 positive cultures, we used 4 major specimen types, namely urine, sputum, blood, and pus swab, for evaluation of the model performance. Figure 1 shows the receiver operating characteristic (ROC) curves obtained for bloodstream infection cases upon model building and prediction on a 70:30 split of the data.
We obtained area under the curve (AUC) values of 0.88, 0.92, 0.92, and 0.94 for urine, sputum, blood, and pus swab samples, respectively. Figure 2 shows the comparative performance of our proposed method as well as some off-the-shelf classification algorithms. Conclusions: Highly accurate, patient-specific predictive antibiogram (PSPA) data can aid clinicians significantly in antibiotic recommendation in the ICU, thereby accelerating patient recovery and curbing antimicrobial resistance. Funding: This study was supported by Circle of Life Healthcare Pvt. Ltd. Disclosures: None
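Of the evaluation metrics listed, Cohen's κ is the least self-explanatory: it is the observed classification agreement corrected for agreement expected by chance. A minimal stdlib implementation for a binary sensitive/resistant labeling (the example labels are invented, not drawn from MIMIC-III):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement)
    divided by (1 - chance agreement)."""
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    # Chance agreement: product of marginal label frequencies, summed.
    p_exp = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Toy resistant (R) / sensitive (S) labels for eight cultures.
truth = ["R", "R", "S", "S", "S", "R", "S", "S"]
pred = ["R", "S", "S", "S", "S", "R", "S", "R"]
print(round(cohens_kappa(truth, pred), 3))
```

Here raw accuracy is 0.75, but κ is considerably lower because the class imbalance makes chance agreement high — which is exactly why κ is a useful companion metric for susceptibility prediction.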


2021 ◽  
Vol 28 (1) ◽  
pp. e100251
Author(s):  
Ian Scott ◽  
Stacey Carter ◽  
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate, and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and to identify situations where further refinement and evaluation are required prior to large-scale use.


Author(s):  
Jacqueline A Darrow ◽  
Amanda Calabro ◽  
Sara Gannon ◽  
Amanze Orusakwe ◽  
Rianne Esquivel ◽  
...  

Abstract Background Cerebrospinal fluid (CSF) biomarkers are increasingly used to confirm the accuracy of a clinical diagnosis of mild cognitive impairment or dementia due to Alzheimer disease (AD). Recent evidence suggests that fully automated assays reduce the impact of some preanalytical factors on the variability of these measures. This study evaluated the effect of several preanalytical variables common in clinical settings on the variability of CSF β-amyloid 1–42 (Aβ1–42) concentrations. Methods Aβ1–42 concentrations were measured using the LUMIPULSE G1200 from both freshly collected and frozen CSF samples. Preanalytic variables examined were: (1) patient fasting prior to CSF collection, (2) blood contamination of specimens, and (3) aliquoting specimens sequentially over the course of collection (i.e., CSF gradients). Results Patient fasting did not significantly affect CSF Aβ1–42 levels. In the assessment of gradient effects, Aβ1–42 concentrations remained stable within the first 5 1-mL aliquots; over successive aliquots, however, there was evidence of a gradient effect toward higher concentrations. Aβ1–42 levels were stable when fresh CSF samples were spiked with up to 2.5% blood. However, in frozen CSF samples, even 0.25% blood contamination significantly decreased Aβ1–42 concentrations. Conclusions The preanalytical variables examined here do not have significant effects on Aβ1–42 concentrations if fresh samples are processed within 2 h. However, a gradient effect can be observed in Aβ1–42 concentrations after the first 5 mL of collection, and blood contamination has a significant impact on Aβ1–42 concentrations once specimens have been frozen.


Author(s):  
D. Keith Walters ◽  
Greg W. Burgreen ◽  
Robert L. Hester ◽  
David S. Thompson ◽  
David M. Lavallee ◽  
...  

Computational fluid dynamics (CFD) simulations were performed for unsteady periodic breathing conditions, using large-scale models of the human lung airway. The computational domain included fully coupled representations of the orotracheal region and large conducting zone up to generation four (G4) obtained from patient-specific CT data, and the small conducting zone (to G16) obtained from a stochastically generated airway tree with statistically realistic geometrical characteristics. A reduced-order geometry was used, in which several airway branches in each generation were truncated, and only select flow paths were retained to G16. The inlet and outlet flow boundaries corresponded to the oronasal opening (superior), the inlet/outlet planes in terminal bronchioles (distal), and the unresolved airway boundaries arising from the truncation procedure (intermediate). The cyclic flow was specified according to the predicted ventilation patterns for a healthy adult male at three different activity levels, supplied by the whole-body modeling software HumMod. The CFD simulations were performed using Ansys FLUENT. The mass flow distribution at the distal boundaries was prescribed using a previously documented methodology, in which the percentage of the total flow for each boundary was first determined from a steady-state simulation with an applied flow rate equal to the average during the inhalation phase of the breathing cycle. The distal pressure boundary conditions for the steady-state simulation were set using a stochastic coupling procedure to ensure physiologically realistic flow conditions. 
The results show that: 1) physiologically realistic flow is obtained in the model, in terms of cyclic mass conservation and approximately uniform pressure distribution in the distal airways; 2) the predicted alveolar pressure is in good agreement with previously documented values; and 3) the use of reduced-order geometry modeling allows accurate and efficient simulation of large-scale breathing lung flow, provided care is taken to use a physiologically realistic geometry and to properly address the unsteady boundary conditions.
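The flow-splitting step described above — fixing each distal boundary's share of the total flow from a steady-state solution, then scaling those shares to the instantaneous inlet flow over the breathing cycle — reduces to a simple proportional calculation. A minimal sketch with invented steady-state boundary flows and an illustrative peak inlet flow rate:

```python
def distal_flows(steady_flows, inlet_flow):
    """Prescribe transient distal boundary flows by scaling each
    boundary's steady-state fraction of the total flow to the
    instantaneous inlet flow of the breathing waveform."""
    total = sum(steady_flows)
    fractions = [q / total for q in steady_flows]
    return [f * inlet_flow for f in fractions]

# Steady-state flows (arbitrary units) at four terminal boundaries,
# scaled to an illustrative 0.5 L/s inlet flow at peak inhalation.
steady = [2.0, 1.0, 3.0, 2.0]
print(distal_flows(steady, inlet_flow=0.5))
```

By construction the prescribed distal flows always sum to the inlet flow, which is what enforces the cyclic mass conservation reported in the results.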

