On co-activation pattern analysis and non-stationarity of resting brain activity

2021 ◽  
Author(s):  
Teppei Matsui ◽  
Trung Quang Pham ◽  
Koji Jimura ◽  
Junichi Chikazoe

Abstract The non-stationarity of resting-state brain activity has received increasing attention in recent years. Functional connectivity (FC) analysis with short sliding windows and coactivation pattern (CAP) analysis are two widely used methods for assessing the non-stationary characteristics of brain activity observed with functional magnetic resonance imaging (fMRI). However, whether these techniques adequately capture non-stationarity remains to be verified. In this study, we found that the results of CAP analysis were similar for real fMRI data and for simulated stationary data with matched covariance structure and spectral content. We also found that, for both the real and the simulated data, CAPs clustered into spatially heterogeneous modules. Moreover, for each module in the real data, a spatially similar module was found in the simulated data. These results suggest that care is needed when interpreting the output of CAP analysis, as it does not necessarily reflect non-stationarity or a mixture of states in resting brain activity.
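For concreteness, below is a minimal sketch of the two ingredients this comparison rests on: generating stationary surrogate data with matched covariance structure and spectral content (here via phase randomization with a common phase sequence, one standard construction; the abstract does not specify the paper's exact surrogate method), and a basic seed-based CAP analysis (k-means clustering of supra-threshold frames). Array shapes, the threshold, and the cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def stationary_surrogate(bold, seed=0):
    """Phase-randomize every voxel's time course with a shared random
    phase sequence, preserving power spectra and cross-covariances."""
    rng = np.random.default_rng(seed)
    t, v = bold.shape                       # time points x voxels
    spec = np.fft.rfft(bold, axis=0)
    phases = rng.uniform(0, 2 * np.pi, size=spec.shape[0])
    phases[0] = 0.0                         # keep the DC bin real
    if t % 2 == 0:
        phases[-1] = 0.0                    # keep the Nyquist bin real
    spec *= np.exp(1j * phases)[:, None]    # same phases for all voxels
    return np.fft.irfft(spec, n=t, axis=0)

def cap_analysis(bold, seed_ts, threshold=1.0, n_caps=8):
    """Cluster the frames where the seed signal exceeds a z-threshold;
    the cluster centroids are the co-activation patterns (CAPs)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    frames = bold[z > threshold]
    km = KMeans(n_clusters=n_caps, n_init=10).fit(frames)
    return km.cluster_centers_
```

Running `cap_analysis` on both a real scan and its `stationary_surrogate` and comparing the resulting centroids mirrors the comparison described in the abstract.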

2020 ◽  
Author(s):  
Fanny Mollandin ◽  
Andrea Rau ◽  
Pascal Croiseau

Abstract Technological advances and decreasing costs have led to the rise of increasingly dense genotyping data, making the identification of potential causal markers feasible. Custom genotyping chips, which combine medium-density genotypes with a custom genotype panel, can capitalize on these candidates to potentially yield improved accuracy and interpretability in genomic prediction. A particularly promising model to this end is BayesR, which divides markers into four effect size classes. BayesR has been shown to yield accurate predictions and promise for quantitative trait loci (QTL) mapping in real data applications, but an extensive benchmarking on simulated data is currently lacking. Based on a set of real genotypes, we generated simulated data under a variety of genetic architectures and phenotype heritabilities, and we evaluated the impact of excluding or including causal markers among the genotypes. We define several statistical criteria for QTL mapping, including several based on sliding windows to account for linkage disequilibrium, and we compare and contrast these statistics and their ability to accurately prioritize known causal markers. Overall, we confirm the strong predictive performance of BayesR for moderately to highly heritable traits, particularly for 50k custom data. In cases of low heritability, or of weak linkage disequilibrium between the 50k genotypes and the causal marker, QTL mapping remains a challenge regardless of the criterion used. BayesR is a promising approach for simultaneously obtaining accurate predictions and interpretable classifications of SNPs into effect size classes. We illustrate the performance of BayesR in a variety of simulation scenarios and compare the advantages and limitations of each.
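For readers unfamiliar with BayesR's four-class prior, the sketch below simulates phenotypes under such an architecture, assuming the usual parameterization in which marker effects are drawn from a mixture of a point mass at zero and three normal components with variances of 10^-4, 10^-3, and 10^-2 times the genetic variance. The genotype frequencies, mixing proportions, and heritability are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2_000, 10_000                  # individuals x markers (illustrative)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
X -= X.mean(axis=0)                   # center genotypes

sigma2_g = 1.0                        # genetic variance
h2 = 0.5                              # target heritability (illustrative)

# BayesR-style mixture: a null class plus three normal effect classes
props = np.array([0.95, 0.03, 0.015, 0.005])       # mixing proportions
variances = np.array([0.0, 1e-4, 1e-3, 1e-2]) * sigma2_g

classes = rng.choice(4, size=p, p=props)           # class of each marker
beta = rng.normal(0.0, np.sqrt(variances[classes]))  # zero scale -> zero effect

g = X @ beta                                       # genetic values
sigma2_e = g.var() * (1 - h2) / h2                 # residual variance for h2
y = g + rng.normal(0.0, np.sqrt(sigma2_e), n)      # simulated phenotypes
```

A sliding-window QTL criterion of the kind described above would then score each window of adjacent markers, for example by the posterior probability that it contains at least one non-null marker.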


2020 ◽  
Vol 12 (5) ◽  
pp. 771 ◽  
Author(s):  
Miguel Angel Ortíz-Barrios ◽  
Ian Cleland ◽  
Chris Nugent ◽  
Pablo Pancardo ◽  
Eric Järpe ◽  
...  

Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of the ADLs performed by these people. As real-life scenarios cover a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of suitable and representative data, whose collection is habitually expensive and resource-intensive. Simulation tools are an alternative for tackling these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data that represents the real SEPA. Hence, this paper proposes the use of Poisson regression modelling to transform simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA from simulated data. The outcomes revealed that real SEPA can be better approximated (R²pred = 92.72%) if synthetic data are post-processed through Poisson regression incorporating dummy variables.
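As a concrete illustration of this post-processing step, the sketch below fits a Poisson regression that maps simulated sensor-event counts to real counts, with dummy (indicator) variables for the activity type. The column names, the toy counts, and the choice of the activity label as the dummy factor are assumptions; the abstract does not list the exact covariates used.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data: real_sepa = observed event counts, sim_sepa =
# counts produced by the simulator, activity = categorical ADL label
df = pd.DataFrame({
    "real_sepa": [12, 8, 20, 15, 7, 22],
    "sim_sepa":  [10, 9, 17, 13, 6, 25],
    "activity":  ["cook", "wash", "cook", "sleep", "wash", "cook"],
})

# Poisson GLM: real counts modelled from simulated counts plus activity
# dummies; C() expands the categorical label into indicator variables
model = smf.glm("real_sepa ~ sim_sepa + C(activity)",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())

corrected = model.predict(df)   # post-processed approximation of real SEPA
```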


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Pushpendra Kumar ◽  
Vedat Suat Erturk ◽  
Marina Murillo-Arcila ◽  
Ramashis Banerjee ◽  
A. Manickam

Abstract In this study, our aim is to explore the dynamics of COVID-19 (2019-nCoV) in Argentina, using parameter values based on real data for this virus from March 3, 2020 to March 29, 2021, a range covering more than one complete year. We propose an Atangana–Baleanu type fractional-order model and simulate it using a predictor–corrector (P-C) method. First, we introduce the biological nature of this virus in a theoretical way and then formulate a mathematical model to define its dynamics. We use a well-known, effective optimization scheme based on the trust-region-reflective (TRR) method to perform the model calibration. We plot the real cases of COVID-19, compare our integer-order model with the simulated data, and calculate the basic reproduction number. Concerning the fractional-order simulations, we first prove the existence and uniqueness of the solution and then write the solution along with the stability of the given P-C method. A number of graphs at various fractional-order values are simulated to predict the future dynamics of the virus in Argentina, which is the main contribution of this paper.
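The fractional P-C scheme for the Atangana–Baleanu derivative is fairly technical; the sketch below shows only its integer-order analogue, an explicit-Euler predictor followed by a trapezoidal corrector, applied to a toy SIR system, to convey the structure of predictor–corrector stepping. The SIR parameters and step size are illustrative, not the calibrated values from the paper.

```python
import numpy as np

def sir_rhs(t, y, beta=0.3, gamma=0.1):
    """Right-hand side of a toy SIR model (illustrative parameters)."""
    s, i, r = y
    return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])

def predictor_corrector(f, y0, t_end, h=0.1):
    """One-step P-C scheme: Euler predictor, trapezoidal corrector
    (the integer-order analogue of the fractional Adams-type method)."""
    ts = np.arange(0.0, t_end + h, h)
    ys = [np.asarray(y0, float)]
    for t in ts[:-1]:
        y = ys[-1]
        y_pred = y + h * f(t, y)                           # predictor
        y_corr = y + h / 2 * (f(t, y) + f(t + h, y_pred))  # corrector
        ys.append(y_corr)
    return ts, np.array(ys)

ts, ys = predictor_corrector(sir_rhs, [0.99, 0.01, 0.0], t_end=160.0)
```

In the fractional setting the corrector step is replaced by a weighted sum over the whole history of the solution, which is what makes memory effects possible in such models.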


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Dong Xu ◽  
Lei Sun ◽  
Jianshu Luo ◽  
Zhihui Liu

A new denoising algorithm is proposed according to the characteristics of hyperspectral remote sensing images (HRSI) in the curvelet domain. First, each band of the HRSI is transformed into the curvelet domain, yielding sets of subband images for the different wavelengths. The detail subband images at the same scale and in the same direction across the different wavelengths are then stacked to form new 3-D datacubes in the curvelet domain. An analysis of the characteristics of these 3-D datacubes shows that each new datacube has strong spectral correlation. Finally, owing to this strong spectral correlation, multiple linear regression is introduced to process the new 3-D datacubes in the curvelet domain. Experiments on both simulated and real data were performed. The simulated-data results show that the proposed algorithm is superior to the compared reference algorithms in terms of SNR; per-band MSE and MSSIM further confirm this superiority. The real-data results show that the proposed algorithm effectively removes common spotty noise and strip noise while preserving fine features during the denoising process.
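The key step, exploiting strong spectral correlation via multiple linear regression, can be sketched as follows: each band of a datacube is regressed on its neighbouring bands and replaced by the fitted values, so that spectrally uncorrelated noise falls into the residual. The curvelet transform itself is omitted here (it requires a dedicated library such as CurveLab), and the two-neighbour window is an assumption.

```python
import numpy as np

def mlr_denoise(cube, n_neighbors=2):
    """Denoise each band of a (bands, pixels) datacube by multiple
    linear regression on its neighbouring bands; the fitted values
    keep the spectrally correlated signal, the residual is noise."""
    b, npix = cube.shape
    out = np.empty_like(cube)
    for k in range(b):
        nbrs = [j for j in range(max(0, k - n_neighbors),
                                 min(b, k + n_neighbors + 1)) if j != k]
        # Design matrix: neighbouring bands plus an intercept column
        A = np.column_stack([cube[j] for j in nbrs] + [np.ones(npix)])
        coef, *_ = np.linalg.lstsq(A, cube[k], rcond=None)
        out[k] = A @ coef
    return out
```

In the paper's pipeline this regression is applied to the stacked curvelet detail subbands rather than to the raw bands shown here.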


2015 ◽  
Vol 26 (12) ◽  
pp. 1550137 ◽  
Author(s):  
A. Q. Pei ◽  
J. Wang

A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of the return intervals of the model for different parameter settings, and comparatively study these fluctuation patterns against those of real financial data for different threshold values. The empirical research in this work exhibits multifractal features for the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical, and highly clustered behavior, with power-law tails in the degree distributions.
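The visibility graph construction used here maps a time series to a network by connecting two observations whenever the straight line between them passes above every intermediate value. The sketch below is a direct O(n²) implementation of this natural visibility criterion; the input series is illustrative.

```python
import numpy as np

def visibility_graph(x):
    """Natural visibility graph: nodes are time points, with an edge
    (i, j) whenever every intermediate point lies strictly below the
    straight line from (i, x[i]) to (j, x[j])."""
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = x[j] + (x[i] - x[j]) * (j - ks) / (j - i)
            if np.all(x[ks] < line):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(0)
series = rng.normal(size=200)                    # illustrative series
edges = visibility_graph(series)
degrees = np.bincount(np.ravel(edges), minlength=len(series))
# A power-law tail in `degrees` is the signature discussed in the text.
```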


The possibility of fabricating a digital voter that will detect and eliminate a faulty sensor in an array of identical biosensors is examined. Eleven statistical outlier-detection procedures are applied to the responses of an array of antimony–antimony oxide penicillinase electrodes and to an extensive computer simulation of small-array responses. A Dixon excess-over-range test and a maximum normalized residual test are shown to be safe outlier-detection procedures that will detect a faulty sensor and offer an algorithm that may be economically implemented in hardware. The Iglewicz & Martinez test, which can be implemented more conveniently in software, is shown to be very efficient when applied to the real data. However, its poorer performance on the simulated data suggests that further examination of this test is required.
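A minimal sketch of the two "safe" procedures follows, assuming the simplest form of each: the Dixon excess-over-range statistic (gap between the suspect value and its nearest neighbour, divided by the range) and the maximum normalized residual (Grubbs-type) statistic. The critical value below is a placeholder; in practice it depends on array size and significance level and should be taken from published tables.

```python
import numpy as np

def dixon_q(values):
    """Dixon excess-over-range statistic for the most extreme value."""
    x = np.sort(np.asarray(values, float))
    q_low = (x[1] - x[0]) / (x[-1] - x[0])      # gap at the low end
    q_high = (x[-1] - x[-2]) / (x[-1] - x[0])   # gap at the high end
    return max(q_low, q_high)

def max_normalized_residual(values):
    """Grubbs-type statistic: largest |x - mean| in units of std dev."""
    x = np.asarray(values, float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

readings = [0.98, 1.02, 1.01, 0.99, 1.55]   # one suspect sensor response
flag = dixon_q(readings) > 0.71             # placeholder critical value
```

Both statistics reduce to a comparison against a single tabulated threshold, which is what makes them attractive for a hardware voter.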


2020 ◽  
Vol 46 (Supplement_1) ◽  
pp. S34-S35
Author(s):  
Karen S Ambrosen ◽  
Martin W Skjerbæk ◽  
Jonathan Foldager ◽  
Martin C Axelsen ◽  
Nikolaj Bak ◽  
...  

Abstract
Background: The treatment response of patients with schizophrenia is heterogeneous, and markers of clinical response are missing. Studies using machine learning approaches have provided encouraging results regarding prediction of outcomes, but replicability has been challenging. In the present study, we present a novel methodological framework for applying machine learning to clinical data. Here, algorithm selection and other methodological choices were based on model performance on a simulated dataset, to minimize bias and avoid overfitting. We subsequently applied the best-performing machine learning algorithm to a rich, multimodal neuropsychiatric dataset. We aimed to 1) classify patients from controls, 2) predict short- and long-term clinical response in a sample of initially antipsychotic-naïve first-episode schizophrenia patients, and 3) validate our methodological framework.
Methods: We included data from 138 antipsychotic-naïve, first-episode schizophrenia patients who had undergone assessments of psychopathology, cognition, electrophysiology, and structural magnetic resonance imaging (MRI). Perinatal data and long-term outcome measures were obtained from Danish registers. Baseline diagnostic classification algorithms also included data from 151 matched healthy controls. Short-term treatment response was defined as the change in psychopathology after the initial antipsychotic treatment period. Long-term treatment response (4–16 years) was based on data from Danish registers. The simulated dataset was generated to resemble the real data with respect to dimensionality, multimodality, and pattern of missing data; noise levels were tunable to enable approximation of the signal-to-noise ratio in the real data. Robustness of the results was ensured by running two parallel, fundamentally different machine learning pipelines, a 'single algorithm approach' and an 'ensemble approach'. Both pipelines included nested cross-validation, missing-data imputation, and late integration.
Results: We significantly classified patients from controls with a balanced accuracy of 64.2% (95% CI = [51.7, 76.7]) for the single algorithm approach and 63.1% (95% CI = [50.4, 75.8]) for the ensemble approach. Post hoc analyses showed that the classification was primarily driven by the cognitive data. Neither approach predicted short- or long-term clinical response. To validate our methodological framework based on simulated data, we selected the best, a medium, and the most poorly performing algorithm on the simulated data and applied them to the real data; the ranking of the algorithms was preserved in the real data.
Discussion: Our rigorous modelling framework incorporating simulated data and parallel pipelines discriminated patients from controls, but our extensive, multimodal neuropsychiatric data from antipsychotic-naïve schizophrenia patients were not predictive of clinical outcome. Nevertheless, our novel approach holds promise as an important step toward obtaining reliable, unbiased results with modest sample sizes when independent replication samples are not available.
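The nested cross-validation at the core of both pipelines can be sketched as follows: an inner loop tunes hyperparameters while an outer loop estimates generalization, so no test fold ever influences model selection. The estimator, hyperparameter grid, and imputation strategy below are illustrative choices, not the authors' exact configuration; the sample size 289 matches the 138 patients plus 151 controls described above.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

# Illustrative data: subjects x features with missing values, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(289, 50))
X[rng.random(X.shape) < 0.1] = np.nan        # simulate missingness
y = rng.integers(0, 2, size=289)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # missing-data imputation
    ("clf", SVC()),
])

inner = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=5)   # inner loop
scores = cross_val_score(inner, X, y, cv=5,                  # outer loop
                         scoring="balanced_accuracy")
print(scores.mean())
```

Because the imputer sits inside the pipeline, it is refit on each training fold, avoiding leakage from test subjects into the imputation.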


Author(s):  
Cory Koedel ◽  
Rebecca Leatherman ◽  
Eric Parsons

Abstract It is widely known that standardized tests are noisy measures of student learning, but value-added models (VAMs) rarely account for test measurement error (TME). We incorporate information about TME directly into VAMs, focusing on TME that derives from the testing instrument itself. Our analysis is divided into two parts: one based on simulated data and the other on administrative micro data from Missouri. In the simulations we control the data-generating process, which ensures that we obtain accurate TME metrics; in the real-data portion of our analysis we use estimates of TME provided by a major test publisher. In both the simulated and real-data analyses, we find that inference from VAMs is improved by making simple TME adjustments to the models. The improvement is larger in the simulations, but even in the real-data analysis it is on the order of what one could expect if teacher-level sample sizes were increased by 11 to 17 percent.
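One simple way to fold known TME into a value-added regression, shown below purely as an illustration (the paper's actual VAM specification is richer), is an errors-in-variables correction: when the prior-year score enters as a predictor, the naive slope is attenuated by the score's reliability, var(true) / (var(true) + var(TME)), and can be disattenuated when the TME variance is known. All quantities here come from a simulated data-generating process, mirroring the simulation half of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
true_prior = rng.normal(size=n)               # latent prior achievement
tme_var = 0.25                                # known TME variance (illustrative)
obs_prior = true_prior + rng.normal(0, np.sqrt(tme_var), n)
current = 0.8 * true_prior + rng.normal(0, 0.5, n)  # true slope is 0.8

# Naive OLS slope on the observed score is attenuated toward zero
naive = np.cov(obs_prior, current)[0, 1] / obs_prior.var(ddof=1)

# Errors-in-variables correction; true_prior.var() is known here only
# because we control the simulated data-generating process
reliability = true_prior.var(ddof=1) / (true_prior.var(ddof=1) + tme_var)
corrected = naive / reliability
print(naive, corrected)    # corrected slope recovers roughly 0.8
```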


