On the intersection between data quality and dynamical modelling of large-scale fMRI signals

2021 ◽  
Author(s):  
Kevin M Aquino ◽  
Ben D. Fulcher ◽  
Stuart Oldham ◽  
Linden M Parkes ◽  
Leonardo Gollo ◽  
...  

Large-scale dynamics of the brain are routinely modelled using systems of nonlinear dynamical equations that describe the evolution of population-level activity, with distinct neural populations often coupled according to an empirically measured structural connection matrix. This modelling approach has been used to generate insights into the neural underpinnings of spontaneous brain dynamics, as recorded with techniques such as resting-state functional MRI (fMRI). In fMRI, researchers have many degrees of freedom in the way that they can process the data, and recent evidence indicates that the choice of preprocessing steps can have a major effect on empirical estimates of functional connectivity. However, the potential influence of such variations on modelling results is seldom considered. Here we show, using three popular whole-brain dynamical models, that different choices during fMRI preprocessing can dramatically affect model fits and the interpretation of findings. Critically, we show that the ability of these models to accurately capture patterns in fMRI dynamics is mostly driven by the degree to which they fit global signals rather than interesting sources of coordinated neural dynamics. We show that widespread deflections can arise from simple global synchronisation. We introduce a simple two-parameter model that captures these fluctuations and performs just as well as more complex, multi-parameter biophysical models. From our combined analyses of data and simulations, we describe benchmarks to evaluate model fit and validity. Although most models are not resilient to denoising, we show that relaxing the approximation of homogeneous neural populations by more explicitly modelling inter-regional effective connectivity can improve model accuracy at the expense of increased model complexity.
Our results suggest that many complex biophysical models may be fitting relatively trivial properties of the data, and underscore a need for tighter integration between data quality assurance and model development.
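The abstract does not spell out the two-parameter model, so the following is a purely illustrative sketch (all parameter values and the simulation setup are hypothetical): with just a global-fluctuation amplitude and a noise amplitude, a single shared signal already produces the kind of widespread functional-connectivity (FC) deflections that more complex models end up fitting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_time = 20, 2000

def mean_fc(global_strength, noise_strength=1.0):
    """Mean off-diagonal correlation among regional signals that share a
    single global fluctuation (two parameters: global and noise amplitude)."""
    g = rng.standard_normal(n_time)                      # shared global signal
    x = global_strength * g + noise_strength * rng.standard_normal((n_regions, n_time))
    c = np.corrcoef(x)
    iu = np.triu_indices(n_regions, k=1)                 # off-diagonal entries
    return c[iu].mean()

# Mean FC rises with global coupling even though no region "talks" to another
for s in (0.0, 0.5, 1.0, 2.0):
    print(f"global strength {s}: mean FC = {mean_fc(s):.2f}")
```

Because every pairwise correlation here comes from the one shared signal, a model that merely tracks global synchronisation can score a deceptively good FC fit, which is the benchmark pitfall the paper highlights.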

2021 ◽  
Vol 11 (2) ◽  
pp. 214
Author(s):  
Anna Kaiser ◽  
Pascal-M. Aggensteiner ◽  
Martin Holtmann ◽  
Andreas Fallgatter ◽  
Marcel Romanos ◽  
...  

Electroencephalography (EEG) represents a widely established method for assessing altered and typically developing brain function. However, systematic studies on EEG data quality, its correlates, and its consequences are scarce. To address this research gap, the current study focused on the percentage of artifact-free segments after standard EEG pre-processing as a data quality index. We analyzed participant-related and methodological influences on this index, and assessed its validity by replicating landmark EEG effects. Further, effects of data quality on spectral power analyses beyond participant-related characteristics were explored. EEG data from a multicenter ADHD cohort (age range 6 to 45 years) and a non-ADHD school-age control group were analyzed (total n = 305). Resting-state data during eyes-open and eyes-closed conditions, and task-related data during a cued Continuous Performance Task (CPT), were collected. After pre-processing, general linear models and stepwise regression models were fitted to the data. We found that EEG data quality was strongly related to demographic characteristics, but not to methodological factors. We were able to replicate maturational, task, and ADHD effects reported in the EEG literature, establishing a link with landmark EEG effects. Furthermore, we showed that poor data quality significantly increases spectral power beyond the effects of maturation and symptom severity. Taken together, the current results indicate that, with careful design and systematic quality control, informative large-scale multicenter trials characterizing neurophysiological mechanisms in neurodevelopmental disorders across the lifespan are feasible. Nevertheless, the results are subject to the reported limitations, and future work will clarify their predictive value.
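The data quality index used here, the percentage of artifact-free segments after pre-processing, can be sketched as follows. The segment length, amplitude threshold, and synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def artifact_free_fraction(eeg, sfreq=250, seg_sec=2.0, amp_thresh=100.0):
    """Fraction of fixed-length segments whose peak-to-peak amplitude stays
    below `amp_thresh` (in microvolts) on every channel.
    eeg: array of shape (n_channels, n_samples)."""
    seg_len = int(seg_sec * sfreq)
    n_segs = eeg.shape[1] // seg_len
    clean = 0
    for i in range(n_segs):
        seg = eeg[:, i * seg_len:(i + 1) * seg_len]
        ptp = seg.max(axis=1) - seg.min(axis=1)   # peak-to-peak per channel
        if np.all(ptp < amp_thresh):
            clean += 1
    return clean / n_segs

# Synthetic example: 4 channels, 60 s of noise, one injected movement artifact
rng = np.random.default_rng(1)
eeg = 10.0 * rng.standard_normal((4, 250 * 60))
eeg[0, 5000:5100] += 500.0                        # simulated artifact burst
print(f"artifact-free: {artifact_free_fraction(eeg):.1%}")
```

Real pipelines use richer criteria (ICA, channel interpolation, frequency-domain checks), but the resulting scalar index behaves as shown: one bad burst lowers the clean-segment fraction.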


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3598
Author(s):  
Sara Russo ◽  
Pasquale Contestabile ◽  
Andrea Bardazzi ◽  
Elisa Leone ◽  
Gregorio Iglesias ◽  
...  

New large-scale laboratory data are presented on a physical model of a spar buoy wind turbine with angular motion of the control surfaces implemented (pitch control). Such actively controlled rotating blades are an essential aspect of studying floating offshore wind structures. Experiments were designed specifically to compare different operational environmental conditions in terms of wave steepness and wind speed. The results discussed here were derived from an analysis of only part of the whole dataset. Consistent with recent small-scale experiments, the data clearly show that the waves contributed most of the model motions and mooring loads. Significant nonlinear behavior was detected for sway, roll and yaw, whereas an increase in the wave period makes the wind speed less influential for surge, heave and pitch. In general, as the steepness increases, the oscillations decrease; however, higher wind speed does not mean greater platform motions. The data also indicate a significant role of blade rotation in the turbine thrust, nacelle dynamic forces and power in six degrees of freedom. Certain wind speed-wave steepness pairs are particularly unfavorable, since the first harmonic of the rotor (coupled to the first wave harmonic) causes the thrust force to be larger than in more energetic sea states. The experiments suggest that including pitch-controlled, variable-speed blades in physical (and numerical) tests on such structures is crucial, and highlight blade pitch as an important design factor.


2021 ◽  
Vol 11 (2) ◽  
pp. 472
Author(s):  
Hyeongmin Cho ◽  
Sangkyun Lee

Machine learning has been proven effective in various application areas, such as object and speech recognition on mobile systems. Since a critical key to machine learning success is the availability of large training datasets, many datasets are being disclosed and published online. From the point of view of a data consumer or manager, measuring data quality is an important first step in the learning process: we need to determine which datasets to use, update, and maintain. However, not many practical ways to measure data quality are available today, especially for large-scale, high-dimensional data such as images and videos. This paper proposes two data quality measures that compute class separability and in-class variability, two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; we suggest that in-class variability is another important data quality factor. We provide efficient algorithms to compute our quality measures, based on random projections and bootstrapping, with statistical benefits on large-scale, high-dimensional data. In experiments, we show that our measures are consistent with classical measures on small-scale data and can be computed much more efficiently on large-scale, high-dimensional datasets.
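As an illustration of the general idea rather than the paper's actual algorithms, class separability can be estimated cheaply on high-dimensional data by combining a random projection (to shrink the dimension) with bootstrap resampling (to avoid all-pairs distance computations on the full dataset). Every parameter choice below is a hypothetical default.

```python
import numpy as np

rng = np.random.default_rng(0)

def separability(X, y, n_proj=32, n_boot=20, sample=200):
    """Bootstrap estimate of class separability after random projection:
    ratio of mean between-class to mean within-class distance. Values well
    above 1 indicate separable classes; the within-class term doubles as a
    simple in-class variability measure."""
    n, d = X.shape
    P = rng.standard_normal((d, n_proj)) / np.sqrt(n_proj)   # random projection
    Z = X @ P
    ratios = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=min(n, sample))        # bootstrap resample
        Zi, yi = Z[idx], y[idx]
        D = np.linalg.norm(Zi[:, None] - Zi[None, :], axis=-1)
        same = (yi[:, None] == yi[None, :]) & ~np.eye(len(yi), dtype=bool)
        diff = yi[:, None] != yi[None, :]
        ratios.append(D[diff].mean() / D[same].mean())
    return float(np.mean(ratios))

# Well-separated vs. heavily overlapping Gaussian classes in 50 dimensions
y = np.array([0] * 100 + [1] * 100)
X_sep = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(5, 1, (100, 50))])
X_ovl = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(0.5, 1, (100, 50))])
print(separability(X_sep, y), separability(X_ovl, y))
```

The projection keeps pairwise distances approximately intact (Johnson-Lindenstrauss), so the cost per bootstrap round depends on the reduced dimension and sample size, not on the original data dimensionality.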


SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A86-A86
Author(s):  
Michael Grandner ◽  
Naghmeh Rezaei

Introduction: The COVID-19 pandemic has resulted in societal-level changes to sleep and other behavioral patterns. Objective, longitudinal data would allow for a greater understanding of sleep-related changes at the population level.
Methods: N = 163,524 deidentified active Fitbit users from 6 major US cities contributed data, representing areas particularly hard-hit by the pandemic (Chicago, Houston, Los Angeles, New York, San Francisco, and Miami). Sleep variables extracted include nightly and weekly mean sleep duration and bedtime, variability (standard deviation) of sleep duration and bedtime, and estimated arousals and sleep stages. Deviations from similar timeframes in 2019 were examined. All analyses were performed in Python.
Results: These data detail how sleep duration and timing changed longitudinally, stratified by age group and gender, relative to previous years' data. Overall, 2020 represented a significant departure for all age groups and both men and women (P<0.00001). Mean sleep duration increased in nearly all groups (P<0.00001) by 5-11 minutes, compared to a mean decrease of 5-8 minutes seen over the same period in 2019. Categorically, sleep duration increased for some and decreased for others, but was more often extended than restricted. Sleep phase shifted later for nearly all groups (P<0.00001). Categorically, bedtime was delayed for some and advanced for others, though more often delayed than advanced. Duration and bedtime variability decreased, owing largely to decreased weekday-weekend differences. WASO increased, REM% increased, and Deep% decreased. Additional analyses show stratified, longitudinal changes to the mean and variability distributions of sleep duration and timing by month, as well as effect sizes and correlations to other outcomes.
Conclusion: The pandemic was associated with increased sleep duration on average, in contrast to 2019 when sleep decreased. The increase was most profound among younger adults, especially women. The youngest adults also experienced the greatest bedtime delay, in line with extensive school-start-times and chronotype data. When given the opportunity, the difference between weekdays and weekends became smaller, with occupational implications. Sleep staging data showed that slightly extending sleep minimally impacted deep sleep but resulted in a proportional increase in REM. Wakefulness during the night also increased, suggesting increased arousal despite greater sleep duration.
Support (if any): This research was supported by Fitbit, Inc.
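The duration and bedtime variability measures described above (weekly mean, standard deviation, and the weekday-weekend gap) can be sketched with pandas. The data below are synthetic and illustrative, not the Fitbit dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2020-03-01", periods=90, freq="D")

# Synthetic nightly sleep durations (minutes): longer lie-ins on weekends
is_weekend = dates.dayofweek >= 5
duration = rng.normal(420, 20, len(dates)) + np.where(is_weekend, 45, 0)
df = pd.DataFrame({"date": dates, "duration": duration, "weekend": is_weekend})

# Weekly mean and variability (standard deviation) of sleep duration
weekly = df.set_index("date")["duration"].resample("W").agg(["mean", "std"])

# Weekday-weekend gap, a main driver of duration variability
gap = df.groupby("weekend")["duration"].mean().diff().iloc[-1]
print(weekly.head())
print(f"weekend extension: {gap:.0f} min")
```

Shrinking that weekday-weekend gap, as reported during the pandemic, directly lowers the weekly standard deviation even when nightly noise is unchanged.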


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3425
Author(s):  
Andreas Brotzer ◽  
Felix Bernauer ◽  
Karl Ulrich Schreiber ◽  
Joachim Wassermann ◽  
Heiner Igel

In seismology, recent decades have witnessed an increased effort to observe all 12 degrees of freedom of seismic ground motion by complementing translational ground-motion observations with measurements of strain and rotational motions, aiming at enhanced probing and understanding of Earth and other planetary bodies. The evolution of optical instrumentation, in particular large-scale ring laser installations such as G-ring and ROMY (ROtational Motion in seismologY), and their geoscientific application have contributed significantly to the emergence of this scientific field. The currently most advanced large-scale ring laser array is ROMY, which is unprecedented in scale and design. As a heterolithic structure, ROMY's ring laser components are subject to optical frequency drifts. Such Sagnac interferometers require new considerations and approaches concerning data acquisition, processing and quality assessment compared to conventional, mechanical instrumentation. We present an automated approach to assess the data quality and performance of a ring laser based on characteristics of the interferometric Sagnac signal. The developed scheme is applied to ROMY data to detect compromised operation states and assign quality flags. When ROMY's database becomes publicly accessible, this assessment will be employed to provide a quality-control feature for data requests.
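The abstract does not describe the assessment scheme in detail, so here is a hedged sketch of one plausible ingredient: flagging windows whose dominant beat frequency drifts from the nominal Sagnac frequency. All frequencies, thresholds, and the synthetic signal are illustrative.

```python
import numpy as np

def quality_flags(signal, sfreq, f_nominal, win_sec=1.0, tol=0.5):
    """Flag windows of a Sagnac beat signal whose dominant frequency deviates
    from the nominal value by more than `tol` Hz -- an illustrative proxy for
    detecting a compromised operation state."""
    win = int(win_sec * sfreq)
    flags = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft(seg * np.hanning(win)))   # windowed spectrum
        freqs = np.fft.rfftfreq(win, d=1.0 / sfreq)
        f_peak = freqs[spec.argmax()]
        flags.append(abs(f_peak - f_nominal) <= tol)
    return np.array(flags)   # True = nominal operation

# Synthetic beat signal at 348 Hz with a 3 Hz drift between t = 4 s and 6 s
sfreq, f0 = 2000, 348.0
t = np.arange(0, 10, 1 / sfreq)
f_inst = np.where((t > 4) & (t < 6), f0 + 3.0, f0)          # drifted segment
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / sfreq)      # phase-continuous
flags = quality_flags(signal, sfreq, f0)
print(f"{flags.mean():.0%} of windows flagged as nominal")
```

A production scheme would track more signal characteristics (amplitude stability, mode jumps, backscatter signatures), but the windowed spectral check conveys the principle of turning raw interferometric output into per-window quality flags.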


2003 ◽  
Vol 125 (4) ◽  
pp. 234-241 ◽  
Author(s):  
Vincent Y. Blouin ◽  
Michael M. Bernitsas ◽  
Denby Morrison

In structural redesign (inverse design), selection of the number and type of performance constraints is a major challenge. This issue is directly related to the computational effort and, most importantly, to the success of the optimization solver in finding a solution. These issues are the focus of this paper, which provides and discusses techniques that can help designers formulate a well-posed integrated complex redesign problem. LargE Admissible Perturbations (LEAP) is a general methodology which solves redesign problems of complex structures with, among others, free vibration, static deformation, and forced response amplitude constraints. The existing algorithm, referred to as the Incremental Method, is improved in this paper for problems with static and forced response amplitude constraints. The new algorithm, referred to as the Direct Method, offers a comparable level of accuracy at lower computational cost and is robust in solving large-scale redesign problems in the presence of damping, nonstructural mass, and fluid-structure interaction effects. Common redesign problems include several natural frequency constraints and forced response amplitude constraints at various frequencies of excitation. Several locations on the structure and several degrees of freedom can be constrained simultaneously. The designer must exercise judgment and physical intuition to limit the number of constraints and, consequently, the computational time. Strategies and guidelines are discussed, and the techniques are applied to a 2,694-degree-of-freedom offshore tower.


2011 ◽  
Vol 82 ◽  
pp. 722-727 ◽  
Author(s):  
Kristian Schellenberg ◽  
Norimitsu Kishi ◽  
Hisashi Kon-No

A multi-degree-of-freedom system composed of three masses and three springs was presented in 2008 for analyzing rockfall impacts on protective structures covered by a cushion layer. The model was then used for a blind prediction of a large-scale test carried out in Sapporo, Japan, in November 2009. The test results showed substantial deviations from the blind predictions, which led to a deeper evaluation of the model input parameters and revealed a significant influence of the cushion-layer modeling properties on the overall results. The cushion properties also include assumptions about the loading geometry, and defining these parameters can be challenging. This paper introduces the test setup and the parameters selected in the proposed model for the blind prediction. After comparison with the test results, adjustments to the input parameters were evaluated in order to match the measurements. Conclusions are drawn for the application of the model as well as for further model improvements.
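A minimal sketch of such a three-mass, three-spring impact model is below. The masses, stiffnesses, and impact speed are illustrative placeholders, not the calibrated values from the study, and the springs stay linear for simplicity, whereas a real cushion layer behaves nonlinearly.

```python
import numpy as np

# Three lumped masses in series: falling block, cushion layer, structure,
# connected by three springs; semi-implicit Euler integration.
m = np.array([1000.0, 500.0, 2000.0])        # kg: block, cushion, slab
k = np.array([5e6, 1e6, 8e6])                # N/m: contact, cushion, support
x = np.zeros(3)                              # displacements
v = np.array([-10.0, 0.0, 0.0])              # block impacts at 10 m/s
dt, n_steps = 1e-5, 20000                    # 0.2 s of simulated time

peak_force = 0.0
for _ in range(n_steps):
    f01 = k[0] * (x[0] - x[1])               # spring between block and cushion
    f12 = k[1] * (x[1] - x[2])               # spring between cushion and slab
    f2g = k[2] * x[2]                        # spring between slab and ground
    a = np.array([-f01, f01 - f12, f12 - f2g]) / m
    v += a * dt                              # update velocity first (stable)
    x += v * dt
    peak_force = max(peak_force, abs(f2g))   # force transmitted to structure

print(f"peak force transmitted to structure: {peak_force / 1e3:.0f} kN")
```

The point made in the abstract shows up directly here: the cushion parameters (middle mass and spring) sit between impact and structure, so small changes to them reshape the transmitted peak force, which is why their calibration dominated the blind-prediction discrepancy.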


2018 ◽  
Vol 178 ◽  
pp. 02015
Author(s):  
Chong Qi

In this contribution I present systematic calculations of the spectroscopy and electromagnetic transition properties of intermediate-mass and heavy nuclei around 100Sn and 208Pb, employing the large-scale configuration-interaction shell-model approach with realistic interactions. These are among the longest isotopic chains that can be studied with the nuclear shell model. I show that the yrast spectra of Te isotopes exhibit a vibrational-like, equally spaced pattern, while the few known E2 transitions show rotational-like behaviour. Such abnormal collective behaviour cannot be reproduced by standard collective models and provides an excellent setting for studying the competition between single-particle and various collective degrees of freedom. Moreover, the calculated B(E2) values for neutron-deficient and heavier Te isotopes show contrasting behaviours along the yrast line, which may be related to the enhanced neutron-proton correlation when approaching N = 50. The deviations between theory and experiment concerning the energies and E2 transition properties of low-lying 0+ and 2+ excited states and isomeric states in these nuclei may provide a constraint on our understanding of nuclear deformation and intruder configurations in that region.

