Parameter Interactions
Recently Published Documents

TOTAL DOCUMENTS: 54 (five years: 10)
H-INDEX: 12 (five years: 2)

SLEEP ◽  
2022 ◽  
Author(s):  
Noor Adra ◽  
Haoqi Sun ◽  
Wolfgang Ganglberger ◽  
Elissa M Ye ◽  
Lisa W Dümmer ◽  
...  

Abstract Study Objectives Alterations in sleep spindles have been linked to cognitive impairment. This finding has contributed to a growing interest in identifying sleep-based biomarkers of cognition and neurodegeneration, including sleep spindles. However, flexibility surrounding spindle definitions and algorithm parameter settings presents a methodological challenge. The aim of this study was to characterize how spindle detection parameter settings influence the association between spindle features and cognition and to identify the parameters most strongly associated with cognition. Methods Adult patients (n=167, 49 ± 18 years) completed the NIH Toolbox Cognition Battery after undergoing overnight diagnostic polysomnography for suspected sleep disorders. We explored 1000 combinations of seven parameters in Luna, an open-source spindle detector, and used four features of the detected spindles (amplitude, density, duration, and peak frequency) to fit linear multiple regression models predicting cognitive scores. Results With the best-performing parameter settings, spindle features (amplitude, density, duration, and mean frequency) predicted raw fluid cognition scores (r=0.503) and age-adjusted fluid cognition scores (r=0.315). Fast spindle features generally performed better than slow spindle features. Spindle features weakly predicted total cognition and poorly predicted crystallized cognition regardless of parameter settings. Conclusion Our exploration of spindle detection parameters identified optimal parameters for studies of fluid cognition and revealed the role of parameter interactions for both slow and fast spindles. Our findings support sleep spindles as a sleep-based biomarker of fluid cognition.
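The kind of parameter sweep described above can be outlined in a few lines: iterate over detector parameter combinations, extract per-subject spindle features, fit a linear regression against cognition scores, and keep the best-performing setting. The sketch below assumes a hypothetical detect_spindle_features() placeholder standing in for a detector such as Luna; the parameter names, values, and placeholder are illustrative, not the authors' code.

```python
# Sketch of a spindle-detector parameter sweep paired with a regression on
# cognition scores. detect_spindle_features() is a hypothetical placeholder,
# NOT the authors' code or the Luna API.
from itertools import product
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

def detect_spindle_features(subject_eeg, params):
    """Placeholder: replace with a real detector call. Should return
    [amplitude, density, duration, peak_frequency] for one subject."""
    return rng.normal(size=4)

param_grid = {                       # illustrative settings only
    "target_freq_hz": [11.0, 13.5],  # slow vs fast spindles
    "threshold":      [2.0, 4.5, 6.0],
    "min_dur_s":      [0.3, 0.5],
    "max_dur_s":      [2.0, 3.0],
}

def sweep(eeg_recordings, cognition_scores):
    best = (-np.inf, None)
    y = np.asarray(cognition_scores, dtype=float)
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid, combo))
        X = np.vstack([detect_spindle_features(e, params) for e in eeg_recordings])
        # out-of-sample predictions so well-fitting parameter sets are not
        # rewarded for overfitting
        y_hat = cross_val_predict(LinearRegression(), X, y, cv=5)
        r = np.corrcoef(y, y_hat)[0, 1]
        if r > best[0]:
            best = (r, params)
    return best  # (correlation with cognition, best parameter combination)
```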


Mining ◽  
2021 ◽  
Vol 1 (3) ◽  
pp. 279-296
Author(s):  
Marc Elmouttie ◽  
Jane Hodgkinson ◽  
Peter Dean

Geotechnical complexity in mining often leads to geotechnical uncertainty, which impacts both safety and productivity. However, as mining progresses, particularly in strip mining operations, a body of knowledge is acquired that reduces this uncertainty and can potentially be used by mining engineers to improve the prediction of future mining conditions. In this paper, we describe a new method to support this approach based on modelling and neural networks. A high-level causal model of the mining operations was constructed from historical data for a number of parameters, accounting for parameter interactions including hydrogeological conditions, weather, and prior operations. An artificial neural network was then trained on this historical data, including production data. As mining proceeds, the network can be used to predict future production from presently observed mining conditions, and its predictions can be compared with those of the causal model. Agreement between the two indicates confidence that the neural network predictions are properly supported by the newly available data. The efficacy of this approach is demonstrated using semi-synthetic data based on an actual mine.
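As a rough illustration of this workflow (not the authors' implementation), the sketch below trains a small neural network on hypothetical historical condition parameters and compares its forecast for newly observed conditions with a separate model prediction; the column names and synthetic data are assumptions.

```python
# Minimal sketch: a neural network trained on historical condition parameters
# to predict production, then compared against a causal-model forecast.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical historical records: [groundwater_level, rainfall, prior_strip_advance]
X_hist = rng.normal(size=(500, 3))
y_hist = 10 + X_hist @ np.array([1.5, -2.0, 0.8]) + rng.normal(scale=0.5, size=500)

net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
net.fit(X_hist, y_hist)

# as mining proceeds, compare the network's prediction for newly observed
# conditions with the causal model's forecast; close agreement suggests the
# network's prediction is supported by the new data
x_new = rng.normal(size=(1, 3))
nn_forecast = net.predict(x_new)[0]
model_forecast = 10.0  # placeholder for the causal-model prediction
print(abs(nn_forecast - model_forecast))
```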


2021 ◽  
Vol 70 (3) ◽  
pp. 182-191
Author(s):  
Henning Schaak ◽  
Oliver Mußhoff

The paper investigates the influence of different model specifications on the interpretation of discrete choice experiments that elicit heterogeneous public landscape preferences. A comparison of specifications based on the Mixed Multinomial Logit and the Generalized Multinomial Logit models reveals that the parameter estimates appear qualitatively comparable. However, a more in-depth investigation of the conditional estimate distributions in the sample shows that parameter interactions in the Generalized Multinomial Logit Model lead to interpretations that differ from those of the Mixed Multinomial Logit Model. This highlights the potential impact of common model specification choices on the results of landscape preference studies.
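The interaction the authors point to arises from how the Generalized Multinomial Logit model combines a scale parameter with the taste coefficients. The toy simulation below (not the authors' estimation code) contrasts Mixed Logit coefficients, beta_n = beta + eta_n, with one common G-MNL form, beta_n = sigma_n * (beta + eta_n), using made-up attribute values to show how the conditional distributions diverge even when the mean parameters are identical.

```python
# Illustrative sketch of the scale/taste interaction in G-MNL vs Mixed Logit.
# All numbers are made up; this is not an estimation routine.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_alts, n_attrs = 1000, 3, 2
X = rng.normal(size=(n_alts, n_attrs))            # attributes of 3 landscape options
beta = np.array([1.0, -0.5])                      # mean taste parameters
eta = rng.normal(scale=0.4, size=(n_people, n_attrs))       # taste heterogeneity
sigma = np.exp(rng.normal(scale=0.6, size=(n_people, 1)))   # G-MNL scale heterogeneity

def choice_probs(coefs):
    v = coefs @ X.T                               # utilities, shape (n_people, n_alts)
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

p_mixl = choice_probs(beta + eta)                 # Mixed Logit coefficients
p_gmnl = choice_probs(sigma * (beta + eta))       # G-MNL: scale multiplies tastes

# individual-level (conditional) choice probabilities differ between the two
# specifications even though the mean parameters are identical
print(p_mixl.mean(axis=0), p_gmnl.mean(axis=0))
```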


Critical Care ◽  
2021 ◽  
Vol 25 (1) ◽  
Author(s):  
Harry Magunia ◽  
Simone Lederer ◽  
Raphael Verbuecheln ◽  
Bryant Joseph Gilot ◽  
Michael Koeppen ◽  
...  

Abstract Background Intensive care resources are heavily utilized during the COVID-19 pandemic. However, risk stratification and prediction of SARS-CoV-2 patient clinical outcomes upon ICU admission remain inadequate. This study aimed to develop a machine learning model, based on retrospective and prospective clinical data, to stratify patient risk and predict ICU survival and outcomes. Methods A Germany-wide electronic registry was established to pseudonymously collect admission, therapeutic and discharge information of SARS-CoV-2 ICU patients retrospectively and prospectively. Machine learning approaches were evaluated for the accuracy and interpretability of their predictions. The Explainable Boosting Machine approach was selected as the most suitable method. Individual, non-linear shape functions for predictive parameters and parameter interactions are reported. Results 1039 patients were included in the Explainable Boosting Machine model, 596 collected retrospectively and 443 prospectively. The model for prediction of general ICU outcome was more reliable at predicting "survival". Age, inflammatory and thrombotic activity, and severity of ARDS at ICU admission were predictive of ICU survival. Patient age, pulmonary dysfunction and transfer from an external institution were predictors for ECMO therapy. The interaction of patient age with D-dimer levels on admission, and of creatinine levels with the SOFA score without GCS, were predictors for renal replacement therapy. Conclusions Using Explainable Boosting Machine analysis, we confirmed and weighed previously reported predictors and identified novel predictors of outcome in critically ill COVID-19 patients. With this strategy, predictive modeling of COVID-19 ICU patient outcomes can be performed while overcoming the limitations of linear regression models. Trial registration "ClinicalTrials" (clinicaltrials.gov), NCT04455451.
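For readers unfamiliar with the method, the sketch below shows how an Explainable Boosting Machine with pairwise interaction terms might be fit using the interpretml package; the feature names, synthetic data, and the built-in age x D-dimer interaction are assumptions for illustration, not the registry data or the authors' pipeline.

```python
# Hedged sketch of fitting an Explainable Boosting Machine on synthetic
# ICU-admission-style tabular data with the interpret (interpretml) package.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "age":          rng.integers(18, 95, n),
    "d_dimer":      rng.lognormal(0.0, 1.0, n),
    "creatinine":   rng.lognormal(0.1, 0.4, n),
    "sofa_wo_gcs":  rng.integers(0, 20, n),
})
# synthetic label with an age x D-dimer interaction, mimicking the kind of
# effect reported in the abstract
logit = -3 + 0.03 * X["age"] + 0.2 * X["d_dimer"] + 0.002 * X["age"] * X["d_dimer"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# pairwise interaction terms are learned alongside per-feature shape functions
ebm = ExplainableBoostingClassifier(interactions=5, random_state=0)
ebm.fit(X, y)

explanation = ebm.explain_global()   # non-linear shape functions + interactions
print(ebm.predict_proba(X.head()))
```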


2021 ◽  
Author(s):  
Harry Magunia ◽  
Simone Lederer ◽  
Raphael Verbuecheln ◽  
Bryant Joseph Gilot ◽  
Michael Koeppen ◽  
...  

Abstract Background. Intensive care resources are heavily utilized during the COVID-19 pandemic. However, risk stratification and prediction of SARS-CoV-2 patient clinical outcomes upon ICU admission remain inadequate. This study aimed to develop a machine learning model, based on retrospective and prospective clinical data, to stratify patient risk and predict ICU survival and outcomes. Methods. A Germany-wide electronic registry was established to pseudonymously collect admission, therapeutic and discharge information of SARS-CoV-2 ICU patients retrospectively and prospectively. Machine learning approaches were evaluated for the accuracy and interpretability of their predictions. The Explainable Boosting Machine approach was selected as the most suitable method. Individual, non-linear shape functions for predictive parameters and parameter interactions are reported. Results. 1,039 patients were included in the Explainable Boosting Machine model, 596 collected retrospectively and 443 prospectively. The model for prediction of general ICU outcome was more reliable at predicting "survival". Age, inflammatory and thrombotic activity, and severity of ARDS at ICU admission were predictive of ICU survival. Patient age, pulmonary dysfunction and transfer from an external institution were predictors for ECMO therapy. The interaction of patient age with D-dimer levels on admission, and of creatinine levels with the SOFA score without GCS, were predictors for renal replacement therapy. Conclusions. Using Explainable Boosting Machine analysis, we confirmed and weighed previously reported predictors and identified novel predictors of outcome in critically ill COVID-19 patients. With this strategy, predictive modeling of COVID-19 ICU patient outcomes can be performed while overcoming the limitations of linear regression models. Trial registration. "ClinicalTrials" (clinicaltrials.gov), NCT04455451.


2020 ◽  
Author(s):  
Sabine M. Spiessl ◽  
Sergei Kucherenko

Probabilistic methods of higher-order sensitivity analysis provide a possibility for identifying parameter interactions by means of sensitivity indices. A better understanding of parameter interactions may help to better quantify uncertainties of repository models, which can behave in a highly nonlinear, non-monotonic or even discontinuous manner. Sensitivity indices can be estimated efficiently by the Random-Sampling High Dimensional Model Representation (RS-HDMR) metamodeling approach. This approach is based on truncating the ANOVA-HDMR expansion at second order; the truncated terms are then approximated by orthonormal polynomials. By design, the total-order sensitivity index (SIT) in this method is approximated as the sum of the first-order index (SI1) and all corresponding second-order indices (SI2) for a given parameter. RS-HDMR belongs to a wider class of methods known as polynomial chaos expansion (PCE). PCE methods are based on Wiener's homogeneous chaos theory, published in 1938, and are widely used in metamodeling. Usually only a few terms are relevant in the PCE structure. The Bayesian Sparse PCE method (BSPCE) makes use of sparse PCE; using BSPCE, SI1 and SIT can be estimated. In this work we used the SobolGSA software [1], which contains both the RS-HDMR and BSPCE methods.

We have analysed the sensitivities of a model for a generic LILW repository in a salt mine using both the RS-HDMR and the BSPCE approach. The model includes a barrier in the near field which is chemically dissolved (corroded) over time by magnesium-containing brine, resulting in a sudden, significant change of the model behaviour and usually a rise of the radiation exposure. We investigated the model with two sets of input parameters: one with 6 parameters and one with 5 additional ones (the LILW6 and LILW11 models, respectively). For the time-dependent analysis, 31 time points were used.

The SI1 indices calculated with both approaches agree well with those obtained from the well-established and reliable first-order algorithm EASI [2] in most investigations. The SIT indices obtained from the BSPCE method seem to increase with the number of simulations used to build the metamodel. The SIT time curves obtained from the RS-HDMR approach with an optimal choice of the polynomial coefficients agree well with those from the BSPCE approach only for relatively low numbers of simulations. As the BSPCE approach, in contrast to RS-HDMR, accounts for all orders of interaction, this may be a hint of the existence of third- or higher-order effects.

Acknowledgements
The work was financed by the German Federal Ministry for Economic Affairs and Energy (BMWi). We would also like to thank Dirk-A. Becker for his constructive feedback.

References
[1] S. M. Spiessl, S. Kucherenko, D.-A. Becker, O. Zaccheus, Higher-order sensitivity analysis of a final repository model with discontinuous behaviour. Reliability Engineering and System Safety, doi: https://doi.org/10.1016/j.ress.2018.12.004 (2018).
[2] E. Plischke, An effective algorithm for computing global sensitivity indices (EASI). Reliability Engineering and System Safety, 95: 354–360 (2010).
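To make the SI1/SI2/SIT terminology above concrete, the sketch below estimates first-order, second-order and total-order Sobol indices for a toy function. It uses SALib's Saltelli/Sobol estimator rather than the RS-HDMR, BSPCE or SobolGSA tools discussed in the abstract, so it illustrates the quantities, not the authors' workflow.

```python
# Toy estimation of first-order (SI1), second-order (SI2) and total-order (SIT)
# sensitivity indices with SALib; the model and bounds are made up.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0.0, 1.0]] * 3,
}

def toy_model(x):
    # non-additive response: the x1*x2 product creates a second-order effect
    return np.sin(np.pi * x[:, 0]) + 2.0 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

X = saltelli.sample(problem, 1024, calc_second_order=True)
Y = toy_model(X)
Si = sobol.analyze(problem, Y, calc_second_order=True)

print(Si["S1"])   # first-order indices (SI1)
print(Si["S2"])   # second-order interaction indices (SI2)
print(Si["ST"])   # total-order indices (SIT); SIT - SI1 gauges interaction strength
```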


2020 ◽  
Vol 48 (7) ◽  
pp. e37-e37
Author(s):  
Stéphane Poulain ◽  
Ophélie Arnaud ◽  
Sachi Kato ◽  
Iris Chen ◽  
Hiro Ishida ◽  
...  

Abstract The development of complex methods in molecular biology is a laborious, costly, iterative and often intuition-bound process in which optima are sought in a multidimensional parameter space through step-by-step optimizations. The difficulty of miniaturizing reactions below the microliter volumes usually handled in multiwell plates by robots, together with the cost of the experiments, limits the number of parameters and the dynamic ranges that can be explored. Nevertheless, because of the non-linear responses of biochemical systems to their reagent concentrations, broad dynamic ranges are necessary. Here we use a high-performance nanoliter handling platform and computer generation of liquid transfer programs to explore, in quadruplicate, 648 combinations of 4 parameters of a biochemical reaction, the reverse transcription, which led us to uncover non-linear responses, parameter interactions and novel mechanistic insights. With the increased availability of computer-driven laboratory platforms for biotechnology, our results demonstrate the feasibility and advantage of methods development based on reproducible, computer-aided exhaustive characterization of biochemical systems.
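A full-factorial design of this kind is straightforward to generate programmatically before translating it into liquid-transfer instructions. The sketch below builds 648 combinations of four hypothetical reverse-transcription parameters in quadruplicate and writes them to a worklist file; the factor names and levels are illustrative, not the published protocol.

```python
# Generate an exhaustive factorial design with replicates (648 combinations x 4),
# as a worklist that a nanoliter dispenser program could consume.
from itertools import product
import csv

factors = {                              # hypothetical RT reaction parameters
    "rt_enzyme_U": [2, 5, 10, 20, 40, 80],          # 6 levels
    "primer_uM":   [0.5, 1, 2, 4, 8, 16],           # 6 levels
    "template_ng": [0.1, 1, 10, 100, 1000, 10000],  # 6 levels
    "dNTP_mM":     [0.25, 0.5, 1.0],                # 3 levels -> 6*6*6*3 = 648 combos
}
replicates = 4

rows = []
for combo in product(*factors.values()):
    for rep in range(replicates):
        rows.append(dict(zip(factors, combo), replicate=rep + 1))

with open("worklist.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(factors) + ["replicate"])
    writer.writeheader()
    writer.writerows(rows)

print(len(rows))  # 648 * 4 = 2592 wells
```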


2019 ◽  
Author(s):  
Stéphane Poulain ◽  
Ophélie Arnaud ◽  
Sachi Kato ◽  
Iris Chen ◽  
Hiro Ishida ◽  
...  

Abstract The development of complex, multi-step methods in molecular biology is a laborious, costly, iterative and often intuition-bound process in which an optimum is sought in a multidimensional parameter space through step-by-step optimisations. The difficulty of miniaturising reactions below the microliter volumes usually handled in multiwell plates by robots, together with the cost of the experiments, limits the number of parameters and the dynamic ranges that can be explored. Nevertheless, because of the non-linear responses of biochemical systems to their reagent concentrations, broad dynamic ranges are necessary. Here we use a high-performance nanoliter handling platform and computer generation of liquid transfer programs to explore, in quadruplicate, more than 600 combinations of 4 parameters of a biochemical reaction, the reverse transcription, which led us to uncover non-linear responses, parameter interactions and novel mechanistic insights. With the increased availability of computer-driven laboratory platforms for biotechnology, our results demonstrate the feasibility and advantage of methods development based on reproducible, computer-aided exhaustive characterisation of biochemical systems.


2019 ◽  
Vol 31 (6) ◽  
pp. 1181-1193 ◽  
Author(s):  
Jun Xu ◽  
Wei Zeng ◽  
Yanyan Lan ◽  
Jiafeng Guo ◽  
Xueqi Cheng
