Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

2017 · Vol 62 (22) · pp. 8536-8565
Author(s): Martin Vallières, Sébastien Laberge, André Diamant, Issam El Naqa
Materials · 2021 · Vol 14 (15) · pp. 4070
Author(s): Andrea Karen Persons, John E. Ball, Charles Freeman, David M. Macias, Chartrisa LaShan Simpson, ...

Standards for the fatigue testing of wearable sensing technologies are lacking. Most published fatigue tests for wearable sensors are performed on proof-of-concept stretch sensors fabricated from a variety of materials. Because of their flexibility and stretchability, polymers are often used in the fabrication of wearable sensors; other materials, including textiles, carbon nanotubes, graphene, and conductive metals or inks, may be used in conjunction with polymers. Depending on the combination of materials used, the fatigue behavior of wearable sensors can vary. Fatigue testing methodologies for the sensors also vary, with most tests focusing only on the low-cycle fatigue (LCF) regime, and few sensors are cycled until failure or runout is achieved. Fatigue life predictions of wearable sensors are also lacking. These issues make direct comparisons of wearable sensors difficult. To facilitate direct comparisons of wearable sensors and to move proof-of-concept sensors from “bench to bedside,” fatigue testing standards should be established. Further, both high-cycle fatigue (HCF) and failure data are needed to determine the appropriateness of using, modifying, developing, and validating fatigue life prediction models, and to further the understanding of how cracks initiate and propagate in wearable sensing technologies.
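Where failure data do exist, a classical starting point for fatigue life prediction is a Basquin-type power-law fit of the stress-life (S-N) curve. The sketch below is illustrative only: the synthetic amplitude/cycles-to-failure pairs are invented for demonstration, and the Basquin model form is a standard fatigue-engineering convention, not a method taken from the studies cited here.

```python
import numpy as np

def fit_basquin(amplitude, cycles_to_failure):
    """Fit the Basquin relation S = A * N**b by linear regression in log-log space.

    amplitude         : stress or strain amplitude at each test level
    cycles_to_failure : observed cycles to failure at that amplitude
    Returns (A, b), the fatigue strength coefficient and exponent.
    """
    log_N = np.log10(cycles_to_failure)
    log_S = np.log10(amplitude)
    b, log_A = np.polyfit(log_N, log_S, 1)  # slope and intercept of the log-log line
    return 10.0 ** log_A, b

def predict_life(A, b, amplitude):
    """Invert S = A * N**b to estimate cycles to failure at a given amplitude."""
    return (amplitude / A) ** (1.0 / b)

# Hypothetical LCF-to-HCF test data (amplitude in %, life in cycles); illustrative only.
amps  = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
lives = np.array([1.2e3, 5.0e3, 4.1e4, 6.3e5, 2.0e7])

A, b = fit_basquin(amps, lives)
print(f"Basquin fit: S = {A:.2f} * N^{b:.3f}")
print(f"Predicted life at 5% amplitude: {predict_life(A, b, 5.0):.2e} cycles")
```

Note that runout specimens (tests stopped before failure) would require censored-data methods rather than this direct fit, which is one reason standardized HCF protocols with true failure data matter.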


Radiology · 1991 · Vol 180 (2) · pp. 551-556
Author(s): R K Butts, F Farzaneh, S J Riederer, J N Rydberg, R C Grimm

Radiology · 1988 · Vol 167 (2) · pp. 541-546
Author(s): F Farzaneh, S J Riederer, W T Djang, J T Curnes, R J Herfkens

2018 · Vol 25 (8) · pp. 969-975
Author(s): Jenna M Reps, Martijn J Schuemie, Marc A Suchard, Patrick B Ryan, Peter R Rijnbeek

Abstract
Objective: To develop a conceptual prediction model framework containing standardized steps, and to describe the corresponding open-source software developed to consistently implement the framework across computational environments and observational healthcare databases, enabling model sharing and reproducibility.
Methods: Based on existing best practices, we propose a standardized five-step framework for: (1) transparently defining the problem; (2) selecting suitable datasets; (3) constructing variables from the observational data; (4) learning the predictive model; and (5) validating model performance. We implemented this framework as open-source software utilizing the Observational Medical Outcomes Partnership Common Data Model to enable convenient sharing of models and reproduction of model evaluation across multiple observational datasets. The software implementation contains default covariates and classifiers, but the framework enables customization and extension.
Results: As a proof of concept demonstrating the transparency and ease of model dissemination using the software, we developed prediction models for 21 different outcomes within a target population of people suffering from depression across 4 observational databases. All 84 models are available in an accessible online repository to be implemented by anyone with access to an observational database in the Common Data Model format.
Conclusions: The proof-of-concept study illustrates the framework’s ability to develop reproducible models that can be readily shared, and it offers the potential to perform extensive external validation of models and improve their likelihood of clinical uptake. In future work, the framework will be applied to perform an “all-by-all” prediction analysis to assess the observational data prediction domain across numerous target populations, outcomes, and time-at-risk settings.
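The five steps compose naturally into a pipeline. The sketch below is a minimal stand-in written in Python on synthetic data, assuming a generic regularized logistic regression as the learner (regularized logistic regression is a common default for patient-level prediction, but this is not the authors' actual software, which is built on the OMOP Common Data Model and has its own API):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Steps 1-2 (problem definition, dataset selection) are stand-ins here:
# a synthetic cohort of 5000 "patients" with 20 baseline covariates and
# a binary outcome observed during a fixed time-at-risk window.
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.0).astype(int)

# Step 3: covariate construction would normally query the Common Data
# Model; here the feature matrix X is simply given.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Step 4: learn the predictive model.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
model.fit(X_train, y_train)

# Step 5: validate performance; external validation would repeat this
# step on a different database mapped to the same Common Data Model.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Internal validation AUC: {auc:.3f}")
```

Because steps 3 and 5 both operate on data in a shared format, a model fitted on one database can be re-evaluated unchanged on another, which is what makes the "develop once, validate everywhere" pattern in the abstract possible.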


2002 · Vol 16 (1) · pp. 42-50
Author(s): Linda White Nunes, Sarah A. Englander, Riad Charafeddine, Mitchell D. Schnall

2020
Author(s): Rafael Massahiro Yassue, José Felipe Gonzaga Sabadin, Giovanni Galli, Filipe Couto Alves, Roberto Fritsche-Neto

Abstract
Comparisons among genomic prediction models are usually based on validation schemes such as Repeated Random Subsampling (RRS) or K-fold cross-validation. However, the design of the training and validation sets strongly affects how, and how subjectively, models are compared. Both procedures allow observations to overlap across replicates, which can inflate accuracy estimates and violate the independence of residuals, making the results less reliable. For the same reason, post hoc tests such as ANOVA are not recommended, since the assumption of residual independence is not fulfilled. We therefore propose a new way to sample observations into training and validation sets based on a cross-validation alpha-based design (CV-α). CV-α is designed to create many validation scenarios (replicates × folds) regardless of the number of treatments. Using CV-α, the number of genotypes sharing a fold across replicates was much lower than under K-fold, indicating greater residual independence. Based on the CV-α results, as a proof of concept, we could therefore compare the proposed methodology to RRS and K-fold via ANOVA, applying four genomic prediction models to a simulated and a real dataset. All validation methods showed similar predictive ability and bias; however, in terms of mean squared error and coefficient of variation, CV-α performed best under the evaluated scenarios. Moreover, as it adds no cost or complexity, it is more reliable and allows the use of non-subjective methods to compare models and factors. CV-α can therefore be considered a more precise validation methodology for model selection.
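To make the co-occurrence argument concrete, the sketch below contrasts plain repeated K-fold with a design-style assignment that places each genotype in the fold where it has co-occurred least with the genotypes already there, then counts how often pairs of genotypes share a fold across replicates. This is a simplified greedy illustration of the idea, not the authors' exact CV-α construction, which is based on alpha-lattice experimental designs.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
n_genotypes, k_folds, n_reps = 60, 5, 10
fold_size = n_genotypes // k_folds

def pair_cooccurrence(assignments):
    """Count, over all replicates, how often each genotype pair shares a fold."""
    counts = np.zeros((n_genotypes, n_genotypes), dtype=int)
    for rep in assignments:  # rep[i] = fold index of genotype i
        for i, j in itertools.combinations(range(n_genotypes), 2):
            if rep[i] == rep[j]:
                counts[i, j] += 1
    return counts

# Plain repeated K-fold: an independent shuffle per replicate.
kfold_reps = []
for _ in range(n_reps):
    perm = rng.permutation(n_genotypes)
    rep = np.empty(n_genotypes, dtype=int)
    rep[perm] = np.repeat(np.arange(k_folds), fold_size)
    kfold_reps.append(rep)

# Design-flavoured alternative: greedily assign each genotype to the open
# fold where it has accumulated the least co-occurrence so far.
alpha_reps = []
history = np.zeros((n_genotypes, n_genotypes), dtype=int)
for _ in range(n_reps):
    rep = np.full(n_genotypes, -1, dtype=int)
    load = np.zeros(k_folds, dtype=int)
    for g in rng.permutation(n_genotypes):
        open_folds = np.flatnonzero(load < fold_size)
        cost = [history[g, rep == f].sum() + history[rep == f, g].sum()
                for f in open_folds]
        f = open_folds[int(np.argmin(cost))]
        rep[g] = f
        load[f] += 1
    history += pair_cooccurrence([rep])
    alpha_reps.append(rep)

print("max pair co-occurrence, repeated K-fold:", pair_cooccurrence(kfold_reps).max())
print("max pair co-occurrence, greedy design:  ", pair_cooccurrence(alpha_reps).max())
```

A lower maximum co-occurrence means fewer genotype pairs are repeatedly evaluated together across replicates, which is the property CV-α exploits to obtain approximately independent residuals and thus justify ANOVA-based model comparison.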

