The impact of the variation of imaging factors on the robustness of Computed Tomography Radiomic Features: A review

Author(s):  
Reza Reiazi ◽  
Engy Abbas ◽  
Petra Famiyeh ◽  
Aria Rezaie ◽  
Jennifer Y. Y. Kwan ◽  
...  

Abstract: The field of radiomics is at the forefront of personalized medicine. However, there are concerns regarding the robustness of its features against variation in medical imaging parameters and, consequently, the performance of the predictive models built upon them. Our review therefore aims to identify the image perturbation factors (IPFs) that most influence the robustness of radiomic features in biomedical research. We also provide insights into the validity of, and discrepancies between, the different methodologies applied to investigate the robustness of radiomic features. We selected 527 papers based on the primary criterion that they examined imaging parameters affecting the reproducibility of radiomic features extracted from computed tomography (CT) images. We compared the reported effects of these parameters across the eligible studies, and divided the studies into three groups by IPF type: (i) scanner parameters, (ii) acquisition parameters, and (iii) reconstruction parameters. Our review highlighted that the reconstruction algorithm was the most reproducible factor, and that shape features, along with intensity histogram (IH) features, were the most robust against variation in imaging parameters. We also identified substantial inconsistencies in the methodology and reporting style of the reviewed studies, such as the type of study performed, the metrics used for robustness, the feature extraction techniques, the image perturbation factors considered, and the inclusion of outcomes. Finally, we hope that the IPFs and methodological inconsistencies identified here will aid the scientific community in conducting research that is more reproducible and avoids the pitfalls of previous analyses.
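Feature robustness across repeated or perturbed scans is commonly quantified with agreement metrics such as Lin's concordance correlation coefficient (CCC); the review notes that the robustness metrics vary between studies. As a minimal, hypothetical sketch (illustrative only, not code from any reviewed study):

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    feature values, e.g. one radiomic feature extracted from a
    test scan (x) and a retest or perturbed scan (y).
    CCC close to 1 indicates a robust (reproducible) feature."""
    mx, my = mean(x), mean(y)
    n = len(x)
    # population (biased) variances and covariance
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

In such studies a feature is often flagged as robust when its CCC against a perturbed acquisition exceeds a fixed threshold (0.85 or 0.9 are common choices).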

2021 ◽  
Vol 133 ◽  
pp. 104400

2019 ◽  
pp. 27-35
Author(s):  
Alexandr Neznamov

Digital technologies are no longer the future but the present of civil proceedings, which makes any research in this direction relevant. At the same time, some fundamental problems remain unattended by the scientific community. One of these is the problem of classifying digital technologies in civil proceedings. On the basis of instrumental and genetic approaches to the understanding of digital technologies, it is concluded that their most significant feature is the ability to mediate the interaction of participants in legal proceedings with information; their differentiating feature is the function performed by a particular technology in that interaction with information. On this basis, it is proposed to distinguish the following groups of digital technologies in civil proceedings: a) technologies for recording, storing and displaying (reproducing) information; b) technologies for transferring information; c) technologies for processing information. A brief description is given of each of the groups. The presented classification could serve as a basis for a more systematic discussion of the impact of digital technologies on the essence of civil proceedings. In particular, it is pointed out that issues of recording, storing, reproducing and transferring information are traditionally more "technological" for civil procedure, while issues of information processing are more conceptual.


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract: Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of the synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of Dice coefficient and 20% for Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of artery lumen in CTA images.
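The reported improvements are stated in terms of the Dice coefficient and the Hausdorff distance, two standard segmentation metrics. A minimal, self-contained sketch of both, assuming (as an illustrative simplification) that masks are given as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of foreground voxel coordinates.
    1.0 means perfect overlap, 0.0 means no overlap."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance (Euclidean) between two
    non-empty point sets: the largest distance from any point
    in one set to its nearest point in the other set."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    def directed(s, t):
        return max(min(dist(p, q) for q in t) for p in s)

    return max(directed(a, b), directed(b, a))
```

Note that a higher Dice score is better, while a lower Hausdorff distance is better, so "20% for Hausdorff distance" refers to a reduction.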


2021 ◽  
pp. 197140092098866
Author(s):  
Daniel Thomas Ginat ◽  
James Kenniff

Background: The COVID-19 pandemic led to a widespread socioeconomic shutdown, including medical facilities in many parts of the world. The purpose of this study was to assess the impact of this shutdown on neuroimaging utilisation at an academic medical centre in the United States. Methods: Exam volumes from 1 February 2020 to 11 August 2020 were calculated based on patient location, including outpatient, inpatient and emergency, as well as modality type, including computed tomography and magnetic resonance imaging. 13 March 2020 was designated as the beginning of the shutdown period for the radiology department and 1 May 2020 was designated as the reopening date. The scan volumes during the pre-shutdown, shutdown and post-shutdown periods were compared using t-tests. Results: Overall, neuroimaging scan volumes declined significantly by 41% during the shutdown period and returned to 98% of pre-shutdown levels after the shutdown, with an estimated 3231 missed scans. Outpatient scan volumes were more greatly affected than inpatient scan volumes, while emergency scan volumes declined the least during the shutdown. In addition, magnetic resonance imaging scan volumes declined to a greater degree than computed tomography scan volumes during the shutdown. Conclusion: The shutdown from the COVID-19 pandemic had a substantial but transient impact on neuroimaging utilisation overall, with variable magnitude depending on patient location and modality type.
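The period comparisons above rest on t-tests over scan volumes. As a sketch, here is Welch's t statistic (the unequal-variance variant; the abstract does not specify which t-test was used, so this choice is an assumption) applied to hypothetical daily scan counts from two periods:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic for two independent samples with
    possibly unequal variances, e.g. daily scan counts during
    the pre-shutdown period (x) and the shutdown period (y).
    A large positive value suggests x has the higher mean."""
    vx, vy = variance(x), variance(y)  # sample variances
    return (mean(x) - mean(y)) / (vx / len(x) + vy / len(y)) ** 0.5
```

In practice the statistic would be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value; library routines such as SciPy's two-sample t-test handle that step.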


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Federico Calesella ◽  
Alberto Testolin ◽  
Michele De Filippo De Grazia ◽  
Marco Zorzi

Abstract: Multivariate prediction of human behavior from resting state data is gaining increasing popularity in the neuroimaging community, with far-reaching translational implications in neurology and psychiatry. However, the high dimensionality of neuroimaging data increases the risk of overfitting, calling for the use of dimensionality reduction methods to build robust predictive models. In this work, we assess the ability of four well-known dimensionality reduction techniques to extract relevant features from resting state functional connectivity matrices of stroke patients, which are then used to build a predictive model of the associated deficits based on cross-validated regularized regression. In particular, we investigated prediction ability over different neuropsychological scores referring to the language, verbal memory, and spatial memory domains. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were the two best methods at extracting representative features, followed by Dictionary Learning (DL) and Non-Negative Matrix Factorization (NNMF). Consistent with these results, features extracted by PCA and ICA were found to be the best predictors of the neuropsychological scores across all the considered cognitive domains. For each feature extraction method, we also examined the impact of the regularization method, the model complexity (in terms of the number of features that entered into the model), and the quality of the maps that display predictive edges in the resting state networks. We conclude that PCA-based models, especially when combined with L1 (LASSO) regularization, provide the optimal balance between prediction accuracy, model complexity, and interpretability.
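The best-performing pipeline combined PCA-based feature extraction with L1 (LASSO) regularized regression. A minimal NumPy sketch of the PCA projection step is shown below (illustrative only; the study's actual implementation is not specified here, and the extracted components would subsequently feed a cross-validated LASSO regression):

```python
import numpy as np

def pca_features(X, n_components):
    """Project samples (rows of X) onto the top principal components.

    X: array of shape (n_samples, n_features), e.g. vectorised
    resting state functional connectivity matrices, one per patient.
    Returns an array of shape (n_samples, n_components) that can be
    used as low-dimensional predictors in a regularized regression."""
    Xc = X - X.mean(axis=0)          # centre each feature
    # SVD of the centred data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # component scores
```

Reducing the connectivity matrix to a handful of components before regression is precisely what controls the overfitting risk discussed in the abstract: the regression then estimates a few coefficients instead of one per connectivity edge.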

