Evaluation of image parameters and a convolutional neural network in FDG-PET/MR/CT for predicting overall survival (OS) and therapy response in melanoma patients under CIT.

2020 · Author(s): F Seith, J Vogel, C la Fougère, T Küstner, K Nikolaou, ...

2019 · Vol 29 (09) · pp. 1950010 · Author(s): Octavio Martinez Manzanera, Sanne K. Meles, Klaus L. Leenders, Remco J. Renken, Marco Pagani, ...

Over the last few years, convolutional neural networks (CNNs) have shown remarkable results in different image classification tasks, including medical imaging. One area that has been less explored with CNNs is Positron Emission Tomography (PET). Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) is a PET technique used to obtain a representation of brain metabolic function. In this study we applied 3D CNNs to FDG-PET brain images with the purpose of discriminating patients diagnosed with Parkinson’s disease (PD) from controls. We used Scaled Subprofile Modeling with Principal Component Analysis (SSM/PCA) as a preprocessing step to focus on specific brain regions and limit the number of voxels used as input to the CNNs, thereby increasing the signal-to-noise ratio in our data. We performed hyperparameter optimization on three CNN architectures to estimate the classification accuracy of the networks on new data. The best performance we obtained on the test set was [Formula: see text], with an area under the receiver operating characteristic curve of [Formula: see text]. We believe that, with larger datasets, PD patients could be reliably distinguished from controls using FDG-PET scans alone, and that this technique could be applied to more clinically challenging tasks such as the differential diagnosis of neurological disorders with similar symptoms, for example PD, Progressive Supranuclear Palsy (PSP) and Multiple System Atrophy (MSA).
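As a rough illustration of the kind of model this abstract describes, the sketch below shows a small 3D CNN binary classifier in PyTorch. All layer sizes, the input volume shape, and the way the region selection is applied (a simple binary mask standing in for the SSM/PCA-derived voxel selection) are hypothetical choices for illustration, not the architectures evaluated in the study.

import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Minimal 3D CNN for binary classification of masked FDG-PET volumes.
    Layer sizes are illustrative, not those used in the study."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)  # PD vs. control

    def forward(self, x, roi_mask):
        # Keep only the voxels selected in preprocessing (stand-in for the
        # SSM/PCA-based region selection described in the abstract).
        x = x * roi_mask
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage with dummy data: one 64x64x64 FDG-PET volume and a binary voxel mask.
scan = torch.randn(1, 1, 64, 64, 64)
mask = (torch.rand(1, 1, 64, 64, 64) > 0.5).float()
logits = Small3DCNN()(scan, mask)  # shape (1, 2)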


Radiology · 2020 · Vol 294 (2) · pp. 445-452 · Author(s): Ludovic Sibille, Robert Seifert, Nemanja Avramovic, Thomas Vehren, Bruce Spottiswoode, ...

2020 · Vol 33 (4) · pp. 888-894 · Author(s): Skander Jemaa, Jill Fredrickson, Richard A. D. Carano, Tina Nielsen, Alex de Crespigny, ...

18F-Fluorodeoxyglucose positron emission tomography (FDG-PET) is commonly used in clinical practice and clinical drug development to identify and quantify metabolically active tumors. Manual or computer-assisted tumor segmentation in FDG-PET images is a common way to assess tumor burden, but such approaches are labor intensive and may suffer from high inter-reader variability. We propose an end-to-end method leveraging 2D and 3D convolutional neural networks to rapidly identify and segment tumors and to extract metabolic information from eyes-to-thighs (whole-body) FDG-PET/CT scans. The architecture is computationally efficient and devised to accommodate the size of whole-body scans, the extreme imbalance between tumor burden and the volume of healthy tissue, and the heterogeneous nature of the input images. Our dataset consists of a total of 3664 eyes-to-thighs FDG-PET/CT scans from multi-site clinical trials in patients with non-Hodgkin’s lymphoma (NHL) and advanced non-small cell lung cancer (NSCLC). Tumors were segmented and reviewed by board-certified radiologists. We report a mean 3D Dice score of 88.6% on an NHL hold-out set of 1124 scans and a 93% sensitivity on 274 NSCLC hold-out scans. The method is a potential tool for radiologists to rapidly assess eyes-to-thighs FDG-avid tumor burden.
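The reported evaluation metric can be illustrated with a short sketch (not the authors' code): a volumetric Dice score between a predicted and a reference tumor mask, computed with NumPy. The array shape and the random "masks" below are placeholders for illustration only.

import numpy as np

def dice_score(pred, ref, eps=1e-7):
    """Volumetric Dice overlap between two binary 3D tumor masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Dummy masks standing in for a predicted and a radiologist-reviewed
# segmentation of a whole-body scan (shape chosen arbitrarily).
rng = np.random.default_rng(0)
pred = rng.random((64, 192, 192)) > 0.98
ref = rng.random((64, 192, 192)) > 0.98
print(f"Dice: {dice_score(pred, ref):.3f}")

A soft, differentiable variant of the same overlap measure is a common training loss for handling the kind of tumor/healthy-tissue imbalance mentioned above, although the abstract does not state which loss the authors used.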


2020 · Vol 2020 (10) · pp. 28-1-28-7 · Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because real-world images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network therefore has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for the case where the degradation parameters of the input images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
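A minimal sketch of a two-input classifier of this kind is given below, assuming PyTorch. The way the degradation parameter is injected (concatenated to pooled image features), the layer sizes, and the class count are hypothetical choices for illustration, not the architecture proposed in the paper.

import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    """Classifier that takes a degraded image plus a scalar degradation
    parameter (e.g. a noise level or compression quality); sizes are illustrative."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Image features concatenated with the degradation parameter.
        self.head = nn.Sequential(nn.Linear(32 + 1, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, image, degradation_param):
        feats = self.backbone(image).flatten(1)          # (B, 32)
        return self.head(torch.cat([feats, degradation_param], dim=1))

# Usage: a batch of degraded 32x32 RGB images with known noise levels.
images = torch.randn(4, 3, 32, 32)
noise_levels = torch.tensor([[0.1], [0.2], [0.05], [0.3]])
logits = DegradationAwareClassifier()(images, noise_levels)  # (4, num_classes)

If the degradation parameter is unknown at test time, a separate small regression network could predict it from the degraded image and feed the estimate into the same head, mirroring the estimation network mentioned in the abstract.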

