Detection of Degenerative Osteophytes of the Spine on PET/CT Using Region-Based Convolutional Neural Networks

Author(s): Yinong Wang, Jianhua Yao, Joseph E. Burns, Jiamin Liu, Ronald M. Summers
Radiology, 2020, Vol 294 (2), pp. 445-452
Author(s): Ludovic Sibille, Robert Seifert, Nemanja Avramovic, Thomas Vehren, Bruce Spottiswoode, ...
2020, Vol 2 (5), pp. e200016
Author(s): Amy J. Weisman, Minnie W. Kieler, Scott B. Perlman, Martin Hutchings, Robert Jeraj, ...
2020, Vol 33 (4), pp. 888-894
Author(s): Skander Jemaa, Jill Fredrickson, Richard A. D. Carano, Tina Nielsen, Alex de Crespigny, ...

Abstract: 18F-Fluorodeoxyglucose positron emission tomography (FDG-PET) is commonly used in clinical practice and clinical drug development to identify and quantify metabolically active tumors. Manual or computer-assisted tumor segmentation in FDG-PET images is a common way to assess tumor burden; however, such approaches are labor intensive and may suffer from high inter-reader variability. We propose an end-to-end method leveraging 2D and 3D convolutional neural networks to rapidly identify and segment tumors and to extract metabolic information in eyes-to-thighs (whole-body) FDG-PET/CT scans. The developed architecture is computationally efficient and devised to accommodate the size of whole-body scans, the extreme imbalance between tumor burden and the volume of healthy tissue, and the heterogeneous nature of the input images. Our dataset consists of a total of 3664 eyes-to-thighs FDG-PET/CT scans from multi-site clinical trials in patients with non-Hodgkin's lymphoma (NHL) and advanced non-small cell lung cancer (NSCLC). Tumors were segmented and reviewed by board-certified radiologists. We report a mean 3D Dice score of 88.6% on an NHL hold-out set of 1124 scans and a 93% sensitivity on 274 NSCLC hold-out scans. The method is a potential tool for radiologists to rapidly assess eyes-to-thighs FDG-avid tumor burden.
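The reported 3D Dice score measures volumetric overlap between a predicted and a reference segmentation mask. A minimal NumPy sketch of the metric (the smoothing term `eps` is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """3D Dice coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|);
    `eps` (an assumed smoothing term) avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction yields a score of 1.0, and fully disjoint masks yield a score near 0; the paper's reported mean of 88.6% would correspond to a `dice_score` of roughly 0.886 averaged over the hold-out scans.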


2018
Author(s): George Symeonidis, Peter P. Groumpos, Evangelos Dermatas

2020, Vol 2020 (10), pp. 28-1-28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image together with an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for degradation parameters is also incorporated when the degradation parameters of degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
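The core idea of the two-input network is that the degraded image and its degradation parameter are embedded separately and then fused before classification. A minimal NumPy stand-in for that fusion step (all dimensions, weight names, and the linear branches are illustrative assumptions; the paper uses a CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an 8x8 grayscale patch, 8 hidden units per branch,
# and 10 output classes. None of these values come from the paper.
W_IMG = rng.normal(size=(64, 8))    # image-branch weights
W_PARAM = rng.normal(size=(1, 8))   # degradation-parameter-branch weights
W_OUT = rng.normal(size=(16, 10))   # classifier over concatenated features

def classify(image, degradation_param):
    """Classify a degraded image given its degradation parameter.

    Each input is embedded by its own (here, linear + ReLU) branch;
    the two feature vectors are concatenated and passed to a linear
    classifier -- a toy sketch of the paper's two-input design.
    """
    img_feat = np.maximum(image.ravel() @ W_IMG, 0.0)
    param_feat = np.maximum(np.array([degradation_param]) @ W_PARAM, 0.0)
    logits = np.concatenate([img_feat, param_feat]) @ W_OUT
    return int(np.argmax(logits))
```

When the degradation parameter (e.g. a noise level or compression quality) is unknown at test time, the paper's estimation network would supply the `degradation_param` argument instead of ground truth.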


Author(s): Edgar Medina, Roberto Campos, Jose Gabriel R. C. Gomes, Mariane R. Petraglia, Antonio Petraglia
