Half2Half: deep neural network based CT image denoising without independent reference data

2020, Vol. 65 (21), pp. 215020
Author(s): Nimu Yuan, Jian Zhou, Jinyi Qi
IEEE Access, 2020, pp. 1-1
Author(s): Xiaojie Lv, Xuezhi Ren, Peng He, Mi Zhou, Zourong Long, ...

IEEE Access, 2021, pp. 1-1
Author(s): Jiahong Zhang, Yonggui Zhu, Wenyi Li, Wenlong Fu, Lihong Cao

2019, Vol. 1 (6), pp. 269-276
Author(s): Hongming Shan, Atul Padole, Fatemeh Homayounieh, Uwe Kruger, Ruhani Doda Khera, ...

2019, Vol. 3 (2), pp. 153-161
Author(s): Kuang Gong, Jiahui Guan, Chih-Chieh Liu, Jinyi Qi

Author(s): Rong Yang, Yizhou Chen, Guo Sa, Kangjie Li, Haigen Hu, ...

Abstract
Background: Numerous challenges remain in the differential diagnosis of pancreatic serous cystic neoplasms (SCNs) and mucinous cystic neoplasms (MCNs). Since the emergence of artificial intelligence (AI), many radiomics methods have been applied to distinguishing pancreatic SCNs from MCNs.
Purpose: A deep neural network (DNN) model termed Multi-channel-Multiclassifier-Random Forest-ResNet (MMRF-ResNet) was constructed to provide an objective CT imaging basis for the differential diagnosis between pancreatic SCNs and MCNs.
Materials and methods: This study is a retrospective analysis of unenhanced and enhanced pancreatic CT images from 63 patients with pancreatic SCNs and 47 patients with MCNs (3 of which were mucinous cystadenocarcinomas) confirmed by pathology from December 2010 to August 2016. Different image segmentation methods (single-channel manually outlined ROI images and multi-channel images), feature extraction methods (wavelet, LBP, HOG, GLCM, Gabor, ResNet, and AlexNet), and classifiers (KNN, Softmax, Bayes, random forest, and the majority voting rule method) were used to classify the nature of the lesion in each CT image (SCN vs. MCN). The classification results were then compared in terms of sensitivity, specificity, precision, accuracy, F1 score, and area under the receiver operating characteristic curve (AUC), with pathological results serving as the gold standard.
Results: Multi-channel-ResNet (AUC 0.98) was superior to Manual-ResNet (AUC 0.91). The CT image characteristics of lesions extracted by ResNet were more representative than those extracted by wavelet, LBP, HOG, GLCM, Gabor, and AlexNet. Compared with using the three classifiers alone or the majority voting rule method, the MMRF-ResNet model achieved better classification performance (AUC 0.96) for pancreatic SCNs versus MCNs.
Conclusion: The CT image classification model MMRF-ResNet is an effective method for distinguishing between pancreatic SCNs and MCNs.
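As an illustration of the multi-classifier, majority-voting stage this abstract describes, the following is a minimal Python sketch using scikit-learn: it assumes ResNet feature vectors have already been extracted (the random arrays below are placeholders, not the study's data) and combines KNN, a softmax/logistic-regression classifier, naive Bayes, and a random forest by hard majority vote. It is a sketch under those assumptions, not the authors' MMRF-ResNet implementation.

```python
# Majority-voting ensemble over pre-extracted ResNet feature vectors.
# `features` and `labels` are placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(110, 512))   # stand-in for ResNet feature vectors
labels = rng.integers(0, 2, size=110)    # 0 = SCN, 1 = MCN (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0
)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("softmax", LogisticRegression(max_iter=1000)),
        ("bayes", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="hard",  # hard vote, i.e. a majority-voting rule over the classifiers
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```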


2020
Author(s): Bin Liu, Xiaoxue Gao, Mengshuang He, Fengmao Lv, Guosheng Yin

Chest computed tomography (CT) scanning is one of the most important technologies for COVID-19 diagnosis and disease monitoring, particularly for early detection of coronavirus. Recent advances in computer vision motivate more concerted efforts to develop AI-driven diagnostic tools that can accommodate the enormous global demand for COVID-19 diagnostic tests. To help alleviate the burden on medical systems, we develop a lesion-attention deep neural network (LA-DNN) that predicts whether a case is COVID-19 positive or negative from a richly annotated chest CT image dataset. Based on the textual radiological report accompanying each CT image, we extract two types of annotation: an indicator of a positive or negative COVID-19 case, and descriptions of five lesions seen on the CT images of positive cases. The proposed data-efficient LA-DNN model focuses on the primary task of binary classification for COVID-19 diagnosis, while an auxiliary multi-label learning task is trained simultaneously to draw the model's attention to the five lesions associated with COVID-19. This joint-task learning makes the network highly sample-efficient, so it can learn COVID-19 radiology features effectively from a limited number of high-quality, information-rich samples. The experimental results show that the area under the curve (AUC), sensitivity (recall), precision, and accuracy for COVID-19 diagnosis are 94.0%, 88.8%, 87.9%, and 88.6%, respectively, which meet the clinical standards for practical use. A free online system for fast diagnosis from CT images is available at https://www.covidct.cn/, and all code and datasets are freely accessible at our GitHub address.
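The joint-task idea described above (a primary binary COVID-19 head plus an auxiliary multi-label head for the five lesion descriptions, sharing one image encoder) can be sketched in PyTorch as follows. The ResNet-18 backbone, the 0.5 weight on the auxiliary loss, and the dummy tensors are illustrative assumptions, not the authors' LA-DNN architecture or training setup.

```python
# Shared encoder with a binary COVID-19 head and an auxiliary 5-way
# multi-label lesion head, trained with a joint BCE loss.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class JointTaskNet(nn.Module):
    def __init__(self, num_lesions: int = 5):
        super().__init__()
        backbone = resnet18(weights=None)          # placeholder encoder (assumption)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # expose pooled features
        self.encoder = backbone
        self.covid_head = nn.Linear(feat_dim, 1)             # primary binary task
        self.lesion_head = nn.Linear(feat_dim, num_lesions)  # auxiliary multi-label task

    def forward(self, x):
        feats = self.encoder(x)
        return self.covid_head(feats), self.lesion_head(feats)

model = JointTaskNet()
bce = nn.BCEWithLogitsLoss()

images = torch.randn(4, 3, 224, 224)                  # dummy CT slices
covid_labels = torch.randint(0, 2, (4, 1)).float()    # positive/negative
lesion_labels = torch.randint(0, 2, (4, 5)).float()   # five lesion indicators

covid_logit, lesion_logits = model(images)
loss = bce(covid_logit, covid_labels) + 0.5 * bce(lesion_logits, lesion_labels)
loss.backward()
print("joint loss:", loss.item())
```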

