Renal Lesion Classification in Kidney CT Images by Seven-Layer Convolution Neural Network

2021 ◽  
Vol 11 (5) ◽  
pp. 1422-1430
Author(s):  
Li-Ying Wang ◽  
Zhi-Qiang Xu ◽  
Yu-Dong Zhang

AI techniques are pervading the medical field and facilitating related educational applications, such as computer-aided medical diagnosis, online surgery platforms, and medical learning environments. The daily stream of medical images and records now constitutes big medical data that must be processed as quickly as possible. AI greatly improves the accuracy and efficiency of diagnosis by relying on radiological image analysis techniques. In this paper, a 7-layer deep Convolutional Neural Network (CNN) is designed to classify renal lesions in kidney Computed Tomography (CT) images. The CNN is trained on a medium-sized dataset of 614 kidney CT images collected from real clinical data. Experiments show that the overall accuracy of the binary classification reaches 90.36 ± 1.02% (mean ± standard deviation), about 25 percentage points better than the traditional Probabilistic Neural Network (PNN) method with predefined features. The optimized structure of this CNN shows that our method is promising for helping doctors make medical diagnoses.
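The reported 90.36 ± 1.02% is a mean and sample standard deviation over repeated runs. As a minimal sketch (the ten per-run accuracies below are hypothetical, not from the paper), the summary statistics can be computed as:

```python
import statistics

# Per-run overall accuracies (%) from repeated training/evaluation runs.
# These ten values are hypothetical, for illustration only.
runs = [89.5, 91.2, 90.0, 90.8, 89.9, 91.0, 90.3, 89.7, 90.6, 90.4]

def summarize_accuracy(accuracies):
    """Return mean and sample standard deviation of per-run accuracies."""
    return statistics.mean(accuracies), statistics.stdev(accuracies)

mean_acc, std_acc = summarize_accuracy(runs)
```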

2020 ◽  
Author(s):  
jianfeng sui ◽  
Liugang Gao ◽  
Haijiao Shang ◽  
Chunying Li ◽  
Zhengda Lu ◽  
...  

Abstract Objective: The aim of this study is to generate virtual noncontrast (VNC) computed tomography (CT) images from intravenous contrast-enhanced CT by using a Unet convolutional neural network (CNN). The differences among enhanced, VNC, and noncontrast CT in proton dose calculation were compared. Methods: A total of 30 sets of CT images from patients who received both enhanced and noncontrast CT were selected. Enhanced and noncontrast CT were registered. Twenty of these sets were chosen as the training set: enhanced CT images were used as the input, and the corresponding noncontrast CT images as the output, to train the Unet neural network. The remaining 10 sets were chosen as the test set. VNC images were generated by the trained Unet neural network. The same proton radiotherapy plan for esophageal cancer was designed based on the three image types. Proton dose distributions in enhanced, VNC, and noncontrast CT were calculated. The relative dose differences of enhanced CT with respect to VNC and noncontrast CT were analyzed. Results: The mean absolute error (MAE) of the CT values between enhanced and noncontrast CT was 32.3 ± 2.6 HU; between VNC and noncontrast CT it was 6.7 ± 1.3 HU. The mean values of the enhanced CT in the great vessels, heart, lung, liver, and spinal cord were significantly higher than those of noncontrast CT, with differences of 97, 83, 42, 40, and 10 HU, respectively. The mean values of the VNC CT showed no significant difference from noncontrast CT. The differences among enhanced, VNC, and noncontrast CT in terms of the average relative proton dose for the clinical target volume (CTV), heart, great vessels, and lung were also investigated. The average relative proton doses of the enhanced CT for these organs were significantly lower than those of noncontrast CT. The largest difference was observed in the great vessels, while the differences in other organs were relatively small.
The γ-passing rates of the enhanced and VNC CT were calculated using a 2% dose-difference and 2 mm distance-to-agreement criterion. Results showed that the mean γ-passing rate of VNC CT was significantly higher than that of enhanced CT (p < 0.05). Conclusions: Proton radiotherapy planning based on enhanced CT increases the range error, resulting in calculation errors of the proton dose. Therefore, a technology to generate VNC CT from enhanced CT based on the Unet neural network was proposed. The proton dose calculated based on VNC CT images was essentially consistent with that based on noncontrast CT.
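The MAE figures above compare HU values voxel by voxel between registered images. A minimal sketch (the HU samples are made up, not the study's data):

```python
def mae_hu(image_a, image_b):
    """Mean absolute error (HU) between two registered CT images,
    given as flat lists of voxel values."""
    assert len(image_a) == len(image_b)
    return sum(abs(a - b) for a, b in zip(image_a, image_b)) / len(image_a)

# Hypothetical HU samples at matching voxels of VNC and noncontrast CT.
vnc = [40, 55, -100, 30]
noncontrast = [35, 50, -95, 28]
mae = mae_hu(vnc, noncontrast)  # (5 + 5 + 5 + 2) / 4 = 4.25
```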


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Hongmei Yuan ◽  
Minglei Yang ◽  
Shan Qian ◽  
Wenxin Wang ◽  
Xiaotian Jia ◽  
...  

Abstract Background Image registration is an essential step in the automated interpretation of the brain computed tomography (CT) images of patients with acute cerebrovascular disease (ACVD). However, performing brain CT registration accurately and rapidly remains highly challenging due to large intersubject anatomical variations, low resolution of soft tissues, and heavy computation costs. To this end, HSCN-Net, a hybrid supervised convolutional neural network, was developed for precise and fast brain CT registration. Method HSCN-Net generated synthetic deformation fields with a simulator as supervision for each reference–moving image pair, addressing the lack of gold standards. Furthermore, the simulator was designed to generate multiscale affine and elastic deformation fields to overcome the registration challenge posed by large intersubject anatomical deformation. Finally, HSCN-Net adopted a hybrid loss function constituted by deformation-field and image-similarity terms to improve registration accuracy and generalization capability. In this work, 101 CT images of patients were collected for model construction (57), evaluation (14), and testing (30). HSCN-Net was compared with the classical Demons and VoxelMorph models. Model performance was assessed comprehensively by qualitative analysis, through visual evaluation of critical brain tissues, and by quantitative analysis, through the endpoint error (EPE) between predicted and gold-standard sparse deformation vectors, image normalized mutual information (NMI), and the Dice coefficient of the middle cerebral artery (MCA) blood supply area. Results HSCN-Net and Demons had better visual spatial matching performance than VoxelMorph, and HSCN-Net handled smooth and large intersubject deformations better than Demons.
The mean EPE of HSCN-Net (3.29 mm) was less than that of Demons (3.47 mm) and VoxelMorph (5.12 mm); the mean Dice of HSCN-Net was 0.96, which was higher than that of Demons (0.90) and VoxelMorph (0.87); and the mean NMI of HSCN-Net (0.83) was slightly lower than that of Demons (0.84), but higher than that of VoxelMorph (0.81). Moreover, the mean registration time of HSCN-Net (17.86 s) was shorter than that of VoxelMorph (18.53 s) and Demons (147.21 s). Conclusion The proposed HSCN-Net could achieve accurate and rapid intersubject brain CT registration.
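The Dice coefficient and endpoint error (EPE) reported above can be sketched as follows; the masks and deformation vectors are illustrative toy data, not from the study:

```python
import math

def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks given as flat 0/1 lists."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

def mean_epe(pred_vectors, gold_vectors):
    """Mean Euclidean endpoint error between predicted and gold-standard
    sparse deformation vectors (mm)."""
    errors = [math.dist(p, g) for p, g in zip(pred_vectors, gold_vectors)]
    return sum(errors) / len(errors)

# Toy data: two 3-D deformation vectors and a pair of 4-voxel masks.
epe = mean_epe([(0, 0, 0), (1, 1, 1)], [(3, 4, 0), (1, 1, 1)])  # (5 + 0) / 2
d = dice([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2 + 1)
```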


2021 ◽  
Vol 42 (1) ◽  
pp. e88825
Author(s):  
Hatice Catal Reis

The coronavirus disease 2019 (COVID-19) is fatal and spreading rapidly. Early detection and diagnosis of COVID-19 infection will prevent rapid spread. This study aims to automatically detect COVID-19 from a chest computed tomography (CT) dataset. Standard models for automatic COVID-19 detection using raw chest CT images are presented. This study uses the convolutional neural network (CNN), Zeiler and Fergus network (ZFNet), and dense convolutional network-121 (DenseNet121) architectures of deep convolutional neural network models. The proposed models are presented to provide accurate diagnosis for binary classification. The datasets were obtained from a public database. This retrospective study included 757 chest CT images (360 confirmed COVID-19 and 397 non-COVID-19 chest CT images). The algorithms were coded in the Python programming language. The performance metrics used were accuracy, precision, recall, F1-score, and ROC-AUC. Comparative analyses between the three models, considering hyper-parameter factors, are presented to find the best model. We obtained the best performance from the CNN model, with an accuracy of 94.7%, a recall of 90%, a precision of 100%, and an F1-score of 94.7%. As a result, the CNN algorithm is more accurate and precise than the ZFNet and DenseNet121 models. This study can offer medical staff a second opinion.
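The four reported metrics all derive from a binary confusion matrix. As an arithmetic illustration (the counts below are hypothetical, chosen only to reproduce a precision of 100% and a recall of 90%; they are not the paper's actual confusion matrix):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

# Hypothetical counts: 9 true positives, 0 false positives,
# 1 false negative, 9 true negatives.
accuracy, precision, recall, f1 = binary_metrics(tp=9, fp=0, fn=1, tn=9)
```

With these counts, precision is 1.0, recall is 0.9, and both accuracy and F1 come to 18/19 ≈ 0.947, showing how accuracy and F1 can coincide at 94.7%.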


2020 ◽  
Author(s):  
Bin Liu ◽  
Xiaoxue Gao ◽  
Mengshuang He ◽  
Fengmao Lv ◽  
Guosheng Yin

Chest computed tomography (CT) scanning is one of the most important technologies for COVID-19 diagnosis and disease monitoring, particularly for early detection of coronavirus. Recent advancements in computer vision motivate more concerted efforts in developing AI-driven diagnostic tools to accommodate the enormous global demand for COVID-19 diagnostic tests. To help alleviate burdens on medical systems, we develop a lesion-attention deep neural network (LA-DNN) to predict COVID-19 positive or negative with a richly annotated chest CT image dataset. Based on the textual radiological report accompanying each CT image, we extract two types of important information for the annotations: one is the indicator of a positive or negative case of COVID-19, and the other is the description of five lesions on the CT images associated with the positive cases. The proposed data-efficient LA-DNN model focuses on the primary task of binary classification for COVID-19 diagnosis, while an auxiliary multi-label learning task is implemented simultaneously to draw the model's attention to the five lesions associated with COVID-19. The joint task learning process makes it a highly sample-efficient deep neural network that can learn COVID-19 radiology features more effectively with limited but high-quality, rich-information samples. The experimental results show that the area under the curve (AUC), sensitivity (recall), precision, and accuracy for COVID-19 diagnosis are 94.0%, 88.8%, 87.9%, and 88.6% respectively, which reach the clinical standards for practical use. A free online system for fast diagnosis using CT images is currently available at https://www.covidct.cn/, and all code and datasets are freely accessible at our GitHub address.
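The joint task learning described above combines a primary binary loss with an auxiliary multi-label lesion loss. A minimal sketch of such a combined objective (the λ weighting, the averaging over five lesion labels, and the plain-Python BCE are assumptions for illustration, not the paper's exact formulation):

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and 0/1 label y."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def joint_loss(p_covid, y_covid, p_lesions, y_lesions, lam=1.0):
    """Primary binary loss plus a lam-weighted mean multi-label lesion loss."""
    primary = bce(p_covid, y_covid)
    auxiliary = sum(bce(p, y) for p, y in zip(p_lesions, y_lesions)) / len(p_lesions)
    return primary + lam * auxiliary

# Uninformative predictions (all 0.5) contribute log(2) per term.
loss = joint_loss(0.5, 1, [0.5] * 5, [1, 0, 1, 0, 1])
```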


2019 ◽  
Vol 60 (5) ◽  
pp. 586-594 ◽  
Author(s):  
Iori Sumida ◽  
Taiki Magome ◽  
Hideki Kitamori ◽  
Indra J Das ◽  
Hajime Yamaguchi ◽  
...  

Abstract This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced images using a deep convolutional neural network (CNN). Twenty-nine patients were selected. CT images were acquired without and with a contrast enhancement medium. The transverse images were divided into 64 × 64-pixel patches, resulting in 14 723 patches in total for the paired non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, the U-net model, which comprises five 2D convolution layers interleaved with pooling and unpooling layers, was used. Training was performed on 24 patients, and another 5 patients were used for testing the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced image of the test data, and the mean pixel value of the ROIs was calculated. The mean pixel values of the ROIs at the same location on the reference non-contrast image and the predicted non-contrast image were calculated and compared. Regarding the quantitative analysis, the difference in mean pixel value between the reference contrast-enhanced image and the predicted non-contrast image was significant (P < 0.0001) for both models. Significant differences in pixel values (P < 0.0001) were found using the U-net model; in contrast, there was no significant difference using the proposed CNN model when comparing the reference non-contrast images and the predicted non-contrast images. Using the proposed CNN model, the contrast-enhanced region was satisfactorily reduced.
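The quantitative evaluation above compares mean pixel values over matched ROIs. A minimal sketch, with a toy image and ROI:

```python
def roi_mean(image, roi):
    """Mean pixel value over an ROI given as (row, col) index pairs."""
    return sum(image[r][c] for r, c in roi) / len(roi)

# Toy 2x2 image and a two-pixel ROI along the top row.
image = [[100, 120],
         [80, 60]]
top_row_mean = roi_mean(image, [(0, 0), (0, 1)])  # (100 + 120) / 2
```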


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0247839
Author(s):  
Caio B. S. Maior ◽  
João M. M. Santana ◽  
Isis D. Lins ◽  
Márcio J. C. Moura

As SARS-CoV-2 has spread quickly throughout the world, the scientific community has spent major efforts on better understanding the characteristics of the virus and possible means to prevent, diagnose, and treat COVID-19. A valid approach presented in the literature is to develop an image-based method to support COVID-19 diagnosis using convolutional neural networks (CNN). Because the availability of radiological data is rather limited due to the novelty of COVID-19, several methodologies consider reduced datasets, which may be inadequate, biasing the model. Here, we performed an analysis combining six different databases of chest X-ray images from open datasets to distinguish images of infected patients while differentiating COVID-19 and pneumonia from 'no-findings' images. In addition, the performance of models created from fewer databases, which may imperceptibly overestimate their results, is discussed. Two CNN-based architectures were created to process images of different sizes (512 × 512, 768 × 768, 1024 × 1024, and 1536 × 1536). Our best model achieved a balanced accuracy (BA) of 87.7% in predicting one of the three classes ('no-findings', 'COVID-19', and 'pneumonia') and a specific balanced precision of 97.0% for the 'COVID-19' class. We also provided binary classification with a precision of 91.0% for detection of sick patients (i.e., with COVID-19 or pneumonia) and 98.4% for COVID-19 detection (i.e., differentiating from 'no-findings' or 'pneumonia'). Indeed, although we achieved an unrealistic 97.2% BA performance for one specific case, the proposed methodology of using multiple databases achieved better and less inflated results than models trained on specific image datasets. Thus, this framework is promising as a low-cost, fast, and noninvasive means to support the diagnosis of COVID-19.
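Balanced accuracy (BA), the headline metric above, is the mean of per-class recalls, which keeps class imbalance from inflating the score. A small illustration with made-up labels:

```python
def balanced_accuracy(y_true, y_pred, classes):
    """Mean of per-class recalls over the given class labels."""
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# Made-up three-class labels for illustration.
y_true = ["covid", "covid", "pneumonia", "no-findings", "no-findings", "pneumonia"]
y_pred = ["covid", "pneumonia", "pneumonia", "no-findings", "covid", "pneumonia"]
ba = balanced_accuracy(y_true, y_pred, ["covid", "pneumonia", "no-findings"])
```

Per-class recalls here are 0.5, 1.0, and 0.5, so the BA is their mean, 2/3.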


2018 ◽  
Vol 10 (1) ◽  
pp. 57-64 ◽  
Author(s):  
Rizqa Raaiqa Bintana ◽  
Chastine Fatichah ◽  
Diana Purwitasari

Community-based question answering (CQA) is designed to help people find the information they need through a community. When users cannot find the information they need in a CQA system, they post a new question. This can inflate the CQA archive with duplicated questions. It therefore becomes an important problem to find questions in the CQA archive that are semantically similar to a new question. In this study, we use convolutional neural network methods for semantic sentence modeling to obtain words that represent the content of the archived documents and the new question. When retrieving questions from the question–answer archive that are semantically similar to a new question (query), the convolutional neural network method obtains a mean average precision of 0.422, whereas the vector space model, used as a comparison, obtains a mean average precision of 0.282. Index Terms—community-based question answering, convolutional neural network, question retrieval
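Mean average precision (MAP), the retrieval metric reported above, averages per-query average precision over ranked relevance judgments. A minimal sketch with illustrative toy queries:

```python
def average_precision(ranked_relevance):
    """Average precision for one query: 0/1 relevance flags in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(queries):
    """MAP over a list of per-query relevance rankings."""
    return sum(average_precision(q) for q in queries) / len(queries)

# Two toy queries: AP = 5/6 and 1/2, so MAP = 2/3.
map_score = mean_average_precision([[1, 0, 1], [0, 1]])
```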


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing it. The article proposes a deep neural network for determining text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
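The 180-degree case described above amounts to reversing both image axes. A minimal sketch of the rotation itself (not of the classification network):

```python
def rotate_180(image):
    """Rotate a 2-D image (list of rows) by 180 degrees:
    reverse the row order, then reverse each row."""
    return [row[::-1] for row in image[::-1]]

flipped = rotate_180([[1, 2],
                      [3, 4]])
```

Applying the rotation twice returns the original image, which is a handy sanity check.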

