Patch Based Texture Classification of Thyroid Ultrasound Images using Convolutional Neural Network

Author(s):  
Prabal Poudel ◽  
Alfredo Illanes ◽  
Maryam Sadeghi ◽  
Michael Friebe
2020 ◽  
Vol 10 (8) ◽  
pp. 1943-1948
Author(s):  
Ran Hui ◽  
Jiaxing Chen ◽  
Yu Liu ◽  
Lin Shi ◽  
Chao Fu ◽  
...  

Objective: To explore the application of deep convolutional neural networks to the analysis of thyroid ultrasound images and the extraction of their feature values, in order to help predict the patient's condition. Methods: The thyroid color ultrasound image dataset of our hospital was selected as the training and test samples, and a comparison experiment was designed within a deep convolutional neural network learning framework to test the feasibility of the method. Results: Image classification based on the deep neural network algorithm predicts thyroid nodule lesions well and achieves good accuracy in the test classifying benign and malignant nodules. Conclusion: The clinical application of deep learning to the extraction and systematic analysis of thyroid ultrasound image features can improve the accuracy of clinical benign/malignant thyroid classification.
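The abstract above describes only a generic deep-CNN classifier for benign versus malignant thyroid nodules, so the following is a minimal illustrative sketch rather than the authors' network: the patch size, layer depth, optimizer, and metrics are all assumptions.

```python
# Hypothetical sketch of a binary benign/malignant CNN for thyroid ultrasound patches.
# Input size, depth, and training settings are assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_thyroid_cnn(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # benign (0) vs. malignant (1)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Usage (with hypothetical arrays of grayscale patches and 0/1 labels):
# model = build_thyroid_cnn()
# model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels), epochs=30)
```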


2020 ◽  
Vol 33 (5) ◽  
pp. 1266-1279
Author(s):  
Ruoyun Liu ◽  
Shichong Zhou ◽  
Yi Guo ◽  
Yuanyuan Wang ◽  
Cai Chang

2021 ◽  
Author(s):  
He Ma ◽  
Ronghui Tian ◽  
Hong Li ◽  
Hang Sun ◽  
Guoxiu Lu ◽  
...  

Abstract Background: The rapid development of artificial intelligence has improved automatic breast cancer diagnosis compared with traditional machine learning methods. A Convolutional Neural Network (CNN) can automatically select highly efficient features, which helps to raise the level of computer-aided diagnosis (CAD), improves the performance of distinguishing benign from malignant breast ultrasound (BUS) tumor images, and makes rapid breast tumor screening possible. Results: The classification model was evaluated on BUS tumor images not used for training. Evaluation indicators include accuracy, sensitivity, specificity, and Area Under the Curve (AUC). The Fus2Net model achieved an accuracy of 92%, a sensitivity of 95.65%, a specificity of 88.89%, and an AUC of 0.97 for classifying BUS tumor images. Conclusions: The experiment compared existing CNN classification architectures, and our custom Fus2Net architecture showed better overall performance. The results demonstrate that the proposed Fus2Net classification method can better assist radiologists in the diagnosis of benign and malignant BUS tumor images. Methods: Existing public datasets are small and suffer from class imbalance. In this paper, we provide a relatively larger dataset with a total of 1052 ultrasound images, including 696 benign and 356 malignant images, collected from a local hospital. We propose a novel CNN named Fus2Net for the benign and malignant classification of BUS tumor images; it contains two self-designed feature extraction modules. To evaluate how the classifier generalizes on the experimental dataset, 10-fold cross-validation was employed. Meanwhile, to address the imbalance of the dataset, the training data were augmented before being fed into Fus2Net. In the experiment, we used hyperparameter fine-tuning and regularization techniques to make Fus2Net converge.
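The evaluation protocol described above (10-fold cross-validation with augmentation of the training folds, reporting accuracy, sensitivity, specificity, and AUC) can be sketched as below. Fus2Net itself is not reproduced here: build_fus2net() is a hypothetical placeholder for the authors' architecture, and the augmentation settings are assumptions.

```python
# Sketch of a 10-fold cross-validation loop with training-set augmentation.
# Assumes a binary classifier with a single sigmoid output; build_fus2net is a
# placeholder passed in by the caller, not the paper's actual network.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def cross_validate(images, labels, build_fus2net, n_splits=10):
    augmenter = ImageDataGenerator(rotation_range=15, horizontal_flip=True,
                                   width_shift_range=0.1, height_shift_range=0.1)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(images, labels)):
        model = build_fus2net()
        # Only the training fold is augmented; the test fold stays untouched.
        model.fit(augmenter.flow(images[train_idx], labels[train_idx], batch_size=32),
                  epochs=50, verbose=0)
        probs = model.predict(images[test_idx]).ravel()
        preds = (probs >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(labels[test_idx], preds).ravel()
        print(f"fold {fold}: acc={(tp + tn) / len(test_idx):.3f} "
              f"sens={tp / (tp + fn):.3f} spec={tn / (tn + fp):.3f} "
              f"auc={roc_auc_score(labels[test_idx], probs):.3f}")
```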


2021 ◽  
Vol 11 (2) ◽  
pp. 424-431
Author(s):  
Yingxin Wang ◽  
Qianqian Zeng

Texture analysis has always been an active area of ultrasound image processing research, and using texture features to classify ultrasound images is a focus of researchers' attention. Extracting representative texture features is an essential part of successful texture description. The goal of this paper is to apply a deep neural network to the ultrasound classification of ovarian tumors and to design a novel ovarian cancer diagnosis system. An improved HOG feature extraction method and the gray-level co-occurrence matrix of the LBP image are first adopted to extract low-level features; these features are then cascaded into a new feature vector and fed into an auto-encoder neural network to learn high-level features. Finally, an SVM classifier is used to classify ovarian lesions. Extensive qualitative and quantitative experiments show that the improved method outperforms the comparison algorithms on ovarian ultrasound lesions and can significantly improve classification performance while maintaining accuracy and recall.
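A minimal sketch of that pipeline is given below. Because the "improved HOG" variant is not specified in the abstract, plain HOG is used here, and the LBP/GLCM parameters, auto-encoder width, and SVM kernel are assumptions.

```python
# Sketch: HOG features + GLCM statistics of the LBP image -> auto-encoder code -> SVM.
# Expects 2-D grayscale images; all hyperparameters are illustrative assumptions.
import numpy as np
from skimage.feature import hog, local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC
from tensorflow.keras import layers, models

def low_level_features(image):
    """Cascade HOG features with GLCM statistics computed on the LBP image."""
    hog_vec = hog(image, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    # Uniform LBP with P=8 yields integer codes in 0..9, hence levels=10 for the GLCM.
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform").astype(np.uint8)
    glcm = graycomatrix(lbp, distances=[1], angles=[0], levels=10,
                        symmetric=True, normed=True)
    glcm_vec = np.hstack([graycoprops(glcm, p).ravel()
                          for p in ("contrast", "homogeneity", "energy", "correlation")])
    return np.hstack([hog_vec, glcm_vec])

def train_pipeline(train_images, train_labels, code_dim=64):
    X = np.array([low_level_features(im) for im in train_images])
    # The auto-encoder learns a compact high-level code from the cascaded features.
    inp = layers.Input(shape=(X.shape[1],))
    code = layers.Dense(code_dim, activation="relu")(inp)
    out = layers.Dense(X.shape[1], activation="linear")(code)
    autoencoder = models.Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=50, batch_size=32, verbose=0)
    encoder = models.Model(inp, code)
    svm = SVC(kernel="rbf").fit(encoder.predict(X), train_labels)
    return encoder, svm
```

At test time an image would pass through low_level_features, the trained encoder, and then the SVM's predict method.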


2021 ◽  
Author(s):  
Santosh Kumar B P ◽  
Mohd Anul Haq ◽  
Sreenivasulu P ◽  
Siva D ◽  
Malik bader alazzam ◽  
...  

Abstract In echocardiography, an electrocardiogram is conventionally used to arrange the diverse cardiac views chronologically for taking critical measurements. Cardiac view classification plays a significant role in the identification and diagnosis of cardiac disease; detected early, cardiac disease can be treated or cured, and medical experts accomplish this. Computational techniques classify the views without any assistance from medical experts. The learning and training process faces issues in feature selection, training, and classification. Considering these drawbacks, an effective rank-based deep convolutional neural network (R-DCNN) is proposed for proficient feature selection and classification of diverse views in ultrasound (US) images. Significant features in the US image are retrieved using rank-based feature selection and used to classify the views. R-DCNN attains 96.7% classification accuracy, and its classification results are compared with those of existing techniques. From the observed classification performance, R-DCNN outperforms existing state-of-the-art classification techniques.
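The abstract does not detail how R-DCNN ranks features, so the sketch below only illustrates the general idea of rank-based selection on deep features: a pretrained backbone stands in for the paper's DCNN, features are ranked by ANOVA F-score, and the top-k are kept before classifying the cardiac view. The backbone, k, and the final classifier are assumptions.

```python
# Illustrative rank-based feature selection over deep features for view classification.
# MobileNetV2 is a stand-in backbone, not the paper's R-DCNN; k and the classifier
# are arbitrary choices for the sketch.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.applications import MobileNetV2

def rank_and_classify(train_images, train_labels, k=256):
    # train_images: float32 array of shape (N, 224, 224, 3), preprocessed for MobileNetV2.
    backbone = MobileNetV2(include_top=False, pooling="avg", weights="imagenet")
    deep_feats = backbone.predict(train_images)        # (N, 1280) pooled deep features
    selector = SelectKBest(score_func=f_classif, k=k)   # rank features by F-score, keep top-k
    selected = selector.fit_transform(deep_feats, train_labels)
    clf = LogisticRegression(max_iter=1000).fit(selected, train_labels)
    return backbone, selector, clf
```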

