Fus2Net: A Novel Convolutional Neural Network for Classification of Benign and Malignant Breast Tumor in Ultrasound Images


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
He Ma ◽  
Ronghui Tian ◽  
Hong Li ◽  
Hang Sun ◽  
Guoxiu Lu ◽  
...  

Abstract Background The rapid development of artificial intelligence technology has improved the capability of automatic breast cancer diagnosis compared to traditional machine learning methods. A convolutional neural network (CNN) can automatically select high-efficiency features, which helps to improve the level of computer-aided diagnosis (CAD). It can improve the performance of distinguishing benign from malignant breast ultrasound (BUS) tumor images, making rapid breast tumor screening possible. Results The classification model was evaluated on a separate dataset of 100 BUS tumor images (50 benign cases and 50 malignant cases) that was not used in network training. Evaluation indicators include accuracy, sensitivity, specificity, and area under the curve (AUC). The Fus2Net model achieved an accuracy of 92%, a sensitivity of 95.65%, a specificity of 88.89%, and an AUC of 0.97 for classifying BUS tumor images. Conclusions The experiment compared Fus2Net with existing CNN classification architectures, and the architecture we customized shows better overall performance. The results demonstrate that the proposed Fus2Net classification method can better assist radiologists in the diagnosis of benign and malignant BUS tumor images. Methods Existing public datasets are small and suffer from class imbalance. In this paper, we provide a relatively large dataset with a total of 1052 ultrasound images, including 696 benign images and 356 malignant images, collected from a local hospital. We propose a novel CNN named Fus2Net for the benign and malignant classification of BUS tumor images; it contains two self-designed feature extraction modules. To evaluate how the classifier generalizes on the experimental dataset, we employed the training set (646 benign cases and 306 malignant cases) for tenfold cross-validation. Meanwhile, to address the class imbalance, the training data were augmented before being fed into Fus2Net. In the experiment, we used hyperparameter fine-tuning and regularization techniques to make Fus2Net converge.
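A minimal sketch of the evaluation protocol described above (stratified tenfold cross-validation with augmentation applied only to the training folds), assuming a Keras-style model interface; `build_fus2net` and `augment` are hypothetical placeholders for the authors' architecture and augmentation pipeline, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score

def cross_validate(images, labels, build_fus2net, augment, epochs=50):
    """Stratified tenfold cross-validation; augmentation touches only the
    training folds so each held-out fold stays unmodified."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    accs, aucs = [], []
    for train_idx, val_idx in skf.split(images, labels):
        x_tr, y_tr = augment(images[train_idx], labels[train_idx])
        x_va, y_va = images[val_idx], labels[val_idx]
        model = build_fus2net()                    # fresh weights for every fold
        model.fit(x_tr, y_tr, epochs=epochs, verbose=0)
        p_malignant = model.predict(x_va).ravel()  # predicted malignancy probability
        accs.append(accuracy_score(y_va, p_malignant > 0.5))
        aucs.append(roc_auc_score(y_va, p_malignant))
    return float(np.mean(accs)), float(np.mean(aucs))
```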


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Considering that garbage classification is an urgent problem, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual sorting. Firstly, depthwise separable convolution was used to reduce the parameters (Params) of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. Besides, we compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the proposed model, GAF_dense, has higher accuracy, fewer Params, and fewer FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 dataset and found that the accuracy of GAF_dense is 0.018 and 0.03 higher than that of ResNet18 and ShuffleNetV2, respectively. On the ImageNet dataset, the accuracy of GAF_dense is 0.225 and 0.146 higher than that of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and can be applied in areas such as environmental science, children's education, and environmental protection.
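The exact GAF_dense layout is not given in the abstract, but its two named building blocks can be sketched in PyTorch as follows; the channel sizes and the squeeze-and-excitation-style attention are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise 1x1 convolution,
    which cuts Params relative to a standard convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global pooling,
    a small bottleneck MLP, and a sigmoid gate over the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w   # reweight feature maps channel by channel
```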


2019 ◽  
Vol 14 (1) ◽  
pp. 124-134 ◽  
Author(s):  
Shuai Zhang ◽  
Yong Chen ◽  
Xiaoling Huang ◽  
Yishuai Cai

Online feedback is an effective way of communication between government departments and citizens. However, the high daily volume of public feedback has increased the burden on government administrators. Deep learning methods are good at automatically analyzing data and extracting deep features, thereby improving the accuracy of classification prediction. In this study, we aim to use a text classification model to automatically classify public feedback and reduce the workload of administrators. In particular, a convolutional neural network model combined with word embedding and optimized by a differential evolution algorithm is adopted. At the same time, we compared it with seven common text classification models, and the results show that the model we explored has good classification performance under different evaluation metrics, including accuracy, precision, recall, and F1-score.
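A minimal word-embedding + convolution text classifier of the kind described above, sketched in PyTorch; the filter widths, filter counts, and number of feedback categories are placeholder values that the paper's differential evolution step would tune, not the authors' settings.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Word embedding followed by parallel convolutions of several widths,
    max-over-time pooling, and a linear classifier."""
    def __init__(self, vocab_size, embed_dim=128, num_classes=8,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # class logits
```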


2021 ◽  
Vol 16 ◽  
Author(s):  
Di Gai ◽  
Xuanjing Shen ◽  
Haipeng Chen

Background: Effective classification of melting curves helps measure the specificity of amplified products and excludes the influence of invalid data on subsequent experiments. Objective: In this paper, a convolutional neural network (CNN) classification model based on dynamic filters is proposed, which can categorize the number of peaks in melting curve images and distinguish contaminated data represented by noise peaks. Method: The main advantage of the proposed model is that it adopts filters that change with the input; the dynamic filter captures more information in the image, making network learning more accurate. In addition, a residual module is used to extract the characteristics of the melting curve, and the pooling operation is replaced with an atrous convolution to prevent the loss of context information. Result: To train the proposed model, a novel melting curve dataset is created, which includes a balanced subset and an unbalanced subset. The proposed method uses six classification-based assessment criteria for comparison with seven representative deep learning methods. Experimental results show that the proposed method not only markedly outperforms the other state-of-the-art methods in accuracy but also requires much less running time. Conclusion: This clearly shows that the proposed method is suitable for judging the specificity of amplification products from the melting curve, while overcoming the low efficiency and bias of manual selection.
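The dynamic filter mechanism itself is not specified in the abstract, but the other design choice it names (a residual module whose pooling is replaced by an atrous convolution) can be illustrated with a small PyTorch block; the channel count and dilation rate below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class AtrousResidualBlock(nn.Module):
    """Residual block in which the usual pooling/strided downsampling is
    replaced by a dilated (atrous) convolution, so the receptive field grows
    without discarding spatial context."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation,
                               dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)   # identity shortcut
```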


2021 ◽  
pp. 1-10
Author(s):  
Gayatri Pattnaik ◽  
Vimal K. Shrivastava ◽  
K. Parvathi

Pests are a major threat to a country's economic growth. Applying pesticides is the easiest way to control pest infestation. However, excessive use of pesticides is hazardous to the environment. Recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models with three configurations: transfer learning, fine-tuning, and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas random weights are used in scratch learning. In addition, data augmentation has been explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy of 94.87% was achieved by the DenseNet201 model in the transfer learning configuration with data augmentation.
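A minimal sketch of the best-performing configuration reported above (DenseNet201 with frozen ImageNet weights, a new classification head, and simple augmentation), assuming TensorFlow/Keras 2.x; the head layout, dropout rate, and augmentation choices are illustrative, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_densenet201(num_classes=10, img_size=(224, 224)):
    """Transfer learning: the ImageNet backbone is kept frozen and only the new
    head is trained; unfreezing `base` afterwards would correspond to the
    fine-tuning configuration. Inputs are assumed to be preprocessed with
    tf.keras.applications.densenet.preprocess_input upstream."""
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet",
        input_shape=img_size + (3,), pooling="avg")
    base.trainable = False                      # frozen backbone
    model = models.Sequential([
        layers.Input(shape=img_size + (3,)),
        layers.RandomFlip("horizontal"),        # simple data augmentation
        layers.RandomRotation(0.1),
        base,
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```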


Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. As a result, mandatory scoliosis screening procedures were formerly implemented. Screening programs are no longer widely used, as the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to those levels of radiation. Spinal curvatures can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, which is a powerful tool for computer vision and image classification [2]. Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network. Subsequently, we took another recording of 747 images to be used for testing. All the ultrasound images from the scans were then segmented manually using the 3D Slicer (www.slicer.org) software. Next, the dataset was fed through a convolutional neural network. The network used was a modified version of GoogLeNet (Inception v1), with 2 linearly stacked inception modules. This network was chosen because it provided a balance between accurate performance and time-efficient computation. Results: Deep learning classification using the Inception model achieved an accuracy of 84% for the phantom scan. Conclusion: The classification model performs with considerable accuracy. Better accuracy needs to be achieved, possibly with more available data and improvements in the classification model. Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery. This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program. Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right). Figure 2: Accuracy of classification for training (red) and validation (blue). References: [1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014. [2] Krizhevsky A, Sutskever I, Hinton GE. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097-1105.
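A rough PyTorch stand-in for the network described above (a reduced GoogLeNet-style stem with two linearly stacked inception modules and a binary present/absent output); the branch widths and stem are illustrative assumptions, not the authors' exact modification.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """GoogLeNet-style inception module: parallel 1x1, 3x3, 5x5 and pooled
    branches whose outputs are concatenated along the channel axis."""
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1))

    def forward(self, x):
        return torch.relu(torch.cat(
            [self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))

# Two stacked modules over a single-channel ultrasound input, ending in a
# binary classifier (transverse process present / absent).
binary_classifier = nn.Sequential(
    nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
    InceptionBlock(32, 16, 24, 8, 8),    # output: 16+24+8+8 = 56 channels
    InceptionBlock(56, 32, 48, 16, 16),  # output: 112 channels
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(112, 2),
)
```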


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Mengwan Wei ◽  
Yongzhao Du ◽  
Xiuming Wu ◽  
Qichen Su ◽  
Jianqing Zhu ◽  
...  

The classification of benign and malignant tumors from ultrasound images is of great value because breast cancer is an enormous threat to women’s health worldwide. Although both texture and morphological features are crucial representations of ultrasound breast tumor images, their straightforward combination does little to improve benign and malignant classification, since the high-dimensional texture features are so dominant that they drown out the effect of the low-dimensional morphological features. For that reason, an efficient texture and morphological feature combining method is proposed to improve the classification of benign and malignant tumors. Firstly, both texture (i.e., local binary patterns (LBP), histogram of oriented gradients (HOG), and gray-level co-occurrence matrices (GLCM)) and morphological (i.e., shape complexity) features of breast ultrasound images are extracted. Secondly, a support vector machine (SVM) classifier working on texture features is trained, and a naive Bayes (NB) classifier acting on morphological features is designed, in order to exert the discriminative power of texture features and morphological features, respectively. Thirdly, the classification scores of the two classifiers (i.e., SVM and NB) are fused with a weighting scheme to obtain the final classification result. The low-dimensional, nonparametric NB classifier, combined with the high-dimensional, parametric SVM classifier, effectively controls the parameter complexity of the entire classification system. Consequently, texture and morphological features are efficiently combined. Comprehensive experimental analyses are presented, and the proposed method obtains a 91.11% accuracy, a 94.34% sensitivity, and an 86.49% specificity, which outperforms many related benign and malignant breast tumor classification methods.
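A minimal sketch of the weighted score fusion described above, using scikit-learn; the RBF kernel, Gaussian naive Bayes variant, and fusion weight are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def fused_prediction(texture_train, morph_train, y_train,
                     texture_test, morph_test, weight=0.7):
    """Train an SVM on high-dimensional texture features and a Gaussian naive
    Bayes model on low-dimensional morphological features, then fuse their
    class probabilities with a convex weight (a hypothetical value here)."""
    svm = SVC(kernel="rbf", probability=True).fit(texture_train, y_train)
    nb = GaussianNB().fit(morph_train, y_train)
    p_svm = svm.predict_proba(texture_test)   # texture evidence
    p_nb = nb.predict_proba(morph_test)       # morphological evidence
    fused = weight * p_svm + (1.0 - weight) * p_nb
    return np.argmax(fused, axis=1)           # assuming labels 0 = benign, 1 = malignant
```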


2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model that combines spectral indices-based band selection (SIs) and a one-dimensional convolutional neural network was proposed to realize automatic oil film classification using hyperspectral remote sensing images. Additionally, for comparison, the minimum Redundancy Maximum Relevance (mRMR) method was tested for reducing the number of bands. A support vector machine (SVM), random forest (RF), and Hu’s convolutional neural network (CNN) were trained and tested. The results show that the classification accuracy of the one-dimensional convolutional neural network (1D CNN) models surpassed that of the other machine learning algorithms, such as SVM and RF. The SIs+1D CNN model produced a more accurate oil film distribution map in less time than the other models.
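A minimal 1D CNN of the kind used for per-pixel spectral classification after band selection, sketched in PyTorch; the number of retained bands, layer widths, and number of oil film classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Classify each pixel from its (band-selected) spectrum with 1D convolutions."""
    def __init__(self, n_bands=20, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spectra):               # (batch, n_bands)
        x = spectra.unsqueeze(1)              # -> (batch, 1, n_bands)
        return self.classifier(self.features(x).squeeze(-1))
```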


2021 ◽  
Vol 11 (1) ◽  
pp. 15-24
Author(s):  
Dequan Guo ◽  
Gexiang Zhang ◽  
Hui Peng ◽  
Jianying Yuan ◽  
Prithwineel Paul ◽  
...  

In recent years, cardiovascular and cerebrovascular diseases have attracted much attention because they are leading causes of death in humans. To reduce mortality, many efforts have focused on early diagnosis and prevention. The intima-media membrane of the carotid artery, observed in medical ultrasound images, is an important reference index for cardiovascular disease. This paper proposes a method that finds the region of interest (ROI) with a convolutional neural network and segments and measures the intima-media membrane mainly using a support vector machine (SVM). Essentially, detecting the membrane is a target detection problem. This paper adopts You Only Look Once (YOLO), a detection algorithm based on an end-to-end trained convolutional neural network. Firstly, sufficient samples are extracted according to certain characteristics of the specific region and used to train the SVM classification model. Then the ROI is processed and all the pixels are classified into boundary points and non-boundary points through the classification model. Thirdly, the boundary points are selected to obtain the accurate boundary and calculate the intima-media thickness (IMT). In experiments, two hundred ultrasound images were tested, and the results verify that our algorithm is consistent with the ground truth (GT). The algorithm runs in real time and generalizes well; it computes the intima-media thickness in ultrasound images accurately and quickly, with 95% consistency with the ground truth.
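The detection and pixel classification stages cannot be reproduced from the abstract alone, but the final IMT computation step can be sketched as follows; the per-column boundary rows and pixel spacing are hypothetical inputs assumed to come from the upstream YOLO ROI detection and SVM boundary classification.

```python
import numpy as np

def intima_media_thickness(lumen_intima_rows, media_adventitia_rows, mm_per_pixel):
    """Given, for each image column inside the ROI, the row index of the
    lumen-intima boundary and of the media-adventitia boundary (e.g. taken
    from the SVM-labelled boundary points), return the mean IMT in mm.
    Per-column boundary extraction is assumed to be done upstream."""
    li = np.asarray(lumen_intima_rows, dtype=float)
    ma = np.asarray(media_adventitia_rows, dtype=float)
    thickness_px = ma - li                         # vertical gap per column, in pixels
    thickness_px = thickness_px[thickness_px > 0]  # drop columns with crossed boundaries
    return float(thickness_px.mean() * mm_per_pixel)

# Example: boundaries roughly 8 px apart at 0.07 mm/px -> IMT of about 0.56 mm
print(intima_media_thickness([100, 101, 100], [108, 109, 108], 0.07))
```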

