BCDnet: Parallel heterogeneous eight-class classification model of breast pathology

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253764
Author(s):  
Qingfang He ◽  
Guang Cheng ◽  
Huimin Ju

Breast cancer has the highest incidence among malignant tumors in women and seriously endangers women's health. Automatically classifying pathological tissue images with computer vision technology has important application value in assisting doctors with rapid and accurate diagnosis. Breast pathological tissue images have complex and diverse characteristics, and the available medical data sets of such images are small, which makes automatic classification of breast pathological tissues difficult. In recent years, most research has focused on simple binary classification of benign versus malignant, which cannot meet the actual clinical need for classification of pathological tissues. Therefore, based on deep convolutional neural networks, model ensembling, transfer learning, and feature fusion, this paper designs an eight-class breast pathology diagnosis model, BCDnet. Given a patient's breast pathological tissue image, the model automatically determines the disease type (Adenosis, Fibroadenoma, Tubular Adenoma, Phyllodes Tumor, Ductal Carcinoma, Lobular Carcinoma, Mucinous Carcinoma or Papillary Carcinoma). The model uses the VGG16 and ResNet50 convolutional bases in parallel; the two bases extract breast tissue image features from different fields of view. After the outputs of the fully connected layers of the two branches are fused, the result is classified by a SoftMax function. The experiments use the publicly available BreaKHis data set, in which the number of samples per class is extremely unevenly distributed; compared with the binary task, each class in the eight-class task also contains fewer samples. Therefore, image segmentation is used to expand the data set and non-repeated random cropping is used to balance it. On both the balanced and the unbalanced data sets, the BCDnet model, a pre-trained ResNet50 with fine-tuning, and a pre-trained VGG16 with fine-tuning are compared in multiple experiments. BCDnet performs outstandingly in these comparisons, and the correct recognition rate of the eight-class model exceeds 98%. The results show that the proposed model and the data set improvement methods are reasonable and effective.
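
A minimal sketch of the described parallel two-branch architecture in Keras: two pretrained convolutional bases feed separate dense layers whose outputs are concatenated before the eight-way SoftMax. Input size, dense-layer widths, the fusion operator, and the omitted per-branch preprocessing are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a parallel VGG16 + ResNet50 feature-fusion classifier (Keras).
# Layer sizes and training settings are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

inputs = layers.Input(shape=(224, 224, 3))

# Two pretrained convolutional bases act as parallel feature extractors.
vgg_base = applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
resnet_base = applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")

vgg_features = layers.Dense(256, activation="relu")(vgg_base(inputs))
resnet_features = layers.Dense(256, activation="relu")(resnet_base(inputs))

# Fuse the two branches and classify into the eight histological subtypes.
fused = layers.concatenate([vgg_features, resnet_features])
outputs = layers.Dense(8, activation="softmax")(fused)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```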

2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Considering that garbage classification is urgent, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. Firstly, depthwise separable convolution was used to reduce the Params of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. Besides, we compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and with lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the model GAF_dense has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 data set and found that the accuracy rates of GAF_dense are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, the accuracy rates of GAF_dense are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and it can be applied to tasks in areas such as environmental science, children's education, and environmental protection.
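
A minimal sketch of the two building blocks named above: a depthwise separable convolution (to cut parameters) followed by a squeeze-and-excitation style channel attention gate. The block layout, channel counts, and the six-way output are illustrative assumptions, not the paper's exact GAF_dense design.

```python
# Sketch of a depthwise-separable convolution block with channel attention (Keras).
import tensorflow as tf
from tensorflow.keras import layers

def ds_conv_attention_block(x, filters, reduction=4):
    # Depthwise separable convolution: far fewer parameters than a standard conv.
    x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Channel attention: squeeze (global pooling) then excite (two dense layers).
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(filters // reduction, activation="relu")(s)
    s = layers.Dense(filters, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, 1, filters))(s)])

inputs = layers.Input(shape=(224, 224, 3))
x = ds_conv_attention_block(inputs, 32)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(6, activation="softmax")(x)  # assumed number of garbage classes
model = tf.keras.Model(inputs, outputs)
model.summary()
```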


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose The novel coronavirus disease (COVID-19), which first appeared in December 2019 in the city of Wuhan in China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily life, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of the disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that these images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies combined with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new model for automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for both binary classification (COVID vs. no findings) and multi-class classification (COVID vs. no findings vs. pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for the binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, which serves as the classifier of the "you only look once" (YOLO) real-time object detection system, was used as the basis of this research. A total of 17 convolutional layers was implemented, with different filters on each layer. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud. Design/methodology/approach This study uses the CNN-based Darknet-19 model, which acts as the backbone of a real-time object detection system; its architecture is designed to detect objects in real time. This study developed the DarkCovidNet model based on the DarkNet architecture, with fewer layers and filters. Typically, the DarkNet-19 architecture consists of 19 convolutional layers and 5 max-pooling layers. Findings The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 chest X-ray images taken from various sources. The deep learning model named DarkCovidNet is applied to this data set and shows significant performance in both binary and multi-class classification. In binary classification, the model achieved an average accuracy of 98.97% for the detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One significant limitation of this work is that a limited number of chest X-ray images was used. It is observed that the number of COVID-19 patients is increasing rapidly. In the future, the model will be trained on a larger data set generated from local hospitals, and its performance on that data will be evaluated.
Originality/value Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A typical CNN structure has a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the size for computational efficiency and a fully connected layer, which is a neural network. A CNN model is created by combining one or more such layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
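
A minimal sketch of the DarkNet-style stack described above (convolution, batch normalization and LeakyReLU, each stage followed by max pooling, ending in a three-way SoftMax). Filter counts, input size and the number of stages are illustrative assumptions, not the exact DarkCovidNet layout.

```python
# Sketch of a DarkNet-style convolutional classifier for chest X-rays (Keras).
import tensorflow as tf
from tensorflow.keras import layers, models

def dark_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(0.1)(x)

inputs = layers.Input(shape=(256, 256, 1))           # grayscale chest X-ray (assumed size)
x = inputs
for filters in (8, 16, 32, 64, 128):                 # 5 conv stages, each followed by max pooling
    x = dark_block(x, filters)
    x = layers.MaxPooling2D(2)(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)   # COVID / no findings / pneumonia
model = models.Model(inputs, outputs)
model.summary()
```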


2021 ◽  
Vol 11 (6) ◽  
pp. 1592-1598
Author(s):  
Xufei Liu

The early detection of cardiovascular diseases based on the electrocardiogram (ECG) is very important for the timely treatment of cardiovascular patients and increases their survival rate. The ECG is a visual representation of changes in cardiac bioelectricity and is the basis for assessing heart health. With the rise of edge machine learning and Internet of Things (IoT) technologies, small machine learning models have received attention. This study proposes an automatic ECG classification method based on IoT technology and an LSTM network to achieve early monitoring and early prevention of cardiovascular diseases. Specifically, this paper first proposes a single-layer bidirectional LSTM network structure that makes full use of the temporal dependencies between preceding and following sampling points to automatically extract features; the network structure is lightweight and its computational complexity is low. To verify the effectiveness of the proposed classification model, it is compared with relevant algorithms on the public MIT-BIH data set. Secondly, the model is embedded in a wearable device to automatically classify the collected ECG signals. Finally, when an abnormality is detected, the user is alerted by an alarm. The experimental results show that the proposed model has a simple structure and a high classification and recognition rate, which can meet the needs of wearable devices for monitoring patients' ECG.
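
A minimal sketch of a single-layer bidirectional LSTM classifier of the kind described above. The segment length, hidden size and the five-class output follow common MIT-BIH setups and are assumptions, not the paper's exact configuration.

```python
# Sketch of a lightweight single-layer bidirectional LSTM beat classifier (Keras).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(250, 1)),            # one ECG segment of 250 samples (assumed)
    layers.Bidirectional(layers.LSTM(64)),   # single bidirectional recurrent layer
    layers.Dense(5, activation="softmax"),   # e.g. AAMI heartbeat classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```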


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1502
Author(s):  
Ben Wilkes ◽  
Igor Vatolkin ◽  
Heinrich Müller

We present a multi-modal genre recognition framework that considers the modalities audio, text, and image by means of features extracted from audio signals, album cover images, and lyrics of music tracks. In contrast to pure learning of features by a neural network, as done in related work, handcrafted features designed for the respective modality are also integrated, allowing for higher interpretability of the created models and further theoretical analysis of the impact of individual features on genre prediction. Genre recognition is performed by binary classification of a music track with respect to each genre, based on combinations of elementary features. For feature combination, a two-level technique is used, which combines aggregation into fixed-length feature vectors with confidence-based fusion of classification results. Extensive experiments have been conducted for three classifier models (Naïve Bayes, Support Vector Machine, and Random Forest) and numerous feature combinations. The results are presented visually, with data reduction for improved perceptibility achieved by multi-objective analysis and restriction to non-dominated data. Feature- and classifier-related hypotheses are formulated based on the data, and their statistical significance is formally analyzed. The statistical analysis shows that the combination of two modalities almost always leads to a significant increase in performance, and the combination of three modalities does so in several cases.
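
A minimal sketch of the two-level combination idea: per-modality classifiers are trained on fixed-length feature vectors, and their class confidences are fused for a binary per-genre decision. The feature dimensions, the averaging fusion rule and the synthetic data are stand-in assumptions.

```python
# Sketch of confidence-based late fusion over audio, text and image features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
features = {                                    # fixed-length vectors per modality (synthetic)
    "audio": rng.normal(size=(n, 40)),
    "text": rng.normal(size=(n, 30)),
    "image": rng.normal(size=(n, 20)),
}
y = rng.integers(0, 2, size=n)                  # binary: track belongs to the genre or not

# Level 1: one classifier per modality.
clfs = {m: RandomForestClassifier(random_state=0).fit(X, y) for m, X in features.items()}

# Level 2: confidence-based fusion of the per-modality class probabilities.
probs = np.mean([clfs[m].predict_proba(features[m])[:, 1] for m in features], axis=0)
fused_prediction = (probs >= 0.5).astype(int)
print("fused training accuracy:", (fused_prediction == y).mean())
```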


Author(s):  
Alexander M. Zolotarev ◽  
Brian J. Hansen ◽  
Ekaterina A. Ivanova ◽  
Katelynn M. Helfrich ◽  
Ning Li ◽  
...  

Background: Atrial fibrillation (AF) can be maintained by localized intramural reentrant drivers. However, AF driver detection by clinical surface-only multielectrode mapping (MEM) has relied on subjective interpretation of activation maps. We hypothesized that applying machine learning to electrogram frequency spectra may accurately automate driver detection by MEM and add objectivity to the interpretation of MEM findings. Methods: Temporally and spatially stable single AF drivers were mapped simultaneously in explanted human atria (n=11) by subsurface near-infrared optical mapping (NIOM; 0.3 mm² resolution) and 64-electrode MEM (higher density or lower density, with 3 and 9 mm² resolution, respectively). Unipolar MEM and NIOM recordings were processed by Fourier transform analysis into 28,407 total Fourier spectra. Thirty-five features for machine learning were extracted from each Fourier spectrum. Results: Targeted driver ablation and NIOM activation maps efficiently defined the center and periphery of AF driver preferential tracks and provided validated annotations for driver versus nondriver electrodes in MEM arrays. Compared with analysis of single-electrogram frequency features, averaging the features from each of the 8 neighboring electrodes significantly improved classification of AF driver electrograms. The classification metrics increased when less strict annotation, including driver periphery electrodes, was added to the driver center annotation. Notably, the F1-score for the binary classification of the higher-density catheter data set was significantly higher than that of the lower-density catheter (0.81±0.02 versus 0.66±0.04, P<0.05). The trained algorithm correctly highlighted 86% of driver regions with higher-density but only 80% with lower-density MEM arrays (81% for lower-density and higher-density arrays together). Conclusions: The machine learning model pretrained on Fourier spectrum features allows efficient classification of electrogram recordings as AF driver or nondriver compared with the NIOM gold standard. Future application of this NIOM-validated machine learning approach may improve the accuracy of AF driver detection for targeted ablation treatment in patients.
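
A minimal sketch of the neighbor-averaging step described above: spectral features are computed per electrode of an 8x8 array, then each electrode's feature vector is averaged with its (up to 8) neighbors before being fed to the driver/nondriver classifier. The particular features shown (dominant frequency, spectral mean and spread) and the synthetic signals are illustrative assumptions, not the study's 35-feature set.

```python
# Sketch of per-electrode Fourier features with 8-neighbor averaging on an 8x8 grid.
import numpy as np

def spectral_features(signal, fs=1000.0):
    """Toy per-electrode features from the Fourier magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin
    return np.array([dominant, spectrum.mean(), spectrum.std()])

def neighbor_average(feature_grid):
    """Average each electrode's features with its neighbors on the 8x8 grid."""
    rows, cols, d = feature_grid.shape
    out = np.zeros_like(feature_grid)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            out[r, c] = feature_grid[r0:r1, c0:c1].reshape(-1, d).mean(axis=0)
    return out

# Synthetic unipolar electrograms for a 64-electrode (8x8) array.
rng = np.random.default_rng(1)
signals = rng.normal(size=(8, 8, 4000))
grid = np.stack([[spectral_features(signals[r, c]) for c in range(8)] for r in range(8)])
smoothed = neighbor_average(grid)   # features fed to the driver/nondriver classifier
print(smoothed.shape)
```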


Author(s):  
Aydin Ayanzadeh ◽  
Sahand Vahidnia

In this paper, we leverage state-of-the-art models pretrained on ImageNet data sets. We use the pre-trained models and their learned weights to extract features from the Dog Breed Identification data set. Afterwards, we apply fine-tuning and data augmentation to increase the test accuracy in classifying dog breeds. The performance of the proposed approaches is compared across state-of-the-art ImageNet models, namely ResNet-50, DenseNet-121, DenseNet-169 and GoogleNet, with which we achieved 89.66%, 85.37%, 84.01% and 82.08% test accuracy, respectively, showing the superior performance of the proposed method over previous works on the Stanford Dogs data set.
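
A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained backbone with a new classification head, trained with data augmentation and then fine-tuned end to end. The 120-class output matches the Stanford Dogs data set; the backbone choice, augmentations and learning rates are assumptions.

```python
# Sketch of feature extraction + fine-tuning with data augmentation (Keras).
import tensorflow as tf
from tensorflow.keras import layers, models, applications

base = applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                                   # stage 1: train only the new head

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = layers.Lambda(applications.resnet50.preprocess_input)(x)
x = base(x, training=False)
outputs = layers.Dense(120, activation="softmax")(x)     # 120 dog breeds
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Stage 2 (fine-tuning): unfreeze the backbone and recompile with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```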


2019 ◽  
Vol 17 (1) ◽  
pp. 9-20
Author(s):  
I. O. AWOYELU ◽  
I. A. AGBOOLA

Learning disability is a general term that describes specific kinds of learning problems. Although learning disability cannot be cured medically, several methods exist for detecting learning disabilities in a child. Existing methods for classifying learning disabilities in children are binary: a child is either normal or learning disabled. The focus of this paper is to extend the binary classification to multi-label classification of learning disabilities. This paper formulated and simulated a classification model for learning disabilities in primary school pupils. Information on the symptoms of learning disabilities in pupils was elicited by administering five hundred (500) questionnaires to teachers of Primary One to Four pupils in fifteen government-owned elementary schools within Ife Central Local Government Area, Ile-Ife, Osun State. The classification model was formulated using Principal Component Analysis, a rule-based system and the backpropagation algorithm. The formulated model was simulated using the Waikato Environment for Knowledge Analysis (WEKA) version 3.7.2. The performance of the model was evaluated using precision and accuracy. The classification models for Primary One, Primary Two, Primary Three and Primary Four yielded precision rates of 95%, 91.18%, 93.10% and 93.60% respectively, while the accuracy results were 95.00%, 91.18%, 93.10% and 93.60% respectively. The results showed that the developed model is accurate and precise in classifying pupils with learning disabilities in primary schools. The model can be adopted for the management of pupils with learning disabilities.
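
A minimal sketch of part of the modeling pipeline described above: Principal Component Analysis for dimensionality reduction followed by a backpropagation-trained neural network producing multi-label predictions. The rule-based stage is not reproduced, and the questionnaire data, feature count and label count are synthetic stand-ins.

```python
# Sketch of PCA + backpropagation-trained MLP for multi-label classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                 # 500 questionnaires, 30 symptom indicators (assumed)
Y = rng.integers(0, 2, size=(500, 4))          # 4 learning-disability labels per pupil (assumed)

model = make_pipeline(
    PCA(n_components=10),                      # keep the main principal components
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
)
model.fit(X, Y)                                # MLPClassifier accepts multi-label targets
print(model.predict(X[:3]))
```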


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Zijin Wu

With the development of the country's economy, the field of culture and art is flourishing. However, the diversification of artistic expression has not brought development to folk music; on the contrary, it has had a huge impact, and some national music has even fallen into the dilemma of being lost. This article is mainly aimed at the recognition and classification of folk music emotions, seeking the model that makes the classification accuracy as high as possible. After determining the use of the Support Vector Machine (SVM) classification method, a variety of feature extraction approaches were attempted, with good results. The pretraining and reverse fine-tuning process of the Deep Belief Network (DBN) is explored, using the DBN to learn fused music features; folk music emotions are then recognized and classified according to the abstract features learned. The DBN is improved by adding dropout to each Restricted Boltzmann Machine (RBM) and adjusting the update criteria for the weights and biases. The improved network avoids the overfitting problem and speeds up training. Experiments show that classification with the fusion features proposed in this paper improves the classification accuracy.
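
A minimal sketch of the general DBN idea referenced above: layer-wise unsupervised RBM pretraining followed by a supervised head, here approximated with scikit-learn's BernoulliRBM stacked before logistic regression. The authors' dropout modification and adjusted weight/bias update rules are not reproduced; the feature matrix, layer sizes and four emotion categories are assumptions.

```python
# Sketch of RBM pretraining + supervised fine-tuning for emotion classification.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random(size=(300, 64))                 # fused music features scaled to [0, 1] (synthetic)
y = rng.integers(0, 4, size=300)               # e.g. four emotion categories (assumed)

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)                                # RBMs are pretrained layer-wise, then the head is fit
print("training accuracy:", model.score(X, y))
```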


2021 ◽  
Vol 12 ◽  
Author(s):  
Shengqi Yang ◽  
Ran Li ◽  
Jiliang Chen ◽  
Zhen Li ◽  
Zhangqin Huang ◽  
...  

Ca2+ sparks are the elementary Ca2+ release events in cardiomyocytes, and altered spark properties lead to impaired Ca2+ handling and ultimately contribute to cardiac pathology in various diseases. Despite the increasing use of machine-learning algorithms in deciphering biological and medical data, Ca2+ spark images and data are yet to be deeply learnt and analyzed. In the present study, we developed a deep residual convolutional neural network method to detect Ca2+ sparks. Compared to traditional detection methods with arbitrarily defined thresholds to distinguish signals from noise, our new method detected more Ca2+ sparks with lower amplitudes but similar spatiotemporal distributions, indicating that our algorithm detected many very weak events that are usually omitted by traditional detection methods. Furthermore, we proposed an event-based logistic regression and binary classification model that classifies single cardiomyocytes using Ca2+ spark characteristics, which to date have generally been used only for simple statistical analyses and comparisons between normal and diseased groups. Using this new detection algorithm and classification model, we succeeded in distinguishing wild type (WT) vs RyR2-R2474S± cardiomyocytes with 100% accuracy, and vehicle vs isoprenaline-insulted WT cardiomyocytes with 95.6% accuracy. The model can be extended to judge whether a small number of cardiomyocytes (and thus the whole heart) are affected by a specific cardiac disease. Thus, this study provides a novel and powerful approach for the research and application of calcium signaling in cardiac diseases.
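
A minimal sketch of the event-based classification idea: a logistic-regression model scores individual Ca2+ sparks from their characteristics, and a cell is then classified from the aggregated event scores. The particular features (amplitude, duration, width), the mean-probability aggregation rule and the synthetic data are illustrative assumptions.

```python
# Sketch of event-based logistic regression for classifying cardiomyocytes from sparks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Per-spark feature rows: [amplitude, duration_ms, full_width_um]; label 1 = diseased cell.
X_events = rng.normal(loc=[1.0, 30.0, 2.0], scale=[0.2, 8.0, 0.5], size=(1000, 3))
y_events = rng.integers(0, 2, size=1000)

event_model = LogisticRegression(max_iter=1000).fit(X_events, y_events)

def classify_cell(spark_features, threshold=0.5):
    """Classify one cardiomyocyte from the mean predicted probability of its sparks."""
    p = event_model.predict_proba(spark_features)[:, 1].mean()
    return int(p >= threshold), p

label, prob = classify_cell(X_events[:25])     # sparks recorded from one cell
print(label, round(prob, 3))
```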


2020 ◽  
Vol 83 (6) ◽  
pp. 602-614
Author(s):  
Hidir Selcuk Nogay ◽  
Hojjat Adeli

Introduction: The diagnosis of epilepsy follows a certain process and depends entirely on the attending physician. However, the human factor may cause erroneous diagnosis in the analysis of the EEG signal. In the past 2 decades, many advanced signal processing and machine learning methods have been developed for the detection of epileptic seizures. However, many of these methods require large data sets and complex operations. Methods: In this study, an end-to-end machine learning model is presented for the detection of epileptic seizures using a pretrained deep two-dimensional convolutional neural network (CNN) and the concept of transfer learning. The EEG signal is converted directly into visual data with a spectrogram and used directly as input data. Results: The authors analyzed the results of training the proposed pretrained AlexNet CNN model. Both binary and ternary classifications were performed without any extra procedure such as feature extraction. By creating the data set from short-term spectrogram images, the authors were able to achieve 100% accuracy for binary classification of epileptic seizure detection and 100% for ternary classification. Discussion/Conclusion: The proposed automatic identification and classification model can help in the early diagnosis of epilepsy, thus providing the opportunity for effective early treatment.
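
A minimal sketch of the pipeline described above: an EEG segment is converted to a spectrogram image and fed to an ImageNet-pretrained AlexNet whose final layer is replaced for binary (or ternary) classification. The sampling rate, segment length, spectrogram parameters and resizing are assumptions, not the study's exact preprocessing.

```python
# Sketch of EEG spectrogram + pretrained AlexNet transfer learning (PyTorch).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision.models import alexnet, AlexNet_Weights

# 1) EEG segment -> spectrogram "image" (repeated to 3 channels, resized to 224x224).
fs = 173.61                                    # e.g. Bonn EEG sampling rate (assumed)
eeg = np.random.randn(4097)                    # stand-in single-channel EEG segment
_, _, sxx = spectrogram(eeg, fs=fs, nperseg=256, noverlap=128)
img = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]
img = torch.nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
img = img.repeat(1, 3, 1, 1)                   # AlexNet expects 3-channel input

# 2) Pretrained AlexNet with a new 2-class head (transfer learning).
model = alexnet(weights=AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
model.eval()
logits = model(img)
print(logits.shape)                            # torch.Size([1, 2])
```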

