A Novel Region-Extreme Convolutional Neural Network for Melanoma Malignancy Recognition

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Nudrat Nida ◽  
Aun Irtaza ◽  
Muhammad Haroon Yousaf

Melanoma malignancy recognition is a challenging task due to intraclass similarity, natural and clinical artefacts, skin contrast variation, and the high visual similarity between normal and melanoma-affected skin. To overcome these problems, we propose a novel solution that leverages a "region-extreme convolutional neural network" to recognize melanoma malignancy as malignant or benign. Recent works on melanoma malignancy recognition employed traditional machine learning techniques based on various handcrafted features or recently introduced CNNs. However, these models can be trained efficiently only if they localize the melanoma-affected region and learn high-level feature representations from the melanoma lesion to predict malignancy. In this paper, we incorporate this observation and propose a novel region-extreme convolutional neural network for melanoma malignancy recognition. The proposed network refines dermoscopy images to eliminate natural and clinical artefacts, localizes the melanoma-affected region, and defines a precise boundary around the melanoma lesion. The delineated lesion is used to generate deep feature maps for model learning using an extreme learning machine (ELM) classifier. The proposed model is evaluated on two challenge datasets (ISIC-2016 and ISIC-2017) and performs better than the ISIC challenge winners, recognizing melanoma malignancy with 85% accuracy on ISIC-2016 and 93% on ISIC-2017. It also segments the melanoma lesion precisely, with an average Jaccard index of 0.93 and a Dice score of 0.94. The region-extreme convolutional neural network has several advantages: it eliminates clinical and natural artefacts from dermoscopic images, precisely localizes and segments the melanoma lesion, and improves melanoma malignancy recognition through feedforward model learning. It achieves significant performance improvement over existing methods, which makes it adaptable to complex medical image analysis problems.
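
As a reading aid, a minimal sketch of the kind of extreme learning machine classifier that sits on top of the deep feature maps is given below; the hidden-layer size, tanh activation, and pseudoinverse solve are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal ELM classifier sketch: a fixed random hidden layer plus a
# closed-form least-squares output layer (no backpropagation).
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=1024, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Hidden weights are random and never trained -- the defining ELM trait.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden activations
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form output weights

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

Here X would hold the flattened deep feature maps of the segmented lesions; because the output weights are solved in one linear step, training is fast relative to a fully backpropagated head.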

2021 ◽  
Vol 38 (4) ◽  
pp. 1229-1235
Author(s):  
Derya Avci ◽  
Eser Sert

Marble is one of the most popular decorative elements. Marble quality varies depending on its vein patterns and color, which are the two most important factors affecting marble quality and class. Manual classification of marble is prone to mistakes caused by optical illusions, whereas computer vision minimizes such mistakes thanks to artificial intelligence and machine learning. The present study proposes a Convolutional Neural Network (CNN) with Genetic Algorithm-optimized Wavelet Kernel Extreme Learning Machine (CNN-GA-WK-ELM) approach. Using CNN architectures such as AlexNet, VGG-19, SqueezeNet, and ResNet-50, the proposed approach obtained 4 different feature vectors from 10 different marble images. A Genetic Algorithm (GA) was then used to optimize the adjustable parameters of the Wavelet Kernel (WK) Extreme Learning Machine (ELM), i.e., k, l, and m, together with the number of hidden-layer neurons, in order to increase the performance of the ELM. Finally, the 4 feature vectors were classified using the optimized WK-ELM classifier. The proposed CNN-GA-WK-ELM yielded accuracy rates of 98.20%, 96.40%, 96.20%, and 95.60% using AlexNet, SqueezeNet, VGG-19, and ResNet-50, respectively.
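
For concreteness, the sketch below shows a wavelet-kernel ELM of the kind the GA tunes in this pipeline; the Morlet-style kernel, the single parameter a standing in for the paper's k, l, and m, and the ridge term C are assumptions made for illustration.

```python
# Wavelet-kernel ELM sketch: kernel matrix over CNN feature vectors, with the
# output weights solved in closed form (ridge-regularized).
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    # Pairwise Morlet-style kernel: prod_i cos(1.75 d_i / a) * exp(-d_i^2 / (2 a^2))
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

def kelm_fit(X, y, a=1.0, C=1.0):
    K = wavelet_kernel(X, X, a)
    T = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
    return np.linalg.solve(K + np.eye(len(X)) / C, T)  # output weights alpha

def kelm_predict(X_test, X_train, alpha, a=1.0):
    return (wavelet_kernel(X_test, X_train, a) @ alpha).argmax(axis=1)
```

A GA would then search over the kernel and ridge parameters (here a and C) and the hidden-layer size, using validation accuracy as the fitness function.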


2020 ◽  
Vol 13 (1) ◽  
pp. 119
Author(s):  
Song Ouyang ◽  
Yansheng Li

Although the deep semantic segmentation network (DSSN) has been widely used in remote sensing (RS) image semantic segmentation, it still does not fully exploit the spatial relationship cues between objects when extracting deep visual features through convolutional filters and pooling layers. In fact, the spatial distribution of objects from different classes has a strong correlation characteristic; for example, buildings tend to be close to roads. In view of the strong appearance extraction ability of the DSSN and the powerful topological relationship modeling capability of the graph convolutional neural network (GCN), a DSSN-GCN framework, which combines the advantages of both, is proposed in this paper for RS image semantic segmentation. To improve the appearance extraction ability, this paper proposes a new DSSN called the attention residual U-shaped network (AttResUNet), which leverages residual blocks to encode feature maps and an attention module to refine the features. For the GCN, a graph is built whose nodes are superpixels and whose edge weights are calculated from the spectral and spatial information of the nodes. The AttResUNet is trained to extract the high-level features that initialize the graph nodes. The GCN then combines node features and spatial relationships to conduct classification. It is worth noting that the use of spatial relationship knowledge boosts the performance and robustness of the classification module. In addition, because the GCN is modeled at the superpixel level, object boundaries are restored to a certain extent and there is less pixel-level noise in the final classification result. Extensive experiments on two publicly available datasets show that the DSSN-GCN model outperforms the competitive baseline (i.e., the DSSN model) and that DSSN-GCN with AttResUNet achieves the best performance, which demonstrates the advantage of our method.
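
The superpixel graph construction step might look like the sketch below, assuming SLIC superpixels and Gaussian affinities over mean colour (spectral) and centroid distance (spatial); the segment count and bandwidths are illustrative assumptions.

```python
# Superpixel graph sketch: nodes are SLIC segments; edge weights combine
# spectral and spatial affinities, as described in the abstract.
import numpy as np
from skimage.segmentation import slic

def build_superpixel_graph(image, n_segments=500, sigma_s=0.1, sigma_x=50.0):
    labels = slic(image, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    feats = np.array([image[labels == i].mean(axis=0) for i in range(n)])  # mean colour
    ys, xs = np.indices(labels.shape)
    cent = np.array([[ys[labels == i].mean(), xs[labels == i].mean()]
                     for i in range(n)])                                   # centroids
    ds = ((feats[:, None] - feats[None, :])**2).sum(-1)   # spectral distance
    dx = ((cent[:, None] - cent[None, :])**2).sum(-1)     # spatial distance
    A = np.exp(-ds / (2 * sigma_s**2)) * np.exp(-dx / (2 * sigma_x**2))
    return labels, A        # A is the weighted adjacency fed to the GCN
```

Node features would then be initialized by pooling AttResUNet's high-level feature maps within each superpixel before the GCN performs classification.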


Author(s):  
Viacheslav Vasylovych Moskalenko ◽  
Alona Serhiivna Moskalenko ◽  
Artem Hennadiiovych Korobov ◽  
Mykola Oleksandrovych Zaretskyi ◽  
Viktor Anatoliiovych Semashko

An efficient model and learning algorithm are developed for a small-object detection system on a compact aerial vehicle operating under restricted computing resources and a limited volume of labeled training data. A four-stage learning algorithm for the object detector is proposed. At the first stage, the type of deep convolutional neural network and the number of low-level layers pretrained on the ImageNet dataset are selected for reuse. The second stage involves unsupervised learning of high-level convolutional sparse-coding layers using a modification of the growing neural gas algorithm, which automatically determines the required number of neurons and provides an optimal distribution of the neurons over the data. This makes it possible to use unlabeled data to adapt the high-level feature description to the application domain. At the third stage, the output feature map is formed by concatenating feature maps from different levels of the deep convolutional neural network; the output feature map is then reduced using principal component analysis, after which the decision rules are built. For the classification analysis of the output feature map, an information-extreme classifier trained on boosting principles is proposed. In addition, an orthogonal incremental extreme learning machine is used to build the regression model that predicts the bounding box of the detected small object. The last stage involves fine-tuning the high-level layers of the deep network with a simulated annealing metaheuristic in order to approximate the global optimum of the complex criterion of learning efficiency of the detection model. Using the proposed approach, 96% correct detection of objects was achieved on the images of an open test dataset, which indicates the suitability of the model and learning algorithm for practical use. In this case, the training dataset used to construct the model comprised 500 unlabeled and 200 labeled samples.
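
The third stage (multi-level feature-map concatenation followed by PCA reduction) might look like the sketch below; the ResNet-18 backbone, the chosen layers, and the 128 retained components are illustrative assumptions, not the authors' configuration.

```python
# Multi-level feature concatenation + PCA sketch for stage three.
import torch
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor
from sklearn.decomposition import PCA

backbone = resnet18(weights="IMAGENET1K_V1")          # pretrained low-level layers
extractor = create_feature_extractor(backbone, return_nodes=["layer2", "layer3"])

def multi_level_features(images):
    # images: (N, 3, H, W) tensor
    with torch.no_grad():
        feats = extractor(images)
    # Global-average-pool each level, then concatenate along the channel axis.
    pooled = [f.mean(dim=(2, 3)) for f in feats.values()]
    return torch.cat(pooled, dim=1)

# X = multi_level_features(batch).numpy()
# X_reduced = PCA(n_components=128).fit_transform(X)  # feeds the decision rules
```

The reduced vectors would then feed both the information-extreme classifier and the ELM-based bounding-box regressor described above.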


Author(s):  
G. D. Praveenkumar ◽  
Dr. R. Nagaraj

In this paper, we introduce a new deep convolutional neural network-based extreme learning machine model for classification tasks in order to improve the network's performance. The proposed model has two stages: first, the input images are fed into convolutional neural network layers to extract deep-learned features, and then these features are classified using an ELM classifier. The proposed model achieves good recognition accuracy while reducing computational time on both the MNIST and CIFAR-10 benchmark datasets.
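
A minimal end-to-end sketch of this two-stage design follows, assuming MNIST-sized inputs, a small convolutional feature extractor, and a closed-form ELM head; all layer sizes are illustrative assumptions.

```python
# Two-stage CNN + ELM sketch: stage 1 extracts deep features, stage 2 solves
# the classifier weights in closed form without backpropagation.
import numpy as np
import torch
import torch.nn as nn

cnn = nn.Sequential(                                  # stage 1: feature extractor
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

def extract(images):                                  # images: (N, 1, 28, 28)
    with torch.no_grad():
        return cnn(images).numpy()

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=2048, n_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_tr.shape[1], n_hidden))     # fixed random weights
    H_tr, H_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)
    beta = np.linalg.pinv(H_tr) @ np.eye(n_classes)[y_tr]  # closed-form solve
    return (H_te @ beta).argmax(axis=1)                    # stage 2: predictions
```

Because the ELM head avoids iterative gradient descent, the classification stage accounts for little of the total training time, which is consistent with the reduced computational time reported here.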


2019 ◽  
Vol 9 (20) ◽  
pp. 4209 ◽  
Author(s):  
Yongmei Ren ◽  
Jie Yang ◽  
Qingnian Zhang ◽  
Zhiqiang Guo

The appearance of ships is easily affected by external factors such as illumination, weather conditions, and sea state, which make ship classification a challenging task. To achieve enhanced ship-classification performance, this study proposes a ship classification method based on multi-feature fusion with a convolutional neural network (CNN). First, an improved CNN characterized by shallow layers and few parameters is proposed to learn high-level features and capture structural information. Second, handcrafted histogram of oriented gradients (HOG) and local binary pattern (LBP) features are combined with the high-level features extracted by the improved CNN in the last fully connected layer to obtain a discriminative feature representation; the handcrafted features supplement the edge and spatial texture information of the ship images. Then, the softmax function is used to classify different types of ships in the output layer. The effectiveness of the proposed method is evaluated on two datasets, one self-built and the other publicly available (the visible and infrared spectrums, VAIS, dataset). The proposed method achieved average classification accuracies of 97.50% and 93.60%, respectively, on these datasets. Additionally, results in terms of the F1-score and confusion matrix show the proposed method to be superior to some state-of-the-art methods.
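
The handcrafted branch might be computed as in the sketch below, which concatenates HOG and uniform-LBP histogram features with the CNN's fully connected features; the descriptor parameters are illustrative assumptions.

```python
# Handcrafted feature branch sketch: HOG + LBP histogram, fused with CNN features.
import numpy as np
from skimage.feature import hog, local_binary_pattern

def handcrafted_features(gray):
    # gray: 2-D uint8 grayscale ship image
    h = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    # Uniform LBP with P=8 yields codes 0..9, hence a 10-bin histogram.
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

def fused_representation(gray, cnn_feat):
    # cnn_feat: 1-D vector from the improved CNN's last fully connected layer.
    return np.concatenate([handcrafted_features(gray), cnn_feat])
```

The fused vector then passes to the softmax output layer, letting the edge (HOG) and texture (LBP) cues compensate for what the shallow CNN alone may miss.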


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1544
Author(s):  
Yu Wang ◽  
Shuyang Ma ◽  
Xuanjing Shen

To reduce the computational cost of the training and testing phases of video face recognition methods based on global statistical methods and deep learning networks, a novel video face verification algorithm based on the three-patch local binary pattern (TPLBP) and a 3D Siamese convolutional neural network is proposed in this paper. The proposed method takes the TPLBP texture feature, which performs well in face analysis, as the input of the network. To capture inter-frame information from the video, the texture feature maps of multiple frames are stacked, and a shallow Siamese 3D convolutional neural network is then used to realize dimension reduction. The similarity of the high-level features of the video pair is computed by the shallow Siamese 3D convolutional neural network and then mapped to the interval [0, 1] by a linear transformation; the classification result is obtained by thresholding at 0.5. In experiments on the YouTube Faces database, the proposed algorithm achieved higher accuracy with lower computational cost than baseline methods and deep learning methods.
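
A sketch of the verification head under this design: a weight-shared shallow 3D convolutional tower embeds each stack of TPLBP maps, similarity is mapped linearly to [0, 1], and 0.5 is the decision threshold. The use of cosine similarity and the layer sizes are assumptions for illustration.

```python
# Siamese 3D-CNN verification sketch over stacked TPLBP texture maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

tower = nn.Sequential(                    # shared weights for both video clips
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
)

def verify(clip_a, clip_b):
    # clip_*: (N, 1, frames, H, W) stacks of per-frame TPLBP maps.
    za, zb = tower(clip_a), tower(clip_b)
    sim = F.cosine_similarity(za, zb)     # in [-1, 1]
    score = (sim + 1) / 2                 # linear map to [0, 1]
    return score > 0.5                    # True = same identity
```

Feeding compact texture maps rather than raw frames keeps the tower shallow, which is where the computational savings over end-to-end deep methods come from.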


2019 ◽  
Vol 11 (4) ◽  
pp. 419 ◽  
Author(s):  
Qiaoqiao Shi ◽  
Wei Li ◽  
Ran Tao ◽  
Xu Sun ◽  
Lianru Gao

As an important part of maritime traffic, ships play an important role in military and civilian applications. However, ship appearance is susceptible to factors such as lighting, occlusion, and sea state, making ship classification challenging, so it is important to exploit both global and detailed information for ship classification in optical remote sensing images. In this paper, a novel method to obtain a discriminative feature representation of a ship image is proposed. The proposed classification framework consists of a multifeature ensemble based on a convolutional neural network (ME-CNN). Specifically, the two-dimensional discrete fractional Fourier transform (2D-DFrFT) is employed to extract multi-order amplitude and phase information, which contains important information such as profiles, edges, and corners; the completed local binary pattern (CLBP) is used to obtain local information about ship images; and Gabor filters are used to gain global information about ship images. Deep convolutional neural networks (CNNs), which extract high-level features automatically and have performed well in object classification tasks, are then applied to extract more abstract features from the above information. After high-level feature learning, decision-level fusion is investigated as the fusion strategy for the final classification result. The average accuracy of the proposed approach is 98.75% on the BCCT200-resize data, 92.50% on the original BCCT200 data, and 87.33% on the challenging VAIS data, which validates the effectiveness of the proposed method compared to existing state-of-the-art algorithms.
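
The decision-level fusion step reduces to combining per-branch class posteriors; the sketch below assumes equal branch weights and simple probability averaging, which is one common decision-level strategy rather than necessarily the authors' exact rule.

```python
# Decision-level fusion sketch: each branch (2D-DFrFT amplitude/phase, CLBP,
# Gabor) yields class probabilities from its own CNN; the fused decision
# averages them and takes the argmax.
import numpy as np

def decision_level_fusion(branch_probs):
    # branch_probs: list of (N, n_classes) softmax outputs, one per branch.
    fused = np.mean(branch_probs, axis=0)
    return fused.argmax(axis=1)

# Example usage (hypothetical branch outputs):
# labels = decision_level_fusion([p_dfrft, p_clbp, p_gabor])
```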


Genes ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 1155
Author(s):  
Naeem Islam ◽  
Jaebyung Park

RNA modification is vital to various cellular and biological processes. Among the existing RNA modifications, N6-methyladenosine (m6A) is considered the most important owing to its involvement in many biological processes. The prediction of m6A sites is crucial because it can provide a better understanding of their functional mechanisms. In this regard, although experimental methods are useful, they are time-consuming. Researchers have therefore attempted to predict m6A sites using computational methods to overcome the limitations of experimental methods. Some of these approaches are based on classical machine-learning techniques that rely on handcrafted features and require domain knowledge, whereas other methods are based on deep learning; however, both kinds of methods lack robustness and yield low accuracy. Hence, we develop a branch-based convolutional neural network and a novel RNA sequence representation. The proposed network automatically extracts features from each branch of the designated inputs, and these features are then concatenated in the feature space to predict the m6A sites. Finally, we conduct experiments on four different species. The proposed approach outperforms existing state-of-the-art methods, achieving accuracies of 94.91%, 94.28%, 88.46%, and 94.8% on the H. sapiens, M. musculus, S. cerevisiae, and A. thaliana datasets, respectively.
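
A branch-based CNN of this kind can be sketched as below: each encoded view of the RNA sequence passes through its own 1-D convolutional branch, the branch features are concatenated in feature space, and a dense head predicts the m6A site. The number of branches, the 4-channel one-hot encoding, and the layer sizes are assumptions.

```python
# Branch-based CNN sketch with feature-space concatenation for m6A prediction.
import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    def __init__(self, n_branches=3, in_ch=4):
        super().__init__()
        # One independent conv branch per input representation.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(in_ch, 32, 5, padding=2), nn.ReLU(),
                          nn.AdaptiveMaxPool1d(1), nn.Flatten())
            for _ in range(n_branches)
        )
        self.head = nn.Sequential(nn.Linear(32 * n_branches, 64), nn.ReLU(),
                                  nn.Linear(64, 1))      # m6A / non-m6A logit

    def forward(self, views):
        # views: list of (N, in_ch, seq_len) tensors, one per representation.
        feats = [b(v) for b, v in zip(self.branches, views)]
        return self.head(torch.cat(feats, dim=1))        # concat in feature space
```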


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bambang Tutuko ◽  
Siti Nurmaini ◽  
Alexander Edo Tondas ◽  
Muhammad Naufal Rachmatullah ◽  
Annisa Darmawahyuni ◽  
...  

Abstract Background: The generalization capability of deep learning (DL) models for atrial fibrillation (AF) detection remains insufficient. As can be seen from previous research, DL models have typically been built using only a single sampling frequency from a specific device. Moreover, each electrocardiogram (ECG) acquisition dataset has a different record length and sampling frequency, which must provide sufficient precision of the R-R intervals used to determine heart rate variability (HRV). An accurate HRV is the gold standard for predicting the AF condition; therefore, a current challenge is to determine whether a DL approach can analyze raw ECG data from a broad range of devices. This paper demonstrates strong results for an end-to-end implementation of AF detection based on a convolutional neural network (AFibNet). The method uses a single learning system without considering the variety of signal lengths and sampling frequencies. For implementation, AFibNet is processed with a computational cloud-based DL approach. This study utilized a one-dimensional convolutional neural network (1D-CNN) model for 11,842 subjects; it was trained and validated with 8232 records from three datasets and tested with 3610 records from eight datasets. The predicted results, when compared with diagnoses by human practitioners, showed 99.80% accuracy, sensitivity, and specificity. Results: When tested on unseen data, AF detection reached 98.94% accuracy, 98.97% sensitivity, and 98.97% specificity at a sample period of 0.02 seconds using the DL cloud system. To improve the confidence of the AFibNet model, it was also validated with 18 arrhythmia conditions defined as the Non-AF class; the data were thus increased from 11,842 to 26,349 instances for three classes, i.e., normal sinus rhythm (N), AF, and Non-AF. The results showed 96.36% accuracy, 93.65% sensitivity, and 96.92% specificity. Conclusion: These findings demonstrate that the proposed approach can use unknown data to derive feature maps and reliably detect AF periods. We have found that our cloud-DL system is suitable for practical deployment.
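
A 1D-CNN front end of the kind described can be made length-independent with a global pooling layer, as in the sketch below; the three-class output follows the abstract's N/AF/Non-AF setup, while the depth and filter widths are illustrative assumptions.

```python
# 1D-CNN sketch for raw ECG of varying length and sampling frequency: global
# average pooling collapses any input length to one fixed-size feature vector.
import torch
import torch.nn as nn

afib_net = nn.Sequential(
    nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),              # length-independent summary
    nn.Flatten(),
    nn.Linear(128, 3),                    # N / AF / Non-AF logits
)

# ecg: (batch, 1, n_samples) raw signal; n_samples may differ across devices.
# logits = afib_net(ecg)
```

The adaptive pooling is what lets a single learning system accept recordings from devices with different record lengths, matching the generalization goal stated here.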

