deep feature extraction
Recently Published Documents

TOTAL DOCUMENTS: 105 (five years: 79)
H-INDEX: 10 (five years: 4)

Author(s):  
Chetan Gedam

Cancer is a heterogeneous disorder comprising various types and sub-types. Early detection, screening, and diagnosis of cancer types are necessary to facilitate research into early diagnosis, management, and the development of successful therapies. Existing methodologies can only classify and diagnose a single variety of cancer from a homogeneous dataset and focus more on predicting patient survivability than on cure. This research defines a machine learning-based methodology for a universal approach to the diagnosis, detection, symptom-based prediction, and screening of histopathology cancers, their types, and sub-types using a heterogeneous dataset of images and scans. In this architecture, we use a VGG-19-based 3D convolutional neural network for deep feature extraction and then perform regression using a random forest algorithm. We create a heterogeneous dataset consisting of results from laboratory tests, imaging tests, and biopsy reports rather than relying only on clinical images. Initially, we categorize tumors and lesions as benign or malignant and then classify the malignant lesions into their sub-types, detecting their severity and growth rate. Our system is designed to predict risk at multiple time points, leverage optional risk factors when available, and produce predictions that are consistent across mammography machines. We found the classification accuracy for categorizing tumors as cancerous to be 95%, whereas the accuracy for classifying malignant lesions into their sub-types was 94%.
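A rough sketch of the "pretrained deep features plus random forest" idea described in this abstract, assuming a Keras/scikit-learn workflow: the standard 2D VGG19 backbone stands in for the paper's VGG-19-based 3D network, and a RandomForestClassifier on benign vs. malignant labels stands in for the regression stage. The data and hyperparameters below are illustrative only.

```python
# A rough sketch of the "deep features + random forest" pipeline, assuming a
# Keras/scikit-learn workflow. The standard 2D VGG19 backbone stands in for the
# paper's VGG-19-based 3D network, and a RandomForestClassifier on benign vs.
# malignant labels stands in for the regression stage; the data below is dummy.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.ensemble import RandomForestClassifier

# Pretrained backbone used as a fixed deep feature extractor (no class head).
backbone = VGG19(include_top=False, weights="imagenet",
                 pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255]
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Hypothetical stand-in data: replace with real histopathology images/labels.
X_train = np.random.rand(16, 224, 224, 3) * 255.0
y_train = np.random.randint(0, 2, size=16)   # 0 = benign, 1 = malignant

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(extract_features(X_train), y_train)
```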


Author(s):  
D. Minola Davids ◽  
C. Seldev Christopher

The visual data obtained from single-camera or multi-view surveillance camera networks is increasing exponentially every day. Identifying the important shots that faithfully represent the original video is the main task in video summarization. To perform efficient video summarization for surveillance systems, an optimization algorithm, LFOB-COA, is proposed in this paper. The proposed method's five steps are data collection, pre-processing, deep feature extraction (FE), shot segmentation with JSFCM, and classification using a Rectified Linear Unit-activated BLSTM with LFOB-COA. Finally, a post-processing step is applied. To demonstrate the proposed method's effectiveness, the results are compared with existing methods.
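A hedged sketch of the classification stage only, as one might picture it: a ReLU-activated BLSTM over per-shot deep features that scores each shot for inclusion in the summary. The sequence length, feature dimension, and layer sizes are assumptions; the JSFCM segmentation and LFOB-COA optimization stages are not reproduced here.

```python
# A hedged sketch of the classification stage only: a ReLU-activated BLSTM over
# per-shot deep features that scores each shot for inclusion in the summary.
# The sequence length, feature dimension, and layer sizes are assumptions; the
# JSFCM segmentation and LFOB-COA optimization stages are not reproduced here.
from tensorflow.keras import layers, models

def build_shot_classifier(seq_len=30, feat_dim=2048):
    # Each video is represented as a sequence of shot-level feature vectors.
    inputs = layers.Input(shape=(seq_len, feat_dim))
    x = layers.Bidirectional(
        layers.LSTM(128, activation="relu", return_sequences=True))(inputs)
    # Per-shot importance score: close to 1 = keep the shot in the summary.
    outputs = layers.TimeDistributed(layers.Dense(1, activation="sigmoid"))(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_shot_classifier()
model.summary()
```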


2022 ◽  
Vol 14 (1) ◽  
pp. 206
Author(s):  
Kai Hu ◽  
Meng Li ◽  
Min Xia ◽  
Haifeng Lin

Water area segmentation is an important branch of remote sensing image segmentation, but in practice most water area images have complex and diverse backgrounds. Traditional detection methods cannot accurately identify small tributaries because they mine and use semantic information incompletely, and the segmentation edges they produce are rough. To solve these problems, we propose a multi-scale feature aggregation network. To improve the network's ability to process boundary information, we design a deep feature extraction module that uses a multi-scale pyramid to extract features; combined with the designed attention mechanism and strip convolutions, it extracts multi-scale deep semantic information and enhances spatial and location information. Then, a multi-branch aggregation module lets features at different scales interact to strengthen the positioning information of the pixels. Finally, the two high-performance branches designed in the Feature Fusion Upsample module extract deep semantic information from the image, and this deep information is fused with the shallow information generated by the multi-branch module to improve the network's capability. Global and local features are used to determine the location distribution of each image category. The experimental results show that the segmentation accuracy of the proposed method is better than that of previous detection methods, which has important practical significance for real-world water area segmentation.
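To make the module description concrete, here is a rough Keras sketch of a multi-scale pyramid block with strip convolutions in the spirit of the one described above. The branch layout, channel counts, and the attention mechanism are illustrative assumptions and do not reproduce the authors' exact design.

```python
# A rough Keras sketch of a multi-scale pyramid block with strip convolutions,
# in the spirit of the module described above. The branch layout, channel
# counts, and the attention mechanism are illustrative assumptions and do not
# reproduce the authors' exact design.
from tensorflow.keras import layers, models

def multi_scale_strip_block(x, filters=64):
    # Parallel dilated 3x3 convolutions form a simple pyramid over scales.
    pyramid = [
        layers.Conv2D(filters, 3, padding="same", dilation_rate=d,
                      activation="relu")(x)
        for d in (1, 2, 4)
    ]
    # Strip convolutions favor long, thin structures such as small tributaries.
    strips = [
        layers.Conv2D(filters, (1, 9), padding="same", activation="relu")(x),
        layers.Conv2D(filters, (9, 1), padding="same", activation="relu")(x),
    ]
    merged = layers.Concatenate()(pyramid + strips)
    # 1x1 convolution fuses the aggregated multi-scale features.
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(merged)

inputs = layers.Input(shape=(256, 256, 3))
outputs = multi_scale_strip_block(inputs)
model = models.Model(inputs, outputs)
model.summary()
```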


2021 ◽  
Vol 29 (06) ◽  
Author(s):  
Jiajia Lei ◽  
Xiaohai He ◽  
Chao Ren ◽  
Xiaohong Wu ◽  
Yi Wang

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Tahia Tazin ◽  
Sraboni Sarker ◽  
Punit Gupta ◽  
Fozayel Ibn Ayaz ◽  
Sumaia Islam ◽  
...  

Brain tumors are among the most common and aggressive illnesses, with a relatively short life expectancy in their most severe form. Thus, treatment planning is an important step in improving patients' quality of life. In general, imaging methods such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound are used to assess tumors in the brain, lung, liver, breast, prostate, and so on. X-ray images in particular are used in this study to diagnose brain tumors. This paper describes an investigation of the convolutional neural network (CNN) for identifying brain tumors from X-ray images, which expedites treatment and increases its reliability. Because there has already been a significant amount of study in this field, the presented model focuses on boosting accuracy through a transfer learning strategy. Python and Google Colab were used to perform this investigation. Deep feature extraction was accomplished with pretrained deep CNN models: VGG19, InceptionV3, and MobileNetV2. Classification accuracy is used to assess performance. MobileNetV2 achieved an accuracy of 92%, InceptionV3 of 91%, and VGG19 of 88%; MobileNetV2 offered the highest accuracy among these networks. Such accuracy aids the early identification of tumors before they produce physical adverse effects such as paralysis and other impairments.
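A minimal sketch of the transfer learning setup described above, assuming a Keras/TensorFlow pipeline with a frozen MobileNetV2 backbone used as the deep feature extractor. The head layers, input size, and number of classes are illustrative, not the authors' exact configuration.

```python
# A minimal sketch of the transfer learning setup described above, assuming a
# Keras/TensorFlow pipeline with a frozen MobileNetV2 backbone as the deep
# feature extractor. The head layers, input size, and number of classes are
# illustrative, not the authors' exact configuration.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

def build_classifier(input_shape=(224, 224, 3), num_classes=2):
    backbone = MobileNetV2(include_top=False, weights="imagenet",
                           input_shape=input_shape)
    backbone.trainable = False  # keep the pretrained convolutional features

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```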


2021 ◽  
Vol 11 (23) ◽  
pp. 11313
Author(s):  
Xiaomin Pu ◽  
Guangxi Yan ◽  
Chengqing Yu ◽  
Xiwei Mi ◽  
Chengming Yu

In recent years, online course learning has gradually become the mainstream of learning. As the key data reflecting the quality of online courses, users' comments are very important for improving course quality, and the sentiment information contained in these comments guides course improvement. A new ensemble model is proposed for sentiment analysis. The model takes full advantage of Word2Vec and GloVe for word vector representation and uses a bidirectional long short-term memory (BiLSTM) network and a convolutional neural network to achieve deep feature extraction. Moreover, the multi-objective gray wolf optimization (MOGWO) ensemble method is adopted to integrate these models. The experimental results show that the sentiment recognition accuracy of the proposed model is higher than that of the seven comparison models, with an F1-score above 91%, and the recognition results across different emotion levels indicate the stability of the proposed ensemble model.
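A minimal Keras sketch of the two base learners mentioned above: a BiLSTM branch and a CNN branch over a shared embedding layer. In the paper the embeddings come from Word2Vec and GloVe and the branch outputs are combined by MOGWO; here a randomly initialized embedding and a plain average stand in for those steps, and all sizes are illustrative assumptions.

```python
# A minimal Keras sketch of the two base learners mentioned above: a BiLSTM
# branch and a CNN branch over a shared embedding layer. In the paper the
# embeddings come from Word2Vec/GloVe and the branch outputs are combined by
# MOGWO; here a randomly initialized embedding and a plain average stand in
# for those steps, and all sizes are illustrative assumptions.
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMB_DIM, NUM_CLASSES = 20000, 100, 128, 3

tokens = layers.Input(shape=(SEQ_LEN,), dtype="int32")
emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(tokens)

# BiLSTM branch: sequential context across the comment.
lstm_out = layers.Dense(NUM_CLASSES, activation="softmax")(
    layers.Bidirectional(layers.LSTM(64))(emb))

# CNN branch: local n-gram features.
cnn_out = layers.Dense(NUM_CLASSES, activation="softmax")(
    layers.GlobalMaxPooling1D()(layers.Conv1D(64, 5, activation="relu")(emb)))

# Simple average in place of the MOGWO-weighted ensemble.
outputs = layers.Average()([lstm_out, cnn_out])

model = models.Model(tokens, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```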


2021 ◽  
Vol 38 (5) ◽  
pp. 1281-1291
Author(s):  
Yusra Obeidat ◽  
Ali Mohammad Alqudah

In this paper, we utilize a hybrid lightweight 1D deep learning model that combines convolutional neural network (CNN) and long short-term memory (LSTM) methods for accurate, fast, and automated beat-wise ECG classification. CNN-only and LSTM-only models were designed separately for comparison with the hybrid CNN-LSTM model in terms of accuracy, number of parameters, and classification time. The hybrid CNN-LSTM system provides automated deep feature extraction and classification for six ECG beat classes: normal sinus rhythm (NSR), atrial fibrillation (AFIB), atrial flutter (AFL), atrial premature beat (APB), left bundle branch block (LBBB), and right bundle branch block (RBBB). The hybrid model uses the CNN blocks for deep feature extraction and selection from each ECG beat, while the LSTM layer learns to extract contextual time information. The results show that the proposed hybrid CNN-LSTM model achieves high accuracy and sensitivity of 98.22% and 98.23%, respectively. The model is light and fast in classifying ECG beats and superior to previously used models, which makes it well suited for embedded system designs that can be used in clinical applications for monitoring heart diseases in a faster and more efficient manner.
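A lightweight sketch of a hybrid 1D CNN-LSTM beat classifier in the spirit of the model described above: CNN blocks extract deep features from each beat and an LSTM layer captures contextual time information. The beat length, filter counts, and layer depths are illustrative assumptions, not the authors' exact architecture.

```python
# A lightweight sketch of a hybrid 1D CNN-LSTM beat classifier in the spirit of
# the model described above: CNN blocks extract deep features from each beat
# and an LSTM layer captures contextual time information. Beat length, filter
# counts, and layer depths are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn_lstm(beat_len=250, num_classes=6):
    model = models.Sequential([
        layers.Input(shape=(beat_len, 1)),            # one single-lead ECG beat
        # CNN blocks: deep feature extraction and downsampling.
        layers.Conv1D(32, 5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        # LSTM layer: contextual time information over the feature sequence.
        layers.LSTM(64),
        layers.Dense(num_classes, activation="softmax"),  # six beat classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```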


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2666
Author(s):  
Ahmad Alzu’bi ◽  
Firas Albalas ◽  
Tawfik AL-Hadhrami ◽  
Lojin Bani Younis ◽  
Amjad Bashayreh

A large number of intelligent models for masked face recognition (MFR) have recently been presented and applied in various fields, such as masked face tracking for people's safety or secure authentication. Exceptional hazards such as pandemics and fraud have noticeably accelerated the creation and sharing of relevant algorithms, which has introduced new challenges. Therefore, recognizing and authenticating people wearing masks will remain a long-standing research area, and more efficient methods are needed for real-time MFR. Machine learning has made progress in MFR and has significantly facilitated the intelligent process of detecting and authenticating persons with occluded faces. This survey organizes and reviews the recent works developed for MFR based on deep learning techniques, providing insights and a thorough discussion of the development pipeline of MFR systems. State-of-the-art techniques are introduced according to the characteristics of deep network architectures and deep feature extraction strategies. The common benchmarking datasets and evaluation metrics used in the field of MFR are also discussed. Many challenges and promising research directions are highlighted. This comprehensive study considers a wide variety of recent approaches and achievements, aiming to shape a global view of the field of MFR.

