TransCapsule Model for Sentiment Classification

Author(s):  
Dr. Akey Sungheetha ◽  
Dr. Rajesh Sharma R

Aspect-level sentiment classification is the task of identifying the aspects discussed in a given document and classifying the sentiment polarity expressed towards each of them. However, since the cost of aspect-level annotation is very high, the scarcity of labelled data is a major obstacle. In contrast, document-level labelled data, such as the consumer reviews abundantly available online, are far easier to obtain, and these reviews are packed with sentiment-encoded text that the proposed methodology can analyze using a neural network. In this paper, a Transfer Capsule Network model is used, which can transfer the knowledge gained at the document level to the aspect level in order to classify the sentiment detected in the text. As the first step, each sentence is broken down into semantic representations using aspect routing, forming semantic capsules at both the document level and the aspect level. This routing approach is then extended to group the semantic capsules within a transfer learning framework. The effectiveness of the proposed methodology is evaluated experimentally and shown to be superior to previously proposed methodologies.
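The capsule idea underlying this abstract can be illustrated with the generic routing-by-agreement procedure: lower-level capsules emit prediction vectors, and coupling coefficients are iteratively sharpened towards the higher-level capsules they agree with. The sketch below is a minimal numpy illustration of that routing step only, not the paper's TransCapsule architecture; all shapes and iteration counts are illustrative.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Squashing non-linearity: keeps direction, maps the norm into [0, 1)."""
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def route(u_hat, n_iters=3):
    """Routing-by-agreement over prediction vectors.

    u_hat: (n_in, n_out, dim) predictions from lower-level capsules.
    Returns (n_out, dim) higher-level capsule outputs.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of predictions
        v = squash(s)                                         # (n_out, dim)
        b = b + (u_hat * v[None]).sum(axis=-1)                # reward agreement
    return v

rng = np.random.default_rng(0)
v = route(rng.normal(size=(6, 2, 4)))
print(v.shape)                            # (2, 4)
print(np.linalg.norm(v, axis=-1) < 1.0)   # squashed norms stay below 1
```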

2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces this cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, Inception-v3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new fully connected deep neural network is built on top of it for image classification. This classifier uses features extracted from the convolutional base model.
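The pattern this abstract describes — a frozen VGG16 convolutional base with a new fully connected head — can be sketched in Keras as follows. This is a minimal illustration, not the paper's exact architecture: the head sizes and the 10-class output are assumptions, and `weights="imagenet"` downloads the pretrained weights on first use.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base pretrained on ImageNet, without its original classifier head.
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
conv_base.trainable = False   # freeze: reuse ImageNet features, train only the new head

# New fully connected classifier built on top of the extracted features.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),   # 10 target classes (illustrative)
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Freezing the base means only the dense head's weights are updated during `model.fit`, which is what keeps the cost of learning low.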


Author(s):  
Pawan Sonawane ◽  
Sahel Shardhul ◽  
Raju Mendhe

The vast majority of skin cancer deaths are from melanoma, which accounts for about 1.04 million cases annually. Early detection can be immensely helpful in attempting a cure. However, most diagnostic procedures are either extremely expensive or unavailable to the vast majority of people, as diagnostic centers are concentrated in urban regions. Thus, there is a need for an application that can perform a quick, efficient, and low-cost diagnosis. Our solution proposes to build a serverless mobile application on the AWS cloud that takes images of potential skin tumors and classifies them as either malignant or benign. The classification is carried out using a trained Convolutional Neural Network model and transfer learning (Inception v3). Several experiments will be performed based on the morphology and color of the tumor to identify ideal parameters.
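One concrete detail behind such a pipeline: Inception v3 expects 299x299 inputs scaled to the [-1, 1] range, and the binary decision is a threshold on a probability. The numpy sketch below shows only those two steps; the single-logit head and the 0.5 threshold are assumptions for illustration, not the app's actual model.

```python
import numpy as np

def preprocess_for_inception(img):
    """Scale an RGB uint8 image (H, W, 3) into the [-1, 1] range Inception v3 expects.
    (Resizing to 299x299 is assumed to have happened earlier in the pipeline.)"""
    return img.astype(np.float32) / 127.5 - 1.0

def classify(logit, threshold=0.5):
    """Hypothetical binary head: sigmoid over one logit -> P(malignant)."""
    p_malignant = 1.0 / (1.0 + np.exp(-logit))
    return ("Malignant" if p_malignant >= threshold else "Benign", p_malignant)

img = np.full((299, 299, 3), 255, dtype=np.uint8)   # all-white dummy image
x = preprocess_for_inception(img)
print(x.min(), x.max())    # 1.0 1.0 for an all-white image
print(classify(2.0)[0])    # positive logit -> "Malignant"
```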


2020 ◽  
Author(s):  
Wen-Hsien Chang ◽  
Han-Kuei Wu ◽  
Lun-chien Lo ◽  
William W. L. Hsiao ◽  
Hsueh-Ting Chu ◽  
...  

Abstract Background: Traditional Chinese medicine (TCM) describes physiological and pathological changes inside and outside the human body through the application of four methods of diagnosis. One of the four, tongue diagnosis, is widely used by TCM physicians, since it allows direct observation that avoids discrepancies in the patient's history and, as such, provides clinically important, objective evidence. The clinical significance of tongue features has been explored in both TCM and modern medicine. However, TCM physicians may interpret the features of the same tongue differently, and therefore intra- and inter-observer agreement is relatively low. If an automated interpretation system could be developed, more consistent results could be obtained, and learning could also be more efficient. This study applies a recently developed deep learning method to the classification of tongue features and indicates the regions where the features are located.
Methods: A large number of tongue photographs with labeled fissures were used. Transfer learning was conducted using the ImageNet-pretrained ResNet50 model to determine whether tongue fissures were present in a tongue photograph. Neural network models often lack interpretability, and users cannot understand how the model determines the presence of tongue fissures. Therefore, Gradient-weighted Class Activation Mapping (Grad-CAM) was also applied to mark tongue features directly on the tongue image.
Results: Only 6 epochs were trained in this study, and no graphics processing units (GPUs) were used; each epoch took less than 4 minutes to train. Accuracy on the test set was approximately 70%. After model training was completed, Grad-CAM was applied to localize tongue fissures in each image. The neural network model not only determined whether tongue fissures existed, but also allowed users to see the tongue fissure regions.
Conclusions: This study demonstrated how to apply transfer learning with the ImageNet-pretrained ResNet50 model for the identification and localization of tongue fissures. The neural network model built in this study provides interpretability and intuitiveness (often lacking in general neural network models) and improves the feasibility of clinical application.
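The Grad-CAM localization step mentioned above reduces to a small computation once the framework has supplied the conv-layer activations and the gradients of the class score with respect to them: average the gradients spatially to weight each feature map, sum, and apply ReLU. The sketch below runs that core formula on synthetic arrays; in real use the gradients come from backpropagation through the trained network.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map for one convolutional layer.

    activations: (K, H, W) feature maps A^k of the target layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. each A^k.
    """
    weights = gradients.mean(axis=(1, 2))   # alpha_k: global-average-pooled gradients
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)  # ReLU(sum_k alpha_k A^k)
    if cam.max() > 0:
        cam = cam / cam.max()               # normalize to [0, 1] for overlay on the image
    return cam

rng = np.random.default_rng(1)
cam = grad_cam(rng.normal(size=(8, 7, 7)), rng.normal(size=(8, 7, 7)))
print(cam.shape)                            # (7, 7): one heat value per spatial cell
print(cam.min() >= 0.0, cam.max() <= 1.0)   # ReLU + normalization keep it in [0, 1]
```

The resulting low-resolution map is upsampled to the input image size and overlaid to mark the fissure regions.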


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Muhammad Mateen ◽  
Junhao Wen ◽  
Nasrullah Nasrullah ◽  
Song Sun ◽  
Shaukat Hayat

In the field of ophthalmology, diabetic retinopathy (DR) is a major cause of blindness. DR is characterized by retinal lesions, including exudates. Exudates are among the signs of serious DR anomalies, so these lesions must be detected properly and treated immediately to prevent loss of vision. In this paper, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of exudates. Recently, deep CNNs have been applied individually to solve specific problems, but pretrained CNN models with transfer learning can utilize previously acquired knowledge to solve other, related problems. In the proposed approach, data preprocessing is first performed to standardize the exudate patches. Region of interest (ROI) localization is then used to localize exudate features, and transfer learning is performed for feature extraction using pretrained CNN models (Inception-v3, Residual Network-50, and Visual Geometry Group Network-19). The fused features from the fully connected (FC) layers are then fed into a softmax classifier for exudate classification. The performance of the proposed framework has been analyzed using two well-known, publicly available databases, e-Ophtha and DIARETDB1. The experimental results demonstrate that the proposed pretrained CNN-based framework outperforms existing techniques for the detection of exudates.
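The fusion step described above — concatenating FC-layer features from several pretrained backbones and feeding them to a softmax classifier — can be sketched as follows. The feature dimensions and random weights are illustrative stand-ins, not the paper's actual layer sizes or learned parameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
# Hypothetical FC-layer feature vectors from three pretrained backbones for 4 patches.
f_inception = rng.normal(size=(4, 2048))    # Inception-v3 features
f_resnet    = rng.normal(size=(4, 2048))    # ResNet-50 features
f_vgg       = rng.normal(size=(4, 4096))    # VGG-19 features

fused = np.concatenate([f_inception, f_resnet, f_vgg], axis=1)   # (4, 8192)

# Softmax classifier: exudate vs. non-exudate (weights would be learned in practice).
W = rng.normal(size=(fused.shape[1], 2)) * 0.01
probs = softmax(fused @ W)
print(fused.shape, probs.shape)               # (4, 8192) (4, 2)
print(np.allclose(probs.sum(axis=1), 1.0))    # each row is a probability distribution
```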


Author(s):  
Hojun Lee ◽  
Donghwan Yun ◽  
Jayeon Yoo ◽  
Kiyoon Yoo ◽  
Yong Chul Kim ◽  
...  

Background and objectives: Intradialytic hypotension has high clinical significance. However, predicting it using conventional statistical models may be difficult because several factors have interactive and complex effects on the risk. Herein, we applied a deep learning model (a recurrent neural network) to predict the risk of intradialytic hypotension using a timestamp-bearing dataset.
Design, setting, participants, & measurements: We obtained 261,647 hemodialysis sessions with 1,600,531 independent timestamps (i.e., time-varying vital signs) and randomly divided them into training (70%), validation (5%), calibration (5%), and testing (20%) sets. Intradialytic hypotension was defined as a nadir systolic BP <90 mm Hg (termed intradialytic hypotension 1), or as a decrease in systolic BP ≥20 mm Hg and/or a decrease in mean arterial pressure ≥10 mm Hg relative to the initial BPs (termed intradialytic hypotension 2) or the prediction-time BPs (termed intradialytic hypotension 3), occurring within 1 hour. The areas under the receiver operating characteristic curves, the areas under the precision-recall curves, and the F1 scores obtained using the recurrent neural network model were compared with those obtained using multilayer perceptron, Light Gradient Boosting Machine, and logistic regression models.
Results: The recurrent neural network model for predicting intradialytic hypotension 1 achieved an area under the receiver operating characteristic curve of 0.94 (95% confidence interval, 0.94 to 0.94), which was higher than those obtained using the other models (P<0.001). The recurrent neural network models for predicting intradialytic hypotension 2 and intradialytic hypotension 3 achieved areas under the receiver operating characteristic curves of 0.87 (interquartile range, 0.87–0.87) and 0.79 (interquartile range, 0.79–0.79), respectively, which were also higher than those obtained using the other models (P≤0.001). The area under the precision-recall curve and the F1 score were likewise higher using the recurrent neural network model than using the other models, and the recurrent neural network models for intradialytic hypotension were highly calibrated.
Conclusions: Our deep learning model can be used to predict the real-time risk of intradialytic hypotension.
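The essential mechanism here — a recurrent model carrying session history forward through time-varying vital signs and emitting a risk probability at each timestamp — can be sketched with a minimal Elman RNN in numpy. The feature count, hidden size, and random weights are illustrative; the study's actual architecture and learned parameters are not reproduced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_risk(x_seq, Wx, Wh, Wo, b, bo):
    """Simple Elman RNN: emit a hypotension-risk probability at every timestamp.

    x_seq: (T, D) time-varying vital signs for one dialysis session.
    """
    h = np.zeros(Wh.shape[0])
    risks = []
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h + b)   # hidden state accumulates session history
        risks.append(sigmoid(Wo @ h + bo))   # P(intradialytic hypotension within 1 h)
    return np.array(risks)

rng = np.random.default_rng(3)
D, H, T = 5, 16, 12   # 5 vital-sign features, 12 timestamps (illustrative sizes)
risks = rnn_risk(rng.normal(size=(T, D)),
                 rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1,
                 rng.normal(size=H) * 0.1, rng.normal(size=H) * 0.1, 0.0)
print(risks.shape)                         # (12,): one risk score per timestamp
print(((0 < risks) & (risks < 1)).all())   # sigmoid keeps every score in (0, 1)
```

Because the hidden state is updated at every timestamp, the same trained model can be queried mid-session for the real-time risk the abstract describes.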

