Empirical study of shallow and deep learning models for sarcasm detection using context in benchmark datasets

Author(s):  
Akshi Kumar ◽  
Geetanjali Garg
2020 ◽  
Vol 34 (07) ◽  
pp. 11890-11898
Author(s):  
Zhongang Qi ◽  
Saeed Khorram ◽  
Li Fuxin

Understanding and interpreting the decisions made by deep learning models is valuable in many domains. In computer vision, computing heatmaps from a deep network is a popular approach for visualizing and understanding deep networks. However, heatmaps that do not correlate with the network may mislead humans, so the faithfulness of a heatmap as an explanation of the underlying deep network is crucial. In this paper, we propose I-GOS, which optimizes for a heatmap such that the classification score on the masked image is maximally decreased. The main novelty of the approach is to compute descent directions based on integrated gradients instead of the normal gradient, which avoids local optima and speeds up convergence. Compared with previous approaches, our method can flexibly compute heatmaps at any resolution for different user needs. Extensive experiments on several benchmark datasets show that the heatmaps produced by our approach correlate better with the decision of the underlying deep network than those of other state-of-the-art approaches.
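
The integrated-gradients computation that I-GOS builds its descent directions on can be sketched in a few lines: instead of a single gradient at the input, gradients are averaged along the straight path from a baseline to the input. This toy example uses an analytic gradient for a quadratic function; in the paper the gradient comes from the deep network and drives a mask optimization, which is omitted here.

```python
def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate integrated gradients of a scalar function at x.

    IG_i(x) = (x_i - b_i) * (1/steps) * sum_k df/dx_i at b + a_k * (x - b),
    with a_k sampled at interval midpoints (Riemann midpoint rule).
    """
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps
        point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            total[i] += g[i]
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

# Toy example: f(x) = x1^2 + x2^2, so grad f = 2x. From a zero baseline,
# IG recovers x_i^2, and attributions sum to f(x) - f(baseline)
# (the "completeness" property).
grad_f = lambda p: [2.0 * v for v in p]
ig = integrated_gradients(grad_f, [1.0, 2.0], [0.0, 0.0])
print(ig)  # approximately [1.0, 4.0]
```

The midpoint rule keeps the path-integral approximation accurate with few gradient evaluations, which is what lets the approach converge faster than following raw gradients.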


Author(s):  
Himanshu Gupta ◽  
Tanmay Girish Kulkarni ◽  
Lov Kumar ◽  
Lalita Bhanu Murthy Neti ◽  
Aneesh Krishna

2021 ◽  
Vol 1950 (1) ◽  
pp. 012071
Author(s):  
Monika Sethi ◽  
Sachin Ahuja ◽  
Vinay Kukreja

Author(s):  
Amir Pouran Ben Veyseh ◽  
Thien Nguyen ◽  
Dejing Dou

Relation Extraction (RE) is one of the fundamental tasks in Information Extraction and Natural Language Processing. Dependency trees have been shown to be a very useful source of information for this task. Current deep learning models for relation extraction have mainly exploited this dependency information by guiding their computation along the structures of the dependency trees. One potential problem with this approach is that it might prevent the models from capturing important context information beyond the syntactic structures and cause poor cross-domain generalization. This paper introduces a novel method to use dependency trees in RE for deep learning models that jointly predicts dependency and semantic relations. We also propose a new mechanism to control the information flow in the model based on the input entity mentions. Our extensive experiments on benchmark datasets show that the proposed model significantly outperforms existing methods for RE.
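
The idea of controlling information flow with the entity mentions can be illustrated with a minimal sketch: a per-dimension gate computed from the two entity vectors modulates a hidden representation. The function name, weights, and vectors below are hypothetical stand-ins for the learned parameters in the paper, not its actual architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def entity_gate(hidden, e1, e2, w):
    """Gate each dimension of `hidden` using the entity mentions: a scalar
    score per dimension, computed from the concatenated entity vectors,
    passes through a sigmoid and scales that dimension (information flow
    control). `w` has one weight row per hidden dimension."""
    concat = e1 + e2  # concatenate the two entity mention vectors
    gated = []
    for d, h in enumerate(hidden):
        score = sum(wi * ci for wi, ci in zip(w[d], concat))
        gated.append(sigmoid(score) * h)
    return gated

hidden = [2.0, -4.0]             # hypothetical sentence representation
e1, e2 = [1.0, 0.0], [0.0, 1.0]  # hypothetical entity mention vectors
w = [[0.0] * 4, [0.0] * 4]       # zero weights -> each gate is sigmoid(0) = 0.5
print(entity_gate(hidden, e1, e2, w))  # [1.0, -2.0]
```

With trained weights, the gate learns to suppress hidden dimensions that are irrelevant to the particular entity pair being classified.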


2020 ◽  
Author(s):  
Xinhao Li ◽  
Denis Fourches

SMILES-based deep learning models are slowly emerging as an important research topic in cheminformatics. In this study, we introduce SMILES Pair Encoding (SPE), a data-driven tokenization algorithm. SPE first learns a vocabulary of high-frequency SMILES substrings from a large chemical dataset (e.g., ChEMBL) and then tokenizes SMILES based on the learned vocabulary for deep learning models. As a result, SPE augments the widely used atom-level tokenization by adding human-readable and chemically explainable SMILES substrings as tokens. Case studies show that SPE can achieve superior performance for both molecular generation and property prediction tasks. In the molecular generation task, SPE can boost the validity and novelty of generated SMILES. The molecular property prediction models were evaluated on 24 benchmark datasets, where SPE consistently matched or outperformed atom-level tokenization. SPE could therefore be a promising tokenization method for SMILES-based deep learning models. An open-source Python package <i>SmilesPE</i> was developed to implement this algorithm and is available at <a href="https://github.com/XinhaoLi74/SmilesPE">https://github.com/XinhaoLi74/SmilesPE</a>.
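
The pair-encoding idea can be sketched in the style of byte-pair encoding: repeatedly merge the most frequent adjacent token pair in the corpus into a single token. This is a simplified illustration of the concept on character-level tokens, not the <i>SmilesPE</i> package API (which tokenizes at the atom level before learning merges); function names and the tiny corpus are made up for the example.

```python
from collections import Counter

def learn_spe_merges(smiles_corpus, num_merges=10):
    """Learn merge rules BPE-style over SMILES strings: at each step,
    merge the most frequent adjacent token pair into one token."""
    corpus = [list(s) for s in smiles_corpus]  # start from character tokens
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            for a, b in zip(toks, toks[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        corpus = [_apply_merge(toks, best) for toks in corpus]
    return merges

def _apply_merge(toks, pair):
    """Replace every non-overlapping occurrence of `pair` with one token."""
    out, i = [], 0
    while i < len(toks):
        if i + 1 < len(toks) and (toks[i], toks[i + 1]) == pair:
            out.append(toks[i] + toks[i + 1])
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out

def spe_tokenize(smiles, merges):
    """Tokenize one SMILES string by replaying the learned merges in order."""
    toks = list(smiles)
    for pair in merges:
        toks = _apply_merge(toks, pair)
    return toks

corpus = ["CCO", "CCN", "CCC", "c1ccccc1"]
merges = learn_spe_merges(corpus, num_merges=2)
print(spe_tokenize("CCO", merges))  # ['CC', 'O']
```

The learned substrings (here "CC", an ethyl-like fragment) are the human-readable, chemically meaningful tokens the abstract refers to.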


With the advancement of multimedia technology, the availability and usage of image and video data are enormous, and indexing and retrieving those data requires an efficient technique. Automatic keyword generation for images is now an active research focus attracting considerable attention. In general, conventional auto-annotation methods perform worse than deep learning methods, in which annotation is cast as captioning. In this paper, we propose a new model, CSL Net (CSLN), which combines a convoluted squeeze-and-excitation block with Bi-LSTM blocks to predict tags for images. The proposed model is evaluated on benchmark datasets such as CIFAR10, Corel5K, ESPGame, and IAPRTC12. We observe that the proposed work yields better results than existing methods in terms of precision, recall, and accuracy.
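
The squeeze-and-excitation mechanism named above can be sketched generically: globally average-pool each channel (squeeze), pass the channel descriptor through a small bottleneck with a ReLU and a sigmoid (excitation), then rescale every channel by its gate. This is a pure-Python illustration of the standard SE block, not the authors' CSLN implementation; the zero weights are placeholders for learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excitation(feature_maps, w1, w2):
    """SE block over a list of 2-D channels: squeeze each channel to one
    scalar, compute per-channel gates via bottleneck -> ReLU -> expand ->
    sigmoid, then rescale each channel by its gate."""
    # Squeeze: global average pool, one scalar per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: bottleneck layer with ReLU, then expand back with sigmoid.
    hidden = [max(0.0, sum(wij * zj for wij, zj in zip(row, z))) for row in w1]
    scale = [sigmoid(sum(wij * hj for wij, hj in zip(row, hidden))) for row in w2]
    # Rescale each channel by its learned gate.
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scale)]

# Two 2x2 channels; with zero weights every gate is sigmoid(0) = 0.5.
x = [[[2.0, 2.0], [2.0, 2.0]], [[4.0, 0.0], [0.0, 0.0]]]
w1 = [[0.0, 0.0]]    # bottleneck to 1 unit
w2 = [[0.0], [0.0]]  # expand back to 2 channels
y = squeeze_excitation(x, w1, w2)
print(y[0][0][0])  # 1.0
```

In a trained network the gates learn to emphasize informative channels, which is why SE blocks pair well with a Bi-LSTM caption/tag decoder downstream.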


Author(s):  
Liuyu Xiang ◽  
Xiaoming Jin ◽  
Lan Yi ◽  
Guiguang Ding

Deep learning models such as convolutional neural networks and recurrent networks are widely applied in text classification. In spite of their great success, most deep learning models neglect the importance of modeling context information, which is crucial to understanding texts. In this work, we propose Adaptive Region Embedding, which learns context representations to improve text classification. Specifically, a meta-network is learned to generate a context matrix for each region, and each word interacts with its corresponding context matrix to produce the regional representation for further classification. Compared to previous models designed to capture context information, our model contains fewer parameters and is more flexible. We extensively evaluate our method on 8 benchmark datasets for text classification. The experimental results show that our method achieves state-of-the-art performance and effectively avoids word ambiguity.
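
The region/context-matrix interaction can be sketched as follows: the center word of a region supplies a context matrix that is applied to each neighbor's embedding, and the results are summed into the regional representation. Here a random per-word matrix table stands in for the learned meta-network, and all dimensions and names are invented for the illustration.

```python
import random

DIM = 4
random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# Toy embeddings and a stand-in "meta-network": a fixed per-word matrix
# table (an assumption -- the paper's meta-network is a trained model
# that generates the context matrix from the region).
vocab = ["the", "movie", "was", "great"]
embed = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in vocab}
context_matrix = {w: rand_matrix(DIM, DIM) for w in vocab}

def region_embedding(words, center, radius=1):
    """Apply the center word's context matrix to each neighbor embedding
    in the region and sum the results into one regional vector."""
    C = context_matrix[words[center]]
    lo, hi = max(0, center - radius), min(len(words), center + radius + 1)
    rep = [0.0] * DIM
    for i in range(lo, hi):
        if i == center:
            continue
        for d, v in enumerate(matvec(C, embed[words[i]])):
            rep[d] += v
    return rep

rep = region_embedding(["the", "movie", "was", "great"], center=1)
print(len(rep))  # 4
```

Because the context matrix depends on the word, the same neighbor contributes differently in different regions, which is how the model disambiguates word senses by context.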


