Coarse2Fine: a two-stage training method for fine-grained visual classification

2021 ◽  
Vol 32 (2) ◽  
Author(s):  
Amir Erfan Eshratifar ◽  
David Eigen ◽  
Michael Gormish ◽  
Massoud Pedram

2021 ◽  
Vol 30 ◽  
pp. 2826-2836 ◽  
Author(s):  
Yifeng Ding ◽  
Zhanyu Ma ◽  
Shaoguo Wen ◽  
Jiyang Xie ◽  
Dongliang Chang ◽  
...  

2020 ◽  
Vol 34 (05) ◽  
pp. 8600-8607
Author(s):  
Haiyun Peng ◽  
Lu Xu ◽  
Lidong Bing ◽  
Fei Huang ◽  
Wei Lu ◽  
...  

Target-based sentiment analysis, or aspect-based sentiment analysis (ABSA), refers to addressing various sentiment analysis tasks at a fine-grained level, including but not limited to aspect extraction, aspect sentiment classification, and opinion extraction. Many existing solvers address the individual subtasks above, or a combination of two subtasks, and together they can tell a complete story, i.e., the discussed aspect, the sentiment on it, and the cause of that sentiment. However, no previous ABSA research has tried to provide a complete solution in one shot. In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE). In particular, a solver of this task needs to extract triplets (What, How, Why) from the input, which show WHAT the targeted aspects are, HOW their sentiment polarities are classified, and WHY they have such polarities (i.e., opinion reasons). For instance, one triplet from “Waiters are very friendly and the pasta is simply average” could be (‘Waiters’, positive, ‘friendly’). We propose a two-stage framework to address this task. The first stage predicts what, how, and why in a unified model; the second stage pairs up the predicted what (how) and why from the first stage to output triplets. In the experiments, our framework sets a benchmark performance for this novel triplet extraction task and outperforms several strong baselines adapted from state-of-the-art related methods.
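As a rough illustration of the two-stage pipeline described in the abstract, the sketch below wires a first stage (tagging aspects with polarity plus opinion terms) into a second stage that pairs them into (aspect, polarity, opinion) triplets. All function names, the nearest-opinion pairing heuristic, and the hard-coded outputs are our own placeholder assumptions; the paper's actual stages are learned neural models, not rules like these.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class StageOneOutput:
    aspects: List[Tuple[str, str]]   # (aspect term, sentiment polarity) = WHAT + HOW
    opinions: List[str]              # opinion terms = WHY


def stage_one(sentence: str) -> StageOneOutput:
    """Placeholder for the unified first-stage model that predicts aspects
    (with polarity) and opinion terms from the same input sentence."""
    # Hard-coded output for the running example, purely for illustration.
    if "Waiters" in sentence:
        return StageOneOutput(
            aspects=[("Waiters", "positive"), ("pasta", "neutral")],
            opinions=["friendly", "average"],
        )
    return StageOneOutput(aspects=[], opinions=[])


def stage_two(sentence: str, out: StageOneOutput) -> List[Tuple[str, str, str]]:
    """Pair up the predicted WHAT/HOW with WHY to emit triplets.
    A nearest-opinion-by-character-offset rule stands in for the paper's
    learned pairing classifier."""
    triplets = []
    for aspect, polarity in out.aspects:
        if not out.opinions:
            continue
        nearest = min(
            out.opinions,
            key=lambda op: abs(sentence.find(aspect) - sentence.find(op)),
        )
        triplets.append((aspect, polarity, nearest))
    return triplets


sentence = "Waiters are very friendly and the pasta is simply average"
print(stage_two(sentence, stage_one(sentence)))
# [('Waiters', 'positive', 'friendly'), ('pasta', 'neutral', 'average')]
```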


2017 ◽  
Vol 61 (1) ◽  
Author(s):  
Lihua Guo ◽  
Chenggang Guo ◽  
Lei Li ◽  
Qinghua Huang ◽  
Yanshan Li ◽  
...  

2021 ◽  
Vol 14 (6) ◽  
pp. 863-863
Author(s):  
Supun Nakandala ◽  
Yuhao Zhang ◽  
Arun Kumar

We discovered an inconsistency in the communication cost formulation for the decentralized fine-grained training method in Table 2 of our paper [1]. We used Horovod as the archetype for decentralized fine-grained approaches, and its correct communication cost is higher than what we had reported. Accordingly, we amend the communication cost of the decentralized fine-grained approach to [EQUATION]
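The amended formula itself is elided above ([EQUATION]). As general background only, Horovod's ring all-reduce is commonly estimated to send 2(p-1)/p times the model size per worker per synchronization, and fine-grained (per mini-batch) synchronization pays that cost once per mini-batch. The sketch below computes that standard estimate; the function names and parameters are our own and are not the paper's notation or its corrected formula.

```python
def ring_allreduce_traffic_per_worker(model_size_bytes: float, num_workers: int) -> float:
    """Per-worker send volume of one ring all-reduce: 2 * (p - 1) / p * |model|.
    Background arithmetic only; not the amended cost from the erratum."""
    p = num_workers
    return 2.0 * (p - 1) / p * model_size_bytes


def fine_grained_epoch_traffic(model_size_bytes: float, num_workers: int,
                               num_minibatches: int) -> float:
    """Fine-grained synchronization runs one all-reduce per mini-batch,
    so per-epoch traffic scales with the number of mini-batches."""
    return num_minibatches * ring_allreduce_traffic_per_worker(model_size_bytes, num_workers)


# Example: a 400 MB model, 8 workers, 1,000 mini-batches per epoch.
print(fine_grained_epoch_traffic(400e6, 8, 1000) / 1e9, "GB sent per worker per epoch")
```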


Author(s):  
Kushagra Mahajan ◽  
Tarasha Khurana ◽  
Ayush Chopra ◽  
Isha Gupta ◽  
Chetan Arora ◽  
...  
