Merging Weak and Active Supervision for Semantic Parsing

2020
Vol 34 (05)
pp. 8536-8543
Author(s):  
Ansong Ni ◽  
Pengcheng Yin ◽  
Graham Neubig

A semantic parser maps natural language commands (NLs) from users to executable meaning representations (MRs), which are later executed in a certain environment to obtain user-desired results. The fully-supervised training of such parsers requires NL/MR pairs, annotated by domain experts, which makes them expensive to collect. Weakly-supervised semantic parsers, by contrast, are learnt only from pairs of NL and expected execution results, leaving the MRs latent. While weak supervision is cheaper to acquire, learning from this input poses difficulties: it demands that parsers search a large space with a very weak learning signal, and it is hard to avoid spurious MRs that achieve the correct answer in the wrong way. These factors lead to a performance gap between parsers trained in weakly- and fully-supervised settings. To bridge this gap, we examine the intersection between weak supervision and active learning, which allows the learner to actively select examples and query for manual annotations as extra supervision to improve the model trained under weak supervision. We study different active learning heuristics for selecting examples to query, and various forms of extra supervision for such queries. We evaluate the effectiveness of our method on two different datasets. Experiments on WikiSQL show that by annotating only 1.8% of examples, we improve over a state-of-the-art weakly-supervised baseline by 6.4%, achieving an accuracy of 79.0%, which is only 1.3% away from the model trained with full supervision. Experiments on WikiTableQuestions with human annotators show that our method can improve the performance with only 100 active queries, especially for weakly-supervised parsers learnt from a cold start.
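As a rough illustration of the selection heuristics this abstract alludes to, the sketch below ranks unlabeled NL commands by the score margin between a parser's top two beam candidates and queries the most uncertain ones for annotation. The `parser.beam(nl, k)` interface is a hypothetical stand-in, not the authors' API.

```python
from typing import List

def select_queries(parser, unlabeled: List[str], budget: int) -> List[str]:
    """Pick the NL commands whose top-ranked MRs the parser is least sure about.
    A small margin between the top two beam candidates signals high uncertainty."""
    scored = []
    for nl in unlabeled:
        beam = parser.beam(nl, k=2)            # hypothetical: [(mr, log_prob), ...]
        if len(beam) < 2:
            margin = 0.0                       # nothing to compare against: query it
        else:
            margin = beam[0][1] - beam[1][1]
        scored.append((margin, nl))
    scored.sort(key=lambda pair: pair[0])      # smallest margin = most uncertain first
    return [nl for _, nl in scored[:budget]]   # send these to the human annotators
```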

Author(s):  
Yoav Artzi ◽  
Luke Zettlemoyer

The context in which language is used provides a strong signal for learning to recover its meaning. In this paper, we show it can be used within a grounded CCG semantic parsing approach that learns a joint model of meaning and context for interpreting and executing natural language instructions, using various types of weak supervision. The joint nature provides crucial benefits by allowing situated cues, such as the set of visible objects, to directly influence learning. It also enables algorithms that learn while executing instructions, for example by trying to replicate human actions. Experiments on a benchmark navigational dataset demonstrate strong performance under differing forms of supervision, including correctly executing 60% more instruction sets relative to the previous state of the art.


Author(s):  
Siva Reddy ◽  
Mirella Lapata ◽  
Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.
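The weak supervision step described above, grounding guided by denotations, can be caricatured in a few lines: among candidate Freebase groundings of a semantic graph, keep only those whose execution yields the expected answer. `execute` is a placeholder for a KB query engine, not the authors' code.

```python
def ground_by_denotation(candidate_groundings, execute, expected_answer):
    """Denotation-guided grounding: a candidate grounding survives only if
    executing it against the knowledge base returns the expected answer."""
    return [g for g in candidate_groundings if execute(g) == expected_answer]
```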


2021
Vol 13 (24)
pp. 5009
Author(s):  
Lingbo Huang ◽  
Yushi Chen ◽  
Xin He

In recent years, supervised learning-based methods have achieved excellent performance for hyperspectral image (HSI) classification. However, the collection of training samples with labels is not only costly but also time-consuming. This often results in weak supervision, including inaccurate supervision, where mislabeled samples exist, and incomplete supervision, where unlabeled samples exist. Focusing on inaccurate and incomplete supervision, the weakly supervised classification of HSI is investigated in this paper. For inaccurate supervision, complementary learning (CL) is first introduced for HSI classification. Then, a new method based on selective CL and a convolutional neural network (SeCL-CNN) is proposed for classification with noisy labels. For incomplete supervision, a data augmentation-based method, which combines mixup and Pseudo-Label (Mix-PL), is proposed. Then, a classification method combining Mix-PL and CL (Mix-PL-CL) is designed for better semi-supervised classification of HSI. The proposed weakly supervised methods are evaluated on three widely-used hyperspectral datasets (i.e., the Indian Pines, Houston, and Salinas datasets). The obtained results reveal that the proposed methods provide competitive results compared to state-of-the-art methods. For inaccurate supervision, the proposed SeCL-CNN outperforms the state-of-the-art method (i.e., SSDP-CNN) by 0.92%, 1.84%, and 1.75% in terms of OA on the three datasets when the noise ratio is 30%. For incomplete supervision, the proposed Mix-PL-CL outperforms the state-of-the-art method (i.e., AROC-DP) by 1.03%, 0.70%, and 0.82% in terms of OA on the three datasets, with 25 training samples per class.
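For readers unfamiliar with complementary learning, a minimal PyTorch sketch of a generic complementary-label loss follows: the label names a class the sample does not belong to, so training pushes probability mass away from it. This is the textbook form of CL, not the paper's exact SeCL-CNN formulation.

```python
import torch
import torch.nn.functional as F

def complementary_loss(logits: torch.Tensor, compl_labels: torch.Tensor) -> torch.Tensor:
    """Generic complementary-label loss.
    logits: (N, C) class scores; compl_labels: (N,) classes the samples are NOT.
    Minimizing -log(1 - p_compl) drives the complementary class probability to zero."""
    probs = F.softmax(logits, dim=1)
    p_compl = probs.gather(1, compl_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_compl + 1e-8).mean()
```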


Author(s):  
Tao Shen ◽  
Xiubo Geng ◽  
Guodong Long ◽  
Jing Jiang ◽  
Chengqi Zhang ◽  
...  

Many algorithms for Knowledge-Based Question Answering (KBQA) depend on semantic parsing, which translates a question to its logical form. When only weak supervision is provided, it is usually necessary to search for valid logical forms for model training. However, a complex question typically involves a huge search space, which creates two main problems: 1) limits on computation time and memory usually reduce the success rate of the search, and 2) spurious logical forms in the search results degrade the quality of the training data. These two problems lead to a poorly-trained semantic parsing model. In this work, we propose an effective search method for weakly supervised KBQA based on operator prediction for questions. With the search space constrained by predicted operators, sufficient search paths can be explored, more valid logical forms can be derived, and operators likely to cause spurious logical forms can be avoided. As a result, a larger proportion of questions in a weakly supervised training set are equipped with logical forms, and fewer spurious logical forms are generated. Such high-quality training data directly contributes to a better semantic parsing model. Experimental results on one of the largest KBQA datasets (i.e., CSQA) verify the effectiveness of our approach and establish a new state-of-the-art performance.
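A schematic of operator-constrained search, the core idea of this abstract, is sketched below. `predict_ops`, `expand`, `is_complete`, and `execute` are placeholders for the operator predictor, grammar, and executor; the real system's interfaces will differ.

```python
def operator_constrained_search(question, predict_ops, expand, is_complete,
                                execute, answer, max_steps=4):
    """Breadth-first search over logical forms, expanding partial forms only
    with operators predicted for this question, so the space stays small and
    operators that tend to produce spurious forms are never tried."""
    allowed = predict_ops(question)       # e.g. {"filter", "count"}
    frontier, found = [()], []
    for _ in range(max_steps):
        nxt = []
        for partial in frontier:
            for op in allowed:
                for cand in expand(partial, op):
                    if is_complete(cand):
                        if execute(cand) == answer:   # keep only forms consistent
                            found.append(cand)        # with the weak signal
                    else:
                        nxt.append(cand)
        frontier = nxt
    return found
```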


2019
Vol 7
pp. 233-248
Author(s):  
Laura Jehl ◽  
Carolin Lawrence ◽  
Stefan Riezler

In many machine learning scenarios, supervision by gold labels is not available and consequently neural models cannot be trained directly by maximum likelihood estimation. In a weak supervision scenario, metric-augmented objectives can be employed to assign feedback to model outputs, which can be used to extract a supervision signal for training. We present several objectives for two separate weakly supervised tasks, machine translation and semantic parsing. We show that objectives should actively discourage negative outputs in addition to promoting a surrogate gold structure. This notion of bipolarity is naturally present in ramp loss objectives, which we adapt to neural models. We show that bipolar ramp loss objectives outperform other non-bipolar ramp loss objectives and minimum risk training on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which is able to outperform even the best sequence-level ramp loss on both weakly supervised tasks.
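A minimal PyTorch rendering of a sequence-level bipolar ramp loss, under the common hope/fear formulation, is given below; the exact objectives in the paper differ in detail, so treat this as a sketch of the bipolarity idea.

```python
import torch

def bipolar_ramp_loss(scores: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
    """Sequence-level bipolar ramp loss over K candidate outputs for one input.
    scores: (K,) model log-probabilities; feedback: (K,) external metric values
    (e.g., BLEU or execution match). The 'hope' output (high score, high metric)
    is promoted while the 'fear' output (high score, low metric) is demoted."""
    hope = torch.argmax(scores + feedback)   # surrogate gold structure
    fear = torch.argmax(scores - feedback)   # high-scoring negative output
    return scores[fear] - scores[hope]       # minimizing raises hope, lowers fear
```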


2020
Vol 34 (2)
pp. 165-180
Author(s):  
Clemens-Alexander Brust ◽  
Christoph Käding ◽  
Joachim Denzler

Large amounts of labeled training data are one of the main contributors to the great success that deep models have achieved in the past. Label acquisition for tasks other than benchmarks can pose a challenge due to requirements of both funding and expertise. By selecting unlabeled examples that are promising in terms of model improvement and only asking for the respective labels, active learning can increase the efficiency of the labeling process in terms of time and cost. In this work, we describe combinations of an incremental learning scheme and methods of active learning. These allow for continuous exploration of newly observed unlabeled data. We describe selection criteria based on model uncertainty as well as expected model output change (EMOC). An object detection task is evaluated in a continuous exploration context on the PASCAL VOC dataset. We also validate a weakly supervised system based on active and incremental learning in a real-world biodiversity application where images from camera traps are analyzed. Labeling only 32 images by accepting or rejecting proposals generated by our method yields an increase in accuracy from 25.4% to 42.6%.
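As one concrete instance of the model-uncertainty criteria mentioned above, the least-confidence heuristic below scores each unlabeled example by how unsure the classifier is about its top class; EMOC itself is more involved and not shown here.

```python
import numpy as np

def least_confidence(probs: np.ndarray) -> np.ndarray:
    """probs: (N, C) predicted class distributions for N unlabeled examples.
    Higher score = less confident prediction = more informative to label."""
    return 1.0 - probs.max(axis=1)

def pick_queries(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most uncertain examples to send to the annotator."""
    return np.argsort(-least_confidence(probs))[:k]
```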


Sensors
2021
Vol 21 (10)
pp. 3475
Author(s):  
Jia-Rong Ou ◽  
Shu-Le Deng ◽  
Jin-Gang Yu

Weakly supervised instance segmentation (WSIS) provides a promising way to address instance segmentation in the absence of sufficient labeled data for training. Previous attempts at WSIS usually follow a proposal-based paradigm, in which the proposal scoring strategy is critical. These works mostly rely on heuristic strategies for proposal scoring, which largely hampers sustainable advances in WSIS. To this end, this paper introduces a novel framework for weakly supervised instance segmentation, called Weakly Supervised R-CNN (WS-RCNN). The basic idea is to deploy a deep network that learns to score proposals under the special setting of weak supervision. To tackle the key issue of acquiring proposal-level pseudo labels for model training, we propose an Attention-Guided Pseudo Labeling (AGPL) strategy, which leverages the local maxima (peaks) in image-level attention maps and the spatial relationships among peaks and proposals to infer pseudo labels. We also suggest a novel training loss, called the Entropic OpenSet Loss, to handle background proposals more effectively and further improve robustness. Comprehensive experiments on two standard benchmark datasets demonstrate that the proposed WS-RCNN outperforms the state of the art by a large margin, with an improvement of 11.6% on PASCAL VOC 2012 and 10.7% on MS COCO 2014 in terms of mAP50, indicating that learning-based proposal scoring and the proposed WS-RCNN framework might be a promising way towards WSIS.
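The peak-in-box part of AGPL can be sketched as follows: a proposal inherits the class of an attention peak it contains, and proposals containing no peak are treated as background. This omits the fuller peak-proposal spatial reasoning the paper uses; the names are illustrative.

```python
import numpy as np

def pseudo_label_proposals(peaks, proposals, background=-1):
    """peaks: list of (x, y, class_id) local maxima from class attention maps.
    proposals: (P, 4) array of boxes as (x1, y1, x2, y2).
    Returns one pseudo label per proposal."""
    labels = np.full(len(proposals), background)
    for i, (x1, y1, x2, y2) in enumerate(proposals):
        for px, py, cls in peaks:
            if x1 <= px <= x2 and y1 <= py <= y2:   # proposal contains this peak
                labels[i] = cls
                break
    return labels
```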


Author(s):  
Bo Shao ◽  
Yeyun Gong ◽  
Junwei Bao ◽  
Jianshu Ji ◽  
Guihong Cao ◽  
...  

Semantic parsing is a challenging and important task which aims to convert a natural language sentence to a logical form. Existing neural semantic parsing methods mainly use <question, logical form> (Q-L) pairs to train a sequence-to-sequence model. However, the amount of existing Q-L labeled data is limited and hard to obtain. We propose an effective method that leverages labeling information from other tasks to enhance the training of a semantic parser. We design a multi-task learning model that trains question type classification and entity mention detection together with question semantic parsing using a shared encoder. We propose a weakly supervised learning method to enhance our multi-task learning model with paraphrase data, based on the idea that paraphrased questions should have the same logical form and question type information. Finally, we integrate the weakly supervised multi-task learning method into an encoder-decoder framework. Experiments on a newly constructed dataset and on ComplexWebQuestions show that our proposed method outperforms state-of-the-art methods, demonstrating the effectiveness and robustness of our method.
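A minimal PyTorch sketch of the shared-encoder multi-task setup described above follows; the dimensions, the BiLSTM encoder, and the two auxiliary heads are assumptions for illustration, and the logical-form decoder is left as a stub.

```python
import torch
import torch.nn as nn

class MultiTaskParser(nn.Module):
    """One shared encoder feeding three tasks: question type classification
    (sentence level), entity mention detection (token level), and semantic
    parsing (the encoder output would feed a logical-form decoder)."""
    def __init__(self, vocab_size: int, d: int = 256, n_question_types: int = 10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.type_head = nn.Linear(2 * d, n_question_types)
        self.mention_head = nn.Linear(2 * d, 2)   # inside/outside a mention

    def forward(self, tokens: torch.Tensor):
        h, _ = self.encoder(self.embed(tokens))   # (B, T, 2d) shared states
        q_type_logits = self.type_head(h.mean(dim=1))
        mention_logits = self.mention_head(h)
        return h, q_type_logits, mention_logits   # h also feeds the LF decoder
```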


2020
Vol 34 (05)
pp. 8066-8073
Author(s):  
Katharina Kann ◽  
Ophélie Lacroix ◽  
Anders Søgaard

Part-of-speech (POS) taggers for low-resource languages which are exclusively based on various forms of weak supervision – e.g., cross-lingual transfer, type-level supervision, or a combination thereof – have been reported to perform almost as well as supervised ones. However, weakly supervised POS taggers are commonly only evaluated on languages that are very different from truly low-resource languages, and the taggers use sources of information, like high-coverage and almost error-free dictionaries, that are unlikely to be available for resource-poor languages. We train and evaluate state-of-the-art weakly supervised POS taggers for a typologically diverse set of 15 truly low-resource languages. On these languages, given a realistic amount of resources, even our best model tags fewer than half of the words correctly. Our results highlight the need for new and different approaches to POS tagging for truly low-resource languages.
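Type-level supervision of the kind criticized here is easy to picture: a tag dictionary restricts each word's candidate tags before the model chooses. The sketch below, with a hypothetical `score` function, shows why a high-coverage, nearly error-free dictionary does so much of the work.

```python
def dictionary_constrained_tagging(tokens, tag_dict, all_tags, score):
    """tag_dict: word -> set of permitted tags (the type-level supervision);
    score(token, tag) -> float is a stand-in for the tagger's scoring model.
    Out-of-dictionary words fall back to the full tag set."""
    tags = []
    for tok in tokens:
        candidates = tag_dict.get(tok, all_tags)
        tags.append(max(candidates, key=lambda t: score(tok, t)))
    return tags
```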


2015
Author(s):  
Judith Gaspers ◽  
Philipp Cimiano ◽  
Britta Wrede
