Domain-Specific Keyword Extraction Using Joint Modeling of Local and Global Contextual Semantics

2022 ◽  
Vol 16 (4) ◽  
pp. 1-30
Author(s):  
Muhammad Abulaish ◽  
Mohd Fazil ◽  
Mohammed J. Zaki

Domain-specific keyword extraction is a vital task in the field of text mining. There are various research tasks, such as spam e-mail classification, abusive language detection, sentiment analysis, and emotion mining, where a set of domain-specific keywords (aka lexicon) is highly effective. Existing works on keyword extraction list all keywords from a document corpus rather than domain-specific ones. Moreover, most existing approaches perform well on formal document corpora but fail on the noisy, informal user-generated content of online social media. In this article, we present a hybrid approach that jointly models the local and global contextual semantics of words, utilizing the strength of distributional word representations and a contrasting-domain corpus for domain-specific keyword extraction. Starting with a seed set of a few domain-specific keywords, we model the text corpus as a weighted word-graph. In this graph, the initial weight of a node (word) represents its semantic association with the target domain, calculated as a linear combination of three semantic association metrics, and the weight of an edge connecting a pair of nodes represents the co-occurrence count of the respective words. Thereafter, a modified PageRank method is applied to the word-graph to identify the most relevant words for expanding the initial set of domain-specific keywords. We evaluate our method over both formal and informal text corpora (comprising six datasets), and show that it performs significantly better than state-of-the-art methods. Furthermore, we generalize our approach to the language-agnostic case, and show that it outperforms existing language-agnostic approaches.
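
A minimal sketch of the word-graph idea, assuming a generic co-occurrence window and a single placeholder seed-score per word (the paper combines three semantic association metrics, which are not reproduced here); personalized PageRank stands in for the modified PageRank:

```python
# Hypothetical sketch: weighted word-graph + personalized PageRank expansion.
from collections import Counter

import networkx as nx

def build_word_graph(sentences, seed_scores, window=5):
    """Nodes are words; edge weights are co-occurrence counts within a window.
    `seed_scores` maps words to their semantic association with the domain."""
    graph = nx.Graph()
    cooc = Counter()
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1 : i + window]:
                if w != v:
                    cooc[tuple(sorted((w, v)))] += 1
    for (w, v), count in cooc.items():
        graph.add_edge(w, v, weight=count)
    for node in graph.nodes:
        graph.nodes[node]["domain_score"] = seed_scores.get(node, 0.0)
    return graph

def expand_keywords(graph, top_k=20):
    """PageRank biased toward high domain-score nodes, a stand-in for the
    paper's modified PageRank."""
    personalization = {n: graph.nodes[n]["domain_score"] + 1e-9 for n in graph}
    ranks = nx.pagerank(graph, personalization=personalization, weight="weight")
    return sorted(ranks, key=ranks.get, reverse=True)[:top_k]

sentences = [["spam", "email", "filter"], ["spam", "offer", "free", "click"]]
print(expand_keywords(build_word_graph(sentences, {"spam": 1.0}), top_k=5))
```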

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7180
Author(s):  
Zhe Wang ◽  
Bo Yan ◽  
Chunhua Wu ◽  
Bin Wu ◽  
Xiujuan Wang ◽  
...  

Cross-domain relation extraction has become an essential approach when the target domain lacks labeled data. Most existing works adapt relation extraction models from the source domain to the target domain by aligning sequential features, but fail to transfer non-local and non-sequential features, such as word co-occurrence, which are also critical for cross-domain relation extraction. To address this issue, we propose a novel tripartite graph architecture that adapts non-local features when there is no labeled data in the target domain. The graph uses domain words as nodes to model the co-occurrence relation between domain-specific words and domain-independent words. Through graph convolutions on the tripartite graph, the information of domain-specific words is propagated so that the word representations can be fine-tuned to align domain-specific features. In addition, unlike a traditional graph structure, the edge weights innovatively combine a fixed weight and a dynamic weight to capture global non-local features while avoiding the introduction of noise into the word representations. Experiments on three domains of the ACE2005 dataset show that our method outperforms state-of-the-art models by a large margin.
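
A rough sketch of the hybrid edge-weighting idea: a graph-convolution layer whose adjacency mixes a fixed co-occurrence statistic with a dynamically computed attention term. The mixing parameter `alpha`, the attention form, and all shapes are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridWeightGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, alpha=0.5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.attn = nn.Linear(2 * in_dim, 1)  # produces dynamic edge scores
        self.alpha = alpha                    # balances fixed vs. dynamic weight

    def forward(self, x, fixed_adj):
        # x: (num_nodes, in_dim); fixed_adj: (num_nodes, num_nodes)
        n = x.size(0)
        # Dynamic weights from pairwise node features.
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        dyn = torch.sigmoid(self.attn(pairs)).squeeze(-1)      # (n, n)
        adj = self.alpha * fixed_adj + (1 - self.alpha) * dyn  # hybrid weights
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-9)  # row-normalize
        return F.relu(self.linear(adj @ x))

layer = HybridWeightGCNLayer(in_dim=16, out_dim=16)
x = torch.randn(5, 16)
fixed = torch.rand(5, 5)  # e.g. normalized co-occurrence counts
print(layer(x, fixed).shape)  # torch.Size([5, 16])
```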


Author(s):  
Gretel Liz De la Peña Sarracén ◽  
Paolo Rosso

The proliferation of harmful content on social media affects a large part of the user community. Therefore, several approaches have emerged to control this phenomenon automatically. However, this is still a quite challenging task. In this paper, we explore offensive language as a particular case of harmful content and focus our study on the analysis of keywords in available datasets composed of offensive tweets. Thus, we aim to identify relevant words in those datasets and analyze how they can affect model learning. For keyword extraction, we propose an unsupervised hybrid approach which combines the multi-head self-attention of BERT with reasoning on a word graph. The attention mechanism captures relationships among words in context while a language model is learned. The relationships are then used to generate a graph, from which we identify the most relevant words using eigenvector centrality. Experiments were performed by means of two mechanisms. On the one hand, we used an information retrieval system to evaluate the impact of the keywords in recovering offensive tweets from a dataset. On the other hand, we evaluated a keyword-based model for offensive language detection. The results highlight some points to consider when training models with the available datasets.
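
A minimal sketch of this pipeline, assuming a HuggingFace BERT checkpoint: attention weights averaged over layers and heads serve as edge weights of a word graph, and eigenvector centrality ranks the words. Token-to-word alignment and the paper's exact graph reasoning are simplified away here:

```python
import networkx as nx
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def keyword_scores(text, top_k=5):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Average attention over all layers, the batch, and all heads: (seq, seq).
    attn = torch.stack(outputs.attentions).mean(dim=(0, 1, 2))
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    graph = nx.Graph()
    for i, ti in enumerate(tokens):
        for j, tj in enumerate(tokens):
            # Skip special tokens and subword pieces (non-alphabetic strings).
            if i < j and ti.isalpha() and tj.isalpha():
                graph.add_edge(ti, tj, weight=float(attn[i, j] + attn[j, i]))
    centrality = nx.eigenvector_centrality(graph, weight="weight", max_iter=500)
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

print(keyword_scores("you are such a waste of space honestly"))
```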


2012 ◽  
Vol 26 (3) ◽  
pp. 318-334 ◽  
Author(s):  
Eli Tsukayama ◽  
Angela Lee Duckworth ◽  
Betty Kim

We propose a model of impulsivity that predicts both domain-general and domain-specific variance in behaviours that produce short-term gratification at the expense of long-term goals and standards. Specifically, we posit that domain-general impulsivity is explained by domain-general self-control strategies and resources, whereas domain-specific impulsivity is explained by how tempting individuals find various impulsive behaviours and, to a lesser extent, by perceptions of their long-term harm. Using a novel self-report measure, factor analyses produced six (non-exhaustive) domains of impulsive behaviour (Studies 1-2): work, interpersonal relationships, drugs, food, exercise and finances. Domain-general self-control explained 40% of the variance in domain-general impulsive behaviour between individuals, r_effect = .71. Domain-specific temptation (r_effect = .83) and perceived harm (r_effect = -.26) explained 40% and 2% of the unique within-individual variance in impulsive behaviour, respectively (59% together). In Study 3, we recruited individuals from special interest groups (e.g. procrastinators) to confirm that individuals who are especially tempted by behaviours in their target domain are not likely to be more tempted in non-target domains. Copyright © 2011 John Wiley & Sons, Ltd.


Author(s):  
Xin Liu ◽  
Kai Liu ◽  
Xiang Li ◽  
Jinsong Su ◽  
Yubin Ge ◽  
...  

The lack of sufficient training data in many domains poses a major challenge to the construction of domain-specific machine reading comprehension (MRC) models with satisfactory performance. In this paper, we propose a novel iterative multi-source mutual knowledge transfer framework for MRC. As an extension of conventional knowledge transfer with one-to-one correspondence, our framework focuses on many-to-many mutual transfer, which involves the synchronous execution of multiple many-to-one transfers in an iterative manner. Specifically, to update a target-domain MRC model, we first consider the other domain-specific MRC models as individual teachers and employ knowledge distillation to train a multi-domain MRC model, which is differentially required to fit the training data and match the outputs of these individual models according to their domain-level similarities to the target domain. After being initialized by the multi-domain MRC model, the target-domain MRC model is fine-tuned to match both its training data and the output of its previous best model simultaneously via knowledge distillation. Compared with previous approaches, our framework can continuously enhance all domain-specific MRC models by enabling each model to iteratively and differentially absorb the domain-shared knowledge of the others. Experimental results and in-depth analyses on several benchmark datasets demonstrate the effectiveness of our framework.
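
A simplified sketch of the distillation objective described above: the student fits the hard labels while matching each teacher's soft output, weighted by that teacher's similarity to the target domain. The similarity weights and temperature are placeholders, not the paper's values:

```python
import torch
import torch.nn.functional as F

def mutual_distillation_loss(student_logits, labels, teacher_logits_list,
                             domain_similarities, temperature=2.0):
    # Supervised loss on the target-domain training data.
    loss = F.cross_entropy(student_logits, labels)
    t = temperature
    # Normalize domain-level similarities into per-teacher weights.
    weights = torch.softmax(torch.tensor(domain_similarities), dim=0)
    for w, teacher_logits in zip(weights, teacher_logits_list):
        soft_teacher = F.softmax(teacher_logits / t, dim=-1)
        log_soft_student = F.log_softmax(student_logits / t, dim=-1)
        kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
        loss = loss + w * (t ** 2) * kl
    return loss

student = torch.randn(4, 3, requires_grad=True)
teachers = [torch.randn(4, 3), torch.randn(4, 3)]
labels = torch.tensor([0, 2, 1, 0])
print(mutual_distillation_loss(student, labels, teachers, [0.7, 0.3]))
```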


2020 ◽  
Vol 34 (05) ◽  
pp. 7780-7788
Author(s):  
Siddhant Garg ◽  
Thuy Vu ◽  
Alessandro Moschitti

We propose TandA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large, high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving impressive MAP scores of 92% and 94.3%, respectively, which largely outperform the previous highest scores of 83.4% and 87.5%. We empirically show that TandA generates more stable and robust models, reducing the effort required to select optimal hyper-parameters. Additionally, we show that the transfer step of TandA makes the adaptation step more robust to noise, enabling more effective use of noisy datasets for fine-tuning. Finally, we also confirm the positive impact of TandA in an industrial setting, using domain-specific datasets subject to different types of noise.
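
A schematic sketch of the transfer-then-adapt recipe: one fine-tuning pass on a large general answer-selection dataset, then a second on the target data. The data loaders are placeholders, and a standard sequence-pair classification head is assumed:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def fine_tune(model, dataloader, lr, epochs=1):
    """One fine-tuning pass; each batch must carry input_ids, attention_mask,
    and labels so the model returns a loss."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Step 1 (transfer): fine-tune on a large general dataset derived from
# Natural Questions (hypothetical loader, not defined here).
# model = fine_tune(model, general_qa_loader, lr=2e-5)
# Step 2 (adapt): fine-tune the transferred model on the target domain.
# model = fine_tune(model, target_domain_loader, lr=1e-5)
```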


2002 ◽  
Vol 17 (1) ◽  
pp. 65-70 ◽  
Author(s):  
Adam Pease ◽  
Ian Niles

The IEEE Standard Upper Ontology (IEEE, 2001) is an effort to create a large, general-purpose, formal ontology. The ontology will be an open standard that can be reused for both academic and commercial purposes without fee, and it will be designed to support additional domain-specific ontologies. The effort is targeted for use in automated inference, semantic interoperability between heterogeneous information systems and natural language processing applications. The effort was begun in May 2000 with an e-mail discussion list, and since then there have been over 6000 e-mail messages among 170 subscribers. These subscribers include representatives from government, academia and industry in various countries. The effort was officially approved as an IEEE standards project in December 2000. Recently a successful workshop was held at IJCAI 2001 to discuss progress and proposals for this project (IJCAI, 2001).


2020 ◽  
Vol 34 (04) ◽  
pp. 6243-6250 ◽  
Author(s):  
Qian Wang ◽  
Toby Breckon

Unsupervised domain adaptation aims to classify unlabeled samples from the target domain when labeled samples are available only from the source domain and the data distributions of the two domains differ. As a result, classifiers trained on labeled samples in the source domain suffer a significant performance drop when directly applied to samples from the target domain. To address this issue, different approaches have been proposed to learn domain-invariant features or domain-specific classifiers. In either case, the lack of labeled samples in the target domain is an issue, usually overcome by pseudo-labeling. Inaccurate pseudo-labeling, however, can cause catastrophic error accumulation during learning. In this paper, we propose a novel selective pseudo-labeling strategy based on structured prediction. The idea of structured prediction is inspired by the fact that samples in the target domain are well clustered within the deep feature space, so unsupervised clustering analysis can be used to facilitate accurate pseudo-labeling. Experimental results on four datasets (i.e. Office-Caltech, Office31, ImageCLEF-DA and Office-Home) validate that our approach outperforms contemporary state-of-the-art methods.
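
A minimal sketch of cluster-driven selective pseudo-labeling, assuming deep features are already extracted: target samples are clustered, each cluster takes the label of the nearest source class centroid, and only the samples closest to their cluster center are kept. The selection rule and ratio are placeholders, not the paper's structured-prediction formulation:

```python
import numpy as np
from sklearn.cluster import KMeans

def selective_pseudo_labels(src_feats, src_labels, tgt_feats,
                            num_classes, keep_ratio=0.5):
    # Class centroids in the source domain.
    centroids = np.stack(
        [src_feats[src_labels == c].mean(axis=0) for c in range(num_classes)]
    )
    km = KMeans(n_clusters=num_classes, n_init=10).fit(tgt_feats)
    # Label each target cluster by its nearest source class centroid.
    cluster_to_class = np.array([
        np.linalg.norm(centroids - center, axis=1).argmin()
        for center in km.cluster_centers_
    ])
    pseudo = cluster_to_class[km.labels_]
    # Keep only the samples closest to their cluster center.
    dists = np.linalg.norm(tgt_feats - km.cluster_centers_[km.labels_], axis=1)
    selected = dists <= np.quantile(dists, keep_ratio)
    return pseudo[selected], selected

src = np.random.randn(100, 8); src_y = np.random.randint(0, 3, 100)
tgt = np.random.randn(50, 8)
labels, mask = selective_pseudo_labels(src, src_y, tgt, num_classes=3)
print(labels.shape, mask.sum())
```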


AI Magazine ◽  
2010 ◽  
Vol 31 (3) ◽  
pp. 93 ◽  
Author(s):  
Stephen Soderland ◽  
Brendan Roof ◽  
Bo Qin ◽  
Shi Xu ◽  
Mausam ◽  
...  

Information extraction (IE) can identify a set of relations from free text to support question answering (QA). Until recently, IE systems were domain-specific and needed a combination of manual engineering and supervised learning to adapt to each target domain. A new paradigm, Open IE, operates on large text corpora without any manual tagging of relations, and indeed without any pre-specified relations. Due to its open-domain and open-relation nature, Open IE is purely textual and is unable to relate surface forms to an ontology, even if one is known in advance. We explore the steps needed to adapt Open IE to a domain-specific ontology and demonstrate our approach of mapping domain-independent tuples to an ontology using domains from DARPA's Machine Reading Project. Our system achieves precision over 0.90 from as few as 8 training examples for an NFL-scoring domain.
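
A toy sketch of the mapping problem: assigning an open-domain (arg1, relation phrase, arg2) tuple to a fixed ontology relation from a handful of labeled examples, here via simple token-overlap similarity. This greatly simplifies, and does not reproduce, the authors' actual mapping approach; the example phrases and relations are invented:

```python
def jaccard(a, b):
    """Token-overlap similarity between two phrases."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def map_tuple(rel_phrase, labeled_examples, threshold=0.3):
    """labeled_examples: list of (relation phrase, ontology relation) pairs."""
    best = max(labeled_examples, key=lambda ex: jaccard(rel_phrase, ex[0]))
    return best[1] if jaccard(rel_phrase, best[0]) >= threshold else None

examples = [
    ("kicked a field goal against", "FieldGoal"),
    ("scored a touchdown against", "Touchdown"),
]
print(map_tuple("booted a 40-yard field goal past", examples))  # FieldGoal
```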


2014 ◽  
Vol 29 ◽  
pp. 39-52 ◽  
Author(s):  
Delroy Cameron ◽  
Amit P. Sheth ◽  
Nishita Jaykumar ◽  
Krishnaprasad Thirunarayan ◽  
Gaurish Anand ◽  
...  
