Clustering-based two-stage text classification requiring minimal training data

2012 ◽  
Vol 9 (4) ◽  
pp. 1627-1643 ◽  
Author(s):  
Xue Zhang ◽  
Wang-Xin Xiao

Clustering has been employed to expand the training data in some semi-supervised learning methods. Clustering-based methods rest on the assumption that the clusters learned under the guidance of the initial training data can, to some extent, characterize the underlying distribution of the data set. However, our experiments show that whether this assumption holds depends on both the separability of the data set and the size of the training set. The assumption is often violated on data sets with poor separability, especially when the initial training data are very scarce, and clustering-based methods then perform worse. In this paper, we propose a clustering-based two-stage text classification approach to address this problem. In the first stage, labeled and unlabeled data are clustered under the guidance of the labeled data, and a self-training-style clustering strategy is then used to iteratively expand the training data with the help of an oracle or expert. In the second stage, discriminative classifiers are trained on the expanded labeled data set. Unlike other clustering-based methods, the proposed clustering strategy can effectively cope with data of poor separability. Furthermore, the proposed framework converts the challenging problem of sparsely labeled text classification into a supervised one, so supervised classification models such as SVM can be applied, and techniques developed for supervised learning, such as feature selection, sampling methods, and data editing or noise filtering, can be used to further improve classification accuracy. Our experimental results demonstrate the effectiveness of the proposed approach, especially when the training data set is very small.
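As an illustration of the two-stage pipeline, a minimal scikit-learn sketch follows; the class-mean seeding, the single (non-iterative) expansion pass, the oracle callback, and the query budget are simplifying assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def two_stage_classify(X_lab, y_lab, X_unlab, oracle, n_classes, budget=20):
    """Stage 1: cluster all data (seeded by labeled class means) and expand the
    labeled set with oracle answers; Stage 2: train a discriminative SVM."""
    # Seed one centroid per class with the mean of its labeled examples.
    seeds = np.vstack([X_lab[y_lab == c].mean(axis=0) for c in range(n_classes)])
    km = KMeans(n_clusters=n_classes, init=seeds, n_init=1)
    km.fit(np.vstack([X_lab, X_unlab]))

    # Self-training-style expansion: query the oracle on the unlabeled points
    # lying closest to a centroid, then add them to the training set.
    dists = km.transform(X_unlab)                 # distance to every centroid
    closest = np.argsort(dists.min(axis=1))[:budget]
    X_new = np.vstack([X_lab, X_unlab[closest]])
    y_new = np.concatenate([y_lab, [oracle(x) for x in X_unlab[closest]]])

    # Stage 2: supervised classifier on the expanded labeled set.
    return LinearSVC().fit(X_new, y_new)
```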

2012 ◽  
Vol 9 (4) ◽  
pp. 1513-1532 ◽  
Author(s):  
Xue Zhang ◽  
Wangxin Xiao

In order to address the problem of insufficient training data, many active semi-supervised algorithms have been proposed. The self-labeled training data in semi-supervised learning may contain much noise precisely because the training data are insufficient. Such noise may snowball during the subsequent learning process and thus hurt the generalization ability of the final hypothesis. The extremely few labeled training data available in sparsely labeled text classification aggravate this situation. If such noise could be identified and removed by some strategy, the performance of active semi-supervised algorithms should improve. However, techniques for identifying and removing noise have seldom been explored in existing active semi-supervised algorithms. In this paper, we propose an active semi-supervised framework with data editing (called ASSDE) to improve sparsely labeled text classification. A data editing technique is used to identify and remove the noise introduced by semi-supervised labeling. We carry out the data editing by fully exploiting the advantage of active learning, which, to our knowledge, is novel. The fusion of active learning with data editing makes ASSDE more robust to the sparsity and the distribution bias of the training data, and it simplifies the design of the semi-supervised learner, which makes ASSDE more efficient. An extensive experimental study on several real-world text data sets shows encouraging results for the proposed framework on sparsely labeled text classification, compared with several state-of-the-art methods.
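The interplay of active querying, semi-supervised labeling, and data editing can be sketched as below; the logistic-regression base learner, the kNN-agreement edit, the confidence threshold, and the query budget are illustrative choices, not the exact ASSDE design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def assde_like(X_lab, y_lab, X_unlab, oracle, rounds=5, conf=0.9, queries=10):
    """Self-training loop that (i) actively queries an oracle on the least
    confident unlabeled points and (ii) edits the self-labeled points by
    dropping those whose pseudo-label disagrees with a kNN vote."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        proba = clf.predict_proba(pool)
        confidence = proba.max(axis=1)

        # Active learning: hand the least confident examples to the oracle.
        ask = np.argsort(confidence)[:queries]
        X = np.vstack([X, pool[ask]])
        y = np.concatenate([y, [oracle(x) for x in pool[ask]]])

        # Semi-supervised labeling of the most confident examples.
        take = confidence >= conf
        take[ask] = False                      # do not pseudo-label queried points
        cand_X = pool[take]
        cand_y = clf.classes_[proba[take].argmax(axis=1)]

        # Data editing: keep a self-labeled point only if a kNN classifier
        # trained on the trusted data agrees with its pseudo-label.
        if len(cand_X):
            knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
            keep = knn.predict(cand_X) == cand_y
            X = np.vstack([X, cand_X[keep]])
            y = np.concatenate([y, cand_y[keep]])

        pool = np.delete(pool, np.concatenate([ask, np.where(take)[0]]), axis=0)
        if len(pool) == 0:
            break
    return LogisticRegression(max_iter=1000).fit(X, y)
```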


Author(s):  
Eric Larsen ◽  
Sébastien Lachapelle ◽  
Yoshua Bengio ◽  
Emma Frejinger ◽  
Simon Lacoste-Julien ◽  
...  

This paper offers a methodological contribution at the intersection of machine learning and operations research. Namely, we propose a methodology to quickly predict expected tactical descriptions of operational solutions (TDOSs). The problem we address arises in the context of two-stage stochastic programming, where the second stage is computationally demanding. We aim to predict, at high speed, the expected TDOS associated with the second-stage problem, conditionally on the first-stage variables. This prediction can support the solution of the overall two-stage problem by avoiding the online generation of multiple second-stage scenarios and solutions. We formulate the tactical prediction problem as a stochastic optimal prediction program, whose solution we approximate with supervised machine learning. The training data set consists of a large number of deterministic operational problems generated by controlled probabilistic sampling. The labels are computed from the solutions to these problems (solved independently and offline), using appropriate aggregation and subselection methods to address uncertainty. Results on our motivating application, load planning for rail transportation, show that deep learning models produce accurate predictions in very short computing time (milliseconds or less). The predictive accuracy is close to the lower bounds calculated by sample average approximation of the stochastic prediction programs.
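A schematic of the supervised setup, assuming a hypothetical solve_second_stage callback that returns a solution descriptor for one sampled scenario; the scenario mean as aggregation and the MLP regressor are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_labels(first_stage_X, solve_second_stage, n_scenarios=100):
    """Offline: for each sampled first-stage decision, solve many second-stage
    scenarios and aggregate their descriptors into an expected-TDOS label."""
    labels = []
    for x in first_stage_X:
        descriptors = [solve_second_stage(x, seed=s) for s in range(n_scenarios)]
        labels.append(np.mean(descriptors, axis=0))   # aggregation: scenario mean
    return np.asarray(labels)

def fit_predictor(first_stage_X, labels):
    """Online: the trained regressor returns an expected TDOS in milliseconds,
    replacing repeated scenario generation and optimization."""
    return MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500).fit(first_stage_X, labels)
```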


2021 ◽  
Vol 13 (7) ◽  
pp. 1236
Author(s):  
Yuanjun Shu ◽  
Wei Li ◽  
Menglong Yang ◽  
Peng Cheng ◽  
Songchen Han

Convolutional neural networks (CNNs) have been widely used for change detection in synthetic aperture radar (SAR) images and have been shown to achieve better precision than traditional methods. This paper proposes a two-stage patch-based deep learning method with a label updating strategy. An initial label map and mask are generated in a pre-classification stage, and a two-stage updating strategy is then applied to gradually recover the changed areas. In the first stage, the diversity of the training data is gradually restored: the output of the designed CNN is post-processed to produce a new label map and a new mask for the following learning iteration. Because the diversity of the data is ensured after the first stage, pixels within uncertain areas can be classified more easily in the second stage. Experimental results on several representative datasets show the effectiveness of the proposed method compared with several existing competitive methods.
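The label updating loop can be sketched as follows; train_cnn and predict are hypothetical callables standing in for the paper's network, and the confidence thresholds are illustrative.

```python
import numpy as np

def update_labels(prob, low=0.3, high=0.7):
    """Turn per-pixel change probabilities into a new label map and mask.
    Confident pixels get hard labels; uncertain pixels are masked out of the
    next training round (thresholds are illustrative)."""
    label = (prob >= high).astype(np.uint8)     # 1 = changed, 0 = unchanged
    mask = (prob <= low) | (prob >= high)       # train only on confident pixels
    return label, mask

def iterative_training(train_cnn, predict, patches, init_label, init_mask, rounds=3):
    """Repeatedly retrain on the currently trusted pixels and refresh the
    label map, gradually recovering changed areas."""
    label, mask = init_label, init_mask
    for _ in range(rounds):
        model = train_cnn(patches, label, mask)   # fit on trusted pixels only
        prob = predict(model, patches)            # per-pixel change probability
        label, mask = update_labels(prob)
    return model
```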


2013 ◽  
Vol 427-429 ◽  
pp. 2309-2312
Author(s):  
Hai Bin Mei ◽  
Ming Hua Zhang

Alert classifiers built with supervised classification techniques require large amounts of labeled training alerts. Preparing such training data is very difficult and expensive, which greatly restricts the accuracy and feasibility of current classifiers. This paper employs semi-supervised learning to build an alert classification model and thereby reduce the number of labeled training alerts needed. Alert context properties are also introduced to improve classification performance. Experiments demonstrate the accuracy and feasibility of our approach.
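A minimal sketch of such a setup with scikit-learn's self-training wrapper; the random-forest base learner, the confidence threshold, and the exact shape of the context features are assumptions, not the paper's configuration.

```python
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.ensemble import RandomForestClassifier

def train_alert_classifier(X, y):
    """X holds alert features plus context properties (e.g. counts of similar
    alerts in a time window); y uses -1 for unlabeled alerts, as scikit-learn
    expects for semi-supervised estimators."""
    base = RandomForestClassifier(n_estimators=200)
    return SelfTrainingClassifier(base, threshold=0.9).fit(X, y)
```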


Author(s):  
Tobias Scheffer

For many classification problems, unlabeled training data are inexpensive and readily available, whereas labeling training data imposes costs. Semi-supervised classification algorithms aim at utilizing information contained in unlabeled data in addition to the (few) labeled data.
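As a concrete instance, a graph-based semi-supervised learner can consume the unlabeled points directly; the toy data below is purely illustrative.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Unlabeled examples are marked with -1; the graph-based method propagates
# the few known labels through the data manifold.
X = np.random.RandomState(0).randn(200, 5)
y = np.full(200, -1)
y[:10] = (X[:10, 0] > 0).astype(int)            # only 10 labeled points
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
```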


2021 ◽  
Author(s):  
Mahdi Abdollahi ◽  
Xiaoying Gao ◽  
Yi Mei ◽  
S Ghosh ◽  
J Li

Document classification (DC) is the task of assigning pre-defined labels to unseen documents using a model trained on the available labeled documents. DC has recently attracted much attention in the medical field because many issues can be formulated as classification problems. It can assist doctors in decision making, and correct decisions can reduce medical expenses. Medical documents have special attributes that distinguish them from other texts and make them difficult to analyze; for example, numerous acronyms, abbreviations, and short expressions make information extraction more challenging. The classification accuracy of current medical DC methods is not satisfactory. The goal of this work is to enrich the input feature sets of the DC method in order to improve accuracy. To this end, a novel two-stage approach is proposed. In the first stage, a domain-specific dictionary, the Unified Medical Language System (UMLS), is employed to extract the key features belonging to the most relevant concepts, such as diseases or symptoms. In the second stage, particle swarm optimization (PSO) is applied to select the most related features from those extracted in the first stage. The performance of the proposed approach is evaluated on the 2010 Informatics for Integrating Biology and the Bedside (i2b2) data set, a widely used medical text dataset. The experimental results show that the proposed method substantially improves classification accuracy.
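The second-stage selection can be illustrated with a generic binary PSO over a feature mask; the inertia and acceleration constants, the naive Bayes evaluator, and the cross-validated fitness are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

def pso_feature_selection(X, y, n_particles=20, iters=30, seed=0):
    """Binary PSO over a feature mask; fitness is cross-validated accuracy
    of a simple classifier on the selected columns."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # 1 = feature selected
    vel = rng.normal(0.0, 1.0, (n_particles, d))

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(MultinomialNB(), X[:, mask.astype(bool)], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)            # boolean mask over the stage-one features
```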


Author(s):  
Maria Dimakopoulou ◽  
Zhengyuan Zhou ◽  
Susan Athey ◽  
Guido Imbens

Contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as to the exploration method used, particularly in the presence of rich heterogeneity or complex outcome models, which can lead to difficult estimation problems along the path of learning. We develop algorithms for contextual bandits with linear payoffs that integrate balancing methods from the causal inference literature into their estimation, making them less prone to problems of estimation bias. We provide the first regret bound analyses for linear contextual bandits with balancing and show that our algorithms match the state-of-the-art theoretical guarantees. We demonstrate the strong practical advantage of balanced contextual bandits on a large number of supervised learning datasets and on a synthetic example that simulates model misspecification and prejudice in the initial training data.
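A simplified sketch of one way to realize the balancing idea: per-arm ridge estimates are reweighted by clipped inverse propensity scores inside a LinUCB-style policy. The exploration rule, the clipping constant, and the propensity input are assumptions, not the authors' exact algorithm.

```python
import numpy as np

class BalancedLinUCB:
    """Linear contextual bandit whose per-arm least-squares statistics are
    reweighted by inverse propensity scores (a simple stand-in for balancing)."""
    def __init__(self, n_arms, dim, alpha=1.0, lam=1.0):
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.alpha, self.n_arms = alpha, n_arms

    def choose(self, x):
        scores = []
        for a in range(self.n_arms):
            theta = np.linalg.solve(self.A[a], self.b[a])      # ridge estimate
            bonus = self.alpha * np.sqrt(x @ np.linalg.solve(self.A[a], x))
            scores.append(x @ theta + bonus)                   # optimism bonus
        return int(np.argmax(scores))

    def update(self, x, arm, reward, propensity):
        w = 1.0 / max(propensity, 1e-2)        # clipped inverse propensity weight
        self.A[arm] += w * np.outer(x, x)
        self.b[arm] += w * reward * x
```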


Energies ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 2148 ◽  
Author(s):  
Pascal A. Schirmer ◽  
Iosif Mporas ◽  
Akbar Sheikh-Akbari

A data-driven methodology to improve energy disaggregation accuracy in non-intrusive load monitoring is proposed. The method uses a two-stage scheme: in the first stage, classification models process the aggregated signal in parallel, each producing a binary device detection score; in the second stage, fusion regression models estimate the power consumption of each electrical appliance. The accuracy of the proposed approach was tested on three publicly available datasets, ECO (Electricity Consumption & Occupancy), REDD (Reference Energy Disaggregation Data Set), and iAWE (Indian Dataset for Ambient Water and Energy), using four different classifiers. The presented approach improves the estimation accuracy by up to 4.1% with respect to a basic energy disaggregation architecture, while the improvement at the device level reaches up to 10.1%. Device-level analysis shows a significant improvement in power consumption estimation accuracy, especially for continuous and nonlinear appliances, across all evaluated datasets.
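The two-stage structure can be sketched roughly as below; the random-forest models, the concatenation-based fusion, and the assumption that every device appears in both on and off states in the training windows are illustrative simplifications.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_two_stage_nilm(X_agg, on_off, power):
    """X_agg: feature windows of the aggregate signal; on_off[d]: binary state
    of device d per window; power[d]: its consumption per window."""
    # Stage 1: one binary detector per device, run in parallel on the aggregate.
    detectors = [RandomForestClassifier(n_estimators=100).fit(X_agg, on_off[d])
                 for d in range(len(on_off))]
    scores = np.column_stack([det.predict_proba(X_agg)[:, 1] for det in detectors])

    # Stage 2: fusion regression on features plus all detection scores.
    fused = np.hstack([X_agg, scores])
    regressors = [RandomForestRegressor(n_estimators=100).fit(fused, power[d])
                  for d in range(len(power))]
    return detectors, regressors
```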


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pengcheng Li ◽  
Qikai Liu ◽  
Qikai Cheng ◽  
Wei Lu

Purpose: This paper aims to identify data set entities in scientific literature. To address the poor recognition caused by a lack of training corpora in existing studies, a distant-supervision-based approach is proposed to identify data set entities automatically in large-scale scientific literature in an open domain. Design/methodology/approach: First, the authors use a dictionary combined with a bootstrapping strategy to create a labelled corpus for supervised learning. Second, a bidirectional encoder representations from transformers (BERT)-based neural model is applied to identify data set entities in the scientific literature automatically. Finally, two data augmentation techniques, entity replacement and entity masking, are introduced to enhance model generalisability and improve the recognition of data set entities. Findings: In the absence of manually labelled training data, the proposed method can effectively identify data set entities in large-scale scientific papers. The BERT-based vectorised representation and the data augmentation techniques yield significant improvements in the generality and robustness of named entity recognition models, especially for long-tailed data set entities. Originality/value: This paper provides a practical method for automatically recognising data set entities in scientific literature. To the best of the authors' knowledge, this is the first attempt to apply distant supervision to data set entity recognition. The authors introduce a robust vectorised representation and two data augmentation strategies (entity replacement and entity masking) to address a problem inherent in distantly supervised learning that existing research has mostly ignored. The experimental results demonstrate that the approach effectively improves the recognition of data set entities, especially long-tailed ones.
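The two augmentation operations can be sketched on BIO-tagged token sequences as below; the tag names, the single-token replacement simplification, and the mask token are assumptions, not the authors' exact implementation.

```python
import random

def entity_replacement(tokens, tags, dataset_names):
    """Swap each dataset mention for a random name from the dictionary
    (replacement names are treated as single tokens for simplicity)."""
    out_tok, out_tag = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-DATASET":
            out_tok.append(random.choice(dataset_names))
            out_tag.append("B-DATASET")
        elif tag == "I-DATASET":
            continue                       # the whole original mention is replaced
        else:
            out_tok.append(tok)
            out_tag.append(tag)
    return out_tok, out_tag

def entity_masking(tokens, tags, mask_token="[MASK]"):
    """Replace dataset mentions with a mask token so the model must rely on
    context rather than memorised names."""
    masked = [mask_token if tag.endswith("DATASET") else tok
              for tok, tag in zip(tokens, tags)]
    return masked, tags
```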


2021 ◽  
Vol 17 (12) ◽  
pp. 155014772110599
Author(s):  
Zhong Li ◽  
Huimin Zhuang

Nowadays, address resolution protocol (ARP) attacks are still rampant in the industrial Internet of things. Recently, many scholars have proposed applying the software-defined networking paradigm to the industrial Internet of things, since this paradigm offers flexible deployment of intelligent algorithms and global coordination capabilities. These advantages prompt us to propose a multi-factor integration-based semi-supervised learning ARP detection method deployed in software-defined networking, called MIS, which specifically addresses the problems of limited labeled training data and incomplete feature extraction in traditional ARP detection methods. In MIS, we design a multi-factor integration-based feature extraction method and propose a semi-supervised learning framework with differential priority sampling. MIS considers ARP attack features from different aspects to help the model make correct judgments, while differential priority sampling enables the base learner in self-training to learn efficiently from unlabeled samples that differ from one another. We conduct experiments on a real data set collected from a deepwater port and on a simulated data set. The experiments show that MIS achieves good performance in detecting ARP attacks, with an F1-measure, accuracy, and area under the curve of 97.28%, 99.41%, and 98.36% on average. Compared with fully supervised learning and other popular ARP detection methods, MIS also shows the best performance.
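One reading of the self-training component with priority sampling is sketched below; the gradient-boosting base learner, the distance-based priority, and the confidence threshold are assumptions for illustration, not the MIS design itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import pairwise_distances

def self_train_priority(X_lab, y_lab, X_unlab, rounds=5, batch=50, conf=0.95):
    """Self-training in which confidently pseudo-labelled samples are added in
    order of their distance to the current training set, so the learner sees
    the most 'different' samples first."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        clf = GradientBoostingClassifier().fit(X, y)
        proba = clf.predict_proba(pool)
        confident = np.where(proba.max(axis=1) >= conf)[0]
        if len(confident) == 0:
            break
        # Priority: farthest (most different) confident samples first.
        dist = pairwise_distances(pool[confident], X).min(axis=1)
        chosen = confident[np.argsort(-dist)[:batch]]
        X = np.vstack([X, pool[chosen]])
        y = np.concatenate([y, clf.classes_[proba[chosen].argmax(axis=1)]])
        pool = np.delete(pool, chosen, axis=0)
    return GradientBoostingClassifier().fit(X, y)
```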

