Preserve Integrity in Realtime Event Summarization

2021 ◽  
Vol 15 (3) ◽  
pp. 1-29
Author(s):  
Chen Lin ◽  
Zhichao Ouyang ◽  
Xiaoli Wang ◽  
Hui Li ◽  
Zhenhua Huang

Online text streams such as Twitter are the major information source for users when they are looking for ongoing events. Realtime event summarization aims to generate and update coherent and concise summaries to describe the state of a given event. Due to the enormous volume of continuously arriving texts, realtime event summarization has become the de facto tool to facilitate information acquisition. However, there exists a challenging yet unexplored issue in current text summarization techniques: how to preserve integrity, i.e., the accuracy and consistency of summaries during the update process. The issue is critical because online text streams are dynamic and conflicting information can spread during the event period. For example, conflicting numbers of deaths and injuries might be reported after an earthquake. Such misleading information should not appear in the earthquake summary at any timestamp. In this article, we present a novel realtime event summarization framework called IAEA (Integrity-Aware Extractive-Abstractive realtime event summarization). Our key idea is to integrate an inconsistency detection module into a unified extractive-abstractive framework. In each update, important new tweets are first selected by an extractive module, and the selection is refined by explicitly detecting inconsistencies between new tweets and previous summaries. The extractive module captures sentence-level attention, which is later used by an abstractive module to obtain word-level attention. Finally, the word-level attention is leveraged to rephrase words. We conduct comprehensive experiments on real-world datasets. To reduce the effort required to build sufficient training data, we also provide automatic labeling steps whose effectiveness has been empirically verified. Through experiments, we demonstrate that IAEA generates better summaries with more consistent information than state-of-the-art approaches.
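
The paper's inconsistency detector is learned jointly with the extractive module; as a rough, rule-based illustration of the kind of conflict it must catch (the helper below, its keyword list, and the regex are illustrative assumptions, not the authors' method), one can flag a candidate tweet whose reported figures contradict numbers already stated in the running summary:

```python
import re

def numeric_conflict(new_tweet: str, summary: str,
                     keywords=("dead", "injured")) -> bool:
    """Toy stand-in for the learned inconsistency detector: flag a tweet whose
    reported figures contradict numbers already present in the summary."""
    def figures(text):
        out = {}
        for kw in keywords:
            # capture "<number> <keyword>" patterns, e.g. "120 dead"
            for m in re.finditer(r"(\d+)\s+" + kw, text.lower()):
                out.setdefault(kw, set()).add(int(m.group(1)))
        return out

    new_nums, old_nums = figures(new_tweet), figures(summary)
    # a conflict exists if both texts mention the same keyword with different numbers
    return any(kw in old_nums and new_nums[kw] != old_nums[kw] for kw in new_nums)

print(numeric_conflict("Officials confirm 120 dead after the quake.",
                       "Earthquake summary: 95 dead, rescue ongoing."))  # True
```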

2021 ◽  
Vol 11 (21) ◽  
pp. 9938
Author(s):  
Kun Shao ◽  
Yu Zhang ◽  
Junan Yang ◽  
Hui Liu

Deep learning models are vulnerable to backdoor attacks. The success rate of textual backdoor attacks based on data poisoning reaches as high as 100% in existing research. To strengthen natural language processing models against backdoor attacks, we propose a textual backdoor defense method based on poisoned-sample recognition. Our method consists of two steps. In the first step, we add a controlled noise layer after the model's embedding layer and train a preliminary model in which the backdoor is only partially embedded or not embedded at all, which reduces the effectiveness of poisoned samples; we then use this model to identify candidate poisoned samples in the training set, narrowing the search range. In the second step, we use all the training data to train an infected model with the backdoor embedded, which is used to reclassify the samples selected in the first step and finally identify the poisoned samples. Through detailed experiments, we show that our defense method can effectively defend against a variety of backdoor attacks (character-level, word-level, and sentence-level) and outperforms the baseline method. For a BERT model trained on the IMDB dataset, this method can even reduce the success rate of word-level backdoor attacks to 0%.
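
A minimal sketch of the first step, assuming a PyTorch-style model and Gaussian noise (the noise distribution, scale `sigma`, and vocabulary size here are illustrative assumptions, not the paper's settings):

```python
import torch
import torch.nn as nn

class NoisyEmbedding(nn.Module):
    """Controlled noise layer placed right after the embedding layer so that
    backdoor triggers embed unreliably in the preliminary model."""
    def __init__(self, vocab_size: int, dim: int, sigma: float = 0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.sigma = sigma  # assumed noise scale

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        emb = self.embed(token_ids)
        if self.training:                       # perturb only during training
            emb = emb + self.sigma * torch.randn_like(emb)
        return emb

# usage: swap this module in for the plain embedding of the preliminary model
layer = NoisyEmbedding(vocab_size=30522, dim=128)
ids = torch.randint(0, 30522, (2, 16))
print(layer(ids).shape)  # torch.Size([2, 16, 128])
```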


1999 ◽  
Vol 11 (5) ◽  
pp. 1235-1248 ◽  
Author(s):  
Wei Wei ◽  
Todd K. Leen ◽  
Etienne Barnard

Although the outputs of neural network classifiers are often considered to be estimates of posterior class probabilities, the literature that assesses the calibration accuracy of these estimates illustrates that practical networks often fall far short of being ideal estimators. The theorems used to justify treating network outputs as good posterior estimates are based on several assumptions: that the network is sufficiently complex to model the posterior distribution accurately, that there are sufficient training data to specify the network, and that the optimization routine is capable of finding the global minimum of the cost function. Any or all of these assumptions may be violated in practice. This article does three things. First, we apply a simple, previously used histogram technique to assess graphically the accuracy of posterior estimates with respect to individual classes. Second, we introduce a simple and fast remapping procedure that transforms network outputs to provide better estimates of posteriors. Third, we use the remapping in a real-world telephone speech recognition system. The remapping results in a 10% reduction of both word-level error rates (from 4.53% to 4.06%) and sentence-level error rates (from 16.38% to 14.69%) on one corpus, and a 29% reduction in sentence-level error (from 6.3% to 4.5%) on another. The remapping required negligible additional overhead (in terms of both parameters and calculations). McNemar's test shows that these levels of improvement are statistically significant.
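
A minimal sketch of histogram-style remapping of this flavor (bin count, equal-width edges, and the empty-bin fallback are illustrative choices, not the paper's exact procedure): the corrected posterior for a score is the empirical fraction of positives observed in that score's bin on held-out data.

```python
import numpy as np

def histogram_remap(train_scores, train_labels, n_bins=20):
    """Learn per-bin positive rates from held-out data and use them as
    corrected posterior estimates for new scores."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(train_scores, edges) - 1, 0, n_bins - 1)
    remap = np.array([
        train_labels[bin_ids == b].mean() if np.any(bin_ids == b)
        else (edges[b] + edges[b + 1]) / 2          # fallback for empty bins
        for b in range(n_bins)
    ])

    def apply(scores):
        ids = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
        return remap[ids]
    return apply

# toy usage: overconfident scores get pulled toward the observed positive rate
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
labels = (rng.uniform(size=1000) < 0.7 * scores).astype(float)
calibrate = histogram_remap(scores, labels)
print(calibrate(np.array([0.2, 0.9])))
```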


Author(s):  
Liangchen Wei ◽  
Zhi-Hong Deng

Cross-language learning allows one to use training data from one language to build models for another language. Many traditional approaches require word-level alignments from parallel corpora; in this paper, we define a general bilingual training objective function that requires only a sentence-level parallel corpus. We propose a variational autoencoding approach for training bilingual word embeddings. The variational model introduces a continuous latent variable to explicitly model the underlying semantics of the parallel sentence pairs and to guide the generation of the sentence pairs. Our model restricts the bilingual word embeddings to represent words in exactly the same continuous vector space. Empirical results on the task of cross-lingual document classification show that our method is effective.
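
A generic sentence-pair evidence lower bound of this kind (written here only for illustration; the paper's exact objective and parameterization may differ) takes the form

L(x, y) = E_{q(z | x, y)}[ log p(x | z) + log p(y | z) ] - KL( q(z | x, y) || p(z) ),

where x and y are a parallel sentence pair and z is the shared continuous latent variable whose reconstruction of both sentences ties the two languages' word embeddings to a single vector space.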


2021 ◽  
Vol 14 (4) ◽  
pp. 1-24
Author(s):  
Sushant Kafle ◽  
Becca Dingman ◽  
Matt Huenerfauth

There are style guidelines for authors who highlight important words in static text, e.g., bolded words in student textbooks, yet little research has investigated highlighting in dynamic texts, e.g., captions during educational videos for Deaf or Hard of Hearing (DHH) users. In our experimental study, DHH participants subjectively compared design parameters for caption highlighting, including: decoration (underlining vs. italicizing vs. boldfacing), granularity (sentence level vs. word level), and whether to highlight only the first occurrence of a repeating keyword. In partial contrast to recommendations in prior research, which had not been based on experimental studies with DHH users, we found that DHH participants preferred boldface, word-level highlighting in captions. Our empirical results provide guidance for the design of keyword highlighting during captioned videos for DHH users, especially in educational video genres.


Author(s):  
Weida Zhong ◽  
Qiuling Suo ◽  
Abhishek Gupta ◽  
Xiaowei Jia ◽  
Chunming Qiao ◽  
...  

With the popularity of smartphones, large-scale road sensing data is being collected to perform traffic prediction, which is an important task in modern society. Due to the nature of the roving sensors on smartphones, the collected traffic data, which takes the form of multivariate time series, is often temporally sparse and unevenly distributed across regions. Moreover, different regions can have different traffic patterns, which makes it challenging to adapt models learned from regions with sufficient training data to target regions. Given that many regions may have very sparse data, it is also impractical to build an individual model for each region separately. In this paper, we propose a meta-learning based framework named MetaTP to overcome these challenges. MetaTP has two key parts: a basic traffic prediction network (base model) and meta-knowledge transfer. In the base model, a two-layer interpolation network is employed to map the original time series onto uniformly spaced reference time points, so that temporal prediction can be performed effectively in the reference space. The meta-learning framework is employed to transfer knowledge from source regions with a large amount of data to target regions with only a few data examples via fast adaptation, in order to improve model generalizability on target regions. Moreover, we use two memory networks to capture the global patterns of spatial and temporal information across regions. We evaluate the proposed framework on two real-world datasets, and experimental results show the effectiveness of the proposed framework.
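
As a rough illustration of the interpolation step (MetaTP learns this mapping end to end; the Gaussian kernel, the bandwidth, and the toy series below are assumptions for the sketch, not the paper's network), an unevenly sampled series can be projected onto a uniform reference grid as follows:

```python
import numpy as np

def to_reference_grid(obs_times, obs_values, ref_times, bandwidth=1.0):
    """Map an unevenly sampled series onto uniformly spaced reference time
    points by kernel-weighted averaging of the observations."""
    diffs = ref_times[:, None] - obs_times[None, :]
    w = np.exp(-(diffs ** 2) / (2.0 * bandwidth ** 2))   # weight per observation
    w = w / w.sum(axis=1, keepdims=True)                  # normalize per reference point
    return w @ obs_values

obs_t = np.array([0.0, 0.7, 3.1, 3.4, 9.0])       # sparse, uneven timestamps
obs_v = np.array([42.0, 40.5, 31.0, 30.2, 55.0])  # observed speeds (km/h)
ref_t = np.linspace(0.0, 10.0, 11)                # uniform reference grid
print(to_reference_grid(obs_t, obs_v, ref_t).round(1))
```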


Author(s):  
Yazan Shaker Almahameed ◽  
May Al-Shaikhli

The current study aimed at investigating the salient syntactic and semantic errors made by Jordanian English foreign language learners when writing in English. Writing poses a great challenge for both native and non-native speakers of English, since it involves employing most language sub-systems such as grammar, vocabulary, spelling and punctuation. A total of 30 Jordanian English foreign language learners participated in the study. The participants were instructed to write a composition of no more than one hundred and fifty words on a selected topic. The essays were collected and analyzed statistically to obtain the needed results. The results of the study showed that the syntactic errors produced by the participants were varied, in that eleven types of syntactic errors were committed, as follows: verb tense, agreement, auxiliary, conjunctions, word order, resumptive pronouns, null subject, double subject, superlative, comparative and possessive pronouns. Amongst the syntactic errors, verb-tense errors were the most frequent, at 33%. The results additionally revealed that two types of semantic errors were made: errors at sentence level and errors at word level. Errors at word level far outstripped errors at sentence level, accounting for 82% and 18%, respectively. It can be concluded that the syntactic and semantic knowledge of Jordanian learners of English is still insufficient.


2019 ◽  
Vol 16 (2) ◽  
pp. 359-380
Author(s):  
Zhehua Piao ◽  
Sang-Min Park ◽  
Byung-Won On ◽  
Gyu Choi ◽  
Myong-Soon Park

Product reputation mining systems can help customers make buying decisions about a product of interest. In addition, they can help enterprises investigate preferences for their recently released products. Unlike a conventional manual survey, such a system gives quick survey results on a low budget. In this article, we propose a novel product reputation mining approach based on three points of view: the word, sentence, and aspect levels. Given a target product, the aspect-level method assigns the sentences of a review document to the desired aspects. The sentence-level method is a graph-based model for quantifying the importance of sentences. The word-level method computes both the importance and the sentiment orientation of words. Aggregating these scores, the proposed approach measures reputation tendency and preferred intensity and selects the top-k informative review documents about the product. To validate the proposed method, we experimented with review documents about the K5 from Kia Motors. Our experimental results show that our method is more helpful than the existing lexicon-based approach in both empirical and statistical studies.
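
The sentence-level module is graph-based; a generic sketch in the same spirit (a PageRank-style power iteration over a sentence-similarity matrix, with the damping factor and toy similarities as assumptions rather than the paper's exact model) looks like this:

```python
import numpy as np

def sentence_importance(sim, d=0.85, iters=50):
    """Score sentences by power iteration over a row-normalized
    sentence-similarity graph (TextRank-style)."""
    n = sim.shape[0]
    m = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * m.T @ scores
    return scores

# toy similarity matrix for three review sentences (no self-links)
sim = np.array([[0.0, 0.6, 0.1],
                [0.6, 0.0, 0.4],
                [0.1, 0.4, 0.0]])
print(sentence_importance(sim).round(3))
```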


2021 ◽  
Vol 17 (2) ◽  
pp. 1-20
Author(s):  
Zheng Wang ◽  
Qiao Wang ◽  
Tingzhang Zhao ◽  
Chaokun Wang ◽  
Xiaojun Ye

Feature selection, an effective technique for dimensionality reduction, plays an important role in many machine learning systems. Supervised knowledge can significantly improve the performance. However, faced with the rapid growth of newly emerging concepts, existing supervised methods might easily suffer from the scarcity and validity of labeled data for training. In this paper, the authors study the problem of zero-shot feature selection (i.e., building a feature selection model that generalizes well to “unseen” concepts with limited training data of “seen” concepts). Specifically, they adopt class-semantic descriptions (i.e., attributes) as supervision for feature selection, so as to utilize the supervised knowledge transferred from the seen concepts. For more reliable discriminative features, they further propose the center-characteristic loss which encourages the selected features to capture the central characteristics of seen concepts. Extensive experiments conducted on various real-world datasets demonstrate the effectiveness of the method.
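
One plausible reading of the center-characteristic idea (an assumption for illustration, not the paper's formula) is a loss that keeps feature-weighted samples close to the center of their own seen class:

```python
import numpy as np

def center_characteristic_loss(X, y, w):
    """Illustrative loss: after weighting features by the selection vector w,
    samples should remain close to the center of their own (seen) class."""
    loss = 0.0
    for c in np.unique(y):
        xc = X[y == c] * w            # feature-weighted samples of class c
        center = xc.mean(axis=0)
        loss += ((xc - center) ** 2).sum()
    return loss / len(X)

X = np.random.default_rng(0).normal(size=(100, 5))
y = np.repeat([0, 1], 50)
print(center_characteristic_loss(X, y, w=np.array([1.0, 1.0, 0.2, 0.2, 0.2])))
```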


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Michael Adjeisah ◽  
Guohua Liu ◽  
Douglas Omwenga Nyabuga ◽  
Richard Nuetey Nortey ◽  
Jinling Song

Scaling natural language processing (NLP) to low-resourced languages to improve machine translation (MT) performance remains a challenge. This research contributes to the domain through low-resource English-Twi translation based on filtered synthetic parallel corpora. It is often difficult to determine what a good-quality corpus looks like in low-resource conditions, mainly where the target corpus is the only sample text of the parallel language. To improve MT performance for such low-resource language pairs, we propose to expand the training data by injecting a synthetic parallel corpus obtained by translating a monolingual corpus from the target language, based on bootstrapping with different parameter settings. Furthermore, we performed unsupervised measurements on each sentence pair using squared Mahalanobis distances, a filtering technique that predicts sentence parallelism. Additionally, we extensively use three different sentence-level similarity metrics after round-trip translation. Experimental results on the available parallel corpora demonstrate that injecting a pseudo-parallel corpus and extensive filtering with sentence-level similarity metrics significantly improve the original out-of-the-box MT systems for low-resource language pairs. Compared with existing improvements on the same original framework under the same structure, our approach yields substantial gains in BLEU and TER scores.
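
A minimal sketch of the Mahalanobis-based filtering signal (how sentence embeddings are produced and where the filtering threshold sits are left open here; the covariance estimate and toy data are assumptions): each synthetic pair is scored by the squared Mahalanobis distance between its source and target sentence embeddings, and high-scoring outliers are candidates for removal.

```python
import numpy as np

def squared_mahalanobis(pairs_src, pairs_tgt):
    """Score each sentence pair by the squared Mahalanobis distance of the
    source-target embedding difference, with covariance estimated on the pool."""
    diffs = pairs_src - pairs_tgt                      # one row per sentence pair
    cov = np.cov(diffs, rowvar=False)
    inv_cov = np.linalg.pinv(cov)                      # pseudo-inverse for stability
    centered = diffs - diffs.mean(axis=0)
    return np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

rng = np.random.default_rng(0)
src = rng.normal(size=(6, 4))                          # toy sentence embeddings
tgt = src + rng.normal(scale=0.1, size=(6, 4))         # mostly parallel pairs
tgt[0] = rng.normal(size=4)                            # one non-parallel pair
print(squared_mahalanobis(src, tgt).round(2))          # non-parallel pairs tend to score higher
```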


Author(s):  
Xiaocheng Feng ◽  
Jiang Guo ◽  
Bing Qin ◽  
Ting Liu ◽  
Yongjie Liu

Distantly supervised relation extraction (RE) has been an effective way of finding novel relational facts in text without labeled training data. Typically it can be formalized as a multi-instance multi-label problem. In this paper, we introduce a novel neural approach for distantly supervised RE with a specific focus on attention mechanisms. Unlike feature-based logistic regression models and compositional neural models such as CNNs, our approach includes two major attention-based memory components, which are capable of explicitly capturing the importance of each context word for modeling the representation of the entity pair, as well as the intrinsic dependencies between relations. Such importance degrees and dependency relationships are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on real-world datasets show that our approach performs significantly and consistently better than various baselines.
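
A minimal single-layer sketch of attention over an external memory (the dot-product scoring, dimensions, and variable names are assumptions for illustration; the paper stacks several such layers and learns the components jointly):

```python
import numpy as np

def attention_layer(query, memory):
    """One attention layer over an external memory: the query (e.g., an
    entity-pair representation) attends to memory slots (e.g., context-word
    vectors) and returns a re-weighted summary plus the attention weights."""
    scores = memory @ query                    # one score per memory slot
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()          # softmax over slots
    return weights @ memory, weights

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))    # 8 context-word slots, 16-dim vectors
query = rng.normal(size=16)          # entity-pair representation
summary, w = attention_layer(query, memory)
print(w.round(3), summary.shape)
```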

