CATS: Customizable Abstractive Topic-based Summarization

2022 ◽  
Vol 40 (1) ◽  
pp. 1-24
Author(s):  
Seyed Ali Bahrainian ◽  
George Zerveas ◽  
Fabio Crestani ◽  
Carsten Eickhoff

Neural sequence-to-sequence models are the state-of-the-art approach to abstractive summarization of textual documents, producing condensed versions of source text narratives without being restricted to words from the original text. Despite these advances, custom generation of summaries (e.g., tailored to a user's preference) remains unexplored. In this article, we present CATS, an abstractive neural summarization model that summarizes content in a sequence-to-sequence fashion while introducing a new mechanism to control the underlying latent topic distribution of the produced summaries. We empirically illustrate the efficacy of our model in producing customized summaries and present findings that facilitate the design of such systems. We use the well-known CNN/DailyMail dataset to evaluate our model. Furthermore, we present a transfer-learning method and demonstrate the effectiveness of our approach in a low-resource setting, i.e., abstractive summarization of meeting minutes, where combining the main available meeting transcript datasets, AMI and the International Computer Science Institute (ICSI) corpus, yields merely a few hundred training documents.
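As an illustration of the kind of topic-control mechanism this abstract describes, the following minimal sketch biases a seq2seq decoder's next-token distribution toward a user-chosen topic. It is an assumption for illustration, not the authors' implementation; the names topic_biased_step, topic_word_dist, and mix_weight are invented here.

    import torch
    import torch.nn.functional as F

    def topic_biased_step(vocab_logits: torch.Tensor,
                          topic_word_dist: torch.Tensor,
                          mix_weight: float = 0.1) -> torch.Tensor:
        # vocab_logits:    (batch, vocab) raw decoder logits for this step
        # topic_word_dist: (vocab,) probability of each word under the
        #                  target topic (e.g., from a topic model)
        p_vocab = F.softmax(vocab_logits, dim=-1)
        # Convex combination keeps a valid probability distribution
        # while pulling generation toward the desired topic's words.
        p_mixed = (1.0 - mix_weight) * p_vocab + mix_weight * topic_word_dist
        return torch.log(p_mixed + 1e-12)  # back to log-space for decoding

In such a scheme, the plain log-softmax at each decoding step is replaced by the mixed distribution before beam search or sampling, so increasing mix_weight steers summaries more strongly toward the chosen topic.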

Information ◽  
2018 ◽  
Vol 9 (9) ◽  
pp. 217 ◽  
Author(s):  
Xiujuan Xiang ◽  
Guangluan Xu ◽  
Xingyu Fu ◽  
Yang Wei ◽  
Li Jin ◽  
...  

Current popular abstractive summarization is based on an attentional encoder-decoder framework. In this architecture, the decoder generates a summary conditioned on the full text, which often lets irrelevant information interfere with decoding and causes the generated summaries to suffer from low saliency. Moreover, having observed how people write summaries, we find that they work from the necessary information rather than the full text. Thus, to enhance the saliency of abstractive summarization, we propose an attentive information extraction model. It consists of a multi-layer perceptron (MLP) gated unit that pays more attention to the important information in the source text, and a similarity module that encourages high similarity between the reference summary and that important information. Before the summary decoder, the MLP and the similarity module work together to extract the important information for the decoder, thus obtaining the skeleton of the source text. This effectively reduces the interference of irrelevant information with the decoder and therefore improves the saliency of the summary. Our proposed model was tested on the CNN/Daily Mail and DUC-2004 datasets, achieving a 42.01 ROUGE-1 F-score and a 33.94 ROUGE-1 recall, respectively. These results outperform the state-of-the-art abstractive models on the same datasets. In addition, subjective human evaluation confirmed that the saliency of the generated summaries was further enhanced.
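A minimal sketch of the two components described, under stated assumptions (this is not the paper's released code; AttentiveExtractor and its internals are illustrative names): an MLP gate that re-weights encoder states by importance, and a similarity term that pulls the gated representation toward the reference summary's representation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveExtractor(nn.Module):
        def __init__(self, hidden_size: int):
            super().__init__()
            # MLP gated unit: scores each source position in [0, 1].
            self.gate = nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.Tanh(),
                nn.Linear(hidden_size, 1),
                nn.Sigmoid(),
            )

        def forward(self, enc_states, ref_repr=None):
            # enc_states: (batch, src_len, hidden); ref_repr: (batch, hidden)
            g = self.gate(enc_states)            # (batch, src_len, 1)
            gated = g * enc_states               # keep salient states only
            skeleton = gated.mean(dim=1)         # pooled "skeleton" vector
            sim_loss = None
            if ref_repr is not None:
                # Similarity module: encourage high cosine similarity
                # between the skeleton and the reference summary.
                sim_loss = 1.0 - F.cosine_similarity(skeleton, ref_repr).mean()
            return gated, sim_loss

The gated states, rather than the raw encoder states, would then feed the summary decoder, which is how such a design shields the decoder from irrelevant source content.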


Author(s):  
Rui Wang ◽  
Xu Tan ◽  
Renqian Luo ◽  
Tao Qin ◽  
Tie-Yan Liu

Neural approaches have achieved state-of-the-art accuracy on machine translation but suffer from the high cost of collecting large-scale parallel data. Consequently, a great deal of research has addressed neural machine translation (NMT) with very limited parallel data, i.e., the low-resource setting. In this paper, we provide a survey of low-resource NMT and classify related work into three categories according to the auxiliary data used: (1) exploiting monolingual data of the source and/or target languages, (2) exploiting data from auxiliary languages, and (3) exploiting multi-modal data. We hope that our survey helps researchers better understand this field and inspires them to design better algorithms, and helps industry practitioners choose appropriate algorithms for their applications.
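As one concrete instance of category (1), back-translation is a widely used way to exploit target-side monolingual data. The sketch below is illustrative only; translate and train stand in for whatever NMT toolkit is actually used, and are assumptions here.

    def back_translation_round(fwd_model, bwd_model, parallel,
                               mono_target, translate, train):
        # 1. Use the reverse (target -> source) model to turn monolingual
        #    target sentences into synthetic source sentences.
        synthetic_pairs = [(translate(bwd_model, t), t) for t in mono_target]
        # 2. Mix real and synthetic pairs, then retrain the forward model.
        return train(fwd_model, parallel + synthetic_pairs)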


2020 ◽  
Vol 30 (01) ◽  
pp. 2050001
Author(s):  
Takumi Maruyama ◽  
Kazuhide Yamamoto

Inspired by the machine translation task, recent text simplification approaches regard the task as monolingual text-to-text generation, and neural machine translation models have significantly improved the performance of simplification. Although such models require a large-scale parallel corpus, parallel corpora for text simplification are few in number and smaller in size than those for machine translation. Therefore, we attempt to facilitate the training of simplification rewritings by pre-training on a large-scale monolingual corpus such as Wikipedia articles. In addition, we propose a translation language model to seamlessly carry the fine-tuning of text simplification over from the pre-training of the language model. The experimental results show that the translation language model substantially outperforms a state-of-the-art model in a low-resource setting. Moreover, a pre-trained translation language model with only 3,000 supervised examples can achieve performance comparable to that of the state-of-the-art model trained with 30,000 supervised examples.
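A hedged sketch of a translation-language-model objective in the spirit of what the abstract describes: masked-token prediction over a concatenated complex/simple sentence pair, so the same pre-trained language-model weights carry over to the simplification task. The token IDs and masking scheme below are illustrative assumptions, not the authors' setup.

    import random
    import torch

    MASK_ID, PAD_ID, SEP_ID = 0, 1, 2  # assumed special-token IDs

    def make_tlm_batch(complex_ids, simple_ids, mask_prob=0.15):
        # Concatenate a complex sentence and its simplification, then mask
        # random tokens; the model learns to fill them using both sides.
        tokens = complex_ids + [SEP_ID] + simple_ids
        inputs, labels = [], []
        for tok in tokens:
            if tok != SEP_ID and random.random() < mask_prob:
                inputs.append(MASK_ID)
                labels.append(tok)    # predict the original token
            else:
                inputs.append(tok)
                labels.append(-100)   # ignored by typical LM losses
                                      # (e.g., ignore_index=-100)
        return torch.tensor(inputs), torch.tensor(labels)

Because masked positions can be recovered from either sentence, this objective ties the two sides together during pre-training, which is one plausible reading of how fine-tuning then proceeds "seamlessly" on simplification pairs.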


Diabetes ◽  
2018 ◽  
Vol 67 (Supplement 1) ◽  
pp. 93-LB
Author(s):  
Eddy Jean Baptiste ◽  
Philippe Larco ◽  
Marie-Nancy Charles Larco ◽  
Julia E. von Oettingen ◽  
Eddlys Dubois ◽  
...  

2021 ◽  
Vol 14 (4) ◽  
pp. e239250
Author(s):  
Vijay Anand Ismavel ◽  
Moloti Kichu ◽  
David Paul Hechhula ◽  
Rebecca Yanadi

We report a case of right paraduodenal hernia with strangulation of almost the entire small bowel at presentation. Since resection of all bowel of doubtful viability would have left too little residual length to sustain life, a Bogota bag was fashioned from the transparent plastic of a urine drainage bag and the patient was monitored intensively for 18 hours. At re-laparotomy, clear demarcation lines had formed with an adequate length of viable bowel (100 cm), and resection with anastomosis was performed with a good outcome at follow-up, 9 months after surgery. We report this rare cause of strangulated intestinal obstruction and a novel method of maximising the length of viable bowel for its successful outcome in a low-resource setting.


Author(s):  
Víctor Lopez-Lopez ◽  
Ana Morales ◽  
Elisa García-Vazquez ◽  
Miguel González ◽  
Quiteria Hernandez ◽  
...  

Author(s):  
Navin Kumar ◽  
Mukur Dipi Ray ◽  
D. N. Sharma ◽  
Rambha Pandey ◽  
Kanak Lata ◽  
...  
