source document: Recently Published Documents

TOTAL DOCUMENTS: 86 (five years: 17)
H-INDEX: 7 (five years: 1)

2021, pp. 1-29
Author(s): Yizhu Liu, Xinyue Chen, Xusheng Luo, Kenny Q. Zhu

Abstract Convolutional sequence-to-sequence (CNN seq2seq) models have achieved success in abstractive summarization. However, their outputs often contain repetitive word sequences and logical inconsistencies, which limits their practical application. In this paper, we identify the causes of the repetition problem in CNN-based abstractive summarization by observing the attention maps between repetitive summaries and their corresponding source documents, and we mitigate the problem accordingly. We propose to reduce repetition in summaries with an attention filter mechanism (ATTF) and a sentence-level backtracking decoder (SBD), which dynamically redistribute attention over the input sequence as the output sentences are generated. The ATTF records previously attended locations in the source document directly and prevents the decoder from attending to these locations again. The SBD prevents the decoder from generating similar sentences more than once via backtracking at test time. The proposed model outperforms the baselines in terms of ROUGE score, repeatedness, and readability. The results show that this approach generates high-quality summaries with minimal repetition and improves the reading experience.
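As a rough sketch of the attention-filter idea, the snippet below masks source positions whose cumulative attention (coverage) already exceeds a threshold, so that later decoding steps cannot attend to them again. The coverage bookkeeping and the threshold value are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of an attention filter: block source positions that earlier
# decoding steps have already attended to heavily.
import numpy as np

def filtered_attention(scores, coverage, threshold=0.9):
    """scores: raw attention scores over source positions for the current step.
    coverage: cumulative attention mass each source position has received so far."""
    masked = np.where(coverage > threshold, -np.inf, scores)
    if not np.isfinite(masked).any():           # every position blocked: fall back to raw scores
        masked = np.asarray(scores, dtype=float)
    weights = np.exp(masked - masked[np.isfinite(masked)].max())  # softmax; blocked positions -> 0
    weights = weights / weights.sum()
    return weights, coverage + weights          # updated coverage for the next step
```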


2021, Vol. 67 (3), pp. 388-406
Author(s): Olegs Andrejevs

The level of scepticism met by the concept of macro-chiasm in ancient literature is noticeably lower today than two decades ago, with sizable agreement coalescing around certain examples. One such example is found in the synoptic double-tradition material as it is preserved in Luke's Gospel, which provides the methodological foundation for the reconstruction of the hypothetical synoptic source document Q. This article explores the study of the macro-chiasm identified in Luke (Q) 3.7–7.35 and its implications for the synoptic problem. It also addresses the methodological considerations advanced by S. E. Porter and J. T. Reed in their NTS article two decades ago, meeting a certain stipulation placed by them upon subsequent scholarship.


Author(s): Ms. P. Mahalakshmi et al.

Cross-language multi-document summarization (CLMDS) produces a summary generated from multiple documents in which the summary language differs from the source document language. The CLMDS model allows the user to provide a query in a particular language (e.g., Tamil) and generates a summary in that language from source documents in other languages. The proposed model enables the user to provide a query in Tamil, generate a summary from multiple English documents, and finally translate the summary into Tamil. The proposed model makes use of a naïve Bayes classifier (NBC) for the CLMDS task. An extensive set of experiments was performed, and the results were examined from several aspects. The experimental results confirmed the superiority of the presented CLMDS model.
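A minimal sketch of the classifier-based sentence-selection step is shown below. The toy training data, the bag-of-words features, and the surrounding translation steps (Tamil query in, Tamil summary out) are illustrative assumptions; the abstract does not specify the implementation.

```python
# Hypothetical sketch: a naive Bayes classifier that decides whether an English
# sentence belongs in the summary. Translating the Tamil query and the final
# summary back into Tamil would be handled by separate components (not shown).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labelled data: 1 = summary-worthy sentence, 0 = not summary-worthy.
train_sentences = [
    "The study reports a significant improvement over the baseline.",
    "See the appendix for formatting details.",
]
train_labels = [1, 0]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(train_sentences), train_labels)

def select_summary_sentences(sentences):
    """Keep the sentences the classifier labels as summary-worthy."""
    features = vectorizer.transform(sentences)
    return [s for s, label in zip(sentences, classifier.predict(features)) if label == 1]
```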


2020, Vol. 31 (3), pp. 295-328
Author(s): Ibrahim Zein, Ahmed El-Wakil

Abstract This article examines the different versions of the Treaty which Khālid b. al-Wālid granted to the people of Damascus and notes the variations and textual discrepancies between them by examining both Muslim and non-Muslim sources. We demonstrate how the accounts share a common historical memory in recalling the issuance of treaties in the era of ʿUmar b. al-Khaṭṭāb. We argue that the original Treaty with the People of Damascus represented in all likelihood the template for all other treaties given to the inhabitants of Greater Syria, Jordan and Palestine, whose echoes can also be found resonating in Egypt and Iraq. Most important of all, by navigating through the competing and shared historical memories, we conclude that the original Treaty stipulated that the indigenous population’s churches neither be destroyed nor inhabited. We conclude by proposing that this standard policy was not just based on mere pragmatism, but also on some sort of written ordinance that originated with the Prophet Muḥammad.


Author(s): Héctor Daniel Hernández-García, Dulce J. Navarrete-Aria, Cristy Elizabeth Aguilar-Ojeda

This work describes the development of software that translates texts written in exclusionary, sexist language into inclusive, non-sexist language. The software has three processes: first, extracting the text from different digital formats (pdf, docx, or html); second, translating the text through an automaton that implements the "manual for the non-sexist use of language"; and third, returning the translated text in a digital document of the same format as the source. The development was carried out with an incremental methodology in three increments: the first extracts the text from the digital document, the second implements the automaton and translates the extracted text, and the third returns the translated text in the same format as the source document. The automaton was implemented with the JFlex tool. The tests carried out are alpha tests and consist of translating simple sentences to validate the correct implementation of the rules described in the "manual for the non-sexist use of language". The objective is to help in the learning of this new form of language, which aims to include and recognize women in newsrooms.
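As a rough illustration of the rule-based substitution the automaton performs, the sketch below applies a couple of replacement rules with regular expressions. The specific rules and the Python/regex approach are assumptions for illustration only; the actual software implements the manual's full rule set as a JFlex lexer.

```python
# Hypothetical sketch of rule-based inclusive-language substitution.
# Real rules come from the "manual for the non-sexist use of language";
# the two below are illustrative examples only.
import re

RULES = {
    r"\blos profesores\b": "el profesorado",
    r"\blos trabajadores\b": "las personas trabajadoras",
}

def to_inclusive(text):
    """Apply each substitution rule to the extracted text (case handling simplified)."""
    for pattern, replacement in RULES.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(to_inclusive("Los profesores y los trabajadores asistieron."))
```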


2020, Vol. 7 (1), pp. 30
Author(s): Dennis Michael Bryant

Teachers expect grammatical metadata to be evidence-based and free of the poetic licence evident in Twain's 'The Prince and the Pauper', in which two characters are identical in form (being thitherto unrecognised, identically shaped twins) as well as alike in function (both manage to pass as royal princes-in-waiting). But it must be asked: could similar poetic licence have inadvertently found its way into the grammatical treatment of Articles? The question must be asked because past grammars have not used an evidence-based approach to describing Articles. To address this shortcoming, and believing that an analytical re-measurement is not out of place, this paper is confident in proposing a substantial treatment of Articles based on kindred form and kindred function rather than poetic licence. The methodology of this paper, which is to employ discerning exemplars of English sentences, emulates three recent publications: the first concerned altering word prominence in pursuit of grammatical convenience, while the remaining papers were concerned with ESL mastery of the verb complex and with the decoding of contractions. Given that some ESL learners have never required (nor acquired) Articles in order to attain first-language competency (for example, speakers of Czech and some Baltic languages), this paper will serve to shed new light on the hidden-in-plain-sight operations of English and could become a source document for today's ESL teachers on the treatment of Articles.


2020, Vol. 3 (1), pp. 18-27
Author(s): Muhamad Hendra Febiawan, Agus Setiawan, Ardhin Primadewi

The academic world in Indonesia has been growing rapidly, marked by the development of science and technology. With all the facilities it offers, technology has both positive and negative impacts on life; one of the negative impacts is plagiarism. Plagiarism often occurs among students, so plagiarism detection is needed to prevent it. This research detects the similarity of the textual content of documents using the Levenshtein Distance algorithm. The documents used are thesis proposals and publication papers in .pdf format. In the calculation speed test, comparing a source document of 4,405 words against 3 comparison documents totalling 13,465 words produced a calculation time of 3.57 seconds.
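For reference, a minimal sketch of the Levenshtein (edit) distance computation is given below. The surrounding tokenisation and similarity scoring are assumptions; the abstract only states that the algorithm is applied to the extracted document text.

```python
# Minimal sketch of Levenshtein (edit) distance: the number of single-character
# insertions, deletions, and substitutions needed to turn string a into string b.
def levenshtein(a, b):
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                      # deletion
                current[j - 1] + 1,                   # insertion
                previous[j - 1] + (char_a != char_b)  # substitution
            ))
        previous = current
    return previous[-1]

# A similarity score can then be derived, e.g. 1 - distance / max(len(a), len(b)).
print(levenshtein("plagiarism", "plagiarized"))
```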


Text summarization is the process of precisely identifying the most important information in a source document; it condenses a longer text into a shorter one. Text summarization is broken down into two approaches: extractive summarization and abstractive summarization. The proposed method creates an extractive summary of a given text and generates an appropriate title for the generated summary. The extractive summary is produced through sentence selection using a rule-based approach. Eight different features are considered to rank each sentence according to its importance; ranking assigns a numerical measure to each sentence. After ranking, the sentences with the highest ranks are selected to form the summary. The most frequently occurring bi-gram is selected as the title of the summary. The system performs better than existing extractive summarization techniques such as graph-based systems and achieved a precision of 0.7.
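A compact sketch of this select-then-title pipeline is shown below. It uses a single illustrative feature (word frequency) in place of the eight ranking features described in the abstract, so it is a simplified assumption rather than the proposed system itself.

```python
# Sketch: frequency-based extractive summarization with a bi-gram title.
import re
from collections import Counter

def summarize(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    frequency = Counter(words)

    # Rank sentences by the total frequency of their words (stand-in for the eight features).
    def score(sentence):
        return sum(frequency[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)
    summary = " ".join(ranked[:n_sentences])

    # Title: the most frequent bi-gram in the selected summary sentences.
    summary_words = re.findall(r"[a-z']+", summary.lower())
    bigrams = Counter(zip(summary_words, summary_words[1:]))
    title = " ".join(bigrams.most_common(1)[0][0]) if bigrams else ""
    return title, summary
```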

