Departures in the spoken narratives of normal and language-disordered children

1987 ◽  
Vol 8 (2) ◽  
pp. 185-202 ◽  
Author(s):  
Betty Z. Liles ◽  
Sherry Purcell

Abstract The spoken narratives of 38 normal and language-disordered children (CA 7;6–10;6) were analyzed by describing their departures from the original text during recall. The narrative texts were presented to an adult listener following each child's viewing of a 35-minute film. The following departure types were compared across groups: (a) acceptable departures from the original text meaning, (b) unacceptable departures from the original text meaning, (c) grammatical departures (i.e., agrammatical utterances), (d) exact repetitions of words or phrases, (e) unacceptable departures from the text's meaning correctly repaired, (f) unacceptable departures from the text meaning incorrectly repaired, (g) departures from text meaning left unrepaired, and (h) repaired grammatical departures. Results indicated that both groups used a higher rate of acceptable departures from the original text meaning than any other departure type, with the normal children producing a higher rate of acceptable departures and a lower rate of unacceptable grammatical departures. Both groups repaired fewer unacceptable grammatical departures than unacceptable departures from text meaning. The groups did not differ in their tendency to ignore grammatical departures. Implications for language processing in narrative discourse are discussed.

2010 ◽  
Vol 13 (2) ◽  
pp. 236-259 ◽  
Author(s):  
Aurora Bel ◽  
Joan Perera ◽  
Naymé Salas

In this study, we focus on pronominal anaphora and we investigate the referential properties of null and overt subject pronouns in Catalan, in the semi-spontaneous production of narrative spoken and written texts by three groups of speakers/writers (9–10, 12–13, and 15–16 year olds). We aimed at determining (1) pronoun preferences for a specific type of antecedent; (2) their specialization in a certain discourse function; and (3) whether the pattern is affected by text modality (spoken vs. written texts). We analyzed 30 spoken and 30 written narrative texts, produced by the same 30 subjects, divided into the age groups mentioned above. Results seem fairly consistent across age groups and modalities, showing that null pronouns tend to select antecedents in subject position and are well specialized in maintaining reference, while overt pronouns offer a less clear pattern both in their selection of antecedents and in the discourse function they perform. Our findings partially support those of previous research on other null-subject languages, in particular, the Position of Antecedent Hypothesis (PAH) formulated by Carminati (2002) for Italian.


2014 ◽  
Vol 23 (3) ◽  
pp. 255-269 ◽  
Author(s):  
Michael Boyden

The first part of this article confronts the ways in which translation scholars have drawn on insights from narratology to make sense of the translator’s involvement in narrative texts. It first considers competing metaphors for conceptualizing the translator’s involvement, arguing for a clearer differentiation between modes of framing and telling. Next, it evaluates the ways in which translation scholars have attempted to integrate the translator as a separate textual agent in governing models of narrative communication, concluding that the conceptual gains to be reaped from positing the translator as a separate enunciator or agent in narrative transactions are limited. The second part of the article analyzes two Dutch translations of Herman Melville’s novella Benito Cereno, by Johan Palm (1950) and Jean Schalekamp (1977) respectively. Rather than striving to isolate the translators as separate tellers or co-producers of narrative structure, the analysis reveals that their agency shows foremost in the ways the ‘voiceless’ narrative of New World slavery is perspectivized in view of changing readerly expectations.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Enrico Mensa ◽  
Davide Colla ◽  
Marco Dalmasso ◽  
Marco Giustini ◽  
Carlo Mamo ◽  
...  

Abstract Background Emergency room reports pose specific challenges to natural language processing techniques. In this setting, episodes of violence against women, the elderly, and children are often under-reported. Categorizing textual descriptions as containing violence-related injuries (V) vs. non-violence-related injuries (NV) is thus a relevant task for devising alerting mechanisms to track (and prevent) violence episodes. Methods We present ViDeS (so dubbed after Violence Detection System), a system to detect episodes of violence from narrative texts in emergency room reports. It employs a deep neural network for categorizing textual ER report data, and complements that output by making explicit which elements corroborate the interpretation of the record as reporting violence-related injuries. To these ends we designed a novel hybrid technique for filling semantic frames that employs distributed representations of terms along with syntactic and semantic information. The system has been validated on real data annotated with two sorts of information: the presence vs. absence of violence-related injuries, and semantic roles that can be interpreted as major cues for violent episodes, such as the agent that committed violence, the victim, the body district involved, etc. The employed dataset contains over 150K records annotated with class (V, NV) information, and 200 records with finer-grained information on the aforementioned semantic roles. Results We used data coming from an Italian branch of the EU-Injury Database (EU-IDB) project, compiled by hospital staff. Categorization figures approach full precision and recall for negative cases, and .97 precision and .94 recall on positive cases. As regards the recognition of semantic roles, we recorded an accuracy varying from .28 to .90 according to the semantic role involved. Moreover, the system allowed unveiling annotation errors committed by hospital staff.
Conclusions Explaining a system's results, so as to make its output more comprehensible and convincing, is today a necessity for AI systems. Our proposal is to combine distributed and symbolic (frame-like) representations as a possible answer to this pressing demand for interpretability. Although presently focused on the medical domain, the proposed methodology is general and can, in principle, be extended to further application areas and categorization tasks.
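The interpretability idea described in this abstract, pairing a V/NV label with the frame-like cues that corroborate it, can be sketched in miniature. This is an illustrative toy only: a hand-written cue lexicon stands in for both the deep neural classifier and the distributed frame-filling technique, and all names and entries are hypothetical, not the paper's implementation.

```python
# Toy stand-in for ViDeS-style explainable categorization: a cue lexicon maps
# surface words to frame roles, and the V/NV label is justified by the cues found.
CUE_LEXICON = {
    "punched": "act_of_violence",
    "hit": "act_of_violence",
    "husband": "agent",
    "face": "body_district",
}

def extract_cues(report):
    """Collect frame-role cues that make the classification decision inspectable."""
    cues = {}
    for word in report.lower().split():
        token = word.strip(".,")
        role = CUE_LEXICON.get(token)
        if role:
            cues[role] = token
    return cues

def classify(report):
    """Label a report V/NV and return the cues that corroborate the label."""
    cues = extract_cues(report)
    label = "V" if "act_of_violence" in cues else "NV"
    return label, cues
```

A record such as "Patient states her husband punched her in the face." would be labeled V with the agent, act, and body district made explicit, while "Patient fell from a ladder at home" yields NV with no cues, mirroring how explicit frame elements make the output more convincing than a bare class label.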


Author(s):  
Philip M. McCarthy ◽  
Shinobu Watanabe ◽  
Travis A. Lamkin

Natural language processing tools, such as Coh-Metrix (see Chapter 11, this volume) and LIWC (see Chapter 12, this volume), have been tremendously successful in offering insight into quantifiable differences between text types. Such quantitative assessments have certainly been highly informative in terms of evaluating theoretical linguistic and psychological categories that distinguish text types (e.g., referential overlap, lexical diversity, positive emotion words, and so forth). Although these identifications are extremely important in revealing ability deficiencies, knowledge gaps, comprehension failures, and underlying psychological phenomena, such assessments can be difficult to interpret because they do not explicitly inform readers and researchers as to which specific linguistic features are driving the text type identification (i.e., the words and word clusters of the text). For example, a tool such as Coh-Metrix informs us that expository texts are more cohesive than narrative texts in terms of sentential referential overlap (McNamara, Louwerse, & Graesser, in press; McCarthy, 2010), but it does not tell us which words (or word clusters) are driving that cohesion. That is, we do not learn which actual words tend to be indicative of the text type differences. These actual words may tend to cluster around certain psychological, cultural, or generic differences, and, as a result, researchers and materials designers who might wish to create or modify text, so as to better meet the needs of readers, are left somewhat in the dark as to which specific language to use. What is needed is a textual analysis tool that offers qualitative output (in addition to quantitative output) that researchers and materials designers might use as a guide to the lexical characteristics of the texts under analysis. The Gramulator is such a tool.
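The qualitative output the passage calls for, which actual words or word clusters drive a text-type difference, can be illustrated with a toy differential-bigram extractor in the spirit of the Gramulator's differentials (n-grams typical of one corpus and absent from a contrast corpus). The two mini-corpora and function names here are invented for illustration, not the tool's actual interface.

```python
from collections import Counter

def bigrams(text):
    """Count adjacent word pairs in a lowercased text."""
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

def differentials(corpus_a, corpus_b):
    """Bigrams frequent in corpus A but absent from corpus B -- a qualitative
    answer to 'which word clusters distinguish these text types?'."""
    counts_a, counts_b = bigrams(corpus_a), bigrams(corpus_b)
    return [" ".join(g) for g, _ in counts_a.most_common() if g not in counts_b]

expository = "the water cycle moves the water through the air"
narrative = "she walked through the air of the old town"
diffs = differentials(expository, narrative)
```

Here `diffs` surfaces clusters like "the water" and "water cycle" as characteristic of the expository snippet, while shared material such as "the air" is filtered out, exactly the kind of lexical guide a materials designer could act on.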


2020 ◽  
Vol 48 (3) ◽  
pp. 495-528
Author(s):  
Regine Eckardt

AbstractText comprehension is based on the literal content of sentences and pragmatic enrichment. Theories of pragmatic enrichment in the literature include enrichment of narrative texts, but also pragmatic content conveyed by presupposition triggers. Taking texts by Ror Wolf as my test case, I illustrate that our capacity of pragmatic enrichment can be abused to understand paradoxical content, even though the literal content of the text seems coherent at the surface level. This shows that pragmatic enrichment in narration is a genuine part of language processing and must not be equated with commonsense reasoning.


Target ◽  
1989 ◽  
Vol 1 (2) ◽  
pp. 151-181 ◽  
Author(s):  
Kitty M. van Leuven-Zwart

Abstract This article presents a method for the establishment and description of shifts in integral translations of narrative texts. The method is based on the premise that both micro- and macrostructural shifts in translation can furnish indications of the translational norms adopted by the translator, his interpretation of the original text and the strategy applied during the process of translation. Further it is based on the assumption that research on the nature and frequency of microstructural shifts must precede research on macrostructural ones, in order to guarantee that findings are verifiable and the study repeatable. Thus, the method developed consists of two components: a comparative and a descriptive model. The comparative model is designed for the classification of microstructural shifts, i.e. semantic, stylistic and pragmatic shifts within sentences, clauses and phrases. The descriptive model focuses on the effects of microstructural shifts on the macrostructural level. With the aid of this model shifts with respect to characters, events, time, place and other meaningful components of the text can be determined and described.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Gezheng Xu ◽  
Wenge Rong ◽  
Yanmeng Wang ◽  
Yuanxin Ouyang ◽  
Zhang Xiong

Abstract Background Biomedical question answering (QA) is a sub-task of natural language processing in a specific domain, which aims to answer a question in the biomedical field based on one or more related passages and can provide people with accurate healthcare-related information. Recently, many approaches based on neural networks and large-scale pre-trained language models have substantially improved its performance. However, considering the lexical characteristics of biomedical corpora and the small scale of the available datasets, there is still much room for improvement in biomedical QA tasks. Results Inspired by the importance of syntactic and lexical features in the biomedical corpus, we propose a new framework that extracts external features, such as part-of-speech and named-entity recognition, and fuses them with the original text representation encoded by a pre-trained language model, to enhance biomedical question answering performance. Our model achieves an overall improvement on all three metrics of the BioASQ 6b, 7b, and 8b factoid question answering tasks. Conclusions The experiments on the BioASQ question answering dataset demonstrate the effectiveness of our external feature-enriched framework. The experiments show that external lexical and syntactic features can improve a pre-trained language model's performance on biomedical question answering tasks.
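The fusion step this abstract describes, concatenating a pre-trained encoder's token representation with external lexical features such as part-of-speech and named-entity tags, can be sketched as follows. The tag sets, the three-dimensional "embedding", and the one-hot encoding are placeholder assumptions for illustration, not the paper's actual configuration.

```python
# Hypothetical tag inventories; a real system would use full POS/NER tag sets.
POS_TAGS = ["NOUN", "VERB", "ADJ", "OTHER"]
NER_TAGS = ["GENE", "DISEASE", "DRUG", "O"]

def one_hot(tag, vocabulary):
    """Encode a categorical tag as a one-hot vector over its vocabulary."""
    return [1.0 if tag == t else 0.0 for t in vocabulary]

def fuse(embedding, pos_tag, ner_tag):
    """Concatenate the encoder's token embedding with external feature vectors,
    yielding an enriched representation for the downstream QA layers."""
    return embedding + one_hot(pos_tag, POS_TAGS) + one_hot(ner_tag, NER_TAGS)

token_embedding = [0.12, -0.40, 0.33]  # stand-in for a pre-trained encoder output
fused = fuse(token_embedding, "NOUN", "GENE")
```

The enriched vector (here 3 + 4 + 4 = 11 dimensions) would then feed the answer-extraction head in place of the bare encoder output; concatenation is the simplest fusion choice, and other schemes (gating, attention) are possible.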


2021 ◽  
Author(s):  
Sakdipat Ontoum ◽  
Jonathan H. Chan

By identifying and extracting relevant information from articles, automated text summarization helps the scientific and medical sectors. Automatic text summarization is a way of compressing text documents so that users may find important information in the original text in less time. We first review some recent work in the field of summarization that uses deep learning approaches, and then discuss the "COVID-19" summarization research papers. The readability test refers to the ease with which a reader can grasp written text; in natural language processing, the substance of a text determines its readability. We constructed word clouds using the most frequent words of the abstracts. From three measurements we compute the means of "ROUGE-1", "ROUGE-2", and "ROUGE-L". As a consequence, "Distilbart-mnli-12-6" and "GPT2-large" outperform the others.
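Since the comparison above rests on mean ROUGE scores, a minimal sketch of recall-oriented ROUGE-N may help. This is a simplification: standard implementations also report precision and F-measure, and ROUGE-L uses longest common subsequence rather than n-grams; the example texts are invented.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n):
    """Recall-oriented ROUGE-N: the fraction of the reference's n-grams
    (with multiplicity) that also appear in the candidate summary."""
    cand = Counter(ngrams(candidate.lower().split(), n))
    ref = Counter(ngrams(reference.lower().split(), n))
    if not ref:
        return 0.0
    overlap = sum(min(cand[g], count) for g, count in ref.items())
    return overlap / sum(ref.values())

reference = "automatic summarization compresses text documents"
candidate = "summarization compresses long text documents quickly"
r1 = rouge_n(candidate, reference, 1)  # unigram overlap
r2 = rouge_n(candidate, reference, 2)  # bigram overlap
```

Averaging such scores over a test set gives the mean ROUGE-1/ROUGE-2 figures used to rank the summarization models.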


2014 ◽  
Vol 2 (4) ◽  
pp. 367
Author(s):  
Mehdi Falih Al-Ghazalli

<em>The present paper aims at investigating the lexical and grammatical means by which events in written texts are temporally sequenced in Standard Arabic and Standard English. Temporal succession refers to the chronological order of events, which is typically signalled by conjunctions, tense, aspect, synonyms, antonyms, time adverbials and prepositions. The researcher built his study on two hypotheses: firstly, both languages tend to use the same lexico-grammatical devices to achieve the succession concerned. Secondly, translating Arabic temporal connectives, found in narrative texts, into English seems to pose rendition difficulties which can be attributed to grammatical and discoursal differences between the two languages. The results of the contrastive analysis conducted by the researcher have proved that the two languages partially employ the same lexico-grammatical connectives to maintain the temporal sequence of actions and events. However, unlike English, Arabic employs some coordinators as time connectives. As for the translation assessment, it has been found that in Arabic literary texts, time connectives have not been accurately translated. This has been particularly in evidence as far as Arabic coordinators (as time connectives) are concerned.</em>

