Written text production and its relationship to writing processes and spelling ability in persons with post-stroke aphasia

Aphasiology ◽  
2020 ◽  
pp. 1-18
Author(s):  
Charlotte Johansson-Malmeling ◽  
Lena Hartelius ◽  
Åsa Wengelin ◽  
Ingrid Henriksson
2020 ◽  
Vol 13 (2) ◽  
pp. 43-49
Author(s):  
Piroska DEMÉNY

In Romania, the curriculum for mother tongue education for grades three and four of primary school defines spoken and written text production in various communication situations as a general educational requirement and competence (see the curriculum for competence-based teaching of the mother tongue approved by Ministerial Decree No. 5003 of 4 December 2014, Hungarian Language and Literature, grades three and four). This experimental study examines the impact of digital storytelling on children’s text production skills. Our aim was to design an intervention programme that develops primary school children’s self-expression, text production skills and creativity, but also their digital competencies. The goal is to use digital storytelling to develop children’s composition skills, including staying on the subject, creating the connection between title and content, spelling, text appearance, and reaching the desired length. In order to achieve our objective, we devised experiments involving two cohorts of children in year four of primary school who were given stories selected from Angi Máté’s book Volt egyszer egy… (Once upon a time there was a…). Using these stories as a starting point, the members of both groups created their own stories, the experimental group applying digital storytelling, while the control group applied the technique of collage.


Author(s):  
Lauri Haapanen

Text production research tends to analyse corpora of text products as its data. However, when the aim is to investigate text production processes in particular, such an approach falls short; a written text does its best to hide any traces of its genesis. This article argues for a holistic approach in text production research by presenting five methodological guidelines for future research: 1) what/how-research questions need to be followed by why-questions, and such research frameworks require 2) several methods to be applied in order to 3) encompass both product and process perspectives, 4) reveal material, mental and social activities, and 5) move from micro-level activities towards macro-level contexts. The holistic approach is empirically illustrated by drawing on a study on journalistic quoting (Haapanen 2017a).


Author(s):  
Leah Henrickson

Natural language generation (NLG) refers to the process in which computers produce output in readable human languages (e.g., English, French). Despite sounding as though they are contained within the realm of science fiction, computer-generated texts actually abound; business performance reports are generated by NLG systems, as are tweets and even works of longform prose. Yet many are altogether unaware of the increasing prevalence of computer-generated texts. Moreover, there has been limited scholarly consideration of the social and literary implications of NLG from a humanities perspective, despite NLG systems being in development for more than half a century. This article serves as one such consideration. Human-written and computer-generated texts represent markedly different approaches to text production that necessitate distinct approaches to textual interpretation. Characterized by production processes and labor economies that at times seem inconsistent with those of print culture, computer-generated texts bring conventional understandings of the author-reader relationship into question. But who—or what—is the author of the computer-generated text? This article begins with an introduction to NLG as it has been applied to the production of public-facing textual output. NLG’s unique potential for textual personalization is observed. The article then moves toward a consideration of authorship as the concept may be applied to computer-generated texts, citing historical and current legal discussions, as well as various interdisciplinary analyses of authorial attribution. This article suggests a semantic shift from considering NLG systems as tools to considering them as social agents in themselves: not to obsolesce human writers, but to recognize the particular contributions of NLG systems to the current socio-literary landscape. As this article shows, texts are regarded as fundamentally human artifacts. 
A computer-generated text is no less a human artifact than a human-written text, but its unconventional manifestation of humanity prompts calculated contemplation of what authorship means in an increasingly digital age.


Author(s):  
Jeremie Seror

Technological innovations and the prevalence of the computer as a means of producing and engaging with texts have dramatically transformed the ways in which literacy is defined and developed in modern society. Concurrently, this rise in digital writing practices has led to a growing number of tools and methods that can be used to explore second language (L2) writers’ writing development. This paper provides an overview of one such technique: the contributions of screen capture technology as a means of analyzing writers' composition processes. This paper emphasizes the unique advantages of being able to unobtrusively gather, store and replay what have traditionally remained hidden sequences of events at the heart of L2 writers' text production. Drawing on research data from case studies of university L2 writers, findings underscore the contribution screen capture technology can make to writing theory's understanding of the complex series of behaviours and strategies at the heart of L2 writers' interactions.


Fachsprache ◽  
2018 ◽  
Vol 40 (3-4) ◽  
pp. 122-140
Author(s):  
Rikke Hartmann Haugaard

News media possess an orchestrating, manipulating power over the public debate; they create the framework in which we discuss events and learn about ourselves and our surroundings. At the same time, news products provide much of our foundation for knowing about the world we inhabit. However, we lack empirical knowledge about the process of writing news texts, i.e. knowledge about the choices made by journalists as to what to communicate and how to communicate it, in other words, the decisions they make as regards content and linguistic form, respectively. Revisions made during writing yield insights into the progression of a text, providing a significant element to the understanding of how journalists juggle content and form in their mediation of knowledge. Thus, (NN 2016) presents a study of journalists’ revision activity when producing a text. The study was designed as a multiple case study and explored different aspects of revisions occurring during three specific instances of professional text producers’ ordinary writing practices as they unfolded in their natural setting in an editorial office of a major Spanish newspaper. Placing the research agenda at the center and with a view to presenting a description as comprehensive as possible of the revisions made during the writing processes, the study applied a mix of qualitative and quantitative methods, i.e. keystroke logging, participant observation and retrospective interviews. For each journalist, the study investigated the characteristics of the revisions of content and form separately. In this sense, the study examined time of occurrence during the writing process, revision type, such as addition, omission and substitution, and the possible relation between timing and revision type. Moreover, the study analysed the distribution of revisions between content and form and the differences between and similarities shared by the three journalists.
To operationalise the content-form dichotomy, the analysis builds on Faigley/Witte’s (1981) taxonomy. Accordingly, content revisions add new content or omit existing content that cannot otherwise be inferred from the extant text. By contrast, revisions that only affect the form of the text neither omit nor substitute original content that cannot be inferred from the extant text as it is, nor do they add content that cannot already be inferred. When tracking the text production process as it unfolds in computer-based writing, the continuous revisions made as part of the ongoing text production process become visible to the researcher. At any given point during writing, the written text can be revised at its leading edge, where new text is being transcribed, and in the text already written, i.e. after the text has been transcribed. This distinction between revisions according to their location, i.e. in the text currently being transcribed (pre-contextual revision) or in the text already transcribed (contextual revision) is relevant when the effect of a revision (content or form) is to be interpreted; generally, only the effect of contextual revisions is interpretable on the basis of keystroke logging alone. The approach to the analysis of revisions was inspired by the online revision taxonomy developed by Lindgren/Sullivan (2006a, 2006b) in collaboration with Stevenson/Schoonen/de Glopper (2006). However, the taxonomy proved to be insufficiently accurate to be operationalised, and too coarse to categorise all interpretable revisions in the data. Consequently, a stringent and nuanced analytical framework was developed based on existing theory and the data. This framework places the revisions made during text production on a continuum of semantically meaningful context. 
At one end of the continuum lies the potentially most complete semantically meaningful context represented by a sentence concluded by a sentence-completing character, and at the other end, the semantically non-meaningful context. In between the two ends, the continuum holds semantically meaningful contexts that are potentially less complete, such as semantically meaningful sentences without sentence-completing characters and semantically meaningful phrases. By introducing an interpretation as to whether a revision is conducted in a semantically meaningful context, the analytical framework distances itself from a more objective categorisation of the location of revisions at the leading edge or in the transcribed text. This allows for a systematisation of the contexts in which the effect of revisions at the leading edge can be interpreted and the contexts in which the effect of revisions made in already transcribed text cannot be interpreted. The exploratory and qualitative nature of the study provided a detailed analysis of the journalists’ revision activities, and it offered nuanced insights into their text production. The results showed a relatively homogenous picture, including certain variations, in which the form of the text was revised significantly more often than the content, both during the ongoing text production and, in particular, during the systematic review of the potentially final text in which content was only infrequently revised. Revision types and their effect on the text during the ongoing text production and in the systematic review of the potentially finalised text reflect the diverging purposes of these two phases: the first phase serves to generate cohesive and coherent text for the article, and the second phase aims to evaluate and, especially, to reduce the volume of the written text. 
The overall tendency of the analyses and the details which it reflects can be used as the basis for new studies and can help generate hypotheses about how other text producers, both in similar and different contexts, write and revise their texts and how they juggle content and form in their democratisation of knowledge.
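The study's distinction between revisions at the leading edge (pre-contextual) and revisions in already-transcribed text (contextual) can be sketched, purely for illustration, as a classification over keystroke-log events. The data structure and function names below are hypothetical assumptions, not the study's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Revision:
    """One logged revision event (illustrative format)."""
    position: int     # character offset where the revision occurs
    text_length: int  # length of the transcribed text at that moment

def classify_location(rev: Revision) -> str:
    """Classify a revision by where it occurs in the emerging text."""
    # A revision at (or beyond) the end of the transcribed text happens at
    # the leading edge, where new text is still being produced.
    if rev.position >= rev.text_length:
        return "pre-contextual"  # effect generally not interpretable from the log alone
    # Otherwise the revision alters text that was already transcribed.
    return "contextual"          # effect interpretable from keystroke logging alone

# Example: a deletion three characters before the end of a 40-character text
print(classify_location(Revision(position=37, text_length=40)))  # contextual
print(classify_location(Revision(position=40, text_length=40)))  # pre-contextual
```

As the abstract notes, this purely positional categorisation proved too coarse for the study, which is why the authors moved to a continuum of semantically meaningful context instead.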


Author(s):  
Frances Rock

This paper examines the complex literacy event through which police witness statements are produced in England and Wales. Witness statements are constructed through interviews which archetypally consist of a trajectory from the witness of the crime, through a police officer and onto a written page, with the officer taking most control of the writing. This paper examines how this ostensibly inevitable trajectory materializes in practice. It identifies a distinctive way of traversing the trajectory through which the inner workings of the trajectory itself are put on display by the interviewing officer and through this display recursively influence the trajectory. This display of the trajectory draws on four discursive means: writing aloud, proposing wordings, reading back text just written and referring explicitly to the artifactuality of writing, which I label, collectively, “Frontstage Entextualization.” Through Frontstage Entextualization, the writing process comes to be used as a resource for both producing text and involving the witness in text production. The paper identifies three forms of activity which are accomplished through Frontstage Entextualization: first, frontstage drafting, which allows words and phrases for possible inclusion to be weighed up; secondly, frontstage scribing, which foregrounds the technology of pen and paper and allows the witness to be apprised of writing processes; and finally, frontstaging the sequentiality of written-ness to textually resolve difficulties of witness memory. The paper concludes by suggesting that the analysis has shown how text trajectories can be made accessible to lay participants by institutional actors.

