Translation Revision: Correlating Revision Procedure and Error Detection

Author(s):  
A. Helene Ipsen ◽  
Helle Vrønning Dam

This article reports on an empirical study on translation revision. With the aim of investigating the possible link between revision procedure and quality, the research correlates an indicator of quality, error detection, with revision procedure. Error detection and revision procedure were studied drawing on a convergent parallel mixed-methods research design involving three different sources of data. Nine subjects performed a revision task and thus produced text data; their activities on the computer screen were captured and saved as video files; and retrospective interviews were conducted with the revisers upon completion of the task. Results show that the highest error detection scores were linked with a variety of revision procedures, but with one common denominator: the target text was consistently the point of departure. Revisers with high error detection scores thus engaged in various revision procedures, but in all cases their focus of attention in the initial operations was the translation rather than the source text. Conversely, the revisers whose initial attention was directed towards the source text received the lowest error detection scores in the revision task.


Author(s):  
Rebecca Wilson ◽  
Oliver Butters ◽  
Demetris Avraam ◽  
Andrew Turner ◽  
Paul Burton

Abstract Objectives: DataSHIELD (www.datashield.ac.uk) was born of the requirement in the biomedical and social sciences to co-analyse individual patient data (microdata) from different sources without disclosing identity or sensitive information. Under DataSHIELD, raw data never leave the data provider and no microdata or disclosive information can be seen by the researcher. The analysis is taken to the data, not the data to the analysis. Text data can be very disclosive in the biomedical domain (patient records, GP letters, etc.). Similar, but different, issues are present in other domains: text could be copyrighted, or have a large IP value, making sharing impractical. Approach: By treating text in a way analogous to individual patient data, we assessed whether DataSHIELD could be adapted and implemented for text analysis and circumvent the key obstacles that currently prevent it. Results: Using open digitised text data held by the British Library, a DataSHIELD proof-of-concept infrastructure and prototype DataSHIELD functions for free-text analysis were developed. Conclusions: Whilst it is possible to analyse free text within a DataSHIELD infrastructure, the challenge is creating generalised and resilient anti-disclosure methods for free-text analysis. There is a range of biomedical and health sciences applications for DataSHIELD methods of privacy-protected analysis of free text, including analysis of electronic health records and of qualitative data, e.g. from social media.
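The core principle described in the abstract (analysis travels to the data, and only non-disclosive aggregates travel back) can be sketched as follows. This is a minimal illustration of the idea applied to text, not the DataSHIELD API itself (which is R-based); the function name and threshold are hypothetical.

```python
# Sketch of the DataSHIELD principle for text: raw documents never leave the
# server; the researcher receives only aggregate, non-disclosive summaries.
from collections import Counter

DISCLOSURE_THRESHOLD = 3  # hypothetical cutoff; low counts could identify a record

def server_side_term_counts(documents):
    """Runs where the data lives; returns only filtered aggregate term frequencies."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    # Anti-disclosure filter: suppress rare terms that could expose a single record.
    return {term: n for term, n in counts.items() if n >= DISCLOSURE_THRESHOLD}

docs = ["the patient reported mild symptoms",
        "the patient recovered",
        "the clinic closed early"]
print(server_side_term_counts(docs))  # prints {'the': 3}
```

The hard part, as the abstract's conclusion notes, is making such filters generalised and resilient: a fixed frequency threshold is easy to state but free text offers many subtler disclosure routes.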


KWALON ◽  
2016 ◽  
Vol 21 (1) ◽  
Author(s):  
Reinoud Bosch ◽  
Ruben Verborgh

Summary An iterative mixed-methods research cycle is proposed as an approach to automatically querying the Semantic Web. To give an indication of what codification of this iterative research cycle could look like in practice, a dynamic iterator pipeline is presented that has been developed for efficient and effective iterative queries of the Semantic Web. The development of the logic of the iterative research cycle could be advanced by providing detailed and systematic answers to the question of how researchers go about answering questions by combining information from different sources on the Web.


2013 ◽  
Vol 4 ◽  
Author(s):  
Brita Bungum ◽  
Elin Kvande

The cash-for-care scheme was introduced in 1998 in Norway. During the first period after its introduction, the percentage of users was high at 91 per cent. Since 2005, however, the use has decreased substantially year by year. Thus, the use of cash for care has changed over the 15 years it has existed. In this article we take these changes as our point of departure and analyse more closely what we might call ‘the rise and fall of the cash-for-care scheme’ in Norway. Over the last 15 to 20 years, Norway has become a multicultural society and we need to include ethnicity when conducting research in the field of family policy. The focus is therefore on the intersection of gender, class, and ethnicity in parents’ use of cash for care over this period. Our analysis is based on different sources of data. We have used data from the evaluative programme undertaken by the Norwegian Research Council, including two surveys conducted before and after the reform (Gulbrandsen & Hellevik, 1998; Hellevik, 2000), and a qualitative case study focusing on fathers and mothers working in three different workplaces (Bungum et al. 2001). We have also used three other statistical studies which were carried out at two different points in time (Pettersen, 2003; Hirch, 2010; Bakken & Myklebø, 2010). Our findings indicate that cash for care is a scheme that mainly encourages mothers who have low income and a low educational level and who are to a large degree from immigrant backgrounds to remain outside the labour market. By distinguishing between three phases, we have aimed to illustrate how the intersection of gender, class, and ethnicity enters in different ways into both the discourse and the practices connected to the cash-for-care scheme since it was introduced in 1998.      


2019 ◽  
Vol 9 (4) ◽  
pp. 245-275 ◽  
Author(s):  
Florian Klonek ◽  
Fabiola Heike Gerpott ◽  
Nale Lehmann-Willenbrock ◽  
Sharon K. Parker

Team processes are interdependent activities among team members that transform inputs into outputs, vary over time, and are critical for team effectiveness. Understanding the temporal dynamics of team processes and related team phenomena with a high-resolution lens (i.e., methods with high sampling rates) is particularly challenging when going “into the wild” (i.e., studying teams operating in their full situated context). We review quantitative field studies using high-resolution methods (e.g., video, chat/text data, archival, wearables) and map out the various temporal lenses for studying team dynamics. We synthesize these different lenses and present an integrated temporal framework that is of help in theorizing about team dynamics. We also provide readers with a “how to” guide that summarizes four essential steps along with analytical methods (e.g., sequential and pattern analyses, mixed-methods research, abductive reasoning) that are applicable to the broad scope of high-resolution methods.


Target ◽  
1995 ◽  
Vol 7 (2) ◽  
pp. 261-284 ◽  
Author(s):  
Christiane Nord

Abstract As a text-type in their own right, titles and headings are intended to achieve six functions: distinctive, metatextual, phatic, referential, expressive, and appellative. Taking as a point of departure the hypothesis that translated texts have to "function" in the target situation for which they are produced by serving the purpose(s) they are intended for (which may or may not be the "same" as those of the source text), it is argued that the translator has to reconcile the conditions of functionality prevailing in the target culture with the communicative intentions of the source-title sender (= functionality + loyalty). The discussion of several examples from an extensive corpus of German, French, English, and Spanish titles and their translations shows how this methodological approach can be put into practice, establishing a model for the functional translation of other texts and text-types.


1954 ◽  
Vol 16 (1) ◽  
pp. 67-90
Author(s):  
Roman Jakobson

“Slavic Studies”—the very expression implies their comparative aspect and raises the question: what enables us to refer to Czechs, Slovaks, Poles, Lusatian Sorbs, Slovenes, Croats, Serbs, Macedonians, Bulgarians, Ukrainians, Byelorussians and Russians by the single all-encompassing term, the “Slavic” peoples? What is their common denominator? It is indisputable that the Slavic peoples are to be defined basically as Slavic-speaking peoples. If speech is the point of departure, the problem becomes primarily a linguistic one. Since the pioneering work of the Czech Abbé Dobrovský (1753–1829), comparative linguistics has proved the existence of a common ancestral language for all the living Slavic languages and has largely reconstructed the sound pattern, grammatical framework and lexical stock of this Common (or Primitive) Slavic language. The problem of where and by whom this Common Slavic language was spoken is being gradually solved by persistent efforts to synchronize the findings of comparative linguistics, toponymy, and archeology. The archeologists' data are like a motion picture without its sound track; whereas the linguists have the sound track without the film. Thus, interdepartmental teamwork becomes indispensable.


1994 ◽  
Vol 19 (2) ◽  
pp. 143-147
Author(s):  
Freddie Rokem

Performance analysis and performance theory have to deal in one way or another with the relationships between the written dramatic text and the stage performance of that text. The point of departure for most discussions of this issue is that the drama text is ‘translated’ or ‘transformed’ into a performance text through its staging; but since the ontological statuses of the written and performed texts are fundamentally different, it is virtually impossible to set up a clearly delineated hermeneutic procedure through which the new, ‘translated’ work of art, the performance text, can be analysed on the basis of the dramatic source text alone. It is quite evident, moreover, that the staging of a certain text is both a completely independent work of art, presented by live actors for an audience in the total context created especially for that performance, and an ‘interpretation’ of another independent work of art, the dramatic text. The performance itself thus creates a special form of intertextuality where the words assigned to the characters on the printed page of the written text are spoken by the actors on the stage. The behaviour of a certain character on the stage is a specific realization of a potential range of meanings which that character contains in the source text on the printed page. Since the two texts are so fundamentally different, any attempt to judge the adequacy of the ‘new’ work of art, the performance text, mainly in relation to the dramatic text is doomed to run into insurmountable hermeneutic difficulties.


Author(s):  
Pitchapa Smutradontri ◽  
Savitri Gadavanij

Abstract In our digital era, fandom has become a social and cultural phenomenon, notably in Thailand. Fans are dedicated, and creating fan text (i.e., text production made by fans about their object of fandom) is one way of showing dedication and passion to the fan base. This article explores how Thai fans engage with fan text on popular social media platforms such as Twitter, and how fandom relates to identity construction among Thai fans who are online media users. The results from a selected sample comprising 100 fan tweets from four different sources suggest five types of fan tweets: hypothetical interpretation, fan art, narrative concerning an anecdote regarding the source text, expression of personal opinions and feelings, and fan parody. Moreover, this article discusses fans’ shared lexicon called ‘fan talk,’ and how fans position themselves as relatives and friends of the source texts. This article further discusses the humorous nature and the transcultural elements found in fan tweets, especially the ‘Thai-ifize’ method that fans use in creating fan tweets.


Author(s):  
Chaoran Zhou ◽  
Jianping Zhao ◽  
Xin Zhang ◽  
Chenghao Ren ◽  
...  

In Internet applications, the description for the same point of interest (POI) entity for different location-based services (LBSs) is not completely identical. The POI entity information in a single LBS data source contains incomplete data and exhibits insufficient objectivity. Aligning and consolidating POI entities from various LBSs can provide users with more comprehensive, objective, and authoritative POI information. We herein propose a multi-attribute measurement-based entity alignment method for Internet LBSs to achieve POI entity alignment and data consolidation. This method is based on multi-attribute information (geographical information, text coincidence information, semantic information) of POI entities and is combined with different measurement methods to calculate the similarity of candidate entity pairs. Considering the demand for computational efficiency, the particle swarm optimization algorithm is used to train the model and optimize the weights of multi-attribute measurements. A consolidation strategy is designed for the LBS text data and user rating data from different sources to obtain more comprehensive and objective information. The experimental results show that, compared with other baseline models, the POI alignment method based on multi-attribute measurement performed the best. Using this method, the information of POI entities in multisource LBS can be integrated to serve netizens.
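The alignment method the abstract describes, a weighted combination of geographic, text-coincidence, and semantic similarity, can be sketched as below. The weights and thresholds here are illustrative placeholders (in the article they are optimized with particle swarm optimization), and the similarity functions are simple stand-ins, not the authors' measures.

```python
# Sketch of multi-attribute POI entity similarity: a weighted sum of
# geographic, textual, and semantic scores for a candidate entity pair.
import math

def geo_similarity(p1, p2, scale_m=500.0):
    """Haversine distance mapped to (0, 1]; closer points score higher."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    dist_m = 2 * 6_371_000 * math.asin(math.sqrt(a))  # metres on Earth's surface
    return math.exp(-dist_m / scale_m)

def text_similarity(name1, name2):
    """Character-set Jaccard overlap as a crude stand-in for text coincidence."""
    s1, s2 = set(name1.lower()), set(name2.lower())
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def poi_similarity(e1, e2, semantic_sim, weights=(0.5, 0.3, 0.2)):
    """Weighted combination; in the article the weights are tuned by PSO."""
    w_geo, w_text, w_sem = weights
    return (w_geo * geo_similarity(e1["coords"], e2["coords"])
            + w_text * text_similarity(e1["name"], e2["name"])
            + w_sem * semantic_sim)

# Two descriptions of the same cafe from different LBS providers.
a = {"name": "Central Park Cafe", "coords": (40.7812, -73.9665)}
b = {"name": "Cafe Central Park", "coords": (40.7813, -73.9666)}
print(poi_similarity(a, b, semantic_sim=0.9) > 0.8)  # prints True
```

Pairs scoring above a learned threshold would be aligned and their attributes consolidated; PSO searches the weight space so that alignment accuracy on labelled pairs is maximised.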
