Meaning assembly in simultaneous interpretation

Interpreting  
1998  
Vol 3 (2)  
pp. 163-199  
Author(s):  
Robin Setton

Existing simultaneous interpretation (SI) process models lack an account of intermediate representation compatible with the cognitive and linguistic processes inferred from corpus descriptions or psycholinguistic experimentation. Comparison of the source language (SL) and target language (TL) at critical points in synchronised transcripts of German-English and Chinese-English SI shows how interpreters use procedural and intentional clues in the input to overcome typological asymmetries and build a dynamic conceptual and intentional mental model which supports fine-grained incremental comprehension. An Executive, responsible for overall co-ordination and secondary pragmatic processing, compensates at the production stage for the inevitable semantic approximations and re-injects pragmatic guidance in the target language. The methodological and cognitive assumptions for the study are provided by Relevance Theory and a 'weakly interactive' parsing model adapted to simultaneous interpretation.

Fachsprache  
2018  
Vol 40 (1-2)  
pp. 63-78
Author(s):  
Margarete Flöter-Durr  
Thierry Grass

Despite the work of Dan Sperber and Deirdre Wilson (1989), the concept of relevance has not enjoyed the popularity it deserves among translators, as it appears to be more productive in information science and sociology than in translation studies. Relevance theory provides the underpinnings of the unified account of translation proposed by Ernst-August Gutt. However, if the concept of relevance is to take into account all the parameters of legal translation, the approach should be pragmatic rather than cognitive: the aim of a relevant translation is to produce a legal text in the target language that appears relevant to the lawyer in the target legal system, namely a text that can be used in the same way as the original source text. The legal translator works as a facilitator from one legal system into another, and relevance is the core of this pragmatic approach, which calls for translation techniques such as adaptation rather than through-translation or calque (in the terminology of Delisle/Lee-Jahnk/Cormier 1999). This contribution tries to show that the theory of relevance developed in the field of sociology by Alfred Schütz could also be applied to translation theory with the aim of producing a correct translation in a concrete situation. Examples drawn from one year of practice of an expert legal translator (German-French) at the Court of Appeal in the Alsace region illustrate this claim and underpin an approach to legal translation and its heuristics that is both pragmatic and reflexive.


2021  
Vol 31  
Author(s):  
Thomas Van Strydonck  
Frank Piessens  
Dominique Devriese

Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. This article is an extended version of one that was presented at ICFP 2019 (Van Strydonck et al., 2019).
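The move discipline of linear capabilities can be approximated in software. The sketch below (all names hypothetical, and not the paper's formalism) wraps a memory region in a `LinearCap` whose `transfer()` invalidates the old handle, so a caller loses access while an untrusted callee holds the capability and regains it only when the capability is handed back. Unlike the hardware-enforced capabilities the paper relies on, plain Python cannot stop a callee from stashing a reference; this only illustrates the intended protocol.

```python
class LinearCap:
    """Software sketch of a linear capability: grants access to a
    memory region and is never duplicated -- any transfer
    invalidates the old handle."""

    def __init__(self, buf):
        self._buf = buf
        self._valid = True

    def read(self, i):
        if not self._valid:
            raise PermissionError("capability was moved")
        return self._buf[i]

    def write(self, i, v):
        if not self._valid:
            raise PermissionError("capability was moved")
        self._buf[i] = v

    def transfer(self):
        """Linear move: returns a fresh capability and kills this one."""
        if not self._valid:
            raise PermissionError("capability was moved")
        self._valid = False
        return LinearCap(self._buf)


def untrusted_increment(cap):
    # Untrusted code may use the capability it was handed...
    cap.write(0, cap.read(0) + 1)
    # ...and must hand it back to return access to the caller.
    return cap.transfer()


buf = [41]
cap = LinearCap(buf)
cap2 = untrusted_increment(cap.transfer())
print(cap2.read(0))  # 42: the verified side has regained access
try:
    cap.read(0)      # the stale handle was invalidated by the move
except PermissionError as e:
    print("stale:", e)
```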


Author(s):  
Waleed Ammar  
George Mulcaire  
Miguel Ballesteros  
Chris Dyer  
Noah A. Smith

We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser’s performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.
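The three-part input representation described above can be sketched concretely. In this toy version (the vocabularies, embedding size and random vectors are illustrative assumptions; the real parser learns its embeddings jointly), each token vector concatenates a shared multilingual word embedding, a token-level language indicator and a fine-grained POS feature:

```python
import numpy as np

# Hypothetical toy inventories; the actual model covers many languages
# and learns multilingual clusters/embeddings from data.
LANGS = ["en", "de", "es"]
FINE_POS = ["NN", "NNS", "VBZ", "DT"]

rng = np.random.default_rng(0)
word_emb = {w: rng.normal(size=8) for w in ["the", "der", "dog", "Hund"]}

def token_vector(word, lang, fine_pos):
    """Concatenate (i) a shared multilingual word embedding,
    (ii) a one-hot token-level language indicator, and
    (iii) a one-hot fine-grained POS feature."""
    lang_onehot = np.eye(len(LANGS))[LANGS.index(lang)]
    pos_onehot = np.eye(len(FINE_POS))[FINE_POS.index(fine_pos)]
    return np.concatenate([word_emb[word], lang_onehot, pos_onehot])

v = token_vector("Hund", "de", "NN")
print(v.shape)  # (15,): 8 embedding dims + 3 languages + 4 POS tags
```

Because the word embedding space is shared across languages, the same downstream parser parameters can serve every language, while the language indicator lets it specialise where typology demands it.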


2021  
Vol 3 (5)  
pp. 26-30
Author(s):  
Annet Aromo Khachula  
Lucy Mandillah  
Bernard Angatia Mudogo

Languages have different concepts for conveying meaning; hence there is a problem in finding equivalents between the source language (SL) and the target language (TL) in the process of interpreting. The transfer of meaning is identified as one of the basic problems in interpreting, owing to the absence of equivalence between two languages. This paper identifies levels of equivalence in the interpretation of selected sermons from English into Luhya varieties. Data were collected through key-informant interviews with interpreters, focus group discussions with congregants, and the researcher's non-participant observation during church services. An audio recorder was used to collect the corpus, which was later transcribed and translated for analysis. Relevance Theory (Sperber and Wilson 1986) provided the background for the discussion of the data. The findings revealed the following levels of equivalence in the interpretation of English sermons into Luhya varieties: one-to-many, one-to-part-of-one and nil equivalence. Further, it was revealed that interpreters need to identify these three levels of equivalence when interpreting English sermons into Luhya varieties in order to determine the appropriate measures for dealing with each case.
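The three levels of equivalence the study reports can be phrased as a simple decision rule over a bilingual lexicon. The sketch below is purely illustrative (the lexicon format and the extra one-to-one label are assumptions, not part of the study, and the example items are placeholders rather than actual Luhya data):

```python
def equivalence_level(sl_item, tl_lexicon):
    """Classify an SL item by its TL coverage.
    tl_lexicon maps SL items to a list of (TL item, covers_full_meaning)
    pairs. The study reports three levels: one-to-many,
    one-to-part-of-one and nil; one-to-one is added for completeness."""
    entries = tl_lexicon.get(sl_item, [])
    if not entries:
        return "nil"
    if len(entries) > 1:
        return "one-to-many"
    tl_item, covers_full_meaning = entries[0]
    return "one-to-one" if covers_full_meaning else "one-to-part-of-one"


# Placeholder entries standing in for real SL/TL pairs:
lexicon = {
    "w_many": [("t1", True), ("t2", True)],  # several TL renderings
    "w_part": [("t3", False)],               # TL item covers only part of the sense
}
print(equivalence_level("w_many", lexicon))  # one-to-many
print(equivalence_level("w_part", lexicon))  # one-to-part-of-one
print(equivalence_level("w_none", lexicon))  # nil
```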


2021  
Vol 6 (4)  
pp. 202-211
Author(s):  
Arnida A. Bakar  
Sulhah Ramli

Many translation scholars have proposed various approaches to dealing with culture-specific items, showing that, to achieve a successful translation of good quality, a suitable and functional translation approach should be applied by the translator. Borrowing is one of the approaches applied across text genres, including sacred texts, which contain culture-specific items. It is frequently used in translating words with no equivalent in the target language. However, some translations that have applied this approach failed to supply adequate meaning and produced text that is irrelevant to the readership. The reason is that the borrowing approach stands alone, without accompanying compensation strategies. This article therefore investigates the functionality of the borrowing approach in translating cultural elements of the Qur'an that do not exist in Malay culture. The study is qualitative, and the data are analysed descriptively using document analysis, adopting the Relevance Theory initiated by Sperber and Wilson (1986). It is suggested that the relevance of a translated text can be achieved not only through borrowing as an approach but also by providing adequate meaning by means of compensation strategies. The study thus assumes that the less processing effort is required to understand the meaning, and the greater the contextual effect provided, the more relevant the translated text: when processing effort is low and contextual effect is high, the optimal relevance of the translated text is achieved. It is concluded that combining the borrowing approach with compensation strategies can lead to a better understanding of the meaning of religious cultural items that do not exist in Malay culture.
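The trade-off invoked above, greater contextual effect and lower processing effort yielding greater relevance, can be sketched as a toy scoring function. Note that this numeric form is an assumption for illustration only: Sperber and Wilson's notion of relevance is comparative, not a quantity with units.

```python
def relevance_score(contextual_effect, processing_effort):
    """Toy operationalisation of the relevance trade-off:
    relevance grows with contextual effect and shrinks with
    processing effort. Scales are arbitrary."""
    if processing_effort <= 0:
        raise ValueError("processing effort must be positive")
    return contextual_effect / processing_effort


# A bare borrowing vs. a borrowing plus a compensating gloss
# (hypothetical numbers): the gloss raises contextual effect and
# lowers the effort the reader must invest.
bare = relevance_score(contextual_effect=2.0, processing_effort=5.0)
glossed = relevance_score(contextual_effect=6.0, processing_effort=3.0)
print(glossed > bare)  # True
```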


2021  
Vol 4 (3)  
pp. 216-226
Author(s):  
Eunice Nthenya Musyoka  
Kenneth Odhiambo

This paper explores the challenges of non-equivalence at the level of grammatical categories in the Kĩkamba Bible translation. Translation involves rendering a source-text message into the target text by using the register, background knowledge and other language resources to meet the intended purpose. The process is hampered by non-equivalence, which occurs when a lexical item or an expression in the source language lacks an equivalent item in the target language. A descriptive research design was used to obtain information from a sampled population. The Bible is divided into two sections, the Old and the New Testament, and further categorized into seven groups. Purposive sampling was used to select one book from each category and one chapter from each book to form the sample for the study. Data were collected through careful study of the English Revised Standard Version Bible, to identify non-equivalence at the grammatical category level, and of the Kĩkamba Bible, to analyse how it is handled, guided by the equivalence theory proposed by Nida and by relevance theory (Sperber and Wilson). The study established four categories of non-equivalence at the grammatical category level: gender, number, person and case. According to the research, non-equivalence at the grammatical level, such as in the third person singular and plural, the second person, and pronouns in both the subjective and objective case, poses a challenge when the target language lacks a distinctive expression present in the source text, but appropriate strategies such as unit change, explicitation and specification meet the goal of translation. The study recommends that the translator interpret what the categories represent in the context as a whole before translating the separate verses. It is hoped that the research will be a contribution to applied linguistics in the area of translation, specifically on non-equivalence.


2020  
Author(s):  
Joshua Calder-Travis  
Wei Ji Ma

Visual search, the task of detecting or locating target items amongst distractor items in a visual scene, is an important function for animals and humans. Different theoretical accounts make differing predictions for the effects of distractor statistics. Here we use a task in which we parametrically vary distractor items, allowing for a simultaneously fine-grained and comprehensive study of distractor statistics. We found effects of target-distractor similarity, distractor variability, and an interaction between the two, although the effect of the interaction on performance differed from the one expected. To explain these findings, we constructed computational process models that make trial-by-trial predictions for behaviour based on the full set of stimuli in a trial. These models, including a Bayesian observer model, provided excellent accounts of both the qualitative and quantitative effects of distractor statistics, as well as of the effect of changing the statistics of the environment (in the form of distractors being drawn from a different distribution). We conclude with a broader discussion of the role of computational process models in the understanding of visual search.
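A Bayesian observer of the general kind described above can be sketched in a few lines. In this toy version (the stimulus space, parameter values and Gaussian assumptions are illustrative, not the paper's fitted model), the observer sees noisy measurements of N items, assumes that on a target-present trial exactly one uniformly chosen item is the target at orientation 0 while the rest are distractors drawn from a distractor distribution, and reports the log likelihood ratio for target presence:

```python
import numpy as np

def log_lr_target_present(x, sigma_n, mu_d, sigma_d):
    """Bayesian observer sketch. x: noisy measurements of N items
    (measurement noise sd sigma_n). Under 'present', one item is the
    target at orientation 0; the others are distractors whose
    measurements are ~ N(mu_d, sqrt(sigma_n^2 + sigma_d^2)).
    Returns log[ p(x | present) / p(x | absent) ]."""
    s_d = np.sqrt(sigma_n ** 2 + sigma_d ** 2)
    ll_dist = -0.5 * ((x - mu_d) / s_d) ** 2 - np.log(s_d)
    ll_targ = -0.5 * (x / sigma_n) ** 2 - np.log(sigma_n)
    # p(x|present)/p(x|absent) = mean over which item is the target of
    # the ratio obtained by swapping one distractor term for the target term.
    return np.log(np.mean(np.exp(ll_targ - ll_dist)))


rng = np.random.default_rng(1)
distractors = rng.normal(20.0, 5.0, size=3)        # parametrically varied
stim = np.append(distractors, 0.0)                 # target at orientation 0
x = stim + rng.normal(0.0, 2.0, size=stim.size)    # noisy measurements
d = log_lr_target_present(x, sigma_n=2.0, mu_d=20.0, sigma_d=5.0)
print(d)  # typically positive on a target-present trial
```

Raising target-distractor similarity (moving `mu_d` toward 0) or distractor variability (`sigma_d`) shrinks the evidence gap, which is how such a model produces the qualitative effects of distractor statistics.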


PLoS ONE  
2021  
Vol 16 (8)  
pp. e0255503
Author(s):  
Rajesh Bhalwankar  
Jan Treur

Learning knowledge or skills is usually considered to be based on the formation of an adequate internal mental model, a specific type of mental network. The learning process for such a mental model, conceptualised as a mental network, is a form of (first-order) mental network adaptation. Such learning often integrates learning by observation and learning by instruction, and for an effective learning process an appropriate timing of these different elements is crucial. By controlling their timing, the mental network adaptation process becomes adaptive itself, which is called second-order mental network adaptation. In this paper, a second-order adaptive mental network model is proposed to address this. The first-order adaptation process models the learning of mental models, and the second-order adaptation process controls the timing of the elements of this learning process. The model is illustrated by a case study of learner-controlled mental model learning in the context of driving a car, in which the learner is in control of the integration of learning by observation and learning by instruction.
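The two-level structure described above can be sketched with a deliberately simple stand-in: a single connection weight adapting toward observed samples (first order), with its own speed of change adapted by a control loop (second order). The update rules and constants below are illustrative assumptions, not the authors' network model, and the second-order control here adapts a learning rate rather than the timing of observation versus instruction.

```python
def second_order_learning(samples, w=0.0, eta=0.5):
    """Toy sketch of second-order adaptation.
    First order: weight w (the 'mental model') moves toward each sample.
    Second order: the adaptation speed eta is itself adapted, growing
    while errors are large and decaying as the model settles."""
    history = []
    for s in samples:
        err = s - w
        w += eta * err                          # first-order adaptation
        eta = min(0.9, max(0.05,                # second-order control of
                           eta + 0.2 * (abs(err) - 0.5)))  # adaptation speed
        history.append((w, eta))
    return history


hist = second_order_learning([1.0] * 10)
print(hist[-1][0])  # the weight has settled close to 1.0
```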


2019  
Vol 12 (1)  
pp. 89-115
Author(s):  
Eitan Grossman

This paper sketches the integration of Greek-origin loan verbs into the valency and transitivity patterns of Coptic (Afroasiatic, Egypt), arguing that transitivities are language-specific descriptive categories, and the comparison of donor-language transitivity with target-language transitivity reveals fine-grained degrees of loan-verb integration. Based on a comparison of Coptic Transitivity and Greek Transitivity, it is shown that Greek-origin loanwords are only partially integrated into the transitivity patterns of Coptic. Specifically, while Greek-origin loan verbs have the same coding properties as native verbs in terms of the A domain, i.e., Differential Subject Marking (dsm), they differ in important respects in terms of the P domain, i.e., Differential Object Marking (dom) and Differential Object Indexing (doi). A main result of this study is that language contact – specifically, massive lexical borrowing – can induce significant transitivity splits in a language’s lexicon and grammar. Furthermore, the findings of this study cast doubt on the usefulness of an overarching cross-linguistic category of transitivity.

