Catalan Sign Language ellipsis, role shift, and the QUD

Author(s):  
David Blunier ◽  
Giorgia Zorzi
2001 ◽  
Vol 4 (1-2) ◽  
pp. 63-104 ◽  
Author(s):  
Dan I. Slobin ◽  
Nini Hoiting ◽  
Michelle Anthony ◽  
Yael Biederman ◽  
Marlon Kuntze ◽  
...  

The Berkeley Transcription System (BTS) has been designed for the transcription of sign language videotapes at the level of meaning components. The system is based on efforts to transcribe adult-child interactions in American Sign Language (ASL) and Sign Language of the Netherlands (SLN). The goal of BTS is to provide a standard means of transcribing signed utterances, meeting the following objectives:
– compatibility with CHAT format and CLAN programs (CHILDES)
– linear representation on a continuous typed line, using only ASCII characters
– representation at the level of meaning components
– full representation of elements of polycomponential verbs
– representation of manual and nonmanual elements
– representation of gaze direction, role shift, and visual attention
– representation of gestures and other communicative acts
– notation of characteristics of adult-child interaction (child-directed signing, errors, overlap, self-correction)


2019 ◽  
Vol 22 (2) ◽  
pp. 171-209
Author(s):  
Annika Hübl ◽  
Emar Maier ◽  
Markus Steinbach

There are two main competing views about the nature of sign language role shift within formal semantics today: Quer (2005) and Schlenker (2017a,b), following now standard analyses of indexical shift in spoken languages, analyze it as a so-called ‘monstrous operator’, while Davidson (2015) and Maier (2017), following more traditional and cognitive approaches, analyze it as a form of quotation. Examples of role shift in which some indexicals are shifted and some unshifted pose a prima facie problem for both approaches. In this paper, we propose a pragmatic principle of attraction to regulate the apparent unshifting/unquoting of indexicals in quotational role shift. The analysis is embedded in a systematic empirical investigation of the predictions of the attraction hypothesis for German Sign Language (DGS). Results for the first and second person pronouns (ix₁ and ix₂) support the attraction hypothesis, while results for ‘here’ are inconclusive.



2014 ◽  
Vol 17 (1) ◽  
pp. 82-101 ◽  
Author(s):  
Jesse Stewart

In spoken languages, disfluent speech, narrative effects, discourse information, and phrase position may influence the lengthening of segments beyond their typical duration. In sign languages, however, the primary use of the visual-gestural modality results in articulatory differences not expressed in spoken languages. This paper looks at sign lengthening in American Sign Language (ASL). Comparing two retellings of the Pear Story narrative from five signers, three primary lengthening mechanisms were identified: elongation, repetition, and deceleration. These mechanisms allow signers to incorporate lengthening into signs which may benefit from decelerated language production due to high information load or complex articulatory processes. Using a mixed effects model, significant differences in duration were found between (i) non-conventionalized forms vs. lexical signs, (ii) signs produced during role shift vs. non-role shift, (iii) signs in phrase-final/initial vs. phrase-medial position, (iv) new vs. given information, and (v) (non-disordered) disfluent signing vs. non-disfluent signing. These results provide insights into duration effects caused by information load and articulatory processes in ASL.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 295-353 ◽  
Author(s):  
Philippe Schlenker

‘Visible Meaning’ (Schlenker 2018b) claims (i) that sign language makes visible some aspects of the Logical Form of sentences that are covert in spoken language, and (ii) that, along some dimensions, sign languages are more expressive than spoken languages because iconic conditions can be found at their logical core. Following nine peer commentaries, we clarify both claims and discuss three main issues: What is the nature of the interaction between logic and iconicity in sign language and beyond? Does iconicity in sign language play the same role as gestures in spoken language? And is sign language Role Shift best analyzed in terms of visible context shift, or by way of demonstrations referring to gestures?


2020 ◽  
Vol 88 ◽  
pp. 27-52
Author(s):  
Yeonwoo Kim ◽  
Ki-Hyun Nam ◽  
JunMo Cho
2013 ◽  
Vol 5 (4) ◽  
pp. 313-343 ◽  
Author(s):  
Helen Earis ◽  
Kearsy Cormier

This paper discusses how point of view (POV) is expressed in British Sign Language (BSL) and spoken English narrative discourse. Spoken languages can mark changes in POV using strategies such as direct/indirect discourse, whereas signed languages can mark changes in POV in a unique way using “role shift”. Role shift is where the signer “becomes” a referent by taking on attributes of that referent, e.g. facial expression. In this study, two native BSL users and two native British English speakers were asked to tell the story “The Tortoise and the Hare”. The data were then compared to see how point of view is expressed and maintained in both languages. The results indicated that the spoken English users preferred the narrator's perspective, whereas the BSL users preferred a character's perspective. This suggests that spoken and signed language users may structure stories in different ways. However, some co-speech gestures and facial expressions used in the spoken English stories to denote characters' thoughts and feelings bear resemblance to the hand movements and facial expressions used by the BSL storytellers. This suggests that while approaches to storytelling may differ, both languages share some gestural resources which manifest themselves in different ways across different modalities.
