Visual cues during interaction: Are recasts different from noncorrective repetition?

2020 ◽  
Vol 36 (3) ◽  
pp. 359-370 ◽  
Author(s):  
Kim McDonough ◽  
Pavel Trofimovich ◽  
Libing Lu ◽  
Dato Abashidze

Visual cues may help second language (L2) speakers perceive interactional feedback and reformulate their nontarget forms, particularly in the case of recasts, which can be difficult to perceive as corrective. This study explores whether recasts have a visual signature and whether raters can perceive a recast's corrective function. Transcripts of conversations between a bilingual French–English interlocutor and L2 English university students (n = 24) were analysed for recasts and noncorrective repetitions with rising and declarative intonation. Videos of those excerpts (k = 96) were then analysed for the interlocutor's provision of visual cues during the recast and repetition turns, including eye gaze duration, nods, blinks, and other facial expressions (frowns, eyebrow raises). The videos were rated by 96 undergraduate university students who were randomly assigned to three viewing conditions: clear voice/clear face, clear voice/blurred face, or distorted voice/clear face. Using a 100-millimeter scale with two anchor points (0% = he's making a comment, and 100% = he's correcting an error), they rated the corrective function of the interlocutor's responses while their eye gaze was tracked. Raters reliably distinguished recasts from repetitions through their ratings (although ratings were generally low overall), but not through their eye gaze behaviors.

2019 ◽  
Vol 41 (5) ◽  
pp. 1151-1165 ◽  
Author(s):  
Kim McDonough ◽  
Pavel Trofimovich ◽  
Libing Lu ◽  
Dato Abashidze

Abstract This research report examines the occurrence of listener visual cues during nonunderstanding episodes and investigates raters' sensitivity to those cues. Nonunderstanding episodes (n = 21) and length-matched understanding episodes (n = 21) were taken from a larger dataset of video-recorded conversations between second language (L2) English speakers and a bilingual French–English interlocutor (McDonough, Trofimovich, Dao, & Abashidze, 2018). Episode videos were analyzed for the occurrence of listener visual cues, such as head nods, blinks, facial expressions, and holds. Videos of the listener's face were manipulated to create three rating conditions: clear voice/clear face, distorted voice/clear face, and clear voice/blurred face. Raters from the same speech community (N = 66) were assigned to a video condition to assess the listener's comprehension. Results revealed differences in the occurrence of listener visual cues between the understanding and nonunderstanding episodes. In addition, raters gave lower ratings of listener comprehension when they had access to the listener's visual cues.


Author(s):  
Aki Tsunemoto ◽  
Rachael Lindberg ◽  
Pavel Trofimovich ◽  
Kim McDonough

Abstract This study examined the role of visual cues (facial expressions and hand gestures) in second language (L2) speech assessment. University students (N = 60) at English-medium universities assessed 2-minute video clips of 20 L2 English speakers (10 Chinese and 10 Spanish speakers) narrating a personal story. They rated the speakers' comprehensibility, accentedness, and fluency using 1,000-point sliding scales. To manipulate access to visual cues, the raters were assigned to three conditions that presented the audio along with (a) a static image of the speaker, (b) a static image of the speaker's torso with a dynamic face, or (c) a dynamic torso and face. Results showed that raters with access to the full video tended to perceive the speakers as more comprehensible, and rated them as significantly less accented, than did raters in the less visually informative conditions. The findings are discussed in terms of how the integration of visual cues may impact L2 speech assessment.


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents the results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. The paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar clinician, and 2 familiar clinicians), who were asked to rate each vocalization as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. The findings suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure that the child's intended choice is accurately understood.


2019 ◽  
Vol 8 (1) ◽  
pp. 129-144
Author(s):  
Chinaza Uleanya ◽  
Bongani Thulani Gamede ◽  
Mofoluwake Oluwadamilola Uleanya

2021 ◽  
pp. 1-15
Author(s):  
Kim McDonough ◽  
Rachael Lindberg ◽  
Pavel Trofimovich ◽  
Oguzhan Tekin

Abstract This replication study seeks to extend the generalizability of an exploratory study (McDonough et al., 2019) that identified holds (i.e., a temporary cessation of dynamic movement by the listener) as a reliable visual cue of non-understanding. Conversations between second language (L2) English speakers in the Corpus of English as a Lingua Franca Interaction (CELFI; McDonough & Trofimovich, 2019) containing non-understanding episodes (e.g., pardon?, what?, sorry?) were sampled and compared with understanding episodes (i.e., follow-up questions). External raters (N = 90) assessed the listener's comprehension under three rating conditions: +face/+voice, −face/+voice, and +face/−voice. The association between non-understanding and holds reported in McDonough et al. (2019) was confirmed. Although raters distinguished reliably between understanding and non-understanding episodes, they were not sensitive to facial expressions when judging listener comprehension. The initial and replication findings suggest that holds remain a promising visual signature of non-understanding that can be explored in future theoretically and pedagogically oriented contexts.


Author(s):  
Marga Stander ◽  
Annemarie Le Roux

Abstract South African Sign Language (SASL) has become an increasingly popular language that hearing university students want to learn as a second language. This requires more qualified SASL instructors and new curricula at South African universities. This paper considers ways in which challenges associated with the teaching and learning of SASL can be overcome. Krashen’s Comprehensible Input Hypothesis and Swain’s Output Hypothesis form the theoretical framework, with reference to our own independent experience, praxis, and reflection. This study considered different teaching methods and pedagogies and found the post-method approach suggested by Kumaravadivelu (2003) to be a viable method for teaching SASL as a second language. This approach aligns with the one we had independently identified as most empowering for teachers: creating their own strategies based on their intuition, experience, and pedagogy. We therefore do not favour one specific method over another, but rather adopt an integrated approach. We conclude with a few suggestions regarding sign language curriculum content and further research on sign language as an L2, both of which need urgent attention.


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Dina Tell ◽  
Denise Davidson ◽  
Linda A. Camras

The effects of eye gaze direction and expression intensity on emotion recognition were investigated in children with autism disorder and typically developing children. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eye gaze, and expressions at 50% intensity, as more intense than typically developing children did. A trend was also found for sad expressions: children with autism disorder were less accurate than typically developing children in recognizing sadness at 100% intensity with direct eye gaze. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.

