Event structure influences language production: Evidence from structural priming in motion event description

2013, Vol 69 (3), pp. 299-323
Author(s): Ann Bunger, Anna Papafragou, John C. Trueswell

2013, Vol 29 (4), pp. 375-389
Author(s): Baoguo Chen, Yuefang Jia, Zhu Wang, Susan Dunlap, Jeong-Ah Shin

This article presents two experiments, employing two structural priming paradigms, that investigated whether cross-linguistic syntactic priming occurs between Chinese and English passive sentences, which differ in word order (production-to-production priming in Experiment 1 and comprehension-to-production priming in Experiment 2). Results revealed cross-linguistic syntactic priming between Chinese and English passive sentences, regardless of whether primes were produced or comprehended and regardless of language direction (L1–L2 or L2–L1). Our findings indicate that word-order similarity between languages is not necessary for cross-linguistic structural priming, supporting a two-stage model of language production.


2021
Author(s): Kaitlyn Smith

This thesis investigates the uniquely “bimodal” bilingual language production of some of the New Zealand Deaf community’s youngest members—hearing and cochlear-implanted Deaf children who have Deaf signing parents. These bimodal bilinguals (aged 4-9 years old) are native users of two typologically different languages (New Zealand Sign Language (NZSL) and English), and two modalities (visual-manual and auditory-oral). The primary focus of this study is the variation found in the oral channel produced by these bimodal bilingual children, during a sign-target session (i.e. a signed conversation with a Deaf interlocutor), involving a game designed to elicit location and motion descriptions alongside a sociolinguistic interview.

The findings of this study are three-fold. Firstly, the variation of audible and visual volumes of the oral channel (the spoken modality) between and within participants’ language sessions is described. Notably, audible volume ranges from voiceless, whispered, and fully-voiced productions. Audible volume is found to have an inverse relationship with visual volume, in that reduced auditory cues reflect an increase in visual cues used for clarification. Additionally, a lowered audible volume (whispers or voiceless mouthings) is associated with reduced English, aligning with some NZSL grammatical structures, while full-voice is associated with intact English grammatical structures. Transfer in the opposite direction is also evident during descriptions of a motion event, in that English structures for encoding ‘path’ surface in the manual channel (the signed modality). Bidirectional transfer also occurs simultaneously, where structures of both languages surface in both linguistic channels.

Secondly, the coordination of the oral and manual channels during descriptions of location and motion is described. Notably, the linguistic channels are tightly temporally synchronised in the coordination of meaning. The oral channel can function gesturally by modifying or emphasising meaning in the manual channel; a similar function to co-speech gesture used by hearing users of spoken languages. Thirdly, this thesis details the children’s attitudes towards their use of NZSL and English, highlighting their sensitivity to the uniqueness of their heritage language, the movement between Deaf and hearing worlds and associated languages, and their role in passing on their sign language to other hearing people. Their Deaf/Coda and hearing cultural identification is found to be entangled in use of both oral and manual channels. The oral channel is multifaceted in the ways it functions for both the bimodal bilingual child and their Deaf interlocutor, and thus operates at the intersection of language, cognition and culture. Bimodal bilinguals’ use of the oral channel is influenced by the contact situation that exists between Deaf and hearing communities, the cognitive cost of language suppression, and the interactional setting.

This study contributes to growing global research conducted on the language production of bimodal bilinguals. It provides preliminary insight into oral channel features of young native NZSL users as a way of better understanding bimodal bilingual language development, the connections between audiological status and language, the interplay of codes across linguistic channels, and the role that modality plays in shaping meaning across all human languages.


2020, pp. 1-30
Author(s): Yiyun Liao, Katinka Dijkstra, Rolf A. Zwaan

Two Dutch directional prepositions (naar and richting) provide a useful paradigm for studying endpoint conceptualization. Experiment 1 adopted a sentence comprehension task and confirmed the linguistic proposal that, when naar was used in motion event descriptions, participants were more certain that the reference object was the goal of the agent than when richting was used. Experiments 2 and 3 used this linguistic pair to test the effect of two factors (the actor’s goal and the interlocutor’s status) on endpoint conceptualization via language production tasks. We found significant effects of both factors. First, participants chose naar more often when the referential situation licensed the inference that the reference object was the actor’s goal than when it did not. Second, participants chose richting more often when they were told to describe the referential scenario to a police officer than to a friend: they were more cautious with their statements and less willing to commit themselves to stating the goal of the agent when talking to a police officer. The results are discussed in relation to relevant linguistic theories and event theories.


Cognition, 2007, Vol 104 (3), pp. 437-458
Author(s): K. Bock, G. Dell, F. Chang, K. Onishi

2009
Author(s): Malathi Thothathiri, Laurel Brehm, Myrna F. Schwartz

Author(s): Susan Duncan

Languages such as Spanish and English differ in how each lexically packages and syntactically distributes semantic content related to motion event expression (Talmy 1985, 1991). Comparisons of spoken Spanish and English (Slobin 1996, 1998) reveal less expression of manner of motion in Spanish. This leads to the conclusion that ‘thinking for speaking’ in Spanish involves less conceptualization of manner. Here we assess speech-associated thinking about manner on a broader basis by examining not only speech but also the speech-synchronous gestures of Spanish, English, and Chinese speakers for content related to manner of motion. Speakers of all three languages produce manner-expressive gestures similar in type and frequency. Thus, motion event description may in fact involve conceptualization of manner to roughly the same extent in all three languages. Examination of gesture-speech temporal synchrony shows that Spanish manner gestures associate with expression of the ground component of motion in speech.

We consider these findings in relation to two assertions: (1) gesture compensates for content speech lacks, and (2) gesture and speech ‘jointly highlight’ shared or congruent semantic content. A compensation interpretation of the Spanish manner gestures raises questions about the role of gesture data in studies of thinking-for-speaking, generally. Further evidence from a follow-up study, in which narrators had no visual exposure to the cartoon, leads us to interpret Spanish speakers’ manner-expressive gestures as an instance of joint highlighting. This interpretation accords with McNeill’s (1992) “rule of semantic synchrony” between speech and gesture, one of the foundations of his ‘growth point’ theory of language production (McNeill 1992; McNeill and Duncan 2000). We discuss some implications of a joint highlighting interpretation for analyses of thinking for speaking and for lexical semantic theory.


2020, Vol 73 (11), pp. 1807-1819
Author(s): Mengxing Wang, Zhenguang G Cai, Ruiming Wang, Holly P Branigan, Martin J Pickering

Do speakers make use of a word’s phonological and orthographic forms to determine the syntactic structure of a sentence? We report two Mandarin structural priming experiments involving homophones to investigate word-form feedback on syntactic encoding. Participants tended to reuse the syntactic structure across sentences; such a structural priming effect was enhanced when the prime and target sentences used homophone verbs (the homophone boost), regardless of whether the homophones were heterographic (written with different characters; Experiments 1 and 2) or homographic (written with the same character; Experiment 2). Critically, the homophone boost was comparable between homographic and heterographic homophone primes (Experiment 2). Hence, unlike phonology, orthography appears to play a minimal role in mediating structural priming in production. We suggest that the homophone boost results from lemma associations that develop between homophones, due to their phonological identity, early during language learning; such associations stabilise before literacy acquisition, thus limiting the influence of orthographic identity on lemma associations between homophones and, in turn, on structural priming in language production.

