The ASL-LEX 2.0 Project: A Database of Lexical and Phonological Properties for 2,723 Signs in American Sign Language

Author(s): Zed Sevcikova Sehyr, Naomi Caselli, Ariel M. Cohen-Goldberg, Karen Emmorey

Abstract: ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0), which contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among the lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/, and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
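The rank correlations reported above (frequency vs. iconicity vs. phonological complexity) can be recomputed from the public dataset; as a minimal sketch, here is a pure-Python Spearman rank correlation applied to toy ratings — the values below are invented stand-ins, not rows from ASL-LEX:

```python
# Minimal Spearman rank correlation (ties receive average ranks),
# of the kind used to relate sign frequency, iconicity, and complexity.

def ranks(xs):
    """Average 1-based ranks with tie handling."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0      # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: frequency ratings vs. iconicity ratings (hypothetical values).
freq = [6.2, 5.8, 4.1, 3.0, 2.5]
icon = [1.5, 2.0, 3.2, 4.8, 5.9]
rho = spearman(freq, icon)   # negative, as the abstract reports
```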

2020, Vol. 23(5), pp. 1032–1044
Author(s): Megan Mott, Katherine J. Midgley, Phillip J. Holcomb, Karen Emmorey

Abstract: This study used ERPs to (a) assess the neural correlates of cross-linguistic, cross-modal translation priming in hearing beginning learners of American Sign Language (ASL) and in deaf, highly proficient signers, and (b) examine whether sign iconicity modulates these priming effects. Hearing learners exhibited translation priming for ASL signs preceded by English words (greater negativity for unrelated than translation primes) later in the ERP waveform than deaf signers, and exhibited earlier and greater priming for iconic than for non-iconic signs. Iconicity did not modulate translation priming effects for deaf signers, either behaviorally or in the ERPs (except in an 800–1000 ms time window). Because deaf signers showed early translation priming effects (beginning at 400–600 ms), we suggest that iconicity did not facilitate lexical access, but that deaf signers may have recognized sign iconicity later in processing. Overall, the results indicate that iconicity speeds lexical access for L2 sign language learners, but not for proficient signers.


2019, Vol. 11(2), pp. 208–234
Author(s): Zed Sevcikova Sehyr, Karen Emmorey

Abstract: Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning from the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs, and the relationships among iconicity, transparency (correctly guessed signs), ‘perceived transparency’ (transparency ratings of the guesses), and ‘semantic potential’ (the diversity, or H index, of the guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers’ and non-signers’ ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent, and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
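The ‘semantic potential’ H index above is a Shannon-style diversity measure over the distribution of guesses for a sign. A minimal sketch, assuming guesses are simply tallied per sign (the example guesses are invented):

```python
from collections import Counter
from math import log2

def h_index(guesses):
    """Shannon diversity (in bits) of the guess distribution for one sign:
    H = -sum(p_i * log2(p_i)). Higher H means more diverse guesses."""
    counts = Counter(guesses)
    n = len(guesses)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A form that evokes one dominant meaning has low semantic potential;
# a form whose guesses scatter widely has high semantic potential.
low = h_index(["drink"] * 9 + ["cup"])
high = h_index(["drink", "cup", "pour", "toast", "glass"])  # all distinct
```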


2020, Vol. 12(1), pp. 182–202
Author(s): Bill Thompson, Marcus Perlman, Gary Lupyan, Zed Sevcikova Sehyr, Karen Emmorey

Abstract: A growing body of research shows that both signed and spoken languages display regular patterns of iconicity in their vocabularies. We compared iconicity in the lexicons of American Sign Language (ASL) and English by combining previously collected ratings of ASL signs (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017) and English words (Winter, Perlman, Perry, & Lupyan, 2017) with data-driven semantic vectors derived from English. Our analyses show that models of spoken-language lexical semantics drawn from large text corpora can be useful for predicting the iconicity of signs as well as of words. Compared to English, ASL has a greater number of regions of semantic space with concentrations of highly iconic vocabulary. There was an overall negative relationship between semantic density and iconicity for both English words and ASL signs. This negative relationship disappeared for highly iconic signs, suggesting that iconic forms may be more easily discriminable in ASL than in English. Our findings contribute to an increasingly detailed picture of how iconicity is distributed across different languages.
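Predicting iconicity ratings from semantic vectors can be sketched, in outline only, as a regularized linear map from embedding space to ratings. This is not the authors' actual pipeline; the embeddings and ratings below are random stand-ins, and closed-form ridge regression is just one reasonable choice of model:

```python
import numpy as np

# Sketch: fit ridge regression w = (X^T X + lam*I)^-1 X^T y, where rows of X
# are word/sign semantic vectors and y holds iconicity ratings.
# All data here is synthetic, for illustration only.

def fit_ridge(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # 200 "words" x 10-dim vectors
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)  # synthetic iconicity ratings

w = fit_ridge(X, y, lam=0.1)
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]               # high on this low-noise toy data
```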


2015, Vol. 31(3), pp. 375–388
Author(s): Allison I. Hilger, Torrey M. J. Loucks, David Quinto-Pozos, Matthew W. G. Dye

A study was conducted to examine production variability in American Sign Language (ASL) in order to gain insight into the development of motor control in a language produced in a different modality. Production variability was characterized through the spatiotemporal index (STI), which represents production stability in whole utterances and is a function of variability in effector displacement waveforms (Smith et al., 1995). A motion capture apparatus was used to acquire wrist displacement data across a set of eight target signs embedded in carrier phrases. The STI values of Deaf signers and of hearing learners at three different ASL experience levels were compared to determine whether production stability varied as a function of time spent acquiring ASL. We hypothesized that lower production stability, as indexed by the STI, would be evident for beginning ASL learners, indicating greater production variability, with variability decreasing as ASL experience increased. As predicted, Deaf signers showed significantly lower STI values than the hearing learners, suggesting that production stability is indeed characteristic of increased ASL use. The linear trend across the experience levels of the hearing learners did not reach statistical significance in all spatial dimensions, indicating that improvement in production stability over relatively short time scales was weak. This novel approach to characterizing production stability in ASL utterances has relevance for identifying sign production disorders and for assessing L2 acquisition of sign languages.
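The spatiotemporal index can be sketched as follows: each trial's displacement record is linearly time-normalized onto a common axis and z-scored in amplitude, then the standard deviations across trials are summed at evenly spaced points. This follows the Smith et al. (1995) recipe in outline only; details such as the number of sample points are assumptions here:

```python
import numpy as np

def spatiotemporal_index(trials, n_points=50):
    """STI sketch: time-normalize each trial to n_points samples, z-score
    its amplitude, then sum the across-trial standard deviations."""
    norm = []
    for y in trials:
        y = np.asarray(y, dtype=float)
        # Linear time normalization onto a shared [0, 1] axis.
        t_old = np.linspace(0.0, 1.0, len(y))
        t_new = np.linspace(0.0, 1.0, n_points)
        y = np.interp(t_new, t_old, y)
        # Amplitude normalization (z-score).
        y = (y - y.mean()) / y.std()
        norm.append(y)
    norm = np.stack(norm)
    # 0 for perfectly repeatable movement; larger = more variable.
    return float(np.sum(norm.std(axis=0)))

# Identical repetitions give an STI of 0; noisy repetitions give a larger STI.
t = np.linspace(0, 1, 120)
stable = [np.sin(2 * np.pi * t) for _ in range(8)]
rng = np.random.default_rng(1)
noisy = [np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size) for _ in range(8)]
```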


2009, Vol. 21(2), pp. 193–231
Author(s): Adam Schembri, David McKee, Rachel McKee, Sara Pivac, Trevor Johnston, et al.

Abstract: In this study, we consider variation in a class of signs in Australian and New Zealand Sign Languages that includes the signs THINK, NAME, and CLEVER. In their citation form, these signs are specified for a place of articulation at or near the signer's forehead or above, but they are sometimes produced at lower locations. An analysis of 2,667 tokens collected from 205 deaf signers at five sites across Australia, and of 2,096 tokens collected from 138 deaf signers from three regions in New Zealand, indicates that location variation in these signs reflects both linguistic and social factors, as has also been reported for American Sign Language (Lucas, Bayley, & Valli, 2001). Despite the similarities, however, we find that some of the particular factors at work, and the kinds of influence they have, appear to differ across these three signed languages. Moreover, our results suggest that lexical frequency may also play a role.


A Gesture Vocalizer is a small- or large-scale system that provides a way for people who cannot speak to communicate easily. This paper describes a technique, the Finger Gesture Vocalizer, in which sensors are attached to a glove, above the fingers of the person who wants to communicate. The sensors are arranged on the glove so that they capture the movements of the fingers, and the change in the sensors' resistance identifies what the person wants to say. The message is displayed on an LCD and is also converted to audio using the APR33A3 audio processing unit. Standard sign languages, such as American Sign Language, can be employed while wearing these gloves.
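The core mapping from flex-sensor readings to messages can be sketched as a simple threshold classifier. The threshold, pin order, and gesture table below are invented for illustration; the paper's actual calibration and message set are not specified:

```python
# Sketch of the glove's gesture-to-message logic: each flex sensor's ADC
# reading is thresholded to bent/straight, and the resulting finger pattern
# is looked up in a message table. All constants here are illustrative.

BENT_THRESHOLD = 512   # assumed 10-bit ADC; a bent finger raises the reading

GESTURES = {
    (1, 1, 1, 1, 1): "Hello",
    (0, 1, 1, 0, 0): "I need water",
    (1, 0, 0, 0, 0): "Yes",
    (0, 0, 0, 0, 1): "No",
}

def classify(adc_readings):
    """Map five ADC readings (thumb..pinky) to a message, or None."""
    pattern = tuple(int(r > BENT_THRESHOLD) for r in adc_readings)
    return GESTURES.get(pattern)

msg = classify([900, 880, 910, 870, 860])   # all fingers bent -> "Hello"
```

In the described system, the recognized message would then be sent both to the LCD and to the APR33A3 unit for audio playback.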


Languages, 2019, Vol. 4(4), p. 80
Author(s): Jill P. Morford, Barbara Shaffer, Naomi Shin, Paul Twitchell, Bettie T. Petersen

American Sign Language (ASL) makes extensive use of pointing signs, but there has been only limited documentation of how pointing signs are used for demonstrative functions. We elicited demonstratives from four adult Deaf signers of ASL in a puzzle completion task. Our preliminary analysis of the demonstratives produced by these signers supports three important conclusions in need of further investigation. First, despite descriptions of four demonstrative signs in the literature, participants expressed demonstrative function 95% of the time through pointing signs. Second, proximal and distal demonstrative referents were not distinguished categorically on the basis of different demonstrative signs, nor on the basis of pointing handshape or trajectory. Third, non-manual features including eye gaze and facial markers were essential to assigning meaning to demonstratives. Our results identify new avenues for investigation of demonstratives in ASL.


2019, Vol. 9(6), p. 148
Author(s): Brittany Lee, Gabriela Meade, Katherine J. Midgley, Phillip J. Holcomb, Karen Emmorey

Event-related potentials (ERPs) were used to investigate co-activation of English words during recognition of American Sign Language (ASL) signs. Deaf and hearing signers viewed pairs of ASL signs and judged their semantic relatedness. Half of the semantically unrelated signs had English translations that shared an orthographic and phonological rime (e.g., BAR–STAR) and half did not (e.g., NURSE–STAR). Classic N400 and behavioral semantic priming effects were observed in both groups. For hearing signers, targets in sign pairs with English rime translations elicited a smaller N400 compared to targets in pairs with unrelated English translations. In contrast, a reversed N400 effect was observed for deaf signers: target signs in English rime translation pairs elicited a larger N400 compared to targets in pairs with unrelated English translations. This reversed effect was overtaken by a later, more typical ERP priming effect for deaf signers who were aware of the manipulation. These findings provide evidence that implicit language co-activation in bimodal bilinguals is bidirectional. However, the distinct pattern of effects in deaf and hearing signers suggests that it may be modulated by differences in language proficiency and dominance as well as by asymmetric reliance on orthographic versus phonological representations.


2016, Vol. 49(2), pp. 784–801
Author(s): Naomi K. Caselli, Zed Sevcikova Sehyr, Ariel M. Cohen-Goldberg, Karen Emmorey
