Creating Sign Language Web Forms

2021 ◽  
Author(s):  
Norma-Jane E. Thompson

Currently, the World Wide Web allows web pages to be produced in most written languages. Many deaf people, however, use a visual-spatial language with no written equivalent (e.g. American Sign Language). SignLink Studio, a software tool for designing sign language web pages, allows for hyperlinking within video clips so that sign-language-only web pages can be created. However, this tool does not allow for other interactive elements such as online forms. In this thesis, a model for an online sign language form is proposed and evaluated. A study of 22 participants was conducted to examine whether there were differences in performance or preferences between sign language forms and text forms, and between two presentation styles (all-at-once versus one-at-a-time). The results showed no clear performance advantage between sign language and text; however, participants were interested in having online questions presented in sign language. There were also no performance or preference advantages between the two presentation styles.


2013 ◽  
Vol 25 (4) ◽  
pp. 517-533 ◽  
Author(s):  
Karen Emmorey ◽  
Stephen McCullough ◽  
Sonya Mehta ◽  
Laura L. B. Ponto ◽  
Thomas J. Grabowski

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.


2021 ◽  
Author(s):  
Ellen S. Hibbard

This thesis presents a framework representing research conducted to examine the impact of website-based online video technology on Deaf people, their culture, and their communication. This technology enables asynchronous communication in American Sign Language (ASL), called vlogging, for Deaf people. The thesis provides new insights and implications for Deaf culture and communication as a result of studying the practices, opinions and attitudes surrounding vlogging. Typical asynchronous communication media such as blogs, books, e-mails, or movies have depended on spoken language or text, not incorporating sign language content. Online video and website technologies make it possible for Deaf people to share signed content through video blogs (vlogs), and to have a permanent record of that content. Signed content is typically 3-D, shared during face-to-face gatherings, and ephemeral in nature. Websites are typically textual, and video display is 2-D, placing constraints on the spatial modulation required for ASL communication. There have been few academic studies to date examining signed asynchronous communication by Deaf people and its implications for Deaf culture and communication. In this research, 130 vlogs by Deaf vloggers on the mainstream website YouTube and the specialized website Deafvideo.TV were examined to discover strategies employed by Deaf users in response to the technology’s spatial limitations, and to explore similarities and differences between the two websites. Semi-structured interviews were conducted with 26 Deaf people as a follow-up. The main findings include a difference in register of vlogging formality depending on website type: informal on Deafvideo.TV and formal on YouTube. In addition, vlogs exhibited flaming behaviour, and unexpected findings included a lack of ASL literature and the use of technical elements that obscured ASL content in vlogs.
Questions regarding the space changes and narrative elements observed have arisen, providing avenues for additional research. This study and further research could lead to a fuller understanding of the impact of vlogging and vlogging technology on Deaf culture, and identify potential improvements or new services that could be offered.


Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginal community, may have severe problems in communicating with hearing people. They often struggle even with tasks that are simple for hearing people, such as understanding written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs or hand gestures. A gesture in a sign language equals a word in a written language. Similarly, a sentence in a written language equals a sequence of gestures in a sign language. In the distant past deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf, and the first promoter of sign language, was Charles Michel De L’Epée (1712-1789) in France. He founded the first public school for deaf people. His teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as the case study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions which accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and obscure words, sign languages employ finger spelling. Since the majority of signing uses full words, signed conversation can proceed at the same pace as spoken conversation.


Languages ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 90
Author(s):  
Kim B. Kurz ◽  
Kellie Mullaney ◽  
Corrine Occhino

Constructed action is a cover term used in signed language linguistics to describe multi-functional constructions which encode perspective-taking and viewpoint. Within constructed action, viewpoint constructions serve to create discourse coherence by allowing signers to share perspectives and psychological states. Character, observer, and blended viewpoint constructions have been well documented in signed language literature in Deaf signers. However, little is known about hearing second language learners’ use of constructed action or about the acquisition and use of viewpoint constructions. We investigate the acquisition of viewpoint constructions in 11 college students acquiring American Sign Language (ASL) as a second language in a second modality (M2L2). Participants viewed video clips from the cartoon Canary Row and were asked to “retell the story as if you were telling it to a deaf friend”. We analyzed the signed narratives for time spent in character, observer, and blended viewpoints. Our results show that despite predictions of an overall increase in use of all types of viewpoint constructions, students varied in their time spent in observer and character viewpoints, while blended viewpoint was rarely observed. We frame our preliminary findings within the context of M2L2 learning, briefly discussing how gestural strategies used in multimodal speech-gesture constructions may influence learning trajectories.


2002 ◽  
Vol 24 (3) ◽  
pp. 497-497
Author(s):  
Christine Monikowski

In this volume of the Sociolinguistics in Deaf Communities series, Metzger has edited 11 diverse topics addressing two themes: the perception of Deaf people and Deaf communities, and bilingualism. Deaf people's perception of themselves and their community is explored by authors who discuss an excellent array of topics, ranging from “miracle cures” for Deaf children in Mexico to the nature of name signs in the New Zealand Deaf community; from the linguistic rights of Deaf people in the European Union to a search for the roots of the Nicaraguan Deaf community; from a semiotic analysis of Argentine Sign Language to an analysis of how a Deaf child (American Sign Language) and his hearing family (English) make sense of each other's world views.


2019 ◽  
Vol 57 (15) ◽  
pp. 242-251
Author(s):  
Dominika Wiśniewska

Ethical and methodologically correct diagnosis of a hearing child of Deaf parents requires a specialist with extensive knowledge. In every society there are people who use a visual-spatial language – deaf people. They are perceived by the majority as disabled people, less frequently as a cultural minority. The attitude adopted towards deafness determines the context of the psychologist’s assessment. Diagnosis in such a specific situation should be viewed from the perspective of the hearing child as a bi-cultural person – a descendant of Deaf parents who represent Deaf culture – with the psychologist representing the cultural majority of hearing people.


