Urban and rural sign language in India

1991 ◽  
Vol 20 (1) ◽  
pp. 37-57 ◽  
Author(s):  
Jill Jepson

Abstract: A comparison is presented of Indian urban and rural sign languages of the deaf. The structures of both languages are designed for efficient communication but have developed differently in response to different sociolinguistic environments. The urban form transmits information primarily by means of appeal to a shared linguistic code; the rural form mainly by appeal to communal nonlinguistic knowledge. Both languages employ effective and appropriate means given their environments. The relationship between language usage and structure is explored. (Sign language, deafness, India)

1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have comparable prosodic systems to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2020 ◽  
Vol LXXXI (3) ◽  
pp. 165-174
Author(s):  
Justyna Kotowicz

Research to date indicates a relationship between reading skills and sign language competences in d/Deaf people. These data, however, apply only to sign languages that have undergone extensive scientific analysis (e.g. American Sign Language). Currently, there are no scientific reports in Poland regarding competences in sign language and in reading among d/Deaf students. For this reason, the present study analyses the relationship between Polish Sign Language (PSL) and comprehension of text read in written Polish. The study involved 52 d/Deaf students in grades I-VI with prelingual hearing loss of a severe or profound degree, attending special primary schools for deaf children and adolescents. Competences in PSL were measured using the Polish Sign Language Grammar Comprehension Test, and reading comprehension was tested using the Reading Test by Maria Grzywak-Kaczyńska. Hierarchical multiple regression analysis showed that competence in PSL is a variable explaining the level of reading comprehension (age was the first explanatory variable in the model). It has therefore been demonstrated that competences in PSL are relevant to learning to read Polish among d/Deaf students. The results are important for the practice of deaf education: they draw attention to the need to improve competences in sign language and to use sign language in the process of learning to read and developing this skill.
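The hierarchical regression reported above can be illustrated with a minimal sketch: age is entered first, PSL grammar comprehension is added second, and the gain in explained variance is inspected. The column and file names here are hypothetical placeholders, not taken from the study.

```python
# Minimal sketch of a hierarchical (two-step) regression, assuming
# hypothetical column names; this is not the authors' analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("psl_reading.csv")  # hypothetical data file

# Step 1: age entered as the first explanatory variable.
step1 = smf.ols("reading_score ~ age", data=df).fit()

# Step 2: PSL grammar comprehension added to the model.
step2 = smf.ols("reading_score ~ age + psl_score", data=df).fit()

# The increase in R-squared indicates how much variance in reading
# comprehension PSL competence explains beyond age alone.
print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f}")
print(f"Delta R2:  {step2.rsquared - step1.rsquared:.3f}")
```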


Author(s):  
Ronice Müller de Quadros

This chapter argues for specific actions needed for language planning and language policies involving sign languages and Deaf communities, based on the understanding of what sign languages are, who the signers are, where they sign, and the sign language transmission and maintenance mechanisms of the Deaf community. The first section presents an overview of sign languages and their users, highlighting that sign languages are often used in contexts where most people use spoken languages. The second section addresses the functions, roles, and status of sign languages in relation to spoken languages, as well as the relationship between Deaf communities and hearing society. The medical view of deafness, which has a significant impact on language policies for Deaf people, is critically considered. The third section offers examples of language policies, especially related to the use of sign languages in education, and an agenda for future work on sign language policy and planning.


Gesture ◽  
2012 ◽  
Vol 12 (3) ◽  
pp. 265-307 ◽  
Author(s):  
Wendy Sandler

Sign languages make use of the two hands, facial features, the head, and the body to produce multifaceted gestures that are dedicated to linguistic functions. In a newly emerging sign language — Al-Sayyid Bedouin Sign Language — the appearance of dedicated gestures in signers of four age groups or strata reveals that recruitment of gesture for language is a gradual process. Starting with only the hands in Stratum I, each additional articulator is recruited to perform grammatical functions as the language matures, resulting in ever increasing grammatical complexity. The emergence of dedicated gesture in a new language provides a novel context for addressing questions about the relationship between the physical transmission system and grammar and about the emergence of linguistic complexity in human language generally.


2021 ◽  
pp. 1-30
Author(s):  
ANITA SLONIMSKA ◽  
ASLI ÖZYÜREK ◽  
OLGA CAPIRCI

Abstract: Meanings communicated with depictions constitute an integral part of how speakers and signers actually use language (Clark, 2016). Recent studies have argued that, in sign languages, a depicting strategy such as constructed action (CA), in which a signer enacts the referent, is used for referential purposes in narratives. Here, we tested the referential function of CA in a more controlled experimental setting and outside a narrative context. Given the iconic properties of CA, we hypothesized that this strategy could be used for efficient information transmission. Thus, we asked whether the use of CA increased as the amount of information to be communicated increased. Twenty-three deaf signers of Italian Sign Language (LIS) described unconnected images, which varied in the amount of information represented, to another player in a director–matcher game. Results revealed that participants used CA to communicate core information about the images and also increased their use of CA as the images became informatively denser. The findings show that, outside a narrative context, the iconic features of CA can serve a referential function in addition to a depictive one, and can be used to achieve communicative efficiency.


Author(s):  
Deanna L. Gagne ◽  
Marie Coppola

Literacy in Deaf communities has been redefined to include knowledge and skill in the production and comprehension of sign language as well as in the written form of the larger community's spoken language. However, this reconceptualization has occurred primarily in communities with well-established sign languages. This chapter considers this type of literacy in emerging sign language contexts, where social, political, and financial resources are often scarce. The chapter presents as a case study the community of signers of Nicaraguan Sign Language (NSL), a sign language that emerged just over 40 years ago, and explores the educational, cultural, and social evolution of NSL. Within this context, findings are presented that speak to the relationship between language, cognitive development, and academic success particular to sign literacy. These findings are presented in the context of other emerging languages in both urban and rural/village settings.


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Abstract Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
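For readers less familiar with Optimality Theory, a schematic tableau illustrates how ranked violable constraints select among competing forms. The constraints and candidates below are hypothetical stand-ins chosen only to show the mechanics; they are not the constraints or rankings proposed in the paper.

```latex
% Schematic OT tableau, for illustration only: constraint names and
% candidates are hypothetical. Under the ranking *MultiVerb >> BeRedundant,
% candidate (a) wins; a language with the reverse ranking would instead
% select candidate (b), yielding the kind of crosslinguistic variation
% the paper describes.
\begin{tabular}{c l | c | c}
\multicolumn{2}{l|}{Input: predicate with \{agency, plural number\}}
  & *\textsc{MultiVerb} & \textsc{BeRedundant} \\ \hline
$\Rightarrow$ & a.\ single verb marking agency and number  &     & *  \\
              & b.\ multi-verb predicate, features spread  & *!  &    \\
\end{tabular}
```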


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Gustaf Halvardsson ◽  
Johanna Peterson ◽  
César Soto-Valero ◽  
Benoit Baudry

Abstract: The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model builds on a pre-trained InceptionV3 network and uses the mini-batch gradient descent optimization algorithm. We rely on transfer learning, reusing the pre-trained model and its training data. The final accuracy of the model, based on 8 study subjects and 9400 images, is 85%. Our results indicate that the use of CNNs is a promising approach to interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model as a user-friendly web application for interpreting signs.
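A minimal sketch of the transfer-learning setup the abstract describes (a pre-trained InceptionV3 backbone, a new classification head, and mini-batch SGD), written in Keras. The class count, hyperparameters, and data directory are assumptions for illustration, not details taken from the paper.

```python
# Transfer learning with a pre-trained InceptionV3 backbone; values marked
# "assumed" are illustrative guesses, not the paper's actual settings.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumed size of the SSL hand-alphabet label set
IMG_SIZE = (299, 299)     # InceptionV3's expected input resolution

# Load InceptionV3 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # keep the transferred features frozen

# Add a small classification head for the hand-alphabet classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Mini-batch gradient descent, as described in the abstract.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labelled hand-alphabet images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ssl_hand_alphabet/", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=10)
```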


Author(s):  
Marion Kaczmarek ◽  
Michael Filhol

Abstract: Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software. Such software is meant to ease the translator's task. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we based our study on the practices and needs of professional Sign Language translators. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals to gather both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist these tasks, how existing tools could be adapted to Sign Language, and what would need to be added to meet the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.


2021 ◽  
Vol 14 (2) ◽  
pp. 1-45
Author(s):  
Danielle Bragg ◽  
Naomi Caselli ◽  
Julie A. Hochgesang ◽  
Matt Huenerfauth ◽  
Leah Katz-Hernandez ◽  
...  

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex and involve individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf cultural identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors' experiences living, working, and studying in this space.

