Sign Language Prosody

Author(s):  
Wendy Sandler ◽  
Diane Lillo-Martin ◽  
Svetlana Dachkovsky ◽  
Ronice Müller de Quadros

Sign languages are unlike spoken languages because they are produced by a wide range of visibly perceivable articulators: the hands, the face, the head, and the body. There is as yet no consensus on the division of labour between these articulators and the linguistic elements or subsystems that they subserve. For example, certain systematic facial expressions in sign languages have been argued to be the realization of syntactic structure by some researchers and of information structure, and thus prosodic in nature, by others. This chapter brings evidence from three unrelated sign languages for the latter claim. It shows that certain non-manual markers are best understood as representing pragmatic notions related to information structure, such as accessibility, contingency, and focus, and are thus part of the prosodic system in sign languages generally. The data and argumentation serve to sharpen the distinction between prosody and syntax in language generally.

Phonology ◽  
2013 ◽  
Vol 30 (2) ◽  
pp. 211-252 ◽  
Author(s):  
Svetlana Dachkovsky ◽  
Christina Healy ◽  
Wendy Sandler

In a detailed comparison of the intonational systems of two unrelated languages, Israeli Sign Language and American Sign Language, we show certain similarities as well as differences in the distribution of several articulations of different parts of the face and motions of the head. Differences between the two languages are explained on the basis of pragmatic notions related to information structure, such as accessibility and contingency, providing novel evidence that the system is inherently intonational, and only indirectly related to syntax. The study also identifies specific ways in which the physical modality in which language is expressed influences intonational structure.


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Tommi Jantunen

Abstract This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL). Constructed action is a form of gestural enactment in which signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, this article shows firstly that when FinSL signers are narrating a story, there are differences in how they use constructed action. Then the paper argues that there are also differences in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or non-presence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.


2018 ◽  
Vol 8 (2) ◽  
pp. 10 ◽  
Author(s):  
Alev Girli ◽  
Sıla Doğmaz

In this study, children with learning disability (LD) were compared with children with autism spectrum disorder (ASD) in terms of identifying emotions from photographs with certain face and body expressions. The sample consisted of a total of 82 children aged 7-19 years living in Izmir, Turkey. A total of 6 separate sets of slides, consisting of black and white photographs, were used to assess participants' ability to identify feelings – 3 sets for facial expressions, and 3 sets for body language. There were 20 photographs on the face slides and 38 photographs on the body language slides. The results of the nonparametric Mann-Whitney U test showed no significant difference between the total scores that children received from each of the face and body language slide sets. It was observed that the children with LD usually looked at the whole photo, while the children with ASD focused especially around the mouth to describe feelings. The results were discussed in the context of the literature, and suggestions were presented.
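For readers who want to reproduce this kind of group comparison, the following is a minimal sketch of a Mann-Whitney U test in Python with SciPy, assuming two arrays of total slide-set scores (one per group); the values shown are purely illustrative and are not the study's data.

from scipy.stats import mannwhitneyu

ld_scores = [14, 16, 12, 15, 13]    # hypothetical totals for the LD group
asd_scores = [13, 15, 14, 12, 16]   # hypothetical totals for the ASD group

# Two-sided test: is there a significant difference between the groups?
stat, p_value = mannwhitneyu(ld_scores, asd_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")  # p > .05 would indicate no significant difference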


2017 ◽  
Vol 20 (1) ◽  
pp. 109-128 ◽  
Author(s):  
Ana Mineiro ◽  
Patrícia Carmo ◽  
Cristina Caroça ◽  
Mara Moita ◽  
Sara Carvalho ◽  
...  

Abstract In Sao Tome and Principe there are approximately five thousand deaf and hard-of-hearing individuals. Until recently, these people had no language to use among them other than basic home signs used only to communicate with their families. With this communication gap in mind, a project was set up to help them come together in a common space in order to create a dedicated environment for a common sign language to emerge. In less than two years, the first cohort began to sign and to develop a newly emerging sign language – the Sao Tome and Principe Sign Language (LGSTP). Signs were elicited by means of drawings and pictures and recorded from the beginning of the project. The emergent structures of signs in this new language were compared with those reported for other emergent sign languages such as the Al-Sayyid Bedouin Sign Language and the Lengua de Señas de Nicaragua, and several similarities were found at the first stage. In this preliminary study on the emergence of LGSTP, it was observed that, in its first stage, signs are mostly iconic and exhibit a greater involvement of the articulators and a larger signing space when compared with subsequent stages of LGSTP emergence and with other sign languages. Although holistic signs are the prevalent structure, compounding seems to be emerging. At this stage of emergence, OSV seems to be the predominant syntactic structure of LGSTP. Yet the data suggest that new signers exhibit difficulties in syntactic constructions with two arguments.


2020 ◽  
Vol 10 (2) ◽  
pp. 127-157
Author(s):  
Carla L. Hudson Kam ◽  
Oksana Tkachman

Abstract The iconic potential of sign languages suggests that the establishment of a conventionalized set of form-meaning pairings should be relatively easy. However, even an iconic form has to be interpreted correctly for it to conventionalize. In sign languages, spatial modulations are used to indicate real spatial relationships (locative) and grammatical relations. The former is a more-or-less direct representation of how things are situated with respect to each other. Grammatical space, in contrast, is more abstract. As such, the former would seem to be more interpretable than the latter and so, on the face of it, should be more likely to conventionalize in a new sign language. But in at least one emerging sign language the grammatical use of space is conventionalizing first. We argue that this is because the grammatical use of space is easier to understand correctly, drawing on data from four experiments investigating hearing non-signers' interpretation of spatially modulated gestures.


2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as with sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, which disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.


2016 ◽  
Vol 9 (4) ◽  
pp. 573-602 ◽  
Author(s):  
SO-ONE HWANG ◽  
NOZOMI TOMITA ◽  
HOPE MORGAN ◽  
RABIA ERGIN ◽  
DENIZ İLKBAŞARAN ◽  
...  

Abstract This paper examines how gesturers and signers use their bodies to express concepts such as instrumentality and humanness. Comparing across eight sign languages (American, Japanese, German, Israeli, and Kenyan Sign Languages, Ha Noi Sign Language of Vietnam, Central Taurus Sign Language of Turkey, and Al-Sayyid Bedouin Sign Language of Israel) and the gestures of American non-signers, we find recurring patterns for naming entities in three semantic categories (tools, animals, and fruits & vegetables). These recurring patterns are captured in a classification system that identifies iconic strategies based on how the body is used together with the hands. Across all groups, tools are named with manipulation forms, where the head and torso represent those of a human agent. Animals tend to be identified with personification forms, where the body serves as a map for a comparable non-human body. Fruits & vegetables tend to be identified with object forms, where the hands act independently from the rest of the body to represent static features of the referent. We argue that these iconic patterns are rooted in using the body for communication, and provide a basis for understanding how meaningful communication emerges quickly in gesture and persists in emergent and established sign languages.


Author(s):  
Mikhail G. Grif ◽  
R. Elakkiya ◽  
Alexey L. Prikhodko ◽  
Maxim A. Bakaev ◽ 
...  

In this paper, we consider recognition of sign languages (SL), with a particular focus on Russian and Indian SLs. The proposed recognition system includes five components: configuration, orientation, localization, movement and non-manual markers. The analysis covers methods for recognizing both individual gestures and continuous sign speech in Indian Sign Language and Russian Sign Language (RSL). To recognize individual gestures, the RSL Dataset was developed, which includes more than 35,000 files for over 1000 signs. Each sign was performed with 5 repetitions by at least 5 deaf native speakers of Russian Sign Language from Siberia. To isolate epenthesis in continuous RSL, 312 sentences with 5 repetitions were selected and recorded on video. Five types of movement were distinguished, namely "No gesture", "There is a gesture", "Initial movement", "Transitional movement" and "Final movement". The markup of sentences for highlighting epenthesis was carried out on the Supervisely platform. A recurrent network architecture (LSTM) was built and implemented using the TensorFlow Keras machine learning library. The accuracy of correct recognition of epenthesis was 95%. Work on a similar dataset for recognizing both individual gestures and continuous Indian Sign Language (ISL) is ongoing. To recognize hand gestures, the MediaPipe Holistic library module was used. It contains a group of trained neural network algorithms that extract the coordinates of key points of a person's body, palms and face from an image. An accuracy of 85% was achieved on the verification data. In the future, it will be necessary to significantly increase the amount of labeled data. To recognize non-manual components, a number of rules have been developed for certain movements of the face. These rules cover positions of the eyes, eyelids, mouth and tongue, and head tilt.
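As a rough illustration of the pipeline described above, the sketch below extracts body, face and hand keypoints with MediaPipe Holistic and feeds the resulting sequences to a small Keras LSTM over the five movement classes. The layer sizes, feature dimensions and sequence handling are illustrative assumptions, not the authors' implementation.

import numpy as np
import cv2
import mediapipe as mp
import tensorflow as tf

MOVEMENT_CLASSES = ["No gesture", "There is a gesture", "Initial movement",
                    "Transitional movement", "Final movement"]
N_FEATURES = (33 + 468 + 21 + 21) * 3  # pose + face + two hands, (x, y, z) each

def frame_keypoints(results):
    # Flatten MediaPipe Holistic landmarks into one feature vector per frame.
    def coords(landmarks, n_points):
        if landmarks is None:
            return np.zeros(n_points * 3)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()
    return np.concatenate([
        coords(results.pose_landmarks, 33),
        coords(results.face_landmarks, 468),
        coords(results.left_hand_landmarks, 21),
        coords(results.right_hand_landmarks, 21),
    ])

def video_to_sequence(path):
    # Run Holistic over a video file and return an array of per-frame keypoints.
    frames = []
    with mp.solutions.holistic.Holistic(static_image_mode=False) as holistic:
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(frame_keypoints(results))
        cap.release()
    return np.array(frames)

# A small LSTM classifier over keypoint sequences, one movement label per sequence.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(MOVEMENT_CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

In such a setup, padded batches of sequences from video_to_sequence would be passed to model.fit together with integer movement-class labels.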


Author(s):  
Dhruv Piyush Parikh

Abstract: Our world today is driven by machines of various complexities, from a basic one like a computer to a highly complex humanoid robot; everything is a product of human intelligence. Many industries benefit from such new technologies. Facial Expression Recognition is one of these technologies; it has a wide range of applications and is an area that is constantly evolving. The analogy behind it is that when we gaze at someone, the eyes send signals to the brain; these messages carry the face patterns of that specific person, and those patterns are then compared with the ones stored in the brain's memory. Inspired by such innovations, our research collects human expressions and analyses their emotions using our vast dataset, offering strategies to change their facial expressions. Due to the competitive environment, the youth of our generation has been predisposed to a variety of mental health problems such as anxiety and depression. Our idea attempts to provide a relaxing atmosphere to a person based on his or her facial expressions. Keywords: Facial Expression, Face Recognition, Python, PyWhatkit, OpenCV.
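As a minimal sketch of the kind of pipeline the abstract outlines, the code below detects a face with OpenCV's Haar cascade, passes the crop to an expression classifier (left as a placeholder, since no model is specified), and responds to a negative emotion by playing calming content through PyWhatKit. The emotion labels and the response are illustrative assumptions, not the paper's implementation.

import cv2
import pywhatkit

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    # Return a list of (x, y, w, h) face boxes found in the frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def classify_expression(face_crop):
    # Placeholder: a trained facial-expression model would go here.
    return "neutral"

cap = cv2.VideoCapture(0)   # webcam capture
ok, frame = cap.read()
cap.release()
if ok:
    for (x, y, w, h) in detect_faces(frame):
        emotion = classify_expression(frame[y:y + h, x:x + w])
        if emotion in ("sad", "angry"):
            # Illustrative response: play relaxing music on YouTube via PyWhatKit.
            pywhatkit.playonyt("relaxing music")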

