Marking various aspects in Turkish Sign Language

Author(s): Serpil Karabüklü, Ronnie B. Wilbur

Abstract: Sign languages have been reported to have manual signs that function as perfective morphemes (Fischer & Gough 1999; Meir 1999; Rathmann 2005; Duffy 2007; Zucchi et al. 2010). Turkish Sign Language (TİD) has also been claimed to have such morphemes (Zeshan 2003; Kubuş & Rathmann 2009; Dikyuva 2011; Gökgöz 2011; Karabüklü 2016), as well as a nonmanual completive marker (‘bn’) (Dikyuva 2011). This study shows that the nonmanual ‘bn’ is in fact a perfective morpheme. We examine its compatibility with different event types and furthermore show that TİD has a manual sign BİT (‘finish’) that is indeed the completive marker, but with possibly unusual restrictions on its use. Based on their distribution, the current study distinguishes BİT and ‘bn’ as different morphemes even though they can co-occur. TİD is argued to be typologically different from other sign languages in that it has both a nonmanual marker (‘bn’) as a perfective morpheme and a manual sign (BİT) with selectional properties different from those of the manual signs reported for other sign languages.

2006, Vol 9 (1-2), pp. 133-150
Author(s): Katharina Schalber

The aim of this paper is to investigate the structure of polar (yes/no) questions and content (wh-) questions in Austrian Sign Language (ÖGS), analyzing the different nonmanual signals, the occurrence of question signs, and their syntactic position. As I will show, the marking strategies used in ÖGS are no exception to the crosslinguistic observation that interrogative constructions in sign languages employ a variety of nonmanual signals and manual signs (Zeshan 2004). In ÖGS, polar questions are marked with ‘chin down’, whereas content questions are indicated with ‘chin up’ or ‘head forward’ together with content question signs. The same nonmanual markers are reported for Croatian Sign Language, indicating a common foundation due to historical relations and intense language contact.


Linguistics, 2016, Vol 54 (6)
Author(s): Richard Bank, Onno Crasborn, Roeland van Hout

Abstract: Bimodal code-mixing in sign languages consists of simultaneously articulated manual signs and spoken language words. These “mouthings” (typically silent articulations) have been observed for many different sign languages. The present study aims to investigate the extent of such bimodal code-mixing by examining the frequency of mouthings produced by deaf users of Sign Language of the Netherlands (NGT), their co-occurrence with pointing signs, and whether any differences can be explained by sociolinguistic variables such as regional origin and age of the signer. We investigated over 10,000 mouth actions from 70 signers and found that the mouth and the hands are equally active during signing. Moreover, around 80% of all mouth actions are mouthings, while the remaining 20% are unrelated to Dutch. We found frequency differences between individual signers and a small effect for level of education, but not for other sociolinguistic variables. Our results provide evidence that mouthings form an inextricable component of signed interaction. Rather than displaying effects of competition between languages or spoken language suppression, NGT signers demonstrate the potential of the visual modality to conjoin parallel information streams.
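
As an illustration of the kind of corpus frequency analysis described above, here is a minimal sketch assuming one annotation row per mouth action; the file name and the columns (signer, education, action_type) are hypothetical stand-ins, not the study’s actual corpus format.

```python
# Sketch: proportion of mouthings among all mouth actions, overall and
# per signer, plus a breakdown by education level. The input file and
# all column names are hypothetical assumptions for illustration.
import pandas as pd

df = pd.read_csv("mouth_actions.csv")  # one row per annotated mouth action
df["is_mouthing"] = df["action_type"] == "mouthing"

# Overall share of mouthings (the study reports roughly 80%).
print(f"mouthings overall: {df['is_mouthing'].mean():.0%}")

# Per-signer rates, to inspect variation between individual signers.
per_signer = df.groupby("signer")["is_mouthing"].mean()
print(per_signer.describe())

# Mean mouthing rate by education level, the one sociolinguistic
# variable for which the study found a (small) effect.
print(df.groupby("education")["is_mouthing"].mean())
```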


2019, Vol 30 (4), pp. 655-686
Author(s): Sara Siyavoshi

Abstract: This paper presents a study of modality in Iranian Sign Language (ZEI) from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners (horseshoe mouth). In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.


2011, Vol 14 (2), pp. 248-270
Author(s): Richard Bank, Onno A. Crasborn, Roeland van Hout

Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT commonly have their origin in spoken Dutch. We conducted a corpus study to explore how frequent mouthings in fact are in NGT, whether mouthings vary within and between signs, and how frequently temporal reduction occurs in mouthings. Answers to these questions can help us classify mouthings as being specified in the sign lexicon or as being instances of code-blending. We investigated a sample of 20 frequently occurring signs. We found that each sign in the sample frequently co-occurs with a mouthing, usually that of a specific Dutch lexical item. On the other hand, signs show variation in the way they co-occur with mouthings and mouth gestures. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilized in sign languages.


2016, Vol 39 (2), pp. 391-407
Author(s): Carl Börstell, Ryan Lepic, Gal Belsitzman

Sign languages make use of paired articulators (the two hands), hence manual signs may be either one- or two-handed. Although two-handedness has previously been regarded as a purely formal feature, studies have argued that morphologically two-handed forms are associated with some types of inflectional plurality. Moreover, recent studies across sign languages have demonstrated that even lexically two-handed signs share certain semantic properties. In this study, we investigate lexically plural concepts in ten different sign languages, distributed across five sign language families, and demonstrate that such concepts are preferentially represented with two-handed forms across all the languages in our sample. We argue that this is because the signed modality, with its paired articulators, enables the languages to iconically represent conceptually plural meanings.


2020, Vol 37 (4), pp. 571-608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Abstract: Two differences between signed and spoken languages that have been widely discussed in the literature are the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses: one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
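
Because the account rests on ranked, violable constraints, a toy Optimality Theory evaluator may make the mechanism concrete. This is only a sketch under invented assumptions: the BE-REDUNDANT and ECONOMY constraints and the candidate encodings are illustrative placeholders echoing the abstract’s ‘constraint to be redundant’, not the authors’ actual constraint set.

```python
# Toy Optimality Theory evaluation: each candidate is scored against a
# ranked list of violable constraints, and the winner is the candidate
# with the lexicographically smallest violation profile. Constraints and
# candidates below are hypothetical placeholders, not the paper's analysis.

def evaluate(candidates, ranked_constraints):
    """Return the optimal candidate: fewest violations on the
    highest-ranked constraint on which the candidates differ."""
    return min(candidates,
               key=lambda cand: tuple(c(cand) for c in ranked_constraints))

# Candidates: a two-verb classifier predicate with agency/number marked
# on both verbs, or on only one.
mark_both = [("TAKE", "marked"), ("MOVE", "marked")]
mark_one = [("TAKE", "marked"), ("MOVE", "unmarked")]

def be_redundant(cand):
    """One violation per unmarked verb; favors repeating the marking."""
    return sum(1 for _, m in cand if m == "unmarked")

def economy(cand):
    """One violation per marked verb beyond the first; penalizes
    redundant marking."""
    return max(0, sum(1 for _, m in cand if m == "marked") - 1)

# Reranking the same constraints flips the winner: this is how variation
# across languages can be derived from a shared constraint set.
print(evaluate([mark_both, mark_one], [be_redundant, economy]))  # mark_both
print(evaluate([mark_both, mark_one], [economy, be_redundant]))  # mark_one
```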


2021, Vol 2 (3)
Author(s): Gustaf Halvardsson, Johanna Peterson, César Soto-Valero, Benoit Baudry

Abstract: The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model consists of a pre-trained InceptionV3 network trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning to reuse the pre-trained model’s weights for our data. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details that let the model interpret signs through a user-friendly web application.
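
A minimal sketch of the transfer-learning setup the abstract describes, written with the Keras InceptionV3 application: the classification head, class count, hyperparameters, and dataset path are illustrative assumptions rather than the authors’ exact configuration.

```python
# Transfer-learning sketch: a frozen InceptionV3 base (pre-trained weights)
# with a small new classification head for hand-alphabet signs, trained by
# mini-batch SGD. Class count, head sizes, hyperparameters, and the dataset
# directory are all assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 29  # assumed: one class per Swedish hand-alphabet sign

# Pre-trained base with its classification top removed; freeze it so
# only the new head is trained (the transfer-learning step).
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Plain SGD over mini-batches corresponds to the mini-batch gradient
# descent optimization the abstract mentions.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of labeled sign images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ssl_hand_alphabet/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=10)
```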


Author(s): Marion Kaczmarek, Michael Filhol

Abstract: Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software, which is meant to ease translators’ tasks. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we base our study on professional Sign Language translators’ practices and needs. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals for both objective and subjective data, we build a list of tasks and determine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist those tasks, how the existing tools could be adapted to Sign Language, and what would need to be added to fit the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.


2021, Vol 14 (2), pp. 1-45
Author(s): Danielle Bragg, Naomi Caselli, Julie A. Hochgesang, Matt Huenerfauth, Leah Katz-Hernandez, ...

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which are highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex, involving individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to Deaf cultural identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors’ experiences living, working, and studying in this space.

