Iconicity and interpretability in language emergence

2020 ◽  
Vol 10 (2) ◽  
pp. 127-157
Author(s):  
Carla L. Hudson Kam ◽  
Oksana Tkachman

Abstract The iconic potential of sign languages suggests that the establishment of a conventionalized set of form-meaning pairings should be relatively easy. However, even an iconic form has to be interpreted correctly for it to conventionalize. In sign languages, spatial modulations are used to indicate real spatial relationships (locative) and grammatical relations. The former is a more-or-less direct representation of how things are situated with respect to each other. Grammatical space, in contrast, is more abstract. As such, the former would seem to be more interpretable than the latter, and so on the face of it, should be more likely to conventionalize in a new sign language. But in at least one emerging sign language the grammatical use of space is conventionalizing first. We argue that this is due to the grammatical use of space being easier to understand correctly, using data from four experiments investigating hearing non-signers' interpretation of spatially modulated gestures.

2019 ◽  
Vol 30 (4) ◽  
pp. 655-686 ◽  
Author(s):  
Sara Siyavoshi

Abstract This paper presents a study of modality in Iranian Sign Language (ZEI) from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners (horseshoe mouth). In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.


2015 ◽  
Vol 1 (1) ◽  
Author(s):  
Kearsy Cormier ◽  
Jordan Fenlon ◽  
Adam Schembri

Abstract Sign languages have traditionally been described as having a distinction between (1) arbitrary (referential or syntactic) space, considered to be a purely grammatical use of space in which locations arbitrarily represent concrete or abstract subject and/or object arguments using pronouns or indicating verbs, for example, and (2) motivated (topographic or surrogate) space, involving mapping of locations of concrete referents onto the signing space via classifier constructions. Some linguists have suggested that it may be misleading to see the two uses of space as being completely distinct from one another. In this study, we use conversational data from the British Sign Language Corpus (www.bslcorpusproject.org) to look at the use of space with modified indicating verbs – specifically the directions in which these verbs are used as well as the co-occurrence of eyegaze shifts and constructed action. Our findings suggest that indicating verbs are frequently produced in conditions that use space in a motivated way and are rarely modified using arbitrary space. This contrasts with previous claims that indicating verbs in BSL prototypically use arbitrary space. We discuss the implications of this for theories about grammaticalisation and the role of gesture in sign languages and for sign language teaching.


Author(s):  
Asha Sato ◽  
Simon Kirby ◽  
Molly Flaherty

Research on emergent sign languages suggests that younger sign languages may make greater use of the z-axis, moving outwards from the body, than more established sign languages when describing the relationships between participants and events (Padden, Meir, Aronoff, and Sandler, 2010). This has been suggested to reflect a transition from iconicity rooted in the body (Meir, Padden, Aronoff, and Sandler, 2007) towards a more abstract schematic iconicity. We present the results of an experimental investigation into the use of axis by signers of Nicaraguan Sign Language (NSL). We analysed 1074 verb tokens elicited from NSL signers who entered the signing community at different points in time between 1974 and 2003. We used depth and motion tracking technology to quantify the position of signers’ wrists over time, allowing us to build an automated and continuous measure of axis use. We also consider axis use from two perspectives: a camera-centric perspective and a signer-centric perspective. In contrast to earlier work, we do not observe a trend towards increasing use of the x-axis. Instead we find that signers appear to have an overall preference for the z-axis. However, this preference is only observed from the camera-centric perspective. When measured relative to the body, signers appear to be making approximately equal use of both axes, suggesting the preference for the z-axis is largely driven by signers moving their bodies (and not just their hands) along the z-axis. We argue from this finding that language emergence patterns are not necessarily universal and that use of the x-axis may not be a prerequisite for the establishment of a spatial grammar.
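The camera-centric versus signer-centric distinction can be made concrete with a small sketch. The following code, a minimal illustration rather than the authors' actual pipeline, assumes wrist and torso coordinates are already available from depth/motion tracking; the array names and the displacement-based measure of axis use are assumptions introduced here for exposition.

```python
# Sketch: summarizing wrist movement along the x- and z-axes in two
# reference frames (camera-centric vs. signer-centric). Illustrative only.
import numpy as np

def axis_use(wrist_xyz, torso_xyz=None):
    """Summed frame-to-frame wrist displacement along each axis.

    wrist_xyz: (T, 3) array of wrist positions over T frames (camera coordinates).
    torso_xyz: optional (T, 3) array of torso positions; if given, wrist
               positions are expressed relative to the body (signer-centric),
               so body movement along an axis no longer counts as axis use.
    """
    pos = wrist_xyz - torso_xyz if torso_xyz is not None else wrist_xyz
    disp = np.abs(np.diff(pos, axis=0))   # per-frame movement along each axis
    x_use, _, z_use = disp.sum(axis=0)    # total movement per axis
    return {"x": float(x_use), "z": float(z_use)}

# Camera-centric vs. signer-centric measures for the same verb token:
# camera_view = axis_use(wrist)
# signer_view = axis_use(wrist, torso)
```

Comparing the two outputs for the same token shows how an apparent z-axis preference can arise from body movement alone, which is the contrast the study exploits.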


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 75-89 ◽  
Author(s):  
David MacGregor

In analyzing the use of space in American Sign Language (ASL), Liddell (2003) argues convincingly that no account of ASL can be complete without a discussion of how linguistic signs and non-linguistic gestures and gradient phenomena work together to create meaning. This represents a departure from the assumptions of much of linguistic theory, which has attempted to describe purely linguistic phenomena as part of an autonomous system. It also raises the question of whether these phenomena are peculiar to ASL and other sign languages, or if they also apply to spoken language. In this paper, I show how Liddell's approach can be applied to English data to provide a fuller explanation of how speakers create meaning. Specifically, I analyze Jack Lemmon's use of space, gesture, and voice in a scene from the movie "Mister Roberts".


Author(s):  
Wendy Sandler ◽  
Diane Lillo-Martin ◽  
Svetlana Dachkovsky ◽  
Ronice Müller de Quadros

Sign languages are unlike spoken languages because they are produced by a wide range of visibly perceivable articulators: the hands, the face, the head, and the body. There is as yet no consensus on the division of labour between these articulators and the linguistic elements or subsystems that they subserve. For example, certain systematic facial expressions in sign languages have been argued to be the realization of syntactic structure by some researchers and of information structure, and thus prosodic in nature, by others. This chapter brings evidence from three unrelated sign languages for the latter claim. It shows that certain non-manual markers are best understood as representing pragmatic notions related to information structure, such as accessibility, contingency, and focus, and are thus part of the prosodic system in sign languages generally. The data and argumentation serve to sharpen the distinction between prosody and syntax in language generally.


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Hannah Lutzenberger ◽  
Connie de Vos ◽  
Onno Crasborn ◽  
Paula Fikkert

Sign language lexicons incorporate phonological specifications. Evidence from emerging sign languages suggests that phonological structure emerges gradually in a new language. In this study, we investigate variation in the form of signs across 20 deaf adult signers of Kata Kolok, a sign language that emerged spontaneously in a Balinese village community. Combining methods previously used for sign comparisons, we introduce a new numeric measure of variation. Our nuanced yet comprehensive approach to form variation integrates three levels (iconic motivation, surface realisation, feature differences) and allows for refinement through weighting the variation score by token and signer frequency. We demonstrate that variation in the form of signs appears in different degrees at different levels. Token frequency in a given dataset greatly affects how much variation can surface, suggesting caution in interpreting previous findings. Different sign variants have different scopes of use among the signing population, with some more widely used than others. Both frequency weightings (token and signer) identify dominant sign variants, i.e., sign forms that are produced frequently or by many signers. We argue that variation does not equal the absence of conventionalisation. Indeed, especially in micro-community sign languages, variation may be key to understanding patterns of language emergence.
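The idea of weighting a variation score by token and signer frequency can be sketched as follows. This is a minimal illustration of frequency weighting for one concept, not the paper's actual metric; the function name, the variant labels, and the "share of the dominant variant" formula are assumptions introduced here.

```python
# Sketch: token- and signer-weighted variation for one concept.
# Higher scores mean more variation (the dominant variant accounts
# for a smaller share of tokens or signers). Illustrative only.
from collections import Counter

def weighted_variation(tokens):
    """tokens: list of (signer_id, variant_label) pairs for one concept."""
    token_counts = Counter(variant for _, variant in tokens)
    signer_counts = Counter()
    for signer, variant in set(tokens):      # each signer counted once per variant
        signer_counts[variant] += 1

    total_tokens = sum(token_counts.values())
    total_signers = len({signer for signer, _ in tokens})

    dominant = token_counts.most_common(1)[0][0]
    return {
        "dominant_variant": dominant,
        "token_weighted": 1 - token_counts[dominant] / total_tokens,
        "signer_weighted": 1 - signer_counts[dominant] / total_signers,
    }

# Example: three signers, two variants of one sign.
# weighted_variation([("S1", "A"), ("S1", "A"), ("S2", "A"), ("S3", "B")])
```

Running the example shows how the two weightings can diverge: variant "A" dominates by tokens (3 of 4) but less strongly by signers (2 of 3), which is the kind of asymmetry the abstract describes.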


Phonology ◽  
2013 ◽  
Vol 30 (2) ◽  
pp. 211-252 ◽  
Author(s):  
Svetlana Dachkovsky ◽  
Christina Healy ◽  
Wendy Sandler

In a detailed comparison of the intonational systems of two unrelated languages, Israeli Sign Language and American Sign Language, we show certain similarities as well as differences in the distribution of several articulations of different parts of the face and motions of the head. Differences between the two languages are explained on the basis of pragmatic notions related to information structure, such as accessibility and contingency, providing novel evidence that the system is inherently intonational, and only indirectly related to syntax. The study also identifies specific ways in which the physical modality in which language is expressed influences intonational structure.


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Abstract Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Gustaf Halvardsson ◽  
Johanna Peterson ◽  
César Soto-Valero ◽  
Benoit Baudry

Abstract The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model builds on a pre-trained InceptionV3 network and is trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning, reusing a model pre-trained on a large image dataset. The final accuracy of the model, based on 8 study subjects and 9400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of deploying our model as a user-friendly web application for interpreting signs.
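A minimal sketch of the kind of transfer-learning setup the abstract describes is given below, assuming a Keras/TensorFlow workflow. The frozen base, the classification head, the image size, and the 26-class output are illustrative assumptions, not details taken from the paper.

```python
# Sketch: InceptionV3 transfer learning with mini-batch SGD for a
# hand-alphabet classifier. Illustrative assumptions throughout.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per hand-alphabet letter

# Load InceptionV3 pre-trained on ImageNet and freeze its convolutional base.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False

# Add a small classification head for the hand-alphabet classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Mini-batch gradient descent, as mentioned in the abstract.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_images, train_labels, batch_size=32, epochs=10,
#           validation_data=(val_images, val_labels))
```

Freezing the pre-trained base and training only a small head is what makes high accuracy plausible with a small dataset, which is the point the abstract emphasizes.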


Author(s):  
Marion Kaczmarek ◽  
Michael Filhol

Abstract Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software. Such software is designed to ease the translator's task. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we base our study on professional Sign Language translators' practices and needs. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals to collect both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist these tasks, how existing tools could be adapted to Sign Language, and what must be added to meet the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.

