Verbos copulativos com locativos em Português Europeu e em Língua Gestual Portuguesa [Copulative verbs with locatives in European Portuguese and Portuguese Sign Language]

Author(s): Celda Morgado, Ana Maria Brito

Verbs and their syntactic and semantic properties have been studied in several languages and in different theoretical frameworks. For copulative verbs, however, studies of sign languages are still scarce, particularly for Portuguese Sign Language. This paper therefore examines some properties of predicative phrases with adjectives, participles, and locatives in European Portuguese and Portuguese Sign Language, comparing them with other oral languages, in particular the Iberian Romance languages, and with other sign languages. The Portuguese Sign Language data seem to indicate that the copulative verb is lexically realized when the predicate is locative, whereas a null copula occurs with non-locative predicates.

2021, Vol. 14 (2), pp. 1-45
Author(s): Danielle Bragg, Naomi Caselli, Julie A. Hochgesang, Matt Huenerfauth, Leah Katz-Hernandez, et al.

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which are highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex, involving individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf cultural identity, coupled with a history of oppression, makes its use by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors' experiences living, working, and studying in this space.


2020, pp. 026765832090685
Author(s): Sannah Gulamani, Chloë Marshall, Gary Morgan

Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, focusing in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency and duration of the different viewpoints used, and the number of articulators used simultaneously. We found that even though learners' and deaf signers' narratives did not differ in overall duration, learners' narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers at using multiple articulators simultaneously. We conclude that the challenges for sign language learners include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.


2016, Vol. 39 (2), pp. 391-407
Author(s): Carl Börstell, Ryan Lepic, Gal Belsitzman

Sign languages make use of paired articulators (the two hands), so manual signs may be either one- or two-handed. Although two-handedness has previously been regarded as a purely formal feature, studies have argued that morphologically two-handed forms are associated with some types of inflectional plurality. Moreover, recent studies across sign languages have demonstrated that even lexically two-handed signs share certain semantic properties. In this study, we investigate lexically plural concepts in ten different sign languages, distributed across five sign language families, and demonstrate that such concepts are preferentially represented with two-handed forms across all the languages in our sample. We argue that this is because the signed modality, with its paired articulators, enables these languages to iconically represent conceptually plural meanings.


DOI: 10.29007/r1rt, 2018
Author(s): Ana-María Fernández Soneira, Inmaculada C. Báez Montero, Eva Freijeiro Ocampo

The approval of the law recognizing Sign Languages and its subsequent development (together with the laws enacted by the regional governments and the work of universities and institutions such as CNLSE) has changed the landscape of research on sign languages in Spain. In spite of these social advances, a corpus of Spanish Sign Language (LSE) has not yet been compiled. A sign language corpus is traditionally composed of collections of annotated or tagged videos that contain written material aligned with the primary sign language data. The compiling project presented here, CORALSE, proposes: 1) to collect a representative number of samples of language use; 2) to tag and transcribe the collected samples and build an online corpus; 3) to advance the description of LSE grammar; 4) to provide the scientific background needed for developing materials for educational purposes; and 5) to advance the development of different types of LSE.
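As a rough illustration of the kind of time-aligned annotation such a corpus typically stores, here is a minimal sketch in Python. The record layout, file name, and tier names are all hypothetical, not CORALSE's actual format; the point is only that glosses and written translations are aligned to video timestamps, which is what makes corpus queries possible.

```python
# A minimal sketch (hypothetical format) of a time-aligned annotation
# record: glosses and a written translation aligned to video timestamps.
annotation = {
    "video": "coralse_sample_0001.mp4",      # hypothetical file name
    "signer": "participant-01",
    "tiers": {
        "gloss-RH": [                         # right-hand gloss tier
            {"start_ms": 1200, "end_ms": 1650, "value": "HOUSE"},
            {"start_ms": 1700, "end_ms": 2100, "value": "BIG"},
        ],
        "translation": [
            {"start_ms": 1200, "end_ms": 2100,
             "value": "The house is big."},
        ],
    },
}

# Alignment lets a query pull every gloss overlapping a translation span:
span = annotation["tiers"]["translation"][0]
glosses = [g["value"] for g in annotation["tiers"]["gloss-RH"]
           if g["start_ms"] < span["end_ms"] and g["end_ms"] > span["start_ms"]]
print(glosses)  # ['HOUSE', 'BIG']
```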


2020, Vol. 37 (4), pp. 571-608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
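To make the constraint-ranking idea concrete, here is a schematic Optimality Theory evaluation in Python. The candidate forms, constraint names, and violation counts are invented for illustration and are not the constraints proposed in the paper; the sketch shows only that identical candidates with identical violation profiles yield different winners under different rankings.

```python
# Hypothetical OT tableau: each candidate maps constraint -> violations.
candidates = {
    "mark number on one verb":   {"REDUNDANT": 1, "ECONOMY": 0},
    "mark number on every verb": {"REDUNDANT": 0, "ECONOMY": 1},
}

def ot_winner(ranking):
    # Compare violation vectors in ranking order: fewer violations on a
    # higher-ranked constraint decides; ties pass to the next constraint.
    def profile(cand):
        return tuple(candidates[cand][c] for c in ranking)
    return min(candidates, key=profile)

# Two hypothetical languages differing only in how they rank constraints:
print(ot_winner(["REDUNDANT", "ECONOMY"]))  # -> mark number on every verb
print(ot_winner(["ECONOMY", "REDUNDANT"]))  # -> mark number on one verb
```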


2021, Vol. 2 (3)
Author(s): Gustaf Halvardsson, Johanna Peterson, César Soto-Valero, Benoit Baudry

The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and motion-processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model is built on a pre-trained InceptionV3 network and trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning, reusing the representations the model learned during pre-training. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages and that transfer learning can achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of deploying our model as a user-friendly web application for interpreting signs.
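For readers unfamiliar with this setup, a minimal sketch of transfer learning with an InceptionV3 backbone in Keras follows. This is not the authors' code: the class count, head layers, learning rate, and data paths are assumptions chosen for illustration.

```python
# A minimal sketch (assumptions, not the paper's code) of transfer
# learning for hand-alphabet classification with InceptionV3 in Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # hypothetical: one class per hand-alphabet sign
IMG_SIZE = (299, 299)     # InceptionV3's expected input size

# Load ImageNet-pre-trained InceptionV3 without its classification head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze the transferred features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Mini-batch gradient descent, as mentioned in the abstract.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical training data layout: one folder per sign class, e.g.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "ssl_signs/train", image_size=IMG_SIZE, batch_size=32,
#     label_mode="categorical")
# model.fit(train_ds, epochs=10)
```

Freezing the backbone is what lets a small dataset (here, 9,400 images) reach usable accuracy: only the small classification head is trained from scratch.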


Author(s): Marion Kaczmarek, Michael Filhol

Professional sign language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software, which is meant to ease the translator's tasks. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we base our study on the practices and needs of professional sign language translators. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals to gather both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist those tasks, how existing tools could be adapted to sign language, and what must be added to fit the needs of sign language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.


2021, pp. 095679762199155
Author(s): Amanda R. Brown, Wim Pouw, Diane Brentari, Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to the illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of the illusion in the description task was smaller than in the estimation task and did not differ from that in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


2019, Vol. 5 (1), pp. 666-689
Author(s): Carl Börstell, Tommi Jantunen, Vadim Kimmelman, Vanja de Lint, Johanna Mesch, et al.

We investigate transitivity prominence of verbs across signed and spoken languages, based on data from both valency dictionaries and corpora. Our methodology relies on the assumption that dictionary data and corpus-based measures of transitivity are comparable, and we find evidence in support of this through the direct comparison of these two types of data across several spoken languages. For the signed modality, we measure the transitivity prominence of verbs in five sign languages based on corpus data and compare the results to the transitivity prominence hierarchy for spoken languages reported in Haspelmath (2015). For each sign language, we create a hierarchy for 12 verb meanings based on the proportion of overt direct objects per verb meaning. We use these hierarchies to calculate correlations between languages – both signed and spoken – and find positive correlations between transitivity hierarchies. Additional findings of this study include the observation that locative arguments seem to behave differently than direct objects judging by our measures of transitivity, and that relatedness among sign languages does not straightforwardly imply similarity in transitivity hierarchies. We conclude that our findings provide support for a modality-independent, semantic basis of transitivity.
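As a toy illustration of the measure described (not the study's actual data or code), the following Python sketch computes transitivity prominence as the proportion of overt direct objects per verb meaning and then rank-correlates two languages' hierarchies. The verb meanings and counts are invented.

```python
# Toy transitivity-prominence computation with invented corpus counts.
from scipy.stats import spearmanr

# verb meaning -> (clauses with an overt direct object, total clauses)
lang_a = {"break": (45, 50), "see": (30, 40), "go": (2, 60)}
lang_b = {"break": (70, 80), "see": (25, 50), "go": (5, 55)}

def prominence(counts):
    # Proportion of overt direct objects per verb meaning.
    return {v: obj / total for v, (obj, total) in counts.items()}

verbs = sorted(lang_a)              # shared verb meanings
pa, pb = prominence(lang_a), prominence(lang_b)

# Spearman's rho compares the two hierarchies as rankings.
rho, p = spearmanr([pa[v] for v in verbs], [pb[v] for v in verbs])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```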


Author(s): HyeonJung Park, Youngki Lee, JeongGil Ko

In this work we present SUGO, a depth-video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only videos offers benefits such as being less privacy-invasive than RGB videos, it introduces new challenges, including low video resolutions and the sensor's sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset through data augmentation, making it robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as one of the trained words. Furthermore, the overall operations are designed to be lightweight, so that sign language translation takes place in real time using only the resources available on a smartphone, with no help from cloud servers or external sensing components. Specifically, to train and test SUGO, we collected sign language data from 20 individuals for 50 Korean Sign Language words, amounting to a dataset of ~5,000 sign gestures, and collected additional in-the-wild data to evaluate the performance of SUGO in real-world usage scenarios with different lighting conditions and daily activities. Our extensive evaluations show that SUGO can properly classify sign words with an accuracy of up to 91% and suggest that the system is suitable (in terms of resource usage, latency, and environmental robustness) to enable a fully mobile solution for sign language translation.
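To illustrate the 3DCNN classification step, here is a minimal PyTorch sketch. This is not SUGO's actual architecture: the layer sizes, clip dimensions, and class count (50 words, matching the abstract) are assumptions, and a deployable mobile model would be further optimized.

```python
# A minimal sketch (assumptions, not SUGO's architecture) of a 3D CNN
# classifying a clip of depth frames as one of 50 sign words.
# Input shape: (batch, 1 depth channel, frames, height, width).
import torch
import torch.nn as nn

class Depth3DCNN(nn.Module):
    def __init__(self, num_words=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),              # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),      # global pool -> (B, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_words)

    def forward(self, clip):
        x = self.features(clip)
        return self.classifier(x.flatten(1))

# Hypothetical usage: a 16-frame, 112x112 depth clip.
model = Depth3DCNN()
clip = torch.randn(1, 1, 16, 112, 112)   # stand-in for a real depth clip
logits = model(clip)                      # scores over the 50 word classes
pred = logits.argmax(dim=1)               # index of the predicted word
```

The 3D convolutions are what let the network capture motion across frames as well as hand shape within each frame, which is the property the abstract relies on for word-level sign classification.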

