Manual signs
Recently Published Documents

Total documents: 64 (last five years: 8)
H-index: 15 (last five years: 1)

Author(s): Serpil Karabüklü, Ronnie B. Wilbur

Abstract: Sign languages have been reported to have manual signs that function as perfective morphemes (Fischer & Gough 1999; Meir 1999; Rathmann 2005; Duffy 2007; Zucchi et al. 2010). Turkish Sign Language (TİD) has also been claimed to have such morphemes (Zeshan 2003; Kubuş & Rathmann 2009; Dikyuva 2011; Gökgöz 2011; Karabüklü 2016) as well as a nonmanual completive marker (‘bn’) (Dikyuva 2011). This study shows that the nonmanual ‘bn’ is in fact a perfective morpheme. We examine its compatibility with different event types and furthermore show that TİD has a manual sign bı̇t (‘finish’) that is indeed the completive marker but with possibly unusual restrictions on its use. Based on their distribution, the current study distinguishes bı̇t and ‘bn’ as different morphemes even though they can co-occur. TİD is argued to be typologically different from other sign languages since it has both a nonmanual marker (‘bn’) for a perfective morpheme and a manual sign (bı̇t) with different selectional properties than the manual signs reported for other sign languages.


Forests (2021), Vol. 12(7), pp. 820
Author(s): Guannan Lei, Ruting Yao, Yandong Zhao, Yili Zheng

The detection and recognition of unstructured roads in forest environments are critical for smart forestry technology. Forest roads lack effective reference objects and manual signs and have high degrees of nonlinearity and uncertainty, which pose severe challenges to forest engineering vehicles. This research aims to improve the automation and intelligence of forestry engineering and proposes an unstructured road detection and recognition method based on a combination of image processing and 2D lidar detection. The method uses an “improved SEEDS + Support Vector Machine (SVM)” strategy to quickly classify and recognize the road area in the image. Combined with the remapping of 2D lidar point cloud data onto the image, it takes the actual navigation requirements of unmanned forest vehicles into account and constructs a road model in the vehicle coordinate system. The algorithm was deployed on a self-built intelligent navigation platform to verify its feasibility and effectiveness. The experimental results show that under low-speed conditions the system meets real-time requirements, processing data at an average of 10 frames/s. For the centerline of the road model, the matching error between the image and the lidar is no more than 0.119 m. The algorithm can provide effective support for the identification of unstructured roads in forest areas. This technology has important application value for forestry engineering vehicles in autonomous inspection and spraying, nursery stock harvesting, skidding, and transportation.
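The “SEEDS + SVM, then lidar remapping” pipeline can be illustrated with a short sketch. The Python code below is a minimal approximation of the idea only: it segments a frame into SEEDS superpixels (OpenCV contrib), classifies each superpixel as road or background with an SVM, and projects 2D lidar returns onto the image to keep the points that fall inside the detected road region. The feature choice (mean Lab colour per superpixel), all parameter values, and the calibration inputs are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a superpixel + SVM road classifier with 2D lidar remapping.
# Requires opencv-contrib-python (for cv2.ximgproc) and scikit-learn.
import cv2
import numpy as np
from sklearn.svm import SVC


def superpixel_features(img_bgr, num_superpixels=400, num_levels=4, iterations=10):
    """Segment the frame with SEEDS and return the label map plus per-superpixel mean Lab colour."""
    h, w = img_bgr.shape[:2]
    seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, 3, num_superpixels, num_levels)
    seeds.iterate(img_bgr, iterations)
    labels = seeds.getLabels()                       # (h, w) superpixel index map
    n = seeds.getNumberOfSuperpixels()
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2Lab).reshape(-1, 3).astype(np.float32)
    flat = labels.reshape(-1)
    feats = np.zeros((n, 3), dtype=np.float32)
    for k in range(n):                               # mean colour of each superpixel
        feats[k] = lab[flat == k].mean(axis=0)
    return labels, feats


def train_road_svm(X_train, y_train):
    """Train on superpixels hand-labelled as road (1) / non-road (0); hypothetical training data."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    return clf


def road_mask(img_bgr, clf):
    """Classify each superpixel and return a binary road mask for the frame."""
    labels, feats = superpixel_features(img_bgr)
    pred = clf.predict(feats)                        # 1 = road, 0 = background
    return pred[labels].astype(np.uint8)


def lidar_points_on_road(points_xy, mask, rvec, tvec, K, dist):
    """Project 2D lidar returns (vehicle frame, metres) into the image and keep those on the road.

    rvec, tvec, K, dist are placeholder lidar-to-camera extrinsics and camera
    intrinsics from a prior calibration (assumed, not given in the abstract).
    """
    pts3d = np.hstack([points_xy, np.zeros((len(points_xy), 1))]).astype(np.float32)
    img_pts, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
    img_pts = img_pts.reshape(-1, 2).astype(int)
    h, w = mask.shape
    keep = []
    for (u, v), p in zip(img_pts, points_xy):
        if 0 <= u < w and 0 <= v < h and mask[v, u] == 1:
            keep.append(p)                           # this lidar return lies on the road
    return np.array(keep)
```

A road centerline in the vehicle coordinate system could then be fitted to the retained lidar points, which corresponds to the road-model construction and the image-to-lidar matching error reported above.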


2020, Vol. 63(7), pp. 2418-2424
Author(s): Ellen Rombouts, Babette Maessen, Bea Maes, Inge Zink

Purpose: Key word signing (KWS) entails using manual signs to support the natural speech of individuals with normal hearing who have communication difficulties. While manual signs from the local sign language may be used for this purpose, some KWS systems have opted for a distinct KWS lexicon. A distinct KWS lexicon typically aims for higher sign iconicity or recognizability to make the lexicon more accessible to individuals with intellectual disabilities. We sought to determine whether, in the Belgian Dutch context, signs from such a distinct KWS lexicon (Spreken Met Ondersteuning van Gebaren [Speaking With Support of Signs; SMOG]) were indeed more iconic than their Flemish Sign Language (FSL) counterparts. Method: Participants were 224 adults with typical development who had no signing experience. They rated the resemblance between an FSL sign and its meaning. Raw data on the iconicity of SMOG from a previous study were used. Translucency was statistically and qualitatively compared between the SMOG signs and their FSL counterparts. Results: SMOG had overall higher translucency than FSL and contained a higher number of iconic signs. Conclusion: This finding may support the value of a separate sign lexicon over using sign language signs. Nevertheless, other aspects, such as wide availability and inclusion, need to be considered.
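A minimal sketch of the kind of paired comparison described above (translucency ratings for SMOG signs versus their FSL counterparts for the same meanings) is shown below. The Wilcoxon signed-rank test and all variable names are illustrative assumptions; the abstract does not specify the exact statistical procedure used.

```python
# Sketch only: paired comparison of per-meaning translucency ratings (assumed analysis).
import numpy as np
from scipy.stats import wilcoxon


def compare_translucency(smog_ratings, fsl_ratings):
    """Compare translucency ratings for the same meanings in two lexicons.

    smog_ratings, fsl_ratings: mean rating per shared meaning, aligned so that
    index i refers to the same concept in both lexicons (hypothetical arrays).
    """
    smog = np.asarray(smog_ratings, dtype=float)
    fsl = np.asarray(fsl_ratings, dtype=float)
    stat, p = wilcoxon(smog, fsl)    # non-parametric test for paired samples
    print(f"median SMOG = {np.median(smog):.2f}, median FSL = {np.median(fsl):.2f}")
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
    return stat, p
```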


2020, pp. 31-54
Author(s): John D. Bonvillian, Nicole Kissane Lee, Tracy T. Dooley, Filip T. Loncke

Chapter 2 presents multiple accounts of the widespread use of manual signs by hearing persons in diverse settings throughout history. From an initial theoretical focus on the origins of language in humans, and the potential that language first emerged from gestural or manual communication, the reader is introduced to the views of various historical scholars who believed that signs and gestures are a natural means of communication and could potentially even be a universal form of communication. Such a universal form of communication, however, meets with a substantial obstacle in that gestures may vary widely in meaning and usage cross-culturally. Nevertheless, such a system was developed once before by the Indigenous peoples of North America, who spoke hundreds of different languages. Native Americans used signs as a lingua franca across a wide geographical area to overcome the numerous spoken language barriers they encountered. Also covered in this chapter are the use of signs in early contact situations and interactions between Native Americans and Europeans, and the development of signs by various monastic orders in Europe.


2020, Vol. 25(3), pp. 298-317
Author(s): Felix Sze, Monica Xiao Wei, David Lam

Abstract: This paper presents the design and development of the Hong Kong Sign Language-Sentence Repetition Test (HKSL-SRT). It will be argued that the test offers evidence of discriminability, reliability, and practicality, and can serve as an effective global measure of individuals' proficiency in HKSL. The full version of the test consists of 40 signed sentences of increasing length and complexity. Specifically, we will evaluate the manual and non-manual components of these sentences to find out whether, and to what extent, they can differentiate three groups of deaf signers, namely native signers, early learners, and late learners. Statistical analyses show that the test scores based on a correct repetition of the manual signs of each sentence bear a significant negative correlation with signers' age of acquisition. Including the correct repetition of non-manuals in the scoring scheme results in higher reliability and a higher separation index for the test in the Rasch model. This paper will also discuss how psychometric measures of Rasch analysis, including the concept of fit and the rankings of items/persons in the Wright map, have been applied to the original list of 40 sentence items for the development of a shortened test.
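The reported relationship between repetition accuracy and age of acquisition can be sketched as a simple rank correlation. The code below assumes one manual-sign repetition score and one age-of-acquisition value per signer; the choice of Spearman's rho and all names are illustrative, not the authors' exact analysis (the paper additionally uses Rasch modelling, which is not reproduced here).

```python
# Sketch only: rank correlation between repetition score and age of acquisition.
import numpy as np
from scipy.stats import spearmanr


def repetition_score_vs_aoa(scores, age_of_acquisition):
    """Correlate per-signer manual-sign repetition scores with age of acquisition.

    scores, age_of_acquisition: one value per signer (hypothetical arrays).
    A negative rho mirrors the reported finding that later learners repeat
    the manual signs of each sentence less accurately.
    """
    scores = np.asarray(scores, dtype=float)
    aoa = np.asarray(age_of_acquisition, dtype=float)
    rho, p = spearmanr(scores, aoa)
    return rho, p


# Example with made-up data:
# rho, p = repetition_score_vs_aoa([38, 35, 30, 24, 20], [0, 2, 6, 12, 18])
```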


Author(s): Josep Quer

Negation systems in sign languages have been shown to display the core grammatical properties attested for natural language negation. Negative manual signs realize clausal negation in much the same way as in spoken languages. However, the visual-gestural modality affords the possibility of encoding negative marking non-manually, and sign languages vary as to whether such markers can convey negation on their own. Negative concord can be argued to exist between manual and non-manual markers of negation, but we also find cases of negative concord among manual signs. Negation interacts in interesting ways with other grammatical categories, and it can be encoded in irregular and affixal forms that still have sentential scope. At the same time, negation is attested in lexical morphology, leading to forms that do not express sentential negation.


2019, Vol. 30(4), pp. 655-686
Author(s): Sara Siyavoshi

Abstract: This paper presents a study of modality in Iranian Sign Language (ZEI) from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners (horseshoe mouth). In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.


Author(s): Sherman Wilcox, Barbara Shaffer

This chapter examines evidentiality in signed languages. Data come primarily from three signed languages: American Sign Language (ASL), Brazilian Sign Language (Libras), and Catalan Sign Language (LSC). The relationship between evidentiality, epistemic modality, and mirativity is examined across the expression of perceptual information as an evidential source, inference, and reported speech. It is suggested that evidentiality relies on simulation and subjectification. Finally, a proposal is offered that evidentiality, epistemic modality, and mirativity are primarily expressed through grammaticalized facial markers in signed languages, rather than by means of manual signs. These facial markers allow for the simultaneous expression of grammatical information. In signed languages, therefore, not only are the semantic components of evidentiality, epistemic modality, and mirativity integrated, but so too are the phonological means of their expression.

