2021 ◽  
Author(s):  
Norma-Jane E. Thompson

Currently, the World Wide Web allows web pages to be produced in most written languages. Many deaf people, however, use a visual-spatial language with no written equivalent (e.g. American Sign Language). SignLink Studio, a software tool for designing sign language web pages, allows hyperlinking within video clips so that sign-language-only web pages can be created. However, this tool does not allow for other interactive elements such as online forms. In this thesis, a model for an online sign language form is proposed and evaluated. A study with 22 participants was conducted to examine whether there were differences in performance or preferences between sign language forms and text forms, and between two presentation styles (all-at-once versus one-at-a-time). The results showed no clear performance advantage for either sign language or text; however, participants were interested in having online questions presented in sign language. There were also no performance or preference advantages for either presentation style.


2019 ◽  
Vol 57 (15) ◽  
pp. 242-251
Author(s):  
Dominika Wiśniewska

An ethical and methodologically sound diagnosis of a hearing child of Deaf parents requires a specialist with extensive knowledge. Every society includes people who use a visual-spatial language: deaf people. The majority perceives them as disabled people, less frequently as a cultural minority. The attitude adopted towards deafness determines the context of the psychologist’s assessment. In such a specific situation, the diagnosis should be viewed from the perspective of the hearing child as a bi-cultural person, a descendant of Deaf parents and thus a representative of Deaf culture, and from that of the psychologist, who represents the cultural majority of hearing people.


2013 ◽  
Vol 25 (4) ◽  
pp. 517-533 ◽  
Author(s):  
Karen Emmorey ◽  
Stephen McCullough ◽  
Sonya Mehta ◽  
Laura L. B. Ponto ◽  
Thomas J. Grabowski

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.


2019 ◽  
Vol 13 (1) ◽  
pp. 20-33
Author(s):  
Edvin Ostergaard

In the cave allegory, Plato illustrates his theory of ideas by showing that the world man senses and tries to understand is actually only a dim representation of the real world. We know the allegory for its light and shadow; however, there is also sound and echo in the cave. In this article, I discuss whether the narrative of the prisoners in the cave is in tune with an audial experience and whether an allegory led by sound corresponds to the one led by sight. I start with a phenomenological analysis of the cave as a place of sound. After that, I elaborate on the training of attentive listening skills and its ramifications for pedagogical practice. I conclude that there are profound differences between seeing and listening and that sound reveals different aspects of “the real” compared to sight. The significance of Plato’s cave allegory should be evaluated in relation to modern, scientific thought characterised by a visual-spatial language. With the support of this allegory, the light-shadow polarity has become the Urbild of represented reality. At the same time, a visually oriented culture of ideas repeatedly confirms Plato’s cave allegory as its central metaphor. Finally, an elaboration on the sounds in the cave proves to be fruitful in an educational sense: the comparison of sound and sight sharpens the differences and complementarities of audial and visual experiences.


Cognition ◽  
1993 ◽  
Vol 46 (2) ◽  
pp. 139-181 ◽  
Author(s):  
Karen Emmorey ◽  
Stephen M. Kosslyn ◽  
Ursula Bellugi


2018 ◽  
Author(s):  
Thomas Kluth

Are humans able to split their attentional focus? This master's thesis tries to answer this question by proposing several modifications to the Attentional Vector Sum (AVS) model (Regier & Carlson, 2001). The AVS model is a computational cognitive model of spatial language use that assumes visual attention. Carlson, Regier, Lopez, and Corrigan (2006) have developed a modification to the AVS model that integrates effects of world knowledge (functionality of spatially related objects) into the AVS model. This modified model assumes that people are able to split their visual spatial attention. However, it is debated whether this assumption holds true (e.g., Jans, Peters, & De Weerd, 2010). Thus, this thesis investigates the assumption in the domain of spatial language use by proposing and assessing alternative model modifications that do not assume split attention. Based on available empirical data, the results favor a uni-focal distribution of attention over a multi-focal attentional distribution. At the same time, the results cast doubt on the proper modeling of functional aspects of spatial language use, as the AVS model (not considering functionality) is performing surprisingly well on most data sets. (See https://doi.org/10.1007/978-3-319-11215-2_6 for a condensed version of this work.)
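The core AVS computation the thesis modifies can be sketched roughly as follows. This is a much-simplified illustration of the Regier & Carlson (2001) idea with a single (uni-focal) attentional beam, not the thesis's code; the exponential decay constant and the linear angle-to-acceptability mapping are illustrative assumptions:

```python
import numpy as np

def avs_above(trajector, landmark_points, sigma=1.0):
    """Toy Attentional Vector Sum rating for "above" (0..1).

    trajector: (x, y) location of the located object.
    landmark_points: iterable of (x, y) points making up the landmark.
    sigma: attention-decay constant (illustrative value).
    """
    t = np.asarray(trajector, dtype=float)
    lm = np.asarray(landmark_points, dtype=float)

    # Attentional focus: the landmark point closest to the trajector.
    focus = lm[np.argmin(np.linalg.norm(lm - t, axis=1))]

    # Attention decays exponentially with distance from that single
    # focus -- the uni-focal assumption the thesis puts to the test.
    weights = np.exp(-np.linalg.norm(lm - focus, axis=1) / sigma)

    # Vector sum: attention-weighted vectors from landmark points
    # to the trajector.
    direction = (weights[:, None] * (t - lm)).sum(axis=0)

    # Deviation of the summed vector from upright vertical, mapped
    # linearly onto an acceptability rating.
    cos_dev = direction @ np.array([0.0, 1.0]) / np.linalg.norm(direction)
    angle = np.degrees(np.arccos(np.clip(cos_dev, -1.0, 1.0)))
    return max(0.0, 1.0 - angle / 90.0)
```

A trajector directly above a flat landmark yields a rating near 1, one off to the side a rating near 0; the modifications discussed in the thesis concern how the `weights` distribution is shaped (one attentional focus versus several).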


Author(s):  
Pratiksha dundappa Talawar

ABSTRACT: The word geriatrics comes from the Greek, in which “geron” means old man and “iatros” means healer. Geriatric medicine is a speciality that focuses on the health care of elderly people. Ayurveda uses words such as Vriddha, Vardhakya and Jara to denote aging. Among the eight divisions of Ashtang Ayurveda, Jarachikitsa is one of the important branches [1]. There are many disorders related to geriatrics, including dementia. Dementia is a broad category of brain diseases that cause a long-term and often gradual decrease in the ability to think and remember that is great enough to affect a person's daily functioning [2]. The most commonly affected domains include memory, visual-spatial skills, language, attention and problem solving [3]. Old age is not a disease in itself, but the elderly are vulnerable to long-term diseases such as cardiovascular disease, stroke, diabetes, and musculoskeletal and mental disorders. Many diseases result in dementia. The Ayurvedic classics contain no separate chapter on this condition, but the signs and pathogenesis of dementia can be understood in terms of Smritibransha. This paper discusses the Ayurvedic perspective on the pathogenesis of dementia in geriatrics and suggests guidelines for the management of dementia through Ayurveda that can be beneficial for geriatric patients.


2016 ◽  
Vol 7 (2) ◽  
pp. 62-77
Author(s):  
Lalit Goyal ◽  
Vishal Goyal

Many machine translation systems for spoken languages are available, but translation systems between spoken languages and sign languages are limited. Translation from text to sign language differs from translation between spoken languages because sign language is a visual-spatial language that uses the hands, arms, face, head, and body postures to communicate in three dimensions. Translation from text to sign language is complex because the grammar rules of sign languages are not standardized. Still, a number of approaches have been used for translating text to sign language, in which the input is text and the output takes the form of pre-recorded videos or an animated character generated by computer (an avatar). This paper reviews the research carried out on automatic translation from text to sign language.
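The pre-recorded-video approach that the review describes can be illustrated with a toy lookup pipeline. This is only a sketch: the gloss dictionary, clip filenames, and fingerspelling fallback are hypothetical, and real systems must also reorder words to match sign language grammar rather than translating word by word:

```python
def text_to_sign_clips(sentence, gloss_videos):
    """Map an English sentence to a sequence of sign video clips.

    gloss_videos: dict from a word to its pre-recorded clip filename
    (hypothetical example data). Words without a recorded sign fall
    back to letter-by-letter fingerspelling clips.
    """
    clips = []
    for word in sentence.lower().split():
        if word in gloss_videos:
            clips.append(gloss_videos[word])
        else:
            # Fingerspelling fallback: one clip per letter.
            clips.extend(f"fs_{ch}.mp4" for ch in word if ch.isalpha())
    return clips
```

An avatar-based system would replace the clip filenames with animation parameters, but the lookup-and-sequence structure is broadly similar.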


Author(s):  
Monica M. Glumm ◽  
Kathy L. Kehring ◽  
Timothy L. White

This laboratory study examined the effects of visual, spatial language, and 3-D audio cues about target location on target acquisition performance and on the recall of information contained in concurrent radio communications. Two baseline conditions were also included in the analysis: no cues (baseline 1) and target-presence cues only (baseline 2). In the modes in which target location cues were provided, 100% of the targets presented were acquired, compared with 94% in baseline 1 and 95% in baseline 2. On average, targets were acquired 1.4 seconds faster in the visual, spatial language, and 3-D audio modes than in the baseline conditions, with times in the visual and 3-D audio modes being 1 second faster than those in the spatial language mode. Overall workload scores were lower in the 3-D audio mode than in all other conditions except the visual mode. Less information (23%) was recalled from the auditory communications in baseline 1 than in the other four conditions, where attention could be directed to communications between target presentations.

