Assistive and Augmentive Communication for the Disabled

Published By IGI Global

ISBN 9781609605414, 9781609605421

Author(s): Tee Zhi Heng, Ang Li Minn, Seng Kah Phooi

This chapter presents a novel application of wireless technology to assist visually impaired people. As an alternative to the medical model of rehabilitation, the information explosion era provides the foundation for a technological solution that leads the visually impaired to more independent lives in the community by minimizing the obstacles of daily living. The “SmartGuide” system is built as a standalone portable handheld device linked to a caregiver monitoring system. Its objective is to help blind and low-vision people walk about independently, especially in dynamically changing environments. Navigation assistance is accomplished by providing speech guidance on how to move to a particular location. Information about changing conditions, such as road blockage and road closure, together with intelligent navigation aids, is provided to guide the user safely to his or her destination. The system also includes a camera sensor network that enhances monitoring capabilities for an extra level of security and reliability.
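Rerouting around a reported closure, as described above, can be sketched as a shortest-path search that skips blocked edges. The graph, place names, and weights below are hypothetical illustrations, not the SmartGuide implementation:

```python
from heapq import heappush, heappop

def shortest_route(graph, start, goal, blocked=frozenset()):
    """Dijkstra over a walkway graph, skipping edges reported blocked."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, {}).items():
            if (node, nxt) in blocked or (nxt, node) in blocked:
                continue  # dynamic closure reported: route around it
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heappush(heap, (nd, nxt))
    if goal != start and goal not in prev:
        return None  # no open route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# hypothetical walkway graph (edge weights in metres)
campus = {
    "entrance": {"lobby": 10},
    "lobby": {"entrance": 10, "cafeteria": 15, "corridor": 12},
    "corridor": {"lobby": 12, "cafeteria": 8},
    "cafeteria": {"lobby": 15, "corridor": 8},
}

print(shortest_route(campus, "entrance", "cafeteria"))
# direct leg blocked -> detour via the corridor
print(shortest_route(campus, "entrance", "cafeteria",
                     blocked={("lobby", "cafeteria")}))
```

In a deployed system the blocked set would be refreshed from the wireless network and each leg of the returned path rendered as a speech instruction.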


Author(s): Georgios A. Dafoulas, Noha Saleeb

The significance of newly emergent 3D virtual worlds to different genres of users is currently a controversial subject of deliberation. Users range from education pursuers, business contenders, and social seekers to technology enhancers and many more, comprising both users with normal abilities in physical life and those with different disabilities. This study aims to derive and critically analyze, using grounded theory, advantageous and disadvantageous themes, and their sub-concepts, of providing e-learning through 3D Virtual Learning Environments (VLEs), such as Second Life, to disabled users. It thereby provides evidence that 3D VLEs not only support traditional physical learning but also offer e-learning opportunities unavailable through 2D VLEs (such as Moodle or Blackboard) and learning opportunities unavailable through traditional physical education. Furthermore, to realize the full potential of these derived concepts, architectural and accessibility design requirements of 3D educational facilities, proposed by different categories of disabled students to accommodate their needs, are demonstrated.


Author(s): Xiaoxin Xu, Mingguang Wu, Bin Sun, Jianwei Zhang, Cheng Ding

Advances in embedded computing systems have resulted in the emergence of Wireless Sensor Networks (WSNs), which provide unique opportunities for sensing physical environments. ZigBee-compliant WSN platforms have been proposed for healthcare monitoring, smart homes, industrial monitoring, and other sensing applications. In this chapter, the authors use TI CC2430 and CC2431 chipsets with Z-Stack to design an outdoor healthcare monitoring system that tracks patients and helps doctors and nurses keep tabs on patients' health remotely. Several important techniques that enhance system performance are then elaborated, including reliable communication, the localization algorithm, and backup power. Finally, some suggestions for future development are presented.
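The chapter's localization algorithm is not detailed here, but the CC2431-style approach of estimating range from received signal strength and then trilaterating can be sketched as follows. The path-loss constants and anchor layout are assumed values for illustration only:

```python
import math

def rssi_to_distance(rssi_dbm, a=-45.0, n=2.0):
    """Log-distance path-loss model: `a` is the RSSI at 1 m and `n` the
    path-loss exponent (both environment-dependent, assumed here)."""
    return 10 ** ((a - rssi_dbm) / (10 * n))

def trilaterate(anchors, dists):
    """Solve for (x, y) from three anchor positions and ranges by
    subtracting circle equations and solving the resulting 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# three reference nodes at known positions (metres), patient at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
x, y = trilaterate(anchors, dists)
print(round(x, 2), round(y, 2))  # recovers (3.0, 4.0) with exact ranges
```

Real RSSI readings are noisy, so a deployed system would typically average measurements and use more than three anchors with a least-squares fit.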


Author(s): Jonathan Bishop

E-learning systems generally rely on good visual and cognitive abilities, making them most suitable for individuals with strong abilities in these areas. One such group is people with non-systemising impairments (NSIs), such as those with autism spectrum conditions (ASCs). These individuals could benefit greatly from technology that lets them use their abilities to overcome their impairments in social and emotional functioning and develop pro-social behaviours. Existing systems such as PARLE and MindReading are discussed, and a new one, the Visual Ontological Imitation System (VOIS), is proposed. This chapter details an investigation into the acceptability of these systems among those working in social work and advocacy. The study found that VOIS would be well received, although dependency on assistive technology and its impact on how others view NSIs still need to be addressed by society and its institutions.


Author(s): Jacey-Lynn Minoi, Duncan Gillies

The aim of this chapter is to identify the face areas containing high facial expression information, which may be useful for facial expression analysis, face and facial expression recognition, and synthesis. In facial expression analysis, landmarks are usually placed on well-defined craniofacial features. In this experiment, the authors selected a set of landmarks based on craniofacial anthropometry and associated each landmark with facial muscles and the Facial Action Coding System (FACS) framework; that is, landmarks are also located on less palpable areas that exhibit high facial expression mobility. The selected landmarks are statistically analysed in terms of facial muscle motion based on FACS. Human faces channel verbal and non-verbal communication: speech, facial expressions of emotion, gestures, and other communicative actions. These cues may therefore be significant in identifying expressions such as pain, agony, anger, and happiness. Here, the authors describe the potential of computer-based models of three-dimensional (3D) facial expression analysis and non-verbal communication recognition to assist in biometric recognition and clinical diagnosis.
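One simple way to quantify landmark mobility of the kind described is to measure each landmark's displacement between a neutral and an expressive 3D scan and rank the results. The landmark names and coordinates below are made up for illustration; they are not the authors' data:

```python
import math

# hypothetical neutral vs. smiling 3D landmark coordinates (mm)
neutral = {"mouth_corner_r":  (30.0, -40.0, 80.0),
           "nasion":          (0.0,   10.0, 95.0),
           "eyebrow_inner_l": (-12.0, 25.0, 90.0)}
smiling = {"mouth_corner_r":  (34.0, -36.0, 82.0),
           "nasion":          (0.0,   10.2, 95.0),
           "eyebrow_inner_l": (-12.0, 25.5, 90.0)}

def mobility(a, b):
    """Euclidean displacement of each landmark between two scans."""
    return {name: math.dist(a[name], b[name]) for name in a}

# rank landmarks by motion magnitude: high movers carry expression information
ranked = sorted(mobility(neutral, smiling).items(),
                key=lambda kv: kv[1], reverse=True)
for name, d in ranked:
    print(f"{name}: {d:.2f} mm")
```

On real data one would aggregate such displacements across many subjects and expressions (grouped by FACS action units) before drawing conclusions about which regions are informative.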


Author(s): Lau Sian Lun, Klaus David

Technology can be used to assist people with disabilities in their daily activities, and when users have communication deficiencies, suitable technology and tools can address such needs. The authors envision context awareness as a promising method for providing services and solutions in the area of Assistive and Augmentative Communication (AAC). In this chapter, they give an introduction to context awareness and the state of the art, followed by an elaboration of how context awareness can be used in AAC. The Context Aware Remote Monitoring Assistant (CARMA) is presented as an application designed for a care assistant and his patient, and a context-aware component implemented in the CARMA application is demonstrated. An experiment investigating movement recognition using a smartphone accelerometer and the obtained results are presented. The chapter ends with a discussion of challenges, future work, and the conclusion.
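A minimal sketch of the kind of accelerometer-based movement recognition the experiment investigates, assuming simple magnitude-spread features and hypothetical thresholds (this is not the CARMA implementation):

```python
import math
from statistics import stdev

def magnitude(sample):
    """Length of one (x, y, z) acceleration vector, in m/s^2."""
    return math.sqrt(sum(v * v for v in sample))

def classify_window(samples, still_std=0.3, walk_std=1.5):
    """Label a window of accelerometer samples by the spread of its
    magnitude signal; the two thresholds are assumed, not measured."""
    spread = stdev(magnitude(s) for s in samples)
    if spread < still_std:
        return "still"
    if spread < walk_std:
        return "walking"
    return "running"

# synthetic windows: resting phone vs. rhythmic walking bounce
still = [(0.0, 0.0, 9.8)] * 20
walking = [(0.0, 0.0, 9.8 + (0.5 if i % 2 else -0.5)) for i in range(20)]
print(classify_window(still), classify_window(walking))
```

A real recognizer would extract richer features (frequency content, per-axis statistics) over sliding windows and learn the decision boundaries from labelled data rather than fixing thresholds by hand.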


Author(s): Ong Chin Ann, Marlene Valerie Lu, Lau Bee Theng

The main purpose of this research is to enhance communication for the disabled community. The authors propose an enhanced interpersonal-human interaction model for people with special needs, especially those with physical and communication disabilities. The proposed model comprises automated real-time behaviour monitoring, designed and implemented with ubiquity and affordability in mind to suit the underprivileged. The authors present a prototype that encapsulates an automated facial expression recognition system for monitoring the disabled, equipped with a Short Message Service (SMS) notification feature. They adapted the Viola-Jones face detection algorithm for the face detection stage and implemented a template matching technique for the expression classification and recognition stage. They tested the model with a few users and achieved satisfactory results. The enhanced real-time behaviour monitoring system is an assistive tool to improve the quality of life of the disabled by assisting them anytime and anywhere when needed, so they can carry out their own tasks more independently without being constantly monitored or accompanied by their caretakers, teachers, or even parents. The rest of this chapter is organized as follows. The background of facial expression recognition systems is reviewed in Section 2. Section 3 describes and explains the conceptual model of facial expression recognition. Evaluation of the proposed system is covered in Section 4, results and findings from the testing are laid out in Section 5, and the final section concludes the chapter.
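Template matching for expression classification, as named above, is commonly implemented with zero-mean normalized cross-correlation: the detected face patch is compared against one stored template per expression and the best-scoring label wins. The sketch below illustrates the idea on synthetic patches and is not the authors' prototype:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches,
    in [-1, 1]; 1 means a perfect (affine-brightness) match."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(face, templates):
    """Pick the expression whose template correlates best with the face."""
    return max(templates, key=lambda label: ncc(face, templates[label]))

# synthetic stand-ins for per-expression templates
rng = np.random.default_rng(0)
templates = {"smile": rng.random((16, 16)),
             "frown": rng.random((16, 16))}
observed = templates["smile"] + rng.normal(0, 0.05, (16, 16))  # noisy smile
print(classify(observed, templates))
```

In the full pipeline, the face patch fed to the classifier would first be cropped by a Viola-Jones style detector and resized to the template dimensions; a low best score could trigger the SMS alert path.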


Author(s): Vivi Mandasari, Marlene Valerie Lu, Lau Bee Theng

Asperger Syndrome (AS) is a developmental disorder under the umbrella term of Autism Spectrum Disorders and is a milder variant of autism. It is characterized by significant difficulty in communication, most prominently in social interaction and non-verbal communication. Over the past decade, a variety of tools have emerged for teaching and assisting children with AS in the acquisition of social skills, ranging from simple picture exchange systems to high-end virtual reality systems. This chapter discusses the effectiveness of integrating Social Stories, 2D animations, and video instruction for teaching social skills to children diagnosed with Asperger Syndrome in an interactive manner. The prototype has been developed, implemented, and evaluated experimentally. The chapter discusses the evaluation process, results, findings, and areas for further exploration.


Author(s): Nia Valeria, Marlene Valerie Lu, Lau Bee Theng

Communication through speech is a vital skill and an innate ability in most human beings, intended to convey thoughts and needs, and it is the very foundation of literacy. However, some people find it one of the great challenges in their lives, particularly children with Cerebral Palsy. Children with this disability suffer from brain injuries sustained before, during, or after birth that affect their motor, cognitive, and linguistic skills. Additional complications may also cause hearing, visual, and speech impairments that further decrease their learning abilities. Their developmental milestones in learning arrive more slowly than a typical child's, so they require intensive personal drilling. It is believed that the cognitive skills of these children can be improved to enable them to lead more productive lives. This strongly motivated the authors to develop the proposed Virtual Collaborative Learning Tool, which aims to assist the learning of the targeted children through a responsive avatar of their parents, teachers, or caretakers. A preliminary study was conducted with voluntary participants to evaluate the effectiveness of the proposed learning model. The results showed that 80% of the participants were able to answer questions provided within the program.


Author(s): Kanubhai K. Patel, Sanjay Kumar Vij

A computational model of non-visual spatial learning through a virtual learning environment (VLE) is presented in this chapter. The inspiration comes from Landmark-Route-Survey (LRS) theory, the most widely accepted theory of spatial learning. An attempt has been made to combine findings and methods from several disciplines, including cognitive psychology, behavioral science, and computer science (specifically, virtual reality (VR) technology). The study of factors influencing spatial learning and the potential of using cognitive maps in modeling spatial learning are described. The motivation for using a VLE and its characteristics are also described briefly, as are different types of locomotion interfaces to VLEs with their constraints and benefits. The authors believe that by bringing perspectives from cognitive and experimental psychology to computer science, this chapter will appeal to a wide range of audiences: computer engineers concerned with assistive technologies; professionals interested in virtual environments, including architects, city planners, cartographers, high-tech artists, and mobility trainers; and psychologists involved in the study of spatial cognition, cognitive behaviour, and human-computer interfaces.
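As a toy illustration of LRS-style knowledge (not the authors' model): landmark knowledge can be stored as named places, route knowledge as traversable links between them, and survey knowledge as coordinates from which spoken-style instructions are derived. All names and coordinates below are hypothetical:

```python
import math

# survey knowledge: landmark coordinates in a room (metres)
landmarks = {"door": (0, 0), "desk": (4, 0), "window": (4, 3)}
# route knowledge: which legs the learner has actually traversed
routes = {("door", "desk"), ("desk", "window")}

def route_instruction(a, b):
    """Turn survey knowledge into one spoken-style instruction for a leg:
    bearing is measured clockwise from 'north' (+y)."""
    (xa, ya), (xb, yb) = landmarks[a], landmarks[b]
    dist = math.hypot(xb - xa, yb - ya)
    bearing = math.degrees(math.atan2(xb - xa, yb - ya)) % 360
    return f"from {a}, head {bearing:.0f} degrees for {dist:.1f} m to {b}"

for a, b in sorted(routes):
    print(route_instruction(a, b))
```

The point of the LRS progression is that a learner who has only route knowledge can follow the stored legs, while survey knowledge additionally supports novel shortcuts between any two landmarks.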

