Development of Learning Support Equipment for Sign Language and Fingerspelling by Mixed Reality

Author(s):  
Natsuhiko Hirabayashi ◽  
Nami Fujikawa ◽  
Ryohei Yoshimura ◽  
Yoshinori Fujisawa


Author(s):  
Keyla Arisbeth Rojas-Chávez ◽  
Ricardo Quini-Villegas

This article presents the model of a mobile application that serves as a learning support tool for children and adults, with or without a hearing disability, who are learning Mexican Sign Language. Learning is supported through animated images, a game, and a translator that spells out words using the fingerspelling (dactylological) alphabet. The project is supported by the Regular Education Support Services Unit (USAER), which is part of the SEP and located in Morelia, Michoacán, as well as by the civil association "My hands speak to help", located in Zitácuaro, Michoacán. The application is designed ad hoc for the eastern region of Michoacán and is backed by sign language interpreters, which facilitates learning. The use of mobile technology gives more people access to this kind of tool, facilitating both learning and teaching. The intent is for human thinking to evolve so that people who have this condition can contribute their ideas and knowledge in the future.
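As an illustration of how the fingerspelling translator could work internally, the sketch below (Python) maps each letter of a word to a handshape image that the app would display in sequence. The folder layout, the file naming, and the accent handling are assumptions made for the example; the article's actual implementation is not described in the abstract.

from pathlib import Path
from typing import List
import unicodedata

# Hypothetical folder holding one image per letter of the manual alphabet,
# e.g. assets/lsm/a.png, assets/lsm/b.png, ... (the layout is an assumption).
ASSET_DIR = Path("assets/lsm")

def strip_accents(word: str) -> str:
    # Map accented letters such as "á" to the plain "a" handshape.
    # (How LSM fingerspelling actually handles accents is not covered here.)
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def spell_word(word: str) -> List[Path]:
    """Return the ordered handshape images needed to fingerspell `word`."""
    frames = []
    for ch in strip_accents(word).lower():
        if ch.isalpha():
            frames.append(ASSET_DIR / f"{ch}.png")
        # Digits, spaces, and punctuation are skipped in this sketch.
    return frames

if __name__ == "__main__":
    for frame in spell_word("Michoacán"):
        print(frame)

In a real app, the returned frames would be rendered as an animation, with the timing and transitions tuned for learners.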


Author(s):  
Pietro Battistoni

In the field of multimodal communication, sign language is, and continues to be, one of the most understudied areas. Thanks to recent advances in deep learning, neural networks can have far-reaching implications and applications for sign language mastery. This paper describes a method for ASL alphabet recognition using Convolutional Neural Networks (CNNs) that makes it possible to monitor a user's learning progress. American Sign Language (ASL) alphabet recognition by computer vision is a challenging task due to the complexity of ASL signs, high interclass similarities, large intraclass variations, and constant occlusions. We produced a robust model that classifies letters correctly in the majority of cases. The experimental results encouraged us to investigate the adoption of AI techniques to support learning a sign language as a natural language with its own syntax and lexicon. The challenge was to deliver a mobile sign language training solution that users can adopt in their everyday life. To supply the additional computational resources that locally connected end-user devices require, we propose the adoption of a Fog-Computing architecture.
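To make the recognition approach concrete, the following is a minimal sketch of a CNN classifier for ASL alphabet images, written with TensorFlow/Keras. The 64 x 64 grayscale input, the layer sizes, and the 26-class output are illustrative assumptions; the abstract does not report the paper's actual architecture or training setup. In the proposed Fog-Computing deployment, inference for a model like this would be offloaded from the end-user device to nearby fog nodes, though those details are likewise not given in the abstract.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # one class per ASL alphabet letter (assumption)
INPUT_SHAPE = (64, 64, 1)  # grayscale hand crops (assumption)

def build_model() -> tf.keras.Model:
    # Small stack of convolution + pooling blocks followed by a dense classifier.
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()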


Author(s):  
Genevieve G. Tremblay ◽  
Jeff Brice

ASKXXI, Arts and Science Knowledge Building and Sharing in the XXI Century, was an inter-hemispheric, post-secondary diploma program pilot aimed at fostering collaboration in art, emerging digital/virtual technologies, and the ecological sciences. New approaches to narrative creation were introduced through innovative technology workshops in visualization, 3D imaging, 3D printing, virtual and mixed reality, and data visualization. The authors share their dimensional approach, which delivered cross-cultural insights, technical training, professional development, mentorship, and network development opportunities. The expanding definitions of competency-based education (CBE) and personalized learning support, the new career opportunities in a rapidly changing landscape, and the relational, place-based, collaborative, and inquiry-driven learning developed through this pilot program are what the authors identify as a frontier ecosystem in education. They reflect on and share their findings and offer new perspectives on expanded models of CBE for academic and workplace credentials.


2018 ◽  
Vol 57 (3) ◽  
pp. 777-807 ◽  
Author(s):  
Cathy Weng ◽  
Abirami Rathinasabapathi ◽  
Apollo Weng ◽  
Cindy Zagita

This study aimed to explore whether the integration of virtual reality and augmented reality in a specially designed science book could improve students' science concept learning outcomes. A true experimental research design was used to test the effectiveness of the specially designed book in terms of learners' achievement. The sample for this study consisted of 80 fifth-grade students, divided into a control and an experimental group. The results revealed that using mixed reality (augmented reality and virtual reality) as a learning supplement to the printed book could improve students' learning outcomes, particularly for students with low spatial ability. Finally, recommendations for future practice and research are discussed.


Author(s):  
Qijia Shao ◽  
Amy Sniffen ◽  
Julien Blanchet ◽  
Megan E. Hillis ◽  
Xinyu Shi ◽  
...  


Author(s):  
Jacqueline A. Towson ◽  
Matthew S. Taylor ◽  
Diana L. Abarca ◽  
Claire Donehower Paul ◽  
Faith Ezekiel-Wilder

Purpose Communication among allied health professionals, teachers, and family members is a critical skill when addressing and providing for the individual needs of patients. Graduate students in speech-language pathology programs often have limited opportunities to practice these skills prior to or during externship placements. The purpose of this study was to investigate a mixed-reality simulator as a viable option for speech-language pathology graduate students to practice interprofessional communication (IPC) skills when delivering diagnostic information to different stakeholders, compared to traditional role-play scenarios. Method Eighty graduate students (N = 80) completing their third semester in one speech-language pathology program were randomly assigned to one of four conditions: mixed-reality simulation with or without coaching, or role play with or without coaching. Data were collected on students' self-efficacy, IPC skills pre- and postintervention, and perceptions of the intervention. Results The students in the two coaching groups scored significantly higher than the students in the noncoaching groups on observed IPC skills. There were no significant differences in students' self-efficacy. Students' responses on social validity measures showed that both interventions, including coaching, were acceptable and feasible. Conclusions Findings indicated that coaching paired with either mixed-reality simulation or role play is a viable method for improving the IPC skills of graduate students in speech-language pathology. These findings are particularly relevant given the recent approval for students to obtain clinical hours in simulated environments.
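For readers interested in the 2 x 2 structure of the design (modality crossed with coaching), the sketch below shows one way such a random assignment could be generated. The balanced 20-per-group allocation and the condition labels are assumptions for illustration, not details reported in the abstract.

import random

def assign_conditions(n_participants: int = 80, seed: int = 42) -> dict:
    # Four conditions implied by the 2 x 2 design: modality x coaching.
    conditions = [
        "mixed_reality_with_coaching",
        "mixed_reality_without_coaching",
        "role_play_with_coaching",
        "role_play_without_coaching",
    ]
    per_group = n_participants // len(conditions)
    pool = conditions * per_group          # assumed balanced: 20 slots per condition
    rng = random.Random(seed)              # fixed seed so the assignment is reproducible
    rng.shuffle(pool)
    # participant id -> assigned condition
    return {pid: cond for pid, cond in enumerate(pool, start=1)}

if __name__ == "__main__":
    assignment = assign_conditions()
    print(assignment[1], assignment[2])    # conditions for the first two participant IDs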

