Interactive Sound Generation to Aid Visually Impaired People via Object Detection Using Touch Screen Sensor

2021 ◽  
Vol 33 (11) ◽  
pp. 4057
Author(s):  
Tias Kurniati ◽  
Chuan-Kai Yang ◽  
Tzer-Shyong Chen ◽  
Yu-Fang Chung ◽  
Yu-Min Huang ◽  
...  

2014 ◽  
Vol 5 (2) ◽  
pp. 54-71 ◽  
Author(s):  
Nabil Hewahi ◽  
Ghadeer Abu-Shaban ◽  
Esraa El-Ashqer ◽  
Ayat Abu-Noqaira ◽  
Nour El-Wadiya

As smartphones appeared with their elegant, easy, and exciting touch functionality, the use of touch-screen devices has spread very fast. Alongside these advantages, smartphones pose new challenges for people with disabilities. Most visually impaired people prefer not to use touch-screen devices, as these lack tactile feedback and are visually demanding. Some solutions have been proposed to overcome these problems, but they are not sufficient. One such solution connects special equipment to a smartphone to let the visually impaired user enter the required input. Other applications help visually impaired people use smartphones and read whatever is on the screen by hovering their fingertips over the text. Visually impaired people who use smartphones have to memorize the QWERTY keyboard, which has a large number of targets with only a small area allotted to each, leading to a high proportion of input errors. In this paper, the authors propose ABTKA, an Arabic Braille Touch Keyboard for Android users. It is the first application for the Arabic language that uses Braille for visually impaired people who use smartphones or intend to do so. ABTKA facilitates text entry by supporting Braille writing on touch screens, and the approach can be easily adapted to other languages. Its main advantages are that it requires no extra equipment connected to the smartphone; it is dynamic (no fixed positions for the touch points), simple to use, takes one entry per character, is supported by voice, and responds promptly to input. ABTKA involves several algorithms to achieve its objectives. It starts by capturing the user's standard fingertip locations; the user can then enter any Braille character, which is reindexed to match the order of the Perkins Brailler's buttons. Each entered character is converted to an Arabic character, and every converted character, word, and full sentence receives voice feedback. ABTKA has been tested by various visually impaired people and proved easy to learn and simple to use.
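The dynamic-position idea described above can be sketched as follows: each touch is assigned to the nearest calibrated fingertip location, and the resulting dot pattern is looked up in a code table. This is a minimal illustration, not ABTKA's actual implementation; the coordinates are hypothetical and the lookup table is a two-entry placeholder, not the real Arabic Braille code chart.

```python
import math

# Calibrated "home" positions for the six Braille dots, captured when the
# user first rests their fingertips on the screen (hypothetical coordinates).
CALIBRATED = {
    1: (50, 100), 2: (50, 200), 3: (50, 300),    # left column of the cell
    4: (350, 100), 5: (350, 200), 6: (350, 300)  # right column of the cell
}

def nearest_dot(touch, calibrated=CALIBRATED):
    """Map one touch point to the calibrated dot it is closest to."""
    return min(calibrated, key=lambda d: math.dist(touch, calibrated[d]))

def touches_to_cell(touches):
    """Convert a set of simultaneous touches into a Braille dot pattern."""
    return frozenset(nearest_dot(t) for t in touches)

# Tiny illustrative lookup table (NOT the real Arabic Braille code chart):
BRAILLE_TO_CHAR = {
    frozenset({1}): "ا",     # example entry only
    frozenset({1, 2}): "ب",  # example entry only
}

def decode(touches):
    """Decode one Braille cell entry into a character, '?' if unknown."""
    return BRAILLE_TO_CHAR.get(touches_to_cell(touches), "?")
```

Because every touch is classified relative to the user's own calibrated positions, no fixed on-screen key locations are needed, which matches the "dynamic" property the abstract claims.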


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 941
Author(s):  
Rakesh Chandra Joshi ◽  
Saumya Yadav ◽  
Malay Kishore Dutta ◽  
Carlos M. Travieso-Gonzalez

Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them in meeting these challenges. This paper proposes an artificial intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to make the trained model more robust. In addition to computer vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while navigating from one place to another. The auditory information conveyed to the user after scene segmentation and obstacle identification is optimized to deliver more information in less time, allowing faster processing of video frames. The average accuracy of the proposed method is 95.19% for object detection and 99.69% for object recognition. The time complexity is low, allowing the user to perceive the surrounding scene in real time.
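The fusion of recognition output and the distance sensor into a short auditory message could be sketched as below. The function and its parameters are illustrative assumptions, not the paper's API; in a real device the returned string would be passed to a text-to-speech engine.

```python
def compose_alert(detections, distance_cm, threshold_cm=100):
    """Build a short spoken message from detector output and a distance
    sensor reading (illustrative sketch, not the paper's implementation).

    detections:  list of (label, confidence) pairs from the recognition model
    distance_cm: reading from the distance sensor, or None if unavailable
    """
    parts = []
    if detections:
        # Keep only the most confident label so the message stays short,
        # matching the paper's goal of more information in less time.
        label, _conf = max(detections, key=lambda d: d[1])
        parts.append(f"{label} ahead")
    if distance_cm is not None and distance_cm < threshold_cm:
        parts.append(f"obstacle at {distance_cm} centimeters")
    return ", ".join(parts) if parts else "path clear"
```

Keeping the message to one label plus one distance keeps per-frame audio short, which is one plausible way to realize the optimization the abstract describes.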


Object detection is used in almost every real-world application, such as autonomous traversal, visual systems, face detection, and more. This paper aims at applying object detection techniques to assist visually impaired people. It helps them learn about the objects around them so they can walk freely. A prototype has been implemented on a Raspberry Pi 3 using OpenCV libraries, and satisfactory performance is achieved. The paper presents a detailed review of object detection using region-based convolutional neural network (R-CNN) learning systems for a real-world application. It explores various object detection methods and walks through detection with a deep neural network, including an SSD model implemented with Caffe.
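A Caffe-trained SSD typically emits detection rows of the form [image_id, class_id, confidence, x1, y1, x2, y2] with box coordinates normalized to [0, 1]. The post-processing step that turns these rows into labelled pixel boxes can be sketched as below; this is a generic sketch of that convention, not the paper's code, and the class-name table is a two-entry subset of the PASCAL VOC labels.

```python
# Subset of the PASCAL VOC class ids commonly used with Caffe-SSD models:
CLASS_NAMES = {7: "car", 15: "person"}

def filter_detections(rows, img_w, img_h, conf_threshold=0.5):
    """Keep confident SSD detections and scale their boxes to pixels.

    rows: iterable of [image_id, class_id, confidence, x1, y1, x2, y2]
          with coordinates normalized to [0, 1].
    """
    results = []
    for _image_id, class_id, conf, x1, y1, x2, y2 in rows:
        if conf < conf_threshold:
            continue  # discard low-confidence detections
        label = CLASS_NAMES.get(int(class_id), "unknown")
        box = (int(x1 * img_w), int(y1 * img_h),
               int(x2 * img_w), int(y2 * img_h))
        results.append((label, conf, box))
    return results
```

On a Raspberry Pi, raising `conf_threshold` is a cheap way to trade recall for fewer false announcements to the user.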

