An Indigenous Smart Cane System For The Visually Impaired With Sound Clap Location Ability

2020 ◽  
Vol 27 (1) ◽  
Author(s):  
JS Igwe ◽  
C Chukwuemeka ◽  
C Ituma ◽  
NH Ogbu

Guiding visually impaired persons is always demanding. Smart canes previously designed for the blind are relatively costly. Moreover, most available smart cane systems rely only on a remote control and a buzzer for locating the cane if it is misplaced; no prior work has addressed how to locate the cane should the remote control itself be misplaced. This work designed an Indigenous Smart Cane System for the Visually Impaired with Sound Clap Location Ability. It is relatively inexpensive, easy to learn, and can be located by natural means if misplaced. The system uses one ATMEGA328P microcontroller programmed to control both input and output signals. It was designed with three low-cost 0.3 cm resolution HC-SR04 ultrasonic sensors that detect obstacles within a range of 2 cm – 400 cm in the front, left and right directions of the user and send the current obstacle-distance signal to the controller for processing. On receiving this signal, the controller determines which of the output devices (piezo speaker, earphone or vibrator motor) to use to communicate the obstacle distance to the user. If the cane is misplaced, the user makes a sound clap, which triggers the sound sensor to signal the controller for audio feedback; this feedback is given through the piezo speaker. The system was programmed with a very simple object-detection algorithm so that the user can learn to use the cane easily. The system has been tested and found to be relatively inexpensive, easy to learn, and locatable with a clap sound. Keywords: Sound-Clap Location Ability; Visually Impaired; Indigenous Smart Cane; Sensors; Audio Feedback; Object Distance; Microcontroller.
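The controller's role described above (map each sensor's distance reading to one of three output devices) can be sketched as follows. This is a minimal illustration, not the authors' firmware: the distance thresholds and device choices below are hypothetical, since the abstract does not specify them.

```python
# Sketch of the cane's obstacle-feedback decision logic (hypothetical
# thresholds; the paper does not state them). Each HC-SR04 reading is
# a distance in cm within the sensor's 2-400 cm range.

def feedback_for(distance_cm, direction):
    """Map an obstacle distance to an output cue.

    Returns a (device, message) pair: near obstacles trigger the
    vibrator motor, mid-range ones the piezo speaker, and far ones a
    spoken distance over the earphone.
    """
    if not 2 <= distance_cm <= 400:
        return None  # outside the HC-SR04's reliable range: no cue
    if distance_cm < 50:
        return ("vibrator", f"obstacle {direction}")
    if distance_cm < 150:
        return ("piezo", f"obstacle {direction} at {distance_cm} cm")
    return ("earphone", f"object {direction}, {distance_cm} cm away")

# One polling cycle over the three sensors:
readings = {"front": 40, "left": 120, "right": 390}
cues = {d: feedback_for(cm, d) for d, cm in readings.items()}
```

On the real ATMEGA328P this logic would run inside the main loop after each round of ultrasonic pings, with the clap-triggered piezo response handled as a separate input path.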

2017 ◽  
Author(s):  
Rohit Takhar ◽  
Tushar Sharma ◽  
Udit Arora ◽  
Sohit Verma

In recent years, with improvements in imaging technology, the quality of small cameras has improved significantly. Coupled with the introduction of credit-card-sized single-board computers such as the Raspberry Pi, it is now possible to integrate a small camera with a wearable computer. This paper aims to develop a low-cost product, using a webcam and a Raspberry Pi, that can assist visually impaired people in detecting and recognising pedestrian crosswalks and staircases. Detecting and recognising these obstacles, i.e., pedestrian crosswalks and staircases, involves two steps. In the detection step, we extract Haar features from the video frames and feed them to our Haar classifier. In the recognition step, we first convert the RGB image to HSV and apply histogram equalization to make the pixel intensities uniform. This is followed by image segmentation and contour detection. The detected contours are passed through a pre-processor that extracts the regions of interest (ROIs). We applied different statistical methods to these ROIs to differentiate between staircases and pedestrian crosswalks. The detection and recognition results on our datasets demonstrate the effectiveness of our system.
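The histogram-equalization step in the recognition pipeline can be sketched with plain NumPy. This is an illustrative implementation of the standard CDF-based equalization, not the authors' code (which presumably uses OpenCV on the Raspberry Pi).

```python
import numpy as np

# Sketch of the histogram-equalization step described above. An 8-bit
# intensity channel (e.g. the V plane of an HSV image) is remapped so
# its histogram spreads over the full 0-255 range, making contours
# stand out more uniformly before segmentation.

def equalize(channel):
    """Histogram-equalize a uint8 intensity channel."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    # Classic equalization mapping: rescale the CDF to 0..255.
    lut = np.round((cdf - cdf_min) / (channel.size - cdf_min) * 255)
    return lut.astype(np.uint8)[channel]

# A low-contrast 4x4 patch concentrated in a narrow band of values:
patch = np.array([[100, 101, 102, 103]] * 4, dtype=np.uint8)
flat = equalize(patch)   # values now span the full 0-255 range
```

After equalization the four original grey levels map to 0, 85, 170 and 255, which is exactly the contrast stretch the recognition step relies on before segmentation.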


2019 ◽  
Vol 11 (3) ◽  
pp. 59-71
Author(s):  
Elias Fank ◽  
Fernando Bevilacqua ◽  
Denio Duarte ◽  
Alesson Scapinello

Visually impaired (VI) people face a set of challenges when trying to orient and contextualize themselves. Computer vision and mobile devices can be valuable tools to help them improve their quality of life. This work presents a tool based on computer vision and image recognition to assist VI people in better contextualizing themselves indoors. The tool works as follows: the user takes a picture ρ using a mobile application; ρ is sent to the server; ρ is compared to a database of previously taken pictures; the server returns the metadata of the database image most similar to ρ; finally, the mobile application gives audio feedback based on the received metadata. The similarity test between the database images and ρ is based on a nearest-neighbour search over key points extracted from the images by SIFT descriptors. Three experiments are presented to support the feasibility of the tool. We believe our solution is a low-cost, convenient approach that can leverage existing IT infrastructure, e.g. wireless networks, and does not require any physical adaptation of the environment where it is used.
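The similarity test described above can be sketched as a nearest-neighbour vote over descriptor sets. This is an illustrative sketch, not the authors' implementation: the descriptors below are tiny synthetic vectors standing in for real SIFT output, and the voting scheme is one common way to aggregate key-point matches into a per-image score.

```python
import numpy as np

# Sketch of the similarity test: each image is reduced to a set of
# SIFT-like descriptor vectors, and the query picture ρ is matched to
# the database image that receives the most nearest-neighbour votes.

def vote_for_best_match(query_desc, db):
    """Return the id of the database image most similar to the query.

    query_desc: (n, d) array of descriptors for the query picture.
    db: dict mapping image id -> (m, d) descriptor array.
    Each query descriptor votes for the image holding its single
    nearest neighbour (Euclidean distance) across the whole database.
    """
    votes = {img_id: 0 for img_id in db}
    for q in query_desc:
        best_img, best_dist = None, np.inf
        for img_id, descs in db.items():
            d = np.linalg.norm(descs - q, axis=1).min()
            if d < best_dist:
                best_img, best_dist = img_id, d
        votes[best_img] += 1
    return max(votes, key=votes.get)

# Two "rooms" with well-separated descriptor clusters, and a query
# whose descriptors sit near room B's cluster:
db = {"roomA": np.zeros((3, 4)), "roomB": np.full((3, 4), 10.0)}
query = np.full((5, 4), 9.5)
best = vote_for_best_match(query, db)   # "roomB"
```

In the actual tool the returned image id would index the stored metadata, which the mobile application then reads out as audio feedback.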



2020 ◽  
Vol 24 (03) ◽  
pp. 515-520
Author(s):  
Vattumilli Komal Venugopal ◽  
Alampally Naveen ◽  
Rajkumar R ◽  
Govinda K ◽  
Jolly Masih

2017 ◽  
Vol 7 (1) ◽  
pp. 42 ◽  
Author(s):  
Christopher Nkiko ◽  
Morayo I. Atinmo ◽  
Happiness Chijioke Michael-Onuoha ◽  
Julie E. Ilogho ◽  
Michael O. Fagbohun ◽  
...  

Studies have shown that reading materials for the visually impaired are inadequate in Nigeria. Information technology has greatly advanced the provision of information to the visually impaired in other industrialized climes. This study investigated the extent to which information technology is applied to the transcription of reading materials for the visually impaired in Nigeria. The study adopted an ex-post facto survey research design and selected 470 personnel as respondents. A questionnaire titled Information Technology Use Scale (α = 0.74) and an interview schedule (α = 0.75) were used. Data were analyzed using descriptive statistics and the Pearson product-moment correlation. The findings indicate that the application of information technology to transcription was low, and that there was a significant positive relationship between the application of information technology and the transcription of information materials (r = 0.62; p < 0.05). The study recommended, among other things, that multinational corporations be sensitized to extend their Corporate Social Responsibility (CSR) activities to help procure modern information technology devices and software to enhance transcription.
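The Pearson product-moment correlation used in the analysis can be sketched as follows. The scores below are made-up values for illustration, not the study's data; the function itself is the standard textbook formula.

```python
import math

# Sketch of the Pearson product-moment correlation reported in the
# study (r = 0.62). Hypothetical paired scores: x = IT-use scale
# scores, y = transcription-output scores.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]          # hypothetical IT-use scores
y = [2, 1, 4, 3, 5]          # hypothetical transcription scores
r = pearson_r(x, y)          # positive, since y broadly rises with x
```

A value like the study's r = 0.62 indicates a moderately strong positive association: respondents reporting higher IT use also tended to report higher transcription output.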


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 848
Author(s):  
Karla Miriam Reyes Leiva ◽  
Milagros Jaén-Vargas ◽  
Miguel Ángel Cuba ◽  
Sergio Sánchez Lara ◽  
José Javier Serrano Olmedo

The rehabilitation of a visually impaired person (VIP) is a systematic process in which the person is provided with tools that allow them to deal with the impairment and achieve personal autonomy and independence, such as training in the use of the long cane as a tool for orientation and mobility (O&M). This training must be conducted in person by specialists, which strains human, technological and structural resources in some regions, especially those in economically constrained circumstances. A system was developed to obtain information about the motion of the long cane and the leg using low-cost inertial sensors, providing an overview of quantitative parameters, such as sweeping coverage and gait, that are currently assessed visually during rehabilitation. The system was tested with 10 blindfolded volunteers in laboratory conditions following the constant-contact, two-point-touch, and three-point-touch travel techniques. The results indicate that the quantification system is reliable for measuring grip rotation, safety zone, sweeping amplitude and hand position using orientation angles, with an accuracy of around 97.62%. However, a new method or improved hardware must be developed to improve the gait-parameter measurements, since the step-length measurement had a mean accuracy of 94.62%. The system requires further development before it can be used as an aid in the rehabilitation of VIPs. At present, it is a simple, low-cost technological aid with the potential to improve current O&M practice.
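One of the quantitative parameters named above, sweeping amplitude, can be sketched from an orientation-angle time series. This is an illustrative sketch under stated assumptions: the synthetic yaw signal below stands in for the IMU's orientation output, and the peak-to-peak definition is one plausible reading of "sweeping amplitude", not the paper's published method.

```python
import math

# Sketch: estimate the cane's sweeping amplitude from its yaw
# (orientation) angle over a sweep cycle. The synthetic sine signal
# stands in for fused inertial-sensor output.

def sweep_amplitude(yaw_deg):
    """Peak-to-peak sweeping amplitude, in degrees, of a yaw series."""
    return max(yaw_deg) - min(yaw_deg)

# Synthetic sweep: the cane oscillates about ±30° around straight
# ahead at roughly 1 Hz, sampled at 50 Hz for 2 s.
t = [i / 50 for i in range(100)]
yaw = [30 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]
amp = sweep_amplitude(yaw)        # close to 60° peak to peak
```

In the paper's setting such per-sweep amplitudes, together with grip rotation and hand position, are what the specialist currently judges by eye; quantifying them is the system's contribution.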

