Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People

Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 941
Author(s):  
Rakesh Chandra Joshi ◽  
Saumya Yadav ◽  
Malay Kishore Dutta ◽  
Carlos M. Travieso-Gonzalez

Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them in meeting these challenges. This paper proposes an artificial intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to bring more robustness to the trained model. In addition to computer vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while navigating from one place to another. The auditory information conveyed to the user after scene segmentation and obstacle identification is optimized to deliver more information in less time, allowing faster processing of video frames. The average accuracy of this proposed method is 95.19% and 99.69% for object detection and recognition, respectively. The time complexity is low, allowing a user to perceive the surrounding scene in real time.
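A minimal sketch of the loop this abstract describes: per-frame object recognition, an integrated distance sensor for obstacles, and a short spoken summary. The model, the `detect_objects()` stub, the `read_distance_cm()` stub, and the obstacle threshold are assumptions for illustration, not the authors' published code.

```python
import cv2
import pyttsx3  # offline text-to-speech engine; any TTS back end could be substituted

def detect_objects(frame):
    # Placeholder for the trained deep-learning detector described in the paper.
    return ["chair", "table"]

def read_distance_cm():
    # Placeholder for the integrated distance-measuring sensor (e.g., an ultrasonic module).
    return 150.0

tts = pyttsx3.init()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    labels = detect_objects(frame)
    message = ", ".join(sorted(set(labels)))      # deduplicate so the audio stays short
    if read_distance_cm() < 100:                  # assumed obstacle-alert threshold (cm)
        message = "obstacle ahead, " + message
    if message:
        tts.say(message)
        tts.runAndWait()
```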

Author(s):  
Raghad Raied Mahmood et al.

It is relatively simple for a sighted person to interpret and recognize every banknote, but money recognition, especially of paper currency, is one of the major problems for visually impaired people. Since money plays such an important role in everyday life and is required for every business transaction, real-time detection and recognition of banknotes is a necessity for blind or visually impaired people. For that purpose, we propose a real-time object detection system to help visually impaired people in their daily business transactions. Images of the Iraqi banknote categories are first collected under different conditions, and these images are then augmented with different geometric transformations to make the system more robust. The augmented images are annotated manually using the "LabelImg" program, from which the training and validation image sets are prepared. The YOLOv3 real-time object detection algorithm is trained on this custom Iraqi banknote dataset to detect and recognize banknotes. The label of the detected banknote is then converted into audio using Google Text-to-Speech (gTTS), which forms the expected output. The performance of the trained model is evaluated on a test dataset and on real-time live video. The test results demonstrate that the proposed method can detect and recognize Iraqi paper money with a high mAP of 97.405% in a short time.
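A minimal sketch of the detection-to-audio flow named in the abstract: YOLOv3 weights trained on the custom banknote set loaded through OpenCV's DNN module, the best class label extracted, and the result spoken via gTTS. The file names, class list, and confidence threshold are assumptions.

```python
import cv2
import numpy as np
from gtts import gTTS

net = cv2.dnn.readNetFromDarknet("banknotes.cfg", "banknotes.weights")  # placeholder paths
classes = open("banknotes.names").read().splitlines()                   # e.g. "250 IQD", "1000 IQD"

img = cv2.imread("note.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

best_label, best_conf = None, 0.0
for output in outputs:
    for det in output:                      # each row: [cx, cy, w, h, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > max(best_conf, 0.5):      # assumed 0.5 confidence threshold
            best_label, best_conf = classes[class_id], conf

if best_label:
    gTTS(text=f"Detected {best_label}", lang="en").save("result.mp3")  # spoken output
```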


Visual disability is a global issue, and visually impaired people confront several challenges every day. Blindness often affects a person's ability to self-navigate in known or unknown environments. The difficulties they face, and how they deal with them, are largely known and explored. The system we developed is an idea for overcoming the challenge of detecting objects in a known or room environment with the help of Artificial Intelligence. The idea is to aid visually impaired people with voice assistance for detecting objects in their surroundings using 360° view cameras. The proposed system uses the 360° view camera of a mobile phone to help the user detect desired objects in the room environment and to provide localization. Using this system, users can search for desired objects by giving voice commands and can be guided to the object's location. When the user wants to search for an object, he or she simply gives a voice command, which the system interprets using natural language processing; the system identifies the command and extracts the name of the object to be searched. With the help of image processing, it first identifies and locates the object in the surroundings and then navigates the user to that object via a voice assistant.
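A minimal sketch of the voice-command step described above, under assumed tooling (the SpeechRecognition library and a hard-coded class list): the spoken request is transcribed and the object name is extracted by matching against the detector's known classes. This is illustrative, not the authors' implementation.

```python
import speech_recognition as sr

KNOWN_OBJECTS = {"chair", "table", "bottle", "door", "bag"}   # assumed detector class list

def get_requested_object():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)                     # record the voice command
    try:
        command = recognizer.recognize_google(audio).lower()  # e.g. "find my bottle"
    except sr.UnknownValueError:
        return None
    for word in command.split():
        if word in KNOWN_OBJECTS:
            return word                                       # first recognized object name
    return None

target = get_requested_object()
if target:
    print(f"Searching the 360-degree view for: {target}")
```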


2021 ◽  
Vol 1085 (1) ◽  
pp. 012006
Author(s):  
Therese Yamuna Mahesh ◽  
S S Parvathy ◽  
Shibin Thomas ◽  
Shilpa Rachel Thomas ◽  
Thomas Sebastian

2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the most essential human senses and plays a major role in how humans perceive the surrounding environment, but for people with visual impairment the definition of vision is different. Visually impaired people are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guiding system that addresses the navigation problem of visually impaired people so that they can travel without difficulty. The system helps visually impaired people by detecting objects and giving the necessary information about each object, which may include what the object is, its location, its precision, its distance from the user, etc. All this information is conveyed to the person through audio commands so that they can navigate freely anywhere, anytime, with no or minimal assistance. Object detection is done using the You Only Look Once (YOLO) algorithm. Because capturing the video/images and sending them to the main module has to be carried out at high speed, a Graphics Processing Unit (GPU) is used; this enhances the overall speed of the system and helps the visually impaired user receive the necessary instructions as quickly as possible. The process starts with capturing the real-time video, sending it for analysis and processing, and obtaining the calculated results, which are conveyed to the user by means of a hearing aid. As a result, with this system, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
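A minimal sketch of how one YOLO detection could be turned into the kind of audio instruction the abstract mentions (what the object is, where it is, roughly how far away). The direction comes from the box centre, and the distance uses a pinhole-camera approximation; the assumed real object height and focal length are placeholders, not values from the paper.

```python
def guidance_from_detection(label, box, frame_width, real_height_m=0.9, focal_px=700):
    # box = (x, y, w, h) in pixels; real_height_m and focal_px are assumed constants.
    x, y, w, h = box
    centre_x = x + w / 2
    if centre_x < frame_width / 3:
        direction = "to your left"
    elif centre_x > 2 * frame_width / 3:
        direction = "to your right"
    else:
        direction = "ahead"
    distance_m = (real_height_m * focal_px) / max(h, 1)   # pinhole-camera approximation
    return f"{label} {direction}, about {distance_m:.1f} metres away"

# Example: a detected chair whose bounding box sits on the right side of a 1280-pixel-wide frame.
print(guidance_from_detection("chair", (900, 300, 200, 350), 1280))
```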


Author(s):  
Fereshteh S. Bashiri ◽  
Eric LaRose ◽  
Jonathan C. Badger ◽  
Roshan M. D’Souza ◽  
Zeyun Yu ◽  
...  

2018 ◽  
Vol 7 (3.12) ◽  
pp. 116
Author(s):  
N Vignesh ◽  
Meghachandra Srinivas Reddy.P ◽  
Nirmal Raja.G ◽  
Elamaram E ◽  
B Sudhakar

Eyes play an important role in our day-to-day lives and are perhaps the most valuable gift we have; this world is visible to us because we are blessed with eyesight. However, some people lack this ability to visualize things and therefore face a lot of trouble moving comfortably in public places. Hence, a wearable device should be designed for such visually impaired people. A smart shoe is a wearable system designed to provide directional information to visually impaired people. The system has great potential for providing smart and sensible navigation guidance, especially when integrated with visual processing units. During operation, the user wears the shoes; when the sensors detect an obstacle, the user is informed through the Android device they are using. The Smart Shoes, along with the application on the Android system, help the user move around independently.
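A minimal sketch of the shoe-side loop described above, under purely illustrative assumptions (the sensor read-out, the alert threshold, and the link to the companion Android app are all placeholders): poll the distance sensor and push an alert when an obstacle comes closer than the threshold.

```python
import time

OBSTACLE_THRESHOLD_CM = 80   # assumed alert distance

def read_ultrasonic_cm():
    # Placeholder for the shoe's ultrasonic sensor driver.
    return 200.0

def send_alert_to_android(message):
    # Placeholder for the Bluetooth/serial link to the companion Android app.
    print("ALERT ->", message)

while True:
    distance = read_ultrasonic_cm()
    if distance < OBSTACLE_THRESHOLD_CM:
        send_alert_to_android(f"Obstacle {distance:.0f} centimetres ahead")
    time.sleep(0.2)   # assumed polling interval
```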

