Sound-based Image and Position Recognition System (SIPReS)

Author(s):  
Shin’ichiro Uno ◽  
Yasuo Suzuki ◽  
Takashi Watanabe ◽  
Miku Matsumoto ◽  
Yan Wang

We developed software called SIPReS, which describes two-dimensional images with sound. With this system, visually impaired people can tell the brightness of a certain point in an image just by hearing a note whose frequency is assigned according to the brightness of the point the user touches. It runs on Android smartphones and tablets. We conducted a small-scale experiment to see whether a visually impaired person can recognize images with SIPReS. In the experiment, the subject successfully recognized whether an object was present, and also recognized its location. The experiment suggests this application’s potential as image recognition software.
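The abstract describes mapping a touched pixel’s brightness to a note’s frequency. A minimal sketch of such a mapping follows; the linear law, the function name, and the 200–2000 Hz range are illustrative assumptions, not details from the paper.

```python
def brightness_to_frequency(brightness, f_min=200.0, f_max=2000.0):
    """Map a pixel brightness (0-255) to a tone frequency in Hz.

    The linear mapping and the 200-2000 Hz range are illustrative
    assumptions; SIPReS's actual scale is not given in the abstract.
    """
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must be in 0..255")
    return f_min + (brightness / 255.0) * (f_max - f_min)

# A touch on a dark pixel yields a low tone, a bright pixel a high tone.
low = brightness_to_frequency(0)     # 200.0 Hz
high = brightness_to_frequency(255)  # 2000.0 Hz
```

Sweeping a finger across the image then produces a pitch contour that traces the brightness profile along the touch path.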

2018 ◽  
Vol 7 (3.12) ◽  
pp. 116
Author(s):  
N Vignesh ◽  
Meghachandra Srinivas Reddy.P ◽  
Nirmal Raja.G ◽  
Elamaram E ◽  
B Sudhakar

Eyes play an important role in our day-to-day lives and are perhaps the most valuable gift we have. This world is visible to us because we are blessed with eyesight. But some people lack this ability to see, and because of this they face a lot of trouble moving comfortably in public places. Hence, a wearable device should be designed for such visually impaired people. A smart shoe is a wearable system designed to provide directional information to visually impaired people. The system has great potential to provide smart and sensible navigation guidance, especially when integrated with visual processing units. During operation, the user wears the shoes. When the sensors detect an obstacle, the user is informed through the Android device they carry. The smart shoes, together with the companion Android application, help the user move around independently.
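The shoe’s behavior, as described, reduces to a simple rule: when a sensor reading falls below a distance threshold, push a notification to the paired Android device. A minimal sketch of that decision logic follows; the threshold value and the message wording are illustrative assumptions.

```python
def obstacle_alert(distance_cm, threshold_cm=100.0):
    """Decide whether to notify the user's Android device.

    Returns an alert message when the sensed distance falls below the
    threshold, else None. The 100 cm threshold and the message text are
    illustrative assumptions, not values from the paper.
    """
    if distance_cm < threshold_cm:
        return f"Obstacle ahead at about {distance_cm:.0f} cm"
    return None

# A near reading triggers an alert; a far reading stays silent.
print(obstacle_alert(45))   # Obstacle ahead at about 45 cm
print(obstacle_alert(250))  # None
```

In a real deployment the returned message would be forwarded over Bluetooth to the phone and spoken aloud by a text-to-speech engine.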


2019 ◽  
Vol 16 (1) ◽  
pp. 13-32 ◽  
Author(s):  
Hana Porkertová

This article thematizes relations between visual impairment and urban space, drawing on the analytical perspective of actor-network theory (ANT). It traces the ways in which visually impaired people create specific connections with space and how they transform it. Urban space is configured for use by able-bodied persons, for whom movement within it is easy and seems to be disembodied. However, for those who defy the standardization of space, the materiality of movement is constantly present and visible, because the passages are difficult to make and are not ready in advance. These materialities, as well as the strategies that people use to make connections with urban space, differ according to the assemblages that visually impaired people create. A route is different with a cane, a human companion, a guide dog, or a combination of such assistance; the visually impaired person pays attention to different clues and follows specific lines, and different information becomes important and available. Each configuration makes it possible or impossible to do something; this shows disability as dynamic, and demonstrates the collective nature of action, which is more visible and palpable in the case of a disabled person.


Author(s):  
Tejal Adep ◽  
Rutuja Nikam ◽  
Sayali Wanewe ◽  
Dr. Ketaki B. Naik

Blind people face problems in daily life. They cannot even walk without an aid, and many times they must rely on others for help. Several technologies for the assistance of visually impaired people have been developed. Among these, computer-vision-based solutions are emerging as one of the most promising options due to their affordability and accessibility. This paper proposes a system for visually impaired people. The proposed system is a wearable visual aid that accepts speech commands from the user. Its functionality addresses the identification of objects and signboards. This helps the visually impaired person manage day-to-day activities and navigate his or her surroundings. A Raspberry Pi implements the artificial vision, using Python on the OpenCV platform.
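The described device routes a recognized speech command to one of two vision functions (object or signboard identification). A minimal sketch of such a dispatcher follows; the command phrases, handler names, and return strings are hypothetical, since the paper does not specify its command vocabulary.

```python
# Hypothetical command dispatcher for the wearable aid: a recognized
# speech command string selects the matching vision routine.
def dispatch(command, handlers):
    """Normalize a spoken command and invoke the matching handler."""
    handler = handlers.get(command.strip().lower())
    if handler is None:
        return "Command not recognized"
    return handler()

# Stand-ins for the OpenCV-based routines running on the Raspberry Pi.
handlers = {
    "identify object": lambda: "running object identification",
    "read signboard": lambda: "running signboard identification",
}

print(dispatch("Identify Object", handlers))  # running object identification
```

Keeping the mapping in a dictionary makes it easy to add further commands (e.g. navigation queries) without touching the dispatch logic.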


2021 ◽  
Vol 2089 (1) ◽  
pp. 012056
Author(s):  
K.A. Sunitha ◽  
Ganti Sri Giri Sai Suraj ◽  
G Atchyut Sriram ◽  
N Savitha Sai

Abstract The proposed robot aims to serve as a personal assistant for visually impaired people in avoiding obstacles, in identifying the person (known or unknown) with whom they are interacting, and in navigating. The robot has a special feature in tracking the subject’s location using GPS. Its novel feature is identifying the people with whom the subject interacts. Facial detection and identification in real time is challenging; it is achieved here with accurate image processing using the Viola–Jones and SURF algorithms. An obstacle-avoidance design with multiple sensors has been implemented in the system to guide the user along the correct path. The robot thus combines comfort and safety at minimal cost.
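In a Viola–Jones + SURF pipeline like the one cited, a detected face is typically labeled known or unknown from the number of good keypoint matches against a gallery of enrolled faces. A minimal sketch of that decision step follows; the threshold of 25 matches is an illustrative assumption, not a value from the paper.

```python
def classify_person(match_count, known_threshold=25):
    """Label a detected face from its SURF keypoint-match count.

    A face whose descriptors yield at least `known_threshold` good
    matches against an enrolled gallery image is labeled "known";
    otherwise "unknown". The threshold is an illustrative assumption.
    """
    if match_count < 0:
        raise ValueError("match_count must be non-negative")
    return "known" if match_count >= known_threshold else "unknown"

# Many good matches -> an enrolled acquaintance; few -> a stranger.
print(classify_person(42))  # known
print(classify_person(5))   # unknown
```

The detection stage itself (Viola–Jones) would run first, cropping face regions from the camera frame before SURF descriptors are extracted and matched.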


2020 ◽  
Vol 4 (4) ◽  
pp. 79
Author(s):  
Julian Kreimeier ◽  
Timo Götzelmann

Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people, such as blind or visually impaired people. Virtual objects and environments that can be spatially explored have a particular benefit, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspectives of technology, interaction and application. In this respect, we introduced a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback (‘small scale’) in particular were strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. Novel and widespread interfaces such as smartphones and today’s consumer-grade VR components represent promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches in both technical and methodological respects.


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 941
Author(s):  
Rakesh Chandra Joshi ◽  
Saumya Yadav ◽  
Malay Kishore Dutta ◽  
Carlos M. Travieso-Gonzalez

Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them in meeting these challenges. This paper proposes an artificial-intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to bring more robustness to the trained model. In addition to computer-vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while navigating from one place to another. The auditory information conveyed to the user after scene segmentation and obstacle identification is optimized to deliver more information in less time, for faster processing of video frames. The average accuracy of the proposed method is 95.19% for object detection and 99.69% for object recognition. The time complexity is low, allowing the user to perceive the surrounding scene in real time.
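The system fuses a vision result (object label plus confidence) with a distance-sensor reading into one spoken message. A minimal sketch of that fusion step follows; the confidence threshold and the sentence template are illustrative assumptions, not details from the paper.

```python
def audio_message(label, confidence, distance_m=None, min_conf=0.5):
    """Compose spoken feedback from a detection and an optional
    distance-sensor reading.

    Detections below `min_conf` are suppressed (returns None). The
    0.5 threshold and the phrasing are illustrative assumptions.
    """
    if confidence < min_conf:
        return None
    message = f"{label} ahead"
    if distance_m is not None:
        message += f", about {distance_m:.1f} meters away"
    return message

# A confident detection with a range reading yields a full sentence.
print(audio_message("chair", 0.92, distance_m=1.5))
# A low-confidence detection is suppressed rather than spoken.
print(audio_message("chair", 0.30))  # None
```

Suppressing low-confidence detections keeps the audio channel uncluttered, which matters when messages must be short enough to keep up with the video frame rate.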


2016 ◽  
Vol 2 (1) ◽  
pp. 727-730
Author(s):  
Nora Loepthien ◽  
Tanja Jehnichen ◽  
Josephine Hauser ◽  
Benjamin Schullcke ◽  
Knut Möller

Abstract The aim of the project is the development of an aid for blind or visually impaired people, considering economic aspects as well as easy adaptability to various daily situations. Distance sensors were attached to a walking frame (rollator) to measure the distance to obstacles. The information from the sensors is transmitted to the user via tactile feedback, realized with a number of vibration motors located at the upper belly area of the subject. To test the functionality of the aid, a number of volunteers traversed a test track with obstacles. Over five passes of the track, the time needed as well as the number of collisions were recorded. The results showed a decline in the average time needed to pass through the test track, indicating that the operator learns to interpret the signals given by the tactile feedback.
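A common way to drive such vibration motors is to map the sensed obstacle distance onto a motor duty cycle, so that closer obstacles vibrate harder. A minimal sketch follows; the 20–150 cm range and the linear law are illustrative assumptions, as the abstract does not specify the mapping.

```python
def vibration_duty(distance_cm, d_min=20.0, d_max=150.0):
    """Map obstacle distance to a vibration-motor duty cycle in [0, 1].

    Obstacles at or inside `d_min` give full vibration; obstacles at or
    beyond `d_max` give none; in between, intensity falls off linearly.
    The range and the linear law are illustrative assumptions.
    """
    if distance_cm <= d_min:
        return 1.0
    if distance_cm >= d_max:
        return 0.0
    return (d_max - distance_cm) / (d_max - d_min)

# Midway through the range, the motor runs at half intensity.
print(vibration_duty(85.0))  # 0.5
```

With one such mapping per sensor/motor pair, the belt of motors at the belly renders a coarse tactile image of the obstacle layout around the rollator.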


Today there are few educational institutions that carry out activities promoting entertainment, fun and the inclusion of visually impaired people in the educational environment. This article presents a playful form of teaching intended to foster greater social inclusion of the visually impaired in an institutional environment. In view of the practicality of intuitive games for supporting the learning of the visually impaired, the objective is to demonstrate an electronic game format not yet well known to the general public, audio games, and to show how games with distinct characteristics were adopted in order to check which would be most promising for a visually impaired person as a learning support. Using questionnaires to gather divergent opinions, descriptions of concepts, disagreements and difficulties, demonstrations of audio games that could be included in teaching dynamics, and various texts and articles on the subject, it is possible to conclude that there is little awareness of the existence of audio games and, along with that, inequality in the educational system, given the lack of social inclusion of the visually impaired in ludic teaching activities. Therefore, it is necessary to create and implement more activities aiming at equality, inclusion, participation and education for all during elementary school, a phase of great moral and educational development.


Author(s):  
Tadahiro Sakai ◽  
Takuya Handa ◽  
Masatsugu Sakajiri ◽  
Toshihiro Shimizu ◽  
...  

We propose a new method of presenting two-dimensional information, such as figures and graphs, on a tactile display so that visually impaired people are able to perceive it quickly and accurately. The new presentation method is developed for a tactile-proprioceptive display, which can present information not only through a conventional “concave–convex” tactile display, but also through vibration presented in an arbitrary area of the display and through mechanical guidance that leads the user’s fingers with a haptic device. In this paper, we outline these two presentation methods and the developed tactile-prop display, and objectively evaluate the effects of the local-area vibration presentation method, as an integral part of the tactile-prop display, in comparison with the conventional “concave–convex” presentation method. We conducted experiments evaluating the proposed local-area vibration presentation method using two typical content patterns: in Experiment 1, subjects searched for discretely dispersed objects, and in Experiment 2, they distinguished and perceived crossing line-segment graphs. The experiments showed that, compared with the “concave–convex” tactile presentation method, the proposed method is effective in reducing search and cognitive time and in improving the correct recognition of crossing graphs.

