Image Processing with CNN in a FPGA-Based Augmented Reality System for Visually Impaired People

Author(s):  
F. Javier Toledo ◽  
J. Javier Martínez ◽  
F. Javier Garrigós ◽  
J. Manuel Ferrández
Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thereby reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for the indoor and outdoor localization and navigation of visually impaired people. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ gives users enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate in indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
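The guidance step of such a system can be illustrated with a minimal sketch: given a previously recorded virtual path (here simply a 2D waypoint polyline in metres) and the smartphone's estimated position, compute the lateral deviation from the nearest path segment and emit a correction cue. This is an illustrative reconstruction, not the published ARIANNA+ algorithm; the function name, the waypoint representation, and the tolerance value are assumptions.

```python
import math

def guidance(position, waypoints, tolerance=0.3):
    """Return a feedback cue for a user at `position` following a recorded
    polyline of `waypoints` (coordinates in metres).

    'on_path'        -> within `tolerance` metres of the path
    'left' / 'right' -> the side of the path the user has drifted to,
                        relative to the direction of travel (the feedback
                        layer would then cue a correction the other way).
    Hypothetical helper: the real ARIANNA+ logic is not published here.
    """
    best = None
    for (ax, ay), (bx, by) in zip(waypoints, waypoints[1:]):
        abx, aby = bx - ax, by - ay
        apx, apy = position[0] - ax, position[1] - ay
        seg_len2 = abx * abx + aby * aby
        # Parameter of the closest point on this segment, clamped to [0, 1]
        t = max(0.0, min(1.0, (apx * abx + apy * aby) / seg_len2))
        cx, cy = ax + t * abx, ay + t * aby
        dist = math.hypot(position[0] - cx, position[1] - cy)
        cross = abx * apy - aby * apx  # > 0: user is left of travel direction
        if best is None or dist < best[0]:
            best = (dist, cross)
    dist, cross = best
    if dist <= tolerance:
        return "on_path"
    return "left" if cross > 0 else "right"
```

In a real system, `position` would come from ARKit's world tracking and the cue would drive haptic, speech, or sound feedback rather than a return value.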


2011 ◽  
Vol 103 ◽  
pp. 687-694
Author(s):  
Akira Yamawaki ◽  
Seiichi Serikawa

We propose a wearable system with a CMOS image sensor that supports visually impaired people in operating capacitive touchscreens. The system attaches a lens-less CMOS image sensor to the tip of the middle finger. The icons and buttons displayed on the touchscreen are replaced by color barcodes. When the user touches the touchscreen surface with the CMOS image sensor, the color barcode is detected and decoded, and the decoded result is returned to the user through an interaction such as audio. The user then touches the button area around the color barcode with the forefinger to operate the target device. This system provides a very easy and natural way of operating a touchscreen for visually impaired people, who usually recognize materials by touch. No mechanical modification of the target device is needed; only its software program has to be changed. Since the color barcode is sensed by an image sensor without any lens touching the touchscreen surface, each bar in the color barcode is blurred. We therefore develop a simple image-processing method to handle this problem, and design it as a hardware module to achieve a high-performance, low-power wearable device. A prototype implemented on an FPGA demonstrates the hardware size, the performance, and the system in actual operation.
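One simple way to decode a blurred color barcode, in the spirit described above, is to average the pixels within each bar (averaging suppresses the lens-less blur, which mostly smears color across bar boundaries) and then match the mean color to the nearest entry of a known palette. The sketch below is illustrative only; it does not reproduce the paper's FPGA pipeline, and the function name and palette format are assumptions.

```python
def decode_color_barcode(scanline, n_bars, palette):
    """Decode a blurred color barcode from a 1-D scanline of (R, G, B)
    pixels divided into `n_bars` equal-width bars.

    `palette` maps symbol names to reference (R, G, B) tuples.
    Averaging each bar's pixels cancels most of the lens-less blur;
    the mean color is then matched to the nearest palette entry.
    """
    width = len(scanline) // n_bars
    symbols = []
    for i in range(n_bars):
        bar = scanline[i * width:(i + 1) * width]
        # Per-channel mean over the bar
        mean = tuple(sum(c) / len(bar) for c in zip(*bar))
        # Nearest-neighbour match in RGB space
        symbol = min(palette, key=lambda name: sum(
            (m - p) ** 2 for m, p in zip(mean, palette[name])))
        symbols.append(symbol)
    return symbols
```

The paper implements an equivalent operation as a hardware module; a software version like this is useful mainly for prototyping the decoding rule before committing it to an FPGA.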


This paper describes an obstacle detection system for visually impaired people using image processing in MATLAB. The system, together with an ultrasonic sensor interfaced with an Arduino, detects stairs and doors (with or without signage) and the distance of these objects from the user. This information is conveyed to the user through a speaker. The results show satisfactory accuracy in detecting stairs and in extracting different signage on doors, such as that of washrooms, exits, and elevators.
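The ultrasonic ranging part of such a system reduces to one formula: the sensor reports the round-trip echo time t, and the one-way distance is d = c·t/2, with c ≈ 343 m/s for sound in air at room temperature. A minimal sketch, with a hypothetical alert threshold (the paper does not state its exact threshold or alert rule):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def echo_to_distance(echo_time_s):
    """Convert a round-trip echo time (seconds) from an ultrasonic
    ranging sensor into a one-way distance in metres: d = c * t / 2."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def obstacle_alert(echo_time_s, threshold_m=1.0):
    """Hypothetical alert rule: return a spoken-warning string when an
    obstacle is closer than `threshold_m`, else None."""
    d = echo_to_distance(echo_time_s)
    return f"obstacle at {d:.2f} m" if d < threshold_m else None
```

In the described system, the echo time would come from the Arduino-connected ultrasonic sensor, and the alert string would be sent to the speaker as synthesized speech.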

