Uncertainty-Aware Visual Perception System for Outdoor Navigation of the Visually Challenged

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2385 ◽  
Author(s):  
George Dimas ◽  
Dimitris E. Diamantis ◽  
Panagiotis Kalozoumis ◽  
Dimitris K. Iakovidis

Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which for other individuals is taken for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can be evolved into an everyday visual aid for VCP. The proposed methodology is integrated in a wearable visual perception system (VPS). The proposed approach efficiently incorporates deep learning object recognition models, along with an obstacle detection methodology based on human eye fixation prediction using Generative Adversarial Networks. An uncertainty-aware modeling of the obstacle risk assessment and spatial localization has been employed, following a fuzzy logic approach, for robust obstacle detection. The above combination can translate the position and the type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where the obstacles are located in the environment and avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest through obstacle recognition and detection. Additionally, a comparison between the proposed system and relevant state-of-the-art systems for the safe navigation of VCP, focused on design and user-requirements satisfaction, is performed.
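The abstract's translation of obstacle position into "descriptive linguistic expressions" via fuzzy logic can be illustrated with a minimal sketch. The membership functions, labels, and thresholds below are assumptions for illustration, not the authors' actual design:

```python
# Hypothetical sketch: fuzzy mapping of an obstacle's bearing and distance
# to a linguistic expression. All membership functions and labels are
# illustrative assumptions, not the paper's actual fuzzy model.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def describe_obstacle(bearing_deg, distance_m):
    """Return the best-matching linguistic description of an obstacle."""
    directions = {
        "to your left":  triangular(bearing_deg, -90, -45, 0),
        "ahead of you":  triangular(bearing_deg, -45, 0, 45),
        "to your right": triangular(bearing_deg, 0, 45, 90),
    }
    distances = {
        "very close": triangular(distance_m, -1, 0, 2),
        "close":      triangular(distance_m, 1, 3, 5),
        "far":        triangular(distance_m, 4, 8, 12),
    }
    # Defuzzify by picking the label with the highest membership degree.
    direction = max(directions, key=directions.get)
    distance = max(distances, key=distances.get)
    return f"{distance}, {direction}"

print(describe_obstacle(-30, 1.0))  # prints "very close, to your left"
```

A full fuzzy system would combine the rule activations (e.g. via Mamdani inference) rather than taking a simple maximum, but the max-membership shortcut keeps the sketch compact.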

Nanophotonics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 41-74 ◽  
Author(s):  
Bernard C. Kress ◽  
Ishan Chatterjee

Abstract: This paper is a review and analysis of the various implementation architectures of diffractive waveguide combiners for augmented reality (AR) and mixed reality (MR) headsets and smart glasses. Extended reality (XR) is another acronym frequently used to refer to all variants across the MR spectrum. Such devices have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained. Already, market analysts show very optimistic expectations of return on investment in MR, for both enterprise and consumer applications. Hardware architectures and technologies for AR and MR have made tremendous progress over the past five years, fueled by recent investment hype in start-ups and accelerated mergers and acquisitions by larger corporations. In order to meet such high market expectations, several challenges must be addressed: first, cementing primary use cases for each specific market segment and, second, achieving greater MR performance out of increasingly size-, weight-, cost- and power-constrained hardware. One such crucial component is the optical combiner. Combiners are often considered critical optical elements in MR headsets, as they are the user's direct window to both the digital content and the real world.
Two main pillars defining the MR experience are comfort and immersion. Comfort comes in various forms:
- wearable comfort: reducing weight and size, pushing back the center of gravity, addressing thermal issues, and so on
- visual comfort: providing accurate and natural 3-dimensional cues over a large field of view and at a high angular resolution
- vestibular comfort: providing stable and realistic virtual overlays that spatially agree with the user's motion
- social comfort: allowing for true eye contact, in a socially acceptable form factor
Immersion can be defined as the multisensory perceptual experience (including audio, display, gestures, and haptics) that conveys to the user a sense of realism and envelopment.
In order to effectively address both comfort and immersion challenges through improved hardware architectures and software developments, a deep understanding of the specific features and limitations of the human visual perception system is required. We emphasize the need for a human-centric optical design process, which would allow for the most comfortable headset design (wearable, visual, vestibular, and social comfort) without compromising the user's sense of immersion (display, sensing, and interaction). Matching the specifics of the display architecture to the human visual perception system is key to bounding the hardware constraints, allowing for headset development and mass production at reasonable cost while providing a delightful experience to the end user.


The world's demand for assistive technology (AT) has increased, and a great deal of research and development is underway in this area. Among the AT devices being developed, a reliable and inexpensive device that assists visually challenged people is in serious demand all around the world. We therefore intend to provide a solution by constructing a device that can detect the obstacles within a given range of a visually challenged person and alert the person about them. The device comprises several components: a camera for image-based obstacle detection, an ultrasonic distance sensor for distance estimation, and a vibration motor that provides haptic feedback by rotating at varied intensities depending on how far the obstacle is from the user. This paper presents a model that is part of the user's footwear; hence, no additional handheld device is required for assistance. The model uses a microcontroller, a camera to dynamically perceive obstacles, and a haptic feedback system to alert the person. The camera continuously acquires real-time video footage, which is processed by the microcontroller to detect obstacles. Simultaneously, a second algorithm estimates the distance with the help of the ultrasonic distance sensor. Depending on the distance, the frequency of the vibration motor, which serves as the output notifying the user about the obstacle, is varied (haptic feedback). With this system, a visually challenged person can avoid obstacles successfully without holding any additional device.
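The distance-dependent haptic feedback described above can be sketched as a simple mapping from an ultrasonic distance reading to a motor drive level. The range thresholds and the PWM duty-cycle scale below are assumptions, not values from the paper:

```python
# Illustrative sketch of a distance-to-vibration mapping: closer obstacles
# produce stronger (higher duty-cycle) haptic feedback. The thresholds and
# linear ramp are assumptions, not the paper's actual calibration.

MAX_RANGE_CM = 300   # beyond this distance, no vibration (assumed sensor range)
MIN_RANGE_CM = 20    # at or below this distance, full vibration

def vibration_duty_cycle(distance_cm):
    """Map an ultrasonic distance reading to a PWM duty cycle in [0, 100]."""
    if distance_cm >= MAX_RANGE_CM:
        return 0
    if distance_cm <= MIN_RANGE_CM:
        return 100
    # Linear ramp: the nearer the obstacle, the higher the duty cycle.
    span = MAX_RANGE_CM - MIN_RANGE_CM
    return round(100 * (MAX_RANGE_CM - distance_cm) / span)

for d in (350, 160, 20):
    print(d, "cm ->", vibration_duty_cycle(d), "% duty cycle")
```

On a real microcontroller this duty cycle would feed a PWM output driving the vibration motor; a nonlinear (e.g. exponential) ramp is another common choice when near-range sensitivity matters most.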


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4350 ◽  
Author(s):  
Julie Foucault ◽  
Suzanne Lesecq ◽  
Gabriela Dudnik ◽  
Marc Correvon ◽  
Rosemary O’Keeffe ◽  
...  

Environment perception is crucial for the safe navigation of vehicles and robots, which must detect obstacles in their surroundings. It is also of paramount interest for the navigation of human beings in reduced-visibility conditions. Obstacle avoidance systems typically combine multiple sensing technologies (i.e., LiDAR, radar, ultrasound and visual) to detect various types of obstacles under different lighting and weather conditions, with the drawbacks of a given technology being offset by the others. These systems require powerful computational capability to fuse the mass of data, which limits their use to high-end vehicles and robots. INSPEX delivers a low-power, small-size and lightweight environment perception system that is compatible with portable and/or wearable applications. This requires miniaturizing and optimizing existing range sensors of different technologies to meet the user's requirements in terms of obstacle detection capabilities. These sensors consist of a LiDAR, a time-of-flight sensor, an ultrasound sensor and an ultra-wideband radar, with measurement ranges of 10 m, 4 m, 2 m and 10 m, respectively. Integration of a data fusion technique is also required to build a model of the user's surroundings and provide feedback about the localization of harmful obstacles. As a primary demonstrator, the INSPEX device will be fixed on a white cane.
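A minimal sketch of how readings from range sensors with different maximum ranges might be fused into one distance estimate. The inverse-variance weighting rule and the per-sensor noise figures are assumptions for illustration, not INSPEX's actual fusion technique:

```python
# Hypothetical fusion of heterogeneous range sensors. Only the maximum
# ranges (10 m, 4 m, 2 m, 10 m) come from the abstract; the noise figures
# and the inverse-variance weighting rule are illustrative assumptions.

SENSORS = {
    # name: (max_range_m, assumed_std_dev_m)
    "lidar":      (10.0, 0.05),
    "tof":        (4.0,  0.02),
    "ultrasound": (2.0,  0.10),
    "uwb_radar":  (10.0, 0.20),
}

def fuse_distance(readings):
    """Inverse-variance weighted fusion of the in-range distance readings."""
    num, den = 0.0, 0.0
    for name, value in readings.items():
        max_range, std = SENSORS[name]
        if value is None or value > max_range:
            continue  # sensor saturated or returned no echo: ignore it
        weight = 1.0 / std**2
        num += weight * value
        den += weight
    return num / den if den else None

est = fuse_distance({"lidar": 1.52, "tof": 1.50, "ultrasound": 1.6, "uwb_radar": 1.4})
print(f"fused distance: {est:.2f} m")  # prints "fused distance: 1.51 m"
```

A production system would fuse full point clouds or occupancy grids rather than scalar ranges, but the same principle applies: weight each sensor by its reliability and discard out-of-range readings.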


2017 ◽  
Vol 68 ◽  
pp. 14-27 ◽  
Author(s):  
Christian Häne ◽  
Lionel Heng ◽  
Gim Hee Lee ◽  
Friedrich Fraundorfer ◽  
Paul Furgale ◽  
...  

2015 ◽  
Vol 12 (01) ◽  
pp. 1550009 ◽  
Author(s):  
Francisco Martín ◽  
Carlos E. Agüero ◽  
José M. Cañas

Robots detect and keep track of relevant objects in their environment to accomplish their tasks. Many of them are equipped with mobile cameras as their main sensors, process the images and maintain an internal representation of the detected objects. We propose a novel active visual memory that moves the camera to detect objects in the robot's surroundings and tracks their positions. This visual memory is based on a combination of multi-modal filters that efficiently integrates partial information. The visual attention subsystem is distributed among the software components in charge of detecting relevant objects. We demonstrate the efficiency and robustness of this perception system on a real humanoid robot participating in the RoboCup SPL competition.
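The idea of a visual memory that integrates partial observations into per-object position estimates can be sketched with a much simpler filter than the multi-modal one described above. The exponential-smoothing update and the class layout below are illustrative assumptions, not the authors' design:

```python
# Illustrative sketch (not the paper's filter): a visual memory keeping one
# smoothed 2-D position estimate per detected object, updated incrementally
# as partial observations arrive.

class VisualMemory:
    def __init__(self, alpha=0.5):
        self.alpha = alpha     # weight given to each new observation
        self.objects = {}      # object id -> (x, y) position estimate

    def observe(self, obj_id, x, y):
        """Integrate one observation of an object into the memory."""
        if obj_id not in self.objects:
            self.objects[obj_id] = (x, y)
        else:
            px, py = self.objects[obj_id]
            a = self.alpha
            # Exponential smoothing: move partway toward the new fix.
            self.objects[obj_id] = (a * x + (1 - a) * px,
                                    a * y + (1 - a) * py)

memory = VisualMemory()
memory.observe("ball", 1.0, 2.0)
memory.observe("ball", 2.0, 2.0)   # estimate moves halfway toward the new fix
print(memory.objects["ball"])      # prints "(1.5, 2.0)"
```

A real active-memory system would also model uncertainty per object (e.g. with Kalman filters) and use it to decide where to point the camera next.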


2020 ◽  
Vol 17 (9) ◽  
pp. 4364-4367 ◽  
Author(s):  
Shreya Srinarasi ◽  
Seema Jahagirdar ◽  
Charan Renganathan ◽  
H. Mallika

The preliminary step in the navigation of unmanned vehicles is to detect and identify the horizon line. One method to locate the horizon and obstacles in an image is a supervised semantic segmentation algorithm using neural networks. Unmanned Aerial Vehicles (UAVs) are rapidly gaining prominence in military, commercial and civilian applications, and their safe navigation requires accurate and efficient obstacle detection and avoidance. The positions of the horizon and obstacles can also be used to adjust flight parameters and estimate altitude. They can likewise aid the navigation of Unmanned Ground Vehicles (UGVs), where the part of the image above the horizon can be neglected to reduce processing time. Locating the horizon and identifying the various obstacles in an image can help minimize collisions and the high costs incurred when UAVs and UGVs fail. To achieve a robust and accurate system to aid the navigation of autonomous vehicles, the efficiency and accuracy of Convolutional Neural Networks (CNNs) and Recurrent CNNs (RCNNs) are analysed. Experiments show that the RCNN model classifies test images with higher accuracy.
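The UGV optimization mentioned above, discarding everything above the horizon before further processing, amounts to a row crop once the horizon row is known. A minimal sketch (using a plain list of rows; a real pipeline would slice a NumPy/OpenCV array the same way):

```python
# Sketch of the processing-time optimization for UGVs: once segmentation
# has located the horizon row, drop all pixel rows above it before running
# obstacle detection on the remainder.

def crop_below_horizon(image_rows, horizon_row):
    """Keep only the rows at and below the detected horizon line."""
    return image_rows[horizon_row:]

# Toy 6-row "image"; suppose the segmentation placed the horizon at row 2.
image = [[0] * 4 for _ in range(6)]
ground = crop_below_horizon(image, 2)
print(len(ground), "rows kept of", len(image))  # prints "4 rows kept of 6"
```

Since per-frame inference cost scales roughly with pixel count, cropping the sky region directly shrinks the downstream detector's workload.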

