Printed Texts Tracking and Following for a Finger-Wearable Electro-Braille System Through Opto-electrotactile Feedback

2021 ◽  
Author(s):  
Mehdi Rahimi ◽  
Yantao Shen ◽  
Zhiming Liu ◽  
Fang Jiang

This paper presents our recent development of a portable and refreshable text-reading and sensory substitution system for the blind or visually impaired (BVI), called Finger-eye. The system mainly consists of an opto-text processing unit and a compact electro-tactile display that delivers text-related electrical signals to the fingertip skin through a wearable, Braille-dot-patterned electrode array, thereby producing electro-stimulation-based Braille touch sensations at the fingertip. To achieve the goal of aiding the BVI to read any text not written in Braille through this portable system, a Rapid Optical Character Recognition (R-OCR) method is first developed for real-time processing of text information based on a fisheye imaging device mounted on the finger-wearable electro-tactile display. This allows real-time translation of printed text to electro-Braille along with the natural movement of the user's fingertip, as if reading any Braille display or book. More importantly, an electro-tactile neuro-stimulation feedback mechanism is proposed and incorporated with the R-OCR method, which facilitates a new opto-electrotactile-feedback-based text-line tracking control approach that enables the user's fingertip to follow a text line during reading. Multiple experiments were designed and conducted to test the ability of blindfolded participants to read through and follow a text line using the opto-electrotactile-feedback method. The experiments show that, as a result of the opto-electrotactile feedback, users were able to maintain their fingertip within 2 mm of the text while scanning a text line. This research is a significant step toward providing BVI users with a portable means to translate, follow, and read any printed text as Braille, whether digital or physical, on any surface.
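
As an illustration of the text-line-following idea described above (not the authors' implementation), the sketch below estimates how far the fingertip camera's region of interest has drifted from the vertical center of the detected text line and maps that offset to a corrective tactile cue. The ink-thresholding scheme, dead band, and cue names are assumptions.

```python
# Minimal sketch of opto-electrotactile text-line following: estimate the
# vertical offset of the text line inside the fingertip camera's ROI and map
# it to an "up"/"down" corrective cue for the electro-tactile display.
import numpy as np

def line_offset(roi_gray: np.ndarray) -> float:
    """Return the text line's vertical offset from the ROI center, in pixels.

    roi_gray: 2-D array of grayscale intensities from the fingertip camera.
    """
    # Dark pixels are treated as ink; weight each row by its ink count.
    ink = (roi_gray < 128).sum(axis=1).astype(float)
    if ink.sum() == 0:
        return 0.0  # no text visible; no correction
    rows = np.arange(roi_gray.shape[0])
    text_center = (rows * ink).sum() / ink.sum()
    return text_center - roi_gray.shape[0] / 2.0

def tracking_cue(offset_px: float, dead_band_px: float = 3.0) -> str:
    """Map the offset to a corrective cue (names are hypothetical)."""
    if abs(offset_px) <= dead_band_px:
        return "on-line"          # keep scanning; no feedback needed
    return "move-down" if offset_px > 0 else "move-up"
```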

2019 ◽  
Author(s):  
Jamie E. Poole ◽  
Jhon P. C. Casas ◽  
Roberto A. Bolli ◽  
Hermano I. Krebs

2021 ◽  
Author(s):  
Sai Chaitanya Cherukumilli

Human-computer interaction systems have been providing new ways for amateurs to compose music using traditional computer peripherals as well as gesture interfaces. Vibro-tactile patterns, a vibrational art form analogous to auditory music, can also be composed using human-computer interfaces. This thesis discusses a gesture interface system called Vibro-Motion, which facilitates the real-time composition of vibro-tactile patterns on an existing tactile sensory substitution system called the Emoti-Chair. Vibro-Motion allows users to control the pitch and magnitude of the vibration as well as its position. A usability evaluation of the Vibro-Motion system showed it to be intuitive, comfortable, and enjoyable for the participants.
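
To make the gesture-to-vibration mapping concrete, here is a hypothetical sketch (not the thesis code) of how a normalized 3-D hand position could be mapped onto the three parameters Vibro-Motion exposes: pitch, magnitude, and position on a multi-channel tactile display. The channel count and frequency range are assumptions.

```python
# Illustrative mapping from a gesture tracker's normalized coordinates to a
# vibro-tactile "frame" (which actuator, at what pitch and amplitude).
from dataclasses import dataclass

@dataclass
class VibroFrame:
    channel: int      # which actuator on the tactile display
    pitch_hz: float   # vibration frequency
    magnitude: float  # 0.0 .. 1.0 amplitude

def gesture_to_vibro(x: float, y: float, z: float,
                     channels: int = 8,
                     pitch_range=(40.0, 400.0)) -> VibroFrame:
    """Map normalized gesture coordinates (0..1) to a vibro-tactile frame."""
    channel = min(int(x * channels), channels - 1)                   # left-right -> position
    pitch = pitch_range[0] + y * (pitch_range[1] - pitch_range[0])   # height -> pitch
    magnitude = max(0.0, min(1.0, z))                                # depth -> intensity
    return VibroFrame(channel, pitch, magnitude)
```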


2020 ◽  
Author(s):  
Mehdi Rahimi ◽  
Fang Jiang ◽  
Yantao Shen

An electro-tactile display can be used to stimulate sensations in the skin. The ultimate goal in this area is to open a new information communication channel using this sensory substitution system. One of the requirements of such a communication channel is to deliver meaningful commands to the user, and the sensations should be distinctive enough to be readily understandable for the operator. This study pursues the feasibility of generating identifiable moving patterns on the electro-tactile display and then validates the degree to which users can identify them.

An electro-tactile display was built using an array of sixteen contacts that forms a moving pattern by delivering electrical signals to the fingertip skin. These signals can have varying voltages, frequencies, or duty cycles to form the most comfortable sensation. Moving patterns are generated by individually or collectively toggling the electrical contacts on the electro-tactile display. In this regard, a moving pattern can be compared to a set of frame-by-frame pictures that construct a movie: by toggling the contacts in a specific order, a moving pattern is achieved.

Eight subjects participated in this study, and a questionnaire was used to assess the sensation of the corresponding movement. The results of these reports were analyzed and a conclusion regarding the identification of the direction of the movement was drawn. It became clear that the direction of the movement had a significant impact on the recognition of the patterns.

Furthermore, an analysis of the detection threshold (DT) voltage and current mapping was performed to evaluate the effect of each user's internal skin structure on the assessment performance. Based on the mapping results, it became clear that the DT voltage varies considerably across contacts and that the resulting spatial map is unique to each user.
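
The frame-by-frame analogy above can be made concrete with a short sketch: a moving pattern on the sixteen-contact display is simply a sequence of 4x4 on/off frames played back at a fixed rate. The frame period and the write_contacts() driver call below are hypothetical.

```python
# Sketch of a moving pattern as a sequence of 4x4 contact frames.
import time
import numpy as np

def sweep_frames(direction: str = "left-to-right"):
    """Yield 4x4 boolean frames that activate one column (or row) at a time."""
    for step in range(4):
        frame = np.zeros((4, 4), dtype=bool)
        if direction == "left-to-right":
            frame[:, step] = True
        elif direction == "top-to-bottom":
            frame[step, :] = True
        yield frame

def play(frames, frame_period_s: float = 0.25):
    """Play each frame in sequence; write_contacts() is a hypothetical driver call."""
    for frame in frames:
        # write_contacts(frame)  # hypothetical: toggles the 16 electrodes
        print(frame.astype(int))  # stand-in: show the on/off pattern
        time.sleep(frame_period_s)

play(sweep_frames("left-to-right"))
```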


Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease that leads to mortality and increases the cost of medical care. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiography for diagnosing and evaluating valvular heart disease. However, echocardiographic images are of poorer quality than Computerized Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination to detect the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detections. Methods: Two detection architectures, the Single Shot Multibox Detector (SSD) and the Faster Region-based Convolutional Neural Network (R-CNN), with various feature extractors were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 incurred a 46.81% loss in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
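
To illustrate the kind of real-time throughput measurement reported above, the sketch below runs a detector frame by frame over an echocardiography video and reports frames per second. detect_aortic_valve() is a placeholder standing in for the trained SSD MobileNet v2 or Faster R-CNN Inception v2 models; it is not the authors' code.

```python
# Hedged sketch: benchmark a per-frame detector over a video and report fps.
import time
import cv2  # pip install opencv-python

def detect_aortic_valve(frame):
    """Placeholder for the trained CNN; returns a list of (box, score)."""
    return []

def benchmark(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    frames, start = 0, time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detect_aortic_valve(frame)
        frames += 1
    cap.release()
    elapsed = time.perf_counter() - start
    return frames / elapsed if elapsed > 0 else 0.0

# Example usage (path is illustrative): fps = benchmark("echo_clip.avi")
```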


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency apps in computer vision are ubiquitous in today’s world of mixed-reality devices. These innovations provide a platform that can leverage improving depth-sensor and embedded-accelerator technology to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality apps using low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality apps.
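
The article describes filter-based depth upsampling in general terms. As an illustration of that class of algorithm, here is a small, deliberately simple NumPy sketch of joint bilateral upsampling (one common filter-based upsampler, not necessarily the variant evaluated). The window radius, Gaussian sigmas, and the assumption that both inputs are normalized to [0, 1] are mine.

```python
# Reference (slow) joint bilateral upsampling: each high-resolution depth
# value is a weighted average of nearby low-resolution depth samples, with
# weights from spatial distance and similarity in the high-resolution guide.
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, radius=2,
                             sigma_spatial=1.0, sigma_range=0.1):
    """Upsample depth_lo (h, w) to the resolution of guide_hi (H, W)."""
    H, W = guide_hi.shape
    h, w = depth_lo.shape
    sy, sx = h / H, w / W
    out = np.zeros((H, W), dtype=np.float64)
    for yi in range(H):
        for xi in range(W):
            yl, xl = yi * sy, xi * sx                      # position in low-res grid
            y0 = min(int(round(yl)), h - 1)
            x0 = min(int(round(xl)), w - 1)
            wsum, dsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yq, xq = y0 + dy, x0 + dx
                    if not (0 <= yq < h and 0 <= xq < w):
                        continue
                    # Spatial weight: distance in low-res coordinates.
                    ws = np.exp(-((yq - yl) ** 2 + (xq - xl) ** 2)
                                / (2 * sigma_spatial ** 2))
                    # Range weight: similarity in the high-res guide image.
                    gy = min(int(yq / sy), H - 1)
                    gx = min(int(xq / sx), W - 1)
                    wr = np.exp(-(guide_hi[yi, xi] - guide_hi[gy, gx]) ** 2
                                / (2 * sigma_range ** 2))
                    wsum += ws * wr
                    dsum += ws * wr * depth_lo[yq, xq]
            out[yi, xi] = dsum / wsum if wsum > 0 else depth_lo[y0, x0]
    return out
```

Hardware implementations parallelize the two outer loops, which is what makes the FPGA and embedded-GPU targets attractive for this filter family.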


2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the essential human senses and plays a major role in human perception of the surrounding environment. For people with visual impairment, however, the experience of vision is different: they are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guiding system that addresses the navigation problem of visually impaired people and allows them to travel without difficulty. The system helps visually impaired people by detecting objects and providing the necessary information about each object. This information may include what the object is, its location, the precision of the detection, its distance from the user, and so on. All of this information is conveyed to the person through audio commands so that they can navigate freely anywhere, anytime, with minimal or no assistance. Object detection is performed using the You Only Look Once (YOLO) algorithm. Because capturing the video/images and sending them to the main module must be carried out at high speed, a Graphics Processing Unit (GPU) is used. This enhances the overall speed of the system and helps the visually impaired receive the necessary instructions as quickly as possible. The process starts with capturing the real-time video, sending it for analysis and processing, and obtaining the calculated results. The results of the analysis are conveyed to the user by means of a hearing aid. As a result, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
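
As a rough sketch of the guidance loop described above (not the paper's code), the example below processes each camera frame, with detect_yolo() standing in for the YOLO model, and speaks each detected object's name and approximate position to the user via text-to-speech. The position heuristic and the choice of the pyttsx3 library are assumptions.

```python
# Illustrative guidance loop: detect objects per frame and announce them.
import cv2       # pip install opencv-python
import pyttsx3   # pip install pyttsx3

def detect_yolo(frame):
    """Placeholder for the YOLO detector; returns [(label, x, y, w, h), ...]."""
    return []

def position_of(x, w, frame_width):
    """Classify a bounding box as left, right, or ahead of the user."""
    center = x + w / 2
    if center < frame_width / 3:
        return "to your left"
    if center > 2 * frame_width / 3:
        return "to your right"
    return "ahead"

def guide(camera_index=0):
    tts = pyttsx3.init()
    cap = cv2.VideoCapture(camera_index)
    # Loop ends when the camera stops providing frames.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for label, x, y, w, h in detect_yolo(frame):
            tts.say(f"{label} {position_of(x, w, frame.shape[1])}")
        tts.runAndWait()
    cap.release()
```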

