Development of a Computer Interface for People with Disabilities Based on Computer Vision

Author(s):  
Gustavo Scalabrini Sampaio ◽  
Maurício Marengoni
2021 ◽  
pp. 128-134
Author(s):  
Kaixuan Liu ◽  
Yang Yu ◽  
Yadong Liu ◽  
Zongtan Zhou

2016 ◽  
Vol 28 (01) ◽  
pp. 1650004
Author(s):  
Taher Bodaghi ◽  
Mohammad Reza Karami ◽  
Hamid Jazayeriy

The VIWA system was developed to provide computer access for paralyzed people. It detects the user's breath pressure with an electronic board and translates it into location coordinates on the computer display; the user navigates the mouse cursor by specifying the x and y coordinates. A graphical keyboard provides text entry with an embedded text-to-speech feature. The system is platform-independent and can run on any client-side device. A group of 10 people without disabilities tested VIWA and quickly learned to use it to spell out messages and interact with the computer; ten people with disabilities also tried the system. The measured errors and times show that users can control the mouse cursor with an accuracy of 99.8%, and that typing speed reaches 7.9 words/min after two attempts.
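The abstract does not specify how breath pressure is turned into a coordinate, but the idea can be sketched as a simple linear mapping from a sensor reading to a pixel column; the sensor range and screen width below are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of a breath-pressure-to-coordinate mapping.
# The sensor range and screen width are assumed values, not from the paper.

SCREEN_WIDTH = 1920
P_MIN, P_MAX = 0.0, 100.0  # assumed sensor range (arbitrary units)

def pressure_to_x(pressure: float) -> int:
    """Clamp the reading to the sensor range, then scale it to a pixel column."""
    p = min(max(pressure, P_MIN), P_MAX)
    return round((p - P_MIN) / (P_MAX - P_MIN) * (SCREEN_WIDTH - 1))
```

The same mapping, applied to a second reading (or a second breath), would give the y coordinate.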


2011 ◽  
Vol 8 (3) ◽  
pp. 036025 ◽  
Author(s):  
Eric A Pohlmeyer ◽  
Jun Wang ◽  
David C Jangraw ◽  
Bin Lou ◽  
Shih-Fu Chang ◽  
...  

2005 ◽  
Vol 17 (06) ◽  
pp. 284-292 ◽  
Author(s):  
MU-CHUN SU ◽  
SHI-YONG SU ◽  
GWO-DONG CHEN

The object of this paper is to present a low-cost, vision-based computer interface that allows people with disabilities to use their head movements to manipulate computers. Our system requires only one low-cost web camera and a personal computer. Several experiments were conducted to test the performance of the proposed human-computer interface.
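The core of such an interface is mapping the detected head (face) position in a webcam frame to a cursor position on screen. The paper does not give its algorithm; the sketch below assumes a face bounding box from some detector and a direct linear scaling from frame to screen coordinates.

```python
# Illustrative sketch (not the authors' code): map the center of a detected
# face bounding box in a webcam frame to a cursor position on screen.
# Frame and screen resolutions are assumptions for illustration.

FRAME_W, FRAME_H = 640, 480      # typical webcam frame size (assumed)
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def face_to_cursor(x: int, y: int, w: int, h: int) -> tuple:
    """Scale the face-box center from frame coordinates to screen coordinates."""
    cx, cy = x + w / 2, y + h / 2
    return (round(cx / FRAME_W * SCREEN_W),
            round(cy / FRAME_H * SCREEN_H))
```

A real system would smooth successive positions and use relative (joystick-style) rather than absolute mapping to reduce jitter.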


Author(s):  
Aryansh Shrivastava

The goal of my project is to create a computer-vision and AI-based system that interprets facial gestures in an intelligent and meaningful way, helping people with disabilities carry out two-way written and verbal communication. The system should also use facial gestures to control the precise navigation of wheelchairs and other guided robotic devices, aiding their mobility. Finally, it should interpret and convert facial gestures into commands that control home and office devices, letting users manage the environment around them, such as lighting (on/off), temperature, and sound.
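The gesture-to-command layer described above can be sketched as a lookup from recognized gesture labels to device commands; the gesture names and commands here are hypothetical placeholders, not from the project.

```python
# Hypothetical gesture-to-command dispatcher. The gesture labels and the
# commands they trigger are assumptions for illustration only.

GESTURE_COMMANDS = {
    "blink_twice":    "lights_toggle",
    "raise_eyebrows": "temperature_up",
    "smile":          "volume_up",
}

def gesture_to_command(gesture: str) -> str:
    """Look up the command for a recognized gesture; unknown gestures do nothing."""
    return GESTURE_COMMANDS.get(gesture, "no_op")
```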


2021 ◽  
Author(s):  
Hamza Ali Imran ◽  
Usama Latif

Human Activity Recognition (HAR) is an important area of research in light of the enormous range of applications it enables, such as health monitoring, sports, entertainment, efficient human-computer interfaces, child care, and education. The use of computer vision for HAR has many limitations, so inertial sensors, including accelerometers and gyroscopes, are becoming the norm for HAR given their advantages over traditional computer-vision techniques. In this paper we propose a 1-dimensional Convolutional Neural Network inspired by two state-of-the-art architectures for image classification, Inception Net and DenseNet. We evaluate its performance on two publicly available HAR datasets and report precision, recall, F1-measure, and accuracy.
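The building block of such a network is the 1-D convolution: a learned kernel slid along the sensor time series. As a minimal sketch (the paper's actual architecture stacks many such layers with Inception- and DenseNet-style connections), a single valid-mode 1-D cross-correlation looks like this:

```python
# Minimal sketch of the core operation in a 1-D CNN: valid-mode
# cross-correlation of a sensor signal with a single learned kernel.
# Real layers add many channels, nonlinearities, and pooling.

def conv1d(signal, kernel):
    """Slide the kernel over the signal and sum the elementwise products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

For an accelerometer trace, a kernel like `[1, -1]` acts as a differencing filter that highlights sudden movements.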

