Computer Vision Based Eye Gaze Controlled Virtual Keyboard for People with Quadriplegia

Author(s):  
Robiul Islam ◽  
Sazedur Rahman ◽  
Anamica Sarkar
Author(s):  
Marishetti Niharika

Eye gaze is a fundamental nonverbal interaction that is gaining ground in emerging technology. This eye-blink device facilitates communication for people with disabilities: the process is simple enough to be performed by blinking at specific keys built into a virtual keyboard. Such a system can synthesize speech, control the user's environment, and provide a significant boost to the individual's self-belief. Our study emphasises a virtual keyboard that not only includes the alphabetic keys but also contains emergency phrases for seeking help in a variety of scenarios. It can also provide voice notification and speech assistance to those who are speech-impaired. To achieve this, we employed the computer's integrated webcam, which detects the face and its elements; as a result, the face-detection technique is far less complicated than alternatives. The blink of an eye serves as a mouse click on the digital interface. Our goal is to enable nonverbal communication, so that physically impaired people can communicate with the help of a voice assistant. This type of innovation is a blessing for those who have lost their voice or suffer from paralytic ailments.
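The abstract does not name its blink-detection method, but the blink-as-click step it describes is commonly implemented with the eye aspect ratio (EAR) over the six landmark points of each eye; the threshold and frame counts below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (corner, two top, corner, two bottom).

    EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|). It stays roughly constant while
    the eye is open and drops sharply when the eye closes.
    """
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

def detect_clicks(ear_series, threshold=0.21, min_frames=3):
    """Count deliberate blinks ('clicks') in a per-frame EAR sequence.

    A run of at least min_frames consecutive low-EAR frames counts as one
    click, which filters out single-frame noise and involuntary flickers.
    """
    clicks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                clicks += 1
            run = 0
    if run >= min_frames:
        clicks += 1
    return clicks
```

In a live system the landmark coordinates would come from a face-landmark detector running on each webcam frame; a sustained low EAR then triggers a click on whichever virtual key is currently highlighted.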


2019 ◽  
Vol 8 (4) ◽  
pp. 3264-3269 ◽  

Human-computer interaction is an emerging field that covers a vast number of algorithms and techniques to enhance the interaction process. Eye gaze technology is one of the most significant techniques of modern science and can be used in many areas such as security, typing, and information tracking. The need for systems that physically disabled people can operate using only eye gaze and blinking has motivated many researchers. In this paper, we present the development of a virtual keyboard that works by detecting eye gaze and eye blinking. It involves building a system that captures video directly from the PC camera and detects the human face and eyes. To detect the face accurately, we follow a simple rule: the eyes and lips are always in the same place in the image, which makes the eye-detection process much easier. To do this we use an approach based on 68 facial landmark points that exist in every face, covering the eye area, the top of the chin, the eyebrows, the nose, the outside edge of the face, and so on. The system also detects eye gaze to the left or right to select a keyboard portion, and eye blinking to select the desired key from the virtual keyboard. The goal of this system is to type without using the fingers or hands. Applications of this kind are genuinely important, and a blessing for people who have completely lost control of their limbs. The methodology is described, including flow charts for each stage of the system, followed by the implementation results.
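The left/right gaze split used to select a keyboard portion can be approximated by locating the pupil's horizontal position within the eye's bounding box, which is recoverable from the eye-corner landmarks. The function and thresholds below are a minimal sketch of this idea, assuming values not stated in the abstract.

```python
def gaze_direction(pupil_x, eye_left_x, eye_right_x,
                   left_thresh=0.35, right_thresh=0.65):
    """Classify horizontal gaze from the pupil's normalised position.

    A ratio of 0.0 puts the pupil at the eye's left corner and 1.0 at the
    right corner; the two thresholds split the range into three zones.
    """
    ratio = (pupil_x - eye_left_x) / float(eye_right_x - eye_left_x)
    if ratio < left_thresh:
        return "left"     # select the left half of the virtual keyboard
    if ratio > right_thresh:
        return "right"    # select the right half
    return "center"       # no selection; keep scanning
```

In practice the pupil centre is usually found by thresholding the eye region of the frame and taking the centroid of the dark blob, and the left/right labels may need mirroring depending on camera orientation.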


2021 ◽  
Vol 3 (3) ◽  
pp. 190-207
Author(s):  
S. K. B. Sangeetha

In recent years, deep-learning systems have made great progress, particularly in the disciplines of computer vision and pattern recognition. Deep-learning technology can be used to enable inference models to perform real-time object detection and recognition. Using deep-learning-based designs, eye-tracking systems can determine the position of the eyes or pupils, regardless of whether visible-light or near-infrared image sensors are utilized. For growing electronic vehicle systems, such as driver monitoring systems and new touch screens, accurate and successful eye gaze estimation is critical. In demanding, unregulated, low-power situations, such systems must operate efficiently and at a reasonable cost. A thorough examination of the different deep learning approaches is required to take into consideration all of the limitations and opportunities of eye gaze tracking. The goal of this research is to learn more about the history of eye gaze tracking, as well as how deep learning has contributed to computer vision-based tracking. Finally, this research presents a generalized system model for deep learning-driven eye gaze direction diagnostics, as well as a comparison of several approaches.


Author(s):  
Vidya Itsna Saraswati ◽  
Riyanto Sigit ◽  
Tri Harsono
Keyword(s):  
Eye Gaze ◽  

2021 ◽  
pp. 1-20
Author(s):  
Jeevithashree DV ◽  
Puneet Jain ◽  
Abhishek Mukhopadhyay ◽  
Kamal Preet Singh Saluja ◽  
Pradipta Biswas

BACKGROUND: Users with Severe Speech and Motor Impairment (SSMI) often use a communication chart through their eye gaze or limited hand movement, and caretakers interpret their communication intent. Significant research has already been conducted to automate this communication through electronic means. Developing electronic user interfaces and interaction techniques for users with SSMI poses significant challenges, as research on their ocular parameters found that such users suffer from nystagmus and strabismus, limiting the number of elements that can fit on a computer screen. This paper presents an optimized eye gaze controlled virtual keyboard for English with an adaptive dwell time feature for users with SSMI. OBJECTIVE: Present an optimized eye gaze controlled English virtual keyboard that follows both a static and a dynamic adaptation process. The virtual keyboard can automatically adapt to reduce eye gaze movement distance and dwell time for selection, helping users with SSMI type better without the intervention of an assistant. METHODS: Before designing the virtual keyboard, we undertook a pilot study to identify the screen region that SSMI users find most comfortable to operate. We then proposed an optimized two-level English virtual keyboard layout obtained through a genetic algorithm as the static adaptation process, followed by a dynamic adaptation process that tracks users' interaction and reduces dwell time based on a Markov-model-based algorithm. Further, we integrated the virtual keyboard into a web-based interactive dashboard that visualizes real-time COVID-19 data. RESULTS: Using the proposed virtual keyboard layout for English, the average task completion time for users with SSMI was 39.44 seconds in the adaptive condition and 29.52 seconds in the non-adaptive condition. Overall typing speed was 16.9 lpm (letters per minute) for able-bodied users and 6.6 lpm for users with SSMI, without using any word completion or prediction features. A case study with an elderly participant with SSMI found a typing speed of 2.70 wpm (words per minute) and 14.88 lpm (letters per minute) after 6 months of practice. CONCLUSIONS: With the proposed English virtual keyboard layout, the adaptive system increased typing speed statistically significantly for able-bodied users compared with the non-adaptive version, while for 6 users with SSMI, task completion time was 8.8% lower in the adaptive version than in the non-adaptive one. Additionally, the proposed layout was successfully integrated into a web-based interactive visualization dashboard, making it accessible to users with SSMI.
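The abstract attributes the dwell-time reduction to a Markov-model-based algorithm but does not give its details; the sketch below assumes a simple first-order model over key-to-key transitions, where the dwell threshold is shortened whenever the fixated key is the most probable successor of the previously selected key. The class name, base dwell, and reduction factor are all hypothetical.

```python
from collections import defaultdict

class AdaptiveDwell:
    """Dwell-time adaptation in the spirit of the paper's Markov-model
    approach (exact algorithm assumed, not taken from the paper).

    Key-to-key selection counts form a first-order Markov model; a key
    that the model predicts as the likely next selection gets a reduced
    dwell threshold, speeding up common letter sequences.
    """

    def __init__(self, base_dwell=1.0, factor=0.6):
        self.base_dwell = base_dwell          # seconds for an unexpected key
        self.factor = factor                  # multiplier for a predicted key
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None                      # last selected key

    def dwell_for(self, key):
        """Dwell time (seconds) currently required to select `key`."""
        if self.prev is not None and self.counts[self.prev]:
            likely = max(self.counts[self.prev],
                         key=self.counts[self.prev].get)
            if key == likely:
                return self.base_dwell * self.factor
        return self.base_dwell

    def select(self, key):
        """Record a completed selection, updating the transition model."""
        if self.prev is not None:
            self.counts[self.prev][key] += 1
        self.prev = key
```

For example, after the user has typed "th" twice, fixating "t" (the model's predicted successor of "h") needs only the reduced dwell, while any other key still needs the full base dwell.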


Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 287
Author(s):  
Pieter Vanneste ◽  
José Oramas ◽  
Thomas Verelst ◽  
Tinne Tuytelaars ◽  
Annelies Raes ◽  
...  

Computer vision has shown great accomplishments in a wide variety of classification, segmentation and object recognition tasks, but tends to encounter more difficulties when tasks require more contextual assessment. Measuring the engagement of students is an example of such a complex task, as it requires a strong interpretative component. This research describes a methodology to measure students' engagement, taking both an individual (student-level) and a collective (classroom) approach. Results show that students' individual behaviour, such as note-taking or hand-raising, is challenging to recognise and does not correlate with students' self-reported engagement. Interestingly, students' collective behaviour can be quantified in a more generic way using measures of students' symmetry, reaction times and eye-gaze intersections. Nonetheless, the evidence for a connection between these collective measures and engagement is rather weak. Although this study does not succeed in providing a proxy for students' self-reported engagement, our approach sheds light on the needs of future research. More concretely, we suggest that not only the behavioural but also the emotional and cognitive components of engagement should be captured.


2021 ◽  
pp. 71-79
Author(s):  
Polok Ghosh ◽  
Rohit Singhee ◽  
Rohan Karmakar ◽  
Snehomoy Maitra ◽  
Sanskar Rai ◽  
...  

2021 ◽  
Vol 12 (4) ◽  
pp. 1224-1233
Author(s):  
Villuri Gnaneswar

Iris movement and gaze tracking have been active research fields in recent years, as they add convenience to a variety of applications and are considered a significant non-traditional method of human-computer interaction. The goal of this paper is to present a study of the existing literature on iris movement and gaze tracking and to develop an efficient technique that can advance the field of computer vision. With the uptrend of eye-tracking-based systems in many areas of life in recent years, the subject has gained much more attention in both academia and industry.


Author(s):  
Pavneet Bhatia ◽  
Arun Khosla ◽  
Gajendra Singh

Over the past few decades, eye tracking has evolved into an emerging technology with wide areas of application in gaming, human-computer interaction, business research, assistive technology, automotive safety research, and many more. Eye-gaze tracking is a provocative idea in computer-vision technology. This chapter covers recent research, expansion, and development in the technology, its techniques, and its wide-ranging applications, and gives a detailed background of the technology along with the efforts made to improve tracking systems.

