Computer vision applied to improve interaction and communication of people with motor disabilities: A systematic mapping

2021 ◽  
pp. 1-18
Author(s):  
Rúbia Eliza de Oliveira Schultz Ascari ◽  
Luciano Silva ◽  
Roberto Pereira

BACKGROUND: The use of computers as a communication tool by people with disabilities can serve as an effective alternative for promoting social interactions and more inclusive, active participation in society. OBJECTIVE: This paper presents a systematic mapping of the literature surveying scientific contributions in which Computer Vision is applied to enable users with motor and speech impairments to access computers easily, allowing them to exercise their communicative abilities. METHODS: The mapping was conducted using searches that identified 221 potentially eligible scientific articles published between 2009 and 2019 and indexed in the ACM, IEEE, Science Direct, and Springer databases. RESULTS: Of the retrieved papers, 33 were selected and categorized into themes of interest to this research: Human-Computer Interaction, Human-Machine Interaction, Human-Robot Interaction, Recreation, and surveys. Most of the selected studies use sets of predefined gestures, low-cost cameras, and tracking of a specific body region for gestural interaction. CONCLUSION: The results offer an overview of the Computer Vision techniques used in applied research on Assistive Technology for people with motor and speech disabilities, pointing out opportunities and challenges in this research domain.

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often serviced only weekly, resulting in low temporal resolution of the monitoring data, which hampers ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detecting and classifying species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths representing eight different classes, achieving a high validation F1-score of 0.93. The algorithm achieved an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
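
For illustration, the sketch below shows a minimal eight-class CNN image classifier of the kind the MCC pipeline builds on, written in PyTorch. It is not the authors' architecture; the input resolution, layer widths, and the class count of eight are assumptions taken from the abstract.

```python
# A minimal sketch of an eight-class moth classifier (not the authors'
# MCC code). Input size (64x64 RGB) and layer widths are assumptions.
import torch
import torch.nn as nn

class MothCNN(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Two small convolution blocks followed by a linear head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = MothCNN()
logits = model(torch.randn(1, 3, 64, 64))  # one dummy 64x64 RGB frame
print(logits.argmax(dim=1))                # predicted class index
```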


Author(s):  
Maxwell K. Micali ◽  
Hayley M. Cashdollar ◽  
Zachary T. Gima ◽  
Mitchell T. Westwood

While CNC programmers have powerful tools for developing optimized toolpaths and machining plans, these efforts can be wholly undermined by something as simple as human operator error during fixturing. This project addresses that potential operator error with a computer vision approach that provides coarse, closed-loop control between the fixturing and machining processes. Prior to starting the machining cycle, a sensor suite uses computer vision algorithms to detect the geometry that is currently fixtured and compares it to a CAD reference. If the detected and reference geometries are not similar, the machining cycle will not start and an alarm is raised. The outcome of this project is a proof of concept of a low-cost, machine/controller-agnostic solution applied to CNC milling machines. The Workpiece Verification System (WVS) prototype implemented in this work cost a total of $100 to build, and all of the processing is performed on the self-contained platform. This solution has additional applications beyond milling that the authors are exploring.
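
As a rough illustration of such a geometry check, the sketch below compares the largest contour in a camera frame against a rendered CAD silhouette using OpenCV. The file names, the use of cv2.matchShapes, and the tolerance value are illustrative assumptions, not the WVS implementation.

```python
# A hypothetical sketch of a coarse fixturing check (not the authors'
# WVS code). File names and the 0.1 tolerance are assumptions.
import cv2

def largest_contour(image_path: str):
    """Binarize an image and return its largest external contour."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

detected = largest_contour("camera_frame.png")     # fixtured workpiece
reference = largest_contour("cad_silhouette.png")  # rendered CAD outline

# matchShapes returns 0 for identical shapes; larger means less similar.
score = cv2.matchShapes(detected, reference, cv2.CONTOURS_MATCH_I1, 0.0)
if score > 0.1:
    print("ALARM: fixtured geometry does not match the CAD reference")
else:
    print("Geometry verified; machining cycle may start")
```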


2017 ◽  
Vol 107 (09) ◽  
pp. 572-577
Author(s):  
Prof. B. Lorenz ◽  
I. Kaltenmark

In modern production, lean manufacturing is one of the most important drivers of productivity gains. New developments in the field of Industrie 4.0 can provide fresh impulses for lean manufacturing. Tests at OTH Regensburg examine how low-cost camera systems can help make waste visible and minimize it. The results show that even with low investment costs, new potential for waste reduction can be uncovered.
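
The article does not specify its algorithms, but one plausible building block for making waste visible with a low-cost camera is sketched below: OpenCV background subtraction used to estimate what fraction of an observation window a workstation is actually active. The camera index, window length, and activity threshold are assumptions.

```python
# A hypothetical activity logger for a low-cost workstation camera
# (an assumption; not the system described in the article).
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)            # low-cost USB camera, index assumed
active_frames, total_frames = 0, 0

while total_frames < 1000:           # fixed observation window (assumed)
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # foreground pixels = motion
    total_frames += 1
    if cv2.countNonZero(mask) > 0.01 * mask.size:  # assumed threshold
        active_frames += 1

cap.release()
print(f"activity ratio: {active_frames / max(total_frames, 1):.2%}")
```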


2018 ◽  
Vol 2018 ◽  
pp. 1-20 ◽  
Author(s):  
Markus Bajones ◽  
David Fischinger ◽  
Astrid Weiss ◽  
Daniel Wolf ◽  
Markus Vincze ◽  
...  

We present the robot developed within the Hobbit project, a socially assistive service robot aiming at the challenge of enabling prolonged independent living of elderly people in their own homes. We present the second prototype (Hobbit PT2) in terms of hardware and functionality improvements following first user studies. Our main contribution lies within the description of all components developed within the Hobbit project, leading to autonomous operation of 371 days during field trials in Austria, Greece, and Sweden. In these field trials, we studied how 18 elderly users (aged 75 years and older) lived with the autonomously interacting service robot over multiple weeks. To the best of our knowledge, this is the first time a multifunctional, low-cost service robot equipped with a manipulator was studied and evaluated for several weeks under real-world conditions. We show that Hobbit’s adaptive approach towards the user increasingly eased the interaction between the users and Hobbit. We provide lessons learned regarding the need for adaptive behavior coordination, support during emergency situations, and clear communication of robotic actions and their consequences for fellow researchers who are developing an autonomous, low-cost service robot designed to interact with their users in domestic contexts. Our trials show the necessity to move out into actual user homes, as only there can we encounter issues such as misinterpretation of actions during unscripted human-robot interaction.


2015 ◽  
Vol 76 (12) ◽  
Author(s):  
Jing Zhao ◽  
Shafriza Nisha Basah ◽  
Shazmin Aniza Abdul Shukor

Building construction is in high demand in the major cities of Malaysia. However, despite this remarkable development, a lack of proper maintenance has caused a large portion of these properties to deteriorate over time. The project implemented here, Automated Detection of Physical Defect via Computer Vision, is a low-cost system that helps inspect wall condition using a Kinect camera. The system is able to classify two types of physical defect, cracks and holes, and state their level of severity. It uses an artificial neural network as the image classifier due to its reliability and consistency. The validity of the system is shown through experiments on synthetic and real image data. This automated physical defect detection could identify building defects early, quickly, and easily, resulting in cost savings and an extended building life span.
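
The general pattern of such an ANN image classifier is sketched below with scikit-learn's MLPClassifier; the synthetic features stand in for real wall-image patches, and the feature size, labels, and network shape are illustrative assumptions rather than the published system.

```python
# A minimal sketch of an ANN defect classifier (not the published
# system). Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 64))         # 200 patches, 64 features each (assumed)
y = rng.integers(0, 2, size=200)  # 0 = crack, 1 = hole (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```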


2020 ◽  
Author(s):  
Vysakh S Mohan

Edge processing for computer vision systems enables incorporating visual intelligence into mobile robotics platforms. Demand for low-power, low-cost, and small-form-factor devices is on the rise. This work proposes a unified platform to generate deep learning models compatible with edge devices from Intel, NVIDIA, and XaLogic. The platform enables users to create custom data annotations, train neural networks, and generate edge-compatible inference models. As a testimony to the tool's ease of use and flexibility, we explore two use cases: a vision-powered prosthetic hand and drone vision. Neural network models for these use cases will be built using the proposed pipeline and will be open-sourced. Online and offline versions of the tool, and the corresponding inference modules for edge devices, will also be made public so that users can create custom computer vision use cases.
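
One common step in pipelines of this kind is exporting a trained model to an interchange format that edge runtimes (such as Intel OpenVINO or NVIDIA TensorRT) can consume. The sketch below shows a PyTorch-to-ONNX export under that assumption; the placeholder network and file name are illustrative, not the paper's tooling.

```python
# A hypothetical export step: PyTorch model -> ONNX for edge runtimes.
# The MobileNetV2 placeholder and output file name are assumptions.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None)  # placeholder net
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # batch of one RGB image
torch.onnx.export(
    model,
    dummy_input,
    "edge_model.onnx",
    input_names=["image"],
    output_names=["logits"],
)
print("wrote edge_model.onnx")
```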


Author(s):  
Ananya Choudhury ◽  
Kandarpa Kumar Sarma

At present, around 15% of the world's population experience some form of disability, so there has been an enormous increase in demand for assistive techniques that overcome the constraints faced by people with physical impairments. More recently, gesture-based character recognition (GBCR) has emerged as an assistive tool of immense importance, especially for meeting the needs of persons with special necessities. GBCR systems serve as a powerful mediator for communication among people with hearing and speech impairments. They can also serve as a rehabilitative aid for people with motor disabilities who cannot write with pen on paper or have difficulty using common human-machine interface (HMI) devices. This chapter provides a glimpse of disability prevalence around the globe, and particularly in India, emphasizes the importance of learning-based GBCR systems in the practical education of differently-abled children, and highlights the novel research contributions made in this field.


2019 ◽  
Vol 9 (15) ◽  
pp. 3196 ◽  
Author(s):  
Lidia María Belmonte ◽  
Rafael Morales ◽  
Antonio Fernández-Caballero

Personal assistant robots provide novel technological solutions for monitoring people's activities and helping them in their daily lives. In this sense, unmanned aerial vehicles (UAVs) can also serve as a present and future model of assistant robots. To develop aerial assistants, it is necessary to address the issue of autonomous navigation based on visual cues. Indeed, navigating autonomously is still a challenge in which computer vision technologies tend to play an outstanding role. Thus, the design of vision systems and algorithms for autonomous UAV navigation and flight control has become a prominent research field in recent years. In this paper, a systematic mapping study is carried out in order to obtain a general view of the subject. The study provides an extensive analysis of papers that address computer vision in the following vision-based autonomous UAV tasks: (1) navigation, (2) control, (3) tracking or guidance, and (4) sense-and-avoid. The works considered in the mapping study, a total of 144 papers from an initial set of 2081, have been classified under the four categories above. Moreover, the type of UAV, the features of the vision systems employed, and the validation procedures are also analyzed. The results make it possible to draw conclusions about the research focuses, which UAV platforms are most used in each category, which vision systems are most frequently employed, and which types of tests are usually performed to validate the proposed solutions. The results of this systematic mapping study demonstrate the scientific community's growing interest in the development of vision-based solutions for autonomous UAVs and will make it possible to study the feasibility and characteristics of future UAVs taking the role of personal assistants.

