Computer Vision for Human-Computer Interaction Using Noninvasive Technology

2021 · Vol 2021 · pp. 1-15
Author(s): Janarthanan Ramadoss, J. Venkatesh, Shubham Joshi, Piyush Kumar Shukla, Sajjad Shaukat Jamal, et al.

Computer vision is a significant component of human-computer interaction (HCI) in interactive control systems. In general, the interaction between humans and computers relies on the flexibility of the interactive visualization system. Electromyography (EMG) is a bioelectric signal used in HCI that can be captured noninvasively by placing electrodes on the human hand. Owing to complex backgrounds, accurate recognition and analysis of human motion in real-time multitarget scenarios remain challenging in HCI. Further, the EMG signals of human hand motions are highly nonlinear, so a dynamic approach is needed to address the noise in them. Hence, this paper proposes the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model for human motion recognition. The Average Intrinsic Mode Function (AIMF) is used to reduce noise in the EMG signals. Furthermore, the paper introduces spatial thermographic imaging to overcome the limitations of conventional sensors in tasks such as gesture recognition and human target identification in multitarget scenarios. Human motion behavior in spatial thermographic images is examined via target trajectories, and body-movement kinematics is employed to distinguish human targets from objects. The experimental findings demonstrate that the proposed method reduces noise by 7.2% and achieves 97.2% accuracy in human motion recognition and human target identification.
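The abstract does not spell out how the AIMF denoising is computed. A minimal sketch, assuming one common reading (empirical mode decomposition followed by discarding the highest-frequency intrinsic mode functions) and using the PyEMD package; the function name and mode count are illustrative, not the authors' implementation:

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def emd_denoise(emg, n_noise_imfs=2):
    """Decompose the EMG trace into intrinsic mode functions (IMFs) and
    rebuild it without the first, highest-frequency modes, which usually
    carry most of the measurement noise."""
    imfs = EMD().emd(np.asarray(emg, dtype=float))
    return imfs[n_noise_imfs:].sum(axis=0)

# Toy usage: a slow burst corrupted with white noise.
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)
denoised = emd_denoise(noisy)
```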

Complexity · 2021 · Vol 2021 · pp. 1-12
Author(s): Xiangkun Li, Guoqing Sun, Yifei Li

With the development of science and technology, the introduction of virtual reality has pushed human-computer interaction technology to a new height. The combination of virtual reality and human-computer interaction is increasingly applied in military simulation, medical rehabilitation, game creation, and other fields. Action is the basis of human behavior, and the analysis of human behavior and action is an important research direction. Recognition based on behavior and action is convenient, intuitive, strongly interactive, and rich in expressive information, making it the first choice of many researchers for human behavior analysis. However, human motion and motion pictures are complex objects with many ambiguous factors that are difficult to express and process. Traditional motion recognition is usually based on two-dimensional color images, yet two-dimensional RGB images are vulnerable to background disturbance, illumination, environment, and other factors that interfere with human target detection. In recent years, more and more researchers have begun to apply fuzzy mathematics to the recognition of human behaviors. In this work, plantar pressure data under different motion modes were collected experimentally, and the current gait information was analyzed. Key gait events, including toe-off and heel strike, were identified by dynamic baseline monitoring. To guard against erroneous detections, a screening window filters out repeated recognition events within a given period, which greatly improves recognition accuracy and provides important gait information for motion pattern recognition. Similarity matching is performed against each template; the correct rate of motion feature extraction is 90.2%, and the correct rate of motion pattern recognition is 96.3%, verifying the feasibility and effectiveness of human motion recognition based on fuzzy theory. The work is intended to provide processing techniques and application examples for artificial intelligence recognition applications.
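A minimal sketch of the event-detection scheme as the abstract describes it: a dynamic baseline (here a moving average, an assumption) flags heel strikes and toe-offs, and a refractory screening window discards repeated detections. All thresholds and window lengths are illustrative, not the paper's values:

```python
import numpy as np

def detect_gait_events(pressure, fs, baseline_win=0.5, k=1.5, refractory=0.25):
    """Detect heel strikes (pressure rising above a dynamic baseline) and
    toe-offs (pressure falling back below it). The baseline is a moving
    average of the plantar-pressure signal; crossings closer together than
    `refractory` seconds are treated as duplicates and discarded."""
    pressure = np.asarray(pressure, dtype=float)
    n = max(1, int(baseline_win * fs))
    baseline = np.convolve(pressure, np.ones(n) / n, mode="same")
    above = pressure > k * baseline
    heel_strikes, toe_offs, last = [], [], -np.inf
    for i in range(1, len(pressure)):
        if above[i] != above[i - 1] and (i - last) / fs >= refractory:
            (heel_strikes if above[i] else toe_offs).append(i)
            last = i
    return heel_strikes, toe_offs
```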


2021 · Vol 2021 · pp. 1-8
Author(s): Peng Wang

With the rapid development of science and technology in today's society, industries across the board are pursuing digitization and intelligence, and pattern recognition and computer vision are continually innovating. Computer vision aims to let computers, cameras, and other machines receive information the way humans do, analyze and process its semantic content, and formulate coping strategies. As an important research direction in computer vision, human motion recognition has gained new solutions with the rise of deep learning. Human motion recognition technology has high market value and broad application prospects in intelligent monitoring, motion analysis, human-computer interaction, and medical monitoring. This paper studies the recognition of sports training actions based on a deep learning algorithm, and experimental work demonstrates the validity of the proposed approach.
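The abstract does not specify the network. Purely as an illustration of the frame-based deep-learning setup it describes, here is a minimal PyTorch sketch; the architecture and sizes are assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class ActionNet(nn.Module):
    """Frame-level CNN with temporal average pooling for short clips
    shaped (batch, time, channels, height, width)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        return self.head(feats.mean(dim=1))  # average frame features over time

logits = ActionNet(n_classes=10)(torch.randn(2, 8, 3, 64, 64))
```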


2021
Author(s): Lu Jin, Zhenhong Li, Zekun Liu, Bethany Richardson, Yan Zheng, et al.

Human motion recognition using flexible/stretchable wearable sensors holds great promise for human-machine interaction and biomedical engineering. However, measuring individual joint motion with multiple degrees of freedom normally requires many sensor networks pinpointed onto the targeted area, restricting body movement. This stems from a limitation of current wearable sensors: inferring a sensor's deformation from its electrical signal is challenging. A new kind of wearable sensor that can recognize how it deforms could radically solve this issue. Here, we report a wearable integrated piezoelectric film sensor (i-PFS) capable of detecting basic deformations. To achieve this, we propose, for the first time, a design concept that uses uniaxially drawn piezoelectric poly-L-lactic acid (PLLA) films to engineer unimodal tension, bend, shear, and twist sensors that respond only to their corresponding deformations, with enhanced piezoelectric response and a self-shielding function. On this basis, we construct the i-PFS by combining the four unimodal sensors and demonstrate that it can detect and differentiate individual deformation modes: tensioning, bending, shearing, and twisting. To the best of our knowledge, the i-PFS is the world's first film-based sensor that identifies all of these deformations. To demonstrate its potential impact, we design an i-PFS sleeve and glove that capture various wrist motions and subtle finger movements, respectively. We also develop a virtual text-entry interface system using the glove and a deep neural network, achieving a character classification accuracy of about 90%. The i-PFS technology is expected to provide a turning point in developing motion capture systems.
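The abstract names the classifier only as "a deep neural network". A minimal PyTorch sketch of the kind of mapping involved, from windows of the four unimodal channels to character classes; the window length, layer sizes, and class count are assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

# Each sample is a window of the four unimodal channels
# (tension, bend, shear, twist) captured by the i-PFS glove.
model = nn.Sequential(
    nn.Flatten(),                       # (batch, 4, 64) -> (batch, 256)
    nn.Linear(4 * 64, 128), nn.ReLU(),
    nn.Linear(128, 26),                 # e.g., 26 letter classes (assumed)
)
logits = model(torch.randn(8, 4, 64))   # batch of 8 sensor windows
pred = logits.argmax(dim=1)             # predicted character indices
```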


2021 · Vol 18 (1) · pp. 172988142098321
Author(s): Anzhu Miao, Feiping Liu

Human motion recognition is a branch of computer vision research widely used in fields such as interactive entertainment. Most work focuses on recognition methods based on traditional video streams. Traditional RGB video contains rich color, edge, and other information, but because of complex backgrounds, variable illumination, occlusion, viewing-angle changes, and other factors, the accuracy of motion recognition algorithms is not high. To address these problems, this article puts forward human motion recognition based on the extreme learning machine (ELM). ELM assigns the hidden-layer parameters randomly rather than learning them, which greatly reduces training time and computational complexity. In this article, the interframe difference method is used to detect the motion region, the HOG3D feature descriptor is then used for feature extraction, and finally ELM performs classification and recognition. The results show that the proposed method achieves good performance in human motion recognition.
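ELM itself is a standard algorithm, so its core can be sketched directly; the interframe differencing and HOG3D extraction are not shown, and the toy data below merely stands in for such descriptors:

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: input weights and
    biases are random and stay fixed; only the output weights are solved
    in closed form with a pseudoinverse, so there is no iterative training."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random nonlinear feature map

    def fit(self, X, Y):
        # Y is one-hot: least-squares output weights via pseudoinverse.
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage with random stand-ins for HOG3D feature vectors.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 96))       # 200 descriptors
y = rng.integers(0, 4, 200)              # 4 motion classes
clf = ELM(n_inputs=96, n_hidden=256).fit(X, np.eye(4)[y])
acc = (clf.predict(X) == y).mean()
```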


2021 · pp. 1-1
Author(s): Mu-Chun Su, Pang-Ti Tai, Jieh-Haur Chen, Yi-Zeng Hsieh, Shu-Fang Lee, et al.

2018 · Vol 09 (04) · pp. 841-848
Author(s): Kevin King, John Quarles, Vaishnavi Ravi, Tanvir Chowdhury, Donia Friday, et al.

Background: Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human-computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS), technologies that can track the location of people and objects, have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems.

Objectives: We conducted a pilot study with the intention of improving the human-computer interaction of EHR systems by incorporating an RTLS. The aim of this study is to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and number of interactions with the computer interface) and on the outcome metric of patient identification accuracy.

Methods: A pilot study was conducted in a simulated emergency department using a locally developed camera-based RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored, and actions were recorded by trained observers. We sought a 50% improvement in time to locate patients, number of rooms searched to locate patients, and number of mouse clicks necessary to perform those tasks.

Results: The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), rooms searched to find a patient (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were all significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D in patient identification accuracy.

Conclusion: This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests that EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.
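As a toy illustration only (not the study's implementation), the trigger logic the Methods section describes, opening the chart for the patient assigned to whichever exam room the tracked clinician enters, can be sketched as follows; the zone coordinates and identifiers are hypothetical:

```python
# Hypothetical room zones (metres) and room-to-patient assignments.
ROOM_ZONES = {"exam_1": ((0, 0), (3, 4)), "exam_2": ((4, 0), (7, 4))}
ROOM_PATIENT = {"exam_1": "patient_017", "exam_2": "patient_042"}

def room_for(pos):
    """Return the exam room whose zone contains the tracked position."""
    x, y = pos
    for room, ((x0, y0), (x1, y1)) in ROOM_ZONES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return room
    return None

def on_position_update(pos, open_chart):
    """On each RTLS position update, open the assigned patient's chart
    automatically, replacing the manual room search and mouse clicks."""
    room = room_for(pos)
    if room:
        open_chart(ROOM_PATIENT[room])

on_position_update((1.5, 2.0), open_chart=print)  # -> patient_017
```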

