Real-time decoding of 5 finger movements from 2 EMG channels for mixed reality human-computer interaction

2021 ◽  
Author(s):  
Eric James McDermott ◽  
Thimm Zwiener ◽  
Ulf Ziemann ◽  
Christoph Zrenner

The search for optimized forms of human-computer interaction (HCI) has intensified alongside the growing potential for the combination of biosignals with virtual reality (VR) and augmented reality (AR) to enable the next generation of personal computing. At its core, this requires decoding the user's biosignals into digital commands. Electromyography (EMG) is a biosensor of particular interest due to the ease of data collection, the relatively high signal-to-noise ratio, its non-invasiveness, and the ability to interpret the signal as being generated by (intentional) muscle activity. Here, we investigate the potential of using data from a simple 2-channel EMG setup to differentiate 5 distinct movements. In particular, EMG was recorded from two bipolar sensors over the forearm muscles controlling the fingers (extensor digitorum, flexor digitorum profundus) while a subject performed 50 trials of dorsal extension and return for each of the five digits. The maximum and the mean data values across the trial were determined for each channel and used as features. A k-nearest neighbors (kNN) classification was performed, and overall 5-class classification accuracy reached 94% when using the full trial's time window, while simulated real-time classification reached 90.4% accuracy when using the constructed kNN model (k=3) with a 280 ms sliding window. Additionally, unsupervised learning was performed and a homogeneity of 85% was achieved. This study demonstrates that reliable decoding of different natural movements is possible with fewer than one channel per class, even without taking into account temporal features of the signal. The technical feasibility of this approach in a real-time setting was validated by sending real-time EMG data to a custom Unity3D VR application through Lab Streaming Layer to control a user interface.
Further use-cases of gamification and rehabilitation were also examined alongside integration of eye-tracking and gesture recognition for a sensor fusion approach to HCI and user intent.
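The classification pipeline the abstract describes (per-channel maximum and mean as features, kNN with k=3 over 50 trials per digit) can be sketched as follows. This is a minimal sketch under stated assumptions: the synthetic trial generator below is a hypothetical stand-in for the real 2-channel EMG recordings, which are not available here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(trial):
    # Feature vector per trial: [max_ch1, max_ch2, mean_ch1, mean_ch2],
    # mirroring the max/mean features used in the abstract.
    return np.concatenate([trial.max(axis=0), trial.mean(axis=0)])

# Hypothetical signal model: 5 digits x 50 trials, each trial a
# (280 samples, 2 channels) rectified-like window whose channel
# activation depends on the moved digit.
X, y = [], []
for digit in range(5):
    for _ in range(50):
        base = np.array([0.2 + 0.15 * digit, 1.0 - 0.15 * digit])
        trial = rng.normal(base, 0.05, size=(280, 2)) ** 2
        X.append(extract_features(trial))
        y.append(digit)
X, y = np.array(X), np.array(y)

# k=3 as reported in the abstract; 5-fold cross-validation in place of
# the paper's full-window vs. sliding-window evaluation.
clf = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-class CV accuracy: {scores.mean():.2f}")
```

With well-separated synthetic activations the 5-class accuracy is high; on real EMG, performance would depend on electrode placement and the sliding-window length, as the abstract's 94% (full window) vs. 90.4% (280 ms window) results suggest.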

2018 ◽  
Vol 09 (04) ◽  
pp. 841-848
Author(s):  
Kevin King ◽  
John Quarles ◽  
Vaishnavi Ravi ◽  
Tanvir Chowdhury ◽  
Donia Friday ◽  
...  

Background Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human–computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS)—technologies that can track the location of people and objects—have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems. Objectives We conducted a pilot study with the intention of improving the human–computer interaction of EHR systems by incorporating a RTLS. The aim of this study is to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and the number of interactions with the computer interface), and the outcome metric of patient identification accuracy. Methods A pilot study was conducted in a simulated emergency department using a locally developed camera-based RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored and actions recorded by trained observers. We sought a 50% improvement in time to locate patients, number of rooms searched to locate patients, and the number of mouse clicks necessary to perform those tasks. Results The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), rooms searched to find patients (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D for patient identification accuracy. Conclusion This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.


Author(s):  
Carl Smith

The contribution of this research is to argue that truly creative patterns for interaction within cultural heritage contexts must create situations and concepts that could not have been realised without the intervention of those interaction patterns. New forms of human-computer interaction and therefore new tools for navigation must be designed that unite the strengths, features, and possibilities of both the physical and the virtual space. The human-computer interaction techniques and mixed reality methodologies formulated during this research are intended to enhance spatial cognition while implicitly improving pattern recognition. This research reports on the current state of location-based technology including Mobile Augmented Reality (MAR) and GPS. The focus is on its application for use within cultural heritage as an educational and outreach tool. The key questions and areas to be investigated include: What are the requirements for effective digital intervention within the cultural heritage sector? What are the affordances of mixed and augmented reality? What mobile technology is currently being utilised to explore cultural heritage? What are the key projects? Finally, through a series of case studies designed and implemented by the author, some broad design guidelines are outlined. The chapter concludes with an overview of the main issues to consider when (re)engineering cultural heritage contexts.


Photonics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 90 ◽  
Author(s):  
Bosworth ◽  
Russell ◽  
Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs that are based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for the classification of brain states in real time, which can provide programmers with useful information to develop interfaces that are more intuitive and beneficial for the user than is currently possible with today's human-input devices (e.g., mouse and keyboard).


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. J35-J48 ◽  
Author(s):  
Bernard Giroux ◽  
Abderrezak Bouchedda ◽  
Michel Chouteau

We introduce two new traveltime picking schemes developed specifically for crosshole ground-penetrating radar (GPR) applications. The main objective is to automate, at least partially, the traveltime picking procedure and to provide first-arrival times that are closer in quality to those of manual picking approaches. The first scheme is an adaptation of a method based on cross-correlation of radar traces collated in gathers according to their associated transmitter-receiver angle. A detector is added to isolate the first cycle of the radar wave and to suppress secondary arrivals that might be mistaken for first arrivals. To improve the accuracy of the arrival times obtained from the cross-correlation lags, a time-rescaling scheme is implemented to resize the radar wavelets to a common time-window length. The second method is based on the Akaike information criterion (AIC) and continuous wavelet transform (CWT). It is not tied to the restrictive criterion of waveform similarity that underlies cross-correlation approaches, which is not guaranteed for traces sorted in common ray-angle gathers. It has the advantage of being fully automated. Performances of the new algorithms are tested with synthetic and real data. In all tests, the approach that adds first-cycle isolation to the original cross-correlation scheme improves the results. In contrast, the time-rescaling approach brings limited benefits, except when strong dispersion is present in the data. In addition, the performance of cross-correlation picking schemes degrades for data sets with disparate waveforms despite the high signal-to-noise ratio of the data. In general, the AIC-CWT approach is more versatile and performs well on all data sets. Only with data showing low signal-to-noise ratios is the AIC-CWT superseded by the modified cross-correlation picker.
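The AIC stage of the second picker can be illustrated with a short sketch. A standard AIC picker treats the trace as two stationary segments (noise before the arrival, signal after) and picks the split point that minimizes AIC; the sketch below shows only this stage, on a synthetic trace, and omits the CWT preprocessing the paper combines it with.

```python
import numpy as np

def aic_pick(trace):
    """First-arrival pick via the Akaike information criterion:
    the split index that minimizes
        AIC(k) = k*log(var(trace[:k])) + (n-k-1)*log(var(trace[k:]))
    separates the pre-arrival noise from the signal."""
    n = len(trace)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(trace[:k])
        v2 = np.var(trace[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic radar-like trace: background noise, then a damped
# oscillation arriving at sample 300.
rng = np.random.default_rng(1)
t = np.arange(1000)
trace = rng.normal(0.0, 0.05, 1000)
trace[300:] += np.sin(0.3 * t[:700]) * np.exp(-t[:700] / 150.0)

pick = aic_pick(trace)
print(f"picked first arrival at sample {pick}")
```

On clean onsets the minimum of the AIC curve falls at or very near the true arrival; the paper's contribution is pairing this criterion with the CWT so it remains robust on real crosshole GPR data.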


Author(s):  
Xiangyang Li ◽  
Zhili Zhang ◽  
Feng Liang ◽  
Qinhe Gao ◽  
Lilong Tan

To meet the human–computer interaction control (HCIC) requirements of multiple operators in collaborative virtual maintenance (CVM), real-time motion capture and simulation driving of multiple operators with an optical human motion capture system (HMCS) are proposed. The detailed realization process of real-time motion capture and data-driven animation for virtual operators in the CVM environment is presented to enable natural, online interactive operations. To ensure that the interactions of virtual operators remain cooperative and orderly given the input operations of the actual operators, a collaborative HCIC model is established according to the planning, allocation, and decision-making of different maintenance tasks, as well as the human–computer interaction and collaborative maintenance operation features among multiple maintenance trainees in the CVM process. Finally, results of the experimental implementation validate the effectiveness and practicability of the proposed methods, models, strategies and mechanisms.

