A Human-Computer Interaction System Utilizing Inertial Measurement Unit and Convolutional Neural Network

Author(s):  
Shahriar Rahman Fahim ◽  
Yeahia Sarker ◽  
Md. Rashiduzzaman ◽  
Omar Kamrul Islam ◽  
Subrato K. Sarker ◽  
...  
Author(s):  
Md. Al-Amin ◽  
Ruwen Qin ◽  
Wenjin Tao ◽  
David Doell ◽  
Ravon Lingard ◽  
...  

Assembly carries paramount importance in manufacturing. Being able to support workers in real time to maximize their positive contributions to assembly is of tremendous interest to manufacturers. Human action recognition has been a way to automatically analyze and understand worker actions to support real-time assistance for workers and facilitate worker–machine collaboration. Assembly actions are distinct from activities that have been well studied in the action recognition literature. Actions taken by assembly workers are intricate, variable, and may involve very fine motions. Therefore, recognizing assembly actions remains a challenging task. This paper proposes using only two wearable devices that respectively capture the inertial measurement unit data of each hand of workers. Two convolutional neural network models with an identical architecture are then independently trained on the two sources of inertial measurement unit data to recognize, respectively, the right-hand and the left-hand actions of an assembly worker. Classification results of the two convolutional neural network models are fused to yield a final action recognition result, because the two hands often collaborate in assembly operations. Transfer learning is implemented to adapt the action recognition models to subjects whose data were not included in the dataset used for training the models. One operation in assembling a Bukito three-dimensional printer, which is composed of seven actions, is used to demonstrate the implementation and assessment of the proposed method. Results from the study demonstrate that the proposed approach effectively improves the prediction accuracy at both the action level and the subject level. The work builds a foundation for advanced action recognition systems such as multimodal sensor-based action recognition.
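The abstract states that the two per-hand classifier outputs are fused into one decision but does not specify the fusion rule. A common choice is score-level fusion, i.e. a weighted average of the two softmax probability vectors; the sketch below illustrates that idea only (the weighting and function name are assumptions, not the authors' method).

```python
import numpy as np

def fuse_hand_scores(right_probs, left_probs, w_right=0.5):
    """Score-level fusion of per-hand classifier outputs.

    right_probs, left_probs: softmax outputs (one entry per action class)
    of the two independently trained CNNs. Returns the fused class index
    and the fused probability vector (a weighted average of the two).
    """
    fused = w_right * np.asarray(right_probs) + (1.0 - w_right) * np.asarray(left_probs)
    return int(np.argmax(fused)), fused

# Example: the right-hand model is unsure between actions 0 and 2,
# while the left-hand model clearly favours action 2, so fusion picks 2.
right = [0.45, 0.10, 0.45]
left = [0.10, 0.10, 0.80]
label, fused = fuse_hand_scores(right, left)
```

Averaging probabilities (rather than taking a hard majority vote) lets a confident hand override an uncertain one, which matches the intuition that the two hands jointly determine the assembly action.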


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 181 ◽  
Author(s):  
Changhui Jiang ◽  
Yuwei Chen ◽  
Shuai Chen ◽  
Yuming Bo ◽  
Wei Li ◽  
...  

Currently, positioning, navigation, and timing information is becoming more and more vital for both civil and military applications. Integration of the global navigation satellite system and the inertial navigation system is the most popular solution for positioning various carriers or vehicles. As is well known, global navigation satellite system positioning accuracy will degrade in signal-challenged environments. Under this condition, the integrated system falls back to a standalone inertial navigation system to output navigation solutions. However, without outer aiding, positioning errors of the inertial navigation system diverge quickly due to the noise contained in the raw data of the inertial measurement unit. In particular, the micro-electro-mechanical system (MEMS) inertial measurement unit experiences more complex errors due to its manufacturing technology. To improve the navigation accuracy of inertial navigation systems, one effective approach is to model the raw signal noise and suppress it. Commonly, an inertial measurement unit is composed of three gyroscopes and three accelerometers; among them, the gyroscopes play an important role in the accuracy of the inertial navigation system's navigation solutions. Motivated by this problem, in this paper, an advanced deep recurrent neural network was employed and evaluated for noise modeling of a MEMS gyroscope. Specifically, a deep long short-term memory recurrent neural network and a deep gated recurrent unit recurrent neural network were combined to construct a two-layer recurrent neural network for noise modeling. In this method, the gyroscope data were treated as a time series, and a real dataset from a MEMS inertial measurement unit was employed in the experiments. The results showed that, compared to the two-layer long short-term memory network, the three-axis attitude errors of the mixed long short-term memory–gated recurrent unit network decreased by 7.8%, 20.0%, and 5.1%.
When compared with the two-layer gated recurrent unit, the proposed method showed 15.9%, 14.3%, and 10.5% improvement. These results support a positive conclusion on the performance of the designed method: the mixed deep recurrent neural network outperformed both the two-layer gated recurrent unit and the two-layer long short-term memory recurrent neural networks.
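Treating gyroscope output as a time series for a recurrent noise model requires slicing the signal into fixed-length input windows with a target sample to predict. The sketch below shows one common windowing scheme; the window length and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a 1-D gyroscope signal into (input window, future sample)
    pairs, as typically fed to LSTM/GRU sequence models for noise
    prediction. `horizon` is how many steps ahead the target lies."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1])
    return np.asarray(X), np.asarray(y)

# A 10-sample signal with a window of 4 yields 6 training pairs.
sig = np.arange(10, dtype=float)
X, y = make_windows(sig, window=4)
```

Each row of `X` is one input sequence for the recurrent network, and the corresponding entry of `y` is the sample it should predict; subtracting the prediction from the raw signal is one way to suppress the modeled noise.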


2019 ◽  
Vol 5 (1) ◽  
pp. 401-403
Author(s):  
Michael Munz ◽  
Nicolas Wolf

Abstract: In this work, a methodology for the classification of breathing patterns in order to prevent sudden infant death (SID) incidents is presented. The basic idea is to classify breathing patterns which might lead to SID prior to an incident. A thorax sensor is proposed, which is able to simulate breathing patterns given by certain parameters. A sensor combination of conductive strain fabric and an inertial measurement unit is used for data acquisition. The data are then classified using a neural network.
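The abstract mentions simulating breathing patterns from certain parameters but does not give the parameterisation. A minimal sketch, assuming a sinusoidal thorax-expansion model with an optional apnea pause (all parameter names and the waveform choice are hypothetical, for illustration only):

```python
import math

def breathing_pattern(duration_s, rate_bpm, amplitude=1.0, apnea=None, fs=50):
    """Synthesise a parameterised breathing signal: a sinusoid at the
    given breaths-per-minute rate, sampled at `fs` Hz, with chest
    movement suppressed during an optional (start_s, end_s) apnea
    interval -- the kind of pattern a SID classifier must flag."""
    samples = []
    for n in range(int(duration_s * fs)):
        t = n / fs
        if apnea and apnea[0] <= t < apnea[1]:
            samples.append(0.0)  # no chest movement during apnea
        else:
            samples.append(amplitude * math.sin(2 * math.pi * (rate_bpm / 60.0) * t))
    return samples

# 10 s of breathing at 30 breaths/min with a 2 s apnea from t = 4 s.
sig = breathing_pattern(10, rate_bpm=30, apnea=(4.0, 6.0))
```

Generated signals like this can serve as labelled training data for the neural network before real strain-fabric and inertial measurement unit recordings are available.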


Author(s):  
K. Martin Sagayam ◽  
A. Diana Andrushia ◽  
Ahona Ghosh ◽  
Omer Deperlioglu ◽  
Ahmed A. Elngar

In recent technology, there is tremendous growth in computer applications that highlight human–computer interaction (HCI), such as augmented reality (AR) and the Internet of Things (IoT). As a consequence, hand gesture recognition has become a very active research area in computer vision. Body language is a vital means of communication between people, whether as emphasis accompanying a voice message or as a complete message on its own. Thus, automatic hand gesture recognition systems can be used to enhance human–computer interaction, and many approaches to hand gesture recognition have been designed. However, most of these methods involve hybrid pipelines of image pre-processing, segmentation, and classification. This paper describes how to create a hand gesture model easily and quickly with a well-tuned deep convolutional neural network. Experiments were performed on the Cambridge Hand Gesture data set to illustrate the success and efficiency of the convolutional neural network. An accuracy of 96.66% was achieved, with sensitivity and specificity of 85% and 98.12%, respectively, averaged over 20 runs. These results were compared with existing works using the same dataset and were found to be higher than those of the hybrid methods.
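The reported accuracy, sensitivity, and specificity follow the standard definitions over confusion-matrix counts. The sketch below shows those definitions; the example counts are hypothetical values chosen only to illustrate the arithmetic (they are not the paper's data, although they happen to reproduce rates close to those reported).

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall of the positive class), and
    specificity (recall of the negative class) from the four
    confusion-matrix counts of a binary evaluation."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true positives / all actual positives
    specificity = tn / (tn + fp)   # true negatives / all actual negatives
    return accuracy, sensitivity, specificity

# Hypothetical counts: 17 of 20 positives and 157 of 160 negatives correct.
acc, sens, spec = binary_metrics(tp=17, fp=3, tn=157, fn=3)
```

For multi-class gesture recognition these metrics are typically computed one-vs-rest per class and then averaged, which is consistent with the abstract reporting averages over 20 runs.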

