Real-time Smartphone Activity Classification Using Inertial Sensors—Recognition of Scrolling, Typing, and Watching Videos While Sitting or Walking

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 655 ◽  
Author(s):  
Sijie Zhuo ◽  
Lucas Sherlock ◽  
Gillian Dobbie ◽  
Yun Sing Koh ◽  
Giovanni Russello ◽  
...  

By developing awareness of the activities a user is performing on their smartphone, such as scrolling feeds, typing and watching videos, we can build application features that benefit users, such as personalization. Real-time smartphone activities cannot currently be accessed directly because of standard smartphone privilege restrictions, and if internal movement sensors can detect them, there may be implications for sensor access policies. Our research seeks to understand whether data from existing smartphone inertial measurement unit (IMU) sensors (triaxial accelerometers, gyroscopes and magnetometers) can be used to classify typical human smartphone activities. We designed and conducted a study with human participants, using an Android app to collect motion data during scrolling, typing and watching videos while walking or seated, along with a baseline of smartphone non-use while sitting and walking. We then trained a machine learning (ML) model to perform real-time activity recognition of those eight states. We investigated various algorithms and parameters for the best accuracy. Our optimal solution achieved an accuracy of 78.6% with the Extremely Randomized Trees algorithm, data sampled at 50 Hz and 5-s windows. We conclude by discussing the viability of using IMU sensors to recognize common smartphone activities.
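A rough sketch of such a pipeline is shown below. The abstract does not give the paper's feature set, so per-axis mean and standard deviation over each 5-s, 50 Hz window are an assumed stand-in, and the two-class IMU data here are synthetic:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

FS = 50          # sampling rate (Hz), as in the paper
WIN = 5 * FS     # 5-second windows -> 250 samples per window

def window_features(signal):
    """Reduce one (WIN, n_axes) window to per-axis mean/std features.
    The paper's exact feature set is not given; mean/std is a stand-in."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0)])

rng = np.random.default_rng(0)
# Hypothetical 9-axis IMU windows for two of the eight states:
# class 0 ~ low motion (e.g. seated non-use), class 1 ~ high motion (e.g. walking).
X = np.array([window_features(rng.normal(c * 3.0, 1.0, size=(WIN, 9)))
              for c in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)

# Extremely Randomized Trees, the algorithm the paper found best.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

With such clearly separable synthetic classes the held-out accuracy is near 1.0; the paper's 78.6% reflects the much harder real eight-state problem.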

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 919 ◽  
Author(s):  
Hao Du ◽  
Wei Wang ◽  
Chaowen Xu ◽  
Ran Xiao ◽  
Changyin Sun

How to estimate the state of an unmanned aerial vehicle (UAV) in real time across multiple environments remains a challenge. Although the global navigation satellite system (GNSS) has been widely applied, drones cannot perform position estimation when a GNSS signal is unavailable or disturbed. In this paper, the problem of state estimation in multiple environments is solved by employing an Extended Kalman Filter (EKF) algorithm to fuse data from multiple heterogeneous sensors (MHS), including an inertial measurement unit (IMU), a magnetometer, a barometer, a GNSS receiver, an optical flow sensor (OFS), Light Detection and Ranging (LiDAR), and an RGB-D camera. Finally, the robustness and effectiveness of the multi-sensor data fusion system based on the EKF algorithm are verified by field flights in unstructured, indoor, outdoor, and indoor and outdoor transition scenarios.
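The core fusion idea can be illustrated with a minimal 1-D filter, the linear special case of the EKF, fusing only two of the seven sensors: IMU acceleration drives the prediction and a position sensor (e.g. GNSS) corrects it. Noise parameters `q` and `r` are illustrative assumptions:

```python
import numpy as np

def kalman_fuse(accel, pos_meas, dt=0.02, q=1e-3, r=1.0):
    """Minimal 1-D linear Kalman filter: IMU acceleration drives the
    prediction step, a position measurement corrects the update step.
    A sketch of the fusion idea only; the paper's EKF fuses seven sensors."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    B = np.array([0.5 * dt**2, dt])         # acceleration control input
    H = np.array([[1.0, 0.0]])              # we measure position only
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)
    out = []
    for a, z in zip(accel, pos_meas):
        x = F @ x + B * a                   # predict with the IMU input
        P = F @ P @ F.T + q * np.eye(2)
        K = P @ H.T / (H @ P @ H.T + r)     # Kalman gain (scalar innovation)
        x = x + (K * (z - H @ x)).ravel()   # correct with the position sensor
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

When GNSS drops out, the update step is simply skipped and the IMU prediction carries the estimate, which is the failure mode the paper's multi-sensor fusion is designed to cover.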


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Shashidhar Patil ◽  
Dubeom Kim ◽  
Seongsill Park ◽  
Youngho Chai

We present a wireless-inertial-measurement-unit- (WIMU-) based hand motion analysis technique for handwriting recognition in three-dimensional (3D) space. The proposed handwriting recognition system is not bounded by any limitations or constraints; users have the freedom and flexibility to write characters in free space. It uses hand motion analysis to segment hand motion data from a WIMU device that incorporates magnetic, angular rate, and gravity (MARG) sensors and a sensor fusion algorithm, automatically distinguishing segments that represent handwriting from nonhandwriting data in continuous hand motion data. A dynamic time warping (DTW) algorithm is used to recognize handwriting in real time. We demonstrate that a user can freely write in air using an intuitive WIMU as an input and hand motion analysis device to recognize the handwriting in 3D space. The experimental results for recognizing handwriting in free space show that the proposed method is effective and efficient, and that it can extend to other natural interaction techniques, such as computer games and real-time hand gesture recognition applications.
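The DTW core of such a recognizer is compact; a minimal version over 1-D sequences (the paper's segmentation and MARG fusion steps are omitted, and absolute difference is assumed as the local cost) looks like this:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    with absolute difference as the local cost. Warping lets sequences
    written at different speeds still match closely."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A character is then recognized by computing this distance against a template per character and taking the nearest one; note that a time-stretched copy of a template scores a distance of zero.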


Author(s):  
Zahari Taha ◽  
Mohd Yashim Wong ◽  
Hwa Jen Yap ◽  
Amirul Abdullah ◽  
Wee Kian Yeo

Immersion is one of the most important aspects in ensuring the applicability of Virtual Reality systems to training regimes aiming to improve performance. To ensure that this key aspect is met, the registration of motion between the real world and the virtual environment must be made as accurate and as low latency as possible. Thus, an in-house Inertial Measurement Unit (IMU) system was developed for tracking the movement of the player’s racquet. This IMU tracks 6 DOF motion data and transmits it to the mobile training system for processing. Physically, the custom motion sensor is built into the shape of a racquet grip to give a more natural sensation when swinging the racquet. In addition, an adaptive filter framework is established to cope with different racquet movements automatically, enabling real-time 6 DOF tracking by balancing jitter and latency. Experiments were performed to compare the efficacy of our approach with other conventional tracking methods such as the Microsoft Kinect. The results demonstrated improved accuracy and lower latency compared with the aforementioned methods.
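The jitter-versus-latency trade-off mentioned above is commonly handled with a speed-adaptive low-pass filter, in the spirit of the One Euro filter. The paper's actual adaptive-filter framework is not described in this abstract, so the class below is an illustrative stand-in with assumed parameters `fc_min` and `beta`:

```python
import math

class AdaptiveLowPass:
    """First-order low-pass whose cutoff rises with signal speed:
    heavy smoothing (low jitter) when the racquet is nearly still,
    light smoothing (low latency) during fast swings."""
    def __init__(self, fs=100.0, fc_min=1.0, beta=0.05):
        self.fs, self.fc_min, self.beta = fs, fc_min, beta
        self.prev = None

    def _alpha(self, fc):
        # Smoothing factor of a first-order low-pass at cutoff fc (Hz).
        tau = 1.0 / (2.0 * math.pi * fc)
        return 1.0 / (1.0 + tau * self.fs)

    def update(self, x):
        if self.prev is None:
            self.prev = x
            return x
        speed = abs(x - self.prev) * self.fs     # crude derivative estimate
        fc = self.fc_min + self.beta * speed     # faster motion -> higher cutoff
        self.prev = self.prev + self._alpha(fc) * (x - self.prev)
        return self.prev
```

Slow drift and sensor noise are filtered hard, while a sudden swing raises the cutoff so the output follows the racquet with little lag.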


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3827 ◽  
Author(s):  
Minwoo Kim ◽  
Jaechan Cho ◽  
Seongjoo Lee ◽  
Yunho Jung

We propose an efficient hand gesture recognition (HGR) algorithm, which can cope with time-dependent data from an inertial measurement unit (IMU) sensor and support real-time learning for various human-machine interface (HMI) applications. Although the data extracted from IMU sensors are time-dependent, most existing HGR algorithms do not consider this characteristic, which results in the degradation of recognition performance. Because the dynamic time warping (DTW) technique considers the time-dependent characteristic of IMU sensor data, the recognition performance of DTW-based algorithms is better than that of others. However, the DTW technique requires a very complex learning algorithm, which makes it difficult to support real-time learning. To solve this issue, the proposed HGR algorithm is based on a restricted column energy (RCE) neural network, which has a very simple learning scheme in which neurons are activated when necessary. By replacing the metric calculation of the RCE neural network with DTW distance, the proposed algorithm exhibits superior recognition performance for time-dependent sensor data while supporting real-time learning. Our verification results on a field-programmable gate array (FPGA)-based test platform show that the proposed HGR algorithm can achieve a recognition accuracy of 98.6% and supports real-time learning and recognition at an operating frequency of 150 MHz.
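A much-simplified sketch of the paper's idea, an RCE network whose metric is the DTW distance, is given below. The commit/shrink rules follow the standard RCE scheme; the initial radius `r0` and the 1-D gesture encoding are assumptions for illustration:

```python
def dtw(a, b):
    # DTW distance with absolute-difference cost (the network's local metric).
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

class RceDtwClassifier:
    """Simplified restricted-column-energy (RCE) network with DTW as its
    metric: a neuron (prototype) is committed only when no same-class
    prototype already covers a training sample, and a prototype's radius
    shrinks when it would cover the wrong class. Learning is therefore a
    single cheap pass, which is what enables real-time learning."""
    def __init__(self, r0=5.0):
        self.r0 = r0
        self.protos = []          # list of (sequence, label, radius)

    def fit_one(self, seq, label):
        covered = False
        for i, (p, lab, r) in enumerate(self.protos):
            d = dtw(seq, p)
            if lab == label and d < r:
                covered = True
            if lab != label and d < r:
                self.protos[i] = (p, lab, d)          # shrink conflicting radius
        if not covered:
            self.protos.append((seq, label, self.r0))  # commit a new neuron

    def predict(self, seq):
        return min(self.protos, key=lambda t: dtw(seq, t[0]))[1]
```

Because committing a neuron is just storing a template, new gestures can be taught on the fly, unlike a DTW system that needs an expensive offline learning stage.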


2020 ◽  
Vol 110 (6) ◽  
pp. 2647-2660
Author(s):  
Nikolaj Dahmen ◽  
Roland Hohensinn ◽  
John Clinton

ABSTRACT The 2016 Mw 7.0 Kumamoto earthquake resulted in exceptional datasets of Global Navigation Satellite Systems (GNSS) and seismic data. We explore the spatial similarity of the signals and investigate procedures for combining collocated sensor data. GNSS enables the direct observation of the long-period ground displacements, limited by noise levels in regimes of millimeters to several centimeters. Strong-motion accelerometers are inertial sensors and therefore optimally resolve middle- to high-frequency strong ground motion. The double integration from acceleration to displacement amplifies long-period errors introduced by tilt, rotation, noise, and nonlinear instrument responses and can lead to large nonphysical drifts. For the case study of the Kumamoto earthquake, 39 GNSS stations (1 sample/s) with nearby strong-motion accelerometers (100 samples/s) are investigated. The GNSS waveforms obtained by precise point positioning under real-time conditions prove to be very similar to the postprocessed result. Real-time GNSS and nearby accelerometers show consistent observations for periods between ∼3–5 and ∼50–100 s. The matching frequency range is bounded by the long-period noise of the accelerometer and the low signal-to-noise ratio (SNR) of GNSS for small displacements close to its noise level. Current procedures for fusing the data with a Kalman filter are verified for the dataset of this event. Combined data result in a very broadband waveform that covers the optimal frequency range of each sensor. We explore how to integrate fused processing in a real-time network, including event detection and magnitude estimation. Carrying out a statistical test on the GNSS records allows us to identify seismic events and sort out stations with a low SNR, which would otherwise impair the quality of downstream products.
The results of this study reinforce the emerging consensus that there is real benefit to collocating GNSS and strong-motion sensors for monitoring moderate-to-large earthquakes.
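The division of the frequency band described above can be illustrated in simplified form. The paper fuses the data with a Kalman filter; the first-order complementary filter below is a simpler stand-in that shows the same principle (GNSS displacement trusted at long periods, accelerometer-derived displacement at short periods), with the crossover frequency `fc` an assumed parameter:

```python
import numpy as np

def combine_broadband(gnss_disp, acc_disp, fs=1.0, fc=0.05):
    """Complementary combination of two displacement estimates:
    low-pass the GNSS displacement, high-pass the (double-integrated)
    accelerometer displacement, and sum. First-order filters with a
    crossover at fc (Hz); fs is the common sampling rate."""
    alpha = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)
    lp_g = np.zeros_like(gnss_disp)
    lp_a = np.zeros_like(acc_disp)
    for k in range(1, len(gnss_disp)):
        lp_g[k] = lp_g[k - 1] + alpha * (gnss_disp[k] - lp_g[k - 1])
        lp_a[k] = lp_a[k - 1] + alpha * (acc_disp[k] - lp_a[k - 1])
    # Low band from GNSS, high band (input minus its low-pass) from accelerometer.
    return lp_g + (acc_disp - lp_a)
```

Because the low-pass and high-pass responses sum to unity, a signal seen identically by both sensors passes through unchanged, while each sensor's out-of-band errors (accelerometer drift, GNSS high-frequency noise) are suppressed.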


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2814
Author(s):  
Tsige Tadesse Alemayoh ◽  
Jae Hoon Lee ◽  
Shingo Okamoto

For the effective application of thriving human-assistive technologies in healthcare services and human–robot collaborative tasks, computing devices must be aware of human movements. Developing a reliable real-time activity recognition method for the continuous and smooth operation of such smart devices is imperative. To achieve this, light and intelligent methods that use ubiquitous sensors are pivotal. In this study, with the correlation of time series data in mind, a new method of data structuring for deeper feature extraction is introduced. The activity data were collected using a smartphone with the help of an exclusively developed iOS application. Data from eight activities were shaped into single- and double-channel forms to extract deep temporal and spatial features of the signals. In addition to the time domain, raw data were represented via the Fourier and wavelet domains. Among the several neural network models used to fit the deep-learning classification of the activities, a convolutional neural network with a double-channeled time-domain input performed best. This method was further evaluated using other public datasets, and better performance was obtained. The practicability of the trained model was finally tested on a computer and a smartphone in real time, where it demonstrated promising results.
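The abstract does not spell out the exact data structuring, so the sketch below shows one plausible reading: columns 0–2 of a window are assumed to be accelerometer axes and 3–5 gyroscope axes, the double-channel tensor stacks them for a CNN, and Fourier magnitudes give the alternative domain representation:

```python
import numpy as np

def structure_window(window):
    """Shape one IMU window (T samples x 6 axes) for a CNN.
    Channel 0 holds the time-domain accelerometer, channel 1 the
    gyroscope; the one-sided FFT magnitudes provide a Fourier-domain
    view of the same window. Layout and split are assumptions."""
    accel, gyro = window[:, :3], window[:, 3:6]
    double_channel = np.stack([accel, gyro])    # shape (2, T, 3)
    freq = np.abs(np.fft.rfft(window, axis=0))  # shape (T//2 + 1, 6)
    return double_channel, freq

w = np.random.default_rng(0).normal(size=(128, 6))  # one 128-sample window
dc, fr = structure_window(w)
```

Keeping correlated axes together in one channel is what lets the convolution kernels pick up cross-axis spatial structure in addition to temporal structure.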


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5342
Author(s):  
Ashok Kumar Patil ◽  
Adithya Balasubramanyam ◽  
Jae Yeong Ryu ◽  
Pavan Kumar B N ◽  
Bharatesh Chakravarthi ◽  
...  

Today, advances in sensing technology enable the use of multiple sensors to track human motion and activity precisely. Tracking human motion has various applications, such as fitness training, healthcare, rehabilitation, human–computer interaction, virtual reality, and activity recognition. Therefore, fusing multiple sensors creates opportunities to develop new systems and improve existing ones. This paper proposes a pose-tracking system that fuses multiple three-dimensional (3D) light detection and ranging (lidar) and inertial measurement unit (IMU) sensors. The initial step estimates the human skeletal parameters proportional to the target user’s height by extracting the point cloud from the lidars. Next, IMUs are used to capture the orientation of each skeleton segment and estimate the respective joint positions. In the final stage, the displacement drift in the position is corrected by fusing the data from both sensors in real time. The installation setup is relatively effortless, flexible in sensor placement, and delivers results comparable to state-of-the-art pose-tracking systems. We evaluated the proposed system regarding its accuracy in the user’s height estimation, full-body joint position estimation, and reconstruction of the 3D avatar. We used a publicly available dataset for the experimental evaluation wherever possible. The results reveal that the accuracy of height and position estimation is well within an acceptable range of ±3–5 cm. The reconstruction of motion based on the publicly available dataset and our own data is precise and realistic.
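The IMU joint-position step and the drift correction can be sketched as follows. The paper's real-time fusion is richer; here the skeleton is reduced to a kinematic chain of per-segment orientation matrices and bone lengths, and the correction is simply a translation of the skeleton onto the drift-free root extracted from the lidar point cloud (all numbers illustrative):

```python
import numpy as np

def joint_positions(root, segments):
    """Chain joint positions from per-segment orientation matrices (IMU
    step): each new joint is the previous joint plus the segment's
    rest-pose direction, scaled by the bone length and rotated by the
    segment's orientation R."""
    joints = [np.asarray(root, dtype=float)]
    for R, length, rest_dir in segments:
        joints.append(joints[-1] + R @ (length * np.asarray(rest_dir, float)))
    return np.array(joints)

def correct_drift(joints, lidar_root):
    """Fusion step, simplified: translate the whole IMU-derived skeleton
    so its root coincides with the lidar-derived root position, removing
    the accumulated positional drift while keeping the pose."""
    return joints + (np.asarray(lidar_root, float) - joints[0])
```

Because IMUs give reliable orientations but drifting positions, while the lidar gives drift-free positions at a lower rate, translating the IMU skeleton onto the lidar root combines the strengths of both.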


2019 ◽  
Vol 93 ◽  
pp. 224-236 ◽  
Author(s):  
Darpan Triboan ◽  
Liming Chen ◽  
Feng Chen ◽  
Zumin Wang
