Deep Learning Models Predict Dynamic Oxygen Uptake Responses from Wearable Sensor Data during Moderate‐ and Heavy‐Intensity Exercise

2021 ◽  
Vol 35 (S1) ◽  
Author(s):  
Eric Hedge ◽  
Richard Hughson ◽  
Robert Amelard

2018 ◽  
Vol 8 (10) ◽  
pp. 1992 ◽  
Author(s):  
YiNa Jeong ◽  
SuRak Son ◽  
SangSik Lee ◽  
ByungKwan Lee

This paper proposes a total crop-diagnosis platform (TCP) based on deep learning models in a natural nutrient environment, which collects weather information based on a farm’s location, diagnoses the collected weather information and the crop soil sensor data with a deep learning technique, and notifies the farm manager of the result. The proposed TCP is composed of one gateway and two modules as follows. First, the optimized farm sensor gateway (OFSG) collects data by internetworking sensor nodes that use the Zigbee, Wi-Fi and Bluetooth protocols, and reduces the number of sensor data fragmentations through compression of the fragment header. Second, the data storage module (DSM) stores the collected farm data and weather data in a farm central server. Third, the crop self-diagnosis module (CSM) runs in the cloud server and uses deep learning to diagnose whether the farm's current weather and soil conditions are favourable for growing crops. Performance results show that the data processing rate of the OFSG is about 7% higher than that of existing sensor gateways. The learning time of the CSM is 0.43 s shorter than that of a long short-term memory (LSTM) model, and its success rate is about 7% higher. Therefore, the deep-learning-based TCP interconnects the communication protocols of various sensors, overcomes the maximum data size a sensor can transfer, predicts crop disease occurrence in the external environment in advance, and helps create an optimized environment in which to grow crops.
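The fragmentation saving from header compression can be illustrated with a simple calculation (a sketch with illustrative MTU and header sizes, not figures from the paper): fewer header bytes per fragment leave more room for payload, so a fixed-size reading needs fewer fragments.

```python
import math

def fragment_count(payload_bytes: int, mtu: int, header_bytes: int) -> int:
    """Number of fragments needed when each fragment carries a header
    plus up to (mtu - header_bytes) of payload data."""
    per_fragment = mtu - header_bytes
    return math.ceil(payload_bytes / per_fragment)

# Illustrative numbers only: a 1 kB reading over a 127-byte 802.15.4-style MTU.
uncompressed = fragment_count(1024, mtu=127, header_bytes=40)  # 12 fragments
compressed = fragment_count(1024, mtu=127, header_bytes=8)     # 9 fragments
```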


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1064
Author(s):  
I Nyoman Kusuma Wardana ◽  
Julian W. Gardner ◽  
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been handled by centralised machine learning models. These are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work comprises a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models, evaluated by RMSE and MAE errors. The proposed model was optimised for two edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B). This optimisation reduced the file size to a quarter of the original, with further size reduction achieved by applying different post-training quantisation schemes. In total, 8272 hourly samples were continuously fed to the edge devices, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for the RPi4B and RPi3B+, respectively.
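Full-integer post-training quantisation maps floating-point tensors to 8-bit integers via a scale and zero-point. The affine mapping at its core can be sketched as follows (illustrative only; TensorFlow Lite's actual calibration and per-axis handling are more involved):

```python
def quantize_int8(values, lo, hi):
    """Affine quantisation: map floats in [lo, hi] onto int8 [-128, 127]."""
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.5, 0.0, 0.25, 1.0]                      # toy float weights
q, scale, zp = quantize_int8(weights, lo=-0.5, hi=1.0)
recovered = dequantize(q, scale, zp)
# recovered values match the originals to within one quantisation step (scale)
```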


Author(s):  
Ahsen Tahir ◽  
Jawad Ahmad ◽  
Gordon Morison ◽  
Hadi Larijani ◽  
Ryan M. Gibson ◽  
...  

Falls are a major health concern in older adults, leading to mortality, immobility and high costs to social and health care services. Early detection and classification of falls are imperative for a timely and appropriate medical response. Traditional machine learning models have been explored for fall classification. Newly developed deep learning techniques can potentially extract high-level features from raw sensor data, providing high accuracy and robustness to the variations in sensor position, orientation and work environment that may skew traditional classification models. However, frequently used deep learning models such as Convolutional Neural Networks (CNN) are computationally intensive. To the best of our knowledge, we present the first instance of a Hybrid Multichannel Random Neural Network (HMCRNN) architecture for fall detection and classification. The proposed architecture achieves the highest accuracy, 92.23%, with dropout regularization, compared to other deep learning implementations. Its performance is approximately comparable to that of a CNN, yet it requires only half the computation cost of the CNN-based implementation. Furthermore, the proposed HMCRNN architecture improves accuracy by 34.12% on average over a Multilayer Perceptron.
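Dropout regularization, credited above for the best accuracy, randomly zeroes a fraction of a layer's activations during training and rescales the survivors so the expected activation is unchanged. A minimal inverted-dropout sketch (illustrative; not the paper's HMCRNN code):

```python
import random

def dropout(activations, p=0.5, training=True, rng=random.Random(0)):
    """Inverted dropout: zero each activation with probability p during
    training and scale survivors by 1/(1-p); identity at inference."""
    if not training:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

layer_out = [0.2, -1.3, 0.7, 0.05, 2.1]
train_out = dropout(layer_out, p=0.5)       # survivors scaled by 1/(1-p)
infer_out = dropout(layer_out, training=False)  # unchanged at inference
```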


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3307 ◽  
Author(s):  
Caroline König ◽  
Ahmed Mohamed Helmi

Condition monitoring (CM) is a useful application in Industry 4.0, where a machine’s health is monitored by computational intelligence methods. Data-driven models, especially from the field of deep learning, are efficient solutions for the analysis of time-series sensor data due to their ability to recognize patterns in high-dimensional data and to track the temporal evolution of the signal. Despite the excellent performance of deep learning models in many applications, additional requirements regarding the interpretability of machine learning models are becoming increasingly relevant. In this work, we present a study of sensor sensitivity in a deep-learning-based CM system, providing high-level information about the relevance of the sensors. Several convolutional neural networks (CNN) were constructed from a multisensory dataset for the prediction of different degradation states in a hydraulic system. An attribution analysis of the input features provided insights into the contribution of each sensor to the classifier's predictions. Relevant sensors were identified, and CNN models built on only the selected sensors matched the prediction quality of the original models. Information about the relevance of sensors is useful at the system design stage for making timely decisions about which sensors are required.
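The abstract does not name the attribution method used. One common model-agnostic way to score sensor relevance is permutation importance: shuffle one sensor's values across samples and measure the resulting drop in accuracy. A minimal sketch (toy model and data, purely illustrative):

```python
import random

def permutation_importance(model, X, y, sensor_idx, rng=random.Random(42)):
    """Accuracy drop when one sensor's column is shuffled across samples."""
    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    column = [row[sensor_idx] for row in X]
    rng.shuffle(column)  # break the sensor's association with the labels
    permuted = [row[:sensor_idx] + [v] + row[sensor_idx + 1:]
                for row, v in zip(X, column)]
    return baseline - accuracy(permuted)

# Toy classifier that depends only on sensor 0, so sensor 1 is irrelevant.
model = lambda row: row[0] > 0
X = [[1, 5], [-1, 3], [2, -4], [-2, 9]]
y = [True, False, True, False]
irrelevant = permutation_importance(model, X, y, sensor_idx=1)  # 0.0
```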


Author(s):  
Govind P. Gupta ◽  
Shubham Gaur

Remote monitoring and recognition of the physical activities of elderly people within smart homes, together with detection of deviations of their daily activities from previous behavior, is one of the fundamental research challenges in the development of ambient assisted living systems. Such a system is also very helpful for monitoring the health of a rapidly aging population in developed countries. In this chapter, a framework is proposed for remote monitoring and recognition of the physical activities of elderly people from smartphone accelerometer data using deep learning models. The main objective of the proposed framework is to provide preventive measures for emergency health issues such as cardiac arrest, sudden falls, dementia, or arthritis. For performance evaluation, two benchmark accelerometer datasets, UCI and WISDM, are used. The results confirm the performance of the proposed scheme in terms of accuracy, F1-score, and root-mean-square error (RMSE).
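Accelerometer-based activity recognition pipelines typically segment the raw stream into fixed-length, overlapping windows before feeding a deep model. A minimal sketch of that preprocessing step (window length and overlap are illustrative; actual UCI/WISDM pipelines vary):

```python
def sliding_windows(samples, window=128, step=64):
    """Split a sensor stream into fixed-length windows with 50% overlap
    (step = window // 2); trailing samples that don't fill a window drop."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

stream = list(range(512))        # stand-in for a stream of (x, y, z) readings
windows = sliding_windows(stream)
# yields 7 overlapping windows of 128 samples each
```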


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 740
Author(s):  
Danica Hendry ◽  
Ryan Leadbetter ◽  
Kristoffer McKee ◽  
Luke Hopper ◽  
Catherine Wild ◽  
...  

This study aimed to develop a wearable sensor system, using machine-learning models, capable of accurately estimating peak ground reaction force (GRF) during ballet jumps in the field. Female dancers (n = 30) performed a series of bilateral and unilateral ballet jumps. Dancers wore six ActiGraph Link wearable sensors (100 Hz). Data were collected simultaneously from two AMTI force platforms and synchronised with the ActiGraph data. Due to sensor hardware malfunctions and synchronisation issues, a multistage approach to model development, using a reduced data set, was taken. Using data from the 14 dancers with complete multi-sensor synchronised data, the best single sensor was determined. Subsequently, the best single sensor model was refined and validated using all available data for that sensor (23 dancers). Root mean square error (RMSE) in body weight (BW) and correlation coefficients (r) were used to assess the GRF profile, and Bland–Altman plots were used to assess model peak GRF accuracy. The model based on sacrum data was the most accurate single sensor model (unilateral landings: RMSE = 0.24 BW, r = 0.95; bilateral landings: RMSE = 0.21 BW, r = 0.98) with the refined model still showing good accuracy (unilateral: RMSE = 0.42 BW, r = 0.80; bilateral: RMSE = 0.39 BW, r = 0.92). Machine-learning models applied to wearable sensor data can provide a field-based system for GRF estimation during ballet jumps.
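The two accuracy measures reported above can be computed directly from predicted and measured values. A minimal sketch of RMSE and Pearson's r (toy numbers, not the study's data):

```python
import math

def rmse(pred, true):
    """Root mean square error between predicted and measured values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def pearson_r(pred, true):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(true)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)

measured = [2.1, 3.4, 2.8, 4.0]   # toy peak GRF values in body weights (BW)
predicted = [2.0, 3.6, 2.7, 3.8]
error_bw = rmse(predicted, measured)
corr = pearson_r(predicted, measured)
```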

