Wearable Device-Based Smart Football Athlete Health Prediction Algorithm Based on Recurrent Neural Networks

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Qingkun Feng ◽  
Yanying Liu ◽  
Lijun Wang

For football players, the word “health” is extremely important: without a sound physical foundation, athletes cannot realise their value in competitive sport. Athlete health has received considerable scholarly attention in recent years, and many analysis methods have been proposed, but few studies have used neural networks. This article therefore proposes a novel wearable device-based smart football player health prediction algorithm based on recurrent neural networks. First, wearable sensors are used to collect health data from football players. The time-step data are then fed into a recurrent neural network to extract deep features, from which the health prediction results are obtained. Experiments are conducted on the collected football player health dataset, and the simulation results demonstrate the reliability and superiority of the proposed algorithm. Furthermore, the algorithm presented in this paper can serve as a foundation for scientific training plans by football teams and coaches.
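The pipeline the abstract describes, per-time-step sensor readings folded through a recurrent network into a single health score, can be sketched with a minimal Elman-style RNN. The four sensor channels, hidden size, and random weights below are illustrative assumptions; the abstract does not specify the architecture.

```python
import numpy as np

# Minimal Elman-style RNN forward pass: a sequence of per-time-step
# sensor readings (e.g. heart rate, acceleration) is folded into a
# hidden state, and the final state is mapped to a health score in (0, 1).
# All dimensions and weights here are illustrative, not from the paper.

rng = np.random.default_rng(0)
n_features, n_hidden = 4, 8          # 4 sensor channels, 8 hidden units

W_xh = rng.normal(0, 0.1, (n_hidden, n_features))  # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))    # hidden -> hidden
W_hy = rng.normal(0, 0.1, (1, n_hidden))           # hidden -> output

def predict_health(sequence):
    """sequence: array of shape (T, n_features), one row per time step."""
    h = np.zeros(n_hidden)
    for x_t in sequence:                       # unroll over time steps
        h = np.tanh(W_xh @ x_t + W_hh @ h)     # recurrent state update
    score = 1.0 / (1.0 + np.exp(-(W_hy @ h)))  # sigmoid squashes to (0, 1)
    return float(score)

sensor_data = rng.normal(size=(30, n_features))    # 30 simulated readings
print(predict_health(sensor_data))
```

In a real system the weights would of course be trained on labelled athlete data rather than drawn at random.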

2020 ◽  
Vol 68 (2) ◽  
pp. 130-139
Author(s):  
Travis J. Desell ◽  
AbdElRahman A. ElSaid ◽  
Zimeng Lyu ◽  
David Stadem ◽  
Shuchita Patwardhan ◽  
...  

This work presents an investigation into the ability of recurrent neural networks (RNNs) to provide long-term predictions of time series data generated by coal-fired power plants. While there are numerous studies that have used artificial neural networks (ANNs) to predict coal plant parameters, to the authors’ knowledge these have almost entirely been restricted to predicting values at the next time step, and not farther into the future. Using a novel neuro-evolution strategy called Evolutionary eXploration of Augmenting Memory Models (EXAMM), we evolved RNNs with advanced memory cells to predict per-minute plant parameters and per-hour boiler parameters up to 8 hours into the future. These data sets were challenging prediction tasks, as they involve spiking behavior in the parameters being predicted. While the evolved RNNs were able to successfully predict the spikes in the hourly data, they did not perform very well in accurately predicting their severity. The per-minute data proved even more challenging, as medium-range predictions miscalculated the beginning and ending of spikes, and longer-range predictions reverted to long-term trends and ignored the spikes entirely. We hope this initial study will motivate further study into this highly challenging prediction problem. The use of fuel properties data generated by a new Coal Tracker Optimization (CTO) program was also investigated, and this work shows that their use improved the predictive ability of the evolved RNNs.
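Forecasting hours ahead from a one-step model is usually done recursively: each prediction is fed back as the next input. A toy autoregressive model stands in here for the evolved RNNs (EXAMM's internals are not given in the abstract), but the loop structure and its failure mode are the same.

```python
# Recursive multi-step forecasting: a one-step-ahead model is applied
# repeatedly, feeding each prediction back as input, to reach a horizon
# of several steps (8 hours in the boiler case). The "model" below is a
# stand-in linear autoregression; EXAMM would evolve an RNN instead.

def one_step_model(window):
    # toy AR(3) model: weighted average of the last three observations
    return 0.5 * window[-1] + 0.3 * window[-2] + 0.2 * window[-3]

def forecast(history, horizon):
    window = list(history)
    preds = []
    for _ in range(horizon):
        y = one_step_model(window)
        preds.append(y)
        window.append(y)          # prediction becomes the next "input"
    return preds

series = [1.0, 1.2, 1.1, 1.3, 1.25]
print(forecast(series, 8))        # 8-step-ahead recursive forecast
```

Because errors compound at each feedback step, long-horizon forecasts from such a loop tend to smooth toward the recent trend, which is exactly the spike-ignoring behavior the abstract reports for longer-range predictions.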




2006 ◽  
Vol 129 (3) ◽  
pp. 468-478 ◽  
Author(s):  
M. Venturini

In this paper, feed-forward recurrent neural networks (RNNs) with a single hidden layer and trained by using a back-propagation learning algorithm are studied and developed for the simulation of compressor behavior under unsteady conditions. The data used for training and testing the RNNs are both obtained by means of a nonlinear physics-based model for compressor dynamic simulation (simulated data) and measured on a multistage axial-centrifugal small-size compressor (field data). The analysis on simulated data deals with the evaluation of the influence of the number of training patterns and of each RNN input on model response, both for data not corrupted and corrupted with measurement errors, for different RNN configurations, and different values of the total delay time. For RNN models trained directly on experimental data, the analysis of the influence of RNN input combination on model response is repeated, as carried out for models trained on simulated data, in order to evaluate real system dynamic behavior. Then, predictor RNNs (i.e., those that do not include among the inputs the exogenous inputs evaluated at the same time step as the output vector) are developed and a discussion about their capabilities is carried out. The analysis on simulated data led to the conclusion that, to improve RNN performance, the adoption of a one-time delayed RNN is beneficial, with an as-low-as-possible total delay time (in this paper, 0.1 s) and trained with an as-high-as-possible number of training patterns (at least 500). The analysis of the influence of each input on RNN response, conducted for RNN models trained on field data, showed that the single-step-ahead predictor RNN allowed very good performance, comparable to that of RNN models with all inputs (overall error for each single calculation equal to 1.3% and 0.9% for the two test cases considered).
Moreover, the analysis of multi-step-ahead predictor capabilities showed that the reduction of the number of RNN calculations is the key factor for improving its performance over a significant time horizon. In fact, when a high test data sampling time is chosen (in this paper, 0.24 s), prediction errors were acceptable (lower than 1.9%).
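The defining feature of the predictor RNNs above is the input layout: the output at time t is estimated from the exogenous input and the output at the previous step only, so the model can run one step ahead of the measurements. The sketch below builds exactly those training patterns; a linear least-squares fit stands in for the RNN, and the first-order "plant" is illustrative, not Venturini's compressor model. The 0.1 s step echoes the paper's delay time.

```python
import numpy as np

# One-time-delayed "predictor" input layout: estimate y(t) from
# [u(t-1), y(t-1)], excluding the exogenous input u(t) itself.

dt = 0.1                                     # total delay time, s

def plant(y_prev, u_prev):
    """Toy first-order lag standing in for the compressor dynamics."""
    tau = 0.5
    return y_prev + dt / tau * (u_prev - y_prev)

def build_patterns(u, y):
    """Training patterns: inputs [u(t-1), y(t-1)], target y(t)."""
    X = np.column_stack([u, y[:-1]])         # u[k], y[k] pairs
    t = y[1:]                                # y[k+1] targets
    return X, t

u = np.ones(50)                              # step input
y = np.zeros(51)
for k in range(50):                          # simulate the plant
    y[k + 1] = plant(y[k], u[k])

X, t = build_patterns(u, y)
w, *_ = np.linalg.lstsq(X, t, rcond=None)    # linear one-step predictor
err = np.max(np.abs(X @ w - t))
print(w, err)
```

For this linear plant the one-step predictor recovers the exact dynamics y(t) = 0.2 u(t-1) + 0.8 y(t-1); with a trained RNN in place of the least-squares fit, the same pattern layout yields the single-step-ahead predictor discussed in the abstract.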


2019 ◽  
Author(s):  
Amrit Kashyap ◽  
Shella Keilholz

Large-scale patterns of spontaneous whole-brain activity, seen in resting state functional Magnetic Resonance Imaging (rsfMRI), are believed to arise in part from neural populations interacting through the structural fiber network [18]. Generative models that simulate this network activity, called Brain Network Models (BNM), are able to reproduce globally averaged properties of empirical rsfMRI activity such as functional connectivity (FC) [7, 27]. However, they perform poorly in reproducing the unique trajectories and state transitions that are observed over the span of minutes in whole-brain data [20]. At very short timescales between measurements, it is not known how much of the variance these BNM can explain, because they are not currently synchronized with the measured rsfMRI. We demonstrate that by solving for the initial conditions of a BNM from an observed data point using Recurrent Neural Networks (RNN) and integrating the model to predict the next time step, the trained network can explain large amounts of variance for the 5 subsequent time points of an unseen future trajectory. The combined RNN and BNM system essentially models the network component of rsfMRI, in which future activity is based solely on previous neural activity propagated through the structural network. Longer instantiations of this generative model, simulated over the span of minutes, can reproduce the average FC and the 1/f power spectrum from 0.01 to 0.3 Hz seen in fMRI. The simulated data also contain interesting resting state dynamics, such as unique repeating trajectories, called QPPs [22], that are highly correlated with the empirical trajectory spanning over 20 seconds. Moreover, they exhibit complex states and transitions, as seen using k-means analysis on windowed FC matrices [1].
This suggests that by combining BNMs with RNNs to accurately predict future resting state activity at short timescales, the system learns the manifold of the network dynamics, allowing it to simulate complex resting state trajectories at longer timescales. We believe that our technique will be useful in understanding the large-scale functional organization of the brain and how different BNMs recapitulate different aspects of the system dynamics.
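The forecasting loop described above, set the model's state from an observed data point and integrate the network dynamics forward a few steps, can be sketched with a toy linear BNM. The 3-node connectivity matrix, linear dynamics, and identity "inverse" from observation to state are all illustrative assumptions; the paper trains an RNN to recover hidden BNM states from rsfMRI.

```python
import numpy as np

# Toy BNM forecasting loop: an observed data point supplies the initial
# condition, and the network model is integrated forward to predict the
# next few time points (5 in the paper's short-timescale evaluation).

A = np.array([[0.0, 0.4, 0.1],        # structural connectivity (toy)
              [0.4, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
dt = 0.1                              # integration step (illustrative)

def step(x):
    """One Euler step of linear network dynamics dx/dt = -x + A x."""
    return x + dt * (-x + A @ x)

def predict(observation, n_steps=5):
    x = np.array(observation, dtype=float)   # initial condition from data
    out = []
    for _ in range(n_steps):
        x = step(x)
        out.append(x.copy())
    return np.array(out)

traj = predict([1.0, 0.5, -0.2], n_steps=5)
print(traj.shape)                            # 5 predicted time points
```

Because the toy connectivity is weak, this linear model simply relaxes toward zero; the point of the RNN in the paper is precisely to supply initial conditions for much richer nonlinear BNM dynamics.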


Author(s):  
Dieuwke Hupkes ◽  
Willem Zuidema

In this paper, we investigate how recurrent neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that simple recurrent networks cannot find a generalising solution to this task, but gated recurrent neural networks perform surprisingly well: networks learn to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. We test multiple hypotheses on the information that is encoded and processed by the networks using a method called diagnostic classification. In this method, simple neural classifiers are used to test sequences of predictions about features of the hidden state representations at each time step. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This, in turn, shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks.
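The hypothesised cumulative strategy can be written out directly as a symbolic algorithm: process the expression left to right, keeping a running result and a sign "mode", and use a stack to restore the mode of the enclosing bracket, rather than evaluating subexpressions bottom-up. The space-separated token format below is an assumption about the artificial language.

```python
# Direct implementation of the "cumulative strategy": update a running
# result symbol by symbol. The stack remembers the sign mode of each
# enclosing bracket; "-" flips the mode for the operand that follows.

def cumulative_eval(expression):
    result = 0
    mode = 1                     # +1 while adding, -1 while subtracting
    stack = []                   # saved modes of enclosing brackets
    for tok in expression.split():
        if tok == "(":
            stack.append(mode)   # left operand inherits current mode
        elif tok == ")":
            mode = stack.pop()   # leave the subexpression
        elif tok == "+":
            mode = stack[-1]     # right operand keeps the bracket's mode
        elif tok == "-":
            mode = -stack[-1]    # right operand's mode is flipped
        else:
            result += mode * int(tok)   # apply operand immediately
    return result

print(cumulative_eval("( 5 - ( 2 + 3 ) )"))   # 0
print(cumulative_eval("( 1 - ( 2 - 3 ) )"))   # 2
```

Diagnostic classification then amounts to training simple probes to read the running result and the current mode out of the network's hidden state at each time step.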

