2021 ◽  
Author(s):  
Matej Choma ◽  
Jakub Bartel ◽  
Petr Šimánek ◽  
Vojtěch Rybář

The standard for weather radar nowcasting in the Central European region is the COTREC extrapolation method. We propose a recurrent neural network based on the PredRNN architecture, which outperforms the COTREC 60-minute predictions by a significant margin.

Nowcasting, as a complement to numerical weather prediction, is a well-known concept. However, the increasing speed of information flow in our society creates an opportunity for its effective implementation. Methods currently used for these predictions are primarily based on optical flow and struggle to predict the development of echo shape and intensity.

In this work, we benefit from a data-driven approach and build on advances in the capabilities of neural networks for computer vision. We define the prediction task as the extrapolation of sequences of the latest weather radar echo measurements. To correctly capture the spatiotemporal behaviour of rainfall and storms, we propose a recurrent neural network combining long short-term memory (LSTM) techniques with convolutional neural networks (CNN). Our approach is applicable to any geographical area, radar network resolution, and refresh rate.

We conducted experiments comparing predictions 10 to 60 minutes into the future using the Critical Success Index, which evaluates the spatial accuracy of the predicted echo, and the Mean Squared Error. Our neural network model was trained on three years of rainfall data captured by weather radars over the Czech Republic. Results for our bordered testing domain show that our method achieves comparable or better scores than both COTREC and the optical flow extrapolation methods available in the open-source pySTEPS and rainymotion libraries.

With our work, we aim to contribute to nowcasting research in general and to create another source of short-term predictions for both experts and the general public.
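As a rough illustration of the convolutional-recurrent building block such nowcasting models rest on (the paper uses the PredRNN architecture, whose spatiotemporal-LSTM cells are more elaborate), a minimal ConvLSTM cell in PyTorch might look like the sketch below; the channel counts and frame size are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed with convolutions,
    so the hidden state keeps the spatial layout of the radar grid."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # One conv produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size, padding=padding)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c

# Example: one step over a batch of 2 single-channel 128x128 radar echo frames.
cell = ConvLSTMCell(in_channels=1, hidden_channels=16)
x = torch.randn(2, 1, 128, 128)
h = torch.zeros(2, 16, 128, 128)
c = torch.zeros(2, 16, 128, 128)
h, c = cell(x, (h, c))
```

Stacking such cells and feeding each predicted frame back as the next input yields the sequence-extrapolation setup the abstract describes.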


Author(s):  
HUA LI ◽  
JUN WANG

Optical flow computation in dynamic image processing can be formulated as a minimization problem via a variational approach. Because solving the problem is computationally intensive, we reformulate it in a form suitable for neural computing. In this paper, we propose a recurrent neural network model which may be implemented in hardware with many processing elements (neurons) operating asynchronously in parallel to achieve a possible real-time solution. We derive and prove the properties of the reformulation, and analyze the asymptotic stability and convergence rate of the proposed neural network. Experiments are conducted using both test patterns and real laboratory images.
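The authors' exact reformulation is not reproduced in the abstract. As a loose sketch of the underlying idea, the classical Horn-Schunck relaxation below minimizes a variational optical-flow energy with purely local updates, which is what makes it map naturally onto a grid of asynchronously updating neurons (NumPy/SciPy; the smoothness weight and step count are assumed values):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck_flow(I1, I2, alpha=1.0, steps=200):
    """Relaxation dynamics minimizing the Horn-Schunck energy
    E = sum((Ix*u + Iy*v + It)^2) + alpha^2 * (|grad u|^2 + |grad v|^2).
    Each pixel behaves like a neuron updated from its local neighbours,
    so the scheme is inherently parallel."""
    Iy, Ix = np.gradient(I1.astype(float))     # spatial derivatives
    It = I2.astype(float) - I1.astype(float)   # temporal derivative
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    k = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / 4.0
    for _ in range(steps):
        u_avg, v_avg = convolve(u, k), convolve(v, k)  # neighbour average
        resid = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        # Jacobi-style update derived from the Euler-Lagrange equations
        u = u_avg - Ix * resid
        v = v_avg - Iy * resid
    return u, v
```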


2020 ◽  
Vol 39 (6) ◽  
pp. 8927-8935
Author(s):  
Bing Zheng ◽  
Dawei Yun ◽  
Yan Liang

Under the impact of COVID-19, research on behavior recognition is in high demand. In this paper, we combine a self-adaptive encoder with a recurrent neural network to study behavior pattern recognition. At present, most research on human behavior recognition focuses on video data. However, due to the complexity of video image data, such approaches can easily violate personal privacy. With the rapid development of Internet of Things technology, sensor-based recognition has attracted the attention of a large number of experts and scholars. Researchers have tried many machine learning methods, such as random forests, support vector machines, and other shallow learning methods, which perform well in the laboratory environment but are still a long way from practical application. In this paper, a recurrent neural network algorithm based on long short-term memory (LSTM) is proposed to recognize behavior patterns and thus improve the accuracy of human activity recognition.
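Setting the self-adaptive encoder aside, a minimal LSTM classifier over fixed-length windows of IoT sensor readings might be sketched as follows (PyTorch; the feature count, window length, and number of activity classes are assumptions, not details from the paper):

```python
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    """Sketch of an LSTM classifier over fixed-length windows of
    IoT sensor readings (e.g. tri-axial accelerometer + gyroscope)."""
    def __init__(self, n_features=6, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last time step

model = ActivityLSTM()
window = torch.randn(8, 128, 6)           # 8 windows of 128 samples, 6 channels
logits = model(window)                    # (8, 6) class scores
```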


2020 ◽  
Vol 2020 (17) ◽  
pp. 2-1-2-6
Author(s):  
Shih-Wei Sun ◽  
Ting-Chen Mou ◽  
Pao-Chi Chang

To improve workout efficiency and to provide body-movement suggestions to users in a “smart gym” environment, we propose using a depth camera to capture a user’s body parts and mounting multiple inertial sensors on those body parts, building deadlift behavior models with a recurrent neural network structure. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure can analyze the information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
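As a hedged sketch of the kind of feature-level fusion the abstract describes (the joint count, inertial channel count, and class count below are assumptions, and the authors' actual fusion scheme may differ), time-aligned skeletal and inertial frames can simply be concatenated before a recurrent layer:

```python
import torch
import torch.nn as nn

class DeadliftRNN(nn.Module):
    """Sketch of feature-level fusion: synchronized skeletal joints and
    inertial readings are concatenated per frame and fed to a GRU."""
    def __init__(self, n_joints=25, n_imu=18, hidden=128, n_classes=5):
        super().__init__()
        skel_dim = n_joints * 3                  # (x, y, z) per joint
        self.rnn = nn.GRU(skel_dim + n_imu, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, skel, imu):                # (B, T, J*3), (B, T, n_imu)
        fused = torch.cat([skel, imu], dim=-1)   # frame-wise fusion
        out, _ = self.rnn(fused)
        return self.head(out[:, -1])

model = DeadliftRNN()
skel = torch.randn(4, 90, 75)    # 4 clips, 90 frames, 25 joints x 3 coords
imu = torch.randn(4, 90, 18)     # e.g. 3 IMUs x 6 channels, time-aligned
scores = model(skel, imu)
```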


2019 ◽  
Author(s):  
Qi Yuan ◽  
Alejandro Santana-Bonilla ◽  
Martijn Zwijnenburg ◽  
Kim Jelfs

The chemical space for novel electronic donor-acceptor oligomers with targeted properties was explored using deep generative models and transfer learning. A General Recurrent Neural Network model was trained on the ChEMBL database to generate chemically valid SMILES strings. The parameters of the General Recurrent Neural Network were fine-tuned via transfer learning, using the electronic donor-acceptor database from the Computational Materials Repository, to generate novel donor-acceptor oligomers. Six different transfer-learning models were developed with different subsets of the donor-acceptor database as training sets. We concluded that electronic properties such as HOMO-LUMO gaps and dipole moments of the training sets can be learned using the SMILES representation with deep generative models, and that the chemical space of the training sets can be explored efficiently. This approach identified approximately 1700 new molecules with promising electronic properties (HOMO-LUMO gap < 2 eV and dipole moment < 2 Debye), six times more than in the original database. Amongst the molecular transformations, the deep generative model learned how to produce novel molecules by trading off between selected atomic substitutions (such as halogenation or methylation) and molecular features such as the spatial extension of the oligomer. The method can be extended as a plausible source of new chemical combinations with which to explore the chemical space for targeted properties effectively.
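As a sketch of the general pretrain-then-fine-tune pattern described (not the authors' code; the vocabulary size, network sizes, and learning rates are assumptions), a character-level SMILES RNN in PyTorch can be trained for next-token prediction on a large corpus such as ChEMBL and then fine-tuned on a small donor-acceptor set with a reduced learning rate:

```python
import torch
import torch.nn as nn

class SmilesRNN(nn.Module):
    """Character-level generative RNN over SMILES strings: trained to
    predict the next token, then sampled to emit new molecules."""
    def __init__(self, vocab_size, embed=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                   # (batch, seq_len) int tokens
        out, _ = self.lstm(self.embed(tokens))
        return self.head(out)                    # next-token logits

# Transfer-learning pattern: pretrain on the large general corpus, then
# fine-tune the same weights on the small target set at a lower rate.
model = SmilesRNN(vocab_size=40)
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randint(0, 40, (8, 60))                # batch of 8 token sequences
logits = model(x)                                # (8, 60, 40)
# Shift by one position: logits at step t predict the token at step t+1.
loss = loss_fn(logits[:, :-1].reshape(-1, 40), x[:, 1:].reshape(-1))
```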


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to obtain the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks the frame-to-frame consistency of the visual similarity of a hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with CTC loss for gesture recognition. We used the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
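A minimal sketch of the per-frame CNN → LSTM → CTC pipeline in PyTorch is given below. The class count, CNN depth, hidden size, and frame counts are illustrative assumptions, and the authors' actual network, which also consumes depth and optical-flow streams, is more elaborate.

```python
import torch
import torch.nn as nn

class GestureCTC(nn.Module):
    """Sketch of a per-frame CNN -> LSTM -> CTC pipeline: a small CNN
    embeds each frame, an LSTM models the sequence, CTC aligns it to labels."""
    def __init__(self, n_classes=19, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes + 1)  # +1 for the CTC blank

    def forward(self, frames):                   # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).log_softmax(-1)    # (B, T, classes+1)

model = GestureCTC()
frames = torch.randn(2, 30, 3, 64, 64)           # 2 clips, 30 frames each
log_probs = model(frames).transpose(0, 1)        # CTCLoss wants (T, B, C)
targets = torch.tensor([3, 7])                   # one gesture label per clip
loss = nn.CTCLoss(blank=19)(log_probs, targets,
                            torch.full((2,), 30), torch.full((2,), 1))
```

The CTC loss marginalizes over all frame-to-label alignments by dynamic programming, which is what lets the network be trained on unsegmented streams with only a per-clip gesture label.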

