Fluid Equation-Based and Data-Driven Simulation of Special Effects Animation

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yujuan Deng

This paper analyzes the simulation of special-effects animation through fluid equations and data-driven methods. Considering the demands of computer fluid animation for computational accuracy and simulation efficiency, it takes high real-time performance, high interactivity, and high physical accuracy of the simulation algorithm as the research focus, and proposes a solution algorithm and acceleration scheme based on a deep neural network framework for key problems in simulating natural phenomena such as smoke and liquid. With the continued development of artificial intelligence, deep neural network models are widely used in research fields such as image classification, speech recognition, and fluid detail synthesis owing to their powerful data-learning capability; their stable and efficient computational model provides a new problem-solving approach for computer fluid animation simulation. For time-series reconstruction, this paper adopts a tracking-based method comprising target tracking, 2D trajectory fitting and repair, and 3D trajectory reconstruction. For continuous image sequences, a linear dynamic model based on pyramidal optical flow is used to track the feature centers of objects, and the spatial coordinates and motion parameters of the feature points are obtained by reconstructing the motion trajectories. The experimental results show that, for spatial reconstruction, the matching method proposed in this paper is more accurate than the traditional stereo matching algorithm, and, for time-series reconstruction, the target-tracking error is reduced. Finally, the 3D motion trajectory of a point-feature object and its motion pattern at a given moment are shown; the proposed method obtains more satisfactory results, demonstrating its effectiveness.
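The abstract does not give the authors' implementation, but the core of pyramidal optical-flow tracking is the Lucas-Kanade least-squares step applied at each pyramid level. The sketch below shows a single-level step in plain numpy; the function name and window size are illustrative, not taken from the paper.

```python
import numpy as np

def lucas_kanade_step(prev_img, next_img, point, win=5):
    """Estimate the displacement of one feature point between two frames
    by solving the Lucas-Kanade least-squares system over a small window
    (this is one level of a pyramidal optical-flow scheme)."""
    x, y = point
    h = win // 2
    # Spatial gradients of the first frame, temporal gradient between frames.
    Ix = np.gradient(prev_img, axis=1)
    Iy = np.gradient(prev_img, axis=0)
    It = next_img - prev_img
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    # Brightness constancy: Ix*u + Iy*v = -It over the window, solved
    # in the least-squares sense for the flow vector (u, v).
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

A pyramidal tracker repeats this step coarse-to-fine, warping the window by the accumulated flow at each level, which is what lets it handle large displacements.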

Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2082 ◽  
Author(s):  
Jong-Min Yeom ◽  
Seonyoung Park ◽  
Taebyeong Chae ◽  
Jin-Young Kim ◽  
Chang Suk Lee

Although data-driven methods including deep neural networks (DNNs) have been introduced, their spatial characteristics have not been sufficiently assessed when limited ground observations are used as reference. This work examines the feasibility of several machine learning approaches for estimating the spatial distribution of solar radiation from the Communication, Ocean, and Meteorological Satellite (COMS) Meteorological Imager (MI) geostationary satellite. Four data-driven models were selected (artificial neural network (ANN), random forest (RF), support vector regression (SVR), and DNN) to compare their accuracy and spatial estimation performance. Moreover, a physical model was used to probe the ability of the data-driven methods, implementing hold-out and k-fold cross-validation based on pyranometers located in South Korea. The analysis showed that RF had the highest predictive accuracy, although the difference between RF and the second-best technique (DNN) was insignificant. Temporal variations in root mean square error (RMSE) depended on the number of data samples, whereas the physical model was relatively less sensitive; nevertheless, DNN and RF showed less variability in RMSE than the others. To examine spatial estimation performance, solar radiation over South Korea was mapped for each model. The data-driven models accurately reproduced the observed spatial cloud pattern, whereas the physical model failed to do so because of cloud mask errors, and the models exhibited different spatial retrieval performance according to their training approaches. Overall, the approaches with deeper or ensemble structures (RF and DNN) best simulated the challenging spatial pattern of thin clouds when using satellite multispectral data.
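The hold-out and k-fold protocols used to compare the models can be sketched generically. The snippet below is a minimal k-fold RMSE harness in plain numpy, with a climatology-style mean predictor and ordinary least squares standing in for the physical and data-driven models; the function names and the two toy models are illustrative, not the paper's.

```python
import numpy as np

def kfold_rmse(X, y, fit, predict, k=5, seed=0):
    """Average test RMSE over k folds for a given fit/predict pair."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        scores.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(scores))

# Two toy "models" to compare under the same protocol: a constant mean
# predictor and ordinary least squares.
fit_mean = lambda X, y: y.mean()
pred_mean = lambda m, X: np.full(len(X), m)
fit_ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
pred_ols = lambda w, X: X @ w
```

Any of the four models in the study (ANN, RF, SVR, DNN) would slot in as a fit/predict pair under the same split, which is what makes the accuracy comparison fair.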


Author(s):  
Muhammad Faheem Mushtaq ◽  
Urooj Akram ◽  
Muhammad Aamir ◽  
Haseeb Ali ◽  
Muhammad Zulqarnain

Predicting a time series is important because many prediction problems, such as health prediction, climate change prediction, and weather prediction, include a time component. To solve the time series prediction problem, various techniques have been developed over many years to enhance forecasting accuracy. This paper presents a review of the prediction of physical time series applications using neural network models. Neural networks (NNs) have emerged as an effective tool for time series forecasting. Moreover, to resolve problems related to time series data, there is a need for a network with a single layer of trainable weights, the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Developers are therefore focusing on HONNs, which have recently been considered as a way to broaden input representation spaces. The functional mapping ability of the HONN model is demonstrated on several time series problems, where it shows more benefits than conventional Artificial Neural Networks (ANNs). The goal of this review is to make the reader aware of HONNs for physical time series prediction and to highlight some benefits and challenges of using them.
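The defining idea of a second-order HONN, as described above, is that higher-order products of the inputs feed a single layer of trainable weights, so the layer can realise a nonlinear mapping that a flat linear layer cannot. A minimal numpy sketch of that idea (the feature expansion and target are illustrative, not from the review):

```python
import numpy as np

def second_order_features(X):
    """Augment inputs with all pairwise products x_i * x_j (i <= j), so a
    single layer of trainable weights over these features realises a
    quadratic input-output mapping -- the core of a second-order HONN."""
    n, d = X.shape
    quads = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + quads)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] * X[:, 1]  # a nonlinear target no purely linear layer can fit
# Training the single weight layer reduces to linear least squares.
w, *_ = np.linalg.lstsq(second_order_features(X), y, rcond=None)
pred = second_order_features(X) @ w
```

Because the product x_0 * x_1 is itself one of the expanded features, the single trainable layer fits this target essentially exactly, which illustrates why HONNs can need fewer trainable layers than conventional ANNs for such mappings.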


2021 ◽  
Vol 10 (1) ◽  
pp. 21
Author(s):  
Omar Nassef ◽  
Toktam Mahmoodi ◽  
Foivos Michelinakis ◽  
Kashif Mahmood ◽  
Ahmed Elmokashfi

This paper presents a data-driven framework for performance optimisation of Narrow-Band IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered from a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of those metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted alongside machine and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states, and they show the superior performance of the genetic reinforcement learning algorithm in performance optimisation.
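The genetic search over candidate configurations can be sketched in a few lines. The snippet below is a generic genetic algorithm over binary configuration vectors (tournament selection, one-point crossover, bit-flip mutation); the operator choices, rates, and fitness function are illustrative assumptions, not the paper's algorithm.

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=60, seed=0):
    """Maximise `fitness` over binary configuration vectors with a
    minimal genetic algorithm: tournament selection, one-point
    crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: best of three random candidates.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # One-point crossover of the two parents.
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with a small per-bit probability.
            child = [b ^ (rng.random() < 0.05) for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

In the framework described above, the fitness function would score a candidate configuration on the predicted energy, delay, and throughput metrics rather than the toy objective used here.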


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations such as Convolutional Neural Networks (CNNs) on resource-limited embedded devices is an active research area. To run an optimized deep neural network model with the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. The Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations on the RISC-V Virtual Platform implemented in SystemC, enabling rapid and diverse analysis of deep learning operations on an embedded device based on the RISC-V processor, a recently emerging embedded processor. The developed RISC-V-based DLA prototype can analyze the hardware requirements for a given CNN data set through configuration of the CNN DLA architecture; RISC-V-compiled software can run on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed by examining the DLA architecture for various data sets.
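The kind of per-layer hardware-requirement analysis described here boils down to counting the output size, multiply-accumulate (MAC) operations, and parameters of each convolution layer. A minimal sketch of that accounting (the function and the Darknet-like layer dimensions in the usage note are illustrative, not figures from the paper):

```python
def conv_layer_cost(in_h, in_w, in_c, k, out_c, stride=1, pad=0):
    """Output shape, multiply-accumulate (MAC) count, and parameter count
    for one convolution layer -- the per-layer figures an accelerator
    prototype reports when sizing compute and memory for a model."""
    out_h = (in_h + 2 * pad - k) // stride + 1
    out_w = (in_w + 2 * pad - k) // stride + 1
    # One MAC per (output position, output channel, kernel tap, input channel).
    macs = out_h * out_w * out_c * k * k * in_c
    params = out_c * (k * k * in_c + 1)  # weights plus one bias per filter
    return (out_h, out_w, out_c), macs, params
```

For example, a Darknet-style first layer (416x416x3 input, sixteen 3x3 filters, stride 1, padding 1) works out to roughly 75 million MACs but only 448 parameters, which is why such analysis separates compute pressure from memory pressure.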


ChemMedChem ◽  
2021 ◽  
Author(s):  
Christoph Grebner ◽  
Hans Matter ◽  
Daniel Kofink ◽  
Jan Wenzel ◽  
Friedemann Schmidt ◽  
...  

2021 ◽  
Author(s):  
Mohammed Ayub ◽  
SanLinn Kaka

Abstract Manual first-break picking from a large volume of seismic data is extremely tedious and costly. Deploying machine learning models makes the process fast and cost-effective. However, these models require highly representative and effective features for accurate automatic picking. Therefore, a First-Break (FB) picking classification model that uses an effective minimum number of features and promises performance efficiency is proposed. Variants of Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) can retain contextual information from long previous time steps. We exploit this advantage for FB picking, since seismic traces are amplitude values of vibration along the time axis, and use the behavioral fluctuation of amplitude as input features for the LSTM and GRU. The models are trained on noisy data and tested for generalization on original traces not seen during training and validation. To analyze real-time suitability, performance is benchmarked using accuracy, F1-measure, and three other established metrics. We trained two RNN models and two deep neural network models for FB classification using only amplitude values as features. Both LSTM and GRU achieve accuracy and F1-measure scores of 94.20%. With the same features, a Convolutional Neural Network (CNN) achieves an accuracy of 93.58% and an F1-score of 93.63%, and a Deep Neural Network (DNN) model achieves an accuracy of 92.83% and an F1-measure of 92.59%. The experimental results thus show the significantly superior performance of LSTM and GRU over CNN and DNN when the same features are used. To test robustness, the LSTM and GRU models were also compared with a DNN model trained on nine features derived from the seismic traces, and the RNN models remained superior.
Therefore, it is safe to conclude that RNN models (LSTM and GRU) can classify FB events efficiently even with a minimum number of features that are not computationally expensive. The novelty of our work is automatic FB classification with RNN models that incorporate contextual behavioral information, without the need for sophisticated feature extraction or engineering techniques, which in turn can reduce cost and make the classification model more robust and faster.
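Using raw amplitude values as the only features, as the abstract describes, amounts to slicing each trace into windows of consecutive amplitudes and labelling them against the known first-break sample. A minimal sketch of that preprocessing step (the function, window length, and labelling rule are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def make_fb_dataset(trace, fb_index, win=16, step=1):
    """Slice a 1-D seismic trace into overlapping windows of raw
    amplitudes, labelling a window 1 if it contains the first-break
    sample and 0 otherwise. Raw amplitudes are the only features."""
    X, y = [], []
    for start in range(0, len(trace) - win + 1, step):
        X.append(trace[start:start + win])
        y.append(int(start <= fb_index < start + win))
    return np.array(X), np.array(y)
```

Each row of X is then a short amplitude sequence that an LSTM or GRU consumes step by step, which is how the recurrent models pick up the contextual fluctuation around the onset without any engineered features.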

