Block Mining reward prediction with Polynomial Regression, Long Short-Term Memory, and Prophet API for Ethereum blockchain miners

2021 ◽  
Vol 37 ◽  
pp. 01004
Author(s):  
Jeyasheela Rakkini Simon ◽  
K Geetha

The Ethereum blockchain is an open-source, decentralized blockchain whose functions are triggered by smart contracts, and it generates voluminous real-time data for analysis with machine learning and deep learning algorithms. Ether is the cryptocurrency of the Ethereum blockchain, and the Ethereum Virtual Machine runs Turing-complete scripts. The data set for a block in the Ethereum blockchain, with the block number, timestamp, crypto address of the miner, and the miner's block reward, is explored with K-means clustering to cluster miners by their unique crypto addresses and rewards. Linear regression and polynomial regression are used to predict the next block reward for the miner. The Long Short-Term Memory (LSTM) algorithm exploits the Ether market data set to predict the next Ether price in the market; the price and volume at every four-hour interval are taken for prediction. A root mean square error of 34.9% is obtained for linear regression, and a silhouette score of 71% for K-means clustering of miners with the same rewards, with the optimal number of clusters obtained by the gap statistic method.
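The polynomial-regression step described above can be sketched in a few lines: fit a low-degree polynomial to past block rewards and extrapolate one block ahead. This is a minimal illustration assuming synthetic block numbers and rewards, not the Ethereum data set used in the paper.

```python
import numpy as np

# Hypothetical illustration: fit a degree-2 polynomial to past block
# rewards and extrapolate the next one. The block numbers and rewards
# below are synthetic, not drawn from the paper's data set.
blocks = np.array([100, 101, 102, 103, 104], dtype=float)
rewards = np.array([2.0, 2.1, 2.25, 2.45, 2.7])  # synthetic Ether rewards

coeffs = np.polyfit(blocks, rewards, deg=2)   # least-squares quadratic fit
next_reward = np.polyval(coeffs, 105.0)       # predicted reward for block 105
```

The same `blocks`/`rewards` arrays could equally be fed to a linear fit (`deg=1`) to reproduce the linear-regression baseline the abstract compares against.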

Author(s):  
Mr. V. Manoj Kumar

Prediction is most important in the stock market, not only for traders but also for the computer engineers who analyze stock data. This prediction can be performed in two ways: by using historical stock data, or by analyzing information gathered from social media. Prediction rests on the model or pattern applied to the stock data set; many such models are available, each essentially an algorithm from machine learning or deep learning. In the data set, the two main parameters, the open and close values, are most often used for stock prediction, although the volume can also be used, so the data are preprocessed before being used for prediction. In this paper we use several algorithms, namely linear regression, support vector regression, and long short-term memory, to obtain better accuracy, compare how each differs from the others, and predict future stock prices.
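The preprocessing step mentioned above typically turns a price series into supervised pairs: a window of past closes as features and the next close as the target. A minimal sketch, with made-up prices:

```python
# Minimal sketch of windowing a closing-price series into (X, y) pairs
# for a regression or LSTM model. The prices are synthetic.
def make_windows(prices, window=3):
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])   # `window` past closes as features
        y.append(prices[i + window])     # next close as the target
    return X, y

closes = [10.0, 10.5, 10.2, 10.8, 11.0, 11.3]
X, y = make_windows(closes, window=3)
```

The same `(X, y)` pairs work for linear regression or support vector regression directly, and for an LSTM after reshaping `X` to (samples, timesteps, features).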


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiaofei Zhang ◽  
Tao Wang ◽  
Qi Xiong ◽  
Yina Guo

Imagery-based brain-computer interfaces (BCIs) aim to decode different neural activities into control signals by identifying and classifying various natural commands from electroencephalogram (EEG) patterns and then controlling the corresponding equipment. However, several traditional BCI recognition algorithms suffer from the “one person, one model” issue, where the convergence of the recognition model’s training process is complicated. In this study, a new BCI model with a dense long short-term memory (Dense-LSTM) algorithm is proposed, which combines the event-related desynchronization (ERD) and the event-related synchronization (ERS) of the imagery-based BCI; model training and testing were conducted on its own data set. Furthermore, a new experimental platform was built to decode the neural activity of different subjects in a static state. Experimental evaluation of the proposed recognition algorithm shows an accuracy of 91.56%, which resolves the “one person, one model” issue along with the difficulty of convergence in the training process.
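ERD/ERS, the features the model above combines, are conventionally quantified as the relative band-power change of an event window versus a pre-event baseline. A hedged sketch with a synthetic signal (the convention, not the paper's exact pipeline):

```python
import numpy as np

# Relative band-power change: negative values indicate desynchronization
# (ERD), positive values synchronization (ERS). Signals are synthetic.
def erd_ers_percent(baseline, event):
    p_base = np.mean(np.square(baseline))   # baseline band power
    p_event = np.mean(np.square(event))     # event-window band power
    return 100.0 * (p_event - p_base) / p_base

baseline = np.array([1.0, -1.0, 1.0, -1.0])  # band power 1.0
event = np.array([0.5, -0.5, 0.5, -0.5])     # band power 0.25
change = erd_ers_percent(baseline, event)    # negative => ERD
```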


2021 ◽  
Vol 3 ◽  
Author(s):  
Yueling Ma ◽  
Carsten Montzka ◽  
Bagher Bayat ◽  
Stefan Kollet

The lack of high-quality continental-scale groundwater table depth observations necessitates developing an indirect method to produce reliable estimates of water table depth anomalies (wtda) over Europe, to facilitate European groundwater management under drought conditions. Long Short-Term Memory (LSTM) networks are a deep learning technique that exploits long- and short-term dependencies in the input-output relationship, which have been observed in the response of groundwater dynamics to atmospheric and land surface processes. Here, we introduced different input variables, including precipitation anomalies (pra), the most common proxy of wtda, for the networks to arrive at improved wtda estimates at individual pixels over Europe in various experiments. All input and target data involved in this study were obtained from the simulated TSMP-G2A data set. We performed wavelet coherence analysis to gain a comprehensive understanding of the contributions of different input variable combinations to wtda estimates. Based on the different experiments, we derived an indirect method utilizing LSTM networks with pra and soil moisture anomalies (θa) as input, which achieved the optimal network performance. The regional medians of test R2 scores and RMSEs obtained by the method in areas with wtd ≤ 3.0 m were 76–95% and 0.17–0.30, respectively, constituting a 20–66% increase in median R2 and a 0.19–0.30 decrease in median RMSE compared to LSTM networks with only pra as input. Our results show that introducing θa significantly improved the performance of the trained networks in predicting wtda, indicating the substantial contribution of θa to explaining groundwater anomalies. Also, the European wtda map reproduced by the method agreed well with that derived from the TSMP-G2A data set with respect to drought severity, successfully detecting ~41% of strong drought events (wtda ≥ 1.5) and ~29% of extreme drought events (wtda ≥ 2) in August 2015. The study emphasizes the importance of combining soil moisture information with precipitation information in quantifying or predicting groundwater anomalies. In the future, the indirect method derived in this study can be transferred to real-time monitoring of groundwater drought at the continental scale using remotely sensed soil moisture and precipitation observations or respective information from weather prediction models.
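The anomaly variables above (wtda, pra, θa) are standardized anomalies: deviations from a long-term mean divided by the standard deviation, so that a value of 1.5 or 2 flags strong or extreme drought regardless of units. A minimal sketch on a synthetic water-table-depth series:

```python
import numpy as np

# Standardized anomaly of a per-pixel time series; the values are synthetic,
# not from the TSMP-G2A data set.
def standardized_anomaly(series):
    series = np.asarray(series, dtype=float)
    return (series - series.mean()) / series.std()

wtd = [2.0, 2.2, 1.8, 2.4, 1.6]    # synthetic water table depths (m)
wtda = standardized_anomaly(wtd)   # wtda >= 1.5 would flag strong drought
```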


2021 ◽  
Vol 17 (12) ◽  
pp. 155014772110612
Author(s):  
Zhengqiang Ge ◽  
Xinyu Liu ◽  
Qiang Li ◽  
Yu Li ◽  
Dong Guo

To significantly protect the user’s privacy and prevent the user’s preference disclosure from leading to malicious entrapment, we present a combination of the recommendation algorithm and the privacy protection mechanism. In this article, we present a privacy recommendation algorithm, PrivItem2Vec, and the concept of the recommended-internet of things, which is a privacy recommendation algorithm, consisting of user’s information, devices, and items. Recommended-internet of things uses bidirectional long short-term memory, based on item2vec, which improves algorithm time series and the recommended accuracy. In addition, we reconstructed the data set in conjunction with the Paillier algorithm. The data on the server are encrypted and embedded, which reduces the readability of the data and ensures the data’s security to a certain extent. Experiments show that our algorithm is superior to other works in terms of recommended accuracy and efficiency.


2020 ◽  
Author(s):  
Frederik Kratzert ◽  
Daniel Klotz ◽  
Sepp Hochreiter ◽  
Grey S. Nearing

Abstract. A deep learning rainfall-runoff model can take multiple meteorological forcing products as inputs and learn to combine them in spatially and temporally dynamic ways. This is demonstrated using Long Short-Term Memory networks (LSTMs) trained over basins in the continental US using the CAMELS data set. Using multiple precipitation products (NLDAS, Maurer, DayMet) in a single LSTM significantly improved simulation accuracy relative to using only individual precipitation products. A sensitivity analysis showed that the LSTM learned to utilize different precipitation products in different ways in different basins and for simulating different parts of the hydrograph in individual basins.
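The multi-forcing idea above amounts to presenting all precipitation products side by side at each timestep, so the network can learn basin-specific weightings. A sketch of the input layout, with the product names from the abstract and synthetic values:

```python
import numpy as np

# One synthetic precipitation series per product (mm/day); real inputs
# would come from the CAMELS forcing files.
nldas  = np.array([1.0, 0.0, 2.0])
maurer = np.array([0.8, 0.1, 2.4])
daymet = np.array([1.2, 0.0, 1.9])

# Shape (timesteps, features): one row per day, one column per product,
# the layout an LSTM input expects for a single basin.
x = np.stack([nldas, maurer, daymet], axis=1)
```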


Author(s):  
Dejiang Kong ◽  
Fei Wu

The wide use of positioning technology has made mining people's movements feasible, and plenty of trajectory data have been accumulated. How to efficiently leverage these data for location prediction has become an increasingly popular research topic, as it is fundamental to location-based services (LBS). Existing methods often focus either on long-term (days or months) visit prediction (i.e., recommendation of points of interest) or on real-time location prediction (i.e., trajectory prediction). In this paper, we are interested in the location prediction problem under a weak real-time condition and aim to predict users' movements in the next minutes or hours. We propose a Spatial-Temporal Long-Short Term Memory (ST-LSTM) model which naturally combines spatial-temporal influence into LSTM to mitigate the problem of data sparsity. Further, we employ a hierarchical extension of the proposed ST-LSTM (HST-LSTM) in an encoder-decoder manner, which models contextual historic visit information to boost prediction performance. The proposed HST-LSTM is evaluated on a real-world trajectory data set, and the experimental results demonstrate the effectiveness of the proposed model.
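One common ingredient of spatial-temporal gating is discretizing the continuous time gap and distance between consecutive check-ins into slots, whose embeddings then condition the LSTM gates. A hedged sketch; the slot boundaries below are invented for illustration, not taken from the paper:

```python
# Map a continuous gap to a discrete slot index; embeddings looked up by
# these indices would feed the spatial-temporal gates. Boundaries are
# arbitrary illustrative choices.
TIME_SLOTS = [1, 6, 24]         # hours: <1h, 1-6h, 6-24h, >24h
DIST_SLOTS = [0.5, 2.0, 10.0]   # km:    <0.5, 0.5-2, 2-10, >10

def slot_index(value, boundaries):
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)      # last, open-ended slot

t_slot = slot_index(3.5, TIME_SLOTS)   # a 3.5-hour gap
d_slot = slot_index(12.0, DIST_SLOTS)  # a 12-km jump
```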


2021 ◽  
Author(s):  
Jianrong Dai

Abstract Purpose Machine Performance Check (MPC) is a daily quality assurance (QA) tool for Varian machines. The daily QA data from MPC tests show machine performance patterns and can potentially provide warning messages for preventive actions. This study developed a neural network model that can quantitatively predict the trend of data variations. Methods and materials: The MPC data used were collected daily for 3 years. A stacked long short-term memory (LSTM) model was used to develop the neural network model. For comparison with the stacked LSTM, an autoregressive integrated moving average (ARIMA) model was developed on the same data set. Cubic interpolation was used to double the amount of data to enhance prediction accuracy. The data were then divided into 3 groups: 70% for training, 15% for validation, and 15% for testing. The training and validation sets were used to train the stacked LSTM with different hyperparameters to find the optimal ones. Furthermore, a greedy coordinate descent method was employed to combine different hyperparameter sets. The testing set was used to assess the performance of the model with the optimal hyperparameter combination. The accuracy of the model was quantified by the mean absolute error (MAE), root-mean-square error (RMSE), and coefficient of determination (R2). Results A total of 867 data points were collected to predict the data for the next 5 days. Across all MPC tests, the mean MAE, RMSE, and R2 were 0.013, 0.020, and 0.853 for the LSTM, versus 0.021, 0.030, and 0.618 for ARIMA, respectively. These results show that the LSTM outperforms ARIMA. Conclusions In this study, the stacked LSTM model accurately predicted the daily QA data from MPC tests. Predicting future performance data based on MPC tests can foresee possible machine failure, allowing early machine maintenance and reducing unscheduled machine downtime.
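The three accuracy metrics quoted above have standard definitions, sketched here with synthetic prediction/target values rather than the MPC data:

```python
import numpy as np

# Standard definitions of the three metrics used to score the models.
def mae(y, p):  return np.mean(np.abs(y - p))
def rmse(y, p): return np.sqrt(np.mean((y - p) ** 2))
def r2(y, p):   return 1.0 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

y = np.array([1.0, 2.0, 3.0, 4.0])   # synthetic targets
p = np.array([1.1, 1.9, 3.2, 3.8])   # synthetic predictions
scores = (mae(y, p), rmse(y, p), r2(y, p))
```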


Author(s):  
Chenchao Zhou ◽  
Qun Chen ◽  
Zhanhuai Li ◽  
Bo Zhao ◽  
Yongjun Xu ◽  
...  

Online reviews play an increasingly important role in users' purchase decisions. E-commerce websites provide massive numbers of user reviews, but it is hard for individuals to make full use of this information. It is therefore an urgent task to classify, analyze, and summarize the massive volume of comments. In this paper, a model based on an attention mechanism and bi-directional long short-term memory (BLSTM) is used to identify the categories of review objects for the classification of the reviews. The model first uses the BLSTM to process the reviews in the form of word vectors; then, according to part-of-speech, the output vectors of the BLSTM are given corresponding weights. These weights, as prior knowledge, guide the learning of the attention mechanism to enhance classification accuracy; finally, the attention mechanism captures category-related important features that are used for category determination. Experiments on the SemEval data set show that our model outperforms state-of-the-art methods on aspect category detection.
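The prior-weighting idea above can be sketched as scaling raw attention scores by part-of-speech weights before the softmax, so tokens of informative classes (e.g. nouns) receive more attention. The scores and weights below are invented for illustration:

```python
import numpy as np

# Scale per-token attention scores by part-of-speech prior weights,
# then normalize with a numerically stable softmax.
def pos_weighted_attention(scores, pos_weights):
    scaled = np.asarray(scores) * np.asarray(pos_weights)
    e = np.exp(scaled - scaled.max())
    return e / e.sum()

scores = [1.0, 2.0, 0.5]          # raw relevance scores per token
pos_weights = [1.5, 1.0, 0.5]     # e.g. noun, verb, determiner (invented)
attn = pos_weighted_attention(scores, pos_weights)
```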


2019 ◽  
Vol 15 (10) ◽  
pp. 155014771988313 ◽  
Author(s):  
Chi Hua ◽  
Erxi Zhu ◽  
Liang Kuang ◽  
Dechang Pi

Accurate prediction of the generation capacity of photovoltaic systems is fundamental to ensuring grid stability and to making correct scheduling arrangements. In view of the temporal limitations and the local-minimum problem of back-propagation neural networks, a power generation forecasting method based on long short-term memory-back-propagation is proposed. On this basis, the traditional prediction data set is improved: alongside the three traditional methods listed in this article, we propose a fourth method to improve traditional short-term power generation prediction for photovoltaic power stations. Compared with the traditional method, the long short-term memory-back-propagation neural network based on the improved data set has a lower prediction error. At the same time, a horizontal comparison with multiple linear regression and the support vector machine shows that the long short-term memory-back-propagation method has several advantages. Based on the long short-term memory-back-propagation neural network, the short-term forecasting method proposed in this article for the generating capacity of photovoltaic power stations provides a basis for the dispatching plan and optimized operation of the power grid.


foresight ◽
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Alireza Sedighi Fard

Purpose This study aims to compare many artificial neural network (ANN) methods to find out which is better for predicting the number of Covid19 cases N steps ahead of the current time, so that we can be better prepared for similar issues in the future. Design/methodology/approach The authors use many ANNs in this study, including five different long short-term memory (LSTM) methods, polynomial regression (from degree 2 to 5) and an online dynamic unsupervised feedforward neural network (ODUFFNN). These networks are applied to a data set of Covid19 case counts gathered by the World Health Organization. After 1,000 epochs for each network, the authors calculate the accuracy of each network, so that the networks can be compared by performance and the best method for predicting Covid19 chosen. Findings The authors concluded that in most cases LSTM could predict Covid19 cases with an accuracy of more than 85%; after the LSTM networks, ODUFFNN had a medium accuracy of 45%, but this network is highly flexible and fast in computation. The authors concluded that polynomial regression is not a good method for this specific purpose. Originality/value Considering that Covid19 is a new global issue, few studies have taken a comparative approach to predicting Covid19 with ANN methods in order to identify the best model for predicting this virus.

