Recommending Cryptocurrency Trading Points with Deep Reinforcement Learning Approach

2020 ◽  
Vol 10 (4) ◽  
pp. 1506 ◽  
Author(s):  
Otabek Sattarov ◽  
Azamjon Muminov ◽  
Cheol Won Lee ◽  
Hyun Kyu Kang ◽  
Ryumduck Oh ◽  
...  

The net profit of investors can rapidly increase if they correctly decide which of three actions to take: buying, selling, or holding a stock. The right action depends on a large number of stock market measurements, so choosing it requires specific expertise from investors. Economists have proposed several strategies and indicators intended to identify the best trading option in a stock market. However, many investors have lost capital when trading on the basis of these recommendations, which suggests that the stock market needs further research capable of offering investors a stronger guarantee of success. To address this challenge, we applied a machine learning algorithm, deep reinforcement learning (DRL), to the stock market. As a result, we developed an application that observes historical price movements and takes actions on real-time prices. We tested the proposed algorithm on the historical data of three cryptocurrencies: Bitcoin (BTC), Litecoin (LTC), and Ethereum (ETH). The experiment on Bitcoin with the DRL application shows that the investor earned a 14.4% net profit within one month; tests on Litecoin and Ethereum finished with 74% and 41% profit, respectively.
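The abstract does not describe the authors' network architecture, so the following is only a minimal sketch of the buy/sell/hold decision loop it outlines, using tabular Q-learning on a toy discretized price-trend state instead of a deep network. The synthetic price series and all hyperparameters are illustrative assumptions, not values from the paper.

    # Minimal sketch of an RL trading loop over buy/sell/hold actions.
    # NOT the authors' DRL implementation: tabular Q-learning on a toy
    # discretized state (recent price trend), purely to illustrate the
    # action space and reward framing described in the abstract.
    import random
    from collections import defaultdict

    ACTIONS = ["buy", "sell", "hold"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed hyperparameters

    def trend_state(prices, t, window=3):
        """Discretize the recent trend: +1 up, -1 down, 0 flat."""
        if t < window:
            return 0
        change = prices[t] - prices[t - window]
        return 1 if change > 0 else (-1 if change < 0 else 0)

    def run_episode(prices, q):
        cash, coins = 1000.0, 0.0
        for t in range(len(prices) - 1):
            s = trend_state(prices, t)
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            # Execute the chosen action at the current price.
            if a == "buy" and cash > 0:
                coins += cash / prices[t]
                cash = 0.0
            elif a == "sell" and coins > 0:
                cash += coins * prices[t]
                coins = 0.0
            # Reward: change in portfolio value after the price moves.
            value_now = cash + coins * prices[t]
            value_next = cash + coins * prices[t + 1]
            r = value_next - value_now
            s_next = trend_state(prices, t + 1)
            best_next = max(q[(s_next, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        return cash + coins * prices[-1]

    if __name__ == "__main__":
        # Synthetic price series standing in for historical BTC/LTC/ETH data.
        prices = [100 + random.gauss(0, 2) + 0.1 * i for i in range(500)]
        q = defaultdict(float)
        for _ in range(200):
            final_value = run_episode(prices, q)
        print(f"Final portfolio value after training: {final_value:.2f}")

The reward here is the per-step change in portfolio value, a common choice for profit-oriented trading agents; the paper's actual reward design and state features may differ.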

Author(s):  
Ika Nurkasanah

Background: Inventory policy strongly influences the Supply Chain Management (SCM) process. Evidence suggests that almost half of SCM costs are accounted for by stock-related expenses.
Objective: This paper aims to minimise total inventory cost in SCM by applying a multi-agent-based machine learning method called Reinforcement Learning (RL).
Methods: The ability of RL to find a hidden inventory-policy pattern is examined under constraints that have not previously been addressed together: a capacitated manufacturer and warehouse, limits on orders to suppliers, stochastic demand, lead-time uncertainty, and multi-sourcing supply. RL was run through Q-Learning with four experiments and 1,000 iterations to examine the consistency of its results, and was then contrasted with the previous mathematical method to check its efficiency in reducing inventory costs.
Results: After 1,000 trial-and-error simulations, the most striking finding is that RL can perform more efficiently than the mathematical approach by placing optimum order quantities at the right time. Moreover, this result was achieved under complex constraints and assumptions that have not been simultaneously simulated in previous studies.
Conclusion: The results confirm that the RL approach is valuable when applied to supply network environments comparable to the one described in this project. Since RL still leads to higher shortages in this research, combining RL with other machine learning algorithms is suggested for a more robust end-to-end SCM analysis.
Keywords: Inventory Policy, Multi-Echelon, Reinforcement Learning, Supply Chain Management, Q-Learning
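To make the Q-Learning framing concrete, here is a minimal sketch of a single-echelon, single-sourcing simplification of the problem: state = on-hand inventory, action = order quantity, cost = holding + shortage + ordering. It is not the paper's multi-echelon, capacitated, multi-sourcing model; the capacity, demand distribution, and cost values below are assumed for illustration only.

    # Minimal sketch of Q-Learning for an inventory ordering policy.
    # Single-echelon simplification of the abstract's setting; all
    # capacities, costs, and the demand distribution are assumptions.
    import random
    from collections import defaultdict

    CAPACITY = 20                       # assumed warehouse capacity
    ORDER_OPTIONS = [0, 5, 10, 15]      # assumed allowed order quantities
    HOLD_COST, SHORT_COST, ORDER_COST = 1.0, 5.0, 2.0
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    def step(inventory, order):
        """Apply an order, realize stochastic demand, return next state and cost."""
        inventory = min(CAPACITY, inventory + order)
        demand = random.randint(0, 10)          # assumed stochastic demand
        shortage = max(0, demand - inventory)
        inventory = max(0, inventory - demand)
        cost = (HOLD_COST * inventory
                + SHORT_COST * shortage
                + (ORDER_COST if order > 0 else 0))
        return inventory, cost

    q = defaultdict(float)
    inventory = 10
    for episode in range(1000):                 # 1,000 iterations, as in the abstract
        for _ in range(52):                     # assumed review periods per episode
            if random.random() < EPSILON:
                order = random.choice(ORDER_OPTIONS)
            else:
                order = min(ORDER_OPTIONS, key=lambda a: q[(inventory, a)])
            next_inv, cost = step(inventory, order)
            best_next = min(q[(next_inv, a)] for a in ORDER_OPTIONS)
            # Q-update on costs (minimization form of the usual reward update).
            q[(inventory, order)] += ALPHA * (cost + GAMMA * best_next
                                              - q[(inventory, order)])
            inventory = next_inv

    # Greedy (minimum expected cost) order quantity per inventory level.
    policy = {s: min(ORDER_OPTIONS, key=lambda a: q[(s, a)])
              for s in range(CAPACITY + 1)}
    print(policy)

Because costs are minimized rather than rewards maximized, the greedy step picks the action with the lowest Q-value; extending this to multiple echelons and suppliers, as the paper does, mainly enlarges the state and action spaces.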


2020 ◽  
Vol 17 (10) ◽  
pp. 129-141
Author(s):  
Yiwen Nie ◽  
Junhui Zhao ◽  
Jun Liu ◽  
Jing Jiang ◽  
Ruijin Ding

2016 ◽  
Author(s):  
Dario di Nocera ◽  
Alberto Finzi ◽  
Silvia Rossi ◽  
Mariacarla Staffa

Author(s):  
Panagiotis Radoglou-Grammatikis ◽  
Konstantinos Robolos ◽  
Panagiotis Sarigiannidis ◽  
Vasileios Argyriou ◽  
Thomas Lagkas ◽  
...  
