Contour error modeling and compensation of CNC machining based on deep learning and reinforcement learning

Author(s):  
Yakun Jiang ◽  
Jihong Chen ◽  
Huicheng Zhou ◽  
Jianzhong Yang ◽  
Pengcheng Hu ◽  
...  
2021 ◽  

Abstract Contour error compensation of the Computer Numerical Control (CNC) machine tool is a vital technology that can improve machining accuracy and quality. To achieve this goal, the tracking error of a feeding axis, which is the dominant source of contour error, must first be modeled, and a proper compensation strategy must then be determined. However, building a precise tracking-error prediction model is challenging because of nonlinear effects such as backlash and friction in the feeding axis; moreover, the optimal compensation parameter is difficult to determine because it is sensitive to the machining tool path. In this paper, a set of novel approaches for contour error prediction and compensation is presented based on deep learning and reinforcement learning. Using the internal data of the CNC system, the tracking error of the feeding axis is modeled by a Nonlinear Auto-Regressive Long Short-Term Memory (NAR-LSTM) network that accounts for all the nonlinear effects of the feeding axis. Given the contour error calculated from the predicted tracking error of each feeding axis, a compensation strategy is presented whose parameters are identified efficiently by a Time-Series Deep Q-Network (TS-DQN) designed in our work. To validate the feasibility and advantage of the proposed approaches, extensive experiments are conducted, demonstrating that our approaches predict the tracking error and contour error with high precision (better than about 99% and 90%, respectively), that the compensated contour error is significantly reduced (by about 70%–85%), and that machining quality improves markedly (machining error reduced by about 50%).
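The geometric step described above, turning per-axis tracking errors into a contour error, can be sketched with a standard linearized approximation. The paper's exact formulation is not reproduced here; the tangent-based estimate below is a common simplification that holds for locally straight tool-path segments:

```python
import math

def contour_error(ex, ey, tx, ty):
    """Approximate contour error for an X-Y plane tool path: the component
    of the tracking-error vector (ex, ey) perpendicular to the path
    tangent direction (tx, ty). A linearization, valid for locally
    straight segments; curvature correction is omitted for brevity."""
    # Normalize the tangent so the result is a signed distance.
    n = math.hypot(tx, ty)
    tx, ty = tx / n, ty / n
    # 2-D cross product: magnitude of the perpendicular error component.
    return ex * ty - ey * tx
```

In a full pipeline, `ex` and `ey` would come from the NAR-LSTM tracking-error predictions for each feeding axis, evaluated along the commanded path.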


Author(s):  
Sangseok Yun ◽  
Jae-Mo Kang ◽  
Jeongseok Ha ◽  
Sangho Lee ◽  
Dong-Woo Ryu ◽  
...  

2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Tiago Pereira ◽  
Maryam Abbasi ◽  
Bernardete Ribeiro ◽  
Joel P. Arrais

Abstract In this work, we explore the potential of deep learning to streamline the identification of new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using SMILES string notation, and the Predictor, which evaluates newly generated compounds by predicting their affinity for the desired target. The Generator is then optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it that is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is used to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to exploit the experience gained so far when sampling new molecules. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient as well as high inhibitory power against the adenosine $$A_{2A}$$ and $$\kappa$$ opioid receptors. The results reveal that the model can effectively steer newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy.
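The reward-driven alternation between the fixed and updated Generators can be sketched as a simple scheduling rule. The function below is illustrative only, not the paper's exact schedule: an improving reward trend favors the updated model (exploitation of the fine-tuned chemistry), while a worsening trend falls back toward the fixed pretrained model (exploration of known-valid structures):

```python
def p_use_updated(reward_history, window=5):
    """Probability of letting the *updated* Generator choose the next
    SMILES token, based on the evolution of the Predictor's reward.
    Hypothetical rule: compare the mean reward over the most recent
    window against the window before it."""
    if len(reward_history) < 2 * window:
        return 0.5  # not enough evidence yet: treat both models equally
    recent = sum(reward_history[-window:]) / window
    earlier = sum(reward_history[-2 * window:-window]) / window
    # Improving trend -> mostly exploit the updated model;
    # declining trend -> mostly fall back to the fixed model.
    return 0.9 if recent > earlier else 0.1
```

At each token, one would draw a uniform random number and sample from the updated Generator when it falls below this probability, otherwise from the fixed one.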


Author(s):  
Eduardo F. Morales ◽  
Rafael Murrieta-Cid ◽  
Israel Becerra ◽  
Marco A. Esquivel-Basaldua

2021 ◽  
Author(s):  
Hanxiao Xu ◽  
Jie Liang ◽  
Wenchaun Zang

Abstract This paper combines a deep Q network (DQN) with long short-term memory (LSTM) and proposes a novel hybrid deep learning method called the DQN-LSTM framework. The proposed method addresses the prediction of five Chinese agricultural commodity futures prices over different time horizons. DQN-LSTM applies the policy-improvement strategy of deep reinforcement learning to the structural parameter optimization of deep recurrent networks, achieving an organic integration of the two types of deep learning algorithms. The new framework can self-optimize and learn its parameters, improving prediction performance through its own iteration, which shows great promise for future application in financial prediction and other areas. The performance of the proposed method is evaluated by comparing the DQN-LSTM method with traditional prediction methods such as the auto-regressive integrated moving average (ARIMA), support vector regression (SVR), and LSTM. The results show that the DQN-LSTM method can effectively optimize the traditional LSTM structural parameters through policy iteration of the deep reinforcement learning algorithm, contributing to better long- and short-term prediction accuracy. In particular, the longer the prediction period, the more pronounced the accuracy advantage of the DQN-LSTM method.
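The idea of using reinforcement learning to select an LSTM's structural parameters can be reduced to a minimal sketch. The abstract does not specify the action space or update rule, so the snippet below simplifies the DQN to a tabular, single-state Q-learner over a hypothetical search space of hidden-unit counts; a real system would use a neural Q-function and a richer state:

```python
import random

# Hypothetical search space of LSTM structural parameters (hidden units);
# the paper's actual action space is not given in the abstract.
ACTIONS = [32, 64, 128]

def update_q(q, action, reward, alpha=0.5):
    """Tabular update for a single-state problem: Q(a) += alpha * (r - Q(a)).
    In a full system the reward would be, e.g., the negative validation
    error of an LSTM trained with the chosen structure."""
    q[action] += alpha * (reward - q[action])

def choose_action(q, epsilon, rng):
    """Epsilon-greedy: explore a random structure, else exploit best-so-far."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])
```

After enough updates, the greedy action converges to the structure with the highest observed reward, which is the self-optimization loop the abstract describes.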


Author(s):  
Zhaoliang He ◽  
Hongshan Li ◽  
Zhi Wang ◽  
Shutao Xia ◽  
Wenwu Zhu

With the growth of computer vision-based applications, an explosive number of images has been uploaded to cloud servers that host such online computer vision algorithms, usually in the form of deep learning models. JPEG has been used as the de facto compression and encapsulation method for images. However, the standard JPEG configuration does not always perform well for compressing images that are to be processed by a deep learning model—for example, the standard quality level of JPEG leads to a 50% size overhead (compared with the best quality level selection) on ImageNet under the same inference accuracy in popular computer vision models (e.g., InceptionNet and ResNet). Knowing this, designing a better JPEG configuration for online computer vision-based services is still extremely challenging. First, cloud-based computer vision models are usually a black box to end-users; thus, it is challenging to design a JPEG configuration without knowing their model structures. Second, the “optimal” JPEG configuration is not fixed; instead, it is determined by confounding factors, including the characteristics of the input images and the model, the expected accuracy and image size, and so forth. In this article, we propose a reinforcement learning (RL)-based adaptive JPEG configuration framework, AdaCompress. In particular, we design an edge (i.e., user-side) RL agent that learns the optimal compression quality level to achieve an expected inference accuracy and upload image size, only from the online inference results, without knowing details of the model structures. Furthermore, we design an explore-exploit mechanism to let the framework quickly switch agents when it detects a performance degradation, mainly due to input change (e.g., images captured across daytime and night).
Our evaluation experiments using real-world online computer vision-based APIs from Amazon Rekognition, Face++, and Baidu Vision show that our approach outperforms existing baselines by reducing the size of images by one-half to one-third while the overall classification accuracy only decreases slightly. Meanwhile, AdaCompress adaptively re-trains or re-loads the RL agent promptly to maintain the performance.
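The degradation-detection step that triggers the explore-exploit switch can be sketched as a reward-drift monitor. The rule below is an illustrative assumption, not AdaCompress's published mechanism: fire when the mean reward over a recent window drops well below the best window mean seen so far:

```python
from collections import deque

class DriftDetector:
    """Flags performance degradation so the framework can switch back to
    exploration (re-train or re-load the RL agent). Illustrative rule:
    report drift when the mean reward over the last `window` steps falls
    below `drop` times the best window mean observed so far."""

    def __init__(self, window=20, drop=0.7):
        self.rewards = deque(maxlen=window)
        self.best_mean = float("-inf")
        self.drop = drop

    def update(self, r):
        """Record one reward; return True if drift is detected."""
        self.rewards.append(r)
        if len(self.rewards) < self.rewards.maxlen:
            return False  # window not yet full: no decision
        mean = sum(self.rewards) / len(self.rewards)
        self.best_mean = max(self.best_mean, mean)
        return mean < self.drop * self.best_mean
```

Here the reward would combine inference accuracy and upload size (e.g., accuracy minus a size penalty), so a lighting change that hurts accuracy at the current quality level pushes the window mean down and triggers the switch.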

