Scheduling of AGVs in Automated Container Terminal Based on the Deep Deterministic Policy Gradient (DDPG) Using the Convolutional Neural Network (CNN)

2021 ◽  
Vol 9 (12) ◽  
pp. 1439
Author(s):  
Chun Chen ◽  
Zhi-Hua Hu ◽  
Lei Wang

In order to improve the horizontal transportation efficiency of terminal Automated Guided Vehicles (AGVs), it is necessary to coordinate the time and space synchronization of loading/unloading equipment and transport equipment during operations and to reduce task completion time. Traditional scheduling methods have limited dynamic response capabilities and are not suitable for dynamic terminal operating environments. Therefore, this paper discusses how to use delivery task information and AGV spatiotemporal information to dynamically schedule AGVs, minimizing task delay time and AGV travel time, and proposes a deep reinforcement learning framework. The framework combines the real-time response and flexibility of the Convolutional Neural Network (CNN) and the Deep Deterministic Policy Gradient (DDPG) algorithm, and can dynamically adjust the AGV scheduling strategy according to the input spatiotemporal state information. In the framework, the AGV scheduling process is first defined as a Markov decision process; the system's spatiotemporal state information is analyzed in detail, and assignment heuristic rules and a reward-reshaping mechanism are introduced to decouple the model from the dynamic AGV scheduling problem. Then, a multi-channel matrix is built to characterize the space–time state information, the CNN is used to generalize and approximate the action-value functions of different states, and the DDPG algorithm is used to achieve the best AGV–container matching in the decision stage. The proposed model and algorithm framework are applied to experiments with different cases, and their scheduling performance is compared with that of an adaptive genetic algorithm (AGA) and a rolling horizon approach (RHPA). The results show that, compared with a single scheduling rule, the proposed algorithm improves the average performance of task completion time, task delay time, AGV travel time and task delay rate by 15.63%, 56.16%, 16.36% and 30.22%, respectively; compared with AGA and RHPA, it reduces the task completion time by approximately 3.10% and 2.40%, respectively.
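To make the DDPG-with-CNN idea concrete, the sketch below shows a CNN encoder over a multi-channel spatiotemporal state matrix, a simple actor/critic pair, and the soft target-network update used in DDPG. This is a minimal illustration assuming a PyTorch environment; the channel meanings, option encoding for AGV–container matching, and layer sizes are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): CNN state encoder + DDPG-style
# actor/critic over a multi-channel spatiotemporal state. Shapes and the
# action encoding are illustrative assumptions.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Encodes the multi-channel space-time matrix (e.g., AGV positions,
    task deadlines, equipment status) into a feature vector."""
    def __init__(self, in_channels: int = 4, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):
        return torch.relu(self.fc(self.conv(x).flatten(1)))

class Actor(nn.Module):
    """Maps state features to a score per AGV-container pairing option."""
    def __init__(self, feat_dim: int = 128, n_options: int = 8):
        super().__init__()
        self.encoder = CNNEncoder(feat_dim=feat_dim)
        self.head = nn.Linear(feat_dim, n_options)

    def forward(self, state):
        return torch.tanh(self.head(self.encoder(state)))

class Critic(nn.Module):
    """Estimates Q(s, a) for the chosen pairing scores."""
    def __init__(self, feat_dim: int = 128, n_options: int = 8):
        super().__init__()
        self.encoder = CNNEncoder(feat_dim=feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + n_options, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state, action):
        return self.head(torch.cat([self.encoder(state), action], dim=1))

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
    """DDPG's Polyak averaging of target-network parameters."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.mul_(1 - tau).add_(tau * s)
```

In a training loop the critic would be regressed toward reshaped rewards plus the discounted target-critic value, while the actor is updated to maximize the critic's estimate for the current state.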

2019 ◽  
Vol 9 (10) ◽  
pp. 1983 ◽  
Author(s):  
Seigo Ito ◽  
Mineki Soga ◽  
Shigeyoshi Hiratsuka ◽  
Hiroyuki Matsubara ◽  
Masaru Ogawa

Automated guided vehicles (AGVs) are important in modern factories. The main functions of an AGV are self-localization and object detection, for which both the sensor and the localization method are crucial. For localization, we used a small imaging sensor called a single-photon avalanche diode (SPAD) light detection and ranging (LiDAR) sensor, which uses the time-of-flight principle and arrays of SPADs. The SPAD LiDAR works both indoors and outdoors and is suitable for AGV applications. We used a deep convolutional neural network (CNN) as the localization method. For accurate CNN-based localization, the quality of the supervised training data is important: noisy data degrade the localization results, whereas clean data improve them. To address this issue, we propose a quality index for supervised data based on the correlation, between consecutive frames, of the pixels that are important for CNN-based localization. First, the important pixels for CNN-based localization are determined; the quality index of the supervised data is then defined based on differences in these pixels between frames. We evaluated the quality index for indoor localization using the SPAD LiDAR and compared the localization performance. Our results demonstrate that the index correlates well with the quality of supervised training data for CNN-based localization.
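The sketch below illustrates the underlying computation: given per-frame importance maps of the pixels the localization CNN relies on (assumed to be obtained separately, e.g., via gradient-based attribution), a sequence is scored by the average correlation of those maps between consecutive frames. The function name and the synthetic example are assumptions for illustration only, not the paper's implementation.

```python
# Minimal sketch: score a supervised sequence by the correlation of
# CNN-importance maps between consecutive frames. Higher scores suggest
# cleaner, more consistent supervision.
import numpy as np

def quality_index(importance_maps: np.ndarray) -> float:
    """importance_maps: array of shape (n_frames, H, W).
    Returns the mean Pearson correlation between consecutive frames."""
    flat = importance_maps.reshape(len(importance_maps), -1)
    corrs = []
    for a, b in zip(flat[:-1], flat[1:]):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        corrs.append(float(a @ b / denom) if denom > 0 else 0.0)
    return float(np.mean(corrs))

# Example: a smooth sequence (clean supervision) scores higher than a noisy one.
rng = np.random.default_rng(0)
clean = np.cumsum(rng.normal(size=(10, 32, 32)) * 0.1, axis=0)
noisy = rng.normal(size=(10, 32, 32))
print(quality_index(clean), quality_index(noisy))
```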


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1340 ◽  
Author(s):  
Rohyoung Myung ◽  
Heonchang Yu

Applications with large-scale data are processed on distributed systems such as Spark because they are data- and computation-intensive. Predicting the performance of such applications is difficult because they are influenced by configurations at many levels, from the distributed framework to the application itself. In this paper, we propose a machine-learning-based completion time prediction model for the representative deep learning model, the convolutional neural network (CNN), by analyzing how data, task, and resource characteristics affect performance when the model is executed on a Spark cluster. To reduce the time required to collect training data for the prediction model, we consider the causal relationship between the model features and the completion time based on Spark CNN's distributed data-parallel model. The model features include the configuration of the Data Center OS Mesos environment, the configuration of Apache Spark, and the configuration of the CNN model. Applying the proposed model to well-known CNN implementations, we achieved 99.98% prediction accuracy in estimating the job completion time. In addition to downscaling the search area for the model features, we leverage extrapolation, which reduces the model build time by up to 89% with even better prediction accuracy in comparison with the actual workload.
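The regression idea can be sketched as follows: completion time is predicted from framework-level and application-level configuration features. The feature names, learner, and synthetic data below are placeholders and do not reflect the paper's actual feature set or pipeline.

```python
# Minimal sketch of configuration-to-completion-time regression (illustrative).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Columns (assumed): [executor_cores, executor_memory_gb, num_executors,
#                     batch_size, conv_layers, input_size_gb]
rng = np.random.default_rng(42)
X = rng.uniform([1, 2, 2, 16, 2, 1], [8, 32, 32, 512, 16, 100], size=(500, 6))
# Synthetic target: time grows with data/model size, shrinks with resources.
y = 50 * X[:, 5] * X[:, 4] / (X[:, 0] * X[:, 2]) + rng.normal(scale=5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out configurations:", round(model.score(X_te, y_te), 3))
```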


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 59336-59349 ◽  
Author(s):  
Xiangdong Ran ◽  
Zhiguang Shan ◽  
Yufei Fang ◽  
Chuang Lin

2021 ◽  
Vol 13 (3) ◽  
pp. 1253
Author(s):  
Xiantong Li ◽  
Hua Wang ◽  
Pengcheng Sun ◽  
Hongquan Zu

Travel time is one of the most important parameters for forecasting network-wide traffic conditions. Guided by accurate travel time estimates for alternative routes, travelers can traverse the road network and reach their destinations at the lowest cost. In this study, we propose a long short-term memory (LSTM)-based deep learning model, Deep Learning on Spatiotemporal Features with a Convolutional Neural Network (DLSF-CNN), that extracts the spatial–temporal correlation of travel time on different routes to accurately predict route travel time. Specifically, the model takes network-wide travel times and the network's topological structure as inputs and combines convolutional neural network and LSTM techniques to predict travel time accurately. Both coarse-grained and fine-grained temporal dependences among the road segments along a route are fully considered, in addition to their spatial dependence. The shift problem is formulated at the coarse-grained granularity to predict the route travel time in the next time interval. Experimental tests were conducted using real route travel times obtained from taxi trajectories in Harbin. The test results show that the travel time prediction accuracy of DLSF-CNN is above 90%. Meanwhile, the proposed model outperformed other machine learning models on multiple evaluation criteria: the RMSE (Root Mean Squared Error) improved by 18.6% and the R2 (R squared) increased by 22.46%. The results indicate that the proposed model performs well under prevailing traffic conditions.
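The sketch below shows the general CNN-plus-LSTM pattern: per time step, a convolution extracts spatial features over the road segments along a route, and an LSTM then models temporal dependence and predicts the next-interval route travel time. Layer sizes and the input layout are assumptions; this is not the DLSF-CNN architecture.

```python
# Minimal sketch of a CNN + LSTM travel time predictor (illustrative).
import torch
import torch.nn as nn

class TravelTimeNet(nn.Module):
    def __init__(self, n_segments: int = 20, hidden: int = 64):
        super().__init__()
        self.spatial = nn.Sequential(           # shared across time steps
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.temporal = nn.LSTM(input_size=8 * 8, hidden_size=hidden,
                                batch_first=True)
        self.head = nn.Linear(hidden, 1)        # route travel time, next interval

    def forward(self, x):
        # x: (batch, time_steps, n_segments) segment travel times
        b, t, s = x.shape
        feats = self.spatial(x.reshape(b * t, 1, s)).reshape(b, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])            # predict from the last time step

# Example forward pass on dummy data: 4 routes, 12 intervals, 20 segments.
pred = TravelTimeNet()(torch.rand(4, 12, 20))
print(pred.shape)  # torch.Size([4, 1])
```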


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important problem. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the point spread function (PSF) is known and spatially invariant. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Although CNNs can handle complex variations in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a CNN-based non-blind deconvolution framework that can remove large-scale ringing from a deblurred image. Our method has three key points. First, the network architecture preserves both large and small features in the image. Second, the training dataset is created so as to preserve details. Third, we extend the image borders to minimize the effect of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and obtained high-precision results both quantitatively and qualitatively.
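The border-extension idea can be illustrated without the CNN: reflect-pad the blurred image before a simple frequency-domain Wiener deconvolution, then crop, so that boundary-induced ringing falls in the discarded padding. The PSF, pad width, and regularization constant below are assumptions, and classical Wiener deconvolution stands in for the paper's network.

```python
# Minimal sketch of border extension before non-blind deconvolution (illustrative).
import numpy as np
from scipy.ndimage import uniform_filter

def wiener_deconv(blurred: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Frequency-domain Wiener deconvolution with a known, spatially uniform PSF."""
    psf_pad = np.zeros_like(blurred)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

def deconv_with_border_extension(blurred: np.ndarray, psf: np.ndarray, pad: int = 32):
    """Extend borders by reflection, deconvolve, then crop back to the original size."""
    padded = np.pad(blurred, pad, mode="reflect")
    restored = wiener_deconv(padded, psf)
    return restored[pad:-pad, pad:-pad]

# Example with a large 15x15 box PSF on a synthetic image.
rng = np.random.default_rng(1)
blurred = uniform_filter(rng.random((128, 128)), size=15)
psf = np.ones((15, 15)) / 225.0
print(deconv_with_border_extension(blurred, psf).shape)  # (128, 128)
```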

