Direct Policy Optimization using Deterministic Sampling and Collocation

Author(s):  
Taylor Howell ◽  
Chunjiang Fu ◽  
Zachary Manchester


2018 ◽
Vol 6 (1) ◽  
Author(s):  
Firrean Firrean

A Special Economic Zone (SEZ) is a region with defined boundaries within the jurisdiction of Indonesia, designated to perform economic functions and granted certain facilities. One SEZ developed in North Sumatra Province, and included in the Medan–Binjai–Deli Serdang–Karo National Strategic Area (KSN Mebidangro), is SEZ Sei Mangke. SEZ Sei Mangke was established by Government Regulation (PP) No. 29 of 2012 on 27 February 2012 and is the first SEZ in Indonesia whose operation was inaugurated by President Joko Widodo, on January 27, 2015. KSN Mebidangro itself is a priority area for spatial planning because it has a nationally significant influence on state sovereignty, national defense and security, the economy, society, culture, and/or the environment, including areas designated as world heritage. This is an evaluative study intended to assess the outcome of a policy program in order to formulate final policy recommendations, using the CIPO model, which comprises four stages: (1) context, (2) input, (3) process, and (4) output. The research method is a case study applying a qualitative approach that aims to produce an accurate interpretation of the characteristics of the object under study. Findings from the context evaluation indicate that the program is generally running well, but some aspects of policy synergy and optimization, as well as financing support from central and local government, need to be improved. In the input and process evaluations, several aspects also need improvement, because the findings show that their weaknesses result from a lack of policy synergy and optimization and insufficient support from local government. An interesting result of the output evaluation is that, despite these weaknesses in the input and process components, the development of SEZ Sei Mangke can still proceed well. This research recommends improving the quality of policy and program synergy for the development of SEZ Sei Mangke by strengthening the aspects identified at each stage of the evaluation.


2021 ◽  
Author(s):  
Srivatsan Krishnan ◽  
Behzad Boroujerdian ◽  
William Fu ◽  
Aleksandra Faust ◽  
Vijay Janapa Reddi

We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle-avoidance tasks in three different environments, along with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies’ performance under various quality-of-flight (QoF) metrics, such as the energy consumed, endurance, and the average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight-time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the choice of onboard compute affects the aerial robot’s performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: https://github.com/harvard-edge/AirLearning.
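The artificial-delay mitigation described above is straightforward to sketch. The gym-style wrapper below, written against the classic four-tuple step API, repeats the previous action for a number of simulator steps drawn from a measured latency distribution before each new action takes effect; the class name, its parameters, and the step-count conversion are illustrative assumptions, not Air Learning’s actual interface.

```python
# Hedged sketch of latency injection during training: each new policy
# action only takes effect after a delay sampled from latencies measured
# on the target embedded platform (hardware-in-the-loop profiling).
import random
import gym


class StepDelayWrapper(gym.Wrapper):
    """Repeats the previous action for a sampled number of simulator
    steps before the new one is applied, mimicking onboard-compute
    latency. All names here are hypothetical, not Air Learning's API."""

    def __init__(self, env, measured_latencies, control_period_s):
        super().__init__(env)
        self.latencies = measured_latencies  # seconds, from HIL profiling
        self.dt = control_period_s           # simulated seconds per env step
        self.prev_action = None

    def reset(self, **kwargs):
        self.prev_action = None
        return self.env.reset(**kwargs)

    def step(self, action):
        # Draw a wall-clock latency and convert it to whole env steps.
        delay_steps = int(random.choice(self.latencies) / self.dt)
        stale = action if self.prev_action is None else self.prev_action
        obs, total_reward, done, info = None, 0.0, False, {}
        # The stale action persists while the new one is "in flight".
        for _ in range(delay_steps):
            obs, r, done, info = self.env.step(stale)
            total_reward += r
            if done:
                break
        if not done:
            obs, r, done, info = self.env.step(action)
            total_reward += r
        self.prev_action = action
        return obs, total_reward, done, info
```

Training inside such a wrapper lets the policy experience roughly the same control latency it will face on the embedded platform, which is the effect the hardware-gap reduction above relies on.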


Author(s):  
Aleksandr Ianenko ◽  
Alexander Artamonov ◽  
Georgii Sarapulov ◽  
Alexey Safaraleev ◽  
Sergey Bogomolov ◽  
...  

2021 ◽  
Vol 11 (4) ◽  
pp. 1514 ◽  
Author(s):  
Quang-Duy Tran ◽  
Sang-Hoon Bae

To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of autonomous vehicles. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning approach that investigates how leading autonomous vehicles affect the urban network in a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading-autonomous-vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated against entire-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared to proximal policy optimization with an adaptive Kullback–Leibler penalty to verify the superiority of the proposed hyperparameters. We demonstrate that fully automated traffic increased the average speed by a factor of 1.27 compared with the entire-manual-vehicle experiment. Our proposed method becomes significantly more effective at higher autonomous vehicle penetration rates. Furthermore, leading autonomous vehicles can help to mitigate traffic congestion.
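Since the final comparison above hinges on PPO’s two surrogate objectives, a minimal sketch of both may be useful. The PyTorch functions below implement the clipped objective and the adaptive-KL-penalty objective from the original PPO formulation; the tensor names and the sampled KL approximation are illustrative choices, not the authors’ exact implementation.

```python
# Minimal sketch of the two PPO surrogate losses being compared:
# clipped objective vs. adaptive Kullback-Leibler penalty.
import torch


def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate: E[min(r*A, clip(r, 1-eps, 1+eps)*A)]."""
    ratio = torch.exp(log_probs - old_log_probs)   # r = pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()   # negate for gradient descent


def ppo_kl_penalty_loss(log_probs, old_log_probs, advantages, beta):
    """KL-penalized surrogate: E[r*A] - beta * KL(pi_old || pi_new),
    with the KL approximated from sampled log-prob differences."""
    ratio = torch.exp(log_probs - old_log_probs)
    approx_kl = (old_log_probs - log_probs).mean()
    return -(ratio * advantages).mean() + beta * approx_kl


def adapt_beta(beta, observed_kl, target_kl=0.01):
    """Heuristic from the PPO paper: double or halve the penalty
    coefficient when the measured KL over- or undershoots the target."""
    if observed_kl > 1.5 * target_kl:
        return beta * 2.0
    if observed_kl < target_kl / 1.5:
        return beta / 2.0
    return beta
```

The clipped variant needs no per-update coefficient tuning, which is one reason it is commonly the default; the KL-penalty variant instead adjusts beta between updates as above.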


2020 ◽  
Vol 26 (1) ◽  
pp. 1-16
Author(s):  
Kevin Vanslette ◽  
Abdullatif Al Alsheikh ◽  
Kamal Youcef-Toumi

We motivate and calculate Newton–Cotes quadrature integration variance and compare it directly with Monte Carlo (MC) integration variance. We find an equivalence between deterministic quadrature sampling and random MC sampling by noting that MC random sampling is statistically indistinguishable from a method that uses deterministic sampling on a randomly shuffled (permuted) function. We use this statistical equivalence to regularize the form of permissible Bayesian quadrature integration priors such that they are guaranteed to be objectively comparable with MC. This leads to the proof that simple quadrature methods have expected variances that are less than or equal to their corresponding theoretical MC integration variances. Separately, using Bayesian probability theory, we find that the theoretical standard deviations of the unbiased errors of simple Newton–Cotes composite quadrature integrations improve over their worst-case errors by an extra dimension-independent factor $\propto N^{-\frac{1}{2}}$. This dimension-independent factor is validated in our simulations.
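A small numerical experiment makes the quadrature-versus-MC comparison concrete. The NumPy script below, using an arbitrarily chosen one-dimensional integrand, contrasts the composite trapezoidal rule’s error with the empirical spread of an equal-budget Monte Carlo estimator; it illustrates the setup of the comparison, not the authors’ own simulations.

```python
# Compare composite trapezoidal (Newton-Cotes) error against Monte Carlo
# spread at equal sample budgets N, on an illustrative 1-D integrand.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2)       # integrand on [0, 1]
TRUE_VAL = 0.7468241328124271     # = (sqrt(pi)/2) * erf(1)

for N in (16, 64, 256, 1024):
    # Deterministic sampling: composite trapezoidal rule on N grid points.
    x = np.linspace(0.0, 1.0, N)
    trap_err = abs(np.trapz(f(x), x) - TRUE_VAL)

    # Random sampling: N uniform draws, repeated to estimate the std dev.
    mc_vals = [f(rng.uniform(0.0, 1.0, N)).mean() for _ in range(200)]
    mc_std = float(np.std(mc_vals))

    print(f"N={N:5d}  trapezoid |err| = {trap_err:.2e}  MC std = {mc_std:.2e}")
```

On this one-dimensional example the trapezoidal error shrinks roughly as $N^{-2}$ while the MC standard deviation shrinks as $N^{-1/2}$, consistent with the claim that simple quadrature variance does not exceed the corresponding MC variance.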

