algorithm benchmarking
Recently Published Documents


TOTAL DOCUMENTS

12
(FIVE YEARS 4)

H-INDEX

4
(FIVE YEARS 1)

2020 ◽  
Vol 65 (5) ◽  
pp. 055014 ◽  
Author(s):  
Sarah Weppler ◽  
Colleen Schinkel ◽  
Charles Kirkby ◽  
Wendy Smith

Machine learning is no longer a new topic of discussion, and many enthusiasts excel in the field. The difficulty lies with beginners, who often lack the intuition needed to take their first steps. This paper seeks a simple solution to that problem through an example: Cart-Pole, a classic machine learning algorithm benchmarking tool from OpenAI Gym. The contents provide a perspective on machine learning and help beginners become familiar with the field. The techniques covered include regression, both linear and logistic, and the basics of neural networks built from terms familiar from logistic regression. TensorFlow, a Google open-source project widely used today for computational efficiency, is employed to apply these techniques to the trivial game Cart-Pole.
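The abstract itself contains no code, but the logistic-regression-as-policy idea it describes can be sketched in a few lines. The following is a minimal, hypothetical illustration (using NumPy rather than TensorFlow, and hand-picked weights rather than trained ones): a logistic unit maps CartPole's 4-dimensional observation to the probability of pushing the cart right.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the core of logistic regression."""
    return 1.0 / (1.0 + np.exp(-z))

def policy_action(obs, weights, bias):
    """Map a 4-dimensional CartPole observation (cart position,
    cart velocity, pole angle, pole angular velocity) to an action:
    0 = push left, 1 = push right."""
    p_right = sigmoid(np.dot(weights, obs) + bias)
    return int(p_right > 0.5)

# Hypothetical hand-picked weights: push in the direction the pole leans.
weights = np.array([0.0, 0.0, 1.0, 0.5])
bias = 0.0

obs = np.array([0.0, 0.0, 0.05, 0.1])  # pole leaning slightly right
action = policy_action(obs, weights, bias)  # expect a push to the right
```

In the paper's setting these weights would be learned (e.g. via gradient descent in TensorFlow) rather than fixed; the sketch only shows how a single logistic unit, the building block of the neural networks the abstract mentions, turns an observation into an action.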


2018 ◽  
Vol 15 (6) ◽  
pp. 066011 ◽  
Author(s):  
Vinay Jayaram ◽  
Alexandre Barachant

Author(s):  
Jose Martin Z. Maningo ◽  
Ryan Rhay P. Vicerra ◽  
Laurence A. Gan Lim ◽  
Edwin Sybingco ◽  
...  

This paper uses a fluid mechanics approach to perform swarm aggregation on a quadrotor unmanned aerial vehicle (QUAV) swarm platform by adapting the Smoothed Particle Hydrodynamics (SPH) technique. Algorithm benchmarking is conducted to assess how well SPH performs, and simulations with varying set-ups are run to compare different algorithms against SPH. The position error of SPH is 30% less than that of the benchmark algorithm when a target enclosure is introduced. SPH is then implemented on a Crazyflie quadrotor swarm, where the aggregation behavior is exhibited successfully.
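The abstract does not specify the authors' SPH formulation, but the general idea, agents estimating local density from neighbours through a smoothing kernel and moving to aggregate, can be sketched as follows. This is an illustrative, simplified model (poly6-style kernel, gradient-ascent-on-density motion), not the paper's actual controller.

```python
import numpy as np

def poly6_kernel(r, h):
    """Poly6-style SPH smoothing kernel: smooth, compactly
    supported weight that vanishes for neighbours beyond radius h."""
    if r >= h:
        return 0.0
    return (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r**2)**3

def sph_aggregation_step(positions, h=2.0, gain=0.1):
    """One aggregation step: each agent moves toward regions of
    higher kernel-weighted neighbour density."""
    n = len(positions)
    new_positions = positions.copy()
    for i in range(n):
        grad = np.zeros(2)
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            r = np.linalg.norm(diff)
            if r > 1e-9:
                # Kernel-weighted pull toward each neighbour
                grad += poly6_kernel(r, h) * diff / r
        new_positions[i] = positions[i] + gain * grad
    return new_positions

# Two agents one unit apart drift toward each other after a step.
positions = np.array([[0.0, 0.0], [1.0, 0.0]])
stepped = sph_aggregation_step(positions)
```

A full SPH swarm controller would also include pressure and viscosity terms to prevent collisions and damp oscillations; the sketch keeps only the attractive density term to show the aggregation mechanism.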


2017 ◽  
Vol 143 (1) ◽  
pp. 04016070 ◽  
Author(s):  
Omid Bozorg-Haddad ◽  
Ali Azarnivand ◽  
Seyed-Mohammad Hosseini-Moghari ◽  
Hugo A. Loáiciga

Author(s):  
Janos Toth ◽  
Laszlo Kovacs ◽  
Balazs Harangi ◽  
Csaba Kiss ◽  
Andras Mohacsi ◽  
...  

2004 ◽  
Vol 50 (11) ◽  
pp. 41-49 ◽  
Author(s):  
C. Rosen ◽  
U. Jeppsson ◽  
P.A. Vanrolleghem

The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of biological wastewater treatment processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but worldwide, demonstrates the interest in such a tool in the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. It aims at facilitating the evaluation of two closely related operational tasks: long-term control strategy performance and process monitoring performance. The motivation for the extension is that these two tasks typically act on longer time scales. The extension proposed here consists of 1) prolonging the evaluation period to one year (including influent files), 2) specifying time-varying process parameters and 3) including sensor and actuator failures. The prolonged evaluation period is necessary to obtain a relevant and realistic assessment of the effects of such disturbances. It also allows for a number of long-term control actions/handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, models for influent file design, parameter changes and sensor failures, the initialization procedure and evaluation criteria are discussed. Important remaining topics, for which consensus is required, are identified. The potential of a long-term benchmark is illustrated with an example of process monitoring algorithm benchmarking.
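The paper's actual sensor failure models are part of the BSM1 extension itself, but the general idea, overlaying a failure mode on an otherwise clean sensor time series so that monitoring algorithms can be benchmarked against it, can be sketched generically. The function and failure modes below are illustrative assumptions, not the extension's specification.

```python
import numpy as np

def apply_sensor_failure(signal, fail_start, fail_end, mode="stuck"):
    """Overlay a simple failure model on a sampled sensor signal.

    'stuck'   holds the last valid reading through the failure window,
    'dropout' replaces readings in the window with zeros.
    """
    faulty = signal.copy()
    if mode == "stuck":
        faulty[fail_start:fail_end] = signal[fail_start - 1]
    elif mode == "dropout":
        faulty[fail_start:fail_end] = 0.0
    else:
        raise ValueError(f"unknown failure mode: {mode!r}")
    return faulty

# A rising sensor signal with a stuck-at failure over samples 3..5.
signal = np.arange(10.0)
faulty = apply_sensor_failure(signal, 3, 6, mode="stuck")
```

Over a one-year evaluation period, injecting such failures at scheduled times lets a monitoring algorithm's detection delay and false-alarm rate be scored consistently, which is the kind of process monitoring benchmarking the abstract describes.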

