Analog Computation For Mobile Robotics Education

2020 ◽  
Author(s):  
Carl Wick ◽  
Bradley Bishop

Author(s):  
Paulo R.S.L. Coelho ◽  
Rodrigo F. Sassi ◽  
Eleri Cardozo ◽  
Eliane G. Guimaraes ◽  
Luis F. Faina ◽  
...  

2005 ◽  
Author(s):  
Huan Li ◽  
John Sweeney ◽  
Krithi Ramamritham ◽  
Roderic Grupen ◽  
Prashant Shenoy

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has developed considerably thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems with accuracy and an acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain relevant information that allows the mapping and localization problems to be solved robustly. The classical frameworks to address this problem are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit an in-depth study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
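A minimal Python sketch of the global-appearance idea summarized above (not the authors' code, and not any of the six specific descriptors they evaluate): each panoramic image is reduced to a single normalized holistic descriptor, here a simple block-wise gradient-orientation histogram, and the robot is localized by nearest-neighbour search among the descriptors of the map images. The block layout, number of orientation bins and Euclidean distance are illustrative assumptions.

```python
import numpy as np

def global_descriptor(image, blocks=(4, 16), n_bins=8):
    # Holistic descriptor of a grayscale panoramic image: one gradient-
    # orientation histogram per block, concatenated and L2-normalized.
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    h, w = image.shape
    bh, bw = h // blocks[0], w // blocks[1]
    hists = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            m = mag[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].ravel()
            a = ang[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            hists.append(hist)
    desc = np.concatenate(hists)
    return desc / (np.linalg.norm(desc) + 1e-12)

def localize(query_image, map_descriptors):
    # Nearest-neighbour localization: return the index of the map image
    # whose global descriptor is closest to that of the query image.
    q = global_descriptor(query_image)
    dists = np.linalg.norm(map_descriptors - q, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```

In a map-building phase every stored omnidirectional image would be converted to such a descriptor and stacked into map_descriptors; at run time only the current image is described and compared, so both accuracy and computational cost depend on the descriptor length and the number of map images, which is the trade-off the comparative evaluation quantifies.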


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 51
Author(s):  
Fábio Azevedo ◽  
Jaime S. Cardoso ◽  
André Ferreira ◽  
Tiago Fernandes ◽  
Miguel Moreira ◽  
...  

The usage of unmanned aerial vehicles (UAVs) has increased in recent years and new application scenarios have emerged. Some of them involve tasks that require a high degree of autonomy, leading to increasingly complex systems. In order for a robot to be autonomous, it requires appropriate perception sensors that interpret the environment and enable the correct execution of the main task of mobile robotics: navigation. In the case of UAVs, flying at low altitude greatly increases the probability of encountering obstacles, so they need a fast, simple, and robust method of collision avoidance. This work addresses the problem of navigation in unknown scenarios by implementing a simple, yet robust, environment-reactive approach. The implementation is provided with both CPU and GPU map representations to allow wider coverage of possible applications. The method searches for obstacles that cross a cylindrical safety volume and selects an escape point from a spiral to avoid the obstacle. The algorithm is able to navigate successfully in complex scenarios, using both a high-power and a low-power computer of the kind typically found aboard UAVs, relying only on a depth camera with a limited FOV and range. Depending on the configuration, the algorithm can process point clouds at nearly 40 Hz on a Jetson Nano, while checking for threats at 10 kHz. Some preliminary tests were conducted in real-world scenarios, showing both the advantages and limitations of the CPU- and GPU-based methodologies.
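As a rough illustration of the reactive scheme summarized above, the following Python sketch tests the points of an incoming depth-camera cloud against a forward-aligned cylindrical safety volume and, when a threat is detected, samples candidate escape offsets along an outward spiral until one yields a collision-free cylinder. The cylinder dimensions, spiral parameters and the brute-force CPU point test are assumptions for illustration only; the actual system also provides a GPU map representation and is tuned to run at the rates reported in the abstract.

```python
import numpy as np

def threat_in_cylinder(points, offset=(0.0, 0.0), radius=0.5, length=5.0):
    # points: (N, 3) array in the body frame (x forward, y left, z up, metres).
    # True if any point falls inside a forward-pointing cylinder whose axis
    # is shifted laterally/vertically by offset = (dy, dz).
    ahead = (points[:, 0] > 0.0) & (points[:, 0] < length)
    lateral = np.hypot(points[:, 1] - offset[0], points[:, 2] - offset[1]) < radius
    return bool(np.any(ahead & lateral))

def spiral_escape_offset(points, radius=0.5, length=5.0,
                         step=0.15, turns=4, samples_per_turn=16):
    # If the straight-ahead safety cylinder is blocked, walk outward along an
    # Archimedean spiral in the (y, z) plane and return the first (dy, dz)
    # offset whose shifted cylinder is obstacle-free; None if all are blocked.
    if not threat_in_cylinder(points, radius=radius, length=length):
        return (0.0, 0.0)                      # path ahead is already clear
    for k in range(1, turns * samples_per_turn + 1):
        theta = 2.0 * np.pi * k / samples_per_turn
        r = step * k / samples_per_turn
        offset = (r * np.cos(theta), r * np.sin(theta))
        if not threat_in_cylinder(points, offset, radius, length):
            return offset                      # escape point found on the spiral
    return None
```

The returned offset would then be turned into a new waypoint perpendicular to the current flight direction, while the threat check itself can be repeated at a much higher rate than the point-cloud update, which is consistent with the 40 Hz cloud processing and 10 kHz threat checking figures reported above.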

