A low-cost autonomous mobile robotics experiment: Control, vision, sonar, and Handy Board

2009 ◽  
Vol 20 (2) ◽  
pp. 203-213 ◽  
Author(s):  
Tamer Inanc ◽  
Huan Dinh

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3673 ◽  
Author(s):  
Zhili Long ◽  
Ronghua He ◽  
Yuxiang He ◽  
Haoyao Chen ◽  
Zuohua Li

This paper presents a modeling approach to feature classification and environment mapping for indoor mobile robotics using a rotary ultrasonic array and fuzzy modeling. To compensate for the distance error of the ultrasonic sensors, a novel feature extraction approach termed "minimum distance of point" (MDP) is proposed to determine the accurate distance and location of target objects. A fuzzy model is established to recognize and classify object features such as flat surfaces, corners, and cylinders. An environment map is constructed for automated robot navigation based on this fuzzy classification, combined with a clustering algorithm and least-squares fitting. Firstly, the rotary ultrasonic array platform is built from four low-cost ultrasonic sensors and a motor. Fundamental measurements, such as the distance to objects at different rotary angles and with different object materials, are carried out. Secondly, the MDP feature extraction algorithm is proposed to extract precise object locations. Compared with the conventional range of constant distance (RCD) method, the MDP method can compensate for errors in feature location and feature matching. Using the data clustering algorithm, a range of ultrasonic distances is obtained and used as the input dataset. The fuzzy classification model, including rules for data fuzzification, reasoning, and defuzzification, is established to effectively recognize and classify the object feature types. Finally, accurate environment mapping by a service robot, based on MDP and fuzzy modeling of the measurements from the ultrasonic array, is demonstrated. Experiments show that the present approach can realize environment mapping for mobile robots with acceptable accuracy at low cost.
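
The abstract describes MDP feature extraction, clustering, and least-squares fitting only at a high level. As a rough illustrative sketch (not the authors' implementation), the core idea of taking the minimum-range point of a sweep as the object location and fitting a line to clustered points for a flat surface could look like the following, assuming the rotary array yields (angle, distance) pairs; all function names and the synthetic data are assumptions for illustration:

```python
import numpy as np

def mdp_feature(angles_deg, distances):
    """Minimum-distance-of-point (MDP) style extraction (sketch):
    take the sweep angle with the smallest measured range as the bearing
    to the object and convert that reading to Cartesian coordinates."""
    i = int(np.argmin(distances))
    theta = np.deg2rad(angles_deg[i])
    d = distances[i]
    return d * np.cos(theta), d * np.sin(theta)

def fit_wall(points):
    """Least-squares line fit y = a*x + b to a cluster of points
    (the flat-surface case after clustering)."""
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b

# Synthetic sweep: object roughly 1.0 m away near 30 degrees.
angles = np.arange(0, 180, 5)
dists = 1.0 + 0.02 * np.abs(angles - 30)
print(mdp_feature(angles, dists))

# Synthetic clustered points lying roughly on a wall.
wall_pts = np.array([[0.0, 1.00], [0.5, 1.01], [1.0, 0.99]])
print(fit_wall(wall_pts))
```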


2020 ◽  
Author(s):  
Vysakh S Mohan

Edge processing for computer vision systems enables the incorporation of visual intelligence into mobile robotics platforms. Demand for low-power, low-cost, small-form-factor devices is on the rise. This work proposes a unified platform for generating deep learning models compatible with edge devices from Intel, NVIDIA, and XaLogic. The platform enables users to create custom data annotations, train neural networks, and generate edge-compatible inference models. As a testament to the tool's ease of use and flexibility, we explore two use cases: a vision-powered prosthetic hand and drone vision. Neural network models for these use cases will be built using the proposed pipeline and will be open-sourced. Online and offline versions of the tool and the corresponding inference modules for edge devices will also be made public so that users can create custom computer vision use cases.
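
The platform itself is not reproduced here; as a hedged stand-in for the final step it describes (turning a trained network into an edge-compatible inference model), the sketch below uses plain PyTorch with ONNX export, which vendor toolchains such as OpenVINO (Intel) or TensorRT (NVIDIA) can then consume. The model, file name, and input shape are illustrative assumptions, not the tool's actual API:

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Hypothetical tiny image classifier standing in for a trained model."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )
    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy = torch.randn(1, 3, 224, 224)  # assumed example input shape
# Export to ONNX; device-specific engines are then built by the vendor toolchain.
torch.onnx.export(model, dummy, "edge_model.onnx", opset_version=11)
```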


10.5772/61348 ◽  
2015 ◽  
Vol 12 (11) ◽  
pp. 156 ◽  
Author(s):  
Sobers Lourdu Xavier Francis ◽  
Sreenatha G. Anavatti ◽  
Matthew Garratt ◽  
Hyungbo Shim


2015 ◽  
Vol 27 (4) ◽  
pp. 318-326 ◽  
Author(s):  
Shin'ichi Yuta

<div class=""abs_img""> <img src=""[disp_template_path]/JRM/abst-image/00270004/01.jpg"" width=""300"" /> Autonomous mobile robot in RWRC 2014</div> The Tsukuba Challenge, an open experiment for autonomous mobile robotics researchers, lets mobile robots travel in a real – and populated – city environment. Following the challenge in 2013, the mobile robots must navigate autonomously to their destination while, as the task of Tsukuba Challenge 2014, looking for and finding specific persons sitting in the environment. Total 48 teams (54 robots) seeking success in this complex challenge. </span>

