E-BOT: A ROBOTIC PLATFORM TO SUPPORT THE TEACHING OF ALGORITHMS AND PROGRAMMING

Author(s):  
JEFFERSON GUTIÉRREZ ◽  
NICOLAS CALDERON ◽  
JOSE FRANCO

This article presents the design and construction of a mobile robot (E-BOT), covering both hardware and software, which works as a support tool for teaching and helps consolidate learning in topics related to programming, with a scope that reaches repetitive (loop) structures, through a series of previously constructed learning elements. E-BOT has a set of actuators (LEDs, an OLED display, a servomotor, and drive motors) and sensors (ultrasonic and reflective light sensors), controlled by an Arduino, that allow E-BOT to interact easily with the real world. Additionally, verification software was built that allows us to validate the correct operation of all the modules constructed.
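
As a rough sketch of what such module-verification software might look like at the structural level (an illustrative Python harness with hypothetical check functions, not E-BOT's Arduino code), each hardware module gets a small self-test and the harness reports which modules pass.

```python
# Minimal sketch (not the authors' code) of a module verification harness
# like the one described for E-BOT. The module names and check functions
# are hypothetical stand-ins.

def check_ultrasonic(read_cm=lambda: 25.0):
    # A plausible test: a reading should fall inside the sensor's range.
    return 2.0 <= read_cm() <= 400.0

def check_reflective_light(read_raw=lambda: 512):
    # A raw ADC value from a reflective (line) sensor should be 0..1023.
    return 0 <= read_raw() <= 1023

def check_servo(set_angle=lambda a: a):
    # Sweep a few angles and confirm each command is accepted unchanged.
    return all(set_angle(a) == a for a in (0, 90, 180))

def run_verification():
    tests = {
        "ultrasonic": check_ultrasonic,
        "reflective_light": check_reflective_light,
        "servo": check_servo,
    }
    results = {name: fn() for name, fn in tests.items()}
    for name, ok in results.items():
        print(f"{name:18s} {'OK' if ok else 'FAIL'}")
    return all(results.values())

if __name__ == "__main__":
    run_verification()
```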

2019 ◽  
Vol 7 (1) ◽  
pp. 35-52 ◽  
Author(s):  
Balamurali Gunji ◽  
Deepak B.B.V.L. ◽  
Saraswathi M.B.L. ◽  
Umamaheswara Rao Mogili

Purpose – The purpose of this paper is to obtain optimal mobile robot path planning with a hybrid algorithm developed from two nature-inspired meta-heuristic algorithms, namely cuckoo search and the bat algorithm (BA), in an unknown or partially known environment. The cuckoo-search algorithm is based on the parasitic behavior of the cuckoo, and the BA is based on the echolocation behavior of bats.
Design/methodology/approach – The developed algorithm starts by sensing obstacles in the environment using an ultrasonic sensor. If there are any obstacles in the path, the authors apply the developed algorithm to find the optimal path; otherwise, the robot reaches the target point directly along the diagonal distance.
Findings – The developed algorithm is implemented in MATLAB in simulation to test its efficiency in different environments. The same path is then used for the experiment in the real-world environment. An Arduino microcontroller together with the ultrasonic sensor is used to obtain the path length and the robot's travel time to reach the goal point.
Originality/value – In this paper, a new hybrid algorithm is developed to find the optimal path of the mobile robot using the cuckoo-search and bat algorithms. The developed algorithm is tested in a real-world environment using the mobile robot.
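
As a rough illustration of how such a hybrid metaheuristic planner can be organized (a sketch under our own simplifying assumptions, not the paper's MATLAB implementation), the following snippet encodes candidate paths as a few waypoints, scores them by length plus an obstacle penalty, and mixes Lévy-flight "cuckoo" jumps with bat-style local refinement. The obstacle layout and all parameters are illustrative.

```python
import math, random

START, GOAL = (0.0, 0.0), (10.0, 10.0)
OBSTACLES = [(4.0, 4.0, 1.5), (7.0, 6.0, 1.0)]   # (x, y, radius), assumed layout
N_WAYPOINTS, POP, ITERS = 3, 20, 200

def cost(waypoints):
    pts = [START] + waypoints + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    penalty = 0.0
    for (x, y) in waypoints:
        for (ox, oy, r) in OBSTACLES:
            d = math.hypot(x - ox, y - oy)
            if d < r:                      # waypoint inside an obstacle: penalize
                penalty += 50.0 * (r - d)
    return length + penalty

def levy_step(scale=1.0):
    # Heavy-tailed step, a simple stand-in for a Levy flight.
    return scale * random.gauss(0, 1) / max(abs(random.gauss(0, 1)), 1e-6)

def random_path():
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_WAYPOINTS)]

population = [random_path() for _ in range(POP)]
best = min(population, key=cost)
for _ in range(ITERS):
    for i, path in enumerate(population):
        # Cuckoo move: a Levy jump biased toward the current best path.
        cuckoo = [(wx + 0.3 * levy_step() + 0.1 * (bx - wx),
                   wy + 0.3 * levy_step() + 0.1 * (by - wy))
                  for (wx, wy), (bx, by) in zip(path, best)]
        # Bat-style local refinement: a small random walk around the candidate.
        local = [(x + random.uniform(-0.2, 0.2), y + random.uniform(-0.2, 0.2))
                 for (x, y) in cuckoo]
        candidate = min((cuckoo, local), key=cost)
        if cost(candidate) < cost(population[i]):
            population[i] = candidate
    best = min(population + [best], key=cost)

print("best cost:", round(cost(best), 2),
      "waypoints:", [(round(x, 2), round(y, 2)) for x, y in best])
```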


2011 ◽  
Vol 23 (5) ◽  
pp. 684-700 ◽  
Author(s):  
Yoshihiko Kawazoe ◽  
Masaki Mitsuoka ◽  
Sho Masada

If we define a robot as an autonomous machine working in offices, homes, disaster sites, and similar settings rather than in factories, there are presently no such robots around us in society. Mechatronics, dynamics, and robotics involving humans form a world of strong nonlinearity. This paper investigates an approach to the emergence of target behavior in an autonomous mobile robot through learning with Subsumption Architecture (SA), in order to break through the problems of conventional robotics based on the SMPA (Sense-Model-Plan-Act) framework in the real world. It shows how behaviors are learned in the real world with SA, and the approach has been developed into a practical curriculum for education as an introduction to robotics with an intellectual and emotional appeal.
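
As a toy illustration of the Subsumption Architecture idea mentioned above (not the authors' curriculum material), the sketch below arranges three behavior layers in fixed priority order and lets the highest-priority layer that does not defer win; the sensor fields and thresholds are assumptions.

```python
# Priority-ordered behavior layers: each layer returns a command or None
# (defer); the arbiter takes the first non-None command from the top.

def avoid_obstacle(sensors):
    # Highest-priority layer: turn away if something is close in front.
    if sensors["front_cm"] < 20:
        return ("turn", -1.0)     # rotate in place
    return None                   # defer to lower layers

def follow_light(sensors):
    # Middle layer: steer toward the brighter side.
    diff = sensors["light_left"] - sensors["light_right"]
    if abs(diff) > 50:
        return ("turn", 0.5 if diff > 0 else -0.5)
    return None

def wander(_sensors):
    # Lowest layer: default forward motion.
    return ("forward", 0.3)

LAYERS = [avoid_obstacle, follow_light, wander]   # priority order

def arbitrate(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:   # first (highest-priority) command wins
            return command
    return ("stop", 0.0)

print(arbitrate({"front_cm": 12, "light_left": 300, "light_right": 400}))  # avoidance wins
print(arbitrate({"front_cm": 80, "light_left": 300, "light_right": 400}))  # light-following wins
```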


2016 ◽  
Vol 28 (4) ◽  
pp. 441-450 ◽  
Author(s):  
Naoki Akai ◽  
Yasunari Kakigi ◽  
Shogo Yoneyama ◽  
Koichi Ozaki ◽  
...  

[Figure: Navigation under strong rain conditions] The Real World Robot Challenge (RWRC), a technical challenge for mobile outdoor robots, has robots autonomously navigate a predetermined path of over 1 km with the objective of detecting specific persons. RWRC 2015 was conducted in the rain, and no robot could complete the mission. This was because sensors on the robots detected raindrops and the robots then generated unexpected behavior, indicating the need to study the influence of rain on mobile navigation systems – a study clearly not yet sufficient. We begin by describing our robot's waterproofing, then investigate the influence of rain on the external sensors commonly used in mobile robot navigation, and discuss how the robot navigates autonomously in the rain. We conducted navigation experiments in artificial and actual rainy environments, and the results showed that the robot navigates stably in the rain.
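
One common way to mitigate raindrop returns in a 2D laser scan is to treat isolated near spikes as noise; the sketch below is an illustrative filter of that kind, not the waterproofing or navigation method reported in the paper, and the sample scan data are invented.

```python
# Replace isolated short, spiky returns that disagree with both angular
# neighbors; real obstacles usually span several consecutive beams.

def despeckle_scan(ranges, jump=0.5):
    """ranges: list of range readings [m] ordered by angle."""
    cleaned = list(ranges)
    for i in range(1, len(ranges) - 1):
        prev_r, r, next_r = ranges[i - 1], ranges[i], ranges[i + 1]
        # A raindrop tends to appear as a single return much closer than
        # both of its neighbors.
        if (prev_r - r) > jump and (next_r - r) > jump:
            cleaned[i] = 0.5 * (prev_r + next_r)   # interpolate over the spike
    return cleaned

scan = [4.0, 4.1, 0.6, 4.2, 4.1, 4.0]   # one spurious near return at index 2
print(despeckle_scan(scan))              # the 0.6 m spike is smoothed away
```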


Author(s):  
Kai O. Arras ◽  
Nicola Tomatis ◽  
Roland Siegwart
Keyword(s):  

2014 ◽  
Vol 26 (2) ◽  
pp. 177-184 ◽  
Author(s):  
Sam Ann Rahok ◽  
Hirohisa Oneda ◽  
Akio Tanaka ◽  
Koichi Ozaki ◽  
...  

This paper describes a robust navigation method for real-world environments. The method uses a 3-axis magnetic sensor and a laser range scanner. Magnetic fields occurring in the environment are used as key landmarks in the proposed navigation method, and physical landmarks scanned by the laser range scanner are taken into account to compensate for the mobile robot's lateral error. An evaluation experiment was conducted during the final run of the Real World Robot Challenge (RWRC) 2013, and the result showed that a mobile robot equipped with the proposed method robustly navigated a 1.6 km course.
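
The general idea of using environmental magnetic fields as landmarks can be illustrated by matching a short window of recent magnetometer readings against a signature recorded along the route; the sketch below is our own simplified illustration, not the paper's method, and the data are invented.

```python
def localize_by_magnetic_signature(route_map, window):
    """route_map: magnetic magnitudes recorded every step along the route.
    window: the most recent live readings. Returns the best-matching index,
    i.e. the estimated current position along the route."""
    best_i, best_err = 0, float("inf")
    for i in range(len(route_map) - len(window) + 1):
        err = sum((route_map[i + j] - window[j]) ** 2 for j in range(len(window)))
        if err < best_err:
            best_i, best_err = i, err
    return best_i + len(window) - 1

route_map = [45, 46, 50, 62, 80, 61, 48, 47, 55, 70, 52, 46]  # recorded [uT]
window = [61, 48, 47]                                          # live readings [uT]
print(localize_by_magnetic_signature(route_map, window))       # -> 7
```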


2017 ◽  
Vol 29 (6) ◽  
pp. 1025-1036 ◽  
Author(s):  
Satoshi Muramatsu ◽  
Daisuke Chugo ◽  
Sho Yokota ◽  
Hiroshi Hashimoto ◽  
...  

The purpose of our educational approach is to give freshman students newly assigned to the laboratory the knowledge and experience needed for research activity. We think that "the interaction between the robot and real-world information observed by sensors" is important for educating students for their research in the laboratory. Here, the interaction means how the robot acts based on real-world information observed by its sensors. The development of a mobile robot that works in a robot competition (the Tsukuba Challenge) requires "the interaction between the mobile robot and the real world." For this reason, we think that this activity (participating in the Tsukuba Challenge and developing the mobile robot) is effective for student education, and we educate students by having them develop a mobile robot for the Tsukuba Challenge. Although our approach was carried out over a short span, we achieved good results. For example, students achieved research outcomes by utilizing the skills they had learned and submitted papers to academic conferences. We verified that our educational approach is effective for improving student education and motivation.


2015 ◽  
Vol 27 (4) ◽  
pp. 337-345 ◽  
Author(s):  
Shinya Ohkawa ◽  
Yoshihiro Takita ◽  
Hisashi Date ◽  
Kazuhiro Kobayashi

[Figure: Autonomous robot "AR Chair"] This paper discusses an autonomous mobile robot entered in the Real World Robot Challenge (RWRC) 2014 in Tsukuba. Our project was to develop a wheelchair able to navigate stairs autonomously. Step 1 develops a center-articulated vehicle, called the AR Chair, which has four wheels and a controller including LIDARs. The center-articulated vehicle has a stiff structure and travels with the front and rear wheels on the same path, so there is no inner-wheel difference. The robotic vehicle carries users weighing up to 100 kg. The autonomous controller is the same as that of Smart Dump 7, which took part in RWRC 2013, except for the geometric relationship of the steering angle and the communication commands for the AR Chair's motor drivers. The advantage of the robot is shown by experimental data from the final run of RWRC 2014.
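
For reference, the turning geometry of a center-articulated vehicle explains why the front and rear wheels can follow the same path: with equal joint-to-axle distances, both axle centers move on the same circle of radius L / tan(gamma / 2) for articulation angle gamma. The sketch below computes that radius; the dimensions are assumed, not the AR Chair's actual values.

```python
import math

def articulated_turning_radius(L_front, L_rear, gamma_rad):
    """Radius of the circle followed by the front axle center of a
    center-articulated vehicle with joint-to-axle distances L_front, L_rear."""
    return (L_front * math.cos(gamma_rad) + L_rear) / math.sin(gamma_rad)

L = 0.4                      # joint-to-axle distance [m] (assumed)
gamma = math.radians(30.0)   # articulation (steering) angle
r_front = articulated_turning_radius(L, L, gamma)
r_rear = articulated_turning_radius(L, L, gamma)   # symmetric -> same radius
print(round(r_front, 3), round(r_rear, 3), round(L / math.tan(gamma / 2), 3))
```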


2015 ◽  
Vol 27 (4) ◽  
pp. 327-336 ◽  
Author(s):  
Naoki Akai ◽  
Kenji Yamauchi ◽  
Kazumichi Inoue ◽  
Yasunari Kakigi ◽  
...  

[Figure: View of SARA with and without cowl] Held in Japan every year since 2007, the Real World Robot Challenge (RWRC) is a technical challenge for mobile robots. Every robot is given the missions of traveling a long distance and finding specific persons autonomously. The robots must also have an affinity for people and be remotely monitored. In order to complete the missions, we developed a new mobile robot, SARA, which we entered in RWRC 2014. The robot successfully completed all of the missions of the challenge. In this paper, the systems we implemented are detailed. Moreover, the results of our experiments and of the challenge are presented, and the knowledge we gained through the experience is discussed.


Author(s):  
Menglong Yang ◽  
Katashi Nagao

The aim of this paper is to digitize the environments in which humans live, at low cost, and to reconstruct highly accurate three-dimensional environments based on those in the real world. Such three-dimensional content can be used, for example, for virtual reality environments and for three-dimensional maps for automatic driving systems. In general, however, a three-dimensional environment must be carefully reconstructed by manually moving the sensors used to scan the real environment on which it is based. This is done so that every corner of an entire area can be measured, but time and cost increase as the area expands. Therefore, a system that creates three-dimensional content based on real-world large-scale buildings at low cost is proposed. It automatically scans indoor spaces with a mobile robot that uses low-cost sensors and generates 3D point clouds. When the robot reaches an appropriate measurement position, it collects the three-dimensional data of the shapes observable from that position using a 3D sensor and a 360-degree panoramic camera. The problem of determining an appropriate measurement position is called the "next best view" problem, and it is difficult to solve in a complicated indoor environment. To deal with this problem, a deep reinforcement learning method is employed. It combines reinforcement learning, with which an autonomous agent learns strategies for selecting behavior, and deep learning using a neural network. As a result, 3D point cloud data can be generated with better quality than with a conventional rule-based approach.
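
As a much-simplified stand-in for the deep reinforcement learning described above (not the authors' system), the following tabular Q-learning sketch has an agent on a small grid of candidate measurement positions learn to favor moves that reveal many not-yet-observed cells; the grid, sensing radius, and rewards are illustrative, and the state is reduced to the robot's position only.

```python
import random

SIZE, RADIUS, ACTIONS = 5, 1, [(0, 1), (0, -1), (1, 0), (-1, 0)]

def observe(pos, seen):
    # Mark cells within the sensing radius as observed; return the number
    # of newly revealed cells (used as the reward).
    new = 0
    for dx in range(-RADIUS, RADIUS + 1):
        for dy in range(-RADIUS, RADIUS + 1):
            c = (pos[0] + dx, pos[1] + dy)
            if 0 <= c[0] < SIZE and 0 <= c[1] < SIZE and c not in seen:
                seen.add(c)
                new += 1
    return new

Q = {}   # Q[(position, action_index)] -> value
for episode in range(300):
    pos, seen = (0, 0), set()
    observe(pos, seen)
    for step in range(20):
        if random.random() < 0.2:                      # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((pos, i), 0.0))
        nxt = (min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1),
               min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1))
        reward = observe(nxt, seen)
        best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
        Q[(pos, a)] = Q.get((pos, a), 0.0) + 0.1 * (reward + 0.9 * best_next - Q.get((pos, a), 0.0))
        pos = nxt

print("learned values at the start position:",
      [round(Q.get(((0, 0), i), 0.0), 2) for i in range(len(ACTIONS))])
```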


2021 ◽  
Vol 11 (8) ◽  
pp. 3360
Author(s):  
Huei-Yung Lin ◽  
Chien-Hsing He

This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment. They are imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines and real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on the line feature matching. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
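
The final pose-recovery step can be illustrated with OpenCV's EPnP solver applied to already-matched 3D points (e.g., line endpoints) and their image observations; the sketch below assumes a standard pinhole camera rather than the paper's catadioptric omnidirectional model, and the geometry is synthetic.

```python
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                    # assumed intrinsics

# Synthetic 3D points on two walls (e.g. endpoints of vertical line features).
pts_3d = np.array([[0, 0, 4], [0, 2, 4], [1, 0, 5], [1, 2, 5],
                   [-1, 0, 5], [-1, 2, 5]], dtype=np.float64)

# Ground-truth pose used only to synthesize the 2D image observations.
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.30, -0.10, 0.50])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 3D-2D correspondences with EPnP.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None, flags=cv2.SOLVEPNP_EPNP)
print("recovered rvec:", rvec.ravel().round(3))
print("recovered tvec:", tvec.ravel().round(3))
```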

