Fast and Robust Vision System for Shogi Robot

2015 ◽  
Vol 27 (2) ◽  
pp. 182-190
Author(s):  
Gou Koutaki ◽  
Keiichi Uchimura

[Figure: Developed shogi robot system]

The authors developed a low-cost, safe shogi robot system. A Web camera installed on the lower frame recognizes the pieces and their positions on the board, after which the game program decides the next move. A robot arm then moves the selected piece to its destination while playing against a human player. A fast, robust image processing algorithm is needed because a low-cost wide-angle Web camera and robot are used. The authors describe the image processing and robot systems, then discuss experiments conducted to verify the feasibility of the proposal, showing that even a low-cost system can be highly reliable.
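The piece-recognition step above implies mapping each detected piece centroid to a square of the 9×9 shogi grid. A minimal, hypothetical sketch of that mapping, assuming the camera image has already been perspective-corrected so the board exactly fills a square image (the 450-pixel size and function name are illustrative, not from the paper):

```python
# Hypothetical sketch: map a piece centroid in a rectified board image
# to one of the 9x9 shogi squares. Assumes the board has been
# perspective-corrected so it exactly fills a square image.

def centroid_to_square(x, y, img_size=450, grid=9):
    """Return 0-based (col, row) grid indices for a pixel centroid."""
    cell = img_size / grid          # width of one square in pixels
    col = min(int(x // cell), grid - 1)
    row = min(int(y // cell), grid - 1)
    return col, row
```

With a 450-pixel board, each square spans 50 pixels, so a centroid at (225, 225) falls in the central square (4, 4).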

2021 ◽  
Author(s):  
Ching-Wei Chuang ◽  
Harry H. Cheng

Abstract In the modern world, building an autonomous multi-robot system is essential to coordinate and control robots that help humans, because several low-cost robots can be more robust and efficient than one expensive, powerful robot at executing the tasks that achieve the overall goal of a mission. One research area, multi-robot task allocation (MRTA), has therefore become substantial in multi-robot systems. Assigning suitable tasks to suitable robots is crucial in coordination and may directly influence the result of a mission. In the past few decades, although numerous researchers have proposed algorithms or approaches to solve MRTA problems in different multi-robot systems, it is still difficult to overcome certain challenges, such as dynamic environments, changeable task information, miscellaneous robot abilities, the dynamic condition of a robot, or uncertainties from sensors or actuators. In this paper, we propose a novel approach that handles MRTA problems with Bayesian Networks (BNs) under these challenging circumstances. Our experiments show that the proposed approach can effectively solve real problems in a search-and-rescue mission in centralized, decentralized, and distributed multi-robot systems with real, low-cost robots in dynamic environments. In the future, we will demonstrate that our approach is trainable and can be utilized in a large-scale, complicated environment. Researchers might be able to apply our approach to other applications to explore its extensibility.
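As a toy illustration of BN-style task scoring for MRTA (the probability table, node names, and robot attributes below are invented for the sketch and are not taken from the paper): each robot's chance of completing a task is read from a small conditional probability table, and the task goes to the highest-scoring robot.

```python
# Illustrative sketch of BN-style scoring for task allocation.
# P(success | battery_level, distance_band) is an invented table.

P_SUCCESS = {
    ("high", "near"): 0.95, ("high", "far"): 0.75,
    ("low",  "near"): 0.60, ("low",  "far"): 0.20,
}

def allocate(robots, task):
    """Assign the task to the robot with the highest success probability."""
    def score(robot):
        return P_SUCCESS[(robot["battery"], robot["distance"][task])]
    return max(robots, key=score)["name"]
```

A full BN would condition on more variables (sensor uncertainty, changing task information) and update the tables from experience, which is what makes the approach trainable.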


2017 ◽  
Vol 5 (1) ◽  
pp. 28-42 ◽  
Author(s):  
Iryna Borshchova ◽  
Siu O’Young

Purpose The purpose of this paper is to develop a method for a vision-based automatic landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving platform. The landing system must be highly accurate and meet the size, weight, and power restrictions of a small UAV. Design/methodology/approach The vision-based landing system consists of a pattern of red markers placed on a moving target, an image processing algorithm for pattern detection, and a servo-control for tracking. The suggested approach uses color-based object detection and image-based visual servoing. Findings The developed prototype system has demonstrated the capability of landing within 25 cm of the desired point of touchdown. This auto-landing system is small (100×100 mm), light-weight (100 g), and consumes little power (under 2 W). Originality/value The novelty and the main contribution of the suggested approach are a creative combination of work in two fields, image processing and controls, as applied to UAV landing. The developed image processing algorithm has low complexity compared to other known methods, which allows its implementation on general-purpose low-cost hardware. The theoretical design has been verified systematically via simulations and then by outdoor field tests.
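The color-based detection step can be sketched as thresholding "red enough" pixels and taking their centroid, which the servo loop then drives toward the image center. A minimal sketch assuming plain RGB tuples; the threshold values are illustrative, not from the paper:

```python
# Hedged sketch of color-based marker detection: mask red pixels and
# return their centroid as the tracking signal. Thresholds are invented.

def red_centroid(pixels):
    """pixels: 2D list of (r, g, b) tuples. Returns (cx, cy) or None."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if r > 150 and g < 100 and b < 100:   # crude red mask
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                               # no marker visible
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

In image-based visual servoing, the offset between this centroid and the image center becomes the error signal fed to the flight controller.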


2011 ◽  
Vol 143-144 ◽  
pp. 737-741 ◽  
Author(s):  
Hai Bo Liu ◽  
Wei Wei Li ◽  
Yu Jie Dong

The vision system is an important part of the whole robot soccer system. To win the game, the robot system must be faster and more accurate. This paper introduces a color image segmentation method using an improved seed-fill algorithm in the YUV color space. The new method dramatically reduces the amount of computation and speeds up image processing. The paper compares its results with those of the old method based on the RGB color space. The second step of the vision subsystem is identifying the color blocks separated in the first step, for which the improved seed-fill algorithm is used. The implementation on the MiroSot Soccer Robot System shows that the new method is fast and accurate.
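The seed-fill (flood-fill) step groups connected pixels of the same color class into one block. A minimal iterative sketch over a label image, assuming color classification has already been done; the 4-connected neighborhood and data layout are illustrative choices, not details from the paper:

```python
# Illustrative iterative seed fill: collect the 4-connected block of
# pixels sharing the seed's color class, using an explicit stack.

def seed_fill(labels, seed):
    """labels: 2D list of color-class ids; seed: (row, col).
    Returns the set of pixels in the connected block containing seed."""
    h, w = len(labels), len(labels[0])
    target = labels[seed[0]][seed[1]]
    stack, block = [seed], set()
    while stack:
        r, c = stack.pop()
        if (r, c) in block or not (0 <= r < h and 0 <= c < w):
            continue
        if labels[r][c] != target:
            continue
        block.add((r, c))
        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return block
```

Working in YUV helps here because color classes can be thresholded on chrominance (U, V) largely independently of brightness, so the labels are more stable under lighting changes than RGB thresholds.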


Author(s):  
Z. Asrih ◽  
A. El Mourabit ◽  
I. El Hajjouji ◽  
A. Ezzine ◽  
Y. Laaziz ◽  
...  

2017 ◽  
Vol 15 (41) ◽  
pp. 9-26
Author(s):  
Andrés Espinal Rojas ◽  
Andrés Arango Espinal ◽  
Luis Ramos ◽  
Jorge Humberto Erazo Aux

This paper describes the development and implementation of a six-rotor Unmanned Aerial Vehicle (UAV) prototype, based on the Arduino MultiWii platform and designed to find lost people in hard-to-access areas. A platform capable of stable flight was developed to identify people through an on-board camera and an image processing algorithm. Although the use of a UAV represents a low-cost, quick-response solution (in terms of displacement) that can prevent or reduce deaths among people lost in remote places, it also represents a technological challenge, since recognizing objects from an aerial view is difficult owing to the distance from the UAV to the objective, the UAV's position, and its constant movement. The proposed solution implements an aerial device that performs image capture, wireless transmission, and image processing while in controlled, stable flight.


Author(s):  
Tomás Serrano-Ramírez ◽  
Ninfa del Carmen Lozano-Rincón ◽  
Arturo Mandujano-Nava ◽  
Yosafat Jetsemaní Sámano-Flores

Computer vision systems are an essential part of industrial automation tasks such as identification, selection, measurement, defect detection, and quality control of parts and components. Smart cameras are available to perform these tasks; however, their high acquisition and maintenance costs are restrictive. In this work, a novel low-cost artificial vision system is proposed for classifying objects in real time, using the Raspberry Pi 3B+ embedded system, a Web camera, and the OpenCV artificial vision library. The suggested technique comprises training a supervised classification system of the Haar Cascade type with image banks of the object to be recognized, subsequently generating a predictive model that is put to the test with real-time detection, as well as calculating the prediction error. This seeks to build a powerful vision system that is affordable and also developed using free software.
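The prediction-error calculation mentioned above can be as simple as comparing per-frame detections against ground-truth labels and reporting the empirical error rate. A minimal sketch; the boolean per-frame representation is an assumption for illustration, not the paper's protocol (the actual detection would come from OpenCV's `cv2.CascadeClassifier.detectMultiScale` on each frame):

```python
# Simple sketch of prediction-error bookkeeping: fraction of frames
# where the detector's verdict disagrees with the ground truth.

def prediction_error(detections, ground_truth):
    """Both args: lists of booleans (object present?), one per frame."""
    wrong = sum(d != g for d, g in zip(detections, ground_truth))
    return wrong / len(ground_truth)
```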


Author(s):  
Marcos Roberto dos Santos ◽  
Guilherme Afonso Madalozzo ◽  
José Maurício Cunha Fernandes ◽  
Rafael Rieder

Computer vision and image processing procedures can obtain crop data frequently and precisely, such as vegetation indexes, and correlate them with other variables, like biomass and crop yield. This work presents the development of a computer vision system for high-throughput phenotyping, considering three solutions: an image capture software linked to a low-cost appliance; an image-processing program for feature extraction; and a web application for results' presentation. As a case study, we used normalized difference vegetation index (NDVI) data from a wheat crop experiment of the Brazilian Agricultural Research Corporation. Regression analysis showed that NDVI explains 98.9, 92.8, and 88.2% of the variability found in the biomass values for crop plots with 82, 150, and 200 kg N ha⁻¹ fertilizer applications, respectively. As a result, NDVI generated by our system presented a strong correlation with the biomass, showing a way to specify a new yield prediction model from the beginning of the crop.
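The two quantities in the regression analysis above are standard: NDVI is (NIR − Red)/(NIR + Red), and the "variability explained" figures are coefficients of determination (R²) from a linear fit of biomass on NDVI. A minimal sketch with invented numbers (the test values are not the paper's data):

```python
# NDVI and R^2 of a simple linear fit, as used in the regression
# analysis above. Input values here are illustrative only.

def ndvi(nir, red):
    """Normalized difference vegetation index from band reflectances."""
    return (nir - red) / (nir + red)

def r_squared(xs, ys):
    """R^2 of the least-squares line of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```

An R² of 0.989, as reported for the 82 kg N ha⁻¹ plots, means the linear NDVI-biomass fit leaves only about 1.1% of the biomass variance unexplained.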


Author(s):  
Chengtao Cai ◽  
Bing Fan ◽  
Xiangyu Weng ◽  
Qidan Zhu ◽  
Li Su

Purpose Because of their large field of view, omnistereo vision systems have been widely used as primary vision sensors in autonomous mobile robot tasks. The purpose of this article is to achieve real-time and accurate tracking with the omnidirectional vision robot system. Design/methodology/approach The authors provide in this study the key techniques required to obtain an accurate omnistereo target tracking and location robot system, including stereo rectification and target tracking in complex environments. A simple rectification model is proposed, and a local image processing method is used to reduce the computation time in the localization process. A target tracking method is improved to make it suitable for an omnidirectional vision system. Using the proposed methods and some existing methods, an omnistereo target tracking and location system is established. Findings The experiments are conducted with all the necessary stages involved in obtaining a high-performance omnistereo vision system. The proposed correction algorithm can process the image in real time. The experimental results of the improved tracking algorithm are better than those of the original algorithm. The statistical analysis of the experimental results demonstrates the effectiveness of the system. Originality/value A simple rectification model is proposed, and a local image processing method is used to reduce the computation time in the localization process. A target tracking method is improved to make it suitable for an omnidirectional vision system. Using the proposed methods and some existing methods, an omnistereo target tracking and location system is established.
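The "local image processing" idea for cutting computation time can be illustrated by restricting the target search to a window around the last known position instead of scanning the full panoramic frame. A hypothetical sketch (the function, radius parameter, and frame sizes are illustrative, not from the paper):

```python
# Illustrative sketch: clamp a search window around the target's last
# known position so the tracker processes only a small region of
# the panoramic frame.

def local_window(last_xy, radius, frame_w, frame_h):
    """Return (x0, y0, x1, y1) of the clamped search window."""
    x, y = last_xy
    x0, y0 = max(0, x - radius), max(0, y - radius)
    x1, y1 = min(frame_w, x + radius), min(frame_h, y + radius)
    return x0, y0, x1, y1
```

With a 40-pixel window on a 640×480 frame, the tracker inspects well under 1% of the pixels per frame, which is where the real-time gain comes from.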


2012 ◽  
Vol 463-464 ◽  
pp. 1543-1546
Author(s):  
Mihaela Tilneac ◽  
Sanda Grigorescu ◽  
Victor Paléologue ◽  
Valer Dolga

The aim of this paper is to enable a robot with a vision system to interact with its environment. The task considered is recognizing and locating objects extracted from the image space. The work presents a Matlab program for recognizing the colour and shape of different objects. The image taken by a low-cost Web camera is processed, and the information is transferred to a robot controller for moving above the identified object.

