A Visually Servoed Active Camera for Miniature Mobile Robots

2000 ◽  
Author(s):  
Kemal B. Yesin ◽  
Bradley J. Nelson

Abstract: In this paper, an active vision system for a miniature mobile reconnaissance robot is presented. The system consists of a single-chip CMOS video sensor, a wireless video transmitter and miniature brushless D.C. motors. Visual tracking and servoing techniques were used to test the dynamic capabilities of the system. Additionally, a simple yet effective motion detection and tracking algorithm suitable for systems with limited computational power was developed and implemented. Available technologies for image sensing and actuation are surveyed for compatibility with the severe size, weight and power restrictions that the robot presents. The video system is designed to be concealed inside the robot and performs a deploy-retract motion in addition to pan and tilt. Different mechanism designs to reduce the number of actuators are presented.
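The paper itself does not include source code; the following is a minimal sketch, assuming OpenCV and simple frame differencing, of the kind of low-cost motion detection loop the abstract describes for computationally limited platforms. Function names and thresholds are illustrative, not the authors'.

```python
# Minimal sketch (not the authors' code): frame-differencing motion
# detection with centroid extraction, suited to low computational budgets.
import cv2

def detect_motion(prev_gray, gray, thresh=25, min_area=50):
    """Return centroids of moving regions between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, gray)                  # pixel-wise change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)          # merge fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:               # reject noise blobs
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (cx, cy) in detect_motion(prev, gray):
        # Offset of the target from the image center could serve as a
        # visual servoing error signal for the pan/tilt axes.
        print("target offset:", cx - gray.shape[1] / 2, cy - gray.shape[0] / 2)
    prev = gray
```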

Author(s):  
D. Y. Erokhin ◽  
A. B. Feldman ◽  
S. E. Korepanov

Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. The set comprises three main parts: an algorithm for estimating and compensating geometric transformations between images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
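As an illustration only (the authors' algorithms are not reproduced here), a common way to realize the first two parts is to estimate the sensor's ego-motion with a robust feature-based homography and difference the compensated frames; the sketch below assumes OpenCV, and the constant-velocity predictor stands in for the paper's unspecified prediction step.

```python
# Illustrative sketch (not the authors' implementation): compensate
# sensor ego-motion with a feature-based homography, then difference
# frames so that independently moving objects stand out.
import cv2
import numpy as np

orb = cv2.ORB_create(500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def compensate(prev_gray, gray):
    """Warp prev_gray into the current frame's coordinates."""
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers
    h, w = gray.shape
    return cv2.warpPerspective(prev_gray, H, (w, h))

def moving_mask(prev_gray, gray, thresh=30):
    """Residual differences after compensation mark independent motion."""
    stabilized = compensate(prev_gray, gray)
    diff = cv2.absdiff(stabilized, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def predict(track):
    """Constant-velocity prediction from the last two tracked positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```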


Robotica ◽  
1998 ◽  
Vol 16 (3) ◽  
pp. 309-327 ◽  
Author(s):  
Rajeev Sharma ◽  
Narayan Srinivasa

Assembly robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating active vision into a robot control loop involves some inherent difficulties, including calibration and the need to redefine the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is established with the help of computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).
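The paper's network and its invariant 3D representation are specific to the paper; as a hedged sketch of the general class of learning rule involved, a Kohonen-style self-organizing map update looks like the following. The unit count, input dimensionality, and stereo-coordinate interpretation are assumptions for illustration only.

```python
# Generic Kohonen-style self-organizing map update; shown only to
# illustrate the class of learning rule, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 100, 4           # dim=4: e.g. stereo image coords (ul, vl, ur, vr)
W = rng.random((n_units, dim))  # one weight vector per map unit

def som_update(W, x, lr=0.1, sigma=2.0):
    """Move the best-matching unit and its neighbors toward input x."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
    dist = np.abs(np.arange(len(W)) - winner)          # 1-D map topology
    h = np.exp(-dist**2 / (2 * sigma**2))              # neighborhood kernel
    return W + lr * h[:, None] * (x - W)

for _ in range(1000):
    W = som_update(W, rng.random(dim))
```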


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Qian-Bing Zhu ◽  
Bo Li ◽  
Dan-Dan Yang ◽  
Chi Liu ◽  
Shun Feng ◽  
...  

Abstract: The challenges of developing neuromorphic vision systems inspired by the human eye come not only from how to recreate the flexibility, sophistication, and adaptability of animal systems, but also how to do so with computational efficiency and elegance. Like biological systems, these neuromorphic circuits integrate the functions of image sensing, memory, and processing into the device, and process continuous analog brightness signals in real time. High integration, flexibility, and ultra-sensitivity are essential for practical artificial vision systems that attempt to emulate biological processing. Here, we present a flexible optoelectronic sensor array of 1024 pixels using a combination of carbon nanotubes and perovskite quantum dots as active materials for an efficient neuromorphic vision system. The device has an extraordinary sensitivity to light, with a responsivity of 5.1 × 10⁷ A/W and a specific detectivity of 2 × 10¹⁶ Jones, and demonstrates neuromorphic reinforcement learning by training the sensor array with a weak light pulse of 1 μW/cm².
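A quick back-of-envelope check of what these figures imply: photocurrent is responsivity times incident optical power. The pixel area below is an assumed value, not taken from the paper.

```python
# Back-of-envelope arithmetic; the pixel area is assumed, not reported.
responsivity = 5.1e7          # A/W, reported in the abstract
irradiance   = 1e-6 / 1e-4    # 1 uW/cm^2 -> 1e-2 W/m^2
pixel_area   = (10e-6) ** 2   # assumed 10 um x 10 um pixel -> 1e-10 m^2
power        = irradiance * pixel_area   # W incident on one pixel: 1e-12 W
current      = responsivity * power      # A of photocurrent per pixel
print(f"~{current:.1e} A per pixel")     # ~5.1e-05 A, i.e. about 51 uA
```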


2018 ◽  
Vol 23 (1) ◽  
pp. 179-189 ◽  
Author(s):  
Tadayoshi Aoyama ◽  
Makoto Chikaraishi ◽  
Akimasa Fujiwara ◽  
Liang Li ◽  
Mingjun Jiang ◽  
...  

Author(s):  
CLAUDIO S. PINHANEZ

A vision system was built using a behavior-based model, the subsumption architecture. The so-called active eye moves the camera’s axis through the environment, detecting areas with a high concentration of edges by means of saccadic movements. The design and implementation process is detailed in the article, with particular attention to the fovea-like sensor structure that enables the active eye to use local information efficiently to control its movements. Numerical measures of the eye’s behavior were developed and applied to evaluate the incremental building process and the effects of the saccadic movements on the whole system. A higher-level behavior was also implemented, with the purpose of detecting long straight edges in the image, producing pictures similar to hand drawings. Robustness and efficiency problems are addressed at the end of the paper. The results suggest that interesting behaviors can be achieved using simple vision methods and algorithms, if their results are properly interconnected and timed.
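A minimal sketch of the core saccade-selection idea, choosing the region with the highest edge density as the next fixation target, might look like the following; it assumes OpenCV and a coarse uniform grid rather than the paper's fovea-like sensor structure.

```python
# Minimal sketch (not Pinhanez's implementation): pick the next saccade
# target as the grid cell with the highest edge energy, pooled on a
# coarse grid to mimic a low-resolution periphery.
import cv2
import numpy as np

def next_saccade(gray, grid=8):
    """Return (row, col) of the grid cell with the most edge energy."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)       # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)       # vertical gradient
    mag = cv2.magnitude(gx, gy)                  # edge strength per pixel
    h, w = gray.shape
    cells = mag[: h // grid * grid, : w // grid * grid]
    cells = cells.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return np.unravel_index(np.argmax(cells), cells.shape)
```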


2014 ◽  
Author(s):  
Yong-Sung Kim ◽  
Gyu-Hee Park ◽  
Seung-Hwan Kim ◽  
Hyung-Joon Cho
