A Novel Robot Vision Employing a Silicon Retina

2001
Vol 13 (6)
pp. 614-620
Author(s):
Kazuhiro Shimonomura
Seiji Kameda
Kazuo Ishii
Tetsuya Yagi
...

A robot vision system was designed using a silicon retina, which was developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very-large-scale integrated circuit that executes Laplacian-of-Gaussian-like filtering on the image in real time. The processing is robust to changes in illumination conditions. Analog circuit modules were designed to detect contours in the output image of the silicon retina and to binarize the output image. The images processed by the silicon retina, as well as those processed by the analog circuit modules, are sent as an NTSC signal to a DOS/V-compatible motherboard, which enables higher-level processing using digital image processing techniques. This novel robot vision system achieves real-time, robust processing under natural illumination with compact hardware and low power consumption.
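The Laplacian-of-Gaussian-like filtering that the chip performs in analog circuitry can be sketched in software. The following is a minimal Python illustration, not the chip's implementation; kernel size, sigma, and function names are assumptions for demonstration. The zero-mean kernel is what makes the response insensitive to uniform changes in illumination:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian (LoG) kernel, forced to zero mean."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero DC response: uniform illumination maps to 0

def filter_image(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    h, w = img.shape
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel)
    return out
```

Because the kernel sums to zero, a uniformly lit region produces zero output regardless of its brightness, while luminance edges produce strong responses.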

2003
Vol 15 (2)
pp. 185-191
Author(s):
Kazuhiro Shimonomura
Keisuke Inoue
Seiji Kameda
Tetsuya Yagi
...

We designed a vision system with a novel architecture composed of a silicon retina, an analog CMOS VLSI intelligent sensor, and an FPGA. Two basic pre-processing steps are performed by the silicon retina: Laplacian-of-Gaussian (∇²G)-like spatial filtering and subtraction of consecutive frames. The analog outputs of the silicon retina were binarized and transferred to the FPGA, in which digital image processing was executed. The system was applied to real-time target tracking under indoor illumination: the center of the target object was found as the median of the binarized image, and the object could be tracked at video frame rate. The system is compact and consumes little power, and is therefore well suited to robot vision.
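The frame-subtraction and median-based tracking steps can be sketched as follows. This is an illustrative Python reconstruction under assumed parameters (threshold value, function names), not the FPGA implementation; the per-axis median of the active pixels gives a center estimate that is robust to outlier pixels:

```python
import numpy as np

def frame_difference(prev, curr, threshold=15):
    """Binarize the absolute difference of two consecutive frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > threshold).astype(np.uint8)

def target_center(binary):
    """Target position as the median coordinate of active pixels.

    Returns (x, y), or None if no pixels are active.
    """
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return None
    return int(np.median(xs)), int(np.median(ys))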


Author(s):
Satoshi Hoshino
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, the real-time performance of the robot vision system is an important issue. In this paper, we propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These images are then used as inputs for human and action classification. For these image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector, a local search window, for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow computed from the image; we address this by modifying the optical flow to compensate for the changes caused by camera movement. Through experiments, we show that the robot vision system can detect humans and recognize their actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by modifying the optical flow.
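The idea of compensating optical flow for camera ego-motion can be illustrated with a simple sketch. This is an assumption-laden simplification, not the paper's method: it assumes the camera motion induces a roughly uniform background flow, which a robust (median) estimate can isolate and subtract, leaving only object-induced motion:

```python
import numpy as np

def compensate_ego_motion(flow):
    """Remove the camera-induced component from a dense optical-flow field.

    flow: array of shape (H, W, 2) holding an (u, v) vector per pixel.
    Assumes ego-motion produces a near-uniform background flow, estimated
    as the per-component median over all pixels (robust to the minority of
    pixels covered by independently moving objects).
    """
    ego = np.median(flow.reshape(-1, 2), axis=0)
    return flow - ego
```

After compensation, static background pixels have near-zero flow and moving humans retain their own motion component, which simplifies downstream detection.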




2008
Vol 20 (1)
pp. 68-74
Author(s):
Hirotsugu Okuno
Tetsuya Yagi

A mixed analog-digital integrated vision sensor was designed to detect an approaching object in real time. To respond selectively to approaching stimuli, the sensor employs an algorithm inspired by the visual nervous system of the locust, which robustly avoids collisions by using visual information. An electronic circuit model was designed to mimic the architecture of the locust nervous system. Computer simulations showed that the model produces responses appropriate for collision avoidance. We implemented the model in a compact hardware system consisting of a silicon retina and field-programmable gate array (FPGA) circuits; the system was confirmed to respond selectively to approaching stimuli that constituted a collision threat.
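The locust-inspired selectivity for looming stimuli can be illustrated with a toy excitation/inhibition model. This sketch is illustrative only (the delay and weight parameters are assumptions, not the paper's circuit): excitation is the total luminance change between frames, and inhibition is the same signal delayed, so an approaching object, whose edge image keeps growing, keeps excitation ahead of inhibition, while a merely translating object is largely cancelled:

```python
import numpy as np

def looming_response(frames, inhibition_delay=1, w_inh=0.7):
    """Simplified looming detector inspired by the locust LGMD pathway.

    frames: list of 2-D grayscale arrays.
    Returns the rectified excitation-minus-delayed-inhibition trace.
    """
    # Excitation: total absolute luminance change between consecutive frames
    diffs = [np.abs(frames[t].astype(int) - frames[t - 1].astype(int)).sum()
             for t in range(1, len(frames))]
    resp = []
    for t, e in enumerate(diffs):
        # Inhibition: the same signal, delayed by a few frames
        inh = diffs[t - inhibition_delay] if t >= inhibition_delay else 0
        resp.append(max(float(e) - w_inh * float(inh), 0.0))
    return resp
```

For an expanding square the per-frame change grows over time, so the response keeps climbing; for a same-sized square translating across the image the per-frame change is constant and is mostly suppressed by the delayed inhibition.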


Robotics
2013
pp. 812-818
Author(s):
Kazuhiro Shimonomura

The author of this chapter describes a binocular robotic vision system designed to emulate the neural images of cortical cells under vergence eye movements. The system is constructed using a combined strategy of neuromorphic engineering and conventional digital technology, and consists of two silicon retinas and a field-programmable gate array (FPGA). The silicon retinas carry out Laplacian-of-Gaussian-like spatial filtering, mimicking the response properties of the vertebrate retina. The outputs of the silicon retina chips on the left and right cameras are transmitted to the FPGA, which receives the outputs of the two simple cell chips and calculates the responses of complex cells based on the disparity energy model. The system provides complex cell outputs tuned to five different disparities in real time. The vergence control signal is obtained by pooling these multiple complex cell responses. The system is useful for predicting the neural images of complex cells and for evaluating the functional roles of cortical cells in real situations.
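The disparity energy computation performed in the FPGA can be sketched in one dimension. In this illustrative Python model (receptive-field size, frequency, and the position-shift formulation are assumptions, not the chapter's parameters), a binocular simple cell sums Gabor-weighted inputs from both eyes with the right-eye field shifted by the preferred disparity, and a complex cell sums the squared responses of a quadrature pair of such simple cells:

```python
import numpy as np

def gabor(x, sigma=2.0, freq=0.25, phase=0.0):
    """1-D Gabor receptive-field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def complex_cell_response(left, right, disparity, sigma=2.0, freq=0.25):
    """Disparity energy model (1-D sketch, position-shift variant).

    left, right: 1-D luminance profiles from the two eyes.
    disparity: the preferred disparity of this complex cell (pixels).
    """
    x = np.arange(len(left)) - len(left) // 2
    resp = 0.0
    for phase in (0.0, np.pi / 2):  # quadrature pair of simple cells
        g_left = gabor(x, sigma, freq, phase)
        g_right = gabor(x - disparity, sigma, freq, phase)
        s = np.dot(g_left, left) + np.dot(g_right, right)  # simple cell
        resp += s**2  # energy: sum of squared quadrature responses
    return resp
```

Sweeping `disparity` over a handful of candidate values and comparing the responses mirrors the system's bank of complex cells tuned to five different disparities; the cell whose preferred disparity matches the stimulus responds most strongly, and pooling across the bank yields a vergence control signal.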

