Optical Flow for Real-Time Human Detection and Action Recognition Based on CNN Classifiers

Author(s):
Satoshi Hoshino,
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive surrounding humans and their actions for safe, autonomous navigation. In this work, moving humans are the target objects. For robot vision, real-time performance is an important requirement. We therefore propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These flow images are then used as inputs to a classifier. To classify images as human or non-human, and to recognize actions, we use a convolutional neural network (CNN) rather than hand-coded invariant features. Moreover, we present a local search window, a novel detector for clipping partial images around target objects in the original image. Through experiments, we show that the robot vision system is able to detect moving humans and recognize their actions in real time.
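
As a concrete illustration of this pipeline, the following is a minimal sketch in Python, assuming OpenCV's Farneback dense optical flow and a small PyTorch CNN. The flow encoding, patch size, and network architecture are illustrative stand-ins, not the authors' exact design.

```python
# Sketch: describe a clipped image patch by dense optical flow and classify it
# with a CNN. Assumes OpenCV and PyTorch; all hyperparameters are illustrative.
import cv2
import numpy as np
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    """Small CNN classifying a 2-channel (dx, dy) flow patch (hypothetical)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input patches

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def flow_patch(prev_gray, curr_gray, box):
    """Clip a patch around a candidate object and describe it by optical flow."""
    x, y, w, h = box
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    patch = cv2.resize(flow[y:y + h, x:x + w], (64, 64))  # HxWx2 (dx, dy)
    return torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0)

# Usage: logits = FlowCNN(n_classes=2)(flow_patch(prev, curr, (x, y, w, h)))
```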

Author(s):
Satoshi Hoshino,
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, the real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which original images captured by a camera sensor are described by optical flow. These images are then used as inputs for the human and action classifications. For these image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector (a local search window) for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow calculated in the image; we address this by modifying the optical flow to compensate for the changes caused by the camera movement. Through experiments, we show that the robot vision system can detect humans and recognize actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by means of this optical flow modification.
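
The flow-modification step can be pictured as follows: a minimal sketch assuming the camera-induced flow is approximated by the per-component median of the dense flow field, since the static background dominates most pixels. The authors' actual modification may differ.

```python
# Sketch: compensate optical flow for camera ego-motion by subtracting an
# estimate of the camera-induced flow. The median-flow approximation is an
# assumption of this sketch, not necessarily the paper's method.
import cv2
import numpy as np

def compensate_flow(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Moving humans occupy a minority of pixels, so the per-component median
    # over the whole image is dominated by the static background and serves
    # as a rough estimate of the camera-induced flow.
    camera_flow = np.median(flow.reshape(-1, 2), axis=0)
    # The residual flow highlights independently moving objects (e.g., humans).
    return flow - camera_flow
```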


2001
Vol. 13 (6)
pp. 614-620
Author(s):
Kazuhiro Shimonomura,
Seiji Kameda,
Kazuo Ishii,
Tetsuya Yagi,
...

A robot vision system was designed using a silicon retina developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very-large-scale integrated circuit that executes Laplacian-of-Gaussian-like filtering on the image in real time. The processing is robust to changes in illumination conditions. Analog circuit modules were designed to detect contours in the output image of the silicon retina and to binarize the output image. The images processed by the silicon retina, as well as those processed by the analog circuit modules, are received by a DOS/V-compatible motherboard as an NTSC signal, which enables higher-level processing using digital image processing techniques. This novel robot vision system achieves real-time, robust processing under natural illumination conditions with compact hardware and low power consumption.
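
Although the silicon retina performs this filtering in analog hardware, a rough digital counterpart helps illustrate the computation. The sketch below approximates Laplacian-of-Gaussian filtering with a difference of Gaussians and then binarizes the result; the parameter values are illustrative assumptions.

```python
# Sketch: a software analogue of the silicon retina's processing chain,
# assuming a difference of Gaussians (DoG) as the LoG approximation and a
# fixed threshold for the binarizing module. Parameters are illustrative.
import cv2
import numpy as np

def silicon_retina_like(gray, sigma1=1.0, sigma2=2.0, thresh=8.0):
    g = gray.astype(np.float32)
    # DoG (narrow minus wide Gaussian) closely approximates the LoG filter.
    dog = cv2.GaussianBlur(g, (0, 0), sigma1) - cv2.GaussianBlur(g, (0, 0), sigma2)
    # Binarize the signed contour response, as the analog module does.
    binary = (np.abs(dog) > thresh).astype(np.uint8) * 255
    return dog, binary  # contour-like image and binarized image
```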


2021
Author(s):
Jing Li,
Jialin Yin,
Lin Deng

In the development of modern agriculture, the intelligent use of mechanical equipment is one of the main hallmarks of agricultural modernization. Navigation is the key technology that allows agricultural machinery to operate autonomously in its working environment, and it is a hotspot of intelligent agricultural machinery research. To meet the accuracy requirements of autonomous navigation for intelligent agricultural robots, this paper proposes a visual navigation algorithm for agricultural robots based on deep learning image understanding. The method first processes images collected by the vision system using a cascaded deep convolutional network combined with a hybrid dilated convolution fusion method. It then extracts the navigation route from the processed images using an improved Hough transform algorithm, and the posture of the agricultural robot is adjusted accordingly to realize autonomous navigation. Finally, the proposed method is verified in interference-free experimental scenes and in noisy experimental scenes. Experimental results show that the method can perform autonomous navigation in complex and noisy environments and has good practicability and applicability.
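
The route-extraction step can be sketched as follows, assuming the network has already produced a binary crop-row mask. OpenCV's probabilistic Hough transform stands in here for the paper's improved Hough transform, whose details are not given in the abstract; thresholds and segment lengths are illustrative.

```python
# Sketch: extract a navigation route from a binary crop-row mask with the
# probabilistic Hough transform. All parameter values are illustrative.
import cv2
import numpy as np

def extract_route(row_mask):
    lines = cv2.HoughLinesP(row_mask, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=20)
    if lines is None:
        return None
    # Average the detected segments into a single heading angle that can serve
    # as the steering reference for the robot's posture adjustment.
    angles = [np.arctan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in lines[:, 0]]
    return float(np.mean(angles))
```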


Author(s):
Ruting Yao,
Yili Zheng,
Fengjun Chen,
Jian Wu,
Hui Wang

Forestry mobile robots can effectively solve the problems of low efficiency and poor safety in forestry operations. To realize the autonomous navigation of forestry mobile robots, a vision system consisting of a monocular camera and a two-dimensional LiDAR, together with its calibration method, is investigated. First, an adaptive algorithm is used to synchronize the data captured by the two sensors in time. Second, a calibration board with a convex checkerboard is designed for the spatial calibration of the devices, and a nonlinear least squares algorithm is employed to solve for and optimize the extrinsic parameters. The experimental results show that the time synchronization precision of this calibration method is 0.0082 s, the communication rate is 23 Hz, and the gradient tolerance of the spatial calibration is 8.55 × 10⁻⁷. The calibration results satisfy the real-time and accuracy requirements of the forestry mobile robot vision system. Furthermore, engineering applications of the vision system are discussed. This study lays the foundation for further research on forestry mobile robots, which is relevant to intelligent forest machines.
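
The extrinsic-calibration step can be sketched with SciPy's nonlinear least squares, assuming corresponding LiDAR points (the 2D scan lifted to z = 0 in the sensor frame) and their image observations on the board are already available. The pinhole residual and the parameterization below are illustrative, not the authors' code.

```python
# Sketch: solve for the LiDAR-to-camera extrinsics by minimizing the
# reprojection error of LiDAR points onto the image. Assumes a 3x3 camera
# intrinsic matrix K and pre-matched point correspondences.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, lidar_pts, image_pts, K):
    """params = [rx, ry, rz, tx, ty, tz]: rotation (rotation vector) + translation."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = lidar_pts @ R.T + t            # LiDAR frame -> camera frame (N x 3)
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]    # pinhole projection to pixel coordinates
    return (proj - image_pts).ravel()

# Usage (illustrative): result = least_squares(
#     reprojection_residuals, x0=np.zeros(6), args=(lidar_pts, image_pts, K))
```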


2013
Vol. 25 (4)
pp. 586-595
Author(s):
Motofumi Kobatake,
Tadayoshi Aoyama,
Takeshi Takaki,
Idaku Ishii

In this paper, we propose a novel concept of real-time microscopic particle image velocimetry (PIV) for apparently high-speed microchannel flows in lab-on-a-chip (LOC) devices. We introduce a frame-straddling dual-camera high-speed vision system that synchronizes two camera inputs for the same camera view with a submicrosecond time delay. To improve the upper and lower limits of measurable velocity in microchannel flow observation, we designed an improved gradient-based optical flow algorithm that adaptively selects a pair of images at the optimal frame-straddling time between the two camera inputs, based on the amplitude of the estimated optical flow. This avoids the large inter-frame image displacements that often cause serious errors in optical flow estimation. Our method is implemented in software on a frame-straddling dual-camera high-speed vision platform that captures real-time video, processes 512 × 512 pixel images at 2000 fps for the two camera heads, and controls the frame-straddling time delay between them from 0 to 0.25 ms in 9.9 ns steps. Our microscopic PIV system with frame-straddling dual-camera high-speed vision estimates the velocity distribution of high-speed microchannel flow at 1 × 10⁸ pixels/s or more. Results of experiments on real microscopic flows in microchannels thousands of µm wide on LOCs verify the performance of the real-time microscopic PIV system we developed.
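
The adaptive pair-selection idea can be sketched as follows, with OpenCV's Farneback flow standing in for the authors' gradient-based estimator. The candidate delays, the displacement threshold, and the selection rule are illustrative assumptions.

```python
# Sketch: pick the frame-straddling delay whose image pair keeps the estimated
# displacement within the reliable range of a gradient-based flow estimator.
import cv2
import numpy as np

def pick_straddling_pair(pairs):
    """pairs: list of (delay_s, img_a, img_b) grayscale pairs captured with
    different straddling delays between the two camera heads."""
    best = None
    for delay, a, b in pairs:
        flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2).mean()
        # Gradient-based flow is reliable only for small displacements
        # (< ~1 px assumed here); prefer the longest delay that stays in range,
        # since a longer baseline improves velocity resolution.
        if mag < 1.0 and (best is None or delay > best[0]):
            best = (delay, flow / delay)  # flow per second = velocity field
    return best
```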

