Multi-Target Tracking Using a Vision Chip and its Applications to Real-Time Visual Measurement

2005, Vol 17 (2), pp. 121-129
Author(s): Yoshihiro Watanabe, Takashi Komuro, Shingo Kagami, Masatoshi Ishikawa

Real-time image processing at high frame rates could play an important role in various visual measurement tasks. Such processing can be realized with a high-speed vision system that images at high frame rates and executes appropriate algorithms at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits the chip's unique features. We describe two visual measurement applications: target counting and rotation measurement. Both achieve excellent measurement precision and high flexibility because of the high-frame-rate visual observation the chip makes possible. Experimental results show the advantages of vision chips over conventional vision systems.
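To make the tracking idea concrete, the following is a minimal Python sketch of greedy nearest-neighbor association between consecutive frames, which becomes feasible at kilohertz frame rates because each target moves only a few pixels per frame. The function, its distance gate max_disp, and the centroid representation are illustrative assumptions, not the authors' on-chip algorithm.

import numpy as np

def associate_targets(prev_centroids, curr_centroids, max_disp=4.0):
    # Greedy nearest-neighbor association between consecutive frames.
    # prev_centroids, curr_centroids: float arrays of shape (N, 2) and (M, 2).
    # Returns a list of (prev_index, curr_index) matches; unmatched current
    # centroids can then be counted as newly appearing targets.
    matches = []
    used = set()
    for i, p in enumerate(prev_centroids):
        if len(curr_centroids) == 0:
            break
        d = np.linalg.norm(curr_centroids - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

In this simplified view, target counting reduces to counting the distinct tracks created over time, and rotation measurement to estimating the angular displacement of matched centroids between frames.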

2011, Vol 23 (1), pp. 53-65
Author(s): Yao-Dong Wang, Idaku Ishii, Takeshi Takaki, Kenji Tajima, ...

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and High-Frame-Rate (HFR) video recording simultaneously. In IDP Express, 512×512 pixel images from two camera heads and the processed results from a dedicated FPGA (Field Programmable Gate Array) board are transferred to standard PC memory at a rate of 1000 fps or more. Owing to this simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video-logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal-behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine operating repetitively at a frequency of 15 Hz, and videos of the abnormal behaviors were recorded automatically, verifying the effectiveness of our intelligent HFR video-logging system.
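A rough Python sketch of the trigger logic such a logging system might use is given below: the current frame is compared with a reference frame stored at the same phase of the 15 Hz cycle, and a ring buffer preserves the frames just before the trigger. The threshold, buffer length, and phase indexing are assumptions for illustration, not the algorithm actually implemented on the FPGA board.

import numpy as np
from collections import deque

def is_abnormal(frame, reference_cycle, phase, diff_thresh=12.0):
    # Compare the current grayscale frame with the reference frame captured
    # at the same phase of the periodic motion; flag it when the mean
    # absolute difference exceeds the threshold.
    ref = reference_cycle[phase % len(reference_cycle)]
    mad = np.mean(np.abs(frame.astype(np.float32) - ref.astype(np.float32)))
    return mad > diff_thresh

# Keep roughly two seconds of history at 1000 fps so that the moments
# leading up to an abnormality are saved as well as the event itself.
pre_trigger_buffer = deque(maxlen=2000)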


2013, Vol 25 (4), pp. 586-595
Author(s): Motofumi Kobatake, Tadayoshi Aoyama, Takeshi Takaki, Idaku Ishii

In this paper, we propose a novel concept of real-time microscopic particle image velocimetry (PIV) for apparent high-speed microchannel flows in lab-on-a-chip (LOC) devices. We introduce a frame-straddling dual-camera high-speed vision system that synchronizes two camera inputs sharing the same view with a sub-microsecond time delay. To improve the upper and lower limits of measurable velocity in microchannel flow observation, we designed an improved gradient-based optical flow algorithm that adaptively selects a pair of images with the optimal frame-straddling time between the two camera inputs, based on the amplitude of the estimated optical flow. This avoids the large inter-frame image displacements that often cause serious errors in optical flow estimation. Our method is implemented in software on a frame-straddling dual-camera high-speed vision platform that captures real-time video, processes 512 × 512 pixel images at 2000 fps for the two camera heads, and controls the frame-straddling time delay between them from 0 to 0.25 ms in steps of 9.9 ns. Our microscopic PIV system with frame-straddling dual-camera high-speed vision estimates the velocity distribution of high-speed microchannel flow at 1 × 10⁸ pixels/s or more. Results of experiments on real microscopic flows in microchannels thousands of micrometers wide on LOCs verify the performance of the real-time microscopic PIV system we developed.
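The adaptive selection described above can be sketched in Python as follows: a single gradient-based (Lucas-Kanade-style) flow estimate over the field of view, followed by a choice of straddling delay that keeps the expected displacement near a few pixels. The target displacement of two pixels and the clipping limits are illustrative; the paper's algorithm works on the full velocity distribution rather than a single global estimate.

import numpy as np

def gradient_flow_speed(img_a, img_b, dt):
    # Global gradient-based flow estimate: solve the Lucas-Kanade normal
    # equations over the whole image and return the speed in pixels/s.
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    ix = np.gradient(a, axis=1)
    iy = np.gradient(a, axis=0)
    it = b - a
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    rhs = -np.array([np.sum(ix * it), np.sum(iy * it)])
    u, v = np.linalg.solve(A, rhs)   # displacement in pixels over dt
    return np.hypot(u, v) / dt

def pick_straddling_delay(speed_px_per_s, target_disp=2.0,
                          dt_min=9.9e-9, dt_max=2.5e-4):
    # Choose the inter-camera delay so the displacement between the two
    # straddled images stays close to the target (here, two pixels).
    dt = target_disp / max(speed_px_per_s, 1e-9)
    return float(np.clip(dt, dt_min, dt_max))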


2018, Vol 30 (1), pp. 117-127
Author(s): Xianwu Jiang, Qingyi Gu, Tadayoshi Aoyama, Takeshi Takaki, Idaku Ishii, ...

In this study, we develop a real-time high-frame-rate vision system with frame-by-frame automatic exposure (AE) control that can simultaneously synthesize multiple images with different exposure times into a high-dynamic-range (HDR) image for scenarios with dynamic changes in illumination. By accelerating video capture and processing for time-division multithread AE control at the millisecond level, the proposed system can virtually function as multiple AE cameras with different exposure times. The system captures 512 × 512 pixel color HDR images in real time at 500 fps by synthesizing four 8-bit color images with different exposure times from consecutive frames, captured at intervals of 2 ms, with pixel-level parallel processing accelerated by a GPU (Graphics Processing Unit) board. Several experimental results for scenarios with large changes in illumination confirm the performance of the proposed system for real-time HDR imaging.
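A simplified Python sketch of the multi-exposure fusion step is shown below; it estimates per-pixel radiance as intensity divided by exposure time, with a hat-shaped weight that discounts under- and over-exposed pixels. A linear camera response is assumed purely for illustration; the paper's GPU implementation and any response-curve calibration are not reproduced here.

import numpy as np

def fuse_exposures(images, exposure_times):
    # images: list of 8-bit frames captured at consecutive frames with
    # different exposure times (seconds). Returns a floating-point
    # radiance map; tone mapping for display is a separate step.
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        x = img.astype(np.float64)
        w = 1.0 - np.abs(x - 127.5) / 127.5   # down-weight clipped pixels
        acc += w * (x / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-6)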


2003, Vol 15 (2), pp. 185-191
Author(s): Kazuhiro Shimonomura, Keisuke Inoue, Seiji Kameda, Tetsuya Yagi, ...

We designed a vision system with a novel architecture composed of a silicon retina, which is an analog CMOS VLSI intelligent sensor, and an FPGA. Two basic pre-processing steps are performed on the silicon retina: a Laplacian-of-Gaussian (∇²G)-like spatial filtering and a subtraction of consecutive frames. The analog outputs of the silicon retina are binarized and transferred to the FPGA, which executes the digital image processing. The system was applied to real-time target tracking under indoor illumination: the center of a target object was found as the median of the binarized image, and the object could be tracked at video frame rate under indoor illumination. The system has compact hardware and low power consumption, and is therefore suitable for robot vision.
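For readers without the hardware, the pre-processing and tracking chain can be mimicked in software roughly as follows; the sigma and threshold values are arbitrary illustrative choices, and the real system performs the spatial filtering and frame subtraction in the analog domain on the silicon retina.

import numpy as np
from scipy.ndimage import gaussian_laplace

def track_target(frame, prev_frame, log_sigma=2.0, thresh=10.0):
    # Laplacian-of-Gaussian spatial filtering plus a consecutive-frame
    # difference, binarization, and a median-based position estimate.
    log_img = gaussian_laplace(frame.astype(np.float32), sigma=log_sigma)
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    mask = (np.abs(log_img) > thresh) & (diff > thresh)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None   # no moving target found in this frame
    return float(np.median(xs)), float(np.median(ys))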


1974, Vol 18 (5), pp. 498-506
Author(s): H. Alsberg, R. Nathan

The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a “tele”-vision system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. The function of the video system is to reproduce the original scenes in pictorial form. Systematic errors in photometry, resolution, geometry, and perhaps color can be removed by decalibration procedures. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Recovering images from various degradation effects is commonly referred to as restoration; image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. At the Image Processing Laboratory (IPL) of JPL, we employ a general-purpose digital computer (IBM 360/44) with an extensive special-purpose software system (VICAR) to perform an almost unlimited repertoire of processing operations. This approach has proven to be flexible, versatile, and well suited to experimental work. Guided by the experience of the IPL and recent advances in LSI technology, we report on special hardwired algorithms that have sped up the processing by several orders of magnitude. Although quantum-limited imaging was made possible by noise removal and contrast enhancement as part of a development in electron microscopy, these methods and experiences are transferable to other teleoperator applications. The processing and enhancement of images are controlled by the operator/scientist, who matches his perceptual needs to optimally adjust the instrument. Central to the near-real-time image processing is a high-speed digital solid-state mass memory operating at input/output speeds compatible with standard TV rates. Thus the operator, as the most important link in the loop, is provided with a real-time interactive display that enables him to perceive the remote workspace as required to execute remote manipulation tasks.
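The enhancement operations mentioned above are described only in general terms; as one concrete example of the class, here is a minimal percentile-based contrast stretch in Python. It illustrates contrast enhancement in general and is not a reconstruction of the VICAR routines or of the hardwired algorithms reported here.

import numpy as np

def stretch_contrast(img, low_pct=1.0, high_pct=99.0):
    # Map the chosen percentile range of intensities onto the full 8-bit
    # scale, clipping the extremes; a basic form of contrast enhancement.
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)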


2006, Vol 89 (6), pp. 34-43
Author(s): Shingo Kagami, Takashi Komuro, Yoshihiro Watanabe, Masatoshi Ishikawa

2020
Author(s): Jun Ki Kim, Youngkyu Kim, Jungmin Oh, Seung-Ho Choi, Ahra Jung, ...

BACKGROUND: Recently, high-speed digital imaging (HSDI), especially high-speed digital endoscopic imaging, has come into routine use for the diagnosis of vocal fold disorders. However, high-speed digital endoscopic imaging devices are usually large and costly, which limits access for patients in underdeveloped countries and in regions with inadequate medical infrastructure. Modern smartphones have sufficient computing power for the complex calculations required to process high-resolution images and video at a high frame rate, and several recent attempts have been made to integrate medical endoscopes with smartphones to make them more accessible in underdeveloped countries.
OBJECTIVE: To develop a smartphone adaptor for endoscopes that reduces device cost, and to demonstrate the feasibility of high-speed vocal cord imaging using the high-speed imaging functions of a high-performance smartphone camera.
METHODS: A customized smartphone adaptor was designed for clinical endoscopy using selective laser melting (SLM)-based 3D printing. An existing laryngoscope was attached to the smartphone adaptor to acquire high-speed vocal cord endoscopic images, and only the basic, built-in functions of the smartphone camera were used for HSDI of the vocal folds. For image processing, segmented glottal areas were calculated from all HSDI frames, and characteristics such as volume, shape, and longitudinal edge length were analyzed.
RESULTS: High-speed digital smartphone imaging with the smartphone-endoscope adaptor achieved 940 frames per second and was used to image the vocal folds of five volunteers. The image processing and analytics successfully derived the relevant diagnostic variables from the acquired images.
CONCLUSIONS: A smartphone-based HSDI endoscope system can function as a point-of-care clinical diagnostic device and is suitable as an accessible diagnostic method in underdeveloped areas with inadequate medical service infrastructure.
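The glottal-area analysis could look roughly like the following Python sketch, which assumes the glottal opening appears darker than the surrounding tissue and reduces each frame to a segmented area. The threshold and the dark-region assumption are illustrative; the paper's actual segmentation and its shape and edge-length measures are not reproduced.

import numpy as np

def glottal_area_series(frames, thresh=60):
    # frames: iterable of grayscale high-speed endoscopic images.
    # Returns the segmented glottal area (pixel count) per frame, from
    # which an area waveform over the vibratory cycle can be derived.
    areas = []
    for f in frames:
        mask = f < thresh            # crude dark-region segmentation
        areas.append(int(np.count_nonzero(mask)))
    return np.array(areas)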


2012
Author(s): Husniza Razalli, Rahmita Wirza O. K. Rahmat, Ramlan Mahmud

Robust, non-intrusive human eye detection has been a fundamental and challenging problem in computer vision. Not only is it a problem in its own right, solving it also eases the task of locating other facial features for recognition and human-computer interaction purposes. Many previous works can determine the locations of the human eyes, but the aim of this paper is not merely a vision system with eye detection capability. Our goal is to design a real-time face tracker with iris localization, using an edge-point detection method based on image processing together with a circle-fitting technique. The resulting eye tracker was successfully implemented using a non-intrusive webcam with minimal error. Key words: Real-time face tracking; iris localization; image processing; edge detection; circle fitting
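A minimal sketch of the circle-fitting step is given below, assuming iris-boundary edge points have already been extracted by an edge detector; the algebraic (Kasa) least-squares fit used here is one standard choice and not necessarily the authors' exact method.

import numpy as np

def fit_circle(xs, ys):
    # Algebraic least-squares circle fit to edge points (Kasa method):
    # solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover
    # the centre (-a/2, -b/2) and radius sqrt(cx^2 + cy^2 - c).
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, radius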

