1A1-C12 High-Speed Active Vision with High-Frame-Rate Video Recording

2009 ◽  
Vol 2009 (0) ◽  
pp. _1A1-C12_1-_1A1-C12_4
Author(s):  
Tetsuro Tatebe ◽  
Yuta Moriue ◽  
Takeshi Takaki ◽  
Idaku Ishii ◽  
Kenji Tajima

2011 ◽  
Vol 23 (1) ◽  
pp. 53-65 ◽  
Author(s):  
Yao-Dong Wang ◽  
Idaku Ishii ◽  
Takeshi Takaki ◽  
Kenji Tajima ◽  
...  

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and high-frame-rate (HFR) video recording simultaneously. In IDP Express, 512×512 pixel images from two camera heads and the processed results on a dedicated FPGA (field-programmable gate array) board are transferred to standard PC memory at a rate of 1000 fps or more. Owing to the simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video logging system for long-term high-speed phenomenon analysis. In this paper, a real-time abnormal behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine with repetitive operation at a frequency of 15 Hz, and videos of the abnormal behaviors were automatically recorded to verify the effectiveness of our intelligent HFR video logging system.
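The abstract does not spell out the detection algorithm, so the sketch below is only a plausible illustration of the idea, assuming a fixed camera, grayscale frames, and a hand-picked threshold: each frame is compared against the frame captured exactly one motion period earlier, and a large difference triggers HFR recording. Function names, the period calculation, and the threshold are assumptions, not the algorithm implemented on the IDP Express FPGA.

```python
# Hypothetical sketch: flag frames that deviate from the previous cycle of a
# periodic motion. Frame source, period length, and threshold are assumptions.
from collections import deque
import numpy as np

FPS = 1000          # recording rate (frames per second)
MOTION_HZ = 15      # slider repetition frequency
PERIOD = round(FPS / MOTION_HZ)   # ~67 frames per cycle
THRESHOLD = 12.0    # mean absolute gray-level difference that counts as "abnormal"

def detect_abnormal(frames):
    """Yield (frame_index, score) for frames that differ too much from
    the corresponding frame one motion period earlier."""
    history = deque(maxlen=PERIOD)
    for i, frame in enumerate(frames):
        if len(history) == PERIOD:
            reference = history[0]   # frame captured exactly one period ago
            score = np.abs(frame.astype(np.int16) - reference.astype(np.int16)).mean()
            if score > THRESHOLD:
                yield i, score       # trigger HFR video logging here
        history.append(frame)
```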


2015 ◽  
Vol 27 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Qingyi Gu ◽  
◽  
Sushil Raut ◽  
Ken-ichi Okumura ◽  
Tadayoshi Aoyama ◽  
...  

<div class=""abs_img""><img src=""[disp_template_path]/JRM/abst-image/00270001/02.jpg"" width=""300"" />Synthesized panoramic images</div> In this paper, we propose a real-time image mosaicing system that uses a high-frame-rate video sequence. Our proposed system can mosaic 512 × 512 color images captured at 500 fps as a single synthesized panoramic image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. In the system, feature point extraction is accelerated by implementing a parallel processing circuit module for Harris corner detection, and hundreds of selected feature points in the current frame can be simultaneously corresponded with those in their neighbor ranges in the previous frame, assuming that frame-to-frame image displacement becomes smaller in high-speed vision. The efficacy of our system for improved feature-based real-time image mosaicing at 500 fps was verified by implementing it on a field-programmable gate array (FPGA)-based high-speed vision platform and conducting several experiments: (1) capturing an indoor scene using a camera mounted on a fast-moving two-degrees-of-freedom active vision, (2) capturing an outdoor scene using a hand-held camera that was rapidly moved in a periodic fashion by hand. </span>


2021 ◽  
Author(s):  
Jamin Islam

For the purpose of autonomous satellite grasping, a high-speed, low-cost, and highly accurate stereo vision system is required. Such a system must be able to detect an object and estimate its range. Hardware solutions are often chosen over software solutions, which tend to be too slow for high frame-rate applications. Designs based on field-programmable gate arrays (FPGAs) provide flexibility and are cost-effective compared with solutions that offer similar performance (i.e., application-specific integrated circuits). This thesis presents the architecture and implementation of a high frame-rate stereo vision system based on an FPGA platform. The system acquires stereo images, performs stereo rectification, and generates disparity estimates at frame rates close to 100 fps; on a sufficiently large FPGA, it can process 200 fps. The implementation presents novelties in performance and in the choice of the algorithm implemented. It achieves superior performance to existing systems that estimate scene depth. Furthermore, it demonstrates equivalent accuracy to software implementations of the dynamic programming maximum likelihood stereo correspondence algorithm.
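The thesis's dynamic-programming maximum-likelihood correspondence algorithm and FPGA architecture are not reproduced here. Purely as a software point of comparison, the sketch below rectifies a stereo pair and computes a disparity map with OpenCV's semi-global matcher (a different algorithm), using hypothetical calibration values and image files.

```python
# Software sketch (not the thesis's FPGA design): rectify a stereo pair and
# compute disparities with OpenCV's semi-global matcher. Calibration values
# and image files are hypothetical placeholders.
import cv2
import numpy as np

# Hypothetical calibration (intrinsics K1/K2, distortion D1/D2, extrinsics R/T)
K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
D1 = D2 = np.zeros(5)
R = np.eye(3)
T = np.array([[-0.06], [0.0], [0.0]])     # assumed 6 cm baseline
size = (640, 480)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

left = cv2.remap(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE),
                 map1x, map1y, cv2.INTER_LINEAR)
right = cv2.remap(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE),
                  map2x, map2y, cv2.INTER_LINEAR)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point
depth = cv2.reprojectImageTo3D(disparity, Q)                        # metric 3-D points
```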


Author(s):  
Irina Znamenskaya ◽  
Nikolay Sysoev ◽  
Igor Doroshchenko

Digital imaging has become one of the main tools for studying unsteady flows. Modern high-speed cameras support video recording at high frame rates, which makes it possible to study extended high-speed processes. We demonstrate here different animations: the evolution of a water temperature field at a frame rate of 115 Hz, and high-speed shadowgraph visualization of different flows, including a water jet formation process (100 000 frames/s) and shadowgraph animations of the shock waves created by pulsed discharges (124 000 frames/s). As an example of a plasma flow visualization technique, we also present nine sequential images of the shock wave produced by a pulsed gas discharge, obtained with a high-speed CCD camera with a 100 ns delay between frames. We developed in-house software based on machine vision and machine learning techniques for automatic processing of flow animations. Examples of automatic oblique shock detection using Canny edge detection and the Hough transform, and of thermal plume detection based on a pre-trained convolutional neural network, are provided and discussed.
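The authors' in-house software is not shown here; the following is simply a generic OpenCV rendering of the named Canny-plus-Hough pipeline for picking out a straight oblique-shock front in a shadowgraph frame, with assumed thresholds and file names.

```python
# Illustrative sketch of the named pipeline (Canny edge detection + Hough
# transform) for detecting a straight oblique-shock front in a shadowgraph
# frame. Thresholds and the input file are assumptions, not the authors' code.
import cv2
import numpy as np

frame = cv2.imread("shadowgraph_frame.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(frame, (5, 5), 0)          # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)                   # binary edge map

# Probabilistic Hough transform: keep only long, nearly straight edge segments
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # shock inclination
        cv2.line(frame, (x1, y1), (x2, y2), 255, 2)
        print(f"candidate shock front at {angle:.1f} deg")
cv2.imwrite("shock_detection.png", frame)
```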


Author(s):  
C. G. Giannopapa ◽  
J. Hatton ◽  
E. Franken ◽  
B. van der Linden ◽  
P. Jenniskens

The Automated Transfer Vehicle (ATV) “Jules Verne” is the first completely automated rendezvous-and-docking spaceship to service the International Space Station (ISS). As a cargo ship, it is designed for one-time use. After completing its mission, it is subjected to hypersonic flow during re-entry into Earth’s atmosphere, with high associated heat flux leading to structural heating and fragmentation of the vehicle. During its first voyage, on September 29, 2008, the ATV re-entry was observed using various instruments, including a wide-field-view camera and high-frame-rate cameras. Using the wide-field-view camera, the trajectory path can be reconstructed. The high-frame-rate cameras give information about the sequence of explosion and fragmentation events of various parts of the spacecraft. The aim of this paper is to present the detailed events that occurred during the ATV re-entry.


2003 ◽  
Vol 125 (2) ◽  
pp. 238-245 ◽  
Author(s):  
Scott Tashman ◽  
William Anderst

Dynamic assessment of three-dimensional (3D) skeletal kinematics is essential for understanding normal joint function as well as the effects of injury or disease. This paper presents a novel technique for measuring in-vivo skeletal kinematics that combines data collected from high-speed biplane radiography and static computed tomography (CT). The goals of the present study were to demonstrate that highly precise measurements can be obtained during dynamic movement studies employing high frame-rate biplane video-radiography, to develop a method for expressing joint kinematics in an anatomically relevant coordinate system and to demonstrate the application of this technique by calculating canine tibio-femoral kinematics during dynamic motion. The method consists of four components: the generation and acquisition of high frame rate biplane radiographs, identification and 3D tracking of implanted bone markers, CT-based coordinate system determination, and kinematic analysis routines for determining joint motion in anatomically based coordinates. Results from dynamic tracking of markers inserted in a phantom object showed the system bias was insignificant (−0.02 mm). The average precision in tracking implanted markers in-vivo was 0.064 mm for the distance between markers and 0.31° for the angles between markers. Across-trial standard deviations for tibio-femoral translations were similar for all three motion directions, averaging 0.14 mm (range 0.08 to 0.20 mm). Variability in tibio-femoral rotations was more dependent on rotation axis, with across-trial standard deviations averaging 1.71° for flexion/extension, 0.90° for internal/external rotation, and 0.40° for varus/valgus rotation. Advantages of this technique over traditional motion analysis methods include the elimination of skin motion artifacts, improved tracking precision and the ability to present results in a consistent anatomical reference frame.
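One standard way to turn tracked implanted-marker positions into a bone's rigid-body pose, consistent with (but not taken from) the method described above, is a least-squares fit via SVD (the Kabsch solution). A minimal sketch follows; the function, variable names, and commented usage are hypothetical.

```python
# Minimal sketch of the standard SVD (Kabsch) rigid-body fit used to recover a
# bone's pose from >= 3 tracked marker positions; not the authors' code.
import numpy as np

def rigid_body_fit(markers_ref, markers_cur):
    """Least-squares rotation R and translation t such that
    markers_cur ~ markers_ref @ R.T + t  (both arrays are N x 3)."""
    c_ref = markers_ref.mean(axis=0)
    c_cur = markers_cur.mean(axis=0)
    H = (markers_ref - c_ref).T @ (markers_cur - c_cur)      # 3 x 3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_cur - R @ c_ref
    return R, t

# Hypothetical usage: express femur and tibia poses for radiographic frame k
# relative to CT-defined anatomic marker coordinates, then combine them:
# R_femur, t_femur = rigid_body_fit(femur_markers_ct, femur_markers_frame_k)
# R_tibia, t_tibia = rigid_body_fit(tibia_markers_ct, tibia_markers_frame_k)
# R_joint = R_tibia.T @ R_femur   # tibio-femoral rotation for frame k
```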


2021 ◽  
Vol 2057 (1) ◽  
pp. 012034
Author(s):  
A I Fedyushkin ◽  
A N Rozhkov ◽  
A O Rudenko

Abstract The collision of water drops with a thin cylinder is studied. The droplet flight trajectory and the cylinder axis are mutually perpendicular. In the experiments, the drop diameter is 3 mm, and the diameters of the horizontal stainless-steel cylinders are 0.4 and 0.8 mm. The drops are formed by a liquid slowly pumped through a vertical stainless-steel capillary with an outer diameter of 0.8 mm, from which droplets periodically separate under the action of gravity. The droplet velocity before collision is set by the distance between the capillary cut and the target (cylinder); in the experiments, this distance is approximately 5, 10, and 20 mm. The drop velocities before impact are estimated to be in the range of 0.2–0.5 m/s. The collision process is monitored by high-speed video recording at frame rates of 240 and 960 Hz. The test liquid is water. Experiments and numerical simulation show that, depending on the drop impact height (droplet velocity), different scenarios of a drop collision with a thin cylinder are possible: a short-term recoil of the drop from the obstacle; the drop flowing around the cylindrical obstacle while maintaining its continuity; the breakup of the drop into two secondary drops, of which one continues its flight while the other is captured by the cylinder, or both continue to fly; or capture of the drop by the cylinder until the impact of the next drop(s) forces the accumulated liquid to detach. Numerical modeling satisfactorily reproduces the phenomena observed in the experiment.
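As a back-of-envelope check that is not part of the paper, a free-fall estimate v = sqrt(2gh), with the fall height reduced by roughly one drop diameter to account for where the drop detaches (an assumption on my part), lands close to the reported 0.2–0.5 m/s range.

```python
# Back-of-envelope check (not from the paper): free-fall estimate of the drop
# impact velocity for the three capillary-to-cylinder distances. Reducing the
# fall height by roughly one drop diameter is an assumption.
import math

g = 9.81              # m/s^2
drop_diameter = 3e-3  # m
for distance_mm in (5, 10, 20):
    h = max(distance_mm * 1e-3 - drop_diameter, 0.0)   # effective fall height
    v = math.sqrt(2 * g * h)
    print(f"distance {distance_mm} mm -> v ~ {v:.2f} m/s")
# Prints roughly 0.20, 0.37 and 0.58 m/s, close to the reported 0.2-0.5 m/s.
```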


2021 ◽  
Vol 11 (6) ◽  
pp. 2676
Author(s):  
Yubo Ni ◽  
Feng Liu ◽  
Yi Wu ◽  
Xiangjun Wang

This paper introduces a continuous-time fast motion estimation framework using high frame-rate cameras. To recover the high-speed motion trajectory, we adopt bundle adjustment with a different frame-rate strategy. Based on the optimized trajectory, a cubic B-spline representation is proposed to parameterize the continuous-time position, velocity, and acceleration during the fast motion. We designed a high-speed visual system consisting of high frame-rate cameras and infrared cameras, which can capture the fast, scattered motion of explosion fragments and was used to evaluate our method. The experiments show that bundle adjustment greatly improves the accuracy and stability of the trajectory estimation, and that the B-spline representation of the high frame-rate trajectory can estimate the velocity, acceleration, momentum, and force of each fragment at any given time during its motion. The estimation error is under 1%.
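A minimal sketch of the cubic B-spline trajectory representation, assuming synthetic data and SciPy in place of the authors' implementation: fit the bundle-adjusted 3-D positions with a cubic spline, then read velocity and acceleration off its first and second derivatives. The frame rate, fragment mass, and trajectory are hypothetical.

```python
# Minimal sketch of a cubic B-spline trajectory representation (not the
# authors' implementation): fit discrete, bundle-adjusted 3-D positions, then
# read off velocity and acceleration as spline derivatives. Data are synthetic.
import numpy as np
from scipy.interpolate import make_interp_spline

fps = 1000.0                                 # hypothetical high frame rate
t = np.arange(200) / fps                     # timestamps of the tracked frames
positions = np.column_stack([                # synthetic fragment trajectory (m)
    5.0 * t, 3.0 * t, 1.0 * t - 0.5 * 9.81 * t**2])

spline = make_interp_spline(t, positions, k=3)   # cubic B-spline through the samples
velocity = spline.derivative(1)
acceleration = spline.derivative(2)

t_query = 0.1234                             # any time inside the motion
p, v, a = spline(t_query), velocity(t_query), acceleration(t_query)
mass = 0.01                                  # kg, assumed fragment mass
momentum, force = mass * v, mass * a         # continuous-time momentum and force
print(p, v, a, momentum, force)
```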


2020 ◽  
Vol 6 (28) ◽  
pp. eaba8595 ◽  
Author(s):  
Hui Gao ◽  
Yuxi Wang ◽  
Xuhao Fan ◽  
Binzhang Jiao ◽  
Tingan Li ◽  
...  

The hologram is an ideal method for displaying three-dimensional images visible to the naked eye. Metasurfaces consisting of subwavelength structures show great potential in light field manipulation, which is useful for overcoming the drawbacks of common computer-generated holography. However, there are long-standing challenges in achieving dynamic meta-holography in the visible range, such as low frame rate and low frame number. In this work, we demonstrate a design of meta-holography that can achieve 228 different holographic frames and an extremely high frame rate (9523 frames per second) in the visible range. The design is based on a space channel metasurface and a high-speed dynamic structured laser beam modulation module. The space channel consists of silicon nitride nanopillars with a high modulation efficiency. This method can satisfy the needs of a holographic display and be useful in other applications, such as laser fabrication, optical storage, optics communications, and information processing.

