High-Speed Acquisition and Pre-processing of Polarimetric Image Sequences

Author(s):  
Luc Gendre ◽  
Alban Foulonneau ◽  
Laurent Bigué

Author(s):  
S. Gao ◽  
Z. Ye ◽  
C. Wei ◽  
X. Liu ◽  
X. Tong

Abstract. The high-speed videogrammetric measurement system, which provides a convenient way to capture the three-dimensional (3D) dynamic response of moving objects, has been widely used in various applications owing to its remarkable advantages, including non-contact operation, flexibility, and high precision. This paper presents a distributed high-speed videogrammetric measurement system suitable for monitoring large-scale structures. The overall framework consists of two parts, hardware and software, namely observation network construction and data processing. The core components of the observation network are high-speed cameras that provide multiview image sequences. The data processing part automatically obtains the 3D structural deformations of key points from the captured image sequences, and a distributed parallel processing framework is adopted to speed up the image sequence processing. An experiment was conducted to measure the dynamics of a double-tube five-layer building structure on a shaking table using the presented videogrammetric measurement system. Compared with high-accuracy total station measurements, the presented system achieves coordinate discrepancies at the sub-millimeter level. The 3D deformation results demonstrate the potential of the non-contact high-speed videogrammetric measurement system for dynamic monitoring of large-scale shake table tests.
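The parallel image-sequence processing step can be pictured with a minimal single-machine stand-in for the distributed framework: independent workers process frames and extract candidate key-point locations, which would later be matched across views and triangulated. The frame layout, blob-size limits, and worker count below are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of parallel key-point extraction from captured frames,
# assuming the key points are high-contrast circular targets already on disk.
# File names and size limits are hypothetical.
from multiprocessing import Pool
import glob
import cv2

def locate_targets(frame_path):
    """Detect candidate key-point centers in one frame via blob centroids."""
    img = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Keep blobs in a plausible size range for photogrammetric targets.
    keep = [i for i in range(1, n) if 20 < stats[i, cv2.CC_STAT_AREA] < 2000]
    return frame_path, centroids[keep]

if __name__ == "__main__":
    frames = sorted(glob.glob("camera_01/frame_*.png"))  # hypothetical layout
    with Pool(processes=8) as pool:                       # one worker per core
        per_frame_targets = pool.map(locate_targets, frames)
    # Downstream steps (not shown): multi-view matching, triangulation to 3D,
    # and differencing against the reference epoch to obtain deformations.
```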


2013 ◽  
Vol 1 (1) ◽  
pp. 14-25 ◽  
Author(s):  
Tsuyoshi Miyazaki ◽  
Toyoshiro Nakashima ◽  
Naohiro Ishii

The authors describe an improved method for detecting distinctive mouth shapes in Japanese utterance image sequences. Their previous method uses template matching. Two types of mouth shapes are formed when a Japanese phone is pronounced: one at the beginning of the utterance (the beginning mouth shape, BeMS) and the other at the end (the ending mouth shape, EMS). The authors’ previous method could detect mouth shapes, but it misdetected some shapes because the time period in which the BeMS was formed was short. Therefore, they predicted that a high-speed camera would be able to capture the BeMS with higher accuracy. Experiments showed that the BeMS could be captured; however, the authors faced another problem. Deformed mouth shapes that appeared in the transition from one shape to another were detected as the BeMS. This study describes the use of optical flow to prevent the detection of such mouth shapes. The time period in which the mouth shape is deformed is detected using optical flow, and the mouth shape during this time is ignored. The authors propose an improved method of detecting the BeMS and EMS in Japanese utterance image sequences by using template matching and optical flow.
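A rough sketch of this gating idea, assuming OpenCV-style dense optical flow over the mouth region: frames whose mean flow magnitude exceeds a threshold are treated as transitional and skipped, and template matching is applied only to settled frames. The threshold value and the template set are hypothetical, not the authors' parameters.

```python
# Sketch: ignore transitional (deforming) mouth shapes using optical flow,
# then classify settled frames by template matching.
import cv2
import numpy as np

FLOW_THRESHOLD = 1.5  # assumed mean displacement above which a frame is "in transition"

def classify_mouth_shape(prev_gray, curr_gray, templates):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2).mean()
    if magnitude > FLOW_THRESHOLD:
        return None  # deformed/transitional mouth shape: ignore this frame
    # Otherwise match against the distinctive mouth-shape templates.
    scores = {name: cv2.matchTemplate(curr_gray, tmpl, cv2.TM_CCOEFF_NORMED).max()
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)
```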


2016 ◽  
Vol 82 (7) ◽  
pp. 547-557 ◽  
Author(s):  
Tiantian Feng ◽  
Huan Mi ◽  
Marco Scaioni ◽  
Gang Qiao ◽  
Ping Lu ◽  
...  

2012 ◽  
Vol 201-202 ◽  
pp. 1076-1079
Author(s):  
De Yong You ◽  
Xiang Dong Gao

The laser welding process has been widely used in industrial manufacturing. The purpose of this paper is to explore the relationship between laser welding results and laser-induced plume behavior. High-power disk laser welding of type 304 stainless steel was performed at different welding speeds. By combining a high-speed camera with an ultraviolet sensing filter, plume image sequences of the laser welding process were obtained. Plume features, including plume volume and plume flow direction, were extracted using high-speed photography and image processing technology, and the dynamic behavior of the laser-induced plume was investigated. The results showed that the laser-induced plume features, especially the plume volume, were closely related to the laser welding process conditions.
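As an illustration (not the authors' code), the two plume features can be approximated from a binarized frame with image moments: the zeroth moment gives the plume area as a proxy for plume volume, and the second-order central moments give the orientation of the principal axis as the flow direction. The threshold is an assumed value.

```python
# Sketch: plume area and principal-axis orientation from a high-speed frame.
import cv2
import numpy as np

def plume_features(gray_frame, threshold=200):
    _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    area = m["m00"]                      # plume "volume" proxy in pixels
    if area == 0:
        return 0.0, None
    # Orientation of the principal axis from second-order central moments.
    angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return area, np.degrees(angle)
```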


Author(s):  
C. Jepping ◽  
F. Bethmann ◽  
T. Luhmann

This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, in photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for detecting corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough to reliably handle occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose, a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
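The congruence analysis can be sketched as a small RANSAC loop: random point groups from one epoch are mapped to the next with an estimated 3D similarity transformation, and the points whose residuals remain small form the stable (congruent) set. The estimator below is the standard Umeyama/Procrustes solution; the sample size, iteration count, and tolerance are illustrative assumptions, not the paper's settings.

```python
# Sketch: RANSAC-style congruence analysis between two epochs of 3D points.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale, rotation, translation mapping src -> dst (Nx3 arrays)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def stable_points(pts_t0, pts_t1, iterations=500, tol=0.5):
    best_inliers = np.zeros(len(pts_t0), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        idx = rng.choice(len(pts_t0), size=4, replace=False)
        s, R, t = similarity_transform(pts_t0[idx], pts_t1[idx])
        residuals = np.linalg.norm(pts_t1 - (s * (R @ pts_t0.T).T + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers  # mask of points considered congruent (stable) between epochs
```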


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Sanjay Singh ◽  
Takeshi Takaki ◽  
Idaku Ishii

Abstract. In this study, a novel approach to real-time video stabilization using a high-frame-rate (HFR) jitter sensing device is demonstrated, realizing a computationally efficient technique of digital video stabilization for high-resolution image sequences. The system consists of a high-speed camera that extracts and tracks feature points in gray-level 512 × 496 image sequences at 1000 fps and a high-resolution CMOS camera that captures 2048 × 2048 image sequences, the two being hybridized to achieve real-time stabilization. The high-speed camera functions as a real-time HFR jitter sensing device that measures the apparent jitter movement of the system using two kinds of computational acceleration: (1) feature point extraction with a parallel processing circuit module for Harris corner detection, and (2) correspondence of hundreds of feature points in the current frame to those within neighboring ranges in the previous frame, under the assumption of small frame-to-frame displacement in high-speed vision. The proposed hybrid-camera system can digitally stabilize the 2048 × 2048 images captured with the high-resolution CMOS camera by compensating the sensed jitter displacement in real time for display to human eyes on a computer screen. Experiments were conducted to demonstrate the effectiveness of hybrid-camera-based digital video stabilization: (a) verification when the hybrid-camera system was moved in the pan direction in front of a checkered pattern, (b) stabilization of video of a photographic pattern when the system moved with a mixed motion of jitter and constant low velocity in the pan direction, and (c) stabilization of video of a real-world outdoor scene when an operator held the hand-held hybrid-camera module while walking up stairs.
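A simplified, single-threaded sketch of the jitter sensing and compensation loop, with pyramidal Lucas-Kanade tracking standing in for the paper's hardware Harris-corner module and nearest-neighbor matching: the translation sensed in the low-resolution high-frame-rate stream is scaled to the high-resolution frame and compensated by warping. All parameters, including the resolution scale factor, are illustrative assumptions.

```python
# Sketch: sense jitter in the low-res HFR stream, compensate the high-res frame.
import cv2
import numpy as np

def estimate_jitter(prev_gray, curr_gray, max_disp=8):
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7,
                                       useHarrisDetector=True)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(15, 15), maxLevel=1)
    good = status.ravel() == 1
    d = (curr_pts - prev_pts)[good].reshape(-1, 2)
    d = d[np.linalg.norm(d, axis=1) < max_disp]   # keep small frame-to-frame motion only
    return d.mean(axis=0) if len(d) else np.zeros(2)

def stabilize(high_res_frame, jitter, scale=2048 / 512):
    dx, dy = -jitter * scale                      # compensate the sensed displacement
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    h, w = high_res_frame.shape[:2]
    return cv2.warpAffine(high_res_frame, M, (w, h))
```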


2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
Bing-Fei Wu ◽  
Wang-Chuan Lu ◽  
Cheng-Lung Jen

This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate the target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization step that starts the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking by an indoor robot and has high potential for extension to surveillance and monitoring with Unmanned Aerial Vehicles equipped with aerial odometry sensors. The experimental results show centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
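A minimal EKF sketch in the spirit of this sensor fusion: the robot pose (x, y, heading) is predicted from wheel-encoder odometry and corrected with the electrical-compass heading. The camera-based target update is omitted for brevity, and all noise values are assumptions rather than the paper's parameters.

```python
# Sketch: EKF predict (odometry) and update (compass heading) for a planar robot pose.
import numpy as np

x = np.zeros(3)                      # state: [x, y, theta]
P = np.eye(3) * 0.1                  # state covariance
Q = np.diag([0.02, 0.02, 0.01])      # process noise (odometry), assumed
R = np.array([[0.05]])               # measurement noise (compass), assumed

def predict(x, P, v, w, dt):
    theta = x[2]
    x = x + np.array([v * dt * np.cos(theta), v * dt * np.sin(theta), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(theta)],
                  [0, 1,  v * dt * np.cos(theta)],
                  [0, 0, 1]])
    return x, F @ P @ F.T + Q

def update_heading(x, P, z_heading):
    H = np.array([[0.0, 0.0, 1.0]])
    y = np.array([z_heading - x[2]])            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + (K @ y).ravel(), (np.eye(3) - K @ H) @ P
```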


2016 ◽  
Vol 25 (4) ◽  
pp. 576-589 ◽  
Author(s):  
Maria E. Powell ◽  
Dimitar D. Deliyski ◽  
Robert E. Hillman ◽  
Steven M. Zeitels ◽  
James A. Burns ◽  
...  

Purpose: Videostroboscopy (VS) uses an indirect physiological signal to predict the phase of the vocal fold vibratory cycle for sampling. Simulated stroboscopy (SS) extracts the phase of the glottal cycle directly from the changing glottal area in the high-speed videoendoscopy (HSV) image sequence. The purpose of this study is to determine the reliability of SS relative to VS for clinical assessment of vocal fold vibratory function in patients with mass lesions. Methods: VS and SS recordings were obtained from 28 patients with vocal fold mass lesions before and after phonomicrosurgery and from 17 vocally healthy controls. Two clinicians rated clinically relevant vocal fold vibratory features using both imaging techniques, indicated their internal level of confidence in the accuracy of their ratings, and provided reasons for low or no confidence. Results: SS had fewer asynchronous image sequences than VS, and vibratory outcomes could be computed for more patients using SS. In addition, raters demonstrated better interrater reliability and reported equal or higher levels of confidence using SS than VS. Conclusion: Stroboscopic techniques based on extracting the phase directly from the HSV image sequence are more reliable than acoustic-based VS. The findings suggest that SS derived from high-speed videoendoscopy is a promising improvement over current VS systems.
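The principle behind SS can be sketched as follows, under the assumption that the glottal area is measured by simple thresholding within a region of interest: the instantaneous phase of the glottal-area waveform is extracted (here with a Hilbert transform) and frames are resampled at a slowly advancing phase to produce a stroboscopic playback. This is an illustration of the idea, not the published implementation.

```python
# Sketch: simulated stroboscopy by phase-sampling the glottal-area waveform.
import numpy as np
from scipy.signal import hilbert

def glottal_area(frame_gray, roi, threshold=60):
    r0, r1, c0, c1 = roi
    # Dark pixels inside the ROI are taken as the open glottis (assumed convention).
    return np.count_nonzero(frame_gray[r0:r1, c0:c1] < threshold)

def simulated_strobe_indices(area_waveform, phase_step=0.08):
    area = np.asarray(area_waveform, dtype=float)
    analytic = hilbert(area - area.mean())
    phase = np.unwrap(np.angle(analytic))
    # Pick roughly one frame per cycle, advanced by phase_step of a cycle each time,
    # so the vibratory motion appears slowed down.
    targets = np.arange(phase[0], phase[-1], 2 * np.pi * (1 + phase_step))
    return [int(np.argmin(np.abs(phase - t))) for t in targets]
```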


2018 ◽  
Author(s):  
Zhaoqiang Wang ◽  
Lanxin Zhu ◽  
Hao Zhang ◽  
Guo Li ◽  
Chengqiang Yi ◽  
...  

Abstract. Light-field microscopy has emerged as a technique of choice for high-speed volumetric imaging of fast biological processes. However, artefacts, non-uniform resolution, and a slow reconstruction speed have limited its full capability for in toto extraction of dynamic spatiotemporal patterns in samples. Here, we combined a view-channel-depth (VCD) neural network with light-field microscopy to mitigate these limitations, yielding artefact-free three-dimensional image sequences with uniform spatial resolution and a reconstruction throughput three orders of magnitude higher, at video rate. We imaged neuronal activity across moving C. elegans and blood flow in a beating zebrafish heart at single-cell resolution, with volume rates of up to 200 Hz.
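As a toy illustration only (the published VCD architecture is not specified here), the name "view-channel-depth" suggests a network that takes the angular views of a light-field frame as input channels and predicts a stack of depth planes, i.e. a 3D volume, in a single forward pass. The layer sizes, view count, and depth count below are assumptions.

```python
# Toy sketch: map light-field views (as channels) to a stack of depth planes.
import torch
import torch.nn as nn

class ViewsToDepthNet(nn.Module):
    def __init__(self, num_views=49, num_depths=61):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_views, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, num_depths, 3, padding=1),
        )

    def forward(self, views):          # views: (batch, num_views, H, W)
        return self.net(views)         # volume: (batch, num_depths, H, W)
```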

