Real-Time Projection Mapping Using High-Frame-Rate Structured Light 3D Vision

2015 ◽  
Vol 8 (4) ◽  
pp. 265-272 ◽  
Author(s):  
Jun CHEN ◽  
Takashi YAMAMOTO ◽  
Tadayoshi AOYAMA ◽  
Takeshi TAKAKI ◽  
Idaku ISHII
2014 ◽  
Vol 26 (3) ◽  
pp. 311-320 ◽  
Author(s):  
Yongjiu Liu ◽  
Hao Gao ◽  
Qingyi Gu ◽  
Tadayoshi Aoyama ◽  
...  

<div class=""abs_img""><img src=""[disp_template_path]/JRM/abst-image/00260003/04.jpg"" width=""300"" />HFR 3D vision system</span></div> This paper presents a fast motion-compensated structured-light vision system that realizes 3-D shape measurement at 500 fps using a high-frame-rate camera-projector system. Multiple light patterns with an 8-bit gray code, are projected on the measured scene at 1000 fps, and are processed in real time for generating 512 × 512 depth images at 500 fps by using the parallel processing of a motion-compensated structured-light method on a GPU board. Several experiments were performed on fast-moving 3-D objects using the proposed method. </span>


2020 ◽  
Author(s):  
Idaku Ishii ◽  
Deepak Kumar ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo

Abstract An informative object-pointing method using spatiotemporally modulated pattern projection is proposed to recognize and localize pointed objects with a distantly located high-frame-rate (HFR) vision system. We developed a prototype for projection-mapping-based object pointing that consists of an AI-camera-enabled projection (AiCP) system used as a transmitter for informative projection mapping and an HFR vision system operated as a receiver. The AiCP system detects multiple objects in real time at 30 fps with a CNN-based object detector, and simultaneously encodes and projects the detector's recognition results as 480-Hz-modulated light patterns onto the objects to be pointed at. Multiple 480-fps cameras can directly recognize and track the objects pointed at by the AiCP system, without camera calibration or complex recognition methods, by decoding the brightness signals of pixels in their images. To demonstrate the effectiveness of the proposed method, several desktop experiments using miniature objects and scenes were conducted under various conditions.
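As a rough illustration of the receiver side, the sketch below decodes an object ID from the temporal brightness of a single pixel, assuming simple on-off keying at one bit per frame with a fixed preamble. The AiCP modulation scheme is not specified in the abstract, so the preamble, payload length, and thresholding here are hypothetical.

```python
# Minimal sketch of recovering an object ID from a pixel's brightness over
# consecutive frames, under an assumed on-off-keying scheme with a fixed
# sync preamble. All constants below are illustrative assumptions.
import numpy as np

PREAMBLE = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # assumed sync pattern
ID_BITS = 8                                     # assumed payload length

def decode_pixel_id(brightness, threshold=None):
    """brightness: 1-D array of pixel intensities over consecutive frames.
    Returns the decoded integer ID, or None if no preamble is found."""
    b = np.asarray(brightness, dtype=np.float32)
    if threshold is None:
        threshold = 0.5 * (b.min() + b.max())   # midpoint binarization
    bits = (b > threshold).astype(np.uint8)
    n = len(PREAMBLE)
    # Slide over the bit stream, find the preamble, then read the payload.
    for start in range(len(bits) - n - ID_BITS + 1):
        if np.array_equal(bits[start:start + n], PREAMBLE):
            payload = bits[start + n:start + n + ID_BITS]
            return int(payload.dot(1 << np.arange(ID_BITS - 1, -1, -1)))
    return None

# Example: a synthetic signal carrying ID 42 after the preamble
signal = np.concatenate([PREAMBLE, [0, 0, 1, 0, 1, 0, 1, 0]]) * 255.0
print(decode_pixel_id(signal))  # -> 42
```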


1997 ◽  
Vol 119 (2) ◽  
pp. 151-160 ◽  
Author(s):  
Y. M. Zhang ◽  
R. Kovacevic

Seam tracking and weld penetration control are two fundamental issues in automated welding. Although seam tracking techniques have matured, weld penetration control remains an unsolved problem. It was found that the full penetration status during GTA welding can be determined with sufficient accuracy from the sag depression. To achieve a new full-penetration sensing technique, a structured-light 3D vision system is developed to extract the sag geometry behind the pool. The laser stripe, which is the intersection of the structured light and the weldment, is thinned and then used to acquire the sag geometry. To reduce possible control delay, a small distance is selected between the pool rear and the laser stripe. An adaptive dynamic search for rapid thinning of the stripe and the maximum principle of slope difference for unbiased recognition of the sag border are proposed to develop an effective real-time image processing algorithm for sag geometry acquisition. Experiments have shown that the proposed sensor and image processing algorithm can provide reliable feedback on the sag geometry for the full penetration control system.
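The following sketch illustrates the slope-difference idea in simplified form: along a thinned stripe profile, each candidate point is scored by the difference between the slopes fitted on its two sides, and the borders of the sag depression are taken where this difference is most extreme on either side of the deepest point. The window size, least-squares slope fit, and synthetic profile are illustrative assumptions rather than the paper's exact formulation.

```python
# Simplified sag-border detection on a thinned laser-stripe profile using
# left/right slope differences. Parameters and test data are assumptions.
import numpy as np

def slope(y):
    """Least-squares slope of a 1-D segment."""
    x = np.arange(len(y), dtype=np.float32)
    return np.polyfit(x, y, 1)[0]

def sag_borders(profile, window=5):
    """profile: 1-D stripe heights along the weld cross-section.
    Returns (left_border, right_border) indices of the sag depression."""
    diffs = np.full(len(profile), np.nan)
    for i in range(window, len(profile) - window):
        left = slope(profile[i - window:i + 1])
        right = slope(profile[i:i + window + 1])
        diffs[i] = right - left          # most negative where the surface bends downward
    bottom = int(np.argmin(profile))     # deepest point of the sag
    left_border = int(np.nanargmin(diffs[:bottom]))
    right_border = bottom + int(np.nanargmin(diffs[bottom:]))
    return left_border, right_border

# Synthetic cross-section: flat plate with a shallow sag in the middle
x = np.arange(100, dtype=np.float32)
profile = np.where((x > 40) & (x < 60), -2.0 * np.sin((x - 40) * np.pi / 20), 0.0)
print(sag_borders(profile))  # indices near the two edges of the depression
```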


2021 ◽  
Vol 15 (4) ◽  
pp. 820-833
Author(s):  
Junming Zeng ◽  
Lei Kuang ◽  
Miguel Cacho-Soblechero ◽  
Pantelis Georgiou

Author(s):  
Jaewoon Lee ◽  
Yeonjin Kim ◽  
Myeong-Hyeon Heo ◽  
Dongho Kim ◽  
Byeong-Seok Shin

2015 ◽  
Vol 27 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Qingyi Gu ◽  
Sushil Raut ◽  
Ken-ichi Okumura ◽  
Tadayoshi Aoyama ◽  
...  

<div class=""abs_img""><img src=""[disp_template_path]/JRM/abst-image/00270001/02.jpg"" width=""300"" />Synthesized panoramic images</div> In this paper, we propose a real-time image mosaicing system that uses a high-frame-rate video sequence. Our proposed system can mosaic 512 × 512 color images captured at 500 fps as a single synthesized panoramic image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. In the system, feature point extraction is accelerated by implementing a parallel processing circuit module for Harris corner detection, and hundreds of selected feature points in the current frame can be simultaneously corresponded with those in their neighbor ranges in the previous frame, assuming that frame-to-frame image displacement becomes smaller in high-speed vision. The efficacy of our system for improved feature-based real-time image mosaicing at 500 fps was verified by implementing it on a field-programmable gate array (FPGA)-based high-speed vision platform and conducting several experiments: (1) capturing an indoor scene using a camera mounted on a fast-moving two-degrees-of-freedom active vision, (2) capturing an outdoor scene using a hand-held camera that was rapidly moved in a periodic fashion by hand. </span>


Author(s):  
Alessandro Ramalli ◽  
Francesco Guidi ◽  
Alessandro Dallai ◽  
Enrico Boni ◽  
Ling Tong ◽  
...  

Author(s):  
J. T. Yen ◽  
K. K. Shung ◽  
L. Sun ◽  
C. Feng ◽  
J. M. Cannata ◽  
...  
