Real-Time Dynamic 3-D Object Shape Reconstruction and High-Fidelity Texture Mapping for 3-D Video

2004, Vol 14 (3), pp. 357-369
Author(s): T. Matsuyama, X. Wu, T. Takai, T. Wada
Sensors, 2020, Vol 20 (2), pp. 521
Author(s): Cheng Fei, Yanyang Ma, Shan Jiang, Junliang Liu, Baoqing Sun, ...

In this paper, a real-time, dynamic three-dimensional (3D) shape reconstruction scheme based on the Fourier-transform profilometry (FTP) method is demonstrated with a short-wave infrared (SWIR) indium gallium arsenide (InGaAs) camera for monitoring applications in low-illumination environments. A SWIR 3D shape reconstruction system is built to generate and acquire the SWIR two-dimensional (2D) fringe pattern of the target. The depth information of the target is reconstructed with an improved FTP method, which offers both high reconstruction accuracy and high speed. For static 3D shape reconstruction, the maximum depth error is 1.15 mm for a plastic model with a maximum depth of 36 mm. Meanwhile, the system achieves real-time 3D shape reconstruction at a frame rate of 25 Hz, which gives it strong application prospects in real-time dynamic 3D shape reconstruction, such as low-illumination monitoring. For real-time dynamic 3D shape reconstruction, excluding the edge areas, the maximum depth error over all frames is 1.42 mm for a hemisphere with a depth of 35 mm, and the maximum error of the frame-averaged depth is 0.52 mm.
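The paper's improved FTP variant is not reproduced here, but the standard single-frame FTP pipeline it builds on can be sketched as follows: the fringe image is Fourier-transformed along the carrier direction, the fundamental carrier component is band-pass filtered, and the wrapped phase difference against a reference (flat-plane) image is unwrapped and scaled to depth. The sketch below is a minimal illustration assuming vertical fringes and placeholder parameters (carrier_freq, phase_to_depth) that are not taken from the paper.

import numpy as np

def ftp_depth(fringe_img, reference_img, carrier_freq, phase_to_depth=1.0):
    """Recover a relative depth map from one fringe image via basic FTP.

    fringe_img, reference_img : 2-D arrays (object and flat-reference fringes)
    carrier_freq              : fringe carrier frequency in cycles per pixel (assumed)
    phase_to_depth            : illustrative phase-to-depth scale factor (assumed)
    """
    def wrapped_phase(img):
        # 1-D FFT along each row; fringes are assumed vertical, carrier along x.
        spectrum = np.fft.fft(img, axis=1)
        freqs = np.fft.fftfreq(img.shape[1])
        # Band-pass filter keeping only the +1 carrier component.
        band = np.abs(freqs - carrier_freq) < carrier_freq / 2
        analytic = np.fft.ifft(spectrum * band[None, :], axis=1)
        return np.angle(analytic)          # wrapped phase in (-pi, pi]

    # Phase difference between object and reference encodes the surface height.
    dphi = wrapped_phase(fringe_img) - wrapped_phase(reference_img)
    dphi = np.unwrap(np.unwrap(dphi, axis=1), axis=0)   # simple row/column unwrapping
    return phase_to_depth * dphi           # relative depth map (arbitrary units)

Usage would be a single call per frame, e.g. depth = ftp_depth(obj_frame, ref_frame, carrier_freq=0.1), which is what makes single-shot FTP attractive for the 25 Hz real-time operation reported above.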


CICTP 2020, 2020
Author(s): Lina Mao, Wenquan Li, Pengsen Hu, Guiliang Zhou, Huiting Zhang, ...

2021, Vol 157, pp. 107720
Author(s): Christina Insam, Arian Kist, Henri Schwalm, Daniel J. Rixen

2021, Vol 11 (4), pp. 1933
Author(s): Hiroomi Hikawa, Yuta Ichikawa, Hidetaka Ito, Yutaka Maeda

In this paper, a real-time dynamic hand gesture recognition system with a gesture spotting function is proposed. In the proposed system, input video frames are converted to feature vectors, which are used to form a posture sequence vector representing the input gesture. Gesture identification and gesture spotting are then carried out by the self-organizing map (SOM)-Hebb classifier. The gesture spotting function detects the end of a gesture from the vector distance between the posture sequence vector and the winner neuron's weight vector. The proposed gesture recognition method was tested by simulation and in a real-time gesture recognition experiment. Results revealed that the system could recognize nine types of gestures with an accuracy of 96.6%, and it successfully output the recognition result at the end of each gesture using the spotting result.
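As a rough illustration of the spotting rule described above (not the authors' implementation), the sketch below matches a posture sequence vector against trained SOM weight vectors and declares the end of a gesture when the winner neuron's distance falls below a threshold; the SOM weights, the Hebb-layer label table, and the threshold value are assumed placeholders.

import numpy as np

def spot_gesture(seq_vec, som_weights, hebb_labels, dist_threshold):
    """Return (label, spotted) for one posture sequence vector.

    seq_vec        : (D,)   current posture sequence vector
    som_weights    : (N, D) weight vectors of the trained SOM neurons (assumed given)
    hebb_labels    : (N,)   gesture class associated with each neuron via the Hebb layer
    dist_threshold : scalar spotting threshold (illustrative value, not from the paper)
    """
    # Winner-take-all: the neuron whose weight vector is closest to the input.
    dists = np.linalg.norm(som_weights - seq_vec[None, :], axis=1)
    winner = int(np.argmin(dists))
    # Gesture spotting: the gesture is considered complete when the winner's
    # distance drops below the threshold.
    spotted = dists[winner] < dist_threshold
    return hebb_labels[winner], spotted

In a streaming setting, this function would be called once per frame on the updated posture sequence vector, and the recognition result would be emitted only when spotted is True, matching the end-of-gesture output behavior reported above.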

