High-Detail Animation of Human Body Shape and Pose From High-Resolution 4D Scans Using Iterative Closest Point and Shape Maps

2020, Vol. 10 (21), pp. 7535
Author(s):  
Marta Nowak ◽  
Robert Sitnik

In this article, we present a method for analyzing 3D scanning sequences of human bodies in motion that allows us to obtain a computer animation of a virtual character containing both skeleton motion and high-detail deformations of the body surface geometry resulting from muscle activity, the dynamics of the motion, and tissue inertia. The developed algorithm operates on a sequence of 3D scans with high spatial and temporal resolution. The presented method can be applied to scans in the form of both triangle meshes and 3D point clouds. One contribution of this work is the use of the Iterative Closest Point algorithm with motion constraints for pose tracking, a task that has so far proven problematic. We also introduce shape maps as a tool for representing local body segment deformations. An important feature of our method is the ability to change the topology and resolution of the output mesh and the topology of the animation skeleton in individual sequences without time-consuming retraining of the model. Compared to the state-of-the-art Skinned Multi-Person Linear (SMPL) method, the proposed algorithm achieves almost twice the accuracy in shape mapping.
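The central alignment step described here is ICP with motion constraints. The sketch below illustrates the general idea with a single rigid ICP iteration whose per-frame rotation magnitude is clamped; the constraint form, the libraries (NumPy/SciPy), and the threshold are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of one ICP iteration with a simple per-frame motion
# constraint (rotation clamp). Assumes NumPy/SciPy; not the paper's code.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def icp_step(source, target, max_rot_deg=5.0):
    """One rigid ICP iteration; the rotation is clamped to max_rot_deg
    to emulate a motion constraint between consecutive frames."""
    # 1. Correspondences: closest target point for every source point.
    dist, idx = cKDTree(target).query(source)
    matched = target[idx]

    # 2. Kabsch alignment of the matched point sets.
    src_c, tgt_c = source.mean(0), matched.mean(0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T

    # 3. Motion constraint: clamp the per-frame rotation magnitude.
    rotvec = Rotation.from_matrix(R).as_rotvec()
    max_rad = np.deg2rad(max_rot_deg)
    if np.linalg.norm(rotvec) > max_rad:
        rotvec *= max_rad / np.linalg.norm(rotvec)
        R = Rotation.from_rotvec(rotvec).as_matrix()

    t = tgt_c - R @ src_c
    return R, t, dist.mean()
```

In a tracking loop, the step would be repeated per frame and per body segment, with the previous frame's pose as the initial guess.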

Author(s):  
H. A. Lauterbach ◽  
D. Borrmann ◽  
A. Nüchter

3D laser scanners are typically not able to collect color information, so coloring is often done by projecting photos from an additional camera onto the 3D scans. The capturing process is time-consuming and therefore prone to changes in the environment. The appearance of the colored point cloud is mainly affected by changes in lighting conditions and the corresponding camera settings. In the case of panorama images, these exposure variations are typically corrected by radiometrically aligning the input images to each other. In this paper, we adopt existing methods for panorama optimization to correct the coloring of point clouds. To this end, corresponding pixels from overlapping images are selected using the geometrically closest points of the registered 3D scans and their neighboring pixels in the images. The dynamic range of images in raw format allows for the correction of large exposure differences. Two experiments demonstrate the capabilities of the approach.
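As an illustration of the radiometric alignment idea, the minimal sketch below estimates a multiplicative exposure gain per image from corresponding pixel intensities in overlapping images, assumed to be already selected via the geometrically closest registered 3D points. The least-squares formulation and names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of exposure (radiometric) alignment across overlapping images.
import numpy as np

def estimate_exposure_gains(pairs, n_images):
    """pairs: list of (img_i, img_j, intensity_i, intensity_j) samples.
    Solves for per-image log-gains g such that g_i + log(I_i) ~ g_j + log(I_j),
    with image 0 fixed as the reference."""
    rows, rhs = [], []
    for i, j, Ii, Ij in pairs:
        r = np.zeros(n_images)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(Ij) - np.log(Ii))
    # Fix the gauge: image 0 has unit gain.
    rows.append(np.eye(n_images)[0])
    rhs.append(0.0)
    g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(g)   # multiplicative gain per image
```

The color of each 3D point would then be the gain-corrected average of its samples from the overlapping images.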


2020, Vol. 57 (6), pp. 061002
Author(s):  
彭真 Peng Zhen ◽  
吕远健 Lü Yuanjian ◽  
渠超 Qu Chao ◽  
朱大虎 Zhu Dahu

Author(s):  
M. R. Hess ◽  
V. Petrovic ◽  
F. Kuester

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vectors, or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the framework in achieving classification objectives.
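A user-defined classification function of the kind described might combine per-point color, laser intensity, and normal direction. The sketch below is a hypothetical example assuming NumPy arrays of per-point attributes; the thresholds and the "candidate brick wall" class are illustrative only, not taken from the paper.

```python
# Hypothetical user-defined classification over per-point attributes.
import colorsys
import numpy as np

def classify_points(colors, intensities, normals,
                    brick_hue=(0.0, 0.08), min_intensity=0.3,
                    vertical_tol_deg=15.0):
    """colors: N x 3 RGB in [0,1]; intensities: N; normals: N x 3 (unit).
    Returns a label per point: 0 = unknown, 1 = candidate brick wall."""
    # Hue from RGB, used as a crude color criterion.
    hues = np.array([colorsys.rgb_to_hsv(*c)[0] for c in colors])

    # A vertical wall has a near-horizontal normal (small dot product with up).
    up = np.array([0.0, 0.0, 1.0])
    cos_up = np.abs(normals @ up)
    is_vertical = cos_up < np.sin(np.deg2rad(vertical_tol_deg))

    is_brick_color = (hues >= brick_hue[0]) & (hues <= brick_hue[1])
    is_bright = intensities >= min_intensity
    return np.where(is_brick_color & is_bright & is_vertical, 1, 0)
```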


Author(s):  
Katashi Nagao ◽  
Menglong Yang ◽  
Yusuke Miyakawa

A method is presented that extends the real world into virtual reality at the scale of entire buildings. This building-scale virtual reality (VR) method differs from augmented reality (AR) in that it uses automatically generated 3D point cloud maps of building interiors. It treats an entire indoor area as a pose tracking area by using data collected with an RGB-D camera mounted on a VR headset and using deep learning to build a model from the data. It modifies the VR space in accordance with its intended usage through segmentation and replacement of the 3D point clouds. This is difficult to do with AR but is essential if VR is to be used for real-world applications such as disaster simulation, including simulation of fires and flooding in buildings. 3D pose tracking in the building-scale VR is more accurate than with conventional RGB-D simultaneous localization and mapping (SLAM).
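The segment-and-replace idea could be sketched as follows: points of a chosen semantic class are removed from the map and their bounding box is kept as an anchor for a virtual asset. The labels and class ids are hypothetical; this is not the authors' pipeline.

```python
# Sketch: remove a labeled segment and keep its extent as a placement anchor.
import numpy as np

def replace_segment(points, labels, target_class):
    """points: N x 3 array; labels: N array of semantic class ids."""
    mask = labels == target_class
    kept = points[~mask]                      # scene without the object
    segment = points[mask]
    if segment.size == 0:
        return kept, None
    anchor = {                                # placement hint for the VR asset
        "min": segment.min(axis=0),
        "max": segment.max(axis=0),
        "center": segment.mean(axis=0),
    }
    return kept, anchor
```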


Author(s):  
E. Maset ◽  
A. Fusiello ◽  
F. Crosilla ◽  
R. Toldo ◽  
D. Zorzetto

This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial computer vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data on the position and attitude of the images or any camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information derived from TIR images. The process can be carried out entirely by the aforementioned software in a simple and efficient way.
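A minimal sketch of this kind of RGB/TIR fusion is shown below: the TIR-derived cloud is rigidly aligned to the RGB cloud with ICP, after which each RGB point inherits the temperature of its nearest TIR point. Open3D and a per-point temperature array are assumptions for illustration; this is not the software used in the paper.

```python
# Sketch: fuse a TIR point cloud into an RGB point cloud via ICP alignment
# and nearest-neighbour transfer of thermal values.
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def fuse_tir_into_rgb(rgb_xyz, tir_xyz, tir_temps, max_dist=0.05):
    rgb_pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(rgb_xyz))
    tir_pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tir_xyz))

    # Rigidly align the TIR cloud onto the RGB cloud with point-to-point ICP.
    reg = o3d.pipelines.registration.registration_icp(
        tir_pc, rgb_pc, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    tir_pc.transform(reg.transformation)
    tir_aligned = np.asarray(tir_pc.points)

    # Give each RGB point the temperature of its nearest aligned TIR point.
    dist, idx = cKDTree(tir_aligned).query(rgb_xyz)
    return np.where(dist < max_dist, tir_temps[idx], np.nan)
```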


Author(s):  
Qi-shuai Wang ◽  
Guo-ping Cai

This article proposes a pose estimation method for a fast-tumbling, noncooperative space target. The core idea is to extract the target's body-fixed coordinate system from the geometric characteristics of the target's point cloud and then use this coordinate system to realize pose initialization and pose tracking. To extract the body-fixed coordinate system, a point cloud of the target, obtained by a time-of-flight camera, is first divided into small planar point clouds; the geometric information of these planar point clouds is then used to extract the target's descriptive structures, such as the target surfaces and the solar panel supports; finally, the body-fixed coordinate system is determined from the geometric characteristics of these structures. The body-fixed coordinate system obtained in this way is used to determine the pose of consecutive point clouds of the target, that is, to realize pose initialization and pose tracking. Because accumulated bias often emerges during pose tracking, a pose graph optimization method is adopted to mitigate it. At the end of the article, the performance of the proposed method is evaluated by numerical simulations. The results show that when the distance between the target and the chaser is 10 m, the errors of the estimated attitude and position of the target are 0.025° and 0.026 m, respectively, which means that the proposed method can achieve high-precision pose estimation of a noncooperative target.
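As a simplified illustration of deriving a body-fixed frame from a target point cloud, the sketch below uses PCA of the cloud; the paper relies on richer extracted structures (target surfaces, solar panel supports), so this is only an assumption-laden stand-in, not the proposed method.

```python
# Hypothetical sketch: a body-fixed frame from the principal axes of the cloud.
import numpy as np

def body_fixed_frame(points):
    """points: N x 3 array in the sensor frame.
    Returns (R, origin): columns of R are the frame axes expressed in the
    sensor frame; origin is the centroid of the point cloud."""
    origin = points.mean(axis=0)
    centered = points - origin
    # Principal directions of the cloud (right singular vectors).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    R = Vt.T
    if np.linalg.det(R) < 0:       # enforce a right-handed frame
        R[:, 2] *= -1
    return R, origin

# Relative pose between consecutive frames k and k+1 (used for tracking):
# R_rel = R_k.T @ R_k1,  t_rel = R_k.T @ (origin_k1 - origin_k)
```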

