3D registration
Recently Published Documents

TOTAL DOCUMENTS: 534 (five years: 114)
H-INDEX: 26 (five years: 3)

Author(s):  
Abdurrahman Yilmaz ◽  
Hakan Temeltas

Abstract: Matching point clouds is an efficient approach to registration, which is significant for many research fields, including computer vision, machine learning, and robotics. The transformation between point clouds may be linear or non-linear; among the linear cases, determining an affine relation is the most challenging. Various methods have been proposed to address this problem, one of which is the affine variant of the iterative closest point (ICP) algorithm. However, traditional affine ICP variants are highly sensitive to noise, deformations, and outliers; to increase their robustness to such effects, the least-squares metric is replaced with the correntropy criterion. Correntropy-based robust affine ICPs in the literature use a point-to-point metric to estimate the transformation between point clouds. In contrast, this study employs line/surface normals, which capture point-to-curve or point-to-plane distances, together with the correntropy criterion for affine point cloud registration. First, the maximum correntropy criterion measure is built for line/surface normal conditions. Then, the closed-form solution that maximizes the similarity between point sets is derived for 2D registration and extended to 3D registration. Finally, the application procedure of the developed robust affine ICP method is given, and its registration performance is examined through extensive experiments on 2D and 3D point sets. The results highlight that our method aligns point clouds more robustly and precisely than state-of-the-art methods, while the registration time remains at reasonable levels.
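The weighting idea behind a correntropy-based, normal-aware affine ICP iteration can be sketched as follows. This is a minimal illustration, not the authors' closed-form solution: the function name `affine_pt2pl_correntropy_step`, the kernel width `sigma`, and the use of a single weighted least-squares solve per iteration are all assumptions made for the sketch.

```python
import numpy as np

def affine_pt2pl_correntropy_step(P, Q, N, A, t, sigma=1.0):
    """One reweighted step of affine point-to-plane registration.

    P, Q: (n, 3) matched source/target points; N: (n, 3) target normals.
    A, t: current 3x3 affine matrix and translation (used for weighting).
    Gaussian (correntropy) weights down-weight large residuals, which is
    what makes the fit robust to outliers and deformations.
    """
    # Signed point-to-plane residuals under the current estimate.
    r = np.einsum('ij,ij->i', N, (P @ A.T + t) - Q)
    w = np.exp(-r**2 / (2 * sigma**2))          # correntropy weights
    # The residual n_i^T (A p_i + t - q_i) is linear in the 12 affine
    # parameters, so one weighted least-squares solve updates (A, t).
    J = np.hstack([N[:, [0]] * P, N[:, [1]] * P, N[:, [2]] * P, N])  # (n, 12)
    b = np.einsum('ij,ij->i', N, Q)
    x, *_ = np.linalg.lstsq(w[:, None] * J, w * b, rcond=None)
    return x[:9].reshape(3, 3), x[9:]
```

In a full ICP, this step alternates with re-establishing closest-point correspondences until convergence.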


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Michael J. Wester ◽  
David J. Schodt ◽  
Hanieh Mazloom-Farsibaf ◽  
Mohamadreza Fazel ◽  
Sandeep Pallikkuth ◽  
...  

Abstract: We describe a robust, fiducial-free method of drift correction for use in single molecule localization-based super-resolution methods. The method combines periodic 3D registration of the sample using brightfield images with a fast post-processing algorithm that corrects residual registration errors and drift between registration events. The method is robust to low numbers of collected localizations, requires no specialized hardware, and provides stability and drift correction for an indefinite time period.
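The drift-estimation building block of brightfield-based registration can be illustrated with phase correlation. This is an assumed stand-in, not necessarily the estimator the paper uses, and `phase_correlation_shift` is a hypothetical helper limited to integer-pixel 2D shifts:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer-pixel (dy, dx) shift of `img` relative to
    `ref` via phase correlation: whiten the cross-power spectrum so the
    inverse FFT concentrates into a peak at the translation."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12              # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Subpixel refinement (e.g., upsampled correlation around the peak) would be needed for super-resolution-grade drift correction.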


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yingqi Kong

Panoramic video technology is introduced to collect multi-angle data of design objects: a 3D spatial model is drawn from the collected data, a first-order differential equation is solved for the model to obtain the spatial positioning extremes of the object scales, and the panoramic video images are aligned and fused according to the positioning extremes above and below the scale space. The panoramic video is then generated and displayed by computer processing, so that a viewer wearing a display device elsewhere can watch the scene with virtual information added to the panoramic video. This addresses the technical difficulties of panoramic video stitching systems, namely high algorithmic complexity, stitching cracks, and the "ghost" phenomenon in the stitched video, as well as the difficulty that 3D registration is easily affected by the environment and by time-consuming target tracking and detection algorithms. The simulation results show that the panoramic video stitching method performs well in real time and effectively suppresses stitching cracks and the "ghost" phenomenon, and that the augmented reality 3D registration method performs well for local enhancement of the panoramic video.
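The alignment step underlying panoramic stitching is typically a planar homography between overlapping frames. A minimal direct linear transform (DLT) sketch, not the paper's algorithm, might look like this (`homography_dlt` and `warp_point` are illustrative names, and real stitching pipelines add feature matching and robust estimation on top):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    from n >= 4 point correspondences via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply a homography to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Once H is known, one frame is warped into the other's coordinates and the overlap is blended, which is where seam cracks and ghosting are fought.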


2021 ◽  
Vol 13 (22) ◽  
pp. 4583
Author(s):  
Chang Li ◽  
Bingrui Li ◽  
Sisi Zhao

To reduce the 3D systematic error of the RGB-D camera and improve measurement accuracy, this paper is the first to propose a 3D compensation method for the systematic error of a Kinect V2 in a 3D calibration field. The method proceeds as follows. First, the coordinate system between the RGB-D camera and the 3D calibration field is transformed using 3D corresponding points. Second, the inliers are obtained using the Bayes SAmple Consensus (BaySAC) algorithm to eliminate gross errors (i.e., outliers). Third, the parameters of the 3D registration model are calculated by an iteration method with variable weights that further controls the error. Fourth, three systematic error compensation models are established and solved by stepwise regression. Finally, the optimal model is selected to calibrate the RGB-D camera. The experimental results show the following: (1) the BaySAC algorithm can effectively eliminate gross errors; (2) the iteration method with variable weights can better control slightly larger accidental errors; and (3) the 3D compensation method can compensate 91.19% and 61.58% of the systematic error of the RGB-D camera in the depth and 3D directions, respectively, in the 3D control field, which is superior to the 2D compensation method. The proposed method can control three types of errors (gross, accidental, and systematic) as well as model errors, and can effectively improve the accuracy of depth data.
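The gross-error elimination step can be sketched with a plain RANSAC loop over rigid hypotheses. BaySAC additionally maintains per-point inlier probabilities when sampling, which is omitted here; `ransac_inliers`, the threshold, and the iteration count are assumptions for the sketch:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_inliers(P, Q, thresh=0.05, iters=200, seed=0):
    """Hypothesize rigid transforms from minimal 3-point samples and keep
    the largest consensus set (RANSAC-style gross-error removal)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        mask = err < thresh
        if mask.sum() > best.sum():
            best = mask
    return best
```

The surviving inliers would then feed the variable-weight iteration that estimates the registration model.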


2021 ◽  
Author(s):  
Mohamad Harastani ◽  
Slavica Jonic

Cryogenic electron tomography (cryo-ET) allows studying biological macromolecular complexes in cells through three-dimensional (3D) data analysis. The complexes continuously change their shapes (conformations) to achieve biological functions. This shape heterogeneity in samples imaged in the cryo-electron microscope is a bottleneck for understanding biological mechanisms and developing drugs. A low signal-to-noise ratio and spatial anisotropy (missing-wedge artefacts) make cryo-ET data particularly challenging for resolving shape variability. Other shape variability analysis techniques simplify the problem by considering discrete rather than continuous conformational changes of complexes. Recently, HEMNMA-3D was introduced for continuous shape variability analysis in cryo-ET, based on elastic and rigid-body 3D registration between simulated shapes and cryo-ET data. The simulated motions are obtained by normal mode analysis of a high- or low-resolution 3D reference model of the complex under study. The rigid-body alignment is achieved via fast rotational matching with missing-wedge compensation. HEMNMA-3D provides visual insight into molecular dynamics by grouping and averaging subtomograms of similar shapes and by animating movies of registered motions. This article reviews the method and compares it with existing approaches on a simulated dataset of nucleosome shape variability.


2021 ◽  
Vol 7 (2) ◽  
pp. 25-28
Author(s):  
Julio C. Alvarez-Gomez ◽  
Gerardo Jimenez Palavicini ◽  
Hubert Roth ◽  
Jürgen Wahrburg

Abstract: A key component of intensity-based 2D/3D registration is the digitally reconstructed radiograph (DRR) module, which creates 2D projections from pre-operative 3D data, e.g., CT and MRI scans. On average, an intensity-based 2D/3D registration requires ten iterations and the rendering of twelve DRR images per iteration. In a typical DRR implementation, the rendering time is about two seconds and the registration runtime is four minutes. We present an implementation of the Siddon-Jacobs algorithm that uses a novel pixel-step approach to determine the pixel location on the rendering plane. In addition, we calculate the intensity of each pixel in the rendering plane using a parallel computing approach. The DRR rendering time is reduced to 10 ms on average, bringing the average registration runtime down to 4.8 seconds.
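The line-integral structure of DRR rendering can be sketched with a fixed-step ray march. This is a deliberate simplification: Siddon-Jacobs computes exact voxel intersection lengths, and the paper's parallelized pixel-step approach is not reproduced here; `drr_render` and its parameters are illustrative.

```python
import numpy as np

def drr_render(vol, ray_dir, n_samples=64):
    """Fixed-step DRR through volume `vol` along direction `ray_dir`.

    Each (x, y) detector pixel casts one ray from voxel (i, j, 0); the
    attenuation line integral is approximated as step length times the
    sum of nearest-voxel samples (Siddon-Jacobs would instead weight
    each voxel by the exact length the ray travels inside it).
    """
    nx, ny, nz = vol.shape
    d = np.asarray(ray_dir, float)
    d /= np.linalg.norm(d)
    img = np.zeros((nx, ny))
    ts = np.linspace(0.0, nz - 1, n_samples)      # march along the ray
    step = ts[1] - ts[0]
    for i in range(nx):
        for j in range(ny):
            pts = np.array([i, j, 0.0]) + ts[:, None] * d
            idx = np.rint(pts).astype(int)
            ok = ((idx >= 0) & (idx < [nx, ny, nz])).all(axis=1)
            ii, jj, kk = idx[ok].T
            img[i, j] = vol[ii, jj, kk].sum() * step
    return img
```

The per-pixel loop is embarrassingly parallel, which is what the paper's GPU-style parallelization exploits.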


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Sandro Hodel ◽  
Anna-Katharina Calek ◽  
Philipp Fürnstahl ◽  
Sandro F. Fucentese ◽  
Lazaros Vlachopoulos

Abstract
Purpose: To assess a novel method of three-dimensional (3D) joint line (JL) restoration based on the contralateral tibia and fibula.
Methods: 3D triangular surface models were generated from computed tomographic data of 96 paired lower legs (48 cadavers) without signs of pathology. Three segments of the tibia and fibula, excluding the tibial plateau, were defined: tibia, fibula, and tibial tuberosity (TT) with fibular tip. A surface registration algorithm was used to superimpose the mirrored contralateral model onto the original model. JL approximation and absolute mean errors for each segment registration were measured, and their relationship to gender, height, weight, and side-to-side differences in tibia and fibula length was analyzed. The fibular tip to JL distance was measured and analyzed.
Results: Mean JL approximation did not differ significantly among the three segments. Mean absolute JL error was highest for the tibia, 1.4 ± 1.4 mm (range: 0 to 6.0 mm), and decreased for the fibula, 0.8 ± 1.0 mm (range: 0 to 3.7 mm), and for the TT and fibular tip segment, 0.7 ± 0.6 mm (range: 0 to 2.4 mm) (p = 0.03). Mean absolute JL error of the TT and fibular tip segment was independent of gender, height, weight, and side-to-side differences in tibia and fibula length. Mean fibular tip to JL distance was 11.9 ± 3.4 mm (range: 3.4 to 22.1 mm), with a mean absolute side-to-side difference of 1.6 ± 1.1 mm (range: 0 to 5.3 mm).
Conclusion: 3D registration of the contralateral tibia and fibula reliably approximated the original JL. Registration of the TT and fibular tip, as robust anatomical landmarks, improved the accuracy of JL restoration independent of side-to-side differences in tibia and fibula length.
Level of evidence: IV
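The superimposition step, mirroring the contralateral model and rigidly fitting it to the original side, can be sketched as follows. The sketch assumes known point correspondences and a sagittal mirror plane at x = 0; the study's surface registration algorithm works on raw surfaces and is more general.

```python
import numpy as np

def mirror_and_align(contra, orig):
    """Mirror the contralateral point set across the x = 0 (sagittal)
    plane, then superimpose it onto the original side with a
    least-squares rigid fit (Kabsch). Arrays are assumed to be in
    point-to-point correspondence."""
    M = contra * np.array([-1.0, 1.0, 1.0])       # mirror x coordinates
    cM, cO = M.mean(0), orig.mean(0)
    H = (M - cM).T @ (orig - cO)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cO - R @ cM
    return M @ R.T + t                            # registered contralateral model
```

The residual distances between the registered model and the original surface are what the JL-error statistics above summarize.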


Spine ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Benyu Tang ◽  
Haoqun Yao ◽  
Shaobai Wang ◽  
Yanlong Zhong ◽  
Kai Cao ◽  
...  

2021 ◽  
Author(s):  
Takuya Adachi ◽  
Yuki Kato ◽  
Daii Kiyotomo ◽  
Katsushige Kawamukai ◽  
Yoichi Machida

Abstract
Background: Four-dimensional CT (4D-CT) is an advanced imaging method able to acquire kinematic and three-dimensional morphological information. Although its use for analyzing the six degrees of freedom of the knee is anticipated, its accuracy has not been reported. This study aimed to use the optical motion-capture method to verify the accuracy of 4D-CT analysis of knee joint movement.
Methods: One static CT and three 4D-CT examinations of a knee joint model were obtained. The knee joint model was passively moved in the CT gantry during the 4D-CT acquisitions. The 4D-CT and static CT examinations were matched to perform 3D-3D registration. An optical motion-capture system recorded the position and posture of the knee joint model simultaneously with the 4D-CT acquisitions. These recordings were used as the ground truth; the position-posture measurements from 4D-CT were compared to them, and the accuracy of the 4D-CT analysis of knee joint movement was quantitatively assessed.
Results: The position-posture measurements obtained from 4D-CT showed a similar tendency to those obtained from the motion-capture system. In the femorotibial joint, the difference in spatial position between the two measurements was 0.7 mm in the X direction, 0.9 mm in the Y direction, and 2.8 mm in the Z direction; the difference in angle was 1.9° in varus/valgus, 1.1° in internal/external rotation, and 1.8° in extension/flexion. In the patellofemoral joint, the difference was 0.9 mm in the X direction, 1.3 mm in the Y direction, and 1.2 mm in the Z direction; the difference in angle was 0.9° in varus/valgus, 1.1° in internal/external rotation, and 1.3° in extension/flexion.
Conclusions: 4D-CT with 3D-3D registration recorded the position and posture of knee joint movements with an error of less than 3 mm and less than 2° compared with the highly accurate motion-capture system. Knee joint movement analysis using 4D-CT with 3D-3D registration showed excellent accuracy for in vivo applications.
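The reported accuracy comparison amounts to differencing two rigid poses. A minimal sketch, using plain per-axis translations and a generic ZYX Euler decomposition rather than the anatomical varus/valgus axes a clinical coordinate system would define (`pose_difference` is an illustrative name):

```python
import numpy as np

def pose_difference(T_a, T_b):
    """Translation and rotation differences between two 4x4 rigid poses.

    Returns the per-axis translation gap and the ZYX Euler angles
    (degrees) of the relative rotation T_a^-1 T_b. Clinical angles
    (varus/valgus, etc.) would instead use an anatomical frame.
    """
    dt = T_b[:3, 3] - T_a[:3, 3]
    R = T_a[:3, :3].T @ T_b[:3, :3]                 # relative rotation
    ry = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))    # ZYX Euler extraction
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return dt, np.degrees([rx, ry, rz])
```

Applying this to each 4D-CT pose against the synchronized motion-capture pose yields exactly the millimetre and degree error figures quoted above.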

