Robotic motion compensation for bone movement, using ultrasound images

Author(s): P.M.B. Torres, P. J. S. Gonçalves, J.M.M. Martins

Purpose – The purpose of this paper is to present a robotic motion compensation system, using ultrasound images, to assist orthopedic surgery. The robotic system can compensate for femur movements during bone drilling procedures. Although it may have other applications, the system was designed for use in hip resurfacing (HR) prosthesis surgery, to implant the initial guide tool. The system requires no fiducial markers implanted in the patient, using only non-invasive ultrasound images. Design/methodology/approach – The femur location in the operating room is obtained by processing ultrasound (US) and computed tomography (CT) images, obtained in the intra-operative and pre-operative scenarios, respectively. During surgery, the bone position and orientation are obtained by registration of US and CT three-dimensional (3D) point clouds, using an optical measurement system and passive markers attached to the US probe and to the drill. The system description, image processing, calibration procedures and results of simulated and real experiments are presented and described to illustrate the system in operation. Findings – The robotic system can compensate for femur movements during bone drilling procedures. In most experiments, the update was validated, with errors within 2 mm/4°. Originality/value – The navigation system is based entirely on information extracted from images obtained pre-operatively from CT and intra-operatively from US. Contrary to current surgical systems, it does not use any type of implant in the bone to track the femur movements.
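
The core of the registration step above is a rigid alignment of US and CT point clouds. The paper's full pipeline is more involved, but the standard SVD (Kabsch) solution for paired 3D points can be sketched as follows; the toy data and tolerances are illustrative only:

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    for paired 3D point clouds (Kabsch / SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation about z and a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 1.0])
src = np.random.default_rng(0).random((50, 3))
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noise-free correspondences the transform is recovered exactly; in practice the clouds are unpaired, so a sketch like this would sit inside an ICP-style correspondence loop.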

Author(s): Bo Sun, Yadan Zeng, Houde Dai, Junhao Xiao, Jianwei Zhang

Purpose This paper aims to present the spherical entropy image (SEI), a novel global descriptor for the scan registration of three-dimensional (3D) point clouds. This paper also introduces a global feature-less scan registration strategy based on SEI. It is advantageous for 3D data processing in scenarios such as mobile robotics and reverse engineering. Design/methodology/approach The descriptor works by representing the scan as a spherical function named SEI, whose properties allow the six-dimensional transformation to be decomposed into a 3D rotation and a 3D translation. The 3D rotation is estimated by the generalized convolution theorem based on the spherical Fourier transform of SEI. Then, the translation is recovered by phase-only matched filtering. Findings The method requires no explicit features or planar segments in the input data. The experimental results illustrate the parameter independence, high reliability and efficiency of the novel algorithm in the registration of feature-less scans. Originality/value A novel global descriptor (SEI) for the scan registration of 3D point clouds is presented. It inherits both the descriptive power of signature-based methods and the robustness of histogram-based methods. A highly reliable and efficient registration method for scans based on SEI is also demonstrated.
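
The translation-recovery step via phase-only matched filtering can be illustrated in one dimension. This is a generic sketch of the technique (normalized cross-power spectrum, peak of its inverse FFT), not the paper's spherical implementation:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover the circular shift between two 1-D signals by
    phase-only matched filtering (normalized cross-power spectrum)."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12           # discard magnitude, keep phase only
    corr = np.fft.ifft(cross).real
    return int(np.argmax(corr))              # peak location = shift

sig = np.random.default_rng(1).random(128)
shifted = np.roll(sig, 17)
print(phase_correlation_shift(shifted, sig))  # → 17
```

Because only phase carries the offset, the correlation peak is sharp and largely insensitive to signal amplitude, which is why the technique suits feature-less data.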


Sensors, 2020, Vol 21 (1), pp. 201
Author(s): Michael Bekele Maru, Donghwan Lee, Kassahun Demissie Tola, Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, while also aiding in the visualization, of how a structure reacts to any disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
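
As a rough illustration of the bilateral filtering idea applied to measurement data, here is a minimal 1-D bilateral filter on a noisy deflection profile. The paper applies the technique to 3D point clouds; the kernel widths below are arbitrary choices:

```python
import numpy as np

def bilateral_filter(z, sigma_s=2.0, sigma_r=0.5):
    """Edge-preserving bilateral smoothing of a 1-D depth/deflection profile:
    weights combine spatial closeness and range (value) similarity."""
    idx = np.arange(len(z))
    out = np.empty(len(z))
    for i in range(len(z)):
        w = (np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)      # spatial kernel
             * np.exp(-0.5 * ((z - z[i]) / sigma_r) ** 2))  # range kernel
        out[i] = np.sum(w * z) / np.sum(w)
    return out

# noisy step profile: filtering damps the noise but preserves the step edge
rng = np.random.default_rng(2)
z = np.concatenate([np.zeros(20), np.ones(20) * 3.0]) + rng.normal(0, 0.05, 40)
f = bilateral_filter(z)
print(np.std(f[:20]) < np.std(z[:20]), abs(f[19] - f[20]) > 2.0)
```

The range kernel is what distinguishes this from plain Gaussian smoothing: points across the 3 mm step contribute almost nothing to each other, so the deflection edge survives.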


2019, Vol 93 (3), pp. 411-429
Author(s): Maria Immacolata Marzulli, Pasi Raumonen, Roberto Greco, Manuela Persia, Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. With the improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance of cylinder modelling from point clouds, compared to field data, had an RMSE of 1.9 cm and 0.094 m3 for d.b.h. and volume, respectively. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
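
Estimating d.b.h. from a stem cross-section can be sketched by fitting a circle to a horizontal slice of stem points. The simple Kåsa algebraic fit below is an illustrative stand-in and may differ from the authors' cylinder-modelling method:

```python
import numpy as np

def fit_circle(xy):
    """Kasa algebraic circle fit: solve the linear least-squares system
    x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b) and radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# synthetic stem slice at breast height: radius 0.15 m -> d.b.h. of 30 cm
ang = np.linspace(0, 2 * np.pi, 60, endpoint=False)
slice_xy = np.column_stack([1.0 + 0.15 * np.cos(ang),
                            2.0 + 0.15 * np.sin(ang)])
centre, r = fit_circle(slice_xy)
print(round(2 * r * 100, 1))  # d.b.h. in cm → 30.0
```

On real SfM slices the points cover only part of the circumference and carry noise, which is where a robust or cylinder-based fit earns its keep.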


Author(s): Bisheng Yang, Yuan Liu, Fuxun Liang, Zhen Dong

High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide a more comfortable driving experience. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for the extraction of road features for HADMs.
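
Road-feature extraction from MLS clouds typically begins by separating ground from above-ground points. The grid-based lowest-point filter below is an illustrative stand-in, not the authors' algorithm; the cell size and height threshold are arbitrary assumptions:

```python
import numpy as np

def extract_ground(points, cell=1.0, h_max=0.2):
    """Mark as ground every point within h_max of the lowest point
    in its XY grid cell (simple lowest-point ground filter)."""
    cells = {}
    for i, (x, y, z) in enumerate(points):
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        cells.setdefault(key, []).append(i)
    ground = np.zeros(len(points), dtype=bool)
    for idx in cells.values():
        z_min = points[idx, 2].min()
        ground[idx] = points[idx, 2] - z_min < h_max
    return ground

# flat road at z ~ 0 (one point per cell) plus a 2 m pole of points
rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.arange(0.5, 10, 1.0), np.arange(0.5, 10, 1.0))
road = np.column_stack([gx.ravel(), gy.ravel(), rng.normal(0, 0.02, gx.size)])
pole = np.column_stack([np.full(20, 5.0), np.full(20, 5.0),
                        np.linspace(0.5, 2.0, 20)])
pts = np.vstack([road, pole])
g = extract_ground(pts)
print(g[:100].all(), g[100:].any())  # road kept as ground, pole rejected
```

Above-ground remainders would then be clustered and classified into lamps, signs, guardrails and so on; that stage is where most of the method-specific work lies.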


Author(s): T. Guo, A. Capra, M. Troyer, A. Gruen, A. J. Brooks, ...

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of the temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
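
Comparing a generated point cloud against a reference cloud, as done for the cinder-block objects, can be sketched as a nearest-neighbour cloud-to-cloud RMSE. This brute-force version assumes small clouds and is a generic sketch, not the study's exact evaluation protocol:

```python
import numpy as np

def cloud_to_cloud_rmse(test, reference):
    """RMSE of nearest-neighbour distances from each test point to the
    reference cloud (O(n*m) brute force; fine for small clouds)."""
    d2 = ((test[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1).mean()))

# reference object points vs. the same points with ~2 mm simulated error
rng = np.random.default_rng(4)
ref = rng.uniform(0, 0.4, (300, 3))              # metres
noisy = ref + rng.normal(0, 0.002, (300, 3))     # simulated 2 mm-scale noise
print(round(cloud_to_cloud_rmse(noisy, ref) * 1000, 1), "mm")
```

For larger clouds a k-d tree (e.g., scipy.spatial.cKDTree) replaces the all-pairs distance matrix; the metric itself is unchanged.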


Author(s): Elizabeth Anne Shotton

Purpose The harbours of Ireland, under threat from deterioration and rising sea levels, are being documented using terrestrial LiDAR augmented by archival research to develop comprehensive histories and timeline models for public dissemination. While methods to extract legible three-dimensional models from scan data have been developed, and such operational formats for heritage management are imperative, the need for this format in interpretive visualisations should be reconsidered. The paper aims to discuss these issues. Design/methodology/approach Interpretive visualisations are forms of history making, where factual evidence is drawn together with conjecture to illustrate a plausible account of events; differentiation between fact and conjecture is the key to their intellectual transparency. A procedure for superimposing conjectural reconstructions, generated using Rhinoceros and CloudCompare, on the original scan data in Cyclone, visualised in a web-based viewer, is discussed. Findings Embellishing scan data with conjectural elements to visualise the evolution of harbours is advantageous for both research and public dissemination. The accuracy and density of the scans enable the interrogation of the harbour form and its irregular details, the latter in danger of generalisation if translated into a parametric or mesh format. Equally, the ethereal quality of the point cloud conveys a sense of tentativeness, consistent with a provisional hypothesis. Finally, coding conjectural elements allows users to intuit the difference between fact and historical narrative. Originality/value While various web-based point cloud viewers are used to disseminate research, the novelty here is the potential to develop didactic representations using point clouds that successfully capture a provisional thesis regarding each harbour’s evolution in an intellectually transparent manner to enable further inquiry.
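
The idea of coding conjectural elements so viewers can tell fact from hypothesis can be sketched as tagging point provenance with colour when merging clouds. This is a hypothetical illustration; the colours and XYZRGB layout are arbitrary choices, not the project's Cyclone workflow:

```python
import numpy as np

def tag_and_merge(scan_xyz, conjecture_xyz,
                  scan_rgb=(200, 200, 200), conj_rgb=(220, 120, 40)):
    """Merge measured and conjectural points into one XYZRGB array,
    colour-coding provenance so a viewer can separate fact from conjecture."""
    def with_colour(xyz, rgb):
        return np.hstack([xyz, np.tile(rgb, (len(xyz), 1))])
    return np.vstack([with_colour(scan_xyz, scan_rgb),
                      with_colour(conjecture_xyz, conj_rgb)])

scan = np.zeros((4, 3))   # stand-in for measured scan points
conj = np.ones((2, 3))    # stand-in for conjectural reconstruction points
merged = tag_and_merge(scan, conj)
print(merged.shape)  # → (6, 6)
```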


2020, Vol 10 (3), pp. 1140
Author(s): Jorge L. Martínez, Mariano Morán, Jesús Morales, Alfredo Robles, Manuel Sánchez

Autonomous navigation of ground vehicles in natural environments requires continuously identifying traversable terrain. This paper develops traversability classifiers for three-dimensional (3D) point clouds acquired by the mobile robot Andabata on non-slippery solid ground. To this end, different supervised learning techniques from the Python library Scikit-learn are employed. Training and validation are performed with synthetic 3D laser scans that were labelled point by point automatically with the robotic simulator Gazebo. Good prediction results are obtained for most of the developed classifiers, which have also been tested successfully on real 3D laser scans acquired by Andabata in motion.
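
A traversability classifier along these lines can be sketched with Scikit-learn. The per-point features (slope, roughness) and the labelling rule below are synthetic stand-ins for the paper's Gazebo-labelled scans, chosen only to make the example self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for labelled scan points: per-point slope and roughness
rng = np.random.default_rng(5)
n = 2000
slope = rng.uniform(0, 45, n)              # degrees
roughness = rng.uniform(0, 0.3, n)         # metres
X = np.column_stack([slope, roughness])

# "traversable" ground-truth rule plus 5% label noise
y = (slope < 20) & (roughness < 0.15)
flip = rng.random(n) < 0.05
y = np.where(flip, ~y, y).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))     # held-out accuracy
```

Swapping the estimator (SVC, MLPClassifier, etc.) changes one line, which is presumably why the paper could compare several Scikit-learn techniques under the same setup.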


2019, Vol 11 (10), pp. 1204
Author(s): Yue Pan, Yiqing Dong, Dalei Wang, Airong Chen, Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of unmanned aerial vehicles (UAVs) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition using a classification tree and bridge geometry is applied to identify different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments using two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. By using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on testing data are both better than 0.8, and a recognition accuracy better than 0.8 is achieved.
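
The precision and recall figures quoted for the segmentation can be computed from matched labels as follows. This is a generic evaluation sketch with made-up bridge-component labels, not the study's actual data:

```python
def precision_recall(predicted, actual, positive):
    """Per-class precision and recall from parallel label lists."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    return tp / (tp + fp), tp / (tp + fn)

# hypothetical per-segment labels for one bridge
pred = ["pier", "deck", "pier", "arch", "pier", "deck"]
true = ["pier", "deck", "arch", "arch", "pier", "pier"]
p, r = precision_recall(pred, true, "pier")
print(p, r)  # 2/3 precision, 2/3 recall for the "pier" class
```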


Sensors, 2020, Vol 20 (21), pp. 6187
Author(s): Milena F. Pinto, Aurelio G. Melo, Leonardo M. Honório, André L. M. Marcato, André G. S. Conceição, ...

When performing structural inspection, the generation of three-dimensional (3D) point clouds is a common resource. These are usually generated by photogrammetry or laser scanning techniques. However, a significant drawback for complete inspection is the presence of covering vegetation, which hides possible structural problems and makes it difficult to acquire proper object surfaces for a reliable diagnosis. Therefore, this research’s main contribution is the development of an effective vegetation removal methodology through the use of a deep learning structure that is capable of identifying and extracting covering vegetation in 3D point clouds. The proposed approach uses pre- and post-processing filtering stages that take advantage of colored point clouds, if they are available, or operate independently. The results showed high classification accuracy and good effectiveness when compared with similar methods in the literature. After this step, if color is available, a color filter is applied, enhancing the results obtained. In addition, the results are analyzed in light of real Structure from Motion (SfM) reconstruction data, which further validates the proposed method. This research also presents a colored point cloud library of bushes, built for this work, that can be used by other studies in the field.
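
The colour-filtering stage can be illustrated with a simple excess-green (ExG) threshold, a common vegetation cue in colour imagery. This is an assumed stand-in for whatever colour filter the authors use, with an arbitrary threshold:

```python
import numpy as np

def vegetation_mask(rgb, thresh=0.1):
    """Excess-green (ExG) colour filter: flags points whose normalized
    green channel dominates, a common cue for vegetation."""
    s = rgb.sum(axis=1, keepdims=True) + 1e-9
    r, g, b = (rgb / s).T                    # chromaticity coordinates
    exg = 2 * g - r - b
    return exg > thresh

points_rgb = np.array([[40, 180, 50],    # leafy green  -> vegetation
                       [120, 115, 110],  # grey concrete -> structure
                       [30, 200, 40]])   # leafy green  -> vegetation
print(vegetation_mask(points_rgb))  # → [ True False  True]
```

In a pipeline like the one described, a cheap colour cue such as this would refine the deep-learning classification rather than replace it.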


Author(s): I.-C. Lee, F. Tsai

A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications.

In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panoramas. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the models, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. The resulting 3D indoor model was used as an augmented reality model, replacing the guide map or floor plan commonly used in online tour guiding systems.

The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, the process is currently manual and labor-intensive. Research is being carried out to increase the degree of automation of these procedures.
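
The mapping between 3D directions and panorama pixels that underlies such close-range work can be sketched with an equirectangular projection. This is a generic camera model, not necessarily the calibration used in the research (it ignores the estimated lens distortion):

```python
import numpy as np

def project_equirectangular(xyz, width, height):
    """Map a 3-D direction (camera-centred coordinates, z forward, y down)
    to pixel coordinates in an equirectangular panorama of width x height."""
    x, y, z = xyz
    lon = np.arctan2(x, z)                      # -pi..pi, 0 = straight ahead
    lat = np.arcsin(y / np.linalg.norm(xyz))    # -pi/2..pi/2
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v

# a point straight ahead lands at the centre of the panorama
u, v = project_equirectangular(np.array([0.0, 0.0, 1.0]), 4096, 2048)
print(u, v)  # → 2048.0 1024.0
```

Inverting this mapping per pixel is also how individual panoramic frames are stitched into the 720° panorama in the first place.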

