ORIENTATION OF POINT CLOUDS FOR COMPLEX SURFACES IN MEDICAL SURGERY USING TRINOCULAR VISUAL ODOMETRY AND STEREO ORB-SLAM2

Author(s):  
O. Kahmen ◽  
N. Haase ◽  
T. Luhmann

Abstract. In photogrammetry, computer vision and robotics, visual odometry (VO) and SLAM algorithms are well-known methods for estimating camera poses from image sequences. When dealing with unknown scenes, reference data is often unavailable and the scene itself has to be reconstructed for further analysis. In this contribution, a trinocular visual odometry approach is implemented and compared to stereo VO and ORB-SLAM2 in an experimental setup imitating the scene of a knee replacement surgery. Two datasets are analysed: a test-field whose artificial texture provides excellent conditions for feature detection algorithms, and extracted images showing only the knee joint itself, i.e. the homogeneous but, in the real application, stable region. The camera trajectories of VO and ORB-SLAM2 are transformed into corresponding coordinate systems and subsequently evaluated. The tracking algorithms perform poorly when only the unsuitable surface of the knee is used, but perform well on the artificial texture of the test-field. In this setup and with our implementation, the third camera does not yield a significant advantage; possible reasons, e.g. reduced overlap, are discussed in this contribution. Nevertheless, the quality of the oriented point clouds obtained by trinocular dense matching is better than 1 mm for most of the analysed data. The experiment will be used to guide further developments, e.g. handling specular reflections, and to evaluate different SLAM/VO algorithms.
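For readers unfamiliar with the evaluation step mentioned above, the following is a minimal sketch (not the authors' code) of how an estimated VO/SLAM trajectory can be aligned to a reference trajectory with a similarity transform (Umeyama method) before computing the remaining RMSE; the NumPy implementation and variable names are illustrative.

```python
# Minimal sketch: align an estimated camera trajectory to a reference
# trajectory with a similarity transform (Umeyama method) and report
# the remaining RMSE, in the spirit of ATE-style VO/SLAM evaluation.
import numpy as np

def align_similarity(est, ref):
    """est, ref: (N, 3) arrays of corresponding camera positions."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    E, Rf = est - mu_e, ref - mu_r
    n = est.shape[0]
    cov = Rf.T @ E / n                       # cross-covariance (3x3)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                         # guard against reflections
    R = U @ S @ Vt
    var_e = (E ** 2).sum() / n               # variance of the estimate
    scale = np.trace(np.diag(D) @ S) / var_e
    t = mu_r - scale * R @ mu_e
    return scale, R, t

def ate_rmse(est, ref):
    """RMSE of the aligned trajectory against the reference."""
    s, R, t = align_similarity(est, ref)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - ref) ** 2, axis=1)))
```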

Author(s):  
M. Karpina ◽  
M. Jarząbek-Rychard ◽  
P. Tymków ◽  
A. Borkowski

Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and these in-situ measurements were correlated with the UAV data acquisition in order to investigate optimal flight conditions and parameter settings for image acquisition. The collected images are processed with a state-of-the-art tool to generate dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points: stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the manual reference measurements, which allows the automatic growth estimation process to be evaluated. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
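The sketch below illustrates, under assumed simplifications, how a single tree height could be derived from a dense 3D point cloud once a stem position is known; the function name, search radius and percentile thresholds are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumed workflow, not the authors' implementation):
# estimate one tree height from a dense 3D point cloud by taking the
# local ground level and tree top within a radius of a stem position
# that a cross-section analysis would provide.
import numpy as np

def tree_height(points, stem_xy, radius=1.0, ground_pct=1, top_pct=99):
    """points: (N, 3) array [x, y, z]; stem_xy: (2,) stem position."""
    d = np.linalg.norm(points[:, :2] - stem_xy, axis=1)
    local_z = points[d < radius, 2]              # z-values near the stem
    if local_z.size == 0:
        raise ValueError("no points within radius of the stem position")
    ground = np.percentile(local_z, ground_pct)  # robust ground estimate
    top = np.percentile(local_z, top_pct)        # robust tree-top estimate
    return top - ground
```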


2009 ◽  
Vol 7 (2) ◽  
pp. 121-135 ◽  
Author(s):  
Gail Elizabeth Parsons ◽  
Helen Godfrey ◽  
Rebecca Francine Jester

2021 ◽  
pp. 51-64
Author(s):  
Ahmed A. Elngar ◽  
...  

Feature detection, description and matching are essential components of various computer vision applications; thus, they have received considerable attention in recent decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions of what kinds of points in an image are potentially interesting (i.e., constitute a distinctive attribute). This chapter introduces the basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of perfect features and gives an overview of existing detection and description methods. Furthermore, it explains several approaches to feature matching. Finally, the chapter discusses the most widely used techniques for evaluating the performance of detection algorithms.
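As an illustration of the detect-describe-match pipeline covered by the chapter, the snippet below uses OpenCV's ORB detector with brute-force Hamming matching and a ratio test; the library choice and file names are illustrative, not the chapter's.

```python
# Minimal sketch of a detect-describe-match pipeline with ORB features.
import cv2

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)   # detect + describe
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```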


Author(s):  
Stylianos Asteriadis ◽  
Nikos Nikolaidis ◽  
Ioannis Pitas ◽  
...  

Facial feature localization is an important task in numerous face image analysis applications, including face recognition and verification, facial expression recognition, driver alertness estimation and head pose estimation. The area has therefore been a very active research field for many years, and a multitude of methods appear in the literature. Depending on the targeted application, the proposed methods have different characteristics and are designed to perform in different setups, so a method of general applicability still seems beyond the current state of the art. This chapter offers an up-to-date literature review of facial feature detection algorithms. A review of the image databases and performance metrics used to benchmark these algorithms is also provided.
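As a minimal example of one classic localization approach covered in such reviews, the snippet below uses OpenCV's pre-trained Haar cascades to find faces and candidate eye regions; it is a simple baseline for illustration, not a method proposed in the chapter.

```python
# Minimal sketch: face and eye-region localization with Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    roi = gray[y:y + h, x:x + w]          # restrict the eye search to the face
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    print(f"face at ({x},{y},{w},{h}) with {len(eyes)} eye candidates")
```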


Author(s):  
Hongmou Zhang ◽  
Ines Ernst ◽  
Sergey Zuev ◽  
Anko Borner ◽  
Martin Knoche ◽  
...  

2020 ◽  
Vol 32 ◽  
pp. 03051
Author(s):  
Ankita Pujare ◽  
Priyanka Sawant ◽  
Hema Sharma ◽  
Khushboo Pichhode

Edge detection is an important aspect of image processing and feature detection. Edges mark sharp changes in image properties and therefore carry much of the information needed for image analysis. In this work, various edge detection algorithms such as Sobel and Canny were first coded in MATLAB and then implemented on an FPGA (Nexys 4 DDR board), with the results displayed on a VGA screen. The FPGA implementation was written in Verilog and executed with the Vivado 18.2 software tool.
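For reference, the Sobel and Canny detectors mentioned above can be sketched in a few lines of Python/OpenCV (shown here instead of the MATLAB/Verilog implementations used in the work); file names and thresholds are illustrative.

```python
# Minimal sketch of the Sobel and Canny edge detectors.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Sobel: horizontal and vertical gradients combined into a magnitude image.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# Canny: smoothing, gradient, non-maximum suppression, hysteresis thresholding.
canny_edges = cv2.Canny(img, threshold1=100, threshold2=200)

cv2.imwrite("sobel_edges.png", sobel_edges)
cv2.imwrite("canny_edges.png", canny_edges)
```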


2020 ◽  
Vol 12 (10) ◽  
pp. 1680
Author(s):  
Chenguang Dai ◽  
Zhenchao Zhang ◽  
Dong Lin

Building extraction and change detection are two important tasks in the remote sensing domain. Change detection between airborne laser scanning data and photogrammetric data is vulnerable to dense matching errors, misalignment errors and data gaps. This paper proposes an unsupervised object-based method for integrated building extraction and change detection. Firstly, terrain, roofs and vegetation are extracted from the precise laser point cloud, based on "bottom-up" segmentation and clustering. Secondly, change detection is performed in an object-based bidirectional manner: heightened buildings and demolished buildings are detected by taking the laser scanning data as reference, while newly-built buildings are detected by taking the dense matching data as reference. Experiments on two urban datasets demonstrate the method's effectiveness and robustness. The object-based change detection achieves a recall rate of 92.31% and a precision rate of 88.89% for the Rotterdam dataset, and a recall rate of 85.71% and a precision rate of 100% for the Enschede dataset. The method not only extracts unchanged building footprints, but also assigns heightened or demolished labels to the changed buildings.
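A simplified sketch of the bidirectional height comparison idea follows, assuming both point clouds are rasterized to DSM grids and building footprints are given as masks; grid size, thresholds and function names are illustrative and not the paper's method.

```python
# Minimal sketch (assumed simplification): label a building object as
# heightened, demolished or unchanged from the median height difference
# between a laser-scanning DSM and a dense-matching DSM inside its footprint.
import numpy as np

def rasterize_max(points, cell=0.5, shape=(400, 400), origin=(0.0, 0.0)):
    """points: (N, 3) array; returns a max-z DSM grid (NaN where empty)."""
    dsm = np.full(shape, np.nan)
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    keep = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    for r, c, z in zip(rows[keep], cols[keep], points[keep, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    return dsm

def label_change(dsm_laser, dsm_dense, footprint_mask, tol=1.0):
    """Classify one building footprint from the DSM height difference."""
    diff = dsm_dense[footprint_mask] - dsm_laser[footprint_mask]
    diff = diff[~np.isnan(diff)]
    if diff.size == 0:
        return "no data"
    med = np.median(diff)
    if med > tol:
        return "heightened"
    if med < -tol:
        return "demolished"
    return "unchanged"
```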

