Automatic registration of laser reflectance and colour intensity images for 3D reconstruction

2002
Vol 39 (3-4)
pp. 157-168
Author(s):  
P. Dias ◽  
V. Sequeira ◽  
J.G.M. Gonçalves ◽  
F. Vaz


Author(s):
K. Bakuła ◽  
P. Kupidura ◽  
Ł. Jełowicki

Multispectral Airborne Laser Scanning (ALS) provides a new opportunity for airborne data collection: it delivers high-density topographic surveying and is also a useful tool for land cover mapping. Using a minimum of three intensity images from a multi-wavelength laser scanner, together with the 3D information contained in the digital surface model, has potential for land cover/use classification, and a discussion about the application of this type of data to land cover/use mapping has recently begun. In the test study, three laser reflectance intensity images (an orthogonalized point cloud) acquired in the green, near-infrared and short-wave infrared bands, together with a digital surface model, were used for land cover/use classification into six classes: water, sand and gravel, concrete and asphalt, low vegetation, trees and buildings. Different classification approaches were tested: spectral (based only on the laser reflectance intensity images), spectral with elevation data as additional input, and spectro-textural, in which morphological granulometry was used for texture analysis of both the spectral images and the digital surface model. The method of generating the intensity raster was also tested in the experiment. Reference data were created by visual interpretation of the ALS data and of traditional optical aerial and satellite images. The results show that multispectral ALS data, although unlike typical multispectral optical images, have major potential for land cover/use classification: an overall classification accuracy of over 90% was achieved. Fusing the multi-wavelength laser intensity images with elevation data, and additionally using textural information derived from granulometric analysis of the images, improved the classification accuracy significantly. The method of interpolation for the intensity raster was not very helpful, while using intensity rasters with both first and last returns slightly improved the results.
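
As a rough illustration of the spectro-textural approach described above, the sketch below stacks the three intensity rasters and the digital surface model with simple granulometric (morphological opening) texture bands and classifies pixels with a random forest. It is a minimal sketch under assumptions: the disc radii, the random-forest classifier and all variable names are illustrative and are not taken from the paper.

```python
# Minimal sketch of spectro-textural per-pixel classification of multispectral
# ALS rasters; classifier choice and structuring-element radii are assumptions.
import numpy as np
from skimage.morphology import opening, disk
from sklearn.ensemble import RandomForestClassifier

def granulometric_bands(band, radii=(1, 2, 4)):
    """Residuals of morphological openings with growing discs (a texture proxy)."""
    return [band - opening(band, disk(r)) for r in radii]

def build_feature_stack(green, nir, swir, dsm):
    """Stack spectral bands, the DSM and granulometric texture bands (H x W x n)."""
    layers = [green, nir, swir, dsm]
    for b in (green, nir, swir, dsm):
        layers.extend(granulometric_bands(b))
    return np.dstack(layers)

def classify(features, train_mask, train_labels):
    """train_mask: boolean H x W; train_labels: integer class ids (H x W)."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features[train_mask], train_labels[train_mask])
    h, w, n = features.shape
    return clf.predict(features.reshape(-1, n)).reshape(h, w)
```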


2013
pp. 957-978
Author(s):  
C.J. Prabhakar ◽  
P.U. Praveen Kumar ◽  
P.S. Hiremath

Over the last two decades, the computer vision research community has developed various techniques suitable for underwater applications using intensity images. This chapter explores 3D reconstruction of underwater natural scenes and objects based on stereo vision, which is helpful in mine detection, inspection of shipwrecks, and detection of telecommunication cables and pipelines. The general steps involved in 3D reconstruction using stereo vision are provided, and a brief summary of papers on 3D reconstruction of the underwater environment is presented. 3D reconstruction of underwater natural scenes and objects is a challenging problem because of how light propagates underwater: in contrast to propagation in air, the light rays are attenuated and scattered, which strongly degrades image quality. We propose a preprocessing technique to enhance degraded underwater images. At the end of the chapter, we present the proposed stereo-vision-based 3D reconstruction technique to reconstruct the 3D surface of underwater objects. Ultimately, this chapter intends to give an overview of 3D reconstruction using stereo vision in order to help the reader understand stereo vision and its benefits for underwater applications.
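
The general stereo pipeline referred to above (preprocessing, rectification, disparity estimation, reprojection to 3D) can be sketched with OpenCV as below. This is a minimal sketch under assumptions: CLAHE contrast enhancement stands in for the chapter's underwater preprocessing technique, the rectification maps map_l, map_r and the reprojection matrix Q are presumed to come from a prior stereo calibration, and the SGBM parameters are illustrative.

```python
# Minimal stereo 3D reconstruction sketch (OpenCV); CLAHE is only a stand-in
# for the underwater image enhancement step proposed in the chapter.
import cv2
import numpy as np

def enhance(img_gray):
    """Simple contrast enhancement as a proxy for underwater preprocessing."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img_gray)

def reconstruct(left_gray, right_gray, map_l, map_r, Q):
    # 1. Enhance the attenuated, scattered underwater images.
    left, right = enhance(left_gray), enhance(right_gray)
    # 2. Rectify with precomputed stereo rectification maps.
    left = cv2.remap(left, map_l[0], map_l[1], cv2.INTER_LINEAR)
    right = cv2.remap(right, map_r[0], map_r[1], cv2.INTER_LINEAR)
    # 3. Dense disparity by semi-global block matching.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
    # 4. Reproject the disparity map to a 3D point cloud.
    return cv2.reprojectImageTo3D(disparity, Q)
```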


1998
Author(s):  
Gregory Randall ◽  
Alicia Fernandez ◽  
Omar Trujillo-Cenoz ◽  
Gustavo Apelbaum ◽  
Marcelo Bertalmio ◽  
...  

Author(s):  
Yubin Liang ◽  
Yan Qiu ◽  
Tiejun Cui

Co-registration of a terrestrial laser scanner and a digital camera has been an important topic of research, since reconstruction of visually appealing and measurable models of the scanned objects can be achieved by using both point clouds and digital images. This paper presents an approach for co-registration of a terrestrial laser scanner and a digital camera. A perspective intensity image of the point cloud is first generated using the collinearity equation. Corner points are then extracted from the generated perspective intensity image and from the camera image. The fundamental matrix F is estimated from several interactively selected tie points and used, together with RANSAC, to obtain more matches. The 3D coordinates of all the matched tie points are obtained directly or estimated using the least squares method. The robustness and effectiveness of the presented methodology are demonstrated by the experimental results. The methods presented in this work may also be used for automatic registration of terrestrial laser scanning point clouds.
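
The matching step described above can be sketched as follows: seed the fundamental matrix from the interactively selected tie points, gather corner/feature matches between the perspective intensity image and the camera image, and keep the matches that survive a RANSAC re-estimation of F. This is a minimal sketch under assumptions: ORB features and the pixel thresholds are illustrative stand-ins, not the detector or parameters used in the paper.

```python
# Minimal sketch of tie-point seeding plus RANSAC-based match densification.
import cv2
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance (pixels) of pts2 to the epipolar lines F @ pts1."""
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = p1 @ F.T  # epipolar lines in the second image
    return np.abs(np.sum(lines * p2, axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

def match_intensity_to_camera(intensity_img, camera_img, seed_pts_i, seed_pts_c):
    # Seed F from the interactively selected tie points (>= 8 pairs, float32 Nx2).
    F_seed, _ = cv2.findFundamentalMat(seed_pts_i, seed_pts_c, cv2.FM_8POINT)

    # Corner/feature extraction in both images (ORB as an illustrative detector).
    orb = cv2.ORB_create(4000)
    kp_i, des_i = orb.detectAndCompute(intensity_img, None)
    kp_c, des_c = orb.detectAndCompute(camera_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_i, des_c)
    pts_i = np.float32([kp_i[m.queryIdx].pt for m in matches])
    pts_c = np.float32([kp_c[m.trainIdx].pt for m in matches])

    # Keep candidates roughly consistent with the seed epipolar geometry,
    # then re-estimate F with RANSAC and return the inlier matches.
    ok = epipolar_distance(F_seed, pts_i, pts_c) < 3.0
    pts_i, pts_c = pts_i[ok], pts_c[ok]
    F, inliers = cv2.findFundamentalMat(pts_i, pts_c, cv2.FM_RANSAC, 1.0, 0.999)
    mask = inliers.ravel().astype(bool)
    return F, pts_i[mask], pts_c[mask]
```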


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term future is full of suggestions entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential for order-of-magnitude increases in computing power and should therefore be considered for their possible application in the most computing-intensive tasks. We have studied both shared-memory-based computer architectures, like the BBN Butterfly, and local-memory-based architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on, the 3D reconstruction process may be run automatically.
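
The "matrix describing the geometrical relationships between the tilted and untilted fields" mentioned above can be fitted by linear least squares from the few interactively marked pairs of corresponding features. The sketch below fits a 2D affine transform; the affine parameterization is an illustrative assumption and may differ from the authors' formulation.

```python
# Minimal sketch: least-squares fit of a 2D affine transform relating the
# untilted and tilted fields from a few marked corresponding points.
import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Return the 2x3 affine A with tilted ~= A @ [x, y, 1]^T (least squares)."""
    n = len(untilted_pts)
    X = np.hstack([untilted_pts, np.ones((n, 1))])   # n x 3 design matrix
    A, *_ = np.linalg.lstsq(X, tilted_pts, rcond=None)
    return A.T                                       # 2 x 3

def map_untilted_to_tilted(A, pts):
    """Predict tilted-field coordinates of particles picked in the untilted field."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A.T
```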


Author(s):  
Adriana Verschoor ◽  
Ronald Milligan ◽  
Suman Srivastava ◽  
Joachim Frank

We have studied the eukaryotic ribosome from two vertebrate species (rabbit reticulocyte and chick embryo ribosomes) in several different electron microscopic preparations (Fig. 1a-d), and we have applied image processing methods to two of the types of images. Reticulocyte ribosomes were examined in both negative stain (0.5% uranyl acetate, in a double-carbon preparation) and frozen hydrated preparation as single-particle specimens. In addition, chick embryo ribosomes in tetrameric and crystalline assemblies in frozen hydrated preparation have been examined. 2D averaging, multivariate statistical analysis, and classification methods have been applied to the negatively stained single-particle micrographs and the frozen hydrated tetramer micrographs to obtain statistically well-defined projection images of the ribosome (Fig. 2a,c). 3D reconstruction methods, the random conical reconstruction scheme and weighted back projection, were applied to the negative-stain data, and several closely related reconstructions were obtained. The principal 3D reconstruction (Fig. 2b), which has a resolution of 3.7 nm according to the differential phase residual criterion, can be compared to the images of individual ribosomes in a 2D tetramer average (Fig. 2c) at a similar resolution, and a good agreement of the general morphology and of many of the characteristic features is seen. Both data sets show the ribosome in roughly the same 'view' or orientation with respect to the adsorptive surface in the electron microscopic preparation, as judged by the agreement in both the projected form and the distribution of characteristic density features. The negative-stain reconstruction reveals details of the ribosome morphology; the 2D frozen-hydrated average provides projection information on the native mass-density distribution within the structure. The 40S subunit appears to have an elongate core of higher density, while the 60S subunit shows a more complex pattern of dense features, comprising a rather globular core, locally extending close to the particle surface.
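
For reference, the differential phase residual used above as the resolution criterion is commonly computed as an amplitude-weighted RMS phase difference between two aligned images (or half-set reconstructions) over rings in Fourier space, with resolution reported where the residual exceeds about 45°. The sketch below follows that conventional definition; the ring count and the 45° cutoff are assumptions of the common convention, not details taken from this abstract.

```python
# Minimal sketch of the differential phase residual (conventional definition).
import numpy as np

def differential_phase_residual(img1, img2, n_rings=32):
    """Amplitude-weighted RMS phase difference (degrees) per Fourier ring."""
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h // 2, xx - w // 2)
    edges = np.linspace(0.0, radius.max(), n_rings + 1)

    dphi = np.angle(F1 * np.conj(F2))     # phase difference per Fourier pixel
    weight = np.abs(F1) + np.abs(F2)      # amplitude weighting

    dpr = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (radius >= lo) & (radius < hi)
        den = weight[ring].sum()
        num = (weight[ring] * dphi[ring] ** 2).sum()
        dpr.append(np.degrees(np.sqrt(num / den)) if den > 0 else np.nan)
    # Resolution is often quoted at the first ring where the residual passes ~45 deg.
    return np.array(dpr)
```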

