object point
Recently Published Documents


TOTAL DOCUMENTS: 41 (FIVE YEARS: 13)

H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Vol 906 (1) ◽  
pp. 012051
Author(s):  
Danail Nedyalkov

Abstract The methodological approach of using a scanned physical object to build a building information model (BIM) is based on laser scanning technology and aims to create technical documentation of existing buildings, most often those with the status of historically significant sites. BIM technology can also be used as an integral part of creating documentation for new sites, as well as for their administrative and managerial control during construction and operation. The essence of the experiment is to model space as a parametric three-dimensional model (BIM) in the ArchiCAD program, using a laser-scanned physical object (point cloud). The cloud obtained from the laser scan contains detailed spatial information, which serves as the basis for creating the building information model (BIM) and for control during the development of the model. The laser-scanned point cloud contains the same geometric information as the BIM, but the BIM requires a much smaller amount of data (file sizes: point cloud 30.41 MB, BIM 9.83 MB). The advantages of BIM over the point cloud are the ability to edit the model, to study its energy behaviour, to create construction and technical documentation of the scanned object, and to fill in technical data and parameters based on the map and cadastral basis. The density of the point cloud (a parameter of the laser scanner used) determines whether the physical data of the real object are captured with sufficient detail and accuracy, and this is the basis for a full and comprehensive BIM. Given sufficient detail in the BIM of the physical object, the model can be combined with other data and actually used in the real environment.
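The abstract notes that the point cloud carries far more raw data than the BIM derived from it. A common first step in such a workflow is thinning the dense scan to a working density before modelling. The sketch below is a minimal voxel downsampler on a synthetic cloud; the voxel size and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point per cubic voxel (metres) to thin a dense scan."""
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index of each point
    _, idx = np.unique(keys, axis=0, return_index=True)  # first point in each voxel
    return points[np.sort(idx)]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(100_000, 3))  # synthetic scan of a 10 m cube
thinned = voxel_downsample(cloud, voxel=0.5)
print(cloud.shape[0], "->", thinned.shape[0])
```

With a 0.5 m voxel over a 10 m cube, at most 20³ = 8000 points survive, which mirrors the data reduction the abstract reports when moving from raw scan to model.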


Author(s):  
E. Nocerino ◽  
F. Menna ◽  
A. Gruen

Abstract. Uncontrolled refraction of optical rays in underwater photogrammetry is known to reduce its accuracy potential. Several strategies have been proposed aiming to restore the accuracy to levels comparable with photogrammetry applied in air. These methods are mainly based on rigorous modelling of the refraction phenomenon or on empirical iterative refraction corrections. The authors of this contribution have previously proposed two strategies for mitigating systematic patterns of image residuals in the image plane: (i) empirical weighting of image observations as a function of their radial position; (ii) iterative look-up table corrections computed on a square grid. Here, a novel approach is developed. It explicitly takes into account the object point-to-camera distance-dependent error introduced by refraction in multimedia photogrammetry. A polynomial correction function is iteratively computed to correct the image residuals, clustered in radial slices in the image plane, as a function of the point-to-camera distance. The effectiveness of the proposed method is demonstrated by simulations that allow us to: (i) separate the geometric error under investigation from other effects that are not easily modellable and (ii) have reliable reference data against which to assess the accuracy of the result.
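The core idea above, fitting a polynomial correction to residuals grouped in radial slices and expressed as a function of point-to-camera distance, can be sketched as follows. The synthetic residual model, slice count, and polynomial degree are all assumptions for illustration; the paper's actual functional form is not given in the abstract.

```python
import numpy as np

# Synthetic data: residuals grow with point-to-camera distance d, with a
# coefficient that depends on radial image position r (refraction-like trend).
rng = np.random.default_rng(1)
n = 2000
r = rng.uniform(0.0, 1.0, n)            # normalised radial position in the image
d = rng.uniform(2.0, 10.0, n)           # point-to-camera distance [m]
residual = (0.2 + 0.5 * r) * d**2 * 1e-3 + rng.normal(0.0, 1e-3, n)

n_slices, degree = 4, 2
corrected = residual.copy()
edges = np.linspace(0.0, 1.0, n_slices + 1)
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (r >= lo) & (r < hi) if hi < 1.0 else (r >= lo)
    coeffs = np.polyfit(d[mask], residual[mask], degree)  # residual vs distance
    corrected[mask] -= np.polyval(coeffs, d[mask])        # subtract fitted trend

print(f"std before: {np.std(residual):.4f}, after: {np.std(corrected):.4f}")
```

Per-slice fitting removes the distance-dependent trend while leaving only the within-slice variation and noise, which is the behaviour the abstract's method exploits.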


2021 ◽  
Vol 502 (4) ◽  
pp. 6117-6139
Author(s):  
V Christiaens ◽  
M-G Ubeira-Gabellini ◽  
H Cánovas ◽  
P Delorme ◽  
B Pairet ◽  
...  

ABSTRACT Understanding how giant planets form requires observational input from directly imaged protoplanets. We used VLT/NACO and VLT/SPHERE to search for companions in the transition disc of 2MASS J19005804-3645048 (hereafter CrA-9), an accreting M0.75 dwarf with an estimated age of 1–2 Myr. We found a faint point source at ∼0.7-arcsec separation from CrA-9 (∼108 au projected separation). Our 3-epoch astrometry rejects a fixed background star at 5σ significance. The near-IR absolute magnitudes of the object point towards a planetary-mass companion. However, our analysis of the 1.0–3.8$\,\mu$m spectrum extracted for the companion suggests it is a young M5.5 dwarf, based on both the 1.13-μm Na index and comparison with templates of the Montreal Spectral Library. The observed spectrum is best reproduced with high effective temperature ($3057^{+119}_{-36}$K) BT-DUSTY and BT-SETTL models, but the corresponding photometric radius required to match the measured flux is only $0.60^{+0.01}_{-0.04}$ Jovian radius. We discuss possible explanations to reconcile our measurements, including an M-dwarf companion obscured by an edge-on circum-secondary disc, or the shock-heated part of the photosphere of an accreting protoplanet. Follow-up observations covering a larger wavelength range and/or at finer spectral resolution are required to discriminate between these two scenarios.
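The quoted separations are linked by the small-angle relation: projected separation in au equals angular separation in arcseconds times distance in parsecs. As a check, the values in the abstract imply a distance to CrA-9 of roughly 154 pc (the distance itself is not stated in the abstract; this is just the arithmetic they imply).

```python
# Small-angle relation: sep[au] = sep[arcsec] * distance[pc]
sep_arcsec = 0.7    # angular separation from the abstract
proj_au = 108.0     # projected separation from the abstract
distance_pc = proj_au / sep_arcsec
print(f"Implied distance to CrA-9 ≈ {distance_pc:.0f} pc")
```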


Author(s):  
T. Peters ◽  
C. Brenner ◽  
M. Song

Abstract. The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+ using our approach.
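The outlier-correction step described above, collecting the per-view predictions of one 3D object point and overruling inconsistent labels, can be sketched with a simple majority vote. The abstract does not specify the consensus rule, so the threshold and helper below are assumptions for illustration.

```python
from collections import Counter

def consensus_label(view_labels, min_agreement=0.5):
    """Majority vote over the per-view predictions of one 3D object point.

    Returns (label, is_reliable): the winning label and whether its vote
    share reaches the (assumed) agreement threshold.
    """
    counts = Counter(view_labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(view_labels) >= min_agreement

# One 3D point projected into five images; one view mislabels it.
labels = ["road", "road", "car", "road", "road"]
label, reliable = consensus_label(labels)
print(label, reliable)   # -> road True
```

Points whose consensus is reliable can then replace the outlier pixel labels in the corresponding images before retraining the DCNN.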


Author(s):  
H. Meißner ◽  
K. Stebner ◽  
T. Kraft ◽  
M. Geßner ◽  
R. Berger

Abstract. Many drones are used to obtain high-resolution imagery, and deriving 3D object points from the images of these systems is an established technique. While rotorcraft drones are often used to capture fine, detailed structures and objects in small-scale areas, fixed-wing versions are commonly used to cover larger areas, even far beyond the line of sight. These drones usually fly at much higher velocities during data acquisition, and the corresponding sensor requirements are therefore much higher. This paper presents the evaluation of a prototype camera system for fast-flying fixed-wing drones. The focus of the investigation is to find out whether higher operating velocities, up to 100 km/h during image acquisition, have any influence on the photogrammetric survey and on image quality itself. It will be shown that images obtained by the presented camera system and carrier do not suffer from motion blur and that the overall survey accuracy is approximately 1/4 of the ground sample distance. Survey accuracy analysis is carried out using standard photogrammetric procedures, with signalled control points and checkpoints, verifying their conformity in image space and object space. Fundamentals of image quality are introduced, as well as an approach to determine and evaluate the motion smear of remote sensing sensors (in theory and in a practical use case). Furthermore, it will be shown that the designed camera system mounted on a fixed-wing carrier does not suffer from motion smear.
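The first-order theory behind the motion-smear evaluation is simple: forward smear in pixels is the distance flown during the exposure divided by the ground sample distance (GSD). The exposure time and GSD below are hypothetical values, not figures from the paper; only the 100 km/h velocity comes from the abstract.

```python
def motion_smear_px(speed_kmh, exposure_s, gsd_m):
    """Forward motion smear in pixels: ground distance covered during
    the exposure divided by the ground sample distance."""
    speed_ms = speed_kmh / 3.6
    return speed_ms * exposure_s / gsd_m

# Hypothetical setup: 100 km/h, 1/2000 s exposure, 3 cm GSD.
print(round(motion_smear_px(100, 1 / 2000, 0.03), 2))   # -> 0.46
```

At these assumed settings the smear stays below half a pixel, illustrating why a sufficiently short exposure lets even a 100 km/h platform avoid visible motion blur.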


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 83782-83790
Author(s):  
Bin Li ◽  
Yonghan Zhang ◽  
Bo Zhao ◽  
Hongyao Shao
