photometric error
Recently Published Documents


TOTAL DOCUMENTS: 24 (five years: 7)
H-INDEX: 6 (five years: 0)

2021 · Vol 34 · pp. 100-105
Author(s): V. Andruk, L. Pakuliak, O. Yizhakievych, S. Shatokhina

The processing of about 500 digitized plates has started at the MAO NAS of Ukraine. The plates were taken with the Tautenburg 2 m Schmidt telescope in 1963-1989. Their linear dimensions are 24 × 24 cm, with a working field of 3.3 × 3.3 degrees and a scale of 51.4″/mm. The astronegatives were digitized on the Tautenburg Plate Scanner in five strips with linear dimensions of 5400 × 23800 px. The software developed at the MAO NAS of Ukraine for processing these scans takes into account the horizontal overlap and the vertical offset of the strips. The photometric range of fixed objects spans 12 magnitudes, roughly V = 7-19 mag, owing to the separation of objects into faint and bright subsets by the diameters of their images. Positions of stars and other fixed objects are obtained in the Gaia DR2 reference system; magnitudes are defined in the V band of the Johnson system. The positional accuracy derived from the processing of 180 plates is σ(RA, Dec) = 0.10″ for both coordinates, and the photometric error over the whole magnitude range is σV = 0.14 mag. The agreement of the resulting magnitudes with photoelectric standards is 0.19 mag. In parallel with the image processing and plate-data reduction, a search for images of minor planets was carried out: nine positions and magnitudes of 4 asteroids registered on plates obtained in 1963-1965 were determined and used for further analysis.
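The quoted numbers can be cross-checked: if each 23800 px strip spans the full 24 cm plate side (an assumption; the abstract does not state the scanner resolution directly), the pixel scale and the positional accuracy expressed in pixels follow immediately. A minimal sketch:

```python
# Rough consistency check of the Tautenburg scan geometry quoted above.
# Assumes each 23800 px strip spans the full 240 mm plate side (an assumption;
# the scanner resolution is not given explicitly in the abstract).
PLATE_SCALE_ARCSEC_PER_MM = 51.4
STRIP_LENGTH_PX = 23800
PLATE_SIDE_MM = 240.0

px_per_mm = STRIP_LENGTH_PX / PLATE_SIDE_MM             # ~99.2 px/mm
arcsec_per_px = PLATE_SCALE_ARCSEC_PER_MM / px_per_mm   # ~0.52 arcsec/px

sigma_pos_arcsec = 0.10   # positional accuracy reported from 180 plates
print(f"pixel scale     : {arcsec_per_px:.3f} arcsec/px")
print(f"sigma (RA, Dec) : {sigma_pos_arcsec / arcsec_per_px:.2f} px")  # ~0.19 px
```

So the reported 0.10″ accuracy corresponds to roughly a fifth of a scan pixel, a plausible centroiding precision for stellar images.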


Author(s):  
Géza Csörnyei ◽  
László Dobos ◽  
István Csabai

Abstract We investigate the effect of strong emission line galaxies on the performance of empirical photometric redshift estimation methods. In order to artificially control the contribution of photometric error and emission lines to total flux, we develop a PCA-based stochastic mock catalogue generation technique that allows for generating infinite signal-to-noise ratio model spectra with realistic emission lines on top of theoretical stellar continua. Instead of running the computationally expensive stellar population synthesis and nebular emission codes, our algorithm generates realistic spectra with a statistical approach, and – as an alternative to attempting to constrain the priors on input model parameters – works by matching output observational parameters. Hence, it can be used to match the luminosity, colour, emission line and photometric error distribution of any photometric sample with sufficient flux-calibrated spectroscopic follow-up. We test three simple empirical photometric estimation methods and compare the results with and without photometric noise and strong emission lines. While photometric noise clearly dominates the uncertainty of photometric redshift estimates, the key findings are that emission lines play a significant role in resolving colour space degeneracies and good spectroscopic coverage of the entire colour space is necessary to achieve good results with empirical photo-z methods. Template fitting methods, on the other hand, must use a template set with sufficient variation in emission line strengths and ratios, or even better, first estimate the redshift empirically and fit the colours with templates at the best-fit redshift to calculate the K-correction and various physical parameters.
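As a rough illustration of the idea (not the authors' code), a PCA-based stochastic mock generator can be sketched in a few lines: fit PCA to a set of continua, resample the coefficients, then add Gaussian emission lines and noise on top. All array sizes, line centres, and amplitudes below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a flux-calibrated spectroscopic sample: n_spec spectra on a
# common wavelength grid (hypothetical data; the paper uses stellar continua).
n_spec, n_wave = 500, 200
wave = np.linspace(4000.0, 7000.0, n_wave)  # Angstrom
continua = 1.0 + 0.3 * rng.standard_normal((n_spec, 5)) @ rng.standard_normal((5, n_wave))

# PCA of the continua: keep the leading components.
mean = continua.mean(axis=0)
_, _, vt = np.linalg.svd(continua - mean, full_matrices=False)
components = vt[:5]
coeffs = (continua - mean) @ components.T

def mock_spectrum(line_centers=(4861.0, 6563.0), line_sigma=3.0):
    """Stochastic mock: resample PCA coefficients from the sample, perturb them,
    and add Gaussian emission lines (Hbeta, Halpha) on top of the continuum."""
    c = coeffs[rng.integers(n_spec)] + 0.1 * coeffs.std(axis=0) * rng.standard_normal(5)
    spec = mean + c @ components
    for mu in line_centers:
        amp = rng.uniform(0.0, 2.0)  # random line strength
        spec += amp * np.exp(-0.5 * ((wave - mu) / line_sigma) ** 2)
    return spec

# Photometric-style noise is added last, so its contribution can be controlled
# independently of the emission-line contribution, as the abstract describes.
noisy = mock_spectrum() + 0.05 * rng.standard_normal(n_wave)
```

The key design point mirrored here is that noise and line strength are injected as separate, independently tunable stages, which is what lets the paper isolate their effects on photo-z performance.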


Sensors · 2021 · Vol 21 (3) · pp. 923
Author(s): Ente Guo, Zhifeng Chen, Yanlin Zhou, Dapeng Oliver Wu

Estimating image depth and agent egomotion is important for autonomous vehicles and robots to understand the surrounding environment and avoid collisions. Most existing unsupervised methods estimate depth and camera egomotion by minimizing the photometric error between adjacent frames. However, the photometric-consistency assumption often fails in real scenes, for example under brightness changes, moving objects, and occlusion. To reduce the influence of brightness changes, we propose a feature pyramid matching loss (FPML), which captures a trainable feature error between the current frame and adjacent frames and is therefore more robust than the photometric error. In addition, we propose an occlusion-aware mask (OAM) network, which indicates occlusion from changes of the masks and thereby improves the estimation accuracy of depth and camera pose. The experimental results verify that the proposed unsupervised approach is highly competitive with state-of-the-art methods, both qualitatively and quantitatively. Specifically, our method reduces the absolute relative error (Abs Rel) by 0.017-0.088.
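To make the contrast concrete, here is a minimal sketch of a multi-scale matching loss next to the plain photometric error. In FPML the pyramid levels are trainable CNN feature maps; average-pooled intensities stand in for them below, which is purely an illustrative assumption:

```python
import numpy as np

def photometric_error(img_a, img_b):
    """Plain per-pixel photometric error between two (warped) frames."""
    return np.abs(img_a - img_b).mean()

def avg_pool2(img):
    """2x2 average pooling, a stand-in for one pyramid level."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2])

def pyramid_matching_loss(img_a, img_b, levels=3, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of matching errors over a pyramid. In FPML the levels are
    trainable feature maps from an encoder; pooled intensities are used here
    only to show the multi-scale structure (an assumption, not the paper's net)."""
    loss = 0.0
    for lvl in range(levels):
        loss += weights[lvl] * photometric_error(img_a, img_b)
        img_a, img_b = avg_pool2(img_a), avg_pool2(img_b)
    return loss

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
# A global brightness change corrupts the raw photometric error; a learned
# feature encoder can be trained to be insensitive to it.
print(photometric_error(frame, frame + 0.2))
print(pyramid_matching_loss(frame, frame + 0.2))
```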


2020 · Vol 50 (2-3) · pp. 169-183
Author(s): L. Naponiello, L. Betti, A. Biagini, M. Focardi, E. Papini, ...

Abstract In this paper we report observations of HD 189733b, Kepler-41b, Kepler-42b, GJ 436b, WASP-77Ab, HAT-P-32b and EPIC 211818569 measured at the Osservatorio Polifunzionale del Chianti, a new astronomical site in Italy. Commissioning observing runs were carried out to test the capabilities, systematics and limits of the system and to improve its accuracy. For this purpose, a software algorithm has been developed to estimate the differential photometric error of any transit observation, so that the integration time can be chosen to reach an optimal signal-to-noise ratio and to obtain a picture of what kind of transits this setup can detect. Currently, the system reaches an accuracy of about 1 mmag and is therefore ready for much-needed exoplanetary transit follow-up.
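The integration-time logic described above can be illustrated in one line, assuming ideal photon-noise scaling (σ ∝ 1/√t), an idealisation that scintillation and systematic floors will eventually break:

```python
def required_exposure(sigma_1s_mag, target_sigma_mag):
    """Integration time (s) needed to reach a target photometric error,
    assuming photon-noise scaling sigma ∝ 1/sqrt(t) -- an idealisation;
    real systematics impose a noise floor at some point."""
    return (sigma_1s_mag / target_sigma_mag) ** 2

# e.g. 5 mmag scatter in a 1 s exposure, aiming for the ~1 mmag accuracy
# quoted above: ~25 s of integration per photometric point.
print(required_exposure(0.005, 0.001))
```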


2020 · Vol 638 · pp. A118
Author(s): S. P. Bos

Context. Photometric and astrometric monitoring of directly imaged exoplanets will deliver unique insights into their rotational periods, the distribution of cloud structures, weather, and orbital parameters. As the host star is occulted by the coronagraph, a speckle grid (SG) is introduced to serve as an astrometric and photometric reference. Speckle grids are implemented as diffractive pupil-plane optics that generate artificial speckles at known locations and brightnesses. Their performance is limited by the underlying speckle halo caused by evolving uncorrected wavefront errors. The speckle halo interferes with the coherent SGs, affecting their photometric and astrometric precision. Aims. Our aim is to show that by imposing opposite amplitude or phase modulation on the opposite polarization states, an SG can be made instantaneously incoherent with the underlying halo, greatly increasing the precision. We refer to these as vector speckle grids (VSGs). Methods. We derive analytically the mechanism by which the incoherency arises and explore the performance gain in idealised simulations under various atmospheric conditions. Results. We show that the VSG is completely incoherent for unpolarized light and that the fundamental limiting factor is the cross-talk between the speckles in the grid. In simulation, we find that for short-exposure images the VSG reaches a ∼0.3-0.8% photometric error and a ∼(3-10) × 10⁻³ λ/D astrometric error, a performance increase of factors of ∼20 and ∼5, respectively. Furthermore, we outline how VSGs could be implemented using liquid-crystal technology to impose the geometric phase on the circular polarization states. Conclusions. The VSG is a promising new method for generating a photometric and astrometric reference SG with greatly increased astrometric and photometric precision.
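The benefit of incoherence can be demonstrated with a toy Monte Carlo: a fixed reference-speckle field summed coherently with a random-phase halo fluctuates far more than the same pair summed in intensity, because the interference cross term disappears. The field amplitudes below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Complex field of the uncorrected speckle halo at the reference location:
# random phase, amplitude comparable to the injected reference speckle.
halo = rng.standard_normal(n) + 1j * rng.standard_normal(n)
ref = 2.0 + 0j  # injected speckle-grid field (fixed over the realisations)

coherent = np.abs(ref + halo) ** 2                  # ordinary SG: fields interfere
incoherent = np.abs(ref) ** 2 + np.abs(halo) ** 2   # VSG: intensities add

for name, inten in [("coherent SG", coherent), ("incoherent VSG", incoherent)]:
    print(f"{name:15s} relative photometric scatter: {inten.std() / inten.mean():.2f}")
```

The coherent sum carries an extra 2 Re(E*_ref E_halo) term whose variance dominates the photometric scatter; removing it is precisely what the polarization trick buys.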


2020 · Vol 10 (4) · pp. 1467
Author(s): Chao Sheng, Shuguo Pan, Wang Gao, Yong Tan, Tao Zhao

Traditional Simultaneous Localization and Mapping (SLAM, with loop-closure detection) and Visual Odometry (VO, without loop-closure detection) are based on the static-environment assumption. When working in dynamic environments, they perform poorly whether using direct methods or indirect (feature-point) methods. In this paper we propose Dynamic-DSO, a semantic monocular direct visual odometry system based on DSO (Direct Sparse Odometry). The proposed system is implemented entirely with the direct method, unlike most current dynamic systems, which combine the indirect method with deep learning. First, convolutional neural networks (CNNs) are applied to the original RGB image to generate pixel-wise semantic information for dynamic objects. Then, based on this semantic information, dynamic candidate points are filtered out during keyframe candidate-point extraction; only static candidate points are kept in the tracking and optimization module, to achieve accurate camera-pose estimation in dynamic environments. The photometric error contributed by projection points falling in dynamic regions of subsequent frames is removed from the total photometric error in the pyramid motion-tracking model. Finally, sliding-window optimization, which neglects the photometric error computed in the dynamic region of each keyframe, is applied to obtain the precise camera pose. Experiments on the public TUM dynamic dataset and a modified EuRoC dataset show that the positioning accuracy and robustness of the proposed Dynamic-DSO are significantly higher than those of state-of-the-art direct methods in dynamic environments, and the semi-dense point-cloud map constructed by Dynamic-DSO is clearer and more detailed.
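The masking step described above reduces to dropping residuals whose projections fall inside the semantic dynamic mask. A minimal sketch, with hypothetical shapes and a made-up mask:

```python
import numpy as np

def masked_photometric_error(residuals, proj_uv, dynamic_mask):
    """Total photometric error over candidate points, dropping points whose
    projections fall inside the semantic dynamic-object mask.

    residuals    : (N,) per-point photometric residuals
    proj_uv      : (N, 2) integer pixel coordinates (u, v) of projected points
    dynamic_mask : (H, W) boolean, True where a CNN flagged a dynamic object
    """
    u, v = proj_uv[:, 0], proj_uv[:, 1]
    keep = ~dynamic_mask[v, u]          # static points only
    return residuals[keep].sum(), keep

# Hypothetical example: 5 points, two of which land on a moving object.
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 300:400] = True           # e.g. a segmented car
res = np.array([0.1, 0.4, 0.2, 0.9, 0.3])
uv = np.array([[10, 10], [350, 150], [50, 400], [320, 120], [600, 40]])
total, keep = masked_photometric_error(res, uv, mask)
print(total, keep)   # 0.6 [ True False  True False  True]
```

In the actual pipeline this filter is applied both in pyramid motion tracking and in the sliding-window optimization, so dynamic pixels never enter the pose estimate.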


Drones · 2019 · Vol 3 (3) · pp. 69
Author(s): Laurent Jospin, Alexis Stoven-Dubois, Davide Antonio Cucci

Autonomous flight with unmanned aerial vehicles (UAVs) nowadays depends on the availability and reliability of Global Navigation Satellite Systems (GNSS). In cluttered outdoor scenarios, such as narrow gorges, or near tall artificial structures, such as bridges or dams, reduced sky visibility and multipath effects compromise the quality and trustworthiness of GNSS position fixes, making autonomous, or even manual, flight difficult and dangerous. To overcome this problem, cooperative navigation has been proposed: a second UAV flies away from any occluding objects and in line of sight of the first, and provides the latter with positioning information, removing the need for full and reliable GNSS coverage in the area of interest. In this work we use high-power light-emitting diodes (LEDs) to mark the second drone, and we present a computer-vision pipeline that tracks the second drone in real time from distances of up to 100 m and computes its relative position with decimeter accuracy. The pipeline is based on an extension of the classical iterative algorithm for the Perspective-n-Point problem in which the photometric error is minimized according to an image-formation model. This extension substantially increases the accuracy of point-feature measurements in image space (down to 0.05 pixels), which translates directly into higher positioning accuracy than conventional methods.
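As a toy stand-in for the photometric refinement (the paper extends iterative PnP over all LED markers; only the single-spot subpixel localisation step is sketched here, with a known-width Gaussian PSF and a brute-force search as assumptions):

```python
import numpy as np

def gaussian_spot(shape, center, sigma=1.5, amp=1.0):
    """Image-formation model of an LED spot: isotropic Gaussian PSF."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma**2))

def refine_subpixel(patch, sigma=1.5, step=0.05):
    """Subpixel spot localisation by minimising the photometric error between
    the patch and the spot model over a grid of candidate centres. A toy
    stand-in for the paper's method: known sigma and brute-force search
    replace a proper Gauss-Newton iteration."""
    h, w = patch.shape
    best, best_err = None, np.inf
    for cy in np.arange(h / 2 - 1, h / 2 + 1, step):
        for cx in np.arange(w / 2 - 1, w / 2 + 1, step):
            model = gaussian_spot(patch.shape, (cx, cy), sigma, patch.max())
            err = ((patch - model) ** 2).sum()
            if err < best_err:
                best, best_err = (cx, cy), err
    return best

truth = (7.32, 7.61)
patch = gaussian_spot((15, 15), truth) + 0.01 * np.random.default_rng(2).standard_normal((15, 15))
print(refine_subpixel(patch))   # recovers the centre to a few hundredths of a pixel
```

Fitting the full formation model instead of taking a plain centroid is what pushes the feature accuracy toward the 0.05 px figure quoted above.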


2017 · Vol 36 (10) · pp. 1053-1072
Author(s): Michael Bloesch, Michael Burri, Sammy Omari, Marco Hutter, Roland Siegwart

This paper presents a visual-inertial odometry framework that tightly fuses inertial measurements with visual data from one or more cameras by means of an iterated extended Kalman filter. By employing image patches as landmark descriptors, a photometric error is derived, which is integrated directly as an innovation term in the filter update step. Consequently, data association is an inherent part of the estimation process and no additional feature extraction or matching is required. Furthermore, this enables the tracking of non-corner-shaped features, such as lines, and thereby enlarges the set of possible landmarks. The filter state is formulated in a fully robocentric fashion, which reduces errors related to nonlinearities. This also includes partitioning a landmark's location estimate into a bearing vector and a distance, which allows undelayed initialization of landmarks. Overall, this results in a compact approach that exhibits a high level of robustness with respect to low scene texture and motion blur. Furthermore, there is no time-consuming initialization procedure, and pose estimates are available starting from the second image frame. We test the filter on different real datasets and compare it with other state-of-the-art visual-inertial frameworks. Experimental results show that robust localization with high accuracy can be achieved with this filter-based framework.
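The innovation construction can be sketched as follows: the stored landmark patch is compared pixel by pixel with the image at the predicted location, and the stacked intensity differences form the innovation vector fed to the filter update. This minimal version omits patch warping, multilevel pyramids, and the measurement Jacobians that the full framework includes:

```python
import numpy as np

def photometric_innovation(img, patch_ref, uv_pred):
    """Innovation term for the filter update: per-pixel intensity differences
    between the stored landmark patch and the image at the predicted location.
    A minimal sketch (integer-pixel lookup, no warping or Jacobians), not the
    paper's full formulation."""
    u, v = uv_pred
    h, w = patch_ref.shape
    pred = img[v : v + h, u : u + w]
    return (pred - patch_ref).ravel()    # stacked as innovation vector

rng = np.random.default_rng(3)
image = rng.random((480, 752))
ref = image[200:208, 300:308].copy()     # 8x8 landmark patch descriptor
nu = photometric_innovation(image, ref, (300, 200))
print(np.abs(nu).max())                  # ~0 at the true location
```

Because the innovation is built from raw intensities rather than matched feature coordinates, data association falls out of the update itself, which is the property the abstract highlights.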


2013 · Vol 34 (3) · pp. 273-296
Author(s): Arti Goyal, Mukul Mhaskey, Gopal-Krishna, Paul J. Wiita, C. S. Stalin, ...
