Gaia Data Release 2

2018
Vol 616
pp. A15
Author(s):  
N. C. Hambly ◽  
M. Cropper ◽  
S. Boudreault ◽  
C. Crowley ◽  
R. Kohley ◽  
...  

Context. The European Space Agency’s Gaia satellite was launched into orbit around L2 in December 2013. This ambitious mission has strict requirements on residual systematic errors resulting from instrumental corrections in order to meet a design goal of sub-10 microarcsecond astrometry. During the design and build phase of the science instruments, various critical calibrations were studied in detail to ensure that this goal could be met in orbit. In particular, it was determined that the video-chain offsets on the analogue side of the analogue-to-digital conversion electronics exhibited instabilities that could not be mitigated fully by modifications to the flight hardware.
Aims. We provide a detailed description of the behaviour of the electronic offset levels on short (<1 ms) timescales, identifying various systematic effects that are known collectively as “offset non-uniformities”. The effects manifest themselves as transient perturbations on the gross zero-point electronic offset level that is routinely monitored as part of the overall calibration process.
Methods. Using in-orbit special calibration sequences along with simple parametric models, we show how the effects can be calibrated, and how these calibrations are applied to the science data. While the calibration part of the process is relatively straightforward, the application of the calibrations during science data processing requires a detailed on-ground reconstruction of the readout timing of each charge-coupled device (CCD) sample on each device in order to predict correctly the highly time-dependent nature of the corrections.
Results. We demonstrate the effectiveness of our offset non-uniformity models in mitigating the effects in Gaia data.
Conclusions. We demonstrate for all CCDs and operating instrument modes on board Gaia that the video-chain noise-limited performance is recovered in the vast majority of science samples.
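
As an illustration of the calibration half of this process, here is a minimal sketch (not the Gaia/DPAC pipeline) of fitting a simple parametric model of a transient offset perturbation as a function of the time since the preceding readout event. The exponential form, the parameter values, and the synthetic calibration samples are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_model(dt, baseline, amplitude, tau):
    """Gross electronic offset plus a transient perturbation that decays
    exponentially with the time dt since the preceding readout event."""
    return baseline + amplitude * np.exp(-dt / tau)

# Synthetic calibration samples: offset in ADU vs. time since last flush (ms)
rng = np.random.default_rng(42)
dt = rng.uniform(0.01, 1.0, 500)                    # sub-millisecond timescales
truth = offset_model(dt, baseline=2566.0, amplitude=4.0, tau=0.15)
samples = truth + rng.normal(0.0, 0.5, dt.size)     # read-noise scatter

popt, pcov = curve_fit(offset_model, dt, samples, p0=(2560.0, 1.0, 0.1))
baseline, amplitude, tau = popt
print(f"baseline = {baseline:.2f} ADU, amplitude = {amplitude:.2f} ADU, tau = {tau:.3f} ms")

# Applying the calibration: subtract the predicted transient perturbation,
# leaving the gross offset for the ordinary bias calibration to remove.
corrected = samples - amplitude * np.exp(-dt / tau)
```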

1978
Vol 48
pp. 31-35
Author(s):  
R. B. Hanson

Several outstanding problems affecting the existing parallaxes should be resolved to form a coherent system for the new General Catalogue proposed by van Altena, as well as to improve luminosity calibrations and other parallax applications. Lutz has reviewed several of these problems, such as: (A) systematic differences between observatories, (B) external error estimates, (C) the absolute zero point, and (D) systematic observational effects (in right ascension, declination, apparent magnitude, etc.). Here we explore the use of cluster and spectroscopic parallaxes, and the distributions of observed parallaxes, to bring new evidence to bear on these classic problems. Several preliminary results have been obtained.
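
As a sketch of how external parallaxes can constrain problem (C), the absolute zero point: assuming cluster or spectroscopic parallaxes serve as unbiased references, an inverse-variance-weighted mean of the differences estimates the zero-point offset. All numbers below are illustrative.

```python
import numpy as np

def zero_point_offset(pi_trig, sigma_trig, pi_ref):
    """Inverse-variance-weighted mean difference between trigonometric
    parallaxes and reference (cluster or spectroscopic) parallaxes.
    A significantly non-zero result indicates a systematic zero-point error."""
    diff = pi_trig - pi_ref
    w = 1.0 / sigma_trig**2
    offset = np.sum(w * diff) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return offset, err

# Illustrative numbers (arcsec): trig parallaxes carry a -2 mas zero-point error
rng = np.random.default_rng(1)
pi_ref = rng.uniform(0.005, 0.050, 200)
sigma = np.full(200, 0.004)
pi_trig = pi_ref - 0.002 + rng.normal(0.0, 0.004, 200)

offset, err = zero_point_offset(pi_trig, sigma, pi_ref)
print(f"zero-point offset = {offset*1e3:.2f} +/- {err*1e3:.2f} mas")
```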


2021
Vol 217 (2)
Author(s):  
Alexander G. Hayes ◽  
P. Corlies ◽  
C. Tate ◽  
M. Barrington ◽  
J. F. Bell ◽  
...  

Abstract The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multi-spectral, and color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies that can range in focal length from 26 mm (25.5° × 19.1° FOV) to 110 mm (6.2° × 4.2° FOV) and will acquire data at pixel scales of 148-540 μm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover’s mast with a stereo baseline of 24.3 ± 0.1 cm and a toe-in angle of 1.17 ± 0.03° (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600 × 1200 active pixels and an 8-position filter wheel that contains an IR-cutoff filter for color imaging through the detectors’ Bayer-pattern filters, a neutral density (ND) solar filter for imaging the Sun, and 6 narrow-band geology filters (16 total filters). An associated Digital Electronics Assembly provides command and data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26th and May 9th, 2019, ∼45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly, Test, and Launch Operations (ATLO) at the Jet Propulsion Laboratory and Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing using camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be <10%. Image quality, measured via the amplitude of the Modulation Transfer Function (MTF) at Nyquist sampling (0.35 line pairs per pixel), shows MTF_Nyquist = 0.26-0.50 across all zoom, focus, and filter positions, exceeding the >0.2 design requirement. We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operation at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component to the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
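
As a hedged illustration of how such radiometric coefficients (and the reported temperature dependence) might be applied, the sketch below converts raw DN to radiance. The function name, coefficient values, and linear temperature model are assumptions for illustration, not Mastcam-Z pipeline code.

```python
import numpy as np

def dn_to_radiance(dn, bias, exposure_s, coeff, temp_c, temp_slope=0.0):
    """Convert raw DN to radiance (W m^-2 sr^-1 nm^-1), applying a linear
    temperature correction to the responsivity coefficient, which is
    referenced to 0 degC here."""
    responsivity = coeff * (1.0 + temp_slope * temp_c)
    return (dn - bias) / (exposure_s * responsivity)

# Illustrative call on a synthetic KAI-2020-sized frame (all numbers made up)
frame = np.full((1200, 1600), 1800.0)  # raw DN
radiance = dn_to_radiance(frame, bias=120.0, exposure_s=0.01,
                          coeff=6.0e4, temp_c=-10.0, temp_slope=1.5e-3)
print(f"mean radiance = {radiance.mean():.3e} W m^-2 sr^-1 nm^-1")
```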


Author(s):  
E Gaztanaga ◽  
S J Schmidt ◽  
M D Schneider ◽  
J A Tyson

Abstract We test the impact of some systematic errors in weak lensing magnification measurements with the COSMOS 30-band photo-z survey, flux limited to I_auto < 25.0, using correlations of both source galaxy counts and magnitudes. Systematic obscuration effects are measured by comparing counts and magnification correlations. We use the ACS-HST catalogs to identify potential blending objects (close pairs) and perform the magnification analyses with and without blended objects. We find that blending effects start to be important (∼0.04 mag obscuration) at angular scales smaller than 0.1 arcmin. Extinction and other systematic obscuration effects can be as large as 0.10 mag (U band) but are typically smaller than 0.02 mag, depending on the band. After applying these corrections, we measure a 3.9σ magnification signal that is consistent for both counts and magnitudes. The corresponding projected mass profile of galaxies at redshift z ≃ 0.6 (M_I ≃ −21) is Σ = 25 ± 6 M⊙ h/pc² at 0.1 Mpc/h, consistent with an NFW-type profile with M200 ≃ 2 × 10¹² M⊙/h. Tangential shear and flux-size magnification over the same lenses show similar mass profiles. We conclude that magnification from counts and fluxes using photometric redshifts has the potential to provide complementary weak lensing information in future wide-field surveys once we carefully take into account systematic effects, such as obscuration and blending.
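
Counts-based magnification hinges on the slope of the number counts at the flux limit, since magnification modulates the observed counts by a factor proportional to (2.5s − 1). Below is a minimal sketch of estimating that slope, using a toy magnitude distribution and the paper's I_auto < 25.0 limit as the cut; it is an illustration, not the paper's estimator.

```python
import numpy as np

def count_slope(mags, m_lim, dm=0.25):
    """Logarithmic slope s = dlog10 N(<m)/dm of the cumulative counts at the
    survey flux limit; magnification perturbs counts in proportion to (2.5 s - 1)."""
    n_faint = np.sum(mags < m_lim)
    n_bright = np.sum(mags < m_lim - dm)
    return (np.log10(n_faint) - np.log10(n_bright)) / dm

rng = np.random.default_rng(7)
mags = 25.0 - rng.exponential(1.2, 100_000)   # toy magnitude distribution
s = count_slope(mags, m_lim=25.0)
print(f"s = {s:.2f}, magnification prefactor (2.5 s - 1) = {2.5 * s - 1:.2f}")
```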


Author(s):  
J. Gordon Robertson

Abstract One of the basic parameters of a charge-coupled device (CCD) camera is its gain, that is, the number of detected electrons per output Analogue-to-Digital Unit (ADU). This is normally determined by finding the statistical variances from a series of flat-field exposures with nearly constant levels over substantial areas, and making use of the fact that photon (Poisson) noise has variance equal to the mean. However, when a CCD has been installed in a spectroscopic instrument fed by numerous optical fibres, or with an echelle format, it is no longer possible to obtain illumination that is constant over large areas. Instead of making do with selected small areas, it is shown here that the wide variation of signal level in a spectroscopic ‘flat-field’ can be used to obtain accurate values of the CCD gain, needing only a matched pair of exposures (that differ in their realisation of the noise). Once the gain is known, the CCD readout noise (in electrons) is easily found from a pair of bias frames. Spatial stability of the image in the two flat-fields is important, although correction of minor shifts is shown to be possible, at the expense of further analysis.
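
The photon-transfer idea behind this method is easy to sketch: for a matched pair of flats, half the variance of their pixel-wise difference in bins of signal level equals read_noise² + mean/gain, so a straight-line fit of variance against mean yields the gain from the slope. The binning scheme and synthetic data below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def gain_from_flat_pair(flat1, flat2, nbins=30):
    """Estimate gain (e-/ADU) from two matched flat-field exposures.
    Pixels are binned by signal level; within each bin the pixel-wise
    difference obeys var(F1 - F2)/2 = read_noise^2 + mean/gain, so a
    straight-line fit of variance against mean has slope 1/gain."""
    mean = 0.5 * (flat1 + flat2)
    diff = flat1 - flat2
    edges = np.quantile(mean, np.linspace(0.0, 1.0, nbins + 1))
    m, v = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mean >= lo) & (mean < hi)
        if sel.sum() > 100:
            m.append(mean[sel].mean())
            v.append(0.5 * diff[sel].var())
    slope, intercept = np.polyfit(m, v, 1)
    return 1.0 / slope, np.sqrt(max(intercept, 0.0))  # gain, read noise (ADU)

# Synthetic spectroscopic flat with a wide range of signal levels
rng = np.random.default_rng(3)
true_gain, rn_adu = 1.8, 3.0
signal_e = rng.uniform(100, 60_000, (512, 512))      # electrons per pixel
f1 = rng.poisson(signal_e) / true_gain + rng.normal(0, rn_adu, signal_e.shape)
f2 = rng.poisson(signal_e) / true_gain + rng.normal(0, rn_adu, signal_e.shape)
gain, rn = gain_from_flat_pair(f1, f2)
print(f"gain = {gain:.2f} e-/ADU (truth {true_gain}), read noise = {rn:.1f} ADU")
```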


2019
Vol 214
pp. 06024
Author(s):  
Victor Estrade ◽  
Cécile Germain ◽  
Isabelle Guyon ◽  
David Rousseau

Experimental science often has to cope with systematic errors that coherently bias data. We analyze this issue in the analysis of data produced by the Large Hadron Collider experiments at CERN, treating it as a case of supervised domain adaptation. Systematics-aware learning should create an efficient representation that is insensitive to perturbations induced by the systematic effects. We present an experimental comparison of the adversarial knowledge-free approach and a less data-intensive alternative.
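
A minimal sketch of the adversarial approach in the spirit described above: a gradient-reversal layer trains the feature extractor to be uninformative about the nuisance parameter while remaining discriminative for the classification task. The architecture, nuisance variable, and toy data are assumptions for illustration, not the paper's benchmark setup.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the
    backward pass, so the feature extractor learns to fool the adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
classifier = nn.Linear(32, 2)    # signal vs. background logits
adversary = nn.Linear(32, 1)     # regresses the nuisance parameter z

opt = torch.optim.Adam([*features.parameters(), *classifier.parameters(),
                        *adversary.parameters()], lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

x = torch.randn(256, 20)                  # toy events
y = torch.randint(0, 2, (256,))           # toy labels
z = torch.randn(256, 1)                   # toy nuisance (e.g. an energy-scale shift)

for step in range(100):
    f = features(x)
    loss = ce(classifier(f), y) + mse(adversary(GradReverse.apply(f, 1.0)), z)
    opt.zero_grad()
    loss.backward()
    opt.step()
```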


2016
Vol 40
pp. 1660099
Author(s):  
Stanislav Chekmenev

An experimental method aimed at finding a permanent EDM of a charged particle has been proposed by the JEDI (Jülich Electric Dipole moment Investigations) collaboration. EDMs can be observed through their influence on spin motion. The only possible way to perform a direct measurement is to use a storage ring. For this purpose, it was decided to carry out a first precursor experiment at the Cooler Synchrotron (COSY). Since the EDM of a particle violates CP invariance, it is expected to be tiny; all the various sources of systematic error must therefore be treated with a great level of precision. One should clearly understand how misalignments of the magnets affect the beam and the spin motion. It is planned to use an RF Wien filter for the precursor experiment. In this paper, simulations of the systematic effects for the RF Wien filter method are discussed.
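
As a toy illustration (not the JEDI simulation) of why magnet misalignments are so dangerous: an EDM tilts the spin-precession axis by a tiny angle, converting in-plane polarisation into a vertical oscillation of the same order, and a misalignment-induced tilt mimics exactly this signature. The tilt value and spin tune below are hypothetical.

```python
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

# Toy model: in a magnetic ring the spin precesses about the vertical axis;
# an EDM tilts that axis by a tiny angle xi in the radial direction, turning
# in-plane polarisation into a vertical oscillation of amplitude ~ xi.
xi = 1e-6                                         # EDM-induced tilt (rad), hypothetical
axis = np.array([np.sin(xi), np.cos(xi), 0.0])    # (radial, vertical, longitudinal)
spin = np.array([0.0, 0.0, 1.0])                  # initially longitudinal
phi_per_turn = 2.0 * np.pi * 0.16                 # toy spin tune per turn

s_vert = []
for turn in range(10_000):
    spin = rotate(spin, axis, phi_per_turn)
    s_vert.append(spin[1])
print(f"max vertical polarisation = {max(np.abs(s_vert)):.2e} (~ xi)")
```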


1994
Vol 38
pp. 47-57
Author(s):  
D. L. Bish ◽  
Steve. J. Chipera

Abstract Accuracy, or how well a measurement conforms to the true value of a parameter, is important in XRD analyses in three primary areas: 1) 2θ position or d-spacing; 2) peak shape; and 3) intensity. Instrumental factors affecting accuracy include zero-point, axial-divergence, and specimen-displacement errors, step size, and even uncertainty in X-ray wavelength values. Sample factors affecting accuracy include specimen transparency, structural strain, crystallite size, and preferred orientation effects. In addition, a variety of other sample-related factors influence the accuracy of quantitative analyses, including variations in sample composition and order/disorder. The conventional method of assessing accuracy during experimental diffractometry measurements is through the use of certified internal standards. However, it is possible to obtain highly accurate d-spacings without an internal standard using a well-aligned powder diffractometer coupled with data-analysis routines that allow analysis of and correction for important systematic errors. The first consideration in such measurements is the use of methods yielding precise peak positions, such as profile fitting. High accuracy can be achieved if specimen-displacement, specimen-transparency, axial-divergence, and possibly zero-point corrections are included in data analysis. It is also important to consider that most common X-ray wavelengths (other than Cu Kα1) have not been measured with high accuracy. Accuracy in peak-shape measurements is important in the separation of instrumental and sample contributions to profile shape, e.g., in crystallite-size and strain measurements. The instrumental contribution must be determined accurately using a standard material free from significant sample-related effects, such as NIST SRM 660 (LaB6). Although full-pattern fitting methods for quantitative analysis are available, the presence of numerous systematic errors makes the use of an internal standard, such as α-alumina, mandatory to ensure accuracy; accuracy is always suspect when using external-standard, constrained-total quantitative analysis methods. One of the most significant problems in quantitative analysis remains the choice of representative standards. Variations in sample chemistry, order-disorder, and preferred orientation can be accommodated only with a thorough understanding of the coupled effects of all three on intensities. It is important to recognize that sample preparation methods that optimize accuracy for one type of measurement may not be appropriate for another. For example, the very fine crystallite size that is optimum for quantitative analysis is unnecessary and can even be detrimental in d-spacing and peak-shape measurements.
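
As an illustration of the zero-point and specimen-displacement corrections discussed above, the sketch below applies a common flat-specimen displacement convention, Δ(2θ) = −(2s/R)·cosθ (in radians), before converting to d-spacing via Bragg's law. The sign convention, goniometer radius, and example numbers are assumptions for illustration.

```python
import numpy as np

WAVELENGTH_CU_KA1 = 1.5405929  # Angstrom

def corrected_d_spacing(two_theta_obs_deg, zero_deg, displacement_mm,
                        radius_mm=240.0, wavelength=WAVELENGTH_CU_KA1):
    """Correct an observed 2-theta peak position for zero-point and
    flat-specimen displacement errors, then convert to d via Bragg's law.
    Displacement s on a goniometer of radius R shifts peaks by
    delta(2theta) = -(2 s / R) cos(theta) in radians."""
    two_theta = np.radians(two_theta_obs_deg - zero_deg)   # zero-point correction
    theta = two_theta / 2.0
    displacement_shift = -(2.0 * displacement_mm / radius_mm) * np.cos(theta)
    theta_true = (two_theta - displacement_shift) / 2.0
    return wavelength / (2.0 * np.sin(theta_true))

# Example: a peak observed at 26.66 deg 2-theta with a +0.02 deg zero error
# and a 50 micron specimen displacement on a 240 mm radius goniometer.
print(f"d = {corrected_d_spacing(26.66, 0.02, 0.05):.5f} A")
```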

