PlanetEvidence: Planet or Noise?

2021 ◽  
Vol 162 (6) ◽  
pp. 304
Author(s):  
Jacob Golomb ◽  
Graça Rocha ◽  
Tiffany Meshkat ◽  
Michael Bottom ◽  
Dimitri Mawet ◽  
...  

Abstract The work presented here attempts to answer the following question: how do we decide whether a given detection is a planet or just residual noise in exoplanet direct imaging data? To this end we implement a metric meant to replace the empirical frequentist-based thresholds for detection. Our method, implemented within a Bayesian framework, introduces an “evidence-based” approach to help decide whether a given detection is a true planet or just noise. We apply this metric jointly with a postprocessing technique, Karhunen–Loève Image Processing (KLIP), which models and subtracts the stellar PSF from the image. As a proof of concept we implemented a new routine named PlanetEvidence that integrates the nested sampling technique (MultiNest) with the KLIP algorithm. This is a first step toward recasting such a postprocessing method in a fully Bayesian perspective. We test our approach on real direct imaging data, specifically GPI data of β Pictoris b, and on synthetic data. We find that for the former the method strongly favors the presence of a planet (as expected) and recovers the true parameter posterior distributions. For the latter, our approach allows us to detect (true) dim sources invisible to the naked eye as real planets rather than background noise, and sets a new lower detection threshold at the ∼2.5σ level. Further, it allows us to quantify our confidence that a given detection is a real planet and not just residual noise.
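The evidence comparison at the heart of this approach can be illustrated with a toy one-parameter example (this is not the authors' PlanetEvidence/MultiNest pipeline, and every number below is invented): compare the marginal likelihood of a "planet" model with a free source amplitude against a noise-only model, and report the Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a faint source of amplitude 0.5 in unit-variance Gaussian noise.
n, true_amp = 200, 0.5
data = true_amp + rng.normal(0.0, 1.0, n)

def log_like(amp):
    """Gaussian log-likelihood of the data given a source amplitude."""
    return -0.5 * np.sum((data - amp) ** 2) - 0.5 * n * np.log(2.0 * np.pi)

# Evidence of the noise-only model: no free parameter, amplitude fixed at zero.
log_z_noise = log_like(0.0)

# Evidence of the "planet" model: average the likelihood over a uniform prior
# amp ~ U(0, 2), approximated on a dense grid (nested sampling does this integral
# in higher dimensions).
amps = np.linspace(0.0, 2.0, 2001)
log_l = np.array([log_like(a) for a in amps])
log_z_planet = log_l.max() + np.log(np.mean(np.exp(log_l - log_l.max())))

log_bayes_factor = log_z_planet - log_z_noise
print(f"ln(planet evidence / noise evidence) = {log_bayes_factor:.2f}")
```

A large positive log Bayes factor favors the planet model; the prior-averaged likelihood automatically penalizes the extra free parameter (the Occam factor), which is what makes this an "evidence-based" detection criterion rather than a fixed σ threshold.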

2010 ◽  
Vol 14 (3) ◽  
pp. 545-556 ◽  
Author(s):  
J. Rings ◽  
J. A. Huisman ◽  
H. Vereecken

Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widely used methods do not explicitly recognize the uncertainty in parameter estimates. We therefore apply a sequential Bayesian framework that provides updates of the state, the parameters, and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time domain reflectometry and ERT, and the development of their posterior distributions is shown.
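The sequential update idea can be conveyed with a minimal bootstrap particle filter, using a single static parameter as a stand-in for the hydrological state and parameters (the coupled hydro-ERT forward models are far richer than this sketch, and all values here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a constant "hydraulic conductivity" K drives noisy
# observations y_t = K + noise, arriving one at a time.
true_k, n_particles, n_steps, obs_sigma = 2.0, 500, 30, 0.5

particles = rng.uniform(0.0, 5.0, n_particles)    # prior ensemble for K
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    y = true_k + rng.normal(0.0, obs_sigma)        # a new measurement becomes available
    # Bayes update: reweight each particle by the observation likelihood
    weights *= np.exp(-0.5 * ((y - particles) / obs_sigma) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size degenerates, jittering to keep diversity
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.05, n_particles)
        weights[:] = 1.0 / n_particles

estimate = np.sum(weights * particles)
print(f"posterior mean K = {estimate:.2f}  (true value {true_k})")
```

The weighted ensemble plays the role of the posterior: its spread at any time step is exactly the parameter uncertainty that the abstract notes is missing from widely used methods.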


2013 ◽  
Vol 8 (S299) ◽  
pp. 42-43
Author(s):  
Mihoko Konishi ◽  
Hiroshi Shibai ◽  
Taro Matsuo ◽  
Kodai Yamamoto ◽  
Jun Sudo ◽  
...  

Abstract There are faint contaminants near primary stars in the direct imaging of exoplanets. Our goal is to statistically estimate the fraction of exoplanets among the detected point sources by calculating the fraction of contamination. In this study, we compared the detected number of stars with the number of contaminants predicted by our model. We found that the observed number of faint stars was smaller than predicted towards the Pleiades and the GOODS-South field when the parameters of conventional stellar distribution models were employed. We therefore estimated new model parameters that match the results of the observations.
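The contamination correction can be illustrated with back-of-the-envelope numbers (all hypothetical, not taken from the study): compare the Poisson expectation of background stars, from an assumed surface-density model, with the total number of detected point sources.

```python
# Hypothetical numbers: predicted background-star surface density from a stellar
# distribution model versus the point sources actually detected around the targets.
density_per_arcmin2 = 0.8   # assumed model prediction, stars/arcmin^2 to the survey depth
fov_arcmin2 = 0.25          # searched area around one star (e.g. a 30" x 30" field)
n_targets = 100             # stars surveyed
n_detected = 30             # faint point sources found in total

expected_contaminants = density_per_arcmin2 * fov_arcmin2 * n_targets  # Poisson expectation
planet_fraction = max(0.0, (n_detected - expected_contaminants) / n_detected)

print(f"expected contaminants: {expected_contaminants:.1f}")
print(f"estimated fraction of detections that are planets: {planet_fraction:.2f}")
```

An observed count *below* the model expectation, as reported for the Pleiades and GOODS-South fields, signals that the assumed density itself needs refitting before this fraction is meaningful.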


Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 109
Author(s):  
Francisco J. Ariza-Hernandez ◽  
Martin P. Arciga-Alejandre ◽  
Jorge Sanchez-Ortiz ◽  
Alberto Fleitas-Imbert

In this paper, we consider the inverse problem of derivative order estimation in a fractional logistic model. In order to solve the direct problem, we use the Grünwald-Letnikov fractional derivative; the inverse problem is then tackled within a Bayesian perspective. To construct the likelihood function, we propose an explicit numerical scheme based on the truncated series of the derivative definition. By MCMC samples of the marginal posterior distributions, we estimate the order of the derivative and the growth rate parameter in the dynamic model, as well as the noise in the observations. To evaluate the methodology, a simulation was performed using synthetic data, and the bias and mean square error were calculated; the results give evidence of the effectiveness of the method and the suitable performance of the proposed model. Moreover, an example with real data is presented as evidence of the relevance of using a fractional model.
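The Grünwald-Letnikov construction can be sketched as an explicit scheme for the fractional logistic equation D^α x = r x (1 − x). This is a generic textbook discretization for illustration, not necessarily the authors' exact scheme:

```python
import numpy as np

def gl_fractional_logistic(alpha, r, x0, h, n_steps):
    """Explicit Grünwald-Letnikov scheme for D^alpha x = r x (1 - x).
    The GL weights obey w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j), and the
    update solves h^{-alpha} * sum_j w_j x_{n-j} = f(x_{n-1}) for x_n."""
    w = np.empty(n_steps + 1)
    w[0] = 1.0
    for j in range(1, n_steps + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(1, n_steps + 1):
        history = np.dot(w[1:n + 1], x[n - 1::-1])   # sum_{j=1}^{n} w_j x_{n-j}
        x[n] = h**alpha * r * x[n - 1] * (1.0 - x[n - 1]) - history
    return x

# For alpha = 1 the weights collapse to w_1 = -1, w_{j>=2} = 0 and the scheme
# reduces to forward Euler for the classical logistic equation.
traj = gl_fractional_logistic(alpha=0.9, r=1.0, x0=0.1, h=0.05, n_steps=400)
print(f"x(T) = {traj[-1]:.3f}")
```

In the inverse problem, a forward solver like this one is evaluated inside the likelihood, and MCMC explores α, r, and the noise level jointly.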


2020 ◽  
Vol 496 (2) ◽  
pp. 2346-2361 ◽  
Author(s):  
Berta Margalef-Bentabol ◽  
Marc Huertas-Company ◽  
Tom Charnock ◽  
Carla Margalef-Bentabol ◽  
Mariangela Bernardi ◽  
...  

ABSTRACT With the advent of future big-data surveys, automated tools for unsupervised discovery are becoming ever more necessary. In this work, we explore the ability of deep generative networks to detect outliers in astronomical imaging data sets. The main advantage of such generative models is that they learn complex representations directly from the pixel space. These methods therefore enable us to look for subtle morphological deviations that are typically missed by more traditional moment-based approaches. We use a generative model to learn a representation of the expected data defined by the training set, and then look for deviations from the learned representation by searching for the best reconstruction of a given object. In this first proof-of-concept work, we apply our method to two different test cases. We first show that, from a set of simulated galaxies, we are able to detect ∼90 per cent of merging galaxies if we train our network only on a sample of isolated ones. We then explore how the presented approach can be used to compare observations and hydrodynamic simulations by identifying observed galaxies that are not well represented in the models. The code used in this work is available at https://github.com/carlamb/astronomical-outliers-WGAN.
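The "learn the expected data, flag what reconstructs badly" idea can be sketched with PCA standing in for the deep generative network (a deliberately simplified stand-in, not the paper's WGAN; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Inliers live near a low-dimensional subspace; fit that subspace from training
# data, then score any object by how well the model can reconstruct it.
n_train, dim, k = 500, 50, 5
basis = rng.normal(size=(k, dim))
train = rng.normal(size=(n_train, k)) @ basis + 0.1 * rng.normal(size=(n_train, dim))

mean = train.mean(axis=0)
# principal components via SVD of the centred training set
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:k]

def reconstruction_error(x):
    """Project onto the learned subspace and measure what the model cannot explain."""
    z = (x - mean) @ components.T
    x_hat = mean + z @ components
    return np.linalg.norm(x - x_hat)

inlier = rng.normal(size=k) @ basis        # drawn from the learned structure
outlier = rng.normal(size=dim) * 3.0       # unstructured: a "morphological deviant"
print(reconstruction_error(inlier), reconstruction_error(outlier))
```

A deep generative model plays the same role as the subspace here, but learns a far more flexible representation of "expected" morphology directly from pixels; the outlier score is still the reconstruction residual.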


2018 ◽  
Vol 614 ◽  
pp. A16 ◽  
Author(s):  
A. Cheetham ◽  
D. Ségransan ◽  
S. Peretti ◽  
J.-B. Delisle ◽  
J. Hagelberg ◽  
...  

Using high-contrast imaging with the SPHERE instrument at the Very Large Telescope (VLT), we report the first images of a cold brown dwarf companion to the exoplanet host star HD 4113A. The brown dwarf HD 4113C is part of a complex dynamical system consisting of a giant planet, a stellar host, and a known wide M-dwarf companion. Its separation of 535 ± 3 mas and H-band contrast of 13.35 ± 0.10 mag correspond to a projected separation of 22 AU and an isochronal mass estimate of 36 ± 5 MJ based on COND models. The companion shows strong methane absorption, and through fitting an atmosphere model, we estimate a surface gravity of log g = 5 and an effective temperature of ~500–600 K. A comparison of its spectrum with observed T dwarfs indicates a late-T spectral type, with a T9 object providing the best match. By combining the observed astrometry from the imaging data with 27 years of radial velocities, we use orbital fitting to constrain its orbital and physical parameters, as well as update those of the planet HD 4113A b, discovered by previous radial velocity measurements. The data suggest a dynamical mass of 66 (+5, −4) MJ and moderate eccentricity of 0.44 (+0.08, −0.07) for the brown dwarf. This mass estimate appears to contradict the isochronal estimate and that of objects with similar temperatures, which may be caused by the newly detected object being an unresolved binary brown dwarf system or the presence of an additional object in the system. Through dynamical simulations, we show that the planet may undergo strong Lidov-Kozai cycles, raising the possibility that it formed on a quasi-circular orbit and gained its currently observed high eccentricity (e ~ 0.9) through interactions with the brown dwarf. Follow-up observations combining radial velocities, direct imaging, and Gaia astrometry will be crucial to precisely constrain the dynamical mass of the brown dwarf and allow for an in-depth comparison with evolutionary and atmosphere models.


Author(s):  
Xin Xu

Purpose – Emitter parameter estimation via signal sorting is crucial for communication, electronic reconnaissance, and radar intelligence analysis. However, owing to transmitter circuit imperfections, environmental noise, and unknown interference sources, the estimated emitter parameter measurements are inaccurate and biased. As a result, it is indispensable to refine the parameter values further. Although benchmark clustering algorithms can in principle infer the true parameter values by discovering cluster centers, their high computational and communication cost makes them difficult to adapt for distributed learning on massive measurement data. The paper aims to discuss these issues.

Design/methodology/approach – In this work, the author brings forward a distributed emitter parameter refinement method based on maximum likelihood. The method is able to infer the underlying true parameter values from huge measurement data efficiently in a distributed working mode.

Findings – Experimental results on a series of synthetic data sets indicate the effectiveness and efficiency of the method when compared against the benchmark clustering methods.

Originality/value – With the refined parameter values, complex stochastic parameter patterns can be discovered and emitters can be identified by merging observations with consistent parameter values. The author is currently applying the distributed parameter refinement method to PRI parameter pattern discovery and emitter identification. The superior performance ensures its wide application in both civil and military fields.
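The distributed flavour of maximum-likelihood refinement can be sketched for a Gaussian measurement model, where the ML estimate depends only on per-node sufficient statistics, so each node ships three numbers instead of its raw data (all values hypothetical; the paper's model is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scenario: each node holds a shard of noisy pulse-parameter
# measurements around one underlying true value.
true_value = 1250.0   # e.g. a pulse repetition interval in microseconds
shards = [true_value + rng.normal(0.0, 5.0, n) for n in (10_000, 20_000, 15_000)]

def local_stats(shard):
    """Map step, run on each node: count, sum, and sum of squares."""
    return len(shard), shard.sum(), np.sum(shard ** 2)

# Reduce step: combine the per-node sufficient statistics.
n, s, ss = map(sum, zip(*(local_stats(sh) for sh in shards)))
mle_mean = s / n
mle_var = ss / n - mle_mean ** 2

print(f"refined parameter value: {mle_mean:.2f} +/- {np.sqrt(mle_var):.2f}")
```

Because the reduce step is a plain sum, the communication cost is constant per node regardless of shard size, which is the property that clustering-based refinement lacks.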


2021 ◽  
Author(s):  
Erik D. Fagerholm ◽  
W.M.C. Foulkes ◽  
Yasir Gallero-Salas ◽  
Fritjof Helmchen ◽  
Rosalyn J. Moran ◽  
...  

An isotropic dynamical system is one that looks the same in every direction, i.e., if we imagine standing somewhere within an isotropic system, we would not be able to differentiate between different lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy, with larger values indicating greater discrepancies between the structure of the system along its axes. Here, we derive the form of a generalized scalable (mechanically similar) discretized field theoretic Lagrangian that allows levels of anisotropy to be estimated directly from time series of arbitrary dimensionality. We generate synthetic data for both isotropic and anisotropic systems and, by using Bayesian model inversion and reduction, show that we can discriminate between the two data sets, thereby demonstrating proof of principle. We then apply this methodology to murine calcium imaging data collected in rest and task states, showing that anisotropy can be estimated directly from different brain states and cortical regions in an empirical in vivo biological setting. We hope that this theoretical foundation, together with the methodology and publicly available MATLAB code, will provide an accessible way for researchers to obtain new insight into the structural organization of neural systems in terms of how scalable neural regions grow, both ontogenetically during the development of an individual organism and phylogenetically across species.
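A stripped-down version of the isotropic-versus-anisotropic model comparison can use BIC in place of full Bayesian model inversion and reduction (a crude stand-in for the paper's Lagrangian-based method, on invented synthetic data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-dimensional time series whose axes have different variances
# (the anisotropic case).
n = 2000
data = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 2.0, n)])

def bic_zero_mean_gaussian(x, isotropic):
    """BIC of a zero-mean diagonal Gaussian; the isotropic model ties the two
    axis variances to a single parameter."""
    if isotropic:
        var = np.full(2, np.mean(x ** 2))   # one shared variance parameter
        k = 1
    else:
        var = np.mean(x ** 2, axis=0)       # one variance per axis
        k = 2
    loglik = -0.5 * (len(x) * np.sum(np.log(2.0 * np.pi * var))
                     + np.sum(x ** 2 / var))
    return k * np.log(x.size) - 2.0 * loglik

bic_iso = bic_zero_mean_gaussian(data, isotropic=True)
bic_aniso = bic_zero_mean_gaussian(data, isotropic=False)
print(f"BIC isotropic = {bic_iso:.1f}, BIC anisotropic = {bic_aniso:.1f}")
```

The lower BIC selects the anisotropic model here, mirroring the paper's discrimination test; the ratio of the per-axis variances is the simplest possible anisotropy measure.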


2019 ◽  
Vol 11 (4) ◽  
pp. 507-524
Author(s):  
Silvère Gousset ◽  
Laurence Croizé ◽  
Etienne Le Coarer ◽  
Yann Ferrec ◽  
...  

Abstract NanoCarb is an innovative Fourier-transform imaging spectrometer dedicated to the measurement of CO2 and CH4. Both its unusual optical principle and its sampling strategy allow it to reach a compact design, ideal for the small-satellite constellations investigated by the European project SCARBO. A performance assessment of NanoCarb, as well as a proof of concept, is required in this framework. We have developed a design strategy to optimize the performance. We demonstrate the potential of the concept through an estimate of the sensitivity, compliant with the space-mission target. We also present a preliminary mitigation of the bias induced by water on CO2 and CH4 retrievals, illustrating the efficiency and flexibility of the NanoCarb partial-interferogram sampling technique. The presented design reaches sub-ppm random error for CO2 and sub-10 ppb random error for CH4, considering a 128 km swath and 2 × 2 km2 ground resolution. Design optimization and a more systematic performance analysis are discussed.


2020 ◽  
Vol 74 (4) ◽  
pp. 427-438 ◽  
Author(s):  
Joel Wahl ◽  
Mikael Sjödahl ◽  
Kerstin Ramser

Preprocessing of Raman spectra is generally done in three separate steps: (1) cosmic ray removal, (2) signal smoothing, and (3) baseline subtraction. We show that a convolutional neural network (CNN) can be trained using simulated data to handle all steps in one operation. First, synthetic spectra are created by randomly adding peaks, a baseline, mixing of peaks and baseline with background noise, and cosmic rays. Second, a CNN is trained on the synthetic spectra and their known peaks. The results from preprocessing were generally of higher quality than what was achieved using a reference based on standardized methods (second-difference, asymmetric least squares, cross-validation). From 10^5 simulated observations, 91.4% of predictions had smaller absolute error (RMSE), 90.3% had improved quality (SSIM), and 94.5% had reduced signal-to-noise (SNR) power. The CNN preprocessing generated reliable results on measured Raman spectra from polyethylene, paraffin, and ethanol with background contamination from polystyrene. These results show a promising proof of concept for the automated preprocessing of Raman spectra.
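The synthetic-spectrum generation step can be sketched as follows; peak shapes, ranges, and noise levels are invented placeholders, not the paper's actual simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def synthetic_spectrum(n_points=1000):
    """One training pair: a clean peaks-only target and its corrupted observation,
    built the way the abstract describes (peaks + baseline + noise + cosmic rays)."""
    x = np.arange(n_points)
    clean = np.zeros(n_points)
    for _ in range(rng.integers(3, 10)):          # random Lorentzian-like peaks
        pos = rng.uniform(0, n_points)
        width = rng.uniform(3, 15)
        amp = rng.uniform(0.2, 1.0)
        clean += amp * width**2 / ((x - pos) ** 2 + width**2)
    # slowly varying background, then detector noise
    baseline = 0.5 + 0.3 * np.sin(2 * np.pi * x / n_points + rng.uniform(0, 6))
    noisy = clean + baseline + rng.normal(0, 0.02, n_points)
    for _ in range(rng.integers(0, 3)):           # narrow cosmic-ray spikes
        noisy[rng.integers(0, n_points)] += rng.uniform(2, 8)
    return noisy, clean

noisy, clean = synthetic_spectrum()
print(noisy.shape, clean.shape)
```

A CNN trained on many such (noisy, clean) pairs learns spike removal, smoothing, and baseline subtraction jointly, because the target spectrum already has all three corruptions absent.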

