A super-Earth and a mini-Neptune around Kepler-59

2019 ◽  
Vol 491 (4) ◽  
pp. 5238-5247 ◽  
Author(s):  
X Saad-Olivera ◽  
C F Martinez ◽  
A Costa de Souza ◽  
F Roig ◽  
D Nesvorný

ABSTRACT We characterize the radii and masses of the star and planets in the Kepler-59 system, as well as their orbital parameters. The stellar parameters are determined through a standard spectroscopic analysis, resulting in a mass of $1.359\pm 0.155\, \mathrm{M}_\odot$ and a radius of $1.367\pm 0.078\, \mathrm{R}_\odot$. The obtained planetary radii are $1.5\pm 0.1\, R_\oplus$ for the inner and $2.2\pm 0.1\, R_\oplus$ for the outer planet. The orbital parameters and the planetary masses are determined by the inversion of Transit Timing Variations (TTV) signals. We consider two different data sets: one provided by Holczer et al. (2016), with TTVs only for Kepler-59c, and the other provided by Rowe et al. (2015), with TTVs for both planets. The inversion method applies an algorithm of Bayesian inference (MultiNest) combined with an efficient N-body integrator (Swift). For each data set, we found two possible solutions, both having the same probability according to their corresponding Bayesian evidences. All four solutions appear to be indistinguishable within their 2-σ uncertainties. However, statistical analyses show that the solutions from the Rowe et al. (2015) data set provide a better characterization. The first solution infers masses of $5.3_{-2.1}^{+4.0}~M_{\mathrm{\oplus }}$ and $4.6_{-2.0}^{+3.6}~M_{\mathrm{\oplus }}$ for the inner and outer planet, respectively, while the second solution gives masses of $3.0^{+0.8}_{-0.8}~M_{\mathrm{\oplus }}$ and $2.6^{+0.9}_{-0.8}~M_{\mathrm{\oplus }}$. These values point to a system with an inner super-Earth and an outer mini-Neptune. A dynamical study shows that the planets have almost co-planar orbits with small eccentricities (e < 0.1), close to the 3:2 mean motion resonance. A stability analysis indicates that this configuration is stable over millions of years of evolution.
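The TTV signals inverted here are deviations of observed mid-transit times from a linear ephemeris. As a minimal illustration (not the authors' MultiNest/Swift pipeline), the sketch below fits a linear ephemeris to synthetic transit times by least squares and takes the residuals as TTVs; the period, epoch, and perturbation amplitude are made-up values.

```python
import numpy as np

# Synthetic mid-transit times (days), for illustration only: a linear
# ephemeris plus a small sinusoidal perturbation mimicking a TTV signal.
epochs = np.arange(40)
period_true, t0_true = 17.98, 131.6          # hypothetical values
ttv_true = 0.01 * np.sin(2 * np.pi * epochs / 11.0)
transit_times = t0_true + period_true * epochs + ttv_true

# Fit a linear ephemeris T(n) = t0 + P * n by least squares.
A = np.vstack([np.ones_like(epochs), epochs]).T
(t0_fit, period_fit), *_ = np.linalg.lstsq(A, transit_times, rcond=None)

# TTVs are the residuals from the best-fitting linear ephemeris.
ttv = transit_times - (t0_fit + period_fit * epochs)
print(f"P = {period_fit:.5f} d, t0 = {t0_fit:.5f} d, "
      f"max |TTV| = {np.abs(ttv).max() * 24 * 60:.2f} min")
```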

2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
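The likelihood machinery rests on a diffusion approximation to allele-frequency dynamics under drift. The sketch below, which is not Snapper or BEAST2 code, simulates Wright-Fisher drift and compares the simulated variance with the diffusion-approximation value; population size, initial frequency, and time span are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Wright-Fisher drift: allele frequency in a population of N diploid
# individuals (2N gene copies), resampled binomially each generation.
N, p0, generations, replicates = 500, 0.3, 100, 5000
p = np.full(replicates, p0)
for _ in range(generations):
    p = rng.binomial(2 * N, p) / (2 * N)

# Under the diffusion approximation, the variance after t generations is
# approximately p0*(1-p0)*(1 - exp(-t/(2N))).
var_diffusion = p0 * (1 - p0) * (1 - np.exp(-generations / (2 * N)))
print(f"simulated var = {p.var():.4f}, diffusion approx = {var_diffusion:.4f}")
```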


1998 ◽  
Vol 185 ◽  
pp. 167-168
Author(s):  
T. Appourchaux ◽  
M.C. Rabello-Soares ◽  
L. Gizon

Two different data sets have been used to derive low-degree rotational splittings. One data set comes from the Luminosity Oscillations Imager of VIRGO on board SOHO; the observations start on 27 March 1996 and end on 26 March 1997, and consist of intensity time series of 12 pixels (Appourchaux et al., 1997, Sol. Phys., 170, 27). The other data set was kindly made available by the GONG project; the observations start on 26 August 1995 and end on 21 August 1996, and consist of complex Fourier spectra of velocity time series for l = 0–9. For the GONG data, the contamination of l = 1 from the spatial aliases of l = 6 and l = 9 required some cleaning. To achieve this, we applied the inverse of the leakage matrix of l = 1, 6 and 9 to the original Fourier spectra of the same degrees; cleaning of all 3 degrees was achieved simultaneously (Appourchaux and Gizon, 1997, these proceedings).
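The cleaning step amounts to a linear de-mixing: stack the Fourier spectra of the coupled degrees and multiply by the inverse of their leakage matrix. A schematic numpy version with a hypothetical 3 × 3 leakage matrix (not the actual GONG leakage values) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 leakage matrix coupling the l = 1, 6 and 9 spectra
# (diagonally dominant: each mode mostly leaks into itself).
L = np.array([[1.00, 0.15, 0.10],
              [0.12, 1.00, 0.08],
              [0.09, 0.07, 1.00]])

# True (unobserved) complex Fourier spectra for the three degrees.
n_freq = 2048
true_spectra = rng.normal(size=(3, n_freq)) + 1j * rng.normal(size=(3, n_freq))

# Observed spectra are leaked mixtures of the true ones.
observed = L @ true_spectra

# Cleaning: apply the inverse leakage matrix to all three degrees at once.
cleaned = np.linalg.inv(L) @ observed
print("max residual after cleaning:", np.abs(cleaned - true_spectra).max())
```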


2013 ◽  
Vol 6 (4) ◽  
pp. 7593-7631 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
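Of the three methods, classical bootstrap (BS) is the simplest to illustrate: resample the input samples, refit the factor model, and use the spread of the refitted factor profiles as an uncertainty estimate. The sketch below uses scikit-learn's NMF as a stand-in for PMF/ME-2 (which are not used here) on a synthetic data matrix:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic "samples x species" data matrix built from two known factors.
profiles = np.array([[5.0, 1.0, 0.2, 0.0],
                     [0.1, 0.5, 3.0, 2.0]])
contributions = rng.gamma(shape=2.0, scale=1.0, size=(200, 2))
X = contributions @ profiles + rng.normal(0, 0.05, size=(200, 4)).clip(min=0)

# Classical bootstrap: resample rows (samples), refit, collect factor profiles.
boot_profiles = []
for _ in range(50):
    idx = rng.integers(0, X.shape[0], size=X.shape[0])
    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    model.fit(X[idx])
    boot_profiles.append(model.components_)

# Spread across bootstrap replicates as a crude uncertainty estimate
# (a real BS run would also map each bootstrap factor to a base-run factor).
spread = np.std(boot_profiles, axis=0)
print("bootstrap std of factor profiles:\n", spread.round(3))
```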


Geophysics ◽  
2000 ◽  
Vol 65 (3) ◽  
pp. 791-803 ◽  
Author(s):  
Weerachai Siripunvaraporn ◽  
Gary Egbert

There are currently three types of algorithms in use for regularized 2-D inversion of magnetotelluric (MT) data. All seek to minimize some functional which penalizes data misfit and model structure. With the most straightforward approach (exemplified by OCCAM), the minimization is accomplished using some variant of a linearized Gauss-Newton approach. A second approach is to use a descent method [e.g., nonlinear conjugate gradients (NLCG)] to avoid the expense of constructing large matrices (e.g., the sensitivity matrix). Finally, approximate methods [e.g., rapid relaxation inversion (RRI)] have been developed which use cheaply computed approximations to the sensitivity matrix to search for a minimum of the penalty functional. Approximate approaches can be very fast, but in practice often fail to converge without significant expert user intervention. On the other hand, the more straightforward methods can be prohibitively expensive to use for even moderate-size data sets. Here, we present a new and much more efficient variant of the OCCAM scheme. By expressing the solution as a linear combination of rows of the sensitivity matrix smoothed by the model covariance (the “representers”), we transform the linearized inverse problem from the M-dimensional model space to the N-dimensional data space. This method is referred to as DASOCC, the data space OCCAM’s inversion. Since generally N ≪ M, this transformation by itself can result in significant computational savings. More importantly, the data space formulation suggests a simple approximate method for constructing the inverse solution. Since MT data are smooth and “redundant,” a subset of the representers is typically sufficient to form the model without significant loss of detail. Computations required for constructing sensitivities and the size of matrices to be inverted can be significantly reduced by this approximation. We refer to this inversion as REBOCC, the reduced basis OCCAM’s inversion. Numerical experiments on synthetic and real data sets with REBOCC, DASOCC, NLCG, RRI, and OCCAM show that REBOCC is faster than both DASOCC and NLCG, which are comparable in speed. All of these methods are significantly faster than OCCAM, but are not competitive with RRI. However, even with a simple synthetic data set, we could not always get RRI to converge to a reasonable solution. The basic idea behind REBOCC should be more broadly applicable, in particular to 3-D MT inversion.
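The core of the data-space formulation is that, for a linearized penalty functional, the M-dimensional model-space normal equations and the N-dimensional data-space system give the same solution, so one can solve the much smaller N × N system. A minimal linear-algebra sketch (generic regularized least squares, not the REBOCC/DASOCC code) is:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 2000, 50                      # model and data dimensions, N << M
J = rng.normal(size=(N, M))          # sensitivity (Jacobian) matrix
Cm = np.eye(M)                       # model covariance (identity for brevity)
Cd = 0.01 * np.eye(N)                # data error covariance
d = rng.normal(size=N)               # observed data (synthetic)
lam = 1.0                            # regularization trade-off parameter

# Model-space solution: requires solving an M x M system.
m_model_space = np.linalg.solve(
    J.T @ np.linalg.inv(Cd) @ J + lam * np.linalg.inv(Cm),
    J.T @ np.linalg.inv(Cd) @ d)

# Data-space solution: m = Cm J^T b, with b from an N x N system only.
b = np.linalg.solve(J @ Cm @ J.T + lam * Cd, d)
m_data_space = Cm @ J.T @ b

print("max difference between the two solutions:",
      np.abs(m_model_space - m_data_space).max())
```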


2011 ◽  
Vol 29 (7) ◽  
pp. 1317-1330 ◽  
Author(s):  
I. Fiorucci ◽  
G. Muscari ◽  
R. L. de Zafra

Abstract. The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that v1 and v2 profiles are overall consistent. The main difference is at the HNO3 mixing ratio maximum in the 20–25 km altitude range, which is smaller in v2 than in v1 profiles by up to 2 ppbv at mid-latitudes and during the Antarctic fall. This difference suggests a better agreement of GBMS HNO3 v2 profiles with both UARS/MLS and EOS Aura/MLS HNO3 data than previous v1 profiles.
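The sensitivity quoted above comes from the averaging kernel matrix of an Optimal Estimation retrieval. The sketch below builds a toy linear forward model (not the GBMS forward model), forms the OE gain and averaging kernel matrices, and sums the elements of each row of the averaging kernel matrix to obtain the measurement response; all covariances and weighting functions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: n_alt retrieval levels, n_chan spectral channels,
# with smooth Gaussian weighting functions (purely illustrative).
n_alt, n_chan = 40, 60
alt = np.linspace(15, 55, n_alt)                 # km
centers = np.linspace(18, 50, n_chan)
K = np.exp(-0.5 * ((alt[None, :] - centers[:, None]) / 5.0) ** 2)

Se = np.eye(n_chan) * 0.1 ** 2                   # measurement noise covariance
Sa = np.eye(n_alt) * 1.0 ** 2                    # a priori covariance

# Optimal Estimation gain and averaging kernel matrices.
G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                    K.T @ np.linalg.inv(Se))
A = G @ K

# Measurement response ("sensitivity"): sum of the elements of each row of A.
sensitivity = A.sum(axis=1)
print("sensitivity at 20, 35, 50 km:",
      np.interp([20, 35, 50], alt, sensitivity).round(2))
```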


1999 ◽  
Vol 11 ◽  
pp. 169-198 ◽  
Author(s):  
D. Opitz ◽  
R. Maclin

An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
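A compact way to reproduce the flavor of this comparison is scikit-learn's bagged and boosted decision trees versus a single tree; the data set below is synthetic and the hyperparameters are illustrative, not those used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification problem standing in for one of the 23 data sets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

single = DecisionTreeClassifier(random_state=0)
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                            random_state=0)
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                              n_estimators=25, random_state=0)

# Compare cross-validated accuracy of the single classifier and the ensembles.
for name, clf in [("single tree", single), ("bagging", bagging),
                  ("boosting", boosting)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```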


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M1-M13 ◽  
Author(s):  
Yichuan Wang ◽  
Igor B. Morozov

For seismic monitoring of injected fluids during enhanced oil recovery or geologic CO2 sequestration, it is useful to measure time-lapse (TL) variations of acoustic impedance (AI). AI gives direct connections to the mechanical and fluid-related properties of the reservoir or CO2 storage site; however, evaluation of its subtle TL variations is complicated by the low-frequency and scaling uncertainties of this attribute. We have developed three enhancements of TL AI analysis to resolve these issues. First, following waveform calibration (cross-equalization) of the monitor seismic data sets to the baseline one, the reflectivity difference was evaluated from the attributes measured during the calibration. Second, a robust approach to AI inversion was applied to the baseline data set, based on calibration of the records by using the well-log data and spatially variant stacking and interval velocities derived during seismic data processing. This inversion method is straightforward and does not require subjective selections of parameterization and regularization schemes. Unlike joint or statistical inverse approaches, this method does not require prior models and produces accurate fitting of the observed reflectivity. Third, the TL AI difference is obtained directly from the baseline AI and reflectivity difference, without the uncertainty-prone subtraction of AI volumes from different seismic vintages. The above approaches are applied to TL data sets from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality baseline and TL AI-difference volumes are obtained. TL variations within the reservoir zone are observed in the calibration time-shift, reflectivity-difference, and AI-difference images, which are interpreted as being related to the CO2 injection.
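The link between reflectivity and acoustic impedance exploited in TL AI analysis is the standard layer recursion AI[i+1] = AI[i](1 + r[i])/(1 - r[i]). The sketch below (a generic illustration, not the authors' calibrated inversion) builds baseline and monitor AI traces from a synthetic reflectivity series plus a small hypothetical injection-related perturbation and differences them:

```python
import numpy as np

rng = np.random.default_rng(0)

def impedance_from_reflectivity(ai_top, r):
    """Recursive acoustic-impedance trace from a reflectivity series:
    AI[i+1] = AI[i] * (1 + r[i]) / (1 - r[i])."""
    ai = np.empty(len(r) + 1)
    ai[0] = ai_top
    for i, ri in enumerate(r):
        ai[i + 1] = ai[i] * (1 + ri) / (1 - ri)
    return ai

# Synthetic baseline reflectivity and a small time-lapse perturbation
# mimicking a fluid-substitution effect in a reservoir interval.
r_base = rng.normal(0, 0.02, size=200)
dr = np.zeros_like(r_base)
dr[120:140] = -0.01                      # hypothetical injection-related change

ai_base = impedance_from_reflectivity(5.0e6, r_base)            # kg m^-2 s^-1
ai_monitor = impedance_from_reflectivity(5.0e6, r_base + dr)

# Time-lapse AI difference (computed here from the two traces; the paper
# obtains it directly from the baseline AI and the reflectivity difference).
dai = ai_monitor - ai_base
print("max |TL AI change| relative to baseline:",
      f"{np.abs(dai / ai_base).max():.3%}")
```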


Geophysics ◽  
2022 ◽  
pp. 1-59
Author(s):  
Fucai Dai ◽  
Feng Zhang ◽  
Xiangyang Li

SS-waves (SV-SV waves and SH-SH waves) are capable of inverting S-wave velocity (VS) and density (ρ) because they are sensitive to both parameters. SH-SH waves can be separated from multicomponent data sets more effectively than SV-SV waves because the former are decoupled from the PP-wave in isotropic media. In addition, the SH-SH wave can be better modeled than the SV-SV wave in the case of strong velocity/impedance contrast, because the SV-SV wave has multiple critical angles, some of which can be quite small when the velocity/impedance contrast is strong. We derived an approximate equation for the SH-SH wave reflection coefficient as a function of VS and ρ in natural logarithm variables. The approximation has high accuracy, and it enables the inversion of VS and ρ in a direct manner. Both coefficients corresponding to VS and ρ are “model-parameter independent,” and thus there is no need for a prior estimate of any model parameter in the inversion. We then developed an SH-SH wave inversion method and demonstrated it using synthetic data sets and a real SH-SH wave prestack data set from the west of China. We found that VS and ρ can be reliably estimated from SH-SH waves of small angles.
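To show how a two-parameter reflection-coefficient approximation enables direct inversion, the sketch below uses one common small-contrast linearization of the SH-SH reflection coefficient (sign conventions vary, and this is not necessarily the exact equation derived in the paper) and recovers Δln VS and Δln ρ by least squares from noisy synthetic amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# One common small-contrast linearization of the SH-SH reflection
# coefficient (illustrative only; sign conventions vary):
#   R(theta) ~= -0.5*(1 - tan^2 theta) * d(ln Vs) - 0.5 * d(ln rho)
theta = np.radians(np.arange(5, 41, 5))
a = -0.5 * (1 - np.tan(theta) ** 2)       # coefficient of d(ln Vs)
b = -0.5 * np.ones_like(theta)            # coefficient of d(ln rho)

dlnvs_true, dlnrho_true = 0.08, 0.04
r_obs = a * dlnvs_true + b * dlnrho_true + rng.normal(0, 0.002, size=theta.size)

# Direct two-parameter least-squares inversion for d(ln Vs) and d(ln rho).
G = np.column_stack([a, b])
(dlnvs, dlnrho), *_ = np.linalg.lstsq(G, r_obs, rcond=None)
print(f"d ln Vs = {dlnvs:.3f} (true {dlnvs_true}), "
      f"d ln rho = {dlnrho:.3f} (true {dlnrho_true})")
```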


2020 ◽  
Vol 221 (1) ◽  
pp. 586-602 ◽  
Author(s):  
Bin Liu ◽  
Yonghao Pang ◽  
Deqiang Mao ◽  
Jing Wang ◽  
Zhengyu Liu ◽  
...  

SUMMARY 4-D electrical resistivity tomography (ERT), an important geophysical method, is widely used to observe dynamic processes within static subsurface structures. However, because data acquisition and inversion consume large amounts of time, rapid changes that occur in the medium during a single acquisition cycle are difficult to detect in a timely manner via 4-D inversion. To address this issue, a scheme is proposed in this paper for restructuring continuously measured data sets and performing GPU-parallelized inversion. In this scheme, multiple reference time points are selected in an acquisition cycle, which allows all of the acquired data to be sequentially utilized in a 4-D inversion. In addition, the response of the 4-D inversion to changes in the medium has been enhanced by increasing the weight of new data being added dynamically to the inversion process. To improve the reliability of the inversion, our scheme uses actively varied time-regularization coefficients, which are adjusted according to the range of the changes in model resistivity; this range is predicted by taking the ratio between the independent inversion of the current data set and the historical 4-D inversion model. Numerical simulations and experiments show that this new 4-D inversion method is able to locate and depict rapid changes in medium resistivity with a high level of accuracy.
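The role of the time-regularization coefficient can be seen in a linearized toy problem: the objective combines data misfit with a penalty pulling the model toward the previous-time model, and a larger predicted change argues for a weaker penalty. This sketch is generic least squares, not the GPU-parallelized ERT code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearized toy problem: d = G m + noise, with a previous-time model m_prev
# and a time-regularization coefficient alpha.
n_data, n_cells = 80, 120
G = rng.normal(size=(n_data, n_cells))
m_prev = np.ones(n_cells)
m_true = m_prev.copy()
m_true[40:60] += 0.5                      # rapid local resistivity change
d = G @ m_true + rng.normal(0, 0.05, size=n_data)

def timelapse_solve(alpha):
    """Minimize ||d - G m||^2 + alpha * ||m - m_prev||^2 via an
    augmented least-squares system."""
    A = np.vstack([G, np.sqrt(alpha) * np.eye(n_cells)])
    b = np.concatenate([d, np.sqrt(alpha) * m_prev])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# A large expected change (e.g. predicted from an independent inversion of
# the newest data) argues for a smaller time-regularization coefficient.
for alpha in (10.0, 1.0, 0.1):
    m = timelapse_solve(alpha)
    print(f"alpha = {alpha:5.1f}: recovered mean change in cells 40-60 = "
          f"{(m - m_prev)[40:60].mean():.3f} (true 0.500)")
```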


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1606
Author(s):  
Daniela Onita ◽  
Adriana Birlutiu ◽  
Liviu P. Dinu

Images and text are types of content that are used together to convey a message. The process of mapping images to text can provide very useful information and can be included in many applications, from the medical domain and applications for blind people to social networking. In this paper, we investigate an approach for mapping images to text using a Kernel Ridge Regression model. We considered two types of features: simple RGB pixel-value features and image features extracted with deep-learning approaches. We investigated several neural network architectures for image feature extraction: VGG16, Inception V3, ResNet50, and Xception. The experimental evaluation was performed on three data sets from different domains. The texts associated with the images are objective descriptions for two of the three data sets and subjective descriptions for the other data set. The experimental results show that the more complex deep-learning approaches used for feature extraction perform better than the simple RGB pixel-value approaches. Moreover, the ResNet50 architecture performs best among the four deep network architectures considered for extracting image features; the model error obtained with ResNet50 is lower by approximately 0.30 than with the other architectures. We extracted natural language descriptors of images and compared the original and generated descriptive words. Furthermore, we investigated whether there is a difference in performance between the types of text associated with the images: subjective or objective. The proposed model generated descriptions more similar to the original ones for the data set containing objective descriptions, whose vocabulary is simpler, larger, and clearer.
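The mapping itself is a multi-output Kernel Ridge Regression from image-feature vectors to text-representation vectors. The sketch below uses randomly generated stand-ins for the ResNet50 features and text vectors (no real images or texts are involved) to show the fitting and prediction steps with scikit-learn:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for the real pipeline: rows of X would be image features
# (e.g. ResNet50 activations), rows of Y would be vector representations
# of the associated texts (e.g. bag-of-words or embedding vectors).
n_images, n_img_feat, n_txt_feat = 300, 256, 50
X = rng.normal(size=(n_images, n_img_feat))
W = rng.normal(size=(n_img_feat, n_txt_feat))
Y = np.tanh(X @ W) + rng.normal(0, 0.1, size=(n_images, n_txt_feat))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Kernel Ridge Regression with an RBF kernel; one model jointly predicts
# all text-feature dimensions (multi-output regression).
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
model.fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

mse = np.mean((Y_pred - Y_te) ** 2)
print(f"held-out mean squared error: {mse:.4f}")
```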

