New estimates of the mass and density of asteroid (16) Psyche

2020 ◽  
Author(s):  
Lauri Siltala ◽  
Mikael Granvik

Asteroid mass determination is performed by analyzing an asteroid's gravitational interaction with another object, such as a spacecraft, Mars, a companion in the case of binary asteroids, or a separate asteroid during a close encounter. During asteroid-asteroid close encounters, perturbations caused by the masses of larger asteroids can be detected in the post-encounter orbits of the smaller test asteroids involved in such encounters. This can be described as an inverse problem in which the aim is to fit six orbital elements for each asteroid and mass(es) for the perturbing asteroid(s), for a total of at least 13 parameters, more if additional asteroid-asteroid encounters are modeled simultaneously.

To solve this inverse problem, which is traditionally done with least-squares methods, we have implemented a Markov-chain Monte Carlo (MCMC) based solution and recently (Siltala & Granvik 2020) reported, among other results, significantly lower-than-expected masses and densities for the asteroid (16) Psyche in particular. Psyche is an interesting and topical object, as it is the target of NASA's eponymous Psyche mission and is commonly thought to be of metallic or stony-iron composition, which our previous density estimates disagreed with. In our previous work, our two separate mass estimates for Psyche were each based on modeling encounters with two separate test asteroids. Since then we have further refined our mass estimate for Psyche by simultaneously using eight separate test asteroids for this object, significantly increasing the amount of observational data included in the model, which in turn narrows the uncertainties of our results at the cost of additional model complexity. Here we report and discuss our latest results for the mass of Psyche based on this case and compute corresponding densities based on existing literature values for the volume.

We obtain a mass of (0.972 ± 0.148) × 10⁻¹¹ solar masses for Psyche, corresponding to a bulk density of (3.37 ± 0.58) g/cm³, which is higher than our previous results while remaining consistent with them considering the uncertainties involved. It nevertheless remains lower than other previous literature values. We compare our results to these literature values and briefly discuss possible physical implications.

In addition, due to previous interest from the scientific community, we have also computed mass estimates for Ceres and Vesta, both of which already have very precisely known masses from the Dawn mission. As such, our results for these two asteroids are not of direct scientific interest, but they serve as a useful benchmark to verify that our algorithm provides realistic results, as we have 'ground truth' values to compare against. We find that in both cases our results are in line with those of Dawn.
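The MCMC mass fit described above can be illustrated with a toy Metropolis sampler. This is a minimal sketch, not the authors' pipeline: a single "mass" parameter is recovered from synthetic noisy data, standing in for the full fit of six orbital elements per asteroid plus perturber mass(es); all numeric values are invented for illustration.

```python
import random, math, statistics

# Toy Metropolis MCMC sketch: recover a single "mass" parameter from noisy
# synthetic data. A stand-in for the real 13+-parameter orbital fit.
random.seed(42)

TRUE_MASS = 0.97            # arbitrary units; all numbers here are invented
SIGMA_OBS = 0.15
obs = [random.gauss(TRUE_MASS, SIGMA_OBS) for _ in range(50)]

def log_likelihood(m):
    # Gaussian measurement model; a flat prior on m is implied
    return -sum((o - m) ** 2 for o in obs) / (2 * SIGMA_OBS ** 2)

def metropolis(n_steps=20000, step=0.05, burn_in=5000):
    m, logl = 0.5, log_likelihood(0.5)   # deliberately poor starting guess
    chain = []
    for _ in range(n_steps):
        prop = m + random.gauss(0, step)
        prop_logl = log_likelihood(prop)
        if math.log(random.random()) < prop_logl - logl:
            m, logl = prop, prop_logl    # accept the proposal
        chain.append(m)
    return chain[burn_in:]               # discard burn-in samples

chain = metropolis()
est, err = statistics.mean(chain), statistics.stdev(chain)
print(f"mass = {est:.3f} +/- {err:.3f}")
```

The chain mean and standard deviation play the role of the reported mass and its uncertainty; the least-squares alternative would instead return only a point estimate and a covariance approximation.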

2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Thomas G. Rizzo ◽  
George N. Wojcik

Abstract Extra dimensions have proven to be a very useful tool in constructing new physics models. In earlier work, we began investigating toy models for the 5-D analog of the kinetic mixing/vector portal scenario, where the interactions of dark matter, taken to be, e.g., a complex scalar, with the brane-localized fields of the Standard Model (SM) are mediated by a massive U(1)D dark photon living in the bulk. These models were shown to have many novel features differentiating them from their 4-D analogs, which in several cases allowed them to avoid some well-known 4-D model-building constraints. However, these gains came at the cost of a fair amount of model complexity, e.g., dark matter Kaluza-Klein excitations. In the present paper, we consider an alternative setup wherein the dark matter and the dark Higgs, responsible for U(1)D breaking, are both localized to the ‘dark’ brane at the opposite end of the 5-D interval from where the SM fields are located, with only the dark photon now being a 5-D field. The phenomenology of such a setup is explored for both flat and warped extra dimensions and compared to the previous more complex models.


2019 ◽  
Vol 623 ◽  
pp. A6 ◽  
Author(s):  
R. JL. Fétick ◽  
L. Jorda ◽  
P. Vernazza ◽  
M. Marsset ◽  
A. Drouard ◽  
...  

Context. Over the past decades, several interplanetary missions have studied small bodies in situ, leading to major advances in our understanding of their geological and geophysical properties. These missions, however, have had a limited number of targets. Among them, the NASA Dawn mission has characterised in detail the topography and albedo variegation across the surface of asteroid (4) Vesta down to a spatial resolution of ~20 m pixel⁻¹. Aims. Here our aim was to determine how much topographic and albedo information can be retrieved from the ground with VLT/SPHERE in the case of Vesta, for which a former space mission (Dawn) provides the ground truth that can be used as a benchmark. Methods. We observed Vesta with VLT/SPHERE/ZIMPOL as part of our ESO large programme (ID 199.C-0074) at six different epochs, and deconvolved the collected images with a parametric point spread function (PSF). We then compared our images with synthetic views of Vesta generated from the 3D shape model of the Dawn mission, onto which we projected Vesta’s albedo information. Results. We show that the deconvolution of the VLT/SPHERE images with a parametric PSF allows the retrieval of the main topographic and albedo features present across the surface of Vesta down to a spatial resolution of ~20–30 km. Contour extraction shows an accuracy of ~1 pixel (3.6 mas). The present study provides the very first quantitative estimate of the accuracy of ground-based adaptive-optics imaging observations of asteroid surfaces. Conclusions. In the case of Vesta, the upcoming generation of 30–40 m telescopes (ELT, TMT, GMT) should in principle be able to resolve all of the main features present across its surface, including the troughs and the north–south crater dichotomy, provided that they operate at the diffraction limit.


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhengqiao Zhao ◽  
Alexandru Cristian ◽  
Gail Rosen

Abstract Background It is a computational challenge for current metagenomic classifiers to keep up with the pace of training data generated from genome sequencing projects, such as the exponentially growing NCBI RefSeq bacterial genome database. When new reference sequences are added to the training data, statically trained classifiers must be rerun on all data, a highly inefficient process. The rich literature of “incremental learning” addresses the need to update an existing classifier to accommodate new data without sacrificing much accuracy compared to retraining the classifier with all data. Results We demonstrate how classification improves over time by incrementally training a classifier on progressive RefSeq snapshots and testing it on: (a) all known current genomes (as a ground truth set) and (b) a real experimental metagenomic gut sample. We demonstrate that as a classifier model’s knowledge of genomes grows, classification accuracy increases. The proof-of-concept naïve Bayes implementation, when updated yearly, runs in a quarter of the non-incremental time with no loss of accuracy. Conclusions It is evident that classification improves by having the most current knowledge at its disposal. It is therefore of utmost importance to make classifiers computationally tractable enough to keep up with the data deluge. The incremental learning classifier can be updated efficiently without reprocessing, or even accessing, the existing database, saving storage as well as computational resources.
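The incremental-update idea can be sketched with a minimal hand-rolled multinomial naive Bayes classifier whose counts are updated in place, so a new reference genome can be added without touching the data already ingested. This is an illustrative sketch, not the paper's RefSeq pipeline; the class labels and short tokens below are invented stand-ins for genomes and k-mers.

```python
from collections import defaultdict
import math

class IncrementalNB:
    """Multinomial naive Bayes whose sufficient statistics are plain counts,
    so new training data can be folded in without reprocessing old data."""
    def __init__(self):
        self.class_counts = defaultdict(int)     # training docs per class
        self.feature_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def partial_fit(self, docs, labels):
        # Incremental update: only the new documents are touched.
        for doc, y in zip(docs, labels):
            self.class_counts[y] += 1
            for token in doc:
                self.feature_counts[y][token] += 1
                self.vocab.add(token)

    def predict(self, doc):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for y, n in self.class_counts.items():
            lp = math.log(n / total)             # log prior
            denom = sum(self.feature_counts[y].values()) + len(self.vocab)
            for token in doc:
                # Laplace-smoothed log likelihood per token
                lp += math.log((self.feature_counts[y][token] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

clf = IncrementalNB()
clf.partial_fit([["acgt", "acga"], ["ttgc", "ttga"]], ["genomeA", "genomeB"])
# Later snapshot: fold in one new reference; old data is never re-read.
clf.partial_fit([["acgt", "acgg"]], ["genomeA"])
print(clf.predict(["acgt"]))
```

Because the model is just counts, each yearly snapshot costs only the new sequences, which is the source of the quoted 4× speedup over static retraining.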


2016 ◽  
Vol 2 (1) ◽  
pp. 711-714 ◽  
Author(s):  
Daniel Laidig ◽  
Sebastian Trimpe ◽  
Thomas Seel

Abstract We examine the usefulness of event-based sampling approaches for reducing communication in inertial-sensor-based analysis of human motion. To this end we consider real-time measurement of the knee joint angle during walking, employing a recently developed sensor fusion algorithm. We simulate the effects of different event-based sampling methods on a large set of experimental data with ground truth obtained from an external motion capture system. This results in a reduced wireless communication load at the cost of a slightly increased error in the calculated angles. The proposed methods are compared in terms of the best balance between these two aspects. We show that the transmitted data can be reduced by 66% while maintaining the same level of accuracy.
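One standard event-based scheme of the kind compared here is send-on-delta sampling: transmit a sample only when it deviates from the last transmitted value by more than a threshold, with the receiver holding the last value otherwise. The sketch below applies it to a synthetic knee-angle signal (invented, not the paper's data or specific methods) and measures the trade-off between transmission reduction and reconstruction error.

```python
import math

# Synthetic knee-angle signal: ~1 Hz gait cycle sampled at 100 Hz.
dt = 0.01
angles = [30 + 25 * math.sin(2 * math.pi * 1.0 * t * dt) for t in range(500)]

def send_on_delta(signal, threshold):
    """Transmit only when the signal moves > threshold from the last
    transmitted value; the receiver zero-order-holds between events."""
    sent, reconstructed, last = 0, [], None
    for x in signal:
        if last is None or abs(x - last) > threshold:
            last = x
            sent += 1                 # this sample is transmitted
        reconstructed.append(last)    # receiver holds last received value
    return sent, reconstructed

sent, rec = send_on_delta(angles, threshold=2.0)
reduction = 1 - sent / len(angles)
max_err = max(abs(a, ) if False else abs(a - b) for a, b in zip(angles, rec))
print(f"transmissions reduced by {reduction:.0%}, max error {max_err:.2f} deg")
```

Raising the threshold trades fewer transmissions for larger hold errors; sweeping it reproduces the kind of communication/accuracy curve the paper uses to compare methods.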


1995 ◽  
Vol 73 (8) ◽  
pp. 1387-1395 ◽  
Author(s):  
Marco Cucco ◽  
Giorgio Malacarne

Variation in parental effort of Pallid Swifts (Apus pallidus) was investigated for 3 years in a colony in northwestern Italy. The masses of adults and of bolus loads brought to chicks were monitored by electronic balances inserted under nests, and feeding rates were monitored by video cameras. Fluctuations in daily food availability were measured with an insect-suction trap. Manipulation experiments on broods originally consisting of three chicks were performed to increase (four chicks) or reduce (two chicks) adult effort, with the aim of determining if parents tend to allocate food primarily to themselves or to their offspring, and if mass loss in adults results from reproductive stress or from adaptive programmed anorexia. With the enlargement of brood size, mean bolus mass remained constant, but the visitation rate increased significantly. Daily food abundance did not influence the amount of food allocated to chicks (neither time spent foraging nor the bolus mass changed), but positively influenced the mass of adults, which showed large daily variations. These results indicate that parents tend to invest constantly in offspring, at their own expense when food is scarce. Our data lend support to the cost of reproduction hypothesis instead of adaptive anorexia, since adults lose mass mainly in the brooding period, when demand is highest, and always regain mass when prey availability is greater.


1996 ◽  
Vol 118 (4) ◽  
pp. 641-648 ◽  
Author(s):  
Izuru Takewaki ◽  
Tsuneyoshi Nakamura ◽  
Yasumasa Arita

A hybrid inverse mode problem is formulated for a fixed-fixed mass-spring model. A problem of eigenvalue analysis and its inverse problem are combined in this hybrid inverse mode formulation. It is shown that if all the masses and the mid-span stiffnesses of the model are prescribed, then the stiffnesses of the left and right spans (side-spans) can be found for a specified lowest eigenvalue and a specified set of lowest-mode drifts in the side-spans. Sufficient conditions are introduced and proved for a specified eigenvalue and a specified set of drifts in the side-spans to yield positive side-span stiffnesses and to correspond to the lowest eigenvibration. A set of solution stiffnesses in the side-spans is derived uniquely in closed form.


2015 ◽  
Vol 10 (S318) ◽  
pp. 212-217
Author(s):  
E. V. Pitjeva ◽  
N. P. Pitjev

Abstract An estimation of the mass of the main asteroid belt was made on the basis of the new version of the EPM2014 ephemerides of the Institute of Applied Astronomy of the Russian Academy of Sciences, using about 800 000 positional observations of planets and spacecraft. We obtained individual estimates of the masses of large asteroids from radar data, as well as estimates of the masses of asteroids with known diameters using estimated average densities for the three taxonomic types (C, S, M), and used the known mass values of binary asteroids and of asteroids approached by spacecraft. A two-dimensional homogeneous annulus with dimensions corresponding to the observed width of the main asteroid belt (2.06 au to 3.27 au) was used instead of the previous massive one-dimensional ring for modeling the total perturbation from small asteroids. The obtained value of the total mass of the main asteroid belt is (12.25 ± 0.19) × 10⁻¹⁰ M⊙.
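As a quick sanity check on the quoted figure, the belt mass can be converted to kilograms and compared with the Dawn-era mass of Ceres. The solar and Ceres masses below are standard literature values assumed for this illustration, not taken from the abstract.

```python
# Unit check: (12.25 ± 0.19) × 10⁻¹⁰ solar masses in kilograms,
# and the fraction of the belt accounted for by Ceres alone.
M_SUN = 1.989e30           # kg (assumed standard value)
M_CERES = 9.38e20          # kg (Dawn-era literature value, assumed)

belt_mass = 12.25e-10 * M_SUN
belt_err = 0.19e-10 * M_SUN

print(f"belt mass = ({belt_mass:.2e} +/- {belt_err:.2e}) kg")
print(f"Ceres alone is {M_CERES / belt_mass:.0%} of the belt mass")
```

The result, roughly 2.4 × 10²¹ kg with Ceres contributing a bit under 40%, is consistent with the common statement that Ceres holds about a third of the belt's mass.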


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Shou-Lei Wang ◽  
Yu-Fei Yang ◽  
Yu-Hua Zeng

The estimation of implied volatility is a typical PDE inverse problem. In this paper, we propose the TV-L1 model for identifying the implied volatility. The optimal volatility function is found by minimizing a cost functional measuring the discrepancy. The gradient is computed via the adjoint method, which provides an exact value of the gradient needed for the minimization procedure. We use the limited-memory quasi-Newton algorithm (L-BFGS) to find the optimum, and numerical examples show the effectiveness of the presented method.


2021 ◽  
Vol 2092 (1) ◽  
pp. 012001
Author(s):  
Yu Jiang ◽  
Gen Nakamura ◽  
Kenji Shirota

Abstract This paper deals with an inverse problem: recovering the viscoelasticity of a living body from MRE (Magnetic Resonance Elastography) data. Based on a viscoelastic partial differential equation whose solution can approximately simulate MRE data, the inverse problem is transformed into a least-squares variational problem: we search for viscoelastic coefficients of this equation such that the solution to a boundary value problem of the equation approximately fits the MRE data with respect to a least-squares cost function. By computing the Gateaux derivative of the cost function, we minimize it with a projected gradient method to recover the unknown coefficients. Reconstruction results based on simulated data and real experimental data are presented and discussed.
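The projected gradient iteration named above can be illustrated on a toy least-squares problem. The 2×2 linear system below is an invented stand-in for the PDE-constrained MRE fit, and the projection enforces the physical constraint that the recovered coefficients stay positive.

```python
# Projected gradient descent on a toy cost 0.5 * ||A c - d||^2,
# with projection onto the positive orthant (coefficients must be > 0).
# A and c_true are invented; d is the synthetic "measured" data.
A = [[2.0, 1.0], [1.0, 3.0]]
c_true = [1.5, 0.5]
d = [A[i][0] * c_true[0] + A[i][1] * c_true[1] for i in range(2)]

def grad(c):
    # gradient of the least-squares cost: A^T (A c - d)
    r = [A[i][0] * c[0] + A[i][1] * c[1] - d[i] for i in range(2)]
    return [A[0][j] * r[0] + A[1][j] * r[1] for j in range(2)]

def project(c, lower=1e-6):
    # keep coefficients physically admissible (strictly positive)
    return [max(x, lower) for x in c]

c, step = [0.1, 0.1], 0.05
for _ in range(2000):
    g = grad(c)
    c = project([c[j] - step * g[j] for j in range(2)])
print(c)
```

In the actual problem the gradient comes from the Gateaux derivative via an adjoint PDE solve rather than a matrix product, but the project-after-step structure is the same.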


2021 ◽  
Author(s):  
Viktória Burkus ◽  
Attila Kárpáti ◽  
László Szécsi

Surface reconstruction for particle-based fluid simulation is a computational challenge on par with the simulation itself. In real-time applications, splatting-style rendering approaches based on forward rendering of particle impostors are prevalent, but they suffer from noticeable artifacts. In this paper, we present a technique that combines forward rendering of simulated features with deep-learning image manipulation to improve the rendering quality of splatting-style approaches to be perceptually similar to ray-tracing solutions, circumventing the cost, complexity, and limitations of exact fluid surface rendering by replacing it with the flat cost of a neural network pass. Our solution is based on the idea of training generative deep neural networks with image pairs consisting of cheap particle impostor renders and ground-truth high-quality ray-traced images.

