Automated Dynamic Mascon Generation for GRACE and GRACE-FO Harmonic Processing

2021 ◽  
Vol 13 (16) ◽  
pp. 3134
Author(s):  
Yara Mohajerani ◽  
David Shean ◽  
Anthony Arendt ◽  
Tyler C. Sutterley

Commonly used mass-concentration (mascon) solutions estimated from Level-1B Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On data, provided by processing centers such as the Jet Propulsion Laboratory (JPL) or the Goddard Space Flight Center (GSFC), do not give users control over the placement of mascons or inversion assumptions, such as regularization. While a few studies have focused on regional or global mascon optimization from spherical harmonics data, a global optimization based on the geometry of geophysical signal as a standardized product with user-defined points has not been addressed. Finding the optimal configuration with enough coverage to account for far-field leakage is not a trivial task and is often approached in an ad-hoc manner, if at all. Here, we present an automated approach to defining non-uniform, global mascon solutions that focus on a region of interest specified by the user, while maintaining few global degrees of freedom to minimize noise and leakage. We showcase our approach in High Mountain Asia (HMA) and Alaska, and compare the results with global uniform mascon solutions from range-rate data. We show that the custom mascon solutions can lead to improved regional trends due to a more careful sampling of geophysically distinct regions. In addition, the custom mascon solutions exhibit different seasonal variation compared to the regularized solutions. Our open-source pipeline will allow the community to quickly and efficiently develop optimized global mascon solutions for an arbitrary point or polygon anywhere on the surface of the Earth.
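The placement idea can be sketched with a toy generator: a fine grid of mascon centers inside a user-specified region of interest plus a coarse far-field grid to absorb leakage while keeping global degrees of freedom low. The spacings, radius, and function names below are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def haversine_deg(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in degrees of arc."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return math.degrees(2 * math.asin(math.sqrt(a)))

def mascon_centers(roi_lat, roi_lon, fine_deg=1.0, coarse_deg=10.0, roi_radius_deg=15.0):
    """Non-uniform global grid: fine spacing near the ROI, coarse far field."""
    centers = []
    # coarse global background grid, skipping cells covered by the fine grid
    lat = -90.0 + coarse_deg / 2
    while lat < 90.0:
        lon = -180.0 + coarse_deg / 2
        while lon < 180.0:
            if haversine_deg(lat, lon, roi_lat, roi_lon) > roi_radius_deg:
                centers.append((lat, lon))
            lon += coarse_deg
        lat += coarse_deg
    # fine grid covering the region of interest
    lat = roi_lat - roi_radius_deg
    while lat <= roi_lat + roi_radius_deg:
        lon = roi_lon - roi_radius_deg
        while lon <= roi_lon + roi_radius_deg:
            if haversine_deg(lat, lon, roi_lat, roi_lon) <= roi_radius_deg:
                centers.append((lat, lon))
            lon += fine_deg
        lat += fine_deg
    return centers
```

A call such as `mascon_centers(35.0, 85.0)` (roughly High Mountain Asia) yields hundreds of fine cells near the ROI against a few hundred coarse far-field cells, keeping the total parameter count manageable.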

2008 ◽  
Vol 54 (188) ◽  
pp. 767-777 ◽  
Author(s):  
Scott B. Luthcke ◽  
Anthony A. Arendt ◽  
David D. Rowlands ◽  
John J. McCarthy ◽  
Christopher F. Larsen

Abstract The mass changes of the Gulf of Alaska (GoA) glaciers are computed from the Gravity Recovery and Climate Experiment (GRACE) inter-satellite range-rate data for the period April 2003–September 2007. Through the application of unique processing techniques and a surface mass concentration (mascon) parameterization, the mass variations in the GoA glacier regions have been estimated at high temporal (10 day) and spatial (2 × 2 arc-degree) resolution. The mascon solutions are directly estimated from a reduction of the GRACE K-band inter-satellite range-rate data and, unlike previous GRACE solutions for the GoA glaciers, do not exhibit contamination by leakage from mass change occurring outside the region of interest. The mascon solutions reveal considerable temporal and spatial variation within the GoA glacier region, with the largest negative mass balances observed in the St Elias Mountains, including the Yakutat and Glacier Bay regions. The most rapid losses occurred during the 2004 melt season due to record temperatures in Alaska during that year. The total mass balance of the GoA glacier region was −84 ± 5 Gt a⁻¹, contributing 0.23 ± 0.01 mm a⁻¹ to global sea-level rise from April 2003 through March 2007. Highlighting the large seasonal and interannual variability of the GoA glaciers, the rate determined over the period April 2003–March 2006 is −102 ± 5 Gt a⁻¹, which includes the anomalously high temperatures of 2004 and does not include the large 2007 winter balance-year snowfall. The mascon solutions agree well with regional patterns of glacier mass loss determined from aircraft altimetry and in situ measurements.


1980 ◽  
Vol 3 (1) ◽  
pp. 111-132 ◽  
Author(s):  
Zenon W. Pylyshyn

Abstract The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach (the "proprietary vocabulary hypothesis") is that there is a natural domain of human functioning (roughly what we intuitively associate with perceiving, reasoning, and acting) that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis.

Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the "cognitive impenetrability condition." Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind (called its "functional architecture") from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the "degrees of freedom" problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations.
This in turn requires that the fixed architectural function and the algorithms be independently validated. It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.


2008 ◽  
Vol 2008 ◽  
pp. 1-25 ◽  
Author(s):  
Michel Mandjes ◽  
Werner Scheinhardt

Fluid queues offer a natural framework for analyzing waiting times in a relay node of an ad hoc network. Because of the resource-sharing policy applied, the input and output of these queues are coupled. More specifically, when users wish to transmit data through a specific node, each of them obtains a share of the service capacity to feed traffic into the queue of the node, whereas the remaining fraction is used to serve the queue; this sharing fraction is a free design parameter. Assume now that jobs arrive at the relay node according to a Poisson process, and that they bring along exponentially distributed amounts of data. One regime has been addressed before; the present paper focuses on the intrinsically harder one, namely policies that give more weight to serving the queue. Four performance metrics are considered: (i) the stationary workload of the queue; (ii) the queueing delay, that is, the delay of a "packet" (a fluid particle) that arrives at an arbitrary point in time; (iii) the flow transfer delay; and (iv) the sojourn time, that is, the flow transfer time increased by the time it takes before the last fluid particle of the flow is served. We explicitly compute the Laplace transforms of these random variables.
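A crude Monte Carlo sketch of such a coupled fluid queue (the paper itself works analytically via Laplace transforms): assume, for illustration, that each active flow injects fluid at rate φ·C while the relay queue is served at (1−φ)·C, with full capacity draining the queue when no flow is active. The parameter values and the Euler time-stepping are assumptions made for this toy only.

```python
import random

def simulate_relay_workload(lam=0.3, mu=1.0, phi=0.4, capacity=1.0,
                            horizon=2000.0, dt=0.01, seed=42):
    """Euler simulation of a relay-node fluid queue with coupled input and
    output: flows arrive Poisson(lam) carrying Exp(mu) data; each active flow
    feeds the queue at phi*capacity while (1 - phi)*capacity serves it.
    Returns the time-average workload of the relay queue."""
    rng = random.Random(seed)
    flows = []                     # remaining data of each active flow
    workload, total, steps = 0.0, 0.0, 0
    t = 0.0
    next_arrival = rng.expovariate(lam)
    while t < horizon:
        while t >= next_arrival:   # Poisson flow arrivals
            flows.append(rng.expovariate(mu))
            next_arrival += rng.expovariate(lam)
        if flows:
            fed, remaining = 0.0, []
            for f in flows:
                d = min(f, phi * capacity * dt)   # data this flow injects
                fed += d
                if f - d > 1e-12:
                    remaining.append(f - d)
            flows = remaining
            workload = max(0.0, workload + fed - (1.0 - phi) * capacity * dt)
        else:
            workload = max(0.0, workload - capacity * dt)
        total += workload
        steps += 1
        t += dt
    return total / steps
```

With these rates the flow subsystem is stable and the queue builds only when two or more flows are simultaneously active, so the long-run average workload is small but positive.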


2018 ◽  
Vol 13 (4) ◽  
pp. 34
Author(s):  
T.A. Bubba ◽  
D. Labate ◽  
G. Zanghirati ◽  
S. Bonettini

Region of interest (ROI) tomography has gained increasing attention in recent years due to its potential to reduce radiation exposure and shorten the scanning time. However, tomographic reconstruction from ROI-focused illumination involves truncated projection data and typically results in higher numerical instability, even when the reconstruction problem has a unique solution. To address this problem, both ad hoc analytic formulas and iterative numerical schemes have been proposed in the literature. In this paper, we introduce a novel approach for ROI tomographic reconstruction, formulated as a convex optimization problem with a regularization term based on shearlets. Our numerical implementation consists of an iterative scheme based on the scaled gradient projection method and is tested in the context of fan-beam CT. Our results show that our approach is essentially insensitive to the location of the ROI and remains stable even when the ROI size is rather small.
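The iterative scheme can be illustrated with a stripped-down projected gradient sketch: a plain ℓ1 penalty stands in for the shearlet-domain regularizer, and a fixed step size replaces the scaled gradient projection method's adaptive scaling. Everything here is a simplifying assumption, not the authors' implementation.

```python
def scaled_gradient_projection(A, b, lam=0.1, alpha=0.05, steps=200):
    """Projected gradient sketch for min ||A x - b||^2 + lam * ||x||_1
    subject to x >= 0 (on the nonnegative orthant the l1 term's
    gradient is simply +lam). A is a list of rows."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient of the data term: 2 A^T r
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step plus projection onto the nonnegative orthant
        x = [max(0.0, xj - alpha * (gj + lam)) for xj, gj in zip(x, g)]
    return x
```

For a tiny overdetermined system such as A = [[1,0],[0,1],[1,1]], b = [1,2,3], the iterate converges to the slightly shrunken least-squares solution near (0.98, 1.98).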


Author(s):  
Ladislav Starek ◽  
Milos Musil ◽  
Daniel J. Inman

Abstract Several incompatibilities exist between analytical models and experimentally obtained data for many systems. In particular, finite element analysis (FEA) modeling often produces analytical modal data that does not agree with measured modal data from experimental modal analysis (EMA). These two methods account for the majority of activity in vibration modeling used in industry. The existence of these discrepancies has spawned the discipline of model updating, as summarized in the review articles by Inman (1990), Imregun (1991), and Friswell (1995). In this situation the analytical model is characterized by a large number of degrees of freedom (and hence modes), ad hoc damping mechanisms, and real eigenvectors (mode shapes). The FEM model produces mass, damping, and stiffness matrices which are numerically solved for modal data consisting of natural frequencies, mode shapes, and damping ratios. Common practice is to compare this analytically generated modal data with natural frequencies, mode shapes, and damping ratios obtained from EMA. The EMA data is characterized by a small number of modes, incomplete and complex mode shapes, and non-proportional damping. It is very common in practice for this experimentally obtained modal data to be in minor disagreement with the analytically derived modal data. The point of view taken is that the analytical model is in error and must be refined or corrected based on experimental data. The approach proposed here is to use the results of inverse eigenvalue problems to develop methods for model updating for damped systems. The inverse problem has been addressed by Lancaster and Maroulas (1987) and Starek and Inman (1992, 1993, 1994, 1997), and is summarized for undamped systems in the text by Gladwell (1986). There are many sophisticated model updating methods available. The purpose of this paper is to introduce the use of inverse eigenvalue calculations as a possible approach to solving the model updating problem.
The approach is new, and as such many of the practical and important issues of noise, incomplete data, etc. are not yet resolved. Hence, the method introduced here is only useful for low-order lumped-parameter models of the type used for machines rather than structures. In particular, it will be assumed that the entries and geometry of the lumped components are also known.
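The flavor of an inverse eigenvalue computation can be shown on the smallest possible case: a single-DOF damped oscillator m·x″ + c·x′ + k·x = 0, where one measured complex eigenvalue (with the mass known) determines the damping and stiffness. This toy illustrates the idea only, not the multi-DOF methods cited above.

```python
import cmath

def sdof_from_eigenvalue(m, lam):
    """Recover damping c and stiffness k of m*x'' + c*x' + k*x = 0 from one
    measured eigenvalue lam (a root of m*s^2 + c*s + k = 0), given mass m.
    For an underdamped system the roots are a conjugate pair, so:
      sum of roots     = -c/m  ->  c = -2*m*Re(lam)
      product of roots =  k/m  ->  k =  m*|lam|^2
    """
    c = -2.0 * m * lam.real
    k = m * abs(lam) ** 2
    return c, k
```

A quick round-trip check: compute an eigenvalue of a known (m, c, k) system forward, then feed it back and recover c and k exactly.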


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6524
Author(s):  
Xiaoliang Wang ◽  
Deren Gong ◽  
Yifei Jiang ◽  
Qiankun Mo ◽  
Zeyu Kang ◽  
...  

Spacecraft formation flying (SFF) in highly elliptical orbit (HEO) has attracted a great deal of attention in many space exploration applications, while precise guidance, navigation, and control (GNC) technology, especially precise ranging, is the basis of success for such SFF missions. In this paper, we introduce a novel K-band microwave ranging (MWR) equipment for the on-orbit verification of submillimeter-level precise ranging technology in future HEO SFF missions. The ranging technique is a synchronous dual one-way ranging (DOWR) microwave phase accumulation system, which achieved a ranging accuracy of tens of microns in the laboratory environment. The detailed design and development process of the MWR equipment are provided, ranging error sources are analyzed, and relative orbit dynamic models for HEO formation scenes are given with real perturbations considered. Moreover, an adaptive Kalman filter algorithm incorporating process noise uncertainty is introduced for the SFF relative navigation design. The performance of SFF relative navigation using MWR is tested in a hardware-in-the-loop (HIL) simulation system within a high-precision six-degrees-of-freedom (6-DOF) moving platform. The final estimation errors from MWR using the adaptive filter were less than 35 μm for range and 8.5 μm/s for range rate, demonstrating promising accuracy for future HEO formation mission applications.


2020 ◽  
Vol 12 (19) ◽  
pp. 3197 ◽  
Author(s):  
Vagner G. Ferreira ◽  
Bin Yong ◽  
Kurt Seitz ◽  
Bernhard Heck ◽  
Thomas Grombein

In so-called point-mass modeling, surface densities are represented by point masses, providing only an approximate solution of the surface integral for the gravitational potential. Here, we propose a refinement of point-mass modeling based on a Taylor series expansion in which the zeroth-order approximation is equivalent to the point-mass solution. Simulations show that adding the higher-order terms neglected in point-mass modeling reduces the error of inverted mass changes by up to 90% on global and Antarctic scales. The method provides an alternative to the processing of Level-2 data from the Gravity Recovery and Climate Experiment (GRACE) mission. While the evaluation of surface densities based on the improved point-mass modeling using ITSG-Grace2018 Level-2 data as observations reveals a noise level of approximately 5.77 mm, this figure is 5.02, 6.05, and 5.81 mm for the Center for Space Research (CSR), Goddard Space Flight Center (GSFC), and Jet Propulsion Laboratory (JPL) mascon solutions, respectively. Statistical tests demonstrate that the four solutions are not significantly different (at 95% confidence) over the Antarctic Ice Sheet (AIS), despite the slight differences in their noise levels. Therefore, the estimated noise level for the four solutions indicates the quality of GRACE mass changes over the AIS. Overall, the AIS shows a mass change trend of −7.58 mm/year during 2003–2015 based on the improved point-mass solution, which agrees with the values derived from the mascon solutions.
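The principle behind the refinement can be checked in a one-dimensional analog: the "point-mass" value f(c)·h is the zeroth-order Taylor approximation of a cell integral over [c − h/2, c + h/2], and adding the second-order term f″(c)·h³/24 removes most of the remaining error. The integrand and cell below are arbitrary choices for illustration, not the paper's gravitational kernels.

```python
import math

def cell_integral_point_mass(f, c, h):
    """Zeroth-order ('point-mass') approximation of the integral of f
    over the cell [c - h/2, c + h/2]."""
    return f(c) * h

def cell_integral_refined(f, d2f, c, h):
    """Same cell integral with the second-order Taylor correction
    f''(c) * h^3 / 24 added (d2f is the second derivative of f)."""
    return f(c) * h + d2f(c) * h ** 3 / 24.0
```

For f = exp on the cell around c = 0.3 with h = 0.5 the correction cuts the error by far more than the ~90% quoted above for the gravity inversion.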


2011 ◽  
Vol 18 (2) ◽  
pp. 305-314 ◽  
Author(s):  
Andrzej Skalski ◽  
Paweł Turcza

Heart Segmentation in Echo Images

Cardiovascular system diseases are the major cause of mortality in the world. The most important and widely used tool for assessing the heart state is echocardiography (also abbreviated as ECHO). ECHO images are used, e.g., for locating damage to heart tissues, for calculating cardiac tissue displacement at any arbitrary point, and for deriving useful heart parameters such as size and shape, cardiac output, ejection fraction, and pumping capacity. In this paper, a robust algorithm for heart shape estimation (segmentation) in ECHO images is proposed. It is based on a recently introduced variant of the level set method called level set without edges. This variant takes advantage of region intensity information instead of the gradient magnitude that is typically used. Such an approach guarantees stability and correctness of the algorithm on the border between object and background where the absolute value of the image gradient is small. To ensure meaningful results, the image segmentation is preceded by an automatic Region of Interest (ROI) calculation. The main idea of the ROI calculation is to extract a triangle-like part of the acquired ECHO image using the linear Hough transform, thresholding, and simple mathematics. Additionally, in order to improve image quality, an anisotropic diffusion filter was applied before the ROI calculation. The proposed method has been tested on real echocardiographic image sequences. The derived results confirm the effectiveness of the presented method.
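The ROI step's key ingredient, the linear Hough transform, can be sketched in a few lines: each point votes for every line ρ = x·cos θ + y·sin θ passing through it, and peaks in the accumulator mark straight borders such as the sides of the triangle-like ECHO sector. The resolution parameters below are illustrative assumptions.

```python
import math

def hough_lines(points, n_theta=180, rho_step=1.0, rho_max=200.0):
    """Minimal linear Hough transform over a set of (x, y) points.
    Returns (rho, theta, votes) of the strongest line in the accumulator."""
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_step))
            if 0 <= r < n_rho:
                acc[(r, t)] = acc.get((r, t), 0) + 1
    (r_best, t_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return r_best * rho_step - rho_max, math.pi * t_best / n_theta, votes
```

Points lying on the diagonal y = x, for example, produce a dominant peak near θ = 3π/4 with ρ ≈ 0, which outvotes isolated noise points.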

