consistent estimate
Recently Published Documents

TOTAL DOCUMENTS: 99 (five years: 27)
H-INDEX: 18 (five years: 1)

2021 · Vol 118 (51) · pp. e2116083118
Author(s): Meng Guo, Jun Korenaga

Halogens are important tracers of various planetary formation and evolution processes, and an accurate understanding of their abundances in the Earth's silicate reservoirs can help us reconstruct the history of interactions among the mantle, atmosphere, and oceans. Previous studies of halogen abundances in the bulk silicate Earth (BSE) were based on the assumption of constant element abundance ratios, which we show results in a gross underestimation of the BSE halogen budget. Here we present a more robust approach using a log-log linear model. Using this method, we provide an internally consistent estimate of halogen abundances in the depleted mid-ocean ridge basalt (MORB)-source mantle, the enriched ocean island basalt (OIB)-source mantle, the depleted mantle, and the BSE. Unlike previous studies, our results suggest that halogens in the BSE are not more depleted than elements of similar volatility, indicating sufficient halogen retention during planetary accretion. Based on halogen abundances in the depleted mantle and the BSE, we estimate that ∼87% of all stable halogens reside in the present-day mantle. Given our understanding of the history of mantle degassing and the evolution of crustal recycling, the revised halogen budget suggests that the deep halogen cycle is characterized by efficient degassing in the early Earth and subsequent net regassing over the rest of Earth's history. Such an evolution of the deep halogen cycle presents a major step toward a more comprehensive understanding of ancient ocean alkalinity, which affects carbon partitioning within the hydrosphere, the stability of crustal and authigenic minerals, and the development of early life.
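To make the methodological contrast concrete, here is a minimal sketch (synthetic data; the element pair and reservoir composition are hypothetical, not the paper's dataset) of how a log-log linear fit differs from the constant-ratio assumption when extrapolating a halogen abundance:

```python
# Illustrative only: a log-log linear model between a halogen and a reference
# element across synthetic basalt samples, versus a constant-ratio model.
import numpy as np

rng = np.random.default_rng(0)
k = 10 ** rng.uniform(2, 4, 200)                                 # reference element (ppm)
cl = 10 ** (0.8 * np.log10(k) - 1.0 + rng.normal(0, 0.2, 200))   # halogen (ppm)

# OLS in log space: log10(Cl) = a + b * log10(K); a constant ratio forces b = 1.
b, a = np.polyfit(np.log10(k), np.log10(cl), 1)

k_reservoir = 240.0                                 # hypothetical reservoir K content
cl_loglog = 10 ** (a + b * np.log10(k_reservoir))   # log-log extrapolation
cl_ratio = np.mean(cl / k) * k_reservoir            # constant-ratio extrapolation
print(f"b = {b:.2f}; log-log: {cl_loglog:.2f} ppm, constant ratio: {cl_ratio:.2f} ppm")
```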


2021 · Vol 2021 · pp. 1-9
Author(s): Tongyao Zhang

English is the world's universal language. In the context of global economic integration, English learning is not only an essential course for business elites but also a required course for the general public, and in colleges and universities across the world English is offered as a compulsory first foreign language course. Improving English performance assessment in the context of smart teaching has therefore become an important part of smart English teaching. Owing to interference, human, and other external factors, traditional English teaching evaluation systems suffer from high system sensitivity, long envelope delay jitter, and short steady-state maintenance times. This study therefore develops an English learning effectiveness evaluation system based on the K-means clustering algorithm. The SQL Server 2005 database management software is used to build the system database; the system's functional modules are designed using ActiveX, with emphasis on the scoring module; and different roles and permissions are assigned to administrators, teachers, and students. A student English learning effectiveness evaluation model based on BP neural network training and the K-means clustering algorithm is designed to optimize the evaluation model and to obtain a consistent estimate of English learning effectiveness. Performance tests show that the proposed system has a lower sensitivity coefficient, shorter envelope delay jitter, and a longer period of steady-state maintenance, indicating that the system can operate stably.
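As a minimal sketch of the clustering step (not the paper's implementation; the feature columns and all numbers are invented), K-means can partition students' assessment scores into performance groups whose mean profiles feed an evaluation model:

```python
# Minimal sketch: grouping students by assessment scores with K-means.
# Feature columns and data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# columns: listening, reading, writing, speaking (scores out of 100)
scores = np.clip(rng.normal(70, 12, size=(300, 4)), 0, 100)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
for label in range(3):
    members = scores[km.labels_ == label]
    print(f"cluster {label}: n={len(members)}, mean scores={members.mean(axis=0).round(1)}")
```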


2021 · Vol 14
Author(s): Polina Shichkova, Jay S. Coggan, Henry Markram, Daniel Keller

Accurate molecular concentrations are essential for reliable analyses of biochemical networks and the creation of predictive models for molecular and systems biology, yet the protein and metabolite concentrations used in such models are often poorly constrained or irreproducible. Challenges of combining data from different sources include conflicts in nomenclature and units, as well as discrepancies in experimental procedures, data processing, and model implementation. To obtain a consistent estimate of protein and metabolite levels, we integrated and normalized data from a large variety of sources to calculate Adjusted Molecular Concentrations. We found a high degree of reproducibility and consistency for many molecular species across brain regions and cell types, consistent with tight homeostatic regulation. We demonstrated the value of this normalization with differential protein expression analyses related to neurodegenerative diseases, brain regions, and cell types, and we used the results in proof-of-concept simulations of brain energy metabolism. The standardized Brain Molecular Atlas overcomes the obstacles of missing or inconsistent data to support systems biology research and is provided as a resource for biomolecular modeling.
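As a rough illustration of the kind of cross-source harmonization this requires (not the paper's actual Adjusted Molecular Concentrations procedure), one simple scheme rescales each dataset by the median abundance ratio over the molecules it shares with a reference dataset:

```python
# Illustration only: putting measurements from different sources onto a
# shared scale via median-ratio normalization over shared molecules.
import numpy as np

def normalize_to_reference(values: dict, reference: dict) -> dict:
    """Rescale one source's abundances by the median ratio over shared keys."""
    shared = [k for k in values if k in reference]
    factor = float(np.median([reference[k] / values[k] for k in shared]))
    return {k: v * factor for k, v in values.items()}

ref   = {"GFAP": 100.0, "ALDOA": 50.0, "ENO1": 80.0}   # reference scale
other = {"GFAP": 10.0, "ALDOA": 6.0, "PKM": 4.0}       # different units/scale
print(normalize_to_reference(other, ref))              # now comparable to ref
```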


2021 · Vol 13 (20) · pp. 4145
Author(s): Dong Chen, Varada Shevade, Allison E. Baer, Tatiana V. Loboda

Global burned area estimates, enabled by open access to the standard data products from the Moderate Resolution Imaging Spectroradiometer (MODIS), are heavily relied on by scientists and managers studying wildfire occurrence and its worldwide consequences. While these datasets, particularly the MODIS MCD64A1 product, have fundamentally improved our understanding of wildfire regimes at the global scale, their performance may be less reliable in certain regions due to a series of region- or ecosystem-specific challenges. Previous studies have indicated that global burned area products tend to underestimate the extent of burning in parts of the boreal domain. Despite this, global products are still regularly used in research and management efforts in the northern regions, likely due to a limited understanding of the spatial scale of their Arctic-specific limitations and the absence of more reliable alternatives. In this study, we evaluated the performance of two widely used global burned area products, MCD64A1 and FireCCI51, in the circumpolar boreal forests and tundra between 2001 and 2015. Our two-step evaluation shows that MCD64A1 has high commission and omission errors in mapping burned areas in the boreal forests and tundra of North America. The omission error overshadows the commission error, so MCD64A1 considerably underestimates burned areas in these high northern latitude domains. Based on our estimation, MCD64A1 missed nearly half of the total burned area in the Alaskan and Canadian boreal forests and tundra during the 15-year period, amounting to 74,768 km², an area equivalent to the U.S. state of South Carolina. While FireCCI51 performs much better than MCD64A1 in terms of commission error, we found that it also missed about 40% of the burned area in North America north of 60° N between 2001 and 2015. Our intercomparison of MCD64A1 and FireCCI51 with the regionally adapted, MODIS-based Arctic Boreal Burned Area (ABBA) product shows that the latter outperforms both by a large margin, particularly in terms of omission error, and thus delivers a considerably more accurate and consistent estimate of fire activity in the high northern latitudes. Given that boreal forests and tundra represent the largest carbon pool on Earth and that wildfire is the dominant disturbance agent in these ecosystems, our study presents a strong case for regional burned area products like ABBA to be included in future Earth system models as critical input for understanding wildfires' impacts on global carbon cycling and the energy budget.
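For readers unfamiliar with the two accuracy measures, the sketch below computes commission and omission errors from hypothetical pixel counts (standard definitions, not the paper's tallies):

```python
# Standard accuracy measures for burned-area map evaluation. Counts are
# hypothetical pixel tallies from comparing a product against reference
# fire perimeters; they are not the paper's figures.
def commission_omission(tp: int, fp: int, fn: int) -> tuple[float, float]:
    commission = fp / (tp + fp)   # mapped as burned, unburned in reference
    omission = fn / (tp + fn)     # burned in reference, missed by product
    return commission, omission

tp, fp, fn = 5200, 900, 4800      # hypothetical counts
ce, oe = commission_omission(tp, fp, fn)
print(f"commission error = {ce:.1%}, omission error = {oe:.1%}")
```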


2021 · Vol 118 (38) · pp. e2025211118
Author(s): Ermes Botte, Francesco Biagini, Chiara Magliaro, Andrea Rinaldo, Amos Maritan, ...

Variations and fluctuations are characteristic features of biological systems and are also manifested in cell cultures. Here, we describe a computational pipeline for identifying the range of three-dimensional (3D) cell-aggregate sizes in which nonisometric scaling emerges in the presence of joint mass and metabolic-rate fluctuations. 3D cell-laden spheroids, with sizes and single-cell metabolic rates described by probability density functions, were randomly generated in silico. The distributions of the resulting spheroid metabolic rates were computed by modeling oxygen diffusion and reaction. We then developed a method for estimating scaling exponents of correlated variables through statistically significant data collapse of joint probability distributions. The method was used to identify a physiologically relevant range of spheroid sizes in which both nonisometric scaling and a minimum oxygen concentration (0.04 mol·m⁻³) are maintained. The in silico pipeline enables prediction of the number of experiments needed for an acceptable collapse and, thus, for a consistent estimate of scaling parameters. Using the pipeline, we also show that scaling exponents may differ significantly in the presence of the joint mass and metabolic-rate variations typically found in cells. Our study highlights the importance of incorporating fluctuations and variability in size and metabolic rate when estimating scaling exponents. It also suggests the need to take their covariations into account for better understanding and interpretation of experimental observations both in vitro and in vivo, and it brings insights for the design of more predictive and physiologically relevant in vitro models.
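A toy version of the exponent estimation, omitting the paper's data-collapse machinery and the diffusion-reaction model, is a log-log fit of metabolic rate against mass under joint fluctuations (all values synthetic):

```python
# Toy illustration: estimating an allometric scaling exponent beta from noisy,
# jointly fluctuating mass and metabolic-rate data, where B ~ M**beta.
import numpy as np

rng = np.random.default_rng(2)
mass = 10 ** rng.uniform(-1, 2, 500)                      # spheroid mass (a.u.)
beta_true = 0.75
rate = mass ** beta_true * 10 ** rng.normal(0, 0.1, 500)  # metabolic rate (a.u.)

beta_hat, _ = np.polyfit(np.log10(mass), np.log10(rate), 1)
print(f"estimated scaling exponent: {beta_hat:.3f}")      # close to 0.75
```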


Psychometrika · 2021
Author(s): Jules L. Ellis

It is argued that the generalizability theory interpretation of coefficient alpha is important. In this interpretation, alpha is a slightly biased but consistent estimate of the coefficient of generalizability in a subjects × items design where both subjects and items are randomly sampled. This interpretation is based on "domain sampling" true scores. It is argued that these true scores have a more solid empirical basis than the true scores of Lord and Novick (1968), which are based on "stochastic subjects" (Holland, 1990), even though only a single observation is available for each within-subject distribution. Therefore, the generalizability interpretation of coefficient alpha is to be preferred, unless the true scores can be defined by a latent variable model that has undisputed empirical validity for the test and that is sufficiently restrictive to entail a consistent estimate of the reliability (as, for example, McDonald's omega). If this model implies that the items are essentially tau-equivalent, both the generalizability and the reliability interpretations of alpha can be defensible.
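For concreteness, coefficient alpha for a subjects × items score matrix follows the standard formula, shown here on synthetic data (this is the textbook computation, not anything specific to the paper):

```python
# Coefficient alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(total score)).
import numpy as np

def cronbach_alpha(x: np.ndarray) -> float:
    """x: subjects (rows) x items (columns) score matrix."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(0, 1, (200, 1))             # common true score
items = ability + rng.normal(0, 1, (200, 10))    # 10 noisy items
print(f"alpha = {cronbach_alpha(items):.3f}")
```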


2021 · Vol 95 (9)
Author(s): Jaakko Mäkinen

The International Height Reference System (IHRS), adopted by the International Association of Geodesy (IAG) in its Resolution No. 1 at the XXVI General Assembly of the International Union of Geodesy and Geophysics (IUGG) in Prague in 2015, contains two novelties. Firstly, the mean-tide concept is adopted for handling the permanent tide. While many national height systems continue to apply the mean-tide concept, this was the first time that the IAG officially introduced it for a potential field quantity. Secondly, the reference level of the height system is defined by the equipotential surface where the geopotential has a conventional value W0 = 62,636,853.4 m² s⁻². This value was first determined empirically to provide a good approximation to the global mean sea level and then adopted as a reference value by convention. I analyse the tidal aspects of the reference level based on W0. By definition, W0 is independent of the tidal concept that was adopted for the equipotential surface, but for different concepts, different functions are involved in the W of the equation W = W0. I find that, in the empirical determination of the adopted estimate W0, the permanent tide is treated inconsistently. However, the consistent estimate from the same data rounds off to the same value. I discuss the tidal conventions and formulas for the International Height Reference Frame (IHRF) and the realisation of the IHRS. I propose a simplified definition of IHRF geopotential numbers that would make it possible to transform between the IHRF and zero-tide geopotential numbers using a simple datum-difference surface. Such a transformation would not be adequate if rigorous mean-tide formulas were imposed. The IHRF should adopt a conventional (best) estimate of the permanent tide-generating potential, such as that contained in the International Earth Rotation and Reference Systems Service Conventions, and use it as a basis for other conventional formulas. The tide-free coordinates of the International Terrestrial Reference Frame and tide-free Global Geopotential Models are central in the modelling of geopotential for the purposes of the IHRF. I present a set of correction formulas that can be used to move to the zero-tide model before, during, or after the processing, and finally to the mean-tide IHRF. To reduce the confusion around the multitude of tidal concepts, I propose that modelling should primarily be done using the zero-tide concept, with the mean-tide potential as an add-on. The widespread use of the expression "systems of permanent tide" may also have contributed to the confusion, as such "systems" do not have the properties that are generally associated with other "systems" in geodesy. Hence, this paper mostly uses "concept" instead of "system" when referring to the permanent tide.
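As an indication of the magnitude of such corrections, the widely quoted approximation of Ekman (1989) for the mean-tide minus zero-tide geoid height difference is sketched below; the paper derives its own conventional formulas, so treat this only as an illustration of the kind of latitude-dependent term involved:

```python
# Commonly quoted approximation (Ekman, 1989) for moving geoid heights
# between tide concepts; illustrative only, not the paper's formulas.
import math

def mean_minus_zero_tide_cm(lat_deg: float) -> float:
    """Approximate geoid height difference N_mean - N_zero (cm) at a latitude."""
    s = math.sin(math.radians(lat_deg))
    return 9.9 - 29.6 * s * s

for lat in (0, 45, 90):
    print(f"lat {lat:2d} deg: {mean_minus_zero_tide_cm(lat):+6.2f} cm")
```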


2021
Author(s): Mauricio Osses, Néstor Rojas, Cecilia Ibarra, Víctor Valdebenito, Ignacio Laengle, ...

This description paper presents a detailed and consistent estimate and analysis of exhaust pollutant emissions generated by Chile's road transport activity for the period 1990–2020. The complete database for the period is available at http://dx.doi.org/10.17632/z69m8xm843.2. Emissions are provided at high spatial resolution (0.01° × 0.01°) over continental Chile from 18.5° S to 53.2° S, including local pollutants (CO, VOC, NOx, PM2.5), black carbon (BC), and greenhouse gases (CO2, CH4). The methodology considers 70 vehicle types, based on ten vehicle categories subdivided into two fuel types and seven emission standards. Vehicle activity was calculated from official databases of vehicle records and vehicle flow counts. Fuel consumption was calculated from vehicle activity and contrasted with fuel sales to calibrate the initial dataset. Emission factors come mainly from COPERT 5, adapted to local conditions in the 15 political regions of Chile based on emission standards and fuel quality. While the vehicle fleet grew fivefold between 1990 and 2020, CO2 emissions followed this trend at a lower rate, and emissions of local pollutants decreased owing to stricter abatement technologies, better fuel quality, and enforcement of emission standards; in other words, fleet growth and the rate of change of emissions have decoupled. Results were contrasted with EDGAR datasets, showing similarities in CO2 estimates and striking differences in PM, BC, and CO; for NOx and CH4, the two agree only until 2008. In all cases of divergent results, EDGAR estimates higher emissions.
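The bottom-up logic of such an inventory, emissions as vehicle activity multiplied by an emission factor and summed over vehicle types, can be sketched as follows (placeholder categories and numbers, not the paper's data):

```python
# Skeleton of a bottom-up inventory: emissions = activity (veh-km) x emission
# factor (g/km), summed over vehicle types. All values are placeholders.
activity_vkm = {                 # annual vehicle-kilometres by vehicle type
    ("car", "gasoline", "Euro 4"): 2.1e9,
    ("truck", "diesel", "Euro 3"): 4.5e8,
}
ef_nox_g_per_km = {              # COPERT-style NOx emission factors (assumed)
    ("car", "gasoline", "Euro 4"): 0.08,
    ("truck", "diesel", "Euro 3"): 5.2,
}

nox_t = sum(activity_vkm[k] * ef_nox_g_per_km[k] for k in activity_vkm) / 1e6
print(f"NOx: {nox_t:,.0f} t/yr")
```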


2021
Author(s): Helen Ockenden, Andrew Curtis, Daniel Goldberg, Antonios Giannopoulos, Robert Bingham

Thwaites Glacier in West Antarctica is among the regions of fastest-accelerating ice thinning and highest observed ice loss. The topography of the bed beneath the glacier is a key control on future ice loss, but it is not currently known well enough to satisfy the requirements of ice-sheet models predicting glacier behaviour. It has previously been suggested that in fast-flowing ice streams the shapes of landforms at the bed should be reflected in the ice-surface morphology, which is known at much higher resolution. Indeed, recently published radar grids from Pine Island Glacier reveal bed landforms with a definite resemblance to the ice surface above them. Here, we present a new high-resolution bed topography map of Thwaites Glacier, inverted from REMA and ITSLIVE data using linear perturbation theory, a mathematical formulation of this resemblance between bed and surface. As it is based on linear physics, this method is faster than mass conservation and streamline diffusion interpolation, the two main techniques used by existing bed topography products in this region. Furthermore, because the theory is based on both mass and momentum balance, it provides a physically consistent estimate of bed elevation and basal slipperiness, in contrast to these more widely used methods. The resulting bed matches existing airborne and swath radar surveys well, with significant detail between the radar lines. Variation in the results obtained with different reference models provides a measure of the validity of the linear perturbation theory. Given the importance of form drag in patterns of ice retreat, the inverted topographic features are potentially important for the future behaviour of Thwaites Glacier.
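A heavily simplified one-dimensional toy of the idea is sketched below. The exponential transfer function is an assumption for illustration only; the real transfer functions of linear perturbation theory depend on ice rheology, slip ratio, and flow direction. It shows how surface undulations can be deconvolved, with damping, into a bed estimate:

```python
# Toy only: in Fourier space, surface undulations relate to bed topography
# through a transfer function T(k) that attenuates short wavelengths, so a
# regularised division recovers a smoothed bed estimate.
import numpy as np

n, dx, H = 512, 500.0, 2000.0                        # samples, spacing (m), ice thickness (m)
x = np.arange(n) * dx
bed = 100 * np.exp(-((x - n * dx / 2) / 5e3) ** 2)   # a 100 m bump at the bed

k = np.fft.rfftfreq(n, dx) * 2 * np.pi               # angular wavenumber
T = np.exp(-np.abs(k) * H)                           # assumed bed-to-surface attenuation
surface = np.fft.irfft(np.fft.rfft(bed) * T, n)      # forward model: muted surface bump

eps = 1e-2                                           # damping stabilises the division
bed_est = np.fft.irfft(np.fft.rfft(surface) * T / (T ** 2 + eps), n)
print(f"recovered bump height: {bed_est.max():.1f} m (true 100 m)")
```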


2021
Author(s): Rubén García-Hernández, Luca D'Auria, José Barrancos, German D. Padilla

Determining the b-value of the Gutenberg-Richter law is of great importance in seismology. However, its estimate depends strongly on the choice of temporal and spatial scale, owing to the multiscale nature of seismicity. This is especially relevant in volcanoes, where dense clusters of earthquakes often overlap the background seismicity and where the b-value displays higher spatial and temporal variability.

For this reason, we devised a novel approach called MUST-B (MUltiscale Spatial and Temporal estimation of the B-value), which allows a consistent estimate of the b-value, avoiding subjective a priori choices, by simultaneously considering different temporal and spatial scales. The approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied this method to datasets from volcanic areas, demonstrating its effectiveness in analyzing complex seismicity patterns and its utility in volcanic monitoring and geothermal exploration. In addition, it may provide a way to distinguish seismicity caused by tectonic faults from that caused by volcanic sources in zones where the two are mixed.

We present MUST-B applications to three volcanic areas: Long Valley caldera (USA), Tenerife, and El Hierro (Canary Islands). The spatial analysis of the b-value in Long Valley shows an impressive chimney-like volume of high b-values that coincides with the main pathway of geothermal fluids inferred by independent studies. For Tenerife, we applied MUST-B to analyze both spatial and temporal variations. The spatial pattern shows an interesting variation between 2004-2005 and 2016-2020; in both cases, high b-values appear in an area that hosted increased seismicity during seismo-volcanic crises. These high b-values are also evident in the temporal analysis, which shows increases in correspondence with these two periods. For El Hierro, we analyzed the seismicity preceding the 2011 submarine eruption of Tagoro volcano using a joint spatio-temporal analysis. Results show high b-values in the area where the vent opened and a drop in this parameter just before the eruption began.
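The elementary building block that any such multiscale scheme applies within each spatial or temporal window is the maximum-likelihood b-value estimator of Aki (1965), sketched here on a synthetic catalog (MUST-B itself does considerably more, including Mc and uncertainty estimation; binned catalogs would additionally need Utsu's correction):

```python
# Aki (1965) maximum-likelihood b-value estimator on a synthetic catalog.
import numpy as np

def b_value_mle(mags: np.ndarray, mc: float) -> float:
    """b = log10(e) / (mean(M) - Mc), using magnitudes M >= Mc."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(4)
# Gutenberg-Richter with b = 1: magnitudes above Mc are exponential with
# rate b * ln(10).
mags = 1.0 + rng.exponential(scale=1 / (1.0 * np.log(10)), size=5000)
print(f"b = {b_value_mle(mags, mc=1.0):.2f}")   # close to 1.0
```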

