Estimation of the observation standard deviation error formula thanks to the a posteriori diagnosis

2021 ◽  
Author(s):  
Stéphane Van Hyfte ◽  
Patrick Le Moigne ◽  
Eric Bazile ◽  
Antoine Verrelle

<p><em>Within the UERRA project, a daily precipitation reanalysis at 5.5 km resolution was produced for the period 1961–2015. The reanalysis was obtained with the MESCAN analysis system, which combines an a priori estimate of the atmosphere – called the background – with observations using an optimum interpolation (OI) scheme. Such a method requires the specification of observation and background errors. In general, constant error standard deviations are used, but larger errors occur when high precipitation amounts are observed. To take this effect into account, and to avoid model over-estimation in cases of light precipitation, a variable formula for the observation error standard deviation was proposed, with a small value for zero precipitation and larger values for higher precipitation amounts, following a linear equation.</em></p><p><em>Desroziers et al. proposed a method, called the a posteriori diagnosis, to determine observation and background errors. This iterative method requires the analysis to be run several times until it converges. In this study, the a posteriori diagnosis is applied per precipitation class to determine the observation error standard deviation formula. MESCAN was tested using the French operational model AROME at 1.3 km resolution and the atmospheric UERRA analysis downscaled to 5.5 km as the background, combined with the French observational network over the 2016–2018 period. The observation error standard deviation formula obtained by the a posteriori diagnosis is then used in the MESCAN analysis system to produce precipitation analyses over the 2016–2018 period. Results are compared to the UERRA precipitation reanalysis against independent observations by comparing bias, RMSE and scores per precipitation class.</em></p>
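The Desroziers diagnostic mentioned in this abstract can be illustrated in a few lines. The following is a minimal scalar sketch, not the MESCAN implementation: it uses synthetic data with uncorrelated Gaussian errors and assumed true values of σ_o = 1 and σ_b = 2, and checks that the expectations of the innovation/residual products recover those standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic truth and errors (values assumed purely for illustration).
n = 100_000
truth = rng.gamma(shape=0.8, scale=4.0, size=n)         # "true" precipitation
sigma_b_true, sigma_o_true = 2.0, 1.0
background = truth + rng.normal(0.0, sigma_b_true, n)   # a priori estimate
obs = truth + rng.normal(0.0, sigma_o_true, n)          # observations

# Optimum-interpolation analysis at observation points (scalar case):
# x_a = x_b + K (y - x_b), with gain K = sigma_b^2 / (sigma_b^2 + sigma_o^2).
K = sigma_b_true**2 / (sigma_b_true**2 + sigma_o_true**2)
analysis = background + K * (obs - background)

# Desroziers diagnostics: E[(y - x_a)(y - x_b)] = sigma_o^2
#                         E[(x_a - x_b)(y - x_b)] = sigma_b^2
d_ob = obs - background        # innovation
d_oa = obs - analysis          # analysis residual
d_ab = analysis - background   # analysis increment
sigma_o_est = np.sqrt(np.mean(d_oa * d_ob))
sigma_b_est = np.sqrt(np.mean(d_ab * d_ob))
print(sigma_o_est, sigma_b_est)  # close to (1.0, 2.0)
```

In the study itself the same statistics are accumulated per precipitation class, and the analysis is re-run with the diagnosed values until convergence.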

2017 ◽  
Author(s):  
Birthe Marie Steensen ◽  
Arve Kylling ◽  
Nina Iren Kristiansen ◽  
Michael Schulz

Abstract. Significant improvements in the way we can observe and model volcanic ash clouds have been achieved since the 2010 Eyjafjallajökull eruption. One major development has been data assimilation techniques, which aim to bring models into closer agreement with satellite observations and to reduce the uncertainties in the ash emission estimate. Still, questions remain as to what degree forecasting capabilities are improved by the inclusion of such techniques and how these improvements depend on the data input. This study explores how different satellite data and different uncertainty assumptions for the satellite data and the a priori emissions affect the calculated volcanic ash emission estimate, which is computed by an inversion method that couples the satellite data and a priori emissions with dispersion model data. Two major ash episodes over four days in April and May of the 2010 Eyjafjallajökull eruption are studied. Specifically, inversion calculations are done for four satellite data sets with different size distribution assumptions in the retrieval. A reference satellite data set is chosen, and the range between the minimum and maximum 4-day average load of hourly retrieved ash is 121 % in April and 148 % in May, compared to the reference. The corresponding a posteriori maximum and minimum emission sums found for these four satellite retrievals range from 26 % to 47 % of the a posteriori reference estimate for the same two periods. Varying the assumptions made in the satellite retrieval therefore translates into uncertainties in the calculated emissions and the modelled ash column loads. By further exploring the weighting of uncertainties connected to the a priori emissions and the other-than-size uncertainties in the satellite data, the uncertainty in the a priori estimate is found to have an order of magnitude more impact on the a posteriori solution than the other-than-size uncertainties in the satellite data.
Part of this is explained by the too-high a priori estimate used in this study, which is reduced by around half in the a posteriori reference estimate. Setting large uncertainties for both the a priori and satellite input data is shown to make them compensate for each other. Because of this, an inversion-based emission estimate in a forecasting setting needs well-tested and carefully considered uncertainty assumptions for the a priori emissions and the satellite data. The suitability of the inversion for a forecasting environment is tested by gradually adding, with time, more observations to improve the estimated height-versus-time evolution of the Eyjafjallajökull ash emissions. We show that the initially too-high a priori emissions are reduced effectively using just 12 hours of satellite observations. More satellite observations (> 12 h) place, in the Eyjafjallajökull case, the volcanic injection at higher altitudes. Adding further satellite observations (> 36 h) changes the a posteriori emissions only to a small extent for May and minimally for the April period, because the ash is dispersed and transported out of the domain after 1–2 days. A best-guess emission estimate for the forecasting period was constructed by averaging the last 12 hours of the a posteriori emissions. A forecast simulation using this estimate performs better than model simulations with no further emissions over the forecast period, especially in the case of continued volcanic eruption activity. Because of undetected ash in the satellite retrieval and diffusion in the model, the forecast simulations generally contain more ash than the observed fields, and the modelled ash is more spread out. Overall, using the a posteriori emissions in our model reduces the uncertainties connected to both the satellite observations and the a priori estimate, allowing a more confident forecast of both the amount of ash released and the emission heights.


Author(s):  
Heinrich Schepers ◽  
Giorgio Tonelli ◽  
Rudolf Eisler
Keyword(s):  
A Priori ◽  

1994 ◽  
Vol 11 (4) ◽  
pp. 475-503
Author(s):  
Masudul Alum Choudhury

Is it the realm of theoretical constructs or positive applications that defines the essence of scientific inquiry? Is there unison between the normative and the positive, between the inductive and deductive contents, between perception and reality, between the micro- and macro-phenomena of reality as technically understood? In short, is there a possibility for unification of knowledge in modernist epistemological comprehension? Is knowledge perceived in conception and application as a systemic dichotomy between the purely epistemic (in the metaphysically a priori sense) and the purely ontic (in the purely positivistically a posteriori sense) at all a reflection of reality? Is knowledge possible in such a dichotomy or plurality?

Answers to these foundational questions are primal in order to understand a critique of modernist synthesis in Islamic thought that has been raging among Muslim scholars for some time now. The consequences emanating from the modernist approach underlie much of the nature of development in methodology, thinking, institutions, and behavior in the Muslim world throughout its history. They are found to pervade more intensively, I will argue here, as the consequence of a taqlid of modernism among Islamic thinkers. I will then argue that this debility has arisen not because of a comparative modern scientific investigation, but due to a failure to fathom the uniqueness of a truly Qur'anic epistemological inquiry in the understanding of the nature of the Islamic socioscientific worldview ...


2019 ◽  
Vol 11 ◽  
pp. 51-64
Author(s):  
M. LE MOAL

Geographic information systems (GIS) have become indispensable for managing water and wastewater networks, and their effectiveness depends very largely on the quality of the data they use. At the same time, regulatory changes and evolving user practices, notably the growth in information exchange, reinforce the central role of data and data quality. While most GIS solutions on the market offer functions dedicated to assessing data quality, they require data specifications to be translated into computer rules before the quality tests can be run. This time-consuming approach requires domain expertise. To avoid these constraints, Axes Conseil has developed a GIS data control process that is fast and accessible to water and wastewater practitioners. Rather than a heavy a priori modelling approach, the principle is to generate a set of explicit indicators that practitioners can easily exploit a posteriori. This approach offers great analytical flexibility and does not require advanced IT skills.
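The indicator-based approach can be illustrated with a small hypothetical example; the column names and business rules below are invented for illustration, not taken from the Axes Conseil process. Instead of encoding specifications as blocking validation rules up front, each rule yields an explicit, directly readable quality indicator that practitioners can inspect a posteriori:

```python
import pandas as pd

# Hypothetical extract of a water-network GIS layer (all columns assumed).
pipes = pd.DataFrame({
    "pipe_id": [1, 2, 3, 4, 5],
    "material": ["PVC", None, "cast_iron", "PVC", ""],
    "diameter_mm": [110, 160, -1, 200, 110],
    "install_year": [1985, 2001, 1950, None, 2015],
})

# One explicit a posteriori quality indicator per business rule.
indicators = {
    "missing_material_pct": 100 * (pipes["material"].isna()
                                   | (pipes["material"] == "")).mean(),
    "invalid_diameter_pct": 100 * (pipes["diameter_mm"] <= 0).mean(),
    "missing_year_pct": 100 * pipes["install_year"].isna().mean(),
}
print(indicators)
```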


Author(s):  
Barry Stroud

This chapter presents a straightforward structural description of Immanuel Kant’s conception of what the transcendental deduction is supposed to do, and how it is supposed to do it. The ‘deduction’ Kant thinks is needed for understanding the human mind would establish and explain our ‘right’ or ‘entitlement’ to something we seem to possess and employ in ‘the highly complicated web of human knowledge’: experience, concepts, and principles. The chapter explains the point and strategy of the ‘deduction’ as Kant understands it, as well as the demanding conditions of its success, without entering into complexities of interpretation or critical assessment of the degree of success actually achieved. It also analyses Kant’s arguments regarding a priori concepts as well as a posteriori knowledge of the world around us, along with his claim that our position in the world must be understood as ‘empirical realism’.


2017 ◽  
Vol 58 (3) ◽  
pp. 313-342 ◽  
Author(s):  
Barbara S. Held

The positive/negative distinction works well in many fields—for example, in mathematics negative numbers hold their own, and in medical pathology negative results are usually celebrated. But in positive psychology negativity should be replaced with positivity for flourishing/optimal functioning to occur. That the designation of the psychological states and processes deemed positive (good/desirable) and negative (bad/undesirable) is made a priori, independent of circumstantial particularity, both intrapersonal and interpersonal, does not seem to bother positive psychologists. But it should, as it results in conceptual muddles and dead ends that cannot be solved within their conceptual framework of positivity and negativity. Especially problematic is an ambiguity I find in positive psychologists’ a priori and a posteriori understandings of positivity and negativity, an ambiguity about constitutive and causal relations that pervades their science and the conclusions drawn from it. By eliminating their a priori dichotomy of positivity and negativity, positive psychologists might well find themselves in a better position to put back together the psychological reality that they have fractured in their ontologically dubious move of carving up psychological reality a priori into positive and negative phenomena. They then might find themselves better placed to “broaden and build” their own science of flourishing.


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 900
Author(s):  
Ioanna Skoulidou ◽  
Maria-Elissavet Koukouli ◽  
Arjo Segers ◽  
Astrid Manders ◽  
Dimitris Balis ◽  
...  

In this work, we investigate the ability of a data assimilation technique and space-borne observations to quantify and monitor changes in nitrogen oxides (NOx) emissions over Northwestern Greece for the summers of 2018 and 2019. Four lignite-burning power plants are located in this region. The data assimilation technique, based on the Ensemble Kalman Filter method, is employed to combine space-borne atmospheric observations from the high-spatial-resolution Sentinel-5 Precursor (S5P) Tropospheric Monitoring Instrument (TROPOMI) with simulations using the LOTOS-EUROS Chemical Transport model. The Copernicus Atmosphere Monitoring Service-Regional European emissions (CAMS-REG, version 4.2) inventory, based on the year 2015, is used as the a priori emissions in the simulations. Surface measurements of nitrogen dioxide (NO2) from air quality stations operating in the region are compared with the model surface NO2 output using either the a priori (base run) or the a posteriori (assimilated run) NOx emissions. Relative to the a priori emissions, the assimilation suggests a strong decrease in concentrations for the station located near the largest power plant, by 80% in 2019 and by 67% in 2018. Concerning the estimated annual a posteriori NOx emissions, it was found that, for the pixels hosting the two largest power plants, the assimilated run results in emissions decreased by ~40–50% for 2018 compared to 2015, whereas a larger decrease, of ~70% for both power plants, was found for 2019 after assimilating the space-borne observations. For the same power plants, the European Pollutant Release and Transfer Register (E-PRTR) reports decreased emissions in 2018 and 2019 compared to 2015 (−35% and −38% in 2018, −62% and −72% in 2019), in good agreement with the estimated emissions. We further compare the a posteriori emissions to the reported energy production of the power plants during the summers of 2018 and 2019. Mean NOx emission changes of about −35% and −63% are estimated for the two larger power plants in the summers of 2018 and 2019, respectively, which are supported by similar changes in the reported energy production of the power plants (~−30% and −70%, respectively).
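The Ensemble Kalman Filter update underlying such an assimilation can be sketched generically. The toy example below is not the LOTOS-EUROS/TROPOMI system; all dimensions, operators and values are invented. It estimates a single emission scaling factor from noisy column observations with a stochastic, perturbed-observation EnKF:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: one unknown emission scaling factor, many column observations.
n_ens, n_obs = 50, 30
H = rng.uniform(0.5, 1.5, (n_obs, 1))   # toy linear observation operator
x_true = np.array([0.5])                # true emission scaling
sigma_o = 0.2                           # observation error std. dev.
y = H @ x_true + rng.normal(0.0, sigma_o, n_obs)

# Prior ensemble centred on the a priori value 1.0 (i.e. too high).
ens = rng.normal(1.0, 0.3, (1, n_ens))
Hx = H @ ens                            # predicted observations per member

# Kalman gain from ensemble covariances: K = P_xy (P_yy + R)^{-1}
x_mean = ens.mean(axis=1, keepdims=True)
Hx_mean = Hx.mean(axis=1, keepdims=True)
P_xy = (ens - x_mean) @ (Hx - Hx_mean).T / (n_ens - 1)
P_yy = (Hx - Hx_mean) @ (Hx - Hx_mean).T / (n_ens - 1) \
       + sigma_o**2 * np.eye(n_obs)
K = P_xy @ np.linalg.inv(P_yy)

# Perturbed-observation update, member by member.
y_pert = y[:, None] + rng.normal(0.0, sigma_o, (n_obs, n_ens))
ens_post = ens + K @ (y_pert - Hx)
```

The posterior ensemble mean is pulled from the too-high a priori toward the value supported by the observations, mirroring the emission reductions the assimilation diagnoses in the abstract.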

