A global analysis of spatial correlation lengths of water storage anomalies 

Author(s):  
Ehsan Sharifi ◽  
Julian Haas ◽  
Eva Boergens ◽  
Henryk Dobslaw ◽  
Andreas Güntner

<p>This study has been carried out in the context of the European Union research project G3P (Global Gravity-based Groundwater Product), which develops groundwater storage (GW) as a new product for the EU Copernicus Services. GW variations can be derived on a global scale by subtracting the variations of other water storage compartments, such as soil moisture, snow, surface water bodies, and glaciers, from total water storage (TWS) variations observed by the GRACE/GRACE-FO satellite missions. Due to the nature of data acquisition by GRACE and GRACE-FO, the data need filtering to reduce north-south-oriented striping errors. However, this filtering also spatially smooths the TWS signal. For a consistent subtraction of the individual storage compartments from GRACE-based TWS, the data sets for all other hydrological compartments need to be filtered in a similar way as GRACE-based TWS.</p><p>To test different filter methods, we used compartmental water storage data of the global hydrological model WGHM. The decorrelation (DDK) filter that is routinely applied to GRACE and GRACE-FO data introduced striping artifacts into the smoothed model data. We therefore conclude that the DDK filter is not suitable for filtering water storage data sets that do not exhibit GRACE-like correlated error patterns. Alternatively, an isotropic Gaussian filter can be used. Its optimal filter width is determined by minimizing the difference between the empirical spatial correlation function of each water storage compartment and the spatial correlation function of GRACE-based TWS. We also analyzed time variations of the correlation lengths, such as seasonal effects. Finally, the selected filter widths are applied to each compartmental storage data set before subtracting them from TWS to obtain the GW variations.
</p><p>Acknowledgement: This study received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement nº 870353.</p>
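The width-selection step described above can be sketched numerically. The following is a minimal, illustrative 1-D example, not the study's actual implementation: a "GRACE-like" reference field with a known smoothing scale supplies the target correlation function, and the Gaussian filter width for an unfiltered model field is chosen to minimize the misfit between the empirical spatial correlation functions. The field sizes, the sigma grid, and the correlation estimator are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def spatial_autocorr(field, max_lag):
    """Empirical spatial autocorrelation of a 1-D field for lags 0..max_lag."""
    f = field - field.mean()
    var = np.mean(f * f)
    return np.array([np.mean(f[:len(f) - k] * f[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
n, max_lag = 5000, 30

# Reference: a smooth "GRACE-like" field with a known correlation length.
reference = gaussian_filter1d(rng.standard_normal(n), sigma=8.0)
target_corr = spatial_autocorr(reference, max_lag)

# Raw (unfiltered) model field whose filter width we want to choose.
raw = rng.standard_normal(n)

# Pick the Gaussian width that minimizes the misfit between the empirical
# correlation function of the smoothed field and the target function.
candidate_sigmas = np.arange(1.0, 16.0, 1.0)
misfits = []
for s in candidate_sigmas:
    smoothed = gaussian_filter1d(raw, sigma=s)
    misfits.append(np.sum((spatial_autocorr(smoothed, max_lag) - target_corr) ** 2))
best_sigma = candidate_sigmas[int(np.argmin(misfits))]
print(best_sigma)  # expected to be close to the reference width of 8
```

The same idea carries over to 2-D gridded storage fields with an isotropic 2-D Gaussian kernel and a distance-binned correlation function.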

2021 ◽  
Vol 19 (164) ◽  
pp. 724-742
Author(s):  
Ovidiu Constantin Bunget ◽  
Alin-Constantin Dumitrescu ◽  
Rodica Gabriela Blidisel ◽  
Oana Alina Bogdan ◽  
Valentin Burca ◽  
...  

The audit market, which developed out of the need to strengthen the credibility and quality of financial reporting, has since the 1980s become concentrated around large audit firms. This dominance is driven on the one hand by the auditor’s growing reputation and notoriety, and on the other by the client’s association with a reputed auditor, which improves the company’s image on the market. In this context, a major issue is the level of the fees charged, as fees are key elements that may affect the auditor’s independence. A particularly sensitive aspect is the relationship between the fee charged for financial audit services and the fee for non-audit services, and the compensation practices between them. The European Commission wants to facilitate competition in an overly concentrated market and to give small and medium-sized audit firms the opportunity to become active players in the large corporate audit market through joint audits, in which at least one of the audit firms is not part of the Big4 group. Mandatory audit firm rotation and the limitation of non-audit services are the main aspects of the recent audit reform that directly influence the fee level. The main purpose of this study is to analyse whether there is a pattern of audit costs at the community level. To this end, the paper assesses the uniformity of audit costs, namely the structure of the audit market in the European Union. The research relies on data set comparison methods, analysing a sample of 2,896 firms listed on the stock exchange in 35 different states over the period 2013-2021.


2019 ◽  
Vol 26 (6) ◽  
pp. 795-806
Author(s):  
Petia Kostadinova ◽  
Magda Giurcanu

Utilizing a newly compiled data set, this article demonstrates that some election pledges made by the transnational Europarties are included among the European Commission priorities issued during the pre-legislative stage. The data set consists of 597 promises made by four transnational Europarties during the 2004 and 2009 European Parliament (EP) elections and of 698 subsequent Commission legislative intentions. Focusing on the Barroso presidencies, the article’s findings suggest that (1) decision-making rules in the EP help us understand which transnational pledges are included in Commission priorities and (2) promises by two Europarties, the European People’s Party and the European Liberal and Democrat Party, are more likely to be considered by the Commission than those of other Europarties. Our results speak to scholarly debates on the place of the Europarties in European Union inter-institutional relations and, more broadly, on the democratic legitimacy of the Union.


2002 ◽  
Vol 56 (2) ◽  
pp. 447-476 ◽  
Author(s):  
Simon Hug ◽  
Thomas König

The bargaining product of the Amsterdam Intergovernmental Conference—the Amsterdam Treaty—whittled the draft proposal down to a consensus set of all fifteen member states of the European Union (EU). Using the two-level concept of international bargains, we provide a thorough analysis of how this consensus set was reached by issue subtraction with respect to domestic ratification constraints. Drawing on data sets covering the positions of all negotiating actors and ratifying national political parties, we first highlight the differences in the Amsterdam ratification procedures in the fifteen member states of the EU. This analysis allows us to compare the varying ratification difficulties in each country. Second, our empirical analysis of the treaty negotiations shows that member states excluded half of the Amsterdam bargaining issues to secure a smooth ratification. Because member states with higher domestic ratification constraints performed better in eliminating uncomfortable issues at the Amsterdam Intergovernmental Conference, issue subtraction can be explained by the extent to which the negotiators were constrained by domestic interests.


2020 ◽  
Vol 36 (4) ◽  
pp. 1175-1188
Author(s):  
Pierre Lamarche ◽  
Friderike Oehler ◽  
Irene Rioboo

Poverty indicators based purely on income statistics do not reflect the full picture of households’ economic well-being. Consumption and wealth are two additional key dimensions that determine the economic opportunities of people and material inequalities. We use non-parametric statistical matching methods to join consumption data from the Household Budget Survey to micro data from the European Union Statistics on Income and Living Conditions. In a second step, micro data from the Household Finance and Consumption Survey are joined to produce a common distribution of income, consumption and wealth variables. A variety of indicators is then produced on the basis of this joint data set, in particular household saving rates. Care has to be taken when interpreting the indicators, since the statistical matching is based on strong assumptions and a limited number of variables common to all three original data sets. We are able to show, however, that the assumptions made are justified by the use of strong proxies as matching variables. Thus, the resulting indicators have the potential to contribute to the analysis of inequality patterns and to enhance the possibilities of social, and possibly fiscal, policy impact analysis.
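As an illustration of the matching step, the sketch below implements a simple distance hot-deck: each record of a simulated income survey borrows the consumption value of its nearest neighbour, in the space of the common variables, from a simulated budget survey. All variable names, sample sizes, and distributions are synthetic assumptions, not the actual survey data or the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Donor survey (stand-in for a budget survey): common variables + consumption.
n_donor, n_recip = 500, 300
donor_x = rng.normal(size=(n_donor, 2))                  # common matching variables
donor_cons = 100 + 30 * donor_x[:, 0] + rng.normal(scale=5, size=n_donor)

# Recipient survey (stand-in for an income survey): same common variables + income.
recip_x = rng.normal(size=(n_recip, 2))
recip_income = 200 + 50 * recip_x[:, 1] + rng.normal(scale=10, size=n_recip)

# Distance hot-deck matching: each recipient record borrows the consumption
# value of its nearest donor in the space of the common variables.
d2 = ((recip_x[:, None, :] - donor_x[None, :, :]) ** 2).sum(axis=2)
nearest = d2.argmin(axis=1)
imputed_cons = donor_cons[nearest]

# A joint distribution of income and (imputed) consumption is now available,
# from which indicators such as saving rates can be derived.
saving_rate = (recip_income - imputed_cons) / recip_income
print(saving_rate.mean())
```

In practice the quality of such a match hinges on how strongly the common variables proxy for the imputed one, which is the assumption the paper tests.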


1996 ◽  
Vol 48 (3) ◽  
pp. 324-357 ◽  
Author(s):  
Mark Hallerberg

The twenty-five German states from 1871 to 1914 present a useful data set for examining how increasing economic integration affects tax policy. After German unification the national government collapsed six currencies into one and liberalized preexisting restrictions on capital and labor mobility. In contrast, the empire did not directly interfere in the making of state tax policy; while states transferred certain indirect taxes to the central government, they maintained their own autonomous tax and political systems through World War I. This paper examines the extent to which tax competition forced the individual state tax systems to converge from 1871 to 1914. In spite of a diversity of political systems, tax competition did require states to harmonize their rates on mobile factors like capital and high income labor, but it did not affect tax rates on immobile factors. In states where the political system guaranteed agricultural dominance, taxes on land were reduced, while in states with more open systems, tax rates remained higher. One unexpected result is that tax rates on capital and income converged upward instead of downward. The most dominant state, Prussia, served as the lowest-common-denominator state, but pressure from the national government, especially to increase expenditures, forced all states to raise their tax rates. These results suggest possible ways for the European Union to avoid a forced downward convergence of member state tax rates on capital and mobile labor.


2020 ◽  
Author(s):  
Oleg Skrynyk ◽  
Enric Aguilar ◽  
José A. Guijarro ◽  
Sergiy Bubin

<p>Before using climatological time series in research studies, it is necessary to perform quality control and homogenization in order to remove possible artefacts (inhomogeneities) usually present in raw data sets. In the vast majority of cases, the homogenization procedure improves the consistency of the data, which can then be verified by means of a statistical comparison of the raw and homogenized time series. However, a new question then arises: how far are the homogenized data from the true climate signal or, in other words, what errors could still be present in the homogenized data?</p><p>The main objective of our work is to estimate the uncertainty produced by the adjustment algorithm of the widely used Climatol homogenization software when homogenizing daily time series of additive climate variables. We focused our efforts on minimum and maximum air temperature. To achieve our goal we used a benchmark data set created by the INDECIS<sup>*</sup> project. The benchmark contains clean data, extracted from an output of the Royal Netherlands Meteorological Institute Regional Atmospheric Climate Model (version 2) driven by the Hadley Global Environment Model 2 - Earth System, and inhomogeneous data, created by introducing realistic breaks and errors.</p><p>The statistical evaluation of discrepancies between the homogenized (by means of Climatol with predefined break points) and clean data sets was performed using both a set of standard parameters and metrics introduced in our work. All metrics used clearly identify the main features of the errors (systematic and random) present in the homogenized time series. We calculated the metrics for every time series (only over adjusted segments) as well as their averaged values as measures of uncertainty in the whole data set.</p><p>In order to determine how the two key parameters of the raw data collection, namely the length of the time series and the station density, influence the calculated measures of the adjustment error, we gradually decreased the length of the period and the number of stations in the area under study. The total number of cases considered was 56, comprising 7 time periods (1950-2005, 1954-2005, …, 1974-2005) and 8 different numbers of stations (100, 90, …, 30). Additionally, in order to find out how stable the calculated metrics are for each of the 56 cases and to determine their confidence intervals, we performed 100 random permutations of the introduced inhomogeneity time series and repeated our calculations. With that, the total number of homogenization exercises performed was 5600 for each of the two climate variables.</p><p>Lastly, the calculated metrics were compared with the corresponding values obtained for the raw time series. The comparison showed a substantial improvement of the metric values after homogenization in each of the 56 cases considered (for both variables).</p><p>-------------------</p><p><sup>*</sup>INDECIS is a part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by FORMAS (SE), DLR (DE), BMWFW (AT), IFD (DK), MINECO (ES), ANR (FR) with co-funding by the European Union (Grant 690462). The work has been partially supported by the Ministry of Education and Science of Kazakhstan (Grant BR05236454) and Nazarbayev University (Grant 090118FD5345).</p>
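A minimal sketch of the kind of discrepancy evaluation described above, on synthetic data. The paper's own metrics are more elaborate; `bias` and `residual_std` here are illustrative stand-ins for the systematic and random error components, and the synthetic series and offsets are assumptions.

```python
import numpy as np

def adjustment_error_metrics(homogenized, clean):
    """Simple discrepancy metrics between a homogenized series and the
    clean reference: mean error (systematic part), residual spread
    (random part), and overall RMSE."""
    resid = np.asarray(homogenized) - np.asarray(clean)
    return {
        "bias": resid.mean(),               # systematic error
        "residual_std": resid.std(ddof=1),  # random error
        "rmse": np.sqrt(np.mean(resid ** 2)),
    }

rng = np.random.default_rng(1)
# Synthetic "clean" daily maximum temperature with a seasonal cycle.
clean = 10 + 8 * np.sin(np.linspace(0, 20 * np.pi, 2000))
# Homogenized series = clean signal + a small residual offset + noise,
# mimicking errors left over after adjustment.
homogenized = clean + 0.3 + rng.normal(scale=0.5, size=clean.size)
m = adjustment_error_metrics(homogenized, clean)
print(m)
```

Averaging such per-series metrics over all stations (and over adjusted segments only) gives data-set-level uncertainty measures of the kind reported in the study.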


2021 ◽  
Vol 9 (4) ◽  
pp. 202-208
Author(s):  
Aleksandra Korczyc

Purpose of the study: This study aims to present the specifics of the global financial crisis, the threats it poses for Poland in the legal sphere, and possible actions to be taken in this area, particularly at the European Union and national levels. Methodology: The article uses the historical method and the analysis of documents at both the Polish and European Union levels, including laws, regulations, and decisions. Main Findings: The scope of the financial crisis in question and its relatively easy transfer between markets call for extraordinary remedial actions. Poland, through its participation in the European Union, seems relatively well protected against the effects of the financial crisis; however, it needs to undertake further structural reforms, in particular of public finances. Applications of this study: The study could be effective and meaningful for governments, higher education institutions, banks, and other financial institutions. Novelty/Originality of this study: The study describes the essence of the financial crisis, the possibilities of preventing it, earlier remedial actions at the institutional and legal level, the possibilities of obtaining financial support, and a global analysis of the problem, including its causes.


2015 ◽  
Vol 22 (4) ◽  
pp. 433-446 ◽  
Author(s):  
A. Y. Sun ◽  
J. Chen ◽  
J. Donges

Abstract. Terrestrial water storage (TWS) exerts a key control on global water, energy, and biogeochemical cycles. Although a causal relationship exists between precipitation and TWS, the latter also reflects the impacts of anthropogenic activities. Thus, quantifying the spatial patterns of TWS will not only help to understand feedbacks between climate dynamics and the hydrologic cycle, but also provide new insights and model calibration constraints for improving current land surface models. This work is the first attempt to quantify the spatial connectivity of TWS using complex network theory, which has received broad attention in the climate modeling community in recent years. Complex networks of TWS anomalies are built using two global TWS data sets: a remote sensing product obtained from the Gravity Recovery and Climate Experiment (GRACE) satellite mission, and a model-generated data set from the Global Land Data Assimilation System's NOAH model (GLDAS-NOAH). Both data sets have 1° × 1° grid resolution and cover most global land areas except permafrost regions. TWS networks are built by first quantifying pairwise correlations among all valid TWS anomaly time series, and then applying a cutoff threshold derived from the edge-density function to retain only the most important links in the network. Basin-wise network connectivity maps are used to illuminate the connectivity of individual river basins with other regions. The constructed network degree centrality maps show the TWS anomaly hotspots around the globe, and the patterns are consistent with recent GRACE studies. Parallel analyses of the networks constructed from the two data sets reveal that the GLDAS-NOAH model captures many of the spatial patterns shown by GRACE, although significant discrepancies exist in some regions. Thus, our results provide further measures for constraining current land surface models, especially in data-sparse regions.
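The network-construction steps above (pairwise correlation, an edge-density-based cutoff, degree centrality) can be sketched as follows. The gridded series are simulated stand-ins for TWS anomaly cells, and the 10% target edge density is an assumed value for illustration, not the study's calibrated choice.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for gridded TWS anomaly time series: n_cells cells,
# n_months months, with two spatially coherent groups of cells.
n_cells, n_months = 40, 120
signal_a = rng.standard_normal(n_months)
signal_b = rng.standard_normal(n_months)
series = np.empty((n_cells, n_months))
series[:20] = signal_a + 0.5 * rng.standard_normal((20, n_months))
series[20:] = signal_b + 0.5 * rng.standard_normal((20, n_months))

# Step 1: pairwise correlations among all cell time series.
corr = np.corrcoef(series)

# Step 2: retain only the strongest links, choosing the cutoff that yields
# a target edge density (fraction of all possible links kept).
target_density = 0.10
iu = np.triu_indices(n_cells, k=1)
cutoff = np.quantile(np.abs(corr[iu]), 1 - target_density)
adjacency = (np.abs(corr) >= cutoff) & ~np.eye(n_cells, dtype=bool)

# Step 3: degree centrality per cell highlights connectivity hotspots.
degree = adjacency.sum(axis=1)
print(degree.max(), degree.mean())
```

With real data, mapping `degree` back onto the 1° × 1° grid produces the degree centrality maps described in the abstract.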


2018 ◽  
Vol 10 (9) ◽  
pp. 54
Author(s):  
Rakhi Singh ◽  
Seema Sharma ◽  
Deepak Tandon

The Indian economy is one of the fastest growing economies in the world today. In line with global trade trends, the Indian export sector has been growing and contributing significantly to the economy. Given its export structure, India is well positioned to benefit from structural changes in technology and the emerging forces of globalization. The Indian economy has shown remarkable progress in foreign trade since the introduction of economic reforms in 1991. The European Union (EU) is a very important trading partner of India. Trade volumes between India and the EU have improved remarkably in the last decade and a half. After starting out at a relatively low level in the 1990s, trade volumes, both Indian exports to the EU and Indian imports from the EU, started to increase most noticeably after 2001. The use of non-tariff measures (NTMs) as a means of protection has attracted much attention following the reduction of tariffs in world trade. India, even as a strategic partner of the EU, faces many NTMs on its exports. Past studies have established a link between the incidence of NTMs imposed by a home country and the income level of the foreign country. The interplay between the incidence of NTMs and GDP remains largely unexplored in the context of the India-EU trade relationship. This paper tries to fill this gap and to show the importance of the study for policy decisions. The authors use UNCTAD's NTM data and Spearman's correlation coefficient to measure the strength and direction of the relationship between the incidence of NTMs and the per capita GDP of the exporting country (India). The authors use different permutations of data from the main data set (1994-95 to 2016-17) and conclude that the incidence of NTMs on Indian exports to the EU is positively correlated with the per capita GDP of India.
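Spearman's rank correlation, the measure used in the study, can be computed with scipy. The series below are illustrative stand-ins — the NTM incidence counts are invented and the per-capita GDP figures are only approximate — not the paper's actual UNCTAD data:

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative stand-ins: yearly NTM incidence on Indian exports to the EU
# (invented) and India's per-capita GDP in current USD (approximate).
gdp_per_capita = np.array([443, 452, 471, 546, 628, 715, 793, 1028,
                           999, 1102, 1358, 1458, 1444, 1450, 1574,
                           1606, 1733], dtype=float)
ntm_incidence = np.array([12, 14, 13, 17, 19, 22, 21, 27,
                          26, 29, 33, 36, 34, 37, 41, 40, 45], dtype=float)

# Spearman's rho is the Pearson correlation of the ranks, so it captures
# monotone but possibly non-linear relationships and is robust to outliers.
rho, p_value = spearmanr(ntm_incidence, gdp_per_capita)
print(round(rho, 3))
```

A strongly positive rho on such near-monotone series corresponds to the paper's finding of a positive relationship between NTM incidence and per-capita GDP.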


2011 ◽  
Vol 16 (4) ◽  
pp. 488-504 ◽  
Author(s):  
Pavel Stefanovič ◽  
Olga Kurasova

In this article, an additional visualization of self-organizing maps (SOM) is investigated. The main objective of self-organizing maps is data clustering and its graphical presentation. The SOM visualization capabilities of four systems (NeNet, SOM-Toolbox, Databionic ESOM and Viscovery SOMine) have been investigated; each system has its own additional tools for visualizing SOM. A comparative analysis has been made for two data sets: Fisher's iris data set and the economic indices of the European Union countries. A new SOM system is also introduced and researched. The system has a specific visualization tool that is missing in the other SOM systems: it helps to see, for each SOM cell, the proportions of data items of different classes that fall into it.
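For readers unfamiliar with the method, a minimal numpy SOM sketch (not any of the systems compared in the article) illustrates the clustering step that all such visualizations build on: samples are mapped to their best-matching units, so well-separated classes land in different map cells. The map size, decay schedules, and toy data are illustrative assumptions.

```python
import numpy as np

def train_som(data, rows, cols, n_iter=2000, lr0=0.5, sigma0=None, seed=0):
    """Train a small self-organizing map: each iteration pulls the
    best-matching unit (BMU) and its grid neighbours towards a sample."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
    sigma0 = sigma0 or max(rows, cols) / 2.0
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # BMU: the unit whose weight vector is closest to the sample.
        d = ((weights - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # Exponentially decaying learning rate and neighbourhood radius.
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighbourhood on the map grid around the BMU.
        g2 = ((grid - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-g2 / (2 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)
    return weights

def winners(data, weights):
    """Map each data item to its BMU cell (the basis of SOM visualization)."""
    d = ((weights[None] - data[:, None, None, :]) ** 2).sum(axis=3)
    return [np.unravel_index(d[i].argmin(), d[i].shape) for i in range(len(data))]

# Two well-separated toy classes should land in different map cells.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.02, (30, 3)),
                  rng.normal(0.8, 0.02, (30, 3))])
w = train_som(data, rows=4, cols=4)
cells = winners(data, w)
print(cells[0], cells[-1])
```

Counting, per cell, how many items of each class land there yields exactly the class-proportion view that the new system visualizes.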

