New Evidence on Eastern Europe's Pollution Progress

2003 ◽  
Vol 3 (1) ◽  
Author(s):  
Matthew E Kahn

Abstract Under communism, Eastern Europe's cities were significantly more polluted than their Western European counterparts. An unintended consequence of communism's decline has been an improvement in urban environmental quality. This paper uses several new data sets to measure these gains. National-level data are used to document the extent of convergence across nations in sulfur dioxide and carbon dioxide emissions. A panel data set from the Czech Republic, Hungary and Poland shows that ambient sulfur dioxide levels have fallen because of both composition and technique effects. The incidence of this local public good improvement is analyzed.

Author(s):  
Victor H Aguiar ◽  
Nail Kashaev

Abstract A long-standing question about consumer behaviour is whether individuals’ observed purchase decisions satisfy the revealed preference (RP) axioms of the utility maximization theory (UMT). Researchers using survey or experimental panel data sets on prices and consumption to answer this question face the well-known problem of measurement error. We show that ignoring measurement error in the RP approach may lead to overrejection of the UMT. To solve this problem, we propose a new statistical RP framework for consumption panel data sets that allows for testing the UMT in the presence of measurement error. Our test is applicable to all consumer models that can be characterized by their first-order conditions. Our approach is non-parametric, allows for unrestricted heterogeneity in preferences and requires only a centring condition on measurement error. We develop two applications that provide new evidence about the UMT. First, we find support in a survey data set for the dynamic and time-consistent UMT in single-individual households, in the presence of nonclassical measurement error in consumption. In the second application, we cannot reject the static UMT in a widely used experimental data set in which measurement error in prices is assumed to be the result of price misperception due to the experimental design. The first finding stands in contrast to the conclusions drawn from the deterministic RP test of Browning (1989, International Economic Review, 979–992). The second finding reverses the conclusions drawn from the deterministic RP test of Afriat (1967, International Economic Review, 8, 67–77) and Varian (1982, Econometrica, 945–973).
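The deterministic baseline that this stochastic framework relaxes is the classic GARP check in the spirit of Afriat (1967) and Varian (1982). As a rough illustration of that deterministic test only (a generic sketch, not the authors' measurement-error-robust procedure), a GARP check can be written in a few lines:

```python
import numpy as np

def satisfies_garp(prices, quantities):
    """Deterministic GARP check (Afriat/Varian style); it ignores measurement
    error, which is exactly the limitation the stochastic RP framework addresses.

    prices, quantities: (T, K) arrays, one row per observation.
    Returns True if the data admit a utility-maximizing rationalization.
    """
    T = prices.shape[0]
    expenditure = prices @ quantities.T            # cost of bundle j at prices of obs i
    own = np.diag(expenditure)                     # own expenditure p_i . x_i
    direct = expenditure <= own[:, None] + 1e-12   # x_i directly revealed preferred to x_j
    # transitive closure of the revealed-preference relation (Warshall)
    closure = direct.copy()
    for k in range(T):
        closure |= closure[:, [k]] & closure[[k], :]
    # violation: x_i revealed preferred to x_j, yet x_i is strictly cheaper at p_j
    violation = closure & (expenditure.T < own[None, :] - 1e-12)
    return not violation.any()

# toy usage with two observations
p = np.array([[1.0, 2.0], [2.0, 1.0]])
x = np.array([[2.0, 1.0], [1.0, 2.0]])
print(satisfies_garp(p, x))  # True: these choices are consistent with the UMT
```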


Author(s):  
Jack Zwanziger

One of the objectives of managed care organizations (MCOs) has been to reduce the rate of growth of health care expenditures, including that of physician fees. Yet, due to a lack of data, no one has been able to determine whether MCOs have been successful in encouraging the growth of price competition in the market for physician services in order to slow the growth in physician fees. This study uses a unique, national-level data set to determine what factors influenced the physician fees that MCOs negotiated during the 1990–92 period. The most influential characteristics were physician supply and managed care penetration, which suggest that the introduction of competition into the health care market was an effective force in reducing physician fees.


2020 ◽  
pp. 088832542094109
Author(s):  
Markéta Klásková ◽  
Ondřej Císař

This article belongs to the special cluster, “Think Tanks in Central and Eastern Europe”, guest-edited by Katarzyna Jezierska and Serena Giusti. What is the role of think tanks in the Europeanization of national public spheres? To address this question, our paper explores the performance of think tanks in the immigration debate in the Czech Republic. Employing political claims analysis (PCA) and treating think tanks as boundary organizations active in multiple fields, we compare the levels of Europeanization of political claims made by think tanks with those of other actors. Our data set includes 2,374 political claims made on broadcast public TV in the period from April 2015 to March 2016. According to our quantitative data, Czech think tanks chose the discursive strategy of Europeanization more often than any other actor represented. Thus, think tanks have the potential to support Europeanization of national public spheres. However, their representation in media coverage is relatively low. Our results also demonstrate that think tanks should be treated as sui generis organizations, since their strategy in the public sphere deviates from that of other civil society organizations. Think tanks Europeanized their claims-making, but other actors largely remained at the national level while discussing the refugee crisis.


2000 ◽  
pp. 183
Author(s):  
Paolo Martano

The estimation of joint values of both the roughness length z0 and the displacement height d is considered in the context of the Monin–Obukhov similarity law for the wind-speed profile. For single-level data sets from one sonic anemometer (i.e., wind velocity, Reynolds stress and sensible heat flux measured at one height), it is shown that this problem can be reduced to a simpler least-squares procedure in one variable only. The procedure is carried out over a suitable function of the data representing the relative uncertainty of the roughness length, σ_z0/z0, which is minimized with respect to d, giving a direct estimate of d, z0 and their statistical uncertainty. The scheme is tested against a field-experiment data set.
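A minimal numerical sketch of the one-variable reduction described above, assuming neutral stratification (the stability correction ψ_m is omitted for brevity) and standard NumPy/SciPy routines; the variable names and the use of std/mean as the relative-uncertainty measure are illustrative, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

KAPPA = 0.4  # von Karman constant

def estimate_d_z0(U, u_star, z, d_max=None):
    """Estimate displacement height d and roughness length z0 from
    single-level sonic data by minimizing the relative spread of the
    z0 values implied by the (neutral) log law, one record at a time.

    U, u_star : arrays of mean wind speed and friction velocity at height z.
    """
    if d_max is None:
        d_max = 0.9 * z

    def relative_spread(d):
        # roughness length implied by each record under the log law
        ln_z0 = np.log(z - d) - KAPPA * U / u_star
        z0 = np.exp(ln_z0)
        return np.std(z0) / np.mean(z0)  # proxy for sigma_z0 / z0

    res = minimize_scalar(relative_spread, bounds=(0.0, d_max), method="bounded")
    d_hat = res.x
    z0_hat = np.exp(np.mean(np.log(z - d_hat) - KAPPA * U / u_star))
    return d_hat, z0_hat
```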


2014 ◽  
Vol 7 (1) ◽  
pp. 73-95 ◽  
Author(s):  
Ishita Chatterjee ◽  
Ranjan Ray

Purpose – There have been very few attempts in the economics literature to empirically study the link between criminal and corrupt behaviour, owing to the lack of data sets containing simultaneous information on both types of illegitimate activity. The paper aims to discuss these issues. Design/methodology/approach – The present study uses a large cross-country data set containing individual responses to questions on crime and corruption, along with information on the respondents' characteristics. These micro-level data are supplemented by country-level macro and institutional indicators. A methodological contribution of this study is the estimation of an ordered probit model based on outcomes defined as combinations of crime and bribe victimisation. Findings – The authors find that: a crime victim is more likely to face bribe demands; males are more likely to be victims of corruption, while females are more likely to be victims of serious crime; older individuals and those living in smaller towns are less exposed to crime and corruption; higher income and education increase the likelihood that crime and bribe victimisation are reported; and a stronger legal system and a happier society reduce both crime and corruption. However, the authors find no evidence of a strong and uniformly negative impact of either crime or corruption on a country's growth rate. Originality/value – This paper is, to the authors' knowledge, the first in the literature to explore the nexus between crime and corruption, their magnitudes, determinants and their effects on growth rates.
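For concreteness, an ordered probit on a combined crime/bribe outcome can be set up as in the hedged sketch below, using statsmodels; the covariate names, the 0–3 coding of the outcome and the synthetic data are assumptions for illustration only, not the authors' specification:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1000
# hypothetical respondent characteristics (placeholders, not the survey's variables)
X = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "male": rng.integers(0, 2, n),
    "small_town": rng.integers(0, 2, n),
    "log_income": rng.normal(0, 1, n),
})
# combined outcome: 0 = neither, 1 = bribe only, 2 = crime only, 3 = both
y = pd.Series(pd.Categorical(rng.integers(0, 4, n),
                             categories=[0, 1, 2, 3], ordered=True))

# ordered probit: thresholds replace the intercept, so no constant is added
model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```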


2021 ◽  
Vol 11 (2) ◽  
pp. 518
Author(s):  
Jiyuan Shi ◽  
Ji Dang ◽  
Mida Cui ◽  
Rongzhi Zuo ◽  
Kazuhiro Shimizu ◽  
...  

In this research, 200 corrosion images of steel and 500 crack images of rubber bearings were collected and manually labelled to build the data sets. The two data sets were then used to train VGG-Unet models for damage segmentation in two ways, which differ in how the input images are prepared. Squashing Segmentation feeds the high-resolution images, squashed to the model input size, directly into the VGG-Unet model, while Cropping Segmentation uses 224 × 224 crops as input images. Because the proportion of damage pixels differs between the data sets, the two methods produce quite different results: for large-scale damage (such as corrosion), Cropping Segmentation performs better, while for minor damage (such as cracks) the result is the opposite. The main reason is the gap in the concentration of valid (damage) pixels in the data set. To improve crack segmentation based on Cropping Segmentation, a Background Data Drop Rate (BDDR) is adopted to reduce the number of background images and thereby control the proportion of damage pixels in the data set at pixel level. The ratio of damage pixels in the data set can be adjusted by choosing different values of BDDR. In testing, Cropping Segmentation reaches its highest accuracy with BDDR set to 0.8.
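One possible reading of the BDDR step is sketched below: background-only 224 × 224 crops are discarded with probability equal to the BDDR, which raises the share of damage pixels in the training set. The tiling scheme and function name are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def crop_with_bddr(image, mask, tile=224, bddr=0.8, rng=None):
    """Cropping Segmentation preprocessing sketch.

    Splits an image/mask pair into non-overlapping tile x tile patches and
    randomly drops background-only patches (no damage pixels in the mask)
    with probability `bddr`, so damage pixels make up a larger share of the
    training set.  Padding/overlap details of the original work are omitted.
    """
    if rng is None:
        rng = np.random.default_rng()
    patches = []
    h, w = mask.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            img_p = image[y:y + tile, x:x + tile]
            msk_p = mask[y:y + tile, x:x + tile]
            if msk_p.sum() == 0 and rng.random() < bddr:
                continue  # drop this background-only patch
            patches.append((img_p, msk_p))
    return patches
```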


Data pre-processing is the process of transforming raw data into a useful data set. It is one of the most important phases of any machine learning project, because the quality and efficiency of a model depend directly on the data set: if this step is skipped and a model is trained on data containing missing values, the resulting model will be inefficient and inconsistent. This paper describes a methodology for pre-processing data in a sequence of seven steps using powerful open-source Python machine learning libraries that support both supervised and unsupervised learning, such as pandas, a high-level data manipulation tool, and scikit-learn, which provides tools for model fitting, data pre-processing, model selection and many other utilities. The steps include importing the data set, dealing with missing values, handling categorical values, and so on. This analysis helps in cleaning and transforming data sets that are subsequently fed to a learning model, producing a more efficient machine learning model.
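A compact sketch of such a pipeline with pandas and scikit-learn is shown below; the file name, column names and the exact ordering of the seven steps are placeholders rather than the paper's specification:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# 1-2. import the data set (file and column names are placeholders)
df = pd.read_csv("data.csv")
X, y = df.drop(columns=["target"]), df["target"]

numeric_cols = X.select_dtypes(include="number").columns
categorical_cols = X.select_dtypes(exclude="number").columns

# 3-5. handle missing values, encode categorical features, scale numeric ones
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="mean")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

# 6. split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 7. fit the transformations on the training data only, then apply to both sets
X_train_ready = preprocess.fit_transform(X_train)
X_test_ready = preprocess.transform(X_test)
```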


2019 ◽  
Vol 8 (4) ◽  
pp. 10431-10435

Financial Inclusion (FI) is a global concern, and even developed economies are trying to achieve complete inclusion. The inclusion index is reported by many institutions and regulatory bodies considering only one or two key attributes, and hence the impact of other financial parameters is missed. Further, the reports display an aggregated value at the national level. Deciphering inclusion at the individual level will help in taking corrective measures and in designing new policies. This study aims to propose a decision rule, using techniques from data analytics, to segment the population into excluded and included. A consolidated weighted scoring method over four key financial attributes was used to identify the actual class. The C5.0 algorithm, which employs the technique of entropy or information gain, was applied to arrive at the decision rule. Surveyed data with 691 records was partitioned into training (80%) and test (20%) data sets. The classification accuracy over the test data set was found to be 100%. The findings of this study could be used by policymakers for individual estimates of FI scores and for prioritizing policies.
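As an illustration of the rule-induction step, the sketch below uses scikit-learn's entropy-criterion decision tree as a stand-in for C5.0 (which is typically run in R); the attribute names, file name and target coding are assumptions, not the surveyed data set's actual layout:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# hypothetical layout of the 691-record survey (placeholders, not the real columns)
df = pd.read_csv("fi_survey.csv")
X = df[["savings", "credit", "insurance", "remittance"]]  # four key financial attributes
y = df["included"]                                         # 1 = included, 0 = excluded

# 80/20 train/test split, as in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# entropy (information gain) criterion, analogous to C5.0's splitting rule
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))  # the induced decision rule
```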


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform became apparent varied between 17 and 26, or the cycle was not found at all in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
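The ACF diagnostic described in point 2 can be reproduced on any yearly count series in a few lines; the sketch below uses statsmodels, with made-up counts standing in for the Silwood Park and Rothamsted series:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# synthetic yearly trap counts (placeholders, not the published data)
counts = np.array([120, 35, 90, 40, 110, 30, 80, 55, 95, 42,
                   100, 38, 85, 60, 70, 45, 105, 33, 88, 50], dtype=float)

r = acf(counts, nlags=5, fft=False)
threshold = 1.96 / np.sqrt(len(counts))  # approximate 5% significance band
for lag in range(1, 6):
    flag = "significant" if abs(r[lag]) > threshold else "n.s."
    print(f"lag {lag}: r = {r[lag]:+.2f} ({flag})")
# A significant negative lag-1 value followed by a smaller, non-significant
# positive lag-2 value is the damped 2-year-cycle signature described above.
```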


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that 9 CoMFA models were built for each data set. The results obtained show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Result & Conclusion: Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour map information for both fields.

