The challenge of measuring poverty and inequality: a comparative analysis of the main indicators

2018 ◽  
Vol 7 (1) ◽  
pp. 24
Author(s):  
Juan Ignacio Martín-Legendre

This paper reviews the main available indicators for measuring poverty and income inequality, examining their properties and suitability for different types of economic analysis, and providing real-world data to illustrate how they work. Although some of these metrics (such as the Gini coefficient) are the most frequently used for this purpose, it is crucial for researchers and policy-makers to consider alternative methods that offer complementary information and allow a better understanding of these issues at all levels.
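As a concrete illustration of the most widely used of these metrics, here is a minimal sketch of computing the Gini coefficient from a sample of incomes. The closed form over a sorted sample is the standard one; the function name and figures are illustrative, not taken from the paper:

```python
def gini(incomes):
    """Gini coefficient of a sample of incomes.

    Uses the standard closed form over a sorted sample, equivalent to
    the relative mean absolute difference:
        G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))
    Returns 0 for perfect equality, approaching 1 for maximal inequality.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        raise ValueError("need a non-empty sample with positive total income")
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))    # 0.75 (one person holds everything: (n-1)/n)
```

The extreme cases make the metric's range concrete: equal incomes yield 0, while full concentration in one person yields (n-1)/n, which tends to 1 as the sample grows.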

Author(s):  
Martyna Daria Swiatczak

Abstract This study assesses the extent to which the two main Configurational Comparative Methods (CCMs), i.e. Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA), produce different models. It further explains how this non-identity is due to the different algorithms upon which the two methods are based, namely QCA’s Quine–McCluskey algorithm and the CNA algorithm. I offer an overview of the fundamental differences between QCA and CNA and demonstrate both underlying algorithms on three data sets of ascending proximity to real-world data. Subsequent simulation studies in scenarios of varying sample sizes and degrees of noise in the data show high overall ratios of non-identity between the QCA parsimonious solution and the CNA atomic solution across varying analytical choices, i.e. different consistency and coverage threshold values and ways of deriving QCA’s parsimonious solution. Clarity on the contrasts between the two methods is intended to enable scholars to make more informed decisions about their methodological approaches, enhance their understanding of what happens behind the results generated by the software packages, and better navigate the interpretation of results. Clarity on the non-identity between the underlying algorithms and its consequences for the results is intended to provide a basis for a methodological discussion about which method, and which variants thereof, are more successful in deriving which search target.
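For readers unfamiliar with the analytical choices mentioned above, the consistency and coverage thresholds follow the standard fuzzy-set QCA definitions, which can be sketched as follows. The membership scores are invented for illustration; this is the textbook formulation, not the paper's simulation code:

```python
def consistency(x, y):
    """Fuzzy-set consistency of 'condition X is sufficient for outcome Y':
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Fuzzy-set coverage of Y by X: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Invented membership scores for a candidate condition X and outcome Y
x = [0.9, 0.8, 0.2, 0.7]
y = [1.0, 0.6, 0.3, 0.9]
print(round(consistency(x, y), 3))  # 0.923
print(round(coverage(x, y), 3))     # 0.857
```

A configuration is typically retained only if its consistency clears a chosen threshold (often 0.8 or higher), which is exactly the kind of analytical choice the simulations above vary.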


2021 ◽  
Author(s):  
Eric Shuman ◽  
Siwar Hasan-Aslih

The murder of George Floyd ignited one of the largest mass mobilizations in US history, including both non-violent and violent BlackLivesMatter protests in the summer of 2020. Many have since asked: did the violence within the largely non-violent movement help or hurt its goals? To answer this question, we used real-world data (ACLED, 2020) on the location of all BlackLivesMatter protests during the summer of 2020 to identify US counties that featured no protests, only nonviolent protests, or both nonviolent and violent protests. We then combined these data with survey data (N = 494, Study 1), data from the Cooperative Congressional Election Study (N = 43,924, Study 2A), and data from Project Implicit (N = 180,480, Study 2B) to examine how exposure to different types of protest (i.e. living in a county where they occurred) affected both support for the key policy goals of the movement and prejudice towards Black Americans. We found that the 2020 BLM protests had no impact on prejudice among either liberals or conservatives. However, they were able, even when violent, to increase support for BlackLivesMatter’s key policy goals among conservatives living in relatively liberal areas. As such, this research suggests that violent, disruptive actions within a broader non-violent movement may affect those likely to be resistant to the movement. We connect these findings to the notion of disruptive action, which explains why these effects materialize not in reduced prejudice but in greater support for important policy goals of the movement.


2021 ◽  
pp. 1-41
Author(s):  
Artem Shevlyakov ◽  
Dimitri Nikogosov ◽  
Leigh-Ann Stewart ◽  
Miguel Toribio-Mateas

Abstract Objective: To obtain a set of reference values for the intake of different types of dietary fibre in a healthy UK population. Design: This descriptive cross-sectional study used UK Biobank data to estimate the dietary patterns of healthy individuals. Data on the fibre content of different foods were used to calculate the reference values, which were then calibrated using real-world data on total fibre intake. Setting: UK Biobank is a prospective cohort study of over 500,000 individuals from across the United Kingdom, with participants aged between 40 and 69 years. Participants: UK Biobank contains information on over 500,000 participants. This study was performed using data on 19,990 individuals (6,941 men, 13,049 women) who passed stringent quality control and filtering procedures and had reported above-zero intake of the analysed foods. Results: A set of reference values for the intake of six different types of soluble and insoluble fibre (cellulose, hemicelluloses, pectin and lignin), including the corresponding totals, was developed and calibrated using real-world data. Conclusions: To our knowledge, this is the first study to establish specific reference values for the intake of different types of dietary fibre. Different types of fibre are well known to exert numerous effects, both directly and through modulation of the microbiota. Conceivably, a deficit or excess in the intake of specific types of dietary fibre may detrimentally affect human health. Filling this knowledge gap opens new avenues for research and discussion in studies of nutrition and the microbiota, and offers valuable tools for practitioners worldwide.
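The abstract does not spell out the calibration procedure, so the following is only a hypothetical sketch under a simple proportional-scaling assumption: per-type estimates are rescaled so that their sum matches an observed real-world total intake. All names and figures are invented for illustration:

```python
def calibrate(estimated_by_type, observed_total):
    """Proportionally rescale per-type fibre estimates (grams/day) so
    they sum to an observed real-world total intake.

    Illustrative assumption only: the study does not publish its
    calibration procedure in the abstract.
    """
    estimated_total = sum(estimated_by_type.values())
    factor = observed_total / estimated_total
    return {fibre: grams * factor for fibre, grams in estimated_by_type.items()}

# Invented per-type estimates summing to 15 g/day, calibrated to an
# observed total of 18 g/day (each value is scaled by 18/15 = 1.2).
estimates = {"cellulose": 4.0, "hemicelluloses": 6.0, "pectin": 3.0, "lignin": 2.0}
print(calibrate(estimates, observed_total=18.0))
```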


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tinofirei Museba ◽  
Fulufhelo Nelwamondo ◽  
Khmaies Ouahada

Beyond applying machine learning predictive models to static tasks, a significant body of research applies them to streaming environments that incur concept drift. With the prevalence of streaming real-world applications in which the underlying data distribution changes, the need for applications that can adapt to evolving, time-varying dynamic environments can hardly be overstated. Dynamic environments are nonstationary: both the data and the target variables to be predicted by the learning algorithm evolve with time, a phenomenon known as concept drift. Most work on handling concept drift focuses on updating the prediction model so that it can recover from drift, while little effort has been dedicated to formulating a learning system capable of learning different types of drifting concepts at any time with minimal overhead. This work proposes a novel evolving data stream classifier called the Adaptive Diversified Ensemble Selection Classifier (ADES), which significantly optimizes adaptation to different types of concept drift at any time and improves convergence to new concepts by exploiting different amounts of ensemble diversity. The ADES algorithm generates diverse base classifiers, optimizing the margin distribution to exploit ensemble diversity and formulate an ensemble classifier that generalizes well to unseen instances and recovers quickly from different types of concept drift. Empirical experiments on both artificial and real-world data streams demonstrate that ADES can adapt to different types of drift at any given time. The prediction performance of ADES is compared to that of three other ensemble classifiers designed to handle concept drift, using both artificial and real-world data streams. The comparative evaluation demonstrates the ability of ADES to handle different types of concept drift.
The experimental results, including statistical tests, indicate performance comparable to other algorithms designed to handle concept drift and confirm its significance and effectiveness.
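ADES itself is specified in the paper rather than here, but the general mechanism by which a weighted ensemble recovers from concept drift can be sketched with a minimal weighted-majority skeleton: members that err under the current concept are multiplicatively down-weighted, so the vote shifts toward members that track the new concept. All names and parameters are illustrative:

```python
import random

class WeightedMajorityEnsemble:
    """Drift-aware ensemble skeleton (illustrative, not the ADES
    algorithm itself): each base learner keeps a weight that is
    multiplicatively decayed whenever it errs, so the ensemble's
    vote shifts toward members that track the current concept."""

    def __init__(self, experts, beta=0.7):
        self.experts = experts              # callables: x -> 0/1 prediction
        self.weights = [1.0] * len(experts)
        self.beta = beta                    # decay applied to wrong experts

    def predict(self, x):
        vote_for_1 = sum(w for w, e in zip(self.weights, self.experts) if e(x) == 1)
        return 1 if vote_for_1 >= sum(self.weights) / 2 else 0

    def update(self, x, y):
        for i, e in enumerate(self.experts):
            if e(x) != y:
                self.weights[i] *= self.beta

# Two opposite "experts"; the true concept flips at t = 50 (a drift),
# and the ensemble recovers by reweighting toward the newly correct one.
experts = [lambda x: int(x > 0.5), lambda x: int(x <= 0.5)]
ens = WeightedMajorityEnsemble(experts)
random.seed(0)
for t in range(150):
    x = random.random()
    y = int(x > 0.5) if t < 50 else int(x <= 0.5)  # drift at t = 50
    ens.update(x, y)
print(ens.weights[1] > ens.weights[0])  # True: expert 2 now dominates
print(ens.predict(0.9))                 # 0, matching the post-drift concept
```

The multiplicative decay is what gives fast recovery: a learner wrong under the new concept loses weight exponentially, so convergence to the new concept takes only a modest number of examples.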


2019 ◽  
Author(s):  
Paul G Curran ◽  
Alexander James Denison

It is an accepted fact in survey research that not all participants will respond to items with the thoughtful introspection required to produce a valid response. When participants respond without sufficient effort, their responses are considered careless, and these responses represent error. Many methods exist for detecting these individuals (Huang, Curran, Keeney, Poposki, & DeShon, 2012; Johnson, 2005; Meade & Craig, 2012), and several techniques exist for testing their effectiveness. These techniques often involve generating careless responses through some process, then attempting to detect those known cases in otherwise normal data. One method of producing these data is the simulation of data with varying degrees of randomness. Despite the common use of this technique, we know little about how it actually maps onto real-world data. The purpose of this paper is to compare simulated data with real-world data on commonly used careless-response metrics. Results suggest that care should be applied when simulating data, and that the decisions researchers make when generating these data can have large effects on the apparent effectiveness of these metrics. Despite these potential limitations, it appears that, with proper use and continued research, simulation techniques can still be quite valuable.
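As an illustration of the kind of metric involved, the longstring index (Johnson, 2005) counts the longest run of identical consecutive responses. A minimal sketch with invented data shows why simulation choices matter: uniformly random simulated responses and straightlining look completely different on this metric, even though both are forms of careless responding:

```python
import random

def longstring(responses):
    """Longest run of identical consecutive responses, a common
    careless-responding screen (Johnson, 2005)."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)
# Simulated 20-item Likert (1-5) responses: a uniformly random
# "careless" responder versus a straightlining one.
careless = [random.randint(1, 5) for _ in range(20)]
straightliner = [4] * 20
print(longstring(careless))       # typically small for uniform-random data
print(longstring(straightliner))  # 20
```

A detector tuned to catch straightlining would miss carelessness simulated as uniform randomness entirely, which is exactly the kind of mismatch between simulation and real-world behaviour the paper examines.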


2021 ◽  
Vol 14 (11) ◽  
pp. 2283-2295
Author(s):  
Teddy Cunningham ◽  
Graham Cormode ◽  
Hakan Ferhatosmanoglu ◽  
Divesh Srivastava

Sharing trajectories is beneficial for many real-world applications, such as managing disease spread through contact tracing and tailoring public services to a population's travel patterns. However, public concern over privacy and data protection has limited the extent to which this data is shared. Local differential privacy enables data sharing in which users share a perturbed version of their data, but existing mechanisms fail to incorporate user-independent public knowledge (e.g., business locations and opening times, public transport schedules, geo-located tweets). This limitation makes mechanisms too restrictive, gives unrealistic outputs, and ultimately leads to low practical utility. To address these concerns, we propose a local differentially private mechanism that is based on perturbing hierarchically-structured, overlapping n-grams (i.e., contiguous subsequences of length n) of trajectory data. Our mechanism uses a multi-dimensional hierarchy over publicly available external knowledge of real-world places of interest to improve the realism and utility of the perturbed, shared trajectories. Importantly, including real-world public data does not negatively affect privacy or efficiency. Our experiments, using real-world data and a range of queries, each with real-world application analogues, demonstrate the superiority of our approach over a range of alternative methods.
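The paper's hierarchical mechanism is more sophisticated than this, but its two named building blocks, overlapping n-grams and local perturbation, can be sketched with the textbook k-ary randomized-response primitive. The locations, domain, and privacy parameter below are all illustrative:

```python
import math
import random

def ngrams(traj, n):
    """Overlapping n-grams: contiguous subsequences of length n."""
    return [tuple(traj[i:i + n]) for i in range(len(traj) - n + 1)]

def k_rr(value, domain, epsilon):
    """k-ary randomized response: report the true value with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly random
    other value from the domain. This is the textbook local-DP
    primitive, not the paper's hierarchical n-gram mechanism."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    return random.choice([v for v in domain if v != value])

random.seed(1)
traj = ["home", "bus", "office", "cafe", "office"]
grams = ngrams(traj, 2)
# Candidate domain: observed 2-grams plus one extra plausible transition.
domain = sorted(set(grams) | {("cafe", "home")})
perturbed = [k_rr(g, domain, epsilon=2.0) for g in grams]
print(grams)
print(perturbed)
```

Each shared 2-gram is then individually plausible, and an aggregator can debias frequency estimates over many users because the perturbation probabilities are known; restricting the domain with public knowledge (as the paper's hierarchy does) is what keeps outputs realistic.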

