Statistical Evaluation of Item Nonresponse Methods Using the World Bank’s 2015 Philippines Enterprise Survey

Author(s):  
Madeline D. Cabauatan et al.

The main objective of the study was to evaluate item nonresponse procedures through a simulation study of different nonresponse levels or missing rates. The simulation explored how each procedure performs under a variety of circumstances and investigated the behaviour of methods suggested for item nonresponse under various conditions and variable trends. The imputation methods considered were cell mean imputation, random hot deck, nearest neighbor, and simple regression. For the purpose of this study, the researcher evaluated methods for imputing missing data on the number of workers and the total cost of labor per establishment from the World Bank’s 2015 Enterprise Survey for the Philippines; these variables are among the major indicators for measuring productive labor and decent work in the country. The performance of the imputation techniques for item nonresponse was evaluated in terms of bias (accuracy) and coefficient of variation (precision). Based on the results, cell mean imputation was the most appropriate method for imputing missing values for the total number of workers and the total cost of labor per establishment. Since the study was limited to the variables cited, it is recommended to explore other labor indicators. Moreover, exploring other choices of clustering groups is highly recommended, as the clustering groups have a great effect on the resulting imputation estimates. It is also recommended to explore other imputation techniques, such as multiple regression, and other parametric models for nonresponse, such as Bayes estimation. For regression-based imputation, since the study used only the cluster groupings for estimation, it is highly recommended to use other variables that might be related to the variable of interest to verify the results of this study.
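As a hedged illustration of the best-performing method here, the following minimal sketch implements cell mean imputation with pandas; the `sector` grouping column and the toy values are assumptions for illustration, not details taken from the Enterprise Survey.

```python
import numpy as np
import pandas as pd

# Toy establishment data: 'sector' defines the imputation cells;
# 'workers' has item nonresponse (NaN).
df = pd.DataFrame({
    "sector":  ["mfg", "mfg", "mfg", "svc", "svc", "svc"],
    "workers": [10.0, np.nan, 14.0, 50.0, 55.0, np.nan],
})

# Cell mean imputation: replace each missing value with the mean
# of the observed values in its own cell.
df["workers_imp"] = df.groupby("sector")["workers"].transform(
    lambda s: s.fillna(s.mean())
)
print(df)
```

Bias and coefficient of variation would then be computed by repeating this over simulated nonresponse draws and comparing the imputed estimates with the full-data values.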

Author(s):  
Wisam A. Mahmood ◽  
Mohammed S. Rashid ◽  
Teaba Wala Aldeen

Missing values commonly occur in medical research and can introduce substantial bias if neglected or poorly handled. Although several standard statistical methods for dealing with this challenge have been developed and are available, no single method yields uniformly credible estimates. When missing values occur in a dataset, the effective sample size is reduced and efficiency decreases. A number of imputation methods for handling missing values have been proposed in earlier scholarly work. Common methods include the complete case method, the Last Observation Carried Forward (LOCF) method, the Expectation-Maximization (EM) algorithm, Markov Chain Monte Carlo (MCMC), Mean Imputation (Mean), Hot Deck (HOT), Regression Imputation (Regress), K-nearest neighbor (KNN), K-means clustering, fuzzy K-means clustering, support vector machines, and the Multiple Imputation (MI) method. In the present paper, a simulation study investigates the efficacy of the above-mentioned imputation methods in a longitudinal data setting under missing completely at random (MCAR). Missingness was introduced in three scenarios: a low rate of 5% and higher rates of 30% and 50%. The simulation comparison showed that the LOCF method is more biased than the other methods in most situations.
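For concreteness, the sketch below (not from the paper) generates MCAR missingness at the three rates studied and applies LOCF with pandas; the data shape, distributions, and column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy longitudinal data: one row per subject, one column per visit.
data = pd.DataFrame(rng.normal(100, 15, size=(200, 5)),
                    columns=[f"visit_{t}" for t in range(5)])

def make_mcar(df: pd.DataFrame, rate: float) -> pd.DataFrame:
    """Set each cell to NaN independently with probability `rate` (MCAR)."""
    return df.mask(rng.random(df.shape) < rate)

for rate in (0.05, 0.30, 0.50):        # the three missingness levels studied
    incomplete = make_mcar(data, rate)
    locf = incomplete.ffill(axis=1)    # carry the last observation forward
    bias = locf.mean().mean() - data.mean().mean()
    print(f"rate={rate:.0%}  LOCF bias of the grand mean: {bias:+.3f}")
```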


2021 ◽  
Vol 29 (2) ◽  
Author(s):  
Nurul Azifah Mohd Pauzi ◽  
Yap Bee Wah ◽  
Sayang Mohd Deni ◽  
Siti Khatijah Nor Abdul Rahim ◽  
Suhartono

High-quality data are essential in every field of research for valid findings. Missing data are common in datasets and occur for a variety of reasons, such as incomplete responses, equipment malfunction, and data entry errors. Single and multiple imputation methods have been developed to impute missing values. This study investigated the performance of single imputation using the mean and of multiple imputation using Multivariate Imputation by Chained Equations (MICE) via a simulation study. Missing values under the MCAR (missing completely at random) mechanism were generated randomly at ten levels of missing rates (proportions of missing data), from 5% to 50%, for different sample sizes. Mean Square Error (MSE) was used to evaluate the performance of the imputation methods. The choice of imputation method depends on the data type: mean imputation is commonly used to impute missing values for a continuous variable, while the MICE method can handle both continuous and categorical variables. The simulation results indicate that group mean imputation (GMI) performed better than overall mean imputation (OMI) and MICE, with the lowest MSE for all sample sizes and missing rates. The MSE of OMI, GMI, and MICE increases as the missing rate increases, and MICE has the lowest performance (i.e. the highest MSE) when the missing rate exceeds 15%. Overall, GMI is superior to OMI and MICE for all missing rates and sample sizes under the MCAR mechanism. An application to a real dataset confirmed the findings of the simulation. These findings can help researchers and practitioners decide which imputation method is more suitable when data involve missing values.
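The OMI/GMI contrast is easy to reproduce in a hedged sketch: when group means differ, imputing the group mean recovers missing cells with a lower MSE than the grand mean. The grouping variable, missing rate, and distributions below are illustrative assumptions, not the study's design.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Toy data: four groups whose means differ substantially.
n = 1000
group = rng.integers(0, 4, n)
y_true = rng.normal(50 + 10 * group, 5)

# MCAR missingness at 20% (one of the ten levels studied).
missing = rng.random(n) < 0.20
df = pd.DataFrame({"group": group, "y": np.where(missing, np.nan, y_true)})

# Overall mean imputation (OMI): one grand mean for every missing cell.
omi = df["y"].fillna(df["y"].mean())

# Group mean imputation (GMI): the mean of the record's own group.
gmi = df.groupby("group")["y"].transform(lambda s: s.fillna(s.mean()))

for name, imp in [("OMI", omi), ("GMI", gmi)]:
    mse = np.mean((imp[missing].to_numpy() - y_true[missing]) ** 2)
    print(f"{name} MSE on imputed cells: {mse:.2f}")
```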


Author(s):  
Gerald Pratley

PRODUCTION ACTIVITY It was not so many years ago, it seems, that speaking of motion pictures from Asia meant Japanese films as represented by Akira Kurosawa and films from India made by Satyajit Ray. But time passes, and now we are impressed by and immersed in the flow of films from Hong Kong, Taiwan, China, South Korea, and the Philippines, with Japan a less significant player, and India and Pakistan more prolific than ever in making entertainment for the mass audience. No one has given it a name or described it as a "New Wave"; it is simply Asian Cinema -- the most exciting development in filmmaking taking place in the world today. In China everything is falling apart yet it manages to hold together, nothing works yet it keeps on going, nothing is ever finished or properly maintained, and yes, here time does wait for every man. But as far...


Author(s):  
Nicole Curato

Misery rarely features in conversations about democracy. And yet, in the past decades, global audiences have been increasingly confronted with spectacles of human pain. The world is more stressed, worried, and sad today than we have ever seen it, a Gallup poll finds. Does democracy stand a chance in a time of widespread suffering? Drawing on three years of field research among communities affected by Typhoon Haiyan in the Philippines, this book offers ethnographic portraits of how collective suffering, trauma, and dispossession enliven democratic action. It argues that emotional forms of communication create publics that assert voice and visibility at a time when attention is the scarcest resource, whilst also creating hierarchies of misery among suffering communities. Democracy in a Time of Misery investigates the ethical and political value of democracy in the most trying of times and reimagines how the virtues of deliberative practice can be valued in the context of widespread suffering.


2013 ◽  
Vol 1 (2) ◽  
pp. 145-175 ◽  
Author(s):  
Paul D. Hutchcroft

Abstract Previous decades' celebrations of the triumph of democracy were frequently based on mainstream analyses that displayed two major theoretical problems. First, conceptualisations of democracy based on ‘minimal pre-conditions’ commonly conflated the formal establishment of democratic structures with the far more complex and historically challenging creation of substantive democracy. Second, a deductive and generally ahistorical model asserting fixed stages of ‘democratic transition’ diverted attention from deeper and more substantive examination of struggles for power among social forces within specific historical contexts. By adhering to minimalist conceptions of democracy and simplistic models of democratic change, mainstream analysts quite often chose to overlook many underlying limitations and shortcomings of the democratic structures they were so keen to celebrate. Given more recent concerns over ‘authoritarian undertow’, those with the normative goal of deepening democracy must begin by deepening scholarly conceptualisations of the complex nature of democratic change. This analysis urges attention to the ‘source’ and ‘purpose’ of democracy. What were the goals of those who established democratic structures, and to what extent did these goals correspond to the ideals of democracy? In many cases throughout the world, ‘democracy’ has been used as a convenient and very effective means for both cloaking and legitimising a broad set of political, social, and economic inequalities. The need for deeper analysis is highlighted through attention to the historical character of democratic structures in the Philippines and Thailand, with particular attention to the sources and purposes of ‘democracy’ amid on-going struggles for power among social forces. In both countries, albeit coming forth from very different historical circumstances, democratic structures have been continually undermined by those with little commitment to the democratic ideal: oligarchic dominance in the Philippines, and military/bureaucratic/monarchic dominance in Thailand. Each country possesses its own set of challenges and opportunities for genuine democratic change, as those who seek to undermine elite hegemony and promote popular accountability operate in very different socio-economic and institutional contexts. Efforts to promote substantive democracy in each setting, therefore, must begin with careful historical analysis of the particular challenges that need to be addressed.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mar Rodríguez-Girondo ◽  
Niels van den Berg ◽  
Michel H. Hof ◽  
Marian Beekman ◽  
Eline Slagboom

Abstract Background Although human longevity tends to cluster within families, genetic studies on longevity have had limited success in identifying longevity loci. One of the main causes of this limited success is the selection of participants. Studies generally include sporadically long-lived individuals, i.e. individuals with the longevity phenotype but without a genetic predisposition for longevity. The inclusion of these individuals causes phenotype heterogeneity, which results in power reduction and bias. A way to avoid sporadically long-lived individuals and reduce sample heterogeneity is to include family history of longevity as a selection criterion using a longevity family score. A main challenge when developing family scores is the large differences in family size, caused by real differences in sibship sizes or by missing data. Methods We discussed the statistical properties of two existing longevity family scores, the Family Longevity Selection Score (FLoSS) and the Longevity Relatives Count (LRC) score, and we evaluated their performance in dealing with differential family size. We proposed a new longevity family score, the mLRC score, an extension of the LRC based on random effects modeling, which is robust to family size and missing values. The performance of the new mLRC as a selection tool was evaluated in an intensive simulation study and illustrated in a large real dataset, the Historical Sample of the Netherlands (HSN). Results Empirical scores such as the FLoSS and LRC cannot properly deal with differential family size and missing data. Our simulation study showed that the mLRC is not affected by family size and provides more accurate selections of long-lived families. The analysis of 1105 sibships from the Historical Sample of the Netherlands showed that selecting long-lived individuals based on the mLRC score predicts excess survival in the validation set better than selection based on the LRC score. Conclusions Model-based score systems such as the mLRC score help to reduce heterogeneity in the selection of long-lived families. The power of future studies into the genetics of longevity can likely be improved, and their bias reduced, by selecting long-lived cases using the mLRC.
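To illustrate why empirical scores are sensitive to family size, the hedged sketch below computes a simple LRC-style proportion (share of long-lived siblings) and a shrinkage-smoothed variant in the spirit of random-effects modeling. The exact FLoSS, LRC, and mLRC formulas come from the cited papers and are not reproduced here, so the long-lived threshold, the prior weight, and the toy sibships are all illustrative assumptions.

```python
import pandas as pd

# Toy sibship data: family id and a 0/1 flag for "long-lived"
# (e.g. survival into the top decile of the birth cohort).
sibs = pd.DataFrame({
    "family":     ["A"] * 2 + ["B"] * 10,
    "long_lived": [1, 1] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
})

g = sibs.groupby("family")["long_lived"].agg(k="sum", n="size")

# Empirical LRC-style score: raw proportion of long-lived siblings.
# A family of 2 with 2/2 hits outranks a family of 10 with 3/10,
# even though the small family carries far less evidence.
g["raw"] = g["k"] / g["n"]

# Shrinkage toward the overall rate, in the spirit of a random-effects
# model: estimates from small families are pulled toward the mean.
p0 = sibs["long_lived"].mean()   # overall long-lived rate
m = 5                            # prior weight (illustrative assumption)
g["shrunk"] = (g["k"] + m * p0) / (g["n"] + m)

print(g)
```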


Author(s):  
Steven Feldstein

This book documents the rise of digital repression—how governments are deploying new technologies to counter dissent, maintain political control, and ensure regime survival. The emergence of varied digital technologies is bringing new dimensions to political repression. At its core, the expanding use of digital repression reflects a fairly simple motivation: states are seeking and finding new ways to control, manipulate, surveil, or disrupt real or perceived threats. This book investigates the goals, motivations, and drivers of digital repression. It presents case studies in Thailand, the Philippines, and Ethiopia, highlighting how governments pursue digital strategies based on a range of factors: ongoing levels of repression, leadership, state capacity, and technological development. But a basic political motive—how to preserve and sustain political incumbency—remains a principal explanation for their use. The international community is already seeing glimpses of what the frontiers of repression look like, such as in China, where authorities have brought together mass surveillance, online censorship, DNA collection, and artificial intelligence to enforce their rule in Xinjiang. Many of these trends are going global. This has major implications for democratic governments and civil society activists around the world. The book also presents innovative ideas and strategies for civil society and opposition movements to respond to the digital autocratic wave.


2007 ◽  
Vol 28 (8) ◽  
pp. 1557-1576 ◽  
Author(s):  
Saturnino M Borras ◽  
Danilo Carranza ◽  
Jennifer C Franco

2019 ◽  
Author(s):  
Donna Coffman ◽  
Jiangxiu Zhou ◽  
Xizhen Cai

Abstract Background Causal effect estimation with observational data is subject to bias due to confounding, which is often controlled for using propensity scores. One unresolved issue in propensity score estimation is how to handle missing values in covariates. Methods Several approaches have been proposed for handling covariate missingness, including multiple imputation (MI), multiple imputation with missingness pattern (MIMP), and treatment mean imputation. However, there are other potentially useful approaches that have not been evaluated, including single imputation (SI) + prediction error (PE), SI + PE + parameter uncertainty (PU), and generalized boosted modeling (GBM), a nonparametric approach for estimating propensity scores in which missing values are handled automatically during estimation using a surrogate split method. To evaluate the performance of these approaches, a simulation study was conducted. Results The results suggest that SI+PE, SI+PE+PU, MI, and MIMP perform almost equally well and better than treatment mean imputation and GBM in terms of bias; however, MI and MIMP account for the additional uncertainty introduced by imputation. Conclusions Applying GBM to the incomplete data and relying on the surrogate split approach resulted in substantial bias. Imputation prior to implementing GBM is recommended.
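A hedged sketch of the impute-then-estimate workflow the paper evaluates: create several completed datasets, fit a propensity model on each, and pool the scores. scikit-learn's IterativeImputer with sample_posterior=True is used here as a stand-in for a full MI procedure, and the data-generating process is a toy assumption, so treat this as illustrative rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy observational data: two confounders, treatment assignment
# depending on them, and 30% MCAR missingness in the first confounder.
n = 500
X = rng.normal(size=(n, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))
X_miss = X.copy()
X_miss[rng.random(n) < 0.30, 0] = np.nan

# Multiple imputation: m completed datasets, one propensity model each.
m = 10
scores = np.zeros((m, n))
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    X_imp = imputer.fit_transform(X_miss)
    ps_model = LogisticRegression().fit(X_imp, treat)
    scores[i] = ps_model.predict_proba(X_imp)[:, 1]

# Pool the propensity scores across imputations.
ps = scores.mean(axis=0)
print("mean propensity score:", ps.mean().round(3))
```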


2021 ◽  

The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators for different segments of a country’s population. This entails detailed, granular data on population groups that extend beyond national trends and averages. The Asian Development Bank, in collaboration with the Philippine Statistics Authority and the World Data Lab, conducted a feasibility study to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in the Philippines. This report documents the results of the study, which capitalized on satellite imagery, geospatial data, and powerful machine-learning algorithms to augment conventional data collection and sample survey techniques.
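The report describes its approach only at a high level, but the generic workflow (training a model on satellite-derived features to predict survey-measured poverty for small areas) can be sketched as follows; the feature names, model choice, and synthetic data are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Toy stand-ins for satellite/geospatial features per small area:
# night-time light intensity, built-up share, road density.
n = 300
features = rng.random((n, 3))

# Synthetic "poverty rate" loosely driven by the features plus noise.
poverty = (0.6 - 0.3 * features[:, 0] - 0.2 * features[:, 1]
           + 0.05 * rng.normal(size=n))

# Fit and cross-validate a gradient-boosted model on the granular areas.
model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, features, poverty, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean().round(2))
```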

