Two-Step Estimation
Recently Published Documents

TOTAL DOCUMENTS: 109 (last five years: 30)
H-INDEX: 19 (last five years: 2)

2022, Vol. 9
Author(s): Efat Mohamadi, Mohammad Mehdi Kiani, Alireza Olyaeemanesh, Amirhossein Takian, Reza Majdzadeh, ...

Background: Measuring the efficiency and productivity of hospitals is a key tool for cost containment and management, which is essential for an efficient healthcare system.
Objective: The purpose of this study is to examine the effects of contextual factors on hospital efficiency in Iranian public hospitals.
Methods: This was a quantitative, descriptive-analytical study conducted in two steps. First, we measured the efficiency scores of teaching and non-teaching hospitals using the Data Envelopment Analysis (DEA) method. Second, we analyzed the relationship between the efficiency scores and contextual factors. We used median statistics (first and third quartiles) to describe the concentration and distribution of each variable in teaching and non-teaching hospitals, and the Wilcoxon test to compare them. The Spearman test was used to evaluate the correlation between hospital efficiency and contextual variables (province area, province population, population density, and the number of beds per hospital).
Results: On average, the efficiency score across the 31 provinces was 0.67 for non-teaching hospitals and 0.54 for teaching hospitals. There was no significant relationship between the efficiency score and the number of hospitals in the provinces for either hospital type (p = 0.1 and 0.15, respectively). The relationship between the number of hospitals and the population of the province was significant and positive. There was also a positive relationship between the number of beds and the area of the province for both teaching and non-teaching hospitals.
Conclusion: Multilateral factors influence the efficiency of hospitals; to address hospital inefficiency, multi-intervention packages focusing on both the hospital and its context should be developed. Attention to contextual factors and organizational architecture is necessary to improve efficiency.
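The two-step design above is straightforward to prototype. Below is a minimal sketch, with simulated placeholder data, of an input-oriented CCR DEA score computed as a linear program (step 1) followed by a Spearman correlation of the scores against a contextual variable (step 2); the input/output choices and the population-density variable are illustrative assumptions, not the study's exact specification.

```python
# Minimal sketch: input-oriented CCR DEA per hospital, then Spearman
# correlation of efficiency scores against a contextual variable.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency score of unit j0.
    X: (n, m) input matrix, Y: (n, s) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    # inputs:  sum_j lambda_j x_ji <= theta * x_{j0,i}
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # outputs: sum_j lambda_j y_jr >= y_{j0,r}
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]))
    return res.x[0]                              # theta in (0, 1]

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(31, 2))             # e.g. beds, staff (assumed)
Y = rng.uniform(1, 10, size=(31, 1))             # e.g. admissions (assumed)
scores = np.array([dea_ccr_input(X, Y, j) for j in range(31)])

density = rng.uniform(10, 500, size=31)          # hypothetical contextual var
rho, p = spearmanr(scores, density)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```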


2021
Author(s): Minkyung Kim, K. Sudhir, Kosuke Uetake

This paper broadens the focus of empirical research on salesforce management to include multitasking settings with multidimensional incentives, where salespeople have private information about customers. This allows us to ask novel substantive questions around multidimensional incentive design and job design while managing the costs and benefits of private information. To this end, the paper introduces the first structural model of a multitasking salesforce in response to multidimensional incentives. The model also accommodates (i) dynamic intertemporal tradeoffs in effort choice across the tasks and (ii) salesperson’s private information about customers. We apply our model in a rich empirical setting in microfinance and illustrate how to address various identification and estimation challenges. We extend two-step estimation methods used for unidimensional compensation plans by embedding a flexible machine learning (random forest) model in the first-stage multitasking policy function estimation within an iterative procedure that accounts for salesperson heterogeneity and private information. Estimates reveal two latent segments of salespeople—a hunter segment that is more efficient in loan acquisition and a farmer segment that is more efficient in loan collection. Counterfactuals reveal heterogeneous effects: hunters’ private information hurts the firm as they engage in adverse selection; farmers’ private information helps the firm as they use it to better collect loans. The payoff complementarity induced by multiplicative incentive aggregation softens adverse specialization by hunters relative to additive aggregation but hurts performance among farmers. Overall, task specialization in job design for hunters (acquisition) and farmers (collection) hurts the firm as adverse selection harm overwhelms efficiency gain. This paper was accepted by Duncan Simester, marketing.
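The estimation strategy is easier to see in miniature. The sketch below is a loose toy illustration, not the authors' model: step 1 fits a flexible first-stage policy function with a random forest, as the abstract describes; step 2 recovers a structural effort-cost parameter by minimum distance against that estimated policy. The payoff rule, variable names, and data are all assumptions.

```python
# Toy two-step structural estimation: random-forest policy function in
# step 1, minimum-distance recovery of a cost parameter in step 2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
state = rng.normal(size=(500, 3))          # salesperson/customer state (assumed)
true_cost = 2.0
effort = state[:, 0] / true_cost + rng.normal(scale=0.1, size=500)

# Step 1: nonparametric policy function effort = g(state)
policy = RandomForestRegressor(n_estimators=200, random_state=0)
policy.fit(state, effort)
effort_hat = policy.predict(state)

# Step 2: the toy model says optimal effort = state_0 / cost; choose the
# cost that best rationalises the estimated policy (minimum distance).
def criterion(cost):
    return np.mean((effort_hat - state[:, 0] / cost) ** 2)

est = minimize_scalar(criterion, bounds=(0.1, 10.0), method="bounded")
print(f"estimated effort cost: {est.x:.2f} (true {true_cost})")
```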


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Hirokazu Yamada

Purpose: This study aims to identify, early and efficiently, methods and indicators for distinguishing technologically important patents, in order to grasp the qualitative technical level of patents, which are output indicators of research and development (R&D) results.
Design/methodology/approach: This paper reports two methods for distinguishing important patents and the indicators obtained from them. The first is Heckman's two-step estimation procedure. The second computes the centrality of each patent through network analysis of the citation relationships between publications and infers importance from the magnitude of the centrality values.
Findings: In the Heckman analysis, the number of citations within three years of publication and the applicant's right acquisition/maintenance motivation index had positive effects on patent importance. The discriminative indicators of important patents in the network analysis were degree centrality, mediation (betweenness) centrality, proximity (closeness) centrality and transit values in the aggregated subnetworks. The two analytical methods complement each other's shortcomings; to evaluate the qualitative importance of patents efficiently, it is recommended to use them together.
Research limitations/implications: The indicators of important technical patents might change depending on the technical field. Future studies can apply this research to multiple technical fields to improve robustness and to construct an algorithm that can efficiently evaluate patent quality.
Practical implications: The results can be used to quantify a company's or its competitors' patent position and to evaluate the quality of R&D activities quantitatively. They can also streamline the routine exploratory search of very large numbers of patents, for example to detect changes in the paradigm of specific technical knowledge, trace the genealogy of technical knowledge and create patent maps for new R&D. These methods greatly increase the effectiveness of technical knowledge information, which is the basis of R&D, and can help in evaluating patented assets.
Social implications: This study confirmed the development process of technical knowledge: sharing, sympathy and mutual trust regarding technical issues and technical values arise among professional engineers and researchers inside and outside the organization, and their preferences and interactions develop and expand technical knowledge. Understanding how this technical knowledge develops and evolves suggests remedies, such as expanding the discretionary power of engineers and researchers regarding corporate secrets or reviewing the balance between control and independence, for Japanese management problems in R&D activities, which are often closed and monetized.
Originality/value: This study scores the technical significance of patents by combining the two analytical methods and proposes a way to detect changes in the genealogy and paradigm of technical knowledge. As an analytical method, it is a new proposal.
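The network half of the method maps directly onto standard graph tooling. Below is a minimal sketch using networkx on a hypothetical citation edge list; degree, betweenness and closeness centrality correspond to the degree, mediation and proximity centralities named above, while the Heckman step and the transit values are omitted.

```python
# Patent citation network centralities on a toy edge list.
import networkx as nx

citations = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"),
             ("P4", "P3"), ("P4", "P2"), ("P5", "P1")]
G = nx.DiGraph(citations)                 # edge u -> v: patent u cites patent v

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)   # "mediation centrality"
closeness = nx.closeness_centrality(G)        # "proximity centrality"

# Rank patents by a simple (illustrative) sum of the three centralities.
score = {n: degree[n] + betweenness[n] + closeness[n] for n in G}
for patent, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(patent, round(s, 3))
```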


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Ruohan Wu

Purpose: This paper aims to study how firms' longitudinal, dynamic growth is affected by their bribing decisions, to address the controversies in the extant literature on the impacts of bribery.
Design/methodology/approach: The authors acquired information from the World Bank's Enterprise Survey and compiled a unique panel data set of firms from five South American countries between 2006 and 2017. They used multiple methods to estimate firms' productivity, then conducted a comprehensive inspection of firms' longitudinal development using a two-step estimation method that addresses the endogeneity issue.
Findings: Bribery can significantly shorten the waiting time for resources to become available. However, bribery also substantially and robustly slows down firms' productivity growth over time. Meanwhile, a bribing firm is very likely to bribe again in the future.
Originality/value: This paper contributes to the extant literature by pioneering the empirical study of firms' bribing decisions and their longitudinal growth. First, the authors constructed unique panel data and investigated firms' dynamic growth after bribing, filling a gap in the literature by studying the time-lagged effect of bribery on firms' growth. Second, the authors performed a comprehensive overview of South American firms' growth by examining the dynamics of their production, employment, resource delays and productivity across years. Third, the authors found that bribing exerted contingent impacts upon firms' growth, reconciling the mixed evidence in the literature.
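The abstract does not spell out the estimator, so the sketch below shows only a generic two-step control-function approach to the kind of endogeneity the authors address: a first-stage regression of the bribing decision on an instrument, whose residual then enters the second-stage growth regression. The instrument, variable names and data-generating process are all hypothetical.

```python
# Generic two-step control function for an endogenous bribery regressor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                       # hypothetical instrument
u = rng.normal(size=n)                       # unobserved firm shock
bribe = (0.8 * z + u > 0).astype(float)      # endogenous bribing decision
growth = -0.5 * bribe - 0.4 * u + rng.normal(size=n)

# Step 1: first-stage projection of the bribing decision on the instrument
step1 = sm.OLS(bribe, sm.add_constant(z)).fit()
resid = bribe - step1.fittedvalues           # control function

# Step 2: growth regression including the first-step residual
X = sm.add_constant(np.column_stack([bribe, resid]))
step2 = sm.OLS(growth, X).fit()
print(step2.params)   # bribe coefficient is closer to -0.5 than naive OLS
```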


2021, Vol. 13 (2), pp. 109-119
Author(s): Agus Widarjono, Sarastri Mumpuni Ruchba

This study estimates the demand for meat among Indonesian urban households, encompassing beef, goat, broiler chicken, and native chicken. We estimate meat demand using cross-sectional data from the 2013 Indonesian Socio-Economic Household Survey, which records food expenditure for the week before the survey. Because of some zero expenditures, a censored Almost Ideal Demand System (AIDS) is applied using the consistent two-step estimation method. The estimated own-price elasticities indicate that all meat products are price-inelastic; nonetheless, broiler chicken is the most responsive and goat the least responsive meat product to price changes. The estimated income elasticities indicate that all meat products are normal goods; however, native chicken is the most responsive and goat the least responsive to income changes. The estimated cross-price elasticities indicate that broiler chicken and beef are substitutes. The policy simulation indicates that beef is unresponsive to price and income changes, while native chicken is the most responsive meat product to price and income changes, followed by broiler chicken.
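A single share equation conveys the flavor of the consistent two-step estimator for censored demand (in the spirit of Shonkwiler and Yen, 1999, the estimator this literature typically uses): a probit for the purchase decision, then a share equation scaled by the normal CDF of the first-step index plus a PDF correction term. Variables and data are hypothetical, and the full multi-equation AIDS system is not reproduced.

```python
# One-equation sketch of the consistent two-step censored demand estimator.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
ln_price = rng.normal(size=n)
ln_income = rng.normal(size=n)
Z = sm.add_constant(np.column_stack([ln_price, ln_income]))

latent = 0.5 + 0.3 * ln_income - 0.2 * ln_price + rng.normal(size=n)
buy = (latent > 0).astype(float)                       # purchase decision
share = np.where(buy == 1,
                 0.2 - 0.05 * ln_price + 0.03 * ln_income
                 + rng.normal(scale=0.02, size=n),
                 0.0)                                   # zero if no purchase

# Step 1: probit purchase equation
probit = sm.Probit(buy, Z).fit(disp=0)
xb = Z @ probit.params

# Step 2: share equation with Phi(xb) scaling and phi(xb) correction term
X2 = np.column_stack([norm.cdf(xb)[:, None] * Z, norm.pdf(xb)])
step2 = sm.OLS(share, X2).fit()
print(step2.params)
```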


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Cemil Ciftci, Hakan Ulucan

Purpose: This study aims to analyze wage differentials across college majors in Turkey, a country that has been implementing an ongoing expansion of college education in recent years.
Design/methodology/approach: The study estimates Mincerian wage regressions using ordinary least squares, Heckman two-step estimation and quantile regression with sample selection correction, drawing on TurkStat household labor force surveys from 2014-2017.
Findings: The findings indicate some of the highest between-major heterogeneity in the literature, close to 0.50 log points. The within-major heterogeneity is highest among graduates of social-behavioral sciences, law, biology, physics, mathematics, statistics, computer science, and engineering and manufacturing, as shown by 90-10 differences of almost 700% for some of these majors. The study shows that the natural science and technical majors that are expected to be more productive and better paid fall behind in the wage distribution.
Research limitations/implications: Estimation results show that natural science majors, except for subjects allied to medicine and engineering, are paid less than law and service-sector-related majors. This indicates that the predictions of the skill-biased technical change hypothesis do not hold in the wage profiles in Turkey and that some majors supply more graduates than sectoral needs require. This casts doubt on the effectiveness of the country's ongoing higher education expansion.
Originality/value: This study contributes to the limited literature on wage differentials across college majors. It is the first study to analyze wage differentials by field of study while correcting for sample selection bias in the Turkish case.
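The Heckman step can be hand-rolled in a few lines: a probit for selection into employment, an inverse Mills ratio, and an OLS wage equation on the selected sample. The sketch below uses simulated data rather than the TurkStat surveys, with a single education regressor and a hypothetical exclusion restriction.

```python
# Hand-rolled Heckman two-step on simulated selection data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5000
educ = rng.normal(size=n)
kids = rng.normal(size=n)                 # hypothetical exclusion restriction
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
work = 0.5 * educ - 0.7 * kids + u[:, 0] > 0
wage = 1.0 + 0.8 * educ + u[:, 1]         # observed only if working

# Step 1: probit of selection on educ and the excluded variable
Zsel = sm.add_constant(np.column_stack([educ, kids]))
probit = sm.Probit(work.astype(float), Zsel).fit(disp=0)
xb = Zsel @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)       # inverse Mills ratio

# Step 2: OLS wage regression on the selected sample plus the Mills ratio
Xw = sm.add_constant(np.column_stack([educ[work], mills[work]]))
step2 = sm.OLS(wage[work], Xw).fit()
print(step2.params)                        # educ coefficient near 0.8
```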


Author(s): Alessandro Barbiero

Focusing on point-scale random variables, i.e. variables whose support consists of the first $m$ positive integers, we discuss how to build a joint distribution with pre-specified marginal distributions and Pearson's correlation $\rho$. After recalling that the desired value $\rho$ is not free to vary between $-1$ and $+1$, but generally spans a narrower interval whose bounds depend on the two marginal distributions, we devise a procedure that first identifies a class of joint distributions, based on a parametric family of copulas, having the desired margins, and then adjusts the copula parameter in order to match the desired correlation. The proposed methodology addresses a need that often arises when assessing the performance and robustness of a new statistical technique: building a large number of replicates of a given dataset that satisfy, on average, some of its features (for example, the empirical marginal distributions and the pairwise linear correlations). The proposal has several advantages: among others, it allows for dependence structures other than the Gaussian and can tune the copula parameter to an assigned level of precision for $\rho$ at very small computational cost. Based on this procedure, we also suggest a two-step estimation technique for copula-based bivariate discrete distributions, which can be used as an alternative to full and two-step maximum likelihood estimation. Numerical illustration and empirical evidence are provided through several examples and a Monte Carlo simulation study involving the CUB distribution and three different copulas; an application to real data is also discussed.
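A minimal sketch of the matching step follows, using a Gaussian copula as the parametric family (the paper allows several) and root-finding on its parameter until the implied Pearson correlation hits the target; the two margins on {1, ..., 4} and the target value are illustrative assumptions.

```python
# Match a target Pearson correlation between two discrete margins by
# adjusting the parameter of a Gaussian copula.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

p1 = np.array([0.1, 0.2, 0.3, 0.4])     # pmf of X on {1,2,3,4} (assumed)
p2 = np.array([0.4, 0.3, 0.2, 0.1])     # pmf of Y on {1,2,3,4} (assumed)
support = np.arange(1, 5)
target_rho = 0.35

def joint_pmf(theta):
    """Joint pmf induced by a Gaussian copula with parameter theta."""
    eps = 1e-10                          # clip to avoid infinite quantiles
    F1 = np.clip(np.concatenate([[0.0], np.cumsum(p1)]), eps, 1 - eps)
    F2 = np.clip(np.concatenate([[0.0], np.cumsum(p2)]), eps, 1 - eps)
    q1, q2 = norm.ppf(F1), norm.ppf(F2)
    mvn = multivariate_normal(mean=[0, 0], cov=[[1, theta], [theta, 1]])
    C = np.array([[mvn.cdf([a, b]) for b in q2] for a in q1])
    return C[1:, 1:] - C[:-1, 1:] - C[1:, :-1] + C[:-1, :-1]

def pearson(theta):
    pxy = joint_pmf(theta)
    ex, ey = support @ p1, support @ p2
    sx = np.sqrt(((support - ex) ** 2) @ p1)
    sy = np.sqrt(((support - ey) ** 2) @ p2)
    return (support @ pxy @ support - ex * ey) / (sx * sy)

theta_star = brentq(lambda t: pearson(t) - target_rho, -0.99, 0.99, xtol=1e-4)
print(f"copula parameter matching rho={target_rho}: {theta_star:.3f}")
```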


Econometrics, 2021, Vol. 9 (2), pp. 16
Author(s): Liqiong Chen, Antonio F. Galvao, Suyong Song

This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely consistency and asymptotic normality, are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. We provide Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the Engel curves, using the proposed variance estimator, is important for inference.
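A point-estimation sketch of the two-step procedure follows: a generated regressor from a first-stage OLS fit, plugged into quantile regressions at several quantiles. The paper's central contribution, the adjustment of the variance-covariance matrix for first-step estimation error, is not reproduced; the naive standard errors here ignore it. The toy model is an assumption.

```python
# Two-step quantile regression with a generated regressor (point estimates).
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=n)
x_star = 1.0 + 0.5 * z + rng.normal(scale=0.3, size=n)   # latent regressor
y = 2.0 + 1.5 * x_star + rng.normal(size=n)
w = x_star + rng.normal(scale=0.2, size=n)               # noisy proxy

# Step 1: generate the regressor from observables
step1 = sm.OLS(w, sm.add_constant(z)).fit()
x_hat = step1.fittedvalues                                # generated regressor

# Step 2: quantile regressions on the generated regressor
for tau in (0.25, 0.5, 0.75):
    fit = QuantReg(y, sm.add_constant(x_hat)).fit(q=tau)
    print(tau, fit.params)   # naive SEs here ignore the first-step error
```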


Author(s): David C. Wheeler, Salem Rustom, Matthew Carli, Todd P. Whitehead, Mary H. Ward, ...

There has been growing interest in the literature on multiple environmental risk factors for disease and an increasing emphasis on assessing multiple environmental exposures simultaneously in epidemiologic studies of cancer. One method used to analyze exposure to multiple chemicals is weighted quantile sum (WQS) regression. While WQS regression has been demonstrated to have good sensitivity and specificity in identifying important exposures, it has limitations: a two-step model fitting process that decreases power and model stability, and a requirement that all exposures in the weighted index have associations in the same direction with the outcome, which is unrealistic when chemicals in different classes differ in the direction and magnitude of their association with a health outcome. Grouped WQS (GWQS) was proposed to allow multiple groups of chemicals in the model, with a different magnitude and direction of association possible for each group. However, GWQS shares the WQS limitations of a two-step estimation process and the splitting of data into training and validation sets. In this paper, we propose a Bayesian group index model that avoids the estimation limitations of GWQS while retaining multiple exposure indices in the model. To evaluate the performance of the Bayesian group index model, we conducted a simulation study with several different exposure scenarios. We also applied the Bayesian group index method to analyze childhood leukemia risk in the California Childhood Leukemia Study (CCLS). The results showed that the Bayesian group index model had slightly better power for exposure effects and better specificity and sensitivity in identifying important chemical exposure components than the existing frequentist method, particularly for small sample sizes. In the application to the CCLS, we found a significant negative association for insecticides, with the most important chemical being carbaryl. In addition, for children who were born and raised in the home where dust samples were taken, there was a significant positive association for herbicides, with dacthal the most important exposure. In conclusion, the Bayesian group index model appears able to make a substantial contribution to the field of environmental epidemiology.
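The single-index WQS building block that the Bayesian group index model generalises can be sketched as one constrained least-squares fit: exposures are scored into quartiles, and nonnegative weights summing to one are estimated jointly with the index effect. This is a simplified frequentist stand-in, not the paper's Bayesian grouped model, and all data below are simulated.

```python
# Simplified single-index WQS: quartile-scored exposures, constrained weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, k = 500, 4
X = rng.normal(size=(n, k))
# Quartile scores 0-3 per exposure column
q = np.floor(4 * (X.argsort(axis=0).argsort(axis=0) + 0.5) / n)
true_w = np.array([0.6, 0.3, 0.1, 0.0])
y = 1.0 + 0.8 * (q @ true_w) + rng.normal(size=n)

def sse(params):
    b0, b1 = params[:2]
    w = params[2:]
    resid = y - b0 - b1 * (q @ w)
    return np.sum(resid ** 2)

cons = ({"type": "eq", "fun": lambda p: np.sum(p[2:]) - 1.0},)
bounds = [(None, None), (None, None)] + [(0, 1)] * k
res = minimize(sse, x0=np.r_[0.0, 0.5, np.full(k, 1 / k)],
               bounds=bounds, constraints=cons)
print("beta:", res.x[1].round(2), "weights:", res.x[2:].round(2))
```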

