Effects of Accounting-Method Choices on Subjective Performance-Measure Weighting Decisions: Experimental Evidence on Precision and Error Covariance

2005 ◽  
Vol 80 (4) ◽  
pp. 1163-1192 ◽  
Author(s):  
Ranjani Krishnan ◽  
Joan L. Luft ◽  
Michael D. Shields

Performance-measure weights for incentive compensation are often determined subjectively. Determining these weights is a cognitively difficult task, and archival research shows that observed performance-measure weights are only partially consistent with the predictions of agency theory. Ittner et al. (2003) have concluded that psychology theory can help to explain such inconsistencies. In an experimental setting based on Feltham and Xie (1994), we use psychology theories of reasoning to predict distinctive patterns of similarity and difference between optimal and actual subjective performance-measure weights. The following predictions are supported. First, in contrast to a number of prior studies, most individuals' decisions are significantly influenced by the performance measures' error variance (precision) and error covariance. Second, directional errors in the use of these measurement attributes are relatively frequent, resulting in a mean underreaction to an accounting change that alters performance measurement error. Third, individuals seem insufficiently aware that a change in the accounting for one measure has spillover effects on the optimal weighting of the other measure in a two-measure incentive system. In consequence, they make performance-measure weighting decisions that are likely to result in misallocations of agent effort.
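
To illustrate the weighting logic at issue, the following minimal sketch computes relative weights for two noisy performance measures as the inverse error covariance matrix times the vector of sensitivities, the standard result for a simple linear agency setting; the sensitivities, variances, and covariance used here are hypothetical and are not the paper's experimental parameters.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): with two performance
# measures y_i = mu_i * a + e_i, a simple linear agency result is that the
# relative incentive weights are proportional to inv(Sigma) @ mu, where Sigma
# is the measurement-error covariance matrix. All numbers are hypothetical.
mu = np.array([1.0, 1.0])           # assumed sensitivities of the two measures
sigma = np.array([[4.0, 0.5],       # measure 1: high error variance (low precision)
                  [0.5, 1.0]])      # measure 2: low error variance; 0.5 is the error covariance

w = np.linalg.solve(sigma, mu)      # proportional to the optimal weights
print(w / w.sum())                  # the noisier measure receives less relative weight

# An accounting change that makes measure 1 more precise shifts weight toward
# measure 1 AND changes the optimal weight on measure 2: the spillover effect
# that subjects in the experiment tend to under-appreciate.
sigma_changed = sigma.copy()
sigma_changed[0, 0] = 2.0
w_changed = np.linalg.solve(sigma_changed, mu)
print(w_changed / w_changed.sum())
```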

Stroke ◽  
2012 ◽  
Vol 43 (suppl_1) ◽  
Author(s):  
Crismely A Perdomo ◽  
Vepuka E Kauari ◽  
Elizabeth Suarez ◽  
Olajide Williams ◽  
Joshua Stillman ◽  
...  

Background and Purpose: The literature demonstrates that utilizing evidence-based, standardized stroke care can improve patient outcomes; however, electronic medical record (EMR) systems may also affect outcomes by ensuring utilization of and compliance with established stroke performance measures, facilitating and improving documentation, and standardizing the approach to care. In 2008, documentation in patients' medical records was done with a combination of paper and a template-free EMR. The EMR was originally used for order entry; we transitioned to full electronic documentation in 2009. At that time we implemented our stroke templates and performance measures based on regulatory standards. We hypothesized that the stroke template implementation would help us achieve performance measure criteria above the state benchmarks set by the New York State Department of Health (NYS DOH). Methods: Implementation was phased in over 18 months, initially using a template that included only a neurological assessment and free-text fields for stroke measures. By July 2010, existing templates were modified and additional stroke templates were implemented to meet new regulatory requirements and meaningful use criteria. A retrospective data review was conducted to compare performance between 2008 (one year prior to EMR/template implementation) and 2010. In Quarter 1 of 2011, the EMR was also implemented in the Emergency Department (ED). Data were reviewed for compliance with stroke measures. Results: Documentation compliance improved substantially between 2008 and Quarter 1 of 2011; compliance with these measures has been maintained at ≥85% since November 2010 and at ≥90% since Quarter 1 of 2011. Conclusions: The EMR implementation of stroke templates and performance measures can produce substantial improvement in performance measure compliance. Future steps will include automated documentation alerts to retrieve information and real-time discovery of missing documentation for concurrent quality review and improvement.


2005 ◽  
Vol 27 (3) ◽  
pp. 181-198 ◽  
Author(s):  
Ulrich Scheipers ◽  
Christian Perrey ◽  
Stefan Siebers ◽  
Christian Hansen ◽  
Helmut Ermert

The application of the receiver operating characteristic (ROC) curve to computer-aided diagnostic systems is reviewed. A statistical framework is presented, and different methods of evaluating the classification performance of computer-aided diagnostic systems, in particular systems for ultrasonic tissue characterization, are derived. Most classifiers used today depend on a separation threshold, which in many cases can be chosen freely. The separation threshold divides the range of output values of the classification system into the different target groups, thus performing the actual classification. In the first part of this paper, threshold-specific performance measures, e.g., sensitivity and specificity, are presented. In the second part, a threshold-independent performance measure, the area under the ROC curve, is reviewed. Only the use of threshold-independent performance measures provides classification results that are representative of computer-aided diagnostic systems overall. The review was motivated by the lack of a complete and definitive discussion of the subject in available textbooks, references, and publications. Most manuscripts published so far address performance evaluation using ROC analysis in a manner too general to be practical for everyday use in the development of computer-aided diagnostic systems. Nowadays, the user of computer-aided diagnostic systems typically handles large amounts of numerical data that are not always normally distributed, so many assumptions made in more theoretical works on ROC analysis are no longer valid for real-life data. The paper aims to close the gap between theoretical work and real-life data. The review provides the interested scientist with the information needed to conduct ROC analysis and to integrate algorithms performing ROC analysis into classification systems while understanding the basic principles of classification.
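
As a concrete illustration of the two kinds of measures discussed here, the sketch below computes threshold-dependent sensitivity and specificity and a nonparametric (rank-based) estimate of the area under the ROC curve, which avoids any normality assumption; the classifier outputs are invented example data, not taken from the paper.

```python
import numpy as np

# Minimal sketch (assumed example data): threshold-dependent measures
# (sensitivity, specificity) versus the threshold-independent area under
# the ROC curve, estimated nonparametrically so no normality is assumed.
scores_pos = np.array([0.9, 0.8, 0.75, 0.6, 0.55])   # classifier outputs, diseased cases
scores_neg = np.array([0.7, 0.5, 0.4, 0.35, 0.2])    # classifier outputs, healthy cases

def sens_spec(threshold):
    sensitivity = np.mean(scores_pos >= threshold)   # true positive fraction
    specificity = np.mean(scores_neg < threshold)    # true negative fraction
    return sensitivity, specificity

print(sens_spec(0.65))   # depends entirely on the chosen separation threshold

# Nonparametric AUC: probability that a random positive scores higher than a
# random negative (Mann-Whitney statistic), counting ties as 1/2.
diff = scores_pos[:, None] - scores_neg[None, :]
auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
print(auc)               # threshold-independent summary of separability
```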


2020 ◽  
Author(s):  
Moira Pryhoda ◽  
Rachel Wathen ◽  
Jay Dicharry ◽  
Kevin Shelburne ◽  
Bradley Davidson

The objective of this research was to determine whether three alternative shoe upper closures improve biomechanical performance measures relative to a standard lace closure in court-based movements. NCAA Division 1 and club-level male athletes recruited from lacrosse, soccer, tennis, and rugby performed four court-based movements: Lateral Skater Jump repeats (LSJ), Countermovement Jump repeats (CMJ), the Triangle Drop Step drill (TDS), and the Anterior-Posterior drill (AP). Each athlete performed the movements in four shoe upper closures: Standard Closure, Lace Replacement, Y Wrap, and Tri Strap. Ground contact time, peak eccentric rate of force development (RFD), peak concentric ground reaction force (GRF), peak concentric center-of-mass (COM) power, eccentric work, concentric work, and movement completion time were measured. Tri Strap showed improvements in four of seven biomechanical variables during CMJ and LSJ and in one variable during TDS. Lace Replacement delivered improvements in one performance measure during CMJ, LSJ, and AP, and in two variables during TDS. Y Wrap improved performance in three performance measures during LSJ but impaired performance in two measures during CMJ and three measures during AP. Tri Strap provided the most consistent performance improvements across all movements. The study kept the mechanical properties of the shoe lower consistent across designs to examine whether an alternative shoe upper closure alone could enhance performance. Our results indicate that the increased proprioception and/or altered mechanical properties of the alternative closures, especially Tri Strap, improve athlete performance, suggesting that the design of the shoe upper is an essential consideration in shoe design.
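
The following sketch shows one common way metrics like these are derived from force-plate data; the synthetic force trace, sampling rate, athlete mass, and phase definitions are assumptions for illustration and do not reproduce the authors' processing pipeline.

```python
import numpy as np

# Hedged sketch: deriving peak eccentric RFD and concentric work from a
# synthetic vertical ground reaction force trace. The sampling rate, mass,
# force profile, and phase definitions are assumptions for illustration.
fs = 1000.0                                    # sampling rate (Hz), assumed
t = np.arange(0.0, 1.0, 1.0 / fs)
force = 800.0 - 600.0 * np.sin(2 * np.pi * t)  # synthetic vertical GRF (N)
mass = 80.0                                    # athlete mass (kg), assumed

accel = (force - mass * 9.81) / mass           # net vertical COM acceleration
velocity = np.cumsum(accel) / fs               # COM velocity by integration

eccentric = velocity < 0                       # COM moving downward
concentric = velocity > 0                      # COM moving upward

rfd = np.gradient(force, 1.0 / fs)             # rate of force development (N/s)
peak_ecc_rfd = rfd[eccentric].max()            # peak eccentric RFD

power = force * velocity                       # instantaneous COM power (W)
concentric_work = np.sum(power[concentric]) / fs   # concentric work (J)

print(peak_ecc_rfd, concentric_work)
```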


Author(s):  
Charles R Siegel ◽  
Anjan Chakrabarti ◽  
Lewis Siegel ◽  
Forrest Winslow ◽  
Thomas Hall

Introduction: Out-of-hospital cardiac arrest (OHCA) remains a highly morbid public health problem. Despite improving practices and clear guidelines, mortality from this condition remains high at 90%, with survivors often suffering poor neurologic outcomes. To determine the feasibility of quality improvement collaboratives for narrowing gaps between evidence-based practice and patient care for OHCA, we conducted a pilot study of the AHA Resuscitation Collaborative. Methods: Eight emergency medical service agencies participated in the quality improvement collaborative pilot project. We identified several OHCA performance measures to assess the quality of care, guide collaborative activities, and monitor change in performance over time. Over the course of four learning sessions, participants were trained in quality improvement and performance measurement, analyzed performance measure results, and shared successes and challenges. Results: The five remaining agencies underwent the process outlined in Figure 1. Adherence to performance measures, including compression rate compliance (Figure 2), improved over the course of the collaborative. Compression rate compliance in Figure 2 reflects the process improvement efforts of the Chesapeake Fire Department, which achieved its goal of keeping chest compression rates within the optimal range of 100 to 120 compressions per minute during resuscitations. Conclusion: As demonstrated in Virginia, the collaborative approach was an effective framework for improving OHCA care. Improvement in performance measures, the evident commitment of dedicated peers and colleagues, consistent collaboration, and the effective diffusion of best practices all support the continued use of this model.
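
A minimal sketch of how compression-rate compliance against the 100 to 120 compressions-per-minute target could be computed from compression timestamps follows; the timestamps and the per-compression definition of compliance are assumptions, not the collaborative's measure specification.

```python
import numpy as np

# Hedged sketch (assumed data format): estimate compression-rate compliance as
# the fraction of compressions delivered within the 100-120 per-minute target.
# Timestamps are hypothetical, in seconds.
compression_times = np.array([0.0, 0.55, 1.1, 1.62, 2.2, 2.75, 3.2, 3.9, 4.4, 5.0])

intervals = np.diff(compression_times)        # seconds between compressions
rates = 60.0 / intervals                      # instantaneous compressions per minute

in_range = (rates >= 100) & (rates <= 120)
compliance = in_range.mean()
print(f"Compression rate compliance: {compliance:.0%}")
```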


2020 ◽  
Vol 95 (6) ◽  
pp. 181-212
Author(s):  
Jonathan C. Glover ◽  
Hao Xue

ABSTRACT Teamwork and team incentives are increasingly prevalent in modern organizations. Performance measures used to evaluate individuals' contributions to teamwork are often non-verifiable. We study a principal-multi-agent model of relational (self-enforcing) contracts in which the optimal contract resembles a bonus pool. It specifies a minimum joint bonus floor the principal is required to pay out to the agents, and gives the principal discretion to use non-verifiable performance measures both to increase the size of the pool and to allocate the pool to the agents. The joint bonus floor is useful because of its role in motivating the agents to mutually monitor each other, by creating a strategic complementarity in their payoffs. In an extension, we introduce a verifiable team performance measure that is a noisy version of the individual non-verifiable measures, and show that the verifiable measure is either ignored or used to create a conditional bonus floor.
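
A schematic restatement of the contract structure described above, in notation that is assumed here rather than taken from the paper, may help fix ideas.

```latex
% Assumed notation (not the paper's): m = (m_1, m_2) are the non-verifiable
% measures, F is the verifiable minimum joint bonus floor, d(m) >= 0 is the
% principal's discretionary top-up, and b_i(m) is agent i's share of the pool.
\[
  B(m) \;=\; F + d(m), \qquad d(m) \ge 0, \qquad
  \sum_{i} b_i(m) \;=\; B(m), \qquad b_i(m) \ge 0 .
\]
```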


Author(s):  
Mara Madaleno ◽  
Elisabete S. Vieira ◽  
João P. C. Teodósio

Using a sample of 47 Portuguese and Spanish firms for the period 2010 to 2017, the authors study the relationship between female presence on the board and firms' accounting-based (ROA and ROE) and market-based (MTB and Tobin's Q) performance. They find that the presence of women on the board of directors is positively related to firm performance, as are the gender of the CFO and the proportion of women among the listed key professionals, when market measures of performance are considered; the relationships are less consistent for accounting performance measures. Results were sensitive to the performance measure used. The results reinforce the European Commission's policy option of established gender quotas, while revealing that in the Iberian countries these quotas are not yet being effectively implemented, even though women on boards do appear to exert a positive influence on market performance. The findings also suggest that financial markets may react positively when the CFO of the company is a woman rather than a man, despite the sample's limitations in terms of both gender balance and number of firms.
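
For readers unfamiliar with this type of design, the sketch below shows the general form of specification such studies estimate, a pooled regression of a market-based performance measure on board gender variables with basic controls; the variable names and the tiny dataset are invented for illustration and are not the authors' model or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch (not the authors' exact specification): a pooled OLS of a
# market-based performance measure on board gender variables with controls.
# Column names and the illustrative dataset are assumptions.
df = pd.DataFrame({
    "tobins_q":       [1.2, 0.9, 1.5, 1.1, 0.8, 1.4, 1.0, 1.3],
    "women_on_board": [0.10, 0.00, 0.30, 0.20, 0.05, 0.25, 0.15, 0.35],
    "female_cfo":     [0, 0, 1, 0, 0, 1, 0, 1],
    "firm_size":      [20.1, 21.3, 19.8, 20.7, 22.0, 19.5, 20.9, 19.9],  # ln(assets)
    "leverage":       [0.4, 0.6, 0.3, 0.5, 0.7, 0.35, 0.45, 0.3],
})

model = smf.ols("tobins_q ~ women_on_board + female_cfo + firm_size + leverage",
                data=df).fit()
print(model.summary())
```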


Author(s):  
Thomas Yew Sing Lee

The author presents a performance analysis of a single-buffer multiple-queue system. Four different types of service discipline (non-preemptive, preemptive repeat different, state-dependent random polling, and globally gated) are analyzed. The model includes a correlated input process and three different types of non-productive time (switchover, vacation, and idle time). Special cases of the model include a server with mixed multiple and single vacations, a stopping server with delayed vacation, and a stopping server with alternating vacation and idle time. For each of the four service disciplines, key performance measures such as average customer waiting time, loss probability, and throughput are computed. The results permit a detailed discussion of how these performance measures depend on the customer arrival rate, the customer service time, the switchover time, the vacation time, and the idle time. Moreover, extensive numerical results are presented and the four service disciplines are compared with respect to the performance measures. Previous studies of single-buffer multiple-queue systems tend to provide separate analyses for the two cases of zero and nonzero switchover time; the author provides a unified analysis for both. The results generalize and improve a number of known results on single-buffer multiple-queue systems. Furthermore, this method does not require differentiation, which is needed if one uses the probability generating function approach. Lastly, the author's approach works for all single-buffer multiple-queue systems in which the next queue to be served is determined solely on the basis of the occupancy states at the end of the cycle time.
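
To make the performance measures concrete, the following Monte Carlo sketch simulates a two-queue, single-buffer cyclic polling system and estimates loss probability, throughput, and mean waiting time; the structure and parameters are illustrative assumptions and do not reproduce the author's analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: a small simulation of a two-queue, single-buffer polling
# system with cyclic, non-preemptive service. Parameters are illustrative only.
T = 100_000.0                 # simulation horizon
lam = [0.3, 0.5]              # Poisson arrival rates per queue
mean_service = 0.8            # exponential service time mean
switchover = 0.2              # deterministic switchover time between queues

# Pre-generate Poisson arrival times for each queue.
arrivals = [np.cumsum(rng.exponential(1 / l, int(2 * l * T))) for l in lam]
arrivals = [a[a < T] for a in arrivals]

buffer = [None, None]         # arrival time of the waiting customer, or None
next_arr = [0, 0]             # index of the next unprocessed arrival per queue
served, lost, wait_sum = 0, 0, 0.0

t, q = 0.0, 0                 # current time and queue being polled
while t < T:
    # Admit (or lose) all arrivals at every queue up to the current time.
    for i in range(2):
        while next_arr[i] < len(arrivals[i]) and arrivals[i][next_arr[i]] <= t:
            if buffer[i] is None:
                buffer[i] = arrivals[i][next_arr[i]]
            else:
                lost += 1                     # single buffer already occupied
            next_arr[i] += 1
    # Serve the polled queue if its buffer is occupied.
    if buffer[q] is not None:
        wait_sum += t - buffer[q]             # waiting time until service starts
        t += rng.exponential(mean_service)
        buffer[q] = None
        served += 1
    t += switchover                           # move to the next queue
    q = 1 - q

total_arrived = sum(len(a) for a in arrivals)
print("loss probability estimate:", lost / total_arrived)
print("throughput estimate:", served / T)
print("mean waiting time estimate:", wait_sum / served)
```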


Author(s):  
A. M. Tahsin Emtenan ◽  
Christopher M. Day

In recent years, automated traffic signal performance measures (ATSPMs) have emerged as a means of developing situational awareness of traffic conditions at intersections and assessing the quality of signal operations. As a growing number of agencies adopt the technology, there is a need to understand how detector configurations can influence the outcomes of an ATSPM analysis. Current practices with regard to detector configuration vary considerably from one agency to another: at one extreme, agencies may use a single detector input channel per phase without considering where the detectors are located, whereas at the other extreme, some agencies may use all available channels to observe each individual lane at multiple positions. There are also variations in the design of detection zones (lengths and positions). This study takes on the problem in two parts. The first part examines the impact of stop bar detection zone length and lane- or approach-based detector assignment on the ability of performance measures to identify accurately whether split failures occur. The second part examines the impact of setback detector distance on the use of a "percentage on green" metric that serves as a proxy measurement of the number of stops. The paper presents recommendations for performance measure calibrations and detector configurations that follow from these outcomes.
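
As an illustration of the metric discussed in the second part, the sketch below computes a percent-on-green value from detector actuation timestamps and green intervals; the data layout is assumed, and the travel-time projection from a setback detector to the stop bar is omitted for brevity.

```python
import numpy as np

# Hedged sketch (assumed data layout): compute a "percent on green" metric from
# detector actuation timestamps and the green intervals of the phase.
# Timestamps are hypothetical, in seconds from an arbitrary reference.
actuations = np.array([3.2, 11.5, 18.0, 22.4, 31.7, 40.1, 44.9, 52.3])
green_intervals = [(0.0, 15.0), (30.0, 47.0)]   # (start, end) of each green

on_green = sum(
    any(start <= t < end for start, end in green_intervals) for t in actuations
)
percent_on_green = 100.0 * on_green / len(actuations)
print(f"Percent on green: {percent_on_green:.1f}%")
# For a setback detector, vehicles counted during green are taken as arriving
# at the stop bar without stopping, so the metric serves as a proxy for stops;
# a real implementation would shift actuations by the setback travel time.
```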


2016 ◽  
Vol 9 (8) ◽  
pp. 2893-2908 ◽  
Author(s):  
Sergey Skachko ◽  
Richard Ménard ◽  
Quentin Errera ◽  
Yves Christophe ◽  
Simon Chabrillat

Abstract. We compare two optimized chemical data assimilation systems, one based on the ensemble Kalman filter (EnKF) and the other on four-dimensional variational (4D-Var) data assimilation, using a comprehensive stratospheric chemistry transport model (CTM). This work is an extension of the Belgian Assimilation System for Chemical ObsErvations (BASCOE), initially designed to work with 4D-Var data assimilation. A strict comparison of both methods for chemical tracer transport was done in a previous study and indicated that the two methods provide essentially similar results. In the present work, we assimilate observations of ozone, HCl, HNO3, H2O and N2O from EOS Aura MLS into the BASCOE CTM with a full description of stratospheric chemistry. Two new issues related to the use of the full chemistry model with the EnKF are taken into account. The first issue is the large number of error variance parameters that need to be optimized. We estimate an observation error variance parameter as a function of pressure level for each observed species using the Desroziers method. For comparison purposes, we apply the same estimation procedure in the 4D-Var data assimilation, where scale factors of both the background and observation error covariance matrices are estimated using the Desroziers method. In the EnKF, however, the background error covariance is modelled using the full chemistry model and a model error term that is tuned with an adjustable parameter. We found it adequate to apply, for all observed species, the same value of this parameter as obtained with the chemical tracer formulation, an indication that the main source of model error in a chemical transport model is the transport itself. The second issue in the EnKF with comprehensive atmospheric chemistry models is the noise in the cross-covariance between species that occurs when species are only weakly chemically related at the same location. These errors need to be filtered out in addition to applying a distance-based localization. The performance of the two data assimilation methods was assessed through an 8-month assimilation of limb-sounding observations from EOS Aura MLS. This paper discusses the differences in results and their relation to stratospheric chemical processes. Generally speaking, the EnKF and 4D-Var provide results of comparable quality but differ substantially in the presence of model error or observation biases. If the erroneous chemical modelling is associated with moderately fast chemical processes whose lifetimes are nevertheless longer than the model time step, then the EnKF performs better, while 4D-Var develops spurious increments in the chemically related species. If, however, the observation biases are significant, then 4D-Var is more robust and able to reject erroneous observations, while the EnKF is not.
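
A minimal sketch of the Desroziers diagnostic mentioned above, applied to a scalar toy assimilation problem, is given below; the variances and the simple Kalman update are assumptions used only to show that the product of background innovations and analysis residuals recovers the observation error variance.

```python
import numpy as np

# Hedged sketch of the Desroziers diagnostic used to tune observation-error
# variances: E[(y - H(x_a)) * (y - H(x_b))] ~ sigma_o^2 for a consistent
# assimilation system. The scalar setup and numbers below are hypothetical.
rng = np.random.default_rng(1)
n = 10_000
truth = rng.normal(0.0, 1.0, n)

sigma_o_true, sigma_b = 0.5, 0.8
y = truth + rng.normal(0.0, sigma_o_true, n)        # observations
x_b = truth + rng.normal(0.0, sigma_b, n)           # background (in observation space)

# Optimal scalar analysis for this toy setup (Kalman gain with known variances).
k = sigma_b**2 / (sigma_b**2 + sigma_o_true**2)
x_a = x_b + k * (y - x_b)

d_ob = y - x_b                                      # background innovation
d_oa = y - x_a                                      # analysis residual
sigma_o_est = np.sqrt(np.mean(d_oa * d_ob))
print(sigma_o_est)   # close to 0.5, the assumed observation error std dev
```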


2018 ◽  
Vol 22 (1) ◽  
pp. 31-41 ◽  
Author(s):  
Nopadol Rompho

Purpose: The purpose of this study is to investigate the use of performance measures in startup firms, including the perceived importance and performance of those measures. Design/methodology/approach: The survey method is used in this study. Data are collected from founders/chief executive officers/managers of 110 startups in Thailand. Correlation analysis and analysis of variance are used as the analysis techniques. Findings: The results show that there is a positive relationship between the perceived importance and the performance of each metric. However, no significant differences are found in the importance and performance of each metric among the various stages of startups. Research limitations/implications: Because there are far fewer startups than large corporations, the sample size of this study is relatively small, which is a limitation for some statistical tests. Practical implications: Startups should measure and monitor the correct metrics at a particular stage, instead of trying to perform well in all areas, which can lead them to lose focus and possibly even fail. The results obtained from this study will aid startups in properly monitoring and managing their performance. Originality/value: Unlike in large corporations, the performance measures used by startups vary and depend on a startup's stage and type. Because there are far fewer startups than large corporations, there are only a limited number of studies in this area. This research is among the first to investigate the use of performance measures in this new type of organization.
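
For illustration, the sketch below runs the two analyses the study reports, a correlation between perceived importance and performance and a one-way ANOVA across startup stages, on synthetic Likert-scale data; the data and stage labels are invented, not the survey responses.

```python
import numpy as np
from scipy import stats

# Hedged sketch (synthetic data, not the survey's actual responses): the
# correlation between perceived importance and performance of a metric, and a
# one-way ANOVA of ratings across startup stages.
rng = np.random.default_rng(42)
importance = rng.integers(1, 6, 110)                        # 1-5 Likert ratings
performance = np.clip(importance + rng.integers(-1, 2, 110), 1, 5)

r, p = stats.pearsonr(importance, performance)
print(f"correlation r={r:.2f}, p={p:.3f}")

# Compare mean importance across three hypothetical startup stages.
stage = rng.integers(0, 3, 110)
groups = [importance[stage == s] for s in range(3)]
f, p_anova = stats.f_oneway(*groups)
print(f"ANOVA F={f:.2f}, p={p_anova:.3f}")
```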

