Reactions to Different Types of Forced Distribution Performance Evaluation Systems

2009 ◽  
Vol 24 (1) ◽  
pp. 77-91 ◽  
Author(s):  
Brian D. Blume ◽  
Timothy T. Baldwin ◽  
Robert S. Rubin
2010 ◽  
Vol 16 (1) ◽  
pp. 168-179 ◽  
Author(s):  
Susan M Stewart ◽  
Melissa L Gruys ◽  
Maria Storm

Abstract
Some organizations, such as General Electric, currently use or have used forced distribution performance evaluation systems in order to rate employees' performance. This paper addresses the advantages and disadvantages as well as the legal implications of using such a system. It also discusses how an organization might assess whether a forced distribution system would be a good choice and key considerations when implementing such a system. The main concern is whether the organizational culture is compatible with a forced distribution system. When a company implements such a system, some important issues to consider include providing adequate training and ongoing support to managers who will be carrying out the system and also conducting adverse impact analyses to reduce legal risk.
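The forced-distribution scheme the abstract refers to (General Electric's approach is often described as a 20-70-10 split) amounts to bucketing employees by rank. A minimal sketch, assuming an illustrative 20-70-10 split and hypothetical function and rating names not taken from the paper:

```python
def forced_distribution(scores, top=0.20, bottom=0.10):
    """Assign employees to rating buckets by rank (illustrative sketch).

    scores: dict mapping employee name -> numeric performance score.
    The top `top` fraction of ranked employees get "A", the bottom
    `bottom` fraction get "C", and everyone in between gets "B".
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    n_top = round(n * top)
    n_bottom = round(n * bottom)
    ratings = {}
    for i, name in enumerate(ranked):
        if i < n_top:
            ratings[name] = "A"
        elif i >= n - n_bottom:
            ratings[name] = "C"
        else:
            ratings[name] = "B"
    return ratings
```

Note that the bucket boundaries depend only on rank, not on absolute scores, which is precisely the property that makes such systems controversial when a whole team performs well.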



Author(s):  
Stephanie Payne ◽  
Margaret Horner ◽  
Wendy Boswell ◽  
Amber Wolf ◽  
Kelleen Stine-Cheyne

2014 ◽  
Vol 13 (9) ◽  
pp. 4859-4867
Author(s):  
Khaled Saleh Maabreh

Distributed database management systems manage huge volumes of data and serve a large, steadily growing number of users through different types of queries. Efficient methods for accessing these data volumes are therefore required to provide an acceptable level of system performance. Data in these systems vary in type, from text to images, audio, and video, and must be made available through an optimized level of replication. Distributed database systems have many parameters, such as the degree of data distribution, the operation mode, the number of sites, and the replication factor. These parameters play a major role in any performance evaluation study. This paper investigates the main parameters that may affect system performance, which may help in configuring a distributed database system to enhance its overall performance.
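The parameters the abstract enumerates (number of sites, replication, distribution degree, operation mode) are the kind of factors a performance-evaluation study sweeps over. A minimal sketch of such an experimental grid, with every name and value range a hypothetical illustration rather than anything taken from the paper:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class DDBConfig:
    """One experimental configuration for a distributed-DB performance study."""
    num_sites: int              # number of participating sites
    replication_factor: int     # copies kept of each data item
    distribution_degree: float  # fraction of data spread beyond its home site
    operation_mode: str         # e.g. "read-heavy" or "write-heavy"

def config_grid():
    """Enumerate a full-factorial grid of configurations to benchmark."""
    grid = []
    for sites, repl, dist, mode in product(
        (4, 8, 16), (1, 2, 3), (0.25, 0.5, 1.0), ("read-heavy", "write-heavy")
    ):
        if repl <= sites:  # cannot keep more replicas than there are sites
            grid.append(DDBConfig(sites, repl, dist, mode))
    return grid
```

Each configuration in the grid would then be benchmarked under the same query workload, so that the effect of each parameter on system performance can be isolated.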


2013 ◽  
Vol 33 (5) ◽  
pp. 369-376 ◽  
Author(s):  
Adele Caldarelli ◽  
Clelia Fiondella ◽  
Marco Maffei ◽  
Rosanna Spanò ◽  
Massimo Aria

Evaluation ◽  
2017 ◽  
Vol 23 (3) ◽  
pp. 294-311 ◽  
Author(s):  
Boru Douthwaite ◽  
John Mayne ◽  
Cynthia McDougall ◽  
Rodrigo Paz-Ybarnegaray

There is a growing recognition that programs that seek to change people’s lives are intervening in complex systems, which puts a particular set of requirements on program monitoring and evaluation. Developing complexity-aware program monitoring and evaluation systems within existing organizations is difficult because they challenge traditional orthodoxy. Little has been written about the practical experience of doing so. This article describes the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems. We outline the design and methods used, including trend lines, panel data, after-action reviews, building and testing theories of change, outcome evidencing, and realist synthesis. We identify and describe a set of design principles for developing complexity-aware program monitoring and evaluation. Finally, we discuss important lessons and recommendations for other programs facing similar challenges. These include developing evaluation designs that meet both learning and accountability requirements; making evaluation a part of a program’s overall approach to achieving impact; and ensuring evaluation cumulatively builds useful theory as to how different types of program trigger change in different contexts.


2015 ◽  
Vol 23 (1) ◽  
pp. 32-34 ◽  
Author(s):  
S.S. Sreejith

Purpose – Explains why performance evaluation designed for manufacturers is inappropriate for information technology organizations.
Design/methodology/approach – Underlines the distinctiveness of the information technology workforce and provides the basis for an effective performance-evaluation system designed for these workers.
Findings – Highlights the roles of consensus and transparency in setting and modifying evaluation criteria.
Practical implications – Urges the need for a fair and open rewards and recognition system to run in parallel with reformed performance evaluation.
Social implications – Provides a way of updating performance evaluation systems to take account of the move from manufacturing to information technology-based jobs in many developed and developing societies.
Originality/value – Reveals how best to recognize, reward and assess the performance of information technology workers.

