Performance Evaluation Criteria for Preparation and Measurement of Macro- and Microfabricated Ion-Selective Electrodes

2008 ◽  
Vol 80 (1) ◽  
pp. 85-104 ◽  
Author(s):  
Ernö Lindner ◽  
Yoshio Umezawa

Over the last 30 years, IUPAC published several documents with the goal of achieving standardized nomenclature and methodology for potentiometric ion-selective electrodes (ISEs). The ISE vocabulary was formulated, measurement protocols were suggested, and the selectivity coefficients were compiled. However, in light of new discoveries and experimental possibilities in the field of ISEs, some of the IUPAC recommendations have become outdated. The goal of this technical report is to direct attention to ISE practices and the striking need for updated or refined IUPAC recommendations which are consistent with the state of the art of using macro- and microfabricated planar microelectrodes. Some of these ISE practices have never been addressed by IUPAC but have gained importance with the technological and theoretical developments of recent years. In spite of its recognized importance, a generally acceptable revision of the current IUPAC recommendations is far beyond the scope of this work.


2005 ◽  
Vol 12 (2) ◽  
pp. 121-158 ◽  
Author(s):  
M. Yilmaz ◽  
O. Comakli ◽  
S. Yapici ◽  
O. N. Sara

2007 ◽  
Vol 21 (18n19) ◽  
pp. 3500-3502
Author(s):  
DENG-FANG RUAN ◽  
YOU-RONG LI ◽  
SHUANG-YING WU ◽  
BO LAN

The exergoeconomic analysis is carried out on enhanced heat transfer surfaces at low temperature. A new criterion for evaluating the performance of enhanced heat transfer surfaces at low temperature is proposed. It can be applied to various augmentation techniques and generalizes the performance evaluation criteria obtained by means of the first and second law analysis. The validity of the new performance evaluation criterion is illustrated by the analysis of heat transfer characteristics at low temperature and assessment of the heat transfer cost of two types of enhanced heat transfer surfaces.
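As context for the first-law criteria the new exergoeconomic criterion generalizes, the classical figure of merit compares the heat-transfer gain of an enhanced surface against its friction penalty at equal pumping power, eta = (Nu/Nu0) / (f/f0)^(1/3). The sketch below is illustrative only; the ratios are invented numbers, not data from the paper, and this is the conventional criterion, not the new low-temperature one proposed here.

```python
def pec_first_law(nu_ratio: float, f_ratio: float) -> float:
    """Classical first-law performance evaluation criterion at
    identical pumping power: (Nu/Nu0) / (f/f0)**(1/3).

    nu_ratio -- Nusselt number of enhanced surface over smooth surface
    f_ratio  -- friction factor of enhanced surface over smooth surface
    """
    return nu_ratio / f_ratio ** (1.0 / 3.0)

# Hypothetical enhanced surface: 80% higher Nu, 150% higher friction.
eta = pec_first_law(nu_ratio=1.8, f_ratio=2.5)
# eta > 1 means the enhancement pays off despite the pressure-drop cost.
```

A smooth reference surface gives eta = 1 by construction, so values above unity indicate a net benefit under the equal-pumping-power constraint.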


2015 ◽  
Vol 23 (1) ◽  
pp. 32-34 ◽  
Author(s):  
S.S. Sreejith

Purpose – Explains why performance evaluation designed for manufacturers is inappropriate for information technology organizations.
Design/methodology/approach – Underlines the distinctiveness of the information technology workforce and provides the basis for an effective performance-evaluation system designed for these workers.
Findings – Highlights the roles of consensus and transparency in setting and modifying evaluation criteria.
Practical implications – Urges the need for a fair and open rewards and recognition system to run in parallel with reformed performance evaluation.
Social implications – Provides a way of updating performance evaluation systems to take account of the move from manufacturing to information technology-based jobs in many developed and developing societies.
Originality/value – Reveals how best to recognize, reward and assess the performance of information technology workers.


2021 ◽  
Vol 4 (3) ◽  
pp. 251524592110268
Author(s):  
Roberta Rocca ◽  
Tal Yarkoni

Consensus on standards for evaluating models and theories is an integral part of every science. Nonetheless, in psychology, relatively little focus has been placed on defining reliable communal metrics to assess model performance. Evaluation practices are often idiosyncratic and are affected by a number of shortcomings (e.g., failure to assess models’ ability to generalize to unseen data) that make it difficult to discriminate between good and bad models. Drawing inspiration from fields such as machine learning and statistical genetics, we argue in favor of introducing common benchmarks as a means of overcoming the lack of reliable model evaluation criteria currently observed in psychology. We discuss a number of principles benchmarks should satisfy to achieve maximal utility, identify concrete steps the community could take to promote the development of such benchmarks, and address a number of potential pitfalls and concerns that may arise in the course of implementation. We argue that reaching consensus on common evaluation benchmarks will foster cumulative progress in psychology and encourage researchers to place heavier emphasis on the practical utility of scientific models.
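One of the shortcomings the authors flag is the failure to assess a model's ability to generalize to unseen data. The sketch below illustrates that practice in its most generic form (a held-out split with a common metric); it is not the paper's benchmark code, and the synthetic data and linear model are invented for illustration.

```python
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic data: a linear signal plus noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = 2.0 * x + rng.normal(0.0, 0.1, 40)

# Hold out the last 10 points; fit only on the training split.
x_tr, y_tr, x_te, y_te = x[:30], y[:30], x[30:], y[30:]
slope, intercept = np.polyfit(x_tr, y_tr, 1)

# Report both in-sample and out-of-sample fit; only the latter
# speaks to generalization, which is the point being made above.
r2_train = r2_score(y_tr, slope * x_tr + intercept)
r2_test = r2_score(y_te, slope * x_te + intercept)
```

A communal benchmark would fix the held-out set in advance, so that every model is scored on the same unseen data rather than on splits each lab chooses for itself.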

