Measures of Performance
Recently Published Documents


TOTAL DOCUMENTS: 430 (last five years: 75)
H-INDEX: 36 (last five years: 3)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Taufik Akbar ◽  
A.K. Siti-Nabiha

Purpose: This study investigates both internal and external stakeholders' views on the objectives and measures of performance of Indonesian Islamic microfinance banks (IMFBs).

Design/methodology/approach: This study uses a qualitative approach. In-depth interviews were conducted with a wide range of internal and external stakeholders of IMFBs in Indonesia. The primary stakeholders interviewed comprised the boards of directors of IMFBs located in several provinces in Indonesia, including rural and urban areas. The external stakeholders were the regulators/supervisors, represented by the Indonesian Financial Services Authority and Sharīʿah advisors of the National Sharīʿah Board, as well as Muslim scholars. The data were analysed using CAQDAS, a computer-assisted tool for qualitative analysis.

Findings: The objectives of the IMFBs are seen to represent more than profits or economic well-being. Their objectives also comprise spirituality and daʿwah (Islamic propagation). Daʿwah is conducted through the provision of funding and services that are aligned with Sharīʿah (Islamic law), the dissemination of information about Islamic financing, which is based on Islamic values and principles, and the payment of zakat (Islamic alms) and charitable contributions. The measures of performance are considered to be more holistic than those of conventional banks. Profit and growth are deemed important as the means to achieve social well-being objectives.

Research limitations/implications: Better insights into the objectives and measures of IMFBs could be achieved from interviews with other stakeholder categories, such as customers and the community. This could be the focus of future research.

Originality/value: This study adds a new discussion to the limited empirical literature on IMFBs by investigating the views of stakeholders on the objectives and performance of IMFBs in Indonesia.


2021 ◽  
Author(s):  
Elizabeth Mezzacappa ◽  
Ross Arnold ◽  
Melissa Jablonski ◽  
Jonathan Jablonski ◽  
Benjamin Abruzzo

2021 ◽  
Author(s):  
Alan J. Taylor

The performances of observers in auditory experiments are likely to be affected by extraneous noise from physiological or neurological sources and also by decision noise. Attempts have been made to measure the characteristics of this noise, in particular its level relative to that of masking noise provided by the experimenter. This study investigated an alternative approach, a method of analysis which seeks to reduce the effects of extraneous noise on measures derived from experimental data. Group-Operating-Characteristic (GOC) analysis was described by Watson (1963) and investigated by Boven (1976). Boven distinguished between common and unique noise. GOC analysis seeks to reduce the effects of unique noise. In the analysis, ratings of the same stimulus on different occasions are summed. The cumulative frequency distributions of the resulting variable define a GOC curve. This curve is analogous to an ROC curve, but since the effects of unique noise tend to be averaged out during the summation, the GOC is less influenced by extraneous noise. The amount of improvement depends on the relative variance of the unique and common noise (k): higher levels of unique noise lead to greater improvement.

In this study four frequency discrimination experiments were carried out with pigeons as observers, using a three-key operant procedure. In other experiments, computer-simulated observers were used. The first two pigeon experiments, and the simulations, were based on known distributions of common noise. The ROCs for the constructed distributions provided a standard with which the GOC curve could be compared. In all cases the analysis led to improvements in the measures of performance and increased the match between the experimental results and the ideal ROC. The amount of improvement, as well as reflecting the level of unique noise, depended on the number of response categories. With smaller numbers of categories, improvement was reduced and k was underestimated. Since the pigeon observers made only "yes" or "no" responses, the results for the pigeon experiments were compared with the results of simulations with known distributions in order to obtain more accurate estimates of k.

The third and fourth pigeon experiments involved frequency discrimination tasks with a standard of 450 Hz and comparison frequencies of 500, 600, 700, 800 and 900 Hz, and 650 Hz, respectively. With the multiple comparison frequencies the results were very variable, owing to the small number of trials for each frequency and the small number of replications. The results obtained with one comparison frequency were more orderly but, like those of the previous experiment, were impossible to distinguish from those which would be expected if there was no common noise.

A final set of experiments was based on a hardware simulation. Signals first used in the fourth pigeon experiment were processed by a system made up of a filter, a zero-axis crossing detector and a simulated observer. The results of these experiments were compatible with the possibility that the amount of unique noise in the pigeon experiments overwhelmed any evidence of common noise.
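
The core of the method described above (summing ratings of the same trials across replications, then sweeping a criterion over the summed variable) can be illustrated in a few lines. The sketch below uses simulated data with hypothetical parameter values (numbers of trials, replications, rating categories, and noise levels are all assumptions), not the thesis' own code or stimuli; it simply demonstrates the GOC summation step and compares the resulting curve's area with that of a single-replication ROC.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_replications = 500, 8
d_prime = 1.0   # assumed separation between noise and signal-plus-noise distributions

# Common noise: one draw per trial, shared by every replication of that trial.
common_noise = rng.normal(0.0, 1.0, n_trials)
signal_present = rng.integers(0, 2, n_trials).astype(bool)
evidence = common_noise + d_prime * signal_present

def ratings(evidence, unique_sd, n_categories=6):
    """Add unique (per-replication) noise, then bin the result into rating categories."""
    noisy = evidence + rng.normal(0.0, unique_sd, evidence.shape)
    edges = np.quantile(noisy, np.linspace(0, 1, n_categories + 1)[1:-1])
    return np.digitize(noisy, edges)

# GOC step: sum the ratings given to the same trials across replications,
# so unique noise tends to average out while common noise does not.
summed = sum(ratings(evidence, unique_sd=1.5) for _ in range(n_replications))

def operating_characteristic(score, signal):
    """Hit and false-alarm rates for every criterion on `score` (points of an ROC/GOC)."""
    thresholds = np.unique(score)
    hit = np.array([(score[signal] >= t).mean() for t in thresholds])
    fa = np.array([(score[~signal] >= t).mean() for t in thresholds])
    # Add the (0, 0) and (1, 1) endpoints and order by false-alarm rate.
    return np.concatenate(([1.0], fa, [0.0]))[::-1], np.concatenate(([1.0], hit, [0.0]))[::-1]

def area(fa, hit):
    """Trapezoidal area under an operating characteristic."""
    return float(np.sum(np.diff(fa) * (hit[1:] + hit[:-1]) / 2.0))

goc_fa, goc_hit = operating_characteristic(summed, signal_present)
roc_fa, roc_hit = operating_characteristic(ratings(evidence, unique_sd=1.5), signal_present)
print("area under GOC:", area(goc_fa, goc_hit))
print("area under a single-replication ROC:", area(roc_fa, roc_hit))
```

With a large unique-to-common noise ratio, the GOC area typically exceeds the single-replication ROC area, which is the improvement the analysis is designed to capture.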


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
E. Dervishi ◽  
T. Yang ◽  
M. K. Dyck ◽  
J. C. S. Harding ◽  
F. Fortin ◽  
...  

Abstract Metabolites in plasma of healthy nursery pigs were quantified using nuclear magnetic resonance. Heritabilities of metabolite concentration were estimated along with their phenotypic and genetic correlations with performance, resilience, and carcass traits in growing pigs exposed to a natural polymicrobial disease challenge. Variance components were estimated by GBLUP. Heritability estimates were low to moderate (0.11 ± 0.08 to 0.19 ± 0.08) for 14 metabolites, moderate to high (0.22 ± 0.09 to 0.39 ± 0.08) for 17 metabolites, and highest for l-glutamic acid (0.41 ± 0.09) and hypoxanthine (0.42 ± 0.08). Phenotypic correlation estimates of plasma metabolites with performance and carcass traits were generally very low. Significant genetic correlation estimates with performance and carcass traits were found for several measures of growth and feed intake. Interestingly, the plasma concentration of oxoglutarate was genetically negatively correlated with treatments received across the challenge nursery and finisher (− 0.49 ± 0.28; P < 0.05), and creatinine was positively correlated with mortality in the challenge nursery (0.85 ± 0.76; P < 0.05). These results suggest that some plasma metabolite phenotypes collected from healthy nursery pigs are moderately heritable, and their genetic correlations with measures of performance and resilience after disease challenge suggest that they may be potential genetic indicators of disease resilience.
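
As a rough illustration of the GBLUP-style variance-component analysis mentioned above, the sketch below builds a VanRaden genomic relationship matrix from simulated marker genotypes and profiles a Gaussian likelihood over heritability. All data, dimensions, and the grid-search shortcut are assumptions for illustration only; the study's own analysis is a full GBLUP variance-component estimation on real genotypes and phenotypes, which this toy example only approximates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_markers = 300, 2000   # placeholder dimensions

# Simulated genotypes coded 0/1/2 and a simulated phenotype (e.g. one metabolite).
allele_freq = rng.uniform(0.1, 0.9, n_markers)
M = rng.binomial(2, allele_freq, (n_animals, n_markers)).astype(float)

# VanRaden genomic relationship matrix G = ZZ' / (2 * sum p(1-p)).
Z = M - 2 * allele_freq
G = Z @ Z.T / (2 * np.sum(allele_freq * (1 - allele_freq)))

true_h2 = 0.30
breeding_values = rng.multivariate_normal(np.zeros(n_animals), true_h2 * G)
y = 10.0 + breeding_values + rng.normal(0.0, np.sqrt(1 - true_h2), n_animals)

def neg_log_lik(h2, y, G):
    """Gaussian negative log-likelihood of mean-centred y with covariance h2*G + (1-h2)*I.

    Total variance is fixed at 1 and fixed effects are reduced to a simple mean,
    a crude stand-in for a full (RE)ML mixed-model fit.
    """
    V = h2 * G + (1 - h2) * np.eye(len(y))
    _, logdet = np.linalg.slogdet(V)
    r = y - y.mean()
    return 0.5 * (logdet + r @ np.linalg.solve(V, r))

grid = np.linspace(0.01, 0.99, 99)
h2_hat = grid[np.argmin([neg_log_lik(h, y, G) for h in grid])]
print(f"estimated heritability ~ {h2_hat:.2f} (simulated truth {true_h2})")
```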


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kristine L. Beck ◽  
James Chong ◽  
Bruce D. Niendorf

Purpose: This study aims to examine whether a good corporate reputation leads to superior investment returns. Theory and empirics provide support for the idea that a good corporate reputation improves firm value, but much of the previous research fails to consider the risk of the companies they study and relies only on accounting measures of performance such as return on assets. A complete picture of the relationship between corporate reputation and shareholder value should include risk-adjusted returns and correlation with benchmark returns.

Design/methodology/approach: The Harris Poll Reputation Quotient (RQ), based on the reputations of the 100 most visible companies, suggests that companies with a "solid reputation" are more likely to be attractive investments. The authors construct portfolios using deciles and the RQ categories, rebalancing annually as RQ rankings are updated. Returns are adjusted for risk using Jensen's alpha, the information ratio, the Sharpe ratio, Modigliani and Modigliani's M2 measure, and Muralidhar's M3 measure.

Findings: The results indicate that choosing a portfolio based on the highest RQ-ranked firms does outperform the market on a risk-adjusted basis, and that the relationship between rankings and time-weighted returns is roughly monotonic. The authors also observe that corporate reputation is persistent, and that the best and worst most-visible firms are more likely to be privately held.

Originality/value: This research adds to the literature by including both market-based return measures and risk in the examination of the relationship between corporate reputation and financial performance.
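
For readers unfamiliar with the risk-adjusted measures listed above, the short sketch below computes Jensen's alpha, the information ratio, the Sharpe ratio, and the M2 measure from hypothetical monthly return series. The simulated portfolio, benchmark, and risk-free figures are assumptions, Muralidhar's M3 is omitted, and none of this reflects the authors' data or methodology.

```python
import numpy as np

rng = np.random.default_rng(2)
rf = 0.002                                       # assumed monthly risk-free rate
market = rng.normal(0.008, 0.04, 120)            # simulated benchmark returns (10 years)
portfolio = 0.002 + 1.1 * market + rng.normal(0.0, 0.01, 120)   # simulated RQ portfolio

excess_p, excess_m = portfolio - rf, market - rf

# Sharpe ratio: mean excess return per unit of total volatility.
sharpe = excess_p.mean() / portfolio.std(ddof=1)

# Jensen's alpha: excess return not explained by market exposure (beta).
beta = np.cov(excess_p, excess_m, ddof=1)[0, 1] / excess_m.var(ddof=1)
jensen_alpha = excess_p.mean() - beta * excess_m.mean()

# Information ratio: mean active return over tracking error.
active = portfolio - market
information_ratio = active.mean() / active.std(ddof=1)

# Modigliani-Modigliani M2: the portfolio levered/de-levered to the benchmark's volatility.
m2 = rf + sharpe * market.std(ddof=1)

print(f"Sharpe {sharpe:.3f}, Jensen's alpha {jensen_alpha:.4f}, "
      f"IR {information_ratio:.3f}, M2 {m2:.4f}")
```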


2021 ◽  
pp. 1-21
Author(s):  
Thomas Helmuth ◽  
Lee Spector

Abstract In genetic programming, an evolutionary method for producing computer programs that solve specified computational problems, parent selection is ordinarily based on aggregate measures of performance across an entire training set. Lexicase selection, by contrast, selects on the basis of performance on random sequences of training cases; this has been shown to enhance problem-solving power in many circumstances. Lexicase selection can also be seen as better reflecting biological evolution, by modeling sequences of challenges that organisms face over their lifetimes. Recent work has demonstrated that the advantages of lexicase selection can be amplified by down-sampling, meaning that only a random subsample of the training cases is used each generation. This can be seen as modeling the fact that individual organisms encounter only subsets of the possible environments and that environments change over time. Here we provide the most extensive benchmarking of down-sampled lexicase selection to date, showing that its benefits hold up to increased scrutiny. The reasons that down-sampling helps, however, are not yet fully understood. Hypotheses include that down-sampling allows for more generations to be processed with the same budget of program evaluations; that the variation of training data across generations acts as a changing environment, encouraging adaptation; or that it reduces overfitting, leading to more general solutions. We systematically evaluate these hypotheses, finding evidence against all three, and instead draw the conclusion that down-sampled lexicase selection's main benefit stems from the fact that it allows the evolutionary process to examine more individuals within the same computational budget, even though each individual is examined less completely.
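
Since lexicase selection and its down-sampled variant are procedural, a brief sketch may help. The code below is a generic illustration, not the authors' implementation: individuals are assumed to carry a per-case error vector, and the subsample fraction is an arbitrary placeholder. In practice the down-sample is typically drawn once per generation and shared by all selection events; here it is drawn inside the call for brevity.

```python
import random
from collections import namedtuple

def downsampled_lexicase_select(population, n_cases, subsample_frac=0.25):
    """Select one parent from `population`.

    Each individual is assumed to expose `errors`, a list of its error on every
    training case (lower is better). `subsample_frac` is an arbitrary placeholder.
    """
    # Down-sampling: use only a random subset of the training cases.
    cases = random.sample(range(n_cases), max(1, int(subsample_frac * n_cases)))
    random.shuffle(cases)                 # lexicase: consider cases in random order

    candidates = list(population)
    for case in cases:
        best = min(ind.errors[case] for ind in candidates)
        candidates = [ind for ind in candidates if ind.errors[case] == best]
        if len(candidates) == 1:          # filtered down to a single parent
            break
    return random.choice(candidates)      # ties after all cases: pick at random

# Toy usage with random error vectors.
Individual = namedtuple("Individual", "errors")
population = [Individual([random.random() for _ in range(100)]) for _ in range(50)]
parent = downsampled_lexicase_select(population, n_cases=100)
```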


Author(s):  
M.S. Emily Parcell ◽  
M.S. Shivani Patel ◽  
Cameron Severin ◽  
Yoona Cho ◽  
Alex Chaparro

Performing a secondary task while driving impairs various performance measures, including speed control. Distraction is associated with reductions in driving speed; however, this finding is often based on global measures of performance, such as course completion time or mean speed. This study investigated how a secondary task affected granular speed variation. Participants (N=16, ages 18-43) performed a secondary task of mentally subtracting pairs of numbers while negotiating a simulated road course. Various driving performance measures were obtained, but only results for longitudinal velocity are reported. The results reveal that drivers exhibited significant increases and decreases (beyond ±2 SD) in vehicle speed under distraction, with participants showing a stronger tendency to decrease their speed (60% of the observed speed excursions). This may explain why global measures of driving speed under distraction reveal an overall slowing. These results may improve our understanding of the nuanced effects of distraction on driving and be useful for predicting and diagnosing distracted driving behavior.
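
As a rough illustration of flagging granular speed excursions beyond ±2 SD, the sketch below applies such a threshold to a simulated longitudinal-velocity trace. The signal, sampling rate, and thresholding convention are assumptions for illustration, not the study's simulator output or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0.0, 300.0, 0.1)                       # 5 minutes sampled at 10 Hz (assumed)
baseline = 25.0 + rng.normal(0.0, 0.6, t.size)       # m/s, simulated undistracted driving

# Simulated distracted trace: occasional excursions, biased toward slowing down.
perturb = np.where(rng.random(t.size) < 0.02, rng.normal(-1.5, 2.0, t.size), 0.0)
distracted = baseline + perturb

mu, sd = baseline.mean(), baseline.std(ddof=1)
excursions = np.abs(distracted - mu) > 2 * sd        # samples beyond +/-2 SD of baseline
slow = distracted[excursions] < mu                   # which excursions were decreases

print(f"{excursions.mean():.1%} of samples fell outside +/-2 SD; "
      f"{slow.mean():.1%} of those excursions were speed decreases")
```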

