Performance Indicators of a Collaborative Business Ecosystem – A Simulation Study

Author(s):  
Paula Graça ◽  
Luís M. Camarinha-Matos
Computers ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 167

Advances in information and communication technologies and, more specifically, in artificial intelligence have resulted in more intelligent systems, which, in the business world, and particularly in collaborative business ecosystems (CBEs), can lead to more streamlined, effective, and sustainable processes. Following the design science research method, this article presents a simulation model that includes a performance assessment and influence mechanism to evaluate and influence the collaboration of the organisations in a business ecosystem. Establishing adequate performance indicators to assess the organisations can act as an influencing factor on their behaviour, contributing to enhancing their performance and improving the sustainability of collaboration in the ecosystem. As such, several scenarios are presented that shape the simulation model with actual data gathered from three IT industry organisations operating in the same business ecosystem, assessed by a set of proposed performance indicators. The resulting outcomes show that collaboration can be measured and that the organisations' behaviour can be influenced by varying the weights of the performance indicators adopted by the CBE manager.
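The idea of steering organisations' behaviour by re-weighting performance indicators can be illustrated with a minimal sketch. This is not the authors' model: the indicator names (`contribution`, `prestige`, `innovation`), the example values, and the weight profiles are all illustrative assumptions; the sketch only shows how a CBE manager's choice of weights changes which organisation scores highest.

```python
# Minimal sketch (not the article's simulation model): a weighted
# composite score over hypothetical, already-normalised performance
# indicators for organisations in a collaborative business ecosystem.

def composite_score(indicators, weights):
    """Weighted average of normalised indicator values (weights sum to 1)."""
    total_w = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total_w

# Hypothetical organisations with indicator values in [0, 1].
orgs = {
    "Org A": {"contribution": 0.8, "prestige": 0.6, "innovation": 0.5},
    "Org B": {"contribution": 0.5, "prestige": 0.9, "innovation": 0.7},
}

# Two weight profiles a CBE manager might adopt: one favouring
# contribution, one favouring innovation.
weights_contribution = {"contribution": 0.5, "prestige": 0.3, "innovation": 0.2}
weights_innovation = {"contribution": 0.2, "prestige": 0.3, "innovation": 0.5}

for name, ind in orgs.items():
    print(name,
          round(composite_score(ind, weights_contribution), 3),
          round(composite_score(ind, weights_innovation), 3))
```

Under the contribution-heavy profile Org A ranks first; under the innovation-heavy profile Org B does, which is the sense in which the weighting itself can influence behaviour.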



2006 ◽  
Vol 11 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Alexander von Eye

At the level of manifest categorical variables, a large number of coefficients and models for the examination of rater agreement have been proposed and used. The most popular of these is Cohen's κ. In this article, a new coefficient, κ_s, is proposed as an alternative measure of rater agreement. Both κ and κ_s allow researchers to determine whether agreement in groups of two or more raters is significantly beyond chance. Stouffer's z is used to test the null hypothesis that κ_s = 0. The coefficient κ_s allows one, in addition to evaluating rater agreement in a fashion parallel to κ, to (1) examine subsets of cells in agreement tables, (2) examine cells that indicate disagreement, (3) consider alternative chance models, (4) take covariates into account, and (5) compare independent samples. Results from a simulation study are reported, which suggest that (a) the four measures of rater agreement, Cohen's κ, Brennan and Prediger's κ_n, raw agreement, and κ_s, are sensitive to the same data characteristics when evaluating rater agreement and (b) both the z-statistic for Cohen's κ and Stouffer's z for κ_s are unimodally and symmetrically distributed, but slightly heavy-tailed. Examples use data from verbal processing and applicant selection.
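Cohen's κ, the baseline against which κ_s is compared, corrects observed agreement for the agreement expected by chance under independent marginals. A short sketch of the standard two-rater computation (the agreement table below is invented for illustration, not taken from the article's examples):

```python
# Cohen's kappa for two raters on a square agreement table.
# table[i][j] = number of objects rated category i by rater 1
# and category j by rater 2.

def cohens_kappa(table):
    k = len(table)
    n = sum(sum(row) for row in table)
    # Observed proportion of agreement: the diagonal cells.
    p_obs = sum(table[i][i] for i in range(k)) / n
    # Chance agreement from the product of the marginal proportions.
    row_marg = [sum(table[i]) / n for i in range(k)]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_chance = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_obs - p_chance) / (1 - p_chance)

# Illustrative table: two raters, three categories, 60 objects.
table = [[20, 5, 0],
         [3, 15, 2],
         [1, 4, 10]]
print(round(cohens_kappa(table), 3))  # → 0.615
```

Here p_obs = 45/60 = 0.75 and p_chance = 0.35, so κ = 0.40/0.65 ≈ 0.615, i.e. agreement well beyond chance.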


Methodology ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 11-20 ◽  
Author(s):  
Gregor Sočan

Abstract. When principal component solutions are compared across two groups, a question arises whether the extracted components have the same interpretation in both populations. The problem can be approached by testing null hypotheses stating that the congruence coefficients between pairs of vectors of component loadings are equal to 1. Chan, Leung, Chan, Ho, and Yung (1999) proposed a bootstrap procedure for testing the hypothesis of perfect congruence between vectors of common factor loadings. We demonstrate that the procedure by Chan et al. is both theoretically and empirically inadequate for application to principal components. We propose a modification of their procedure, which constructs the resampling space according to the characteristics of the principal component model. The results of a simulation study show satisfactory empirical properties of the modified procedure.
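The congruence coefficient being tested here is Tucker's φ, the cosine of the angle between two vectors of loadings; φ = 1 means the components have identical interpretation up to a positive scaling. A minimal sketch of the coefficient itself (the loading vectors are invented for illustration; the article's bootstrap test is not reproduced):

```python
import math

# Tucker's congruence coefficient between two vectors of component
# loadings: their dot product divided by the product of their norms.
# phi = 1 indicates perfect congruence (same interpretation).

def congruence(x, y):
    num = sum(a * b for a, b in zip(x, y))
    den = (math.sqrt(sum(a * a for a in x))
           * math.sqrt(sum(b * b for b in y)))
    return num / den

# Hypothetical loadings of one component in two groups.
loadings_group1 = [0.71, 0.68, 0.65, 0.10]
loadings_group2 = [0.69, 0.72, 0.60, 0.15]

print(round(congruence(loadings_group1, loadings_group2), 3))
```

A bootstrap test of perfect congruence, as in the article, would resample under a model where the population value of φ is exactly 1 and compare the observed φ with that reference distribution.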

