System and architecture evaluation framework using cross-domain dynamic complexity measures

2017 ◽  
Vol 11 (4) ◽  
pp. 2018-2027 ◽  
Author(s):  
Jonathan Fischi ◽  
Roshanak Nilchiani ◽  
Jon Wade

Author(s):  
Xifan Tang ◽  
Edouard Giacomin ◽  
Giovanni De Micheli ◽  
Pierre-Emmanuel Gaillardon

2015 ◽  
Vol 76 (11) ◽  
Author(s):  
Siti Mariam Abdul Rahman ◽  
Clark Borst ◽  
Max Mulder ◽  
Rene van Paassen

Developing more advanced human-machine systems for future Air Traffic Management (ATM) concepts requires a deep understanding of what constitutes operator workload and how taskload and sector complexity affect it. Many efforts have been made in the past to measure and/or predict operator workload using sector complexity. However, most sector complexity metrics that include sector design are calculated according to a set of rules and subjective weightings, rendering them sector-dependent. This research focuses on comparing the Solution Space Diagram (SSD) method with a widely accepted complexity metric: Dynamic Density (DD). In essence, the SSD method used in this research observes an aircraft's restrictions and opportunities to resolve traffic conflicts in both the speed and heading dimensions. It is hypothesized that the more area covered on the solution space, that is, the fewer options the controller has to resolve conflicts, the more difficult the task and the higher the workload experienced by the controller. To compare sector complexity measures in terms of their transferability in capturing dynamic complexity across different sectors, a human-in-the-loop experiment using two distinct sectors was designed and conducted. The experiments revealed that the SSD metric has a higher correlation with the controllers' workload ratings than the number of aircraft and the un-weighted NASA DD metric. Although linear regression analysis improved the correlation between the workload ratings and the weighted DD metric as compared to the SSD metric, the DD metric proved to be more sensitive to changes in sector layout than the SSD metric. This result indicates that the SSD metric is better able to capture controller workload than the DD metric when tuning for a specific sector layout is not feasible.
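As a rough illustration of the SSD idea described above, the sketch below samples the controlled aircraft's speed/heading plane, marks the velocity choices that would lose separation with an intruder within a look-ahead horizon, and reports the covered fraction as a workload proxy. The separation minimum, speed envelope, look-ahead time and traffic geometry are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

# Minimal sketch of the Solution Space Diagram (SSD) coverage idea:
# mark the (speed, heading) choices that would lose separation with an
# intruder within a look-ahead time, then report the blocked fraction.
SEP_NM = 5.0          # assumed horizontal separation minimum (NM)
LOOKAHEAD_S = 300.0   # assumed look-ahead horizon (s)

def conflict_grid(own_pos, own_speeds, own_headings, intruder_pos, intruder_vel):
    """Boolean grid: True where a (speed, heading) choice leads to a conflict."""
    spd, hdg = np.meshgrid(own_speeds, np.radians(own_headings), indexing="ij")
    own_vel = np.stack([spd * np.sin(hdg), spd * np.cos(hdg)], axis=-1)  # NM/s
    rel_pos = np.asarray(intruder_pos) - np.asarray(own_pos)             # NM
    rel_vel = np.asarray(intruder_vel) - own_vel                         # NM/s
    denom = np.sum(rel_vel ** 2, axis=-1)
    # time of closest point of approach, clipped to [0, LOOKAHEAD_S]
    t_cpa = np.where(denom > 0,
                     -np.sum(rel_pos * rel_vel, axis=-1) / np.maximum(denom, 1e-12),
                     0.0)
    t_cpa = np.clip(t_cpa, 0.0, LOOKAHEAD_S)
    miss = np.linalg.norm(rel_pos + rel_vel * t_cpa[..., None], axis=-1)
    return miss < SEP_NM

# Fraction of the speed/heading plane that is blocked -> workload proxy.
speeds = np.linspace(0.06, 0.14, 50)       # roughly 220-500 kts, in NM/s
headings = np.linspace(0.0, 359.0, 360)
blocked = conflict_grid(own_pos=(0.0, 0.0),
                        own_speeds=speeds, own_headings=headings,
                        intruder_pos=(20.0, 5.0), intruder_vel=(-0.10, 0.0))
print("SSD coverage:", blocked.mean())
```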


Author(s):  
José Roberto C. Piqueira

This work generalizes the Lopez-Ruiz, Mancini and Calbet (LMC) and Shiner, Davison and Landsberg (SDL) complexity measures, considering that the state of a system or process is represented by a dynamical variable during a certain time interval. As the two complexity measures are based on the calculation of informational entropy, an equivalent information source is defined and, as time passes, the individual information associated with the measured parameter is the seed for calculating instantaneous LMC and SDL measures. To show how the methodology works, an example with economic data is presented.
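The LMC and SDL measures referred to above have standard definitions in terms of the normalized Shannon entropy H and the disequilibrium D: LMC = H·D and SDL = H^alpha·(1 - H)^beta. The sketch below computes windowed ("instantaneous") values of both for a scalar time series; the binning scheme, window length, exponents and the synthetic series are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Windowed ("instantaneous") LMC and SDL complexity of a scalar time series.
def probabilities(window, n_bins=10):
    """Histogram the window into n_bins states and return the probabilities."""
    counts, _ = np.histogram(window, bins=n_bins)
    return counts / counts.sum()

def lmc_sdl(window, n_bins=10, alpha=1.0, beta=1.0):
    p = probabilities(window, n_bins)
    p_nz = p[p > 0]
    h = -np.sum(p_nz * np.log(p_nz))               # Shannon entropy
    h_norm = h / np.log(n_bins)                     # normalized to [0, 1]
    d = np.sum((p - 1.0 / n_bins) ** 2)             # disequilibrium
    lmc = h_norm * d                                # LMC: entropy x disequilibrium
    sdl = h_norm ** alpha * (1.0 - h_norm) ** beta  # SDL: order/disorder balance
    return lmc, sdl

# Example: slide a window over a synthetic series (random-walk stand-in for data).
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=1000))
window_len = 100
for start in range(0, len(series) - window_len + 1, 200):
    lmc, sdl = lmc_sdl(series[start:start + window_len])
    print(f"t={start:4d}  LMC={lmc:.4f}  SDL={sdl:.4f}")
```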


2018 ◽  
Vol 14 (3) ◽  
pp. 41-78 ◽  
Author(s):  
Mariam Ben Hassen ◽  
Mohamed Turki ◽  
Faïez Gargouri

This article presents a set of Sensitive Business Process (SBP) modeling requirements and proposes a multi-criteria evaluation framework to appraise the expressiveness of currently widely used business process modelling formalisms and select the most suitable for SBP representation. The modelling of SBPs, be they process oriented or knowledge oriented, presents special requirements dictated by several factors: the highly dynamic complexity and flexibility of the processes; the high number of critical activities requiring intensive acquisition, sharing, storage and (re)use of very specific crucial knowledge; the specificity and diversity of information and knowledge sources; and the high degree of collaboration and interaction (intra/inter-organizational) among participants, who apply, create and share a great amount of very important tacit organizational knowledge in order to achieve collective objectives and create value. As SBP models become more complex, the selection of an appropriate modeling formalism gains in importance for improving the identification of the crucial knowledge that is mobilized and created by these processes. The result of the evaluation justifies the choice of the best-positioned formalism available today, the standard BPMN 2.0. The authors also illustrate the practical applicability of this specification on a medical process in the context of the organization for the protection of motor-disabled people of Sfax, Tunisia.
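A multi-criteria evaluation of this kind can be reduced, in its simplest form, to weighted scoring of candidate formalisms against the SBP requirements. The sketch below shows that aggregation step only; the criteria, weights and scores are illustrative placeholders, not the article's evaluation data.

```python
# Weighted-scoring sketch for comparing process modelling formalisms.
# All criteria, weights and scores below are placeholder values.
criteria = {                           # requirement -> weight (sums to 1.0)
    "knowledge_representation":  0.30,
    "dynamic_flexibility":       0.25,
    "collaboration_support":     0.25,
    "tool_and_standard_support": 0.20,
}

scores = {                             # expressiveness scores on a 0-5 scale
    "BPMN 2.0": {"knowledge_representation": 4, "dynamic_flexibility": 4,
                 "collaboration_support": 5, "tool_and_standard_support": 5},
    "EPC":      {"knowledge_representation": 3, "dynamic_flexibility": 3,
                 "collaboration_support": 3, "tool_and_standard_support": 4},
    "UML AD":   {"knowledge_representation": 3, "dynamic_flexibility": 3,
                 "collaboration_support": 4, "tool_and_standard_support": 4},
}

def weighted_score(formalism_scores, weights):
    """Aggregate per-criterion scores into a single weighted value."""
    return sum(weights[c] * formalism_scores[c] for c in weights)

ranking = sorted(scores, key=lambda f: weighted_score(scores[f], criteria),
                 reverse=True)
for f in ranking:
    print(f"{f:10s} {weighted_score(scores[f], criteria):.2f}")
```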


2014 ◽  
Vol 687-691 ◽  
pp. 1795-1801
Author(s):  
Qi Sheng Guo ◽  
Di Zhang ◽  
Chuan Guo Lu

We developed a complex-network-based method for evaluating the architectural degree of order of a weapon equipment system, together with an entropy-based framework for evaluating the combat capability of weapon equipment systems. Specifically, the architectural degree of order is evaluated comprehensively from the time-efficiency entropy, quality entropy and anti-congestion entropy of the complex network. We obtained the index weights by means of Group-AHP and, accordingly, computed the architectural degree of order of the weapon equipment system. Finally, we employed an example to validate our model.
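The aggregation step described above can be sketched as follows: derive index weights from an AHP pairwise-comparison matrix via its principal eigenvector, then combine the three entropy indices into a single degree-of-order score. The comparison judgments, the entropy values and the use of 1 - H as the per-index degree of order are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Group judgment matrix (a geometric mean of several experts' matrices would
# go here); rows/columns: time-efficiency, quality, anti-congestion indices.
pairwise = np.array([[1.0, 2.0, 3.0],
                     [1/2, 1.0, 2.0],
                     [1/3, 1/2, 1.0]])
weights = ahp_weights(pairwise)

# Placeholder normalized entropy indices of the equipment-system network;
# lower entropy is taken here as a higher degree of order, hence 1 - H.
entropies = np.array([0.42, 0.35, 0.50])   # time, quality, anti-congestion
degree_of_order = float(np.dot(weights, 1.0 - entropies))

print("weights:", np.round(weights, 3))
print("architectural degree of order:", round(degree_of_order, 3))
```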

