The Next Generation of Scientific-Based Risk Metrics

2016 ◽  
Vol 6 (3) ◽  
pp. 43-52
Author(s):  
Lanier Watkins ◽  
John S. Hurley

One of the major challenges to an organization achieving a given level of preparedness to “effectively” combat existing and future cyber threats and vulnerabilities is its ability to ensure the security and reliability of its networks. Most existing efforts are quantitative in nature and limited solely to the organization's own networks and systems. To be fair, some progress has been made in how organizations, as a whole, position themselves to address these threats (GAO 2012). Unfortunately, attackers' skill sets and resource levels have improved as well; they are increasingly successful at gaining unwanted access to organizations' information assets. The authors believe this is due in large part to the failure of existing methods to assess the overall vulnerability of networks. In addition, significant threats and vulnerabilities beyond organizations' networks and systems are not receiving the attention they warrant. In this paper, the authors propose a more comprehensive approach that enables an organization to assess its “cyber maturity” level more realistically, in the hope of better positioning itself against existing and new cyber threats. The authors also argue for a better understanding of another missing piece of the puzzle: the reliability and security of networks in terms of scientific risk-based metrics (e.g., the severity of individual vulnerabilities and the overall vulnerability of the network). Their risk-based metrics focus on the probability of compromise due to a given vulnerability, employee non-adherence to company cyber policies, and insider threats.
They are: (1) built on the CVSS Base Score, which is modified with weights derived from the Analytic Hierarchy Process (AHP) to make the overall score more representative of the vulnerability's impact on the global infrastructure, and (2) rooted in repeatable quantitative characteristics (i.e., vulnerabilities), such as the sum of the probabilities that devices will be compromised via client-side or server-side attacks stemming from software or hardware vulnerabilities. The authors demonstrate the feasibility of their method by applying their approach to a case study and highlighting the benefits and impediments that result.
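The AHP weighting step described in (1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the three criteria, the pairwise judgments, the per-criterion sub-scores, and the weighted-sum combination rule are all assumed for the example.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over three impact criteria
# (e.g., confidentiality, integrity, availability), on Saaty's 1-9 scale.
# These judgments are illustrative, not taken from the paper.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = (lambda_max - n) / ((n - 1) * RI); RI(3) = 0.58.
# CR below 0.1 is conventionally taken as acceptably consistent judgment.
n = A.shape[0]
cr = (eigvals[k].real - n) / ((n - 1) * 0.58)

# Re-weighting a CVSS-style score (0-10 scale) from per-criterion
# sub-scores; a plain weighted sum is one of several ways the base
# score could be made to reflect infrastructure-level impact.
sub_scores = np.array([8.0, 6.0, 4.0])   # hypothetical C/I/A sub-scores
adjusted = float(w @ sub_scores)
print(round(adjusted, 2), round(cr, 3))
```

Because the weights sum to one, the adjusted score stays on the same 0-10 scale as the inputs while shifting toward the criteria judged most important.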


Author(s):  
Haiyang Ge ◽  
Haibo Gao ◽  
Nianzhong Chen ◽  
Zhiguo Lin

Abstract An enhanced failure mode and effects analysis (FMEA) based risk assessment for subsea compressor systems was proposed in this study. The enhanced model was established using a combination of the fuzzy analytic hierarchy process (FAHP), fuzzy comprehensive evaluation, and FMEA. Unlike the traditional FMEA model, the enhanced model improves the ability to identify system faults and to measure the risk of a subsea compressor system through effective qualitative and quantitative analyses. A case study was then conducted to demonstrate the capability of the developed method in a complete risk assessment for a subsea compressor system, and the subsystems and subcomponents with low reliability were identified. Comparing the enhanced model with previous models shows that the potential risks of some components change and identifies the components with higher potential risk in the system. A sensitivity analysis was also performed to investigate the impact of subcomponent parameters on system reliability.
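The combination of fuzzy ratings and factor weighting that such an enhanced model builds on can be sketched in miniature. Everything below is an assumption for illustration, not the paper's model: the triangular fuzzy ratings, the centroid defuzzification, the weights, and the use of a weighted geometric score in place of the classic RPN product.

```python
# Minimal sketch of weighting FMEA risk factors with fuzzy judgments.
# Triangular fuzzy numbers (l, m, u) stand in for expert ratings of
# severity (S), occurrence (O), and detection (D).

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Hypothetical fuzzy ratings for one failure mode of a subcomponent.
ratings = {"S": (6, 7, 8), "O": (3, 4, 5), "D": (4, 5, 6)}

# FAHP-style factor weights (would come from pairwise comparisons).
weights = {"S": 0.5, "O": 0.3, "D": 0.2}

# Weighted geometric score instead of the classic RPN = S*O*D product,
# so the weights shift each factor's influence on the ranking.
score = 1.0
for factor, tfn in ratings.items():
    score *= defuzzify(tfn) ** weights[factor]
print(round(score, 3))
```

Unlike the unweighted RPN, this form lets a highly weighted severity rating dominate the ranking even when occurrence and detection scores are moderate.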


Author(s):  
Karim Jamaleddin ◽  
Isam Kaysi

An interchange is a road junction that typically uses grade separation to accommodate different volumes of traffic safely and efficiently through interconnecting roads. There are many types of interchanges, each with different characteristics and applications. In this paper, a prioritization framework is proposed for selecting a suitable interchange type based on a variety of performance measures, including operational performance, socio-environmental impact, safety, and cost. The assessment is based on a multi-criteria analysis (MCA) approach. The relative importance of the criteria is obtained using the Analytic Hierarchy Process (AHP), whereby the weights of the criteria are derived using multilevel hierarchic structures. Each alternative is then evaluated on a set of criteria derived from several objectives using a linear additive model. This paper illustrates the implementation of the proposed framework in a case study of an urban interchange in the city of Riyadh. The study considers three interchanges to demonstrate the procedure; other interchange forms may also be appropriate at the site studied. This research helps define the criteria that play a significant role in determining the preferred design alternative, and the extent to which each criterion affects the overall priority of a given interchange configuration. The results showed that the Diverging Diamond Interchange (DDI) outperformed its counterparts and was chosen as the most preferred configuration; the Single Point Urban Interchange (SPUI) ranked second, while the Tight Urban Diamond Interchange (TUDI) was least preferred. Sensitivity analysis was undertaken to determine the impact of changes in criteria scores or weights on the overall results, and revealed the robustness of the proposed model.
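The linear additive model at the core of the MCA step can be sketched as follows. The criteria weights and normalized scores below are illustrative placeholders, not values from the Riyadh case study; with these placeholder numbers the ordering happens to mirror the DDI, then SPUI, then TUDI ranking reported.

```python
# Linear additive MCA sketch: each alternative's overall priority is the
# weighted sum of its normalized criterion scores. All numbers assumed.
criteria_weights = {"operations": 0.40, "safety": 0.25,
                    "socio_env": 0.20, "cost": 0.15}

# Hypothetical normalized scores (0-1, higher is better) per alternative.
scores = {
    "DDI":  {"operations": 0.9, "safety": 0.8, "socio_env": 0.7, "cost": 0.6},
    "SPUI": {"operations": 0.8, "safety": 0.7, "socio_env": 0.6, "cost": 0.7},
    "TUDI": {"operations": 0.6, "safety": 0.6, "socio_env": 0.8, "cost": 0.8},
}

def priority(alt):
    """Weighted sum of an alternative's criterion scores."""
    return sum(criteria_weights[c] * scores[alt][c] for c in criteria_weights)

ranking = sorted(scores, key=priority, reverse=True)
print(ranking, [round(priority(a), 3) for a in ranking])
```

The sensitivity analysis described in the abstract amounts to perturbing `criteria_weights` or `scores` and checking whether `ranking` changes order.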


2014 ◽  
Vol 27 (8) ◽  
pp. 760-776 ◽  
Author(s):  
David A. Munoz ◽  
Harriet Black Nembhard ◽  
Jennifer L. Kraschnewski

Purpose – The purpose of this paper is to quantify complexity in translational research and to calculate the impact of major operational steps and technical requirements on the ability to accelerate moving new discoveries into clinical practice. Design/methodology/approach – A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings – Overall, the evidence generated was valuable for understanding the various components of translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications – As the method is mainly based on subjective opinion, the results may be biased; however, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications – The integrated QFD-AHP framework provides evidence that could help generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value – Current conceptual models in translational research provide little or no means to assess complexity. The proposed method aims to fill this gap. Additionally, the literature review covers various features that have not previously been explored in translational research.
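One way a QFD-AHP integration of this kind can work is to push AHP-derived requirement weights through a QFD relationship matrix to rank technical or operational steps. The sketch below assumes all of its numbers: the weights, the conventional 0/1/3/9 relationship strengths, and the requirements named in the comments; it is not the paper's model.

```python
import numpy as np

# Requirement weights, as if derived from AHP pairwise comparisons over
# three stakeholder requirements (e.g., collaboration networks, team
# capacity, community engagement). Illustrative assumptions throughout.
req_weights = np.array([0.5, 0.3, 0.2])

# QFD relationship matrix: rows = requirements, columns = operational
# steps; entries use the conventional 0/1/3/9 QFD strength scale.
R = np.array([
    [9, 3, 1],
    [3, 9, 3],
    [1, 3, 9],
])

# Technical importance of each step: weights propagated through R,
# then normalized so the importances sum to 1.
tech_importance = req_weights @ R
tech_importance = tech_importance / tech_importance.sum()
print(np.round(tech_importance, 3))
```

The resulting vector ranks the steps by how strongly they serve the most heavily weighted requirements, which is what makes the combined framework useful for allocating resources.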


2019 ◽  
Vol 37 (3) ◽  
pp. 327-345 ◽  
Author(s):  
Fawzeia Abdulla Al Marzooqi ◽  
Matloub Hussain ◽  
Syed Zamberi Ahmad

Purpose The purpose of this paper is to explore the resources, capabilities and competencies needed to improve the performance of physical asset management (PAM). Design/methodology/approach The analytic hierarchy process (AHP) is used to select and prioritize the most appropriate factors for improving performance. A multi-criteria approach is used to analyze and compare the importance of 6 main criteria and 18 subcriteria identified from a survey of the relevant literature. Findings The study revealed that not all factors are viewed as having equal importance in improving PAM performance: three of the six main factors attained greater importance than the others. Research limitations/implications This study explored the factors required for managing assets only within the third stage of the asset lifecycle, the utilization stage. It is recommended that future studies determine the importance of similar factors in the other stages of the asset lifecycle, or identify new factors and criteria. Practical implications Knowledge of the differential impacts of the factors on PAM performance can help asset managers and decision makers allocate resources and focus their work on the highest-ranked rather than the lowest-ranked factors. The AHP also provides an effective means for asset managers to identify priorities among decision criteria in their organization. Originality/value To date, no study has explored the impact of these six factors in combination on the performance of PAM. Previous studies have treated each of these factors as equally important; their relative ranking in practice, when they appear together, has remained unexamined.
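A multilevel structure of 6 main criteria and 18 subcriteria implies the standard AHP aggregation in which a subcriterion's global weight is its local weight times its parent criterion's weight. The small hierarchy and all weights below are illustrative assumptions, not the study's actual criteria.

```python
# Multilevel AHP weight aggregation sketch: two hypothetical main
# criteria, each mapping to (main_weight, {subcriterion: local_weight}).
# Local weights sum to 1 within each parent; main weights sum to 1.
hierarchy = {
    "resources":    (0.4, {"budget": 0.6, "tools": 0.4}),
    "competencies": (0.6, {"training": 0.5, "experience": 0.3,
                           "leadership": 0.2}),
}

# Global weight of a subcriterion = parent weight * local weight.
global_weights = {
    sub: main_w * local_w
    for main_w, subs in hierarchy.values()
    for sub, local_w in subs.items()
}

# Because each level's weights sum to 1, global weights also sum to 1,
# giving a single ranked list across the whole hierarchy.
ranked = sorted(global_weights.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

This is the step that lets a study compare all 18 subcriteria on one scale, rather than only within their parent criterion.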

