Modeling Uncertainty with Interval Valued Fuzzy Numbers

Author(s):  
Palash Dutta

Risk assessment is a significant aid in the decision-making process. It is usually performed using models, and a model is a function of parameters that are typically affected by uncertainty arising from lack of data, imprecision, vagueness, and small sample sizes. Fuzzy set theory is a well-established mathematical tool for handling this type of uncertainty. Triangular fuzzy numbers (TFNs) and trapezoidal fuzzy numbers (TrFNs) are the representations most commonly used to embody it, but in real-world situations bell-shaped fuzzy numbers may also arise. Moreover, a type-I fuzzy set may not always be able to assign a single membership value from [0,1]: requiring an expert to commit to a precise value is overly restrictive, so assigning an interval value is more practical. This is where the interval-valued fuzzy set (IVFS) comes into the picture. In risk assessment models, some parameters may be represented by triangular interval-valued fuzzy numbers (TIVFNs) while others are represented by bell-shaped IVFNs. In such circumstances it is important to devise a technique for combining TIVFNs and bell-shaped IVFNs, as they are not directly comparable. This article presents a technique to combine both types of incomparable IVFNs within the same framework, and a case study in risk assessment is carried out under this setting.
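To make the TIVFN representation concrete, the following sketch (not drawn from the article; all parameter values are hypothetical) shows one common way to encode a triangular interval-valued fuzzy number as a pair of nested triangular membership functions, so that each point receives an interval-valued membership grade and alpha-cuts come in lower/upper pairs.

```python
# Minimal sketch of a triangular interval-valued fuzzy number (TIVFN):
# membership at each x is an interval [lower(x), upper(x)] given by two
# nested triangular membership functions.

def tri_membership(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

class TIVFN:
    def __init__(self, lower_abc, upper_abc):
        # upper_abc should enclose lower_abc so that lower(x) <= upper(x)
        self.lower_abc = lower_abc
        self.upper_abc = upper_abc

    def membership(self, x):
        """Interval-valued membership grade at x."""
        return (tri_membership(x, *self.lower_abc),
                tri_membership(x, *self.upper_abc))

    def alpha_cut(self, alpha):
        """Alpha-cuts of the lower and upper membership functions."""
        def cut(a, b, c):
            return (a + alpha * (b - a), c - alpha * (c - b))
        return cut(*self.lower_abc), cut(*self.upper_abc)

# Example: uncertainty about a parameter "around 5"
p = TIVFN(lower_abc=(4.0, 5.0, 6.0), upper_abc=(3.0, 5.0, 7.0))
print(p.membership(4.5))   # (0.5, 0.75)
print(p.alpha_cut(0.5))    # ((4.5, 5.5), (4.0, 6.0))
```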

2016 ◽  
Vol 5 (2) ◽  
pp. 96-117 ◽  
Author(s):  
Palash Dutta

In risk assessment, model parameters are generally affected by uncertainty arising from vagueness, imprecision, lack of data, small sample sizes, and so on. Fuzzy set theory and Dempster-Shafer theory of evidence (DST) can be explored to handle this type of uncertainty. Some parameters of a risk assessment model may be represented by a Dempster-Shafer structure (DSS) while others are represented by fuzzy numbers, and dealing with such mixed representations requires new techniques. This paper presents two algorithms for combining a Dempster-Shafer structure having generalized/normal fuzzy focal elements with generalized/normal fuzzy numbers within the same framework. A sampling technique for evidence theory and alpha-cuts for fuzzy numbers are used to execute the algorithms. Finally, the results are obtained in the form of fuzzy numbers (normal/generalized) at different fractiles.
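The paper's two algorithms are not reproduced in the abstract; the sketch below is only a hedged illustration of the ingredients it names: sampling focal elements of a Dempster-Shafer structure by their masses and combining them, via alpha-cuts, with a triangular fuzzy number through a toy monotone model (all names and values are hypothetical).

```python
# Hedged sketch of mixing the two representations: sample DSS focal
# elements by mass, take the alpha-cut of a triangular fuzzy number, and
# propagate interval bounds through a simple monotone model.
import random

# DSS for parameter A: focal elements (intervals) with basic probability masses
dss_A = [((1.0, 2.0), 0.6), ((1.5, 3.0), 0.4)]

def alpha_cut_tri(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def model(a, b):
    """Toy risk model: here simply the sum of the two parameters."""
    return a + b

def propagate(alpha, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    elements, masses = zip(*dss_A)
    b_lo, b_hi = alpha_cut_tri(0.5, 1.0, 1.5, alpha)  # fuzzy parameter B
    lows, highs = [], []
    for _ in range(n_samples):
        a_lo, a_hi = rng.choices(elements, weights=masses)[0]
        # interval bounds of the (monotone) model output for this sample
        lows.append(model(a_lo, b_lo))
        highs.append(model(a_hi, b_hi))
    lows.sort(); highs.sort()
    # e.g. the 0.5 fractile of the lower and upper output bounds
    return lows[n_samples // 2], highs[n_samples // 2]

print(propagate(alpha=0.5))
```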


2012 ◽  
Vol 30 (1) ◽  
pp. 35-41
Author(s):  
Emily J. Kapler ◽  
Mark P. Widrlechner ◽  
Philip M. Dixon ◽  
Janette R. Thompson

Use of risk-assessment models that can predict the naturalization and invasion of non-native woody plants is a potentially beneficial approach for protecting human and natural environments. This study validates the power and accuracy of four risk-assessment models previously tested in Iowa, and examines the performance of a new random forest modeling approach. The random forest model was fitted with the same data used to develop the four earlier risk-assessment models. The validation of all five models was based on a new set of 11 naturalizing and 18 non-naturalizing species in Iowa. The fitted random forest model had a high classification rate (92.0%), no biologically significant errors (accepting a plant that has a high risk of naturalizing), and few horticulturally limiting errors (rejecting a plant that has a low risk of naturalizing) (8.7%). Classification rates for validation of all five models ranged from 62.1 to 93.1%. Horticulturally limiting errors for the four models previously developed for Iowa ranged from 11.1 to 38.5%, and biologically significant errors from 4.2 to 18.5%. Because of the small sample size, few classification and error rate results were significantly different from those in the original tests of the models. Overall, the random forest model shows promise for powerful and accurate risk assessment, but mixed results for the other models suggest a need for further refinement.
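As an illustrative aside (synthetic data and stand-in predictors, not the study's species traits), a random forest classifier and its cross-validated classification rate can be obtained along these lines:

```python
# Illustrative sketch only: fit a random forest to predict naturalization
# risk and report the overall classification rate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 60                                # hypothetical number of woody plant species
X = rng.normal(size=(n, 5))           # stand-in trait/climate predictors
y = rng.integers(0, 2, size=n)        # 1 = naturalizing, 0 = not

rf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, cv=5)   # cross-validated accuracy
print(f"classification rate: {scores.mean():.1%}")
```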


2021 ◽  
Vol 8 ◽  
pp. 205435812110293
Author(s):  
Danielle E. Fox ◽  
Robert R. Quinn ◽  
Paul E. Ronksley ◽  
Tyrone G. Harrison ◽  
Hude Quan ◽  
...  

Background: Simultaneous kidney-pancreas transplantation (SPK) has benefits for patients with kidney failure and type I diabetes mellitus, but is associated with greater perioperative risk compared with kidney-alone transplantation. Postoperative care settings for SPK recipients vary across Canada and may have implications for patient outcomes and hospital resource use. Objective: To compare outcomes following SPK transplantation between patients receiving postoperative care in the intensive care unit (ICU) compared with the ward. Design: Retrospective cohort study using administrative health data. Setting: In Alberta, the 2 transplant centers (Calgary and Edmonton) have different protocols for routine postoperative care of SPK recipients. In Edmonton, SPK recipients are routinely transferred to the ICU, whereas in Calgary, SPK recipients are transferred to the ward. Patients: 129 adult SPK recipients (2002-2019). Measurements: Data from the Canadian Institute for Health Information Discharge Abstract Database (CIHI-DAD) were used to identify SPK recipients (procedure codes) and the outcomes of inpatient mortality, length of initial hospital stay (LOS), and the occurrence of 16 different patient safety indicators (PSIs). Methods: We followed SPK recipients from the admission date of their transplant hospitalization until the first of hospital discharge or death. Unadjusted quantile regression was used to determine differences in LOS, and age- and sex-adjusted marginal probabilities were used to determine differences in PSIs between centers. Results: There were no perioperative deaths and no major differences in the demographic characteristics between the centers. The majority of the SPK transplants were performed in Edmonton (n = 82, 64%). All SPK recipients in Edmonton were admitted to the ICU postoperatively, compared with only 11% in Calgary. There was no statistically significant difference in the LOS or probability of a PSI between the 2 centers (LOS for Edmonton vs Calgary: 16 vs 13 days, P = .12; PSIs for Edmonton vs Calgary: 60%, 95% confidence interval [CI] = 0.50-0.71 vs 44%, 95% CI = 0.29-0.59, P = .08). Limitations: This study was conducted using administrative data and is limited by variable availability. The small sample size limited the precision of estimated differences between types of postoperative care. Conclusions: Following SPK transplantation, we found no difference in inpatient outcomes for recipients who received routine postoperative ICU care compared with ward care. Further research using larger data sets and interventional study designs is needed to better understand the implications of postoperative care settings on patient outcomes and health care resource utilization.
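For readers unfamiliar with the two analyses named in Methods, the following sketch on synthetic data (variable names are hypothetical, not the CIHI-DAD fields) shows how an unadjusted median regression for LOS and an age- and sex-adjusted model for PSI occurrence might be set up with statsmodels:

```python
# Hedged illustration on synthetic data, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 129
df = pd.DataFrame({
    "center": rng.choice(["Edmonton", "Calgary"], size=n, p=[0.64, 0.36]),
    "age": rng.normal(45, 10, size=n),
    "sex": rng.choice(["F", "M"], size=n),
})
df["los"] = rng.gamma(shape=4, scale=4, size=n).round()   # days in hospital
df["psi"] = rng.integers(0, 2, size=n)                    # any PSI occurred

# Unadjusted median (quantile) regression of length of stay on center
median_fit = smf.quantreg("los ~ center", df).fit(q=0.5)
print(median_fit.params)

# Age- and sex-adjusted logistic model for PSI occurrence; marginal
# probabilities by center could then be derived from its predictions.
logit_fit = smf.logit("psi ~ center + age + sex", df).fit(disp=False)
print(logit_fit.params)
```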


Author(s):  
R. GUO

A fundamental but hard-to-address problem in repairable system modelling is how to estimate system repair improvement (or damage) effects, because standard statistical inference theory imposes large-sample requirements. At the same time, repairable system operating and maintenance data are often imprecise and vague, so Type I fuzzy sets defined by point-wise membership functions are often used for modelling repairable systems. However, it is more logical and natural to argue that Type II fuzzy sets defined by interval-valued membership functions, called interval-valued fuzzy sets (IVFS), should be used to characterize the underlying mechanism of a repairable system. In this paper, we explore a small-sample GM(1,1) modelling approach rooted in grey system theory to extract the system's intrinsic functioning times from seemingly lawless functioning-failure time records and thus to estimate the repair improvement (damage) effects. We further explore the role of interval-valued fuzzy set theory in analyzing the system's underlying mechanism. We develop a framework for GM(1,1)-IVFS mixed reliability analysis and illustrate the idea with an industrial example.
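The standard GM(1,1) grey model referenced here can be summarized in a few lines; the sketch below (with hypothetical input values) implements only that textbook building block, not the paper's GM(1,1)-IVFS framework.

```python
# Minimal sketch of the standard GM(1,1) grey model.
import numpy as np

def gm11(x0):
    """Fit GM(1,1) to a small positive sequence x0 and return fitted values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # develop/grey coefficients
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse accumulation
    return a, b, x0_hat

# Example: a short functioning-time record (hypothetical values)
a, b, fitted = gm11([2.87, 3.02, 3.22, 3.45, 3.70])
print(a, b, fitted)
```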


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5863 ◽  
Author(s):  
Annica Kristoffersson ◽  
Jiaying Du ◽  
Maria Ehn

Sensor-based fall risk assessment (SFRA) utilizes wearable sensors for monitoring individuals’ motions in fall risk assessment tasks. Previous SFRA reviews recommend methodological improvements to better support the use of SFRA in clinical practice. This systematic review aimed to investigate the existing evidence for SFRA (discriminative capability, classification performance) and the methodological factors (study design, samples, sensor features, and model validation) contributing to risk of bias. The review was conducted according to recommended guidelines, and 33 of 389 screened records were eligible for inclusion. Evidence for SFRA was identified: several sensor features and three classification models differed significantly between groups with different fall risk (mostly fallers/non-fallers). Moreover, classification performance corresponding to AUCs of at least 0.74 and/or accuracies of at least 84% was obtained from sensor features in six studies and from classification models in seven studies. Specificity was at least as high as sensitivity among studies reporting both values. Insufficient use of prospective designs, small sample sizes, low in-sample inclusion of participants with elevated fall risk, large numbers of features with little consensus on which to use, and limited use of recommended model validation methods were identified in the included studies. Hence, future SFRA research should further reduce risk of bias by continuously improving methodology.
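As a brief aside, the performance measures summarized above (AUC, accuracy, sensitivity, specificity) can be computed as in the following sketch on synthetic faller/non-faller labels and scores (not data from the reviewed studies):

```python
# Illustration of the reported classification metrics on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=200)                  # 1 = faller
scores = y_true * 0.4 + rng.normal(0.4, 0.25, 200)     # model risk scores
y_pred = (scores >= 0.6).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", round(roc_auc_score(y_true, scores), 2))
print("accuracy   :", round(accuracy_score(y_true, y_pred), 2))
print("sensitivity:", round(tp / (tp + fn), 2))
print("specificity:", round(tn / (tn + fp), 2))
```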


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Cong Wang ◽  
Fangyue Yu ◽  
Zaixu Zhang ◽  
Jian Zhang

In recent years, supply chain finance (SCF) has been exploited to ease the financing difficulties of small- and medium-sized enterprises (SMEs). SME credit risk assessment is a critical part of the SCF system: the diffusion of SME credit risk can have serious consequences, leaving the whole supply chain finance system unstable and insecure. Unlike traditional credit risk assessment, rating SME credit risk in SCF must consider the supply chain relationships, the credit condition of the SME, and the core enterprises together. Traditional methods mix all indicators from these different index systems and cannot quantify how each index system contributes. Furthermore, traditional credit risk assessment models depend heavily on the amount of annotated SME data, and it is implausible to accumulate enough credit-risky SMEs in advance. In this paper, we propose an adaptive heterogeneous multiview graph learning method to tackle the small-sample-size problem in SME credit risk forecasting. Three graphs are constructed using indicators from supply chain operations, SME financial indicators, and nonfinancial indicators, respectively. The graphs are integrated in an adaptive manner, providing a quantitative explanation of how the three parts cooperate. The experimental analysis shows that the proposed method performs well in determining whether an SME is risky or nonrisky in SCF. From the SCF perspective, SME financing ability remains the main factor determining SME credit risk.
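The proposed adaptive heterogeneous multiview model is not specified in the abstract; the sketch below is a deliberately simplified, hypothetical stand-in that conveys the multiview idea only: one similarity graph per index system, a weighted fusion of the graphs, and label propagation from a small set of labeled SMEs (all data and weights are invented, and the weights are fixed here rather than learned adaptively).

```python
# Simplified multiview sketch: per-view similarity graphs, weighted fusion,
# and label propagation with very few labeled SMEs.
import numpy as np

def similarity_graph(X, gamma=1.0):
    """Dense RBF similarity graph from an indicator matrix X (n_sme x d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-gamma * sq)
    np.fill_diagonal(A, 0.0)
    return A

def fuse_and_propagate(views, y, weights, alpha=0.9, iters=50):
    """Label propagation on the weighted combination of view graphs.

    y: +1 risky, -1 nonrisky, 0 unlabeled (the small-sample setting)."""
    A = sum(w * V for w, V in zip(weights, views))
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1) + 1e-12))
    S = D_inv_sqrt @ A @ D_inv_sqrt          # symmetric normalization
    f = y.astype(float)
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y  # propagate, anchored to labels
    return np.sign(f)

rng = np.random.default_rng(0)
n = 30
views = [similarity_graph(rng.normal(size=(n, 4))) for _ in range(3)]
weights = np.array([0.5, 0.3, 0.2])         # adaptive in the paper; fixed here
y = np.zeros(n); y[:3] = 1; y[3:6] = -1     # only a handful of labeled SMEs
print(fuse_and_propagate(views, y, weights))
```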


2020 ◽  
Vol 57 (2) ◽  
pp. 237-251
Author(s):  
Achilleas Anastasiou ◽  
Alex Karagrigoriou ◽  
Anastasios Katsileros

The normal distribution is considered to be one of the most important distributions, with numerous applications in various fields, including the agricultural sciences. The purpose of this study is to evaluate the most popular normality tests, comparing their performance in terms of size (type I error) and power against a large spectrum of alternative distributions, using simulations for various sample sizes and significance levels as well as empirical data from agricultural experiments. The simulation results show that the power of all normality tests is low for small sample sizes, but as the sample size increases, the power increases as well. The results also show that the Shapiro–Wilk test is powerful over a wide range of alternative distributions and sample sizes, especially for asymmetric distributions. Moreover, the D’Agostino–Pearson omnibus test is powerful for small sample sizes against symmetric alternative distributions, while the same is true of the kurtosis test for moderate and large sample sizes.
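A power study of this kind can be sketched directly with SciPy's implementations of the Shapiro–Wilk and D'Agostino–Pearson tests; the snippet below (with an arbitrary chi-square alternative and illustrative settings, not the study's full design) estimates empirical power by simulation.

```python
# Estimate the power of two normality tests against a skewed (chi-square)
# alternative for several sample sizes at a fixed significance level.
import numpy as np
from scipy import stats

def empirical_power(test, sampler, n, alpha=0.05, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = sampler(rng, n)
        rejections += test(x)[1] < alpha   # count p-values below alpha
    return rejections / reps

chi2_sampler = lambda rng, n: rng.chisquare(df=3, size=n)

for n in (20, 50, 100):
    sw = empirical_power(stats.shapiro, chi2_sampler, n)
    dp = empirical_power(stats.normaltest, chi2_sampler, n)
    print(f"n={n:3d}  Shapiro-Wilk power={sw:.2f}  D'Agostino-Pearson power={dp:.2f}")
```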


2012 ◽  
Author(s):  
Nor Haniza Sarmin ◽  
Md Hanafiah Md Zin ◽  
Rasidah Hussin

A transformation of the mean has been carried out using a bias-correction estimator to obtain a statistic for testing hypotheses about the mean of skewed distributions. The resulting statistic involves a modification of the variable. A simulation study of the probability of Type I error on skewed distributions (exponential, chi-square, and Weibull) shows that the t3 statistic is suitable for a left-tailed test and a small sample size (n = 5). Keywords: mean; statistic; skewed distribution; bias correction estimator; Type I error
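The paper's t3 statistic is not given in the abstract, so the sketch below instead shows how such a Type I error simulation is typically set up, using an ordinary left-tailed one-sample t-test on exponential samples with n = 5 (illustrative settings only, not the paper's statistic).

```python
# Monte Carlo estimate of the Type I error of a left-tailed one-sample
# t-test when the data are exponential with mean 1 and n is small.
import numpy as np
from scipy import stats

def type_one_error(n=5, alpha=0.05, reps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.exponential(scale=1.0, size=n)            # true mean = 1
        t, p = stats.ttest_1samp(x, popmean=1.0, alternative="less")
        rejections += p < alpha
    return rejections / reps

print(f"estimated Type I error (left-tailed, n=5): {type_one_error():.3f}")
```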

