Interval-Valued Probability in the Analysis of Problems Containing a Mixture of Fuzzy, Possibilistic and Interval Uncertainty

Author(s):  
Weldon A. Lodwick ◽  
K. David Jamison

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Warattaya Chinnakum ◽  
Laura Berrout Ramos ◽  
Olugbenga Iyiola ◽  
Vladik Kreinovich

Purpose — In real life, we only know the consequences of each possible action with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. A usual way to compare such interval-valued alternatives is the optimism–pessimism criterion developed by Nobel laureate Leo Hurwicz, in which a weighted combination of the worst-case and best-case gains is maximized. Several justifications exist for this criterion; however, some of the assumptions behind them are not fully convincing. The purpose of this paper is to find a more convincing explanation. Design/methodology/approach — The authors used a utility-based approach to decision-making. Findings — The authors proposed new, hopefully more convincing, justifications for Hurwicz’s approach. Originality/value — This is a new, more intuitive explanation of Hurwicz’s approach to decision-making under interval uncertainty.
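The criterion the abstract refers to has a compact form: for an interval gain [l, u] and an optimism parameter α ∈ [0, 1], the Hurwicz value is α·u + (1 − α)·l, and the alternative maximizing this value is chosen. A minimal Python sketch (function names are illustrative, not from the paper):

```python
def hurwicz(lo, hi, alpha):
    """Hurwicz optimism-pessimism value of an interval gain [lo, hi].
    alpha = 1 is pure optimism (best case), alpha = 0 pure pessimism."""
    return alpha * hi + (1 - alpha) * lo

def best_alternative(intervals, alpha):
    # Pick the index of the interval-valued gain with the largest Hurwicz value.
    return max(range(len(intervals)),
               key=lambda i: hurwicz(*intervals[i], alpha))
```

For example, with gains [0, 10] and [4, 5], a neutral decision maker (α = 0.5) prefers the first (value 5 vs. 4.5), while a pessimist (α = 0.1) prefers the second (value 4.1 vs. 1.0).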


Author(s):  
Tiago da Cruz Asmus ◽  
Graçaliz Pereira Dimuro ◽  
Benjamín Bedregal ◽  
José Antonio Sanz ◽  
Radko Mesiar ◽  
...  

Author(s):  
Bapin Mondal ◽  
Md Sadikur Rahman

Interval interpolation formulae play a significant role in estimating the value of an unknown function at intermediate points under interval uncertainty. The objective of this paper is to establish Newton’s divided difference interpolation formula for interval-valued functions using the generalized Hukuhara (gH) difference of intervals. For this purpose, interval arithmetic, the gH-difference with some of its properties, and the concept of an interval-valued function are discussed briefly. Using the gH-difference of intervals, the definition of Newton’s divided gH-difference for interval-valued functions is introduced, and Newton’s divided gH-difference interpolation formula is then derived. Finally, the proposed interpolation formula is illustrated with the help of some numerical examples.
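With the gH-difference defined as A ⊖_gH B = [min(a⁻ − b⁻, a⁺ − b⁺), max(a⁻ − b⁻, a⁺ − b⁺)], the divided gH-differences can be tabulated exactly as in the classical Newton scheme, and the interpolant evaluated in Newton form. A rough Python sketch under these standard definitions (illustrative only, not the paper’s own formulation or code):

```python
def gh_diff(A, B):
    # Generalized Hukuhara difference of intervals A = (a-, a+), B = (b-, b+).
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def scale(c, A):
    # Multiply an interval by a real scalar c.
    lo, hi = c * A[0], c * A[1]
    return (min(lo, hi), max(lo, hi))

def add(A, B):
    # Minkowski sum of two intervals.
    return (A[0] + B[0], A[1] + B[1])

def newton_gh(xs, fs, x):
    """Evaluate the Newton divided gH-difference interpolant at scalar x.
    xs: scalar nodes; fs: interval values (lo, hi) at those nodes."""
    n = len(xs)
    table = [list(fs)]  # table[j] holds the j-th column of divided gH-differences
    for j in range(1, n):
        prev = table[-1]
        col = [scale(1.0 / (xs[i + j] - xs[i]), gh_diff(prev[i + 1], prev[i]))
               for i in range(n - j)]
        table.append(col)
    # Newton form: f[x0] + f[x0,x1](x-x0) + f[x0,x1,x2](x-x0)(x-x1) + ...
    result = table[0][0]
    basis = 1.0
    for j in range(1, n):
        basis *= (x - xs[j - 1])
        result = add(result, scale(basis, table[j][0]))
    return result
```

A sanity check of the design: on degenerate intervals (lo = hi) the scheme collapses to classical Newton divided-difference interpolation, e.g. nodes 0, 1, 2 with values 0, 1, 4 (f(x) = x²) give 9 at x = 3.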


Author(s):  
Lev V. Utkin ◽  
Yulia A. Zhuk

A robust modification of the K-means method for solving a clustering problem under interval-valued training data is proposed. 
The existing methods of clustering are mainly based on replacing interval-valued data with point-valued representations, for example, the centers of the intervals, or they use special distance metrics between hyper-rectangles (multi-dimensional intervals) or between points and hyper-rectangles, for example, the Hausdorff distance. In contrast to the existing methods, the first idea underlying the proposed algorithm is to transfer the interval uncertainty to sets of example weights and to extend the training set. In this construction, the new elements of the training set, which are points of the original intervals, carry imprecise weights assigned so that they neither change the initial structure of the training data nor introduce additional unjustified information. The second idea is to use the minimax strategy to provide robustness. It is shown in the paper that the new algorithm differs from the standard K-means algorithm only by a step of solving a simple linear programming problem. It is also shown that in the simplest case, when all elements of the training set have identical weights, the proposed algorithm reduces to choosing the points of the hyper-rectangles located at the largest distance from the cluster center. The obtained results can also be considered in the framework of Dempster-Shafer theory. The proposed algorithm is useful when the data intervals are rather large or when the training set is small.
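The simplest equal-weights case described above — each hyper-rectangle contributes the point farthest from the current cluster center — can be sketched in a few lines. This is an illustrative toy under that simplification, not the paper’s full algorithm, which replaces this step with a linear program over imprecise weights:

```python
import math

def farthest_corner(box, center):
    """Corner of the hyper-rectangle box = [(lo, hi), ...] at the largest
    distance from center (the minimax, i.e. robust, choice per coordinate)."""
    return [lo if abs(lo - c) >= abs(hi - c) else hi
            for (lo, hi), c in zip(box, center)]

def robust_interval_kmeans(boxes, k, n_iter=20):
    # Initialize centers at the midpoints of the first k boxes (toy choice).
    centers = [[(lo + hi) / 2 for lo, hi in boxes[i]] for i in range(k)]
    labels = [0] * len(boxes)
    for _ in range(n_iter):
        # Assign each box to the nearest center using its midpoint.
        mids = [[(lo + hi) / 2 for lo, hi in b] for b in boxes]
        labels = [min(range(k), key=lambda j: math.dist(m, centers[j]))
                  for m in mids]
        # Update each center as the mean of the worst-case (farthest) corners.
        for j in range(k):
            members = [boxes[i] for i in range(len(boxes)) if labels[i] == j]
            if not members:
                continue
            pts = [farthest_corner(b, centers[j]) for b in members]
            centers[j] = [sum(col) / len(col) for col in zip(*pts)]
    return centers, labels
```

On two well-separated groups of small boxes this recovers the expected two-cluster assignment; the minimax corner choice only matters when the boxes are wide relative to the cluster separation.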


2017 ◽  
Vol 26 (04) ◽  
pp. 1750014 ◽  
Author(s):  
Lev V. Utkin ◽  
Yulia A. Zhuk

A new robust SVM-based algorithm for binary classification is proposed. It is based on the so-called uncertainty trick: training data with interval uncertainty are transformed into training data with weight or probabilistic uncertainty. Every interval is replaced by a set of training points with the same class label, such that every point inside the interval has an unknown weight from a predefined set of weights. The SVM then applies a robust strategy dealing with the upper bound of the interval-valued expected risk produced by this set of weights. An extension of the algorithm based on the imprecise Dirichlet model is proposed for additional robustification. Numerical examples with synthetic and real interval-valued training data illustrate the proposed algorithm and its extension.
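The worst-case flavor of this approach can be illustrated with a toy minimax linear classifier for one-dimensional interval features: at each subgradient step, each interval contributes the endpoint with the largest hinge loss. This is only a hedged sketch of the general idea, not the paper’s weighted SVM formulation or its imprecise-Dirichlet extension:

```python
def robust_interval_svm(intervals, ys, lam=0.01, lr=0.1, epochs=200):
    """Minimax linear SVM sketch for 1-D interval features (lo, hi) with
    labels y in {-1, +1}: the worst-case point of each interval drives a
    subgradient step on the regularized hinge loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = lam * w  # gradient of the L2 regularizer
        gb = 0.0
        for (lo, hi), y in zip(intervals, ys):
            # Worst-case point: the endpoint minimizing the margin y*(w*x + b).
            x = lo if y * (w * lo + b) <= y * (w * hi + b) else hi
            if y * (w * x + b) < 1.0:  # hinge loss is active at the worst case
                gw -= y * x / len(ys)
                gb -= y / len(ys)
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On linearly separable interval data (e.g. negative-class intervals around −2 and positive-class intervals around +2), the learned sign(w·x + b) separates the two classes while keeping a margin from the interval endpoints nearest the boundary.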

