uncertain distribution
Recently Published Documents


TOTAL DOCUMENTS

24
(FIVE YEARS 8)

H-INDEX

4
(FIVE YEARS 1)

2021 ◽  
pp. 1-16
Author(s):  
Ying Ji ◽  
Xiaowan Jin ◽  
Zeshui Xu ◽  
Shaojian Qu

In practical multiple attribute decision making (MADM) problems, interest groups or individuals may intentionally set attribute weights to serve their own benefit. The rankings of different alternatives are then changed strategically, which is called strategic weight manipulation in MADM. Sometimes the attribute values are given in imprecise forms; several theories and methods have been developed to deal with such uncertainty, including probability theory, interval values, intuitionistic fuzzy sets, and hesitant fuzzy sets. In this paper, we study strategic weight manipulation based on the belief degree of uncertainty theory, with uncertain attribute values obeying linear uncertain distributions. This allows the attribute values to be treated as a whole in the operation process. A series of mixed 0-1 programming models is constructed to set a strategic weight vector that yields a desired ranking of a particular alternative. Finally, an example based on the assessment of the performance of COVID-19 vaccines illustrates the validity of the proposed models. Comparative analysis shows that, compared to the deterministic case, it is easier to manipulate attribute weights when the attribute values obey linear uncertain distributions. A further comparison highlights how different aggregation operators perform in defending against strategic manipulation, and how different belief degrees affect the ranking range.
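The core idea of strategic weight manipulation can be sketched in a few lines. The following toy example (assumed numbers, not the paper's mixed 0-1 programming models) ranks alternatives by the expected value of linear uncertain attribute values, where a linear uncertain variable L(a, b) has expected value (a + b) / 2 under uncertainty theory, and then searches a coarse weight grid for a weight vector that pushes a target alternative to rank first:

```python
# Toy sketch of strategic weight manipulation (not the paper's models).
# Attribute values are linear uncertain variables L(lo, hi); under
# uncertainty theory their expected value is (lo + hi) / 2.

def expected(lo, hi):
    """Expected value of a linear uncertain variable L(lo, hi)."""
    return (lo + hi) / 2.0

# Rows: alternatives; columns: two attributes given as (lo, hi) intervals.
alternatives = {
    "A": [(0.5, 0.9), (0.2, 0.4)],
    "B": [(0.6, 0.8), (0.3, 0.5)],
    "C": [(0.1, 0.3), (0.7, 0.9)],
}

def score(values, weights):
    """Weighted-average aggregation of expected attribute values."""
    return sum(w * expected(lo, hi) for w, (lo, hi) in zip(weights, values))

def ranking(weights):
    """Alternatives sorted best-first under the given weight vector."""
    return sorted(alternatives, key=lambda k: -score(alternatives[k], weights))

def strategic_weights(target, step=0.05):
    """Grid search for weights (summing to 1) that rank `target` first."""
    n = int(round(1 / step))
    for i in range(n + 1):
        w = (i * step, 1 - i * step)
        if ranking(w)[0] == target:
            return w
    return None  # no manipulating weight vector exists on this grid

print(strategic_weights("C"))
```

The real models replace this brute-force grid with mixed 0-1 programming, which scales to many attributes and encodes the desired ranking position as constraints.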


2021 ◽  
Vol 11 (5) ◽  
pp. 2265
Author(s):  
Yufeng Lyu ◽  
Zhenyu Liu ◽  
Xiang Peng ◽  
Jianrong Tan ◽  
Chan Qiu

Aleatoric and epistemic uncertainties can be represented probabilistically in mechanical systems. However, the distribution parameters of epistemic uncertainties are themselves uncertain due to sparse or inaccurate uncertainty information. Therefore, a unified reliability measure method is proposed that considers the uncertainties of input variables and of their distribution parameters simultaneously. The uncertainty information for the distribution parameters of epistemic uncertainties may take the form of insufficient data or interval information, which is represented with evidence theory. The probability density function of the uncertain distribution parameters is constructed by fusing the insufficient data and interval information with a Gaussian interpolation algorithm, and the epistemic uncertainties are represented as a weighted sum of probability variables based on discrete distribution parameters. The reliability index accounting for both aleatoric and epistemic uncertainties is calculated around the most probable point. The effectiveness of the proposed algorithm is demonstrated through comparison with the Monte Carlo method in the engineering examples of a crank-slider mechanism and a composite laminated plate.
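The "weighted sum of probability variables based on discrete distribution parameters" can be illustrated with a minimal Monte Carlo sketch (assumed numbers, not the paper's examples): the limit state is g = R - S with failure when the load S exceeds the resistance R; S is aleatoric, but its mean is epistemically uncertain and is represented by a few weighted candidate values, as a discrete stand-in for the fused parameter distribution:

```python
import random

# Toy reliability sketch: failure when load S > resistance R.
# S ~ N(mu, sigma) is aleatoric, but mu is epistemically uncertain:
# evidence yields three candidate values with weights (a discrete
# approximation of the fused distribution-parameter density).

random.seed(0)

R = 10.0                                              # deterministic resistance
SIGMA = 1.5                                           # known std dev of the load
MU_CANDIDATES = [(7.0, 0.2), (8.0, 0.5), (9.0, 0.3)]  # (mu, weight), weights sum to 1

def failure_prob(mu, sigma, n=100_000):
    """Conditional failure probability P(S > R | mu) by Monte Carlo."""
    fails = sum(1 for _ in range(n) if random.gauss(mu, sigma) > R)
    return fails / n

# Total failure probability: weighted sum of conditional estimates.
pf = sum(w * failure_prob(mu, SIGMA) for mu, w in MU_CANDIDATES)
print(f"estimated failure probability: {pf:.4f}")
```

The proposed method avoids this brute-force sampling by evaluating the reliability index around the most probable point, which is why the Monte Carlo result serves only as the reference in the comparison.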


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shuai Li ◽  
Jie Yang ◽  
Zhipeng Qi ◽  
Juanli Zeng

Measuring the similarity of concepts is a basic task in artificial intelligence, e.g., in image retrieval, collaborative filtering, and public opinion guidance. As a powerful tool for expressing uncertain concepts, similarity measure based on the cloud model (SMCM) is often used to measure the similarity between two concepts. However, current studies on SMCM have two main limitations: (1) similarity measures based on conceptual intension lack interpretability when merging the numerical characteristics and cannot discriminate between some different concepts; (2) similarity measures based on conceptual extension tend to be unstable and inefficient. To address these problems, an uncertain distribution-based similarity measure of the cloud model (UDCM) is proposed in this paper. By analyzing the definition of the cloud model, we propose a new complete uncertainty, comprising first-order and second-order uncertainty, to calculate the uncertainty more accurately. Then, based on the difference between the complete uncertainties of two concepts, the computing process of UDCM and some of its properties are introduced. Finally, we exhibit its advantages by comparison with other methods and verify its validity through experiments.
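For context, an intension-based SMCM baseline of the kind the abstract criticises can be sketched as follows. A normal cloud concept is summarised by its numerical characteristics (Ex, En, He), and one simple measure (this is a generic vector baseline, not the UDCM proposed here; the concept triples are hypothetical) takes the cosine of the two characteristic vectors:

```python
import math

# Baseline intension-based SMCM sketch (not the proposed UDCM):
# a normal cloud concept is the triple (Ex, En, He); similarity is
# the cosine of the two characteristic vectors.

def cosine_smcm(c1, c2):
    """Cosine similarity of two (Ex, En, He) characteristic vectors."""
    dot = sum(a * b for a, b in zip(c1, c2))
    n1 = math.sqrt(sum(a * a for a in c1))
    n2 = math.sqrt(sum(b * b for b in c2))
    return dot / (n1 * n2)

young = (25.0, 3.0, 0.3)    # hypothetical "young age" concept
middle = (45.0, 5.0, 0.5)   # hypothetical "middle age" concept

print(f"similarity: {cosine_smcm(young, middle):.4f}")
```

This baseline also exhibits the discrimination failure the abstract points out: any concept whose characteristic vector is a scalar multiple of another's, e.g. (25, 3, 0.3) versus (50, 6, 0.6), gets similarity exactly 1 despite being a different concept, which is the kind of case UDCM is designed to handle.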


Author(s):  
E.N. Maksimova ◽  
E.G. Viktorov ◽  
E.O. Belyakov ◽  
B.V. Belozerov

The geology of oilfields is becoming more complex, which leads to uncertain distributions of petrophysical properties. The quality of reservoir property prediction depends on petrophysical models and log interpretation algorithms. It is also connected with the level of expertise of each petrophysicist, as well as knowledge sharing between experts and young specialists. The aim of this paper is to present the Gazprom Neft Science and Technical Centre's approach to developing petrophysical competences through communities of practice.


Life insurance distribution in India, like elsewhere, is in a transitory phase. Technological advancements and an untapped market have opened up new vistas in the field. Customer offerings through multiple touch-points seem to be the mantra. While a multi-channel strategy does offer outreach, a number of issues, including channel conflict, channel cannibalization and channel misalignment, act as dampeners to such a strategy. The other question is that of finding an optimum distribution mix. In this paper, we attempt to delineate these issues, highlighting how the same problem of distribution manifests differently, rather contrastingly, in two similar bank-led companies (one a large bank's group company, the other bank-promoted), HDFC Life and SBI Life, while concurrently suggesting solutions. Distribution woes have resulted in an uncertain distribution landscape with no clear pattern in sight. Industry numbers show a shrinking agency channel, a robust bancassurance channel and a rising online channel, but the pattern is lopsided in HDFC Life and SBI Life. How, then, to balance the distribution mix? Does that mean doing away with the not-so-hot agency channel? Or does preserving agency still make sense? Or is fortifying bancassurance further the need of the hour? Or can the currently hot direct/online channel be pursued vehemently? The implications of such distribution patterns are immense for an industry trying to find a way out of the muddle.


Life insurance distribution in India is at an inflection point. While life insurance as a phenomenon continues to be an enigma at best, it is the distribution aspect that seems to have taken the cake. Companies are grappling with the problem of designing an appropriate distribution channel mix. Industry-wide (private companies sans LIC), a marked shift can be seen in favour of bancassurance over agency as the channel of choice. Yet deeper analysis of the last five years reveals an intriguing phenomenon. Two leading private bank-led life insurers, HDFC Life and ICICI Prudential, both have the support of big banks (HDFC Bank and ICICI Bank); one is a non-bank-promoted entity (HDFC Life), the other bank-promoted (ICICI Prudential). Both sell more through bancassurance, in line with industry trends, yet in agency numbers they are either dismal, merely holding steady, or at times ahead of the industry, even when the common refrain is to go the bancassurance way. In fact, ICICI Prudential overshoots the industry bancassurance trend and HDFC Life more or less equals it. The last five years have unfolded marked variations in distribution for both these companies, with different manifestations. Equally confounding is the fact that the direct channel (read: online), showing tremendous potential and growth, is poised to make further disruptions. The resultant is an uncertain distribution landscape with no clear pattern in sight. Should these companies dispense with the agency channel? Or should they preserve agency? Or should they look to the direct/online channel? How do they strike a balance when further disruptions are bound to happen? This paper attempts to delineate the distribution patterns of these similar bank-led companies and the implications of such patterns.


2018 ◽  
Author(s):  
Robyn J. Wright ◽  
Matthew I. Gibson ◽  
Joseph A. Christie-Oleza

Abstract
Recalcitrant polymers are widely distributed in the environment. These include natural polymers, such as chitin, but synthetic polymers are becoming increasingly abundant, and their biodegradation is uncertain. Distribution of labour in microbial communities commonly evolves in nature, particularly for arduous processes, suggesting that a community may be better at degrading recalcitrant compounds than individual microorganisms. Artificial selection of microbial communities with better degradation potential has seduced scientists for over a decade, but the method has not been systematically optimised nor applied to polymer degradation. Using chitin as a case study, we successfully selected for microbial communities with enhanced chitinase activities, but found that continuous optimisation of incubation times between selective generations was of utmost importance. Analysis of the community composition over the entire selection process revealed fundamental aspects of microbial ecology: when incubation times between generations were optimal, the system was dominated by Gammaproteobacteria, the main bearers of chitinase enzymes and drivers of chitin degradation, before being succeeded by cheating, cross-feeding and grazing organisms.

Importance
Artificial selection is a powerful and attractive technique that can enhance the biodegradation of recalcitrant polymers and other pollutants by microbial communities. We show, for the first time, that successful artificial selection of microbial communities requires optimisation of the incubation times between generations when implementing this method. Hence, communities need to be transferred at the peak of the desired activity in order to avoid community drift and the replacement of the efficient biodegrading community by cheaters, cross-feeders and grazers.


Author(s):  
Xingxing Zhang ◽  
Zhenfeng Zhu ◽  
Yao Zhao ◽  
Deqiang Kong

Prototype selection is a promising technique for removing redundancy and irrelevance from large-scale data. Here, we consider it as a task assignment problem: assigning each element of a source set to one representative, i.e., a prototype. However, due to outliers and the uncertain distribution of the source set, the selected prototypes are generally less representative and interesting. To alleviate this issue, we develop a Self-supervised Deep Low-rank Assignment model (SDLA). By dynamically integrating a low-rank assignment model with deep representation learning, our model effectively ensures both the goodness-of-exemplar and the goodness-of-discrimination of the selected prototypes. Specifically, on the basis of a denoising autoencoder, dissimilarity metrics on the source set are continuously self-refined in the embedding space with weak supervision from the selected prototypes, thus preserving categorical similarity. Conversely, working in this metric space, similar samples tend to select the same prototypes through the low-rank assignment model. Experimental results on applications such as text clustering and image classification (using prototypes) demonstrate that our method is considerably superior to state-of-the-art methods in prototype selection.
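The assignment view of prototype selection can be made concrete with a minimal baseline (a greedy k-medoids-style pass on toy data, not the SDLA model): each source element is assigned to its nearest selected prototype, and prototypes are chosen greedily to minimise the total assignment cost.

```python
# Minimal assignment-style prototype selection baseline (not SDLA):
# greedily pick prototypes that minimise the total cost of assigning
# every element to its nearest prototype.

def dist(a, b):
    """Squared Euclidean dissimilarity between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assignment_cost(data, prototypes):
    """Total cost of assigning each element to its nearest prototype."""
    return sum(min(dist(x, data[p]) for p in prototypes) for x in data)

def greedy_prototypes(data, k):
    """Greedily add the index that most reduces the assignment cost."""
    chosen = []
    while len(chosen) < k:
        best = min(
            (i for i in range(len(data)) if i not in chosen),
            key=lambda i: assignment_cost(data, chosen + [i]),
        )
        chosen.append(best)
    return chosen

# Two well-separated toy clusters; the greedy pass picks one
# representative from each.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(greedy_prototypes(data, k=2))
```

SDLA differs from this baseline in exactly the two respects the abstract names: the dissimilarity metric is learned and self-refined by a denoising autoencoder rather than fixed, and the assignment matrix is regularised to be low-rank so that similar samples share prototypes.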

