Entropy Measures
Recently Published Documents

Total documents: 538 (five years: 184)
H-index: 38 (five years: 7)

2022, Vol. 12 (1), pp. 496
Author(s): João Sequeira, Jorge Louçã, António M. Mendes, Pedro G. Lind

We analyze empirical series of malaria incidence using the concepts of autocorrelation, the Hurst exponent, and Shannon entropy, with the aim of uncovering hidden variables in those series. From simulations of an agent model of malaria spreading, we first derive models of the malaria incidence, the Hurst exponent, and the entropy as functions of gametocytemia, which measures the infectious power of a mosquito to a human host. Second, upon estimating the values of the three observables (incidence, Hurst exponent, and entropy) from the data sets of different empirical malaria series, we predict a value of the gametocytemia for each observable. Finally, we show that the independent predictions are considerably consistent, with only a few exceptions, which are discussed in further detail.
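A minimal sketch (not the authors' agent model, which is not reproduced here) of two of the observables the abstract relies on: Shannon entropy estimated from a histogram of the series, and the Hurst exponent estimated by a basic rescaled-range (R/S) regression. The function names, binning, and the synthetic incidence series are all illustrative assumptions.

```python
import numpy as np

def shannon_entropy(series, bins=16):
    """Shannon entropy (in bits) of the histogram of a series."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def hurst_rs(series, min_chunk=8):
    """Hurst exponent via a basic rescaled-range (R/S) regression."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes = [s for s in (2 ** k for k in range(3, 20)) if min_chunk <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range R
            sd = chunk.std()                        # standard deviation S
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    # The slope of log(R/S) against log(window size) estimates H
    return np.polyfit(log_s, log_rs, 1)[0]

rng = np.random.default_rng(0)
incidence = rng.poisson(lam=20, size=1024).astype(float)  # synthetic stand-in
print(shannon_entropy(incidence), hurst_rs(incidence))    # H near 0.5 for noise
```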


2022, Vol. 24 (1), pp. 105-118
Author(s): Mervat Mahdy, Dina S. Eltelbany, Hoda Mohammed, ...

Entropy measures the amount of uncertainty and dispersion of an unknown or random quantity. The concept, first introduced by Shannon (1948), is important in many areas: in information theory, entropy measures the amount of information in each received message; in physics, it is the basic concept quantifying the disorder of a thermodynamical system. In this paper, we introduce an alternative measure of entropy, called the 𝐻𝑁-entropy. Unlike Shannon entropy, this proposed measure is of order α and β, which makes it more flexible. We then introduce the cumulative residual 𝐻𝑁-entropy, the cumulative 𝐻𝑁-entropy, and a weighted version. Finally, we present a comparison between Shannon entropy and 𝐻𝑁-entropy, together with numerical results.
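The 𝐻𝑁-entropy of order α and β is specific to this paper, and its formula is not reproduced here. As a grounded reference point, the sketch below computes only the standard cumulative residual entropy, CRE(X) = -∫ F̄(x) ln F̄(x) dx, from the empirical survival function of a sample; the paper's cumulative variants are built on the same survival-function idea.

```python
import numpy as np

def cumulative_residual_entropy(sample):
    """Empirical CRE of a nonnegative sample: -integral of S(x) * ln S(x)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    surv = 1.0 - np.arange(1, n) / n   # empirical survival on [x[i], x[i+1])
    widths = np.diff(x)                # lengths of those intervals
    mask = surv > 0                    # 0 * ln(0) treated as 0
    return -np.sum(widths[mask] * surv[mask] * np.log(surv[mask]))

# For an Exp(1) sample the exact CRE is 1, so the estimate should be close.
rng = np.random.default_rng(1)
print(cumulative_residual_entropy(rng.exponential(scale=1.0, size=10_000)))
```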


2022, Vol. 11 (1), pp. 0-0

Motivated by the structural aspect of probabilistic entropy, the concept of fuzzy entropy has enabled researchers to investigate the uncertainty due to vague information. Fuzzy entropy measures the ambiguity or vagueness entailed in a fuzzy set. Hesitant fuzzy entropy and entropy based on hesitant fuzzy linguistic term sets provide a more comprehensive evaluation of vague information. In the vague situations of multiple-criteria decision-making, an entropy measure is utilized to compute the objective weights of the attributes. However, the weights obtained from entropy measures are not reasonable in all situations. To model such situations, a knowledge measure, which is a structural dual to entropy, is very significant. A fuzzy knowledge measure determines the level of precision in a fuzzy set. This article introduces the concept of a knowledge measure for hesitant fuzzy linguistic term sets (HFLTS) and shows how it may be derived from HFLTS distance measures. The authors also investigate its application in determining the weights of criteria in multi-criteria decision-making (MCDM).
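For context, the sketch below shows the standard Shannon entropy-weighting step that the abstract says can produce unreasonable weights; it operates on a crisp decision matrix and is not the article's HFLTS knowledge measure. The example matrix is a made-up illustration.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon entropy (benefit criteria)."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                       # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy of each criterion
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()                          # normalized weights

# Three alternatives scored on three criteria; the near-constant first
# criterion carries little information, so it should get a small weight.
X = np.array([[7.0, 1.0, 9.0],
              [7.1, 5.0, 2.0],
              [6.9, 9.0, 4.0]])
print(entropy_weights(X))
```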


Entropy, 2021, Vol. 24 (1), pp. 39
Author(s): Arthur Américo, MHR Khouzani, Pasquale Malacaria

This work introduces channel-supermodular entropies, a subset of quasi-concave entropies. Channel-supermodularity is a property shared by some of the most commonly used entropies in the literature, including Arimoto–Rényi conditional entropies (which include Shannon and min-entropy as special cases), k-tries entropies, and guessing entropy. Based on channel-supermodularity, new preorders for channels that strictly include degradedness and inclusion (or Shannon ordering) are defined, and these preorders are shown to provide a sufficient condition for the more-capable and capacity orderings, not only for Shannon entropy but also for the analogous concepts under other entropy measures. The theory developed is then applied in the context of query anonymization. We introduce a greedy algorithm based on channel-supermodularity for query anonymization and prove its optimality, in terms of information leakage, for all symmetric channel-supermodular entropies.
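As a reference point only (none of the paper's channel preorders or its anonymization algorithm), the sketch below evaluates three of the entropy measures named above on a single probability distribution.

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))     # bits

def min_entropy(p):
    return -np.log2(np.max(p))         # -log of the best single guess

def guessing_entropy(p):
    q = np.sort(p)[::-1]               # guess values in decreasing order
    return np.sum(np.arange(1, len(q) + 1) * q)   # expected number of guesses

p = np.array([0.5, 0.25, 0.125, 0.125])
print(shannon_entropy(p), min_entropy(p), guessing_entropy(p))
# 1.75 bits, 1.0 bit, 1.875 guesses
```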


Entropy, 2021, Vol. 23 (12), pp. 1700
Author(s): Shanying Lin, Heming Jia, Laith Abualigah, Maryam Altalhi

Image segmentation is a fundamental and essential step in image processing because it dramatically influences subsequent image analysis. Multilevel thresholding is one of the most popular image segmentation techniques, and many researchers have used meta-heuristic optimization algorithms (MAs) to determine the threshold values. However, MAs have some defects; for example, they are prone to stagnation in local optima and to slow convergence. This paper proposes an enhanced slime mould algorithm (ESMA) for global optimization and multilevel thresholding image segmentation. First, the Lévy flight method is used to improve the exploration ability of SMA. Second, quasi-opposition-based learning is introduced to enhance the exploitation ability and to balance exploration and exploitation. The superiority of the proposed ESMA is then confirmed on 23 benchmark functions. Afterward, ESMA is applied to multilevel thresholding image segmentation using minimum cross-entropy as the fitness function. We select eight greyscale images as benchmark images for testing and compare ESMA with other classical and state-of-the-art algorithms. The experimental metrics include the average fitness (mean), standard deviation (Std), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), and the Wilcoxon rank-sum test, which are used to evaluate the quality of segmentation. The experimental results demonstrate that ESMA is superior to the other algorithms and can provide higher segmentation accuracy.
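A sketch of the fitness function alone, assuming the usual form of the minimum cross-entropy criterion (Li and Lee): here it is brute-forced for a single threshold on a 256-bin histogram, the search that a meta-heuristic such as ESMA replaces once several thresholds make exhaustive enumeration too expensive. The histogram shift and the synthetic bimodal image are our own choices.

```python
import numpy as np

def min_cross_entropy_threshold(hist):
    """Threshold minimizing the cross-entropy criterion on a histogram."""
    h = np.asarray(hist, dtype=float)
    levels = np.arange(1, len(h) + 1)   # shift levels by 1 so log() is finite
    best_t, best_d = None, np.inf
    for t in range(1, len(h)):
        lo_h, hi_h = h[:t], h[t:]
        lo_l, hi_l = levels[:t], levels[t:]
        if lo_h.sum() == 0 or hi_h.sum() == 0:
            continue                    # skip thresholds with an empty class
        mu1 = (lo_l * lo_h).sum() / lo_h.sum()   # mean of below-threshold class
        mu2 = (hi_l * hi_h).sum() / hi_h.sum()   # mean of above-threshold class
        d = ((lo_h * lo_l * np.log(lo_l / mu1)).sum()
             + (hi_h * hi_l * np.log(hi_l / mu2)).sum())
        if d < best_d:
            best_t, best_d = t, d
    return best_t

# Bimodal synthetic "image": the threshold should fall between the two modes.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)])
hist, _ = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 256))
print(min_cross_entropy_threshold(hist))
```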


Entropy, 2021, Vol. 23 (12), pp. 1672
Author(s): Sebastian Raubitzek, Thomas Neubauer

Measures of signal complexity, such as the Hurst exponent, the fractal dimension, and the spectrum of Lyapunov exponents, are used in time series analysis to estimate the persistency, anti-persistency, fluctuations, and predictability of the data under study. They have proven beneficial for time series prediction with machine and deep learning, indicating which features may be relevant for predicting time series and establishing complexity features. Further, the performance of machine learning approaches can be improved by taking into account the complexity of the data under study, e.g., by adapting the employed algorithm to the inherent long-term memory of the data. In this article, we provide a review of complexity and entropy measures in combination with machine learning approaches. We give a comprehensive review of the relevant publications that suggest using fractal or complexity-measure concepts to improve existing machine or deep learning approaches. Additionally, we evaluate applications of these concepts and examine whether they can be helpful in predicting and analyzing time series using machine and deep learning. Finally, we give a list of a total of six ways of combining machine learning and measures of signal complexity, as found in the literature.
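As one concrete example of the kind of measure surveyed, the sketch below implements sample entropy as a plain function, so it can be dropped in as a complexity feature next to the Hurst exponent or a fractal dimension in a learning pipeline. The defaults m = 2 and r = 0.2 standard deviations are conventional choices, not the article's.

```python
import numpy as np

def sample_entropy(series, m=2, r_factor=0.2):
    """SampEn(m, r): regularity of a series; larger means less predictable."""
    x = np.asarray(series, dtype=float)
    r = r_factor * x.std()

    def count_matches(k):
        # Templates of length k under the Chebyshev distance; cap both
        # template sets at len(x) - m entries so the A/B ratio is consistent.
        templates = np.lib.stride_tricks.sliding_window_view(x, k)[: len(x) - m]
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.count_nonzero(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(3)
noise = rng.standard_normal(1000)
trend = np.sin(np.linspace(0, 20 * np.pi, 1000))
print(sample_entropy(trend), sample_entropy(noise))  # regular < irregular
```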


Author(s): Shazia Manzoor, Muhammad Kamran Siddiqui, Sarfraz Ahmad

Entropy, 2021, Vol. 23 (12), pp. 1618
Author(s): Rubem P. Mondaini, Simão C. de Albuquerque Neto

The Khinchin–Shannon generalized inequalities for entropy measures in information theory are a paradigm that can be used to test the synergy of the distributions of probabilities of occurrence in physical systems. The rich algebraic structure associated with the introduction of escort probabilities seems to be essential for deriving these inequalities for the two-parameter Sharma–Mittal set of entropy measures. We also emphasize the derivation of these inequalities for the special cases of the one-parameter Havrda–Charvát, Rényi, and Landsberg–Vedral entropy measures.
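For reference, a common normalization of the two-parameter Sharma–Mittal family, together with the limits that recover the one-parameter cases mentioned above; parameter and sign conventions vary between papers, so this is a sketch rather than the authors' exact notation.

```latex
\[
  H_{\alpha,\beta}(p) \;=\; \frac{1}{1-\beta}
  \left[\Bigl(\sum_i p_i^{\alpha}\Bigr)^{\frac{1-\beta}{1-\alpha}} - 1\right],
  \qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1.
\]
% Limits recovering the one-parameter cases (with these conventions):
\[
  \beta \to \alpha:\ \text{Havrda--Charv\'at}\ \frac{\sum_i p_i^{\alpha} - 1}{1-\alpha},
  \qquad
  \beta \to 1:\ \text{R\'enyi}\ \frac{\ln \sum_i p_i^{\alpha}}{1-\alpha},
\]
\[
  \beta \to 2-\alpha:\ \text{Landsberg--Vedral}\ \frac{\bigl(\sum_i p_i^{\alpha}\bigr)^{-1} - 1}{\alpha - 1},
  \qquad
  \alpha,\beta \to 1:\ \text{Shannon}\ -\sum_i p_i \ln p_i.
\]
```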

