A Fractional Entropy in Fractal Phase Space: Properties and Characterization

2014, Vol 2014, pp. 1-16
Author(s): Chandrashekar Radhakrishnan, Ravikumar Chinnarasu, Segar Jambulingam

A two-parameter generalization of the Boltzmann-Gibbs-Shannon entropy based on the natural logarithm is introduced. The generalization of the Shannon-Khinchin axioms corresponding to the two-parameter entropy is proposed and verified. We present the relative entropy and the Jensen-Shannon divergence measure and examine their properties. The Fisher information measure, the relative Fisher information, and the Jensen-Fisher information corresponding to this entropy are also derived, and the Lesche stability and thermodynamic stability conditions are verified. We propose a generalization of a complexity measure and apply it to a two-level system and a system obeying an exponential distribution. Using different distance measures, we define the statistical complexity and analyze it for two-level and five-level systems.
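For context, the classical Jensen-Shannon divergence that the two-parameter entropy generalizes, and the product-form statistical complexity commonly used in this literature, can be stated as follows; the generalized two-parameter expressions themselves are defined in the paper and are not reproduced here.

```latex
% Classical Jensen-Shannon divergence (natural logarithm), the baseline that
% the paper's two-parameter entropy generalizes; M is the mixture distribution.
\[
  \mathrm{JSD}(P \,\|\, Q)
    = \tfrac{1}{2} D_{\mathrm{KL}}(P \,\|\, M) + \tfrac{1}{2} D_{\mathrm{KL}}(Q \,\|\, M),
  \qquad M = \tfrac{1}{2}(P + Q),
  \qquad D_{\mathrm{KL}}(P \,\|\, Q) = \sum_i p_i \ln \frac{p_i}{q_i}.
\]
% A statistical complexity of the product form pairs a normalized entropy H
% with a distance D to a reference (e.g., uniform) distribution:
\[
  C[P] = H[P] \cdot D\!\left(P, P_{\mathrm{ref}}\right).
\]
```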

Mathematics, 2020, Vol 8 (1), pp. 142
Author(s): Qianli Zhou, Hongming Mo, Yong Deng

As an extension of fuzzy set (FS) theory, intuitionistic fuzzy sets (IFSs) play an important role in handling uncertainty in uncertain environments. The Pythagorean fuzzy sets (PFSs) proposed by Yager in 2013 can deal with more uncertain situations than intuitionistic fuzzy sets because of their larger range for describing membership grades. How to measure the distance between Pythagorean fuzzy sets is still an open issue. The Jensen-Shannon divergence is a useful distance measure in the space of probability distributions. In order to deal efficiently with uncertainty in practical applications, this paper proposes a new divergence measure for Pythagorean fuzzy sets, based on the belief function of Dempster-Shafer evidence theory and called the PFSDM distance. It describes Pythagorean fuzzy sets in the form of basic probability assignments (BPAs) and computes the divergence of the BPAs to obtain the divergence of the PFSs, which establishes a link between PFSs and BPAs. Since the proposed method combines the characteristics of belief functions and divergence, it has more powerful resolution than other existing methods. Additionally, an improved algorithm using the PFSDM distance is proposed for medical diagnosis, which avoids producing counter-intuitive results, especially when conflicting data exist. The proposed method and the improved algorithm are both demonstrated to be rational and practical in applications.
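As a rough illustration of the pipeline described above (Pythagorean fuzzy element, then BPA, then divergence of BPAs), the sketch below maps a membership/non-membership pair to a three-element BPA and compares two such BPAs with the ordinary Jensen-Shannon divergence. The mapping and the use of plain JSD are illustrative assumptions; the paper's PFSDM distance is defined by its own formula.

```python
import numpy as np

def pfs_to_bpa(mu, nu):
    """Map a Pythagorean fuzzy element (mu, nu), with mu**2 + nu**2 <= 1, to a
    BPA over {yes}, {no}, and the whole frame {yes, no} (hesitancy mass).
    This particular mapping is an illustrative assumption, not the paper's."""
    assert mu**2 + nu**2 <= 1.0 + 1e-12
    return np.array([mu**2, nu**2, 1.0 - mu**2 - nu**2])

def js_divergence(p, q, eps=1e-12):
    """Ordinary Jensen-Shannon divergence between two discrete mass functions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two Pythagorean fuzzy evaluations of the same symptom (made-up numbers):
d = js_divergence(pfs_to_bpa(0.9, 0.3), pfs_to_bpa(0.6, 0.5))
print(f"illustrative PFS divergence: {d:.4f}")
```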


2019, Vol 44 (4), pp. 431-447
Author(s): Scott Monroe

In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures, such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. In practice, however, the expected information is not typically used, as it often requires a large amount of computation. In the present research, two methods to approximate the expected information by Monte Carlo are proposed. The first method is suitable for less complex IRT models, such as unidimensional models. The second method is generally applicable but is designed for use with more complex models, such as high-dimensional IRT models. The proposed methods are compared to existing methods using real data sets and a simulation study. The comparisons are based on simple-structure multidimensional IRT models with two-parameter logistic item models.
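A minimal sketch of the general idea, assuming a single two-parameter logistic (2PL) item and a standard normal latent trait: draw Monte Carlo samples of the trait and average the conditional information contributions to approximate the expected information matrix for the item parameters. The function names and setup are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def icc(theta, a, b):
    """2PL item characteristic curve: P(correct response | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mc_expected_information(a, b, n_draws=100_000):
    """Monte Carlo approximation of the expected Fisher information matrix for
    the item parameters (a, b) of one 2PL item, integrating theta ~ N(0, 1).
    Uses E[score score^T | theta] = p(1-p) * g g^T with g = (theta - b, -a)."""
    theta = rng.standard_normal(n_draws)
    p = icc(theta, a, b)
    w = p * (1.0 - p)
    g = np.stack([theta - b, -a * np.ones_like(theta)], axis=1)
    return (w[:, None, None] * g[:, :, None] * g[:, None, :]).mean(axis=0)

print(mc_expected_information(a=1.2, b=0.3))
```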


2014, Vol 529, pp. 675-678
Author(s): Zheng Xia Zhang, Si Qiu Xu, Er Ning Zhou, Xiao Lin Huang, Jun Wang

This article adopts the multiscale Jensen-Shannon divergence (JSD) method to analyze EEG complexity. The study found that this method can distinguish EEG time series acquired in three different states (eyes closed, counting, and in a daze), showing that the EEG time series of the three states differ significantly. For each of the three states, we compared the statistical complexity of the original EEG time series with that of shuffled surrogate data and found that the EEG signals contain substantial nonlinear structure. These results demonstrate that the multiscale JSD algorithm can be used to analyze attention-related EEG signals. The multiscale Jensen-Shannon divergence statistical complexity can therefore serve as a measure of brain function and may be applied to auxiliary clinical evaluation of brain function in the future.
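One common way to compute a JSD-based statistical complexity at a given scale is sketched below: coarse-grain the series, estimate an amplitude distribution, multiply its normalized entropy by its normalized JSD from the uniform distribution, and compare against shuffled surrogate data. The signal, bin count, and scale are stand-ins, and the authors' exact pipeline may differ.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages (the usual multiscale coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def jsd(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def jsd_statistical_complexity(x, scale, bins=16):
    """Normalized entropy of the coarse-grained amplitude histogram times its
    normalized JSD from the uniform distribution. A generic sketch only."""
    counts, _ = np.histogram(coarse_grain(x, scale), bins=bins)
    p = counts / counts.sum()
    u = np.full(bins, 1.0 / bins)
    h = -np.sum((p + 1e-12) * np.log(p + 1e-12)) / np.log(bins)  # normalized entropy
    return h * jsd(p, u) / np.log(2.0)                           # JSD scaled by its max

# Surrogate comparison mentioned in the abstract: original vs shuffled data.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 60, 6000)) + 0.5 * rng.standard_normal(6000)  # stand-in
print(jsd_statistical_complexity(signal, scale=5),
      jsd_statistical_complexity(rng.permutation(signal), scale=5))
```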


2021
Author(s): Daniel N. Baker, Nathan Dyjack, Vladimir Braverman, Stephanie C. Hicks, Ben Langmead

Abstract: Single-cell RNA-sequencing (scRNA-seq) analyses typically begin by clustering a gene-by-cell expression matrix to empirically define groups of cells with similar expression profiles. We describe new methods and a new open-source library, minicore, for efficient k-means++ center finding and k-means clustering of scRNA-seq data. Minicore works with sparse count data, as it emerges from typical scRNA-seq experiments, as well as with dense data after dimensionality reduction. Minicore's novel vectorized weighted reservoir sampling algorithm allows it to find initial k-means++ centers for a 4-million-cell dataset in 1.5 minutes using 20 threads. Minicore can cluster using Euclidean distance, but also supports a wider class of measures such as the Jensen-Shannon divergence, the Kullback-Leibler divergence, and the Bhattacharyya distance, which can be applied directly to count data and probability distributions. Further, minicore produces lower-cost centerings more efficiently than scikit-learn for scRNA-seq datasets with millions of cells. With careful handling of priors, minicore implements these distance measures with only minor (<2-fold) speed differences among all distances. We show that a minicore pipeline consisting of k-means++, localsearch++, and minibatch k-means can cluster a 4-million-cell dataset in minutes, using less than 10 GiB of RAM. This memory efficiency enables atlas-scale clustering on laptops and other commodity hardware. Finally, we report findings on which distance measures give clusterings that are most consistent with known cell-type labels.

Availability: The open-source library is at https://github.com/dnbaker/minicore. Code used for experiments is at https://github.com/dnbaker/minicore-experiments.
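For readers unfamiliar with the seeding step, the sketch below shows plain k-means++ initialization: each new center is drawn with probability proportional to the squared distance from the nearest existing center. It is a generic NumPy illustration of what minicore accelerates, not minicore's vectorized weighted reservoir sampling code or its API.

```python
import numpy as np

def kmeanspp_centers(X, k, rng=None):
    """Plain k-means++ seeding over rows of X. Generic sketch, not minicore."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)   # squared distance to nearest center
    for _ in range(1, k):
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
        d2 = np.minimum(d2, np.sum((X - centers[-1]) ** 2, axis=1))
    return np.vstack(centers)

# Stand-in for a dimensionality-reduced scRNA-seq matrix (cells x components):
X = np.random.default_rng(0).standard_normal((10_000, 20))
print(kmeanspp_centers(X, k=8).shape)  # (8, 20)
```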


2021, Vol 5 (2), pp. 9-24
Author(s): Arthi N, Mohana K

As an extension of fuzzy set (FS) theory, interval-valued Pythagorean fuzzy sets (IVPFSs) were introduced and play an important role in handling uncertainty. The Pythagorean fuzzy sets (PFSs) proposed by Yager in 2013 can deal with more uncertain situations than intuitionistic fuzzy sets because of their larger range for describing membership grades. How to measure the distance between interval-valued Pythagorean fuzzy sets is still an open issue. The Jensen-Shannon divergence is a useful distance measure in the space of probability distributions. In order to deal efficiently with uncertainty in practical applications, this paper proposes a new divergence measure for interval-valued Pythagorean fuzzy sets, based on the belief function of Dempster-Shafer evidence theory and called the IVPFSDM distance. It describes interval-valued Pythagorean fuzzy sets in the form of basic probability assignments (BPAs) and computes the divergence of the BPAs to obtain the divergence of the IVPFSs, which establishes a link between IVPFSs and BPAs. Since the proposed method combines the characteristics of belief functions and divergence, it has more powerful resolution than other existing methods.
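To make the "larger range" remark concrete, the sketch below encodes an interval-valued Pythagorean fuzzy element and checks the defining constraint on the upper grades; the class name and fields are illustrative only and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class IVPFElement:
    """Interval-valued Pythagorean fuzzy element with membership [mu_lo, mu_hi]
    and non-membership [nu_lo, nu_hi]; the constraint is mu_hi**2 + nu_hi**2 <= 1."""
    mu_lo: float
    mu_hi: float
    nu_lo: float
    nu_hi: float

    def is_valid(self) -> bool:
        return (0.0 <= self.mu_lo <= self.mu_hi <= 1.0
                and 0.0 <= self.nu_lo <= self.nu_hi <= 1.0
                and self.mu_hi ** 2 + self.nu_hi ** 2 <= 1.0)

# Valid here (0.49 + 0.36 <= 1), but the upper grades 0.7 + 0.6 > 1 would fail
# the interval-valued *intuitionistic* constraint, illustrating the wider range.
print(IVPFElement(0.5, 0.7, 0.4, 0.6).is_valid())  # True
```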

