entropy information
Recently Published Documents

TOTAL DOCUMENTS: 156 (five years: 48)
H-INDEX: 16 (five years: 2)

2021 ◽  
Vol 12 (1) ◽  
pp. 199
Author(s):  
Myungjin Lee ◽  
Hung Soo Kim ◽  
Jaewon Kwak ◽  
Jongsung Kim ◽  
Soojun Kim

This study assessed the characteristics of the water-level time series of a tidal river by decomposing it into tide, wave, rainfall-runoff, and noise components. In particular, the chaotic behavior of each component was analyzed by estimating the correlation dimension from a phase-space reconstruction of the time series and by using a close returns plot (CRP). Among the components, the tide component showed chaotic characteristics, with a correlation dimension of 1.3. The water level itself was found to be stochastic, its correlation exponent increasing with the embedding dimension; the other components were likewise stochastic. The CRP was then used to examine each component. The tide component again showed chaotic characteristics in its CRP, while the CRP of the water level was aperiodic, straying slightly from periodicity, which may be attributable to the tide component. Entropy information showed that low water levels are governed mainly by the chaotic tide component. Although the water level did not appear chaotic by the correlation-dimension criterion, it showed stochastic-chaos characteristics in the CRP; the other components remained stochastic there. It was confirmed that the water level behaves chaotically when unaffected by rainfall, and stochastically, deviating from the bounded trajectory, when it rises due to rainfall. We therefore conclude that the water level, driven by the chaotic tide component and perturbed by rainfall shocks, can itself exhibit stochastic-chaos characteristics.
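The correlation-dimension estimate at the heart of this analysis can be sketched briefly: delay-embed the scalar series into phase space, then measure how the fraction of close point pairs scales with distance (the Grassberger–Procaccia correlation sum). This is a minimal illustration, not the authors' code; the function names, the sine test signal, and the embedding parameters are all illustrative choices.

```python
import math

def delay_embed(x, dim, tau):
    """Takens delay embedding: reconstruct phase-space vectors from a scalar series."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[i + j * tau] for j in range(dim)) for i in range(n)]

def correlation_sum(vectors, r):
    """Fraction of point pairs whose Euclidean distance is below r."""
    n, count = len(vectors), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(vectors[i], vectors[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_exponent(x, dim, tau, r1, r2):
    """Slope of log C(r) between two radii; for a chaotic (low-dimensional)
    signal this slope saturates as the embedding dimension grows, while for
    a stochastic signal it keeps increasing."""
    v = delay_embed(x, dim, tau)
    c1, c2 = correlation_sum(v, r1), correlation_sum(v, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# A periodic signal traces a one-dimensional curve in phase space,
# so its correlation exponent is close to 1:
signal = [math.sin(0.3 * i) for i in range(400)]
d = correlation_exponent(signal, dim=3, tau=5, r1=0.2, r2=0.6)
```

In practice one fits the slope over a range of radii and repeats for increasing embedding dimensions, looking for saturation of the exponent, which is what distinguishes the chaotic tide component from the stochastic ones in the abstract above.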


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1493
Author(s):  
Emilio Abad-Segura ◽  
Mariana-Daniela González-Zamar ◽  
Massimo Squillante

Open business organizations, in which information flows freely and is shared and exchanged, are better prepared to adapt to and survive chaos, uncertainty, and entropy, and are therefore more predisposed to change management. The aim of this study is to analyze international research trends on the correlation between business information and entropy in the accounting processes of organizations. Mathematical and statistical techniques were applied to 980 articles from the period 1974–2020, yielding results on the scientific productivity of the driving agents of this topic: authors, research institutions, countries/territories, and journals. Five lines of research were identified over the period analyzed, chiefly concerning information theory, maximum entropy, information entropy, decision-making, and enthalpy. Future research should track the evolution of this topic as it forms new thematic axes related to bitcoin market efficiency, business hierarchy information, business model evaluation systems, catastrophic economic collapse, corporate diversification, CSR reports affecting accounting conservatism, economic income accounting, and information loss. The research currently shows an upward trend, indicating growing interest in the subject across the academic and scientific community worldwide.


2021 ◽  
pp. 1-30
Author(s):  
Cara Murray

The Dictionary of National Biography, published between 1885 and 1900, was one of Britain's biggest encyclopedia projects. The rampant expansion of the nation's archives, private collections, and museums produced an abundance of materials that frustrated the dictionary's editors, Leslie Stephen and Sidney Lee, especially because methodologies for imposing order on such materials were underdeveloped. Adding to their frustration was the sense of impending doom felt generally in Britain after the discovery of the second law of thermodynamics in 1859. Entropy put an end to the presiding belief in the infinite energy that fueled Britain's economic development and therefore challenged Victorian biography's premise that the capacity for self-development was boundless. Like the physicists of the era, these dictionary makers searched for ways to circumvent entropy's deadening force and reenergize their world. This project would not actually be achieved, however, until the twentieth century, when Claude Shannon published his mathematical theory of communication in 1948. I argue that, in an attempt to get out from under the chaos of information overload, the editors of the DNB invented new methods of organizing information that anticipated Shannon's revolutionary theory and changed the way that we think, write, and work.


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1241
Author(s):  
Xin Zhao ◽  
Xiaokai Nie

This research explores several theoretical results about decision trees that give support to applications built on them. First, there are many splitting criteria to choose from during tree growth, and the choice is subject to splitting bias caused by missing values and by variables with many possible values. Results show that the Gini index is superior to information entropy in that it is less affected by this bias. Second, noise variables with more missing values have a better chance of being chosen for a split, while informative variables do not. Third, when many noise variables are involved in the tree-building process, they affect the computational complexity; results show that the increase in complexity is linear in the number of noise variables. Methods that decompose more information from the original data at the cost of a higher variable dimension can therefore still be considered in real applications.
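The two splitting criteria compared here are both impurity measures evaluated on the class labels at a node, and a split is scored by how much it reduces impurity. A minimal sketch of the standard definitions (not the authors' implementation; the function names are illustrative):

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy in bits: -sum p_k * log2(p_k)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_gain(parent, left, right, criterion):
    """Impurity decrease achieved by splitting parent into two children,
    weighting each child's impurity by its share of the samples."""
    n = len(parent)
    return (criterion(parent)
            - (len(left) / n) * criterion(left)
            - (len(right) / n) * criterion(right))

# A perfectly balanced binary node, split into two pure children:
parent = ["a"] * 5 + ["b"] * 5
g = split_gain(parent, ["a"] * 5, ["b"] * 5, gini)      # 0.5
e = split_gain(parent, ["a"] * 5, ["b"] * 5, entropy)   # 1.0
```

Because entropy grows faster than Gini for rare classes, it tends to reward fragmenting splits on many-valued variables more strongly, which is consistent with the bias result reported above.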


Information ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 391
Author(s):  
Hossein Hassani ◽  
Stephan Unger ◽  
Mohammad Reza Entezarian

We conducted singular and sectoral vulnerability assessments of the ESG factors of Dow-30-listed companies by applying the entropy weight method and analyzing each ESG factor's information contribution to the overall ESG disclosure score. By reducing information entropy, weaknesses in the structure of a socio-technological system can be identified and improved; the relative information gain of each indicator grows in proportion to the reduction in entropy. The social pillar contains the most crucial information, followed by the environmental and governance pillars. The difference between the environmental and governance pillars was not statistically significant, while the differences between the social pillar and each of the other two pillars were. This suggests noisy information content in the governance pillar, indicating potential for improvement in governance messaging. Moreover, we found that companies with lean and flexible governance structures are more likely to convey information content well. We also discuss the impact of ESG measures on society and security.
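The entropy weight method used here assigns each indicator a weight proportional to how much its values diverge across the alternatives: an indicator that is nearly constant carries no discriminating information (entropy near 1 after normalization) and gets weight near zero. A minimal sketch of the standard formulation, assuming positive indicator values (this is the textbook method, not the authors' exact pipeline):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for an m-by-n decision matrix
    (m alternatives as rows, n indicators as columns).
    Steps: normalize each column to proportions p_ij, compute the
    normalized entropy e_j = -sum_i p_ij ln p_ij / ln m, then weight
    each indicator by its divergence d_j = 1 - e_j."""
    m = len(matrix)
    n = len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        e = -sum((v / total) * math.log(v / total)
                 for v in col if v > 0) / math.log(m)
        divergences.append(1.0 - e)
    s = sum(divergences)
    return [d / s for d in divergences]

# Indicator 0 is identical across alternatives (no information);
# indicator 1 varies, so it absorbs all the weight:
w = entropy_weights([[1, 1], [1, 2], [1, 3]])
```

Applied to ESG disclosure scores, a pillar whose indicators barely separate the companies contributes little weight, which is how the noisy governance pillar is diagnosed above.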


Author(s):  
Yajin Xu ◽  
Qiong Luo ◽  
Hong Shu

Excess commuting refers to unnecessary commuting or distance cost. Traditional commuting-distance models adopt the most efficient scenario, in which people work at the geographically nearest workplace. Although there have been attempts to add constraints on commuter attributes and neighborhood features, problems arise from the traditional geographical space and from the subjectivity of those predefined characteristics. In this paper, we propose a method for calculating theoretical local minimal costs that accounts for the inherently behavioral preferences revealed by current work–home trips when reassigning the work–home configuration. The method operates in a higher-dimensional feature space that enlarges the attributes of, and relations between, commuters and neighborhoods. The solution is obtained by combining an improved Fuzzy C-Means clustering with linear programming. Unlike traditional clustering algorithms, our improved method incorporates entropy information and selects its initial parameters from the actual data rather than from prior knowledge. Using the real origin–destination matrix, theoretical minimal costs are calculated within each cluster (the local minimal costs), and their average sum gives the theoretical minimal cost. The difference between this expected minimal cost and the actual cost is the excess commuting. Experimental results show that only 13% of the daily commuting distance in Wuhan could be avoided, the theoretical distance being approximately 1.06 km shorter than the actual commuting distance.
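The core excess-commuting calculation reduces to comparing the actual total cost of an origin–destination assignment with the cheapest possible reassignment. A toy sketch of that comparison, with a brute-force optimal assignment standing in for the paper's linear programming (all names are illustrative; the clustering step is omitted):

```python
from itertools import permutations

def minimal_commute(cost):
    """Cheapest one-to-one assignment of n commuters to n workplaces.
    Brute force over permutations; at realistic scale this is the role
    the linear program plays in the paper's method."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def excess_commuting(cost, actual_assignment):
    """Share of the actual commuting cost that is theoretically avoidable:
    (actual - minimal) / actual."""
    actual = sum(cost[i][j] for i, j in enumerate(actual_assignment))
    return (actual - minimal_commute(cost)) / actual

# Two commuters who each cross town (cost 4) instead of working
# nearby (cost 1): 75% of their commuting cost is avoidable.
share = excess_commuting([[1, 4], [4, 1]], [1, 0])
```

The paper's contribution is to run this comparison *within* behaviorally coherent clusters rather than over the whole city, so that the "minimal" reassignment respects revealed preferences instead of pure geography.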


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 895
Author(s):  
Ariel Caticha

This paper is a review of a particular approach to the method of maximum entropy as a general framework for inference. The discussion emphasizes pragmatic elements in the derivation. An epistemic notion of information is defined in terms of its relation to the Bayesian beliefs of ideally rational agents. The method of updating from a prior to a posterior probability distribution is designed through an eliminative induction process. The logarithmic relative entropy is singled out as a unique tool for updating (a) that is of universal applicability, (b) that recognizes the value of prior information, and (c) that recognizes the privileged role played by the notion of independence in science. The resulting framework—the ME method—can handle arbitrary priors and arbitrary constraints. It includes the MaxEnt and Bayes' rules as special cases and, therefore, unifies entropic and Bayesian methods into a single general inference scheme. The ME method goes beyond the mere selection of a single posterior, and also addresses the question of how much less probable other distributions might be, which provides a direct bridge to the theories of fluctuations and large deviations.
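In the simplest MaxEnt special case, maximizing entropy subject to a fixed mean yields an exponential-family distribution p_i ∝ exp(-λx_i), with the Lagrange multiplier λ tuned to satisfy the constraint. A numeric sketch under those assumptions (illustrative of the textbook MaxEnt rule, not of the full ME framework the paper develops):

```python
import math

def maxent_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over a finite set of values subject
    to a mean constraint. The solution has the form p_i ∝ exp(-lam * x_i);
    we bisect on lam, since the constrained mean decreases monotonically
    as lam increases."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:
            lo = mid   # mean too high: increase lam
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# A die constrained to its natural mean of 3.5 recovers the uniform
# distribution (lam = 0); constraining the mean to 4.5 tilts the
# probabilities exponentially toward the higher faces.
fair = maxent_mean([1, 2, 3, 4, 5, 6], 3.5)
loaded = maxent_mean([1, 2, 3, 4, 5, 6], 4.5)
```

When the constraint adds no information beyond the uniform prior, MaxEnt returns that prior unchanged, which is exactly the "value of prior information" property (b) singled out above.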


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 779
Author(s):  
Yunus A. Çengel

The term entropy is used with different meanings in different contexts, sometimes in contradictory ways, resulting in misunderstanding and confusion. The root cause of the problem is the close resemblance of the defining mathematical expressions of entropy in statistical thermodynamics and of information in the communications field, also called entropy, which differ only by a constant factor, carrying the unit J/K in thermodynamics and bits in information theory. The thermodynamic property entropy is closely associated with the physical quantities of thermal energy and temperature, while the entropy used in the communications field is a mathematical abstraction based on the probabilities of messages. The terms information and entropy are often used interchangeably in several branches of science. This practice gives rise to the phrase conservation of entropy in the sense of conservation of information, which contradicts the fundamental increase-of-entropy principle of thermodynamics as an expression of the second law. The aim of this paper is to clarify matters and eliminate confusion by putting things into their rightful places within their domains. The notion of conservation of information is also put into a proper perspective.
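The "constant factor" in question is concrete: the Gibbs entropy −k_B Σ p ln p and the Shannon entropy −Σ p log₂ p applied to the same probability distribution differ exactly by k_B ln 2 per bit. A minimal numeric check of that relationship (the function names are illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in SI since 2019)

def shannon_bits(p):
    """Information entropy of a probability distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gibbs_entropy(p):
    """Statistical-thermodynamic entropy of the same distribution, in J/K."""
    return -K_B * sum(pi * math.log(pi) for pi in p if pi > 0)

# Same distribution, two units: the ratio is always k_B * ln(2).
p = [0.5, 0.25, 0.25]
bits = shannon_bits(p)           # 1.5 bits
joules_per_kelvin = gibbs_entropy(p)
```

The mathematical identity is what invites the interchangeable usage the paper criticizes; the physical content (thermal energy, temperature, the second law) lives entirely on the J/K side.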

