average amount of information
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 3)
H-INDEX: 3 (FIVE YEARS: 0)

Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1585
Author(s):  
Chiuhsiang Joe Lin ◽  
Chih-Feng Cheng

Fitts’ law predicts the human movement response time for a specific task through a simple linear formulation, in which the intercept and the slope are estimated from the task’s empirical data. This research was motivated by our pilot study, which found that the linear regression’s essential assumptions are not satisfied in the literature. Furthermore, the keystone hypothesis in Fitts’ law, namely that the movement time per response will be directly proportional to the minimum average amount of information per response demanded by the particular amplitude and target width, has never been formally tested. Therefore, in this study we developed an optional formulation by combining findings from psychology, physics, and physiology so that the statistical assumptions are fulfilled. An experiment was designed to test the hypothesis in Fitts’ law and to validate the proposed model. To conclude, our results indicated that movement time could be related to the index of difficulty at the same amplitude. The optional formulation, used together with the index of difficulty in Shannon form, predicts movement time better than the traditional model. Finally, a new approach to modeling movement time prediction was deduced from our research results.
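
In its traditional form, Fitts' law is MT = a + b·ID with the Shannon-form index of difficulty ID = log2(A/W + 1). The sketch below shows only this standard baseline fit on hypothetical amplitude/width/time data, not the authors' experiment or their alternative formulation.

```python
# Minimal sketch of fitting the traditional Fitts' law model MT = a + b * ID,
# with the Shannon-form index of difficulty ID = log2(A/W + 1).
# The amplitudes, widths and movement times below are hypothetical.
import numpy as np

amplitude = np.array([64, 128, 256, 512, 512])   # A: distance to target (px)
width     = np.array([32,  32,  32,  16,   8])   # W: target width (px)
mt_ms     = np.array([310, 395, 480, 610, 705])  # observed movement times (ms)

id_shannon = np.log2(amplitude / width + 1)      # index of difficulty (bits)

# Ordinary least squares for intercept a and slope b.
X = np.column_stack([np.ones_like(id_shannon), id_shannon])
(a, b), *_ = np.linalg.lstsq(X, mt_ms, rcond=None)

print(f"intercept a = {a:.1f} ms, slope b = {b:.1f} ms/bit")
print("predicted MT:", np.round(a + b * id_shannon, 1))
```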



2021 ◽  
Vol 28 (1) ◽  
Author(s):  
Anatoly Balabanov ◽  
Vyacheslav Kunev ◽  
Victor Colesnic ◽  
...  

The article proposes to solve the problem of real-time (on-line) asymmetric encryption, bit-by-bit or in blocks (stream, phoneme blocks, or 32-, 64-, …, n-bit blocks), of the linear and/or non-linear formants of the Fast Fourier Transform (FFT) spectrum lines, used as an indirect analogue of a voice message. For this purpose, modernized number-theoretic RSA-m algorithms are applied to the spectrum of the voice message represented as linear formants, while maintaining the high level of cryptographic resistance inherent in the RSA algorithm. The peculiarity of these algorithms is that cryptographic keys of different lengths are used and are changed sufficiently often, depending on the required level of cryptographic resistance. This feature implements statistically independent encoding of the original message by encrypting its corresponding formants, i.e. a process characterized by a reduction (compression) of the amount of initial information and of its redundancy, as well as by an increase of its entropy (the average amount of information per character, phoneme or sample, i.e. per n-bit ADC output). In such a compressed context, statistically frequent sounds, letters, words, phonemes and samples will be absent, which significantly complicates decryption (cryptanalysis) of the message.
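
The abstract describes the general pipeline (FFT of the voice signal, selection of formant spectrum lines, RSA-style encryption with variable key lengths) without giving the RSA-m details. The following is a deliberately toy sketch of that pipeline, not the authors' RSA-m algorithm; the function names are made up and the tiny textbook key is insecure.

```python
# Illustrative sketch only: encrypting dominant FFT spectral lines ("formants")
# of a speech frame with textbook RSA. This is NOT the authors' RSA-m scheme
# and the toy key below is far too small to be secure.
import numpy as np

# Toy RSA key pair (real keys are 2048+ bits).
p, q = 61, 53
n = p * q                              # modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def encrypt_formants(frame, num_formants=4):
    """Take the strongest FFT lines of a frame, quantize them, RSA-encrypt each."""
    spectrum = np.abs(np.fft.rfft(frame))
    peaks = np.argsort(spectrum)[-num_formants:]           # dominant spectrum lines
    quantized = [(int(k), int(spectrum[k]) % n) for k in peaks]
    return [(k, pow(m, e, n)) for k, m in quantized]        # c = m^e mod n

def decrypt_formants(ciphertext):
    return [(k, pow(c, d, n)) for k, c in ciphertext]       # m = c^d mod n

# Example: a synthetic 8 kHz frame with two formant-like tones.
t = np.arange(256) / 8000.0
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
enc = encrypt_formants(frame)
print(decrypt_formants(enc))
```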



Author(s):  
Kenneth M. Sayre

Information theory was established in 1948 by Claude Shannon as a statistical analysis of factors pertaining to the transmission of messages through communication channels. Among basic concepts defined within the theory are information (the amount of uncertainty removed by the occurrence of an event), entropy (the average amount of information represented by events at the source of a channel), and equivocation (the ‘noise’ that impedes faithful transmission of a message through a channel). Information theory has proved essential to the development of space probes, high-speed computing machinery and modern communication systems. The information studied by Shannon is sharply distinct from information in the sense of knowledge or of propositional content. It is also distinct from most uses of the term in the popular press (‘information retrieval’, ‘information processing’, ‘information highway’, and so on). While Shannon’s work has strongly influenced academic psychology and philosophy, its reception in these disciplines has been largely impressionistic. A major problem for contemporary philosophy is to relate the statistical conceptions of information theory to information in the semantic sense of knowledge and content.
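
As a concrete complement to the definitions above (information of an event, entropy of a source, equivocation of a channel), here is a toy computation with made-up probabilities; the binary symmetric channel is an assumption chosen for illustration.

```python
# Toy illustration of Shannon's quantities: information of an event, entropy of
# a source, and equivocation H(X|Y) for a binary symmetric channel.
# The probabilities are invented for the example.
import math

def info(p):                     # information of an event with probability p
    return -math.log2(p)

def entropy(dist):               # average information of a source
    return sum(p * info(p) for p in dist if p > 0)

# Source X: symbols 0/1 with probabilities 0.9 / 0.1.
px = [0.9, 0.1]
print("H(X) =", round(entropy(px), 3), "bits")

# Binary symmetric channel flipping each bit with probability 0.1.
flip = 0.1
pxy = {(x, y): px[x] * (flip if x != y else 1 - flip)
       for x in (0, 1) for y in (0, 1)}                    # joint p(x, y)
py = [sum(pxy[(x, y)] for x in (0, 1)) for y in (0, 1)]    # marginal p(y)

# Equivocation H(X|Y): uncertainty about X remaining after Y is received.
h_x_given_y = -sum(p * math.log2(p / py[y]) for (x, y), p in pxy.items() if p > 0)
print("H(X|Y) =", round(h_x_given_y, 3), "bits (equivocation)")
```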



2018 ◽  
Author(s):  
Frank Pennekamp ◽  
Alison C. Iles ◽  
Joshua Garland ◽  
Georgina Brennan ◽  
Ulrich Brose ◽  
...  

Abstract
Successfully predicting the future states of systems that are complex, stochastic and potentially chaotic is a major challenge. Model forecasting error (FE) is the usual measure of success; however model predictions provide no insights into the potential for improvement. In short, the realized predictability of a specific model is uninformative about whether the system is inherently predictable or whether the chosen model is a poor match for the system and our observations thereof. Ideally, model proficiency would be judged with respect to the systems’ intrinsic predictability – the highest achievable predictability given the degree to which system dynamics are the result of deterministic v. stochastic processes. Intrinsic predictability may be quantified with permutation entropy (PE), a model-free, information-theoretic measure of the complexity of a time series. By means of simulations we show that a correlation exists between estimated PE and FE and show how stochasticity, process error, and chaotic dynamics affect the relationship. This relationship is verified for a dataset of 461 empirical ecological time series. We show how deviations from the expected PE-FE relationship are related to covariates of data quality and the nonlinearity of ecological dynamics. These results demonstrate a theoretically-grounded basis for a model-free evaluation of a system’s intrinsic predictability. Identifying the gap between the intrinsic and realized predictability of time series will enable researchers to understand whether forecasting proficiency is limited by the quality and quantity of their data or the ability of the chosen forecasting model to explain the data. Intrinsic predictability also provides a model-free baseline of forecasting proficiency against which modeling efforts can be evaluated.

Glossary
Active information: The amount of information that is available to forecasting models (redundant information minus lost information; Fig. 1).
Forecasting error (FE): A measure of the discrepancy between a model’s forecasts and the observed dynamics of a system. Common measures of forecast error are root mean squared error and mean absolute error.
Entropy: Measures the average amount of information in the outcome of a stochastic process.
Information: Any entity that provides answers and resolves uncertainty about a process. When information is calculated using logarithms to the base two (i.e. information in bits), it is the minimum number of yes/no questions required, on average, to determine the identity of the symbol (Jost 2006). The information in an observation consists of information inherited from the past (redundant information), and of new information.
Intrinsic predictability: The maximum achievable predictability of a system (Beckage et al. 2011).
Lost information: The part of the redundant information lost due to measurement or sampling error, or transformations of the data (Fig. 1).
New information, Shannon entropy rate: The Shannon entropy rate quantifies the average amount of information per observation in a time series that is unrelated to the past, i.e., the new information (Fig. 1).
Nonlinearity: When the deterministic processes governing system dynamics depend on the state of the system.
Permutation entropy (PE): Permutation entropy is a measure of the complexity of a time series (Bandt & Pompe, 2002) that is negatively correlated with a system’s predictability (Garland et al. 2015). Permutation entropy quantifies the combined new and lost information. PE is scaled to range between a minimum of 0 and a maximum of 1.
Realized predictability: The achieved predictability of a system from a given forecasting model.
Redundant information: The information inherited from the past, and thus the maximum amount of information available for use in forecasting (Fig. 1).
Symbols, words, permutations: Symbols are simply the smallest unit in a formal language such as the letters in the English alphabet, i.e., {“A”, “B”, …, “Z”}. In information theory the alphabet is more abstract, such as elements in the set {“up”, “down”} or {“1”, “2”, “3”}. Words of length m refer to concatenations of the symbols (e.g., up-down-down) in a set. Permutations are the possible orderings of symbols in a set. In this manuscript, the words are the permutations that arise from the numerical ordering of m data points in a time series.
Weighted permutation entropy (WPE): A modification of permutation entropy (Fadlallah et al., 2013) that distinguishes between small-scale, noise-driven variation and large-scale, system-driven variation by considering the magnitudes of changes in addition to the rank-order patterns of PE.
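
Since permutation entropy is central to the abstract above, here is a minimal sketch of the standard (unweighted) Bandt & Pompe estimator, normalized to [0, 1] as described in the glossary; the word length m and the example series are arbitrary illustrative choices, not the paper's data.

```python
# Minimal sketch of (unweighted) permutation entropy in the spirit of
# Bandt & Pompe (2002), scaled to lie between 0 and 1 as in the glossary.
import math
from collections import Counter
import numpy as np

def permutation_entropy(series, m=3):
    """Normalized permutation entropy of a 1-D series using words of length m."""
    series = np.asarray(series, dtype=float)
    # Ordinal pattern (permutation) of each window of m consecutive points.
    words = [tuple(np.argsort(series[i:i + m])) for i in range(len(series) - m + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(m))   # divide by log2(m!) so PE is in [0, 1]

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)                      # white noise: PE near 1
sine = np.sin(np.linspace(0, 20 * np.pi, 1000))    # regular signal: lower PE
print("PE(noise) =", round(permutation_entropy(noise), 3))
print("PE(sine)  =", round(permutation_entropy(sine), 3))
```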



2017 ◽  
Vol 2 (1) ◽  
pp. 13-26
Author(s):  
Botchkaryov A. ◽  

A method of structural adaptation of data collection processes has been developed, based on reinforcement learning of the decision block that chooses actions at the structural and functional level subordinated to it. The method provides a more efficient distribution of measuring and computing resources and higher reliability and survivability of the information collection subsystems of an autonomous distributed system compared with methods of parametric adaptation. In particular, according to the results of experimental studies, the average amount of information collected in one step using the method of structural adaptation is 23.2% more than when parametric adaptation methods are used. At the same time, the computational cost of the structural adaptation method is on average 42.3% higher than that of parametric adaptation methods. The reliability of the structural adaptation method was studied using the efficiency preservation coefficient for different values of the failure rate of data collection processes. The survivability of a set of data collection processes organized by the structural adaptation method was investigated using the recovery rate coefficient for various proportions of simultaneous sudden failures. In terms of reliability, the structural adaptation method exceeds the parametric adaptation methods by an average of 21.1%. The average survivability rate for the method of structural adaptation is greater than for parametric adaptation methods by 18.4%. Key words: autonomous distributed system, data collection process, structural adaptation, reinforcement learning
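
The abstract reports results but not the learning algorithm itself. The sketch below is therefore a purely hypothetical ε-greedy illustration of reinforcement-learning-based selection among structural variants of a data collection process; the structure names, reward model and parameters are all invented.

```python
# Hypothetical illustration only: the paper does not specify its algorithm.
# An epsilon-greedy agent picks one of several structural variants of a data
# collection process and reinforces the variant yielding the most information
# per step.
import random

structures = ["sequential_poll", "event_driven", "adaptive_sampling"]  # made-up names
q = {s: 0.0 for s in structures}   # running estimate of information per step
n = {s: 0 for s in structures}
epsilon = 0.1

def collected_information(structure):
    """Stand-in for the real measurement of information gathered in one step."""
    base = {"sequential_poll": 1.0, "event_driven": 1.2, "adaptive_sampling": 1.4}
    return random.gauss(base[structure], 0.2)

for step in range(1000):
    if random.random() < epsilon:
        choice = random.choice(structures)          # explore
    else:
        choice = max(q, key=q.get)                  # exploit best-known structure
    reward = collected_information(choice)
    n[choice] += 1
    q[choice] += (reward - q[choice]) / n[choice]   # incremental mean update

print({s: round(v, 2) for s, v in q.items()})
```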



2017 ◽  
Vol 15 (02) ◽  
pp. 1750014 ◽  
Author(s):  
Guang Ping He

To evade the well-known impossibility of unconditionally secure quantum two-party computations, previous quantum private comparison protocols have to adopt a third party. Here, we study how far we can go with two parties only. We propose a very feasible and efficient protocol. Intriguingly, although the average amount of information leaked cannot be made arbitrarily small, we find that this average will not exceed 14 bits for any length of the bit-string being compared.



2012 ◽  
Vol 10 (02) ◽  
pp. 1250022 ◽  
Author(s):  
GUO-QIANG HUANG ◽  
CUI-LAN LUO

Two schemes for controlled dense coding with a one-dimensional four-particle cluster state are investigated. In these protocols, the supervisor (Cliff) can control the channel and the average amount of information transmitted from the sender (Alice) to the receiver (Bob) by adjusting the local measurement angle θ. It is shown that the two schemes yield different results for the average amount of information.



2009 ◽  
Vol 07 (06) ◽  
pp. 1241-1248 ◽  
Author(s):  
GUO-QIANG HUANG ◽  
CUI-LAN LUO

Two schemes for controlled dense coding with an extended GHZ state are investigated. In these protocols, the supervisor (Cliff) can control the average amount of information transmitted from the sender (Alice) to the receiver (Bob) only by adjusting his local measurement angle θ. It is shown that the two schemes yield different results for the average amount of information.



2009 ◽  
Vol 07 (01) ◽  
pp. 365-372 ◽  
Author(s):  
CUI-LAN LUO ◽  
XIAO-FANG OUYANG

A scheme for realizing controlled dense coding via generalized measurement is presented. In this protocol, the supervisor can control the entanglement between the sender and the receiver, and hence the average amount of information transmitted from the sender to the receiver, by adjusting only the measurement angle θ. It is shown that when the quantum channel is a GHZ state, the entanglement and the average amount of information are determined by the supervisor's measurement angle θ alone, whereas when the quantum channel is a GHZ-class state, they are determined not only by the supervisor's measurement angle θ but also by the minimal coefficient of the GHZ-class state.
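
None of the three controlled dense coding abstracts above states its formulas explicitly. As a generic, textbook-level illustration (not the specific schemes of these papers): if, roughly as described above, the supervisor's measurement leaves the sender and receiver sharing a pure two-qubit state cosθ|00⟩ + sinθ|11⟩, the optimal average information per use of the channel is 1 + S(ρ), where S(ρ) is the entanglement entropy of either reduced state.

```python
# Generic illustration (not the papers' specific schemes): dense coding with a
# pure two-qubit state cos(theta)|00> + sin(theta)|11> transmits at most
# 1 + S(rho) bits per use, where S(rho) is the entanglement entropy.
import numpy as np

def average_information(theta):
    p = np.cos(theta) ** 2                     # squared Schmidt coefficient
    probs = np.array([p, 1 - p])
    probs = probs[probs > 0]
    s = -np.sum(probs * np.log2(probs))        # entanglement entropy S(rho)
    return 1 + s                               # bits per channel use

for theta in (0, np.pi / 8, np.pi / 4):
    print(f"theta = {theta:.3f} rad -> {average_information(theta):.3f} bits")
```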


