Arithmetic Coding for Floating-Points and Elementary Mathematical Functions

Author(s):  
Marc Fischer ◽  
Oliver Riedel ◽  
Armin Lechler


2017 ◽
Vol 13 (10) ◽  
pp. 6552-6557
Author(s):  
E. Wiselin Kiruba ◽
Ramar K.

Amalgamation of compression and security is indispensable in the field of multimedia applications, and a novel approach to enhancing security alongside compression is discussed in this research paper. In the secure arithmetic coder (SAC), security is provided by input and output permutation and compression by interval-splitting arithmetic coding; the permutation step, however, is susceptible to attacks. This method addresses the encryption issues associated with SAC: the data is first encrypted with a Table Substitution Box (T-box) and then compressed with an Interval Splitting Arithmetic Coder (ISAC). The T-box is dynamic, providing better security; its elements are determined by the output of a pseudo-random number generator (PRNG) seeded with a Secure Hash Algorithm-256 (SHA-256) message digest. The scheme is based on a key known to both the encoder and the decoder, and subsequent T-boxes are created using the previous message digest as the key. The existing interval-splitting arithmetic coding of SAC is applied to compress text data: interval splitting finds a relative position at which to split the intervals, and this yields the compression. The results show that replacing permutation with the T-box method provides stronger security than SAC; the data is not revealed, and the security analysis indicates that it remains secure against ciphertext-only, known-plaintext and chosen-plaintext attacks. Additionally, the compression ratio is compared by feeding the T-box output to traditional arithmetic coding. The comparison shows a minor reduction in compression ratio for ISAC relative to arithmetic coding; however, the security provided by ISAC outweighs this loss.
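The abstract does not give the exact T-box construction, but the described pipeline (a shared key hashed with SHA-256, the digest seeding a PRNG whose output orders the box, later boxes chained from previous digests) can be sketched as follows. All names and the byte-wise substitution granularity are illustrative assumptions, not the paper's implementation:

```python
import hashlib
import random

def build_tbox(key: bytes) -> list[int]:
    """Build a 256-entry substitution box (T-box) whose layout is driven
    by a PRNG seeded with a SHA-256 digest, as the abstract describes.
    Illustrative sketch only; the paper's construction may differ."""
    digest = hashlib.sha256(key).digest()   # 32-byte message digest
    rng = random.Random(digest)             # PRNG seeded by the digest
    tbox = list(range(256))                 # start from the identity box
    rng.shuffle(tbox)                       # pseudo-random permutation
    return tbox

def substitute(data: bytes, tbox: list[int]) -> bytes:
    """Apply the T-box as a byte-wise substitution before compression."""
    return bytes(tbox[b] for b in data)

key = b"shared secret known to encoder and decoder"
tbox = build_tbox(key)
cipher = substitute(b"plaintext fed to the interval-splitting coder", tbox)
# Chaining (assumed): the next T-box is derived from the previous digest.
next_tbox = build_tbox(hashlib.sha256(key).digest())
```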


1993 ◽  
Author(s):  
Krystyna Ohnesorge ◽  
Peter Stucki ◽  
Hartwig Thomas

2020 ◽  
Vol 89 ◽  
pp. 8-19
Author(s):  
V. A. Minaev ◽  
N. G. Topolsky ◽  
A. O. Faddeev ◽  
R. O. Stepanov ◽  
...  

Introduction. The complex combination of natural and technogenic factors that threaten the health and life of the population, as well as material assets, creates a need for special mathematical models for risk assessment in the affected territories, models that account for the significant differences between these factors. A new line of research describes natural and technogenic risks using differential equations that reflect different types of influence functions; this article develops that line. Goals and objectives. The goal of the article is to create a risk-assessment model for natural-technical systems (NTS) that accounts for the different natural and technogenic influences acting on them. The objectives include the justification, construction and practical implementation of the mathematical model as a system of differential equations. Methods. The considered influences on NTS are interpreted in terms of risks, and the dynamic interaction of natural and technogenic factors is assessed through a system of inhomogeneous differential equations. Results and discussion. Solutions are found for models assessing complex natural-technogenic risks in two cases of functionally different external natural and technogenic influences on NTS, i.e., cases in which the effects of natural and technogenic factors are described by different mathematical functions. Conclusions. The first model combines a parabolic influence (threats whose intensity gradually decreases with distance from the epicenter) with a linear one (sudden threats). The second combines a parabolic influence with a hyperbolic one (threats whose intensity decreases sharply over time). It is concluded that a dedicated computer album of complex influences on NTS should be created so that various situations can be "played out" in advance and the most effective responses developed by the EMERCOM units and other structures. Key words: model, assessment, natural and technogenic risks, functionally different influences, counteraction, EMERCOM units.
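As a rough illustration of the modelling idea (not the paper's actual equations, which the abstract does not give), a two-component inhomogeneous linear system can pair a parabolic forcing term with a linear one; all coefficients and functional forms below are assumptions made for the sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coupling coefficients; the paper's values are not given.
A = np.array([[-0.5, 0.1],
              [ 0.2, -0.8]])

def parabolic(t):
    # Threat whose intensity gradually decays (assumed form).
    return max(0.0, 1.0 - 0.1 * t**2)

def linear(t):
    # Sudden threat with linearly growing influence (assumed form).
    return 0.05 * t

def rhs(t, r):
    """Inhomogeneous system dr/dt = A r + f(t); r[0] is the natural-risk
    component, r[1] the technogenic one."""
    f = np.array([parabolic(t), linear(t)])
    return A @ r + f

sol = solve_ivp(rhs, (0.0, 10.0), y0=[0.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # risk levels at the end of the interval
```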


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 286 ◽  
Author(s):  
Athanasios Bogiatzis ◽  
Basil Papadopoulos

Thresholding algorithms segment an image into two parts (foreground and background) by producing a binary version of the initial input. It is a complex procedure (due to the distinctive characteristics of each image) which often constitutes the first step of other image processing or computer vision applications. Global techniques calculate a single threshold for the whole image, while local techniques calculate a different threshold for each pixel based on specific attributes of its local area. In previous work, we introduced specific fuzzy inclusion and entropy measures which we applied efficiently to both global and local thresholding. The general method we presented was an open and adaptable procedure: it was free of sensitivity or bias parameters, and it involved image classification, mathematical functions, a fuzzy symmetrical triangular number and criteria for choosing between two possible thresholds. Here, we continue this research and try to avoid all of these steps by automatically connecting our measures to the desired threshold using an Artificial Neural Network (ANN). Using an ANN in image segmentation is not uncommon, especially in the domain of medical images. Our proposition, however, involves an Adaptive Neuro-Fuzzy Inference System (ANFIS), which means that all we need is a proper database. It is a simple and immediate method which could provide researchers with an alternative approach to the thresholding problem, given that they probably have appropriate and specialized data at their disposal.
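For readers unfamiliar with the two families being compared, here is a minimal sketch of global versus local (mean-based) thresholding; the paper's actual pipeline replaces these hand-set rules with fuzzy inclusion/entropy measures fed to an ANFIS, which is not reproduced here:

```python
import numpy as np

def global_threshold(img: np.ndarray, t: int) -> np.ndarray:
    """Global thresholding: one threshold t for the whole image."""
    return (img >= t).astype(np.uint8)

def local_threshold(img: np.ndarray, window: int = 15, bias: float = 0.0) -> np.ndarray:
    """Local thresholding: each pixel is compared against the mean of its
    own neighbourhood. A naive sliding-window sketch for illustration."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            mean = padded[i:i + window, j:j + window].mean()
            out[i, j] = 1 if img[i, j] >= mean + bias else 0
    return out

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
binary_global = global_threshold(img, 128)
binary_local = local_threshold(img)
```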


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 153 ◽  
Author(s):  
Christophe Humbert ◽  
Thomas Noblet

To take advantage of the singular properties of matter, as well as to characterize it, we need to interact with it. The role of optical spectroscopies is to enable us to demonstrate the existence of physical objects by observing their response to light excitation. The ability of spectroscopy to reveal the structure and properties of matter then relies on mathematical functions called optical (or dielectric) response functions. Technically, these are tensor Green's functions, not scalar functions. The complexity of this tensor formalism sometimes leads to confusion in articles and books. Here, we clarify this formalism by introducing the physical foundations of linear and non-linear spectroscopies as simply and rigorously as possible. We dwell on both the mathematical and experimental aspects, examining extinction, infrared, Raman and sum-frequency generation spectroscopies. In this review, we thus give a personal presentation with the aim of offering the reader a coherent vision of linear and non-linear optics and of removing the ambiguities that we have encountered in reference books and articles.
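As a concrete anchor for the tensor character the review emphasizes, the standard textbook constitutive relations (with Einstein summation over repeated indices) read as follows; the review's Green's-function formulation generalizes these susceptibility relations:

```latex
% Linear response: \chi^{(1)} is a rank-2 tensor linking field and polarization
P_i(\omega) = \varepsilon_0 \, \chi^{(1)}_{ij}(\omega) \, E_j(\omega)

% Second-order response, e.g. sum-frequency generation with
% \omega_3 = \omega_1 + \omega_2: \chi^{(2)} is a rank-3 tensor
P_i(\omega_3) = \varepsilon_0 \, \chi^{(2)}_{ijk}(\omega_3;\omega_1,\omega_2)
                \, E_j(\omega_1) \, E_k(\omega_2)
```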


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 983
Author(s):  
Jingjian Li ◽  
Wei Wang ◽  
Hong Mo ◽  
Mengting Zhao ◽  
Jianhua Chen

A distributed arithmetic coding algorithm based on source-symbol purging and a context model is proposed to solve the asymmetric Slepian–Wolf problem. The scheme makes better use of both the correlation between adjacent symbols within the source sequence and the correlation between corresponding symbols of the source and side-information sequences to improve the coding performance of the source. Since the encoder purges a portion of the symbols from the source sequence, a shorter codeword length can be obtained, while the purged symbols are still used as context for the subsequent symbols to be encoded. An improved calculation method for the posterior probability is also proposed based on this purging feature, so that the decoder can exploit the correlation within the source sequence to improve decoding performance. In addition, the scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. Simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits better decoding performance than other distributed source coding (DSC) schemes at the same code rate.
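The forbidden-symbol mechanism mentioned at the end is easy to illustrate in isolation: a small probability slice is reserved and never used by the encoder, so a decoder that lands in it can flag a corrupted stream. The float-based sketch below (with assumed probabilities) omits the paper's symbol purging and context modelling:

```python
def encode(bits, p0=0.6, eps=0.02):
    """Encode a binary sequence into a single number inside [low, high).
    Floats are used for brevity; real coders use integer renormalization."""
    low, high = 0.0, 1.0
    scale = 1.0 - eps                      # usable probability mass
    for b in bits:
        width = high - low
        split = low + width * p0 * scale   # boundary between '0' and '1'
        if b == 0:
            high = split
        else:
            # Top eps-slice of the interval stays forbidden.
            low, high = split, low + width * scale
    return (low + high) / 2                # any point inside the interval

def decode(code, n, p0=0.6, eps=0.02):
    """Recover n bits; raise if the code falls in a forbidden region."""
    low, high = 0.0, 1.0
    scale = 1.0 - eps
    out = []
    for _ in range(n):
        width = high - low
        split = low + width * p0 * scale
        top = low + width * scale
        if code < split:
            out.append(0); high = split
        elif code < top:
            out.append(1); low, high = split, top
        else:
            raise ValueError("forbidden symbol reached: corrupted stream")
    return out

bits = [0, 1, 1, 0, 0, 0, 1]
assert decode(encode(bits), len(bits)) == bits
```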

