compression factor
Recently Published Documents


TOTAL DOCUMENTS: 77 (FIVE YEARS: 17)
H-INDEX: 11 (FIVE YEARS: 2)

2021 ◽  
Vol 9 ◽  
Author(s):  
Roland Friedrich

We study the language of legal codes from different countries and legal traditions, using concepts from physics, algorithmic complexity theory and information theory. We show that vocabulary entropy, which measures the diversity of the author's choice of words, combined with the compression factor, which is derived from a lossless compression algorithm and measures the redundancy present in a text, is well suited for separating different writing styles in different languages, legal language in particular. We show that different types of (legal) text, e.g. acts, regulations or literature, occupy distinct regions of the complexity-entropy plane spanned by these information and complexity measures. This two-dimensional approach already gives new insights into the drafting style and structure of statutory texts and complements other methods.
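As a rough illustration of the two measures, the sketch below computes a vocabulary entropy and a compression factor for a plain-text passage. The whitespace tokenization, the choice of zlib as the lossless compressor and the toy sentence are assumptions for illustration only; the paper does not prescribe a specific implementation.

```python
import math
import zlib
from collections import Counter


def vocabulary_entropy(text: str) -> float:
    """Shannon entropy (bits per word) of the empirical word-frequency distribution."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def compression_factor(text: str) -> float:
    """Original size divided by the losslessly compressed size (zlib assumed here)."""
    raw = text.encode("utf-8")
    return len(raw) / len(zlib.compress(raw, 9))


if __name__ == "__main__":
    # Hypothetical statutory-style sentence, for illustration only.
    sample = ("the lessee shall pay the rent to the lessor "
              "on the first day of each month of the term")
    print(f"vocabulary entropy : {vocabulary_entropy(sample):.3f} bits/word")
    print(f"compression factor : {compression_factor(sample):.3f}")
```

Texts with repetitive, formulaic wording tend to show low entropy and a high compression factor, which is what places acts and regulations in their own region of the complexity-entropy plane.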


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Yuanhao He ◽  
Longqian Liu ◽  
Stephen J. Vincent

2020 ◽  
Vol 20 (6) ◽  
pp. 5-17
Author(s):  
Hrachya Astsatryan ◽  
Aram Kocharyan ◽  
Daniel Hagimont ◽  
Arthur Lalayan

Abstract
The optimization of large-scale data sets depends on the technologies and methods used. The MapReduce model, implemented on Apache Hadoop or Spark, allows large data sets to be split into blocks distributed over several machines. Data compression reduces data size and the transfer time between disk and memory, but requires additional processing. Finding an optimal trade-off is therefore a challenge, as a high compression factor may underload Input/Output but overload the processor. The paper presents a system that selects the compression tool and tunes the compression factor to reach the best performance in Apache Hadoop and Spark infrastructures, based on simulation analyses.
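The paper's simulation-driven selection system is not reproduced here, but the Spark-side knobs such a system would tune look roughly as follows. The codec choice (zstd), the compression level and the output path are illustrative assumptions, not values taken from the paper, and codec availability depends on the Spark/Parquet versions installed.

```python
from pyspark.sql import SparkSession

# Illustrative only: the codec and level a tuning system like the one described
# might select for a given workload; the actual choice would come from the
# simulation-based analysis.
spark = (
    SparkSession.builder
    .appName("compression-tradeoff-demo")
    # Codec used for shuffle, RDD and broadcast data: lz4, lzf, snappy or zstd.
    .config("spark.io.compression.codec", "zstd")
    # Higher level -> smaller data but more CPU; level 1 (speed-biased) assumed here.
    .config("spark.io.compression.zstd.level", "1")
    # Compress serialized RDD partitions kept in memory or spilled to disk.
    .config("spark.rdd.compress", "true")
    .getOrCreate()
)

df = spark.range(10_000_000)
# The output file codec is a separate knob (snappy, gzip, zstd, ...).
df.write.mode("overwrite").option("compression", "zstd").parquet("/tmp/demo_zstd")
```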


2020 ◽  
Vol 7 (2) ◽  
pp. 554-563
Author(s):  
Kazeem B. Adedeji

IoT-based smart water supply network management applications generate a huge volume of data from the installed sensing devices, which must be processed (sometimes in-network), stored and transmitted to a remote centre for decision making. As the volume of data produced by diverse IoT smart sensing devices grows, processing and storing these data become a serious issue. The large data size acquired from these applications increases the computational complexity, occupies the scarce data-transmission bandwidth and increases the required storage space. Thus, data size reduction through the use of data compression algorithms is essential in IoT-based smart water network management applications. In this paper, the performance evaluation of four data compression algorithms used for this purpose is presented. These algorithms, RLE, Huffman, LZW and Shannon-Fano encoding, were realised in MATLAB and tested on six water supply system data sets. The performance of each algorithm was evaluated based on its compression ratio, compression factor, percentage space savings and compression gain. The results obtained show that the LZW algorithm performs best in terms of compression ratio, compression factor, space savings and compression gain. However, its execution time is relatively slow compared with RLE and the two other algorithms investigated. Most importantly, the LZW algorithm reduces the data sizes of the tested files more than all the other algorithms.
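A minimal sketch of the evaluation, using a naive run-length encoder and one common set of metric definitions (compression ratio, compression factor, percentage space savings and compression gain). The paper's MATLAB implementation and its exact metric conventions may differ, and the sensor readings below are hypothetical.

```python
import math


def rle_encode(data: bytes) -> bytes:
    """Naive byte-wise run-length encoding: (count, value) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)


def report(original: bytes, compressed: bytes) -> dict:
    """One common set of metric definitions; the paper's conventions may differ."""
    ratio = len(compressed) / len(original)           # compression ratio
    return {
        "compression_ratio": ratio,
        "compression_factor": 1 / ratio,              # original size / compressed size
        "space_savings_pct": 100 * (1 - ratio),
        "compression_gain": 100 * math.log(1 / ratio),
    }


if __name__ == "__main__":
    # Hypothetical smart-water sensor log: long runs of repeated readings suit RLE well.
    readings = bytes([120] * 500 + [121] * 300 + [119] * 200)
    print(report(readings, rle_encode(readings)))
```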


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahmood Al-khassaweneh ◽  
Omar AlShorman

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communications and internet transactions. Two measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value representing the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
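A minimal sketch of the two stages on a grayscale image: blocks whose energy lies mostly in the Frei-Chen average subspace are flattened to their mean value, and the rows of the flattened image are then run-length encoded. The 0.95 energy threshold, the block-selection rule and the row-wise RLE layout are assumptions; the paper does not specify these details.

```python
import numpy as np

# Frei-Chen "average" basis vector for 3x3 blocks: all ones, unit-normalized.
AVG = np.ones((3, 3)) / 3.0


def compress_blocks(img: np.ndarray, energy_thresh: float = 0.95) -> np.ndarray:
    """First stage (sketch): blocks whose energy lies mostly in the average
    subspace are replaced by their mean (lossy, but structure-preserving).
    The 0.95 threshold is an assumption, not a value from the paper."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h - h % 3, 3):
        for x in range(0, w - w % 3, 3):
            block = out[y:y + 3, x:x + 3]
            total_energy = np.sum(block ** 2) + 1e-12
            proj = np.sum(block * AVG)            # projection onto the average basis
            if proj ** 2 / total_energy >= energy_thresh:
                block[:] = block.mean()           # replace the block by a single value
    return out


def rle_rows(img: np.ndarray):
    """Second stage (sketch): run-length encode each row of the flattened image,
    exploiting the long constant runs created by the first stage."""
    encoded = []
    for row in np.rint(img).astype(np.int32):
        runs, prev, count = [], int(row[0]), 1
        for v in row[1:]:
            v = int(v)
            if v == prev:
                count += 1
            else:
                runs.append((count, prev))
                prev, count = v, 1
        runs.append((count, prev))
        encoded.append(runs)
    return encoded
```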


2020 ◽  
Vol 27 (5) ◽  
pp. 1326-1338
Author(s):  
Federica Marone ◽  
Jakob Vogel ◽  
Marco Stampanoni

Modern detectors used at synchrotron tomographic microscopy beamlines typically have sensors with more than 4–5 megapixels and are capable of acquiring 100–1000 frames per second at full frame. As a consequence, a data rate of a few TB per day can easily be exceeded, reaching peaks of a few tens of TB per day for time-resolved tomographic experiments. These data need to be post-processed, analysed, stored and possibly transferred, imposing a significant burden on the IT infrastructure. Compression of tomographic data, as routinely done for diffraction experiments, is therefore highly desirable. This study considers a set of representative datasets and investigates the effect of lossy compression of the original X-ray projections on the final tomographic reconstructions. It demonstrates that a compression factor of at least three to four times does not generally impact the reconstruction quality. Compression with this factor could therefore be applied transparently to the user community, for instance prior to data archiving. Higher factors (six to eight times) can be achieved for tomographic volumes with a high signal-to-noise ratio, as is the case for phase-retrieved datasets. Although a relationship between the dataset signal-to-noise ratio and a safe compression factor exists, it is not a simple one; even when additional dataset characteristics such as image entropy and high-frequency content variation are considered, automatically optimizing the compression factor for each individual dataset beyond the conservative factor of three to four is not straightforward.
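As a toy illustration of the trade-off the study quantifies, the sketch below applies a simple lossy step (uniform quantization) followed by lossless deflate to a synthetic projection and reports the achieved compression factor and the residual error. The quantization scheme, the noise model and the array sizes are assumptions; the study's actual codecs, beamline data and reconstruction-quality metrics differ.

```python
import numpy as np
import zlib


def lossy_compress(proj: np.ndarray, quant_step: float):
    """Toy lossy scheme: uniform quantization followed by lossless deflate.
    Real beamline codecs differ; this only illustrates how a compression
    factor of a few times trades off against projection fidelity."""
    q = np.rint(proj / quant_step).astype(np.int16)
    compressed = zlib.compress(q.tobytes(), 9)
    factor = proj.astype(np.float32).nbytes / len(compressed)   # vs. float32 storage
    return q.astype(np.float32) * quant_step, factor


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical projection: smooth signal plus counting-like noise (high-SNR case).
    proj = 1000 + 50 * np.sin(np.linspace(0, 20, 2048 * 512)).reshape(2048, 512)
    proj = proj + rng.normal(0, np.sqrt(proj))
    for step in (1.0, 4.0, 16.0):
        rec, factor = lossy_compress(proj, step)
        rmse = np.sqrt(np.mean((rec - proj) ** 2))
        print(f"quant step {step:5.1f}  compression factor {factor:5.1f}x  RMSE {rmse:6.2f}")
```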


Author(s):  
Menglu Wang ◽  
Xueyang Fu ◽  
Zepei Sun ◽  
Zheng-Jun Zha

Existing deep learning-based image de-blocking methods use only pixel-level loss functions to guide network training. The JPEG compression factor, which reflects the degree of degradation, has not been fully exploited. However, because it is non-differentiable, the compression factor cannot be used directly to train deep networks. To solve this problem, we propose compression quality ranker-guided networks for this specific JPEG artifact removal task. We first design a quality ranker to measure the compression degree, which is highly correlated with the JPEG quality. Based on this differentiable ranker, we then propose a quality-related loss and a feature matching loss to guide de-blocking and perceptual quality optimization. In addition, we utilize dilated convolutions to extract multi-scale features, which enables our single model to handle multiple compression quality factors. Our method can implicitly use the information contained in the compression factors to produce better results. Experiments demonstrate that our model achieves comparable or even better performance in both quantitative and qualitative measurements.
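A schematic of how a differentiable quality ranker can add training signal to a de-blocking network: a pixel loss is combined with a ranker-score loss and a ranker-feature matching loss. The tiny ranker architecture, the dilation rate and the loss weights below are placeholders and not the authors' design.

```python
import torch
import torch.nn as nn


class TinyRanker(nn.Module):
    """Stand-in for a pretrained, differentiable compression-quality ranker.
    It maps an image to a scalar score assumed to correlate with JPEG quality."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),  # dilated conv
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f        # scalar quality score and intermediate features


def ranker_guided_loss(ranker, output, target, w_rank=0.1, w_feat=0.1):
    """Pixel loss plus two ranker-based terms (weights are illustrative):
    - quality loss: pushes the ranker score of the output towards that of the target,
    - feature-matching loss: aligns ranker features of output and target."""
    s_out, f_out = ranker(output)
    with torch.no_grad():
        s_tgt, f_tgt = ranker(target)
    pixel = nn.functional.l1_loss(output, target)
    quality = nn.functional.l1_loss(s_out, s_tgt)
    feat = nn.functional.l1_loss(f_out, f_tgt)
    return pixel + w_rank * quality + w_feat * feat


# Usage sketch with dummy tensors standing in for the de-blocking network output.
ranker = TinyRanker().eval()
deblocked = torch.rand(2, 3, 64, 64, requires_grad=True)
clean = torch.rand(2, 3, 64, 64)
loss = ranker_guided_loss(ranker, deblocked, clean)
loss.backward()
```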


2020 ◽  
Vol 9 (4) ◽  
pp. 8517-8527
Author(s):  
Suhas Yeshwant Nayak ◽  
Shenoy B. Satish ◽  
Mohamed Thariq Hameed Sultan ◽  
Chandrakant R. Kini ◽  
K. Rajath Shenoy ◽  
...  

2020 ◽  
Vol 5 (1) ◽  
pp. e000345
Author(s):  
Kin Wan ◽  
Jason Ki-kit Lau ◽  
Sin Wan Cheung ◽  
Pauline Cho

Objective: To present the study design and the baseline data of a prospective cohort study investigating the safety, refractive correction and effectiveness of myopia control in subjects fitted with orthokeratology (ortho-k) lenses of different compression factors.
Methods and analysis: This study is a 2-year longitudinal, double-masked, partially randomised study. Myopic children aged between 6 and 10 years are recruited and may choose to participate in either the ortho-k or the spectacle-wearing group. Subjects in the ortho-k group are randomly assigned to wear ortho-k lenses of either conventional compression factor (CCF, 0.75 D) or increased compression factor (ICF, 1.75 D). For the ortho-k subjects, the time and between-group effects within the first month of lens wear were analysed.
Results: Sixty-nine ortho-k subjects (CCF: 34; ICF: 35) and 30 control subjects were recruited. There were no significant differences in baseline demographic data among the three groups of subjects (p>0.19). At the 1-month visit, the first-fit success rates were 97% and 100% in the CCF and ICF ortho-k groups, respectively. A higher percentage of ICF subjects achieved full correction (CCF: 88.2%; ICF: 94.3%). The change in axial length differed significantly between the two ortho-k groups (CCF, 0.003 mm; ICF, −0.031 mm) (p<0.05). No significant differences in daytime vision or in the coverage and depth of corneal staining were observed between the two ortho-k groups at any visit (p>0.05).
Conclusion: ICF did not compromise corneal integrity or lens centration within the first month of lens wear. The preliminary performance of ortho-k lenses with an ICF of 1.00 D shows that they are safe to use in the longer term for the investigation of myopia control.
Trial registration number: NCT02643342.


2020 ◽  
Vol 41 (2) ◽  
pp. 139-144
Author(s):  
J.L.S. Lima ◽  
C.H.A. Ferraz

Abstract
Optical soliton compression in dispersion-decreasing fibers with a relaxing Kerr effect was investigated numerically. The results were compared with the instantaneous nonlinear response case. It was observed that, in general, the relaxing Kerr effect produces significantly worse results than the instantaneous nonlinear response. However, by an appropriate choice of fiber length, one can obtain a higher compression factor if the relaxation time is sufficiently short.
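A minimal split-step Fourier sketch, in normalized soliton units, of a fundamental soliton compressing in a fiber whose dispersion decreases exponentially, with the Kerr nonlinearity optionally relaxed through a first-order (Debye-type) response. The dispersion profile, relaxation times, grid and step sizes are all illustrative assumptions and do not reproduce the paper's model or parameters.

```python
import numpy as np


def fwhm(t, intensity):
    """Full width at half maximum of a single-peaked intensity profile (grid resolution)."""
    half = intensity.max() / 2
    above = t[intensity >= half]
    return above[-1] - above[0]


def relaxed_kerr(u, dt, tau_r):
    """Delayed nonlinear response tau_r * dN/dT + N = |u|^2, integrated as an IIR filter."""
    I = np.abs(u) ** 2
    if tau_r <= 0:
        return I                                  # instantaneous Kerr limit
    a = np.exp(-dt / tau_r)
    N = np.empty_like(I)
    N[0] = I[0]
    for k in range(1, len(I)):
        N[k] = a * N[k - 1] + (1 - a) * I[k]
    return N


def propagate(tau_r=0.0, L=10.0, nz=2000, nt=2048, t_max=40.0, sigma=0.15):
    """Split-step propagation of a fundamental soliton through a dispersion-decreasing
    fiber d(z) = exp(-sigma*z), in normalized units. Parameter values are illustrative."""
    t = np.linspace(-t_max, t_max, nt, endpoint=False)
    dt = t[1] - t[0]
    w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
    dz = L / nz
    u = 1 / np.cosh(t)                            # input fundamental soliton
    width_in = fwhm(t, np.abs(u) ** 2)
    for i in range(nz):
        d = np.exp(-sigma * (i * dz))             # locally decreasing dispersion
        u = np.fft.ifft(np.fft.fft(u) * np.exp(-0.5j * d * w ** 2 * dz))  # dispersion step
        u = u * np.exp(1j * relaxed_kerr(u, dt, tau_r) * dz)              # nonlinear step
    return width_in / fwhm(t, np.abs(u) ** 2)     # compression factor (FWHM ratio)


if __name__ == "__main__":
    for tau_r in (0.0, 0.1, 0.5):
        print(f"relaxation time {tau_r:.1f} -> compression factor {propagate(tau_r):.2f}")
```

Comparing runs with different relaxation times against the instantaneous case (tau_r = 0) mirrors the comparison described in the abstract, although the quantitative outcome depends entirely on the assumed parameters.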

