Studies on pre-treatment by compression for wood impregnation I: effects of compression ratio, compression direction, compression speed and compression-unloading place on the liquid impregnation of wood

2018 ◽  
Vol 64 (5) ◽  
pp. 551-556 ◽  
Author(s):  
Youke Zhao ◽  
Zhihui Wang ◽  
Ikuho Iida ◽  
Juan Guo


2019 ◽  
Vol 28 (06) ◽  
pp. 1950106
Author(s):  
Qian Dong ◽  
Bing Li

Hardware-based dictionary compression is widely adopted to meet the high-speed requirements of real-time data processing. A hash function helps manage a large dictionary and thereby improves the compression ratio, but it is prone to collisions, so some phrases returned by the match search are not true matches. This paper presents a novel match search approach called dual chaining hash refining, which improves the efficiency of match search. Experimental results show that our method has a clear advantage in compression speed over previously published approaches that use a single hash function.
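
As a rough illustration of the idea, the sketch below implements a software analogue of chained-hash match search in which a second, cheaper hash value is stored with every dictionary entry and compared first, so that most hash collisions are rejected without a byte-wise comparison. The hash functions, data structures, and parameters here are illustrative assumptions; the paper describes a hardware design, not this code.

```python
# Software analogue of chained-hash match search with a secondary "refining"
# hash. Hypothetical simplification for illustration; the paper's scheme is a
# hardware design.

MIN_MATCH = 3  # shortest phrase worth matching

def h1(b: bytes) -> int:
    # primary hash: selects the chain of candidate positions
    return (b[0] * 131 + b[1] * 31 + b[2]) & 0xFFFF

def h2(b: bytes) -> int:
    # secondary hash: cheap fingerprint stored with each chain entry
    return (b[0] ^ (b[1] << 2) ^ (b[2] << 4)) & 0xFF

def find_longest_match(data, pos, heads, chains, fps):
    """Walk the chain for the 3-byte prefix at `pos`; entries whose secondary
    fingerprint differs are rejected without a byte-wise comparison."""
    if pos + MIN_MATCH > len(data):
        return 0, 0
    key, fp = h1(data[pos:pos + MIN_MATCH]), h2(data[pos:pos + MIN_MATCH])
    best_len, best_dist = 0, 0
    cand = heads.get(key, -1)
    while cand != -1:
        if fps[cand] == fp:                       # refining step
            length = 0
            while (pos + length < len(data)
                   and data[cand + length] == data[pos + length]):
                length += 1
            if length >= MIN_MATCH and length > best_len:
                best_len, best_dist = length, pos - cand
        cand = chains.get(cand, -1)               # older position, same h1
    return best_len, best_dist

def insert_position(data, pos, heads, chains, fps):
    """Register `pos` at the head of its chain for future searches."""
    if pos + MIN_MATCH > len(data):
        return
    key = h1(data[pos:pos + MIN_MATCH])
    fps[pos] = h2(data[pos:pos + MIN_MATCH])
    chains[pos] = heads.get(key, -1)
    heads[key] = pos

if __name__ == "__main__":
    data = b"abracadabra abracadabra"
    heads, chains, fps = {}, {}, {}
    for p in range(len(data)):
        length, dist = find_longest_match(data, p, heads, chains, fps)
        if length >= MIN_MATCH:
            print(f"pos {p}: match of length {length} at distance {dist}")
        insert_position(data, p, heads, chains, fps)
```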


2018 ◽  
Author(s):  
Xavier Delaunay ◽  
Aurélie Courtois ◽  
Flavien Gouillon

Abstract. The increasing volume of scientific datasets requires the use of compression to reduce data storage and transmission costs, especially for the oceanographic and meteorological datasets generated by Earth observation mission ground segments. These data are mostly produced as NetCDF files. Indeed, the NetCDF-4/HDF5 file formats are widely used throughout the global scientific community because of the useful features they offer. In particular, HDF5 provides a dynamically loaded filter plugin mechanism that allows users to write filters, such as compression/decompression filters, to process the data before they are read from or written to disk. In this work, we evaluate the performance of lossy and lossless compression/decompression methods through NetCDF-4 and HDF5 tools on analytical and real scientific floating-point datasets. We also introduce the Digit Rounding algorithm, a new relative error-bounded data reduction method inspired by the Bit Grooming algorithm. The Digit Rounding algorithm achieves a high compression ratio while preserving a given number of significant digits in the dataset. It achieves a higher compression ratio than the Bit Grooming algorithm while keeping a similar compression speed.
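
To make the idea concrete, here is a simplified sketch of relative error-bounded rounding in the spirit of Digit Rounding: each value is quantized to a power-of-two step chosen so that roughly `nsd` significant decimal digits survive, which zeroes low-order mantissa bits and makes the data far more compressible by a lossless back end. This is an illustration of the principle under stated assumptions, not the published Digit Rounding implementation.

```python
import math
import numpy as np

def digit_round(values: np.ndarray, nsd: int) -> np.ndarray:
    """Quantize each value to a power-of-two step so that roughly `nsd`
    significant decimal digits are preserved. Simplified sketch of the
    principle, not the published Digit Rounding code."""
    out = np.empty(values.shape, dtype=np.float64)
    for i, v in enumerate(values.ravel()):
        if v == 0 or not math.isfinite(v):
            out.flat[i] = v
            continue
        # decimal place of the last digit to keep
        last_place = math.floor(math.log10(abs(v))) - (nsd - 1)
        # largest power-of-two step not exceeding that decimal precision
        step = 2.0 ** math.floor(math.log2(10.0 ** last_place))
        out.flat[i] = round(v / step) * step      # trailing mantissa bits -> 0
    return out

data = np.array([3.14159265, 0.00123456, 123456.789])
print(digit_round(data, nsd=4))   # each value accurate to ~4 significant digits
```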


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Qin Jiancheng ◽  
Lu Yiqin ◽  
Zhong Yu

Because wireless networks have limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transmission of big data in the IoT (Internet of Things). However, traditional compression and encryption techniques are neither adequate nor efficient. To solve this problem, this paper presents a combined parallel algorithm named the "CZ algorithm", which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, interleaves the coding steps of compression and encryption, and supports data windows of up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt big data as a chaotic cryptosystem without decreasing the compression speed. A shareware tool named "ComZip" has been developed based on the CZ algorithm. The experimental results show that ComZip on a 64-bit system achieves a better compression ratio than WinRAR and 7-Zip, and it can be faster than 7-Zip for big data compression. In addition, ComZip encrypts big data without extra consumption of computing resources.
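
For illustration only, the toy sketch below shows the general shape of such a pipeline: blocks are compressed in parallel and the compressed output is then XOR-ed with a keystream derived from a chaotic (logistic) map. It is not the CZ algorithm or ComZip, the block size is an arbitrary assumption (ComZip uses far larger data windows), and the logistic-map cipher is not cryptographically secure.

```python
# Toy compress-then-encrypt pipeline: parallel block compression followed by a
# keystream cipher derived from a chaotic (logistic) map. Illustration only --
# not the CZ algorithm or ComZip, and not cryptographically secure.
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB blocks (assumption; ComZip uses far larger windows)

def logistic_keystream(n, x0):
    """n keystream bytes from the logistic map x -> 3.99 * x * (1 - x)."""
    out, x = bytearray(n), x0
    for i in range(n):
        x = 3.99 * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return bytes(out)

def compress_and_encrypt(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor() as pool:                    # parallel compression
        compressed = list(pool.map(lambda c: zlib.compress(c, 6), chunks))
    encrypted = []
    for idx, block in enumerate(compressed):              # keystream encryption
        ks = logistic_keystream(len(block), x0=0.7 + 0.001 * (idx % 100))
        encrypted.append(bytes(a ^ b for a, b in zip(block, ks)))
    return encrypted

if __name__ == "__main__":
    blocks = compress_and_encrypt(b"example payload " * 100_000)
    print(sum(len(b) for b in blocks), "encrypted bytes")
```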


2019 ◽  
Vol 12 (9) ◽  
pp. 4099-4113
Author(s):  
Xavier Delaunay ◽  
Aurélie Courtois ◽  
Flavien Gouillon

Abstract. The increasing volume of scientific datasets requires the use of compression to reduce data storage and transmission costs, especially for the oceanographic or meteorological datasets generated by Earth observation mission ground segments. These data are mostly produced in netCDF files. Indeed, the netCDF-4/HDF5 file formats are widely used throughout the global scientific community because of the useful features they offer. HDF5 in particular offers a dynamically loaded filter plugin so that users can write compression/decompression filters, for example, and process the data before reading or writing them to disk. This study evaluates lossy and lossless compression/decompression methods through netCDF-4 and HDF5 tools on analytical and real scientific floating-point datasets. We also introduce the Digit Rounding algorithm, a new relative error-bounded data reduction method inspired by the Bit Grooming algorithm. The Digit Rounding algorithm offers a high compression ratio while keeping a given number of significant digits in the dataset. It achieves a higher compression ratio than the Bit Grooming algorithm with slightly lower compression speed.
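
As a concrete example of the filter mechanism the paper relies on, the snippet below writes a floating-point array through HDF5's built-in lossless filter pipeline (byte shuffle followed by Deflate/gzip) using h5py and reports the resulting compression ratio. Lossy preprocessing such as Digit Rounding would instead be applied through a dynamically loaded filter plugin; the dataset and parameters here are arbitrary examples.

```python
# Writing a float dataset through HDF5's built-in lossless filter pipeline
# (shuffle + gzip) with h5py, then reporting the on-disk compression ratio.
import h5py
import numpy as np

data = np.random.default_rng(0).normal(size=(512, 512)).astype(np.float32)

with h5py.File("demo.h5", "w") as f:
    dset = f.create_dataset(
        "field",
        data=data,
        chunks=(64, 64),        # filters are applied per chunk
        shuffle=True,           # byte shuffle helps Deflate on float data
        compression="gzip",     # built-in Deflate filter; plugins go here too
        compression_opts=4,     # Deflate level 4
    )
    stored = dset.id.get_storage_size()

print(f"raw: {data.nbytes} B, stored: {stored} B, "
      f"ratio: {data.nbytes / stored:.2f}")
```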


Author(s):  
Gerald Schaefer ◽  
Roman Starosolski

In this article, we present experiments aimed at identifying a suitable compression algorithm for colour retina images (Schaefer & Starosolski, 2006). Such an algorithm, in order to prove useful in a real-life PACS, should not only reduce the file size of the images significantly but also be fast enough, both for compression and decompression. Furthermore, it should be covered by international standards such as ISO standards and, in particular for medical imaging, the Digital Imaging and Communications in Medicine (DICOM) standard (Mildenberger, Eichelberg, & Martin, 2002; National Electrical Manufacturers Association, 2004). For our study, we therefore selected those compression algorithms that are supported in DICOM, namely TIFF PackBits (Adobe Systems Inc., 1995), Lossless JPEG (Langdon, Gulati, & Seiler, 1992), JPEG-LS (ISO/IEC, 1999), and JPEG2000 (ISO/IEC, 2002). For comparison, we also included CALIC (Wu, 1997), which is often employed for benchmarking compression algorithms. All algorithms were evaluated in terms of compression ratio, which describes the reduction in file size, and speed. For speed, we consider both the time it takes to encode an image (compression speed) and the time it takes to decode it (decompression speed), as both are relevant within a PACS.
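
The three metrics can be stated directly: compression ratio is the original size divided by the compressed size, while compression and decompression speed are throughput figures. The sketch below computes them for a sample file, with zlib standing in for the DICOM codecs (PackBits, Lossless JPEG, JPEG-LS, JPEG 2000), which require external libraries; the file path is a placeholder.

```python
# The three evaluation metrics, computed with zlib as a stand-in codec.
import time
import zlib

def benchmark(raw: bytes):
    t0 = time.perf_counter()
    packed = zlib.compress(raw, 9)       # encode
    t1 = time.perf_counter()
    zlib.decompress(packed)              # decode
    t2 = time.perf_counter()
    mib = len(raw) / 2**20
    return {
        "compression ratio": len(raw) / len(packed),
        "compression speed (MiB/s)": mib / (t1 - t0),
        "decompression speed (MiB/s)": mib / (t2 - t1),
    }

with open("retina_image.ppm", "rb") as fh:   # placeholder sample image
    print(benchmark(fh.read()))
```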


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Minhyeok Cho ◽  
Albert No

Abstract Background Advances in sequencing technology have drastically reduced sequencing costs. As a result, the amount of sequencing data is increasing explosively. Since FASTQ files (the standard sequencing data format) are huge, there is a need for efficient compression of FASTQ files, especially of the quality scores. Several quality score compression algorithms have recently been proposed, mainly focused on lossy compression to further boost the compression rate. However, for clinical applications and archiving purposes, lossy compression cannot replace lossless compression. One of the main challenges for lossless compression is time complexity: it can take thousands of seconds to compress a 1 GB file. Compression algorithms should also offer desirable features such as random access. Therefore, there is a need for a fast lossless compressor with a reasonable compression rate and random access functionality. Results This paper proposes a Fast and Concurrent Lossless Quality scores Compressor (FCLQC) that supports random access and achieves a lower running time based on concurrent programming. Experimental results reveal that FCLQC is significantly faster than the baseline compressors on compression and decompression, at the expense of compression ratio. Compared to LCQS (the baseline quality score compression algorithm), FCLQC shows at least a 31x improvement in compression speed in all settings, while the degradation in compression ratio is at most 13.58% (8.26% on average). Compared to general-purpose compressors (such as 7-zip), FCLQC shows a 3x faster compression speed while achieving better compression ratios, by at least 2.08% (4.69% on average). Moreover, its random-access decompression speed also outperforms the others. The concurrency of FCLQC is implemented using Rust; the performance gain increases near-linearly with the number of threads. Conclusion The superiority of its compression and decompression speed makes FCLQC a practical lossless quality score compressor candidate for speed-sensitive applications of DNA sequencing data. FCLQC is available at https://github.com/Minhyeok01/FCLQC and is freely available for non-commercial use.
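
As a rough illustration of the block-based, concurrent approach (not the Rust FCLQC implementation), the Python sketch below extracts the quality line of each FASTQ record, groups the lines into fixed-size blocks, and compresses the blocks independently and in parallel; independent blocks are also what makes random access possible. The block size, codec, and file path are assumptions for illustration.

```python
# Block-based, concurrent lossless compression of FASTQ quality scores.
# Sketch of the general idea only; FCLQC itself is implemented in Rust.
import zlib
from concurrent.futures import ProcessPoolExecutor

RECORDS_PER_BLOCK = 10_000   # random-access granularity (assumption)

def read_quality_lines(fastq_path):
    """Yield the quality line (every 4th line) of each FASTQ record."""
    with open(fastq_path, "rb") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 3:
                yield line.rstrip(b"\n")

def compress_block(lines):
    return zlib.compress(b"\n".join(lines), 6)

def compress_quality_scores(fastq_path):
    blocks, current = [], []
    for q in read_quality_lines(fastq_path):
        current.append(q)
        if len(current) == RECORDS_PER_BLOCK:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    with ProcessPoolExecutor() as pool:      # blocks compressed concurrently
        return list(pool.map(compress_block, blocks))

if __name__ == "__main__":
    compressed = compress_quality_scores("reads.fastq")   # placeholder path
    # random access: record k lives in compressed[k // RECORDS_PER_BLOCK]
    print(len(compressed), "compressed blocks")
```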


BioResources ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. 1740-1756
Author(s):  
Ruili Wang ◽  
Jin Yu ◽  
Tiejun Wang ◽  
Tieliang Wang ◽  
Xiang Li

To improve the comprehensive utilization of planting and breeding waste resources, corn straw and cow manure were used as raw materials to explore the molding process parameters for preparing agricultural fertilizers via compression after mixing. The pressure, compression speed, and holding time were the experimental factors, while the block drainage, compression ratio, and dimensional stability were used as evaluation indicators. This study analyzed the influence of these factors on the quality evaluation indices of the formed blocks. The results show that the best factor ranges were as follows: a pressure of 15 kN to 24 kN, a compression speed of 200 mm/min to 400 mm/min, and a holding time of 30 s to 60 s. A ternary quadratic regression, rotating-combination test design was used to optimize the combination of forming parameters, and the optimum was verified experimentally. The optimized parameters were a pressure of 23.9 kN, a compression speed of 276 mm/min, and a holding time of 53.1 s, which yielded a formed block with a block drainage of 6.29%, a compression ratio of 3.37, and a dimensional stability of 86.5%. These results can provide a reference for the molding process and equipment development of corn straw and cow manure mixed fertilizer.
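
For readers unfamiliar with the statistical design, the sketch below fits a full second-order (ternary quadratic) response-surface model over the three factors using a face-centred central composite layout spanning the recommended ranges; the response values are synthetic placeholders, not the paper's measurements.

```python
# Fitting a ternary quadratic response-surface model over the three factors
# (pressure in kN, compression speed in mm/min, holding time in s) on a
# face-centred central composite design. Response values are synthetic.
import numpy as np

def quadratic_design(X):
    """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

# design points spanning the recommended ranges (15-24 kN, 200-400 mm/min, 30-60 s)
corners = [[p, s, t] for p in (15, 24) for s in (200, 400) for t in (30, 60)]
axial = [[15, 300, 45], [24, 300, 45], [19.5, 200, 45],
         [19.5, 400, 45], [19.5, 300, 30], [19.5, 300, 60]]
X = np.array(corners + axial + [[19.5, 300, 45]], dtype=float)

# synthetic stand-in for a measured indicator such as the compression ratio
y = (2.0 + 0.06 * X[:, 0] + 0.002 * X[:, 1] + 0.01 * X[:, 2]
     - 0.002 * (X[:, 0] - 20) ** 2 - 0.0001 * (X[:, 2] - 45) ** 2)

coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 5))
```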

