good compression
Recently Published Documents


TOTAL DOCUMENTS

37
(FIVE YEARS 10)

H-INDEX

4
(FIVE YEARS 0)

2021 ◽  
Author(s):  
Maria Isabel Arango Palacio ◽  
Isabella Montoya Henao ◽  
Andres Felipe Agudelo Ortega ◽  
Mauricio Toro

About 34% of the world's supply of food protein comes from livestock, and the need to supplement it increases the number of animals reared day by day. Nowadays, this process is not efficient because farmers lack the correct tools and devices to minimize their energy consumption. Accordingly, the objective of this project is to design an algorithm that compresses and decompresses images in order to optimize the energy required for classifying animals and obtaining their information. The algorithms we implemented to achieve this objective were lossy image compression with the Fast Fourier Transform and lossless image compression with Huffman coding; these gave us the best results in terms of execution-time complexity, minimal loss of information, and a good compression ratio.
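The abstract does not reproduce the authors' implementation; as a rough sketch of the lossy FFT idea (all parameters here, such as the fraction of coefficients kept, are hypothetical), one can zero out the weakest frequency coefficients and invert the transform:

```python
import numpy as np

def fft_compress(image, keep=0.25):
    """Lossy compression sketch: keep only the strongest FFT coefficients."""
    coeffs = np.fft.fft2(image)
    magnitudes = np.abs(coeffs)
    # Threshold at the (1 - keep) quantile; weaker coefficients are zeroed.
    threshold = np.quantile(magnitudes, 1.0 - keep)
    return coeffs * (magnitudes >= threshold)

def fft_decompress(coeffs):
    """Invert the transform; the tiny imaginary residue is numerical noise."""
    return np.real(np.fft.ifft2(coeffs))

# Toy 8x8 "image": a smooth diagonal ramp, whose energy concentrates
# in few frequency coefficients.
img = np.add.outer(np.arange(8.0), np.arange(8.0))
restored = fft_decompress(fft_compress(img, keep=0.25))
print(np.max(np.abs(img - restored)))  # small: the strong coefficients survive
```

The zeroed coefficient array is what a lossless back end (such as the Huffman coder mentioned above) would then encode compactly.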


Author(s):  
Abdul Rajak ◽  
Vilas H. Gaidhane ◽  
Aaron D’costa

Background: Network security is used to secure data transmission over wireless networks. Encryption of information plays a vital role in communication, since it protects the sender and receiver from hackers trying to access the information. Objectives: In recent years, protecting data has become a challenging task for researchers; hence, there is a need to improve the existing methods of secure data transmission. Method: In this paper, a new approach is proposed to implement a better security chip for secure data transmission. It is based on the combination of encryption and decryption as well as compression techniques. The proposed design focuses on reducing circuit delay using the compression approach. Results: Simulations were carried out using Xilinx and MATLAB software; the timing signals were observed in Xilinx, and the proposed algorithm was simulated and tested in MATLAB. Conclusion: The presented approach performed better and achieved a good compression ratio, and hence less information was lost at the receiving end.
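The chip design itself is not described in the abstract; the following sketch only illustrates the general compress-then-encrypt pipeline it combines, with a toy SHA-256 counter keystream standing in for the cipher (illustrative only, not a secure cipher and not the authors' design):

```python
import hashlib
import zlib

def toy_keystream(key: bytes):
    """Illustrative XOR keystream -- NOT a secure cipher."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def compress_then_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Compressing first shrinks the payload the cipher must process.
    compressed = zlib.compress(plaintext, level=9)
    return bytes(b ^ k for b, k in zip(compressed, toy_keystream(key)))

def decrypt_then_decompress(ciphertext: bytes, key: bytes) -> bytes:
    compressed = bytes(b ^ k for b, k in zip(ciphertext, toy_keystream(key)))
    return zlib.decompress(compressed)

msg = b"sensor reading 42 " * 100
ct = compress_then_encrypt(msg, b"secret")
print(len(ct), "encrypted bytes for", len(msg), "plaintext bytes")
```

Because the XOR stage is length-preserving, the transmitted size is the compressed size, which is the delay/bandwidth saving the paper attributes to its compression stage.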


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 196
Author(s):  
Shmuel T. Klein ◽  
Dana Shapira

It seems reasonable to expect from a good compression method that its output should not be further compressible, because it should behave essentially like random data. We investigate this premise for a variety of known lossless compression techniques, and find that, surprisingly, there is much variability in the randomness, depending on the chosen method. Arithmetic coding seems to produce perfectly random output, whereas that of Huffman or Ziv-Lempel coding still contains many dependencies. In particular, the output of Huffman coding has already been proven to be random under certain conditions, and we present evidence here that arithmetic coding may produce an output that is identical to that of Huffman.


2020 ◽  
Vol 1157 ◽  
pp. 47-51
Author(s):  
Cornelia Laura Sălcianu ◽  
Ilare Bordeaşu ◽  
Nicușor Alin Sîrbu ◽  
Rodica Bădărău ◽  
Gabriel Mălaimare ◽  
...  

Inconel 718 is a very difficult metal to machine because of its high plasticity. Lately, more and more researchers are interested in using it for cavitation-exposed parts, such as the plugs and drawers of valves. For this purpose, volumetric thermal treatments have been initiated to facilitate mechanical machining, aiming simultaneously to obtain good compression and cavitation resistance. Therefore, this paper presents the cavitation erosion behavior and cavitation resistance of Inconel 718 subjected to two thermal treatment regimes differentiated by duration (temperature 800 °C, with residence times of 5 and 10 hours, respectively). The assessment of the cavitation resistance provided by each heat treatment regime is based on the average durability cavitation parameter, as defined by K. Steller. The research was carried out using the standard vibratory device with piezo-ceramic crystals from the Cavitation Laboratory of the Politehnica University of Timisoara.


RSC Advances ◽  
2020 ◽  
Vol 10 (47) ◽  
pp. 28397-28407
Author(s):  
Long Qin ◽  
Jiang Yi ◽  
Lai Xuefei ◽  
Liao Li ◽  
Xie Kenan ◽  
...  

Silver nanoparticles and HAp particles were sequentially coated onto the surface of the G-β-TCP scaffold, so the composite had good compression strength and antibacterial properties.


The wavelet transform has been successfully applied in a number of fields, covering everything from pure mathematics to applied science. Numerous studies of the wavelet transform have proven its advantages in image processing and data compression, and have made it an encoding technique in recent data compression standards, alongside multi-resolution decomposition in signal and image processing applications. Pure software implementations of the Discrete Wavelet Transform (DWT), however, are the performance bottleneck in real-time systems. Therefore, hardware acceleration of the DWT has become a topic of contemporary research. For image compression using the 2-Dimensional DWT (2D-DWT), two filters are widely used: a highpass and a lowpass filter. Because the filter coefficients are irrational numbers, it is suggested that they be approximated using binary fractions. The accuracy and efficiency with which the filter coefficients are rationalized in the implementation affect the compression and critical hardware properties such as throughput and power consumption. A high-precision representation ensures good compression performance, but at the expense of increased hardware resources and processing time. Conversely, lower precision for the filter coefficients results in smaller, faster hardware, but at the expense of poorer compression performance.
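The precision trade-off can be sketched numerically (the bit widths and sample values below are hypothetical, and the Haar filter is used only as the simplest DWT whose coefficient 1/sqrt(2) is irrational): round the coefficient to a binary fraction and watch the analysis/synthesis round-trip error shrink as fractional bits are added.

```python
import math

def fixed_point(value, bits):
    """Approximate a coefficient by a binary fraction with `bits` fractional bits."""
    scale = 1 << bits
    return round(value * scale) / scale

# The Haar analysis/synthesis filters use 1/sqrt(2), an irrational coefficient.
c = 1 / math.sqrt(2)
errors = []
for bits in (4, 8, 16):
    approx = fixed_point(c, bits)
    a, b = 5.0, 3.0                               # a sample pair from the input
    lo, hi = approx * (a + b), approx * (a - b)   # analysis: lowpass / highpass
    a_rec = approx * (lo + hi)                    # synthesis of the first sample
    errors.append(abs(a - a_rec))
    print(f"{bits:2d} fractional bits: coeff ~= {approx:.8f}, "
          f"round-trip error = {errors[-1]:.2e}")
```

Each doubling of the fractional bit width buys reconstruction accuracy at the cost of wider multipliers, which is exactly the hardware trade-off described above.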


2020 ◽  
Vol 15 ◽  
pp. 155892502096882
Author(s):  
Eui Kyung Roh

In this study, the mechanical properties and preferences of natural and artificial leathers were analyzed. The leathers were classified based on mechanical properties affecting their preferences. The mechanical properties of the leathers were measured via the KES-FB system, and an expert survey was conducted to evaluate leather preference. Leathers possess different mechanical properties depending on the manufacturing process and structural characteristics. Furthermore, differences were observed in the mechanical properties of natural and artificial leather. The mechanical properties of the leathers were related to the preferences for hand and bags. Accordingly, the leather was classified into three clusters. The leathers that were preferred for hand were not preferred for bags, and those preferred for bags were not preferred for hand. Therefore, different development strategies are needed, depending on the type of leather. Natural leather for bags should be light and have good compression elasticity along with its existing mechanical properties, whereas artificial leather should be light and have improved tensile resilience.


Due to advances in digital technology, multimedia processing has become an essential requirement in many applications, which find wide use in mobile devices, personal computers (PCs), TV, surveillance, and satellite broadcast. It is also necessary to update video coding algorithms to meet the requirements of the latest hardware devices. Processing speed and bandwidth are essential parameters in these applications, and a good video compression standard can achieve them adequately. In the proposed system, the video coding standard is implemented in three important stages. The first stage uses multiwavelets to achieve a good compression rate while reducing the memory and bandwidth requirements. The second stage is Multi Stage Vector Quantization (MVSQ), which reduces the complexity of the search process and the size of the codebook. The third stage uses the Adaptive Diamond Refinement Search (ADRS) algorithm for motion estimation, which performs better than the Adaptive Diamond Orthogonal Search (ADOS) and Diamond Refinement Search (DRS) algorithms. The combination of multiwavelets, MVSQ, and ADRS gives high compression ratios. Preliminary results indicate that the proposed method performs well in terms of average number of search points, PSNR values, and compression rates.
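The ADRS variant itself is not specified in the abstract; the classic diamond search it refines can be sketched as follows (the frame contents and block coordinates are synthetic): a large diamond pattern is repeated until the centre wins, then one small-diamond step polishes the motion vector.

```python
import numpy as np

def sad(block, frame, y, x):
    """Sum of absolute differences; inf when the candidate leaves the frame."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
        return np.inf
    return np.abs(block - frame[y:y + h, x:x + w]).sum()

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]          # large diamond pattern
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]    # small diamond pattern

def diamond_search(block, frame, y0, x0):
    cy, cx = y0, x0
    while True:
        costs = {(cy + dy, cx + dx): sad(block, frame, cy + dy, cx + dx)
                 for dy, dx in LDSP}
        best = min(costs, key=costs.get)
        if best == (cy, cx):          # centre is cheapest: switch to small diamond
            break
        cy, cx = best
    costs = {(cy + dy, cx + dx): sad(block, frame, cy + dy, cx + dx)
             for dy, dx in SDSP}
    cy, cx = min(costs, key=costs.get)
    return cy - y0, cx - x0           # motion vector relative to the start

# Synthetic check: a smooth ramp frame shifted by (2, 3) rows/columns.
ref = np.add.outer(16 * np.arange(32), np.arange(32))
cur = np.roll(ref, (2, 3), axis=(0, 1))
mv = diamond_search(cur[8:16, 8:16], ref, 8, 8)
print(mv)  # (-2, -3): the block came from 2 rows up and 3 columns left
```

Search-point counts of pattern searches like this are what the paper's "average number of search points" metric compares across ADRS, ADOS, and DRS.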


Indexing of code vectors is one of the most difficult tasks in lattice vector quantization. In this work we focus on the problem of efficient indexing and coding of indexes. Index assignment to the quantized lattice vectors is computed by a direct indexing method, through which a vector can be represented by a scalar quantity, namely its index. This eliminates the need to calculate the prefix (the index of the radius R, or norm) and the suffix (the index of the position of the vector on the shell of radius R), and also eliminates index assignment to the suffix based on lattice point enumeration or leader indexing. Two-value Golomb coding is used to enumerate the indices of the quantized lattice vectors, and we use analytical means to emphasize the dominance of the two-value Golomb code over the one-value Golomb code. This method is applied to image compression: indexes of a particular subband of test images such as Barbara, Peppers, and Boat are coded using 2-value Golomb coding (2-V GC), and the compression ratio is calculated. We demonstrate the effectiveness of the 2-V GC when the input is scanned column-wise as compared to row-wise. Experimentally, we also show that a good compression ratio is achieved when only the higher-order bits of the indexes are encoded instead of the complete bits.
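For reference, the classic one-value Golomb code that the paper's 2-V GC improves on can be sketched as follows (the divisor m and the sample indices are arbitrary choices for illustration): the quotient is sent in unary, the remainder in truncated binary.

```python
import math

def truncated_binary(r, m):
    """Truncated binary code for a remainder r in [0, m)."""
    k = math.ceil(math.log2(m))
    t = (1 << k) - m
    if r < t:
        return format(r, f"0{k - 1}b") if k > 1 else ""
    return format(r + t, f"0{k}b")

def golomb_encode(n, m):
    """One-value Golomb codeword for a non-negative integer n with divisor m."""
    q, r = divmod(n, m)
    return "1" * q + "0" + truncated_binary(r, m)

def golomb_decode_stream(bits, m, count):
    """Decode `count` codewords from a concatenated bit string."""
    k = math.ceil(math.log2(m))
    t = (1 << k) - m
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":          # unary quotient
            q += 1
            i += 1
        i += 1                          # skip the terminating 0
        x = int(bits[i:i + k - 1] or "0", 2)
        i += k - 1
        if x >= t:                      # remainder needed the extra bit
            x = ((x << 1) | int(bits[i])) - t
            i += 1
        out.append(q * m + x)
    return out

indices = [0, 3, 1, 7, 2, 12, 5]        # hypothetical subband indexes
encoded = "".join(golomb_encode(n, 4) for n in indices)
decoded = golomb_decode_stream(encoded, 4, len(indices))
print(encoded, decoded)
```

Small indices get short codewords, which is why a Golomb-family code suits the geometrically distributed indexes of a quantized subband; the paper's two-value variant generalizes this single-divisor scheme.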


2019 ◽  
Vol 3 (1) ◽  
pp. 1
Author(s):  
Andrian Andrian

<p>Nowadays, the demand for digital images is driven by people's desire to capture moments of their lives as digital images. A good digital image has a large file size, so saving many images requires more memory. Compression is an image processing technique for decreasing file size. By combining the wavelet transform method with Principal Component Analysis, the developed application can produce a good compression technique.</p>
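The abstract does not detail the PCA stage; a minimal sketch of PCA-style compression (with an arbitrary toy image and a hypothetical component count k) stores only the top-k principal components of the mean-centred image via the SVD:

```python
import numpy as np

def pca_compress(image, k):
    """Keep the top-k principal components (via SVD) of a mean-centred image."""
    mean = image.mean(axis=0)
    u, s, vt = np.linalg.svd(image - mean, full_matrices=False)
    # Stored data is ~ k*(rows + cols) numbers instead of rows*cols.
    return u[:, :k] * s[:k], vt[:k], mean

def pca_decompress(scores, components, mean):
    return scores @ components + mean

# Toy low-rank "image": a sum of row and column ramps (rank 2).
img = np.add.outer(np.arange(32.0), np.arange(32.0))
restored = pca_decompress(*pca_compress(img, k=2))
print(np.max(np.abs(img - restored)))  # near zero: the image is (almost) rank 2
```

On real images the reconstruction is lossy, and a wavelet front end like the one mentioned above concentrates energy so that fewer components are needed.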

