Perbandingan Algoritma Huffman Dan Run Length Encoding Untuk Kompresi File Audio (Comparison of the Huffman and Run Length Encoding Algorithms for Audio File Compression)

2018 ◽  
Vol 1 (1) ◽  
pp. 010-015
Author(s):  
Helbert Sinaga ◽  
Poltak Sihombing ◽  
Handrizal Handrizal

This research was conducted to compare the results of compression and decompression of *.mp3 and *.wav audio files. Compression reduces the number of bits needed to store or transmit a file. In this study, the researchers used the Huffman algorithm and Run Length Encoding (RLE), both of which are lossless compression techniques. The Huffman algorithm compresses data in three stages, namely tree construction, encoding and decoding, and works character by character. The run-length technique, by contrast, works on sequences of consecutive characters, replacing each uninterrupted run of identical bytes with a single repetition count. The implementation of the Huffman algorithm and Run Length Encoding aimed to compress *.mp3 and *.wav audio files so that the compressed file is smaller than the original; the parameters used to measure the performance of the algorithms were the compression ratio and the resulting complexity. For *.mp3 audio files, the Huffman algorithm achieved an average compression ratio of 1.204% while RLE achieved -94.44%; for *.wav audio files, the averages were 28.954% for Huffman and -45.91% for RLE.
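As a sketch of the run-length stage described above (the paper's exact implementation is not given; the function names here are illustrative), a byte-level encoder can replace each run of identical bytes with a (count, value) pair. On already-compressed *.mp3 payloads, runs of identical bytes are rare, so each byte becomes two bytes and the output roughly doubles, which is consistent with the negative RLE ratios reported:

```python
def rle_encode(data: bytes) -> bytes:
    """Naive byte-level RLE: each run becomes a (count, value) pair.
    Runs longer than 255 bytes are split across pairs."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Inverse: expand each (count, value) pair back into a run."""
    out = bytearray()
    for i in range(0, len(encoded), 2):
        out += bytes([encoded[i + 1]]) * encoded[i]
    return bytes(out)
```

Note that for input with no repeated bytes, `rle_encode` emits exactly twice the input length, i.e. a negative compression ratio.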

2016 ◽  
Vol 12 (2) ◽  
Author(s):  
Yosia Adi Jaya ◽  
Lukas Chrisantyo ◽  
Willy Sudiarto Raharjo

Data compression can save storage space and accelerate data transfer. Among the many compression algorithms, Run Length Encoding (RLE) is simple and fast. RLE can be used to compress many types of data; however, it is not very effective for lossless image compression because neighboring pixels usually differ slightly. This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of Relative Encoding. YRL can treat neighboring pixels as having the same value by saving those small differences (relative values) separately. Tests on various standard test images show that YRL has an average compression ratio of 75.805% for 24-bit bitmaps and 82.237% for 8-bit bitmaps, while RLE has an average compression ratio of 100.847% for 24-bit bitmaps and 97.713% for 8-bit bitmaps.
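The YRL format itself is not specified in the abstract, but the underlying idea of relative (delta) encoding can be sketched as follows (assumed 8-bit pixel values; function names are illustrative): storing each pixel as its difference from the left neighbor turns slowly varying rows into runs of small values that an RLE stage can then compress.

```python
def delta_encode(pixels):
    """Relative (delta) encoding: store each pixel as its difference
    from the previous one, wrapped modulo 256 to keep one byte each."""
    prev, out = 0, []
    for p in pixels:
        out.append((p - prev) % 256)
        prev = p
    return out

def delta_decode(deltas):
    """Inverse: accumulate the deltas back into absolute pixel values."""
    prev, out = 0, []
    for d in deltas:
        prev = (prev + d) % 256
        out.append(prev)
    return out
```

A gradient row such as [10, 11, 12, 12] becomes [10, 1, 1, 0], i.e. mostly near-zero values, which is far friendlier to run-length coding than the original values.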


Author(s):  
B. J.S. Sadiq ◽  
V. Yu. Tsviatkou ◽  
M. N. Bobov

The aim of this work is to reduce the computational complexity of lossless compression in the spatial domain through combined coding (arithmetic and Run-Length Encoding) of the series of bits in the bit planes. Known effective compression encoders encode the bit planes of the image or of the transform coefficients separately, which increases computational complexity because each pixel is processed multiple times. The paper proposes rules for combined coding, and combined encoders for the bit planes of pixel differences of images with tunable and constant structures, which have lower computational complexity and the same compression ratio as an arithmetic encoder of bit planes.


Author(s):  
Andreas Soegandi

The purpose of this study was to perform lossless compression on uncompressed audio files to minimize file size without reducing quality. The application was developed using the entropy-coding compression method with the Rice coding technique. The resulting compression ratio is good enough, and the approach is easy to build on because the algorithm is quite simple.
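The abstract does not give the author's parameter choices, but standard Rice coding itself is simple enough to sketch: a nonnegative integer n with parameter k is written as the quotient n >> k in unary followed by the low k bits of n in binary.

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code of a nonnegative integer: quotient in unary
    (q ones then a terminating zero), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    """Inverse: count leading ones for the quotient, then read k bits."""
    q = bits.index("0")
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, 9 with k = 2 encodes as "11001" (quotient 2 in unary, remainder 01). Rice coding is effective on audio residuals because small values, which dominate, get short codes.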


Author(s):  
Riyo Oktavianty Finola

The size of an audio file can make data transmission slow and can waste storage space. Compression is therefore performed to shrink the contents of the audio file. One compression technique is the lossless technique, a method in which the compressed audio file can be restored to the file it was before compression without losing any of the previous information. This study applies the Interpolative coding algorithm to mp3 audio files. Interpolative coding is an innovative way to assign dynamic codes to data symbols; it differs from other compression methods because the code it assigns to an individual symbol is not static. The design of this system consists of two main processes, compression and decompression, together with the calculation of the performance measures Compression Ratio (CR) and Redundancy. Compression produces a new file with the *.ipc extension containing the compressed bit string, which can then be decompressed. The application is designed with a single form that contains both the compression and the decompression process; compression takes an input file with the *.mp3 extension and produces output with the *.ipc extension, and the size of the compressed audio file is smaller than the previous file size.
Keywords: Compression, Audio File, Interpolative coding algorithm
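The paper's exact variant is not given in the abstract; as a hedged illustration of why interpolative codes are "not static", here is a sketch of classic binary interpolative coding of a sorted integer list (illustrative names, no truncated-binary refinement): the middle element is written with just enough bits for its current bounds, so the code width per symbol depends on context.

```python
from math import ceil, log2

def interpolative_encode(nums, lo, hi, out):
    """Binary interpolative coding of a sorted list whose values lie in
    [lo, hi]: emit the middle element relative to its tightest possible
    bounds, then recurse on the two halves with narrowed ranges."""
    if not nums:
        return
    mid = len(nums) // 2
    x = nums[mid]
    lo2 = lo + mid                     # smallest value x could take
    hi2 = hi - (len(nums) - 1 - mid)   # largest value x could take
    span = hi2 - lo2 + 1
    width = ceil(log2(span)) if span > 1 else 0  # 0 bits if forced
    if width:
        out.append(format(x - lo2, f"0{width}b"))
    interpolative_encode(nums[:mid], lo, x - 1, out)
    interpolative_encode(nums[mid + 1:], x + 1, hi, out)
```

Encoding [2, 5, 7] over the range [0, 9] yields codewords of 3, 3 and 2 bits; the same symbol value would get a different width in a different context, which is the dynamic behavior the abstract describes.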


Author(s):  
M. A. Danilov ◽  
M. V. Drobysh ◽  
A. N. Dubovitsky ◽  
F. G. Markov ◽  
...  

Restrictions on emissions for civil aircraft engines, on the one hand, and the need to increase engine efficiency, on the other, cause difficulties in the development of low-emission combustors for such engines.


2016 ◽  
Vol 819 ◽  
pp. 202-206
Author(s):  
Reza Maziar ◽  
Kasni Sumeru ◽  
M.Y. Senawi ◽  
Farid Nasir Ani

In this study, two experiments were performed, one with the conventional compression refrigeration cycle (CRC) and the other with an ejector refrigeration cycle (ERC). The CRC system for automotive air conditioning was designed, fabricated and experiments were conducted. The system was then retrofitted with an ejector as the expansion device and experiments were repeated for the ERC system. Calculations of the entrainment ratio, compressor compression ratio and coefficient of performance (COP) were made for each cycle. The calculations showed that ERC has some advantages over the CRC. In this study, an average improvement of 5% in COP has been obtained for the ERC compared with the CRC.
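The abstract mentions calculating the entrainment ratio, the compressor compression ratio and the COP for each cycle; the standard definitions of two of these (the paper's exact measurement setup is not given) are simple enough to state directly:

```python
def cop(cooling_capacity_kw: float, compressor_power_kw: float) -> float:
    """Coefficient of performance: useful cooling delivered per unit
    of compressor work input."""
    return cooling_capacity_kw / compressor_power_kw

def entrainment_ratio(secondary_flow_kg_s: float, primary_flow_kg_s: float) -> float:
    """Ejector entrainment ratio: entrained (secondary) mass flow
    divided by motive (primary) mass flow."""
    return secondary_flow_kg_s / primary_flow_kg_s
```

The reported 5% COP improvement of the ERC follows from the ejector recovering some expansion work that a conventional expansion valve simply throttles away.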


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention for interacting with robots. One typical HRI scenario is that a human selects an object by gaze and a robotic manipulator picks the object up. In this work, we propose an approach, GazeEMD, that can be used to detect whether a human is looking at an object in HRI applications. We use Earth Mover's Distance (EMD) to measure the similarity between the hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human's visual intention is on the object. We compare our approach with a fixation-based method and with HitScan with a run length in a scenario of selecting daily objects by gaze. Our experimental results indicate that GazeEMD has higher accuracy and is more robust to noise than the other approaches. Hence, users can lessen their cognitive load by using our approach in real-world HRI scenarios.
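The GazeEMD formulation operates on gaze distributions, which the abstract does not spell out; as a hedged one-dimensional illustration (the paper presumably works in two dimensions over image coordinates), EMD between two equal-mass histograms reduces to accumulating the running imbalance that must be "moved" between bins:

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal
    total mass: sum of absolute cumulative differences bin by bin."""
    carry, total = 0.0, 0.0
    for a, b in zip(p, q):
        carry += a - b          # mass still owed to later bins
        total += abs(carry)     # cost of carrying it one bin further
    return total
```

Two identical gaze histograms give a distance of 0, while mass concentrated at opposite ends gives the maximum transport cost, so a low EMD score indicates the actual gaze matches the hypothetical gaze at the object.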


Lipar ◽  
2020 ◽  
Vol XXI (73) ◽  
pp. 203-216
Author(s):  
Jovana Milovanović

This article discusses reception and production of academic vocabulary among native speakers of Serbian language. Academic vocabulary is one of the key elements of academic language competence, and a modest lexicon and underdeveloped academic language competence can cause problems in both comprehension and production. In this research, we used a vocabulary test consisting of 12 items taken from general culture entrance exams used at the Faculty of Philosophy, University of Belgrade. The participants are BA students of French language at the Faculty of Philology, University of Belgrade, years 1-4. The participants were instructed to provide a synonym or a definition for each item, as well as a sentence containing the given word. The aim of this research is to highlight issues in comprehension of academic vocabulary and establish the influence of factors such as word etymology or university level on the success of the participants. We analysed the results and classified them in three categories: correct, incorrect and unanswered. The majority of participants successfully identified just half of the given words (in order of success: poliglota 95,76%, bestseler 92,37%, pacifista 66,10%, suveren 58,47%, prototip 57,63%, elokventan 56,78%). The success level for the other half of the items from the test was below 50% (in order of success: erudita 49,15%, hipokrizija 39,83%, nepotizam 22,03%, skrupulozan 18,64%, šprahfeler 10,17%, eksproprijacija 8,47%). The influence of etymology was analysed through a comparison of the results for six items of French/Latin origin with the results for the other six items which did not originate from Romance languages. This analysis shows that the participants had similar results in both groups of items, with three words from each group having above 50% of correct answers (suveren, elokventan, pacifista; poliglota, bestseler, prototip). 
Lastly, we examined success levels from year 1, year 2, year 3 and year 4 students and determined that the median of correct answers for each year does vary, but that there is no strong linear progression (median year 1=5, year 2=6, year 3=7, year 4=6). The results indicate a lack of knowledge of academic vocabulary and difficulties in identifying and manipulating this type of lexis. We believe it is necessary to integrate academic language skills, including academic vocabulary, in high school curriculum and introduce Serbian language as a subject at university level.


2018 ◽  
Vol 162 ◽  
pp. 03021
Author(s):  
Oday Jasim ◽  
Noor Hamed ◽  
Tamarra Abdulgabar

The Iraqi Marshlands have natural and economic potential through an environment rich in various forms of life. The region has suffered numerous setbacks due to human and natural factors, especially in the last two decades of the last century, which led to significant environmental degradation. The purpose of this paper is to prepare base spatial data for the marsh areas in Iraq (Hour-al Hoveizah, the central marshes and Hammar), and to produce a digital geodatabase of the marshes for the years 1973, 1986, 1999, 2006 and 2016 using ArcGIS. The geodatabase was built in three stages: the first stage is data collection; the second stage merges the satellite images covering the Iraqi marshes into an image mosaic and georeferences the satellite images against the traditional maps of the same marsh area; the final stage completes the full geodatabase for the area of interest using ArcGIS for the cartographic design. The result of this research is a geodatabase for the Iraqi marshes.


2018 ◽  
Vol 8 (9) ◽  
pp. 1471 ◽  
Author(s):  
Seo-Joon Lee ◽  
Gyoun-Yon Cho ◽  
Fumiaki Ikeno ◽  
Tae-Ro Lee

Due to the development of high-throughput DNA sequencing technology, genome-sequencing costs have been significantly reduced, leading to a number of revolutionary advances in the genetics industry. However, compared to the decrease in the time and cost of DNA sequencing, managing such large volumes of data remains an issue, and security and reliability issues also exist in public sequence databases. This research therefore proposes Blockchain Applied FASTQ and FASTA Lossless Compression (BAQALC), a lossless compression algorithm that allows efficient transmission and storage of the immense amounts of DNA sequence data generated by Next Generation Sequencing (NGS). As for methods, compression ratios were compared for genetic biomarkers corresponding to the five diseases with the highest mortality rates according to the World Health Organization. The results showed an average compression ratio of approximately 12 across all the genetic datasets used. BAQALC performed especially well on lung cancer genetic markers, with a compression ratio of 17.02. BAQALC achieved not only a higher ratio than widely used compression algorithms, but also a higher ratio than algorithms described in previously published research. The proposed solution is envisioned to provide an efficient and secure transmission and storage platform for next-generation medical informatics based on smart devices, for both researchers and healthcare users.

