An Efficient Temporal Redundancy Transformation for Wavelet Based Video Compression

2016 ◽  
Vol 16 (03) ◽  
pp. 1650015
Author(s):  
S. Sowmyayani ◽  
P. Arockia Jansi Rani

The objective of this work is to propose a novel way of transforming the temporal redundancies present in videos. Initially, the frames are divided into sub-blocks. Then, the temporally redundant blocks are grouped together, generating new frames in which the temporal redundancy appears as spatial redundancy. The transformed frames are then compressed in the wavelet domain. This new approach greatly reduces computational time: existing video codecs rely on block matching for motion estimation, which is a time-consuming process, whereas the proposed method avoids block matching altogether. The existing H.264/AVC standard takes approximately one hour to compress a video file, whereas the proposed method takes only one minute for the same task. The experimental results show that the proposed method outperforms the existing H.264/AVC standard in terms of time, compression ratio and PSNR.
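The core idea above can be sketched in a few lines: blocks that barely change between frames are collected and tiled into a new frame whose neighbours are mutually similar, which a spatial wavelet transform then compresses well. This is a minimal illustration, not the authors' exact algorithm; the block size and tolerance are hypothetical parameters.

```python
import numpy as np

def group_redundant_blocks(frames, block=8, tol=5.0):
    """Illustrative sketch: collect blocks that change little across
    frames, so temporal redundancy becomes spatial redundancy in a
    regrouped frame (block size and tolerance are assumptions)."""
    ref = frames[0]
    h, w = ref.shape
    redundant = []
    for f in frames[1:]:
        for y in range(0, h, block):
            for x in range(0, w, block):
                a = ref[y:y+block, x:x+block].astype(float)
                b = f[y:y+block, x:x+block].astype(float)
                if np.abs(a - b).mean() < tol:  # temporally redundant block
                    redundant.append(b)
    # tile the grouped blocks into a new, spatially redundant frame
    n = int(np.ceil(np.sqrt(len(redundant)))) if redundant else 0
    out = np.zeros((n * block, n * block))
    for i, blk in enumerate(redundant):
        r, c = divmod(i, n)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blk
    return out
```

The resulting frame can then be handed to any spatial wavelet coder in place of motion-compensated residuals.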

Author(s):  
Prasanga Dhungel ◽  
Prashant Tandan ◽  
Sandesh Bhusal ◽  
Sobit Neupane ◽  
Subarna Shakya

We present a new approach to video compression for video surveillance that refines the shortcomings of the conventional approach by substituting each traditional component with its neural network counterpart. Our proposed work consists of motion estimation, compression and compensation, and residue compression, learned end-to-end to minimize the rate-distortion trade-off. The whole model is jointly optimized using a single loss function. Our work builds on the standard strategy of exploiting the spatio-temporal redundancy in video frames to reduce the bit rate while minimizing distortion in the decoded frames. We implement a neural network version of the conventional video compression pipeline and encode redundant frames with a lower number of bits. Although our approach is aimed at surveillance, it extends easily to general-purpose videos as well. Experiments show that our technique is efficient and outperforms standard MPEG encoding at comparable bitrates while preserving visual quality.
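The single loss function the abstract mentions is typically a distortion term plus a weighted rate term, so that every learned component is trained against the same trade-off. A minimal sketch (the weight lambda and the bit estimate are hypothetical; the paper's exact loss is not given in the abstract):

```python
import numpy as np

def rate_distortion_loss(orig, recon, bits_estimate, lam=0.01):
    """Joint rate-distortion objective of the kind described: MSE
    distortion between the original and decoded frame, plus lambda
    times the estimated bit rate (lambda value is an assumption)."""
    distortion = float(np.mean((orig - recon) ** 2))
    return distortion + lam * bits_estimate
```

Minimizing this one scalar end-to-end is what lets motion estimation, compensation, and residue compression be optimized jointly rather than tuned separately.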


2016 ◽  
Vol 855 ◽  
pp. 178-183 ◽  
Author(s):  
Chia Ming Wu ◽  
Jen Yi Huang

Motion estimation plays a key role in video processing. It is usually carried out with a block matching algorithm that chooses the best motion vector: two adjacent frames are searched to find the displacement of the same object in the video image. Many fast block matching algorithms for motion vectors have been proposed, and they make motion compensation and video compression efficient. In this paper, we propose a new algorithm based on ARPS. The experimental results show that the PSNR of the proposed method is better than that of other block matching methods on many kinds of video.
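Block matching means finding, for each block of the current frame, the displacement into the reference frame that minimizes a distortion measure such as the sum of absolute differences (SAD). An exhaustive full search over a window is sketched below; fast methods like ARPS examine only a small pattern of these candidates, which is where their speed-up comes from. The block size and search radius are illustrative parameters.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def full_search(ref, cur, y, x, block=8, radius=4):
    """Exhaustive block matching: return the motion vector (dy, dx)
    within +/- radius that minimizes SAD against the reference frame."""
    h, w = ref.shape
    target = cur[y:y+block, x:x+block]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                cost = sad(ref[yy:yy+block, xx:xx+block], target)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

Fast algorithms trade a small risk of missing the global SAD minimum for a large reduction in the number of candidate positions evaluated.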


2020 ◽  
Vol 15 (2) ◽  
pp. 144-196 ◽  
Author(s):  
Mohammad R. Khosravi ◽  
Sadegh Samadi ◽  
Reza Mohseni

Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging. Many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on second-step (additional) compression of videos coded by existing standards like MPEG 4.14. Materials and Methods: In this article, an evaluation of some techniques with different complexity orders for the video compression problem is performed. All compared techniques are based on interpolation algorithms in the spatial domain. Specifically, the acquired data come from four interpolators with different computational complexity: the fixed weights quartered interpolation (FWQI) technique and the Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used for the compression of some HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR), and a high-resolution medical sample. Results: Comparative results are described for three different metrics, including two reference-based Quality Assessment (QA) measures and an edge preservation factor, to give a general perception of the various dimensions of the problem. Conclusion: The comparisons show a clear trade-off among video codecs in terms of similarity to a reference, preservation of high-frequency edge information, and computational complexity.
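Interpolation-based compression of this kind stores a downsampled frame and reconstructs the full resolution with an interpolator; the choice of interpolator sets the complexity/quality trade-off the abstract measures. A sketch of the two cheapest options, NN and BL (the FWQI and CC interpolators are omitted; all parameters are illustrative):

```python
import numpy as np

def downsample(img, f):
    """Keep every f-th sample in both directions."""
    return img[::f, ::f]

def upsample_nn(small, f):
    """Nearest-neighbor: repeat each sample f times per axis (cheapest,
    blockiest)."""
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

def upsample_bl(small, f):
    """Bilinear: weight the four surrounding samples by distance
    (smoother, slightly more costly)."""
    h, w = small.shape
    ys = np.linspace(0, h - 1, h * f)
    xs = np.linspace(0, w - 1, w * f)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1); wy = ys - y0
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1); wx = xs - x0
    top = small[np.ix_(y0, x0)] * (1 - wx) + small[np.ix_(y0, x1)] * wx
    bot = small[np.ix_(y1, x0)] * (1 - wx) + small[np.ix_(y1, x1)] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]
```

NN preserves edges but produces blocking; BL smooths edges, which is exactly the kind of behavior an edge preservation factor is designed to quantify.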


2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in significant savings for the league. This paper proposes an original evolutionary algorithm for TTP. We first propose a quick and effective constructive algorithm to build a Double Round Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed; it leads to a significant enhancement in the quality of the schedules. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for TTP and the Unconstrained Traveling Tournament Problem (UTTP). The computational experiments show that the proposed approach builds very good solutions, comparable to other state-of-the-art approaches or better than the current best solutions on UTTP. Further, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.
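A double round robin has every team play every other team twice, once home and once away. The classic circle method below builds the underlying single round robin; playing it a second time with venues swapped yields a DRRT. This is a sketch of the scheduling substrate only, not the paper's travel-cost-aware constructive heuristic.

```python
def round_robin(teams):
    """Circle method: for an even number of teams, produce n-1 rounds in
    which every team plays every other team exactly once. Repeating the
    rounds with home/away swapped gives a double round robin (DRRT)."""
    n = len(teams)
    assert n % 2 == 0, "circle method needs an even number of teams"
    rounds = []
    order = list(teams)
    for _ in range(n - 1):
        # pair the i-th entry with the (n-1-i)-th entry
        rounds.append([(order[i], order[n - 1 - i]) for i in range(n // 2)])
        # keep the first entry fixed, rotate the rest by one position
        order = [order[0]] + [order[-1]] + order[1:-1]
    return rounds
```

The TTP's difficulty comes from ordering such rounds (and assigning venues) so that consecutive away games form short road trips, which is what the evolutionary search optimizes.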


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Mujeeb ur Rehman ◽  
Dumitru Baleanu ◽  
Jehad Alzabut ◽  
Muhammad Ismail ◽  
Umer Saeed

Abstract The objective of this paper is to present two numerical techniques for solving generalized fractional differential equations. We develop Haar wavelet operational matrices to approximate the solution of generalized Caputo–Katugampola fractional differential equations. Moreover, we introduce a Green–Haar approach for a family of generalized fractional boundary value problems and compare the method with the classical Haar wavelet technique. In the context of error analysis, an upper bound on the error is established to show the convergence of the method. Results of numerical experiments are documented in tabular and graphical format to illustrate the accuracy and efficiency of the addressed methods. Further, we conclude that the Green–Haar approach is more accurate than the conventional Haar wavelet approach, and it also takes less computational time.
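Haar wavelet methods of this kind expand the unknown solution in the orthonormal Haar basis and reduce the differential equation to an algebraic system via operational matrices. The basis itself can be generated by the standard recursion below; this sketches only the approximation substrate, not the operational matrices for the Caputo–Katugampola derivative derived in the paper.

```python
import numpy as np

def haar_matrix(m):
    """Normalized Haar transform matrix of size m x m (m a power of 2),
    built by the standard recursion. Its rows are the orthonormal Haar
    basis functions sampled on a dyadic grid."""
    if m == 1:
        return np.array([[1.0]])
    h = haar_matrix(m // 2)
    top = np.kron(h, [1.0, 1.0])              # scaling (average) part
    bot = np.kron(np.eye(m // 2), [1.0, -1.0])  # wavelet (detail) part
    return np.vstack([top, bot]) / np.sqrt(2)
```

Because the matrix is orthonormal, expansion coefficients are obtained by a single matrix product, which keeps both the classical Haar and the Green–Haar pipelines cheap.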


2011 ◽  
Vol 145 ◽  
pp. 277-281
Author(s):  
Vaci Istanda ◽  
Tsong Yi Chen ◽  
Wan Chun Lee ◽  
Yuan Chen Liu ◽  
Wen Yen Chen

With the growth of network-based learning, video compression is important for both data transmission and storage, especially over a digital channel. In this paper, we present the return prediction search (RPS) algorithm for block motion estimation. The proposed algorithm exploits temporal correlation and the characteristic of returning to the origin to obtain one or two predictive motion vectors, and selects the one that gives the better result as the initial search center. In addition, we utilize center-biased block matching algorithms to refine the final motion vector, and we use an adaptive threshold technique to reduce the computational complexity of motion estimation. Experimental results show that the RPS algorithm combined with 4SS, BBGDS, and UCBDS effectively improves performance in terms of mean-square error with fewer average search points. Moreover, the accelerated RPS (ARPS) algorithm requires only 38% of the search computations of the 3SS algorithm, and its reconstructed image quality is superior to that of 3SS by about 0.30 dB on average over all test sequences. In addition, we have created an asynchronous learning environment that gives students and instructors flexibility in learning and teaching activities. The purpose of this web site is to teach and display our research results, and we believe it is one of the keys to helping modern students achieve mastery of complex motion estimation.
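The predictive part of the idea can be sketched as follows: the two candidate centers are the zero vector ("return to origin") and the motion vector the co-located block had in the previous frame (temporal correlation); the cheaper of the two seeds the refinement search, and a threshold lets the search terminate early. This is an illustration of the idea with hypothetical parameters, not the published RPS/ARPS algorithm.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def initial_center(ref, cur, y, x, prev_mv, block=8, threshold=512):
    """Pick the better of two predictive motion vectors as the initial
    search center: (0, 0) and the co-located block's previous-frame
    vector. Returns (center, stop_early); stop_early is True when the
    predictor already matches well enough (threshold is an assumption)."""
    target = cur[y:y+block, x:x+block]
    candidates = [(0, 0), tuple(prev_mv)]
    costs = []
    for dy, dx in candidates:
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
            costs.append(float("inf"))  # candidate falls outside the frame
        else:
            costs.append(sad(ref[yy:yy+block, xx:xx+block], target))
    best = min(range(len(candidates)), key=costs.__getitem__)
    return candidates[best], costs[best] < threshold
```

A center-biased pattern search (4SS, BBGDS, UCBDS) would then refine around the returned center only when the early-stop flag is False.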


2013 ◽  
Vol 2013 ◽  
pp. 1-19
Author(s):  
Wai-Yuan Tan ◽  
Hong Zhou

To incorporate biologically observed epidemics into multistage models of carcinogenesis, in this paper we develop new stochastic models for human cancers. We further incorporate genetic segregation of cancer genes into these models to derive generalized mixture models for cancer incidence. Based on these models, we develop a generalized Bayesian approach to estimate the parameters and to predict cancer incidence via Gibbs sampling procedures. We apply these models to fit and analyze the SEER data on human eye cancers from NCI/NIH. Our results indicate that the models not only provide a logical avenue for incorporating biological information but also fit the data much better than other models. These models should provide more insight into human cancers as well as useful guidance for their prevention and control and for the prediction of future cancer cases.
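Gibbs sampling fits mixture models like these by alternating between sampling the latent component labels given the parameters and sampling the parameters given the labels. The toy sampler below, for a two-component normal mixture with known unit variances and flat/conjugate priors, illustrates only that alternating scheme; the paper's model (multistage carcinogenesis with genetic segregation) is far richer, and every distributional choice here is an assumption for the sketch.

```python
import numpy as np

def gibbs_mixture(data, iters=100, seed=0):
    """Minimal Gibbs sampler for a two-component normal mixture with
    unit variances: alternate label draws and parameter draws."""
    rng = np.random.default_rng(seed)
    mu = np.array([data.min(), data.max()], dtype=float)  # crude init
    pi = 0.5  # weight of component 0
    for _ in range(iters):
        # 1) sample component labels given current parameters
        p0 = pi * np.exp(-0.5 * (data - mu[0]) ** 2)
        p1 = (1 - pi) * np.exp(-0.5 * (data - mu[1]) ** 2)
        z = (rng.random(len(data)) < p1 / (p0 + p1)).astype(int)
        # 2) sample parameters given the labels (conjugate updates)
        for k in (0, 1):
            pts = data[z == k]
            if len(pts):
                mu[k] = rng.normal(pts.mean(), 1 / np.sqrt(len(pts)))
        pi = rng.beta(1 + np.sum(z == 0), 1 + np.sum(z == 1))
    return mu, pi
```

In the paper's setting the same alternation runs over genotype indicators and multistage model parameters instead of simple labels and means.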


2011 ◽  
Vol 90-93 ◽  
pp. 2858-2863
Author(s):  
Wei Li ◽  
Xu Wang

The soft and hard threshold functions both have shortcomings that reduce wavelet de-noising performance. To solve this problem, this article proposes a modulus-square approach. The new approach avoids the discontinuity of the hard threshold function and also decreases the fixed bias between the estimated wavelet coefficients and the true wavelet coefficients that the soft-threshold method introduces. Simulation results show that the SNR and MSE are better than those obtained with the soft or hard threshold alone, giving a good de-noising effect in deformation monitoring.


2020 ◽  
Vol 4 (1) ◽  
pp. 35-46
Author(s):  
Winarno (Universitas Singaperbangsa Karawang) ◽  
A. A. N. Perwira Redi (Universitas Pertamina)

Abstract: The two-echelon location routing problem (2E-LRP) is a problem that considers distribution in a two-level (echelon) transport system. The first echelon considers trips from a main depot to a set of selected satellites. The second echelon considers routes serving customers from the selected satellites. This study proposes two metaheuristic algorithms to solve the 2E-LRP: Simulated Annealing (SA) and Large Neighborhood Search (LNS). The neighborhood (operator) moves of both algorithms are modified specifically for the 2E-LRP. The proposed SA uses swap, insert, and reverse operators. Meanwhile, the proposed LNS uses five destruction operators (random route removal, worst removal, route removal, related node removal, and not-related node removal) and two construction operators (greedy insertion and modified greedy insertion). A previously known dataset is used to test the performance of both algorithms. Numerical experiment results show that SA performs better than LNS: the average objective function values for SA and LNS are 176.125 and 181.478, respectively, and their average computational times are 119.02 s and 352.17 s, respectively.
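The three SA moves named above are standard permutation-neighborhood operators. A generic sketch on a single route (the paper's moves also act across satellites and echelons, which is omitted here):

```python
import random

def swap(route, i, j):
    """Exchange the nodes at positions i and j."""
    r = route[:]
    r[i], r[j] = r[j], r[i]
    return r

def insert(route, i, j):
    """Remove the node at position i and reinsert it at position j."""
    r = route[:]
    node = r.pop(i)
    r.insert(j, node)
    return r

def reverse(route, i, j):
    """Reverse the segment between positions i and j (a 2-opt-style move)."""
    r = route[:]
    r[i:j+1] = r[i:j+1][::-1]
    return r

def random_neighbor(route, rng=random):
    """One SA iteration's neighbor: apply one randomly chosen move to a
    copy of the route, leaving the current solution intact."""
    op = rng.choice([swap, insert, reverse])
    i, j = sorted(rng.sample(range(len(route)), 2))
    return op(route, i, j)
```

SA accepts a neighbor unconditionally when it improves the objective and with a temperature-dependent probability otherwise, which is what lets it escape the local optima that a pure descent over these moves would get stuck in.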

