Fractal Video Coding Using Fast Normalized Covariance Based Similarity Measure

2016 ◽  
Vol 2016 ◽  
pp. 1-11 ◽  
Author(s):  
Ravindra E. Chaudhari ◽  
Sanjay B. Dhok

A fast normalized covariance based similarity measure for fractal video compression with quadtree partitioning is proposed in this paper. To increase the speed of fractal encoding, a simplified expression for the covariance between a range block and the overlapped domain blocks within a search window is implemented in the frequency domain. All covariance coefficients are normalized by the standard deviation of the overlapped domain blocks, and these are efficiently calculated in a single computation using two different approaches, namely FFT based and sum table based. The results of the two approaches are compared and are almost identical in all respects except memory requirement. Based on the proposed simplified similarity measure, the gray level transformation parameters are computationally modified and the isometry transformations are performed using the rotation/reflection properties of the IFFT. Quadtree decomposition is used to partition the larger range blocks (16 × 16), based on a target level of motion compensated prediction error. Experimental results show that the proposed method increases the encoding speed and compression ratio by 66.49% and 9.58%, respectively, compared to the NHEXS method, with an increase in PSNR of 0.41 dB. Compared to H.264, the proposed method saves 20% of the compression time with only marginal variation in PSNR and compression ratio.
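As a rough illustration of the FFT-based variant described above (a sketch of the general technique, not the authors' implementation; block size, window size, and array names are assumptions), the covariance between one range block and every overlapped domain block in a search window can be obtained in one pass by computing the cross-correlation in the frequency domain and normalizing with local statistics taken from sum tables (integral images):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def normalized_covariance_map(range_block, search_window):
    """Covariance between one range block and every overlapped domain block
    of the same size in a search window, normalized by the domain block's
    standard deviation. One FFT pair yields all correlation sums; sum tables
    yield each block's local mean and variance."""
    B = range_block.shape[0]                      # square blocks assumed
    H, W = search_window.shape
    n = B * B
    r = range_block - range_block.mean()          # zero-mean range block

    # All sums sum_{u,v} window[i+u, j+v] * r[u, v] via one FFT pair.
    r_pad = np.zeros_like(search_window, dtype=float)
    r_pad[:B, :B] = r
    corr = np.real(ifft2(fft2(search_window) * np.conj(fft2(r_pad))))
    corr = corr[:H - B + 1, :W - B + 1]           # valid (non-wrapping) offsets

    # Sum tables for local sums and sums of squares of the domain blocks.
    cum = lambda a: np.pad(a.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    S = cum(search_window.astype(float))
    S2 = cum(search_window.astype(float) ** 2)
    box = lambda T: T[B:, B:] - T[:-B, B:] - T[B:, :-B] + T[:-B, :-B]
    mean_d = box(S) / n
    var_d = np.maximum(box(S2) / n - mean_d ** 2, 1e-12)

    cov = corr / n                                # range mean already removed
    return cov / np.sqrt(var_d)                   # normalized covariance map
```

The sum-table variant would replace the FFT pair with precomputed running sums; in either case the maximum of the map gives the best-matching domain block position for the subsequent gray level transform.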

2011 ◽  
Vol 179-180 ◽  
pp. 1350-1355
Author(s):  
Duo Li Zhang ◽  
Chuan Jie Wang ◽  
Yu Kun Song ◽  
Gao Ming Du ◽  
Xian Wen Cheng

The H.264/AVC standard is widely used in video compression across various application domains. Motion estimation accounts for most of the computational workload of an H.264/AVC encoder, and memory optimization plays an ever more important role in encoder design. First, the dependency between motion vectors was analyzed and removed at a small cost in estimation accuracy, and a 3-stage macroblock-level pipeline architecture was proposed to increase the parallelism of motion estimation. Then an optimized memory organization strategy for reference frame data was put forward, aimed at avoiding frequent row changes in SDRAM access. Finally, based on the 3-stage pipeline structure, a shared cyclic search window memory was proposed: 1) the data shared between adjacent macroblocks was analyzed, 2) the search window memory size was worked out, and 3) a slice-based structure and its work process were discussed. Analysis and experimental results show that 50% of the on-chip memory resources and of the cycles for off-chip SDRAM access can be saved. The whole design was implemented in Verilog HDL and integrated into an H.264 encoder, which successfully demonstrates 1280*720@30 video at 120 MHz on a Cyclone III FPGA development board.
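The shared cyclic search-window idea can be pictured with a small address-mapping sketch (purely illustrative and in software form; the macroblock and window sizes below are assumptions, not the paper's parameters): when the encoder advances one macroblock to the right, only the newly exposed column of the search window is fetched from SDRAM, and it overwrites the column that has just fallen out of the window.

```python
# Illustrative cyclic (circular) search-window buffer: per step, only the
# newest column of macroblocks is loaded; the remaining columns are reused.
MB = 16            # macroblock size in pixels (assumed)
WIN_MB = 3         # search window width in macroblocks (assumed, e.g. +/-16 px)

class CyclicSearchWindow:
    def __init__(self):
        # WIN_MB column slots, each one macroblock wide and window-high.
        self.cols = [None] * WIN_MB

    def slot_for(self, mb_x):
        # A reference column always maps to the same physical slot, so
        # advancing the window overwrites exactly the stale column.
        return mb_x % WIN_MB

    def advance(self, new_mb_x, fetch_column):
        # fetch_column(mb_x) stands in for one burst-friendly SDRAM read of a
        # reference-frame column; the other WIN_MB - 1 columns stay in place.
        self.cols[self.slot_for(new_mb_x)] = fetch_column(new_mb_x)
```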


2013 ◽  
Vol 380-384 ◽  
pp. 1488-1494
Author(s):  
Wang Wei ◽  
Jin Yue Peng

In the research and development of intelligent systems, clustering analysis is a very important problem. Based on the direct clustering algorithm that uses the similarity measure of Vague sets as its evaluation criterion, the Vague direct clustering method is analyzed with different similarity measures of Vague sets. The experimental results show that direct clustering based on the similarity of Vague sets is effective, and that the method produces essentially the same clusters under different similarity measures, differing only in the number of clustering steps. Different similarity measures can therefore be selected according to the conditions of the actual application.
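For background on the comparison above, here is a minimal sketch of two commonly cited similarity measures between Vague values, each value given as a (true-membership, false-membership) pair; the abstract does not specify which measures were compared, so these particular formulas and names are assumptions:

```python
def sim_chen(x, y):
    """Chen-style similarity between Vague values x=(tx, fx), y=(ty, fy),
    based on the difference of their scores t - f (assumed choice)."""
    (tx, fx), (ty, fy) = x, y
    return 1 - abs((tx - fx) - (ty - fy)) / 2

def sim_hong_kim(x, y):
    """Hong-Kim-style similarity, comparing t and f memberships separately."""
    (tx, fx), (ty, fy) = x, y
    return 1 - (abs(tx - ty) + abs(fx - fy)) / 2

def vague_set_similarity(A, B, sim=sim_hong_kim):
    """Similarity between two Vague sets over the same universe: the average
    element-wise similarity under the chosen measure."""
    return sum(sim(a, b) for a, b in zip(A, B)) / len(A)

# Two Vague sets over a three-element universe, as (t, f) pairs.
A = [(0.6, 0.2), (0.3, 0.5), (0.9, 0.0)]
B = [(0.5, 0.3), (0.4, 0.4), (0.8, 0.1)]
print(vague_set_similarity(A, B), vague_set_similarity(A, B, sim_chen))
```

A direct clustering method would group elements whose pairwise similarity exceeds a chosen threshold, which is why different measures can yield the same final clusters while taking different numbers of steps.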


2014 ◽  
Vol 635-637 ◽  
pp. 1039-1044 ◽  
Author(s):  
He Qun Qiang ◽  
Chun Hua Qian ◽  
Sheng Rong Gong

In general, it is difficult to accurately segment image regions or boundary contours and represent them by feature vectors for shape-based image query; therefore, object similarity is often computed from object boundaries. The Hausdorff distance is a nonlinear distance measure that can be used to assess the similarity between two point patterns extracted from edge images. The classical Hausdorff approach first needs to express the image as a feature matrix and then calculate feature values or feature vectors, so it is time-consuming; moreover, partial pattern matching is difficult when shadows and noise are present. In this paper, an algorithm that applies the Hausdorff distance directly to image boundaries to measure similarity is proposed. Experimental results show that the proposed algorithm is robust.
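For reference, the directed and symmetric Hausdorff distances between two edge-point sets can be sketched as below; the partial (ranked) variant shown last is a common way to tolerate shadow and noise, not necessarily the paper's exact choice, and the function names are illustrative.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B): the largest distance from a point
    of A to its nearest point in B. A and B are (N, 2) arrays of edge points."""
    diff = A[:, None, :] - B[None, :, :]          # pairwise differences
    d = np.sqrt((diff ** 2).sum(-1))              # pairwise Euclidean distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def partial_hausdorff(A, B, frac=0.8):
    """Partial (ranked) directed distance: the frac-quantile of nearest-point
    distances instead of the maximum, which ignores outlier edge points."""
    diff = A[:, None, :] - B[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    return np.sort(d)[int(frac * (len(d) - 1))]
```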


2014 ◽  
Vol 556-562 ◽  
pp. 5685-5689 ◽  
Author(s):  
Li Lv

Embedded Internet devices have preset functions and strong adaptability and can be used in household appliances, medical equipment, industrial machinery, and so on; embedded networking has completely changed the way people live. In this paper we use a clustering algorithm to improve the embedded network system and JSP technology to enhance its real-time and interactive capabilities, and we develop an aerobics visualization movement system that enables interaction between teachers and students. To test the performance of the system, we use the AMBE-2000 as the communication module and test the video compression and processing functions of the embedded system. The results show that the compression ratio is high, the data flow is small, and the compression time is short; the system is stable and reliable. It provides a new method for applying embedded computer systems in sports teaching.


In the last thirty years, intensive research has been carried out on video compression techniques; the field is now mature and is used in a large number of applications. In this paper, we present video compression using H.264 compression with Tucker decomposition. By applying the Tucker decomposition of a tensor, the largest Kn sub-tensors and their eigenvectors are obtained and, together with run-length encoding, are used to compress the frames of the video. The DWT is used to separate each frame into sub-images, and TD is applied to the DWT coefficients to compact the energy of the sub-images. The experimental results show that the proposed method yields a higher compression ratio with good PSNR.
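A minimal sketch of the tensor stage of such a pipeline, using a truncated HOSVD-style Tucker decomposition of a small group of frames (the wavelet stage, the chosen ranks, and the run-length coding are omitted or assumed here; this is not the paper's exact scheme):

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = M @ Tm.reshape(Tm.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + Tm.shape[1:]), 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: per-mode factors from the leading left singular
    vectors of each unfolding, plus the small core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T

# Toy group of frames: 8 frames of 64x64 luma, kept as a small core plus
# factors (the ranks below are assumptions, tuned per sequence in practice).
gof = np.random.rand(8, 64, 64)
core, factors = tucker_hosvd(gof, ranks=(4, 16, 16))
approx = tucker_reconstruct(core, factors)
mse = np.mean((gof - approx) ** 2)
```

In a compression setting, the core and factor entries would then be quantized and entropy coded (run-length encoding in the abstract's description), trading reconstruction error against rate via the chosen ranks.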


2019 ◽  
Vol 1 ◽  
pp. 1-3
Author(s):  
Hangyu Wang ◽  
Haowen Yan

Abstract. Most early curve generalization algorithms consider only the reduction of the number of vertices and do not take into account the important role of bends, especially characteristic bends, in the shape of the curve. Existing generalization methods based on the bends of the curve involve complex algorithms and a large amount of calculation, focus excessively on the relationship between adjacent bends while ignoring the relationship among the bends as a whole, and set the thresholds for filtering bends from experience rather than on a sound basis. To address these problems, a generalization algorithm based on the area of bends is proposed in this paper to simplify the curve with the head/tail breaks classification method. Experiments show that the algorithm is simple and efficient, can iteratively take account of the overall bends with a reasonable threshold, and discards small bends while retaining the characteristic bends of the original curve, obtaining generalization results that conform to natural law and are highly similar to the original graphics at different levels of detail.

Head/tail breaks is a classification method typically applied to heavy-tailed data. Heavy-tailed data are universal in nature and human society: for example, there are more small towns than big cities in the world, yet small towns are less important than big cities in economy and politics, so cartographers mark the big cities on the map and eliminate the small towns. Map generalization is a process of retaining important elements and deleting unimportant ones. Head/tail breaks is able to extract, by means of the arithmetic mean, the significant data that can be retained as a generalization result.

Figure 1 shows the algorithm flow chart. First, we divide the curve into several bends with the oblique-dividing-curve method. Second, we calculate the area of each bend and then use head/tail breaks to classify the bend areas. If the percentage of bends in the head is less than 40%, the data conform to a heavy-tailed distribution and can be classified with head/tail breaks; if it is greater than 40%, head/tail breaks is not applicable to the data. After classification, the more important bends in the head are reserved directly. For each bend in the tail, we extract its feature point by retaining the point farthest from the axis, so as to maintain the local shape of the original curve. Finally, we merge the bends in the head and the feature points as the generalization result.

The experimental result is shown in Fig. 2. The data of this experiment are the administrative division map of Gansu Province extracted from a China map at a scale of 1:10,000,000. Because the algorithm can be executed iteratively, it can generate results at different levels of detail. From the detailed result to the concise result, the graphic changes progressively and there is no oversimplified result. In the comparison of the three algorithms in Fig. 3, the generalization results of both this paper's algorithm and the bend group division algorithm retain characteristic bends better than the Douglas-Peucker algorithm. However, the algorithm of this paper has a higher compression ratio and less execution time than the bend group division algorithm, as shown in Table 1.

The algorithm of this paper is based on natural law rather than an empirical threshold, and can generate progressive results at different levels of detail by iteration. In addition, it takes the overall relationship of bends into consideration, so the generalization result is unique. The experimental results show that this algorithm not only retains characteristic bends better than the Douglas-Peucker algorithm but also achieves a higher compression ratio and less execution time than the bend group division algorithm. To further optimize the algorithm, we will study how to better evaluate the apparent extent of curved features and how to extract and eliminate the small bends inside the bends in the head in order to improve the compression ratio.
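The head/tail breaks step of the flow described above can be sketched in a few lines (the 40% head cutoff follows the abstract; the bend-area input and the iterative wrapper are generic, not tied to the paper's data structures):

```python
def head_tail_breaks(areas, head_limit=0.4):
    """Split bend areas at the arithmetic mean: areas above the mean form the
    head (characteristic bends kept as-is), the rest form the tail (bends
    reduced to their farthest-from-axis feature point).  Returns the indices
    of the head, or None if the head exceeds head_limit (not heavy-tailed)."""
    mean = sum(areas) / len(areas)
    head = [i for i, a in enumerate(areas) if a > mean]
    if not head or len(head) / len(areas) > head_limit:
        return None
    return head

def head_tail_levels(areas):
    """Iterate the split on successive heads to get progressive levels of
    detail; each level lists the bends retained at that level."""
    idx = list(range(len(areas)))
    levels = []
    while True:
        head = head_tail_breaks([areas[i] for i in idx])
        if head is None:
            break
        idx = [idx[i] for i in head]
        levels.append(idx)
    return levels
```

Because each level keeps only the bends above the current mean area, the output naturally forms the progressive, non-oversimplified sequence of results the abstract describes.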


2016 ◽  
Vol 1 (1) ◽  
Author(s):  
Hadary Mallafi

One of the limitations in the data uploading process is the maximum request length; in addition, the size of the transferred data is an issue because it influences the data sending cost. One way to cope with the maximum request length is to downsize the file (chunking); another is to enlarge the maximum request length itself. Downsizing can be done by chunking the file into smaller pieces or by compressing it. In this paper, the author investigates a file compression process performed on the client side using AJAX and a web service, combined with file chunking. The compression method used is dictionary based, namely Lempel-Ziv 77 (LZ77), which was chosen because it can be implemented in AJAX. The analysis covers the compression ratio, data sending speed, compression time, decompression time, the ability of the compression method to handle the maximum request length, and the combination of compression and chunking in the uploading process. The results show that the compression method can handle the maximum request length. Based on the experiments conducted, the compression ratio is positively correlated with the window length: the greater the window length, the higher the compression ratio. The window length and the uploading time are negatively, linearly correlated: the greater the window length, the shorter the uploading time. In addition, the decompression time and the file size are positively, linearly correlated: the larger the file, the more time is needed for decompression.
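A minimal LZ77 sketch in the spirit of the client-side compressor discussed above (the token format, greedy matching, and window handling are simplified assumptions; the paper's AJAX implementation will differ in detail):

```python
def lz77_compress(data: bytes, window: int = 2048, max_match: int = 255):
    """Greedy LZ77: emit (offset, length, literal) tokens, searching for the
    longest match inside a sliding window.  Larger windows tend to raise the
    compression ratio, consistent with the trend reported above."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        best_len = min(best_len, len(data) - i - 1)   # keep a literal at the end
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(tokens) -> bytes:
    buf = bytearray()
    for off, length, literal in tokens:
        for _ in range(length):
            buf.append(buf[-off])                     # back-references may overlap
        buf.append(literal)
    return bytes(buf)
```

In the upload scenario, each chunk would be compressed with a routine like this before being sent, so the effective request size stays under the server's maximum request length.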

