A Benchmark Evaluation of Adaptive Image Compression for Multi-Picture Object Stereoscopic Images

2021 ◽  
Vol 7 (8) ◽  
pp. 160
Author(s):  
Alessandro Ortis ◽  
Marco Grisanti ◽  
Francesco Rundo ◽  
Sebastiano Battiato

A stereopair consists of two pictures of the same subject taken from two different points of view. Since the two images contain a high amount of redundant information, new compression approaches and data formats are continuously being proposed that aim to reduce the space needed to store a stereoscopic image while preserving its quality. A standard for multi-picture image encoding is the MPO (Multi-Picture Object) format. Classic stereoscopic image compression approaches compute a disparity map between the two views, which is stored together with one of the two views and a residual image. An alternative approach, named adaptive stereoscopic image compression, encodes the two views independently with different quality factors. The redundancy between the two views is then exploited to enhance the low-quality image. In this paper, the problem of stereoscopic image compression is presented, with a focus on the adaptive stereoscopic compression approach, which yields a standardized format for the compressed data. The paper presents a benchmark evaluation on large and standardized datasets including 60 stereopairs that differ in resolution and acquisition technique. The method is evaluated by varying the amount of compression as well as the matching and optimization methods, resulting in 16 different settings. The adaptive approach is also compared with other MPO-compliant methods. The paper further presents a Human Visual System (HVS)-based assessment experiment involving 116 people to verify the perceived quality of the decoded images.
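The adaptive idea, one view coded at high quality, the other at low quality and then enhanced from the first, can be sketched in a few lines of numpy. This is a toy stand-in (coarse quantization instead of a real codec, a simple SAD block match instead of the paper's matching and optimization methods), not the evaluated implementation:

```python
import numpy as np

def quantize(img, q):
    """Toy stand-in for lossy coding: larger step = lower quality factor."""
    step = max(1, 64 // q)
    return (img // step) * step

def enhance(low, high, block=8, search=4, thresh=24):
    """Enhance the low-quality view block by block: search a small horizontal
    disparity window in the high-quality view and, when a plausible match is
    found, replace the block with the matched high-quality content."""
    out = low.copy()
    h, w = low.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = low[y:y+block, x:x+block].astype(np.int64)
            best, best_err = None, None
            for dx in range(-search, search + 1):
                xx = x + dx
                if xx < 0 or xx + block > w:
                    continue
                cand = high[y:y+block, xx:xx+block].astype(np.int64)
                err = np.abs(cand - ref).sum()
                if best_err is None or err < best_err:
                    best, best_err = cand, err
            # only replace when the match is plausible; otherwise keep the block
            if best is not None and best_err < thresh * block * block:
                out[y:y+block, x:x+block] = best
    return out
```

On a synthetic stereopair (the right view a horizontal shift of the left), enhancing the coarsely quantized view from the finely quantized one reduces its reconstruction error.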

2021 ◽  
Vol 5 (2) ◽  
pp. 31
Author(s):  
Olga Svynchuk ◽  
Oleg Barabash ◽  
Joanna Nikodem ◽  
Roman Kochan ◽  
Oleksandr Laptiev

The rapid growth of geographic information technologies for processing and analyzing spatial data has led to a significant increase in the role of geographic information systems in various fields of human activity. However, solving complex problems requires large amounts of spatial data, efficient storage on on-board recording media, and transmission via communication channels. This creates the need for new, effective methods of compressing and transmitting Earth remote sensing data. The possibility of using fractal functions to process images transmitted via the satellite radio channel of a spacecraft is considered. The information obtained by such a system takes the form of aerospace images that must be processed and analyzed in order to extract information about the objects they display. An algorithm for image encoding and decoding is investigated that uses a class of continuous functions depending on a finite set of parameters and possessing fractal properties. The mathematical model used in fractal image compression is called an iterated function system. The encoding process is time-consuming because it performs a large number of transformations and mathematical calculations; in return, a high degree of image compression is achieved. This class of functions has an interesting property: knowing the initial set of parameters, one can easily calculate the values of the function, but when only the values of the function are known, it is very difficult to recover the initial set, because there is a huge number of such combinations. Therefore, to decode the image, it is necessary to know the fractal codes that restore the raster image.
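The asymmetry described above (slow encoding, fast decoding by iterating the stored transforms) can be illustrated with a minimal one-dimensional fractal codec. This is a generic textbook-style sketch under simplifying assumptions (1-D signal, fixed block sizes, affine maps fitted by least squares), not the satellite-imaging algorithm of the paper:

```python
import numpy as np

def encode(signal, rsize=4):
    """Slow step: for each range block, search every domain block (twice the
    size, downsampled) for the affine map s*d + o that best reproduces it."""
    dsize = 2 * rsize
    domains = []
    for dpos in range(0, len(signal) - dsize + 1, rsize):
        d = signal[dpos:dpos+dsize].reshape(-1, 2).mean(axis=1)  # downsample 2:1
        domains.append((dpos, d))
    codes = []
    for rpos in range(0, len(signal), rsize):
        r = signal[rpos:rpos+rsize]
        best = None
        for dpos, d in domains:
            s, o = np.polyfit(d, r, 1)        # least-squares fit r ~ s*d + o
            s = float(np.clip(s, -0.9, 0.9))  # enforce contractivity
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, dpos, s, o)
        codes.append(best[1:])                # the "fractal code" for this block
    return codes

def decode(codes, n, rsize=4, iters=12):
    """Fast step: starting from an arbitrary signal, repeatedly apply the
    stored contractive maps; the iteration converges to the fixed point,
    an approximation of the original signal."""
    x = np.zeros(n)
    for _ in range(iters):
        y = np.empty(n)
        for i, (dpos, s, o) in enumerate(codes):
            d = x[dpos:dpos+2*rsize].reshape(-1, 2).mean(axis=1)
            y[i*rsize:(i+1)*rsize] = s * d + o
        x = y
    return x
```

Decoding recovers the signal from the codes alone, starting from zeros, which is exactly the "easy forward, hard inverse" property the abstract describes.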


2012 ◽  
Vol 488-489 ◽  
pp. 1587-1591
Author(s):  
Amol G. Baviskar ◽  
S. S. Pawale

Fractal image compression is a lossy compression technique developed in the early 1990s. It makes use of the local self-similarity present in an image and finds a contractive affine mapping (fractal transform) T such that the fixed point of T is close to the given image in a suitable metric. It has generated much interest due to its promise of high compression ratios with good decompression quality. Image encoding based on the fractal block-coding method relies on the assumption that image redundancy can be efficiently exploited through block self-transformability. It has shown promise in producing high-fidelity, resolution-independent images, and the low complexity of the decoding process also suggests use in real-time applications. The high encoding time, in combination with patents on the technology, has unfortunately discouraged wider adoption. In this paper, we propose an efficient domain search technique using feature extraction for fractal image encoding, which reduces the encoding-decoding time, and the proposed technique improves the quality of the compressed image.
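A feature-based domain search of this general kind can be sketched as follows: each domain block gets a cheap descriptor, and a range block is compared only against the few domains with the closest descriptors instead of the whole pool. The descriptor here (mean and standard deviation) is an illustrative choice, not necessarily the feature the paper extracts:

```python
import numpy as np

def descriptor(block):
    """Cheap feature used to prune the search: (mean, standard deviation)."""
    return np.array([block.mean(), block.std()])

def domain_pool(img, dsize=8, step=4):
    """All domain blocks, downsampled 2x to range-block size, with features."""
    pool = []
    for y in range(0, img.shape[0] - dsize + 1, step):
        for x in range(0, img.shape[1] - dsize + 1, step):
            d = img[y:y+dsize, x:x+dsize].astype(float)
            ds = d.reshape(dsize // 2, 2, dsize // 2, 2).mean(axis=(1, 3))
            pool.append(((y, x), ds, descriptor(ds)))
    return pool

def best_domain(r, pool, k=5):
    """Compare the range block against only the k domains whose features are
    closest, instead of the full pool: the search-reduction idea."""
    f = descriptor(r)
    nearest = sorted(pool, key=lambda e: float(np.linalg.norm(e[2] - f)))[:k]
    best = None
    for pos, ds, _ in nearest:
        # mean-removed SSD (a brightness offset is absorbed by the transform)
        err = float(np.sum(((ds - ds.mean()) - (r - r.mean())) ** 2))
        if best is None or err < best[0]:
            best = (err, pos)
    return best[1]
```

With k much smaller than the pool size, each range block triggers k full block comparisons instead of one per domain, which is where the encoding-time reduction comes from.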


Author(s):  
Krzysztof Fornalczyk ◽  
Piotr Napieralski ◽  
Dominik Szajerman ◽  
Adam Wojciechowski ◽  
Przemyslaw Sztoch ◽  
...  

Author(s):  
I. I. Levin ◽  
M. D. Chekina

A fractal image compression method developed for reconfigurable computing systems is described. The main idea of parallel fractal image compression is the parallel execution of pairwise comparisons of domain and rank blocks; high performance is achieved by comparing the maximum number of pairs simultaneously. An implementation of fractal image compression on reconfigurable computing systems has two critical resources: the number of input data channels and FPGA look-up tables (LUTs). Since the data channels are the main bottleneck, the task requires a parallel-pipeline organization of computations in place of a fully parallel one, obtained by a preliminary performance reduction of the parallel computational structure. Each operand is routed to the computational structure sequentially (bit by bit) to save hardware resources and reduce equipment downtime. The iterated function system coefficients used for image encoding are stored in a data structure that correlates the corresponding parameters with the numbers of the rank and domain blocks. This approach allows parallel-pipeline programs to scale the computing structure across multiple FPGAs. An implementation on the reconfigurable computer system Tertius-2, containing eight FPGAs, provides an acceleration of about 15,000 times relative to a general-purpose multi-core processor and of 18-25 times relative to existing FPGA solutions.
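The core operation, evaluating many domain-rank pairs at once, can be mimicked in software with a vectorized all-pairs comparison. This is a loose numpy analogy for the simultaneous hardware comparators; the paper's actual bit-serial parallel-pipeline FPGA structure is not reproduced here:

```python
import numpy as np

def all_pairs_sad(ranks, domains):
    """Sum of absolute differences for every (rank, domain) pair in one
    vectorized step, a software stand-in for simultaneous pairwise
    comparators. ranks: (R, n), domains: (D, n), rows are flattened blocks."""
    return np.abs(ranks[:, None, :] - domains[None, :, :]).sum(axis=2)

def match(ranks, domains):
    """Best domain index for every rank block from the (R, D) error matrix."""
    return all_pairs_sad(ranks, domains).argmin(axis=1)
```

In hardware the R x D error matrix is produced by R x D physical comparators working in the same clock cycles, which is why the number of pairs that fit on the chip (and the channels feeding them) determines performance.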


2009 ◽  
Vol 09 (04) ◽  
pp. 511-529
Author(s):  
ALEXANDER WONG

This paper presents PECSI, a perceptually-enhanced image compression framework designed to provide high compression rates for still images while preserving visual quality. PECSI utilizes important human perceptual characteristics during image encoding stages (e.g. downsampling and quantization) and image decoding stages (e.g. upsampling and deblocking) to find a better balance between image compression and the perceptual quality of an image. The proposed framework is computationally efficient and easy to integrate into existing block-based still image compression standards. Experimental results show that the PECSI framework provides improved perceptual quality at the same compression rate as existing still image compression methods. Alternatively, the framework can be used to achieve higher compression ratios while maintaining the same level of perceptual quality.
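The pipeline such frameworks build on, downsample before block-based coding, then upsample and deblock after decoding, can be sketched minimally. PECSI's actual perceptual models are not reproduced here; the averaging filter below is only a crude stand-in for a real deblocking stage:

```python
import numpy as np

def downsample(img, f=2):
    """Average f x f neighborhoods; the low-frequency content that survives
    is what the eye is most sensitive to at low bitrates."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    """Nearest-neighbor upsampling back to the original grid."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def deblock(img):
    """Light neighborhood smoothing as a crude stand-in for a deblocking
    filter that suppresses block-boundary artifacts."""
    pad = np.pad(img, 1, mode='edge')
    return (pad[:-2, 1:-1] + pad[2:, 1:-1]
            + pad[1:-1, :-2] + pad[1:-1, 2:] + 4 * img) / 8
```

Coding the downsampled image costs a quarter of the pixels; the perceptual question the paper studies is when that trade buys more visual quality per bit than coding at full resolution.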


2020 ◽  
Vol 07 (03) ◽  
pp. 281-299
Author(s):  
Florin Leon ◽  
Petru Caşcaval ◽  
Costin Bădică

This paper addresses the issue of optimal allocation of spare modules in large series-redundant systems in order to obtain a required reliability under cost constraints. Both cases of active and standby redundancy are considered. Moreover, for a subsystem with standby redundancy, two cases are examined: in the first case, all the spares are maintained in cold state (cold standby redundancy) and, in the second one, to reduce the time needed to put a spare into operation when the active one fails, one of the spares is maintained in warm conditions. To solve this optimization problem, for the simpler case of active redundancy an analytical method based on the Lagrange multipliers technique is first applied. Then the results are improved by using Pairwise Hill Climbing, an original fine-tuning algorithm. An alternative approach is an innovative evolutionary algorithm, RELIVE, in which an individual lives for several generations and improves its fitness based on local search. These methods are especially needed in case of very large systems.
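For the active-redundancy case, the series-system reliability and a simple allocation heuristic can be sketched as follows. This is a generic greedy-fill-plus-pairwise-exchange sketch in the spirit of Pairwise Hill Climbing, not the paper's exact algorithms (and RELIVE is not reproduced):

```python
from math import prod

def reliability(p, n):
    """Series system of subsystems with active redundancy: subsystem i works
    if at least one of its n[i] parallel modules (reliability p[i]) works."""
    return prod(1 - (1 - pi) ** ni for pi, ni in zip(p, n))

def allocate(p, cost, budget):
    """Greedily add the spare with the best reliability gain per unit cost
    while the budget allows, then try moving single spares between
    subsystems (a pairwise fine-tuning pass) while that helps."""
    n = [1] * len(p)
    spent = sum(cost)
    while True:
        best = None
        for i, ci in enumerate(cost):
            if spent + ci <= budget:
                trial = n[:]; trial[i] += 1
                gain = reliability(p, trial) - reliability(p, n)
                if best is None or gain / ci > best[0]:
                    best = (gain / ci, i)
        if best is None:
            break
        n[best[1]] += 1
        spent += cost[best[1]]
    improved = True
    while improved:
        improved = False
        for i in range(len(p)):
            for j in range(len(p)):
                if i != j and n[i] > 1 and spent - cost[i] + cost[j] <= budget:
                    trial = n[:]; trial[i] -= 1; trial[j] += 1
                    if reliability(p, trial) > reliability(p, n):
                        n, spent = trial, spent - cost[i] + cost[j]
                        improved = True
    return n, reliability(p, n)
```

Standby redundancy changes the subsystem reliability formula (the spares do not age while waiting), but the same allocation skeleton applies.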


Energies ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 2088 ◽  
Author(s):  
Ji Pei ◽  
Majeed Koranteng Osman ◽  
Wenjie Wang ◽  
Desmond Appiah ◽  
Tingyun Yin ◽  
...  

Researchers have, over the past few years, been applying optimization algorithms to quickly find optimum parameter combinations during cavitation optimization design. This method, although better than the traditional trial-and-error design method, consumes a lot of computational resources, since it involves several numerical simulations to determine the critical cavitation point for each test case. Here, the traditional method for NPSHr prediction was compared to a novel alternative approach in an axially split double-suction centrifugal pump. The independent and dependent variables were interchanged at the inlet and outlet boundary conditions, and an algorithm was adapted to estimate the static pressure at the pump outlet. Experiments were conducted on an original-size pump, and the two numerical procedures agreed very well with the hydraulic and cavitation results. For every flow condition, the computation time needed to calculate the NPSHr with each method was recorded and compared. The total number of hours used by the new alternative approach to estimate the NPSHr was reduced by 54.55% at 0.6 Qd, 45.45% at 0.8 Qd, 50% at 1.0 Qd, and 44.44% at 1.2 Qd, respectively. The new method was demonstrated to be efficient and robust for real engineering applications and can therefore be applied to reduce the computation time during the application of intelligent cavitation optimization methods in pump design.

