Evaluation of Compression Algorithms for 3D Pavement Image Data

2021, Vol. 27 (4), pp. 04021042
Author(s):  
Joshua Qiang Li ◽  
Kelvin C. P. Wang ◽  
Guangwei Yang
Author(s):  
A. Akoguz ◽  
S. Bozkurt ◽  
A. A. Gozutok ◽  
G. Alp ◽  
E. G. Turan ◽  
...  

High-resolution satellite imagery comes with a fundamental problem: the large volume of telemetry data that must be stored after the downlink operation. Post-processing and image-enhancement steps applied after acquisition increase the file sizes further, making the data harder to store and slower to transmit from one site to another; compressing both the raw data and the various levels of processed data is therefore a necessity for archiving stations seeking to save storage space. The lossless data compression algorithms examined in this study aim to reduce file size without any loss of the spectral information held in the data. To this end, well-known open-source programs supporting the relevant algorithms, Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA and LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate and Deflate64, Prediction by Partial Matching (PPMd/PPM2), and the Burrows-Wheeler Transform (BWT), were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 and SPOT 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS), in order to compare how much of the image data each algorithm can compress over the sample datasets while guaranteeing lossless reconstruction.
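As a rough illustration of how such a comparison can be run, the sketch below measures lossless compression ratios on a single image file using only Python's standard-library codecs. The file name scene.tif is hypothetical, zlib stands in for Deflate, the lzma module covers LZMA/LZMA2, and bz2 serves as a readily available BWT-based coder (LZW, LZO, Deflate64, and PPMd have no standard-library equivalents), so this is not the study's actual tool chain.

```python
# Minimal sketch: lossless compression ratios of one image file, measured with
# Python's standard-library codecs only. "scene.tif" is a hypothetical path.
import bz2
import lzma
import zlib

def compression_ratios(path):
    raw = open(path, "rb").read()
    compressed = {
        "Deflate (zlib)": zlib.compress(raw, level=9),
        "LZMA": lzma.compress(raw, preset=9),
        "BWT-based (bz2)": bz2.compress(raw, compresslevel=9),
    }
    # Ratio = original size / compressed size; losslessness can be verified by
    # decompressing each blob and comparing it byte-for-byte with the original.
    return {name: len(raw) / len(blob) for name, blob in compressed.items()}

if __name__ == "__main__":
    for name, ratio in compression_ratios("scene.tif").items():
        print(f"{name}: {ratio:.2f}x")
```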


Author(s):  
S. ARIVAZHAGAN ◽  
D. GNANADURAI ◽  
J. R. ANTONY VANCE ◽  
K. M. SAROJINI ◽  
L. GANESAN

With the rapid evolution of multimedia systems, image compression algorithms are needed to achieve efficient transmission and compact storage by removing redundant information from the image data. Wavelet transforms have recently received significant attention because of their suitability for a wide range of signal and image compression applications, their lapped nature, and their computational simplicity in the form of filter-bank implementations. This paper describes in detail the implementation on a DSP processor of image compression algorithms based on the discrete wavelet transform, namely the embedded zerotree wavelet (EZW) coder, the set partitioning in hierarchical trees coder without lists (SPIHT, no-list variant), and the packetizable zerotree wavelet (PZW) coder, and analyzes their performance at different compression ratios, in terms of execution time, and under different packet losses. PSNR is used as the criterion for measuring reconstructed image quality.
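For reference, below is a minimal sketch of the PSNR criterion mentioned above, assuming 8-bit images stored as NumPy arrays; it is independent of the paper's DSP implementation.

```python
# Peak signal-to-noise ratio between an original and a reconstructed image.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB: 10 * log10(peak^2 / MSE), assuming 8-bit data by default."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage: higher PSNR means the decoded image is closer to the original, e.g.
#   print(psnr(image, decode(encode(image))))
```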


Author(s):  
Robert M. Glaeser ◽  
Bing K. Jap

The dynamical scattering effect, which can be described as the failure of the first Born approximation, is perhaps the most important factor that has prevented the widespread use of electron diffraction intensities for crystallographic structure determination. It would seem to be quite certain that dynamical effects will also interfere with structure analysis based upon electron microscope image data, whenever the dynamical effect seriously perturbs the diffracted wave. While it is normally taken for granted that the dynamical effect must be taken into consideration in materials science applications of electron microscopy, very little attention has been given to this problem in the biological sciences.


Author(s):  
Richard S. Chemock

One of the most common tasks in a typical analysis lab is the recording of images. Many analytical techniques (TEM, SEM, and metallography, for example) produce images as their primary output. Until recently, the most common method of recording images was film. Current PS/2® systems offer very large-capacity data storage devices and high-resolution displays, making it practical to work with analytical images on PS/2s and thereby sidestep the traditional film and darkroom steps. This change in operational mode offers many benefits: cost savings, throughput, archiving and searching capabilities, as well as direct incorporation of the image data into reports.

The conventional way to record images involves film, either sheet film (with its associated wet chemistry) for TEM or Polaroid® film for SEM and light microscopy. Although film is inconvenient, it has the highest quality of all available image recording techniques. The fine-grained film used for TEM has a resolution that would exceed that of a 4096 x 4096 x 16-bit digital image.
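As a quick aside (our own arithmetic, not taken from the text), the raw storage needed for one such 4096 x 4096 x 16-bit digital frame is easy to estimate:

```python
# Uncompressed size of a single 4096 x 4096 frame at 16 bits per pixel.
width, height, bits_per_pixel = 4096, 4096, 16
size_bytes = width * height * bits_per_pixel // 8
print(f"{size_bytes / 2**20:.0f} MiB per frame")  # 32 MiB
```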


Author(s):  
Klaus-Ruediger Peters

Differential hysteresis processing is a new image processing technology that provides a tool for displaying image data at any level of differential contrast resolution. This includes the maximum contrast resolution of the acquisition system, which may be 1,000 times higher than that of the visual system (16 bit versus 6 bit). All microscopes acquire high-precision contrasts at a level of <0.01-25% of the acquisition range in 16-bit to 8-bit data, but these contrasts are mostly invisible, or only partially visible, even in conventionally enhanced images. The processing principle of the differential hysteresis tool is based on the hysteresis properties of intensity variations within an image.

Differential hysteresis image processing moves a cursor of selected intensity range (the hysteresis range) along lines through the image data, reading each successive pixel intensity. The midpoint of the cursor provides the output data. If the intensity value of the following pixel falls outside the current cursor endpoint values, the cursor follows the data with either its top or its bottom; if the pixel's intensity value falls within the cursor range, the cursor maintains its intensity value.
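A minimal sketch of the cursor rule described above, applied row by row to a 2-D intensity array, is given below. The function and parameter names are illustrative, and the published tool certainly involves more than this single pass.

```python
# Hysteresis-cursor sketch: a window of fixed width slides along each row,
# following the data with its top or bottom only when a pixel escapes it;
# the window midpoint is written to the output.
import numpy as np

def hysteresis_rows(image, hysteresis_range):
    img = image.astype(np.float64)
    out = np.empty_like(img)
    half = hysteresis_range / 2.0
    for r, row in enumerate(img):
        lo = row[0] - half                    # cursor bottom
        hi = row[0] + half                    # cursor top
        for c, value in enumerate(row):
            if value > hi:                    # pixel above: cursor follows with its top
                hi = value
                lo = value - hysteresis_range
            elif value < lo:                  # pixel below: cursor follows with its bottom
                lo = value
                hi = value + hysteresis_range
            # pixel inside the cursor: cursor (and its midpoint) stays put
            out[r, c] = (lo + hi) / 2.0       # midpoint is the output value
    return out
```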


Author(s):  
M.F. Schmid ◽  
R. Dargahi ◽  
M. W. Tam

Electron crystallography is an emerging field for structure determination, as evidenced by a number of membrane proteins that have been solved to near-atomic resolution. Advances in specimen preparation and in data acquisition with a 400 kV microscope under computer-controlled spot scanning mean that our ability to record electron image data will outstrip our capacity to analyze it. The computed Fourier transform of these images must be processed to provide a direct measurement of the amplitudes and phases needed for 3-D reconstruction.

In anticipation of this processing bottleneck, we have written a program that incorporates a menu- and mouse-driven procedure for auto-indexing and refining the reciprocal lattice parameters in the computed transform of a crystal image. It is linked to subsequent steps of image processing by a system of databases and spawned child processes; data transfer between different program modules no longer requires manual data entry. The progress of the reciprocal lattice refinement is monitored visually and quantitatively. If desired, the processing is carried through the lattice distortion correction (unbending) steps automatically.
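For orientation, the sketch below (not the authors' program) shows how the amplitudes and phases of a crystal image's computed Fourier transform can be obtained with NumPy; the array name crystal is hypothetical.

```python
# Amplitudes and phases of a 2-D image transform; reciprocal-lattice
# reflections appear as peaks in the amplitude map, and their positions are
# what an auto-indexing routine would locate and refine.
import numpy as np

def amplitudes_and_phases(crystal):
    transform = np.fft.fftshift(np.fft.fft2(crystal))  # centre the zero frequency
    return np.abs(transform), np.angle(transform)
```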


Author(s):  
B. Roy Frieden

Despite the skill and determination of electro-optical system designers, the images acquired using their best designs often suffer from blur and noise. The aim of an "image enhancer" such as myself is to improve these poor images, usually by digital means, so that they better resemble the true "optical object" input to the system. This problem is notoriously ill-posed: any direct approach to inverting the image data suffers strongly from the presence of even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negatively correlated, so that the output oscillates spatially up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
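To make the ill-posedness concrete, the toy example below (our own illustration, not the author's method) blurs and lightly corrupts a 1-D "object", then compares naive inverse filtering with a Wiener-style regularized inverse of the kind suggested by statistical communication theory.

```python
# Direct inversion of a blurred, noisy signal amplifies noise enormously;
# a regularized (Wiener-style) inverse stays close to the true object.
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n); x[100:130] = 1.0                         # true "optical object"
h = np.exp(-0.5 * (np.arange(n) - n / 2) ** 2 / 3.0 ** 2)
h /= h.sum()                                              # Gaussian blur kernel
H = np.fft.fft(np.fft.ifftshift(h))
y = np.fft.ifft(np.fft.fft(x) * H).real + 0.01 * rng.standard_normal(n)

Y = np.fft.fft(y)
naive = np.fft.ifft(Y / H).real                           # direct inversion: noise blows up
wiener = np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + 1e-3)).real  # regularized inverse

print("max |error|, naive  :", np.max(np.abs(naive - x)))
print("max |error|, Wiener :", np.max(np.abs(wiener - x)))
```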


Author(s):  
R.D. Leapman ◽  
S.B. Andrews

Elemental mapping of biological specimens by electron energy loss spectroscopy (EELS) can be carried out both in the scanning transmission electron microscope (STEM) and in the energy-filtering transmission electron microscope (EFTEM). Choosing between these two approaches is complicated by the variety of specimens that are encountered (e.g., cells or macromolecules; cryosections, plastic sections, or thin films) and by the range of elemental concentrations that occur (from a few percent down to a few parts per million). Our aim here is to consider the strengths of each technique for determining elemental distributions in these different types of specimen.

On the one hand, it is desirable to collect a parallel EELS spectrum at each point in the specimen using the 'spectrum-imaging' technique in the STEM. This minimizes the electron dose and retains as much quantitative information as possible about the inelastic scattering processes in the specimen. On the other hand, collection times in the STEM are often limited by the detector read-out and by the available probe current. For example, a 256 x 256 pixel image in the STEM takes at least 30 minutes to acquire with a read-out time of 25 ms. The EFTEM can collect parallel image data with slow-scan CCD array detectors from as many as 1024 x 1024 pixels with integration times of a few seconds. Furthermore, the EFTEM has an available beam current in the µA range, compared with just a few nA in the STEM. Indeed, for some applications this can result in a factor of ~100 shorter acquisition time for the EFTEM relative to the STEM. However, the EFTEM provides much less spectral information, so the technique of choice ultimately depends on the requirements for processing the spectrum at each pixel (viz., isolated edges vs. overlapping edges, uniform thickness vs. non-uniform thickness, molar vs. millimolar concentrations).
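A back-of-envelope check of the acquisition figures quoted above, using only the numbers given in the text (the 5 s EFTEM integration time is an assumed stand-in for "a few seconds"):

```python
# STEM spectrum-imaging time is dominated by the per-pixel read-out.
stem_pixels = 256 * 256
stem_readout_s = 0.025                              # 25 ms read-out per pixel
print(f"STEM: ~{stem_pixels * stem_readout_s / 60:.0f} min")   # ~27 min, before any overhead

# EFTEM records all pixels in parallel on a slow-scan CCD.
eftem_pixels = 1024 * 1024
eftem_integration_s = 5                             # assumed value for "a few seconds"
print(f"EFTEM: {eftem_integration_s} s for {eftem_pixels} pixels per energy-filtered frame")
```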

