Method for Fast and Complexity-Reduced Asymmetric Image Compression

2011 ◽  
Vol 110 (4) ◽  
pp. 117-120
Author(s):  
I. Bilinskis ◽  
A. Skageris ◽  
K. Sudars

Standardised image compression/reconstruction algorithms are symmetric in the sense that the computational complexities of the image compression and reconstruction stages are almost equal. An approach to asymmetric image compression is suggested and discussed. Image compression performed according to this approach is extremely simple, and the computational burden of the compression/reconstruction task is shifted asymmetrically to the image reconstruction stage. This approach, based on typical DASP methods, is described and discussed. The described image compression/reconstruction algorithms have been evaluated both by computer simulations and by experimental studies, and the obtained results are given. Ill. 2, bibl. 3 (in English; abstracts in English and Lithuanian). http://dx.doi.org/10.5755/j01.eee.110.4.303
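The abstract does not detail the DASP-based method itself, so the sketch below (all function names hypothetical) only illustrates the asymmetry being described: a trivially cheap encoder that keeps a pseudo-random subset of samples, paired with a computationally heavy decoder that fills in the missing samples iteratively under a smoothness assumption.

```python
import random

def compress(signal, keep_ratio, seed=42):
    """Encoder: trivially cheap - keep a pseudo-random subset of samples.
    The decoder regenerates the same sample positions from the seed."""
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(signal)), int(len(signal) * keep_ratio)))
    return [signal[i] for i in idx], idx

def reconstruct(samples, idx, length, iterations=200):
    """Decoder: computationally heavy - iteratively fill missing samples
    by neighbour averaging (a crude smoothness prior)."""
    x = [0.0] * length
    known = set(idx)
    for i, v in zip(idx, samples):
        x[i] = v
    for _ in range(iterations):
        for i in range(length):
            if i not in known:
                left = x[i - 1] if i > 0 else x[i + 1]
                right = x[i + 1] if i < length - 1 else x[i - 1]
                x[i] = (left + right) / 2.0
    return x
```

For smooth signals the iterative fill converges to piecewise-linear interpolation between the kept samples, so most of the cost lands on the decoder, as the paper intends.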

Author(s):  
Santosh Bhattacharyya

Three-dimensional microscopic structures play an important role in the understanding of various biological and physiological phenomena. Structural details of neurons, such as the density, caliber, and volumes of dendrites, are important in understanding the physiological and pathological functioning of nervous systems. Even so, many of the widely used stains in biology and neurophysiology are absorbing stains, such as horseradish peroxidase (HRP), and yet most of the iterative, constrained 3D optical image reconstruction research has concentrated on fluorescence microscopy. It is clear that iterative, constrained 3D image reconstruction methodologies are needed for transmitted light brightfield (TLB) imaging as well. One of the difficulties in doing so, in the past, has been in determining the point spread function of the system. We have been developing several variations of iterative, constrained image reconstruction algorithms for TLB imaging. Some of our early testing with one of them was reported previously. These algorithms are based on a linearized model of TLB imaging.


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 164
Author(s):  
Dongxu Wu ◽  
Fusheng Liang ◽  
Chengwei Kang ◽  
Fengzhou Fang

Optical interferometry plays an important role in topographical surface measurement and characterization in precision/ultra-precision manufacturing. An appropriate surface reconstruction algorithm is essential for obtaining accurate topography information from the digitized interferograms. However, the performance of a surface reconstruction algorithm in interferometric measurements is influenced by environmental disturbances and system noise. This paper presents a comparative analysis of three algorithms commonly used for coherence envelope detection in vertical scanning interferometry: the centroid method, the fast Fourier transform (FFT), and the Hilbert transform (HT). Numerical analysis and experimental studies were carried out to evaluate the performance of the different envelope detection algorithms in terms of measurement accuracy, speed, and noise resistance. Step height standards were measured using a developed interferometer, and the step profiles were reconstructed by the different algorithms. The results show that the centroid method has a higher measurement speed than the FFT and HT methods, but it can only provide acceptable measurement accuracy at a low noise level. The FFT and HT methods outperform the centroid method in terms of noise immunity and measurement accuracy. Although the FFT and HT methods provide similar measurement accuracy, the HT method has a superior measurement speed compared to the FFT method.
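As a rough sketch of the HT envelope detection step (not the authors' implementation, and using a naive O(n²) DFT rather than an FFT for self-containment), the code below forms the analytic signal by zeroing the negative-frequency half of the spectrum and takes its magnitude to recover the fringe envelope:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                for k in range(n)) for i in range(n)]

def idft(X):
    """Inverse DFT, normalized by 1/n."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * i * k / n)
                for k in range(n)) / n for i in range(n)]

def hilbert_envelope(signal):
    """Envelope via the analytic signal: keep DC (and Nyquist) as-is,
    double the positive frequencies, zero the negative ones."""
    n = len(signal)
    spectrum = dft(signal)
    weights = [0.0] * n
    weights[0] = 1.0
    if n % 2 == 0:
        weights[n // 2] = 1.0
        for k in range(1, n // 2):
            weights[k] = 2.0
    else:
        for k in range(1, (n + 1) // 2):
            weights[k] = 2.0
    analytic = idft([s * w for s, w in zip(spectrum, weights)])
    return [abs(v) for v in analytic]
```

Applied to a simulated white-light fringe (a carrier cosine under a Gaussian envelope), the magnitude of the analytic signal recovers the envelope, whose peak position gives the surface height in vertical scanning interferometry.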


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
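The soft-threshold filtering at the heart of the method can be sketched generically; the 1-D toy below (a stand-in, not the paper's TDM-STF update, which operates on CT images inside an iterative loop) shrinks the first differences of a signal, zeroing small noise-like variations while preserving large jumps:

```python
def soft_threshold(v, t):
    """Shrinkage operator: move v toward zero by t; |v| <= t becomes exactly 0."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def stf_smooth(x, t):
    """Soft-threshold the first differences of x, then re-integrate.
    Small differences (noise) are suppressed; large jumps (edges) survive."""
    diffs = [soft_threshold(x[i + 1] - x[i], t) for i in range(len(x) - 1)]
    y = [x[0]]
    for d in diffs:
        y.append(y[-1] + d)
    return y
```

On the input `[0.0, 0.0, 0.1, 5.0, 5.1, 5.0]` with threshold 0.2, the small ripples are flattened while the step of height ~5 is kept (slightly shrunk), which is exactly the edge-preserving behavior that makes soft thresholding attractive in CS-based CT reconstruction.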


2012 ◽  
Vol 155-156 ◽  
pp. 440-444
Author(s):  
He Yan ◽  
Xiu Feng Wang

The JPEG2000 algorithm was developed based on DWT techniques, showing how results achieved in different areas of information technology can be applied to enhance performance. Wavelets have become a popular technology for information redistribution in high-performance image compression algorithms. Lossy compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements and improved compression rates, while minimizing the loss of image quality.
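To make the lossy trade-off concrete, here is a minimal one-level Haar DWT sketch (a simplification: JPEG2000 itself uses biorthogonal 5/3 or 9/7 wavelets, multiple decomposition levels, quantization, and entropy coding). Discarding small detail coefficients reduces the data to store at the cost of exact reconstruction:

```python
def haar_forward(x):
    """One-level Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail). len(x) must be even."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward when no coefficients were altered."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def lossy_compress(x, threshold):
    """Zero small detail coefficients; reconstruction is then approximate,
    with error bounded by the threshold."""
    avg, det = haar_forward(x)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return avg, det
```

Zeroed detail coefficients compress well (long runs of zeros), which is the basic mechanism by which wavelet coders trade a bounded, visually small reconstruction error for a large reduction in storage.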


2019 ◽  
Vol 28 (1) ◽  
pp. 426-435 ◽  
Author(s):  
Zhengzhi Liu ◽  
Stylianos Chatzidakis ◽  
John M. Scaglione ◽  
Can Liao ◽  
Haori Yang ◽  
...  

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3701 ◽  
Author(s):  
Jin Zheng ◽  
Jinku Li ◽  
Yi Li ◽  
Lihui Peng

Electrical Capacitance Tomography (ECT) image reconstruction has been developed for decades and has made great achievements, but there is still a need for a new theoretical framework to make it better and faster. In recent years, machine learning theory has been introduced in the ECT area to solve the image reconstruction problem. However, there is still no public benchmark dataset in the ECT field for the training and testing of machine learning-based image reconstruction algorithms. A public benchmark dataset would provide a standard framework to evaluate and compare the results of different image reconstruction methods. In this paper, a benchmark dataset for ECT image reconstruction is presented. Like ImageNet, whose contribution transformed machine learning research, this benchmark dataset is intended to help the community investigate new image reconstruction algorithms, since the relationship between permittivity distribution and capacitance can be better mapped. In addition, different machine learning-based image reconstruction algorithms can be trained and tested on the unified dataset, and the results can be evaluated and compared under the same standard, making ECT image reconstruction research more open and paving the way for breakthroughs.

