FPGA Implementation of Adaptive Multiplier-Based Linear Image Interpolation

This work presents an image interpolation scheme for multimedia applications based on adaptive multiplier-based stepwise linear interpolation with a clamp filter. Image interpolation is also termed image up-scaling. Enlarging an image introduces vacant pixel positions, and these empty positions degrade image quality. To maintain quality, new pixel values are therefore computed by interpolation to fill the vacant positions. In adaptive interpolation techniques, edge pixels are identified and filtered prior to the interpolation process, which improves the quality of the interpolated image; however, an adaptive scheme also increases system complexity. To reduce complexity, this work uses low-complexity stepwise linear interpolation, and to maintain quality it uses multiplier-based linear stepwise interpolation (MBLSI) together with an edge-enhancement technique. Experimental results demonstrate that the proposed design is less complex than related work while preserving image quality. It utilizes 275 LUTs and achieves an average peak signal-to-noise ratio (PSNR) of 20.44 dB and a structural similarity index (SSIM) of 0.8250, increasing PSNR by 0.89 dB over conventional multiplier-based stepwise linear interpolation. Furthermore, the proposed interpolation algorithm uses fewer field-programmable gate array (FPGA) resources than other related interpolation techniques.
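For illustration, a minimal NumPy sketch of the idea behind linear up-scaling and its PSNR evaluation: vacant positions created by 2x enlargement are filled with neighbour averages. This is a software model under simple assumptions (wrap-around borders, plain averaging), not the paper's MBLSI hardware design.

```python
import numpy as np

def upscale_2x_linear(img: np.ndarray) -> np.ndarray:
    """Double each dimension, filling vacant positions with neighbour averages."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.float64)
    out[::2, ::2] = img                                    # keep original pixels
    out[::2, 1::2] = (img + np.roll(img, -1, axis=1)) / 2  # horizontal fill
    out[1::2, :] = (out[::2, :] + np.roll(out[::2, :], -1, axis=0)) / 2  # vertical fill
    return out

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```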

Sensors, 2021, Vol. 21 (16), p. 5540
Authors: Nayeem Hasan, Md Saiful Islam, Wenyu Chen, Muhammad Ashad Kabir, Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT was selected based on an analysis of the trade-off between watermark imperceptibility and embedding capacity at various levels of decomposition. A DCT is applied to the selected area, and the image coefficients are gathered into a single vector using a zig-zag scan. We utilized the same random bit sequence as both the watermark and the seed for the embedding-zone coefficients. The quality of the reconstructed image was measured in terms of bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrate that the proposed scheme is highly robust under different types of image-processing attacks. Several attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were applied to watermarked images, and the proposed method outstripped existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of reconstructed image quality, demonstrating high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
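A hedged sketch of the embedding pipeline described above, using PyWavelets and SciPy: a second-level DWT, a DCT over one sub-band, and an additive seeded pseudo-random watermark. The sub-band choice, embedding strength alpha, and the plain raster flattening (standing in for the zig-zag scan) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import pywt
from scipy.fftpack import dct, idct

def embed_watermark(cover: np.ndarray, seed: int = 42, alpha: float = 8.0) -> np.ndarray:
    # second-level DWT; embed in the level-2 horizontal detail band (assumed choice)
    cA2, (cH2, cV2, cD2), details1 = pywt.wavedec2(cover.astype(np.float64), 'haar', level=2)
    band = dct(dct(cH2, axis=0, norm='ortho'), axis=1, norm='ortho')
    # the same seed drives both the watermark bits and the embedding zone
    bits = np.random.default_rng(seed).integers(0, 2, size=band.size) * 2 - 1
    flat = band.flatten()          # raster order here; the paper uses a zig-zag scan
    flat += alpha * bits           # additive spread-spectrum style embedding
    cH2_marked = idct(idct(flat.reshape(band.shape), axis=1, norm='ortho'),
                      axis=0, norm='ortho')
    return pywt.waverec2([cA2, (cH2_marked, cV2, cD2), details1], 'haar')
```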


Authors: Jelena Vlaović, Drago Žagar, Snježana Rimac-Drlje, Mario Vranješ

With the development of Video on Demand applications enabled by the availability of high-speed internet access, adaptive streaming algorithms have been developing and improving. The focus is on improving users' Quality of Experience (QoE) and taking it into account as one of the parameters for the adaptation algorithm. Users often experience changing network conditions, so the goal is to ensure stable video playback with a satisfying QoE level. Although subjective Video Quality Assessment (VQA) methods provide more accurate results regarding users' QoE, objective VQA methods cost less and are less time-consuming. In this article, nine different objective VQA methods are compared on a large set of video sequences with various spatial and temporal activities: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), MultiScale Structural Similarity Index (MS-SSIM), Video Quality Metric (VQM), Mean Sum of Differences (DELTA), Mean Sum of Absolute Differences (MSAD), Mean Squared Error (MSE), Netflix Video Multimethod Assessment Fusion (Netflix VMAF) and Visual Signal-to-Noise Ratio (VSNR). The video sequences used for testing were encoded with H.264/AVC at twelve different target coding bitrates and three different spatial resolutions (190 sequences in total). In addition to objective quality assessment, subjective quality assessment was performed for these sequences. All results acquired by objective VQA methods were compared with subjective Mean Opinion Score (MOS) results using the Pearson Linear Correlation Coefficient (PLCC). Measurement results obtained on this large set of video sequences with different spatial resolutions show that methods like SSIM and VQM correlate better with MOS results than PSNR, MS-SSIM, VSNR, DELTA, MSE, VMAF and MSAD. However, the PLCC results for SSIM and VQM (0.7799 and 0.7734, respectively) are too low for these methods to replace subjective testing in streaming services. These results suggest that more efficient VQA methods should be developed for use in streaming test procedures and to support the video segmentation process. Furthermore, comparing results across spatial resolutions shows that, at lower target coding bitrates, video sequences encoded at lower spatial resolutions achieve higher quality than those encoded at higher spatial resolutions at the same target bitrate, particularly for video sequences with higher spatial and temporal information.
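The correlation step itself is straightforward; a short sketch with SciPy, using placeholder score arrays rather than the article's data:

```python
import numpy as np
from scipy.stats import pearsonr

mos = np.array([4.2, 3.8, 2.5, 1.9, 3.1])  # hypothetical subjective MOS values
vqm = np.array([1.1, 1.6, 3.0, 3.8, 2.2])  # hypothetical objective VQM scores

plcc, _ = pearsonr(vqm, mos)               # Pearson Linear Correlation Coefficient
print(f"PLCC = {plcc:.4f}")                # |PLCC| close to 1 => strong agreement
```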


Authors: Diptasree Debnath, Emlon Ghosh, Barnali Gupta Banik

Steganography is a widely used technique for digital data hiding, and image steganography is the most popular kind. In this article, a novel key-based blind method for RGB image steganography in which multiple images can be hidden simultaneously is described. The proposed method is based on Discrete Cosine Transformation (DCT) and Discrete Wavelet Transformation (DWT), which provide enhanced security and improve the quality of the stego image. Here, the cover image is taken as RGB, although the method can be applied to grayscale images as well. The fundamental concept of visual cryptography is utilized to increase capacity to a great extent. To make the method more robust and imperceptible, a pseudo-random number sequence and a correlation coefficient are used for embedding and extraction of the secrets, respectively. The robustness of the method is tested against steganalysis attacks such as cropping, rotation, resizing, noise addition, and histogram equalization. The method has been applied to multiple sets of images, and the quality of the resulting images has been analyzed through various metrics, namely Peak Signal to Noise Ratio, Structural Similarity Index, Structural Content, and Maximum Difference. The results obtained are very promising and have been compared with existing methods to demonstrate the method's efficiency.
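A minimal sketch of the correlation-based blind extraction idea mentioned above: each hidden bit is recovered by checking which of two seeded pseudo-random sequences correlates better with the marked coefficients. The seeds and sequence generation shown are illustrative assumptions.

```python
import numpy as np

def extract_bit(marked_coeffs: np.ndarray, seed0: int, seed1: int) -> int:
    """Blind extraction: no cover image needed, only the shared key seeds."""
    pn0 = np.random.default_rng(seed0).standard_normal(marked_coeffs.size)
    pn1 = np.random.default_rng(seed1).standard_normal(marked_coeffs.size)
    c0 = np.corrcoef(marked_coeffs.ravel(), pn0)[0, 1]
    c1 = np.corrcoef(marked_coeffs.ravel(), pn1)[0, 1]
    return int(c1 > c0)   # the better-correlated sequence decides the bit
```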


2013, Vol. 13 (01), p. 1350006
Authors: Rajani Gupta, Prashant Bansod, R. S. Gamad

This paper analyzes the compression quality of true-color medical images from echocardiography (ECHO), X-radiation (X-ray) and computed tomography (CT), and compares biomedical images of various sizes compressed with two lossy techniques, set partitioning in hierarchical trees (SPIHT) and the discrete cosine transform (DCT), against the original images. The study also evaluates the results after analyzing various objective parameters associated with the image. The objective of this analysis is to exhibit the effect of compression ratio on the absolute average difference (AAD), cross correlation (CC), image fidelity (IF), mean square error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) of images compressed by the SPIHT and DCT techniques. The results signify that the quality of the compressed image depends on the resolution of the underlying structure, with CT found to be better than the other image modalities. The X-ray compression results are equivalent under both techniques. For large biomedical images, SPIHT compression of ECHO gives results comparable to CT and X-ray, while the corresponding DCT results are substandard. For comparatively smaller ECHO images, the results are not as good as those for X-ray and CT under either compression technique. The quality measurement of the compressed images was implemented in MATLAB.
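Several of the listed measures follow directly from their standard definitions; a NumPy sketch (the IF formula shown is the common one, assumed here):

```python
import numpy as np

def compression_metrics(orig: np.ndarray, comp: np.ndarray) -> dict:
    x, y = orig.astype(np.float64), comp.astype(np.float64)
    err = x - y
    mse = np.mean(err ** 2)
    return {
        "AAD":  np.mean(np.abs(err)),                     # absolute average difference
        "CC":   np.corrcoef(x.ravel(), y.ravel())[0, 1],  # cross correlation
        "IF":   1.0 - np.sum(err ** 2) / np.sum(x ** 2),  # image fidelity
        "MSE":  mse,
        "PSNR": 10 * np.log10(255.0 ** 2 / mse),          # for 8-bit data
    }
```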


2006, Vol. 03 (02), pp. 139-159
Authors: S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, F. E. Abd El-Samie

In this paper, an adaptive algorithm is suggested for the implementation of polynomial-based image interpolation techniques such as bilinear, bicubic, cubic spline and cubic O-MOMS interpolation. The algorithm is based on minimizing the squared estimation error at each pixel in the interpolated image by adaptively estimating the distance of the pixel to be estimated from its neighbors. The adaptation at each pixel is performed iteratively to yield the best estimate of the pixel value. The algorithm takes into consideration the mathematical model by which a low-resolution (LR) image is obtained from a high-resolution (HR) image. It is compared with traditional polynomial-based interpolation techniques and with warped-distance interpolation techniques, as well as with algorithms used in commercial interpolation software such as the ACDSee and Photopro programs. Results show that the suggested adaptive algorithm is superior to traditional techniques in terms of Peak Signal to Noise Ratio (PSNR) and has a higher ability to preserve edges. The computational cost of the adaptive algorithm is studied and found to be moderate.
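For context, a one-dimensional sketch of the warped-distance idea that such adaptive schemes build on: the fractional position between two samples is shifted according to local edge asymmetry before the usual linear blend. The asymmetry measure and warp strength k shown are assumptions, not the paper's adaptively estimated values.

```python
import numpy as np

def warped_linear(x: np.ndarray, i: int, s: float, k: float = 1.0) -> float:
    """Interpolate between x[i] and x[i+1] at offset s in [0, 1]; needs 1 <= i <= len(x) - 3."""
    # local asymmetry: compares intensity variation on either side of the interval
    a = (abs(float(x[i + 1]) - float(x[i - 1]))
         - abs(float(x[i + 2]) - float(x[i]))) / 255.0
    s_w = s - k * a * s * (s - 1)          # warp the distance toward the smoother side
    s_w = min(max(s_w, 0.0), 1.0)
    return (1 - s_w) * float(x[i]) + s_w * float(x[i + 1])
```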


Authors: Ahmed Nagm, Mohammed Safy

Integrated healthcare systems require the transmission of medical images between medical centres. Watermarks in such images have become important for protecting patient privacy. However, some important issues should be considered when watermarking an image: among them, the watermark should be robust against attacks and should not affect image quality. In this paper, a watermarking approach employing a robust dynamic secret code is proposed. The approach processes every pixel of the digital image, not only the pixels of the regions of non-interest, while preserving image details. Its performance is evaluated using several measures, namely the Mean Square Error (MSE), Mean Absolute Error (MAE), Peak Signal to Noise Ratio (PSNR), Universal Image Quality Index (UIQI) and Structural Similarity Index (SSIM). The proposed approach has been tested and shown to be robust in detecting intentional attacks that alter the image, specifically its most important diagnostic information.
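Of the listed measures, UIQI is the least commonly available as a library one-liner; a global-form sketch from its published definition (windowed variants average this quantity over local blocks):

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Image Quality Index (Wang & Bovik), computed globally."""
    x, y = x.astype(np.float64).ravel(), y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return (4 * cov * mx * my) / ((x.var() + y.var()) * (mx ** 2 + my ** 2))
```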


2018, Vol. 5, pp. 58-67
Authors: Milan Chikanbanjar

Digital images are a major medium for transmitting visual information, but the presence of noise corrupts the image, so the received image needs to be processed before being used in an application. Image denoising manipulates the data to remove noise and produce a good-quality image that retains details. Quantitative measures, mainly MSE (Mean Square Error), PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), have been used to show the improvement in the quality of the restored image achieved by various thresholding techniques. Here, non-linear wavelet-transform denoising of natural images is studied, analyzed and compared using thresholding techniques such as soft, hard, semi-soft, LevelShrink, SUREShrink, VisuShrink and BayesShrink. On most of the tests, the PSNR and SSIM values for the LevelShrink hard thresholding method are higher than for the other thresholding methods. For instance, the PSNR and SSIM values of the Lena image for the VisuShrink hard, VisuShrink soft, VisuShrink semi-soft, LevelShrink hard, LevelShrink soft, LevelShrink semi-soft, SUREShrink and BayesShrink thresholding methods at a noise variance of 10 are 23.82, 16.51, 23.25, 24.48, 23.25, 20.67, 23.42, 23.14 and 0.28, 0.28, 0.28, 0.29, 0.22, 0.25, 0.16 respectively, showing that the LevelShrink hard method achieves the highest values. Thus, it can be stated that the LevelShrink hard thresholding method performs better on most tests.
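A hedged sketch of the shrinkage step these comparisons rest on, using PyWavelets: a VisuShrink-style universal threshold applied in soft or hard mode. LevelShrink would instead scale the threshold per decomposition level; the wavelet and level choices here are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(img: np.ndarray, mode: str = 'hard', level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(img.astype(np.float64), 'db4', level=level)
    # noise estimate from the finest diagonal band (robust median estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))      # VisuShrink universal threshold
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode=mode) for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, 'db4')
```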


2021, Vol. 36 (1), pp. 642-649
Authors: G. Sharvani Reddy, R. Nanmaran, Gokul Paramasivam

Aim: Images are among the most powerful tools for conveying information, but a captured image can be affected by blur and noise from the environment, which degrade its quality. Image restoration is a technique in image processing whereby a degraded image is restored or recovered as closely as possible to the original image. Materials and Methods: In this research, the Lucy-Richardson algorithm is used for restoring blurred and noisy images in MATLAB, and the proposed work is compared with the Wiener filter; the sample size for each group is 30. Results: Performance was compared using three parameters, Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) and Normalized Correlation (NC), where higher values indicate better restoration. Lucy-Richardson provides a mean PSNR of 10.4086 dB, mean SSIM of 0.4173 and mean NC of 0.7433, while the Wiener filter provides a mean PSNR of 6.3979 dB, mean SSIM of 0.3016 and mean NC of 0.3276. Conclusion: Based on the experimental results and statistical analysis using an independent-samples t-test, image restoration with the Lucy-Richardson algorithm performs significantly better than the Wiener filter in restoring degraded images in terms of PSNR (P<0.001) and SSIM (P<0.001).
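Both baselines are available in scikit-image; a minimal sketch with an assumed box-blur PSF and iteration count, not the study's exact degradation model (parameter names follow recent scikit-image releases):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

psf = np.ones((5, 5)) / 25                       # assumed blur kernel
img = data.camera().astype(np.float64) / 255.0
blurred = convolve2d(img, psf, mode='same')      # degraded observation

restored_rl = restoration.richardson_lucy(blurred, psf, num_iter=30)
restored_wn = restoration.wiener(blurred, psf, balance=0.1)
```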


2020, Vol. 10 (17), p. 5898
Authors: Qirong Bu, Jie Luo, Kuan Ma, Hongwei Feng, Jun Feng

In this paper, we propose an enhanced pix2pix dehazing network that generates clear images without relying on a physical scattering model. The network is a generative adversarial network (GAN) that incorporates multiple guided filter layers. First, the hazy input is smoothed with different smoothing kernels of the guided filter layers to obtain high-frequency features. Then, these features are embedded in higher dimensions of the network and concatenated with the output of the generator's encoder. Finally, Visual Geometry Group (VGG) features are introduced as a loss function to improve the restoration of texture information and generate better haze-free images. We conduct experiments on the NYU-Depth, I-HAZE and O-HAZE datasets. On the indoor test dataset, the enhanced pix2pix dehazing network improves the Peak Signal-to-Noise Ratio (PSNR) by 1.22 dB and the Structural Similarity Index Metric (SSIM) by 0.01 over the second-best comparison method. Extensive experiments demonstrate that the proposed method performs well for image dehazing.
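The guided filter at the heart of those smoothing layers is compact enough to sketch in NumPy (He et al.'s formulation; the radius r and regularizer eps stand in for the different "smoothing kernels" and are assumptions here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I: np.ndarray, p: np.ndarray, r: int = 8, eps: float = 1e-2) -> np.ndarray:
    """Edge-preserving smoothing of p guided by I; p - output gives high-frequency detail."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)    # per-pixel linear coefficients
    b = mp - a * mI
    return mean(a) * I + mean(b)
```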


Segmentation separates an image into different sections based on the needs of the user, and is carried out on an image until the region of interest (ROI) of an object is extracted. Segmentation reliability indicates how well the various segmentation techniques perform. In this paper, various segmentation methods are proposed and the quality of segmentation is verified using quality metrics such as Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), Edge Preservation Index (EPI) and Structural Similarity Index Metric (SSIM).
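SNR and EPI are the two listed metrics without standard library one-liners; a sketch from common definitions (the Laplacian-correlation EPI shown is one usual formulation, assumed here):

```python
import numpy as np
from scipy.ndimage import laplace

def snr_db(ref: np.ndarray, test: np.ndarray) -> float:
    """Signal-to-noise ratio in dB: signal power over error power."""
    ref, test = ref.astype(np.float64), test.astype(np.float64)
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - test) ** 2))

def epi(ref: np.ndarray, test: np.ndarray) -> float:
    """Edge Preservation Index: correlation of high-pass (Laplacian) responses."""
    dr, dt = laplace(ref.astype(np.float64)), laplace(test.astype(np.float64))
    dr, dt = dr - dr.mean(), dt - dt.mean()
    return np.sum(dr * dt) / np.sqrt(np.sum(dr ** 2) * np.sum(dt ** 2))
```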

