Video Steganography Based on Shi-Tomasi Corner Detection and Least Significant Bit Algorithm

Author(s):  
Kaviya K ◽  
Mridula Bala ◽  
Swathy N P ◽  
Chittam Jeevana Jyothi ◽  
S.Ewins Pon Pushpa

Today, digital and social media platforms are enormously popular, creating a demand to transmit information very securely. The data exchanged daily can fall victim to hackers. One effective countermeasure is steganography combined with cryptography. In this paper, video steganography and cryptography concepts are employed: a secret text is hidden behind a ‘certain frame’ of the video using Shi-Tomasi corner-point detection and the Least Significant Bit (LSB) algorithm. The Shi-Tomasi algorithm is used to detect the corner points of each frame. In the proposed work, a frame with a large number of corner points is chosen from the video. The secret text is then embedded at the detected corner points using the LSB algorithm and transmitted. At the receiver end, decryption is performed in the reverse order of encryption to retrieve the secret data. As a technical contribution, the average variations of Mean Squared Error, Peak Signal to Noise Ratio, and Structural Similarity Index between original and embedded frames are analysed and found to be 0.002, 0.016 and 0.0018, respectively.
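The embedding and extraction steps described in this abstract can be sketched as follows, assuming the corner-pixel indices have already been obtained (e.g. via OpenCV's goodFeaturesToTrack, which implements Shi-Tomasi detection). Here "pixels" is a flat list of 8-bit gray values, and all helper names are hypothetical, not the authors' code:

```python
def embed_lsb(pixels, corner_indices, message):
    # Convert the secret text to a bit stream, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    out = list(pixels)
    # Overwrite the least significant bit of each detected corner pixel.
    for idx, bit in zip(corner_indices, bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_lsb(pixels, corner_indices, n_chars):
    # Reverse of embedding: collect LSBs at the same corner positions
    # and regroup them into bytes.
    bits = [pixels[i] & 1 for i in corner_indices[:n_chars * 8]]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()
```

Because only the least significant bit of each corner pixel changes, every embedded pixel differs from the original by at most 1, which is consistent with the very small MSE/PSNR/SSIM variations the abstract reports.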

2019 ◽  
Vol 2019 ◽  
pp. 1-25
Author(s):  
Yanzhu Hu ◽  
Jiao Wang ◽  
Xinbo Ai ◽  
Xu Zhuang

In order to realize multithreshold segmentation of images, an improved segmentation algorithm based on graph cut theory using the artificial bee colony algorithm is proposed. A new weight function based on the gray level and location of pixels is constructed to calculate the probability that each pixel belongs to the same region. On this basis, a new cost function is constructed that can handle both square and nonsquare images. The optimal threshold of the image is then obtained by searching for the minimum of the cost function with the artificial bee colony algorithm. A public segmentation dataset and widely used test images were evaluated separately. Experimental results show that the proposed algorithm achieves larger Information Entropy (IE), higher Peak Signal to Noise Ratio (PSNR), higher Structural Similarity Index (SSIM), smaller Root Mean Squared Error (RMSE), and shorter running time than other image segmentation algorithms.
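The pattern of "minimise a cost function over candidate thresholds" can be illustrated with a toy stand-in: here the graph-cut-based cost and the artificial bee colony search are both replaced by a within-class-variance cost minimised exhaustively, which is not the paper's method but shows the same optimisation structure:

```python
def best_threshold(gray_values):
    # Exhaustively search for the threshold minimising total within-class
    # variance -- a simplified surrogate for the paper's graph-cut cost,
    # which is minimised by artificial bee colony search instead.
    def variance(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    candidates = sorted(set(gray_values))
    return min(
        candidates,
        key=lambda t: variance([v for v in gray_values if v < t])
        + variance([v for v in gray_values if v >= t]),
    )
```

For multithreshold segmentation the same idea extends to searching over tuples of thresholds, which is where a metaheuristic like the bee colony pays off over exhaustive search.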


MR imaging is widely used in diagnostic applications. The echo signal received from the MR scanner is used to generate the image; data acquisition and reconstruction are the key operations. In this paper, the k-space is compressively sampled using a radial sampling pattern, and Particle Swarm Optimization (PSO) with Total Variation (TV) regularization is used as the reconstruction algorithm for faithful reconstruction of the MR image. Experiments are conducted on MR images of the brain, a head angiogram, and the shoulder. Performance of the proposed reconstruction method is analyzed for different k-space sampling percentages. The reconstruction results are compared with those of the standard sampling pattern used for compressive sampling to demonstrate the novelty of the proposed method. The results are verified in terms of Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE) and Structural Similarity Index (SSIM).
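A radial k-space sampling pattern of the kind described can be sketched as a binary mask of spokes through the k-space centre. This is an illustrative construction, not the authors' code:

```python
import numpy as np

def radial_mask(n, n_spokes):
    # Binary n x n k-space sampling mask with n_spokes radial lines
    # through the centre (illustrative sketch only).
    mask = np.zeros((n, n), dtype=bool)
    c = (n - 1) / 2.0
    for angle in np.linspace(0.0, np.pi, n_spokes, endpoint=False):
        # Walk along the spoke from edge to edge, marking sampled points.
        for r in np.linspace(-c, c, 2 * n):
            x = int(np.rint(c + r * np.cos(angle)))
            y = int(np.rint(c + r * np.sin(angle)))
            mask[y, x] = True
    return mask
```

The fraction mask.mean() is the k-space scanning percentage varied in the experiments; the reconstruction itself (PSO minimising a TV-regularized objective) is beyond a short sketch.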


2020 ◽  
Vol 20 (02) ◽  
pp. 2050008
Author(s):  
S. P. Raja

This paper presents a complete analysis of wavelet-based image compression encoding techniques. The techniques covered are embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT), wavelet difference reduction (WDR), adaptively scanned wavelet difference reduction (ASWDR), set partitioned embedded block coder (SPECK), compression with reversible embedded wavelet (CREW) and spatial orientation tree wavelet (STW). Experiments are performed by varying the level of decomposition, bits per pixel and compression ratio. The evaluation uses parameters such as peak signal to noise ratio (PSNR), mean squared error (MSE), image quality index (IQI), structural similarity index (SSIM), average difference (AD), normalized cross-correlation (NK), structural content (SC), maximum difference (MD), Laplacian mean squared error (LMSE) and normalized absolute error (NAE).
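Several of the listed fidelity metrics have short closed-form definitions; a sketch of a few of them (using the standard textbook formulas, for 8-bit images), leaving out the more involved IQI and SSIM:

```python
import numpy as np

def quality_metrics(orig, recon):
    # MSE, PSNR, average difference, maximum difference and normalised
    # absolute error between an original and a reconstructed image.
    orig = np.asarray(orig, dtype=float)
    recon = np.asarray(recon, dtype=float)
    mse = float(np.mean((orig - recon) ** 2))
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse else float("inf")
    ad = float(np.mean(orig - recon))                      # AD
    md = float(np.max(np.abs(orig - recon)))               # MD
    nae = float(np.sum(np.abs(orig - recon)) / np.sum(np.abs(orig)))  # NAE
    return {"MSE": mse, "PSNR": psnr, "AD": ad, "MD": md, "NAE": nae}
```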


2019 ◽  
Vol 28 (05) ◽  
pp. 1950083 ◽  
Author(s):  
Sa’ed Abed ◽  
Mohammed Al-Mutairi ◽  
Abdullah Al-Watyan ◽  
Omar Al-Mutairi ◽  
Wesam AlEnizy ◽  
...  

Steganography has become one of the most significant techniques to conceal secret data in media files. This paper proposes a novel automated methodology of achieving two levels of security for videos, which comprise encryption and steganography techniques. The methodology enhances the security level of secret data without affecting the accuracy and capacity of the videos. In the first level, the secret data is encrypted based on Advanced Encryption Standard (AES) algorithm using Java language, which renders the data unreadable. In the second level, the encrypted data is concealed in the video frames (images) using FPGA hardware implementation that renders the data invisible. The steganographic technique used in this work is the least significant bit (LSB) method; a 1–1–0 LSB scheme is used to maintain significantly high frame imperceptibility. The video frames used as cover files are selected randomly by the randomization scheme developed in this work. The randomization method scatters the data throughout the video frames rendering the retrieval of the data in its original order, without a proper key, a challenging task. The experimental results of concealment of secret data in video frames are presented in this paper and compared with those of similar approaches. The performance in terms of area, power dissipation, and peak signal-to-noise ratio (PSNR) of the proposed method outperformed traditional approaches. Furthermore, it is demonstrated that the proposed method is capable of automatically embedding and extracting the secret data at two levels of security on video frames, with a 57.1 dB average PSNR.
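The 1–1–0 LSB scheme named in the abstract embeds one bit in the red LSB, one in the green LSB, and leaves blue untouched per pixel. A minimal software sketch (the paper's implementation is in FPGA hardware, and the bit stream is assumed to be AES-encrypted beforehand):

```python
def embed_110(pixels, bits):
    # pixels: list of (r, g, b) tuples; bits: iterable of 0/1 values.
    # 1-1-0 scheme: red LSB and green LSB carry data, blue is untouched.
    out = []
    it = iter(bits)
    for r, g, b in pixels:
        r = (r & 0xFE) | next(it, r & 1)  # keep original LSB once bits run out
        g = (g & 0xFE) | next(it, g & 1)
        out.append((r, g, b))
    return out
```

Leaving the blue channel unmodified is what keeps frame imperceptibility high: at most two of the three channels change, each by at most one intensity level.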


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6311
Author(s):  
Eunjae Ha ◽  
Joongchol Shin ◽  
Joonki Paik

In a hazy environment, visibility is reduced and objects are difficult to identify. For this reason, many dehazing techniques have been proposed to remove the haze. In particular, methods based on estimating the atmospheric scattering model suffer from distortion when the model is estimated inaccurately. We present a novel residual-based dehazing network model to overcome this performance limitation of atmospheric-scattering-model-based methods. More specifically, the proposed model adopts a gate fusion network that generates the dehazed results using a residual operator. To further reduce the divergence between clean and dehazed images, the proposed discriminator distinguishes dehazed results from clean images and reduces the statistical difference via adversarial learning. To verify each element of the proposed model, we hierarchically performed the haze removal process in an ablation study. Experimental results show that the proposed method outperforms state-of-the-art approaches in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), CIE delta E 2000 (CIEDE2000) color difference, and mean squared error (MSE). It also produces subjectively high-quality images without color distortion or undesired artifacts for both synthetic and real-world hazy images.


2018 ◽  
Vol 19 (2) ◽  
pp. 68-79 ◽  
Author(s):  
Khan Bahadar Khan ◽  
Muhammad Shahid ◽  
Hayat Ullah ◽  
Eid Rehman ◽  
Muhammad Mohsin Khan

A 2-D Adaptive Trimmed Mean Autoregressive (ATMAR) model is proposed for denoising medical images corrupted with Poisson noise. Unfiltered images are divided into smaller chunks, and the ATMAR model is applied to each chunk separately. Two 5x5 windows with 40% overlap are used to predict the center pixel value of the central row. The AR coefficients are updated by sliding both windows forward with a 60% shift, and the same process is repeated to scan the entire image and predict a denoised image. The Adaptive Trimmed Mean Filter (ATMF) then eliminates the lowest and highest variations in the pixel values of the ATMAR-denoised image and averages out the remaining neighborhood pixel values. Finally, a power-law transformation is applied to the resulting image for contrast stretching. Image quality is judged in terms of correlation, Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM) and Peak Signal to Noise Ratio (PSNR), compared against the latest denoising techniques. The proposed technique offers an efficient way to reduce Poisson noise in scintigraphic images on a pixel-by-pixel basis.
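The two post-processing steps named in the abstract, trimmed-mean averaging and power-law contrast stretching, have compact standard definitions. A sketch of each (generic formulas, not the authors' exact parameterisation):

```python
def trimmed_mean(window, trim=1):
    # ATMF-style step: discard the `trim` lowest and `trim` highest
    # neighbourhood values, then average the remainder.
    vals = sorted(window)[trim:len(window) - trim]
    return sum(vals) / len(vals)

def power_law(pixel, gamma=0.8, c=1.0):
    # Power-law (gamma) transformation for the final contrast stretch
    # of an 8-bit pixel value.
    return c * (pixel / 255.0) ** gamma * 255.0
```

Trimming before averaging is what makes the mean robust to the extreme low and high excursions that Poisson noise produces.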


2021 ◽  
Author(s):  
Mayank Kumar Singh ◽  
Indu Saini ◽  
Neetu Sood

Ultrasound in diagnostic imaging is well known for its safety and accessibility, but its diagnostic efficiency is limited by the presence of noise. In this study, a Log-Exponential shrinkage technique is presented for denoising ultrasound images. A combinational filter was designed to remove additive noise without losing detail. Speckle noise after a homomorphic transformation follows a Gaussian distribution, for which the conventional median estimator has very low accuracy; the scale parameter calculated from the sub-band coefficients after the homomorphic transformation was therefore used to design the estimator. For shrinkage of the wavelet coefficients, a multi-scale thresholding function with better flexibility was designed. The proposed technique was tested on both medical and standard images, and a significant improvement was observed in the estimation of the speckle noise variance. For quantitative comparison with existing denoising methods, Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal to Noise Ratio (PSNR) were used. At the highest noise variance, the minimum improvements achieved by the proposed denoising technique in PSNR, SSIM, and MSE were 10.65%, 23.21%, and 30.46%, respectively.
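The homomorphic-plus-shrinkage pipeline can be illustrated with the classical soft-thresholding rule, of which the paper's Log-Exponential multi-scale function is a more flexible variant. This sketch shrinks log-domain samples directly for brevity; a real pipeline shrinks wavelet sub-band coefficients:

```python
import math

def soft_threshold(coeff, t):
    # Classical soft shrinkage: pull the coefficient toward zero by t,
    # clamping at zero.
    return math.copysign(max(abs(coeff) - t, 0.0), coeff)

def homomorphic_denoise(values, t=0.5):
    # Log transform turns multiplicative speckle into additive noise,
    # shrinkage suppresses it, and exp maps back to the image domain.
    return [math.exp(soft_threshold(math.log(v), t)) for v in values]
```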


Segmentation separates an image into different sections based on the user's requirements, and is carried out until the region of interest (ROI) of an object is extracted. The reliability of segmentation reflects the relative merit of the various segmentation techniques. In this paper, various segmentation methods are proposed, and the quality of segmentation is verified using quality metrics such as Mean Squared Error (MSE), Signal to Noise Ratio (SNR), Peak Signal to Noise Ratio (PSNR), Edge Preservation Index (EPI) and Structural Similarity Index Metric (SSIM).


The widespread use of technology has led to a proliferation of images and videos acquired and distributed through electronic devices. There is an increasing need to capture high-fidelity images and to filter the noise inevitably introduced during their capture, transmission and reception. In this paper, we propose an OPSS (Optimized Patch based Self Similar) filter that concurrently exploits the photometric, geometric and graphical patch similarities of an image. The model recognizes similar patches to segregate corrupted from uncorrupted pixels and thereby improves denoising performance. Photometric patch similarity is established using a Non-Local Means Decision Based Unsymmetrical Trimmed Median (NLM-DBUTM) filter, which computes weights with respect to a reference patch. Geometric patch similarity is established through K-means clustering, and graphically similar patches are identified through the Ant Colony Optimization (ACO) technique. These three similarity models are combined to arrive at a more comprehensive and effective denoising scheme. The OPSS algorithm demonstrates improved efficiency in removing Gaussian and impulse noise, and experimental results show that the proposed method performs well against other denoising algorithms, evaluated in terms of Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE) and Structural Similarity Index Measure (SSIM).
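The photometric component builds on the non-local means idea, where a patch's contribution is weighted by how photometrically similar it is to the reference patch. A sketch of that weighting alone (the K-means and ACO components of OPSS are separate stages not shown here):

```python
import numpy as np

def nlm_weight(patch_a, patch_b, h=10.0):
    # Non-local-means-style photometric similarity: the weight decays
    # exponentially with the mean squared difference between patches,
    # with h controlling how quickly dissimilar patches are discounted.
    d2 = np.mean((np.asarray(patch_a, dtype=float)
                  - np.asarray(patch_b, dtype=float)) ** 2)
    return float(np.exp(-d2 / (h * h)))
```

Identical patches receive weight 1.0, and increasingly dissimilar (likely corrupted) patches are smoothly discounted toward zero, which is how similar patches are segregated from corrupted ones.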


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 936 ◽  
Author(s):  
Ibrahim Furkan Ince ◽  
Omer Faruk Ince ◽  
Faruk Bulut

In this study, an edge-preserving nonlinear filter is proposed to reduce multiplicative noise using a filter structure based on mathematical morphology. The method is called the minimum index of dispersion (MID) filter. MID is an improved and extended version of the MCV (minimum coefficient of variation) and MLV (mean of least variance) filters. Unlike these filters, this paper proposes an extra layer for the value-and-criterion function in which orientation information is employed in addition to intensity information. Furthermore, the selection function is re-modeled by performing low-pass (mean) filtering to reduce multiplicative noise. MID outputs are benchmarked against the outputs of the MCV and MLV filters in terms of structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean squared error (MSE), standard deviation and contrast value metrics. Additionally, an F Score, a hybrid metric combining all five of those metrics, is presented to evaluate all the filters. Experimental results and extensive benchmarking show that the proposed method outperforms the conventional MCV and MLV filters in terms of robustness in both edge preservation and noise removal. Noise filtering methods normally cannot achieve good noise removal and edge preservation at the same time; however, this study demonstrates that the MID filter produces better results in both.
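The MLV baseline that MID extends can be stated compactly: for each pixel, among all windows containing it, pick the one with least variance (the criterion) and output that window's mean (the value). A naive, unoptimized re-implementation of that baseline, assuming square k x k windows:

```python
import numpy as np

def mlv_filter(img, k=3):
    # Mean of Least Variance: for each pixel, examine every k x k window
    # that contains it, choose the window with the smallest variance,
    # and output that window's mean.  Windows straddling an edge have
    # high variance and lose, which is what preserves edges.
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            best = None  # (variance, mean) of the best window so far
            for dy in range(-(k - 1), 1):
                for dx in range(-(k - 1), 1):
                    y0, x0 = y + dy, x + dx
                    if 0 <= y0 and y0 + k <= h and 0 <= x0 and x0 + k <= w:
                        win = img[y0:y0 + k, x0:x0 + k]
                        if best is None or win.var() < best[0]:
                            best = (win.var(), win.mean())
            out[y, x] = best[1]
    return out
```

MID replaces the variance criterion with an index-of-dispersion criterion augmented with orientation information, and mean-filters the selection function, but the value-and-criterion structure is the same.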

