Chicken-Moth Search Optimization-based Deep Convolutional Neural Network for Image Steganography

2020, Vol 21 (2), pp. 217-232
Author(s): Reshma V K, Vinod Kumar R S, Shahi D, Shyjith M B

Image steganography is considered one of the promising and popular techniques used to maintain the confidentiality of a secret message embedded in an image. Even though various techniques are available in previous works, achieving better results remains a challenge. Therefore, an effective pixel prediction-based image steganography method is developed, which employs an error-dependent Deep Convolutional Neural Network (DCNN) classifier for pixel identification. Here, the best pixels are identified from the medical image by the DCNN classifier using pixel features such as texture, wavelet energy, Gabor, and scattering features. The DCNN is optimally trained using Chicken-Moth Search Optimization (CMSO), which is designed by integrating the Chicken Swarm Optimization (CSO) and Moth Search Optimization (MSO) algorithms based on a limited error. Subsequently, the Tetrolet transform is applied to the predicted pixels for the embedding process. Finally, the inverse Tetrolet transform is used to extract the secret message from the embedded image. The experimentation is carried out using the BRATS dataset, and the performance of image steganography based on CMSO-DCNN+Tetrolet is evaluated in terms of the correlation coefficient, Peak Signal to Noise Ratio, and Structural Similarity Index, which attained 0.85, 46.981 dB, and 0.6388, respectively, for the image with noise.
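
As a point of reference for the evaluation described above, the following is a minimal sketch (not the authors' code) of how the three reported metrics can be computed between a cover image and its stego counterpart with NumPy and scikit-image.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_stego(cover: np.ndarray, stego: np.ndarray) -> dict:
    """Compare a cover image and its stego version (same shape, 8-bit grayscale)."""
    psnr = peak_signal_noise_ratio(cover, stego, data_range=255)  # in dB
    ssim = structural_similarity(cover, stego, data_range=255)
    corr = np.corrcoef(cover.ravel(), stego.ravel())[0, 1]        # correlation coefficient
    return {"PSNR_dB": psnr, "SSIM": ssim, "correlation": corr}
```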

2021, Vol 18 (5), pp. 6638-6651
Author(s): Huilin Ge, Yuewei Dai, Zhiyu Zhu, Biao Wang

Purpose: Due to the lack of prior knowledge of face images, large illumination changes, and complex backgrounds, the accuracy of face recognition is low. To address this issue, we propose a face detection and recognition algorithm based on a multi-task convolutional neural network (MTCNN). Methods: MTCNN uses three cascaded networks and adopts the idea of candidate boxes plus a classifier to perform fast and efficient face recognition. The model is trained on a database of 50 faces we have collected, and the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and receiver operating characteristic (ROC) curve are used to analyse MTCNN, Region-CNN (R-CNN), and Faster R-CNN. Results: The average PSNR of this technique is 1.24 dB higher than that of R-CNN and 0.94 dB higher than that of Faster R-CNN. The average SSIM value of MTCNN is 10.3% higher than that of R-CNN and 8.7% higher than that of Faster R-CNN. The Area Under Curve (AUC) of MTCNN is 97.56%, the AUC of R-CNN is 91.24%, and the AUC of Faster R-CNN is 92.01%. MTCNN has the best overall performance in face recognition and remains the most effective even for face images with defective features. Conclusions: The algorithm can effectively improve face recognition to a certain extent. Improving detection accuracy and reducing the false detection rate not only make the method better suited to security-critical places, helping to protect people and property, but also reduce wasted human resources and improve efficiency.
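
A minimal sketch of how the reported ROC/AUC comparison can be reproduced from per-sample detection scores, using scikit-learn; the scores and labels below are hypothetical placeholders, not data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical example: 1 = face correctly recognised, 0 = not, with detector confidences.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
scores = np.array([0.95, 0.80, 0.40, 0.70, 0.55, 0.90, 0.20, 0.35])

fpr, tpr, _ = roc_curve(y_true, scores)   # ROC curve points
print(f"AUC = {auc(fpr, tpr):.4f}")       # area under the ROC curve
```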


2019, Vol 40 (11), pp. 2240-2253
Author(s): Jia Guo, Enhao Gong, Audrey P Fan, Maged Goubran, Mohammad M Khalighi, ...

To improve the quality of MRI-based cerebral blood flow (CBF) measurements, a deep convolutional neural network (dCNN) was trained to combine single- and multi-delay arterial spin labeling (ASL) and structural images to predict gold-standard 15O-water PET CBF images obtained on a simultaneous PET/MRI scanner. The dCNN was trained and tested on 64 scans in 16 healthy controls (HC) and 16 cerebrovascular disease patients (PT) with 4-fold cross-validation. Fidelity to the PET CBF images and the effects of bias due to training on different cohorts were examined. The dCNN significantly improved CBF image quality compared with ASL alone (mean ± standard deviation): structural similarity index (0.854 ± 0.036 vs. 0.743 ± 0.045 [single-delay] and 0.732 ± 0.041 [multi-delay], P < 0.0001); normalized root mean squared error (0.209 ± 0.039 vs. 0.326 ± 0.050 [single-delay] and 0.344 ± 0.055 [multi-delay], P < 0.0001). The dCNN also yielded mean CBF with reduced estimation error in both HC and PT (P < 0.001), and demonstrated better correlation with PET. The dCNN trained with the mixed HC and PT cohort performed the best. The results also suggested that models should be trained on cases representative of the target population.
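
For illustration, a minimal sketch (assumed, not taken from the study) of the two fidelity measures quoted above, SSIM and normalized root mean squared error, applied to a predicted CBF map and the reference PET CBF map:

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

def cbf_fidelity(pet_cbf: np.ndarray, predicted_cbf: np.ndarray):
    """Return (SSIM, NRMSE) of a predicted CBF map against the PET reference."""
    data_range = pet_cbf.max() - pet_cbf.min()
    ssim = structural_similarity(pet_cbf, predicted_cbf, data_range=data_range)
    nrmse = normalized_root_mse(pet_cbf, predicted_cbf)
    return ssim, nrmse
```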


2020
Author(s): Reshma V K, Vinod Kumar R S

Securing the privacy of medical information through the image steganography process has gained increasing research interest as a means of protecting patient privacy. In existing works, the least significant bit (LSB) replacement strategy was most popularly used to hide the sensitive contents. Here, every pixel was replaced to achieve higher privacy, but this increased the complexity. This work introduces a novel pixel prediction scheme-based image steganography to overcome the complexity issues prevailing in the existing works. In the proposed pixel prediction scheme, a support vector neural network (SVNN) classifier is utilized to construct a prediction map, which identifies the pixels suitable for the embedding process. Then, in the embedding phase, wavelet coefficients are extracted from the medical image based on the discrete wavelet transform (DWT) and the embedding strength, and the secret message is embedded into the HL wavelet band. Finally, the secret message is extracted from the medical image by applying the DWT. The experimentation of the proposed pixel prediction scheme is carried out using medical images from the BRATS database. The proposed scheme achieved high performance, with values of 48.558 dB, 0.50009 and 0.9879 for the peak signal to noise ratio (PSNR), Structural Similarity Index (SSIM) and correlation factor, respectively.
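
A hedged sketch of the embedding step described above: a single-level DWT of the cover image, additive embedding of message bits into the HL detail band with a chosen strength, and reconstruction with the inverse DWT. The SVNN prediction map and the exact embedding rule of the proposed scheme are omitted; this only illustrates HL-band embedding with PyWavelets.

```python
import numpy as np
import pywt

def embed_in_hl(cover: np.ndarray, bits: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Additively embed bits (0/1) as -strength/+strength into the HL band."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    hl = HL.copy().ravel()
    hl[: bits.size] += strength * (2.0 * bits.astype(float) - 1.0)
    stego = pywt.idwt2((LL, (LH, hl.reshape(HL.shape), HH)), "haar")
    return np.clip(stego, 0, 255)
```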


2021, Vol 2021, pp. 1-10
Author(s): Shahab U. Ansari, Kamran Javed, Saeed Mian Qaisar, Rashad Jillani, Usman Haider

Multiple sclerosis (MS) is a chronic autoimmune disease that forms lesions in the central nervous system. Quantitative analysis of these lesions has proved to be very useful in clinical trials for therapies and in assessing disease prognosis. However, the efficacy of these quantitative analyses greatly depends on how accurately the MS lesions have been identified and segmented in brain MRI. This is usually carried out by radiologists who label 3D MR images slice by slice using commonly available segmentation tools. However, such manual practices are time consuming and error prone. To circumvent this problem, several automatic segmentation techniques have been investigated in recent years. In this paper, we propose a new framework for automatic brain lesion segmentation that employs a novel convolutional neural network (CNN) architecture. In order to segment lesions of different sizes, a specific filter size, such as 3 × 3 or 5 × 5, has to be chosen, and it is often hard to decide which filter will give the best results. GoogLeNet solved this problem by introducing the inception module, which applies 3 × 3, 5 × 5, and 1 × 1 convolutions and max pooling in parallel. Results show that incorporating inception modules in a CNN improves the performance of the network in the segmentation of MS lesions. We compared the results of the proposed CNN architecture for two loss functions, binary cross entropy (BCE) and the structural similarity index measure (SSIM), using the publicly available ISBI-2015 challenge dataset. With the BCE loss function, a score of 93.81 is achieved, which is higher than that of the human rater.
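
A minimal PyTorch sketch (not the authors' exact architecture) of the inception-style block described above: 1 × 1, 3 × 3 and 5 × 5 convolutions plus a max-pooling branch run in parallel and concatenated channel-wise.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every branch preserves the spatial size, so the outputs can be concatenated.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)
```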


Author(s): Diptasree Debnath, Emlon Ghosh, Barnali Gupta Banik

Steganography is a widely used technique for digital data hiding, and image steganography is the most popular among all its kinds. In this article, a novel key-based blind method for RGB image steganography, in which multiple images can be hidden simultaneously, is described. The proposed method is based on the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), which provide enhanced security as well as improved stego image quality. Here, the cover image has been taken as RGB, although the method can be implemented on grayscale images as well. The fundamental concept of visual cryptography has been utilized in order to increase the capacity to a great extent. To make the method more robust and imperceptible, a pseudo-random number sequence and a correlation coefficient are used for embedding and extraction of the secrets, respectively. The robustness of the method is tested against steganalysis attacks such as cropping, rotation, resizing, noise addition, and histogram equalization. The method has been applied to multiple sets of images, and the quality of the resultant images has been analyzed through various metrics, namely 'Peak Signal to Noise Ratio,' 'Structural Similarity Index,' 'Structural Content,' and 'Maximum Difference.' The results obtained are very promising and have been compared with existing methods to prove the method's efficiency.
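
A hedged sketch of the pseudo-random embedding and correlation-based extraction idea mentioned above, applied to a block of DCT coefficients; the key seeds the PN sequence. The DWT stage, band selection and visual-cryptography step of the proposed method are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def _dct2(block):  return dct(dct(block.T, norm="ortho").T, norm="ortho")
def _idct2(block): return idct(idct(block.T, norm="ortho").T, norm="ortho")

def embed_bit(block: np.ndarray, bit: int, key: int, alpha: float = 2.0) -> np.ndarray:
    """Add or subtract a key-seeded +/-1 PN pattern in the DCT domain."""
    coeffs = _dct2(block.astype(float))
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.shape)
    coeffs += alpha * pn if bit else -alpha * pn
    return _idct2(coeffs)

def extract_bit(block: np.ndarray, key: int) -> int:
    """Decide the bit from the sign of the correlation with the same PN pattern."""
    coeffs = _dct2(block.astype(float))
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.shape)
    return int(np.sum(coeffs * pn) > 0)
```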


Author(s): Hong Lu, Xiaofei Zou, Longlong Liao, Kenli Li, Jie Liu

Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. There are two challenges in improving CS-MRI methods, i.e., designing an under-sampling algorithm that achieves optimal sampling, and designing fast and small deep neural networks that obtain reconstructed MR images of superior quality. To improve the reconstruction quality of MR images, we propose a novel deep convolutional neural network architecture for CS-MRI named MRCSNet. MRCSNet consists of three sub-networks: a compressive sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios and meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet offers a significant improvement in reconstruction accuracy, as measured by the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). In addition, it reduces the reconstruction error evaluated by the Normalized Root-Mean-Square Error (NRMSE). The source code is available at https://github.com/TaihuLight/MRCSNet.
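
A minimal sketch (assumptions only, not the MRCSNet code) of how under-sampled raw data can be simulated for CS-MRI experiments: a random sampling mask applied in k-space followed by a zero-filled inverse FFT, which serves as a naive baseline reconstruction.

```python
import numpy as np

def undersample(image: np.ndarray, ratio: float = 0.25, seed: int = 0):
    """Keep roughly `ratio` of the k-space samples and return the zero-filled reconstruction."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = rng.random(image.shape) < ratio
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(zero_filled), mask
```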


2018, Vol 2018, pp. 1-21
Author(s): Barnali Gupta Banik, Samir Kumar Bandyopadhyay

Steganography is a popular technique for digital data security. Among all digital steganography methods, audio steganography is particularly delicate, as the human auditory system is highly sensitive to noise; hence a small modification in audio can have a significant audible impact. In this paper, a key-based blind audio steganography method is proposed which is built on the discrete wavelet transform (DWT) as well as the discrete cosine transform (DCT) and adheres to Kerckhoffs' principle. Here, an image is used as the secret message and is preprocessed using Arnold's Transform. To make the system more robust and undetectable, a well-known problem of audio analysis, the Cocktail Party Problem, has been exploited for wrapping the stego audio. The robustness of the proposed method has been tested against steganalysis attacks like noise addition, random cropping, resampling, requantization, pitch shifting, and MP3 compression. The quality of the resultant stego audio and the retrieved secret image has been measured by various metrics, namely, “peak signal-to-noise ratio”; “correlation coefficient”; “perceptual evaluation of audio quality”; “bit error rate”; and “structural similarity index.” The embedding capacity has also been evaluated and, as seen from the comparison results, the proposed method outperforms other existing DCT-DWT-based techniques.
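
A hedged sketch of the Arnold (cat map) scrambling used here to preprocess the secret image before embedding; the number of iterations acts as part of the key. The sketch assumes a square N × N image and is independent of the paper's implementation.

```python
import numpy as np

def arnold_transform(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square image with the Arnold cat map: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```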


2020, Vol 13 (4), pp. 10-17
Author(s): Fadhil Kadhim Zaidan

In this work, a grayscale image steganography scheme is proposed using the discrete wavelet transform (DWT) and singular value decomposition (SVD). In this scheme, a 2-level DWT is applied to a cover image to obtain the high-frequency band HL2, which is utilized to embed a secret grayscale image based on the SVD technique. The robustness and imperceptibility of the proposed steganography algorithm are controlled by a scaling factor in order to obtain an acceptable trade-off between them. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are used for assessing the efficiency of the proposed approach. Experimental results demonstrate that the proposed scheme retains its validity under different known attacks such as noise addition, filtering, cropping, and JPEG compression.
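
A minimal sketch (with assumed details) of the kind of DWT-SVD embedding described above: the singular values of the HL2 band of the cover are perturbed by the singular values of the secret image, weighted by the scaling factor. Even-sized (ideally power-of-two) images and a secret at least as large as the HL2 band are assumed.

```python
import numpy as np
import pywt

def embed_svd(cover: np.ndarray, secret: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Embed the secret's singular values into the HL2 band of a 2-level Haar DWT."""
    LL1, (LH1, HL1, HH1) = pywt.dwt2(cover.astype(float), "haar")
    LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, "haar")
    U, S, Vt = np.linalg.svd(HL2, full_matrices=False)
    Ss = np.linalg.svd(secret.astype(float), compute_uv=False)
    S_marked = S + alpha * Ss[: S.size]              # scaling factor controls the trade-off
    HL2_marked = U @ np.diag(S_marked) @ Vt
    LL1_marked = pywt.idwt2((LL2, (LH2, HL2_marked, HH2)), "haar")
    return pywt.idwt2((LL1_marked, (LH1, HL1, HH1)), "haar")
```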


2020, Vol 10 (21), pp. 7815
Author(s): Dorota Oszutowska-Mazurek, Miroslaw Parafiniuk, Przemyslaw Mazurek

The use of UV (ultraviolet fluorescence) light in microscopy allows improving image quality and observing structures that are not visible in the visible spectrum. The disadvantage of this method is the degradation of microstructures in the slide due to exposure to UV light. The article examines the possibility of using a convolutional neural network to perform this type of conversion without damaging the slides. Using hematoxylin-eosin-stained slides, a database of image pairs was created for visible light (halogen lamp) and UV light. This database was used to train a multi-layer unidirectional convolutional neural network. The results of the study were assessed subjectively and objectively using the SSIM (Structural Similarity Index Measure) and its structure-only variant as image quality measures. The results show that this type of conversion is possible (the study used liver slides at 100× magnification), and in some cases there was an additional improvement in image quality.

