An Efficient Parallel Block Compressive Sensing Scheme for Medical Signals and Image Compression

Author(s):  
Parnasree Chakraborty ◽  
Tharini C

With the rapid development of real-time and dynamic applications, compressive sensing (CS) has been used for medical image and biomedical signal compression over the last decade. In practice, the performance of CS-based compression depends mostly on the decoding method rather than on the CS encoding method. Many CS encoding and decoding algorithms have been reported in the literature; however, a comparative study of the performance metrics of CS encoding with and without block processing has not yet been undertaken. This paper proposes a block-CS-based compression technique for medical images and signals and compares it with standard CS compression. The proposed algorithm divides the input medical images and signals into blocks, and each block is processed in parallel to enable faster computation. Three performance indices, namely the peak signal-to-noise ratio (PSNR), reconstruction time (RT), and structural similarity index (SSIM), were measured to observe their variation with compression ratio. The results showed that the block CS algorithm performed better than standard CS-based compression; in particular, parallel block CS achieved lower reconstruction time than standard CS while maintaining satisfactory PSNR and SSIM.
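The block-parallel encoding idea can be sketched in a few lines (a minimal numpy illustration, not the authors' implementation; the block size, measurement count, and Gaussian sensing matrix are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def block_cs_measure(x, block_size, m):
    """Split a 1-D signal into blocks and sense each block independently
    with the same m x block_size Gaussian measurement matrix.
    Each block's measurement is independent, so blocks can run in parallel."""
    phi = rng.standard_normal((m, block_size)) / np.sqrt(m)
    blocks = x.reshape(-1, block_size)   # one row per block
    return blocks @ phi.T, phi           # per-block measurements, sensing matrix

# toy example: 1024-sample signal, blocks of 64, 32 measurements per block (CR = 2:1)
x = rng.standard_normal(1024)
y, phi = block_cs_measure(x, block_size=64, m=32)
```

Because every block uses the same small sensing matrix, the per-block products are independent and map naturally onto parallel workers, which is the source of the reported reconstruction-time advantage.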

Author(s):  
G. Kowsalya ◽  
H. A. Christinal ◽  
D. A. Chandy ◽  
S. Jebasingh ◽  
C. Bajaj

Compressive sensing of images is based on three key components, namely sparse representation, construction of the measurement matrix, and reconstruction of the image. The visual quality of the reconstructed image is of prime importance for medical images. We apply the Discrete Cosine Transform (DCT) for sparse representation of medical images. This paper focuses on the analysis of measurement matrices for compressive sensing of MRI images. In this work, Gaussian and Bernoulli random matrices are considered as measurement matrices. The compressed images are reconstructed using the Basis Pursuit algorithm. Peak signal-to-noise ratio and reconstruction time are the metrics used to evaluate the performance of the measurement matrices for compressive sensing of medical images.
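The two measurement-matrix families compared here can be constructed as follows (a minimal sketch; the 1/sqrt(m) column normalization is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_matrix(m, n):
    # entries ~ N(0, 1/m), so columns have near-unit norm in expectation
    return rng.standard_normal((m, n)) / np.sqrt(m)

def bernoulli_matrix(m, n):
    # entries +-1/sqrt(m) with equal probability
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

phi_g = gaussian_matrix(128, 256)   # 2:1 undersampling
phi_b = bernoulli_matrix(128, 256)
```

Both families satisfy the usual CS recovery conditions with high probability; the Bernoulli matrix is often preferred in hardware because its entries take only two values.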


Author(s):  
Ahmed Nagm ◽  
Mohammed Safy

Integrated healthcare systems require the transmission of medical images between medical centres. The presence of watermarks in such images has become important for patient privacy protection. However, several issues must be considered when watermarking an image: among them, the watermark should be robust against attacks and should not degrade the quality of the image. In this paper, a watermarking approach employing a robust dynamic secret code is proposed. The approach processes every pixel of the digital image, not only the pixels of the regions of non-interest, while preserving the image details. Its performance is evaluated using several measures, including the Mean Square Error (MSE), the Mean Absolute Error (MAE), the Peak Signal-to-Noise Ratio (PSNR), the Universal Image Quality Index (UIQI), and the Structural Similarity Index (SSIM). The proposed approach has been tested and shown to be robust in detecting intentional attacks that alter the image, specifically its most important diagnostic information.
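Three of the fidelity measures above have direct closed forms; a minimal numpy sketch (the 8x8 toy image and the 255 peak value are illustrative assumptions, not the paper's test data):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def mae(a, b):
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, peak=255.0):
    # PSNR in dB relative to the maximum possible pixel value
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

# toy 8x8 "image" with a single pixel perturbed by a watermark of strength 10
orig = np.full((8, 8), 100.0)
marked = orig.copy()
marked[0, 0] += 10
```

Here MSE = 10^2 / 64 = 1.5625 and MAE = 10/64; a PSNR above roughly 40 dB is usually taken to mean the watermark is imperceptible.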


Author(s):  
Charu Bhardwaj ◽  
Urvashi Sharma ◽  
Shruti Jain ◽  
Meenakshi Sood

Compression is a significant enabler of efficient storage and transmission of medical, satellite, and natural images. Transmission speed is a key challenge when transmitting large amounts of data, especially magnetic resonance imaging and computed tomography scan images. Compressive sensing is an optimization-based approach to acquiring sparse signals below the Nyquist rate, exploiting only the signal of interest. This chapter explores compressive sensing for correct sensing, acquisition, and reconstruction of clinical images. Performance metrics such as peak signal-to-noise ratio, root mean square error, structural similarity index, and compression ratio are assessed for medical image evaluation using three reconstruction algorithms: basis pursuit, least squares, and orthogonal matching pursuit. Basis pursuit proves to be the best-performing reconstruction method among the examined recovery techniques. As the number of measurement samples increases, PSNR increases significantly and RMSE decreases.
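Orthogonal matching pursuit, one of the three recovery algorithms compared, admits a compact greedy implementation (a sketch under illustrative dimensions and a synthetic sparse signal, not the chapter's code):

```python
import numpy as np

def omp(phi, y, k):
    """Orthogonal matching pursuit: greedily add the column of phi most
    correlated with the residual, then re-fit coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
phi = rng.standard_normal((50, 100)) / np.sqrt(50)   # 2:1 compression
x = np.zeros(100)
x[[7, 42, 77]] = [3.0, -2.0, 1.5]                    # 3-sparse test signal
x_hat = omp(phi, phi @ x, k=3)                       # noiseless recovery
```

In the noiseless, sufficiently sparse regime OMP recovers the exact support; basis pursuit instead solves an l1-minimization problem and tends to be more robust at higher compression ratios, at greater computational cost.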


2020 ◽  
Vol 20 (3) ◽  
pp. 130-146
Author(s):  
S. Shajun Nisha ◽  
S. P. Raja

Due to their sparsity and multiresolution properties, multiscale transforms are gaining popularity in the field of medical image denoising. This paper empirically evaluates different multiscale transform approaches, namely the Wavelet, Bandelet, Ridgelet, Contourlet, and Curvelet transforms, for image denoising. The image to be denoised first undergoes decomposition, and thresholding is then applied to its coefficients. The paper also examines basic shrinkage thresholding techniques, namely VisuShrink, SureShrink, NeighShrink, BayesShrink, NormalShrink, and NeighSureShrink, to determine the best one for image denoising. Experiments were carried out on several Magnetic Resonance Imaging (MRI), X-ray, and Computed Tomography (CT) test images. Quantitative performance metrics, including Peak Signal-to-Noise Ratio (PSNR), Weighted Signal-to-Noise Ratio (WSNR), Structural Similarity Index (SSIM), and Correlation Coefficient (CC), were computed. The results show that Contourlet-based medical image denoising combined with the NeighSureShrink thresholding technique achieves significant improvement.
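The shrinkage idea shared by these techniques, soft-thresholding the detail coefficients, can be sketched with a one-level Haar transform and the VisuShrink universal threshold t = sigma * sqrt(2 ln n) (a minimal 1-D illustration; the paper's transforms and test images are far richer):

```python
import numpy as np

def soft_threshold(c, t):
    # shrink coefficients toward zero by t; kill those below t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_visushrink(x, sigma):
    """One-level orthonormal Haar decomposition; VisuShrink threshold
    applied to the detail band only, then inverse transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)            # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)            # detail band
    d = soft_threshold(d, sigma * np.sqrt(2 * np.log(x.size)))
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                  # inverse Haar
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(3)
clean = np.ones(256)
noisy = clean + 0.1 * rng.standard_normal(256)
denoised = haar_visushrink(noisy, sigma=0.1)
```

For a smooth signal the detail band is almost pure noise, so thresholding it removes roughly half the noise energy; the multiscale and directional transforms in the paper extend the same principle to many bands and orientations.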


Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 840
Author(s):  
Sivaramakrishnan Rajaraman ◽  
Ghada Zamzmi ◽  
Les Folio ◽  
Philip Alderson ◽  
Sameer Antani

Chest X-rays (CXRs) are the most commonly performed diagnostic examination to detect cardiopulmonary abnormalities. However, the presence of bony structures such as ribs and clavicles can obscure subtle abnormalities, resulting in diagnostic errors. This study aims to build a deep learning (DL)-based bone suppression model that identifies and removes these occluding bony structures in frontal CXRs to assist in reducing errors in radiological interpretation, including DL workflows, related to detecting manifestations consistent with tuberculosis (TB). Several bone suppression models with various deep architectures are trained and optimized using the proposed combined loss function and their performances are evaluated in a cross-institutional test setting using several metrics such as mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and multiscale structural similarity measure (MS–SSIM). The best-performing model (ResNet–BS) (PSNR = 34.0678; MS–SSIM = 0.9828) is used to suppress bones in the publicly available Shenzhen and Montgomery TB CXR collections. A VGG-16 model is pretrained on a large collection of publicly available CXRs. The CXR-pretrained model is then fine-tuned individually on the non-bone-suppressed and bone-suppressed CXRs of Shenzhen and Montgomery TB CXR collections to classify them as showing normal lungs or TB manifestations. The performances of these models are compared using several performance metrics such as accuracy, the area under the curve (AUC), sensitivity, specificity, precision, F-score, and Matthews correlation coefficient (MCC), analyzed for statistical significance, and their predictions are qualitatively interpreted through class-selective relevance maps (CRMs). 
It is observed that the models trained on bone-suppressed CXRs (Shenzhen: AUC = 0.9535 ± 0.0186; Montgomery: AUC = 0.9635 ± 0.0106) significantly outperformed (p < 0.05) the models trained on the non-bone-suppressed CXRs (Shenzhen: AUC = 0.8991 ± 0.0268; Montgomery: AUC = 0.8567 ± 0.0870). Models trained on bone-suppressed CXRs improved the detection of TB-consistent findings and produced compact clustering of the data points in the feature space, signifying that bone suppression improved model sensitivity toward TB classification.


2020 ◽  
Vol 25 (2) ◽  
pp. 86-97
Author(s):  
Sandy Suryo Prayogo ◽  
Tubagus Maulana Kusuma

DVB is the most widely used digital television transmission standard today. The most important element of a transmission process is the picture quality of the video received after transmission. Many factors can affect picture quality, one of which is the frame structure of the video. This paper tests the sensitivity of MPEG-4 video to frame structure over DVB-T transmission. The tests were carried out using MATLAB and Simulink simulations, with ffmpeg used to prepare the video formats and settings to be simulated. The video variables varied were the bitrate and the group-of-pictures (GOP) size, while the transmission variable varied was the signal-to-noise ratio (SNR) of the AWGN channel between the transmitter (Tx) and the receiver (Rx). The experiments yield the average picture quality of the video, measured with the structural similarity index (SSIM), together with the bit error rate (BER) of the DVB-T bitstream. The experiments show how sensitive the video is to bitrate and GOP over DVB-T transmission, with the conclusion that the larger the bitrate, the worse the picture quality, and the smaller the GOP, the better the quality. Future work is expected to apply deep learning to determine the appropriate frame structure for particular conditions in digital television transmission.
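The BER measurement used above has a simple definition; the sketch below also simulates uncoded BPSK over an AWGN channel to show BER falling as SNR rises (an illustrative toy, not the DVB-T Simulink model, which uses OFDM and channel coding):

```python
import numpy as np

def ber(tx_bits, rx_bits):
    """Bit error rate: fraction of received bits differing from the sent bits."""
    return float(np.mean(np.asarray(tx_bits) != np.asarray(rx_bits)))

def bpsk_awgn_ber(n_bits, snr_db, rng):
    """Uncoded BPSK over AWGN at the given symbol SNR (Es/N0) in dB."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                      # 0 -> +1, 1 -> -1
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))  # N0/2 per real dimension
    rx = symbols + noise_std * rng.standard_normal(n_bits)
    return ber(bits, (rx < 0).astype(int))          # hard-decision detection

rng = np.random.default_rng(4)
low_snr = bpsk_awgn_ber(100_000, 0, rng)    # ~Q(sqrt(2)) ~ 0.079
high_snr = bpsk_awgn_ber(100_000, 10, rng)  # essentially error-free
```

The same counting logic applies to the DVB-T bitstream BER in the paper; only the modulation and channel model differ.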


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319
Author(s):  
Yi Wang ◽  
Xiao Song ◽  
Guanghong Gong ◽  
Ni Li

Due to the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional denoising networks the convolution kernel is only one layer deep and features at distinct scales are neglected; moreover, the convolution operation treats all channels equally, ignoring the relationships between channels. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We introduce the NAN into convolutional denoising so that each channel receives its own gain and channels can play different roles in subsequent convolutions. To verify the effectiveness of the proposed MFENANN, we ran experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with several state-of-the-art denoising methods, the images restored by MFENANN achieve higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and better overall appearance.


2021 ◽  
Vol 21 (1) ◽  
pp. 1-20
Author(s):  
A. K. Singh ◽  
S. Thakur ◽  
Alireza Jolfaei ◽  
Gautam Srivastava ◽  
MD. Elhoseny ◽  
...  

Recently, owing to the growing popularity of the Internet, the problem of digital data security over the Internet has been increasing at a phenomenal rate. Watermarking is used in various notable applications to secure digital data from unauthorized individuals. To achieve this, in this article we propose a joint encryption-then-compression watermarking technique for digital document security. The technique offers confidentiality, copyright protection, and strong compression performance. The proposed method involves three major steps: (1) embedding of multiple watermarks through the non-subsampled contourlet transform, the redundant discrete wavelet transform, and singular value decomposition; (2) encryption and compression via SHA-256 and Lempel-Ziv-Welch (LZW) coding, respectively; and (3) extraction/recovery of the multiple watermarks from the possibly distorted cover image. Performance is estimated on various images under different attacks, and the efficiency of the system is determined in terms of peak signal-to-noise ratio (PSNR), normalized correlation (NC), structural similarity index measure (SSIM), number of pixels change rate (NPCR), unified average changed intensity (UACI), and compression ratio (CR). Furthermore, comparative analysis with similar schemes indicates the superiority of the proposed system.
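The LZW stage in step (2) follows the textbook dictionary-growing scheme; a minimal byte-level sketch (the test string is the classic LZW example, not data from the article; the hashing stage would simply be `hashlib.sha256`):

```python
def lzw_compress(data):
    """Textbook LZW: grow a dictionary of byte strings on the fly and
    emit one integer code per longest already-known prefix."""
    table = {bytes([i]): i for i in range(256)}   # seed with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # keep extending the match
        else:
            out.append(table[w])                  # emit code for known prefix
            table[wc] = len(table)                # learn the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")  # 24 bytes -> 16 codes
```

Because the dictionary is rebuilt identically during decoding, no side table needs to be transmitted, which is why LZW pairs well with the encryption stage here.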


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5540
Author(s):  
Nayeem Hasan ◽  
Md Saiful Islam ◽  
Wenyu Chen ◽  
Muhammad Ashad Kabir ◽  
Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of the second-level discrete wavelet transform (2DWT) and the discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT was selected after analysing the trade-off between watermark imperceptibility and embedding capacity at various levels of decomposition. The DCT is applied to the selected area, and its coefficients are gathered into a single vector using a zig-zag scan. The same random bit sequence serves as both the watermark and the seed for selecting the embedding-zone coefficients. The quality of the reconstructed image was measured in terms of bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrate that the proposed scheme is highly robust under different types of image-processing attacks. Several attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were applied to watermarked images, and the results of the proposed method outstripped those of existing methods, especially in terms of the bit correction ratio (100%), a measure of bit restoration. The results were also highly satisfactory in terms of reconstructed image quality, demonstrating high imperceptibility (PSNR ≥ 40 dB; SSIM ≥ 0.9) under the different image attacks.
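The zig-zag scan that vectorizes the DCT coefficients orders an N x N block along its anti-diagonals, alternating direction so that low-frequency coefficients come first; a minimal sketch (the 4x4 block of sequential values is illustrative):

```python
import numpy as np

def zigzag(block):
    """Vectorize an N x N block in JPEG-style zig-zag order:
    walk the anti-diagonals, alternating direction on each one."""
    n = block.shape[0]
    idx = sorted(((i, j) for i in range(n) for j in range(n)),
                 key=lambda p: (p[0] + p[1],                      # which diagonal
                                p[0] if (p[0] + p[1]) % 2 else p[1]))  # direction
    return np.array([block[i, j] for i, j in idx])

b = np.arange(16).reshape(4, 4)   # b[i, j] = 4*i + j
v = zigzag(b)                     # DC coefficient b[0, 0] comes first
```

Placing the low frequencies first concentrates the perceptually significant coefficients at the head of the vector, which is where watermark bits are typically kept away from, or embedded into, depending on the robustness/imperceptibility trade-off.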


Author(s):  
Shenghan Mei ◽  
Xiaochun Liu ◽  
Shuli Mei

Locust slice images exhibit strong self-similarity, piecewise smoothness, and nonlinear texture structure. The multi-scale interpolation operator is an effective tool for describing such structures, but it cannot overcome the influence of noise on the images. This research therefore designed the Shannon-Cosine wavelet, which possesses the desirable properties of interpolation, smoothness, compact support, and normalization, and constructed from it a multi-scale wavelet interpolation operator that can decompose and reconstruct images adaptively. Combining this operator with local filtering operators (mean and median), a multi-scale Shannon-Cosine wavelet denoising algorithm based on cell filtering is constructed. The algorithm overcomes the limitation of the multi-scale interpolation wavelet, which is only suitable for describing smooth signals, and realizes multi-scale noise reduction of locust slice images. Experimental results show that the proposed method preserves the various texture structures in locust slice images. In the experiments, locust slice images corrupted by mixed Gaussian and salt-and-pepper noise are used to compare the proposed method with other typical denoising methods. The Peak Signal-to-Noise Ratio (PSNR) of the images denoised by the proposed method is 27.3%, 24.6%, 2.94%, and 22.9% greater than that of the Wiener filter, the wavelet transform method, median filtering, and average filtering, respectively; the Structural Similarity Index (SSIM) is 31.1%, 31.3%, 15.5%, and 10.2% greater than that of the same four methods. As the variance of the Gaussian white noise increases from 0.02 to 0.1, the PSNR and SSIM obtained by the proposed method decrease by only 11.94% and 13.33%, respectively, far less than for the other four methods. This shows that the proposed method possesses stronger adaptability.

