QCBCT-NET for direct measurement of bone mineral density from quantitative cone-beam CT: a human skull phantom study

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tae-Hoon Yong ◽  
Su Yang ◽  
Sang-Jeong Lee ◽  
Chansoo Park ◽  
Jo-Eun Kim ◽  
...  

Abstract
The purpose of this study was to directly and quantitatively measure bone mineral density (BMD) from cone-beam CT (CBCT) images by enhancing the linearity and uniformity of the bone intensities with a hybrid deep-learning model (QCBCT-NET) combining a generative adversarial network (Cycle-GAN) and a U-Net, and to compare the bone images enhanced by QCBCT-NET with those produced by Cycle-GAN and U-Net alone. We used two phantoms of human skulls encased in acrylic, one for the training and validation datasets and the other for the test dataset. The proposed QCBCT-NET consists of a Cycle-GAN with residual blocks and a multi-channel U-Net, trained on paired quantitative CT (QCT) and CBCT images. The BMD images produced by QCBCT-NET significantly outperformed those produced by Cycle-GAN or U-Net in mean absolute difference (MAD), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), structural similarity (SSIM), and linearity when compared against the original QCT images. QCBCT-NET improved the contrast of the bone images by locally reflecting the original BMD distribution of the QCT image via the Cycle-GAN, and improved their spatial uniformity by globally suppressing image artifacts and noise via the two-channel U-Net. QCBCT-NET substantially enhanced the linearity, uniformity, and contrast as well as the anatomical and quantitative accuracy of the bone images, and proved more accurate than Cycle-GAN and U-Net for quantitatively measuring BMD in CBCT.
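The abstract compares methods using MAD, PSNR, and NCC against the reference QCT images. As an illustrative sketch (not the authors' implementation), these three metrics can be computed as follows:

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation of two zero-mean, unit-variance images."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

For BMD images, `data_range` would be set to the dynamic range of the calibrated BMD values rather than 255.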

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Jianfang Cao ◽  
Zibang Zhang ◽  
Aidi Zhao

Considering the low resolution and rough detail of existing mural images, this paper proposes a superresolution reconstruction algorithm for enhancing artistic mural images. The algorithm takes a generative adversarial network (GAN) as its framework. First, a convolutional neural network (CNN) extracts image feature information; the features are then mapped to a high-resolution image space of the same size as the original image; finally, the reconstructed high-resolution image is output, completing the design of the generator network. A CNN with deep and residual modules then performs feature extraction to judge whether the generator's output is an authentic high-resolution mural image. In detail, the network depth is increased, residual modules are introduced, batch normalization is removed from the convolutional layers, and subpixel convolution realizes the upsampling. Additionally, a combination of multiple loss functions and staged construction of the network model is adopted to further optimize the mural image. A mural dataset was assembled by the team. Compared with several existing image superresolution algorithms, the peak signal-to-noise ratio (PSNR) of the proposed algorithm increases by an average of 1.2–3.3 dB and the structural similarity (SSIM) increases by 0.04–0.13; it is also superior to other algorithms in subjective scoring. The proposed method is effective for superresolution reconstruction of mural images and contributes to the further optimization of ancient mural images.
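The subpixel convolution mentioned in the abstract upsamples by rearranging channels into space ("depth-to-space"): a preceding convolution produces r×r feature maps per output channel, which are interleaved into an image r times larger. A minimal numpy sketch of that rearrangement step:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r).

    This is the depth-to-space step behind subpixel convolution:
    each group of r*r input channels is interleaved into an r-times
    larger spatial grid of a single output channel.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)      # reorder to (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)   # merge into the upscaled grid
```

This matches the channel ordering used by standard pixel-shuffle layers; in the full network the learning happens in the convolution that produces the r*r channel groups, not in this fixed rearrangement.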


2022 ◽  
Author(s):  
Lisa Sophie Kölln ◽  
Omar Salem ◽  
Jessica Valli ◽  
Carsten Gram Hansen ◽  
Gail McConnell

Immunofluorescence (IF) microscopy is routinely used to visualise the spatial distribution of proteins, which dictates their cellular function. However, unspecific antibody binding often results in high cytosolic background signals, decreasing the image contrast of a target structure. Recently, convolutional neural networks (CNNs) were successfully employed for image restoration in IF microscopy, but current methods cannot correct for such background signals. We report a new method that trains a CNN to reduce unspecific signals in IF images; we name this method label2label (L2L). In L2L, a CNN is trained with image pairs of two non-identical labels that target the same cellular structure. We show that after L2L training a network predicts images with significantly increased contrast of a target structure, which is further improved by implementing a multi-scale structural similarity loss function. Our results suggest that sample differences in the training data decrease the hallucination effects observed with other methods. We further assess the performance of a cycle generative adversarial network, and show that a CNN can be trained to separate structures in superposed IF images of two targets.
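The multi-scale structural similarity loss mentioned above averages SSIM over a dyadic image pyramid. As a simplified sketch (a single global SSIM window per scale, rather than the windowed MS-SSIM used in practice):

```python
import numpy as np

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM for images scaled to [0, 1]."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def downsample2(x):
    """2x average pooling (odd edge rows/columns are cropped)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2]
                   + x[0::2, 1::2] + x[1::2, 1::2])

def ms_ssim_loss(a, b, scales=3):
    """Loss = 1 - mean SSIM over a dyadic pyramid of `scales` levels."""
    vals = []
    for _ in range(scales):
        vals.append(ssim_global(a, b))
        a, b = downsample2(a), downsample2(b)
    return 1.0 - float(np.mean(vals))
```

In training, such a loss is typically combined with a pixel-wise term; identical prediction and target give a loss of zero.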


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shengnan Zhang ◽  
Lei Wang ◽  
Chunhong Chang ◽  
Cong Liu ◽  
Longbo Zhang ◽  
...  

To overcome the disadvantages of traditional block-matching-based image denoising, an image denoising method is proposed that combines block matching with 4D filtering (BM4D) in the 3D shearlet transform domain and a generative adversarial network. Firstly, the contaminated images are decomposed to obtain the shearlet coefficients; then, an improved 3D block-matching algorithm is applied in the hard-threshold and Wiener filtering stages to obtain latent clean images; the final clean images are obtained by training on the latent clean images via a generative adversarial network (GAN). Taking the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and edge-preserving index (EPI) as the evaluation criteria, experimental results demonstrate that the proposed method not only effectively removes image noise in highly noisy environments but also improves the visual quality of the images.
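The hard-threshold stage referred to above shrinks small transform coefficients, which are assumed to carry mostly noise. A simplified stand-in (applied here to a raw coefficient array rather than to the grouped 3D blocks in the shearlet domain that the full method uses):

```python
import numpy as np

def hard_threshold(coeffs, sigma, k=2.7):
    """Zero out transform coefficients with magnitude below k * sigma.

    sigma is the estimated noise standard deviation and k a tuning
    constant (2.7 is a common choice in BM3D/BM4D-style filtering).
    Coefficients at or above the threshold are kept unchanged, which
    is what distinguishes hard from soft thresholding.
    """
    out = coeffs.copy()
    out[np.abs(out) < k * sigma] = 0.0
    return out
```

After thresholding, the inverse transform of the surviving coefficients yields the latent clean estimate that the Wiener stage then refines.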


2020 ◽  
Vol 10 (17) ◽  
pp. 5898
Author(s):  
Qirong Bu ◽  
Jie Luo ◽  
Kuan Ma ◽  
Hongwei Feng ◽  
Jun Feng

In this paper, we propose an enhanced pix2pix dehazing network, which generates clear images without relying on a physical scattering model. The network is a generative adversarial network (GAN) that incorporates multiple guided filter layers. First, the input hazy images are smoothed to obtain high-frequency features according to the different smoothing kernels of the guided filter layers. Then, these features are embedded at higher dimensions of the network and concatenated with the output of the generator's encoder. Finally, Visual Geometry Group (VGG) features are introduced as a loss function to improve the quality of texture restoration and generate better haze-free images. We conduct experiments on the NYU-Depth, I-HAZE and O-HAZE datasets. On the indoor test dataset, the proposed enhanced pix2pix dehazing network gains 1.22 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.01 in the Structural Similarity Index Metric (SSIM) over the second-best comparison method. Extensive experiments demonstrate that the proposed method performs well for image dehazing.
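The guided filter layers above are based on the classic guided filter (He et al.): an edge-preserving smoother whose residual against the input gives the high-frequency features the abstract mentions. A minimal numpy sketch (box filtering via integral images; the network version backpropagates through these same operations):

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)^2 window via integral images, edge-padded."""
    xp = np.pad(x, r, mode="edge")                  # keep windows full at borders
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)    # 2D integral image
    c = np.pad(c, ((1, 0), (1, 0)))                 # zero row/col for differencing
    n = 2 * r + 1
    h, w = x.shape
    return (c[n:n + h, n:n + w] - c[:h, n:n + w]
            - c[n:n + h, :w] + c[:h, :w]) / (n * n)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by I.

    Locally fits q = a * I + b, so q follows the guide's edges while
    smoothing elsewhere. With self-guidance (I == p), the residual
    p - q isolates high-frequency detail; eps controls smoothing.
    """
    mI, mp = box(I, r), box(p, r)
    corr_Ip, corr_II = box(I * p, r), box(I * I, r)
    a = (corr_Ip - mI * mp) / (corr_II - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)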


2019 ◽  
Vol 133 ◽  
pp. S266-S267
Author(s):  
C. Kurz ◽  
M. Maspero ◽  
M.H.F. Savenije ◽  
G. Landry ◽  
F. Kamp ◽  
...  

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Guanghui Song ◽  
Hai Wang

In this article, we study mural restoration based on artificial-intelligence-assisted multiscale trace generation. Firstly, we convert the mural images to a colour space to obtain the luminance and chromaticity component images; we then process each component image to enhance the edges of the exfoliated regions using top-hat and bottom-hat operations, and construct a multistructure morphological filter to smooth image noise. Finally, the fused mask image is combined with the original mural to obtain the final calibration result. For restoration, the mural is converted to HSV colour space and chromaticity, saturation, and luminance features are introduced; a confidence term and a data term determine the priority of the shedding (exfoliated) boundary points; a new block-matching criterion is then defined, and the best matching block, found by global search based on the structural similarity between the block to be repaired and candidate blocks, replaces the block to be repaired; finally, the result is converted back to RGB colour space to obtain the final restoration. An improved generative adversarial network structure is proposed to address the shortcomings of existing network structures in mural defect restoration, and the effectiveness of the improved modules is verified. Compared with existing mural restoration algorithms on the test data, the peak signal-to-noise ratio (PSNR) score improves by 4% and the structural similarity (SSIM) score improves by 2%.
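The confidence-and-data-term priority above follows the Criminisi-style exemplar inpainting scheme: fill-front pixels with reliable surroundings and strong structure are repaired first. A hedged sketch of that priority computation (the data term is approximated here by local gradient magnitude; the full method uses the isophote-normal dot product, and the confidence map is updated as regions are filled):

```python
import numpy as np

def fill_front_priorities(confidence, mask, image, r=1):
    """Priority P(p) = C(p) * D(p) for pixels on the fill front.

    confidence: per-pixel reliability map (known pixels start at 1).
    mask: True inside the missing (exfoliated) region.
    C(p) is the mean confidence in the patch around p; D(p) is
    approximated by the gradient magnitude at p.
    """
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    h, w = mask.shape
    pri = {}
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            # fill front: a missing pixel with at least one known neighbour
            nb = mask[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if nb.all():
                continue
            patch = confidence[max(0, y - r):y + r + 1,
                               max(0, x - r):x + r + 1]
            pri[(y, x)] = float(patch.mean()) * float(grad[y, x])
    return pri
```

The pixel with the largest priority is repaired first by copying the best-matching block found via the global structural-similarity search described in the abstract.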


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Li Li ◽  
Zijia Fan ◽  
Mingyang Zhao ◽  
Xinlei Wang ◽  
Zhongyang Wang ◽  
...  

Since underwater images are unclear and difficult to recognize, a clear image must be obtained with a super-resolution (SR) method before underwater images can be studied further. Images obtained with conventional underwater image super-resolution methods lack detailed information, which causes errors in subsequent recognition and other processing. Therefore, we propose an image sequence generative adversarial network (ISGAN) method for super-resolution based on underwater image sequences collected by multifocus imaging from the same angle, which can recover more detail and improve image resolution. At the same time, a dual-generator design is used to optimize the network architecture and improve the stability of training. The preprocessed images are passed through the two generators: one serves as the main generator, producing the SR image from the sequence images, while the other serves as an auxiliary generator that prevents training collapse and the generation of redundant details. Experimental results show that the proposed method improves both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with the traditional GAN method for underwater image SR.
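The abstract does not specify how the multifocus sequence is preprocessed; a common assumption for such inputs (purely illustrative here, not the authors' stated method) is per-pixel fusion by a focus measure, selecting the sharpest frame at each location:

```python
import numpy as np

def laplacian_energy(img):
    """Local sharpness: squared response of a 4-neighbour Laplacian
    (np.roll wraps at borders, which is acceptable for this sketch)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def fuse_multifocus(stack):
    """Fuse an (N, H, W) stack of differently focused frames of the
    same scene by picking, per pixel, the frame that is most in focus."""
    stack = np.asarray(stack, dtype=float)
    energy = np.stack([laplacian_energy(f) for f in stack])
    best = np.argmax(energy, axis=0)                 # (H, W) frame indices
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The fused all-in-focus image (or the aligned sequence itself) would then be fed to the dual-generator network described above.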

