Artificial Intelligence-Assisted Fresco Restoration with Multiscale Line Drawing Generation

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Guanghui Song ◽  
Hai Wang

In this article, we study mural restoration based on artificial intelligence-assisted multiscale line drawing generation. First, we convert the fresco images to a luminance-chromaticity colour space to obtain the luminance and chromaticity component images; then we process each component image to enhance the edges of the exfoliated region using top-hat and bottom-hat operations; then we construct a multistructure morphological filter to smooth image noise. Finally, the fused mask image is combined with the original mural to obtain the final calibration result. For restoration, the fresco is converted to HSV colour space, and chromaticity, saturation, and luminance features are introduced; a confidence term and a data term are used to determine the priority of shedding boundary points; a new block matching criterion is then defined, and the best matching block, found by global search based on the structural similarity between the block to be repaired and candidate blocks, replaces the block to be repaired; finally, the result is converted back to RGB colour space to obtain the final restoration. An improved generative adversarial network structure is proposed to address the shortcomings of existing network structures in mural defect restoration, and the effectiveness of the improved modules is verified. Experiments on the test data show that, compared with existing mural restoration algorithms, the proposed method improves the peak signal-to-noise ratio (PSNR) score by 4% and the structural similarity (SSIM) score by 2%.
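
A minimal sketch of the edge-enhancement stage described above, assuming OpenCV: the fresco is converted to a luminance-chromaticity space (YCrCb here), top-hat and bottom-hat operations enhance the edges of the exfoliated region per channel, and a morphological opening stands in for the multistructure filter. Kernel shapes and sizes are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def enhance_exfoliated_edges(bgr: np.ndarray) -> np.ndarray:
    # Convert to a luminance-chromaticity space (YCrCb assumed here).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    edge_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    smooth_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    out_channels = []
    for ch in cv2.split(ycrcb):
        tophat = cv2.morphologyEx(ch, cv2.MORPH_TOPHAT, edge_kernel)      # bright details
        blackhat = cv2.morphologyEx(ch, cv2.MORPH_BLACKHAT, edge_kernel)  # dark details
        enhanced = cv2.subtract(cv2.add(ch, tophat), blackhat)            # sharpen region edges
        # Single opening used as a stand-in for the multistructure morphological filter.
        out_channels.append(cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, smooth_kernel))
    return cv2.cvtColor(cv2.merge(out_channels), cv2.COLOR_YCrCb2BGR)
```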


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shengnan Zhang ◽  
Lei Wang ◽  
Chunhong Chang ◽  
Cong Liu ◽  
Longbo Zhang ◽  
...  

To overcome the disadvantages of the traditional block-matching-based image denoising method, an image denoising method based on block matching with 4D filtering (BM4D) in the 3D shearlet transform domain and a generative adversarial network is proposed. First, the contaminated images are decomposed to obtain the shearlet coefficients; then, an improved 3D block-matching algorithm is applied in the hard-threshold and Wiener filtering stages to obtain latent clean images; the final clean images are obtained by training on the latent clean images with a generative adversarial network (GAN). Taking the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and edge-preserving index (EPI) as the evaluation criteria, experimental results demonstrate that the proposed method can not only effectively remove image noise in highly noisy environments but also effectively improve the visual quality of the images.
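
The hard-thresholding stage of the transform-domain pipeline can be sketched as follows, with a 2D wavelet decomposition (PyWavelets) standing in for the 3D shearlet transform; the block-matching, Wiener-filtering, and GAN refinement stages are omitted. The wavelet basis, decomposition level, and threshold factor are assumptions.

```python
import numpy as np
import pywt

def hard_threshold_denoise(noisy: np.ndarray, sigma: float, k: float = 2.7) -> np.ndarray:
    # Decompose the noisy image into transform-domain coefficients.
    coeffs = pywt.wavedec2(noisy, "db4", level=3)
    thr = k * sigma
    out = [coeffs[0]]  # keep the approximation band untouched
    for detail in coeffs[1:]:
        # Hard threshold: zero out small detail coefficients attributed to noise.
        out.append(tuple(np.where(np.abs(d) > thr, d, 0.0) for d in detail))
    return pywt.waverec2(out, "db4")
```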



2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Jianfang Cao ◽  
Zibang Zhang ◽  
Aidi Zhao

Considering the problems of low resolution and rough details in existing mural images, this paper proposes a superresolution reconstruction algorithm for enhancing artistic mural images, thereby optimizing mural images. The algorithm takes a generative adversarial network (GAN) as the framework. First, a convolutional neural network (CNN) is used to extract image feature information, and then the features are mapped to the high-resolution image space of the same size as the original image. Finally, the reconstructed high-resolution image is output to complete the design of the generative network. A CNN with deep and residual modules is then used for image feature extraction to determine whether the output of the generative network is an authentic high-resolution mural image. Specifically, the network depth is increased, residual modules are introduced, batch normalization is removed from the convolutional layers, and subpixel convolution is used for upsampling. Additionally, a combination of multiple loss functions and staged construction of the network model is adopted to further optimize the mural image. A mural dataset was constructed by the team. Compared with several existing image superresolution algorithms, the peak signal-to-noise ratio (PSNR) of the proposed algorithm increases by an average of 1.2–3.3 dB and the structural similarity (SSIM) increases by 0.04–0.13; it is also superior to other algorithms in terms of subjective scoring. The proposed method is effective in the superresolution reconstruction of mural images, which contributes to the further optimization of ancient mural images.
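
A minimal PyTorch sketch of the generator design the abstract describes: residual blocks without batch normalization and sub-pixel (PixelShuffle) upsampling. Channel counts, block depth, and kernel sizes are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection; no batch normalization, per the text

class MuralSRGenerator(nn.Module):
    def __init__(self, scale: int = 4, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        up = []
        for _ in range(int(scale).bit_length() - 1):  # scale assumed to be a power of 2
            up += [nn.Conv2d(channels, channels * 4, 3, padding=1),
                   nn.PixelShuffle(2),                # sub-pixel convolution upsampling
                   nn.PReLU()]
        self.tail = nn.Sequential(*up, nn.Conv2d(channels, 3, 9, padding=4))

    def forward(self, x):
        feat = self.head(x)
        return self.tail(feat + self.body(feat))
```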



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tae-Hoon Yong ◽  
Su Yang ◽  
Sang-Jeong Lee ◽  
Chansoo Park ◽  
Jo-Eun Kim ◽  
...  

Abstract The purpose of this study was to directly and quantitatively measure bone mineral density (BMD) from cone-beam CT (CBCT) images by enhancing the linearity and uniformity of the bone intensities based on a hybrid deep-learning model (QCBCT-NET) combining a generative adversarial network (Cycle-GAN) and U-Net, and to compare the bone images enhanced by the QCBCT-NET with those by Cycle-GAN and U-Net alone. We used two phantoms of human skulls encased in acrylic, one for the training and validation datasets and the other for the test dataset. We proposed the QCBCT-NET, consisting of Cycle-GAN with residual blocks and a multi-channel U-Net, using paired training data of quantitative CT (QCT) and CBCT images. The BMD images produced by QCBCT-NET significantly outperformed the images produced by the Cycle-GAN or the U-Net in mean absolute difference (MAD), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), structural similarity (SSIM), and linearity when compared to the original QCT image. The QCBCT-NET improved the contrast of the bone images by reflecting the original BMD distribution of the QCT image locally using the Cycle-GAN, and also the spatial uniformity of the bone images by globally suppressing image artifacts and noise using the two-channel U-Net. The QCBCT-NET substantially enhanced the linearity, uniformity, and contrast as well as the anatomical and quantitative accuracy of the bone images, and demonstrated higher accuracy than the Cycle-GAN and the U-Net for quantitatively measuring BMD in CBCT.
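
A rough sketch of how the hybrid pipeline might be wired: a Cycle-GAN-style generator first maps CBCT intensities toward QCT-like BMD values, and a two-channel U-Net then refines spatial uniformity from the concatenated pair. The `cyclegan_g` and `unet2ch` modules are hypothetical placeholders; only the data flow is shown.

```python
import torch

def qcbct_forward(cbct: torch.Tensor, cyclegan_g, unet2ch) -> torch.Tensor:
    # Stage 1 (assumed): CycleGAN-style generator recovers local BMD contrast.
    bmd_like = cyclegan_g(cbct)
    # Stage 2 (assumed): two-channel U-Net sees both the raw CBCT and the
    # CycleGAN output, suppressing artifacts and noise globally.
    x = torch.cat([cbct, bmd_like], dim=1)
    return unet2ch(x)  # spatially uniform, quantitative BMD map
```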



2020 ◽  
Vol 10 (17) ◽  
pp. 5898
Author(s):  
Qirong Bu ◽  
Jie Luo ◽  
Kuan Ma ◽  
Hongwei Feng ◽  
Jun Feng

In this paper, we propose an enhanced pix2pix dehazing network, which generates clear images without relying on a physical scattering model. This network is a generative adversarial network (GAN) that combines multiple guided filter layers. First, the hazy input images are smoothed to obtain high-frequency features according to the different smoothing kernels of the guided filter layers. Then, these features are embedded in higher dimensions of the network and connected with the output of the generator's encoder. Finally, Visual Geometry Group (VGG) features are introduced to serve as a loss function to improve the quality of texture restoration and generate better haze-free images. We conduct experiments on the NYU-Depth, I-HAZE, and O-HAZE datasets. The enhanced pix2pix dehazing network we propose yields increases of 1.22 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.01 in the Structural Similarity Index Metric (SSIM) compared with the second-best comparison method on the indoor test dataset. Extensive experiments demonstrate that the proposed method performs well for image dehazing.
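
A minimal sketch of the VGG feature (perceptual) loss mentioned above, using torchvision's pretrained VGG-16; the choice of layer (features up to relu3_3) and the L1 distance are assumptions, not necessarily the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG-16 features up to relu3_3 (assumed layer choice).
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.criterion = nn.L1Loss()

    def forward(self, dehazed: torch.Tensor, clear: torch.Tensor) -> torch.Tensor:
        # Compare deep features of the dehazed output and the ground-truth clear image.
        return self.criterion(self.features(dehazed), self.features(clear))
```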



2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Li Li ◽  
Zijia Fan ◽  
Mingyang Zhao ◽  
Xinlei Wang ◽  
Zhongyang Wang ◽  
...  

Since underwater images are not clear and are difficult to recognize, it is necessary to obtain clear images with super-resolution (SR) methods to further study underwater imagery. Images obtained with conventional underwater image super-resolution methods lack detailed information, which results in errors in subsequent recognition and other processing. Therefore, we propose an image sequence generative adversarial network (ISGAN) method for super-resolution based on underwater image sequences collected at multiple focus settings from the same angle, which can recover more detail and improve image resolution. At the same time, a dual-generator method is used to optimize the network architecture and improve the stability of the generator. The preprocessed images are passed through the dual generators: one serves as the main generator to generate the SR image from the sequence images, and the other serves as the auxiliary generator to prevent the training from crashing or generating redundant details. Experimental results show that the proposed method improves both the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared to the traditional GAN method in underwater image SR.
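
A hedged sketch of the dual-generator idea: the main generator produces the SR image from the multi-focus sequence, while the auxiliary generator, trained on the same inputs, is used only as a consistency regularizer to keep training stable. The loss weighting and the L1 consistency term are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_generator_loss(seq, target, g_main, g_aux, discriminator, aux_weight: float = 0.1):
    sr_main = g_main(seq)             # main generator: SR image from the image sequence
    sr_aux = g_aux(seq)               # auxiliary generator: stabilizing reference output
    d_out = discriminator(sr_main)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    recon = F.l1_loss(sr_main, target)
    # Auxiliary output acts as a regularizer against training collapse (assumed form).
    consistency = F.l1_loss(sr_main, sr_aux.detach())
    return recon + adv + aux_weight * consistency, sr_main
```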



Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6193
Author(s):  
Chen Li ◽  
Kai He ◽  
Kun Liu ◽  
Xitao Ma

Image inpainting networks can produce visually reasonable results in damaged regions. However, existing inpainting networks may fail to reconstruct proper structures or tend to generate results with color discrepancy. To solve this issue, this paper proposes an image inpainting approach using a proposed two-stage loss function. The loss function consists of different Gaussian kernels, which are utilized at different stages of the network. Using the two-stage loss function in the coarse network helps the model focus on the image structure, while using it in the refinement network helps restore image details. Moreover, we propose global and local PatchGANs (GAN: generative adversarial network), named GL-PatchGANs, in which global and local Markovian discriminators are used to control the final results. This helps the network focus on regions of interest (ROI) at different scales and tends to produce more realistic structural and textural details. We trained our network separately on three popular image inpainting datasets; both the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) between our results and the ground truths on test images show that our network achieves better performance than recent works in most cases. The visual results on the three datasets also show that our network produces visually plausible results compared with recent works.
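
A minimal sketch of a Gaussian-kernel reconstruction loss of the kind described above: a larger kernel in the coarse stage emphasizes structure, and a smaller kernel in the refinement stage emphasizes detail. The kernel sizes and sigmas below are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def gaussian_l1_loss(pred: torch.Tensor, target: torch.Tensor,
                     kernel_size: int, sigma: float) -> torch.Tensor:
    # Blur both prediction and target with the same Gaussian kernel before comparing,
    # so the loss emphasizes structure at the chosen scale.
    pred_s = gaussian_blur(pred, [kernel_size, kernel_size], [sigma, sigma])
    target_s = gaussian_blur(target, [kernel_size, kernel_size], [sigma, sigma])
    return F.l1_loss(pred_s, target_s)

# Illustrative usage (assumed settings):
#   coarse stage:     loss = gaussian_l1_loss(out_coarse, gt, kernel_size=9, sigma=2.0)
#   refinement stage: loss = gaussian_l1_loss(out_refine, gt, kernel_size=3, sigma=0.8)
```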



2019 ◽  
Vol 485 (2) ◽  
pp. 2617-2627 ◽  
Author(s):  
David M Reiman ◽  
Brett E Göhre

Abstract Near-future large galaxy surveys will encounter blended galaxy images at a fraction of up to 50 per cent in the densest regions of the Universe. Current deblending techniques may segment the foreground galaxy while leaving missing pixel intensities in the background galaxy flux. The problem is compounded by the diffuse nature of galaxies in their outer regions, making segmentation significantly more difficult than in traditional object segmentation applications. We propose a novel branched generative adversarial network to deblend overlapping galaxies, where the two branches produce images of the two deblended galaxies. We show that generative models are a powerful engine for deblending given their innate ability to infill missing pixel values occluded by the superposition. We maintain high peak signal-to-noise ratio and structural similarity scores with respect to ground truth images upon deblending. Our model also predicts near-instantaneously, making it a natural choice for the immense quantities of data soon to be created by large surveys such as Large Synoptic Survey Telescope, Euclid, and Wide-Field Infrared Survey Telescope.
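
A rough PyTorch sketch of a branched generator in the spirit of the abstract: a shared encoder ingests the blended image, and two decoder branches emit the two deblended galaxies. The layer layout and channel counts are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class BranchedDeblender(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        # Shared encoder for the blended input image (single-band assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )

        def branch():
            # Each branch decodes one of the two overlapping galaxies.
            return nn.Sequential(
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        self.branch_a, self.branch_b = branch(), branch()

    def forward(self, blended):
        z = self.encoder(blended)
        return self.branch_a(z), self.branch_b(z)  # two deblended galaxy images
```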



2021 ◽  
pp. 1-14
Author(s):  
Lijun Zhang ◽  
Lixiang Duan ◽  
Xiaocui Hong ◽  
Xiangyu Liu ◽  
Xinyun Zhang

Machinery operates well under normal conditions in most cases; far fewer samples are collected in a fault state (minority samples) than in a normal state, resulting in an imbalance of samples. Common machine learning algorithms such as deep neural networks require a significant amount of data during training to avoid overfitting. These models often fail to detect minority samples when the input samples are imbalanced, which results in missed diagnoses of equipment faults. Although the Deep Convolution Generative Adversarial Network (DCGAN) is an effective method for augmenting minority samples, it does not fundamentally address the problem of unstable Generative Adversarial Network (GAN) training. This study proposes an improved DCGAN model with improved stability and sample balance for achieving greater classification accuracy over minority samples. First, spectral normalization is applied to each convolutional layer, improving the stability of the DCGAN discriminator. Then, the improved DCGAN model is trained to generate new samples that are different from the original samples but have a similar distribution when the Nash equilibrium is reached. Four indices—Inception Score (IS), Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM)—were used to quantitatively evaluate the generated images. Finally, the Balance Degree of Samples (BDS) index is proposed, and the new samples are proportionally added to the original samples to improve sample balance, forming several groups of datasets with different balance degrees; Convolutional Neural Network (CNN) models are then used to classify these samples. In experimental analysis on a reciprocating compressor, the variance of the loss is found to be less than 1% of the original value, indicating that the improved model is more stable and generates diverse, high-quality sample images compared with the unmodified model. The classification accuracy exceeds 95% and tends to remain stable when the balance degree of samples is greater than 80%.
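
A minimal sketch of the stabilization step described above: wrapping every convolution of a DCGAN-style discriminator in spectral normalization. The layer layout is an illustrative assumption, not the paper's exact network.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_sn_discriminator(in_ch: int = 1, ch: int = 64) -> nn.Sequential:
    # Every convolutional layer is spectrally normalized to constrain the
    # discriminator's Lipschitz constant and stabilize adversarial training.
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_ch, ch, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(ch * 2, ch * 4, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(ch * 4, 1, 4, stride=1, padding=0)),
    )
```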



Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm includes the following steps. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created by the overlapped blocks. To calculate the sparse vector, the orthogonal matching pursuit algorithm is used. Then, the dictionary is updated by means of the PCA algorithm to achieve the sparsest representation of vectors. Since the signal energy, unlike the noise energy, is concentrated on a small dataset by transforming into the PCA domain, the signal and noise can be well distinguished. The proposed algorithm was implemented in a MATLAB environment and its performance was evaluated on some standard grayscale images under different levels of standard deviations of white Gaussian noise by means of peak signal-to-noise ratio, structural similarity indexes, and visual effects. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement compared to dual-tree complex discrete wavelet transform and K-singular value decomposition image denoising methods. It also obtains competitive results with the block-matching and 3D filtering method, which is the current state-of-the-art for image denoising.
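
A hedged sketch of the sparse-coding step: overlapping patches are coded with orthogonal matching pursuit against an overcomplete DCT dictionary. Patch size, dictionary size, and sparsity level are assumptions; the PCA dictionary update and patch aggregation are omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dct_dictionary(patch_size: int = 8, atoms_per_dim: int = 11) -> np.ndarray:
    # Overcomplete 1-D DCT atoms, combined separably into a 2-D dictionary.
    d1 = np.zeros((patch_size, atoms_per_dim))
    for k in range(atoms_per_dim):
        basis = np.cos(np.arange(patch_size) * k * np.pi / atoms_per_dim)
        if k > 0:
            basis -= basis.mean()
        d1[:, k] = basis / np.linalg.norm(basis)
    return np.kron(d1, d1)  # shape: (patch_size**2, atoms_per_dim**2)

def sparse_code_patches(patches: np.ndarray, sparsity: int = 4) -> np.ndarray:
    # patches: (n_patches, patch_size**2) rows of vectorized overlapping blocks.
    D = dct_dictionary()
    coefs = orthogonal_mp(D, patches.T, n_nonzero_coefs=sparsity)
    return (D @ coefs).T  # denoised patch estimates from the sparse approximation
```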



Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is hardware trojan (HT) viruses. HTs can be inserted into a circuit at any phase of the production chain. HTs degrade the infected circuit, destroy it, or leak encrypted data. Nowadays, efforts are being made to address HTs through machine learning (ML) techniques, mainly for the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available in free public libraries, such as Trust-HUB, are limited to a few benchmark samples created from large circuits. Thus, it is difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and area-power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
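
A hedged sketch of a conditional, Wasserstein-style generator over tabular area-power features, in the spirit of the WCGAN described above: the class label (normal vs. HT-infected) is concatenated with the noise vector so the generator can synthesize circuit feature rows of either class. The dimensions and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalFeatureGenerator(nn.Module):
    def __init__(self, noise_dim: int = 32, num_classes: int = 2, feat_dim: int = 16):
        super().__init__()
        # Class embedding: normal circuit vs. HT-infected circuit (assumed labels).
        self.embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),  # synthetic area-power feature row
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

# Illustrative usage with hypothetical shapes:
#   g = ConditionalFeatureGenerator()
#   fake_rows = g(torch.randn(8, 32), torch.randint(0, 2, (8,)))
```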


