Learning to Deblur Face Images via Sketch Synthesis

2020 ◽  
Vol 34 (07) ◽  
pp. 11523-11530 ◽  
Author(s):  
Songnan Lin ◽  
Jiawei Zhang ◽  
Jinshan Pan ◽  
Yicun Liu ◽  
Yongtian Wang ◽  
...  

The success of existing face deblurring methods based on deep neural networks is mainly due to their large model capacity. Few algorithms have been specially designed according to the domain knowledge of face images and the physical properties of the deblurring process. In this paper, we propose an effective face deblurring algorithm based on deep convolutional neural networks (CNNs). Motivated by the conventional deblurring pipeline, which usually involves motion blur estimation followed by latent clear image restoration, the proposed algorithm first estimates the motion blur with a deep CNN and then restores the latent clear image using the estimated blur. However, estimating motion blur from blurry face images is difficult because their textures are scarce. As most face images share common global structures that can be modeled well by sketch information, we propose to learn face sketches with a deep CNN so that the sketches can aid motion blur estimation. With the estimated motion blur, we then develop an effective latent image restoration algorithm based on a deep CNN. Although it involves several components, the proposed algorithm is trained in an end-to-end fashion. We analyze the effectiveness of each component for face image deblurring and show that the proposed algorithm deblurs face images with favorable performance against state-of-the-art methods.
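The two-stage idea in this abstract, estimate the motion blur and then invert it, can be illustrated with a classical non-blind restoration step standing in for the paper's learned restoration CNN. Below is a minimal Wiener-deconvolution sketch in NumPy, assuming a circular-convolution blur model, a known kernel, and an illustrative `noise_power` regularizer (none of these values come from the paper):

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-2):
    """Non-blind Wiener deconvolution under a circular-convolution blur model."""
    # Zero-pad the kernel to the image size and move to the frequency domain
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + K); K regularizes near-zero frequencies
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))
```

With an accurately estimated kernel (which is what the sketch-guided CNN provides in the paper), this inversion recovers most of the latent image; the smaller `noise_power` is, the sharper but noisier the result.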

Optimization is the process of finding the best among all possible solutions. Over the last two to three decades, nature-inspired algorithms have played an important role in improving solutions to various problems, and by comparing meta-heuristic algorithms, researchers can select the one best suited to a given problem. In this research, we apply a new cepstrum-based image restoration technique to estimate the PSF parameters of motion-blurred images as the primary technique. In addition, the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the BAT Algorithm, and a GA-BAT hybrid are applied to optimize the blur parameters computed by the cepstrum-based blur estimation technique. This allows the performance of each algorithm to be analyzed on the same primary technique. The performance analysis of all four algorithms supports choosing the best meta-heuristic for the cepstrum-based technique and assessing the preciseness of the motion blur estimate. All four methods are applied to the same set of images, and the algorithms are tested and compared on grayscale images from freely available online benchmark datasets.
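As one illustration of how a meta-heuristic can refine a cepstrum-derived blur parameter, here is a minimal particle swarm optimizer over a single scalar. The inertia and acceleration constants and the quadratic demo objective are illustrative choices; the actual fitness function paired with the cepstrum technique is not specified in the abstract.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iters=100, seed=0):
    """Minimize a 1-D objective with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)           # particle positions
    v = np.zeros(n_particles)                      # particle velocities
    pbest = x.copy()                               # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]              # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / cognitive / social
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest
```

In the paper's setting, `objective` would score a candidate blur length or angle by the quality of the resulting restoration, and GA, BAT, or the GA-BAT hybrid would slot into the same role.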


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some works have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for the case where the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images alone.
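The two-input design can be sketched as a forward pass that concatenates the degradation parameter with the image features before the classification head. This toy NumPy version only shows the wiring; the real networks are deep CNNs, and every shape and weight here is made up.

```python
import numpy as np

def forward(image, degradation_param, W_feat, W_cls):
    # Hypothetical feature extractor: flatten + linear + ReLU
    feat = np.maximum(0.0, image.ravel() @ W_feat)
    # Second input: append the (known or estimated) degradation parameter
    joint = np.concatenate([feat, [degradation_param]])
    # Classification head over the joint representation
    return joint @ W_cls
```

When the degradation level is unknown, the abstract's estimation network would supply `degradation_param` in place of a ground-truth value.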


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xuhui Fu

In recent years, deep learning has become a very popular artificial intelligence method and an active area within image recognition. It is a type of machine learning derived from artificial neural networks and is used to learn the characteristics of sample data. Through a multilayer network, it can learn image information from low-level to high-level features, extract the characteristics of a sample, and then perform identification and classification. The purpose of deep learning is to give machines analytical and learning capabilities comparable to those of the human brain. Its ability to process data (including images) is unmatched by other methods, and its achievements in recent years have left other methods behind. This article comprehensively reviews the progress of applying deep convolutional neural networks to the restoration of ancient Chinese patterns and mainly focuses on research based on deep convolutional neural networks. The main tasks are as follows: (1) a detailed and comprehensive introduction to the basics of deep convolutional neural networks is given, along with a summary of related algorithms in three directions: text preprocessing, learning, and neural networks. The article focuses on the mechanism of traditional pattern repair based on deep convolutional neural networks and analyzes the key structures and principles. (2) Image restoration models based on deep convolutional networks and adversarial neural networks are studied. The model consists of four parts: information masking, feature extraction, a generative network, and a discriminative network; the functions of each part are both independent and interdependent. (3) The method based on the deep convolutional neural network and two other methods are tested on the same portion of the Qinghai traditional embroidery image dataset.
Judging from the final evaluation metrics, the proposed method scores better than both the traditional sample-based image restoration method and the deep-learning-based image restoration method. Moreover, in terms of the actual restoration results, the proposed method restores images better than the other two methods, and its results are more consistent with how the human eye perceives images.
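Of the four parts in the model above, information masking and a reconstruction loss restricted to the missing region are simple enough to sketch directly; the generative and discriminative networks are full CNNs and are omitted. All function names here are illustrative, not from the paper.

```python
import numpy as np

def mask_image(image, mask):
    # Information masking: zero out the damaged region (mask == 1 means missing)
    return image * (1 - mask)

def masked_l1(pred, target, mask):
    # Reconstruction loss evaluated only over the masked (missing) pixels
    return np.abs((pred - target) * mask).sum() / max(mask.sum(), 1)
```

In an inpainting setup like this, the generator would be trained to minimize `masked_l1` (plus an adversarial term from the discriminator) on images produced by `mask_image`.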


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3584 ◽  
Author(s):  
Weijun Hu ◽  
Yan Zhang ◽  
Lijie Li

The fast progress in research and development of multifunctional, distributed sensor networks has brought challenges in processing data from a large number of sensors. Using deep learning methods such as convolutional neural networks (CNNs), it is possible to build smarter systems that forecast future situations and precisely classify large amounts of sensor data. Multi-sensor data from atmospheric-pollutant measurements involving five criteria, with the underlying analytic model unknown, need to be categorized, as does the Diabetic Retinopathy (DR) fundus image dataset. In this work, we created automatic classifiers based on deep CNNs with two models, a simpler feedforward model with dual modules and an Inception-ResNet-v2 model, plus various structural tweaks, for classifying the data from the two tasks. For segregating the multi-sensor data, we trained a deep CNN-based classifier on an image dataset extracted from the data by a novel image-generating method. We created two deepened and one reductive feedforward network for DR stage classification. The validation accuracies and visualization results show that increasing the depth of a deep CNN or the number of kernels in its convolutional layers does not indefinitely improve classification quality, and that a more sophisticated model does not necessarily achieve higher performance when training datasets are quantitatively limited, while increasing the training image resolution can yield higher classification accuracies for trained CNNs. The methodology aims to support the design of classification networks powering intelligent sensors.
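The "novel image-generating method" is not described in the abstract; as a stand-in, here is one hypothetical way to turn multi-sensor readings into a fixed-height grayscale image via per-channel min-max normalization and row tiling.

```python
import numpy as np

def sensors_to_image(readings, height=32):
    """readings: (n_sensors, n_samples) array -> tiled image with values in [0, 1]."""
    mins = readings.min(axis=1, keepdims=True)
    spans = np.ptp(readings, axis=1, keepdims=True)
    # Min-max normalize each sensor channel; constant channels map to 0
    norm = (readings - mins) / np.where(spans == 0, 1, spans)
    # Tile each sensor's row so the resulting image has a usable height for a CNN
    rows_per_sensor = max(1, height // readings.shape[0])
    return np.repeat(norm, rows_per_sensor, axis=0)
```

Any encoding that preserves per-sensor structure would serve the same purpose; the point is that ordinary 2-D CNN classifiers can then consume the sensor streams directly.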


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA77-WA86 ◽  
Author(s):  
Haibin Di ◽  
Zhun Li ◽  
Hiren Maniar ◽  
Aria Abubakar

Depicting geologic sequences from 3D seismic surveying is of significant value to subsurface reservoir exploration, but manual interpretation by experienced seismic interpreters is usually time- and labor-intensive. We have developed a semisupervised workflow for efficient seismic stratigraphy interpretation using state-of-the-art deep convolutional neural networks (CNNs). Specifically, the workflow consists of two components: (1) seismic feature self-learning (SFSL) and (2) stratigraphy model building (SMB), each of which is formulated as a deep CNN. Whereas the SMB is supervised by knowledge from domain experts and its CNN uses a network architecture typical of image segmentation, the SFSL is designed as an unsupervised process and can thus run backstage while an expert prepares the training labels for the SMB CNN. Compared with conventional approaches, our workflow is superior in two aspects. First, the SMB CNN, initialized by the SFSL CNN, successfully inherits prior knowledge of the seismic features in the target seismic data. It therefore becomes feasible to complete the supervised training of the SMB CNN more efficiently using only a small amount of training data, for example, less than 0.1% of the available seismic data as demonstrated in this paper. Second, for the convenience of seismic experts in translating their domain knowledge into training labels, our workflow is designed to be applicable to three annotation scenarios: trace-wise, paintbrushing, and full-sectional. The performance of the new workflow is well verified through application to three real seismic data sets. We conclude that the new workflow not only provides robust stratigraphy interpretation for a given seismic volume but also holds great potential for other problems in seismic data analysis.
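The workflow's key mechanism, initializing the supervised SMB CNN from the unsupervised SFSL CNN, amounts to weight reuse. A shape-level sketch in NumPy (the dictionary "model" and all sizes are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for encoder weights learned without labels (the SFSL stage)
W_encoder = rng.standard_normal((64, 16))

def build_smb_model(pretrained_encoder, n_classes=5):
    # Reuse the self-learned encoder; only the small head starts from scratch,
    # which is why a few labeled examples suffice for the supervised stage
    return {"encoder": pretrained_encoder.copy(),
            "head": np.zeros((pretrained_encoder.shape[1], n_classes))}

model = build_smb_model(W_encoder)
```

Because the encoder already captures the target volume's seismic features, only the segmentation head needs substantial supervised training.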


Author(s):  
S.M. Sofiqul Islam ◽  
Emon Kumar Dey ◽  
Md. Nurul Ahad Tawhid ◽  
B. M. Mainul Hossain

Automatic identification of garment design classes for recommending fashion trends is important nowadays because of the rapid growth of online shopping. By learning image properties efficiently, a machine can achieve better classification accuracy. Several methods based on hand-engineered feature coding exist for identifying garment design classes, but most of the time they do not achieve good results. Recently, deep convolutional neural networks (CNNs) have shown better performance on various object recognition tasks. A deep CNN uses multiple levels of representation and abstraction, which helps a machine understand different types of data (images, sound, and text) more accurately. In this paper, we apply deep CNNs to identify garment design classes. To evaluate performance, we used two well-known CNN models, AlexNet and VGGNet, on two different datasets. We also propose a new CNN model based on AlexNet that outperforms the existing state of the art by a significant margin.


2020 ◽  
Vol 9 (2) ◽  
pp. 392 ◽  
Author(s):  
Ki-Sun Lee ◽  
Seok-Ki Jung ◽  
Jae-Jun Ryu ◽  
Sang-Wan Shin ◽  
Jinwook Choi

Dental panoramic radiographs (DPRs) provide information that can potentially be used to evaluate bone density changes through textural and morphological feature analysis of the mandible. This study evaluates the discriminating performance of deep convolutional neural networks (CNNs), employed with various transfer learning strategies, in classifying specific features of osteoporosis in DPRs. For objective labeling, we collected a dataset of 680 images from different patients who underwent both skeletal bone mineral density and digital panoramic radiographic examinations at the Korea University Ansan Hospital between 2009 and 2018. Four study groups were used to evaluate the impact of various transfer learning strategies on deep CNN models: a basic CNN model with three convolutional layers (CNN3), the Visual Geometry Group deep CNN model (VGG-16), a transfer learning model from VGG-16 (VGG-16_TF), and fine-tuning of the transfer learning model (VGG-16_TF_FT). The best performing model achieved an overall area under the receiver operating characteristic curve of 0.858. In this study, transfer learning and fine-tuning improved the performance of a deep CNN for screening osteoporosis in DPR images. In addition, a visual interpretation of the best performing model using the gradient-weighted class activation mapping technique indicated that the model relied on image features in the lower left and right borders of the mandible. This result suggests that deep learning-based assessment of DPR images could be useful and reliable for the automated screening of osteoporosis patients.
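Fine-tuning a transferred model (the VGG-16_TF_FT condition) boils down to updating only the unfrozen layers during training. A minimal freeze-aware gradient step, with made-up scalar "layers" standing in for real weight tensors:

```python
def sgd_step(params, grads, lr, frozen):
    # Update only unfrozen layers; frozen[i] = True keeps pretrained weights fixed
    return [p if is_frozen else p - lr * g
            for p, g, is_frozen in zip(params, grads, frozen)]
```

Pure transfer learning (VGG-16_TF) corresponds to freezing all convolutional layers and training only the classifier; fine-tuning then unfreezes some or all of the pretrained layers, typically with a small learning rate.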

