Image restoration for irregular holes based on a dual-discriminator generative adversarial network

Author(s):  
Haiyan Li ◽  
Yan Ma ◽  
Lei Guo ◽  
Haijiang Li ◽  
Jianhua Chen ◽  
...  

To address the inability of global-and-local generative adversarial networks (GANs) to inpaint random, irregular large holes, and to remedy the color-difference and blurring defects of standard-convolution generators, a network architecture for inpainting irregular large holes based on a dual-discriminator GAN is proposed. First, the image generator is a U-Net architecture built from partial convolutions: the normalized partial convolution performs end-to-end mask updating over the valid pixels only, while the skip connections in the U-Net propagate contextual information of the image to higher resolutions, and the model is trained with a weighted combination of reconstruction loss, perceptual loss and style loss. Next, under an adversarial loss, a dual discrimination network comprising a synthetic discriminator and a global discriminator is trained separately to judge the consistency between the generated image and the real image. Finally, the weighted loss functions are trained jointly with the generator and the dual discrimination network to further enhance the detail and overall consistency of the inpainted area and make the results more natural. Simulation experiments are carried out on the Places365 standard database. Subjective and objective results show that, when repairing random, irregular, large-area holes, the proposed method yields better overall and detail-level semantic consistency than existing methods, effectively overcoming blurry details, color distortion and artifacts.
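As a rough illustration of the partial-convolution idea the generator relies on, the following NumPy sketch (single channel, valid padding; the paper's kernel weights, padding and exact normalization may differ) renormalizes each window by the fraction of valid pixels and updates the mask wherever the window saw at least one valid pixel:

```python
import numpy as np

def partial_conv2d(x, mask, weight, bias=0.0):
    """Single-channel partial convolution with mask update (valid padding).

    x, mask: 2-D arrays of equal shape; mask is 1 at valid pixels, 0 in holes.
    The output is renormalised by sum(1)/sum(mask) over the window, and the
    updated mask is 1 wherever the window contained any valid pixel.
    """
    kh, kw = weight.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    new_mask = np.zeros_like(out)
    window_sum = kh * kw  # sum(1) over the window
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + kh, j:j + kw]
            m_sum = m.sum()
            if m_sum > 0:
                xm = x[i:i + kh, j:j + kw] * m
                out[i, j] = (weight * xm).sum() * (window_sum / m_sum) + bias
                new_mask[i, j] = 1.0
            else:
                out[i, j] = bias  # window entirely inside the hole
    return out, new_mask
```

On an all-ones image the renormalization exactly compensates for missing pixels, so the output is constant even across partially masked windows.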

Author(s):  
Zhenzhen Yang ◽  
Pengfei Xu ◽  
Yongpeng Yang ◽  
Bing-Kun Bao

The U-Net has become the most popular structure for medical image segmentation in recent years. Although its performance is outstanding, many experiments demonstrate that the classical U-Net architecture is insufficient when the size of segmentation targets varies and when imbalance arises between target and background in different forms of segmentation. To improve on the U-Net architecture, we develop a new architecture named the densely connected U-Net (DenseUNet) in this article. The proposed DenseUNet adopts a dense block to improve feature extraction capability and employs a multi-feature fuse block that fuses feature maps of different levels to increase the accuracy of feature extraction. In addition, exploiting the complementary advantages of the cross-entropy and Dice loss functions, a new loss function for the DenseUNet is proposed to deal with the imbalance between target and background. Finally, we test the proposed DenseUNet and compare it with the multi-resolutional U-Net (MultiResUNet) and the classic U-Net on three different datasets. The experimental results show that the DenseUNet performs significantly better than both the MultiResUNet and the classic U-Net.
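A loss combining cross-entropy and Dice can be sketched as a weighted sum; the weighting `alpha` below is an illustrative choice, since the abstract does not give DenseUNet's exact formulation:

```python
import numpy as np

def combined_loss(pred, target, alpha=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss.

    pred: predicted foreground probabilities in (0, 1); target: binary mask.
    alpha balances the two terms (0.5 is illustrative, not the paper's value).
    """
    pred = np.clip(pred, eps, 1 - eps)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return alpha * bce + (1 - alpha) * dice
```

The Dice term depends on the overlap ratio rather than per-pixel averages, which is what makes it less sensitive to target/background imbalance than cross-entropy alone.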


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6109
Author(s):  
Nkosikhona Dlamini ◽  
Terence L. van Zyl

Similarity learning using deep convolutional neural networks has been applied extensively to computer vision problems, an attraction supported by its success in one-shot and zero-shot classification. Advances in similarity learning are essential for smaller datasets, or for datasets with few labelled examples per class, such as wildlife re-identification. Improving the performance of similarity learning models involves developing new sampling techniques and designing loss functions better suited to training similarity in neural networks. However, the impact of these advances is typically tested on larger datasets, with limited attention given to smaller, imbalanced datasets such as those found in wildlife re-identification. To this end, we test advances in loss functions for similarity learning on several animal re-identification tasks. We add two new public datasets, Nyala and Lions, to the challenge of animal re-identification. Our results are state-of-the-art on all public datasets tested except Pandas. The achieved Top-1 Recall is 94.8% on the Zebra dataset, 72.3% on the Nyala dataset, 79.7% on the Chimps dataset and 88.9% on the Tiger dataset. For the Lion dataset, we set a new benchmark at 94.8%. We find that the best-performing loss function across all datasets is generally the triplet loss; however, the improvement over Proxy-NCA models is only marginal. We demonstrate that no single combination of neural network architecture and loss function is best suited to all datasets, although VGG-11 may be the most robust first choice. Our results highlight the need for broader experimentation with loss functions and network architectures on the challenging task of wildlife re-identification, beyond classical benchmarks.
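The triplet loss referred to above has a standard form; a minimal sketch on raw embedding vectors (the margin is a common default, not necessarily the value used in these experiments):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors:
    max(0, d(a, p) - d(a, n) + margin), with squared Euclidean distance.
    Pulls the positive within `margin` closer to the anchor than the negative.
    """
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)
```

The loss is zero once the negative is sufficiently farther from the anchor than the positive, so only "hard" triplets contribute gradients, which is why sampling strategy matters as much as the loss itself.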


2021 ◽  
Vol 3 (5) ◽  
Author(s):  
Sai Nikhil Rao Gona ◽  
Himamsu Marellapudi

Abstract Choosing which recipe to eat and which to avoid is not simple for anyone: it takes strenuous effort and a lot of time to calculate the calorie content and pH level of a dish. In this paper, we propose an ensemble neural network architecture that suggests recipes based on a person's taste and on the pH level and calorie content of the recipes. We also propose a bi-directional LSTM-based variational autoencoder for generating new recipes. We ensemble three bi-directional LSTM-based recurrent neural networks that classify recipes according to the person's taste, the recipe's pH level and its calorie content. The proposed model also predicts the taste ratings of recipes, for which we propose a custom loss function that gave better results than the standard loss functions, and it predicts the calorie content of recipes as well. After being trained on the recipes suitable for a person, the bi-directional LSTM-based variational autoencoder generates new recipes from the existing ones. After training and testing the recurrent neural networks and the variational autoencoder, we evaluated the model on 20 new recipes and obtained very encouraging experimental results: the variational autoencoder generated several new recipes that are healthy for, and likely to be liked by, the specific person.
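The variational autoencoder at the core of the recipe generator relies on the standard reparameterisation trick and Gaussian KL term; a minimal sketch follows (the latent dimensionality and the LSTM encoder/decoder are not specified in the abstract, so only the generic VAE machinery is shown):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I),
    which lets gradients flow through the sampling step of a VAE."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior, the
    regularisation term added to the reconstruction loss in VAE training."""
    return -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
```

The total training objective is the reconstruction loss (here, over recipe token sequences) plus this KL term.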


Author(s):  
A. Howie ◽  
D.W. McComb

The bulk loss function Im(−1/ε(ω)), a well-established tool for the interpretation of valence loss spectra, is being progressively adapted to the wide variety of inhomogeneous samples of interest to the electron microscopist. Proportionality between n, the local valence electron density, and ε − 1 (Sellmeyer's equation) has sometimes been assumed, but may not be valid even in homogeneous samples. Figs. 1 and 2 show the experimentally measured bulk loss functions for three pure silicates of different specific gravity ρ: quartz (ρ = 2.66), coesite (ρ = 2.93) and a zeolite (ρ = 1.79). Clearly, despite the substantial differences in density, the shift of the prominent loss peak is very small, and far less than that predicted by scaling ε for quartz with Sellmeyer's equation, or even the somewhat smaller shift given by the Clausius-Mossotti (CM) relation, which assumes proportionality between n (or ρ in this case) and (ε − 1)/(ε + 2). Both theories overestimate the rise in the peak height for coesite and underestimate the increase at high energies.
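The two density-scaling rules compared above have simple closed forms for a single real value of ε; the sketch below is only an algebraic illustration (the measured loss peak depends on the full complex ε(ω), and the example value of ε is assumed, not taken from the paper):

```python
def scale_sellmeyer(eps1, ratio):
    """Sellmeyer-type scaling: (eps - 1) proportional to density,
    so eps2 = 1 + ratio * (eps1 - 1), where ratio = rho2 / rho1."""
    return 1 + ratio * (eps1 - 1)

def scale_clausius_mossotti(eps1, ratio):
    """Clausius-Mossotti scaling: (eps - 1)/(eps + 2) proportional to
    density. With k = ratio * (eps1 - 1)/(eps1 + 2), solving for eps2
    gives eps2 = (1 + 2k) / (1 - k)."""
    k = ratio * (eps1 - 1) / (eps1 + 2)
    return (1 + 2 * k) / (1 - k)
```

For the quartz-to-coesite case the density ratio is 2.93/2.66 ≈ 1.10, and both rules predict a noticeable change in ε, in contrast to the nearly unshifted loss peak observed experimentally.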


2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occluded area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, taken as the echo intensity at 0.5° elevation in the unblocked area, and from input features, taken as the intensities in a cube spanning multiple elevations and gates aligned with the location of the corresponding labels. Two loss functions are used to train the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Because the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train separate models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in the occluded area, and that the self-defined loss function yields better results for strong echoes.
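A self-defined loss that up-weights strong echoes can be sketched as a weighted MSE; the threshold and weight below are illustrative assumptions, as the paper's exact scheme is not given in the abstract:

```python
import numpy as np

def weighted_mse(pred, target, threshold=35.0, strong_weight=3.0):
    """MSE with extra weight on strong echoes (reflectivity at or above
    `threshold` dBZ). Both the 35 dBZ threshold and the factor of 3 are
    illustrative choices, not values from the paper."""
    w = np.where(target >= threshold, strong_weight, 1.0)
    return np.mean(w * (pred - target) ** 2)
```

Errors on convective-strength echoes thus dominate the gradient, pushing the network to fit them more closely than it would under plain MSE.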


Author(s):  
Donghui Zhang ◽  
Ruijie Liu

Abstract Orienteering has gradually changed from a professional sport into a civilian one, and in recent years it has been widely popularized; many colleges and universities in China now offer it as a course. With improved living conditions, orienteering has become a genuine leisure sport in modern life. Its reduced difficulty enables more people to participate, but it also exposes a series of problems: existing positioning technology is relatively backward, so progress in personnel tracking, emergency services and related areas has been slow. To solve these problems, a new intelligent orienteering application system is developed based on the Internet of Things. The system adopts a ZigBee network architecture; ZigBee is the mainstream choice in current wireless sensor network technology, offering portability, low power consumption and signal stability. Because the mobile-signal communication environment is complex, the collected information is processed with signal amplification and anti-interference techniques, and devices such as anti-interference units and video isolators safeguard the signal to the greatest extent possible. To verify the system's practical performance, several experiments were conducted, including studies of the relationship between error and traffic radius and between coverage and the number of anchor nodes. The data show that the proposed scheme substantially improves comprehensive performance over the traditional scheme, significantly increasing accuracy and coverage; in the simulation experiments, coverage approaches 100%. The system achieves good results and can be widely used in orienteering training and competition.


2021 ◽  
pp. 1-29
Author(s):  
Yanhong Chen

ABSTRACT In this paper, we study the optimal reinsurance contracts that minimize the convex combination of the Conditional Value-at-Risk (CVaR) of the insurer’s loss and the reinsurer’s loss over the class of ceded loss functions such that the retained loss function is increasing and the ceded loss function satisfies the Vajda condition. Among a general class of reinsurance premium principles that satisfy the properties of risk loading and convex order preservation, the optimal solutions are obtained. Our results show that the optimal ceded loss functions take the form of five interconnected segments for general reinsurance premium principles, and can be further simplified to four interconnected segments if more properties are imposed on the premium principles. Finally, we derive optimal parameters for the expected value premium principle and present a numerical study analyzing the impact of the weighting factor on the optimal reinsurance.
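The objective described above can be written explicitly. Using the Rockafellar-Uryasev representation of CVaR, and writing $X$ for the total loss, $f$ for the ceded loss function, $\pi(f)$ for the reinsurance premium, and $\lambda$ for the weighting factor (this notation is assumed here, since the abstract does not fix symbols):

```latex
\mathrm{CVaR}_{\alpha}(Y) \;=\; \inf_{t \in \mathbb{R}}
\Big\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(Y - t)_{+}\big] \,\Big\},
\qquad
\min_{f}\;\; \lambda\, \mathrm{CVaR}_{\alpha}\!\big(X - f(X) + \pi(f)\big)
\;+\; (1-\lambda)\, \mathrm{CVaR}_{\beta}\!\big(f(X) - \pi(f)\big),
```

subject to the retained loss $X - f(X)$ being increasing and $f$ satisfying the Vajda condition; $\alpha$ and $\beta$ are the insurer's and reinsurer's confidence levels.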


2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand as one of the most relevant natural disasters worldwide, particularly due to the effects of climate change and their impact at various societal and environmental levels. A significant amount of research has been done to address this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role: it can extract and combine information from several imaging modalities for fire detection, characterization and wildfire spread forecasting. In recent years, work on Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the most impact on the fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top performers and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
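Benchmarking architecture/loss/image-type combinations requires a common segmentation metric; a minimal Intersection-over-Union implementation is sketched below (the metric choice is illustrative, not necessarily the one used in the paper):

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection-over-Union between two binary masks, a standard
    metric for comparing segmentation outputs across model variants."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)
```

Scoring every (architecture, loss, image type) combination with the same metric is what makes the factor-wise comparison in such a study meaningful.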


2022 ◽  
Author(s):  
Lijuan Zheng ◽  
Shaopeng Liu ◽  
Senping Tian ◽  
Jianhua Guo ◽  
Xinpeng Wang ◽  
...  

Abstract Anemia is one of the most widespread clinical symptoms in the world and can adversely affect people's daily life and work. Given the ubiquity of anemia screening and the inconvenience of traditional blood tests, many deep learning detection methods based on image recognition have been developed in recent years, including methods that detect anemia from images of an individual's conjunctiva. However, existing methods that use a single conjunctiva image often cannot reach adequate accuracy in many real-world application scenarios. To enhance intelligent anemia detection from conjunctiva images, we propose a new algorithmic framework that makes full use of the information contained in the image. Concretely, we fully explore both the global and the local information in the image, and adopt a two-branch neural network architecture to unify the two. Compared with existing methods, our method fully exploits the information in a single conjunctiva image and achieves more reliable anemia detection; the experimental results verify the effectiveness of the new algorithm.
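The two-branch idea can be sketched by encoding the whole image and a region of interest separately and concatenating the features; the toy histogram encoder below is a placeholder for the paper's CNN branches, whose details the abstract does not give:

```python
import numpy as np

def encode(img, bins=8):
    """Toy encoder: normalised intensity histogram over [0, 1]
    (a stand-in for one CNN branch of the two-branch network)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def two_branch_features(image, roi):
    """Concatenate global (whole image) and local (region of interest)
    features, to be fed into a shared classifier head."""
    return np.concatenate([encode(image), encode(roi)])
```

A classifier trained on the concatenated vector sees both the overall appearance of the conjunctiva and the fine detail of the region most indicative of anemia.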


Author(s):  
Andrew Cropper ◽  
Sebastijan Dumančic

A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or does not, and there is no intermediate position. To address this limitation, we go beyond entailment and use 'example-dependent' loss functions to guide the search, where a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art), show that Brute can substantially outperform existing ILP systems, both in terms of predictive accuracies and learning times, and can learn programs 20 times larger than state-of-the-art systems.
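The shift from binary entailment to an example-dependent loss can be illustrated with a toy best-first search over string-transformation programs; the primitives, the edit-distance loss and the search budget below are all assumptions for illustration, not Brute's actual implementation:

```python
import heapq
from itertools import count

# Primitive string operations a hypothesis can be built from (illustrative).
PRIMS = {
    "drop_first": lambda s: s[1:],
    "upper_first": lambda s: s[:1].upper() + s[1:],
    "reverse": lambda s: s[::-1],
}

def edit_distance(a, b):
    """Levenshtein distance: an example-dependent loss that rewards
    hypotheses whose outputs are merely close to the expected output,
    unlike entailment, which is all-or-nothing."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def loss(program, examples):
    """Total distance between the program's outputs and the expected outputs."""
    total = 0
    for inp, out in examples:
        s = inp
        for op in program:
            s = PRIMS[op](s)
        total += edit_distance(s, out)
    return total

def best_first_search(examples, max_len=4):
    """Grow programs one primitive at a time, always expanding the
    lowest-loss partial program first (a toy analogue of Brute's search)."""
    tie = count()
    frontier = [(loss((), examples), next(tie), ())]
    while frontier:
        l, _, prog = heapq.heappop(frontier)
        if l == 0:
            return prog
        if len(prog) < max_len:
            for op in PRIMS:
                p = prog + (op,)
                heapq.heappush(frontier, (loss(p, examples), next(tie), p))
    return None
```

Because a partial program like `("drop_first",)` already has lower loss than the empty program on the examples below, the search is steered toward the solution instead of exploring blindly.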

