Joint Character-Level Convolutional and Generative Adversarial Networks for Text Classification

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tianshi Wang ◽  
Li Liu ◽  
Huaxiang Zhang ◽  
Long Zhang ◽  
Xiuxiu Chen

As text classification requirements continually evolve, text classifiers need stronger generalization ability to process datasets with new text categories or few training samples. In this paper, we propose a text classification framework for insufficient-training-sample conditions. In the framework, we first quantify the texts with a character-level convolutional neural network and feed the textual features into an adversarial network and a classifier, respectively. Then, we use the real textual features to train a generator and a discriminator so as to make the distribution of the generated data consistent with that of the real data. Finally, the classifier is cooperatively trained on real data and generated data. Extensive experimental validation on four public datasets demonstrates that our method performs significantly better than the comparison methods.
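The cooperative-training step described above can be sketched in a few lines. This is a minimal stand-in, not the paper's architecture: the character-level CNN features are replaced by random vectors, the trained GAN generator by a Gaussian sampler fitted to the real features, and the classifier by nearest-centroid; only the idea of pooling real and generated data to train the classifier is kept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins (hypothetical, for illustration): per-class "CNN features"
real_feats = {0: rng.normal(0.0, 0.3, size=(5, 8)),
              1: rng.normal(2.0, 0.3, size=(5, 8))}

def generate(label, n):
    """Toy generator: samples from a Gaussian fitted to the real features,
    mimicking a GAN whose output distribution matches the real one."""
    x = real_feats[label]
    return rng.normal(x.mean(axis=0), x.std(axis=0), size=(n, 8))

# Cooperative training: pool real and generated features with their labels.
X, y = [], []
for label, feats in real_feats.items():
    fake = generate(label, 20)          # generated data enlarges the set
    X.append(np.vstack([feats, fake]))
    y += [label] * (len(feats) + len(fake))
X, y = np.vstack(X), np.array(y)

# Nearest-centroid classifier trained on the pooled set.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(v):
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

print(classify(np.full(8, 0.1)), classify(np.full(8, 1.9)))  # → 0 1
```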

2019 ◽  
Vol 11 (9) ◽  
pp. 1017 ◽  
Author(s):  
Yang Zhang ◽  
Zhangyue Xiong ◽  
Yu Zang ◽  
Cheng Wang ◽  
Jonathan Li ◽  
...  

Road network extraction from remote sensing images plays an important role in various areas. However, due to complex imaging conditions and terrain factors, such as occlusion and shading, it is very challenging to extract road networks with complete topology structures. In this paper, we propose a learning-based road network extraction framework via a Multi-supervised Generative Adversarial Network (MsGAN), which is jointly trained on the spectral and topology features of the road network. Such a design makes the network capable of learning how to “guess” aberrant road cases caused by occlusion and shadow, based on the relationship between the road region and the centerline; thus, it is able to provide a road network with integrated topology. Additionally, we present a sample quality measurement to efficiently generate a large number of training samples with little human interaction. Through experiments on images from various satellites and comprehensive comparisons to state-of-the-art approaches on public datasets, we demonstrate that the proposed method provides high-quality results, especially for the completeness of the road network.
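The joint spectral/topology supervision can be caricatured as a weighted sum of a road-region loss and a centerline loss. The sketch below is only the supervision term; the adversarial term, the network, and the loss weights are all omitted or assumed (the equal weights here are illustrative, not from the paper).

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between a prediction map and a mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def multi_supervised_loss(region_pred, region_gt, center_pred, center_gt,
                          w_region=1.0, w_center=1.0):
    """Joint supervision on the road region and its centerline
    (weights are illustrative assumptions)."""
    return (w_region * bce(region_pred, region_gt)
            + w_center * bce(center_pred, center_gt))

region_gt = np.array([[0., 1.], [1., 1.]])
center_gt = np.array([[0., 1.], [0., 0.]])
# A near-correct prediction scores lower than one with a flipped region map.
good = multi_supervised_loss(region_gt * 0.9 + 0.05, region_gt,
                             center_gt * 0.9 + 0.05, center_gt)
bad = multi_supervised_loss(1 - (region_gt * 0.9 + 0.05), region_gt,
                            center_gt * 0.9 + 0.05, center_gt)
print(good < bad)  # → True
```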


2021 ◽  
Vol 11 (4) ◽  
pp. 1380
Author(s):  
Yingbo Zhou ◽  
Pengcheng Zhao ◽  
Weiqin Tong ◽  
Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach, Contrastive Distance Learning GAN (CDL-GAN), in this paper. Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) to a principled framework to improve GAN performance. The CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors, strengthening image feature learning for the generator. The ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We model the framework by employing a Siamese network as a module in the GANs without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
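The CoCD term has a simple shape: the distance between two generated samples divided by the increment between their noise vectors. A minimal sketch, assuming a toy stand-in generator (the paper's actual loss and Siamese encoder are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))   # weights of a hypothetical toy generator

def toy_generator(z):
    """Stand-in generator; any noise-to-image map would do here."""
    return np.tanh(z @ W)

def cocd(z1, z2, eps=1e-8):
    """Consistent Contrastive Distance sketch: ratio of the distance
    between generated samples to the increment between their noise
    vectors. Maximizing it pushes the generator to map distinct codes
    to distinct images, counteracting mode collapse."""
    g1, g2 = toy_generator(z1), toy_generator(z2)
    return np.linalg.norm(g1 - g2) / (np.linalg.norm(z1 - z2) + eps)

z1, z2 = rng.normal(size=4), rng.normal(size=4)
ratio = cocd(z1, z2)
```

A collapsed generator (identical outputs for all codes) would drive this ratio to zero, which is exactly what maximizing it penalizes.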


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 60
Author(s):  
Paolo Andreini ◽  
Giorgio Ciano ◽  
Simone Bonechi ◽  
Caterina Graziani ◽  
Veronica Lachi ◽  
...  

In this paper, we use Generative Adversarial Networks (GANs) to synthesize high-quality retinal images along with the corresponding semantic label-maps, used instead of real images during the training of a segmentation network. Unlike previous proposals, we employ a two-step approach: first, a progressively growing GAN is trained to generate the semantic label-maps, which describe the blood vessel structure (i.e., the vasculature); second, an image-to-image translation approach is used to obtain realistic retinal images from the generated vasculature. The adoption of a two-stage process simplifies the generation task, so that network training requires fewer images, with consequently lower memory usage. Moreover, learning is effective, and with only a handful of training samples, our approach generates realistic high-resolution images, which can be successfully used to enlarge small available datasets. Comparable results were obtained by employing only synthetic images in place of real data during training. The practical viability of the proposed approach was demonstrated on two well-established benchmark sets for retinal vessel segmentation, both containing a very small number of training samples, obtaining better performance with respect to state-of-the-art techniques.


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
C. Yuan ◽  
C. Q. Sun ◽  
X. Y. Tang ◽  
R. F. Liu

The purpose of image fusion is to combine source images of the same scene into a single composite image with more useful information and better visual effects. FusionGAN made a breakthrough in this field by proposing to fuse images with a generative adversarial network. However, by concentrating on retaining infrared radiation information and gradient information at the same time, existing fusion methods ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN. In the generator, learnable grouped convolutions improve the efficiency of the model and save computing resources, allowing a better trade-off between the accuracy and speed of the model. Besides, we take the residual dense block as the basic network building unit and use perceptual features of the input as the content loss, achieving the effect of deep network supervision. Experimental results on two public datasets show that the proposed method performs well on subjective visual quality and objective criteria and has obvious advantages over other current typical methods.
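The computational saving behind grouped convolution is easy to see concretely: each group of output channels convolves only its own slice of input channels, dividing the multiply count by the number of groups. The sketch below uses fixed groups (the paper's FLGC variant learns the grouping; that part is not reproduced):

```python
import numpy as np

def grouped_conv1d(x, kernels, groups):
    """Grouped 1-D convolution (valid padding, stride 1).

    x: (in_ch, length); kernels: (out_ch, in_ch // groups, k).
    Each group of output channels only sees its own slice of input
    channels, so the multiply count is 1/groups of a full convolution."""
    in_ch, length = x.shape
    out_ch, gin, k = kernels.shape
    assert in_ch % groups == 0 and out_ch % groups == 0 and gin == in_ch // groups
    out = np.zeros((out_ch, length - k + 1))
    for o in range(out_ch):
        g = o // (out_ch // groups)        # which group this output belongs to
        xs = x[g * gin:(g + 1) * gin]      # its slice of input channels
        for t in range(length - k + 1):
            out[o, t] = np.sum(xs[:, t:t + k] * kernels[o])
    return out

x = np.arange(8, dtype=float).reshape(4, 2)   # 4 channels, length 2
k = np.ones((4, 2, 1))                        # 4 outputs, 2 in-ch per group
y = grouped_conv1d(x, k, groups=2)            # y.shape == (4, 2)
```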


2020 ◽  
Vol 6 (9) ◽  
pp. 83 ◽  
Author(s):  
Ufuk Cem Birbiri ◽  
Azam Hamidinekoo ◽  
Amélie Grall ◽  
Paul Malcolm ◽  
Reyer Zwiggelaar

The manual delineation of a region of interest (RoI) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue helps define a precise RoI to be used in CAD systems in clinical practice during diagnostic imaging, radiotherapy, and disease monitoring. We studied conditional GAN (cGAN), CycleGAN, and U-Net models and their performance for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans. These models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the training samples. The models were tested on a clinical dataset annotated for this study and on a public dataset (PROMISE12). The cGAN model outperformed the U-Net and CycleGAN predictions owing to its inclusion of paired-image supervision, gaining Dice scores of 0.78 and 0.75 on the private and PROMISE12 public datasets, respectively.
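The Dice score used to compare the models above is the standard overlap measure between a predicted segmentation mask and the ground truth:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    1.0 means perfect overlap, 0.0 no overlap; eps guards empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # → 0.667
```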


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Wang Wei ◽  
Tang Can ◽  
Wang Xin ◽  
Luo Yanhong ◽  
Hu Yongle ◽  
...  

An image object recognition approach based on deep features and adaptive weighted joint sparse representation (D-AJSR) is proposed in this paper. D-AJSR is a data-lightweight classification framework that can classify and recognize objects well with few training samples. In D-AJSR, a convolutional neural network (CNN) is used to extract the deep features of the training and test samples. Then, we use adaptive weighted joint sparse representation to identify the objects, in which the eigenvectors are reconstructed by calculating the contribution weight of each eigenvector. To address the high dimensionality of the deep features, we use principal component analysis (PCA) to reduce the dimensions. Lastly, combined with the joint sparse model, the public features and private features of images are extracted from the training sample feature set so as to construct the joint feature dictionary. Based on the joint feature dictionary, a sparse representation-based classifier (SRC) is used to recognize the objects. Experiments on face images and remote sensing images show that D-AJSR is superior to the traditional SRC method and some other advanced methods.
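The SRC decision rule referenced above assigns a test sample to the class whose training columns reconstruct it with the smallest residual. A minimal sketch, with least squares standing in for the l1-sparse coding of full SRC (and without the adaptive weighting or joint dictionary of D-AJSR):

```python
import numpy as np

def src_classify(D, labels, y):
    """Represent test sample y using each class's dictionary columns and
    pick the class with the smallest reconstruction residual."""
    residuals = {}
    for c in set(labels):
        Dc = D[:, [i for i, l in enumerate(labels) if l == c]]
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        residuals[c] = np.linalg.norm(y - Dc @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(2)
base0, base1 = rng.normal(size=6), rng.normal(size=6)
# Dictionary: three noisy copies of each class prototype as columns.
D = np.column_stack([base0 + 0.05 * rng.normal(size=6) for _ in range(3)] +
                    [base1 + 0.05 * rng.normal(size=6) for _ in range(3)])
labels = [0, 0, 0, 1, 1, 1]
print(src_classify(D, labels, base0), src_classify(D, labels, base1))  # → 0 1
```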


Author(s):  
Eric P. Jiang

Automatic text classification is a process that applies information retrieval technology and machine learning algorithms to build models from pre-labeled training samples and then deploys the models to previously unseen documents for classification. Text classification has been widely applied in many fields, ranging from Web page indexing, document filtering, and information security to business intelligence mining. This chapter presents a semi-supervised text classification framework based on radial basis function (RBF) neural networks. The framework integrates an Expectation Maximization (EM) process into an RBF network and can learn effectively for classification from a very small quantity of labeled training samples and a large pool of additional unlabeled documents. The effectiveness of the framework is demonstrated and confirmed by experiments on two popular text classification corpora.
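The EM idea at the core of such frameworks can be shown in miniature: the E-step soft-assigns unlabeled points to class centers, the M-step re-estimates the centers from labeled data plus those soft assignments. This is a simplified analogue using plain centroids rather than the chapter's RBF network:

```python
import numpy as np

def em_semi_supervised(X_lab, y_lab, X_unl, n_iter=10):
    """EM-style semi-supervised sketch: unlabeled data refines class
    centers initialized from a small labeled set."""
    classes = sorted(set(y_lab))
    centroids = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # E-step: responsibilities from negative squared distances.
        d2 = ((X_unl[:, None, :] - centroids[None]) ** 2).sum(axis=2)
        resp = np.exp(-d2)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: centers from labeled points plus weighted unlabeled points.
        for i, c in enumerate(classes):
            num = X_lab[y_lab == c].sum(axis=0) + (resp[:, i:i + 1] * X_unl).sum(axis=0)
            den = (y_lab == c).sum() + resp[:, i].sum()
            centroids[i] = num / den
    return centroids

rng = np.random.default_rng(3)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])       # one labeled point per class
y_lab = np.array([0, 1])
X_unl = np.vstack([rng.normal(0, 0.3, size=(20, 2)),   # unlabeled pool
                   rng.normal(4, 0.3, size=(20, 2))])
C = em_semi_supervised(X_lab, y_lab, X_unl)
```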


SINERGI ◽  
2021 ◽  
Vol 25 (2) ◽  
pp. 141
Author(s):  
Zendi Iklima ◽  
Andi Adriansyah ◽  
Sabin Hitimana

Collision avoidance for an arm robot is designed to keep the robot from colliding with objects, with its environment, and with its own body. Self-collision avoidance was successfully trained using Generative Adversarial Networks (GANs) and Particle Swarm Optimization (PSO). An Inverse Kinematics (IK) dataset of 96K motion records was extracted to train on data distributions of 3.6K and 7.2K samples. The proposed GANs-PSO method can solve common GAN problems such as mode collapse (the Helvetica scenario), which occurs when the generator always produces the same output point for different input values. The discriminator models the data distribution of random samples, which represents the real data distribution (generated by inverse kinematic analysis). PSO successfully reduced the generator's training to only 5000 epochs. With 50 particles and 5000 training epochs, the proposed GANs-PSO method executed in 0.028 ms per single prediction with a Generator Mean Square Error (GMSE) of 0.027474%.
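The PSO component is a standard population-based optimizer. The sketch below shows the canonical global-best update rule on a test function; how the abstract's authors couple it to the generator's training is not reproduced, and the coefficient values here are common defaults, not the paper's.

```python
import numpy as np

def pso(f, dim, n_particles=50, iters=200, seed=0):
    """Minimal particle swarm optimizer (global-best form)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                     # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()      # update global best
    return gbest, pbest_val.min()

# Minimize a shifted sphere function; the swarm converges near (1, 1, 1).
best, val = pso(lambda p: ((p - 1.0) ** 2).sum(), dim=3)
```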


Author(s):  
Zhanpeng Wang ◽  
Jiaping Wang ◽  
Michael Kourakos ◽  
Nhung Hoang ◽  
Hyong Hark Lee ◽  
...  

Population genetics relies heavily on simulated data for validation, inference, and intuition. In particular, since real data is always limited, simulated data is crucial for training machine learning methods. Simulation software can accurately model evolutionary processes, but requires many hand-selected input parameters. As a result, simulated data often fails to mirror the properties of real genetic data, which limits the scope of methods that rely on it. In this work, we develop a novel approach to estimating parameters in population genetic models that automatically adapts to data from any population. Our method is based on a generative adversarial network that gradually learns to generate realistic synthetic data. We demonstrate that our method is able to recover input parameters in a simulated isolation-with-migration model. We then apply our method to human data from the 1000 Genomes Project, and show that we can accurately recapitulate the features of real data.
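The simulator-in-the-loop inference idea can be illustrated in miniature: search for the simulator parameter that makes simulated data indistinguishable from real data. In this sketch a simple moment distance stands in for the paper's discriminator network, and the one-parameter Gaussian "simulator" is a hypothetical stand-in for a population-genetic simulator:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulator(theta, n=2000):
    """Stand-in simulator: one parameter controlling the spread of a
    summary statistic (a real pipeline would run msprime or similar)."""
    return rng.normal(0.0, theta, size=n)

real = rng.normal(0.0, 1.7, size=2000)   # "real data" with unknown spread

def mismatch(theta):
    """How separable simulated data is from real data, measured here by
    a moment distance; the paper instead trains a discriminator and
    uses its accuracy as the adversarial signal."""
    sim = simulator(theta)
    return abs(sim.std() - real.std()) + abs(sim.mean() - real.mean())

# Grid search over the parameter; GAN training replaces this in the paper.
grid = np.linspace(0.5, 3.0, 26)
theta_hat = grid[np.argmin([mismatch(t) for t in grid])]
```

The recovered `theta_hat` lands near the true spread of 1.7, which is the sense in which the adversarial signal "recovers input parameters".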


2020 ◽  
Vol 12 (13) ◽  
pp. 2098
Author(s):  
Yue Wu ◽  
Zhuangfei Bai ◽  
Qiguang Miao ◽  
Wenping Ma ◽  
Yuelei Yang ◽  
...  

Adversarial training has demonstrated advanced capabilities for generative image models. In this paper, we propose a deep neural network, named a classified adversarial network (CAN), for multi-spectral image change detection. This network is based on generative adversarial networks (GANs). The generator captures the distribution of the bitemporal multi-spectral image data and transforms it into change detection results, and these results (as the fake data) are fed to the discriminator for training; the results obtained by pre-classification serve as the real data. Adversarial training thus facilitates the generator's learning of the transformation from a bitemporal image pair to a change map. Once the generator is well trained, the bitemporal multi-spectral images are input into the generator to obtain the final change detection results. The proposed method is completely unsupervised; we only need to input the preprocessed data obtained from pre-classification and training-sample selection. Through adversarial training, the generator better learns the relationship between the bitemporal multi-spectral image data and the corresponding labels, and the well-trained generator can be applied to the raw bitemporal multi-spectral images to obtain the final change map (CM). The effectiveness and robustness of the proposed method were verified by experimental results on real high-resolution multi-spectral image data sets.
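The pre-classification step that supplies the "real data" above can be sketched as simple change-vector analysis: compute the per-pixel change magnitude between the two dates and threshold it into pseudo-labels. The GAN itself is omitted, and the mean-plus-k-standard-deviations threshold rule is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

def pre_classify(img_t1, img_t2, k=1.0):
    """Change-vector-analysis pre-classification: per-pixel change
    magnitude between bitemporal images, thresholded into pseudo-labels
    (1 = changed, 0 = unchanged)."""
    mag = np.linalg.norm(img_t1 - img_t2, axis=-1)   # per-pixel change magnitude
    thr = mag.mean() + k * mag.std()                 # assumed threshold rule
    return (mag > thr).astype(np.uint8)

# Two 4x4 3-band images differing at exactly one pixel.
t1 = np.zeros((4, 4, 3))
t2 = np.zeros((4, 4, 3))
t2[1, 1] = 5.0                                       # one strongly changed pixel
cm = pre_classify(t1, t2)                            # pseudo change map
```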

