BREAST CANCER SEGMENTATION OF MAMMOGRAPHIC IMAGES USING GENERATIVE ADVERSARIAL NETWORK

2021
Vol 57 (2)
pp. 247-255
Author(s):
Swathi N
T. Christy Bobby

Segmentation of breast tumors plays an important role in identifying a tumor's location and shape, and hence the stage of the breast cancer. This paper deals with segmenting tumors from whole mammographic mass images using a Generative Adversarial Network (GAN). A small dataset of mammograms and their corresponding ground-truth images was used. Pre-processing steps such as image format conversion, enhancement, pectoral muscle removal, and resizing were performed on the raw mammograms. A GAN consists of two neural networks, a generator and a discriminator, that compete against each other to produce the segmentation output. PIX2PIX, the conditional GAN variant used here, has a U-Net as the generator and a simple deep neural network as the discriminator. The input to the network was a pair consisting of a pre-processed mass image and its associated ground truth, and the output was a binary image with the tumor highlighted. GAN performance was evaluated by plotting the generator and discriminator losses, and the segmented output was compared with the corresponding ground truth using metrics such as the Jaccard index, Jaccard distance, and Dice coefficient. A Dice coefficient of 90% and a Jaccard index of 88.38% were achieved. In future work, a larger dataset could make the system more robust and yield higher accuracy.
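For reference, the Dice coefficient and Jaccard index used above are standard overlap metrics between a predicted binary mask and its ground truth; a minimal NumPy sketch (the function and array names are illustrative, not from the paper) is:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def jaccard_index(pred, truth):
    """Jaccard = |A ∩ B| / |A ∪ B|; Jaccard distance = 1 - index."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union

# Example: compare a segmented tumor mask against its ground truth.
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_coefficient(pred, truth), 1 - jaccard_index(pred, truth))
```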

Author(s):  
Jinning Li
Yexiang Xue

We propose the Dual Scribble-to-Painting Network (DSP-Net), which produces artistic paintings from user-generated scribbles. In scribble-to-painting transformation, a neural net has to infer additional details of the image from the relatively sparse information contained in the outlines of the scribble. It is therefore more challenging than classical image style transfer, in which the information content is reduced in going from photos to paintings. Inspired by the human cognitive process, we propose a multi-task generative adversarial network consisting of two jointly trained neural nets, one for generating artistic images and the other for semantic segmentation. We demonstrate that joint training on these two tasks brings additional benefit. Experimental results show that DSP-Net outperforms state-of-the-art models both visually and quantitatively. In addition, we publish a large dataset for scribble-to-painting transformation.
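The joint training described above could be expressed as a generator objective that sums an adversarial painting term and a segmentation term; the following PyTorch sketch assumes a simple weighted sum with per-pixel cross-entropy for the segmentation head, which is our illustration rather than the published DSP-Net loss:

```python
import torch
import torch.nn.functional as F

def joint_generator_loss(fake_logits, seg_logits, seg_target, lambda_seg=1.0):
    """Generator objective = fool the discriminator + match segmentation labels."""
    # Non-saturating adversarial loss: push D's logits on fakes toward "real".
    adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    # Per-pixel cross-entropy for the auxiliary segmentation head:
    # seg_logits is (N, C, H, W), seg_target is (N, H, W) class indices.
    seg = F.cross_entropy(seg_logits, seg_target)
    return adv + lambda_seg * seg
```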


2021
Author(s):  
Tham Vo

Abstract. In the abstractive summarization task, most proposed models adopt a deep recurrent neural network (RNN)-based encoder-decoder architecture to learn and generate a meaningful summary for a given input document. However, recent RNN-based models often struggle with capturing high-frequency/repetitive phrases in long documents during training, which leads to trivial and generic generated summaries. Moreover, the lack of thorough analysis of the sequential and long-range dependency relationships between words in different contexts while learning the textual representation also makes the generated summaries unnatural and incoherent. To deal with these challenges, in this paper we propose a novel semantic-enhanced generative adversarial network (GAN)-based approach for the abstractive text summarization task, called SGAN4AbSum. We apply an adversarial training strategy in which the generator and discriminator are trained simultaneously to handle summary generation and to distinguish the generated summary from the ground-truth one. The input to the generator is a joint rich-semantic and global-structural latent representation of the training documents, obtained by combining BERT with a graph convolutional network (GCN) textual embedding mechanism. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed SGAN4AbSum, which achieves competitive ROUGE scores in comparison with state-of-the-art abstractive text summarization baselines.
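One plausible way to realize the combined BERT-and-GCN embedding described above is to run a graph convolution over a token graph built on top of contextual embeddings and concatenate the two representations; the sketch below (the layer shapes, identity adjacency stand-in, and fusion by concatenation are all assumptions, not the paper's design) illustrates the idea in PyTorch:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(Â H W), Â a normalized adjacency."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, adj, h):
        return torch.relu(self.linear(adj @ h))

# Fuse contextual (BERT-style) token embeddings with graph-structural
# features by concatenation before feeding the summarization generator.
tokens, dim = 128, 768
bert_h = torch.randn(tokens, dim)   # stand-in for BERT output
adj = torch.eye(tokens)             # stand-in for a normalized token graph
fused = torch.cat([bert_h, GCNLayer(dim, dim)(adj, bert_h)], dim=-1)
```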


Sensors
2019
Vol 19 (13)
pp. 2919
Author(s):
Wangyong He
Zhongzhao Xie
Yongbo Li
Xinmei Wang
Wendi Cai

Hand pose estimation is a critical technology in computer vision and human-computer interaction, and deep-learning methods for it require a considerable amount of labeled training data. This paper aims to generate depth hand images: given a ground-truth 3D hand pose, the developed method synthesizes the corresponding depth image. Specifically, the ground truth is a 3D hand pose that encodes the hand structure, while the synthesized image has the same size as the training images and a visual appearance similar to the training set. The method, inspired by progress in generative adversarial networks (GANs) and image style transfer, models the latent statistical relationship between the ground-truth hand pose and the corresponding depth hand image. Images synthesized using the developed method are demonstrated to be effective for enhancing estimation performance. Comprehensive experiments on public hand pose datasets (NYU, MSRA, ICVL) show that the developed method outperforms existing works.
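A minimal conditional generator of the kind described, mapping a flattened 3D hand pose to a depth image, might look as follows in PyTorch; the joint count, layer sizes, and output range are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class PoseToDepthGenerator(nn.Module):
    """Map a flattened 3D hand pose (joints x 3) to a single-channel depth image."""
    def __init__(self, joints=21, img=64):
        super().__init__()
        self.img = img
        self.fc = nn.Linear(joints * 3, 128 * (img // 8) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())  # depth in [-1, 1]

    def forward(self, pose):
        h = self.fc(pose).view(-1, 128, self.img // 8, self.img // 8)
        return self.deconv(h)

# 21 joints x 3 coordinates = 63 inputs; output is (4, 1, 64, 64).
depth = PoseToDepthGenerator()(torch.randn(4, 63))
```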


2019
Vol 1 (2)
pp. 99-120
Author(s):
Tongtao Zhang
Heng Ji
Avirup Sil

We propose a new framework for entity and event extraction based on generative adversarial imitation learning, an inverse reinforcement learning method using a generative adversarial network (GAN). We assume that instances and labels vary in difficulty, and thus that the gains and penalties (rewards) should be diverse. We utilize discriminators to estimate proper rewards according to the difference between the labels committed by the ground truth (expert) and the extractor (agent). Our experiments demonstrate that the proposed framework outperforms state-of-the-art methods.
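In GAIL-style training, the discriminator's output is typically converted into a surrogate reward for the extractor (agent); a hedged PyTorch sketch of one standard form (the paper's exact reward shaping may differ) is:

```python
import torch

def gail_reward(d_logits):
    """Surrogate reward from discriminator logits: r = -log(1 - D(s, a)).

    High when the agent's labeling looks like the expert's, low otherwise.
    This is the common GAIL formulation, shown here as an illustration."""
    d = torch.sigmoid(d_logits)
    return -torch.log(1.0 - d + 1e-8)  # epsilon avoids log(0)
```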


Author(s):  
Y. Xun
W. Q. Yu

Abstract. As one of the important sources of meteorological information, the satellite nephogram plays an increasingly important role in the detection and forecasting of disastrous weather. Timely predictions of cloud movement and transformation can enhance the practicability of satellite nephograms. Based on the generative adversarial network from unsupervised learning, we propose a prediction model for time-series nephograms that accurately constructs an internal representation of cloud evolution and predicts nephograms for the next several hours. We improve the traditional generative adversarial network by building the generator and discriminator from multi-scale convolutional networks: after a scale-transform process, convolutions at different scales operate in parallel and their features are then merged. This structure addresses the long-term dependence problem of the traditional network while considering both global and detailed features. Then, according to the network structure and the practical application, we define a new loss function combined with the adversarial loss to accelerate model convergence and sharpen predictions, further preserving the effectiveness of the predictions. Our method requires neither stacked mathematical calculations nor manual operations, greatly enhancing feasibility and efficiency. The results show that this model reasonably describes the basic characteristics and evolution trends of cloud clusters, and the predicted nephograms are highly similar to the ground truth.
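The combined loss described above might pair a pixel reconstruction term and a sharpening term with the adversarial term; the sketch below uses L1 reconstruction plus a gradient-difference term, which is our illustrative choice rather than the paper's published definition:

```python
import torch
import torch.nn.functional as F

def prediction_loss(pred, target, fake_logits, l_adv=0.05, l_gdl=1.0):
    """Combined objective for frame prediction on (N, C, H, W) tensors.

    Pixel reconstruction + gradient-difference term (encourages sharp cloud
    edges) + adversarial term; the weights here are illustrative."""
    rec = F.l1_loss(pred, target)
    # Gradient-difference loss: match horizontal and vertical image gradients.
    gdl = (F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                     target[..., :, 1:] - target[..., :, :-1]) +
           F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                     target[..., 1:, :] - target[..., :-1, :]))
    adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    return rec + l_gdl * gdl + l_adv * adv
```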


2021
Author(s):
Jiali Wang
Zhengchun Liu
Ian Foster
Won Chang
Rajkumar Kettimuthu
...  

Abstract. This study develops a neural-network-based approach for emulating high-resolution modeled precipitation data with comparable statistical properties but at greatly reduced computational cost. The key idea is to use a combination of low- and high-resolution simulations to train a neural network to map from the former to the latter. Specifically, we define two types of CNNs, one that stacks variables directly and one that encodes each variable before stacking, and we train each CNN type both with a conventional loss function, such as mean square error (MSE), and with a conditional generative adversarial network (CGAN), for a total of four CNN variants. We compare the four new CNN-derived high-resolution precipitation results with precipitation generated from the original high-resolution simulations, a bilinear interpolator, and the state-of-the-art CNN-based super-resolution (SR) technique. Results show that the SR technique produces results similar to those of the bilinear interpolator, with smoother spatial and temporal distributions and smaller data variabilities and extremes than the high-resolution simulations. While the new CNNs trained with MSE generate better results over some regions than the interpolator and the SR technique do, their predictions are still not as close to the ground truth. The CNNs trained with CGAN generate more realistic and physically reasonable results, better capturing not only data variability in time and space but also extremes such as intense and long-lasting storms. Once trained (training takes 4 hours using one GPU), the proposed CNN-based downscaling approach can downscale precipitation from 50 km to 12 km resolution for 30 years of data in 14 minutes, whereas conventional dynamical downscaling would take about one month using 600 CPU cores to generate simulations at 12 km resolution over the contiguous United States.
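A minimal version of the first CNN type, which stacks coarse variables directly as input channels and maps them to fine-resolution precipitation, could be sketched as follows in PyTorch; the layer sizes, variable count, and 4x scale factor are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    """Map coarse stacked climate variables to finer-resolution precipitation.

    Variables enter as channels (the "stack directly" CNN type); the layer
    sizes and upsampling factor here are illustrative, not the paper's."""
    def __init__(self, n_vars=4, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_vars, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear"),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, coarse):
        return self.net(coarse)

# MSE-trained variant; the CGAN variants add a discriminator loss on top.
model = Downscaler()
loss = nn.MSELoss()(model(torch.randn(2, 4, 32, 32)),
                    torch.randn(2, 128, 128).unsqueeze(1))
```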


Symmetry
2018
Vol 10 (10)
pp. 467
Author(s):
Ke Chen
Dandan Zhu
Jianwei Lu
Ye Luo

Automatic reconstruction of neural circuits in the brain is one of the most crucial studies in neuroscience. Connectome segmentation plays an important role in reconstruction from electron microscopy (EM) images; however, it is rather challenging due to highly anisotropic shapes, inferior image quality, and varying thickness. In this paper, we propose a novel connectome segmentation framework called the adversarial and densely dilated network (ADDN) to address these issues. ADDN is based on the conditional generative adversarial network (cGAN) structure, a recent advance in machine learning able to generate images similar to the ground truth even when training data are limited. Specifically, we design a densely dilated network (DDN) as the segmentor, allowing a deeper architecture and larger receptive fields for more accurate segmentation, and the discriminator is trained to distinguish generated segmentations from manual ones. During training, this adversarial loss is optimized together with a Dice loss. Extensive experimental results demonstrate that ADDN is effective for the connectome segmentation task, helping to retrieve more accurate segmentations and to attenuate the blurring of the generated boundary maps. Our method obtains state-of-the-art performance while requiring less computation on the ISBI 2012 EM dataset and a mouse piriform cortex dataset.
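The joint objective of a Dice loss and an adversarial loss, as described for training the ADDN segmentor, can be sketched as below; the soft-Dice form and the loss weighting are our assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

def dice_loss(pred_probs, target, eps=1e-6):
    """Soft Dice loss on per-pixel foreground probabilities in [0, 1]."""
    inter = (pred_probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_probs.sum() + target.sum() + eps)

def segmentor_loss(pred_probs, target, d_logits_on_fake, lam=0.1):
    """Dice loss plus adversarial term, optimized jointly; lam is illustrative."""
    adv = F.binary_cross_entropy_with_logits(
        d_logits_on_fake, torch.ones_like(d_logits_on_fake))
    return dice_loss(pred_probs, target) + lam * adv
```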


Sensors
2019
Vol 19 (21)
pp. 4818
Author(s):
Hyun-Koo Kim
Kook-Yeol Yoo
Ju H. Park
Ho-Youl Jung

In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method is composed of two steps: projection of the LiDAR 3D reflection intensity into a 2D intensity image, and color image generation from the projected intensity using a fully convolutional network (FCN). Because the color image must be generated from a very sparse projected intensity image, the FCN is designed with an asymmetric network structure, i.e., the layer depth of the decoder is greater than that of the encoder. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation of the proposed FCN. The performance of the asymmetric network structure is empirically analyzed for various depth combinations of the encoder and decoder. Simulations show that the proposed method generates images of fairly good visual quality while maintaining almost the same colors as the ground-truth image. Moreover, the proposed FCN performs much better than conventional interpolation methods and the generative adversarial network-based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free, daylight color images; this is because LiDAR sensor data are produced by light reflection and are therefore not affected by sunlight and shadow.
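The asymmetric encoder-decoder idea, with a decoder deeper than the encoder to densify the sparse projected intensity, can be sketched as follows; the exact layer counts and sizes here are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class AsymmetricFCN(nn.Module):
    """Encoder-decoder FCN where the decoder is deeper than the encoder,
    as the abstract describes; layer counts below are illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                   # 2 downsampling stages
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                   # deeper: 4 stages
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1))         # RGB output

    def forward(self, sparse_intensity):
        return self.decoder(self.encoder(sparse_intensity))

# Sparse 2D intensity in, color image out: (1, 1, 64, 64) -> (1, 3, 64, 64).
rgb = AsymmetricFCN()(torch.randn(1, 1, 64, 64))
```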

