Automated Processing and Phenotype Extraction of Ovine Medical Images Using a Combined Generative Adversarial Network and Computer Vision Pipeline

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7268
Author(s):  
James Francis Robson ◽  
Scott John Denholm ◽  
Mike Coffey

The speed and accuracy of phenotype detection from medical images are among the most important qualities needed for any informed and timely response, such as the early detection of cancer or the detection of desirable phenotypes for animal breeding. To improve both qualities, researchers are increasingly leveraging artificial intelligence and machine learning. Most recently, deep learning has been applied successfully in the medical field to improve detection accuracy and speed for conditions including cancer and COVID-19. In this study, we applied deep neural networks, in the form of a generative adversarial network (GAN), to perform the image-to-image processing steps needed for ovine phenotype analysis from CT scans of sheep. Key phenotypes such as gigot geometry and tissue distribution were then determined using a computer vision (CV) pipeline. Images processed by the trained GAN closely match their manually processed counterparts (a similarity index of 98%) on unseen test images. The combined GAN-CV pipeline processed and determined the phenotypes at a speed of 0.11 s per medical image, compared to approximately 30 min for manual processing. We hope this pipeline represents the first step towards automated phenotype extraction for ovine genetic breeding programmes.
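The 98% similarity figure can be reproduced by a structural-similarity comparison between the GAN output and the manually processed reference. A minimal sketch, assuming a simplified, non-windowed SSIM-style index; the abstract does not specify the exact metric used:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified global structural similarity index.

    A windowed SSIM (as in scikit-image) is the standard form; this
    global variant is only a sketch of how a similarity index between
    a GAN-processed image and its manual reference could be computed.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; values near 0.98 indicate the two images agree in luminance, contrast, and structure almost everywhere.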

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2998
Author(s):  
Aamir Khan ◽  
Weidong Jin ◽  
Amir Haider ◽  
MuhibUr Rahman ◽  
Desheng Wang

Image denoising is a challenging task that is essential in numerous computer vision and image processing problems. This study proposes and applies a generative adversarial network-based image denoising training architecture to multiple-level Gaussian image denoising tasks. Convolutional neural network-based denoising approaches suffer from a blurriness issue: the denoised images they produce are blurry in texture detail. To resolve this issue, we first performed a theoretical study of its cause. We then proposed an adversarial Gaussian denoiser network, which uses the generative adversarial network's adversarial learning process for image denoising tasks. This framework resolves the blurriness problem by encouraging the denoiser network to find the distribution of sharp, noise-free images instead of blurry ones. Experimental results demonstrate that the proposed framework effectively resolves the blurriness problem and achieves significantly better denoising performance than state-of-the-art denoising methods.
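The blurriness the authors attribute to CNN denoisers has a simple theoretical illustration: when several sharp images explain the same noisy input equally well, the MSE-optimal output is their average, which is sharp like none of them. A toy numeric sketch of that effect (my own illustration, not the paper's derivation):

```python
import numpy as np

# Two equally likely sharp explanations of the same noisy input.
sharp_a = np.array([0.0, 1.0, 0.0, 1.0])  # high-contrast texture, phase A
sharp_b = np.array([1.0, 0.0, 1.0, 0.0])  # high-contrast texture, phase B

# The MSE-optimal denoiser output is the conditional mean of the
# plausible sharp images...
mse_optimal = 0.5 * (sharp_a + sharp_b)

# ...which is flat (blurry) and matches neither plausible sharp image.
print(mse_optimal)  # [0.5 0.5 0.5 0.5]
```

An adversarial loss instead penalizes outputs that a discriminator can tell apart from real sharp images, pushing the denoiser toward one of the sharp modes rather than their blurry average.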


Author(s):  
Y.A. Hamad ◽  
K.V. Simonov ◽  
A.S. Kents

The paper considers general approaches to image processing, analysis of visual data and computer vision. The main methods for detecting features and edges associated with these approaches are presented. A brief description of modern edge detection and classification algorithms suitable for isolating and characterizing the type of pathology in the lungs in medical images is also given.
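As a concrete instance of the classic edge detectors such a survey covers, here is a minimal Sobel gradient-magnitude sketch in plain NumPy (no smoothing or non-maximum suppression, which a Canny-style detector would add):

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude over a 2D grayscale image.

    A minimal, unoptimized sketch of one classic edge detector of the
    kind surveyed for isolating pathology boundaries in lung images.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)
```

On a flat region the response is zero; across an intensity step it peaks, which is what lets thresholding the magnitude isolate region boundaries.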


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4867
Author(s):  
Lu Chen ◽  
Hongjun Wang ◽  
Xianghao Meng

With the development of science and technology, neural networks have become an effective tool in image processing and play an increasingly important role in remote-sensing image processing. However, training neural networks requires a large sample database, so expanding datasets with limited samples has gradually become a research hotspot. The emergence of the generative adversarial network (GAN) provides new ideas for data expansion. Traditional GANs either require a large amount of input data or lack detail in the pictures they generate. In this paper, we modify a shuffle attention network and introduce it into a GAN to generate higher-quality pictures from limited inputs. In addition, we improved the existing resize method and proposed an equal stretch resize method to solve the problem of image distortion caused by different input sizes. In the experiment, we also embedded the newly proposed coordinate attention (CA) module into the backbone network as a control test. Qualitative indexes and six quantitative evaluation indexes were used to evaluate the experimental results, which show that, compared with other GANs used for picture generation, the modified Shuffle Attention GAN proposed in this paper can generate more refined, high-quality, and diversified aircraft pictures with more detailed object features under limited datasets.
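The abstract does not detail the proposed equal stretch resize. One standard way to avoid distortion from mismatched input sizes is to pad each image to a square first, so that the subsequent resize scales both axes by the same factor. A sketch of that idea (an assumption about the general technique, not the authors' exact method):

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Pad a 2D (or H x W x C) image to a square canvas, centered.

    After this step a resize to the network's input size stretches
    both axes equally, so object aspect ratios are preserved. This is
    a common distortion fix, offered here only as a plausible analogue
    of the paper's 'equal stretch resize'.
    """
    h, w = img.shape[:2]
    s = max(h, w)
    out = np.full((s, s) + img.shape[2:], fill, dtype=img.dtype)
    top, left = (s - h) // 2, (s - w) // 2
    out[top:top + h, left:left + w] = img
    return out
```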


2021 ◽  
pp. 24-34
Author(s):  
Sungmin Hong ◽  
Razvan Marinescu ◽  
Adrian V. Dalca ◽  
Anna K. Bonkhoff ◽  
Martin Bretzner ◽  
...  

2021 ◽  
Vol 8 (02) ◽  
Author(s):  
Engin Dikici ◽  
Matthew Bigelow ◽  
Richard D. White ◽  
Barbaros S. Erdal ◽  
Luciano M. Prevedello

2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single-image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance visual quality, we propose a deep learning method for single-image super-resolution (SR). Our method directly learns an end-to-end mapping between low- and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improved its ability to represent and extract image features while greatly reducing the total number of parameters. For the discriminator network, the batch normalization (BN) layer was discarded, reducing the artifact problem. A frequency energy similarity loss function was designed to constrain the generator network to generate better super-resolution images. Experiments on several different datasets showed that the peak signal-to-noise ratio (PSNR) was improved by more than 3 dB, the structural similarity index (SSIM) was increased by 16%, and the total number of parameters was reduced to 42.8% of the original model's. Combining objective indicators with subjective visual evaluation, the algorithm was shown to generate richer image details and clearer texture at lower complexity.
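The parameter savings behind a DSC-based generator follow from straightforward counting: a depthwise separable convolution replaces one k×k convolution with a k×k depthwise pass plus a 1×1 pointwise pass. A quick sketch with illustrative layer sizes (not the paper's exact architecture):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise separable: k x k depthwise + 1 x 1 pointwise."""
    return c_in * k * k + c_in * c_out

# Illustrative layer sizes, chosen only to show the scaling:
std = conv_params(64, 64, 3)  # 36,864 weights
dsc = dsc_params(64, 64, 3)   #  4,672 weights
print(dsc / std)              # ~0.127, roughly an 8x reduction
```

The exact overall reduction (to 42.8% of the original model here) depends on how many layers are replaced and on their channel counts, since the 1×1 pointwise term dominates for large `c_out`.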


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
M Dedicatoria ◽  
S Klaus ◽  
R Case ◽  
S Na ◽  
E Ludwick ◽  
...  

Abstract Background Rapid identification of pathogens is critical to outbreak detection and sentinel surveillance; however, most diagnoses are made in laboratory settings. Advancements in artificial intelligence (AI) and computer vision offer unprecedented opportunities to facilitate detection and reduce response time in field settings. An initial step is the creation of analysis algorithms for offline mobile computing applications. Methods AI models that identify objects using computer vision are typically "trained" on previously labeled images. The scarcity of labeled image libraries creates a bottleneck, requiring thousands of labor hours to annotate images by hand to create "training data." We describe the applicability of generative adversarial network (GAN) methods to amass sufficient training data with minimal manual input. Results Our AI models achieve a performance score of 0.84-0.93 for M. tuberculosis, a measure of the model's accuracy using precision and recall. Our results demonstrate that our GAN pipeline boosts model robustness and learnability on sparse open-source data. Conclusions The labeled training data for identifying M. tuberculosis developed with our GAN pipeline demonstrate the potential for rapid identification of known pathogens in field settings. Our work paves the way for offline mobile computing applications that identify pathogens outside of a laboratory setting; further development of these capabilities can improve time-to-detection and outbreak response significantly. Key messages: rapidly deploy AI detectors to aid in disease outbreak detection and surveillance; our concept aligns with deploying responsive alerting capabilities against dynamic threats in low-resource, offline computing environments.
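The abstract describes its 0.84-0.93 performance score as a measure built from precision and recall; the F1 score is the usual single number combining the two (whether F1 is the exact score used is an assumption):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall.

    A common single 'performance score' for a detector; offered as a
    plausible reading of the abstract's precision/recall-based score,
    not a confirmed detail of the study.
    """
    return 2 * precision * recall / (precision + recall)

# The harmonic mean is pulled toward the weaker of the two rates, so a
# detector cannot score well by trading recall away for precision.
print(f1_score(0.9, 0.9))  # 0.9
```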


2021 ◽  
Author(s):  
Ziyu Li ◽  
Qiyuan Tian ◽  
Chanon Ngamsombat ◽  
Samuel Cartmell ◽  
John Conklin ◽  
...  

Purpose: To improve the signal-to-noise ratio (SNR) of highly accelerated volumetric MRI while preserving realistic textures using a generative adversarial network (GAN). Methods: A hybrid GAN for denoising, termed "HDnGAN", with a 3D generator and a 2D discriminator was proposed to denoise 3D T2-weighted fluid-attenuated inversion recovery (FLAIR) images acquired in 2.75 minutes (R=3×2) using wave-controlled aliasing in parallel imaging (Wave-CAIPI). HDnGAN was trained on data from 25 multiple sclerosis patients by minimizing a combined mean squared error and adversarial loss with adjustable weight λ. Results were evaluated on eight separate patients by comparison to standard T2-SPACE FLAIR images acquired in 7.25 minutes (R=2×2), using mean absolute error (MAE), peak SNR (PSNR), structural similarity index (SSIM), and VGG perceptual loss, and by two neuroradiologists using a five-point score for gray-white matter contrast, sharpness, SNR, lesion conspicuity, and overall quality. Results: HDnGAN (λ=0) produced the lowest MAE and the highest PSNR and SSIM. HDnGAN (λ=10⁻³) produced the lowest VGG loss. In the reader study, HDnGAN (λ=10⁻³) significantly improved the gray-white contrast and SNR of Wave-CAIPI images and outperformed BM4D and HDnGAN (λ=0) in image sharpness. The overall quality score from HDnGAN (λ=10⁻³) was significantly higher than those from Wave-CAIPI, BM4D, and HDnGAN (λ=0), with no significant difference from standard images. Conclusion: HDnGAN concurrently benefits from the improved image-synthesis performance of 3D convolution and from the increased number of training samples available to the 2D discriminator on limited data. HDnGAN generates images with high SNR and realistic textures, similar to those acquired over longer times and preferred by neuroradiologists.
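The adjustable weight λ in HDnGAN's training objective balances a pixelwise MSE term against an adversarial term. A minimal sketch of such a combined generator loss; the exact adversarial formulation is an assumption (a non-saturating −log D term is used here):

```python
import numpy as np

def combined_generator_loss(denoised, target, disc_fake_prob, lam):
    """Content (MSE) plus lambda-weighted adversarial generator loss.

    `disc_fake_prob` is the discriminator's probability that the
    denoised image is real. With lam = 0 this reduces to pure MSE
    (highest PSNR/SSIM in the abstract); lam = 1e-3 trades a little
    pixel fidelity for sharper, more realistic texture.
    """
    mse = float(np.mean((denoised - target) ** 2))
    adv = float(-np.log(disc_fake_prob + 1e-12))  # non-saturating term
    return mse + lam * adv
```

The qualitative trade-off in the abstract follows directly: λ=0 optimizes pixelwise error alone, while a small positive λ lets the discriminator penalize over-smoothed textures.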


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yi Gu ◽  
Qiankun Zheng

Background. Medical image generation converts existing medical images into one or more required medical images, reducing both the time needed for sample diagnosis and the radiation the human body receives from multiple scans; research on medical image generation therefore has important clinical significance. Many methods exist in this field. For example, in image generation based on fuzzy C-means (FCM) clustering, the soft clustering at the heart of FCM leaves the tissue attribution of some regions uncertain, so image details are unclear and the resulting image quality is low. With the development of the generative adversarial network (GAN) model, many improved methods based on deep GAN models have emerged. Pix2Pix is a GAN model based on UNet; its core idea is to fit a deep neural network on paired examples of the two image types, thereby generating high-quality images. Its disadvantage is that its data requirements are very strict: the two types of medical images must be paired one to one. The DualGAN model is a network model based on transfer learning; it cuts a 3D image into multiple 2D slices, translates each slice, and merges the generated results. Its disadvantage is that each generated image introduces bar-shaped "shadows" into the three-dimensional volume. Method/Material. To solve the above problems and ensure the quality of image generation, this paper proposes a Dual3D&PatchGAN model based on transfer learning. Because Dual3D&PatchGAN builds on transfer learning, it needs no one-to-one paired data sets, only two sets of medical images, which has important practical significance for applications. The model eliminates the bar-shaped "shadows" in DualGAN's generated images and can also perform two-way conversion between the two image types. Results. Multiple evaluation indicators in the experimental results show that Dual3D&PatchGAN is better suited to medical image generation than the other models, and its generation effect is better.
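The per-slice processing attributed to DualGAN above can be sketched directly: a 2D model is applied to each slice of the volume and the outputs are restacked, which is exactly where slice-to-slice inconsistencies can surface as bar-shaped shadows along the slice axis (`translate_slice` is a hypothetical stand-in for the 2D generator):

```python
import numpy as np

def translate_volume_slicewise(vol, translate_slice):
    """Apply a 2D image-to-image model slice by slice along axis 0.

    Because each slice is translated independently, nothing enforces
    consistency between neighbouring slices; any per-slice variation
    shows up as striping ('shadows') when the volume is viewed along
    the other axes -- the artifact a 3D generator avoids.
    """
    return np.stack([translate_slice(s) for s in vol], axis=0)
```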

