Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery
2020, Vol 12 (22), pp. 3715
Author(s): Minsoo Park, Dai Quoc Tran, Daekyo Jung, Seunghee Park

To minimize the damage caused by wildfires, deep learning-based wildfire-detection technology that extracts features and patterns from surveillance camera images has been developed. However, many studies on deep learning-based wildfire-image classification have highlighted the imbalance between wildfire-image data and forest-image data, which degrades model performance. In this study, wildfire images were generated using a cycle-consistent generative adversarial network (CycleGAN) to eliminate this data imbalance. In addition, a densely-connected-convolutional-networks-based (DenseNet-based) framework was proposed, and its performance was compared with that of pre-trained models. When trained on a set augmented with the GAN-generated images, the proposed DenseNet-based model achieved the best performance among the compared models, with an accuracy of 98.27% and an F1 score of 98.16 on the test dataset. Finally, the trained model was applied to high-quality drone images of wildfires, and the experimental results showed that the proposed framework achieves high wildfire-detection accuracy.
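As a concrete illustration of this augment-then-classify recipe, the sketch below fine-tunes a torchvision DenseNet-121 on a training folder that mixes real images with CycleGAN-generated wildfire images. The directory layout, hyperparameters, and binary wildfire/forest labels are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: fine-tune DenseNet-121 for wildfire vs. forest
# classification on a training folder that mixes real and CycleGAN-generated
# wildfire images. Paths and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# data/train/wildfire holds real + generated wildfire images; data/train/forest holds real ones
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)  # wildfire vs. forest

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for _ in range(10):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```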

Information, 2021, Vol 12 (6), pp. 249
Author(s): Xin Jin, Yuanwen Zou, Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, image processing methods have been developed to determine the cell cycle stage of individual cells, but most of them require cells to be segmented and their features to be extracted, and some important information may be lost during feature extraction, lowering classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of the original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation, together with a residual network (ResNet), one of the most widely used deep learning classification networks, for image classification. Our method classified cell cycle images more accurately, reaching 83.88%, an increase of 4.48 percentage points over the 79.40% accuracy of previous experiments. On another dataset used to verify the model, accuracy increased by 12.52 percentage points over previous results. The results show that our cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images, and our method could potentially resolve the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
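The distinguishing ingredient of WGAN-GP is the gradient-penalty term added to the critic loss. The sketch below shows one standard PyTorch formulation (after Gulrajani et al., 2017); the critic itself is left abstract, and the lambda = 10 coefficient is the commonly used default rather than a value reported here.

```python
# Standard WGAN-GP gradient-penalty term; generic, not taken from the paper.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Evaluate the critic on random interpolates between real and fake
    # samples and push its gradient norm toward 1.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(
        outputs=scores, inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

# Critic update: loss = fake_scores.mean() - real_scores.mean()
#                       + gradient_penalty(critic, real_batch, fake_batch)
```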


2021, Vol 11 (5), pp. 2166
Author(s): Van Bui, Tung Lam Pham, Huy Nguyen, Yeong Min Jang

In the last decade, predictive maintenance has attracted considerable attention in industrial factories, driven by the wide adoption of the Internet of Things and artificial intelligence algorithms for data management. However, in the early phases, abnormal and faulty machines rarely appear, so only limited sets of machine-fault samples are available. With limited fault samples, training a fault classifier is difficult because the input data are imbalanced, and data augmentation is required to increase the accuracy of the learning model; yet there have been few methods for generating and evaluating such augmented data. In this paper, we introduce a method that uses a generative adversarial network to augment fault signals and enrich the dataset, and the enhanced dataset increases the accuracy of the machine-fault detection model during training. We also performed fault detection with a variety of preprocessing approaches and classification models to evaluate the similarity between the generated and authentic data. The generated fault data are highly similar to the original data and significantly improve model accuracy: fault-detection accuracy reaches 99.41% when 20% of the fault samples are original data and 93.1% when no original fault samples are used (generated data only). Based on this, we conclude that generated data can be mixed with original data to improve model performance.
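The abstract does not spell out the network, but a minimal fault-signal GAN of the kind described could look like the following sketch. The fully connected layer sizes, the 1024-sample signal window, and the 500 synthetic windows are all assumptions for illustration, not the paper's reported architecture.

```python
# A minimal 1-D GAN sketch for fault-signal augmentation; all sizes assumed.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, signal_len=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, signal_len), nn.Tanh(),  # signals normalized to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, signal_len=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(signal_len, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

# After adversarial training, synthetic fault windows are mixed with the
# scarce real fault samples to rebalance the training set:
g = Generator()
fake_faults = g(torch.randn(500, 64)).detach()  # 500 synthetic fault windows
```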


2021, Vol 14
Author(s): Eric Nathan Carver, Zhenzhen Dai, Evan Liang, James Snyder, Ning Wen

Every year thousands of patients are diagnosed with a glioma, a type of malignant brain tumor. MRI plays an essential role in the diagnosis and treatment assessment of these patients, and neural networks show great potential to aid physicians in medical image analysis. This study investigated the creation of synthetic brain T1-weighted (T1), post-contrast T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR) MR images. These synthetic MR (synMR) images were assessed quantitatively with four metrics and qualitatively by an authoring physician, who noted that the synMR images realistically portrayed structural boundaries but struggled to accurately depict tumor heterogeneity. Additionally, this study investigated whether synMR images created by a generative adversarial network (GAN) could overcome the lack of annotated medical image data when training U-Nets to segment enhancing tumor, whole tumor, and tumor core regions in gliomas. Multiple two-dimensional (2D) U-Nets were trained with original BraTS data and differing subsets of the synMR images. The Dice similarity coefficient (DSC) was used as the loss function during training as well as a quantitative metric, and the 95th percentile Hausdorff distance (HD) was used to judge the quality of the contours produced by these U-Nets. Model performance improved in both DSC and HD when synMR was incorporated in the training set. In summary, this study showed the ability to generate high-quality FLAIR, T2, T1, and T1CE synMR images using a GAN. Using synMR images showed encouraging improvements in U-Net segmentation performance and shows potential to address the scarcity of annotated medical images.
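Since DSC served as both the training loss and an evaluation metric, a soft (differentiable) Dice loss is the natural formulation. Below is a common PyTorch version for binary masks, a sketch rather than the authors' exact implementation.

```python
# A common soft Dice loss for binary segmentation; illustrative sketch.
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred: sigmoid probabilities, target: binary mask; both (N, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)  # DSC per sample
    return 1 - dice.mean()                    # minimizing 1 - DSC maximizes overlap
```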


2021
Author(s): Tian Xiang Gao, Jia Yi Li, Yuji Watanabe, Chi Jung Hung, Akihiro Yamanaka, ...

Abstract Sleep-stage classification is essential for sleep research. Various automatic scoring programs, including deep learning algorithms using artificial intelligence (AI), have been developed, but they have limitations in data-format compatibility, human interpretability, cost, and technical requirements. We developed a novel program called GI-SleepNet, a generative adversarial network (GAN)-assisted image-based sleep-staging system for mice that is accurate, versatile, compact, and easy to use. In this program, electroencephalogram and electromyography data are first visualized as images and then classified into three stages (wake, NREM, and REM) by a supervised image-learning algorithm. To increase accuracy, we adopted a GAN and artificially generated fake REM-sleep data to equalize the number of stages. This improved accuracy, and data from as few as one mouse yielded significant accuracy. Because of its image-based nature, the program is easy to apply to data in different formats, from different animal species, and even outside sleep research. Image data are easily understood by humans, so experts can readily verify predictions when anomalies occur. Moreover, because deep learning on images is one of the leading fields in AI, numerous algorithms are available.
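The pipeline's key move is rendering each EEG/EMG epoch as an image before classification. The sketch below illustrates that idea with a log-spectrogram rendering and a toy three-class CNN; the 250 Hz sampling rate, 4 s epoch, and tiny network are placeholders, not GI-SleepNet's actual design.

```python
# Illustrative only: render one EEG epoch as a spectrogram image and score
# it with a toy wake/NREM/REM classifier. All parameters are assumed.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 250                              # assumed sampling rate (Hz)
epoch = np.random.randn(fs * 4)       # placeholder 4-second EEG epoch
_, _, sxx = spectrogram(epoch, fs=fs, nperseg=64)
img = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]  # (1, 1, F, T)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 3),         # logits for wake / NREM / REM
)
stage = classifier(img).argmax(dim=1)
```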


2020, Vol 34 (07), pp. 11490-11498
Author(s): Che-Tsung Lin, Yen-Yi Wu, Po-Hao Hsu, Shang-Hong Lai

Unpaired image-to-image translation has proven quite effective at boosting a CNN-based object detector in a different domain by means of data augmentation that preserves the objects in the translated images. Recently, multimodal GAN (generative adversarial network) models have been proposed and were expected to further boost detector accuracy by generating a diverse collection of images in the target domain given only a single, labelled image in the source domain. However, images generated by multimodal GANs can yield even worse detection accuracy than those from a unimodal GAN with better object preservation. In this work, we introduce cycle-structure consistency for generating diverse and structure-preserving translated images across complex domains, such as between day and night, for object-detector training. Qualitative results show that our model, Multimodal AugGAN, can generate diverse and realistic images for the target domain. For quantitative comparison, we evaluated competing methods and ours by using the generated images to train YOLO, Faster R-CNN, and FCN models, and show that our model achieves significant improvement and outperforms the other methods on detection accuracy and FCN scores. We also demonstrate that our model provides more diverse object appearances in the target domain through comparison on a perceptual distance metric.
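For reference, the plain cycle-consistency term that such models build on can be written in a few lines; the day/night generators here are placeholders, and the paper's cycle-structure consistency adds structure-preservation terms beyond this basic round trip.

```python
# Plain cycle-consistency loss in PyTorch; G: day -> night, F: night -> day.
# Both generators are placeholders; lam = 10 is the common CycleGAN default.
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency(G, F, day, night, lam=10.0):
    # A day image translated to night and back should reconstruct itself
    # (and vice versa); this round trip is what keeps objects intact for
    # detector training.
    return lam * (l1(F(G(day)), day) + l1(G(F(night)), night))
```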


2020
Author(s): Howard Martin, Suharjito

Abstract Face recognition is widely used for smartphone authentication, locating people, and other applications. Face recognition in constrained environments now achieves very good accuracy, but the accuracy of existing methods gradually decreases on datasets captured in unconstrained environments. Such face images are usually taken by surveillance cameras, which are generally placed in the corner of a room or on the street, so the image resolution is low. Low resolution makes faces very hard to recognize and accuracy eventually decreases, which is the main reason why improving accuracy on the Low-Resolution Face Recognition (LRFR) problem remains challenging. This research aimed to solve the LRFR problem using the YouTube Faces Database (YTF) and Labelled Faces in the Wild (LFW) datasets. Face-image resolution was first reduced with bicubic interpolation to produce the low-resolution image data; super-resolution methods were then applied as a preprocessing step to increase the image resolution. The super-resolution methods used in this research were Super-Resolution GAN (SRGAN) [1] and Enhanced Super-Resolution GAN (ESRGAN) [2], which were compared to determine which yields better accuracy on the LRFR problem. After the resolution was increased, the images were recognized using FaceNet. This research concluded that using super resolution as a preprocessing step for the LRFR problem achieves higher accuracy than [3]: the highest accuracy, obtained with ESRGAN preprocessing and FaceNet recognition, was 98.96% with a validation rate of 96.757%.
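The overall pipeline (bicubic downsampling, super-resolution, then FaceNet embedding) can be sketched as follows. The ESRGAN generator is stubbed with a bicubic upsampler, since loading real ESRGAN weights is outside the scope of this sketch; the 40x40 low-resolution size and the facenet-pytorch package are likewise assumptions.

```python
# Pipeline sketch: simulate low resolution, upscale, then embed with FaceNet.
import torch
from PIL import Image
from torchvision import transforms
from facenet_pytorch import InceptionResnetV1

face = Image.open("face.jpg").convert("RGB")
low_res = face.resize((40, 40), Image.BICUBIC)   # simulate surveillance-quality input

lr = transforms.ToTensor()(low_res).unsqueeze(0)
esrgan = torch.nn.Upsample(scale_factor=4, mode="bicubic")  # stand-in for a pretrained ESRGAN generator
sr = esrgan(lr)                                  # 40x40 -> 160x160, FaceNet's input size

facenet = InceptionResnetV1(pretrained="vggface2").eval()
with torch.no_grad():
    embedding = facenet(sr)                      # 512-D face embedding
# Recognition then compares embeddings by Euclidean/cosine distance.
```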


Sensors, 2021, Vol 21 (21), pp. 7294
Author(s): Hyunwoo Cho, Haesol Park, Ig-Jae Kim, Junghyun Cho

Customs inspection using X-ray imaging is a very promising application of modern pattern-recognition technology. However, a lack of data and the renewal of tariff items make applying such technology difficult. In this paper, we present a data augmentation technique based on a new image-to-image translation method to deal with these difficulties. Unlike conventional methods that convert a semantic label image into a realistic image, the proposed method takes a texture map with a special modification as an additional input to a generative adversarial network to reproduce domain-specific characteristics, such as background clutter or sensor-specific noise patterns. The proposed method was validated by applying it to backscatter X-ray (BSX) vehicle data augmentation. The Fréchet inception distance (FID) of the result indicates that the visual quality of the translated images improved significantly over the baseline when the texture parameters were used. Additionally, in terms of data augmentation, the experimental results for classification, segmentation, and detection show that using the translated image data along with the real data consistently improved the performance of the trained models. Our findings show that a detailed depiction of texture in the translated images is crucial for data augmentation. Considering the comparatively few studies that have examined customs inspection of container-scale goods, such as cars, we believe this study will facilitate research on the automation of container screening and on the security of aviation and ports.
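FID, used above to judge visual quality, can be computed with torchmetrics (which relies on the torch-fidelity backend); in the sketch below, random uint8 tensors stand in for the real and translated BSX images.

```python
# FID evaluation sketch; placeholder tensors replace the real/translated images.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
translated = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(translated, real=False)
print(float(fid.compute()))  # lower FID = translated images better match real statistics
```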


Author(s): Haohui Liu, Ying-Hwey Nai, Francis Saridin, Tomotaka Tanaka, Jim O'Doherty, ...

Abstract
Purpose: The standardized uptake value ratio (SUVr) used to quantify amyloid-β burden from amyloid-PET scans can be biased by variations in the tracer's nonspecific (NS) binding caused by the presence of cerebrovascular disease (CeVD). In this work, we propose a novel amyloid-PET quantification approach that harnesses the intermodal image-translation capability of convolutional networks to remove this undesirable source of variability.
Methods: Paired MR and PET images exhibiting very low specific uptake were selected from a Singaporean amyloid-PET study involving 172 participants with different severities of CeVD. Two convolutional neural networks (CNNs), ScaleNet and HighRes3DNet, and one conditional generative adversarial network (cGAN) were trained to map structural MR to NS PET images. NS estimates generated for all subjects using the most promising network were then subtracted from the SUVr images to determine the specific amyloid load only (SAβL). Associations of SAβL with various cognitive and functional test scores were then computed and compared with results using conventional SUVr.
Results: Multimodal ScaleNet outperformed the other networks in predicting the NS content in cortical gray matter, with a mean relative error below 2%. Compared with SUVr, SAβL showed up to 67% stronger association with cognitive and functional test scores.
Conclusion: Removing the undesirable NS uptake from the amyloid load measurement is possible using deep learning and substantially improves its accuracy. This novel analysis approach opens a new window of opportunity for improved data modeling in Alzheimer's disease and in other neurodegenerative diseases that utilize PET imaging.
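The quantification step itself is a per-voxel subtraction: a trained network predicts the NS image from MR, and that prediction is subtracted from SUVr to leave SAβL. The sketch below shows the arithmetic with a placeholder network and random volumes standing in for the trained multimodal ScaleNet and the study data.

```python
# Quantification arithmetic only: predict NS from MR, subtract from SUVr.
import torch
import torch.nn as nn

ns_model = nn.Conv3d(1, 1, kernel_size=3, padding=1)  # placeholder MR -> NS network
mr = torch.randn(1, 1, 96, 96, 96)     # structural MR volume (placeholder)
suvr = torch.rand(1, 1, 96, 96, 96)    # amyloid-PET SUVr image (placeholder)

ns_pred = ns_model(mr)                 # predicted nonspecific-binding image
sabl = suvr - ns_pred                  # specific amyloid load only (SAβL)
```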

