GPR data interpretation by the deep learning with coloring data

2019 ◽  
Vol 72 (0) ◽  
pp. 68-77
Author(s):  
Shinichiro Iso ◽  
Kazuya Ishitsuka ◽  
Kyosuke Onishi ◽  
Toshifumi Matsuoka
Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 750
Author(s):  
Iván Garrido ◽  
Jorge Erazo-Aux ◽  
Susana Lagüela ◽  
Stefano Sfarra ◽  
Clemente Ibarra-Castanedo ◽  
...  

The monitoring of heritage objects is necessary due to their continuous deterioration over time. Therefore, the joint use of the most up-to-date inspection techniques with the most innovative data processing algorithms plays an important role in applying the required prevention and conservation tasks in each case study. InfraRed Thermography (IRT) is one of the most used Non-Destructive Testing (NDT) techniques in the cultural heritage field, owing to its advantages in the analysis of delicate objects (i.e., undisturbed, non-contact and fast inspection of large surfaces) and its continuous evolution in both the acquisition and the processing of the data acquired. Despite the good qualitative and quantitative results obtained so far, IRT data interpretation remains largely manual, with the few automatic analyses limited to specific conditions and to the technology of the thermographic camera used. Deep Learning (DL) offers a versatile solution for highly automated data analysis. Hence, this paper introduces the latest state-of-the-art DL model for instance segmentation, Mask Region-based Convolutional Neural Network (Mask R-CNN), for the automatic detection and segmentation of the position and area, respectively, of different surface and subsurface defects in two different artistic objects belonging to the same family: marquetry. To this end, active IRT experiments are applied to each marquetry piece. The acquired thermal image sequences are used as the input dataset in the Mask R-CNN learning process. Beforehand, two automatic thermal image pre-processing algorithms based on thermal fundamentals are applied to the acquired data in order to improve the contrast between defective and sound areas. Good detection and segmentation results are obtained in comparison with state-of-the-art IRT data processing algorithms, which have difficulty identifying the deepest defects in the tests. In addition, the performance of the Mask R-CNN is improved by the prior application of the proposed pre-processing algorithms.
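The abstract does not detail the two thermal pre-processing algorithms, but a common contrast-enhancement step grounded in thermal fundamentals is to normalize each frame of the sequence against a known sound (defect-free) region. A minimal sketch with synthetic data (the function name, array shapes and temperatures are illustrative assumptions, not the authors' algorithms):

```python
import numpy as np

def running_thermal_contrast(frames, sound_mask):
    """Normalized contrast (T - T_sound) / T_sound per frame, a standard
    thermal-fundamentals-based enhancement; name and form are illustrative."""
    out = np.empty(frames.shape, dtype=float)
    for i, frame in enumerate(frames):
        t_sound = frame[sound_mask].mean()   # reference temperature of the sound area
        out[i] = (frame - t_sound) / t_sound
    return out

# Synthetic 3-frame sequence: a 2x2 warmer "defect" on a uniform 20 degC background.
frames = np.full((3, 8, 8), 20.0)
frames[:, 3:5, 3:5] = 25.0
sound_mask = np.ones((8, 8), dtype=bool)
sound_mask[3:5, 3:5] = False                 # exclude the defect from the reference
contrast = running_thermal_contrast(frames, sound_mask)
print(contrast[0, 4, 4])   # → 0.25: the defect stands out against ~0 elsewhere
```

Frames enhanced this way emphasize defective pixels relative to sound ones, which is the kind of contrast a segmentation network benefits from at its input.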


2021 ◽  
Vol 13 (14) ◽  
pp. 2837
Author(s):  
Yago Diez ◽  
Sarah Kentsch ◽  
Motohisa Fukuda ◽  
Maximo Larry Lopez Caceres ◽  
Koma Moritake ◽  
...  

Forests are the planet’s main CO2 filtering agent as well as important economic, environmental and social assets. Climate change is exerting increased stress on them, resulting in a need for improved research methodologies to study their health, composition and evolution. Traditionally, information about forests has been collected using expensive and work-intensive field inventories, but in recent years unmanned aerial vehicles (UAVs) have become very popular, as they represent a simple and inexpensive way to gather high-resolution data over large forested areas. In addition to this trend, deep learning (DL) has been gaining much attention in the field of forestry as a way to incorporate the knowledge of forestry experts into automatic software pipelines tackling problems such as tree detection or tree health/species classification. Among the many sensors that UAVs can carry, RGB cameras are fast, cost-effective and allow for straightforward data interpretation. This has resulted in a large increase in the amount of UAV-acquired RGB data available for forest studies. In this review, we focus on studies that use DL and RGB images gathered by UAVs to solve practical forestry research problems. We summarize the existing studies, provide a detailed analysis of their strengths paired with a critical assessment of common methodological problems, and include other information, such as available public data and code resources, that we believe can be useful for researchers who want to start working in this area. We structure our discussion around three main families of forestry problems: (1) individual tree detection, (2) tree species classification, and (3) forest anomaly detection (forest fires and insect infestation).


2020 ◽  
Author(s):  
Hee Kim ◽  
Thomas Ganslandt ◽  
Thomas Miethke ◽  
Michael Neumaier ◽  
Maximilian Kittel

BACKGROUND In recent years, remarkable progress has been made in deep learning technology and successful use cases have been introduced in the medical domain. However, not many studies have considered high-performance computing to fully appreciate the capability of deep learning technology. OBJECTIVE This paper aims to design a solution to accelerate automated Gram stain image interpretation by means of a deep learning framework without additional hardware resources. METHODS We will apply and evaluate 3 methodologies, namely fine-tuning, an integer arithmetic–only framework, and hyperparameter tuning. RESULTS The choice of pretrained models and the ideal setting for layer tuning and hyperparameter tuning will be determined. These results will provide an empirical yet reproducible guideline for those who consider a rapid deep learning solution for Gram stain image interpretation. The results are planned to be announced in the first quarter of 2021. CONCLUSIONS Making a balanced decision between modeling performance and computational performance is the key to a successful deep learning solution; otherwise, highly accurate but slow deep learning solutions can hardly add value to routine care. INTERNATIONAL REGISTERED REPORT DERR1-10.2196/16843
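The "integer arithmetic–only framework" among the methods refers to quantized inference, in which floating-point weights are mapped to 8-bit integers so a model runs faster without extra hardware. As an illustration of the underlying idea only (not the authors' implementation or framework), a symmetric 8-bit weight quantization sketched in NumPy:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: w ≈ scale * q with q in [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the integers back to floats to measure the quantization error."""
    return scale * q.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max() <= 0.5 * scale)   # error within half a quantization step
```

The accuracy-versus-speed trade-off the conclusion describes comes from exactly this step: int8 arithmetic is much cheaper than float32, at the cost of a bounded rounding error per weight.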


Author(s):  
Nirbhay Kumar Chaubey ◽  
Prisilla Jayanthi

This chapter explicates deep learning algorithms and the opportunities they offer in healthcare. Deep learning is a family of neural network algorithms that learns from multiple levels of representation and abstraction to aid data interpretation. As datasets grow bigger and computers become more powerful, training on those datasets (images or numeric data) becomes easier and the results achieved using deep learning become better. In contrast to machine learning algorithms that rely on large amounts of labelled data, unsupervised learning, like human cognition, can find structure in unlabeled data. It has been noted that applying deep learning algorithms to such datasets can reduce the number of unnecessary biopsies in the future. In this chapter, the authors study deep learning algorithms for diagnosing diabetic retinopathy from retinal images and train a convolutional neural network (CNN) to identify tumors in a large set of brain tumor images.
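As a minimal illustration of the layered representation learning the chapter describes (not the authors' actual model), the following sketch implements a single convolution-plus-ReLU step with a hand-crafted edge kernel, the kind of low-level feature a trained CNN discovers automatically in its first layers:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the core operation a CNN layer repeats."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)

# A vertical-edge kernel responding to the boundary of a bright region,
# standing in for the boundary of a lesion in a medical image.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])
feat = relu(conv2d(img, edge_kernel))
print(feat.max())   # → 1.0, strongest response exactly at the intensity boundary
```

A real diagnostic CNN stacks many such layers and learns the kernels from labelled examples rather than fixing them by hand.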


2020 ◽  
Vol 12 (18) ◽  
pp. 3056
Author(s):  
Man-Sung Kang ◽  
Yun-Kyu An

This paper proposes a frequency–wavenumber (f–k) analysis technique based on deep learning super-resolution (SR) enhancement of ground penetrating radar (GPR) images. GPR is one of the most popular underground investigation tools owing to its nondestructive and high-speed survey capabilities. However, arbitrary underground medium inhomogeneity and undesired measurement noise often disturb GPR data interpretation. Although f–k analysis can be a promising technique for GPR data interpretation, the lack of GPR image resolution caused by the fast or coarse spatial scanning mechanisms used in practice often distorts the analysis. To address this technical issue, we propose an f–k analysis technique supported by a deep learning network. The proposed technique, applied to the SR GPR images generated by the deep learning network, makes it possible to significantly reduce the effects of arbitrary underground medium inhomogeneity and undesired measurement noise. Moreover, the GPR-induced electromagnetic wavefields can be decomposed for directivity analysis of the waves reflected from a given underground object. The effectiveness of the proposed technique is numerically validated through 3D GPR simulations and experimentally demonstrated using in situ 3D GPR data collected from urban roads in Seoul, Korea.
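The core of any f–k analysis is a 2-D Fourier transform of the B-scan: the time axis maps to frequency f and the scan axis to wavenumber k, so reflections with different apparent dips separate in the f–k plane. A minimal sketch with a synthetic flat reflector (sampling intervals and pulse parameters are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Synthetic GPR B-scan: n_t time samples per trace, n_x traces along the scan line.
n_t, n_x = 128, 64
dt, dx = 1e-10, 0.02                       # 0.1 ns time sampling, 2 cm trace spacing
t = np.arange(n_t) * dt
pulse = np.sin(2 * np.pi * 1.0e9 * t) * np.exp(-((t - 3e-9) / 1e-9) ** 2)
bscan = np.tile(pulse[:, None], (1, n_x))  # flat reflector: same arrival on every trace

# f–k spectrum: FFT over time gives frequency f, FFT over scan position gives wavenumber k.
fk = np.fft.fftshift(np.fft.fft2(bscan))
freqs = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))
wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx))

# A horizontal reflector concentrates its energy on the k = 0 line of the f–k plane.
peak_t, peak_x = np.unravel_index(np.argmax(np.abs(fk)), fk.shape)
print(wavenumbers[peak_x])                 # → 0.0
```

The paper's motivation follows from the `dx` term: coarse trace spacing limits the resolvable wavenumber range, which is what SR-enhanced (densified) GPR images restore before this transform is applied.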



2018 ◽  
Author(s):  
Jesse F. Abrams ◽  
Anand Vashishtha ◽  
Seth T. Wong ◽  
An Nguyen ◽  
Azlan Mohamed ◽  
...  

Understanding environmental factors that influence forest health, as well as the occurrence and abundance of wildlife, is a central topic in forestry and ecology. However, the manual processing of field habitat data is time-consuming, and months are often needed to progress from data collection to data interpretation. Computer-assisted tools, such as deep learning applications, can significantly shorten the time needed to process the data while maintaining a high level of accuracy. Here, we propose Habitat-Net: a novel method based on Convolutional Neural Networks (CNN) to segment habitat images of tropical rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of the input image. We worked with two different types of habitat datasets that are widely used in ecological studies to characterize forest conditions: canopy closure and understory vegetation. We trained the model with 800 canopy images and 700 understory images separately and then used 149 canopy and 172 understory images to test the performance of Habitat-Net. We compared the performance of Habitat-Net with a simple threshold-based method, manual processing by a second researcher, and a CNN approach called U-Net, upon which Habitat-Net is based. Habitat-Net, U-Net and simple thresholding reduced total processing time to milliseconds per image, compared to 45 seconds per image for manual processing. However, the higher mean Dice coefficient of Habitat-Net (0.94 for canopy and 0.95 for understory) indicates that the accuracy of Habitat-Net is higher than that of both simple thresholding (0.64, 0.83) and U-Net (0.89, 0.94). Habitat-Net will be of great relevance for ecologists and foresters who need to monitor changes in their forest structures. 
The automated workflow not only reduces the processing time, it also standardizes the analytical pipeline and thus reduces the degree of uncertainty that would be introduced by manual processing of images by different people (either over time or between study sites). Furthermore, it provides the opportunity to collect and process more images from the field, which might increase the accuracy of the method. Although datasets from other habitats might need an annotated dataset to first train the model, the overall time required to process habitat photos will be reduced, particularly for large projects.
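The mean Dice coefficient used above to compare segmentation methods is straightforward to compute from binary masks. A minimal sketch (mask sizes and shapes are illustrative):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Ground-truth canopy mask (6x6 block) versus a prediction that misses one row.
truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True                    # 36 canopy pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 2:8] = True                     # 30 pixels, one row short
print(round(dice(pred, truth), 3))        # → 0.909
```

A value of 1.0 means perfect overlap, so the reported scores of 0.94–0.95 correspond to predictions that agree with the manual masks on nearly every pixel.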


2021 ◽  
pp. 1-30
Author(s):  
Juan M. Haut ◽  
Mercedes E. Paoletti ◽  
Sergio Moreno-Alvarez ◽  
Javier Plaza ◽  
Juan-Antonio Rico-Gallego ◽  
...  

Author(s):  
H.A. Cohen ◽  
T.W. Jeng ◽  
W. Chiu

This tutorial will discuss the methodology of low dose electron diffraction and imaging of crystalline biological objects, the problems of data interpretation for two-dimensional projected density maps of glucose embedded protein crystals, the factors to be considered in combining tilt data from three-dimensional crystals, and finally, the prospects of achieving a high resolution three-dimensional density map of a biological crystal. This methodology will be illustrated using two proteins under investigation in our laboratory, the T4 DNA helix destabilizing protein gp32*I and the crotoxin complex crystal.

