Vision and Deep Learning-Based Algorithms to Detect and Quantify Cracks on Concrete Surfaces from UAV Videos

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6299 ◽  
Author(s):  
Sutanu Bhowmick ◽  
Satish Nagarajaiah ◽  
Ashok Veeraraghavan

Immediate assessment of the structural integrity of important civil infrastructure, such as bridges, hospitals, or dams, is of utmost importance after natural disasters. Currently, inspection is performed manually by engineers who look for local damage and its extent at significant locations of the structure to understand its implications for global stability. However, this process is time-consuming and prone to human error. Due to their size and extent, some regions of civil structures are hard to access for manual inspection. In such situations, a vision-based system of Unmanned Aerial Vehicles (UAVs) programmed with Artificial Intelligence algorithms may be an effective alternative for carrying out a health assessment of civil infrastructure in a timely manner. This paper proposes a framework for achieving this goal using computer vision and deep learning algorithms to detect cracks on a concrete surface from its image by carrying out image segmentation, i.e., classifying each pixel in an image of the concrete surface as belonging to a crack or not. The image segmentation, or dense pixel-level classification, is carried out using a deep neural network architecture named U-Net. Further, morphological operations on the segmented images yield dense measurements of crack geometry, such as length, width, area, and orientation, for the individual cracks present in the image. The efficacy and robustness of the proposed method as a viable real-life application were validated in a laboratory experiment: a four-point bending test on an 8-foot-long concrete beam, the video of which was recorded using both a camera mounted on a UAV and a still ground-based video camera. Detection, quantification, and localization of damage on civil infrastructure using the proposed framework can directly be used in the prognosis of the structure's ability to withstand service loads.
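The post-processing step described above, extracting per-crack geometry from a binary segmentation mask, can be sketched with connected-component labeling and image moments. This is an illustrative reconstruction, not the paper's code; the function names and the moment-based length/width approximations are assumptions.

```python
# Hypothetical sketch: measure crack geometry (area, length, mean width,
# orientation) from a binary segmentation mask, as in the morphological
# post-processing step described above. Pure Python; mask is a list of rows.
import math

def connected_components(mask):
    """Group 8-connected foreground pixels into components (one per crack)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                comps.append(pixels)
    return comps

def crack_geometry(pixels):
    """Area, length, mean width, and orientation (degrees) of one crack."""
    area = len(pixels)
    cy = sum(p[0] for p in pixels) / area
    cx = sum(p[1] for p in pixels) / area
    # Second-order central moments give the principal (crack) axis.
    myy = sum((p[0] - cy) ** 2 for p in pixels) / area
    mxx = sum((p[1] - cx) ** 2 for p in pixels) / area
    mxy = sum((p[0] - cy) * (p[1] - cx) for p in pixels) / area
    orientation = 0.5 * math.degrees(math.atan2(2 * mxy, mxx - myy))
    # Length: extent of the component projected onto the principal axis.
    theta = math.radians(orientation)
    proj = [(p[1] - cx) * math.cos(theta) + (p[0] - cy) * math.sin(theta)
            for p in pixels]
    length = max(proj) - min(proj) + 1
    width = area / length  # mean width of an elongated crack = area / length
    return {"area": area, "length": length,
            "width": width, "orientation": orientation}
```

In practice a library such as OpenCV or scikit-image would supply the labeling and moment computations; the logic above just makes the geometry definitions explicit.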

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key process in many applications such as lung cancer detection. It is considered a challenging problem due to similar image densities in the pulmonary structures and the variety of scanner types and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore suffer from a lack of accuracy. Another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been effectively applied to medical image segmentation. Among existing deep neural networks, the U-Net has enjoyed great success in this field. In this paper, we propose a deep neural network architecture to perform automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Then, ground truths corresponding to these images are extracted via morphological operations and manual corrections. Finally, all the prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module instead of simple traditional concatenation: it merges the feature maps extracted from the corresponding contracting path with the output of the previous up-convolutional layer in the expansion path. Finally, a densely connected convolutional layer is utilized for the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient index of 97.31%.
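The Dice coefficient index reported above is the standard overlap score between a predicted and a ground-truth mask. A minimal sketch of its computation (names illustrative, not from the paper's code):

```python
# Sketch of the Sørensen–Dice coefficient over flattened binary masks:
# Dice = 2|P ∩ T| / (|P| + |T|). A value of 1.0 means perfect overlap.
def dice_coefficient(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))  # intersection size
    total = sum(pred) + sum(truth)                     # |P| + |T|
    return 2.0 * inter / total if total else 1.0       # empty masks agree
```

The same formula, expressed over per-pixel probabilities instead of hard labels, is also commonly used as a training loss for segmentation networks of this kind.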


Author(s):  
Jun Li ◽  
Yongjun Chen ◽  
Lei Cai ◽  
Ian Davidson ◽  
Shuiwang Ji

The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by the network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and for efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained end-to-end. We apply the proposed networks to biological image segmentation tasks and show that they achieve superior performance in comparison to baseline methods.
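The differentiability of such spatial sampling modules typically rests on bilinear interpolation: a sample at a fractional coordinate is a weighted sum of its four integer-grid neighbors, so gradients flow through both the image values and the sampling coordinates. A minimal sketch of that mechanism (an assumption about the underlying machinery, not the paper's implementation):

```python
# Hypothetical sketch: bilinear sampling, the operation that makes learned
# spatial sampling differentiable. img is a list of rows; (y, x) may be
# fractional. The result is a convex combination of four neighbors.
def bilinear_sample(img, y, x):
    y0, x0 = int(y), int(x)                          # top-left corner
    y1 = min(y0 + 1, len(img) - 1)                   # clamp at the border
    x1 = min(x0 + 1, len(img[0]) - 1)
    wy, wx = y - y0, x - x0                          # fractional weights
    top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx  # interpolate along x
    bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
    return top * (1 - wy) + bot * wy                 # then along y
```

Because every term is a smooth function of `y` and `x`, a network that predicts the sampling coordinates can be trained by backpropagation, which is what allows patch shapes and sizes to be learned from data.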


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shanshan Wang ◽  
Cheng Li ◽  
Rongpin Wang ◽  
Zaiyi Liu ◽  
Meiyun Wang ◽  
...  

Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework for handling imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models, achieving better performance on open datasets with scarce or noisy annotations. We further test AIDE in a real-life case study on breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% of the training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. This 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.


2020 ◽  
Author(s):  
Sandip S Panesar ◽  
Vishwesh Nath ◽  
Sudhir K Pathak ◽  
Walter Schneider ◽  
Bennett A. Landman ◽  
...  

Background
Diffusion tensor imaging (DTI) is a commonly utilized pre-surgical tractography technique. Despite widespread use, DTI suffers from several critical limitations, including an inability to replicate crossing fibers and a low angular resolution, both of which affect the quality of results. More advanced, non-tensor methods have been devised to address DTI's shortcomings, but they remain clinically underutilized due to lack of awareness as well as logistical and cost factors.
Objective
Nath et al. (2020) described a method of transforming DTI data into non-tensor high-resolution data suitable for tractography using a deep learning technique. This study aims to apply this technique to real-life tumor cases.
Methods
The deep learning model utilizes a residual convolutional neural network architecture to yield a spherical harmonic representation of the diffusion-weighted MR signal. The model was trained using normal subject data. DTI data from clinical cases were utilized for testing: Subject 1 had a right-sided anaplastic oligodendroglioma; Subject 2 had a right-sided glioblastoma. We conducted deterministic fiber tractography on both the DTI data and the datasets post-processed by the deep learning algorithm.
Results
All tracts generated from the deep learning algorithm dataset were qualitatively and quantitatively (in terms of tract volume) superior to those created from DTI data. This was true for both test cases.
Conclusions
We successfully utilized a deep learning technique to convert standard DTI data into data capable of high-angular-resolution tractography. This method dispenses with specialized hardware and dedicated acquisition protocols, presenting an economical and logistically feasible way to increase clinical access to high-definition tractography imaging.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods rely on a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, this paper conducts a comparison of the deep learning methods whose codes are publicly available. The public-domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error of 2.57 beats per minute and a mean square error of 7.56 beats per minute.
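The two error metrics used in the comparison above are computed per video between the estimated and reference heart rates, then averaged. A minimal sketch (function names are illustrative):

```python
# Sketch of the comparison metrics: mean absolute error and mean square
# error between estimated and reference heart rates (both in bpm).
def mae(est, ref):
    """Mean absolute error: average of |estimate - reference|."""
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(est)

def mse(est, ref):
    """Mean square error: average of (estimate - reference)^2."""
    return sum((e - r) ** 2 for e, r in zip(est, ref)) / len(est)
```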


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Yujia Sun ◽  
Yang Yang ◽  
Gang Yao ◽  
Fujia Wei ◽  
Mingpu Wong

2021 ◽  
Vol 1861 (1) ◽  
pp. 012067
Author(s):  
Yu’ang Niu ◽  
Yuanyang Zhang ◽  
Liping Ying ◽  
Hong Li ◽  
Wenbo Chen ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Background
Functional lung MRI techniques are usually associated with time-consuming post-processing, of which manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images scanned with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficient accuracy for the calculation of functional parameters.
Methods
Lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and the results were compared to manual segmentations using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer.
Results
The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as fractional ventilation values, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
Conclusions
Deep learning-based image segmentation in stack-of-spirals-based lung MRI allows accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
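Alongside the overlap-based DSC, the Hausdorff distance reported above measures the worst-case boundary disagreement between two segmentations: the farthest any point of one contour lies from the other contour. A minimal sketch over small point sets (illustrative only; real pipelines compute this over segmentation boundary voxels, usually with a spatial index):

```python
# Sketch of the symmetric Hausdorff distance between two point sets A and B:
# HD = max( max_{a in A} min_{b in B} d(a, b),
#           max_{b in B} min_{a in A} d(a, b) ).
import math

def hausdorff(a, b):
    def directed(p_set, q_set):
        # Farthest that any point of p_set lies from its nearest q_set point.
        return max(min(math.dist(p, q) for q in q_set) for p in p_set)
    return max(directed(a, b), directed(b, a))
```

Unlike the DSC, which averages agreement over the whole mask, the HD is sensitive to a single outlying boundary error, which is why both were reported above.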

