Deep super resolution crack network (SrcNet) for improving computer vision–based automated crack detectability in in situ bridges

2020, pp. 147592172091722
Author(s): Hyunjin Bae, Keunyoung Jang, Yun-Kyu An

This article proposes a new end-to-end deep super-resolution crack network (SrcNet) for improving computer vision–based automated crack detectability. Digital images acquired from large-scale civil infrastructure for crack detection using unmanned robots often suffer from motion blur and insufficient pixel resolution, which can degrade crack detectability. The proposed SrcNet significantly enhances crack detectability by augmenting the pixel resolution of the raw digital image through deep learning. SrcNet consists of two phases: phase I, deep learning–based super resolution (SR) image generation, and phase II, deep learning–based automated crack detection. Once raw digital images are obtained from a target bridge surface, phase I of SrcNet generates SR images corresponding to the raw digital images. Phase II then automatically detects cracks in the generated SR images, which markedly improves crack detectability. SrcNet is experimentally validated using digital images obtained with a climbing robot and an unmanned aerial vehicle from in situ concrete bridges located in South Korea. The validation test results reveal that the proposed SrcNet achieves 24% better crack detectability than crack detection on the raw digital images.
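
The abstract describes a two-phase pipeline (super-resolution followed by crack detection) without giving architectural details here; the following is a minimal PyTorch sketch of that two-phase idea, in which the layer choices, class names, and 2x scale factor are illustrative assumptions rather than the published SrcNet design.

```python
# Minimal sketch of an SR-then-detect pipeline; not the published SrcNet architecture.
import torch
import torch.nn as nn

class SuperResolver(nn.Module):
    """Phase I (assumed form): upscale a raw image by 2x with sub-pixel convolution."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),          # rearranges channels into a larger image
        )

    def forward(self, x):
        return self.body(x)

class CrackDetector(nn.Module):
    """Phase II (assumed form): per-pixel crack probability map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

raw = torch.rand(1, 3, 128, 128)       # stand-in for a raw bridge-surface image
sr = SuperResolver()(raw)              # Phase I: 128x128 -> 256x256
crack_map = CrackDetector()(sr)        # Phase II: crack probability per pixel
print(sr.shape, crack_map.shape)
```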

Data, 2018, Vol 3 (3), pp. 28
Author(s): Kasthurirangan Gopalakrishnan

Deep learning, more specifically deep convolutional neural networks, is fast becoming a popular choice for computer vision-based automated pavement distress detection. While pavement image analysis has been extensively researched over the past three decades or so, recent ground-breaking achievements of deep learning algorithms in machine translation, speech recognition, and computer vision have sparked interest in applying deep learning to the automated detection of distresses in pavement images. This paper provides a narrative review of recently published studies in this field, highlighting current achievements and challenges. A comparison of the deep learning software frameworks, network architectures, hyper-parameters employed by each study, and crack detection performance is provided, which is expected to offer a good foundation for driving further research on this important topic in the context of smart pavement and asset management systems. The review concludes with potential avenues for future research, especially the application of deep learning not only to detect, but also to characterize the type, extent, and severity of distresses from 2D and 3D pavement images.
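
For readers unfamiliar with the approach the review surveys, a minimal sketch of the patch-level crack/no-crack CNN classifier that many such studies build on is shown below; the architecture and patch size are illustrative assumptions, not taken from any specific reviewed paper.

```python
# Illustrative patch-level crack/no-crack classifier; sizes are assumptions.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):            # x: (N, 3, 64, 64) pavement patches
        return self.head(self.features(x))

logits = PatchClassifier()(torch.rand(8, 3, 64, 64))
print(logits.shape)                  # (8, 2): crack vs. no-crack scores
```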


2020, Vol 12 (14), pp. 2284
Author(s): Peter Feldens

In marine habitat mapping, there is demand for high-resolution maps of the seafloor for both marine spatial planning and research. One topic of interest is the detection of boulders in side scan sonar backscatter mosaics of continental shelf seas. Boulders are often numerous but span only a few pixels in backscatter mosaics, which makes both their automatic and manual detection difficult. In this study, set in the German Baltic Sea, the use of deep learning-based super resolution to improve the manual and automatic detection of boulders in backscatter mosaics is explored. Upscaling mosaics by a factor of 2, to 0.25 m or 0.125 m pixel resolution, increases the performance of small boulder detection and of the resulting boulder density grids. Upscaling mosaics with 1.0 m pixel resolution by a factor of 4 also improved performance, but the results are not sufficient for practical application. It is suggested that, after upscaling, mosaics of 0.5 m resolution can be used to create boulder density grids in the Baltic Sea in line with current standards.
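
The boulder density grids mentioned above are built by aggregating detected boulder positions into fixed-size cells; a minimal sketch of that aggregation step, with an assumed 50 m cell size and mock detections, is given below.

```python
# Sketch of the post-detection step: counting detected boulders per grid cell.
import numpy as np

def boulder_density_grid(xy, extent, cell=50.0):
    """Count detections per cell; xy is an (N, 2) array of easting/northing [m]."""
    x0, x1, y0, y1 = extent
    xedges = np.arange(x0, x1 + cell, cell)
    yedges = np.arange(y0, y1 + cell, cell)
    grid, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[xedges, yedges])
    return grid  # boulders per cell

rng = np.random.default_rng(0)
detections = rng.uniform([0, 0], [500, 500], size=(200, 2))   # mock detections
print(boulder_density_grid(detections, extent=(0, 500, 0, 500)).shape)
```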


2021
Author(s): Andres Munoz-Jaramillo, Anna Jungbluth, Xavier Gitiaux, Paul Wright, Carl Shneider, ...

Super-resolution techniques aim to increase the resolution of images by adding detail. Compared to upsampling techniques that rely on interpolation, deep learning-based approaches learn features and their relationships across the training data set to leverage prior knowledge of what low-resolution patterns look like in higher-resolution images. As an added benefit, deep neural networks can learn the systematic properties of the target images (i.e., texture), combining super-resolution with instrument cross-calibration. While the successful use of super-resolution algorithms for natural images rests on producing perceptually convincing results, super-resolution applied to scientific data requires careful quantitative evaluation of performance. In this work, we demonstrate that deep learning can increase the resolution of, and calibrate, space- and ground-based imagers belonging to different instrumental generations. In addition, we establish a set of measurements to benchmark the performance of scientific applications of deep learning-based super-resolution and calibration. We super-resolve and calibrate solar magnetic field images taken by the Michelson Doppler Imager (MDI; resolution ~2"/pixel; science-grade, space-based) and the Global Oscillation Network Group (GONG; resolution ~2.5"/pixel; space weather operations, ground-based) to the pixel resolution of images taken by the Helioseismic and Magnetic Imager (HMI; resolution ~0.5"/pixel; latest generation, science-grade, space-based).
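
The specific benchmark measurements established in the paper are not listed in this abstract; the sketch below uses two generic image-comparison metrics (RMSE and Pearson correlation against an HMI-resolution target) purely as assumed stand-ins for that kind of quantitative evaluation.

```python
# Generic quantitative comparison of a super-resolved magnetogram to its target.
import numpy as np

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def pearson(pred, target):
    return float(np.corrcoef(pred.ravel(), target.ravel())[0, 1])

rng = np.random.default_rng(1)
hmi_like = rng.normal(0.0, 100.0, size=(256, 256))              # mock target field [G]
sr_output = hmi_like + rng.normal(0.0, 10.0, size=(256, 256))   # mock model output
print(rmse(sr_output, hmi_like), pearson(sr_output, hmi_like))
```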


Nanomaterials, 2020, Vol 10 (9), pp. 1640
Author(s): Ran Liu, Bo Liu, Quan-Jun Li, Bing-Bing Liu

An in situ high-pressure X-ray diffraction study was performed on Ag2S nanosheets with an average lateral size of 29 nm and a relatively small thickness. Based on the experimental data, we demonstrate that under compression up to 29.4 GPa the samples undergo two structural phase transitions: from the monoclinic P21/n structure (phase I, α-Ag2S) to an orthorhombic P212121 structure (phase II) at 8.9 GPa, and then to a monoclinic P21/n structure (phase III) at 12.4 GPa. The critical transition pressures to phase II and phase III are approximately 2–3 GPa higher than those of 30 nm Ag2S nanoparticles and of the bulk material. Phase III remained stable up to the highest pressure of 29.4 GPa. The bulk moduli of the Ag2S nanosheets were obtained as 73(6) GPa for phase I and 141(4) GPa for phase III, indicating that the samples are less compressible than their bulk counterparts and some other reported Ag2S nanoparticles. Further analysis suggests that the nanosize effect arising from the small thickness of the Ag2S nanosheets restricts the relative slip of interlayer atoms during compression, which enhances the phase stabilities and raises the bulk moduli.
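
The abstract reports fitted bulk moduli but does not state the equation of state used; a common choice for such pressure-volume data is the third-order Birch-Murnaghan EOS, sketched here on synthetic data for illustration only.

```python
# Third-order Birch-Murnaghan EOS fit on synthetic P-V data (assumed EOS choice).
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, K0, K0p):
    """Pressure [GPa] vs. volume for the 3rd-order Birch-Murnaghan EOS."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * K0 * (eta**3.5 - eta**2.5) * (1.0 + 0.75 * (K0p - 4.0) * (eta - 1.0))

V = np.linspace(0.80, 1.00, 25)                     # mock V/V0 values
P = birch_murnaghan(V, 1.0, 73.0, 4.0)              # synthetic "phase I"-like curve
P += np.random.default_rng(2).normal(0.0, 0.2, P.size)

popt, _ = curve_fit(birch_murnaghan, V, P, p0=[1.0, 60.0, 4.0])
print("V0, K0 [GPa], K0':", popt)
```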


Author(s): Seyyed Hadi Seifi, Wenmeng Tian, Aref Yadollahi, Haley Doude, Linkan Bian

Additive manufacturing (AM) is a novel fabrication technique that enables the production of very complex designs not feasible with conventional manufacturing techniques. However, one major barrier to broader adoption of additive manufacturing processes concerns the quality of the final products, which can be measured by the presence of internal defects, such as pores and cracks, that affect the mechanical properties of the fabricated parts. In this paper, a data-driven methodology is proposed to predict the size and location of porosities based on in-situ process signatures, i.e., the thermal history. Both the size and the location of pores strongly affect the resulting fatigue life: near-surface and large pores reduce fatigue life significantly more than inner or small pores. Therefore, a model that predicts porosity size and location paves the way toward an in-situ prediction model for fatigue life, which would strongly benefit the additive manufacturing community. The proposed model consists of two phases: in Phase I, a model is established to predict the occurrence and location of small and large pores based on the thermal history; subsequently, a fatigue model is trained in Phase II to predict fatigue life from the porosity features predicted in Phase I. The Phase I model is validated using a thin wall fabricated by a direct laser deposition process, and the Phase II model is validated with fatigue life simulations. Both models provide promising results that can be further studied for functional outcomes.
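
A minimal sketch of the two-stage Phase I/Phase II idea on synthetic data is shown below; the models, features, and numbers are assumptions for illustration, not the authors' method.

```python
# Two-stage sketch: Phase I predicts porosity class from thermal-history
# features; Phase II predicts fatigue life from the predicted porosity features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(3)
thermal = rng.normal(size=(300, 5))             # mock thermal-history features per location
pore_class = (thermal[:, 0] > 0.5).astype(int)  # mock labels: 0 = small/none, 1 = large/near-surface
fatigue_life = 1e5 - 4e4 * pore_class + rng.normal(0, 5e3, 300)  # mock fatigue lives [cycles]

phase1 = RandomForestClassifier(random_state=0).fit(thermal, pore_class)
pore_features = phase1.predict_proba(thermal)   # predicted porosity descriptors
phase2 = RandomForestRegressor(random_state=0).fit(pore_features, fatigue_life)
print(phase2.predict(pore_features[:3]))
```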


2020, Vol 3 (1)
Author(s): Fei Li, Diping Song, Han Chen, Jian Xiong, Xingyi Li, ...

By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were used to train (8424 VFs), validate (598 VFs) and test (3 independent test sets of 200, 406, and 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced expert glaucomatologists, the diagnostic performance (area under the curve [AUC], sensitivity and specificity) of the DLS and of six ophthalmologists was evaluated in detecting glaucoma. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834–0.877, with a sensitivity of 0.831–0.922 and a specificity of 0.676–0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with corresponding AUC, sensitivity and specificity of 0.966 (0.953–0.979), 0.954 (0.930–0.977), and 0.873 (0.838–0.908), respectively. 'iGlaucoma' is a clinically effective diagnostic tool for detecting glaucoma from Humphrey VFs, although the target population will need to be carefully identified with glaucoma expertise input.
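
The reported AUC, sensitivity, and specificity follow from standard definitions; the sketch below computes them from mock labels and scores, not from the study's data or model.

```python
# Computing AUC, sensitivity, and specificity from binary labels and scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, size=500)                          # 1 = glaucomatous VF (mock)
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 500), 0, 1)
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:", roc_auc_score(y_true, scores))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```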


2016, Vol 37 (1), pp. 91-100
Author(s): Paul D. Frederick, Heidi D. Nelson, Patricia A. Carney, Tad T. Brunyé, Kimberly H. Allison, ...

Background. Medical decision making may be influenced by contextual factors. We evaluated whether pathologists are influenced by the disease severity of recently observed cases. Methods. Pathologists independently interpreted 60 breast biopsy specimens (one slide per case; 240 total cases in the study) in a prospective randomized observational study. Pathologists interpreted the same cases in 2 phases, separated by a washout period of >6 months. Participants were not informed that the cases were identical in each phase, and the sequence was reordered randomly for each pathologist and between phases. A consensus reference diagnosis was established for each case by 3 experienced breast pathologists. Ordered logit models examined the effect that the pathologists' diagnoses on the preceding case, or on the 5 preceding cases, had on their diagnosis of the subsequent index case. Results. Among 152 pathologists, 49 provided interpretive data in both phases I and II, 66 in phase I only, and 37 in phase II only. In phase I, pathologists were more likely to give a more severe diagnosis than the reference diagnosis when the preceding case had been diagnosed as ductal carcinoma in situ (DCIS) or invasive cancer (proportional odds ratio [POR], 1.28; 95% confidence interval [CI], 1.15–1.42). Results were similar when considering the preceding 5 cases and for the pathologists in phase II, who interpreted the same cases in a different order than in phase I (POR, 1.17; 95% CI, 1.05–1.31). Conclusion. Physicians appear to be influenced by the severity of previously interpreted test cases. Understanding the types and sources of diagnostic bias may lead to improved assessment of accuracy and better patient care.
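
The proportional odds ratios above come from ordered logit models; the sketch below fits such a model on mock data with statsmodels, where the variable names and effect size are illustrative assumptions.

```python
# Ordered logit (proportional odds) sketch: does a severe preceding diagnosis
# shift the ordinal diagnosis of the current case? Data here are synthetic.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
prev_severe = rng.integers(0, 2, size=400)              # preceding case DCIS/invasive? (mock)
latent = 0.25 * prev_severe + rng.logistic(size=400)    # latent severity of current call
current_dx = pd.Series(pd.Categorical(np.digitize(latent, [-1.0, 0.5, 1.5]), ordered=True))

res = OrderedModel(current_dx, prev_severe[:, None], distr="logit").fit(
    method="bfgs", disp=False)
params = np.asarray(res.params)                         # [slope, thresholds...]
print("proportional odds ratio for a severe preceding case:", np.exp(params[0]))
```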


2019
Author(s): Linjing Fang, Fred Monroe, Sammy Weiser Novak, Lyndsey Kirk, Cara Rae Schiavon, ...

Point scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high resolution cellular and tissue imaging. As in all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show that these limitations can be mitigated via deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. Oversampled, high-SNR ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were "crappified" to generate semi-synthetic training data for PSSR models, which were then used to restore real-world undersampled images. Remarkably, our EM PSSR model could restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable 2 nm resolution images with our serial block face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitate Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
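
A minimal sketch of the "crappification" idea (degrading oversampled ground truth to build semi-synthetic low-resolution/high-resolution training pairs) is given below; the block-averaging plus Poisson-like noise used here is an assumed stand-in, not the authors' exact degradation procedure.

```python
# Degrade a high-resolution image to create a paired low-resolution training input.
import numpy as np

def crappify(img, factor=4, rng=None):
    """Downsample a 2-D image by block averaging and add shot-noise-like noise."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    small = img[: h - h % factor, : w - w % factor]
    small = small.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    noisy = rng.poisson(np.clip(small, 0, None) * 50.0) / 50.0   # assumed noise model
    return noisy

hi_res = np.random.default_rng(6).random((512, 512))    # stand-in for ground truth
lo_res = crappify(hi_res)                                # paired training input
print(hi_res.shape, lo_res.shape)
```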

