Faster Post-Earthquake Damage Assessment Based on 1D Convolutional Neural Networks

2021 · Vol 11 (21) · pp. 9844
Author(s): Xinzhe Yuan, Dustin Tanksley, Liujun Li, Haibin Zhang, Genda Chen, ...

Contemporary deep learning approaches for post-earthquake damage assessments based on 2D convolutional neural networks (CNNs) require encoding of ground motion records to transform their inherent 1D time series to 2D images, thus requiring high computing time and resources. This study develops a 1D CNN model to avoid the costly 2D image encoding. The 1D CNN model is compared with a 2D CNN model with wavelet transform encoding and a feedforward neural network (FNN) model to evaluate prediction performance and computational efficiency. A case study of a benchmark reinforced concrete (r/c) building indicated that the 1D CNN model achieved a prediction accuracy of 81.0%, which was very close to the 81.6% prediction accuracy of the 2D CNN model and much higher than the 70.8% prediction accuracy of the FNN model. At the same time, the 1D CNN model reduced computing time by more than 90% and reduced resources used by more than 69%, as compared to the 2D CNN model. Therefore, the developed 1D CNN model is recommended for rapid and accurate resultant damage assessment after earthquakes.
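A minimal sketch of such a 1D CNN, consuming a raw single-channel ground-motion record directly instead of a 2D image encoding (the layer sizes, record length, and number of damage classes below are illustrative assumptions, not the architecture from the study):

import torch
import torch.nn as nn

class DamageCNN1D(nn.Module):
    """Illustrative 1D CNN mapping a raw acceleration record to damage classes."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),  # operates on the 1D record directly
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # global average pooling over time
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

model = DamageCNN1D()
dummy_records = torch.randn(8, 1, 2000)                  # 8 synthetic ground-motion records
print(model(dummy_records).shape)                        # torch.Size([8, 5])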

2020 · Vol 497 (3) · pp. 2641-2650
Author(s): Damien Turpin, M Ganet, S Antier, E Bertin, L P Xin, ...

ABSTRACT The observation of the transient sky through a multitude of astrophysical messengers has led to several scientific breakthroughs in the last two decades, thanks to the fast evolution of the observational techniques and strategies employed by astronomers. It now requires the ability to coordinate multiwavelength and multimessenger follow-up campaigns with instruments both in space and on the ground, jointly capable of scanning a large fraction of the sky with a high imaging cadence and duty cycle. In the optical domain, the key challenge for wide field-of-view telescopes covering tens to hundreds of square degrees is to handle the detection, identification, and classification of hundreds to thousands of optical transient (OT) candidates every night in a reasonable amount of time. In the last decade, new automated tools based on machine learning approaches have been developed to perform these tasks with low computing time and high classification efficiency. In this paper, we present an efficient classification method using convolutional neural networks (CNNs) to discard many common types of bogus detections falsely identified in astrophysical images in the optical domain. We designed this tool to improve the performance of the OT detection pipeline of the Ground Wide Angle Cameras (GWAC) telescopes, a network of robotic telescopes aiming at monitoring the OT sky down to R = 16 with a 15 s imaging cadence. We applied our trained CNN classifier to a sample of 1472 GWAC OT candidates detected by the real-time detection pipeline.
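A minimal sketch of such a real/bogus CNN classifier operating on small image cutouts around each candidate (the 32x32 stamp size, layer widths, and two-class head are illustrative assumptions, not the actual GWAC pipeline architecture):

import torch
import torch.nn as nn

class RealBogusCNN(nn.Module):
    """Toy CNN scoring 32x32 pixel stamps as real transient vs bogus detection."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # class 0: bogus, class 1: real OT
        )

    def forward(self, stamps):                # stamps: (batch, 1, 32, 32)
        return self.net(stamps)

scores = RealBogusCNN()(torch.randn(4, 1, 32, 32))
print(scores.softmax(dim=1))                  # per-candidate class probabilities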


2019 · Vol 277 · pp. 02024
Author(s): Lincan Li, Tong Jia, Tianqi Meng, Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasonic images. First, a fully convolutional neural network (FCN) named U-Net is used to segment the original intravascular optical coherence tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall rate, precision rate, IoU (intersection-over-union) rate, and total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than that of conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
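For illustration, a short sketch of how six-scale anchors (12² through 256²) can be generated at a single feature-map location; the aspect ratios and origin-centred layout are assumptions added here, not details taken from the paper:

import numpy as np

def make_anchors(scales=(12, 16, 32, 64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return (len(scales) * len(ratios), 4) anchor boxes (x1, y1, x2, y2) centred at the origin."""
    anchors = []
    for s in scales:
        area = float(s * s)              # anchor area equals scale squared
        for r in ratios:
            w = np.sqrt(area / r)        # width and height chosen so that w * h == area
            h = w * r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

print(make_anchors().shape)              # (18, 4): six scales x three aspect ratios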


2021 · Vol 5 (2)
Author(s): Alexander Knyshov, Samantha Hoang, Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
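A minimal sketch of the general workflow, extracting features from a pretrained CNN and classifying them with a support vector machine; the ResNet-50 backbone, the pooled-feature layer, and the toy data are assumptions and not necessarily the blocks or classifiers compared by the authors:

import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # keep the pooled 2048-d features, drop the ImageNet head
backbone.eval()

def extract_features(batch):             # batch: (N, 3, 224, 224) dorsal habitus images
    with torch.no_grad():
        return backbone(batch).numpy()

# Toy usage: random tensors stand in for labelled specimen photographs.
X = extract_features(torch.randn(12, 3, 224, 224))
y = [0, 1] * 6                           # two hypothetical species labels
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:2]))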


2020 · Vol 10 (1)
Author(s): Kai Kiwitz, Christian Schiffer, Hannah Spitzer, Timo Dickscheid, Katrin Amunts

Abstract The distribution of neurons in the cortex (cytoarchitecture) differs between cortical areas and constitutes the basis for structural maps of the human brain. Deep learning approaches provide a promising alternative to overcome throughput limitations of currently used cytoarchitectonic mapping methods, but typically lack insight into the extent to which they follow cytoarchitectonic principles. We therefore investigated to what extent the internal structure of deep convolutional neural networks trained for cytoarchitectonic brain mapping reflects traditional cytoarchitectonic features, and compared them to features of the current grey level index (GLI) profile approach. The networks consisted of a 10-block deep convolutional architecture trained to segment the primary and secondary visual cortex. Filter activations of the networks served to analyse resemblances to traditional cytoarchitectonic features and to make comparisons to the GLI profile approach. Our analysis revealed resemblances to cellular, laminar, and cortical area-related cytoarchitectonic features. The networks learned filter activations that reflect the distinct cytoarchitecture of the segmented cortical areas, with special regard to their laminar organization, and compared well to statistical criteria of the GLI profile approach. These results confirm an incorporation of relevant cytoarchitectonic features in the deep convolutional neural networks and mark them as a valid support for high-throughput cytoarchitectonic mapping workflows.
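As a minimal sketch of how filter activations can be captured from an intermediate convolutional block for this kind of analysis (the small stand-in network and patch size below are assumptions, not the authors' 10-block architecture):

import torch
import torch.nn as nn

# Stand-in segmentation network: grey-value patch in, 2-class map out (e.g. V1 vs V2).
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 1),
)

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()    # store filter activations for later analysis
    return hook

net[2].register_forward_hook(save_activation("block2"))   # tap the second conv layer

patch = torch.randn(1, 1, 128, 128)            # one grey-value image patch of a cortical section
_ = net(patch)
print(activations["block2"].shape)             # torch.Size([1, 32, 128, 128])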

