Detecting structural damage under unknown seismic excitation by deep convolutional neural network with wavelet-based transmissibility data

2020, pp. 147592172092308
Author(s): Ying Lei, Yixiao Zhang, Jianan Mi, Weifeng Liu, Lijun Liu

Many research groups in the structural health monitoring community have worked to apply deep learning-based approaches to damage detection in a variety of structures. Among these approaches, structural damage detection through deep convolutional neural networks fed with raw structural response data has received great attention. However, structural responses are affected not only by structural properties but also by the characteristics of the excitation. When detecting structural damage under seismic excitation, different seismic excitations inevitably produce different structural response data, and in practice it is impossible to accurately predict the characteristics of future seismic excitations in order to pre-train the deep convolutional neural network. It is therefore essential to investigate the autonomous detection of structural element damage under unknown seismic excitation. In this article, a new approach is proposed for detecting structural damage under unknown seismic excitation, based on a convolutional neural network with wavelet-based transmissibility of structural response data. Transmissibility functions of the structural response data are used to eliminate the influence of different seismic excitations. Moreover, in contrast to the Fourier transform used in the conventional transmissibility function, a wavelet-based transmissibility function is presented that exploits the ability of the wavelet transform to capture subtle information. The wavelet-based transmissibility data of the structural responses serve as inputs to the constructed deep convolutional neural networks. Both a numerical simulation example and an experimental test are used to validate the performance of the proposed deep convolutional neural network-based approach.
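A minimal sketch of the central idea, under illustrative assumptions (the channel pairing, the complex Morlet wavelet, and the H1-type averaging are choices made here, not details given in the abstract): the transmissibility between two response channels is formed from continuous wavelet transform coefficients rather than Fourier spectra, so the shared, unknown excitation largely cancels, and the resulting scale-wise curves can be stacked as input samples for a convolutional network.

```python
import numpy as np
import pywt

def wavelet_transmissibility(x_i, x_j, scales, wavelet="cmor1.5-1.0", fs=100.0):
    """Transmissibility of channel i w.r.t. channel j from CWT coefficients."""
    W_i, _ = pywt.cwt(x_i, scales, wavelet, sampling_period=1.0 / fs)
    W_j, _ = pywt.cwt(x_j, scales, wavelet, sampling_period=1.0 / fs)
    # H1-type estimate: cross term over auto term, averaged along time, so the
    # excitation content shared by both channels largely cancels out.
    num = np.sum(W_i * np.conj(W_j), axis=1)
    den = np.sum(np.abs(W_j) ** 2, axis=1)
    return np.abs(num / den)                       # one value per scale

# Transmissibility curves for adjacent sensor pairs stacked into one
# multi-channel sample for the convolutional network (placeholder signals).
fs = 100.0
responses = [np.random.randn(2000) for _ in range(4)]
scales = np.arange(1, 65)
sample = np.stack([wavelet_transmissibility(responses[k], responses[k + 1],
                                            scales, fs=fs)
                   for k in range(3)])             # shape: (pairs, scales)
```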

2019, Vol 2019, pp. 1-10
Author(s): Jintao Wang, Mingxia Shen, Longshen Liu, Yi Xu, Cedric Okinda

Digestive diseases are among the common broiler diseases that significantly affect production and animal welfare in broiler breeding. Examination and observation of droppings are the most precise techniques for detecting digestive disease infections in birds. This study proposes an automated broiler digestive disease detector based on a deep convolutional neural network model that classifies fine-grained broiler droppings images as normal or abnormal (abnormal shape, color, water content, and shape & water). Droppings images were collected from 10,000 Ross broiler birds aged 25-35 days, reared in multilayer cages with automatic droppings conveyor belts. For comparative purposes, Faster R-CNN and YOLO-V3 deep convolutional neural networks were developed, and the performance of YOLO-V3 was further improved by optimizing the anchor boxes. On the testing data set, Faster R-CNN achieved 99.1% recall and 93.3% mean average precision, while YOLO-V3 achieved 88.7% recall and 84.3% mean average precision. The proposed detector can provide technical support for the detection of digestive diseases in broiler production by automatically and non-intrusively recognizing and classifying chicken droppings.
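The anchor-box optimisation mentioned for YOLO-V3 is commonly done by k-means clustering of the training bounding-box sizes with an IoU-based distance; the sketch below illustrates that standard procedure on placeholder data, not the study's actual code.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes and anchors share a corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchor boxes (1 - IoU distance)."""
    rng = np.random.default_rng(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)    # nearest anchor
        for c in range(k):
            if np.any(assign == c):
                anchors[c] = boxes_wh[assign == c].mean(axis=0)  # update centroid
    return anchors[np.argsort(anchors.prod(axis=1))]

# boxes_wh would be the (N, 2) widths/heights of labelled droppings boxes;
# random placeholder values are used here.
boxes_wh = np.abs(np.random.randn(500, 2)) * 50 + 20
print(kmeans_anchors(boxes_wh, k=9))
```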


2021, Vol 2021, pp. 1-15
Author(s): Xuhui Fu

In recent years, deep learning has become a very popular artificial intelligence method, particularly in the field of image recognition. It is a branch of machine learning derived from artificial neural networks and is used to learn the characteristics of sample data. A deep model is a multilayer network that can learn information from the bottom to the top layers of an image, extract the characteristics of a sample, and then perform identification and classification. The goal of deep learning is to give machines analytical and learning capabilities comparable to those of the human brain; its ability to process data, including images, is unmatched by other methods, and its achievements in recent years have left those methods behind. This article comprehensively reviews the progress of research applying deep convolutional neural networks to the restoration of ancient Chinese patterns, focusing mainly on work based on deep convolutional neural networks. The main tasks are as follows: (1) a detailed and comprehensive introduction to the basic knowledge of deep convolutional neural networks is provided, together with a summary of related algorithms along the three directions of text preprocessing, learning, and neural networks; the article concentrates on the mechanism of traditional pattern repair based on deep convolutional neural networks and analyzes the key structures and principles. (2) An image restoration model based on deep convolutional networks and adversarial neural networks is investigated. The model consists of four parts, namely information masking, feature extraction, a generating network, and a discriminant network, whose functions are both independent and interdependent. (3) The deep convolutional neural network-based method and two other methods are tested on the same part of a Qinghai traditional embroidery image data set. In terms of the final evaluation indices, the proposed method outperforms both the traditional sample-based image restoration method and the other deep learning-based image restoration method. In addition, judging by the actual restoration results, the proposed method restores images better than the other two methods, and its results are more consistent with what the naked eye expects to see.
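As a rough illustration of the four-part structure described in task (2), the sketch below wires together information masking, an encoder for feature extraction, a decoder as the generating network, and a small discriminant network; the layer sizes and losses are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder (feature extraction) + decoder (generating network)."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),   # image + mask
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        x = torch.cat([image * (1 - mask), mask], dim=1)   # information masking
        return self.decode(self.encode(x))

discriminator = nn.Sequential(                              # discriminant network
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

# One toy forward pass on a 64x64 embroidery patch with a square hole.
image = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0
restored = Generator()(image, mask)
score = discriminator(restored)                             # adversarial realism score
recon_loss = ((restored - image) * mask).abs().mean()       # penalise only the hole
```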


2019, Vol 23 (10), pp. 4493-4502
Author(s): Chuncheng Feng, Hua Zhang, Shuang Wang, Yonglong Li, Haoran Wang, ...

Diagnostics, 2020, Vol 10 (7), pp. 487
Author(s): Bosheng Qin, Letian Liang, Jingchao Wu, Qiyao Quan, Zeyu Wang, ...

Down syndrome is one of the most common genetic disorders, and its distinctive facial features provide an opportunity for automatic identification. Recent studies have shown that facial recognition technologies can identify genetic disorders; however, there is a paucity of studies on the automatic identification of Down syndrome with facial recognition technologies, especially using deep convolutional neural networks. Here, we developed a Down syndrome identification method that uses facial images and deep convolutional neural networks, framing the task as the binary classification problem of distinguishing subjects with Down syndrome from healthy subjects on the basis of unconstrained two-dimensional images. The network was trained in two main steps: first, we formed a general facial recognition network using a large-scale face identity database (10,562 subjects), and then we trained (70%) and tested (30%) it on a dataset of 148 Down syndrome and 257 healthy images curated from public databases. In the final testing, the deep convolutional neural network achieved 95.87% accuracy, 93.18% recall, and 97.40% specificity in Down syndrome identification. Our findings indicate that deep convolutional neural networks have the potential to support fast, accurate, and fully automatic identification of Down syndrome and could add considerable value to the future of precision medicine.
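A minimal transfer-learning sketch of the two-step training described above, under assumptions: the face-identity backbone is stood in for by an ImageNet-pretrained ResNet-18, and only the second step, replacing the identity head with a single-logit Down syndrome classifier and fine-tuning it, is shown.

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 1 (assumed already done): a backbone trained on a large face-identity
# database; an ImageNet-pretrained ResNet-18 stands in for it here.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Step 2: replace the identity head with a single-logit binary classifier
# (Down syndrome vs. healthy) and fine-tune on the curated image set.
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# Toy batch of face crops; in the study this would be the 70% training split
# of the 148 Down syndrome + 257 healthy images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```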


2021, Vol 1 (1)
Author(s): Ali Sekmen, Mustafa Parlaktuna, Ayad Abdul-Malek, Erdem Erdemir, Ahmet Bugra Koku

This paper introduces two deep convolutional neural network training techniques that lead to more robust feature subspace separation than traditional training. Assume that the dataset has $M$ labels. The first method creates $M$ deep convolutional neural networks $\{\text{DCNN}_i\}_{i=1}^{M}$. Each network $\text{DCNN}_i$ is composed of a convolutional neural network ($\text{CNN}_i$) and a fully connected neural network ($\text{FCNN}_i$). In training, a set of projection matrices $\{\mathbf{P}_i\}_{i=1}^{M}$ is created and adaptively updated as representations of the feature subspaces $\{\mathcal{S}_i\}_{i=1}^{M}$. A rejection value is computed for each training sample based on its projections onto the feature subspaces. Each $\text{FCNN}_i$ acts as a binary classifier with a cost function whose main parameter is the rejection value. A threshold value $t_i$ is determined for the $i$th network $\text{DCNN}_i$, and a testing strategy utilizing $\{t_i\}_{i=1}^{M}$ is also introduced. The second method creates a single DCNN and computes a cost function whose parameters depend on subspace separation, measured by the geodesic distance on the Grassmannian manifold between each subspace $\mathcal{S}_i$ and the sum of all remaining subspaces $\{\mathcal{S}_j\}_{j=1, j\ne i}^{M}$. The proposed methods are tested using multiple network topologies. It is shown that while the first method works better for smaller networks, the second method performs better for complex architectures.
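The two quantities the paper builds on, the rejection value of a feature vector with respect to a subspace and the geodesic distance between subspaces on the Grassmannian, can be computed as in the following sketch (an illustration of the standard definitions, not the authors' implementation).

```python
import numpy as np

def projection_matrix(basis):
    """P = U U^T for an orthonormal basis U spanning a feature subspace."""
    U, _ = np.linalg.qr(basis)
    return U @ U.T

def rejection_value(x, P):
    """Norm of the component of x orthogonal to the subspace (its residual)."""
    return np.linalg.norm(x - P @ x)

def grassmann_geodesic(basis_a, basis_b):
    """Geodesic distance: l2-norm of the principal angles between subspaces."""
    Ua, _ = np.linalg.qr(basis_a)
    Ub, _ = np.linalg.qr(basis_b)
    sigma = np.clip(np.linalg.svd(Ua.T @ Ub, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(sigma))

# Toy example: two 3-dimensional subspaces of a 10-dimensional feature space.
rng = np.random.default_rng(0)
S1, S2 = rng.standard_normal((10, 3)), rng.standard_normal((10, 3))
x = rng.standard_normal(10)
print(rejection_value(x, projection_matrix(S1)), grassmann_geodesic(S1, S2))
```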


2018, Vol 18 (1), pp. 143-163
Author(s): Yang Yu, Chaoyue Wang, Xiaoyu Gu, Jianchun Li

In the past few years, intelligent structural damage identification algorithms based on machine learning techniques have been developed and have attracted considerable attention worldwide, owing to their reliable analysis and high efficiency. However, the performance of existing machine learning-based damage identification methods depends heavily on the signatures selected from the raw signals. As a result, a damage identification method that is optimal for one specific application may fail to deliver similar performance in other cases. Moreover, feature extraction is a time-consuming task, which may compromise real-time performance in practical applications. To address these problems, this article proposes a novel method based on deep convolutional neural networks to identify and localise damage in building structures equipped with smart control devices. The proposed deep convolutional neural network automatically extracts high-level features from raw signals or low-level features and optimally selects the combination of extracted features via multi-layer fusion to satisfy any damage identification objective. To evaluate the performance of the proposed method, a five-level benchmark building equipped with adaptive smart isolators and subjected to seismic loading is investigated. The results show that the proposed method has outstanding generalisation capacity and higher identification accuracy than other commonly used machine learning methods. Accordingly, it is deemed an ideal and effective method for damage identification of smart structures.
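A bare-bones sketch of the general idea, not the authors' architecture: a 1-D convolutional network that takes raw multi-channel response signals, extracts and fuses features internally, and outputs damage-class scores, so that no hand-crafted signatures are needed.

```python
import torch
import torch.nn as nn

class DamageCNN(nn.Module):
    def __init__(self, n_channels=5, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),            # fuse learned features over time
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, sensors, samples)
        return self.classifier(self.features(x).flatten(1))

# Toy batch: 8 records, 5 sensor channels, 2048 time samples each.
logits = DamageCNN()(torch.rand(8, 5, 2048))
print(logits.shape)                             # (8, 4) damage-class scores
```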


2020, Vol 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale that serves as a palette of spotlights. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) was obtained by transfer training of the pre-trained GoogleNet. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work makes it possible to classify arbitrary spotlights with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.
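A short sketch of the transfer training described above, with assumptions: the number of beam-angle classes and the choice to fine-tune only the new head are placeholders, and torchvision's ImageNet-pretrained GoogLeNet stands in for the network used in the article.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 6                                   # assumed number of beam-angle classes

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)   # new classification head

# Fine-tune only the new head; the convolutional trunk keeps its pretrained
# weights (a common transfer-learning choice, not stated in the abstract).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(4, 3, 224, 224)             # placeholder spotlight images
labels = torch.randint(0, N_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```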

