Neural Network Interpretation of Ultrasonic Response for Concrete Condition Assessment

Author(s):  
Samir N. Shoukry ◽  
D.R. Martinelli

Ultrasonic testing using the pitch-catch method is an effective technique for assessing concrete structures that cannot be accessed on two opposing surfaces. However, the measured ultrasonic signals are extremely noisy and contain a complicated pattern of multiple frequency-coupled reflections, which makes interpretation difficult. In this investigation, a neural network modeling approach is used to classify ultrasonically tested concrete specimens into one of two classes: defective or nondefective. Different types of neural nets are used, and their performance is evaluated. It was found that correct classification of the individual ultrasonic signals could be achieved with an accuracy of 75 percent for the test set and 95 percent for the training set. These recognition rates led to the correct classification of all the individual test specimens. The study shows that although some neural net architectures may show high performance on a particular training data set, their results might not be consistent. In this paper, the consistency of the network performance was tested by shuffling the training and testing data sets.
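As a rough illustration of the consistency check described above (not the authors' data or networks), the sketch below repeatedly reshuffles a toy two-class "signal" set into training and testing portions and measures how much a simple classifier's test accuracy varies across shuffles; the Gaussian signal features and the midpoint-threshold classifier are hypothetical stand-ins.

```python
import random
from statistics import mean, stdev

def make_toy_signals(n=100, seed=0):
    """Toy stand-in for ultrasonic signal features: one energy value per specimen."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        defective = rng.random() < 0.5
        # assume defective specimens yield a higher-energy, noisier feature
        energy = rng.gauss(1.5 if defective else 0.5, 0.4)
        data.append((energy, defective))
    return data

def train_threshold(train):
    # "training": place the decision threshold midway between the class means
    pos = [x for x, y in train if y]
    neg = [x for x, y in train if not y]
    return (mean(pos) + mean(neg)) / 2

def accuracy(threshold, test):
    return mean((x > threshold) == y for x, y in test)

def shuffled_accuracies(data, n_shuffles=10, train_frac=0.7, seed=1):
    """Re-partition the data n_shuffles times and collect test accuracies."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_shuffles):
        d = data[:]
        rng.shuffle(d)
        cut = int(len(d) * train_frac)
        accs.append(accuracy(train_threshold(d[:cut]), d[cut:]))
    return accs

accs = shuffled_accuracies(make_toy_signals())
print(f"mean accuracy {mean(accs):.2f}, spread {stdev(accs):.2f}")
```

A small spread across shuffles suggests the result is stable rather than an artifact of one particular train/test split.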

Author(s):  
D. R. Martinelli ◽  
Samir N. Shoukry

A neural network modeling approach is used to identify concrete specimens that contain internal cracks. Different types of neural nets are used, and their performance is evaluated. Correct classification of the signals received from a cracked specimen could be achieved with an accuracy of 75 percent for the test set and 95 percent for the training set. These recognition rates led to the correct classification of all the individual test specimens. Although some neural net architectures may show high performance with a particular training data set, their results might be inconsistent. In situations in which the number of data sets is small, consistent performance of a neural network may be achieved by shuffling the training and testing data sets.


2020 ◽  
Vol 493 (3) ◽  
pp. 3178-3193 ◽  
Author(s):  
Wei Wei ◽  
E A Huerta ◽  
Bradley C Whitmore ◽  
Janice C Lee ◽  
Stephen Hannon ◽  
...  

ABSTRACT We present the results of a proof-of-concept experiment that demonstrates that deep learning can successfully be used for production-scale classification of compact star clusters detected in Hubble Space Telescope (HST) ultraviolet-optical imaging of nearby spiral galaxies ($D\lesssim 20\, \textrm{Mpc}$) in the Physics at High Angular Resolution in Nearby GalaxieS (PHANGS)–HST survey. Given the relatively small size of existing, human-labelled star cluster samples, we transfer the knowledge of state-of-the-art neural network models for real-object recognition to classify star cluster candidates into four morphological classes. We perform a series of experiments to determine the dependence of classification performance on neural network architecture (ResNet18 and VGG19-BN), on training data sets curated by either a single expert or three astronomers, and on the size of the images used for training. We find that the overall classification accuracies are not significantly affected by these choices. The networks are used to classify star cluster candidates in the PHANGS–HST galaxy NGC 1559, which was not included in the training samples. The resulting prediction accuracies are 70 per cent, 40 per cent, 40–50 per cent, and 50–70 per cent for class 1, 2, and 3 star clusters and class 4 non-clusters, respectively. This performance is competitive with the consistency achieved in previously published human and automated quantitative classifications of star cluster candidate samples (70–80 per cent, 40–50 per cent, 40–50 per cent, and 60–70 per cent). The methods introduced herein lay the foundations for automating star cluster classification at scale, and highlight the need for a standardized data set of human-labelled star cluster classifications, agreed upon by a full range of experts in the field, to further improve the performance of the networks introduced in this study.
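The transfer-learning setup described above can be caricatured in a few lines; everything below is a hypothetical stand-in, not the authors' pipeline: a fixed random projection plays the role of the frozen pretrained backbone, and a perceptron plays the role of the small classifier head that is retrained on the new labels.

```python
import random

random.seed(0)
N_RAW, N_FEAT = 8, 4

# frozen "backbone": a fixed projection assumed to have been learned elsewhere
W_frozen = [[random.gauss(0, 1) for _ in range(N_RAW)] for _ in range(N_FEAT)]

def features(x):
    """Apply the frozen feature extractor (never updated during retraining)."""
    return [sum(w * v for w, v in zip(row, x)) for row in W_frozen]

def make_example(label, rng):
    # toy two-class inputs: raw components centered at +1 or -1
    base = 1.0 if label == 1 else -1.0
    return [base + rng.gauss(0, 0.3) for _ in range(N_RAW)], label

rng = random.Random(1)
train = [make_example(1 if i % 2 else -1, rng) for i in range(40)]

# train ONLY the head (a perceptron) on the frozen features
head = [0.0] * N_FEAT
for _ in range(20):
    for x, y in train:
        f = features(x)
        if (sum(h * v for h, v in zip(head, f)) > 0) != (y == 1):
            head = [h + y * v for h, v in zip(head, f)]

correct = sum((sum(h * v for h, v in zip(head, features(x))) > 0) == (y == 1)
              for x, y in train)
print(correct, "/", len(train))
```

The design point is that only the head's parameters change, which is why a small labelled sample can suffice when the backbone already encodes useful features.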


Author(s):  
William Kirchner ◽  
Steve Southward ◽  
Mehdi Ahmadian

This work presents a generic, passive, non-contact acoustic health monitoring approach that uses ultrasonic acoustic emissions (UAE) to classify bearing health via neural networks. The approach is applied to classifying the operating condition of conventional ball bearings. The acoustic emission signals used in this study are in the ultrasonic range (20–120 kHz), significantly higher than in the majority of research in this area thus far. A direct benefit of working in this frequency range is the inherent directionality of microphones capable of measurement in this range, which becomes particularly useful when operating in environments with low signal-to-noise ratios, as are common in the rail industry. Using the UAE power spectrum signature, it is possible to pose the health monitoring problem as a multi-class classification problem and use a multi-layer artificial neural network (ANN) to classify the UAE signature. One major problem limiting the usefulness of ANNs for failure classification is the need for large quantities of training data. This becomes particularly important in applications involving higher-value components, such as the turbo mechanisms and traction motors on diesel locomotives. Artificial training data, based on the statistical properties of a significantly smaller experimental data set, are created to train the artificial neural network. The combination of the artificial training methods and the ultrasonic frequency range results in an approach generic enough to suggest that it is applicable to a variety of systems and components where persistent UAE exist.
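A minimal sketch of the artificial-training-data idea, under the assumption that each feature can be modelled by its sample mean and standard deviation: estimate those statistics from a small measured set and draw a larger synthetic training set from them. The feature values below are illustrative stand-ins, not the paper's UAE spectra.

```python
import random
from statistics import mean, stdev

def synthesize(measured, n_synthetic, seed=0):
    """measured: list of feature vectors (lists of floats).
    Returns n_synthetic vectors drawn from per-feature Gaussian fits."""
    rng = random.Random(seed)
    n_features = len(measured[0])
    stats = []
    for j in range(n_features):
        col = [row[j] for row in measured]
        stats.append((mean(col), stdev(col)))
    return [[rng.gauss(m, s) for m, s in stats] for _ in range(n_synthetic)]

# small "experimental" set: 5 samples, 3 hypothetical spectral features each
small_set = [[1.0, 0.2, 5.1], [1.1, 0.25, 4.9], [0.9, 0.18, 5.3],
             [1.05, 0.22, 5.0], [0.95, 0.21, 5.2]]
augmented = synthesize(small_set, n_synthetic=200)
print(len(augmented), "synthetic vectors of length", len(augmented[0]))
```

A real implementation would also need to preserve correlations between features (e.g., by sampling from a fitted multivariate distribution), which this per-feature sketch deliberately omits.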


1994 ◽  
Vol 73 (11) ◽  
pp. 812-823 ◽  
Author(s):  
Barry P. Kimberley ◽  
Brent M. Kimberley ◽  
Leah Roth

Distortion Product Emission (DPE) growth functions, demographic data, and pure tone thresholds were recorded in 229 normal-hearing and hearing-impaired ears. Half of the data set (115 ears) was used to train a set of neural networks to map DPE and demographic features to pure tone thresholds at six frequencies in the audiometric range. The six networks developed from this training process were then used to predict pure tone thresholds in the remaining 114-ear data set. When normal pure tone threshold was defined as a threshold less than 20 dB HL, frequency-specific prediction accuracy varied from 57% (correct classification of hearing impairment at 1025 Hz) to 100% (correct classification of hearing impairment at 2050 Hz). Overall prediction accuracy was 90% for impaired pure tone thresholds and 80% for normal pure tone thresholds. This neural network approach was found to be more accurate than discriminant analysis in the prediction of pure tone thresholds. It is concluded that a general strategy exists whereby DPE measures can accurately categorize pure tone thresholds as normal or impaired in large populations of ears with purely cochlear hearing dysfunction.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1688
Author(s):  
Luqman Ali ◽  
Fady Alnajjar ◽  
Hamad Al Jassmi ◽  
Munkhjargal Gochoo ◽  
Wasif Khan ◽  
...  

This paper proposes a customized convolutional neural network for crack detection in concrete structures. The proposed method is compared to four existing deep learning methods based on training data size, data heterogeneity, network complexity, and the number of epochs. The performance of the proposed convolutional neural network (CNN) model is evaluated and compared to pretrained networks, i.e., the VGG-16, VGG-19, ResNet-50, and Inception V3 models, on eight datasets of different sizes created from two public datasets. For each model, the evaluation considered computational time, crack localization results, and classification measures, e.g., accuracy, precision, recall, and F1-score. Experimental results demonstrated that training data size and heterogeneity among data samples significantly affect model performance. All models demonstrated promising performance on a limited amount of diverse training data; however, increasing the training data size and reducing diversity reduced generalization performance and led to overfitting. The proposed customized CNN and VGG-16 models outperformed the other methods in terms of classification, localization, and computational time on a small amount of data, and the results indicate that these two models demonstrate superior crack detection and localization for concrete structures.


2014 ◽  
Vol 539 ◽  
pp. 181-184
Author(s):  
Wan Li Zuo ◽  
Zhi Yan Wang ◽  
Ning Ma ◽  
Hong Liang

Accurate classification of text is a basic prerequisite for efficiently extracting various types of information from the Web and making proper use of network resources. In this paper, a new text classification method is proposed. Consistency analysis is an iterative algorithm that trains several different (weak) classifiers on the same training set and then combines them to test how consistently the various classification methods label the same text, thereby exploiting the knowledge captured by each type of classifier. The weight of each sample is determined according to whether that sample was classified correctly in each training round, as well as by the accuracy of the previous overall classification; the reweighted data set is then passed to the next classifier for training. In the end, the trained classifiers are integrated into the final decision classifier. A classifier built with consistency analysis can discard unnecessary training-data characteristics and concentrate on the key words in the key training data. According to the experimental results, the average accuracy of this method is 91.0%, and the average recall rate is 88.1%.
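The iterative reweighting described above closely resembles boosting. As a hypothetical sketch (on 1-D toy data, not the paper's text features or classifiers), the following implements an AdaBoost-style loop with decision stumps: misclassified samples gain weight before the next weak learner is trained, and the weighted learners are combined into a final decision.

```python
import math

def train_stump(xs, ys, ws):
    """Pick the threshold and polarity minimizing weighted error."""
    best = None
    for t in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if (pol if x > t else -pol) != y)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(xs, ys, rounds=5):
    ws = [1 / len(xs)] * len(xs)          # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        err, t, pol = train_stump(xs, ys, ws)
        err = max(err, 1e-10)             # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # up-weight misclassified samples, down-weight correct ones
        ws = [w * math.exp(-alpha * y * (pol if x > t else -pol))
              for x, y, w in zip(xs, ys, ws)]
        z = sum(ws)
        ws = [w / z for w in ws]
    return ensemble

def predict(ensemble, x):
    s = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if s > 0 else -1

xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = adaboost(xs, ys)
print(sum(predict(model, x) == y for x, y in zip(xs, ys)), "of", len(xs), "correct")
```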


Author(s):  
M. Takadoya ◽  
M. Notake ◽  
M. Kitahara ◽  
J. D. Achenbach ◽  
Q. C. Guo ◽  
...  

2017 ◽  
Vol 25 (0) ◽  
pp. 42-48 ◽  
Author(s):  
Abul Hasnat ◽  
Anindya Ghosh ◽  
Amina Khatun ◽  
Santanu Halder

This study proposes a fabric defect classification system using a Probabilistic Neural Network (PNN) and its hardware implementation on a Field Programmable Gate Array (FPGA) based system. The PNN classifier achieves an accuracy of 98 ± 2% on the test data set, whereas the FPGA-based hardware implementation of the PNN classifier realises about 94 ± 2% testing accuracy. The FPGA system operates at up to 50.777 MHz, corresponding to a clock period of 19.694 ns.
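At its core, a PNN is a kernel-density classifier: each training sample contributes a Gaussian kernel, kernel responses are summed per class, and the class with the largest summed likelihood wins. The feature vectors below are toy stand-ins for fabric-defect features, not the paper's data.

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """train: list of (feature_vector, label); returns the predicted label.
    sigma is the smoothing parameter of the Gaussian kernels."""
    scores = {}
    for vec, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(vec, x))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

train = [([0.1, 0.2], "defect"), ([0.15, 0.25], "defect"),
         ([0.9, 0.8], "ok"), ([0.85, 0.9], "ok")]
print(pnn_classify(train, [0.2, 0.2]))
print(pnn_classify(train, [0.8, 0.85]))
```

This structure maps naturally to hardware: the per-sample kernel evaluations are independent and can be computed in parallel, which is one reason PNNs lend themselves to FPGA implementation.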


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Jeffrey Micher

We present a method for building a morphological generator from the output of an existing analyzer for Inuktitut, in the absence of a two-way finite state transducer that would normally provide this functionality. We make use of a sequence-to-sequence neural network which “translates” underlying Inuktitut morpheme sequences into surface character sequences. The neural network uses only the previous and the following morphemes as context. We report a morpheme accuracy of approximately 86%. We are able to increase this accuracy slightly by passing deep morphemes directly to the output for unknown morphemes. We do not see significant improvement when increasing the training data set size, and postulate possible causes for this.


2014 ◽  
Vol 17 (1) ◽  
pp. 56-74 ◽  
Author(s):  
Gurjeet Singh ◽  
Rabindra K. Panda ◽  
Marc Lamers

The study was undertaken in a small agricultural watershed, Kapgari in eastern India, with a drainage area of 973 ha. The watershed was subdivided into three sub-watersheds on the basis of drainage network and land topography. An attempt was made to relate the continuously monitored runoff data from the sub-watersheds and the whole watershed to the rainfall and temperature data using the artificial neural network (ANN) technique. The study also evaluated the bias in the prediction of daily runoff for shorter training data sets using different resampling techniques with ANN modeling. A 10-fold cross-validation (CV) technique was used to find the optimum number of neurons in the hidden layer and to avoid over-fitting during training on shorter data records. The results illustrated that, using the 10-fold CV method, the ANN models developed with a shorter training data set avoid over-fitting during training. Moreover, the bias was investigated using a bootstrap resampling technique based ANN (BANN) for short training data sets. In comparison with the 10-fold CV technique, the BANN is more efficient at solving the problems of over-fitting and under-fitting when training models on a shorter data record.
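A minimal sketch of the bootstrap-resampling idea behind a BANN: train one model per bootstrap replicate of a short record and average their predictions. A closed-form 1-D linear fit stands in for the ANN here, and the rainfall/runoff numbers are toy values, not the Kapgari data.

```python
import random
from statistics import mean

def fit_line(pairs):
    """Ordinary least-squares fit y = a + b*x; returns (slope, intercept)."""
    xs, ys = zip(*pairs)
    xbar, ybar = mean(xs), mean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in pairs) / \
        sum((x - xbar) ** 2 for x in xs)
    return b, ybar - b * xbar

def bootstrap_predict(data, x_new, n_boot=200, seed=0):
    """Average the predictions of models fitted to bootstrap replicates."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]   # resample with replacement
        if len({x for x, _ in sample}) < 2:         # skip degenerate replicates
            continue
        b, a = fit_line(sample)
        preds.append(a + b * x_new)
    return mean(preds)

# toy daily (rainfall mm, runoff mm) pairs
data = [(5, 1.0), (10, 2.1), (20, 4.2), (30, 6.5), (40, 8.1), (50, 10.4)]
print(round(bootstrap_predict(data, 25), 2))
```

The spread of the per-replicate predictions (not shown) is what allows the bias and variance of the predictor to be assessed from a short record.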

