Performance Evaluation of Neural Networks in Concrete Condition Assessment

Author(s): Samir N. Shoukry, D.R. Martinelli

Ultrasonic testing using the pitch-catch method is an effective technique for evaluating concrete structures that cannot be accessed on two opposing surfaces. However, the ultrasonic signals so measured are extremely noisy and contain a complicated pattern of multiple frequency-coupled reflections that makes interpretation difficult. In this investigation, a neural network modeling approach is used to classify ultrasonically tested concrete specimens into one of two classes: defective or nondefective. Different types of neural nets are used, and their performance is evaluated. It was found that correct classification of the individual ultrasonic signals could be achieved with an accuracy of 75 percent for the test set and 95 percent for the training set. These recognition rates led to the correct classification of all the individual test specimens. The study shows that although some neural net architectures may show high performance with a particular training data set, their results might not be consistent. In this paper, the consistency of the network performance was tested by shuffling the training and testing data sets.
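To make the shuffling idea concrete, the sketch below repeatedly re-splits a feature matrix into training and test sets and retrains a small feedforward net each time; the spread of test accuracies across shuffles indicates how consistent a given architecture is. The features and labels here are synthetic placeholders, not the ultrasonic measurements used in the study.

```python
# Minimal sketch (not the authors' code): checking classifier consistency by
# reshuffling the train/test split. X and y are hypothetical placeholders for
# ultrasonic signal features and defective/nondefective labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # e.g. 64 spectral features per ultrasonic signal
y = rng.integers(0, 2, size=200)        # 0 = nondefective, 1 = defective

test_scores = []
for seed in range(10):                  # shuffle the split repeatedly
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    net.fit(X_tr, y_tr)
    test_scores.append(net.score(X_te, y_te))

# A small spread across shuffles suggests consistent performance.
print("mean test accuracy:", np.mean(test_scores), "std:", np.std(test_scores))
```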


Sensors, 2020, Vol 20 (3), pp. 825
Author(s): Fadi Al Machot, Mohammed R. Elkobaisi, Kyandoghere Kyamakya

Due to significant advances in sensor technology, studies on activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results by classifying activities whose instances have already been seen during training. Activity recognition methods for real-life settings should cover a growing number of activities across various domains, whereby a significant share of instances will not be present in the training data set. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge of sensor readings for those activities. In this paper, we introduce an approach that leverages sensor data to discover new, unseen activities that were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge is transferred from seen to unseen activities using semantic similarity. The evaluation, conducted on two data sets extracted from the well-known CASAS datasets, shows that the proposed zero-shot learning approach achieves high performance in recognizing new activities that were not present in the training data.
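As a rough illustration of how semantic similarity can carry knowledge from seen to unseen activities, the sketch below maps sensor feature vectors into a semantic embedding space with a linear regressor and labels a test sample by its nearest class embedding, including embeddings of activities never seen during training. The activity names, embeddings, and sensor features are all synthetic assumptions, not the CASAS data.

```python
# Minimal illustrative sketch of zero-shot classification via semantic
# similarity (not the authors' implementation). A regressor maps sensor
# features to a semantic embedding space; an unseen activity is predicted by
# nearest cosine similarity to its class embedding. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
sem = {"cooking": rng.normal(size=50), "sleeping": rng.normal(size=50),
       "bathing": rng.normal(size=50)}          # hypothetical class embeddings

# Training data only covers the "seen" activities.
seen = ["cooking", "sleeping"]
X_train = rng.normal(size=(100, 20))            # sensor feature vectors
y_train = rng.choice(seen, size=100)
S_train = np.stack([sem[a] for a in y_train])

proj = Ridge(alpha=1.0).fit(X_train, S_train)   # features -> semantic space

# At test time, compare projections against ALL class embeddings, unseen included.
classes = list(sem)
S_all = np.stack([sem[c] for c in classes])
X_test = rng.normal(size=(5, 20))
pred = [classes[i] for i in cosine_similarity(proj.predict(X_test), S_all).argmax(axis=1)]
print(pred)
```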


2013, Vol 373-375, pp. 1212-1219
Author(s): Afrias Sarotama, Benyamin Kusumoputro

A good model is necessary in order to design a controller for a system off-line. This is especially beneficial when implementing new advanced control schemes in an Unmanned Aerial Vehicle (UAV). Considering the safety and benefit of off-line tuning of the UAV controllers, this paper identifies a nonlinear dynamic MIMO UAV system from a collection of input-output data taken from test flights (36,250 samples). These input-output flight data are grouped into two data sets. The first data set, a chirp signal, is used to train the neural network and determine its parameters (weights). Validation is performed using the second data set, which is not used for training and represents a circular UAV flight maneuver. The artificial neural network is trained on the first data set and then excited by the inputs of the second data set. The outputs predicted by the proposed neural network model closely match the desired outputs (roll, pitch, and yaw) produced by the real UAV system.
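A hedged sketch of the identification setup follows: a feedforward network regresses roll, pitch, and yaw from current inputs plus a few lagged outputs, is trained on one data set, and is validated on the other. The arrays below stand in for the chirp and circular-flight data sets and are randomly generated here.

```python
# Hedged sketch of the identification idea (not the authors' model): a
# feedforward network maps current inputs and lagged outputs to the next
# roll/pitch/yaw. "chirp_*" and "circle_*" are hypothetical stand-ins for the
# two flight data sets described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
chirp_u,  chirp_y  = rng.normal(size=(36250, 4)), rng.normal(size=(36250, 3))
circle_u, circle_y = rng.normal(size=(5000, 4)),  rng.normal(size=(5000, 3))

def make_regression_matrix(u, y, lag=2):
    """Stack current inputs with `lag` past outputs (a simple NARX-style layout)."""
    rows = [np.hstack([u[k], y[k - lag:k].ravel()]) for k in range(lag, len(u))]
    return np.asarray(rows), y[lag:]

X_tr, Y_tr = make_regression_matrix(chirp_u, chirp_y)
X_va, Y_va = make_regression_matrix(circle_u, circle_y)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=200, random_state=0)
model.fit(X_tr, Y_tr)                         # train on the chirp data set
print("validation R^2 on circular flight:", model.score(X_va, Y_va))
```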


The project “Disease Prediction Model” focuses on predicting the type of skin cancer. It deals with constructing a Convolutional Neural Network (CNN) sequential model to identify the type of skin cancer, a disease that takes a huge toll on human well-being. Since automated methods can greatly increase the accuracy of identifying the type of skin cancer, we use a CNN, implemented as a sequential model, to build our classifier. The data set considered for this project is the well-known HAM10000 dataset collected from NCBI; it consists of a large number of dermatoscopic images of common pigmented skin lesions collected from different patients. Once the dataset is collected and cleaned, it is split into training and testing data sets. We built the model with a CNN, trained it on the training data, and then evaluated it on the testing data. Once the model is applied to the testing data, plots are made to analyze the relation between epochs and the loss function, as well as between epochs and accuracy, for both the training and testing data.
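For orientation, a minimal Keras Sequential CNN of the kind described is sketched below. The image size, layer counts, and the assumption of seven HAM10000 lesion classes are illustrative choices, not the project's exact architecture, and data loading is omitted.

```python
# A minimal Sequential CNN sketch (assumed architecture, not the project's own).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # assumed image size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(7, activation="softmax"),      # 7 HAM10000 lesion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))
# The history object holds the per-epoch accuracy/loss curves mentioned above.
```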


2018, Vol 25 (3), pp. 655-670
Author(s): Tsung-Wei Ke, Aaron S. Brewster, Stella X. Yu, Daniela Ushizima, Chao Yang, ...

A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool uses a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.
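A toy version of such a screening network is sketched below: a small convolutional classifier that scores a detector image as containing Bragg spots or not. The architecture, image size, and single-channel input are assumptions for illustration only, not the published network.

```python
# Illustrative sketch of a small CNN for spot/no-spot screening of diffraction
# images (assumed architecture and sizes; PyTorch is used here for brevity).
import torch
import torch.nn as nn

class SpotClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x):                 # x: (batch, 1, H, W) detector images
        return self.head(self.features(x))

logits = SpotClassifier()(torch.randn(4, 1, 128, 128))
print(logits.shape)                       # (4, 2): spot / no-spot scores
```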


2020, Vol 493 (3), pp. 3178-3193
Author(s): Wei Wei, E A Huerta, Bradley C Whitmore, Janice C Lee, Stephen Hannon, ...

ABSTRACT We present the results of a proof-of-concept experiment that demonstrates that deep learning can successfully be used for production-scale classification of compact star clusters detected in Hubble Space Telescope (HST) ultraviolet-optical imaging of nearby spiral galaxies ($D\lesssim 20\, \textrm{Mpc}$) in the Physics at High Angular Resolution in Nearby GalaxieS (PHANGS)–HST survey. Given the relatively small size of existing, human-labelled star cluster samples, we transfer the knowledge of state-of-the-art neural network models for real-object recognition to classify star cluster candidates into four morphological classes. We perform a series of experiments to determine the dependence of classification performance on neural network architecture (ResNet18 and VGG19-BN), training data sets curated by either a single expert or three astronomers, and the size of the images used for training. We find that the overall classification accuracies are not significantly affected by these choices. The networks are used to classify star cluster candidates in the PHANGS–HST galaxy NGC 1559, which was not included in the training samples. The resulting prediction accuracies are 70 per cent, 40 per cent, 40–50 per cent, and 50–70 per cent for class 1, 2, and 3 star clusters and class 4 non-clusters, respectively. This performance is competitive with the consistency achieved in previously published human and automated quantitative classifications of star cluster candidate samples (70–80 per cent, 40–50 per cent, 40–50 per cent, and 60–70 per cent). The methods introduced here lay the foundations for automating star cluster classification at scale and highlight the need for a standardized data set of human-labelled star cluster classifications, agreed upon by a full range of experts in the field, to further improve the performance of the networks introduced in this study.
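The transfer-learning recipe described above can be sketched roughly as follows: load an ImageNet-pretrained ResNet18 from torchvision and replace its final layer with a four-way head for the morphological classes. The backbone-freezing choice and the omitted training loop are simplifications, not the paper's exact procedure.

```python
# Hedged sketch of re-heading a pretrained ResNet18 for four classes
# (recent torchvision weights API assumed; data loading and training omitted).
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights
for p in net.parameters():
    p.requires_grad = False                # optionally freeze the backbone
net.fc = nn.Linear(net.fc.in_features, 4)  # class 1-3 clusters + class 4 non-clusters
# net can now be fine-tuned on the expert-labelled cutout images.
```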


2020, Vol 224 (1), pp. 230-240
Author(s): Sean W Johnson, Derrick J A Chambers, Michael S Boltz, Keith D Koper

SUMMARY Monitoring mining-induced seismicity (MIS) can help engineers understand the rock mass response to resource extraction. With a thorough understanding of ongoing geomechanical processes, engineers can operate mines, especially those mines with the propensity for rockbursting, more safely and efficiently. Unfortunately, processing MIS data usually requires significant effort from human analysts, which can result in substantial costs and time commitments. The problem is exacerbated for operations that produce copious amounts of MIS, such as mines with high-stress and/or extraction ratios. Recently, deep learning methods have shown the ability to significantly improve the quality of automated arrival-time picking on earthquake data recorded by regional seismic networks. However, relatively little has been published on applying these techniques to MIS. In this study, we compare the performance of a convolutional neural network (CNN) originally trained to pick arrival times on the Southern California Seismic Network (SCSN) to that of human analysts on coal-mine-related MIS. We perform comparisons on several coal-related MIS data sets recorded at various network scales, sampling rates and mines. We find that the Southern-California-trained CNN does not perform well on any of our data sets without retraining. However, applying the concept of transfer learning, we retrain the SCSN model with relatively little MIS data after which the CNN performs nearly as well as a human analyst. When retrained with data from a single analyst, the analyst-CNN pick time residual variance is lower than the variance observed between human analysts. We also compare the retrained CNN to a simpler, optimized picking algorithm, which falls short of the CNN's performance. We conclude that CNNs can achieve a significant improvement in automated phase picking although some data set-specific training will usually be required. Moreover, initializing training with weights found from other, even very different, data sets can greatly reduce the amount of training data required to achieve a given performance threshold.
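The transfer-learning step might look roughly like the sketch below: a small 1-D convolutional picker stands in for the SCSN-trained network, and a brief fine-tuning loop adapts it to a handful of labelled mining-induced waveforms. The architecture, data, and loss are placeholders, not the authors' model.

```python
# Illustrative transfer-learning sketch (not the authors' code): initialize a
# picker from pretrained weights, then fine-tune on a small labelled MIS set.
import torch
import torch.nn as nn

picker = nn.Sequential(                      # stand-in for the pretrained picker
    nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(),
    nn.Conv1d(8, 8, 7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1))
# picker.load_state_dict(torch.load("scsn_pretrained.pt"))  # hypothetical SCSN weights

waveforms  = torch.randn(32, 1, 2000)        # small batch of MIS waveforms (synthetic)
pick_times = torch.rand(32, 1)               # normalized arrival times (synthetic)

opt = torch.optim.Adam(picker.parameters(), lr=1e-4)   # small LR for fine-tuning
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(picker(waveforms), pick_times)
    loss.backward()
    opt.step()
print("fine-tuning loss:", float(loss))
```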


Author(s): William Kirchner, Steve Southward, Mehdi Ahmadian

This work presents a generic, passive, non-contact acoustic health monitoring approach that uses ultrasonic acoustic emissions (UAE) to facilitate classification of bearing health via neural networks. This generic approach is applied to classifying the operating condition of conventional ball bearings. The acoustic emission signals used in this study are in the ultrasonic range (20–120 kHz), which is significantly higher than in the majority of the research in this area thus far. A direct benefit of working in this frequency range is the inherent directionality of microphones capable of measurement in this range, which becomes particularly useful when operating in environments with low signal-to-noise ratios that are common in the rail industry. Using the UAE power spectrum signature, it is possible to pose the health monitoring problem as a multi-class classification problem and make use of a multi-layer artificial neural network (ANN) to classify the UAE signature. One major problem limiting the usefulness of ANNs for failure classification is the need for large quantities of training data. This becomes a particularly important issue when considering applications involving higher-value components such as the turbo mechanisms and traction motors on diesel locomotives. Artificial training data, based on the statistical properties of a significantly smaller experimental data set, is created to train the artificial neural network. The combination of the artificial training methods and the ultrasonic frequency range being used results in an approach generic enough to suggest that this particular method is applicable to a variety of systems and components where persistent UAE exist.
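The artificial-training-data idea can be illustrated as follows: estimate the per-class mean and covariance of a small set of measured UAE power spectra, sample many synthetic spectra from those statistics, and train a multi-layer classifier on the synthetic set. The spectra and class structure below are invented for the sketch, not the authors' measurements.

```python
# Assumption-laden illustration of generating artificial training data from the
# statistics of a small measured set, then training a multi-layer classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
measured = {c: rng.normal(loc=c, size=(10, 40))       # 10 "real" spectra per condition
            for c in range(3)}                        # e.g. healthy / fault A / fault B

X_syn, y_syn = [], []
for label, spectra in measured.items():
    mu, cov = spectra.mean(axis=0), np.cov(spectra, rowvar=False)
    X_syn.append(rng.multivariate_normal(mu, cov, size=500))   # synthetic spectra
    y_syn.append(np.full(500, label))
X_syn, y_syn = np.vstack(X_syn), np.concatenate(y_syn)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X_syn, y_syn)
print("training accuracy on synthetic data:", clf.score(X_syn, y_syn))
```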


Mathematics, 2020, Vol 8 (9), pp. 1439
Author(s): Gerhard X. Ritter, Gonzalo Urcid, Luis-David Lara-Rodríguez

This paper presents a novel lattice-based biomimetic neural network trained by means of a similarity measure derived from a lattice positive valuation. For a wide class of pattern recognition problems, the proposed artificial neural network, implemented as a dendritic hetero-associative memory, delivers high percentages of successful classification. The memory is a feedforward dendritic network whose arithmetical operations are based on lattice algebra and can be applied to real multivalued inputs. In this approach, recognition tasks demonstrate the network's inherent capability to form prototype-class pattern associations in a fast and straightforward manner, without the need for any iterative scheme subject to convergence issues. Using an artificially designed data set, we show how the proposed trained neural net classifies a test input pattern. Applications to a few typical real-world data sets illustrate the overall classification performance of the network using different training and testing sample subsets generated randomly.
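As a loose illustration of classification driven by a lattice positive valuation, the sketch below uses the valuation v(x) = sum of the components of x on nonnegative feature vectors to form the similarity s(x, y) = v(min(x, y)) / v(max(x, y)) (elementwise meet and join) and assigns a test pattern to its most similar stored prototype. This is a simplified stand-in, not the paper's dendritic hetero-associative memory.

```python
# Heavily hedged sketch: prototype classification with a similarity derived
# from the lattice meet/join and the positive valuation v(x) = sum(x).
import numpy as np

def lattice_similarity(x, y):
    """Jaccard-style similarity from elementwise min (meet) and max (join)."""
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

rng = np.random.default_rng(4)
prototypes = {"class_A": rng.uniform(0, 1, 8),       # hypothetical stored patterns
              "class_B": rng.uniform(1, 2, 8)}

test_pattern = rng.uniform(0, 1, 8)
label = max(prototypes, key=lambda c: lattice_similarity(test_pattern, prototypes[c]))
print("assigned to:", label)
```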


2020, Vol 162 (10), pp. 2463-2474
Author(s): Florian Grimm, Florian Edl, Susanne R. Kerscher, Kay Nieselt, Isabel Gugel, ...

Abstract
Background: For the segmentation of medical imaging data, a multitude of precise but very specific algorithms exist. In previous studies, we investigated the possibility of segmenting MRI data to determine cerebrospinal fluid and brain volume using a classical machine learning algorithm. It demonstrated good clinical usability and a very accurate correlation of the volumes to the single area determination in a reproducible axial layer. This study aims to investigate whether these established segmentation algorithms can be transferred to new, more generalizable deep learning algorithms employing an extended transfer learning procedure, and whether medically meaningful segmentation is possible.
Methods: Ninety-five routinely performed true FISP MRI sequences were retrospectively analyzed in 43 patients with pediatric hydrocephalus. Using a freely available and clinically established segmentation algorithm based on a hidden Markov random field model, four segmentation classes (brain, cerebrospinal fluid (CSF), background, and tissue) were generated. Fifty-nine randomly selected data sets (10,432 slices) were used as a training data set. Images were augmented for contrast, brightness, and random left/right and X/Y translation. A convolutional neural network (CNN) for semantic image segmentation, composed of an encoder and a corresponding decoder subnetwork, was set up. The network was pre-initialized with layers and weights from a pre-trained VGG 16 model and subsequently trained with the labeled image data set. A validation data set of 18 scans (3289 slices) was used to monitor performance as the deep CNN trained. The classification results were tested on 18 randomly allocated labeled data sets (3319 slices) and on a T2-weighted BrainWeb data set with known ground truth.
Results: The segmentation of clinical test data provided reliable results (global accuracy 0.90, Dice coefficient 0.86), while the CNN segmentation of data from the BrainWeb data set showed comparable results (global accuracy 0.89, Dice coefficient 0.84). The segmentation of the BrainWeb data set with the classical FAST algorithm produced consistent findings (global accuracy 0.90, Dice coefficient 0.87). Likewise, the area development of brain and CSF in the long-term clinical course of three patients was presented.
Conclusion: Using the presented methods, we showed that conventional segmentation algorithms can be transferred to new advances in deep learning with comparable accuracy, generating a large number of training data sets with relatively little effort. A clinically meaningful segmentation possibility was demonstrated.
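A minimal sketch of an encoder-decoder segmentation network whose encoder is initialized from pretrained VGG16 weights is given below; the decoder, input size, and four output channels (brain, CSF, background, tissue) follow the description above only loosely, and the clinical data pipeline is omitted.

```python
# Sketch of a VGG16-initialized encoder with a simple decoder head (assumed
# layout, not the authors' exact network); recent torchvision API assumed.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features  # pretrained layers
decoder = nn.Sequential(                      # simple upsampling head back to full size
    nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 4, 1),                      # 4 segmentation classes
)
segnet = nn.Sequential(encoder, decoder)

with torch.no_grad():
    out = segnet(torch.randn(1, 3, 224, 224))
print(out.shape)                              # (1, 4, 224, 224) class scores per pixel
```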

