Application of Feedforward Neural Network and SPT Results in the Estimation of Seismic Soil Liquefaction Triggering

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Tuan Anh Pham

Soil liquefaction is a dangerous phenomenon in which soil loses its shear strength and resistance during seismic shocks, such as earthquakes, or under other sudden stress conditions, endangering the structures built on it. Determining the liquefaction and nonliquefaction capacity of soil is a difficult but necessary task when constructing structures in earthquake zones. Usually, the possibility of soil liquefaction is determined by laboratory tests on soil samples subjected to dynamic loads, which is time-consuming and costly. Therefore, this study focuses on the development of a machine learning model, a Feedforward Neural Network (FNN), to estimate the triggering of soil liquefaction under seismic conditions. The database is collected from the published literature and includes 270 liquefaction and 216 nonliquefaction case histories recorded under different geological and earthquake conditions, used to construct and validate the model. The model is built and its hyperparameters are optimized with a technique known as random search (RS). Then, the L2 regularization technique is used to address the overfitting problem of the model. The analysis results are compared with a series of empirical formulas as well as some popular machine learning (ML) models. The results show that the RS-L2-FNN model successfully predicts soil liquefaction with an accuracy of 90.33% on the entire dataset and an average accuracy of 88.4% over 300 simulations that account for random splits of the dataset. Compared with the empirical formulas as well as the other machine learning models, the RS-L2-FNN model shows superior performance and resolves the overfitting problem. In addition, a global sensitivity analysis technique is used to detect the input characteristics that most affect the prediction of liquefaction triggering. The results show that the corrected SPT resistance (N1)60 is the most important input variable for determining the liquefaction capacity of the soil. This study provides a powerful tool that allows rapid and accurate prediction of liquefaction based on several basic soil properties.
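
As a rough illustration of the modelling pipeline described above (not the study's code), the sketch below trains a feedforward classifier with an L2 penalty and tunes its hyperparameters by random search using scikit-learn. The feature columns, search ranges, and placeholder data are assumptions for illustration only.

# Minimal sketch: feedforward classifier + L2 regularization + random search.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

rng = np.random.default_rng(0)
# Placeholder data: columns could stand for (N1)60, CSR, fines content, depth, ...
X = rng.normal(size=(486, 6))            # 270 liquefaction + 216 nonliquefaction cases
y = rng.integers(0, 2, size=486)         # 1 = liquefaction triggered, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# alpha is scikit-learn's L2 penalty; random search samples architectures and penalties.
search = RandomizedSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_distributions={
        "hidden_layer_sizes": [(8,), (16,), (16, 8), (32, 16)],
        "alpha": np.logspace(-4, -1, 20),            # L2 regularization strength
        "learning_rate_init": np.logspace(-4, -2, 10),
    },
    n_iter=30, cv=5, random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))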

2021 ◽  
Author(s):  
Mohammed Ayub ◽  
SanLinn Kaka

Abstract Manual first-break picking from a large volume of seismic data is extremely tedious and costly. Deployment of machine learning models makes the process fast and cost-effective. However, these machine learning models require highly representative and effective features for accurate automatic picking. Therefore, a First-Break (FB) picking classification model that uses a minimal number of effective features and promises efficient performance is proposed. Variants of Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), can retain contextual information from long previous time steps. We exploit this advantage for FB picking, as seismic traces are sequences of vibration amplitudes along the time axis. We use the behavioral fluctuation of amplitude as the input feature for the LSTM and GRU. The models are trained on noisy data and tested for generalization on original traces not seen during the training and validation process. In order to analyze real-time suitability, the performance is benchmarked using accuracy, F1-measure, and three other established metrics. We have trained two RNN models and two deep Neural Network models for FB classification using only amplitude values as features. Both LSTM and GRU achieve an accuracy and F1-measure of 94.20%. With the same features, a Convolutional Neural Network (CNN) has an accuracy of 93.58% and an F1-score of 93.63%, while a Deep Neural Network (DNN) model scores 92.83% accuracy and 92.59% F1-measure. From the experimental results, we see that the LSTM and GRU perform significantly better than the CNN and DNN when the same features are used. To assess the robustness of the LSTM and GRU models, their performance is also compared with that of a DNN model trained on nine features derived from the seismic traces, and the superiority of the RNN models is again observed. Therefore, it is safe to conclude that the RNN models (LSTM and GRU) are capable of classifying FB events efficiently even when using a minimal number of features that are not computationally expensive. The novelty of our work is automatic FB classification with RNN models that incorporate contextual behavioral information without the need for sophisticated feature extraction or engineering techniques, which in turn reduces cost and makes the classification model more robust and faster.
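
As a hedged sketch of the kind of model described (not the authors' architecture), the snippet below builds a small Keras LSTM that labels fixed-length windows of raw trace amplitudes as containing a first break or not; swapping the LSTM layer for a GRU gives the second variant. The window length, layer sizes, and random placeholder traces are assumptions.

# Illustrative LSTM classifier over amplitude windows.
import numpy as np
import tensorflow as tf

window = 64                                               # assumed window length (time samples)
X = np.random.randn(1000, window, 1).astype("float32")    # placeholder noisy traces
y = np.random.randint(0, 2, size=(1000,))                 # 1 = window contains the first break

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),                  # use tf.keras.layers.GRU(32) for the GRU variant
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)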


2020 ◽  
Vol 12 (11) ◽  
pp. 1838 ◽  
Author(s):  
Zhao Zhang ◽  
Paulo Flores ◽  
C. Igathinathane ◽  
Dayakar L. Naik ◽  
Ravi Kiran ◽  
...  

The current mainstream approach of using manual measurements and visual inspection for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcome. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots) that consisted of unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to detect whether lodging had occurred. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in this study in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e., gray-level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu moments) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the dataset from each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For each of the three algorithms, accuracies on the first and last date datasets had the lowest and highest values, respectively. When standard deviation was incorporated as a measure of performance robustness, RF was determined to be the most satisfactory. Regarding deep learning, three different convolutional neural networks (a simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For each of the single-date datasets, GoogLeNet consistently had superior performance over the other two methods. Further comparison between RF and GoogLeNet demonstrated that the detection accuracies of the two methods were not significantly different from each other (p > 0.05); hence, choosing either of the two would not affect the final detection accuracies. However, considering that the average accuracy of GoogLeNet (93%) was higher than that of RF (91%), GoogLeNet is recommended for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet machine learning algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
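
A hedged sketch of the recommended deep-learning route is shown below: binary lodged / not-lodged classification of per-plot images by transfer learning. InceptionV3 is used here as an off-the-shelf Inception-family stand-in for GoogLeNet, and the image size, classification head, and random placeholder batch are assumptions rather than the study's configuration (real use would feed UAS RGB images cropped to individual plots).

# Transfer-learning sketch: frozen Inception-family backbone + small binary head.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                   # freeze the ImageNet feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # lodged vs. not lodged
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder batch; in practice each sample would be a UAS RGB image of one plot.
X = np.random.rand(16, 299, 299, 3).astype("float32")
y = np.random.randint(0, 2, size=(16,))
model.fit(X, y, epochs=1, batch_size=8, verbose=0)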


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Luca Baronti ◽  
Biao Zhang ◽  
Marco Castellani ◽  
Duc Truong Pham

Abstract In this paper we propose an innovative machine learning approach to the hydraulic motor load balancing problem involving intelligent optimisation and neural networks. Two different nonlinear artificial neural network approaches are investigated, and their accuracy is compared to that of a linearised analytical model. The first neural network approach uses a multi-layer perceptron to reproduce the load simulator dynamics. The multi-layer perceptron is trained using the Rprop algorithm. The second approach uses a hybrid scheme featuring an analytical model to represent the main system behaviour, and a multi-layer perceptron to reproduce the unmodelled nonlinear terms. Four techniques are tested for the optimisation of the parameters of the analytical model: random search, an evolutionary algorithm, particle swarm optimisation, and the Bees Algorithm. Experimental tests on 4500 real data samples from an electro-hydraulic load simulator rig reveal that the accuracy of the hybrid and the neural network models is comparable, and significantly superior to the accuracy of the analytical model. The results of the optimisation procedures also suggest that the inferior performance of the analytical model is likely due to the non-negligible magnitude of the unmodelled nonlinearities, rather than to a suboptimal setting of the parameters. Despite its limitations, the analytical linear model performs comparably to the state of the art in the literature, whilst the neural and hybrid approaches compare favourably.
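
A minimal sketch of the hybrid idea, under simplifying assumptions: a linear regression stands in for the linearised analytical load-simulator model, and an MLP is fitted to its residuals to absorb the unmodelled nonlinear terms. The synthetic data below replaces the rig measurements, and scikit-learn's default solver is used rather than Rprop.

# Hybrid scheme sketch: analytical (linear) model + MLP on its residuals.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(4500, 3))                         # assumed inputs (e.g. command, position, velocity)
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * np.sin(5 * X[:, 2])  # linear part + a nonlinear term

analytical = LinearRegression().fit(X, y)          # stand-in for the analytical model
residual = y - analytical.predict(X)               # what the analytical model cannot explain

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, residual)

y_hybrid = analytical.predict(X) + mlp.predict(X)  # hybrid prediction
print("analytical RMSE:", np.sqrt(np.mean((y - analytical.predict(X)) ** 2)))
print("hybrid RMSE:    ", np.sqrt(np.mean((y - y_hybrid) ** 2)))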


Author(s):  
V. N. Manjunath Aradhya ◽  
Mufti Mahmud ◽  
D. S. Guru ◽  
Basant Agarwal ◽  
M. Shamim Kaiser

Abstract Coronavirus disease (COVID-19) had infected more than 28.3 million people around the globe and killed 913K people worldwide as of 11 September 2020. To combat the spread of COVID-19 during this pandemic, effective testing methodologies and immediate medical treatment are much needed. Chest X-rays are a widely available modality for immediate diagnosis of COVID-19. Hence, automated detection of COVID-19 from chest X-ray images using machine learning approaches is in great demand. A model for detecting COVID-19 from chest X-ray images is proposed in this paper. A novel concept of cluster-based one-shot learning is introduced in this work. The introduced concept has the advantage of learning from a few samples, as opposed to learning from many samples as in deep learning architectures. The proposed model is a multi-class classification model, as it classifies images of four classes, viz., bacterial pneumonia, viral pneumonia, normal, and COVID-19. The proposed model is based on an ensemble of Generalized Regression Neural Network (GRNN) and Probabilistic Neural Network (PNN) classifiers at the decision level. The effectiveness of the proposed model has been demonstrated through extensive experimentation on a publicly available dataset consisting of 306 images. The proposed cluster-based one-shot learning has been found to be more effective with the ensembled GRNN and PNN model in distinguishing COVID-19 images from those of the other three classes. It has also been experimentally observed that the model has superior performance over contemporary deep learning architectures. The concept of cluster-based one-shot learning, the first of its kind in the literature, is expected to open up several new dimensions in the field of machine learning that warrant further research for various applications.
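
A rough sketch of the decision-level ensemble idea only, not the published model: a textbook Parzen-window (PNN-style) score per class is averaged with a GRNN-style kernel-weighted label average, and the highest combined score wins. The features, kernel width, and class labels are placeholders, and the clustering step of the cluster-based one-shot scheme is not shown.

# Decision-level fusion of PNN-style and GRNN-style scores (illustrative only).
import numpy as np

def gaussian_kernel(x, X, sigma):
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ensemble_predict(x, X_train, y_train, classes, sigma=0.5):
    # PNN-style: mean kernel response per class (class-conditional density estimate)
    pnn = np.array([gaussian_kernel(x, X_train[y_train == c], sigma).mean() for c in classes])
    # GRNN-style: kernel-weighted average of one-hot labels
    w = gaussian_kernel(x, X_train, sigma)
    onehot = (y_train[:, None] == classes[None, :]).astype(float)
    grnn = w @ onehot / (w.sum() + 1e-12)
    score = 0.5 * (pnn / (pnn.sum() + 1e-12)) + 0.5 * grnn   # decision-level fusion
    return classes[np.argmax(score)]

rng = np.random.default_rng(0)
classes = np.array([0, 1, 2, 3])        # e.g. COVID-19, viral pneumonia, bacterial pneumonia, normal
X_train = rng.normal(size=(306, 16))    # placeholder image features
y_train = rng.integers(0, 4, size=306)
print(ensemble_predict(rng.normal(size=16), X_train, y_train, classes))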


Author(s):  
Alessandro Cennamo ◽  
Florian Kaestner ◽  
Anton Kummert

Abstract The last decade has witnessed important advancements in the fields of computer vision and scene understanding, enabling applications such as autonomous vehicles. Radar is a commonly adopted sensor in the automotive industry, but its suitability for machine learning techniques still remains an open question. In this work, we propose a neural network (NN) based solution to efficiently process radar data. We introduce RadarPCNN, an architecture specifically designed for performing semantic segmentation on radar point clouds. It uses PointNet++ as a building block, enhancing the sampling stage with mean-shift, and an attention mechanism to fuse information. Additionally, we propose a machine learning radar pre-processing module that gives the network the ability to learn from radar features. We show that our solutions are effective, yielding superior performance to the state of the art.
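
A small sketch of one ingredient mentioned above, the use of mean-shift in the sampling stage: density modes of the radar point cloud are computed and could serve as centroids for local feature aggregation in a PointNet++-style network. The 2-D point layout and the bandwidth are assumptions; a real radar frame would also carry Doppler velocity and RCS per detection.

# Mean-shift modes of a toy radar point cloud as sampling seeds.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(3)
points = np.concatenate([rng.normal([5, 0], 0.5, size=(40, 2)),
                         rng.normal([-3, 4], 0.7, size=(60, 2))])   # placeholder detections (x, y)

ms = MeanShift(bandwidth=1.5).fit(points)
seeds = ms.cluster_centers_                  # density modes usable as sampling centroids
print("sampling seeds for local feature aggregation:\n", seeds)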


2021 ◽  
Vol 22 (S6) ◽  
Author(s):  
Xinnan Dai ◽  
Fan Xu ◽  
Shike Wang ◽  
Piyushkumar A. Mundra ◽  
Jie Zheng

Abstract Background Recent advances in the simultaneous measurement of RNA and protein abundances at the single-cell level provide a unique opportunity to predict protein abundance from scRNA-seq data using machine learning models. However, existing machine learning methods have not sufficiently considered the relationships among proteins. Results We formulate this task in a multi-label prediction framework in which multiple proteins are linked to each other at the single-cell level. Then, we propose a novel method for single-cell RNA to protein prediction named PIKE-R2P, which incorporates protein–protein interactions (PPI) and prior knowledge embedding into a graph neural network. Compared with existing methods, PIKE-R2P significantly improves prediction performance, achieving smaller errors and higher correlations with the gold-standard measurements. Conclusion The superior performance of PIKE-R2P indicates that adding prior knowledge of PPI to graph neural networks can be a powerful strategy for cross-modality prediction of protein abundances at the single-cell level.
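
The schematic sketch below (plain NumPy, not the PIKE-R2P implementation) shows one GCN-style propagation step in which each protein node carries features derived from scRNA-seq and exchanges information along PPI edges before an abundance is read out per protein. The tiny PPI adjacency matrix, feature sizes, and random weights are invented for illustration.

# One graph-convolution step over an assumed PPI graph.
import numpy as np

rng = np.random.default_rng(0)
n_proteins, n_features, n_hidden = 5, 8, 4

A = np.array([[0, 1, 1, 0, 0],       # assumed PPI adjacency (protein-protein edges)
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(n_proteins)                     # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalisation

H = rng.normal(size=(n_proteins, n_features))      # per-protein features from scRNA-seq (placeholder)
W = rng.normal(size=(n_features, n_hidden))
W_out = rng.normal(size=(n_hidden, 1))

H1 = np.maximum(A_norm @ H @ W, 0.0)               # propagate along PPI edges + ReLU
abundance = (H1 @ W_out).ravel()                   # one predicted abundance per protein
print(abundance)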


2019 ◽  
Author(s):  
Ryther Anderson ◽  
Achay Biong ◽  
Diego Gómez-Gualdrón

Tailoring the structure and chemistry of metal-organic frameworks (MOFs) enables the manipulation of their adsorption properties to suit specific energy and environmental applications. As there are millions of possible MOFs (with tens of thousands already synthesized), molecular simulation, such as grand canonical Monte Carlo (GCMC), has frequently been used to rapidly evaluate the adsorption performance of a large set of MOFs. This allows subsequent experiments to focus only on a small subset of the most promising MOFs. In many instances, however, even molecular simulation becomes prohibitively time-consuming, underscoring the need for alternative screening methods, such as machine learning, to precede molecular simulation efforts. In this study, as a proof of concept, we trained a neural network as the first example of a machine learning model capable of predicting full adsorption isotherms of different molecules not included in the training of the model. To achieve this, we trained our neural network only on alchemical species, represented only by their geometry and force field parameters, and used this neural network to predict the loadings of real adsorbates. We focused on predicting room-temperature adsorption of small (one- and two-atom) molecules relevant to chemical separations, namely argon, krypton, xenon, methane, ethane, and nitrogen. However, we also observed surprisingly promising predictions for more complex molecules, whose properties are outside the range spanned by the alchemical adsorbates. Prediction accuracies suitable for large-scale screening were achieved using simple MOF descriptors (e.g., geometric properties and chemical moieties) and adsorbate descriptors (e.g., force field parameters and geometry). Our results illustrate a new philosophy of training that opens the path toward the development of machine learning models that can predict the adsorption loading of any new adsorbate at any new operating conditions in any new MOF.
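
As a proof-of-concept sketch under stated assumptions (not the trained model from the study), the snippet below fits a feedforward regressor that maps concatenated MOF descriptors (e.g. geometric properties), adsorbate force-field descriptors (e.g. Lennard-Jones parameters), and a pressure point to an adsorption loading, so a full isotherm is recovered by sweeping pressure. The descriptor columns and synthetic loadings are placeholders.

# Descriptor-to-loading regressor; sweep pressure to get an isotherm.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Assumed columns: void fraction, surface area, pore diameter, epsilon, sigma, reduced pressure
X = rng.uniform(size=(5000, 6))
y = 10.0 * X[:, 0] * X[:, 3] * X[:, 5] + rng.normal(scale=0.1, size=5000)   # synthetic loadings

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)

# Predict an "isotherm" for one hypothetical MOF/adsorbate pair by sweeping pressure.
mof_adsorbate = np.array([0.7, 0.4, 0.5, 0.3, 0.6])
pressures = np.linspace(0.0, 1.0, 10)
isotherm = model.predict(np.column_stack([np.tile(mof_adsorbate, (10, 1)), pressures]))
print(isotherm)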


2020 ◽  
Vol 15 ◽  
Author(s):  
Elham Shamsara ◽  
Sara Saffar Soflaei ◽  
Mohammad Tajfard ◽  
Ivan Yamshchikov ◽  
Habibollah Esmaili ◽  
...  

Background: Coronary artery disease (CAD) is an important cause of mortality and morbidity globally. Objective: The early prediction of CAD would be valuable in identifying individuals at risk and in focusing resources on its prevention. In this paper, we aimed to establish a diagnostic model to predict CAD using three ANN approaches (pattern recognition-ANN, LVQ-ANN, and competitive ANN). Methods: One promising method for the early prediction of disease based on risk factors is machine learning. Among different machine learning algorithms, artificial neural network (ANN) algorithms have been applied widely in medicine and in a variety of real-world classification problems. An ANN is a nonlinear computational model, inspired by the human brain, for analyzing and processing complex datasets. Results: The ANN methods investigated in this paper indicate that, for both the pattern recognition-ANN and LVQ-ANN methods, predictions of the Angiography+ class have high accuracy. Moreover, in the competitive ANN, the correlation between the individuals in cluster "c" and the Angiography+ class is very high. This accuracy indicates a significant difference in some of the input features between the Angiography+ class and the other two output classes. A comparison of the weights chosen by these three methods for separating the control class and Angiography+ shows that hs-CRP, FSG, and WBC are the most substantial excitatory weights in recognizing Angiography+ individuals, whereas HDL-C and MCH are determined to be inhibitory weights. Furthermore, the effects of decomposing the multi-class problem into a set of binary classes and of random sampling on the accuracy of the diagnostic model are investigated. Conclusion: This study confirms that the pattern recognition-ANN had the most accurate performance among the different ANN methods, owing to its back-propagation procedure, in which the network classifies input variables based on labeled classes. The results of binarization show that decomposing the multi-class set into binary sets can achieve higher accuracy.
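
A simplified sketch of the pattern-recognition route only: a back-propagation MLP over the kind of risk factors named above, plus a one-vs-rest wrapper to illustrate the binary decomposition mentioned in the abstract. The data, feature ordering, class coding, and network size are placeholders, not the study cohort or configuration.

# Back-propagation MLP classifier and a one-vs-rest (binarized) variant.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Assumed feature order: hs-CRP, FSG, WBC, HDL-C, MCH
X = rng.normal(size=(500, 5))
y = rng.integers(0, 3, size=500)           # 0 = control, 1 = Angiography-, 2 = Angiography+

X_std = StandardScaler().fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_std, y)
print("multi-class training accuracy:", clf.score(X_std, y))

# Decomposition of the multi-class problem into binary one-vs-rest problems.
ovr = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)).fit(X_std, y)
print("one-vs-rest training accuracy:", ovr.score(X_std, y))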


2020 ◽  
Author(s):  
Dianbo Liu

BACKGROUND Applications of machine learning (ML) in health care can have a great impact on people's lives. At the same time, medical data is usually big, requiring a significant amount of computational resources. Although this might not be a problem for the wide adoption of ML tools in developed nations, the availability of computational resources can very well be limited in developing nations and on mobile devices. This can prevent many people from benefiting from the advancements in ML applications for healthcare. OBJECTIVE In this paper we explored three methods to increase the computational efficiency of either recurrent neural network (RNN) or feedforward (deep) neural network (DNN) models without compromising their accuracy. We used in-patient mortality prediction on an intensive care dataset as our case analysis. METHODS We reduced the size of the RNN and DNN by applying pruning of "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, to accomplish a reduction of the total number of parameters in the network. Finally, we implemented quantization on the DNN, forcing the weights to be 8 bits instead of 32 bits. RESULTS We found that all methods increased implementation efficiency (training speed, memory size, and inference speed) without reducing the accuracy of mortality prediction. CONCLUSIONS These improvements allow the implementation of sophisticated NN algorithms on devices with lower computational resources.
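
A toy sketch of the third idea (weight quantization) under simple assumptions: per-tensor affine quantization of 32-bit float weights to 8-bit integers, which shrinks weight memory roughly fourfold at a small reconstruction error. The pruning and RNN-restructuring steps are not shown, and the weight matrix below is a random placeholder.

# Post-training 8-bit quantization of a weight matrix.
import numpy as np

def quantize_int8(w):
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(256, 128).astype(np.float32)     # placeholder DNN weight matrix
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print("memory ratio:", q.nbytes / w.nbytes)           # ~0.25
print("max abs reconstruction error:", np.abs(w - w_hat).max())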


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Narjes Rohani ◽  
Changiz Eslahchi

Abstract Drug-Drug Interaction (DDI) prediction is one of the most critical issues in drug development and health. Proposing appropriate computational methods for predicting unknown DDIs with high precision is challenging. We proposed "NDD: Neural network-based method for drug-drug interaction prediction" for predicting unknown DDIs using various information about drugs. Multiple drug similarities based on drug substructure, target, side effect, off-label side effect, pathway, transporter, and indication data are calculated. First, NDD uses a heuristic similarity selection process and then integrates the selected similarities with a nonlinear similarity fusion method to achieve high-level features. Afterward, it uses a neural network for interaction prediction. The similarity selection and similarity integration parts of NDD have been proposed in previous studies of other problems. Our novelty is to combine these parts with a new neural network architecture and to apply these approaches in the context of DDI prediction. We compared NDD with six machine learning classifiers and six state-of-the-art graph-based methods on three benchmark datasets. NDD achieved superior performance in cross-validation, with AUPR ranging from 0.830 to 0.947, AUC from 0.954 to 0.994, and F-measure from 0.772 to 0.902. Moreover, cumulative evidence from case studies on numerous drug pairs further confirms the ability of NDD to predict unknown DDIs. The evaluations corroborate that NDD is an efficient method for predicting unknown DDIs. The data and implementation of NDD are available at https://github.com/nrohani/NDD.
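
An illustrative pipeline sketch only, not the NDD implementation: several drug-drug similarity matrices are fused (here by a simple average, whereas NDD uses a nonlinear similarity-fusion step), turned into per-pair feature vectors, and fed to a feedforward classifier that predicts interacting versus non-interacting pairs. All matrices and labels below are random placeholders.

# Similarity fusion + neural network for pairwise DDI prediction (sketch).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_drugs = 50
# Placeholder similarity matrices (e.g. substructure, target, side effect)
sims = [rng.uniform(size=(n_drugs, n_drugs)) for _ in range(3)]
sims = [(s + s.T) / 2 for s in sims]                  # make them symmetric

fused = np.mean(sims, axis=0)                         # stand-in for the nonlinear fusion step

pairs = [(i, j) for i in range(n_drugs) for j in range(i + 1, n_drugs)]
X = np.array([np.concatenate([fused[i], fused[j]]) for i, j in pairs])   # per-pair features
y = rng.integers(0, 2, size=len(pairs))               # 1 = known DDI, 0 = no known DDI

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))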

