Modelling of Seismic Liquefaction Using Classification Techniques

2021 ◽  
Vol 12 (1) ◽  
pp. 12-21
Author(s):  
Azad Kumar Mehta ◽  
Deepak Kumar ◽  
Pijush Samui

Liquefaction susceptibility of soil is a complex problem because of the non-linear behaviour of soil and its physical attributes. Liquefaction potential is commonly assessed using in-situ testing methods. The liquefaction classification problem is non-linear in nature and difficult to model with traditional techniques when all independent variables (seismic and soil properties) are considered. In this study, four different classification techniques were used, namely Fast k-NN (F-kNN), Naïve Bayes Classifier (NBC), Decision Forest Classifier (DFC), and Group Method of Data Handling (GMDH). An SPT-based case record was used to train and validate the models. The performance of these models was assessed using different indexes, namely sensitivity, specificity, type-I error, type-II error, and accuracy rate. Additionally, receiver operating characteristic (ROC) curves were plotted for a comparative study. The results show that the F-kNN models perform far better than the other models and can be used as a reliable technique for analysing the liquefaction susceptibility of soil.
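The performance indexes named above all follow directly from the confusion matrix. Below is a minimal sketch of how they and the ROC curve inputs are computed; the labels and scores are synthetic placeholders, not the paper's SPT-based case records.

```python
# Minimal sketch of the performance indexes, computed from a confusion matrix.
# Labels and scores below are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = liquefaction observed
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3, 0.55, 0.45])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
type_i = fp / (fp + tn)             # false positive rate = 1 - specificity
type_ii = fn / (fn + tp)            # false negative rate = 1 - sensitivity
accuracy = (tp + tn) / len(y_true)

fpr, tpr, _ = roc_curve(y_true, y_score)   # points for the ROC plot
print(sensitivity, specificity, type_i, type_ii, accuracy,
      roc_auc_score(y_true, y_score))
```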

1996 ◽  
Vol 1 (1) ◽  
pp. 25-28 ◽  
Author(s):  
Martin A. Weinstock

Background: Accurate understanding of certain basic statistical terms and principles is key to critical appraisal of published literature. Objective: This review describes type I error, type II error, null hypothesis, p value, statistical significance, α, two-tailed and one-tailed tests, effect size, alternate hypothesis, statistical power, β, publication bias, confidence interval, standard error, and standard deviation, while including examples from reports of dermatologic studies. Conclusion: The application of the results of published studies to individual patients should be informed by an understanding of certain basic statistical concepts.
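The relation between α, β, and power can be made concrete with a small Monte Carlo sketch; the sample size and effect size below are arbitrary choices for illustration, not drawn from the review.

```python
# Monte Carlo sketch of alpha (type I error rate), beta (type II error rate),
# and power = 1 - beta for a two-tailed two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, effect, trials = 0.05, 30, 0.5, 5000

def rejection_rate(delta):
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(delta, 1.0, n)
        _, p = stats.ttest_ind(a, b)     # two-tailed p value
        hits += p < alpha
    return hits / trials

type_i = rejection_rate(0.0)    # null hypothesis true: should approximate alpha
power = rejection_rate(effect)  # null hypothesis false: 1 - beta
print(f"type I ~ {type_i:.3f}, power ~ {power:.3f}, beta ~ {1 - power:.3f}")
```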


2013 ◽  
Vol 19 (4) ◽  
pp. 505-517 ◽  
Author(s):  
Jui-Sheng Chou ◽  
Chih-Fong Tsai ◽  
Yu-Hsin Lu

This study compares several well-known machine learning techniques for public-private partnership (PPP) project dispute problems. Single and hybrid classification techniques are applied to construct models for PPP project dispute prediction. The single classification techniques utilized are multilayer perceptron (MLP) neural networks, decision trees (DTs), support vector machines, the naïve Bayes classifier, and k-nearest neighbor. Two types of hybrid learning models are developed: one combines clustering and classification techniques, and the other combines multiple classification techniques. Experimental results indicate that hybrid models outperform single models in prediction accuracy, Type I and II errors, and the receiver operating characteristic curve. Additionally, the hybrid model combining multiple classification techniques performs better than the one combining clustering and classification techniques. In particular, the MLP+MLP and DT+DT models perform best and second best, achieving prediction accuracies of 97.08% and 95.77%, respectively. This study demonstrates the efficiency and effectiveness of hybrid machine learning techniques for early prediction of dispute occurrence using conceptual project information as model input. The models provide a proactive warning and the decision-support information needed to select the appropriate resolution strategy before a dispute occurs.
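The two hybrid patterns can be sketched generically as follows; this is an illustration on synthetic data of clustering+classification and classifier+classifier stacking, not the authors' exact MLP+MLP or DT+DT configurations.

```python
# Generic sketch of the two hybrid patterns: (a) clustering followed by
# per-cluster classifiers, and (b) one classifier feeding another (stacking).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# (a) clustering + classification: one decision tree per k-means cluster
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
trees = {c: DecisionTreeClassifier(random_state=0)
             .fit(X[km.labels_ == c], y[km.labels_ == c]) for c in (0, 1)}
preds_a = [trees[c].predict(x.reshape(1, -1))[0]
           for c, x in zip(km.predict(X[:5]), X[:5])]

# (b) classification + classification: an MLP whose outputs feed a second MLP
hybrid = StackingClassifier(
    estimators=[("mlp1", MLPClassifier(max_iter=1000, random_state=0))],
    final_estimator=MLPClassifier(max_iter=1000, random_state=0))
hybrid.fit(X, y)
print(preds_a, hybrid.score(X, y))
```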


2009 ◽  
Vol 96 (3) ◽  
pp. 522a ◽  
Author(s):  
Stefan Muenster ◽  
Philip Kollmannsberger ◽  
Thorsten M. Koch ◽  
Louise M. Jawerth ◽  
David A. Vader ◽  
...  

Author(s):  
D. Attaf ◽  
K. Djerriri ◽  
D. Mansour ◽  
D. Hamdadou

Mapping of burned areas caused by forest fires has always been a central concern for researchers in the field of remote sensing, and various spectral indices and classification techniques have been proposed in the literature. In such a problem, only one specific class is of real interest, so it can be treated as a one-class classification problem. One-class classification methods are highly desirable for quick mapping of classes of interest. A commonly used solution to the one-class classification problem is the one-class support vector machine (OC-SVM), which has proved useful in the classification of remote sensing images. However, overfitting and the difficulty of tuning parameters are major obstacles for this method. The newer Presence and Background Learning (PBL) framework does not require complicated model selection and can generate very high accuracy results. On the other hand, the Google Earth Engine (GEE) portal provides access to satellite and other ancillary data, cloud computing, and algorithms for processing large amounts of data with relative ease. Therefore, this study mainly aims to investigate the possibility of using the PBL framework within the GEE platform to extract burned areas from the freely available Landsat archive for the year 2015. The quality of the results obtained using the PBL framework was assessed against ground truth digitized by qualified technicians and compared with other classification techniques: thresholding the burned area spectral index (BAI) and the OC-SVM classifier. Experimental results demonstrate that the PBL framework achieves higher classification accuracy in mapping burned areas than the other classifiers and highlight its suitability for cases with few positive labelled samples available, which eases the tedious work of manual digitizing.
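The OC-SVM baseline, which trains on samples of the target class only, can be sketched as below; the feature values are synthetic placeholders rather than Landsat band or BAI values from GEE, and the PBL framework itself is not reproduced here.

```python
# Sketch of the OC-SVM baseline: train on "burned" samples only, then flag
# burned pixels in an unlabeled scene. Feature values are synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
burned_train = rng.normal(loc=0.6, scale=0.1, size=(200, 4))   # positives only
scene_pixels = rng.normal(loc=0.3, scale=0.2, size=(1000, 4))  # unlabeled

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(burned_train)
labels = ocsvm.predict(scene_pixels)  # +1 = resembles the target class, -1 = not
print((labels == 1).sum(), "pixels flagged as burned")
```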


2018 ◽  
Vol 3 (6) ◽  
pp. 32 ◽  
Author(s):  
Aliyu Ozovehe ◽  
Okpo U. Okereke ◽  
Anene E. Chibuzo ◽  
Abraham U. Usman

Traffic congestion prediction is a non-linear process that involves extracting valuable information from a set of traffic data; linear regression or auto-regression models cannot be applied because they are limited in their ability to deal with such problems. Artificial Intelligence (AI) techniques, however, have shown great ability to deal with non-linear problems, and two such techniques that have found application in traffic prediction are Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS). In this work, a Multiple Layer Perceptron Neural Network (MLP-NN), a Radial Basis Function Neural Network (RBF-NN), the Group Method of Data Handling (GMDH), and an Adaptive Neuro-Fuzzy Inference System (ANFIS) were trained on busy hour (BH) traffic measurement data taken from some GSM/GPRS sites in Abuja, Nigeria. The trained networks were then used to predict traffic congestion for some macrocells, and their accuracies were compared using four statistical indices. On average, the GMDH model gave goodness of fit (R²), root mean square error (RMSE), standard deviation (σ), and mean absolute error (µ) values of 99, 3.16, 3.53 and 2.32%, respectively. The GMDH model had the best fit in all cases and on average predicted better than the ANFIS, MLP, and RBF models. The GMDH model offers improved prediction results, increasing R² by 20% and reducing RMSE by 60% relative to ANFIS, the closest model to the GMDH in terms of prediction accuracy.
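For reference, the four statistical indices can be computed as in this sketch; the observed and predicted busy-hour traffic values are placeholders, not the Abuja measurements.

```python
# The four comparison indices computed for placeholder observed/predicted values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

observed = np.array([12.1, 15.3, 14.8, 16.2, 13.5, 17.0])
predicted = np.array([12.4, 15.0, 14.9, 15.8, 13.9, 16.7])

r2 = r2_score(observed, predicted)                       # goodness of fit
rmse = np.sqrt(mean_squared_error(observed, predicted))  # root mean square error
sigma = np.std(observed - predicted, ddof=1)             # std. dev. of errors
mae = mean_absolute_error(observed, predicted)           # mean absolute error
print(f"R2={r2:.3f} RMSE={rmse:.3f} sigma={sigma:.3f} MAE={mae:.3f}")
```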


2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Zachary R. McCaw ◽  
Thomas Colthurst ◽  
Taedong Yun ◽  
Nicholas A. Furlotte ◽  
Andrew Carroll ◽  
...  

Genome-wide association studies (GWASs) examine the association between genotype and phenotype while adjusting for a set of covariates. Although the covariates may have non-linear or interactive effects, GWASs often neglect such terms because of the challenge of specifying the model. Here we introduce DeepNull, a method that identifies and adjusts for non-linear and interactive covariate effects using a deep neural network. In analyses of simulated and real data, we demonstrate that DeepNull maintains tight control of the type I error while increasing statistical power by up to 20% in the presence of non-linear and interactive effects. Moreover, in the absence of such effects, DeepNull incurs no loss of power. When applied to 10 phenotypes from the UK Biobank (n = 370K), DeepNull discovered more hits (+6%) and loci (+7%), on average, than conventional association analyses, many of which are biologically plausible or have previously been reported. Finally, DeepNull improves upon linear modeling for phenotypic prediction (+23% on average).
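Conceptually, the approach can be sketched in a few lines: fit a flexible model of phenotype on covariates alone, then include its prediction as an extra covariate in the per-variant linear association test. The sketch below uses simulated data and a generic MLP; it is not the published DeepNull implementation.

```python
# Conceptual sketch of the DeepNull idea on simulated data.
import numpy as np
import statsmodels.api as sm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
covars = rng.normal(size=(n, 3))                    # e.g. age/sex/BMI stand-ins
genotype = rng.binomial(2, 0.3, size=n).astype(float)
# phenotype with a non-linear, interactive covariate effect + small genetic effect
pheno = np.sin(covars[:, 0]) * covars[:, 1] + 0.1 * genotype + rng.normal(size=n)

null_pred = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0).fit(covars, pheno).predict(covars)

X = sm.add_constant(np.column_stack([genotype, covars, null_pred]))
fit = sm.OLS(pheno, X).fit()
print("genotype p-value:", fit.pvalues[1])          # column 1 is the genotype
```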


Information ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 160
Author(s):  
Jarmila Horváthová ◽  
Martina Mokrišová

This paper focuses on business financial health evaluation with the use of selected mathematical and statistical methods. The issue of financial health assessment and prediction of business failure is a widely discussed topic across various industries in Slovakia and abroad. The aim of this paper was to formulate a data envelopment analysis (DEA) model and to verify the estimation accuracy of this model in comparison with the logit model. The research was carried out on a sample of companies operating in the field of heat supply in Slovakia. For this sample of businesses, we selected appropriate financial indicators as determinants of bankruptcy. The indicators were selected using related empirical studies, a univariate logit model, and a correlation matrix. In this paper, we applied two main models: the BCC DEA model, processed in DEAFrontier software, and the logit model, processed in Statistica software. We compared the estimation accuracy of the constructed models using type I and type II errors. The main conclusion of the paper is that the DEA method is a suitable alternative for assessing the financial health of businesses from the analyzed sample. In contrast to the logit model, the results of this method are independent of any assumptions.
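As an illustration of the logit side of the comparison, the sketch below fits a logit model to synthetic financial-indicator data and computes the two error types; the error-type convention follows the one common in bankruptcy-prediction studies and may differ from the paper's, and the DEA (BCC) model requires a linear-programming formulation that is not reproduced here.

```python
# Sketch of the logit half of the comparison on synthetic indicator data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
ratios = rng.normal(size=(n, 3))   # stand-ins for the selected indicators
failed = (ratios @ np.array([1.0, -0.8, 0.5])
          + rng.normal(size=n) > 0.7).astype(int)

logit = sm.Logit(failed, sm.add_constant(ratios)).fit(disp=0)
pred = (logit.predict(sm.add_constant(ratios)) >= 0.5).astype(int)

type_i = np.mean(pred[failed == 1] == 0)   # failed firm classed as healthy
type_ii = np.mean(pred[failed == 0] == 1)  # healthy firm classed as failed
print(f"type I = {type_i:.3f}, type II = {type_ii:.3f}")
```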


2019 ◽  
Vol 69 ◽  
pp. 00014 ◽  
Author(s):  
Mikhail Basimov

The article raises the question of how statistical dependencies are studied in sociological research. When sociologists examine cause-and-effect relations, most interpret only linear relations and build only linear models from their data. The problem is not merely that sociologists who interpret linear dependencies overlook a large number of simple non-linear relations (type 1 errors), often without grasping the essence of the issue. Over the last 20-25 years, the processes under study have rarely been simple enough to describe with linear models, and sociologists have, consciously or not, taken to presenting weak linear relations as "significant": relying on the test of the zero-correlation hypothesis (the significance stars in SPSS), they tacitly treat weak correlations as strong enough to warrant causal interpretation. There is an even more serious error (type 2 errors): failing to notice not only the simplest non-linear dependencies, but strong simple non-linear dependencies between parameters whose linear approximations yield a weak, or even very weak, correlation (0.11-0.3), which completely distorts the real picture of the phenomenon or process under study. The result is scientific knowledge that does not correspond to reality, which encourages the parallel development of philosophical (qualitative) analysis of social processes based mainly on an intuitive understanding of social problems, and gives rise to contradictions between the approaches. The article examines several individual dependencies and their interpretation, drawing on the results of a study of young people's political preferences, to demonstrate type 1 and type 2 errors.
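The central point is easy to reproduce: the sketch below simulates a strong quadratic dependence whose linear (Pearson) correlation is near zero, while a quadratic fit recovers almost all of the variance.

```python
# Simulated illustration: a strong non-linear dependence that a linear
# correlation coefficient almost entirely misses.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 500)
y = x**2 + rng.normal(scale=0.05, size=500)    # strong simple non-linear relation

r_linear = np.corrcoef(x, y)[0, 1]             # the weak linear approximation
y_hat = np.polyval(np.polyfit(x, y, 2), x)     # second-degree fit
r2_quad = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Pearson r = {r_linear:.2f}, quadratic R^2 = {r2_quad:.2f}")
```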


2014 NORCHIP ◽  
2014 ◽  
Author(s):  
Jue Shen ◽  
Fredrik Jonsson ◽  
Jian Chen ◽  
Hannu Tenhunen ◽  
Lirong Zheng
Keyword(s):  
Type I ◽  
