Prediction Models for Truck Accidents at Freeway Ramps in Washington State Using Regression and Artificial Intelligence Techniques

Author(s):  
Wael H. Awad ◽  
Bruce N. Janson

Three different modeling approaches were applied to explain truck accidents at interchanges in Washington State during a 27-month period. Three models were developed for each ramp type: linear regression, neural networks, and a hybrid system combining fuzzy logic and neural networks. The study showed that linear regression was able to predict accident frequencies that fell within one standard deviation of the overall mean of the dependent variable. However, the coefficient of determination was very low in all cases. The other two artificial intelligence (AI) approaches showed a high level of performance in identifying different patterns of accidents in the training data and provided a better fit than the regression model. However, these AI models generalized poorly to test data that were not included in the training process.
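The linear-regression baseline described above can be sketched as ordinary least squares on ramp-level predictors, with the coefficient of determination (R2) as the fit measure the study reports as low. The predictors below (ramp volume and truck share) and all numbers are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical training data: ramp volume (thousands of vehicles/day),
# truck share (%), and observed accident counts per ramp.
X = np.array([[5.0, 10.0], [12.0, 25.0], [8.0, 15.0], [20.0, 30.0], [3.0, 5.0]])
y = np.array([1.0, 4.0, 2.0, 6.0, 0.0])

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
# Coefficient of determination (R^2) on the training data.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```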

Author(s):  
Paul J. Roebber

Abstract We introduce an adaptive form of postprocessor whose algorithm structures are neural networks in which the number of hidden nodes and the network training features evolve. Key potential advantages of this system are the flexible, nonlinear mapping capabilities of neural networks and, through backpropagation, the ability to rapidly establish capable predictors in an algorithm population. The system can be implemented after one initial training process, and future changes to postprocessor inputs (new observations, new inputs, or model upgrades) are incorporated as they become available. As in prior work, the implementation in the form of a predator-prey ecosystem allows for the ready construction of ensembles. Computational requirements are minimal, and the use of a moving data window means that data storage requirements are constrained. The system adds predictive skill to a demonstration dynamical model representing the hemispheric circulation, with skill competitive with or exceeding that obtainable from multiple linear regression and standard artificial neural networks constructed under typical operational limitations. The system incorporates new information rapidly, and the dependence of the approach on the training data size is similar to that of multiple linear regression. A loss of performance occurs relative to a fixed neural network architecture in which only the weights are adjusted after training, but this loss is compensated for by gains from the ensemble predictions. While the demonstration dynamical model is complex, current numerical weather prediction models are considerably more so, and thus a future step will be to apply this technique to operational weather forecast data.
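The moving data window mentioned above can be sketched as a fixed-capacity buffer that discards the oldest samples as new observations arrive, which is what keeps storage bounded. This is a generic illustration, not the paper's implementation:

```python
from collections import deque

class MovingWindow:
    """Keep only the most recent `capacity` training samples."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def add(self, sample):
        self.buf.append(sample)  # oldest sample drops out automatically

    def training_set(self):
        return list(self.buf)

window = MovingWindow(capacity=3)
for t in range(5):
    window.add((t, t * 2.0))  # (time, observation) pairs
# Only the three most recent samples remain, regardless of stream length.
```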


2016 ◽  
Vol 16 (2) ◽  
pp. 43-50 ◽  
Author(s):  
Samander Ali Malik ◽  
Assad Farooq ◽  
Thomas Gereke ◽  
Chokri Cherif

Abstract The present research work was carried out to develop prediction models for the evenness and tensile parameters of blended ring-spun yarn using artificial neural networks (ANNs) and multiple linear regression (MLR). Polyester/cotton blend ratio, twist multiplier, back roller hardness and break draft ratio were used as input parameters to predict yarn evenness in terms of CVm% and yarn tensile properties in terms of tenacity and elongation. Feed-forward neural networks with Bayesian regularisation support were successfully trained and tested using the available experimental data. The coefficients of determination of the ANN and regression models indicate that there is a strong correlation between the measured and predicted yarn characteristics, with acceptable mean absolute error values. The comparative analysis of the two modelling techniques shows that the ANNs perform better than the MLR models. The relative importance of the input variables was determined using rank analysis through an input saliency test on the optimised ANN models and the standardised coefficients of the regression models. These models are suitable for yarn manufacturers and can be used within the investigated knowledge domain.
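The two goodness-of-fit measures used above, the coefficient of determination and the mean absolute error, can be computed as follows. The tenacity values here are invented for illustration only:

```python
import numpy as np

# Measured vs. model-predicted yarn tenacity (cN/tex); illustrative values.
measured  = np.array([15.2, 16.8, 14.5, 17.1, 15.9])
predicted = np.array([15.0, 16.5, 14.9, 17.4, 15.7])

# Mean absolute error: average size of the prediction errors.
mae = np.mean(np.abs(measured - predicted))

# Coefficient of determination: fraction of variance explained by the model.
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1 - ss_res / ss_tot  # close to 1 => strong measured/predicted agreement
```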


2018 ◽  
Vol 8 (12) ◽  
pp. 2416 ◽  
Author(s):  
Ansi Zhang ◽  
Honglei Wang ◽  
Shaobo Li ◽  
Yuxin Cui ◽  
Zhonghao Liu ◽  
...  

Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of samples of failure progression. However, for traditional machine learning methods and deep neural networks, enough training data is a prerequisite for training good prediction models. In this work, we propose a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which the models can first be trained on different but related datasets and then fine-tuned on the target dataset. Extensive experimental results show that transfer learning can in general improve the prediction models on datasets with a small number of samples. The one exception was transferring from multi-type operating conditions to single operating conditions, where transfer learning led to a worse result.
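The pretrain-then-fine-tune pattern described above can be sketched with a linear model standing in for the BLSTM: train on a plentiful source dataset, then continue training for a few steps on a small target dataset. All data below is synthetic and the model is deliberately simplified:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, X, y, lr=0.01, steps=200):
    """Plain gradient descent on the mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Source task: plentiful data from a related degradation process.
Xs = rng.normal(size=(200, 3))
ys = Xs @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# Target task: only a few samples, slightly shifted coefficients.
Xt = rng.normal(size=(10, 3))
yt = Xt @ np.array([1.2, -1.8, 0.4]) + 0.1 * rng.normal(size=10)

w_pretrained = sgd(np.zeros(3), Xs, ys)              # train on source
w_finetuned = sgd(w_pretrained, Xt, yt, steps=50)    # fine-tune on target
```

Starting fine-tuning from the pretrained weights, rather than from scratch, is what lets the small target dataset suffice.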


Author(s):  
Vishal Babu Siramshetty ◽  
Dac-Trung Nguyen ◽  
Natalia J. Martinez ◽  
Anton Simeonov ◽  
Noel T. Southall ◽  
...  

The rise of novel artificial intelligence methods necessitates a comparison of this wave of new approaches with classical machine learning for a typical drug discovery project. Inhibition of the potassium ion channel whose alpha subunit is encoded by the human Ether-à-go-go-Related Gene (hERG) leads to a prolonged QT interval of the cardiac action potential and is a significant safety pharmacology target for the development of new medicines. Several computational approaches have been employed to develop prediction models for the assessment of hERG liabilities of small molecules, including recent work using deep learning methods. Here we perform a comprehensive comparison of prediction models based on classical (random forests and gradient boosting) and modern (deep neural networks and recurrent neural networks) artificial intelligence methods. The training set (~9000 compounds) was compiled by integrating hERG bioactivity data from the ChEMBL database with experimental data generated from an in-house, high-throughput thallium flux assay. We utilized different molecular descriptors, including latent descriptors, which are real-valued continuous vectors derived from chemical autoencoders trained on a large chemical space (>1.5 million compounds). The models were prospectively validated on ~840 in-house compounds screened in the same thallium flux assay. The deep neural networks performed significantly better than the classical methods with the latent descriptors. The recurrent neural networks that operate on SMILES provided the highest model sensitivity. The best models were merged into a consensus model that offered superior performance compared to reference models from academic and commercial domains. Further, we shed light on the potential of artificial intelligence methods to exploit big data in chemistry and generate novel chemical representations useful in predictive modeling and in tailoring new chemical space.
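A consensus model of the kind described above can be as simple as averaging per-compound probabilities from several base classifiers and thresholding the result. The probabilities below are placeholders, not results from the study:

```python
import numpy as np

# Predicted probabilities that each of 4 compounds is a hERG blocker,
# from three hypothetical base models (illustrative values).
p_random_forest = np.array([0.90, 0.20, 0.60, 0.10])
p_deep_net      = np.array([0.80, 0.30, 0.70, 0.20])
p_rnn_smiles    = np.array([0.95, 0.10, 0.50, 0.15])

# Unweighted consensus: average the probabilities, then threshold at 0.5.
consensus = np.mean([p_random_forest, p_deep_net, p_rnn_smiles], axis=0)
labels = (consensus >= 0.5).astype(int)  # 1 = predicted hERG blocker
```

Averaging tends to cancel the uncorrelated errors of the base models, which is why the consensus can outperform each member.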


Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill gaps in data obtained from traditional echo-sounding measurements. However, it still requires numerous training data, which are not available in many areas. Furthermore, accuracy problems arise because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this non-linear relationship. These characteristics make CNNs compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets are used as depth references to train and test the model. A set of Sentinel-2 and in-situ depth subimage pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a window size of 9x9 indicates compatibility with the reference depths, especially in areas deeper than 15 m. The addition of both short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. The implementation of the pre-trained model in other study areas provides similar results, depending on the water conditions.
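Building the subimage pairs used for CNN training amounts to slicing a fixed-size window around each depth sounding in the reflectance raster. A minimal sketch with a synthetic raster follows; the 9x9 window and six bands (four visible plus two short-wave infrared) mirror the text, while the sounding coordinates and depths are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 6-band reflectance raster in (bands, rows, cols) layout.
raster = rng.random((6, 100, 100))
window = 9
half = window // 2

def extract_patch(raster, row, col, half):
    """Return the (bands, 9, 9) subimage centred on a depth sounding."""
    return raster[:, row - half:row + half + 1, col - half:col + half + 1]

# Hypothetical sounding locations (row, col) with Lidar/MBES reference depths (m).
soundings = [(50, 50, 12.3), (20, 70, 17.8)]
patches = [extract_patch(raster, r, c, half) for r, c, _ in soundings]
depths = [d for _, _, d in soundings]
```

Each (patch, depth) pair then becomes one training example, so the network sees the neighbourhood of a pixel rather than its spectrum alone.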


2019 ◽  
Vol 11 (14) ◽  
pp. 216 ◽  
Author(s):  
Bruno V. C. Guimarães ◽  
Sérgio L. R. Donato ◽  
Ignacio Aspiazú ◽  
Alcinei M. Azevedo ◽  
Abner J. de Carvalho

Analysis of plant behavior and expression provides the answers researchers need to construct predictive models that minimize the effects of the uncertainties of field production. The objective of this study was to compare simple and multiple linear regression methods and artificial neural networks to allow maximum reliability in predicting the harvest of ‘Gigante’ cactus pear. The uniformity test was conducted at the Federal Institute of Bahia, Campus Guanambi, Bahia, Brazil, coordinates 14°13′30″ S, 42°46′53″ W and altitude of 525 m. At 930 days after planting, we evaluated 384 basic units, in which the following variables were measured: plant height (PH); cladode length (CL), width (CW) and thickness (CT); cladode number (CN); total cladode area (TCA); cladode area (CA) and cladode yield (Y). For the comparison between the artificial neural networks (ANN) and the regression models (simple and multiple, SLR and MLR), we considered the mean prediction error (MPE), the mean quadratic error (MQE), the mean square of deviation (MSD) and the coefficient of determination (R2). The values estimated by the ANN 7-5-1 showed the best proximity to the data obtained in field conditions, followed by ANN 6-2-1, MLR (TCA and CT), SLR (TCA) and SLR (CN). In this way, the ANN models with the topologies 7-2-1 and 6-2-1, the MLR model with the variables total cladode area and cladode thickness, and the SLR models with the isolated descriptors total cladode area and cladode number explain 85.1, 81.5, 76.3, 74.09 and 65.87% of the yield variation, respectively. The ANNs were more efficient at predicting the yield of the ‘Gigante’ cactus pear than the simple and multiple linear regression models.


2016 ◽  
Vol 101 (1) ◽  
pp. 27-35 ◽  
Author(s):  
Maria Mrówczyńska

Abstract The processing of information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations with respect to analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling such processes. This concept results from the integration of neural networks and parameter optimization methods, and makes it possible to avoid having to define the structure of a network arbitrarily. This kind of extension of the training process is exemplified by the Group Method of Data Handling (GMDH) algorithm, which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.


2018 ◽  
Author(s):  
Vanessa Isabell Jurtz ◽  
Leon Eyrich Jessen ◽  
Amalie Kai Bentzen ◽  
Martin Closter Jespersen ◽  
Swapnil Mahajan ◽  
...  

Predicting epitopes recognized by cytotoxic T cells has been a long-standing challenge within the field of immuno- and bioinformatics. While reliable predictions of peptide binding are available for most Major Histocompatibility Complex class I (MHCI) alleles, prediction models of T cell receptor (TCR) interactions with MHC class I-peptide complexes remain poor due to the limited amount of available training data. However, recent next-generation sequencing projects have generated a considerable amount of data relating TCR sequences to their cognate HLA-peptide complex targets. Here, we utilize such data to train a sequence-based predictor of the interaction between TCRs and peptides presented by the most common human MHCI allele, HLA-A*02:01. Our model is based on convolutional neural networks, which are especially well suited to meet the challenges posed by the large length variations of TCRs. We show that such a sequence-based model allows for the identification of TCRs binding a given cognate peptide-MHC target out of a large pool of non-binding TCRs.
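Feeding variable-length TCR sequences to a convolutional network typically starts with zero-padding each sequence to a common length and one-hot encoding the residues. The amino-acid alphabet below is the standard twenty; the CDR3 sequences and every other detail are illustrative, not taken from the paper:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq, max_len):
    """One-hot encode an amino-acid sequence, zero-padded to max_len rows."""
    out = np.zeros((max_len, len(AMINO_ACIDS)))
    for i, aa in enumerate(seq):
        out[i, AA_INDEX[aa]] = 1.0
    return out

# Hypothetical TCR CDR3 sequences of unequal length.
tcrs = ["CASSLGT", "CASSIRSSYEQY"]
max_len = max(len(s) for s in tcrs)
batch = np.stack([encode(s, max_len) for s in tcrs])  # shape (2, 12, 20)
```

The zero rows carry no signal, so convolutions over the padded tail leave short and long sequences comparable in one batch.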


2020 ◽  
Author(s):  
Zhe Xu

Despite the fact that artificial intelligence boosted with data-driven methods (e.g., deep neural networks) has surpassed human-level performance in various tasks, its application to autonomous systems still faces fundamental challenges such as lack of interpretability, an intensive need for data and lack of verifiability. In this overview paper, I survey some attempts to address these fundamental challenges by explaining, guiding and verifying autonomous systems, taking into account the limited availability of simulated and real data, the expressivity of high-level knowledge representations and the uncertainties of the underlying model. Specifically, this paper covers learning high-level knowledge from data for interpretable autonomous systems, guiding autonomous systems with high-level knowledge, and verifying and controlling autonomous systems against high-level specifications.


Materials ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 1670 ◽  
Author(s):  
Lu Minh Le ◽  
Hai-Bang Ly ◽  
Binh Thai Pham ◽  
Vuong Minh Le ◽  
Tuan Anh Pham ◽  
...  

This study investigates the prediction of the critical buckling load of steel columns using two hybrid Artificial Intelligence (AI) models: an Adaptive Neuro-Fuzzy Inference System optimized by a Genetic Algorithm (ANFIS-GA) and an Adaptive Neuro-Fuzzy Inference System optimized by Particle Swarm Optimization (ANFIS-PSO). For this purpose, a total of 57 experimental buckling tests of novel high-strength steel Y-section columns were collected from the available literature to generate the dataset for training and validating the two proposed AI models. Quality assessment criteria such as the coefficient of determination (R2), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) were used to validate and evaluate the performance of the prediction models. Results showed that both ANFIS-GA and ANFIS-PSO had a strong ability to predict the buckling load of steel columns, but ANFIS-PSO (R2 = 0.929, RMSE = 60.522 and MAE = 44.044) was slightly better than ANFIS-GA (R2 = 0.916, RMSE = 65.371 and MAE = 48.588). The two models were also robust even in the presence of input variability, as investigated via Monte Carlo simulations. This study showed that hybrid AI techniques can help construct an efficient numerical tool for buckling analysis.
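A Monte Carlo robustness check of the kind mentioned above perturbs the inputs of a trained predictor many times and inspects the spread of its outputs. The predictor below is a stand-in linear function with invented coefficients, not the ANFIS models:

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_buckling_load(x):
    """Stand-in for a trained model mapping column features to buckling load (kN)."""
    w = np.array([3.0, 1.5, -2.0])
    return float(x @ w + 100.0)

# Hypothetical nominal column geometry/material inputs.
x_nominal = np.array([10.0, 4.0, 2.0])
nominal = predict_buckling_load(x_nominal)

# Monte Carlo: add 2% relative Gaussian noise to the inputs, collect outputs.
samples = [predict_buckling_load(x_nominal * (1 + 0.02 * rng.normal(size=3)))
           for _ in range(2000)]
spread = np.std(samples)  # small spread relative to the mean => robust model
```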

