PREDICTING THE RHEOLOGICAL PROPERTIES OF BITUMEN-FILLER MASTIC USING ARTIFICIAL NEURAL NETWORK METHODS

2017 ◽  
Vol 80 (1) ◽  
Author(s):  
Nursyahirah Khamis ◽  
Muhamad Razuhanafi Mat Yazid ◽  
Asmah Hamim ◽  
Sri Atmaja P. Rosyidi ◽  
Nur Izzi Md. Yusoff ◽  
...  

This study was conducted to develop two types of artificial neural network (ANN) models to predict the rheological properties of bitumen-filler mastic in terms of the complex modulus and phase angle. Two types of ANN models were developed, namely (i) a multilayer feed-forward neural network model and (ii) a radial basis function network model. The study also evaluated the accuracy of both types of models in predicting the rheological properties of bitumen-filler mastics by means of statistical parameters such as the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE) and root mean squared error (RMSE) for every developed model. A set of dynamic shear rheometer (DSR) test data was used for a range of bitumen-filler mastics with three filler types (limestone, cement and grit stone) and two filler concentrations (35 and 65% by mass). Based on the analysis performed, it was found that both models were able to predict the complex modulus and phase angle of bitumen-filler mastics, with average R2 values exceeding 0.98. A comparison between the two types of models showed that the radial basis function network model has higher accuracy than the multilayer feed-forward neural network model, with a higher R2 value and lower MAE, MSE and RMSE values. It can be concluded that the ANN model can be used as an alternative method to predict the rheological properties of bitumen-filler mastic.
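The four goodness-of-fit measures used in the study (R2, MAE, MSE, RMSE) are standard and easy to compute. A minimal sketch in Python, with purely illustrative values rather than the study's DSR measurements:

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Return R^2, MAE, MSE and RMSE for a set of predictions."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = measured - predicted
    mse = np.mean(residuals ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - np.sum(residuals ** 2) / ss_tot
    return {"R2": r2, "MAE": np.mean(np.abs(residuals)),
            "MSE": mse, "RMSE": np.sqrt(mse)}

# Illustrative numbers only, not data from the paper.
stats = goodness_of_fit([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9])
```

The same function can score both the complex-modulus and the phase-angle predictions of either ANN model.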

2013 ◽  
Vol 46 (1) ◽  
pp. 5-13
Author(s):  
H. Taghavifar ◽  
A. Mardani ◽  
I. Elahi

Abstract Soil-wheel interaction, a phenomenon in which both components behave nonlinearly, has long been considered too sophisticated and complex a relation to model. Well-trained artificial neural networks (ANNs) are useful tools widely used in a variety of science and engineering fields. We were inspired to apply this facility to soil-wheel interaction outputs, since the nonlinear and complex relationships between wheel and soil necessitate more precise and reliable calculations. A 2-14-2 feed-forward neural network with a back-propagation algorithm was found to have acceptable performance, with a mean squared error of 0.020. This model was used to predict two output variables, rut depth and contact area, with regression correlations of 0.99961 and 0.99996 for rut depth and contact area, respectively. Furthermore, the results were compared with conventional models proposed for predicting the contact area and rut depth. The promising results give the ANN model an advantage over conventional models. The findings also demonstrate the potential of ANNs for modeling. However, the authors recommend further studies in this realm of computing due to its great potential and capability.
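A 2-14-2 topology means two inputs, one hidden layer of 14 neurons, and two outputs. As a hedged sketch only (using scikit-learn rather than the authors' implementation, and synthetic stand-in data instead of the soil-bin measurements), such a network can be set up as:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for the two inputs and the two outputs
# (rut depth, contact area); the real soil-bin data are not reproduced.
X = rng.uniform(0, 1, size=(200, 2))
y = np.column_stack([X[:, 0] + 0.5 * X[:, 1],       # "rut depth"
                     X[:, 0] * X[:, 1] + X[:, 1]])  # "contact area"

# 2 inputs -> 14 hidden neurons -> 2 outputs; gradients are obtained by
# backpropagation (L-BFGS drives the weight updates here).
net = MLPRegressor(hidden_layer_sizes=(14,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)
mse = np.mean((net.predict(X) - y) ** 2)
```

MLPRegressor handles the two outputs jointly, matching the paper's single network for both rut depth and contact area.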


Author(s):  
Nonvikan Karl-Augustt ALAHASSA ◽  
alejandro Murua

We have built a Shallow Gibbs Network model, as a Random Gibbs Network Forest, that reaches the performance of the multilayer feedforward neural network with fewer parameters and fewer backpropagation iterations. To achieve this, we propose a novel optimization framework for our Bayesian Shallow Network, called the {Double Backpropagation Scheme} (DBS), which can also fit the data perfectly given an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts model, a very rich random partition model, which we have also modified to propose its Complete Shrinkage version using agglomerative clustering techniques. The model also takes advantage of Gibbs fields for the structure of its weight precision matrix, mainly through Markov random fields, and has five (5) structural variants: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs variant mainly mirrors fully connected models, while the other structures show how the model's complexity can be reduced through sparsity and parsimony. All these models have been tested on the Mulan project multivariate regression datasets, and the results arouse interest in these structures, in the sense that different structures lead to different results in terms of mean squared error (MSE) and relative root mean squared error (RRMSE). 
For the Shallow Gibbs Network model, we have found the perfect learning framework: the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})-\textbf{DBS}$ configuration, which combines the \emph{Universal Approximation Theorem} and the DBS optimization, coupled with the (\emph{dist})-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (\emph{dist}-NN-(h)-TS-PMI) model [which in turn combines the search for the nearest neighborhood for a good train-test association, the Taylor approximation theorem, and finally the multivariate interpolation method]. It indicates that, with an appropriate number $l_1$ of neurons in the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance \emph{dist}$_{opt}$ in the search for the nearest neighbor in the training dataset for each test point $x_i^{\mbox{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (\emph{dist}-NN-(h)-TS-PMI) model once the {\bfseries DBS} has overfitted the training dataset, the train and test errors converge to zero (0).
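The shallow architecture at the heart of this framework is a single-hidden-layer network of the kind covered by the Universal Approximation Theorem. As an illustration only (this is a generic forward pass, not the authors' DBS optimization or Gibbs-field precision structure), a shallow network with $l_1$ hidden neurons and a multivariate response can be sketched as:

```python
import numpy as np

def shallow_forward(X, W1, b1, W2, b2):
    """One-hidden-layer network: sigmoid(X @ W1 + b1) @ W2 + b2."""
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # hidden activations
    return H @ W2 + b2

rng = np.random.default_rng(0)
l1 = 8                                 # number of hidden neurons
X = rng.normal(size=(5, 3))            # 5 samples, 3 features
W1, b1 = rng.normal(size=(3, l1)), np.zeros(l1)
W2, b2 = rng.normal(size=(l1, 2)), np.zeros(2)  # 2 response variables
Y = shallow_forward(X, W1, b1, W2, b2)          # shape (5, 2)
```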


2019 ◽  
Vol 8 (3) ◽  
pp. 5477-5482

E-sensors, which are generally based on the concept of an E-nose, are specially made to distinguish odours. In the present research work, an E-sensor is developed using an artificial intelligence technique to identify the concentration of carbon monoxide in a polluted environment. Data were recorded using a metal oxide sensor. The available data were broken into a number of segments. The length of the data segments and the number of neurons in the hidden layer were varied to find the optimized artificial neural network model using MATLAB code. The artificial neural network model was optimized by verification in terms of mean squared error and regression. The regression was verified for training, testing, validation, and all data combined. The mean squared error and regression are the performance parameters of the artificial neural network model.
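The optimization described here is a grid search over hidden-layer size, keeping the model with the lowest error. A hedged sketch of that loop (in Python rather than the study's MATLAB, with synthetic stand-in readings instead of the metal-oxide sensor data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensor features and CO concentration;
# the study's recorded data are not reproduced here.
rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))
y = X @ np.array([0.5, 1.0, -0.3, 0.8]) + 0.01 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Vary the hidden-layer size and keep the model with the lowest test MSE,
# mirroring the paper's optimisation by MSE and regression checks.
best = None
for n_hidden in (2, 4, 8, 16):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver="lbfgs",
                       max_iter=5000, random_state=0).fit(X_tr, y_tr)
    mse = np.mean((net.predict(X_te) - y_te) ** 2)
    if best is None or mse < best[1]:
        best = (n_hidden, mse)
```

The same outer loop can additionally sweep the data-segment length, as the study does.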


Buletin Palma ◽  
2019 ◽  
Vol 20 (2) ◽  
pp. 127
Author(s):  
Agustami Sitorus ◽  
Ramayanty Bulan

<p>The working conditions of an appropriate oil palm chopper are important to explore in order to improve the machine's performance while working. At present, the selection of working conditions depends on empirical rules and experimental trials. This is necessary because of the complex interaction between the units of the integrated chopper machine (cutting unit, compression unit, and chopper unit) that must be considered to estimate its performance. Therefore, this study aims to estimate the performance of an integrated palm frond chopper machine through an Artificial Neural Network (ANN) approach. The design of the ANN model was carried out at the Research Center for Appropriate Technology in 2017-2019. The data input consists of nine operating parameters collected from experimental tests under laboratory conditions using the AE01-type integrated palm frond chopper machine. The ANN model architectures (input-layer-output) tested were [9-5-1], [9-10-1], and [9-15-1] with the Levenberg–Marquardt algorithm. The results showed that the best prediction model is formed by the architecture with 10 neurons in the hidden layer, which yields a smaller prediction error than the others. The coefficient of determination of the model with that architecture is 0.99862. Prediction of chopper performance using the test data gives a coefficient of determination close to one. The mean squared error (MSE) of the model in the training, validation, and testing phases was 2.69×10<sup>-15</sup>, 1.56×10<sup>-4</sup>, and 3.38×10<sup>-5</sup>, respectively.</p>
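Comparing the three [9-n-1] architectures amounts to training the same nine-input, one-output network with 5, 10, and 15 hidden neurons and ranking the errors. A hedged sketch (scikit-learn has no Levenberg–Marquardt solver, so L-BFGS stands in for it, and the nine operating parameters are replaced by synthetic stand-ins):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the nine operating parameters and the single
# performance output; the laboratory measurements are not reproduced.
rng = np.random.default_rng(2)
X = rng.uniform(size=(150, 9))
y = X.sum(axis=1) + 0.05 * rng.normal(size=150)

# Compare the three [9-n-1] architectures from the study.
results = {}
for n in (5, 10, 15):
    net = MLPRegressor(hidden_layer_sizes=(n,), solver="lbfgs",
                       max_iter=5000, random_state=0).fit(X, y)
    results[n] = np.mean((net.predict(X) - y) ** 2)
```

The architecture with the smallest MSE would be selected, as the study selects [9-10-1].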




2018 ◽  
Vol 4 (1) ◽  
pp. 24
Author(s):  
Imam Halimi ◽  
Wahyu Andhyka Kusuma

Stock investment is something commonly heard of and undertaken. There are various stocks in Indonesia, one of which is the Indeks Harga Saham Gabungan (IHSG), known in English as the Indonesia Composite Index, ICI, or IDX Composite. The IHSG is an important parameter to consider when investing, given that it is a composite index. This study aims to predict the movement of the IHSG with data-mining techniques using a neural network algorithm, compared against a linear regression algorithm, which can serve as a reference for investors when investing. The results of this study are Root Mean Squared Error (RMSE) values, together with an additional label of predicted figures, obtained after validation using sliding-window validation; the best results came from testing with the neural network algorithm, which gave an RMSE of 37.786 with windowing and 13.597 without windowing, while testing with the linear regression algorithm gave an RMSE of 35.026 with windowing and 12.657 without windowing. A T-test showed that the comparison of the neural network against linear regression was not significant, with the same T-test value for testing with and without windowing, namely 1.000.
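Sliding-window validation walks forward through the series, predicting each next value from the preceding window and accumulating the errors into an RMSE. A minimal sketch with a toy series and a naive "last value" predictor standing in for the study's neural network and linear regression models:

```python
import numpy as np

def sliding_window_rmse(series, window, predict):
    """Walk-forward evaluation: predict each next value from the
    preceding `window` observations and return the RMSE."""
    errors = []
    for t in range(window, len(series)):
        forecast = predict(series[t - window:t])
        errors.append(series[t] - forecast)
    return float(np.sqrt(np.mean(np.square(errors))))

# Toy index values, not actual IHSG data.
series = np.array([100.0, 102.0, 101.0, 103.0, 104.0, 103.5])
rmse = sliding_window_rmse(series, window=3, predict=lambda w: w[-1])
```

Swapping in a fitted model for the `predict` callable reproduces the evaluation scheme described above.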

