Construction of Feed Forward MultiLayer Perceptron Model For Genetic Dataset in Leishmaniasis Using Cognitive Computing

2018
Author(s): Sundar Mahalingam, Ritika Kabra, Shailza Singh

Leishmaniasis is an endemic parasitic disease found predominantly in poor localities of Africa, Asia, and Latin America. It is associated with malnutrition, weakened immunity, and poor housing conditions. At present, it is diagnosed by microscopic identification, molecular and biochemical characterisation, or serum analysis for parasitic compounds. In this study, we present a new approach for diagnosing leishmaniasis using cognitive computing. Genetic datasets of leishmaniasis are collected from the Gene Expression Omnibus database and then processed. The algorithm for training and developing a model based on these data is prepared and coded in Python. The algorithm and its corresponding datasets are integrated using a TensorFlow dataframe. A feed-forward artificial neural network with a multi-layer perceptron is trained as a diagnostic model for leishmaniasis using the genetic dataset; it is developed using a recurrent neural network. The cognitive model of the trained network is interpreted using maps and mathematical formulas of the influencing parameters. The merit of the system is measured by its accuracy, loss, and error. This integrated system of the leishmaniasis genetic dataset and neural network proved to be a good choice for diagnosis, with higher accuracy and lower error. Through this approach, all records of the data are effectively incorporated into the system. After normalization, the feed-forward multilayer perceptron model achieved a mean square error of 219.84, a loss of 1.94, and an accuracy of 85.71%, indicating a good fit of the model to the process; it could serve as a better solution for diagnosing leishmaniasis from genetic datasets in the future. The code is available in a GitHub repository: https://github.com/shailzasingh/Machine-Learning-code-for-analyzing-genetic-dataset-in-Leishmaniasis
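
As a concrete illustration of the workflow this abstract describes (normalized genetic features fed to a feed-forward MLP, with accuracy, loss, and mean squared error reported), here is a minimal TensorFlow/Keras sketch. The layer sizes, optimizer, epochs, and placeholder data are illustrative assumptions, not the authors' configuration; their actual code is in the GitHub repository above.

```python
# Minimal sketch: feed-forward MLP for a binary diagnosis label.
# Assumes X is a (samples, genes) expression matrix and y is 0/1;
# layer sizes, optimizer, and epochs are illustrative choices only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500)).astype("float32")   # placeholder expression data
y = rng.integers(0, 2, size=200).astype("float32")  # placeholder labels

# Normalize features, as the abstract reports results "after normalization".
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.MeanSquaredError()])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)

# Loss, accuracy, and MSE: the three figures the abstract reports.
loss, acc, mse = model.evaluate(X, y, verbose=0)
```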

Author(s): Tanujit Chakraborty

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, which are intended to be much faster than deep feed-forward networks and for which it is not essential to specify the number of hidden units in the hidden layers of the neural network in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust in nature. Additionally, the proposed NSNT model obtains a fast rate of convergence, which is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
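
The key idea of simulating a decision tree with a neural network can be sketched with a soft split: a tree rule x[j] <= t becomes a sigmoid neuron whose steepness controls how closely it approximates the hard split. The code below is a minimal single-split illustration under that assumption; it is not the paper's NSNT construction, which is considerably more elaborate.

```python
# Sketch of the core idea: a decision-tree split x[j] <= t can be
# simulated by a neuron sigmoid(beta * (t - x[j])), which approaches
# the hard 0/1 split as beta grows. All names here are illustrative.
import numpy as np

def soft_split(x, j, t, beta=50.0):
    """Smooth surrogate for the indicator 1[x[j] <= t]."""
    return 1.0 / (1.0 + np.exp(-beta * (t - x[j])))

def soft_tree_predict(x, j, t, v_left, v_right, beta=50.0):
    """Depth-1 regression tree with leaf values v_left / v_right."""
    p_left = soft_split(x, j, t, beta)
    return p_left * v_left + (1.0 - p_left) * v_right

x = np.array([0.2, 0.9])
print(soft_tree_predict(x, j=0, t=0.5, v_left=1.0, v_right=-1.0))  # ~1.0 (left leaf)
```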


Author(s): Saranya N, Kavi Priya S
In recent years, with the increasing amounts of data gathered from the medical area, the Internet of Things has developed substantially. The data gathered, however, are of high volume, velocity, and variety. In the proposed work, heart disease is predicted using wearable devices. To analyze the data efficiently and effectively, the Deep Canonical Neural Network Feed-Forward and Back Propagation (DCNN-FBP) algorithm is used. The data are gathered from wearable gadgets and preprocessed by employing normalization. The processed features are analyzed using a deep convolutional neural network. The DCNN-FBP algorithm is trained by applying the forward and backward propagation algorithms. Batch size, epochs, learning rate, activation function, and optimizer are the parameters used in DCNN-FBP. The datasets are taken from the UCI machine learning repository. Performance measures such as accuracy, specificity, sensitivity, and precision are used to validate the model. From the results, the model attains 89% accuracy. Finally, the outcomes are compared with traditional machine learning algorithms to show that the DCNN-FBP model attains higher accuracy.
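
The following is a minimal sketch of a 1D convolutional classifier along the lines described, exposing the tuning parameters the abstract lists (batch size, epochs, learning rate, activation function, optimizer). The architecture, placeholder data, and hyperparameter values are illustrative assumptions, not the published DCNN-FBP model.

```python
# Minimal sketch of a 1D-CNN classifier in the spirit of DCNN-FBP.
# Assumes 13 tabular features per record (as in the UCI heart dataset);
# filter counts, learning rate, and epochs are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 13, 1)).astype("float32")  # placeholder records
y = rng.integers(0, 2, size=300).astype("float32")   # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(13, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# Optimizer, learning rate, batch size, and epochs are the knobs the
# abstract names as DCNN-FBP parameters.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```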


2021, Vol 12 (2), pp. 89
Author(s): As'ary Ramadhan

Estimating software project development costs is one of the critical problems in software engineering. Software project failures result from inaccurate estimates of the required resources. Several models have been developed over the past few decades, yet providing accurate software project cost estimates remains a challenge to this day. The aim of this study is to improve the accuracy of software project cost estimation by applying a genetic algorithm as the training process for a Feed Forward Neural Network Backpropagation (FFNN-BP) that accommodates the formula of the Post Architecture Model (COCOMO II). The Magnitude of Relative Error (MRE) and the Mean Magnitude of Relative Error (MMRE) are used as performance indicators. The experimental results show that the proposed model produces more accurate software project cost estimates than COCOMO II and FFNN-BP. In this case, the MMRE for COCOMO II is 74.68% and for FFNN-BP is 39.90%.
Keywords: COCOMO II, Machine Learning, IT Project Management, Backpropagation
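
A short sketch of the two evaluation metrics named above; in a GA-trained FFNN-BP, the MMRE would typically serve as the fitness to minimize. The numbers below are illustrative only, not the paper's project data.

```python
# MRE is the relative error of a single project's estimate;
# MMRE is its mean over all projects.
import numpy as np

def mre(actual, estimated):
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over a set of projects."""
    actuals, estimates = np.asarray(actuals), np.asarray(estimates)
    return float(np.mean(np.abs(actuals - estimates) / actuals))

# Illustrative effort values only.
print(mmre([100, 200, 400], [120, 180, 390]))  # ~0.108 (10.8%)
```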


Water, 2020, Vol 12 (10), pp. 2927
Author(s): Jiyeong Hong, Seoro Lee, Joo Hyun Bae, Jimin Lee, Woon Ji Park, ...

Predicting dam inflow is necessary for effective water management. This study created machine learning algorithms to predict the amount of inflow into the Soyang River Dam in South Korea, using weather and dam inflow data for 40 years. A total of six algorithms were used: decision tree (DT), multilayer perceptron (MLP), random forest (RF), gradient boosting (GB), recurrent neural network–long short-term memory (RNN–LSTM), and convolutional neural network–LSTM (CNN–LSTM). Among these models, the multilayer perceptron showed the best results in predicting dam inflow, with a Nash–Sutcliffe efficiency (NSE) of 0.812, a root mean squared error (RMSE) of 77.218 m³/s, a mean absolute error (MAE) of 29.034 m³/s, a correlation coefficient (R) of 0.924, and a coefficient of determination (R²) of 0.817. However, when the dam inflow is below 100 m³/s, the ensemble models (random forest and gradient boosting) performed better than the MLP. Therefore, two combined machine learning (CombML) models (RF_MLP and GB_MLP) were developed for predicting dam inflow, using the ensemble methods (RF and GB) at precipitation below 16 mm and the MLP at precipitation above 16 mm; 16 mm is the average daily precipitation corresponding to an inflow of 100 m³/s or more. The verification results were NSE 0.857, RMSE 68.417 m³/s, MAE 18.063 m³/s, R 0.927, and R² 0.859 for RF_MLP, and NSE 0.829, RMSE 73.918 m³/s, MAE 18.093 m³/s, R 0.912, and R² 0.831 for GB_MLP, which indicates that the combination of models predicts the dam inflow most accurately. The CombML results show that it is possible to predict inflow while considering flow characteristics, such as flow regimes, by combining several machine learning algorithms.
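
The CombML routing rule described above can be sketched directly: send a day's features to the ensemble model when daily precipitation is below 16 mm and to the MLP otherwise. The model objects are assumed to expose a scikit-learn-style predict; the function name and feature layout are illustrative.

```python
# Sketch of the CombML prediction rule: low-precipitation days go to the
# ensemble model (RF or GB), high-precipitation days to the MLP.
import numpy as np

PRECIP_THRESHOLD_MM = 16.0  # daily precipitation matching ~100 m3/s inflow

def combml_predict(ensemble_model, mlp_model, X, precip_mm):
    """X: (days, features); precip_mm: (days,) daily precipitation."""
    X, precip_mm = np.asarray(X), np.asarray(precip_mm)
    low = precip_mm < PRECIP_THRESHOLD_MM
    preds = np.empty(len(X))
    if low.any():
        preds[low] = ensemble_model.predict(X[low])    # RF or GB branch
    if (~low).any():
        preds[~low] = mlp_model.predict(X[~low])       # MLP branch
    return preds
```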


2021
Author(s): Ewerthon Dyego de Araújo Batista, Wellington Candeia de Araújo, Romeryto Vieira Lira, Laryssa Izabel de Araújo Batista

Dengue is a public health problem in Brazil, and cases of the disease are rising again in Paraíba. The Paraíba epidemiological bulletin, released in August 2021, reports a 53% increase in cases compared with the previous year. Machine Learning (ML) and Deep Learning techniques are being used as tools for predicting the disease and supporting efforts to combat it. Using the Random Forest (RF), Support Vector Regression (SVR), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN) techniques, this article presents a system capable of forecasting dengue-related hospitalizations for the cities of Bayeux, Cabedelo, João Pessoa, and Santa Rita. The system produced forecasts with an error rate of 0.5290 for Bayeux, 0.92742 for Cabedelo, 9.55288 for João Pessoa, and 0.74551 for Santa Rita.
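
As a sketch of one of the listed techniques, the following applies an LSTM to a univariate series via sliding windows, producing one-step-ahead forecasts. The window length, layer sizes, and placeholder series are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: sliding-window LSTM for one-step-ahead forecasting
# of a univariate series (e.g., weekly hospitalization counts).
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 20, 200)).astype("float32")  # placeholder series

def make_windows(s, w=8):
    """Turn a series into (window, next-value) training pairs."""
    X = np.stack([s[i:i + w] for i in range(len(s) - w)])
    y = s[w:]
    return X[..., None], y  # add a channel axis for the LSTM

X, y = make_windows(series)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
next_value = model.predict(X[-1:], verbose=0)  # one-step-ahead forecast
```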


Author(s): Belete Biazen Bezabeh, Abrham Debasu Mengistu

In the area of machine learning, performance analysis is a major task for obtaining better performance in both training and testing a model. In addition, performance analysis of machine learning techniques helps to identify how the machine performs on the given input and to find any improvements needed in the learning model. The feed-forward neural network (FFNN) has many areas of application, but the number of epochs the network needs to converge depends on the transfer function used. In this study, to build a model for soil classification and moisture prediction, the rectified linear unit (ReLU), sigmoid, hyperbolic tangent (tanh), and Gaussian transfer functions of a feed-forward neural network were analyzed to identify an appropriate transfer function. Color, texture, shape, and BRISK local feature descriptors are used as the feature vector of the FFNN input layer, and 4 hidden layers with 26 neurons each were considered in this study. In the experiment, the Gaussian transfer function outperformed the ReLU, sigmoid, and tanh transfer functions, but its convergence required more epochs than ReLU, sigmoid, and tanh.
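
A minimal sketch of the comparison setup follows. The network shape (4 hidden layers of 26 neurons) matches the abstract, while the input dimensionality, the softmax output head, and the exact Gaussian form exp(-x²) are assumptions, since Keras has no built-in Gaussian activation.

```python
# Build one FFNN per transfer function so their training curves
# (epochs to convergence) can be compared, as in the study.
import tensorflow as tf

def gaussian(x):
    return tf.exp(-tf.square(x))  # assumed Gaussian transfer function exp(-x^2)

def build_ffnn(activation, n_features=64, n_classes=3):
    layers = [tf.keras.layers.Input(shape=(n_features,))]
    for _ in range(4):                     # 4 hidden layers, as in the study
        layers.append(tf.keras.layers.Dense(26, activation=activation))
    layers.append(tf.keras.layers.Dense(n_classes, activation="softmax"))
    return tf.keras.Sequential(layers)

models = {name: build_ffnn(act) for name, act in
          [("relu", "relu"), ("sigmoid", "sigmoid"),
           ("tanh", "tanh"), ("gaussian", gaussian)]}
```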


Sensors, 2019, Vol 19 (4), pp. 804
Author(s): Sagar Shelke, Baris Aksanli

Convergence of machine learning, the Internet of Things, and computationally powerful single-board computers has boosted research into, and implementation of, smart spaces. Smart spaces make predictions based on historical data to enhance user experience. In this paper, we present a low-cost, low-energy smart space implementation to detect static and dynamic human activities that involve simple motions. We use low-resolution (4 × 16), non-intrusive thermal sensors to collect data. We train six machine learning algorithms, namely logistic regression, naive Bayes, support vector machine, decision tree, random forest, and artificial neural network (vanilla feed-forward), on the dataset collected in our lab. Our experiments reveal a very high static activity detection rate with all algorithms, where the feed-forward neural network gives the best accuracy of 99.96%. We also show how data collection methods and sensor placement play an important role in the resulting accuracy of different machine learning algorithms. To detect dynamic activities in real time, we use cross-correlation and connected components of thermal images. Our smart space implementation, with its real-time properties, can be used in various domains and applications, such as conference room automation and elderly healthcare.
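
The dynamic-activity step (connected components of thermal images) can be sketched in a few lines: threshold a 4 × 16 frame above ambient temperature and label the connected warm regions. The temperature threshold and the synthetic frame are illustrative assumptions.

```python
# Sketch: count connected warm blobs in a low-resolution thermal frame.
import numpy as np
from scipy import ndimage

frame = np.random.default_rng(2).uniform(20, 26, size=(4, 16))  # placeholder frame
frame[1:3, 4:7] = 31.0                   # synthetic warm region (a person)

mask = frame > 29.0                      # assumed threshold above ambient
labels, n_blobs = ndimage.label(mask)    # connected components of warm pixels
print(n_blobs)                           # -> 1 detected region
```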

