Application of Machine Learning in the Control of Metal Melting Production Process

2020 ◽  
Vol 10 (17) ◽  
pp. 6048 ◽  
Author(s):  
Nedeljko Dučić ◽  
Aleksandar Jovičić ◽  
Srećko Manasijević ◽  
Radomir Radiša ◽  
Žarko Ćojbašić ◽  
...  

This paper presents the application of machine learning in the control of the metal melting process. Metal melting is a dynamic production process characterized by nonlinear relations between process parameters. In this particular case, the subject of research is the production of white cast iron. Two supervised machine learning algorithms were applied: a neural network and support vector regression. The goal of their application is to predict the amount of alloying additives needed to obtain the desired chemical composition of the white cast iron. The neural network model provided better results than the support vector regression model in both the training and testing phases, which qualifies it for use in the control of white cast iron production.
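As a rough illustration of the approach described above, the sketch below compares the two supervised models on a tabular dataset with the measured melt chemistry as inputs and the required alloying addition as the target; the file names, columns, and hyperparameters are assumptions, not the authors' setup.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# hypothetical data: melt chemistry (%C, %Si, %Mn, ...) and the required alloying addition (kg)
X = np.loadtxt("melt_chemistry.csv", delimiter=",", skiprows=1)
y = np.loadtxt("alloy_addition.csv", delimiter=",", skiprows=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)),
    "support vector regression": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "test MAE:", mean_absolute_error(y_te, model.predict(X_te)))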

2021 ◽  
Author(s):  
Ewerthon Dyego de Araújo Batista ◽  
Wellington Candeia de Araújo ◽  
Romeryto Vieira Lira ◽  
Laryssa Izabel de Araújo Batista

Dengue is a public health problem in Brazil, and cases of the disease are rising again in Paraíba. The epidemiological bulletin of Paraíba, released in August 2021, reports a 53% increase in cases compared with the previous year. Machine Learning (ML) and Deep Learning techniques are being used as tools for predicting the disease and supporting efforts to combat it. Using the Random Forest (RF), Support Vector Regression (SVR), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) techniques, this article presents a system capable of forecasting dengue-related hospitalizations for the cities of Bayeux, Cabedelo, João Pessoa and Santa Rita. The system produced forecasts for Bayeux with an error rate of 0.5290, while for Cabedelo the error was 0.92742, for João Pessoa 9.55288, and for Santa Rita 0.74551.
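As a loose illustration of this forecasting setup, the sketch below trains one of the listed models (Random Forest) on a sliding window over a weekly hospitalization series; the file name, window length, and split are hypothetical, not the authors' configuration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# hypothetical file with one weekly hospitalization count per line
series = np.loadtxt("hospitalizations_joao_pessoa.csv")
window = 8  # use the previous 8 weeks to predict the next week

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = int(0.8 * len(X))  # chronological split: train on the past, test on the most recent weeks
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:split], y[:split])
print("test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))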


2019 ◽  
Author(s):  
Po-Ting Lai ◽  
Wei-Liang Lu ◽  
Ting-Rung Kuo ◽  
Chia-Ru Chung ◽  
Jen-Chieh Han ◽  
...  

BACKGROUND Research on disease-disease associations, like comorbidity and complication, provides important insights into disease treatment and drug discovery, and a large body of literature has been published in the field. However, using current search tools, it is not easy for researchers to retrieve information on the latest disease association findings. First, comorbidity and complication keywords pull up large numbers of PubMed studies. Second, diseases are not highlighted in search results. Third, disease-disease associations (DDAs) are not identified, as no DDA extraction datasets or tools are currently available. OBJECTIVE Since there are no available disease-disease association extraction (DDAE) datasets or tools, we aim to develop (1) a DDAE dataset and (2) a neural network model for extracting DDAs from the literature. METHODS In this study, we formulate DDAE as a supervised machine learning classification problem. To develop the system, we first build a DDAE dataset. We then employ two machine learning models, a support vector machine (SVM) and a convolutional neural network (CNN), to extract DDAs. Furthermore, we evaluate the effect of using the CNN output layer as features for the SVM-based model. Finally, we implement a large margin context-aware convolutional neural network (LC-CNN) architecture to integrate context features and the CNN through the large margin function. RESULTS Our DDAE dataset consists of 521 PubMed abstracts. Experiment results show that the SVM-based approach achieves an F1-measure of 80.32%, which is higher than the CNN-based approach (73.32%). Using the output layer of the CNN as a feature for the SVM does not further improve the performance of the SVM. However, our LC-CNN achieves the highest F1-measure of 84.18%, and demonstrates that combining the hinge loss function of the SVM with the CNN into a single NN architecture outperforms other approaches. CONCLUSIONS To facilitate the development of text-mining research for DDAE, we develop the first publicly available DDAE dataset consisting of disease mentions, MeSH IDs and relation annotations. We develop different conventional ML models and NN architectures, and evaluate their effects on our DDAE dataset. To further improve DDAE performance, we propose an LC-CNN model for DDAE that outperforms other approaches.
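The core idea behind LC-CNN, replacing the usual cross-entropy objective of a text CNN with an SVM-style hinge (large margin) loss, can be sketched as below; the vocabulary size, sequence length, and layer sizes are placeholders rather than the published hyperparameters.

import tensorflow as tf

# placeholder dimensions for the token-index inputs
vocab_size, seq_len, emb_dim = 20000, 128, 100

inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
x = tf.keras.layers.Embedding(vocab_size, emb_dim)(inputs)
x = tf.keras.layers.Conv1D(128, 5, activation="relu")(x)
x = tf.keras.layers.GlobalMaxPooling1D()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
# one linear score per sentence; positive = a disease-disease association is stated
outputs = tf.keras.layers.Dense(1, activation="linear")(x)

model = tf.keras.Model(inputs, outputs)
# the hinge loss expects labels encoded as -1 / +1, mirroring the SVM-style large margin objective
model.compile(optimizer="adam", loss="hinge")
# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=10)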


2021 ◽  
Vol 1 (1) ◽  
pp. 31
Author(s):  
Kristiawan Nugroho

The Covid-19 pandemic has been ongoing for a year. Various attempts have been made to overcome this pandemic, especially the development of various types of vaccines around the world. The effectiveness of vaccines against Covid-19 is one of the questions most often asked by the public. This research is an attempt to classify the names of vaccines that have been used in various nations by using one of the robust machine learning methods, namely the Neural Network. The results showed that the Neural Network method provides the best accuracy, 99.9%, higher than the Random Forest and Support Vector Machine (SVM) methods.


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Hang Guo ◽  
Ji Wan ◽  
Haobin Wang ◽  
Hanxiang Wu ◽  
Chen Xu ◽  
...  

Handwritten signatures are ubiquitous in our daily lives. The main challenge in recognizing handwriting signals lies in developing approaches that capture the information effectively. External mechanical signals can be easily detected by triboelectric nanogenerators, which provide immediate opportunities for building new types of active sensors capable of recording handwritten signals. In this work, we report an intelligent human-machine interaction interface based on a triboelectric nanogenerator. Using the horizontal-vertical symmetrical electrode array, the handwritten triboelectric signal can be recorded without an external energy supply. Combined with supervised machine learning methods, the interface can successfully recognize handwritten English letters, Chinese characters, and Arabic numerals. A principal component analysis algorithm preprocesses the triboelectric signal data to reduce the complexity of the neural network in the machine learning process. Furthermore, the system can perform anticounterfeiting recognition of writing habits by controlling the samples fed to the neural network. The results show that the intelligent human-machine interaction interface has broad application prospects in signature security and human-computer interaction.
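A minimal sketch of the preprocessing and classification chain described above, assuming the raw triboelectric signals are stored as flattened arrays; the file names, dimensionality, and network size are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# hypothetical arrays: flattened triboelectric signals and the character that was written
X = np.load("triboelectric_signals.npy")   # (n_samples, n_channels * n_timesteps)
y = np.load("character_labels.npy")        # letters, Chinese characters, digits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PCA compresses the raw signals so the downstream network can stay small
clf = make_pipeline(PCA(n_components=30),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))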


2021 ◽  
Vol 27 (1) ◽  
pp. 146045822098387
Author(s):  
Boran Sekeroglu ◽  
Kubra Tuncal

Cancer is one of the most serious and common public health problems worldwide and can occur in many different forms. Treatments and precautions aim to minimize deaths caused by cancer; however, incidence rates continue to rise. Thus, it is important to analyze and estimate incidence rates to support the determination of more effective precautions. In this research, the 2018 Cancer Datasheet of the World Health Organization (WHO) is used, and all countries on the European continent are considered, to analyze and predict incidence rates until 2020 for lung cancer, breast cancer, colorectal cancer, prostate cancer, and all types of cancer, which have the highest incidence and mortality rates. For each cancer type, six machine learning models, namely Linear Regression, Support Vector Regression, Decision Tree, Long Short-Term Memory neural network, Backpropagation neural network, and Radial Basis Function neural network, are trained separately for each gender. Linear regression and support vector regression outperformed the other models, with R² scores of 0.99 and 0.98, respectively, in initial experiments, and were then used to predict the incidence rates of the considered cancer types. The ML models estimated that the largest rise in incidence rates, 6%, would be in colorectal cancer for females.
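A toy version of the two best-performing models, fitted to a single per-country, per-gender incidence series and extrapolated to 2020; the numbers below are placeholders, not WHO data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# hypothetical incidence rates per 100,000 for one country, one gender, one cancer type
years = np.array([[2012], [2014], [2016], [2018]])
incidence = np.array([41.3, 43.0, 44.8, 46.1])

for model in (LinearRegression(), SVR(kernel="linear", C=100.0)):
    model.fit(years, incidence)
    print(type(model).__name__,
          "R^2:", round(model.score(years, incidence), 3),
          "2020 forecast:", round(float(model.predict([[2020]])[0]), 1))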


2021 ◽  
pp. 1-13
Author(s):  
Nikolaos Napoleon Vlassis ◽  
Waiching Sun

Abstract Conventionally, neural network constitutive laws for path-dependent elasto-plastic solids are trained via supervised learning performed on recurrent neural networks, with the time history of strain as input and the stress as output. However, training a neural network to replicate path-dependent constitutive responses requires significantly more data due to the path dependence. This demand for diverse and abundant accurate data, as well as the lack of interpretability to guide the data generation process, could become major roadblocks for engineering applications. In this work, we attempt to simplify these training processes and improve the interpretability of the trained models by breaking down the training of material models into multiple supervised machine learning programs for elasticity, initial yielding, and hardening laws that can be conducted sequentially. To predict the pressure sensitivity and rate dependence of the plastic responses, we reformulate the Hamilton-Jacobi equation such that the yield function is parametrized in the product space spanned by the principal stress, the accumulated plastic strain, and time. To test the versatility of the neural network meta-modeling framework, we conduct multiple numerical experiments where neural networks are trained and validated against (1) data generated from known benchmark models, (2) data obtained from physical experiments, and (3) data inferred from homogenizing sub-scale direct numerical simulations of microstructures. The neural network model is also incorporated into an offline FFT-FEM model to improve the efficiency of the multiscale calculations.
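A deliberately reduced sketch of the "break the training into separate supervised problems" idea: one regressor for the elastic response and one for the hardening law, trained sequentially. The data files and network sizes are hypothetical, and the sketch omits the yield-function parametrization and Hamilton-Jacobi reformulation.

import numpy as np
from sklearn.neural_network import MLPRegressor

# hypothetical, separately generated training sets for the two sub-problems
elastic_strain = np.load("elastic_strain.npy")              # (n_samples, n_strain_components)
elastic_stress = np.load("elastic_stress.npy")              # (n_samples, n_stress_components)
accum_plastic_strain = np.load("accum_plastic_strain.npy")  # (n_samples,)
flow_stress = np.load("flow_stress.npy")                    # (n_samples,)

# step 1: elasticity, stress as a function of elastic strain
elastic_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
elastic_model.fit(elastic_strain, elastic_stress)

# step 2: hardening law, flow stress as a function of accumulated plastic strain
hardening_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
hardening_model.fit(accum_plastic_strain.reshape(-1, 1), flow_stress)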


2022 ◽  
Vol 9 (1) ◽  
pp. 1-12
Author(s):  
Sipu Hou ◽  
Zongzhen Cai ◽  
Jiming Wu ◽  
Hongwei Du ◽  
Peng Xie

It is not easy for banks to sell their term-deposit products to new clients because many factors affect customers' purchasing decisions and because banks may have difficulty identifying their target customers. To address this issue, we use different supervised machine learning algorithms to predict whether a customer will subscribe to a bank term deposit and then compare the performance of these prediction models. Specifically, the current paper employs five algorithms: Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine, and Neural Network. This paper thus contributes to the artificial intelligence and Big Data field with important evidence of the best-performing model for predicting bank term deposit subscription.
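A compact sketch of the comparison described above on bank-marketing-style data; the file name, target column, and preprocessing are assumptions.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# hypothetical bank-marketing-style file with a yes/no 'subscribed' column
df = pd.get_dummies(pd.read_csv("bank_term_deposit.csv"), drop_first=True)
X, y = df.drop(columns=["subscribed_yes"]), df["subscribed_yes"].astype(int)

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    print(name, "5-fold F1:", cross_val_score(model, X, y, cv=5, scoring="f1").mean())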


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2405
Author(s):  
Ioannis Mallidis ◽  
Volha Yakavenka ◽  
Anastasios Konstantinidis ◽  
Nikolaos Sariannidis

The paper develops a goal programming-based multi-criteria methodology for assessing different machine learning (ML) regression models under accuracy and time efficiency criteria. The developed methodology provides users with high flexibility in assessing the models, as it allows for a fast and computationally efficient sensitivity analysis of accuracy and time significance weights as well as accuracy and time significance threshold values. Four regression models were assessed, namely decision tree, random forest, support vector, and neural network regression. The developed methodology was employed to forecast the time to failure of NASA turbofans. The results reveal that decision tree regression (DTR) seems to be preferred for low accuracy weights (up to 30%) and low accuracy and time efficiency threshold values. As the accuracy weights increase, and for higher accuracy and time efficiency threshold values, random forest regression (RFR) seems to be the best choice. The preference for the RFR model, however, shifts towards the neural network for accuracy weights of 90% and higher.
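A toy illustration of the goal-programming idea: each model is scored by its weighted deviations from an accuracy goal and a time-efficiency goal, and the model with the smallest total deviation is preferred. All goals, weights, and candidate values below are illustrative, not the paper's formulation or results.

# each candidate model is described by (cross-validated accuracy, training time in seconds)
def goal_programming_score(accuracy, time_s, acc_goal, time_goal, w_acc, w_time):
    # penalise only shortfalls in accuracy and excesses in time, normalised to the goals
    acc_deviation = max(0.0, (acc_goal - accuracy) / acc_goal)
    time_deviation = max(0.0, (time_s - time_goal) / time_goal)
    return w_acc * acc_deviation + w_time * time_deviation

candidates = {
    "decision tree": (0.86, 2.0),
    "random forest": (0.91, 35.0),
    "support vector": (0.88, 120.0),
    "neural network": (0.93, 300.0),
}
for name, (acc, t) in candidates.items():
    score = goal_programming_score(acc, t, acc_goal=0.95, time_goal=10.0, w_acc=0.7, w_time=0.3)
    print(f"{name}: total weighted deviation = {score:.3f}")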


2018 ◽  
Author(s):  
Nazmul Hossain ◽  
Fumihiko Yokota ◽  
Akira Fukuda ◽  
Ashir Ahmed

BACKGROUND Predictive analytics through machine learning has been used extensively across industries, including eHealth and mHealth, for analyzing patients' health data, predicting diseases, enhancing the productivity of the technology and devices used to provide healthcare services, and so on. However, not enough studies have been conducted to predict the usage of eHealth by rural patients in developing countries. OBJECTIVE The objective of this study is to predict rural patients' use of eHealth through supervised machine learning algorithms and to propose the best-fitted model after evaluating their performance in terms of predictive accuracy. METHODS Data were collected between June and July 2016 through a field survey with a structured questionnaire from 292 randomly selected rural patients in a remote north-western sub-district of Bangladesh. Four supervised machine learning algorithms, namely logistic regression, boosted decision tree, support vector machine, and artificial neural network, were chosen for this experiment. A correlation-based feature selection technique was applied to include the most relevant but not redundant features in the model. A 10-fold cross-validation technique was applied to reduce bias and over-fitting of the data. RESULTS Logistic regression outperformed the other three algorithms with 85.9% predictive accuracy, 86.4% precision, 90.5% recall, 88.1% F-score, and an AUC of 91.5%, followed by neural network, decision tree, and support vector machine with accuracy rates of 84.2%, 82.9%, and 80.4%, respectively. CONCLUSIONS The findings of this study are expected to help eHealth practitioners select appropriate areas to serve and deal with both under-capacity and over-capacity by predicting patients' responses in advance with a certain level of accuracy and precision.
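A rough sketch of the winning pipeline, logistic regression with a simple correlation-based feature screen and 10-fold cross-validation; the survey file, column names, and correlation threshold are assumptions, and the paper's correlation-based feature selection technique may differ from this simplified screen.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# hypothetical, numerically coded survey data with a binary 'used_ehealth' outcome
df = pd.read_csv("rural_ehealth_survey.csv")
y = df["used_ehealth"]
X = df.drop(columns=["used_ehealth"])

# crude correlation-based screen: keep features whose absolute correlation with the outcome
# exceeds a threshold
corr = X.apply(lambda col: col.corr(y)).abs()
X_selected = X[corr[corr > 0.1].index]

scores = cross_validate(LogisticRegression(max_iter=1000), X_selected, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})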

