DEVELOPMENT OF SUITABLE MACHINE LEARNING MODEL FOR A CEMENT PLANT CALCINER

Author(s):  
Prateek Sharma ◽  
M V Ramachandra Rao ◽  
Dr B N Mohapatra ◽  
A Saxena

The cement industry is one of the largest CO2 emitters and is continuously working to minimize these emissions. The use of artificial intelligence (AI) in manufacturing helps reduce breakdowns and failures by avoiding frequent startups and reducing fuel fluctuations, which ultimately reduces the carbon footprint. AI offers a new mode of digital manufacturing that can increase productivity by optimizing the use of assets at a fraction of the cost. The calciner is one of the key pieces of equipment in a cement plant; it dissociates calcium carbonate into calcium oxide and carbon dioxide using heat from fuel combustion. An AI model of a calciner can provide valuable information that can be applied in real time to optimize calciner operation, resulting in fuel savings. In this study, key process parameters from three months of continuous operation were used to train various machine learning models, and the most suitable model was selected based on metrics such as RMSE and the R2 value. An artificial neural network was found to fit the calciner best. The model predicts the calciner outlet temperature with a high degree of accuracy (±2% error) when validated against real-world data. Because it is based on real-world historical data rather than on the chemical and physical processes taking place in the calciner, industries can use the model to estimate the calciner outlet temperature under varying input parameters.
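
As an illustration of the model-selection step described in this abstract, the minimal sketch below trains several regressors and compares them by RMSE and R2. The synthetic data, feature dimensions, and model settings are illustrative assumptions standing in for the plant's three-month process history, not values from the paper.

```python
# Compare candidate regressors by RMSE and R2, as in the abstract's
# model-selection step. Data below is a synthetic stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

# X: process parameters (e.g., fuel rate, feed rate, air flow);
# y: calciner outlet temperature. Replace with the plant historian export.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "ann": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.3f}")
```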

2021 ◽  
pp. 1-24
Author(s):  
Avidit Acharya ◽  
Kirk Bansak ◽  
Jens Hainmueller

Abstract We introduce a constrained priority mechanism that combines outcome-based matching from machine learning with preference-based allocation schemes common in market design. Using real-world data, we illustrate how our mechanism could be applied to the assignment of refugee families to host country locations, and kindergarteners to schools. Our mechanism allows a planner to first specify a threshold $\bar g$ for the minimum acceptable average outcome score that should be achieved by the assignment. In the refugee matching context, this score corresponds to the probability of employment, whereas in the student assignment context, it corresponds to standardized test scores. The mechanism is a priority mechanism that considers both outcomes and preferences by assigning agents (refugee families and students) based on their preferences, but subject to meeting the planner’s specified threshold. The mechanism is both strategy-proof and constrained efficient in that it always generates a matching that is not Pareto dominated by any other matching that respects the planner’s threshold.
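
As a rough illustration of the constrained priority idea, the sketch below processes agents in priority order and gives each their most-preferred location with remaining capacity, skipping options that would make the planner's minimum average outcome score $\bar g$ unattainable even under a best-case completion. This is a simplified toy, not the paper's mechanism: its optimistic feasibility check ignores capacity limits on later agents, and all names and numbers are illustrative.

```python
# Toy constrained priority assignment: preferences first, subject to a
# minimum average outcome score g_bar. Simplified sketch only.
def constrained_priority(prefs, scores, capacity, g_bar):
    # prefs[i]: locations ordered by agent i's preference
    # scores[i][loc]: predicted outcome score of agent i at location loc
    n = len(prefs)
    cap = dict(capacity)
    assignment, total = {}, 0.0
    for i in range(n):  # priority order
        for loc in prefs[i]:
            if cap.get(loc, 0) <= 0:
                continue
            # Optimistic bound: each later agent gets its best possible
            # score (capacities ignored for this toy feasibility check).
            best_later = sum(max(s.values()) for s in scores[i + 1:])
            if total + scores[i][loc] + best_later >= g_bar * n:
                assignment[i] = loc
                cap[loc] -= 1
                total += scores[i][loc]
                break  # agent may stay unassigned if nothing is feasible
    return assignment

prefs = [["A", "B"], ["A", "B"]]
scores = [{"A": 0.9, "B": 0.4}, {"A": 0.3, "B": 0.8}]
print(constrained_priority(prefs, scores, {"A": 1, "B": 1}, g_bar=0.6))
# {0: 'A', 1: 'B'}
```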


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Xiaoting Zhong ◽  
Brian Gallagher ◽  
Keenan Eves ◽  
Emily Robertson ◽  
T. Nathan Mundhenk ◽  
...  

Abstract Machine-learning (ML) techniques hold the potential of enabling efficient quantitative micrograph analysis, but the robustness of ML models with respect to real-world micrograph quality variations has not been carefully evaluated. We collected thousands of scanning electron microscopy (SEM) micrographs for molecular solid materials, in which image pixel intensities vary due to both the microstructure content and microscope instrument conditions. We then built ML models to predict the ultimate compressive strength (UCS) of consolidated molecular solids, by encoding micrographs with different image feature descriptors and training a random forest regressor, and by training an end-to-end deep-learning (DL) model. Results show that instrument-induced pixel intensity signals can affect ML model predictions in a consistently negative way. As a remedy, we explored intensity normalization techniques. It is seen that intensity normalization helps to improve micrograph data quality and ML model robustness, but microscope-induced intensity variations can be difficult to eliminate.
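
As an illustration of the intensity-normalization remedy discussed above, the sketch below rescales each micrograph to zero mean and unit variance so that instrument-induced gain and offset shifts cancel; the exact normalization used in the paper may differ.

```python
# Per-image z-score normalization: brightness/contrast shifts between
# instrument settings collapse to the same representation.
import numpy as np

def zscore_normalize(img, eps=1e-8):
    """Map a grayscale micrograph to zero mean and unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + eps)

# Two simulated captures of the same microstructure at different
# instrument gain/offset settings normalize to (nearly) identical images.
base = np.random.default_rng(1).random((64, 64))
bright = 1.8 * base + 30.0  # instrument-induced gain and offset
assert np.allclose(zscore_normalize(base), zscore_normalize(bright))
```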


Author(s):  
Xianping Du ◽  
Onur Bilgen ◽  
Hongyi Xu

Abstract Machine learning for classification has been used widely in engineering design, for example, for feasible domain recognition and hidden pattern discovery. Training an accurate machine learning model requires a large dataset; however, high computational or experimental costs are major obstacles to obtaining a large dataset for real-world problems. One possible solution is to generate a large pseudo dataset with surrogate models, which are established with a smaller set of real training data. However, it is not well understood whether the pseudo dataset benefits the classification model by providing more information or degrades machine learning performance due to the prediction errors and uncertainties introduced by the surrogate model. This paper presents a preliminary investigation of this research question. A classification-and-regression-tree (CART) model is employed to recognize the design subspaces to support design decision-making. It is implemented on the geometric design of a vehicle energy-absorbing structure based on finite element simulations. Based on a small set of real-world data obtained by simulations, a surrogate model based on Gaussian process regression is employed to generate pseudo datasets for training. The results show that the tree-based method can help recognize feasible design domains efficiently. Furthermore, the additional information provided by the surrogate model enhances the accuracy of classification. One important conclusion is that the accuracy of the surrogate model determines the quality of the pseudo dataset and, hence, the improvement in the machine learning model.
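
As a minimal sketch of the pseudo-dataset workflow described above, the code below fits a Gaussian process surrogate on a small "expensive" sample, labels a large cheap pseudo sample with it, and trains a CART-style decision tree on the combined data. The toy response function, threshold, and dimensions are illustrative assumptions standing in for the finite element simulations.

```python
# GP surrogate -> pseudo dataset -> tree classifier for feasible-domain
# recognition. All quantities below are toy stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
true_response = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # stand-in for FEA

X_real = rng.uniform(-1, 1, size=(40, 2))            # few expensive simulations
y_real = true_response(X_real)

gp = GaussianProcessRegressor().fit(X_real, y_real)  # surrogate model

X_pseudo = rng.uniform(-1, 1, size=(2000, 2))        # large cheap pseudo set
y_pseudo = gp.predict(X_pseudo)

# Feasible design = response below a threshold; pseudo labels come from
# the surrogate, so their quality depends on surrogate accuracy.
threshold = 0.5
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(np.vstack([X_real, X_pseudo]),
         np.concatenate([y_real, y_pseudo]) < threshold)
```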


2021 ◽  
Vol 14 (6) ◽  
pp. 997-1005
Author(s):  
Sandeep Tata ◽  
Navneet Potti ◽  
James B. Wendt ◽  
Lauro Beltrão Costa ◽  
Marc Najork ◽  
...  

Extracting structured information from templatic documents is an important problem with the potential to automate many real-world business workflows such as payment, procurement, and payroll. The core challenge is that such documents can be laid out in virtually infinitely different ways. A good solution to this problem is one that generalizes well not only to known templates such as invoices from a known vendor, but also to unseen ones. We developed a system called Glean to tackle this problem. Given a target schema for a document type and some labeled documents of that type, Glean uses machine learning to automatically extract structured information from other documents of that type. In this paper, we describe the overall architecture of Glean, and discuss three key data management challenges: 1) managing the quality of ground truth data, 2) generating training data for the machine learning model using labeled documents, and 3) building tools that help a developer rapidly build and improve a model for a given document type. Through empirical studies on a real-world dataset, we show that these data management techniques allow us to train a model that is over 5 F1 points better than the exact same model architecture without the techniques we describe. We argue that for such information-extraction problems, designing abstractions that carefully manage the training data is at least as important as choosing a good model architecture.
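
The abstract reports gains of over 5 F1 points. As a hedged illustration of how field-level extraction quality can be scored with F1, the sketch below compares predicted and gold field values under an assumed invoice-style schema; Glean's actual evaluation code is not described in the abstract, so everything here is an assumption.

```python
# Field-level F1 over extracted values for a templatic-document schema.
# Schema fields and values below are illustrative assumptions.
def field_f1(predicted, gold):
    """predicted/gold: dicts mapping schema field -> set of extracted values."""
    tp = sum(len(predicted.get(f, set()) & vals) for f, vals in gold.items())
    n_pred = sum(len(v) for v in predicted.values())
    n_gold = sum(len(v) for v in gold.values())
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold = {"invoice_date": {"2021-03-01"}, "total_amount": {"1,200.00"}}
pred = {"invoice_date": {"2021-03-01"}, "total_amount": {"1,200"}}
print(field_f1(pred, gold))  # 0.5: one of two field values matched exactly
```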


Author(s):  
Dwiti Krishna Bebarta ◽  
Birendra Biswal

Automated feature engineering builds predictive models that are capable of transforming raw data into features, that is, creating new features from existing ones across various datasets, and examines their effect on model performance in terms of parameters such as accuracy and efficiency while preventing data leakage. The challenge for experts is therefore to design computationally efficient and effective machine learning-based predictive models. This chapter provides an insight into the important intelligent techniques that can be utilized to enhance predictive analytics using an advanced form of the predictive model. A computationally efficient and effective machine learning model using a functional link artificial neural network (FLANN) is discussed, designed to predict business needs with a high degree of accuracy for traders and investors. The performance of the FLANN-based models is encouraging when the experimental results are analyzed using different statistical measures.
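
As a hedged sketch of the FLANN idea referenced above, the code below expands each input with trigonometric basis functions and trains a single linear layer with an LMS update; the expansion order, learning rate, and toy time series are illustrative choices, not taken from the chapter.

```python
# Functional link ANN: trigonometric expansion + single trained linear layer.
import numpy as np

def expand(x, order=2):
    """Trigonometric functional expansion of a 1-D input vector."""
    terms = [x]
    for k in range(1, order + 1):
        terms += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.concatenate(terms)

def train_flann(X, y, order=2, lr=0.01, epochs=200):
    w = np.zeros(expand(X[0], order).size)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            phi = expand(xi, order)
            w += lr * (yi - w @ phi) * phi  # LMS weight update
    return w

# Toy usage: one-step-ahead prediction of a noisy series from 3 lagged values.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.normal(size=300)
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]
w = train_flann(X, y)
print("prediction:", w @ expand(X[-1]), "actual:", y[-1])
```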


2021 ◽  
pp. 338-354
Author(s):  
Ute Schmid

With the growing number of applications of machine learning in complex real-world domains, machine learning research has to meet new requirements to deal with the imperfections of real-world data and with the legal and ethical obligations to make classifier decisions transparent and comprehensible. In this contribution, arguments for interpretable and interactive approaches to machine learning are presented. It is argued that visual explanations are often not expressive enough to capture critical information that relies on relations between different aspects or sub-concepts. Consequently, inductive logic programming (ILP) and the generation of verbal explanations from Prolog rules are advocated. Interactive learning in the context of ILP is illustrated with the Dare2Del system, which helps users manage their digital clutter. It is shown that verbal explanations overcome the explanatory one-way street from AI system to user. Interactive learning with mutual explanations allows the learning system to take into account not only class corrections but also corrections of explanations to guide learning. We propose mutual explanations as a building block for human-like computing and an important ingredient for human-AI partnership.
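
As a toy illustration of generating verbal explanations from symbolic rules, in the spirit of the Prolog-based explanations advocated above, the sketch below renders a rule body through per-predicate English templates. The rule encoding and the file-management example are illustrative assumptions, not the Dare2Del representation.

```python
# Render a rule (head :- body) as an English explanation via templates.
TEMPLATES = {
    "older_than": "{0} is older than {1}",
    "superseded_by": "{0} is superseded by {1}",
}

def verbalize(head, body, templates=TEMPLATES):
    """head: plain-text conclusion; body: list of (predicate, args) literals."""
    conditions = " and ".join(templates[p].format(*args) for p, args in body)
    return f"{head} because {conditions}."

print(verbalize("report_v1.doc is irrelevant",
                [("older_than", ("report_v1.doc", "2 years")),
                 ("superseded_by", ("report_v1.doc", "report_v2.doc"))]))
# report_v1.doc is irrelevant because report_v1.doc is older than 2 years
# and report_v1.doc is superseded by report_v2.doc.
```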


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2266 ◽  
Author(s):  
Nikolaos Sideris ◽  
Georgios Bardis ◽  
Athanasios Voulodimos ◽  
Georgios Miaoulis ◽  
Djamchid Ghazanfarpour

The constantly increasing amount and availability of urban data derived from varying sources lead to an assortment of challenges that include, among others, the consolidation, visualization, and maximal exploitation prospects of the aforementioned data. A preeminent problem affecting urban planning is the appropriate choice of location to host a particular activity (either a commercial or a common welfare service) or the correct use of an existing building or empty space. In this paper, we propose an approach to address these challenges using machine learning techniques. The proposed system combines, fuses, and merges various types of data from different sources, encodes them using a novel semantic model that can capture and utilize both low-level geometric information and higher-level semantic information, and subsequently feeds them to a random forest classifier, as well as to other supervised machine learning models for comparison. Our experimental evaluation on multiple real-world datasets, comparing the performance of several classifiers (including Feedforward Neural Networks, Support Vector Machines, Bag of Decision Trees, k-Nearest Neighbors, and Naïve Bayes), indicated the superiority of Random Forests in terms of the examined performance metrics (Accuracy, Specificity, Precision, Recall, F-measure, and G-mean).
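
For reference, the sketch below computes the cited performance metrics from a binary confusion matrix; G-mean is the geometric mean of sensitivity (recall) and specificity. The counts are made up for illustration.

```python
# Binary-classification metrics from a confusion matrix, including G-mean.
import math

def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    g_mean = math.sqrt(recall * specificity)
    return accuracy, precision, recall, specificity, f_measure, g_mean

print(metrics(tp=80, fp=10, tn=90, fn=20))  # illustrative counts
```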

