Is Domain Knowledge Necessary for Machine Learning Materials Properties?

2020 ◽  
Vol 9 (3) ◽  
pp. 221-227
Author(s):  
Ryan J. Murdock ◽  
Steven K. Kauwe ◽  
Anthony Yu-Tung Wang ◽  
Taylor D. Sparks

New methods for describing materials as vectors in order to predict their properties with machine learning are common in the field of materials informatics. However, little is known about the comparative efficacy of these methods. This work sets out to make clear which featurization methods should be used across various circumstances. Our findings include, surprisingly, that simple one-hot encoding of elements can be as effective as traditional and new descriptors when large amounts of data are available. However, in the absence of large datasets, or when the data are not fully representative, we show that domain knowledge offers advantages in predictive ability.
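The one-hot/fractional composition encoding the abstract refers to can be sketched in a few lines. The element vocabulary, formula parser, and fraction-based encoding below are illustrative assumptions, not the paper's actual featurizer.

```python
import re

# A toy element vocabulary; a real featurizer would cover the full periodic table.
ELEMENTS = ["H", "C", "N", "O", "Fe", "Si"]

def one_hot_composition(formula):
    """Encode a formula string as a fixed-length vector of element fractions.

    Each position corresponds to one element in ELEMENTS; the value is that
    element's mole fraction in the formula (0 if the element is absent).
    """
    counts = {}
    for symbol, amount in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        counts[symbol] = counts.get(symbol, 0.0) + float(amount or 1)
    total = sum(counts.values())
    return [counts.get(el, 0.0) / total for el in ELEMENTS]

vec = one_hot_composition("Fe2O3")  # 2 Fe + 3 O -> fractions 0.4 and 0.6
```

A vector like this carries no chemistry beyond element identity, which is exactly why the paper's finding that it can match domain-informed descriptors at large data sizes is surprising.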

Author(s):  
Shyamala G. Nadathur

Large datasets are regularly collected in biomedicine and healthcare (here referred to as the ‘health domain’). These datasets have some unique characteristics and problems. Therefore there is a need for methods which allow modelling in spite of the uniqueness of the datasets: methods capable of dealing with missing data, of integrating data from various sources, of explicitly indicating statistical dependence and independence, and of modelling with uncertainties. These requirements have given rise to an influx of new methods, especially from the fields of machine learning and probabilistic graphical models; in particular, Bayesian Networks (BNs), a type of graphical network model with directed links that offers a general and versatile approach to capturing and reasoning with uncertainty. In this chapter some background mathematics/statistics and a description of the relevant aspects of building the networks are given, to better understand and appreciate the potential of BNs. There are also brief discussions of their applications, their unique value and the challenges of this modelling technique for the domain. As will be seen in this chapter, with the additional advantages BNs can offer, it is not surprising that they are becoming an increasingly popular modelling tool in the health domain.
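As a minimal illustration of the kind of reasoning under uncertainty that BNs support, consider a two-node network (Disease → Test). The probabilities below are made-up illustrative numbers, not taken from the chapter.

```python
# A minimal two-node Bayesian network: Disease -> Test.
# All probabilities are invented for illustration.
p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # sensitivity, P(T+ | D)
p_pos_given_not_d = 0.05  # false-positive rate, P(T+ | ~D)

def posterior_disease_given_positive():
    """Posterior P(D | T+) by Bayes' rule over the two-node network."""
    p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
    return p_pos_given_d * p_disease / p_pos

post = posterior_disease_given_positive()  # roughly 0.16 despite a 95% sensitive test
```

Even this toy network shows the value of explicit dependence modelling: a positive result from an accurate test still yields a modest posterior when the prior is low.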


2020 ◽  
Vol 17 (3) ◽  
pp. 365-375
Author(s):  
Vasyl Kovalishyn ◽  
Diana Hodyna ◽  
Vitaliy O. Sinenko ◽  
Volodymyr Blagodatny ◽  
Ivan Semenyuta ◽  
...  

Background: Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb) bacteria. One of the main causes of mortality from TB is the problem of Mtb resistance to known drugs. Objective: The goal of this work is to identify potent small-molecule anti-TB agents by machine learning, synthesis and biological evaluation. Methods: The On-line Chemical Database and Modeling Environment (OCHEM) was used to build predictive machine learning models. Seven compounds were synthesized and tested in vitro for their antitubercular activity against H37Rv and resistant Mtb strains. Results: A set of predictive models was built with OCHEM based on a set of previously synthesized isoniazid (INH) derivatives containing a thiazole core and tested against Mtb. The predictive ability of the models was assessed by 5-fold cross-validation, resulting in balanced accuracies (BA) of 61–78% for the binary classifiers. Test set validation showed that the models could be instrumental in predicting anti-TB activity with reasonable accuracy (BA = 67–79%) within the applicability domain. The seven designed compounds were synthesized and demonstrated activity against both the H37Rv and multidrug-resistant (MDR) Mtb strains resistant to rifampicin and isoniazid. According to the acute toxicity evaluation in Daphnia magna neonates, six compounds were classified as moderately toxic (LD50 in the range of 10−100 mg/L) and one as practically harmless (LD50 in the range of 100−1000 mg/L). Conclusion: The newly identified compounds may represent a starting point for further development of therapies against Mtb. The developed models are available online at OCHEM (http://ochem.eu/article/111066) and can be used to virtually screen for potential compounds with anti-TB activity.
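The balanced accuracy (BA) reported above is the mean of sensitivity and specificity, which keeps a classifier from scoring well on an imbalanced dataset by favoring the majority class. A minimal sketch of the metric (the labels below are toy data, not the paper's):

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy for a binary classifier: the mean of
    sensitivity (recall on class 1) and specificity (recall on class 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# Example: 3 of 4 actives and 2 of 4 inactives predicted correctly -> BA 0.625.
ba = balanced_accuracy([1, 1, 1, 1, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0, 0, 1, 1])
```

In a 5-fold cross-validation like the one used here, this metric is computed on each held-out fold and the results are averaged.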


Author(s):  
Daniel R. Cassar ◽  
Saulo Martiello Mastelini ◽  
Tiago Botari ◽  
Edesio Alcobaça ◽  
André C. P. L. F. de Carvalho ◽  
...  

Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in Explainable Artificial Intelligence (XAI), a field concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
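One widely used model-agnostic interpretability method of the kind such surveys cover is permutation importance: shuffle one feature's column and measure how much a metric drops. The toy model and data below are illustrative assumptions, not drawn from the survey.

```python
import random

def permutation_importance(predict, X, y, col, metric, n_repeats=10, seed=0):
    """Model-agnostic importance of feature `col`: the average drop in the
    metric when that feature's column is shuffled, breaking its link to y."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[col] for row in X]
        rng.shuffle(shuffled)
        X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
        drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
    return sum(drops) / n_repeats

# Toy model that only looks at feature 0, so feature 1 should score 0.
predict = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y, p: sum(a == b for a, b in zip(y, p)) / len(y)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]]
y = [1, 1, 0, 0]
```

Because it treats the model as a black box, this kind of method applies to any of the complex systems the abstract describes.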


2020 ◽  
Vol 48 (10) ◽  
pp. 030006052095880
Author(s):  
Jianping Wu ◽  
Sulai Liu ◽  
Xiaoming Chen ◽  
Hongfei Xu ◽  
Yaoping Tang

Objective Colorectal cancer (CRC) is one of the most common cancers worldwide. Patient outcomes following recurrence of CRC are very poor; therefore, identifying the risk of CRC recurrence at an early stage would improve patient care. Accumulating evidence shows that autophagy plays an active role in tumorigenesis, recurrence, and metastasis. Methods We used machine learning algorithms and two regression models, univariable Cox proportional hazards and least absolute shrinkage and selection operator (LASSO), to identify 26 autophagy-related genes (ARGs) related to CRC recurrence. Results By functional annotation, these ARGs were shown to be enriched in necroptosis and apoptosis pathways. Protein–protein interaction analysis identified SQSTM1, CASP8, HSP80AB1, FADD, and MAPK9 as core genes in CRC autophagy. Of the 26 ARGs, BAX and PARP1 were regarded as having the most significant predictive ability for CRC recurrence, with a prediction accuracy of 71.1%. Conclusion These results shed light on the prediction of CRC recurrence by ARGs. Stratifying patients into recurrence risk groups by testing ARGs would be a valuable tool for early detection of CRC recurrence.
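LASSO performs gene selection by driving uninformative coefficients exactly to zero; the core operation behind this is soft-thresholding. A minimal sketch (the coefficients are made-up numbers, and this is not the paper's actual pipeline):

```python
def soft_threshold(x, lam):
    """Soft-thresholding operator used in LASSO coordinate descent:
    shrinks x toward zero by lam, and sets it exactly to zero if |x| <= lam.
    Coefficients driven to zero correspond to features (here, genes) dropped
    from the model -- this is how LASSO performs selection."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Coefficients below the penalty are eliminated; larger ones are shrunk.
coefs = [0.05, -0.3, 1.2, -0.02]
selected = [soft_threshold(c, 0.1) for c in coefs]
```

Applied across thousands of candidate genes, this mechanism is what narrows a panel down to a small recurrence signature like the 26 ARGs above.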


2021 ◽  
pp. 000370282110345
Author(s):  
Tatu Rojalin ◽  
Dexter Antonio ◽  
Ambarish Kulkarni ◽  
Randy P. Carney

Surface-enhanced Raman scattering (SERS) is a powerful technique for sensitive label-free analysis of chemical and biological samples. While much recent work has established sophisticated automation routines using machine learning and related artificial intelligence methods, these efforts have largely focused on downstream processing (e.g., classification tasks) of previously collected data. While fully automated analysis pipelines are desirable, current progress is limited by cumbersome and manually intensive sample preparation and data collection steps. Specifically, a typical lab-scale SERS experiment requires the user to evaluate the quality and reliability of the measurement (i.e., the spectra) as the data are being collected. This need for expert user intuition is a major bottleneck that limits the applicability of SERS-based diagnostics for point-of-care clinical applications, where trained spectroscopists are likely unavailable. While application-agnostic numerical approaches (e.g., signal-to-noise thresholding) are useful, there is an urgent need to develop algorithms that leverage expert user intuition and domain knowledge to simplify and accelerate data collection steps. To address this challenge, in this work, we introduce a machine learning-assisted method at the acquisition stage. We tested six common algorithms to determine which performs best in the context of spectral quality judgment. For adoption into future automation platforms, we developed an open-source Python package tailored for rapid expert user annotation to train machine learning algorithms. We expect that this new approach of using machine learning to assist in data acquisition can serve as a useful building block for point-of-care SERS diagnostic platforms.
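An application-agnostic signal-to-noise gate of the kind mentioned above might look like the following. The baseline window, noise estimate, and threshold value are illustrative assumptions, not the paper's method; the ML-assisted approach replaces such a hard rule with a classifier trained on expert-annotated spectra.

```python
def snr(spectrum):
    """Crude signal-to-noise estimate: peak height above the mean of the first
    20 points (assumed to be a signal-free baseline), divided by the standard
    deviation of that baseline."""
    baseline = spectrum[:20]
    mean_b = sum(baseline) / len(baseline)
    noise = (sum((v - mean_b) ** 2 for v in baseline) / len(baseline)) ** 0.5
    return (max(spectrum) - mean_b) / noise

def acceptable(spectrum, threshold=10.0):
    """Quality gate: keep a spectrum only if its SNR clears a fixed threshold."""
    return snr(spectrum) >= threshold
```

A fixed rule like this cannot adapt to sample-specific artifacts, which is precisely the gap expert intuition (and hence a trained classifier) fills.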


2021 ◽  
Author(s):  
Richard Büssow ◽  
Bruno Hain ◽  
Ismael Al Nuaimi

Abstract Objective and Scope Analysis of operational plant data needs experts to interpret detected anomalies, which are defined as unusual operating points. The next step on the digital transformation journey is to provide actionable insights into the data. Prescriptive Maintenance defines in advance which kind of detailed maintenance and which spare parts will be required. This paper details the requirements to improve these predictions for rotating equipment and shows the potential to integrate the outcome into an operational workflow. Methods, Procedures, Process First-principle or physics-based modelling provides additional insights into the data, since the results are directly interpretable. However, such approaches are typically assumed to be expensive to build and not scalable. Identifying and focusing on the relevant equipment to be modeled in a hybrid model, using a combination of first-principle physics and machine learning, is a successful strategy. The model is trained using a machine learning approach on historic or current real plant data, to predict conditions which have not occurred before. The better the artificial intelligence is trained, the better the prediction will be. Results, Observations, Conclusions The general aim when operating a plant is the actual use of operational data for process and maintenance optimization through advanced analytics. Typically, a data-driven central oversight function supports operations and maintenance staff. A major lesson learned is that the results of a rather simple statistical approach to detecting anomalies fall short of expectations and are too labor intensive. It is a widespread misconception that being able to deal with big data is sufficient to achieve good prediction quality for Prescriptive Maintenance. What big data companies normally lack is domain knowledge, especially on plant-critical rotating equipment.
Without domain knowledge, the relevant input into the model will have shortcomings, and the same will apply to its predictions. This paper gives an example of a refinery where the described hybrid model has been used. Novel and Additive Information First-principle models are typically expensive to build and not scalable. This hybrid-model approach, combining first-principle physics-based models with artificial intelligence and integrating them into an operational workflow, shows a new way forward.
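A "rather simple statistical approach" to anomaly detection of the kind the authors found insufficient might be a z-score detector; this sketch is an assumption for illustration, not the paper's implementation.

```python
def z_score_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean.
    A detector like this marks unusual operating points but says nothing about
    their physical cause -- the gap a hybrid physics + ML model aims to close."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]
```

Every flagged index still needs an expert to interpret it, which is exactly the labor-intensive bottleneck described above.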


2021 ◽  
Author(s):  
Thitaree Lertliangchai ◽  
Birol Dindoruk ◽  
Ligang Lu ◽  
Xi Yang

Abstract Dew point pressure (DPP) is a key variable that may be needed to predict the condensate-to-gas ratio behavior of a reservoir, along with some production/completion-related issues, and to calibrate/constrain the EOS models for integrated modeling. However, DPP is a challenging property in terms of its predictability. Recognizing the complexities, we present a state-of-the-art method for DPP prediction using advanced machine learning (ML) techniques. We compare the outcomes of our methodology with those of published empirical correlation-based approaches on two small datasets with different inputs. Our ML method noticeably outperforms the correlation-based predictors while also showing its flexibility and robustness even with small training datasets, provided various classes of fluids are represented within the datasets. We collected the condensate PVT data from public domain resources and GeoMark RFDBASE, containing dew point pressure (the target variable) and, as input variables, the compositional data (mole percentage of each component), temperature, molecular weight (MW), and the MW and specific gravity (SG) of the heptane-plus fraction. Using domain knowledge, before embarking on the study, we extensively checked the measurement quality and the outcomes using statistical techniques. We then applied advanced ML techniques to train predictive models with cross-validation to avoid overfitting the models to the small datasets. We compare our models against the best published DPP predictors based on empirical correlations. For fair comparison, the correlation-based predictors were also trained using the underlying datasets. To improve the outcomes, pseudo-critical properties and artificial proxy features derived from the generalized input data are also employed.
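Pseudo-critical properties of the kind used here as engineered features are commonly computed with Kay's mole-fraction mixing rule. The sketch below is illustrative (the component list and fractions are made up; the critical temperatures are rounded literature values), not the paper's feature pipeline.

```python
# Critical temperatures (K) for a few light components (rounded literature values).
T_CRIT = {"C1": 190.6, "C2": 305.3, "C3": 369.8}

def pseudo_critical_temperature(mole_fractions):
    """Kay's mixing rule: the pseudo-critical temperature of a mixture is the
    mole-fraction-weighted average of the components' critical temperatures."""
    return sum(z * T_CRIT[c] for c, z in mole_fractions.items())

# A lean gas: 70% methane, 20% ethane, 10% propane.
tpc = pseudo_critical_temperature({"C1": 0.7, "C2": 0.2, "C3": 0.1})
```

Features like this compress a full composition into a physically meaningful scalar, which is one way domain knowledge enters an otherwise data-driven model.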

