A Deep Learning Approach to Design and Discover Sustainable Cementitious Binders: Strategies to Learn From Small Databases and Develop Closed-form Analytical Models

2022 · Vol 8
Author(s): Taihao Han, Sai Akshay Ponduru, Rachel Cook, Jie Huang, Gaurav Sant, ...

To reduce the energy intensity and carbon footprint of Portland cement (PC), the prevailing practice among concrete technologists is to partially replace the PC in concrete with supplementary cementitious materials (SCMs): geological materials (e.g., limestone), industrial by-products (e.g., fly ash), and processed materials (e.g., calcined clay). The chemistry and content of the SCM profoundly affect PC hydration kinetics, which, in turn, dictate the evolution of the microstructure and properties of the [PC + SCM] binder. Owing to the substantial diversity of SCM compositions, the massive combinatorial spaces, and the highly nonlinear, mutually interacting processes that arise from SCM-PC interactions, state-of-the-art computational models are unable to produce a priori predictions of the hydration kinetics or properties of [PC + SCM] binders. In the past two decades, the combination of big data and machine learning (ML), commonly referred to as the fourth paradigm of science, has emerged as a promising approach to learn composition-property correlations in materials (e.g., concrete) and to capitalize on such learnings to produce a priori predictions of the properties of materials with new compositions. Notwithstanding these merits, widespread use of ML models is hindered because they: 1) require big data to learn composition-property correlations, and large databases for concrete are generally not publicly available; and 2) function as black boxes, providing little to no insight into materials laws the way theory-based analytical models do. This study presents a deep learning (DL) model capable of producing a priori, high-fidelity predictions of composition- and time-dependent hydration kinetics and phase-assemblage development in [PC + SCM] pastes. The DL model is coupled with: 1) a fast Fourier transform algorithm that reduces the dimensionality of the training datasets (e.g., kinetic datasets), thus allowing the model to learn intrinsic composition-property correlations from a small database; and 2) a thermodynamic model that constrains the predictions, thus ensuring they do not violate fundamental materials laws. The training and outcomes of the DL model are ultimately leveraged to develop a simple, easy-to-use, closed-form analytical model that predicts hydration kinetics and phase-assemblage development in [PC + SCM] pastes from their initial composition and mixture design.
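The abstract includes no code, but the FFT-based dimensionality reduction it describes is straightforward to illustrate. The sketch below is a minimal, hypothetical example, not the authors' implementation: a sampled hydration heat-flow curve is compressed to its leading Fourier coefficients, giving the DL model a short vector to learn instead of the full time series. The curve shape, coefficient count, and function names are all assumptions.

```python
import numpy as np

def compress_kinetic_curve(heat_flow, n_coeffs=16):
    """Keep only the first n_coeffs Fourier coefficients of a sampled
    heat-flow curve (a hypothetical stand-in for the paper's FFT step)."""
    spectrum = np.fft.rfft(heat_flow)
    return spectrum[:n_coeffs]  # low frequencies carry the curve's shape

def reconstruct_kinetic_curve(coeffs, n_samples):
    """Invert the compression to recover an approximate curve."""
    spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
    spectrum[:len(coeffs)] = coeffs
    return np.fft.irfft(spectrum, n=n_samples)

# Synthetic 72-hour calorimetry curve sampled hourly (illustrative only).
t = np.linspace(0.0, 72.0, 72)
heat_flow = np.exp(-((t - 10.0) / 6.0) ** 2)  # one smooth hydration peak

coeffs = compress_kinetic_curve(heat_flow)
# A real-valued network target could concatenate real and imaginary parts:
features = np.concatenate([coeffs.real, coeffs.imag])  # 32 values vs. 72
approx = reconstruct_kinetic_curve(coeffs, len(heat_flow))
print(np.max(np.abs(heat_flow - approx)))  # small reconstruction error
```

Learning truncated spectra rather than full curves shrinks the output dimensionality severalfold, which is what allows training from a small database; the thermodynamic constraint described in the abstract would act separately, as a check on the predicted phase assemblages.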

2019 · Vol 53 (3) · pp. 281-294
Author(s): Jean-Michel Foucart, Augustin Chavanne, Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisioned in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) and, for a few months, in digital model analysis (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, with respect to both digitization and segmentation. Comparing the model-analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics, which, based on deep learning and big data, should in the medium term allow the field to evolve toward more preventive and more predictive orthodontics.


2020
Author(s): Anusha Ampavathi, Vijaya Saradhi T

Big data approaches are broadly useful in the healthcare and biomedical sectors for predicting disease. For minor symptoms, it is not always feasible to consult a doctor at the hospital; big data can instead provide essential information about diseases based on the patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. By contrast, the conventional medical-care model offers structured input that demands more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets for diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. First, the dataset is normalized so that all attributes span a common range. Then weighted feature extraction is performed, in which each attribute value is multiplied by a weight function to enlarge large-scale deviations. The weight function is optimized using a combination of two meta-heuristic algorithms, the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction against existing models certifies its effectiveness across various performance measures.
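The normalization and weighted-feature steps described above can be sketched directly. The example below is a minimal, hypothetical illustration: attributes are min-max scaled to a common range and each is then multiplied by a weight, which in the paper is tuned by the JA-MVO hybrid optimizer; here the data, weights, and function names are placeholders.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each attribute (column) to [0, 1] so all features
    share a common range before weighting."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min + 1e-12)

def weighted_features(X_norm, weights):
    """Multiply each normalized attribute by its weight to enlarge
    deviations between records, per the abstract's description.
    The paper optimizes these weights with JA-MVO; here they are
    a fixed placeholder vector."""
    return X_norm * weights

# Toy patient records: rows = patients, columns = clinical attributes.
X = np.array([[120.0, 5.5, 0.8],
              [145.0, 7.1, 1.2],
              [ 98.0, 4.9, 0.6]])
w = np.array([0.9, 0.4, 0.7])  # placeholder; JA-MVO would search these

X_weighted = weighted_features(min_max_normalize(X), w)
print(X_weighted)  # features then passed to the DBN/RNN hybrid
```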


2021 · Vol 11 (1)
Author(s): Dipendra Jha, Vishu Gupta, Logan Ward, Zijiang Yang, Christopher Wolverton, ...

The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, owing to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While traditional ML techniques have become quite ubiquitous, applications of more advanced deep learning (DL) techniques remain limited, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to build deeper neural networks in a bid to boost model performance; in practice, however, naively stacking layers degrades performance due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data is available. We present a general deep learning framework based on Individual Residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property-prediction models. We find that the proposed IRNet models not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also deliver significantly (up to 47%) better model accuracy than plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
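The paper's central idea, a shortcut connection around every individual layer rather than around groups of layers, can be sketched as follows. This is a minimal, hypothetical PyTorch rendering, not the authors' released code: the layer widths, depth, BatchNorm/ReLU ordering, and shortcut projection are assumptions.

```python
import torch
import torch.nn as nn

class IndividualResidualBlock(nn.Module):
    """One fully connected layer wrapped in its own skip connection,
    so gradients can bypass every layer and very deep stacks remain
    trainable (the 'individual residual learning' idea)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.layer = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(),
        )
        # Project the shortcut when the width changes (an assumption;
        # the paper's exact shortcut handling may differ).
        self.shortcut = (nn.Identity() if in_dim == out_dim
                         else nn.Linear(in_dim, out_dim))

    def forward(self, x):
        return self.layer(x) + self.shortcut(x)

# A deep property-prediction stack over a vector-based materials
# representation (the 145-feature width is illustrative).
model = nn.Sequential(
    *[IndividualResidualBlock(145 if i == 0 else 256, 256)
      for i in range(16)],
    nn.Linear(256, 1),  # scalar property target, e.g., formation energy
)
x = torch.randn(32, 145)
print(model(x).shape)  # torch.Size([32, 1])
```

Removing the shortcuts leaves a plain stack of the same layers, which corresponds to the plain-deep-network baseline the abstract compares against.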


2020 · Vol 513 · pp. 386-396
Author(s): Mohammad Mehedi Hassan, Abdu Gumaei, Ahmed Alsanad, Majed Alrubaian, Giancarlo Fortino

Author(s): Christian N. Koyama, Manabu Watanabe, Edson E. Sano, Masato Hayashi, Izumi Nagatani, ...
