Data-driven model development for quality prediction in forming technology

Author(s):  
Iris Kirchen ◽  
Birgit Vogel-Heuser ◽  
Philipp Hildenbrand ◽  
Robert Schulte ◽  
Manfred Vogel ◽  
...  
2018 ◽

Author(s):  
Jukka Intosalmi ◽  
Adrian C. Scott ◽  
Michelle Hays ◽  
Nicholas Flann ◽  
Olli Yli-Harja ◽  
...  

Abstract
Motivation: Multicellular entities, such as mammalian tissues or microbial biofilms, typically exhibit complex spatial arrangements that are adapted to their specific functions or environments. These structures result from intercellular signaling as well as from interaction with the environment, which together allow cells of the same genotype to differentiate into well-organized communities of diversified cells. Despite its importance, our understanding of how cell–cell and metabolic coupling produce functionally optimized structures is still limited.
Results: Here, we present a data-driven spatial framework to computationally investigate the development of one multicellular structure, yeast colonies. Using experimental growth data from homogeneous liquid media conditions, we develop and parameterize a dynamic cell state and growth model. We then use the resulting model in a coarse-grained spatial model, which we calibrate using experimental time-course data of colony growth. Throughout the model development process, we use state-of-the-art statistical techniques to handle the uncertainty of model structure and parameterization. Further, we validate the model predictions against independent experimental data and illustrate how metabolic coupling plays a central role in colony formation.
Availability: Experimental data and a computational implementation to reproduce the results are available at http://research.cs.aalto.fi/csb/software/multiscale/
Contact: [email protected], [email protected]
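The abstract does not reproduce the growth model or the calibration procedure; as a minimal sketch of the data-driven parameterization step it describes, the following fits a logistic growth curve to hypothetical time-course measurements by least squares. The growth-law form, the data values, and the scipy-based fitting are assumptions for illustration, not the authors' implementation (which is available at the URL above).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time-course colony growth measurements (hours, colony mass in a.u.);
# the actual experimental data are available at the URL given in the abstract.
t_obs = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)
m_obs = np.array([0.05, 0.12, 0.35, 0.80, 1.30, 1.55, 1.62])

def logistic_growth(t, m0, r, K):
    """Logistic growth dm/dt = r*m*(1 - m/K), written in its analytic form."""
    return K / (1 + (K / m0 - 1) * np.exp(-r * t))

# Least-squares calibration of the growth parameters against the observations.
popt, pcov = curve_fit(logistic_growth, t_obs, m_obs, p0=[0.05, 0.1, 2.0])
m0_hat, r_hat, K_hat = popt
print(f"m0={m0_hat:.3f}, r={r_hat:.3f} 1/h, K={K_hat:.3f}")

# Rough parameter uncertainty from the covariance of the fit; the paper itself
# uses more sophisticated statistical techniques to handle structural uncertainty.
print("std errors:", np.sqrt(np.diag(pcov)))
```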


2017 ◽  
Vol 162 ◽  
pp. 130-141 ◽  
Author(s):  
Bahareh Bidar ◽  
Jafar Sadeghi ◽  
Farhad Shahraki ◽  
Mir Mohammad Khalilipour

2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Xianglin Zhu ◽  
Khalil Ur Rehman ◽  
Wang Bo ◽  
Muhammad Shahzad ◽  
Ahmad Hassan

Author(s):  
Mihaela van der Schaar ◽  
Harry Hemingway

Machine learning offers an alternative to established methods for prognosis research in large and complex datasets and for delivering dynamic models of prognosis. It foregrounds the capacity to learn from large and complex data about the pathways, predictors, and trajectories of health outcomes in individuals, reflecting wider societal drives towards data-driven modelling embedded and automated within powerful computers to analyse large amounts of data. Machine learning derives algorithms that learn from data and can give the data full freedom, for example by following a pragmatic approach to developing a prognostic model: rather than choosing factors for model development in advance, the data are allowed to reveal which features are important for which predictions. This chapter introduces key machine learning concepts relevant to each of the four prognosis research types, explains where machine learning may enhance prognosis research, and highlights challenges.
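As an illustration of letting the data reveal which features matter rather than pre-selecting predictors, the sketch below fits a random forest to a synthetic stand-in for a prognosis dataset and ranks features by permutation importance. The dataset, the model choice, and the scikit-learn workflow are assumptions for illustration, not an example from the chapter itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a prognosis dataset: 1000 patients, 20 candidate
# predictors, of which only 5 actually drive the outcome.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No predictors are chosen in advance; the model receives all candidate features.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Permutation importance on held-out data indicates which features the fitted
# model actually relies on for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking[:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```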


Atmosphere ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 701
Author(s):  
Bong-Chul Seo

This study describes a framework that provides qualitative weather information on winter precipitation types using a data-driven approach. The framework incorporates the data retrieved from weather radars and the numerical weather prediction (NWP) model to account for relevant precipitation microphysics. To enable multimodel-based ensemble classification, we selected six supervised machine learning models: k-nearest neighbors, logistic regression, support vector machine, decision tree, random forest, and multi-layer perceptron. Our model training and cross-validation results based on Monte Carlo Simulation (MCS) showed that all the models performed better than our baseline method, which applies two thresholds (surface temperature and atmospheric layer thickness) for binary classification (i.e., rain/snow). Among all six models, random forest presented the best classification results for the basic classes (rain, freezing rain, and snow) and the further refinement of the snow classes (light, moderate, and heavy). Our model evaluation, which uses an independent dataset not associated with model development and learning, led to classification performance consistent with that from the MCS analysis. Based on the visual inspection of the classification maps generated for an individual radar domain, we confirmed the improved classification capability of the developed models (e.g., random forest) compared to the baseline one in representing both spatial variability and continuity.
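A minimal sketch of the two approaches compared in the study: the two-threshold rain/snow baseline and a multimodel ensemble built from the six named classifiers. The threshold values, hyperparameters, feature choices, and the soft-voting combination are illustrative assumptions; the paper's exact feature set and ensembling scheme are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def baseline_rain_snow(surface_temp_c, thickness_m,
                       temp_thresh=0.0, thickness_thresh=5400.0):
    """Two-threshold baseline: classify as snow when the surface is cold and the
    atmospheric layer thickness is small, otherwise rain (thresholds illustrative)."""
    return np.where((surface_temp_c <= temp_thresh) &
                    (thickness_m <= thickness_thresh), "snow", "rain")

# The six supervised models named in the abstract, combined here by soft voting
# as one plausible multimodel ensemble (hyperparameters are placeholders).
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=8)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)),
    ],
    voting="soft",
)
# ensemble.fit(X_train, y_train)  # X: radar + NWP features, y: precipitation type
```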


2013 ◽  
Vol 10 (1) ◽  
pp. 145-187 ◽  
Author(s):  
N. J. Mount ◽  
C. W. Dawson ◽  
R. J. Abrahart

Abstract. In this paper we address the difficult problem of gaining an internal, mechanistic understanding of a neural network river forecasting (NNRF) model. Neural network models in hydrology have long been criticised for their black-box character, which prohibits adequate understanding of their modelling mechanisms and has limited their broad acceptance by hydrologists. In response, we here present a new, data-driven mechanistic modelling (DDMM) framework that incorporates an evaluation of the legitimacy of a neural network's internal modelling mechanism as a core element in the model development process. The framework is exemplified for two NNRF modelling scenarios, and uses a novel adaptation of first-order, partial-derivative relative sensitivity analysis methods as the means by which each model's mechanistic legitimacy is explored. The results demonstrate the limitations of the standard goodness-of-fit validation procedures applied by NNRF modellers, by highlighting how the internal mechanisms of complex models that produce the best fit scores can have much lower legitimacy than simpler counterparts whose scores are only slightly inferior. The study emphasises the urgent need for better mechanistic understanding of neural network-based hydrological models and the further development of methods for elucidating their mechanisms.
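The sensitivity analysis itself is not spelled out in the abstract; the toy sketch below estimates first-order relative sensitivities of a small neural network's output to each input using central finite differences. The synthetic data, the MLPRegressor surrogate, and the finite-difference approximation are assumptions standing in for the NNRF models and the paper's own adaptation of the method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy river-forecasting setup: predict flow at t+1 from lagged flow and rainfall
# (synthetic data; the actual NNRF inputs and outputs are not reproduced here).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                      # e.g. [flow_t, flow_t-1, rain_t]
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

def relative_sensitivity(model, x, eps=1e-4):
    """First-order relative sensitivity (dy/dx_i) * (x_i / y), estimated by
    central finite differences around the operating point x."""
    y0 = model.predict(x.reshape(1, -1))[0]
    sens = []
    for i in range(x.size):
        up, dn = x.copy(), x.copy()
        up[i] += eps
        dn[i] -= eps
        dy = (model.predict(up.reshape(1, -1))[0] -
              model.predict(dn.reshape(1, -1))[0]) / (2 * eps)
        sens.append(dy * x[i] / y0)
    return np.array(sens)

# Evaluated at the mean input: a mechanistically plausible model should attribute
# most of the sensitivity to the recent flow inputs.
print(relative_sensitivity(net, X.mean(axis=0)))
```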


Author(s):  
Masatoshi Funabashi

Recently emerging data-driven citizen sciences need to harness increasing amounts of massive data of varying quality. This paper develops essential theoretical frameworks and example models, and examines their computational complexity, for interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging between subjectivity, inter-subjectivity, and objectivity. Next, we examine the database's algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexity with respect to exhaustive optimization. Conjectures of criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated using a biodiversity assessment database, a process that inevitably involves human subjectivity in the management of open complex systems.

