Over-Fitting and Model Tuning

2013 ◽  
pp. 61-92 ◽  
Author(s):  
Max Kuhn ◽  
Kjell Johnson


TAPPI Journal ◽  
2019 ◽  
Vol 18 (11) ◽  
pp. 679-689
Author(s):  
Cydney Rechtin ◽  
Chitta Ranjan ◽  
Anthony Lewis ◽  
Beth Ann Zarko

Packaging manufacturers are challenged to achieve consistent strength targets and maximize production while reducing costs through smarter fiber utilization, chemical optimization, energy reduction, and more. With innovative instrumentation readily accessible, mills are collecting vast amounts of data that provide ever-increasing visibility into their processes. Turning this visibility into actionable insight is key to exceeding customer expectations and reducing costs. Predictive analytics supported by machine learning can provide real-time quality measures that remain robust and accurate in the face of changing machine conditions. These adaptive quality “soft sensors” allow for more informed, on-the-fly process changes; fast change detection; and process control optimization without requiring periodic model tuning. The use of predictive modeling in the paper industry has increased in recent years; however, little attention has been given to packaging finished quality. The use of machine learning to maintain prediction relevancy under ever-changing machine conditions is novel. In this paper, we demonstrate the process of establishing real-time, adaptive quality predictions in an industry focused on reel-to-reel quality control, and we discuss the value created through the availability and use of real-time critical quality measures.
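
The adaptive quality predictions described above amount to a regression model that is updated incrementally as new lab measurements arrive, rather than retrained periodically. Below is a minimal Python sketch of that pattern using scikit-learn's SGDRegressor and partial_fit; the process variables, coefficients, and drift simulation are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical adaptive strength "soft sensor": an incremental
# regressor folds in each new lab measurement, so predictions track
# changing machine conditions without periodic model tuning.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
true_coef = np.array([2.0, -1.0, 0.5])   # assumed ground-truth relation

# Historical process data (e.g. fiber blend, refining energy, machine
# speed) with lab-tested strength values -- names are illustrative.
X_hist = rng.normal(size=(500, 3))
y_hist = X_hist @ true_coef + rng.normal(scale=0.1, size=500)

scaler = StandardScaler().fit(X_hist)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
model.partial_fit(scaler.transform(X_hist), y_hist)

# Streaming phase: predict in real time, then adapt on each new
# (delayed) lab result as machine conditions drift.
for step in range(200):
    x_new = rng.normal(size=(1, 3)) + 0.002 * step     # simulated drift
    y_pred = model.predict(scaler.transform(x_new))    # real-time estimate
    y_lab = x_new @ true_coef + rng.normal(scale=0.1)  # lab measurement
    model.partial_fit(scaler.transform(x_new), y_lab)
```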


2012 ◽  
Author(s):  
Caren Marzban ◽  
David W. Jones ◽  
Scott A. Sandgathe

2021 ◽  
Vol 83 (3) ◽  
Author(s):  
Jacob W. Brownscombe ◽  
Jonathan D. Midwood ◽  
Steven J. Cooke

Author(s):  
Raphael Sonabend ◽  
Franz J Király ◽  
Andreas Bender ◽  
Bernd Bischl ◽  
Michel Lang

Motivation: As machine learning has become increasingly popular over the last few decades, so too has the number of machine learning interfaces for implementing these models. Whilst many R libraries exist for machine learning, very few offer extended support for survival analysis. This is problematic considering its importance in fields like medicine, bioinformatics, economics, and engineering. mlr3proba provides a comprehensive machine learning interface for survival analysis and connects with mlr3’s general model tuning and benchmarking facilities to provide a systematic infrastructure for survival modeling and evaluation.
Availability: mlr3proba is available under an LGPL-3 license on CRAN and at https://github.com/mlr-org/mlr3proba, with further documentation at https://mlr3book.mlr-org.com/survival.html.
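
mlr3proba itself is an R package, so its API is not reproduced here. As a rough, language-swapped illustration of the fit/predict/evaluate survival workflow that such an interface systematizes, the sketch below uses Python's lifelines library; the dataset split and choice of metric are assumptions for demonstration.

```python
# Generic survival-analysis workflow (fit, predict risk, evaluate),
# sketched with lifelines in Python -- NOT the mlr3proba API, which
# is an R package; see the links above for its documentation.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import concordance_index

rossi = load_rossi()                       # bundled recidivism dataset
train, test = rossi.iloc[:300], rossi.iloc[300:]

# Fit a Cox proportional hazards model on the training split.
cph = CoxPHFitter()
cph.fit(train, duration_col="week", event_col="arrest")

# Evaluate discrimination on held-out data with Harrell's C-index;
# higher partial hazard means higher risk, hence the negation.
risk = cph.predict_partial_hazard(test)
print(concordance_index(test["week"], -risk, test["arrest"]))
```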


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
B. Asgari ◽  
S. A. Osman ◽  
A. Adnan

Model tuning through sensitivity analysis is a prominent procedure for assessing the structural behavior and dynamic characteristics of cable-stayed bridges. Most previous sensitivity-based model tuning methods are automatic iterative processes; however, recent studies show that the most reasonable results are achieved by applying manual methods to update the analytical models of cable-stayed bridges. This paper presents a model updating algorithm for highly redundant cable-stayed bridges that can be used as an iterative manual procedure. The updating parameters are selected through sensitivity analysis, which helps to better understand the structural behavior of the bridge. The finite element model of the Tatara Bridge is used for the numerical studies. The results of the simulations indicate the efficiency and applicability of the presented manual tuning method for updating finite element models of cable-stayed bridges. The new aspects presented in this paper regarding effective material and structural parameters and the model tuning procedure will be useful for the analysis and model updating of cable-stayed bridges.
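
To make one sensitivity-based updating step concrete, the sketch below pairs a toy two-parameter surrogate (standing in for the bridge's finite element model) with a finite-difference sensitivity matrix and a Gauss-Newton update. The surrogate, parameters, and target frequencies are invented for demonstration and do not reproduce the paper's manual procedure.

```python
# One sensitivity-based model-updating iteration, with a toy
# two-parameter surrogate standing in for a cable-stayed bridge FE
# model -- an illustrative sketch, not the paper's manual procedure.
import numpy as np

def model_frequencies(theta):
    """Toy surrogate: natural frequencies as a function of updating
    parameters (e.g. deck stiffness E, cable tension T)."""
    E, T = theta
    return np.array([0.8 * np.sqrt(E),
                     1.9 * np.sqrt(E) + 0.1 * T,
                     0.05 * T])

measured = np.array([0.88, 3.13, 0.52])  # "measured" modal frequencies
theta = np.array([1.0, 8.0])             # initial parameter estimates

for _ in range(10):
    f = model_frequencies(theta)
    residual = measured - f
    # Sensitivity matrix by forward differences, S[i, j] = df_i/dtheta_j;
    # large columns flag the parameters most worth updating.
    eps = 1e-6
    S = np.column_stack([
        (model_frequencies(theta + eps * np.eye(2)[j]) - f) / eps
        for j in range(2)
    ])
    theta = theta + np.linalg.pinv(S) @ residual  # Gauss-Newton step

print(theta)   # converges near E = 1.21, T = 10.4
```

In a manual workflow of the kind the paper advocates, the analyst would inspect the sensitivity matrix and adjust one physically meaningful parameter at a time rather than applying the automatic update line.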


2018 ◽  
Vol 19 (10) ◽  
pp. 1599-1616 ◽  
Author(s):  
Jonathan P. Conway ◽  
John W. Pomeroy ◽  
Warren D. Helgason ◽  
Nicholas J. Kinar

Forest clearings are common features of evergreen forests and produce snowpack accumulation and melt that differ from those in adjacent forests and open terrain. This study investigated the challenges in specifying the turbulent fluxes of sensible and latent heat to snowpacks in forest clearings. The snowpack in two forest clearings in the Canadian Rockies was simulated using a one-dimensional (1D) snowpack model. A trade-off was found between optimizing against measured snow surface temperature or snowmelt when choosing how to specify the turbulent fluxes. Schemes using Monin–Obukhov similarity theory tended to produce negatively biased surface temperatures, while schemes that enhanced turbulent fluxes to reduce the surface temperature bias resulted in too much melt. Uncertainty estimates from Monte Carlo experiments showed that no realistic parameter set could remove the biases in both surface temperature and melt. A simple scheme that excludes the atmospheric stability correction was required to successfully simulate surface temperature under low wind speed conditions. Nonturbulent advective fluxes and/or nonlocal sources of turbulence are thought to maintain heat exchange in these low-wind conditions. The simulation of snowmelt was improved by allowing enhanced latent heat fluxes during low-wind conditions. Caution is warranted when snowpack models are optimized on surface temperature, as model tuning may compensate for deficiencies in conceptual and numerical models of radiative, conductive, and turbulent heat exchange at the snow surface and within the snowpack. Such tuning could have large impacts on the melt rate and the timing of the snow-free transition in simulations of forest clearings within hydrological and meteorological models.
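
To illustrate why the stability correction matters so much in stable, low-wind conditions, the sketch below computes a bulk-aerodynamic sensible heat flux to snow with and without a simple Richardson-number correction. The roughness length, correction form, and example conditions are textbook-style assumptions, not the exact flux schemes compared in the paper.

```python
# Bulk-aerodynamic sensible heat flux over snow, with and without a
# simple Richardson-number stability correction -- a textbook sketch,
# not the specific flux parameterizations evaluated in the study.
import numpy as np

RHO_AIR = 1.2      # air density (kg m^-3)
CP_AIR = 1005.0    # specific heat of air (J kg^-1 K^-1)
G = 9.81           # gravity (m s^-2)
K = 0.4            # von Karman constant
Z = 2.0            # measurement height (m)
Z0 = 0.001         # snow roughness length (m), assumed value

def sensible_heat_flux(u, t_air, t_surf, stability_correction=True):
    """Sensible heat flux (W m^-2), positive toward the snow surface."""
    c_h = K**2 / np.log(Z / Z0)**2          # neutral exchange coefficient
    if stability_correction:
        # Bulk Richardson number; positive = stable stratification,
        # which suppresses turbulence (the low-wind regime above).
        ri = G * (t_air - t_surf) * (Z - Z0) / (0.5 * (t_air + t_surf) * u**2)
        if ri > 0:
            c_h *= max(1.0 - 5.0 * ri, 0.0)**2   # common stable-case form
    return RHO_AIR * CP_AIR * c_h * u * (t_air - t_surf)

# Light wind over melting snow: the corrected scheme nearly shuts the
# flux off, while the uncorrected scheme keeps heat exchange alive.
print(sensible_heat_flux(1.0, 276.15, 273.15, True))    # ~0 W m^-2
print(sensible_heat_flux(1.0, 276.15, 273.15, False))   # ~10 W m^-2
```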


2020 ◽  
Vol 56 (8) ◽  
pp. 412-420
Author(s):  
Kei HIGUCHI ◽  
Taichi IKEZAKI ◽  
Osamu KANEKO

Author(s):  
Evangelos Alevizos ◽  
Athanasios V Argyriou ◽  
Dimitris Oikonomou ◽  
Dimitrios D Alexakis

Shallow bathymetry inversion algorithms have long been applied to various types of remote sensing imagery with relative success. However, this approach requires imagery with increased radiometric resolution in the visible spectrum. Recent developments in drones and camera sensors allow current inversion techniques to be tested on new types of datasets. This study explores the bathymetric mapping capabilities of fused RGB and multispectral imagery as an alternative to costly hyperspectral sensors. Combining drone-based RGB and multispectral imagery into a single cube dataset provides the radiometric detail necessary for shallow bathymetry inversion. The technique is based on commercial and open-source software and, in contrast to other approaches, does not require reference depth measurements as input. Its robustness was tested on three coastal sites with contrasting seafloor types. Suitable end-member spectra representative of the study area's seafloor types, together with the sun zenith angle, are important parameters in model tuning. The results show good correlation (R² > 0.7) and errors of less than half a meter when compared with sonar depth data. Consequently, the integration of various types of drone-based imagery may be applied to produce centimetre-resolution bathymetry maps at low cost for small-scale shallow areas.
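
A minimal numerical sketch of the physics behind such end-member-based inversion is given below: a one-band two-flux (Beer-Lambert) water-column model inverted for depth, assuming known end-member seafloor reflectance, deep-water reflectance, and attenuation coefficient. The band, coefficient values, and single-end-member setup are simplifying assumptions, not the study's multi-band method.

```python
# One-band radiative-transfer bathymetry inversion: depth recovered
# from observed reflectance given an end-member seafloor reflectance.
# Illustrative constants, not the study's actual algorithm or values.
import numpy as np

R_INF = 0.02   # deep-water reflectance (assumed)
R_B = 0.30     # end-member seafloor reflectance, e.g. sand (assumed)
K_D = 0.15     # effective diffuse attenuation coefficient (m^-1)

def invert_depth(r_obs):
    """Invert R = R_inf + (R_b - R_inf) * exp(-2 * K_d * z) for z."""
    ratio = (r_obs - R_INF) / (R_B - R_INF)
    return -np.log(np.clip(ratio, 1e-6, None)) / (2.0 * K_D)

# Forward-simulate reflectance at known depths, then invert: no
# reference depth measurements enter the inversion itself.
z_true = np.array([0.5, 2.0, 5.0, 10.0])
r = R_INF + (R_B - R_INF) * np.exp(-2.0 * K_D * z_true)
print(invert_depth(r))   # recovers [0.5, 2.0, 5.0, 10.0]
```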

