Assessment of Analytical Orientation Prediction Models for Suspensions Containing Fibers and Spheres

2021 ◽  
Vol 5 (4) ◽  
pp. 107
Author(s):  
Bastien Dietemann ◽  
Fatih Bosna ◽  
Harald Kruggel-Emden ◽  
Torsten Kraft ◽  
Claas Bierwisch

Analytical orientation models like the Folgar-Tucker (FT) model are widely applied to predict the orientation of suspended non-spherical particles. The accuracy of these models depends on empirical model parameters. In this work, we assess how well analytical orientation models can predict the orientation of suspensions consisting not only of fibers but also of an additional second particle type in the shape of disks, which is varied in size and filling fraction. We mainly focus on the FT model, and we also compare its accuracy to that of more complex models such as the Reduced-Strain Closure (RSC), Moldflow Rotational Diffusion (MRD), and Anisotropic Rotary Diffusion (ARD) models. In our work, we address the following questions. First, can the FT model predict the orientation of suspensions despite the additional particle phase affecting the rotation of the fibers? Second, is it possible to formulate an expression for the sole Folgar-Tucker model parameter based on the suspension composition? Third, is there an advantage in choosing more complex orientation prediction models that require the adjustment of additional model parameters?
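For readers who want a concrete picture of the FT model assessed here, the minimal Python sketch below integrates the Folgar-Tucker equation for the second-order orientation tensor in a simple shear flow, using a quadratic closure for the fourth-order tensor. The interaction coefficient CI, the flow field, and the integration settings are illustrative assumptions only, not values taken from the study.

import numpy as np

def folgar_tucker_rhs(A, L, xi=1.0, CI=0.01):
    """Rate of change of the 2nd-order orientation tensor A under a
    velocity gradient L (Folgar-Tucker model, quadratic closure)."""
    D = 0.5 * (L + L.T)                              # rate-of-deformation tensor
    W = 0.5 * (L - L.T)                              # vorticity tensor
    gamma_dot = np.sqrt(2.0 * np.tensordot(D, D))    # scalar shear rate
    A4_D = A * np.tensordot(A, D)                    # quadratic closure: A4 : D ~ A (A : D)
    jeffery = (W @ A - A @ W) + xi * (D @ A + A @ D - 2.0 * A4_D)
    diffusion = 2.0 * CI * gamma_dot * (np.eye(3) - 3.0 * A)
    return jeffery + diffusion

# explicit Euler integration of an initially isotropic suspension in simple shear
A = np.eye(3) / 3.0
L = np.zeros((3, 3))
L[0, 1] = 1.0                                        # shear rate of 1/s in the xy plane
dt, steps = 1e-3, 20000
for _ in range(steps):
    A = A + dt * folgar_tucker_rhs(A, L, CI=0.01)
print(np.round(A, 3))                                # near-steady-state orientation tensor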

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Roger Ratcliff ◽  
Inhan Kang

Rafiei and Rahnev (2021) presented an analysis of an experiment in which they manipulated speed-accuracy stress and stimulus contrast in an orientation discrimination task. They argued that the standard diffusion model could not account for the patterns of data their experiment produced. However, their experiment encouraged and produced fast guesses in the higher speed-stress conditions. These fast guesses are responses with chance accuracy and response times (RTs) of less than 300 ms. We developed a simple mixture model in which fast guesses were represented by a simple normal distribution with fixed mean and standard deviation and other responses by the standard diffusion process. The model fit the whole pattern of accuracy and RTs as a function of speed-accuracy stress and stimulus contrast, including the sometimes bimodal shapes of RT distributions. In the model, speed-accuracy stress affected some model parameters while stimulus contrast affected a different one, showing selective influence. Rafiei and Rahnev's failure to fit the diffusion model was the result of their experiment driving subjects to fast guessing.
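A minimal simulation sketch of such a fast-guess mixture is given below: with some probability a trial is a chance-accuracy guess with a normally distributed RT, and otherwise a standard Wiener diffusion process produces the response. All parameter values (guess probability, drift rate, boundary, non-decision time, guess mean and SD) are illustrative assumptions rather than the fitted values reported by the authors.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(p_guess, drift, boundary=0.1, ndt=0.3,
                   guess_mu=0.25, guess_sd=0.05, dt=1e-3, s=0.1):
    """One trial from the mixture: fast guess with probability p_guess,
    otherwise a symmetric Wiener diffusion process starting at zero."""
    if rng.random() < p_guess:
        rt = max(rng.normal(guess_mu, guess_sd), 0.0)    # fast guess
        return rt, rng.random() < 0.5                    # chance accuracy
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return ndt + t, x > 0                                # upper boundary = correct

trials = [simulate_trial(p_guess=0.4, drift=0.2) for _ in range(5000)]
rts, acc = zip(*trials)
print(f"mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(acc):.3f}")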


1982 ◽  
Vol 40 (6) ◽  
pp. 417-420 ◽  
Author(s):  
Robert J. Hall ◽  
Douglas A. Greenhalgh

2018 ◽  
Vol 11 (1) ◽  
pp. 64 ◽  
Author(s):  
Kyoung-jae Kim ◽  
Kichun Lee ◽  
Hyunchul Ahn

Measuring and managing the financial sustainability of borrowers is crucial to financial institutions for their risk management. As a result, building an effective corporate financial distress prediction model has long been an important research topic. Recently, researchers have been striving to improve the accuracy of financial distress prediction models by applying various business analytics approaches, including statistical and artificial intelligence methods. Among them, support vector machines (SVMs) are becoming popular. SVMs require only small training samples and have little risk of overfitting if model parameters are properly tuned, yet they generally show high prediction accuracy because they can deal with complex nonlinear patterns. Despite these advantages, SVMs are often criticized because their architectural factors are determined by heuristics, such as the parameters of a kernel function and the subsets of appropriate features and instances. In this study, we propose globally optimized SVMs, denoted GOSVM, a novel hybrid SVM model designed to jointly optimize feature selection, instance selection, and kernel parameters. This study introduces a genetic algorithm (GA) in order to simultaneously optimize multiple heterogeneous design factors of SVMs. We apply the proposed model to a real-world case of predicting financial distress. Experiments show that the proposed model significantly improves the prediction accuracy of conventional SVMs.
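As a rough sketch of the general idea (not the authors' GOSVM implementation), the Python snippet below evolves a small population of chromosomes, each encoding a binary feature mask together with log-scaled C and gamma for an RBF SVM, and scores them by cross-validated accuracy on synthetic data. Instance selection is omitted for brevity, and all GA settings are arbitrary choices for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, random_state=1)

def fitness(chrom):
    """Cross-validated accuracy of an RBF SVM defined by one chromosome:
    a binary feature mask plus log2(C) and log2(gamma)."""
    mask, log_c, log_g = chrom
    if mask.sum() == 0:
        return 0.0
    clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_g)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def random_chrom():
    return (rng.random(X.shape[1]) < 0.5, rng.uniform(-5, 10), rng.uniform(-10, 3))

def mutate(chrom):
    mask, log_c, log_g = chrom
    mask = mask ^ (rng.random(mask.size) < 0.1)          # flip ~10% of feature bits
    return (mask, log_c + rng.normal(0, 1), log_g + rng.normal(0, 1))

population = [random_chrom() for _ in range(20)]
for _ in range(10):                                      # generations
    parents = sorted(population, key=fitness, reverse=True)[:5]   # truncation selection
    population = parents + [mutate(p) for p in parents for _ in range(3)]

best = max(population, key=fitness)
print("best cross-validated accuracy:", round(fitness(best), 3))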


2018 ◽  
Vol 6 (4) ◽  
pp. 47 ◽  
Author(s):  
Florian Schmitz ◽  
Dominik Rotter ◽  
Oliver Wilhelm

Research suggests that the relation of mental speed with working memory capacity (WMC) depends on the complexity and scoring methods of speed tasks and on the type of task used to assess capacity limits in working memory. In the present study, we included conventional binding/updating measures of WMC as well as rapid serial visual presentation paradigms. The latter allowed for a computation of the attentional blink (AB) effect, which has been argued to measure capacity limitations at the encoding stage of working memory. Mental speed was assessed with a set of tasks and scored by diverse methods, including response time (RT) based scores as well as ex-Gaussian and diffusion model parameterization. Relations of latent factors were investigated using structural equation modeling techniques. RT-based scores of mental speed yielded substantial correlations with WMC but only weak relations with the AB effect, while WMC and the AB magnitude were independent. The strength of the speed-WMC relation was shown to depend on task type. Additionally, the increase in predictive validity across RT quantiles changed across task types, suggesting that the worst performance rule (WPR) depends on task characteristics. In contrast, relations of speed with the AB effect did not change across RT quantiles. Relations of the model parameters were consistently found for the ex-Gaussian tau parameter and the diffusion model drift rate, although, depending on task type, other parameters showed plausible relations as well. The finding that characteristics of mental speed tasks determined the overall strength of relations with WMC, the occurrence of a WPR effect, and the specific pattern of relations of model parameters implies that mental speed tasks are not exchangeable measurement tools. Although they reflect a general factor of mental speed, different speed tasks impose different requirements, supporting the notion of mental speed as a hierarchical construct.
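To make one of the scoring approaches concrete, the short sketch below fits an ex-Gaussian distribution to synthetic RTs via scipy's exponnorm parameterization (shape K = tau / sigma). The generating values are made up for the example and are unrelated to the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, tau = 0.45, 0.05, 0.15                        # assumed generating values (seconds)
rts = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

# scipy's exponnorm uses K = tau / sigma as its shape parameter
K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")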


Author(s):  
R. Darin Ellis ◽  
Kentaro Kotani

A visco-elastic model of the mechanical properties of muscle was used to describe age differences in the build-up of force in isometric elbow flexion. Given information from the literature on age-related physiological changes, such as decreasing connective-tissue elasticity, one would expect changes in the mechanical properties of skeletal muscle and in the related model parameters. Force vs. time curves were obtained for 7 young (aged 21–27) and 7 old (aged 69–83) female subjects. There were significant age group differences in steady-state force level and in the best-fitting model parameters. In particular, the viscous damping element of the model plays a large role in describing the increased time to reach steady-state force levels in the older subject group. Implications of this research include incorporating parameter differences into more complex models, such as crash impact models.
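A minimal sketch of the kind of first-order visco-elastic force build-up described here is given below: the force rises toward a steady-state level with a time constant governed by viscous damping, and curve fitting recovers both parameters. The exponential rise model and all numbers (including the "young" versus "old" curves) are illustrative assumptions, not the study's measurements or its full visco-elastic model.

import numpy as np
from scipy.optimize import curve_fit

def force_buildup(t, f_ss, tau):
    """First-order rise toward a steady-state force; a larger time
    constant tau corresponds to stronger viscous damping."""
    return f_ss * (1.0 - np.exp(-t / tau))

# illustrative noisy samples for a "young" and an "old" force-time curve
t = np.linspace(0.0, 2.0, 50)
rng = np.random.default_rng(3)
curves = {"young": force_buildup(t, 120.0, 0.15) + rng.normal(0, 2, t.size),
          "old":   force_buildup(t, 90.0, 0.30) + rng.normal(0, 2, t.size)}

for label, f in curves.items():
    (f_ss, tau), _ = curve_fit(force_buildup, t, f, p0=(100.0, 0.2))
    print(f"{label}: steady-state force ~ {f_ss:.1f} N, time constant ~ {tau:.3f} s")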


2013 ◽  
Vol 5 (2) ◽  
pp. 55-77 ◽  
Author(s):  
Anthony H. Dekker

In this paper, the author explores epistemological aspects of simulation, with a particular focus on using simulations to provide recommendations to managers and other decision-makers. The author presents formal definitions of knowledge (as justified true belief) and of simulation. The author shows that a simple model, the Kuramoto model of coupled oscillators, satisfies the simulation definition (and therefore generates knowledge) through a justified mapping from the real world. The author argues that, for more complex models, such a justified mapping requires three techniques: using an appropriate and justified theoretical construct; using appropriate and justified values for model parameters; and testing or other verification processes to ensure that the mapping is correctly defined. The author illustrates these three techniques with experiments and models from the literature, including the Long House Valley model of Axtell et al., the SAFTE model of sleep, and the Segregation model of Wilensky.
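Because the Kuramoto model serves as the paper's simple worked example of a simulation, a minimal Python sketch of it follows: N phase oscillators with normally distributed natural frequencies, mean-field sinusoidal coupling of strength K, and the usual order parameter r as the output. All parameter values here are illustrative only.

import numpy as np

rng = np.random.default_rng(4)
N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)                # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)       # initial phases

for _ in range(steps):
    # each oscillator is pulled toward the phases of all others
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + coupling)

r = abs(np.mean(np.exp(1j * theta)))           # 0 = incoherent, 1 = fully synchronized
print(f"order parameter r = {r:.2f}")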


2019 ◽  
Vol 98 (10) ◽  
pp. 1088-1095 ◽  
Author(s):  
J. Krois ◽  
C. Graetz ◽  
B. Holtfreter ◽  
P. Brinkmann ◽  
T. Kocher ◽  
...  

Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models for patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed for a mean ± SD of 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by centers/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients' age and tooth loss at baseline as well as probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating that temporal validation is a valid option. No model showed higher accuracy than the no-information rate. In conclusion, none of the developed models would be useful in a clinical setting, despite high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
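The contrast between internal and external validation described above can be sketched on synthetic data as follows: within-cohort cross-validation versus training on one cohort and testing on a second, covariate-shifted cohort, for a simple and a more complex model. The two synthetic cohorts, the shift, and all settings are illustrative assumptions and are not the Kiel or Greifswald data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
# rare positive class (~8%) split into a development cohort and a shifted external cohort
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.92], random_state=0)
Xa, ya = X[:2000], y[:2000]                                      # development cohort
Xb, yb = X[2000:] + rng.normal(0, 0.5, (2000, 10)), y[2000:]     # external cohort with covariate shift

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    # internal validation: cross-validated predictions within the development cohort
    p_int = cross_val_predict(model, Xa, ya, cv=5, method="predict_proba")[:, 1]
    # external validation: train on the development cohort, test on the other cohort
    p_ext = model.fit(Xa, ya).predict_proba(Xb)[:, 1]
    print(f"{name}: internal AUC = {roc_auc_score(ya, p_int):.2f}, "
          f"external AUC = {roc_auc_score(yb, p_ext):.2f}")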

