Parameter Space Compression Underlies Emergent Theories and Predictive Models

Science ◽  
2013 ◽  
Vol 342 (6158) ◽  
pp. 604-607 ◽  
Author(s):  
B. B. Machta ◽  
R. Chachra ◽  
M. K. Transtrum ◽  
J. P. Sethna
2018 ◽  
Vol 23 (11) ◽  
pp. 3643-3660 ◽  
Author(s):  
Laizhong Cui ◽  
Genghui Li ◽  
Zexuan Zhu ◽  
Zhong Ming ◽  
Zhenkun Wen ◽  
...  

2020 ◽  
Vol 118 (6) ◽  
pp. 1455-1465 ◽  
Author(s):  
Chieh-Ting (Jimmy) Hsu ◽  
Gary J. Brouhard ◽  
Paul François

2018 ◽  
Author(s):  
Chieh-Ting (Jimmy) Hsu ◽  
Gary J. Brouhard ◽  
Paul François

ABSTRACT
Physical models of biological systems can become difficult to interpret when they have a large number of parameters. But the models themselves actually depend on (i.e. are sensitive to) only a subset of those parameters. Rigorously identifying this subset of “stiff” parameters has been made possible by the development of parameter space compression (PSC). However, PSC has only been applied to analytically solvable physical models. We have generalized this powerful method by developing a numerical approach to PSC that can be applied to any computational model. We validated our method against analytically solvable models of a random walk with drift and of protein production and degradation. We then applied our method to an active area of biophysics research, namely a simple computational model of microtubule dynamic instability. Such models have become increasingly complex, perhaps unnecessarily. By adding two new parameters that account for prominent structural features of microtubules, we identify one that can be “compressed away” (the “seam” in the microtubule) and another that is essential to model performance (the “tapering” of microtubule ends). Furthermore, we show that the microtubule model has an underlying, low-dimensional structure that explains the vast majority of our experimental data. We argue that numerical PSC can identify the low-dimensional structure of any computational model in biophysics. The low-dimensional structure of a model is easier to interpret and identifies the mechanisms and experiments that best characterize the system.
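The stiff/sloppy distinction at the heart of PSC can be illustrated numerically. The following is a minimal sketch, not the authors' implementation: for a toy two-exponential model (a hypothetical stand-in for any computational model), the Fisher information matrix is approximated as J^T J from a finite-difference Jacobian, and its eigenvalue spectrum separates a stiff direction from a sloppy one that could be "compressed away."

```python
import numpy as np

def model(params, t):
    # Toy model: sum of two exponential decays with nearby rates
    k1, k2 = params
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def fisher_information(params, t, eps=1e-6):
    # Finite-difference Jacobian of model outputs w.r.t. parameters
    p = np.asarray(params, dtype=float)
    y0 = model(p, t)
    J = np.empty((len(t), len(p)))
    for i in range(len(p)):
        dp = p.copy()
        dp[i] += eps
        J[:, i] = (model(dp, t) - y0) / eps
    # For a least-squares cost, the FIM is J^T J (up to a noise scale)
    return J.T @ J

t = np.linspace(0.0, 5.0, 50)
fim = fisher_information([1.0, 1.1], t)
eigvals = np.linalg.eigvalsh(fim)[::-1]  # descending order
# A widely separated spectrum marks one "stiff" combination of rates
# and one "sloppy" combination the data barely constrain.
print(eigvals)
```

Because the two rates are nearly degenerate, the eigenvalues span orders of magnitude; in a real analysis the sloppy eigendirections point to the parameters (or parameter combinations) a reduced model can drop.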


2021 ◽  
Author(s):  
Leon Bobrowski

The main challenges in data mining are related to large, multi-dimensional data sets. There is a need to develop algorithms that are precise and efficient enough to deal with big-data problems. The Simplex algorithm from linear programming can be seen as an example of a successful big-data problem-solving tool. According to the fundamental theorem of linear programming, the solution of the optimization problem can be found at one of the vertices in the parameter space. The basis exchange algorithms likewise search for the optimal solution among a finite number of vertices in the parameter space. Basis exchange algorithms enable the design of complex layers of classifiers or predictive models based on a small number of multivariate data vectors.
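The vertex property invoked above can be demonstrated directly. The sketch below (an illustrative toy, not the basis exchange method itself) solves a small LP by brute-force vertex enumeration: every pair of active constraints defines a candidate vertex, and per the fundamental theorem the optimum sits at one of the feasible vertices.

```python
import numpy as np
from itertools import combinations

# Maximize 3x + 2y subject to: x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

vertices = []
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel constraints: no intersection point
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):  # keep only feasible intersections
        vertices.append(x)

# The fundamental theorem guarantees the optimum is among these vertices.
best = max(vertices, key=lambda x: float(c @ x))
print(best, c @ best)  # optimum at vertex (4, 0) with objective value 12
```

Enumerating all vertices is exponential in general; the Simplex and basis exchange algorithms instead walk between adjacent vertices (basis swaps), which is what makes them practical on large problems.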

