Node Metadata Can Produce Predictability Crossovers in Network Inference Problems

2022 ◽ Vol 12 (1) ◽ Author(s): Oscar Fajardo-Fontiveros, Roger Guimerà, Marta Sales-Pardo

2010 ◽ Vol 08 (04) ◽ pp. 661-677 ◽ Author(s): Shuhei Kimura, Yuichi Shiraishi, Mariko Okada

When we apply inference methods based on sets of differential equations to actual genetic network inference problems, we often end up with a large number of false-positive regulations. Because inferred regulations must be checked through biochemical experiments, fewer false positives are preferable. In order to reduce the number of regulations that must be checked, this study proposes a new method that assigns confidence values to all of the regulations contained in the target network. For this purpose, we combine a residual bootstrap method with an existing inference method based on linear programming machines (LPMs). Through numerical experiments on an artificial genetic network inference problem, we confirmed that most of the regulations with high confidence values are actually present in the target networks. We then used the proposed method to analyze the bacterial SOS DNA repair system and succeeded in assigning reasonable confidence values to its regulations. Although this study combines the bootstrap method with the LPM-based inference method, the proposed bootstrap approach could be combined with any method capable of inferring a genetic network from time series of gene expression levels.
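The residual-bootstrap idea is straightforward to prototype. The sketch below is a minimal illustration, not the paper's method: it substitutes ordinary least squares on finite-difference derivative estimates for the LPM learner, and the function names, the derivative estimate, and the `threshold` cut-off are illustrative assumptions.

```python
import numpy as np

def infer_weights(X, X_dot):
    """Least-squares fit of dX/dt ~= X @ W.T; W[i, j] is the inferred effect of gene j on gene i.
    (Placeholder learner standing in for the LPM-based inference method.)"""
    coef, *_ = np.linalg.lstsq(X, X_dot, rcond=None)   # coef: (n_genes, n_genes)
    return coef.T

def bootstrap_confidence(X, dt=1.0, n_boot=200, threshold=1e-2, seed=0):
    """Confidence value of each candidate regulation = fraction of residual-bootstrap
    replicates in which it is re-inferred with |weight| above `threshold` (assumed cut-off)."""
    rng = np.random.default_rng(seed)
    X_dot = np.gradient(X, dt, axis=0)                 # crude time-derivative estimate from the series
    W_hat = infer_weights(X, X_dot)
    fitted = X @ W_hat.T                               # fitted derivatives, shape (T, n_genes)
    residuals = X_dot - fitted
    counts = np.zeros_like(W_hat)
    for _ in range(n_boot):
        idx = rng.integers(0, residuals.shape[0], residuals.shape[0])
        X_dot_star = fitted + residuals[idx]           # resample residuals, rebuild pseudo-derivatives
        counts += np.abs(infer_weights(X, X_dot_star)) > threshold
    return counts / n_boot                             # entry (i, j): confidence that gene j regulates gene i
```

Given a (T × n_genes) expression matrix `X`, `bootstrap_confidence(X)` returns a matrix whose (i, j) entry approximates how often the regulation j → i survives resampling; regulations with values near 1 would be the first candidates for experimental validation.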


2018 ◽ Vol 30 (8) ◽ pp. 2245-2283 ◽ Author(s): Michiel Stock, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman

Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still achieve state-of-the-art predictive performance, but theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form, efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter all arise naturally as special cases of Kronecker kernel ridge regression, implying that these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights for assessing the advantages and limitations of existing pairwise learning methods.
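As a concrete illustration of the closed-form instantiation mentioned above, the sketch below implements plain Kronecker kernel ridge regression via the eigendecompositions of the two object kernels. It is a hedged sketch, not the paper's exact formulation: the function names, the toy data, and the single regularization parameter `lam` are assumptions, and the two-step and matrix-filter variants are not shown.

```python
import numpy as np

def kronecker_krr_fit(K, G, Y, lam=1.0):
    """Dual coefficients A for the factorized pairwise kernel k(u, u') * g(v, v').
    Rows of Y are indexed by the u-objects (kernel K), columns by the v-objects (kernel G)."""
    p, U = np.linalg.eigh(K)          # K = U diag(p) U^T
    q, V = np.linalg.eigh(G)          # G = V diag(q) V^T
    C = U.T @ Y @ V                   # rotate labels into the joint eigenbasis
    C /= np.outer(p, q) + lam         # shrink each component by 1 / (p_i * q_j + lam)
    return U @ C @ V.T                # dual coefficient matrix A

def kronecker_krr_predict(A, k_new, g_new):
    """Prediction for one new pair; k_new and g_new hold kernel evaluations
    against the training u- and v-objects, respectively."""
    return k_new @ A @ g_new

# Toy usage with random positive semidefinite kernels (illustrative data only).
rng = np.random.default_rng(0)
Xu, Xv = rng.normal(size=(20, 5)), rng.normal(size=(15, 5))
K, G = Xu @ Xu.T, Xv @ Xv.T
Y = rng.normal(size=(20, 15))
A = kronecker_krr_fit(K, G, Y, lam=0.1)
F_train = K @ A @ G                   # in-sample predictions for all training pairs
```

Because the pairwise kernel factorizes, the full (n·m) × (n·m) system never has to be formed explicitly; two eigendecompositions and a few matrix products suffice, which is what makes such closed-form instantiations efficient.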


2016 ◽ Vol 24 (4) ◽ pp. 2539-2552 ◽ Author(s): Mehdi Malboubi, Cuong Vu, Chen-Nee Chuah, Puneet Sharma
