Worst Case Analysis
Recently Published Documents


Total documents: 397 (last five years: 52)

H-index: 36 (last five years: 3)

Author(s): Mohammad Javad Naderi, Austin Buchanan, Jose L. Walteros

Entropy, 2021, Vol. 23(9), p. 1116
Author(s): Ireneusz Szcześniak, Ireneusz Olszewski, Bożena Woźna-Szcześniak

We present a novel algorithm for dynamic routing with dedicated path protection which, as the presented simulation results suggest, can be efficient and exact. We present the algorithm in the setting of optical networks, but it should be applicable to other networks where services have to be protected and where network resources are finite and discrete, e.g., wireless radio or networks capable of advance resource reservation. To the best of our knowledge, this is the first algorithm for this long-standing fundamental problem that can be both efficient and exact: efficient because it can solve large problems, and exact because its results are optimal, as corroborated by simulation. We offer a worst-case analysis to argue that the search space is polynomially upper bounded. Network operations, management, and control require efficient and exact algorithms, especially now, when greater emphasis is placed on network performance, reliability, softwarization, agility, and return on investment. The proposed algorithm uses our generic Dijkstra algorithm on a search graph generated on the fly from the input graph. We corroborated the optimality of its results with brute-force enumeration for networks of up to 15 nodes. We present extensive simulation results for dedicated-path protection with signal modulation constraints in elastic optical networks of 25, 50, and 100 nodes, with 160, 320, and 640 spectrum units, and compare the bandwidth blocking probability with the commonly used edge-exclusion algorithm. In total, we performed 48,600 simulation runs with about 41 million searches.
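The abstract compares against the commonly used edge-exclusion approach: route the working path first, remove its edges, then route the backup on what remains. Below is a minimal sketch of that baseline only, not the authors' generic-Dijkstra algorithm; the toy network, edge costs, and function names are illustrative assumptions.

```python
# Hedged sketch of the edge-exclusion baseline for dedicated path protection:
# find a shortest working path, exclude its edges, then find a shortest
# backup path on the pruned graph. Illustrative only.
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by edge cost; graph is {node: {neighbor: cost}}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

def edge_exclusion_protection(graph, src, dst):
    """Working path first, then a backup path that avoids its edges."""
    working, _ = dijkstra(graph, src, dst)
    if working is None:
        return None, None
    excluded = {frozenset(e) for e in zip(working, working[1:])}
    pruned = {u: {v: w for v, w in nbrs.items()
                  if frozenset((u, v)) not in excluded}
              for u, nbrs in graph.items()}
    backup, _ = dijkstra(pruned, src, dst)
    return working, backup   # backup may be None even if a disjoint pair exists

# Illustrative 5-node network (undirected, so edges are listed both ways).
g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 1, "d": 5},
     "c": {"a": 4, "b": 1, "d": 1, "e": 3}, "d": {"b": 5, "c": 1, "e": 1},
     "e": {"c": 3, "d": 1}}
print(edge_exclusion_protection(g, "a", "e"))
```

Because the working path is fixed before the backup is sought, this heuristic can block a request even when a disjoint path pair exists, which is one reason the abstract reports bandwidth blocking probability against it.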


Author(s): Michał Dereziński, Rajiv Khanna, Michael W. Mahoney

The Column Subset Selection Problem (CSSP) and the Nyström method are among the leading tools for constructing interpretable low-rank approximations of large datasets by selecting a small but representative set of features or instances. A fundamental question in this area is: what is the cost of this interpretability, i.e., how well can a data subset of size k compete with the best rank-k approximation? We develop techniques that exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis. Our approach leads to significantly better bounds for datasets with known rates of singular value decay, e.g., polynomial or exponential decay. Our analysis also reveals an intriguing phenomenon: the cost of interpretability as a function of k may exhibit multiple peaks and valleys, which we call a multiple-descent curve. A lower bound we establish shows that this behavior is not an artifact of our analysis, but rather an inherent property of the CSSP and Nyström tasks. Finally, using the example of a radial basis function (RBF) kernel, we show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
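A hedged illustration of the quantity the abstract calls the cost of interpretability: the Frobenius error of a rank-k approximation built from k selected columns, relative to the error of the best rank-k approximation from the SVD. The synthetic matrix, the polynomial spectral decay, and the simple norm-based column selection below are assumptions for illustration, not the paper's method or bounds.

```python
# Hedged sketch: compare the CSSP error of a k-column subset with the
# optimal rank-k error from the SVD, on a matrix with polynomially
# decaying singular values. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 200 x 80 matrix with polynomially decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((200, 80)))
V, _ = np.linalg.qr(rng.standard_normal((80, 80)))
s = 1.0 / (np.arange(1, 81) ** 2)           # polynomial spectral decay
A = U @ np.diag(s) @ V.T

def cssp_error(A, cols):
    """Frobenius error of projecting A onto the span of the chosen columns."""
    C = A[:, cols]
    P = C @ np.linalg.pinv(C)               # orthogonal projector onto range(C)
    return np.linalg.norm(A - P @ A, "fro")

def best_rank_k_error(A, k):
    """Frobenius error of the optimal rank-k approximation (Eckart-Young)."""
    sv = np.linalg.svd(A, compute_uv=False)
    return np.sqrt(np.sum(sv[k:] ** 2))

for k in (2, 5, 10, 20):
    # Simple largest-norm column selection as a stand-in for a CSSP solver.
    cols = np.argsort(np.linalg.norm(A, axis=0))[::-1][:k]
    ratio = cssp_error(A, cols) / best_rank_k_error(A, k)
    print(f"k={k:2d}  CSSP/SVD error ratio = {ratio:.3f}")
```

Plotting this ratio over a fine grid of k on data with different decay rates is, in spirit, how one would look for the multiple-descent behavior the abstract describes.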


Author(s): Yu-Feng Li

Weakly supervised learning (WSL) refers to learning from a large amount of weakly supervised data. This includes i) incomplete supervision (e.g., semi-supervised learning); ii) inexact supervision (e.g., multi-instance learning); and iii) inaccurate supervision (e.g., label-noise learning). Unlike supervised learning, which typically improves with more labeled data, WSL may sometimes even degrade in performance as more weak supervision data are added. It is thus desirable to study safe WSL, which robustly improves performance with weak supervision data. In this article, we share our understanding of the problem, from in-distribution data to out-of-distribution data, and discuss possible ways to alleviate it from the aspects of worst-case analysis, ensemble learning, and bi-level optimization. We also share some open problems to inspire future research.
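As one way to make the worst-case-analysis idea concrete, the sketch below mixes a supervised baseline with a WSL model using the weight that maximizes the smallest improvement over the baseline across a set of plausible labelings; since weight 0 always achieves gain 0, the chosen combination cannot do worse than the baseline under that candidate set. The regression setup, candidate generation, and grid search are illustrative assumptions, not the article's method.

```python
# Hedged sketch of worst-case analysis for safe WSL: pick the mixing weight
# whose worst-case gain over the supervised baseline, across candidate
# "ground truths" for the unlabeled data, is largest. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 50
f_base = rng.normal(size=n)                        # supervised-only predictions
f_wsl = f_base + rng.normal(scale=0.5, size=n)     # WSL predictions (may help or hurt)

# A small set of plausible labelings of the unlabeled points.
candidates = [f_wsl + rng.normal(scale=0.3, size=n) for _ in range(5)]
candidates.append(f_base + rng.normal(scale=0.3, size=n))

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

def worst_case_gain(alpha):
    """Smallest improvement over the baseline across all candidate truths."""
    f = alpha * f_wsl + (1.0 - alpha) * f_base
    return min(mse(f_base, y) - mse(f, y) for y in candidates)

# alpha = 0 recovers the baseline (gain 0), so the maximizer never degrades
# performance under the assumed candidate set.
alphas = np.linspace(0.0, 1.0, 101)
best = max(alphas, key=worst_case_gain)
print(f"alpha = {best:.2f}, worst-case gain = {worst_case_gain(best):.4f}")
```

The same min-max pattern carries over to classification by replacing the squared-error gain with a loss or accuracy gap over candidate label assignments.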

