Superquantile-Based Learning: A Direct Approach Using Gradient-Based Optimization

Author(s):  
Yassine Laguel ◽  
Jérôme Malick ◽  
Zaid Harchaoui
2016 ◽  
Vol 2 (1) ◽  
pp. 669-673 ◽  
Author(s):  
Jörn Kretschmer ◽  
Paul D. Docherty ◽  
Axel Riedlinger ◽  
Knut Möller

Abstract Mathematical models can be employed to simulate a patient’s individual physiology and can therefore be used to predict reactions to changes in therapy. To be clinically useful, such models need to be identifiable from data available at the bedside. Gradient-based methods that identify the model parameter values representing the recorded data depend heavily on the initial estimates. The proposed work applies a previously developed method to overcome this dependency when identifying a three-parameter model of gas exchange. The hierarchical method uses lower-order models related to the three-parameter model to compute valid initial estimates for the parameter identification. The approach was evaluated on 12 synthetic patients and compared to a traditional direct approach as well as a global search method. Results show that the direct approach is highly sensitive to how well the initial estimates are selected, whereas the hierarchical approach found correct parameter values in all tested patients.
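To make the hierarchical idea concrete, the sketch below fits a generic three-parameter curve (a placeholder, not the paper's gas-exchange model) by first identifying reduced one- and two-parameter versions and passing their estimates on as initial values for the full gradient-based fit; the model, function, and parameter names are illustrative assumptions only.

```python
# Minimal sketch of hierarchical initialisation for gradient-based fitting.
# Illustrative only: the model below is a generic placeholder, not the
# gas-exchange model identified in the paper.
import numpy as np
from scipy.optimize import least_squares

def model3(params, t):
    """Hypothetical three-parameter model: a + b * (1 - exp(-t / c))."""
    a, b, c = params
    return a + b * (1.0 - np.exp(-t / c))

def fit(residual_fn, x0):
    """Gradient-based least-squares fit started from x0."""
    return least_squares(residual_fn, x0).x

t = np.linspace(0.0, 10.0, 50)
y = model3([1.0, 2.0, 3.0], t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Step 1: reduced one-parameter model (offset only) gives a first estimate of a.
a0 = fit(lambda p: p[0] - y, [y.mean()])[0]

# Step 2: two-parameter model (offset + gain, time constant fixed) refines a and b.
a1, b1 = fit(lambda p: p[0] + p[1] * (1.0 - np.exp(-t / 1.0)) - y, [a0, 1.0])

# Step 3: full three-parameter fit, started from the hierarchical estimates.
a2, b2, c2 = fit(lambda p: model3(p, t) - y, [a1, b1, 1.0])
print(a2, b2, c2)
```

Because each reduced model is cheaper to fit and has fewer local minima, the final three-parameter fit starts close to the right basin instead of relying on an arbitrary initial guess.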


Author(s):  
B. Roy Frieden

Despite the skill and determination of electro-optical system designers, the images acquired with their best designs often suffer from blur and noise. The aim of an “image enhancer” such as myself is to improve these poor images, usually by digital means, so that they better resemble the true “optical object” input to the system. This problem is notoriously “ill-posed”: any direct attempt to invert the image data suffers strongly from even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negatively correlated, so that the output oscillates spatially up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
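As a hedged illustration of the ill-posedness point, the following sketch blurs a simple 1-D signal, adds a little noise, and compares naive Fourier-domain inversion with a Wiener-style regularised inverse, one classic tool from statistical communication theory; the kernel, noise level, and regularisation constant are arbitrary choices for the demo, not values from the text.

```python
# Why direct inversion is ill-posed, and how a Wiener-style filter tames
# the noise amplification (toy 1-D example, arbitrary blur and noise).
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n); x[100:140] = 1.0                 # "true object": a simple pulse
h = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
h /= h.sum()                                      # Gaussian blur kernel
H = np.fft.fft(np.fft.ifftshift(h))
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.01 * rng.normal(size=n)
Y = np.fft.fft(y)

# Direct inversion: divide by H; tiny |H| at high frequencies blows up the noise.
x_direct = np.real(np.fft.ifft(Y / H))

# Wiener-style inversion: the constant k stands in for the noise-to-signal
# power ratio (assumed known or estimated for this demo).
k = 1e-3
x_wiener = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + k)))

print("direct  max error:", np.max(np.abs(x_direct - x)))
print("wiener  max error:", np.max(np.abs(x_wiener - x)))
```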


2016 ◽  
Vol 77 (S 02) ◽  
Author(s):  
Kazimierz Niemczyk ◽  
Robert Bartoszewicz ◽  
Krzysztof Morawski ◽  
Izabela Popieluch

2007 ◽  
Vol 51 (1-2) ◽  
pp. 43
Author(s):  
Balázs Polgár ◽  
Endre Selényi

2019 ◽  
Vol 63 (5) ◽  
pp. 50401-1-50401-7 ◽  
Author(s):  
Jing Chen ◽  
Jie Liao ◽  
Huanqiang Zeng ◽  
Canhui Cai ◽  
Kai-Kuang Ma

Abstract For robust three-dimensional video transmission over error-prone channels, an efficient multiple description coding scheme for multi-view video based on the correlation of spatial polyphase transformed subsequences (CSPT_MDC_MVC) is proposed in this article. The input multi-view video sequence is first separated into four subsequences by spatial polyphase transform and then grouped into two descriptions. Because macroblocks at corresponding positions in these subsequences are correlated, the subsequences need not be coded in exactly the same way. In each description, one subsequence is coded directly by the Joint Multi-view Video Coding (JMVC) encoder and the other is classified into four sets. According to this classification, the indirectly coded subsequence selectively reuses the prediction mode and prediction vector of the corresponding directly coded subsequence, which reduces the bitrate and the coding complexity of multiple description coding for multi-view video. On the decoder side, gradient-based directional interpolation is employed to improve the side reconstruction quality. The effectiveness and robustness of the proposed algorithm are verified by experiments on the JMVC coding platform.
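The sketch below shows only the first step of such a scheme, splitting a frame into its four spatial polyphase components and pairing them into two descriptions; the particular grouping and everything downstream (JMVC coding, mode reuse, gradient-based directional interpolation) are assumed here for illustration rather than taken from the paper.

```python
# Minimal sketch of the spatial polyphase split that the scheme starts from:
# each frame is separated into four subsampled images and the four
# subsequences are paired into two descriptions. The coding and side
# reconstruction steps of CSPT_MDC_MVC are not reproduced here.
import numpy as np

def polyphase_split(frame):
    """Return the four polyphase components of a 2-D frame."""
    return (frame[0::2, 0::2], frame[0::2, 1::2],
            frame[1::2, 0::2], frame[1::2, 1::2])

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "frame"
s00, s01, s10, s11 = polyphase_split(frame)

# One plausible grouping (an assumption for this demo): diagonal pairs form
# the two descriptions, so each description still samples the whole frame.
description_1 = (s00, s11)
description_2 = (s01, s10)

# Side reconstruction from description_1 alone would interpolate the two
# missing polyphase positions, e.g. with a gradient-based directional filter.
print(s00)
print(s11)
```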


Author(s):  
Louis Kaplow

Throughout the world, the rule against price fixing is competition law's most important and least controversial prohibition. Yet there is far less consensus than meets the eye on what constitutes price fixing, and prevalent understandings conflict with the teachings of oligopoly theory that supposedly underlie modern competition policy. This book offers a fresh, in-depth exploration of competition law's horizontal agreement requirement, presents a systematic analysis of how best to address the problem of coordinated oligopolistic price elevation, and compares the resulting direct approach to the orthodox prohibition. The book elaborates the relevant benefits and costs of potential solutions, investigates how coordinated price elevation is best detected in light of the error costs associated with different types of proof, and examines appropriate sanctions. Existing literature devotes remarkably little attention to these key subjects and instead concerns itself with limiting penalties to certain sorts of interfirm communications. Challenging conventional wisdom, the book shows how this circumscribed view is less well grounded in the statutes, principles, and precedents of competition law than is a more direct, functional proscription. More important, by comparison to the communications-based prohibition, the book explains how the direct approach targets situations that involve both greater social harm and less risk of chilling desirable behavior—and is also easier to apply.


Author(s):  
Yaniv Aspis ◽  
Krysia Broda ◽  
Alessandra Russo ◽  
Jorge Lobo

We introduce a novel approach for the computation of stable and supported models of normal logic programs in continuous vector spaces by a gradient-based search method. Specifically, the application of the immediate consequence operator of a program reduct can be computed in a vector space. To do this, Herbrand interpretations of a propositional program are embedded as 0-1 vectors in $\mathbb{R}^N$ and program reducts are represented as matrices in $\mathbb{R}^{N \times N}$. Using these representations we prove that the underlying semantics of a normal logic program is captured through matrix multiplication and a differentiable operation. As supported and stable models of a normal logic program can now be seen as fixed points in a continuous space, non-monotonic deduction can be performed using an optimisation process such as Newton's method. We report the results of several experiments using synthetically generated programs that demonstrate the feasibility of the approach and highlight how different parameter values can affect the behaviour of the system.
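As a much-simplified sketch of this vector-space view, the snippet below encodes a tiny definite program as a matrix, represents interpretations as 0-1 vectors, and iterates the immediate consequence operator via matrix multiplication followed by a hard threshold; the paper's exact embedding, reduct handling, and Newton-style fixed-point search are not reproduced, and the program and encoding here are illustrative assumptions.

```python
# Much-simplified sketch: a tiny definite program's immediate consequence
# operator T_P computed by matrix multiplication plus a hard threshold.
import numpy as np

atoms = ["p", "q", "r", "true"]            # "true" is a helper atom for facts
# Rules: q.        (encoded as q :- true)
#        r :- q.
#        p :- q, r.
# Row i encodes the body of the rule for atom i, normalised by body size,
# so the row product reaches 1 exactly when the whole body is satisfied.
M = np.array([
    [0.0, 0.5, 0.5, 0.0],   # p :- q, r
    [0.0, 0.0, 0.0, 1.0],   # q :- true
    [0.0, 1.0, 0.0, 0.0],   # r :- q
    [0.0, 0.0, 0.0, 1.0],   # keep the helper "true" fixed at 1
])

def t_p(x):
    """One application of the immediate consequence operator."""
    return (M @ x >= 1.0 - 1e-9).astype(float)

x = np.array([0.0, 0.0, 0.0, 1.0])          # empty interpretation (+ true)
for _ in range(4):                           # iterate to the least fixed point
    x = t_p(x)
print(dict(zip(atoms, x)))                   # p, q, r (and "true") all reach 1.0
```

Replacing the hard threshold with a steep sigmoid would make the operator differentiable, which is what allows fixed points to be sought with gradient-based or Newton-type updates as described above.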

