Bregman Divergence
Recently Published Documents


TOTAL DOCUMENTS: 87 (five years: 27)
H-INDEX: 12 (five years: 2)

Entropy, 2021, Vol 23 (12), pp. 1668
Author(s): Jan Naudts

The present paper investigates the update of an empirical probability distribution with the results of a new set of observations. The update reproduces the new observations and interpolates using prior information. The optimal update is obtained by minimizing either the Hellinger distance or the quadratic Bregman divergence. The results obtained by the two methods differ. Updates with information about conditional probabilities are considered as well.
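As a numerical sketch of the two objectives compared in the abstract (an illustration of our own, not code from the paper): with the quadratic generator F(p) = Σ p_i², the Bregman divergence reduces to the squared Euclidean distance between distributions, while the Hellinger distance compares the square roots of the probabilities, so minimizing one need not minimize the other.

```python
import numpy as np

def quadratic_bregman(p, q):
    # Bregman divergence for the generator F(p) = sum(p_i^2):
    # D(p, q) = F(p) - F(q) - <grad F(q), p - q> = ||p - q||^2
    return float(np.sum((p - q) ** 2))

def hellinger(p, q):
    # Hellinger distance between two discrete distributions
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Two example probability vectors (hypothetical data, for illustration only)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

d_bregman = quadratic_bregman(p, q)   # ~0.02
d_hellinger = hellinger(p, q)
```

The two quantities scale differently in the probabilities, which is one way to see why the optimal updates obtained from them differ.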


2021
Author(s): Lateef Olakunle Jolaoso, Pongsakorn Sunthrayuth, Prasit Cholamjiak, Yeol Je Cho

Abstract It is well known that the use of Bregman divergence is an elegant and effective technique for solving many problems in the applied sciences. In this paper, we introduce and analyze two new inertial-like algorithms with Bregman divergence for solving pseudomonotone variational inequalities in a real Hilbert space. The first algorithm combines Halpern-type iteration with the subgradient extragradient method, and the second combines Halpern-type iteration with Tseng's extragradient method. Under suitable conditions, strong convergence theorems for the algorithms are established without assuming Lipschitz continuity or sequential weak continuity of any mapping. Finally, several numerical experiments with various types of Bregman divergence are performed to illustrate the theoretical analysis. The results presented in this paper improve and generalize related works in the literature.
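To make Tseng's extragradient method concrete, here is a minimal Euclidean sketch (our own illustration, not the paper's Bregman-divergence variant: with the generator F(x) = ½‖x‖², the Bregman projection reduces to the ordinary Euclidean projection). The operator, box constraint, and step size below are hypothetical choices.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def tseng_extragradient(F, x0, lam=0.1, iters=500, lo=-1.0, hi=1.0):
    # Tseng's forward-backward-forward iteration:
    #   y_k     = P_C(x_k - lam * F(x_k))
    #   x_{k+1} = y_k - lam * (F(y_k) - F(x_k))
    x = x0.astype(float)
    for _ in range(iters):
        y = project_box(x - lam * F(x), lo, hi)
        x = y - lam * (F(y) - F(x))
    return x

# Monotone (hence pseudomonotone) affine operator F(x) = A x + b,
# with A positive definite; the VI solution on the box is x* = (-1, 1).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b

sol = tseng_extragradient(F, np.zeros(2))  # converges to x* = (-1, 1)
```

Note that each iteration needs only one projection onto the feasible set, which is the practical advantage of Tseng's scheme over the classical two-projection extragradient method.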


2021, pp. 1-14
Author(s): S. Penev, P. V. Shevchenko, W. Wu

2021, Vol 0 (0), pp. 0-0
Author(s): Mozhdeh Zandifar, Shiva Noori Saray, Jafar Tahmoresnezhad

2021, Vol 213, pp. 222-232
Author(s): Melaine C. De Oliveira, Luis M. Castro, Dipak K. Dey, Debajyoti Sinha

Entropy, 2021, Vol 23 (7), pp. 833
Author(s): Stephen G. Walker, Cristiano Villa

In this paper, we introduce a novel objective prior distribution leveraging the connections between information, divergence, and scoring rules. In particular, we do so starting from convex functions that represent the information in density functions. This provides a natural route to proper local scoring rules via Bregman divergence. Specifically, we determine the prior that solves the equation obtained by setting the score function equal to a constant. Although this in itself motivates an objective prior, the prior also minimizes a corresponding information criterion.
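As one standard instance of the convex-generator construction mentioned in the abstract (an illustration on our part, not the paper's specific prior): taking the information function φ(p) = Σ p_i log p_i (negative Shannon entropy), the induced Bregman divergence between probability vectors is exactly the Kullback-Leibler divergence, whose associated proper scoring rule is the log score.

```python
import numpy as np

def bregman_divergence(phi, grad_phi, p, q):
    # Generic Bregman divergence:
    # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
    return phi(p) - phi(q) - float(np.dot(grad_phi(q), p - q))

# Convex generator: negative Shannon entropy
phi = lambda p: float(np.sum(p * np.log(p)))
grad_phi = lambda p: np.log(p) + 1.0

def kl(p, q):
    # Kullback-Leibler divergence for discrete distributions
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example distributions
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

d = bregman_divergence(phi, grad_phi, p, q)  # equals kl(p, q)
```

The identity follows because the linear term <∇φ(q), p − q> contributes Σ (p_i − q_i) log q_i (the constant 1 cancels since both vectors sum to one), leaving Σ p_i log(p_i/q_i).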


2021, pp. 138-147
Author(s): Mohit Kumar, Bernhard Moser, Lukas Fischer, Bernhard Freudenthaler

2021, Vol 15 (2)
Author(s): Kanta Naito, Spiridon Penev
