Algorithm 349: polygamma functions with arbitrary precision [S14]

1969 ◽ Vol 12 (4) ◽ pp. 213-214 ◽ Author(s): Georges Schwachheim
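
The entry above concerns the evaluation of polygamma functions to arbitrary precision; the original published procedure is not reproduced in this listing. As a minimal sketch of the same task in modern Python, assuming the third-party mpmath library (not part of the 1969 algorithm), one can compare arbitrary-precision polygamma values against known closed forms:

```python
# Minimal sketch (assumes the mpmath library, not the original Algorithm 349 code):
# arbitrary-precision polygamma values psi^(m)(x) via mpmath.psi(m, x).
from mpmath import mp, psi, euler, pi

mp.dps = 50                      # work with 50 significant decimal digits

print(psi(0, 1))                 # digamma at 1, equals -euler
print(psi(0, 1) + euler)         # should vanish to ~50 digits
print(psi(1, 1) - pi**2 / 6)     # trigamma at 1 equals pi^2/6; difference ~0
```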

2019 ◽ Vol 2019 (1) ◽ Author(s): Li Yin ◽ Jumei Zhang ◽ XiuLi Lin

2017 ◽ Vol 13 (08) ◽ pp. 2097-2113 ◽ Author(s): Shubho Banerjee ◽ Blake Wilkerson

We study the Lambert series [Formula: see text], for all [Formula: see text]. We obtain the complete asymptotic expansion of [Formula: see text] near [Formula: see text]. Our analysis of the Lambert series yields the asymptotic forms for several related q-series: the q-gamma and q-polygamma functions, the q-Pochhammer symbol and the Jacobi theta functions. Some typical results include [Formula: see text] and [Formula: see text], with relative errors of order [Formula: see text] and [Formula: see text] respectively.
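
The specific series and expansions in this abstract are hidden behind "[Formula: see text]" placeholders, so the sketch below is only an assumed stand-in: it evaluates the simplest Lambert series, sum over n >= 1 of q^n/(1-q^n), by direct truncation and compares it with its classical leading behaviour as q approaches 1. It illustrates the kind of asymptotic comparison described in the abstract, not the authors' actual results.

```python
# Hypothetical illustration: the simplest Lambert series sum_{n>=1} q^n / (1 - q^n),
# not necessarily the series studied in the paper above.
import math

def lambert_series(q, tol=1e-16):
    """Truncated evaluation of sum_{n>=1} q**n / (1 - q**n) for 0 < q < 1."""
    total, n = 0.0, 1
    while True:
        term = q**n / (1.0 - q**n)
        total += term
        if term < tol * max(total, 1.0):
            return total
        n += 1

def leading_asymptotic(q):
    """Classical q -> 1 approximation: (gamma - log eps)/eps + 1/4 with eps = -log q."""
    eps = -math.log(q)
    return (0.5772156649015329 - math.log(eps)) / eps + 0.25

for q in (0.9, 0.99, 0.999):
    exact = lambert_series(q)
    approx = leading_asymptotic(q)
    print(f"q={q}: series={exact:.6f}  asymptotic={approx:.6f}  "
          f"rel.err={abs(exact - approx) / exact:.2e}")
```

As q moves closer to 1 the truncated sum needs many more terms, which is precisely the regime where an asymptotic expansion of the kind derived in the paper becomes the cheaper route.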


2018 ◽ Vol 40 (6) ◽ pp. C726-C747 ◽ Author(s): Fredrik Johansson ◽ Marc Mezzarobba

1979 ◽ Vol 37 (2) ◽ pp. 203-207 ◽ Author(s): C. Stuart Kelley

2018 ◽ Vol 110 (6) ◽ pp. 581-589 ◽ Author(s): Necdet Batir

2003 ◽ Vol 15 (8) ◽ pp. 1897-1929 ◽ Author(s): Barbara Hammer ◽ Peter Tiňo

Recent experimental studies indicate that recurrent neural networks initialized with “small” weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simulated by a recurrent network with a contractive transition function. Hence, initialization with small weights induces an architectural bias into learning with recurrent neural networks. This bias might have benefits from the point of view of statistical learning theory: it emphasizes one possible region of the weight space where generalization ability can be formally proved. It is well known that standard recurrent neural networks are not distribution independent learnable in the probably approximately correct (PAC) sense if arbitrary precision and inputs are considered. We prove that recurrent networks with a contractive transition function with a fixed contraction parameter fulfill the so-called distribution independent uniform convergence of empirical distances property and hence, unlike general recurrent networks, are distribution independent PAC learnable.
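
As an informal numerical companion to the contraction argument above (a sketch under assumed choices: tanh as the squashing activation and a randomly drawn recurrent weight matrix scaled to have small spectral norm, not the paper's construction), the snippet below checks that two different initial hidden states driven by the same input sequence collapse onto a single trajectory:

```python
# Hedged numerical sketch: with a tanh ("squashing") activation and a recurrent
# weight matrix W of small spectral norm, the transition h -> tanh(W h + U x + b)
# is a contraction in h, so two different initial states driven by the same
# input sequence converge to the same trajectory.
import numpy as np

rng = np.random.default_rng(0)
d_hidden, d_input, T = 8, 3, 40

W = 0.05 * rng.standard_normal((d_hidden, d_hidden))   # small recurrent weights
U = rng.standard_normal((d_hidden, d_input))
b = rng.standard_normal(d_hidden)

def step(h, x):
    """One recurrent transition with squashing (tanh) activation."""
    return np.tanh(W @ h + U @ x + b)

# Lipschitz bound in h: ||W||_2 * sup|tanh'| = ||W||_2 < 1 implies a contraction.
print("spectral norm of W:", np.linalg.norm(W, 2))

h1 = rng.standard_normal(d_hidden)    # two arbitrary initial states
h2 = rng.standard_normal(d_hidden)
for t in range(T):
    x = rng.standard_normal(d_input)  # the same input is fed to both copies
    h1, h2 = step(h1, x), step(h2, x)
    if t % 10 == 0:
        print(f"t={t:2d}  ||h1 - h2|| = {np.linalg.norm(h1 - h2):.3e}")
```

The printed distance shrinks geometrically, which mirrors the intuition that a small-weight network effectively forgets inputs beyond a bounded horizon, i.e. behaves like a definite memory machine.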

