The multiplicative constant for the Meijer-G kernel determinant

Nonlinearity, 2021, Vol. 34 (5), pp. 2837-2877
Author(s): Christophe Charlier, Jonatan Lenells, Julian Mauersberger

Author(s): Joachim Toft, Anupam Gumber, Ramesh Manna, P. K. Ratnakumar

Abstract: Let $\mathcal{H}$ be a Hilbert space of distributions on $\mathbf{R}^{d}$ which contains at least one non-zero element of the Feichtinger algebra $S_0$ and is continuously embedded in $\mathscr{D}'$. If $\mathcal{H}$ is translation and modulation invariant, also in the sense of its norm, then we prove that $\mathcal{H} = L^2$, with the same norm apart from a multiplicative constant.
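
The invariance hypothesis has a simple finite-dimensional analogue: discrete translation and modulation act unitarily on $\ell^2$, so they leave the norm unchanged. The minimal sketch below is only illustrative of that hypothesis (the discretization and names are assumptions, not part of the paper's argument).

```python
# A finite-dimensional illustration of translation/modulation invariance
# (assumption: this discrete analogue is illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # a test "signal" f

k, m = 17, 5                                 # translation step and modulation frequency
Tf = np.roll(f, k)                           # discrete translation T_k f
x = np.arange(n)
Mf = np.exp(2j * np.pi * m * x / n) * f      # discrete modulation M_m f

# Both operations are unitary on l^2, so all three norms coincide.
print(np.linalg.norm(f), np.linalg.norm(Tf), np.linalg.norm(Mf))
```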


2006, Vol. 16 (2), pp. 211-226
Author(s): Silvana Petruseva

This paper compares the efficiency of two algorithms by estimating their complexity. The agent architecture used is the Neural Network Crossbar Adaptive Array (NN-CAA), which implements a model of an emotion. The problem considered is finding the shortest path in an environment with n states, one of which is the starting state, one is the goal state, and some of which are undesirable and should be avoided. Both algorithms find a single path (one solution) efficiently, i.e. in polynomial time; one algorithm is faster than the other only by a multiplicative constant, which represents a step toward the optimality of the learning process. However, two theorems assert that finding the optimal solution (the shortest path) takes exponential time for both algorithms. It might be concluded that the concept of a subgoal is one step toward the optimality of the agent's learning process, but it should be explored further in order to obtain an efficient, polynomial-time algorithm.
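
For concreteness, the objective the two algorithms are compared on can be stated as a plain graph search: find a shortest start-to-goal path that never enters an undesirable state. The sketch below is a hedged baseline using breadth-first search, not the NN-CAA agent; the environment encoding and function name are assumptions made for illustration.

```python
# A hedged baseline sketch (assumption: the environment is a directed graph
# over n states with a start state, a goal state and a set of undesirable
# states). Plain BFS, not the NN-CAA agent from the paper; it only shows the
# "shortest path avoiding undesirable states" objective.
from collections import deque

def shortest_safe_path(transitions, start, goal, undesirable):
    """Return a shortest path from start to goal avoiding undesirable states."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in transitions.get(state, []):
            if nxt not in visited and nxt not in undesirable:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable without entering undesirable states

# Tiny example: 5 states, state 3 is undesirable.
transitions = {0: [1, 3], 1: [2, 3], 2: [4], 3: [4]}
print(shortest_safe_path(transitions, start=0, goal=4, undesirable={3}))
# -> [0, 1, 2, 4]
```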


It is shown that the boundary layer approximation to the flow of a viscous fluid past a flat plate of length l, generally valid near the plate when the Reynolds number Re is large, fails within a distance O(l Re^{-3/4}) of the trailing edge. The appropriate governing equations in this neighbourhood are the full Navier–Stokes equations. Following Imai (1966), these equations are linearized with respect to a uniform shear and are then completely solved by means of a Wiener–Hopf integral equation. The solution so obtained joins smoothly onto that of the boundary layer for a flat plate upstream of the trailing edge and for a wake downstream of the trailing edge. The contribution to the drag coefficient is found to be O(Re^{-3/4}), and the multiplicative constant is explicitly worked out for the linearized equations.
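
Writing Δx_TE for the streamwise extent of the trailing-edge region and ΔC_D for the drag-coefficient contribution (notation introduced here only for illustration), the two orders of magnitude stated above read, with the multiplicative constants left implicit:

```latex
\Delta x_{\mathrm{TE}} = O\!\left(l\,\mathit{Re}^{-3/4}\right),
\qquad
\Delta C_D = O\!\left(\mathit{Re}^{-3/4}\right),
\qquad \mathit{Re} \to \infty .
```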


Author(s): M. Moraschini, A. Savini

Abstract: Following the philosophy behind the theory of maximal representations, we introduce the volume of a Zimmer cocycle Γ × X → PO°(n, 1), where Γ is a torsion-free (non-)uniform lattice in PO°(n, 1), with n ≥ 3, and X is a suitable standard Borel probability Γ-space. Our numerical invariant extends the volume of representations for (non-)uniform lattices to measurable cocycles, and in the uniform setting it agrees with the generalized version of the Euler number of self-couplings. We prove that our volume of cocycles satisfies a Milnor–Wood type inequality in terms of the volume of the manifold Γ\ℍⁿ. Additionally, this invariant can be interpreted as a suitable multiplicative constant between bounded cohomology classes. This allows us to define a family of measurable cocycles with vanishing volume. The same interpretation enables us to characterize maximal cocycles as those cohomologous to the cocycle induced by the standard lattice embedding via a measurable map X → PO°(n, 1) with essentially constant sign.

As a by-product of our rigidity result for the volume of cocycles, we give a different proof of the mapping degree theorem. This allows us to provide a complete characterization of maps homotopic to local isometries between closed hyperbolic manifolds in terms of maximal cocycles.

In dimension n = 2, we introduce the notion of the Euler number of measurable cocycles associated to a closed surface group and we show that it extends the classic Euler number of representations. Our Euler number agrees with the generalized version of the Euler number of self-couplings up to a multiplicative constant. Imitating the techniques developed in the case of the volume, we show a Milnor–Wood type inequality whose upper bound is given by the modulus of the Euler characteristic of the associated closed surface. This gives an alternative proof of the same result for the generalized version of the Euler number of self-couplings. Finally, using the interpretation of the Euler number as a multiplicative constant between bounded cohomology classes, we characterize maximal cocycles as those which are cohomologous to the one induced by a hyperbolization.
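
The Milnor–Wood type inequality referred to above bounds the volume of a measurable cocycle σ by the hyperbolic volume of the quotient manifold; a hedged sketch of its expected shape (the precise normalization is fixed in the paper) is:

```latex
\bigl|\mathrm{Vol}(\sigma)\bigr| \;\le\; \mathrm{Vol}\!\left(\Gamma \backslash \mathbb{H}^{n}\right).
```

Maximal cocycles are then, as the name suggests, understood as those attaining this upper bound.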


2020, Vol. 29 (5), pp. 698-721
Author(s): Tao Jiang, Liana Yepremyan

Abstract: A classical result of Erdős and, independently, of Bondy and Simonovits [3] says that the maximum number of edges in an n-vertex graph not containing C_{2k}, the cycle of length 2k, is O(n^{1+1/k}). Simonovits established a corresponding supersaturation result for C_{2k}'s, showing that there exist positive constants C, c depending only on k such that every n-vertex graph G with e(G) ≥ Cn^{1+1/k} contains at least c(e(G)/v(G))^{2k} copies of C_{2k}, this number of copies being tightly achieved by the random graph (up to a multiplicative constant).

In this paper we extend Simonovits' result to a supersaturation result for r-uniform linear cycles of even length in r-uniform linear hypergraphs. Our proof is self-contained and includes the r = 2 case. As an auxiliary tool, we develop a reduction lemma from general host graphs to almost-regular host graphs that can be used for other supersaturation problems, and may therefore be of independent interest.
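
To see why the random graph achieves this count up to a multiplicative constant, one can compare the expected number of copies of C_{2k} in the Erdős–Rényi graph G(n, p) with (e(G)/v(G))^{2k}. The sketch below makes that comparison numerically; the model choice and function names are assumptions introduced for illustration.

```python
# A small numerical sketch (assumption: using G(n, p) to illustrate why the
# bound c*(e(G)/v(G))^{2k} is tight up to a multiplicative constant). The
# ratio of the two quantities depends on k but not on n or p.
from math import comb, factorial

def expected_c2k_copies(n, p, k):
    """Expected number of (unlabelled) copies of the cycle C_{2k} in G(n, p)."""
    cycles = comb(n, 2 * k) * factorial(2 * k - 1) // 2   # cycles on 2k chosen vertices
    return cycles * p ** (2 * k)

def supersaturation_benchmark(n, p, k):
    """The quantity (E[e(G)] / n)^{2k} appearing in Simonovits' bound."""
    expected_edges = comb(n, 2) * p
    return (expected_edges / n) ** (2 * k)

for n in (10**3, 10**4, 10**5):
    k = 2
    p = n ** (-1 + 1.0 / k)   # chosen so that e(G) is of order n^{1+1/k}
    ratio = expected_c2k_copies(n, p, k) / supersaturation_benchmark(n, p, k)
    print(n, ratio)   # stays roughly constant in n, as supersaturation predicts
```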


Author(s): Hua-Wen Liu, Feng Qin

By weakening the neutral element condition of semiuninorms, we introduce a new concept called weak-neutral semiuninorms (shortly, wn-semiuninorms). After analyzing their structure, we present and discuss several classes of wn-semiuninorms. In particular, based on a class of monotone unary functions that are not necessarily continuous or strictly monotone, we introduce representable wn-semiuninorms and discuss some of their properties in detail. We show that there is no idempotent proper wn-semiuninorm. Each representable wn-semiuninorm is Archimedean but not strictly monotone, and its additive generator is unique up to a positive multiplicative constant under some conditions. In the discussion of representable wn-semiuninorms, we also characterize the solutions to a class of Cauchy functional equations on a restricted domain.
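
As a point of reference for the generator statement, the classic representable uninorm is built from an additive generator g via U(x, y) = g⁻¹(g(x) + g(y)), and rescaling g by a positive constant leaves U unchanged. The sketch below illustrates only this generator mechanism; it is not the paper's wn-semiuninorm construction, whose neutral-element condition is weaker.

```python
# A hedged illustration (assumption: g(x) = ln(x / (1 - x)) and
# U(x, y) = g^{-1}(g(x) + g(y)) are the classic representable-uninorm
# construction, shown only to illustrate what "additive generator, unique up
# to a positive multiplicative constant" means).
import math

def g(x):                      # an additive generator on (0, 1)
    return math.log(x / (1.0 - x))

def g_inv(t):                  # its inverse (the logistic function)
    return 1.0 / (1.0 + math.exp(-t))

def U(x, y):                   # generated operation on (0, 1) x (0, 1)
    return g_inv(g(x) + g(y))

# Rescaling the generator by a positive constant c leaves U unchanged:
c = 3.7
def U_scaled(x, y):            # generator c*g, inverse t -> g_inv(t / c)
    return g_inv((c * g(x) + c * g(y)) / c)

print(U(0.3, 0.8), U_scaled(0.3, 0.8))   # identical values
print(U(0.3, 0.5))                        # here 0.5 acts as a neutral element
```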


2009, Vol. 18 (1-2), pp. 227-245
Author(s): Nati Linial, Adi Shraibman

This paper has two main focal points. We first consider an important class of machine learning algorithms: large margin classifiers, such as Support Vector Machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that, up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas.

In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity.

Communication is a key ingredient in many types of learning. This explains the relations between the field of learning theory and that of communication complexity [6, 10, 16, 26]. The results of this paper constitute another link in this rich web of relations. These new results have already been applied toward the solution of several open problems in communication complexity [18, 20, 29].
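
As a toy illustration of the discrepancy side of this equivalence, the sketch below brute-forces the largest rectangle imbalance of a small sign matrix under the uniform distribution. This is only a simplified proxy for the discrepancy notion used in the paper (which optimizes over all distributions on the entries), and the function name is an assumption.

```python
# A hedged toy computation: "largest combinatorial-rectangle imbalance under
# the uniform distribution" for a small sign matrix. A simplified proxy for
# discrepancy, feasible only for tiny matrices.
from itertools import combinations
import numpy as np

def uniform_rectangle_imbalance(A):
    """max over rectangles R of |sum_{(i,j) in R} A_ij|, normalized by m*n."""
    m, n = A.shape
    best = 0.0
    for r in range(1, m + 1):
        for row_set in combinations(range(m), r):
            row_sum = A[list(row_set), :].sum(axis=0)   # column sums over chosen rows
            for c in range(1, n + 1):
                for col_set in combinations(range(n), c):
                    best = max(best, abs(row_sum[list(col_set)].sum()))
    return best / (m * n)

rng = np.random.default_rng(1)
A = rng.choice([-1, 1], size=(4, 4))   # a small random sign matrix
print(A)
print(uniform_rectangle_imbalance(A))
```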


2015, Vol. 29 (2), pp. 181-189
Author(s): Babak Haji

We consider a queueing loss system with heterogeneous, skill-based servers and arbitrary service time distributions. We assume Poisson arrivals, with each arrival carrying a vector indicating which of the servers are eligible to serve it. An arrival can only be assigned to a server that is both idle and eligible. We assume each arrival is assigned to the eligible idle server that has been idle the longest, and we derive, up to a multiplicative constant, the limiting distribution of this system. We show that the limiting probabilities of the ordered list of idle servers depend on the service time distributions only through their means. Moreover, conditional on the ordered list of idle servers, the remaining service times of the busy servers are independent and have their respective equilibrium service distributions. We also provide an algorithm, based on the Gibbs sampler Markov chain Monte Carlo method, for estimating the limiting probabilities and other quantities of interest for this system.
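
A direct way to get a feel for these limiting probabilities is to simulate the loss system. The sketch below assumes exponential service times and uniformly random eligibility vectors purely for concreteness (the paper allows arbitrary service distributions and estimates the probabilities with a Gibbs-sampler MCMC rather than by simulation); it tracks the ordered list of idle servers, longest idle first.

```python
# A simulation sketch (assumptions: exponential service times and random
# eligibility sets, chosen for concreteness; parameters are arbitrary).
import random
from collections import defaultdict

def simulate(num_servers=3, arrival_rate=2.0, mean_service=(1.0, 0.8, 1.2),
             horizon=100_000.0, seed=0):
    rng = random.Random(seed)
    idle = list(range(num_servers))        # idle servers, longest-idle first
    busy = {}                              # server -> departure time
    time_in_state = defaultdict(float)     # ordered idle list -> accumulated time
    t = 0.0
    next_arrival = rng.expovariate(arrival_rate)
    while t < horizon:
        next_departure = min(busy.values(), default=float("inf"))
        t_next = min(next_arrival, next_departure)
        time_in_state[tuple(idle)] += t_next - t
        t = t_next
        if next_arrival <= next_departure:
            # Poisson arrival with a random non-empty eligibility set.
            eligible = {i for i in range(num_servers) if rng.random() < 0.5}
            if not eligible:
                eligible = {rng.randrange(num_servers)}
            chosen = next((s for s in idle if s in eligible), None)
            if chosen is not None:          # otherwise the arrival is lost
                idle.remove(chosen)
                busy[chosen] = t + rng.expovariate(1.0 / mean_service[chosen])
            next_arrival = t + rng.expovariate(arrival_rate)
        else:
            # Departure: the freed server becomes the most recently idle one.
            server = min(busy, key=busy.get)
            del busy[server]
            idle.append(server)
    total = sum(time_in_state.values())
    return {state: dt / total for state, dt in time_in_state.items()}

# Print the five most likely ordered idle lists.
for state, prob in sorted(simulate().items(), key=lambda kv: -kv[1])[:5]:
    print(state, round(prob, 4))
```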


2000, Vol. 9 (2), pp. 149-166
Author(s): Yoav Seginer

We compare the Euclidean operator norm of a random matrix with the Euclidean norms of its rows and columns. In the first part of this paper, we show that if A is a random matrix with i.i.d. zero-mean entries, then $E\|A\|^h \le K^h \bigl(E \max_i \|a_{i\bullet}\|^h + E \max_j \|a_{\bullet j}\|^h\bigr)$, where K is a constant which does not depend on the dimensions or distribution of A (h, however, does depend on the dimensions). In the second part we drop the assumption that the entries of A are i.i.d. We therefore consider the Euclidean operator norm of a random matrix A obtained from a (non-random) matrix by randomizing the signs of its entries. We show that in this case, the best inequality possible (up to a multiplicative constant) is $E\|A\|^h \le \bigl(c \log^{1/4} \min\{m, n\}\bigr)^h \bigl(E \max_i \|a_{i\bullet}\|^h + E \max_j \|a_{\bullet j}\|^h\bigr)$ (m, n the dimensions of the matrix and c a constant independent of m, n).
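
The inequality is easy to probe numerically in the randomized-signs case: for a ±1 matrix the operator norm stays within a modest factor of the sum of the largest row and column norms. A small sketch follows (the dimensions and seed are arbitrary choices made for illustration).

```python
# A numerical sketch (assumption: random +/-1 entries, i.e. a randomized-signs
# matrix as in the second part of the abstract). Compares the operator norm
# with the maximum Euclidean row and column norms.
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 500
A = rng.choice([-1.0, 1.0], size=(m, n))

op_norm = np.linalg.norm(A, 2)                  # Euclidean (spectral) operator norm
max_row = np.max(np.linalg.norm(A, axis=1))     # max_i ||a_{i.}||
max_col = np.max(np.linalg.norm(A, axis=0))     # max_j ||a_{.j}||

print(op_norm, max_row + max_col)
print(op_norm / (max_row + max_col))            # stays modest, in line with the bound
```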

