fixed constant: Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 31)
H-INDEX: 12 (five years: 2)

2021, Vol 13 (3), pp. 1-16
Author(s): Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, et al.

Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I, k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k + g(I). Here, g(I) is usually a lower bound on the minimum size of a solution. Since its introduction in 1999 for Max Sat and Max Cut (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above (or, rather, times) a guarantee: given an instance (I, k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k · g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, which is the most natural guarantee for this problem, and provide a fixed-parameter algorithm. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε > 0, multiplicative parameterization above g(I)^{1+ε} of Long Cycle yields para-NP-hardness, so our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of fixed-parameter algorithms as well as kernelization algorithms for additional problems parameterized multiplicatively above girth.
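The multiplicative decision question can be illustrated concretely on a small graph. Below is a minimal brute-force sketch (ours, not the paper's fixed-parameter algorithm, and only feasible for tiny inputs): compute the girth g(I) by BFS, then ask whether a simple cycle of length at least k · g(I) exists.

```python
from collections import deque
from itertools import permutations

def girth(adj):
    """Length of a shortest cycle in an undirected graph, via BFS from
    every vertex; returns inf for a forest."""
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

def has_cycle_of_length_at_least(adj, target):
    """Brute force over simple cycles (target >= 3; tiny graphs only)."""
    n = len(adj)
    for length in range(target, n + 1):
        for perm in permutations(range(n), length):
            if perm[0] != min(perm):  # fix the rotation to cut duplicates
                continue
            if all(perm[i + 1] in adj[perm[i]] for i in range(length - 1)) \
                    and perm[0] in adj[perm[-1]]:
                return True
    return False
```

For a hexagon with chord (0, 3), the girth is 4, a cycle of length 1 · 4 exists, but no cycle of length 2 · 4 does, so the instance (I, 2) is a no-instance of the multiplicative question.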


2021, Vol 181 (2-3), pp. 99-127
Author(s): Viliam Geffert, Zuzana Bednárová

We show that, for automata using a finite number of counters, the minimal space required for accepting a nonregular language is (log n)^ε. This holds for weak space bounds on the size of their counters, for real-time and one-way versions, and for nondeterministic and alternating versions of these automata. The same holds for two-way automata, independent of whether they work with strong or weak space bounds, and of whether they are deterministic, nondeterministic, or alternating. (Here ε denotes an arbitrarily small, but fixed, constant; the "space" refers to the values stored in the counters, rather than to the lengths of their binary representations.) On the other hand, we show that the minimal space required for accepting a nonregular language is n^ε for multicounter automata with strong space bounds, both for real-time and one-way versions, independent of whether they are deterministic, nondeterministic, or alternating, and also for real-time and one-way deterministic multicounter automata with weak space bounds. All these bounds are optimal both for unary and general nonregular languages. However, for automata equipped with only one counter, it was known that one-way nondeterministic automata cannot recognize any unary nonregular language at all, even if the size of the counter is not restricted; here, with a weak space bound of log n, we present a real-time nondeterministic automaton recognizing a binary nonregular language.


2021, Vol 2021 (1)
Author(s): Yusuf I. Suleiman, Poom Kumam, Habib ur Rehman, Wiyada Kumam

He (J. Inequal. Appl. 2012, Article ID 162) introduced the proximal point CQ algorithm (PPCQ) for solving the split equilibrium problem (SEP). However, the PPCQ converges only weakly to a solution of the SEP and is restricted to monotone bifunctions. In addition, the step-size used in the PPCQ is a fixed constant μ in the interval $(0, \frac{1}{\| A \|^{2}})$. This often leads to excessive numerical computation in each iteration, which may affect the applicability of the PPCQ. In order to overcome these intrinsic drawbacks, we propose a robust step-size $\{ \mu _{n} \}_{n=1}^{\infty }$ which does not require computation of $\| A \|$, and apply an adaptive step-size rule on $\{ \mu _{n} \}_{n=1}^{\infty }$ so that it adjusts itself in accordance with the movement of the associated components of the algorithm in each iteration. Then, we introduce a self-adaptive extragradient-CQ algorithm (SECQ) for solving the SEP and prove that our proposed SECQ converges strongly to a solution of the SEP with more general pseudomonotone equilibrium bifunctions. Finally, we present a preliminary numerical test demonstrating that our SECQ outperforms the PPCQ.
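The drawback of the fixed step-size is that it needs the operator norm $\|A\|$ up front. A norm-free adaptive rule of the kind common in self-adaptive extragradient methods can be sketched as follows; this is a hypothetical stand-in for the paper's rule, not its exact update.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Fixed step-size of the PPCQ: any mu in (0, 1/||A||^2), which requires
# the spectral norm of A before the iteration starts.
op_norm = np.linalg.norm(A, 2)      # largest singular value of A
mu_fixed = 0.5 / op_norm**2

def adaptive_step(mu_prev, x, y, c=0.5):
    """Norm-free update: shrink the step whenever ||A(x - y)|| is large
    relative to ||x - y||, so that mu_n * ||A(x - y)|| <= c * ||x - y||."""
    num = np.linalg.norm(x - y)
    den = np.linalg.norm(A @ (x - y))
    return min(mu_prev, c * num / den) if den > 0 else mu_prev
```

The sequence produced this way is non-increasing and bounded away from zero, which is the typical ingredient in strong-convergence proofs for such methods.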


2021, Vol 15
Author(s): Isam Al-Darabsah, Liang Chen, Wilten Nicola, Sue Ann Campbell

The human brain constitutes one of the most advanced networks produced by nature, consisting of billions of neurons communicating with each other. However, this communication is not in real-time, with different communication delays occurring between neurons in different brain areas. Here, we investigate the impacts of these delays by modeling large interacting neural circuits as neural-field systems, which model the bulk activity of populations of neurons. By using a Master Stability Function analysis combined with numerical simulations, we find that delays (1) may actually stabilize brain dynamics by temporarily preventing the onset of oscillatory and pathologically synchronized dynamics and (2) may enhance or diminish synchronization depending on the underlying eigenvalue spectrum of the connectivity matrix. Real eigenvalues with large magnitudes result in increased synchronizability, while complex eigenvalues with large magnitudes and positive real parts yield a decrease in synchronizability in the delayed vs. instantaneously coupled case. This result applies to networks with fixed, constant delays and was robust to networks with heterogeneous delays. In the case of real brain networks, where the eigenvalues are predominantly real owing to the nearly symmetric nature of these weight matrices, biologically plausible small delays are likely to increase synchronization rather than decrease it.


DOI 10.37236/9510, 2021, Vol 28 (2)
Author(s): Max Hahn-Klimroth, Giulia Maesaka, Yannick Mogge, Samuel Mohr, Olaf Parczyk

In the model of randomly perturbed graphs we consider the union of a deterministic graph $\mathcal{G}_\alpha$ with minimum degree $\alpha n$ and the binomial random graph $\mathbb{G}(n,p)$. This model was introduced by Bohman, Frieze, and Martin and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in $\mathbb{G}(n,p)$. In this note we extend this result in $\mathcal{G}_\alpha\cup\mathbb{G}(n,p)$ to sparser graphs with $\alpha=o(1)$. More precisely, for any $\varepsilon>0$ and $\alpha \colon \mathbb{N} \mapsto (0,1)$ we show that a.a.s. $\mathcal{G}_\alpha\cup \mathbb{G}(n,\beta /n)$ is Hamiltonian, where $\beta = -(6 + \varepsilon) \log(\alpha)$. If $\alpha>0$ is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin and if $\alpha=O(1/n)$ the random part $\mathbb{G}(n,p)$ is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost spanning embeddings into $\mathbb{G}(n,p)$.
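The stated edge density can be checked numerically; a minimal sketch (the function name `beta` is ours):

```python
import math

def beta(alpha, eps=0.1):
    """Edge-probability scale p = beta/n with beta = -(6 + eps) * log(alpha)."""
    return -(6 + eps) * math.log(alpha)

# Constant alpha: beta is a constant, recovering the Bohman-Frieze-Martin
# regime. For alpha = O(1/n), beta grows like log n, so p = beta/n is on
# the order of the G(n,p) Hamiltonicity threshold and the random part
# alone suffices for a Hamilton cycle.
```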


Mathematics, 2021, Vol 9 (9), pp. 934
Author(s): Long Teng

In this work, we extend the Heston stochastic volatility model by including a time-dependent correlation driven by isospectral flows instead of a constant correlation, motivated by the fact that the correlation between, e.g., financial products and financial institutions is hardly a fixed constant. We apply different numerical methods, including the method of backward stochastic differential equations (BSDEs), for a fast computation of the extended Heston model. An example of calibration to market data illustrates that our extended Heston model can provide a better volatility smile than the Heston model with other considered extensions.
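A time-dependent correlation slots directly into a standard Euler-Maruyama simulation of the Heston model, as sketched below. Note the paper drives the correlation by isospectral flows; the sinusoidal `rho(t)` here is a hypothetical stand-in used only to show where a time dependence enters.

```python
import math
import random

def simulate_heston(s0=100.0, v0=0.04, kappa=2.0, theta=0.04, xi=0.3,
                    r=0.0, rho=lambda t: -0.5 + 0.3 * math.sin(t),
                    T=1.0, n=252, seed=42):
    """One Euler-Maruyama path of the Heston model, with a time-dependent
    correlation rho(t) between the asset and variance Brownian motions."""
    random.seed(seed)
    dt = T / n
    s, v = s0, v0
    for i in range(n):
        t = i * dt
        z1 = random.gauss(0.0, 1.0)
        z2 = random.gauss(0.0, 1.0)
        rh = rho(t)                                    # correlation at time t
        w1 = z1
        w2 = rh * z1 + math.sqrt(1.0 - rh * rh) * z2   # correlated increment
        vp = max(v, 0.0)                               # full truncation scheme
        s += r * s * dt + math.sqrt(vp) * s * w1 * math.sqrt(dt)
        v += kappa * (theta - vp) * dt + xi * math.sqrt(vp) * w2 * math.sqrt(dt)
    return s
```

Passing `rho=lambda t: -0.5` recovers the constant-correlation Heston model, so the extension is a strict generalization at the simulation level.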


2021, Vol 3 (2), pp. 19-22
Author(s): Jiří Stávek

We have newly interpreted the Rydberg constant R∞ as the Gaussian curvature: the ratio of the 4π electron spin rotation to the area on the Gauss–Bohr sphere travelled by that electron. The Rydberg constant for hydrogen, RH, was newly derived and can be experimentally tested and compared with the value of RH derived from the reduced mass. The de Broglie electron on the helical path embedded on the Gauss–Bohr sphere was projected as two shadows: the real shadow Re[cos(t)] and the imaginary shadow Im[i sin(t)]. This model differs from Schrödinger's famous quantum wave description in its physical interpretation. The wave amplitude is here interpreted as the distance of the shadow from the Gauss–Bohr sphere. Moreover, we have newly inserted into the wave equation the curvature and torsion of that de Broglie helix. One very interesting result of this model is the estimation of the constant c, the speed of light, with three additional significant figures. We have divided the very precise CODATA 2018 value for R∞ expressed in frequency units by the CODATA 1986 value for R∞ expressed in wavenumber units. Based on these precise spectroscopic data we might increase the accuracy of the constant c to twelve significant figures.


2021, Vol 30 (3), pp. 13-16
Author(s): Dong-Hoon LEE, Kee-Suk HONG

We discuss the candela (cd), the SI unit of luminous intensity, and its relation to single-photon technology. Currently, the definition of the candela is based on the radiant flux in units of watts (W) with the fixed constant Kcd, and its primary standard is implemented electrically. Recent advances in the generation and detection of single photons indicate that photon-counting techniques with very small uncertainties of less than 1 ppm will become available in the near future. Thus, single-photon technology will allow luminous intensity to be defined simply in terms of the number of photons counted rather than the power measured in watts.
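The photon-counting arithmetic behind this proposal is simple; a sketch using the exact 2019 SI value of the Planck constant and the 540 THz reference frequency at which Kcd is defined:

```python
# Photon-rate arithmetic behind photon-counting photometry.
h = 6.62607015e-34        # Planck constant, J s (exact since the 2019 SI)
f = 540e12                # reference frequency of the candela definition, Hz
K_cd = 683                # lm/W, the fixed constant in the candela definition

E_photon = h * f                    # energy per photon, J (about 3.6e-19 J)
photons_per_watt = 1.0 / E_photon   # photons per second in a 1 W beam
```

At this frequency, one watt corresponds to roughly 2.8 × 10^18 photons per second, which is why counting individual photons to ppm-level uncertainty would suffice to realize the unit.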


2021, Vol 3 (4)
Author(s): Mohammed Salih Mahdi, Nidaa Falih Hassan, Ghassan H. Abdul-Majeed

In recent years, the Internet has developed at a remarkable pace. Rather than linking only personal computers, mobile phones, and wearable devices, it has grown into a web binding real-world physical objects, a new notion labeled the Internet of Things (IoT). This concept is used in many domains such as education, health care, agriculture, and commerce. IoT devices are equipped with batteries so that they are independent of mains power; consequently, their working time is limited by the capacity of these batteries. In many IoT applications, the data of IoT devices are extremely critical and should be encrypted. Current encryption approaches achieve a high level of security through arithmetically complex operations. However, these operations cause problems with efficiency and power consumption. The ChaCha cipher is one such approach, which has recently attracted attention due to its deployment in several applications by Google. In the present study, a new stream cipher procedure (called Super ChaCha) is proposed, which imposes low duty cycles for securing data on IoT devices. The proposed algorithm is an improved revision of the standard ChaCha algorithm with increased resistance to cryptanalysis. The modification focuses on the rotation procedure, which has been changed from a fixed constant to a variable constant based on a random value. Also, the inputs of the cipher change from the columns form followed by the diagonals form to a zigzag form followed by an alternate form, providing improved diffusion in comparison with the standard ChaCha. Security results illustrate that Super ChaCha requires 2^512 possible keys to be broken by a brute-force attack. Furthermore, the randomness of Super ChaCha successfully passed the five benchmark tests and the NIST test suite.
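For reference, the standard ChaCha quarter-round with its fixed rotation constants (16, 12, 8, 7) can be sketched as below. Super ChaCha, per the abstract, replaces these with variable amounts derived from a random value; the `rots` parameter here is our illustrative hook for that idea, not the paper's exact mechanism.

```python
MASK = 0xFFFFFFFF  # work on 32-bit words

def rotl(x, n):
    """Rotate a 32-bit word left by n bits (0 < n < 32)."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d, rots=(16, 12, 8, 7)):
    """ChaCha quarter-round: four add-xor-rotate steps on four words.
    The default rots are the fixed constants of standard ChaCha; a
    variable-rotation variant would pass different amounts here."""
    r1, r2, r3, r4 = rots
    a = (a + b) & MASK; d = rotl(d ^ a, r1)
    c = (c + d) & MASK; b = rotl(b ^ c, r2)
    a = (a + b) & MASK; d = rotl(d ^ a, r3)
    c = (c + d) & MASK; b = rotl(b ^ c, r4)
    return a, b, c, d
```

With the default constants this reproduces the RFC 8439 quarter-round test vector, which is a quick way to check any reimplementation before experimenting with variable rotations.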


2021, Vol 9
Author(s): Steven Heilman, Alex Tarter

Using the calculus of variations, we prove the following structure theorem for noise-stable partitions: a partition of n-dimensional Euclidean space into m disjoint sets of fixed Gaussian volumes that maximise their noise stability must be $(m-1)$-dimensional, if $m-1\leq n$. In particular, the maximum noise stability of a partition of m sets in $\mathbb{R}^{n}$ of fixed Gaussian volumes is constant for all n satisfying $n\geq m-1$. From this result, we obtain: (i) A proof of the plurality is stablest conjecture for three candidate elections, for all correlation parameters $\rho$ satisfying $0<\rho<\rho_{0}$, where $\rho_{0}>0$ is a fixed constant (that does not depend on the dimension n), when each candidate has an equal chance of winning. (ii) A variational proof of Borell's inequality (corresponding to the case $m=2$). The structure theorem answers a question of De–Mossel–Neeman and of Ghazi–Kamath–Raghavendra. Item (i) is the first proof of any case of the plurality is stablest conjecture of Khot–Kindler–Mossel–O'Donnell for fixed $\rho$, with the case $\rho \to 1^{-}$ being solved recently. Item (i) is also the first evidence for the optimality of the Frieze–Jerrum semidefinite program for solving MAX-3-CUT, assuming the unique games conjecture. Without the assumption that each candidate has an equal chance of winning in (i), the plurality is stablest conjecture is known to be false.
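For context, the Gaussian noise stability being maximised here has the following standard definition (notation ours, not necessarily the paper's):

```latex
% Let X be a standard Gaussian vector in \mathbb{R}^n and let
% Y = \rho X + \sqrt{1-\rho^2}\, Z, with Z an independent standard
% Gaussian, so that (X, Y) are \rho-correlated. The noise stability of a
% set A \subseteq \mathbb{R}^n and of a partition A_1,\dots,A_m are
S_\rho(A) \;=\; \Pr\left[\, X \in A,\; Y \in A \,\right],
\qquad
S_\rho(A_1,\dots,A_m) \;=\; \sum_{i=1}^{m} S_\rho(A_i).
```

Borell's inequality (case $m=2$) states that among sets of a fixed Gaussian volume, half-spaces maximise $S_\rho(A)$ for $0<\rho<1$.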

