optimal complexity
Recently Published Documents


TOTAL DOCUMENTS

86
(FIVE YEARS 22)

H-INDEX

16
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Dmytro Hrishko ◽  
Oleksandr Trofymenko ◽  
Katerina Bovsunoskaja ◽  
Olena Nosovets ◽  
Irina Dykan ◽  
...  

Author(s):  
Andi Han ◽  
Junbin Gao

We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves near-optimal complexity for finding an epsilon-approximate solution. The new algorithm requires only a single stochastic gradient evaluation per iteration and does not require restarting with a large-batch gradient, which is commonly used to obtain a faster rate. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
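The recursive-momentum update described above can be sketched in a Euclidean setting (the Riemannian version would replace the gradient step with a retraction, which is problem-specific). The function names, the quadratic objective, and all parameter values below are illustrative, not the authors' implementation; the key feature shown is that each iteration draws one sample and evaluates the stochastic gradient at both the current and previous iterate with that same sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, xi):
    # Stochastic gradient of f(x) = 0.5 * ||x||^2 with additive noise sample xi
    return x + xi

def storm(x0, steps=200, lr=0.1, a=0.5, noise=0.1):
    """Euclidean sketch of a stochastic recursive momentum (STORM-style) method.

    Each iteration draws ONE sample xi and forms the variance-reduced direction
        d_t = g(x_t; xi) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi)),
    so no large-batch restart is needed."""
    x, x_prev = x0.copy(), x0.copy()
    d = None
    for _ in range(steps):
        xi = noise * rng.standard_normal(x.shape)   # single shared sample
        g = noisy_grad(x, xi)
        if d is None:
            d = g                                   # first iteration: plain SGD direction
        else:
            d = g + (1.0 - a) * (d - noisy_grad(x_prev, xi))
        x_prev = x.copy()
        x = x - lr * d                              # Riemannian case: retraction here
    return x

x_final = storm(np.ones(5))
```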


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1804
Author(s):  
Thomas Rüberg ◽  
Lars Kielhorn ◽  
Jürgen Zechner

The numerical analysis of electromagnetic devices by means of finite element methods (FEM) is often hindered by the need to incorporate the surrounding domain. The discretisation of the air region may become complex and must be truncated by artificial boundaries, which incurs a modelling error. Even more problematic are moving parts, which require tedious re-meshing and mapping techniques. In this work, we tackle these problems by using the boundary element method (BEM) in conjunction with FEM. Whereas the solid parts of the electrical device are discretised by FEM, which can easily account for material non-linearities, the surrounding domain is represented by BEM, which requires only a surface discretisation. This approach completely avoids an air mesh and its re-meshing during simulations with moving or deforming parts. Our approach is robust, shows optimal complexity, and provides an accurate calculation of the electromagnetic forces that are required to study the mechanical behaviour of the device.


Author(s):  
Radu-Alexandru Dragomir ◽  
Adrien B. Taylor ◽  
Alexandre d’Aspremont ◽  
Jérôme Bolte

2021 ◽  
Vol 27 (2) ◽  
pp. 105-112
Author(s):  
Eric Peña ◽  
Hiroki Sayama

Abstract Cellular automata (CA) have been lauded for their ability to generate complex global patterns from simple local rules. The late English mathematician John Horton Conway developed his illustrious Game of Life (Life) CA in 1970, which has since remained one of the most quintessential CA constructions, capable of producing a myriad of complex dynamic patterns and of exhibiting computational universality. Life and several other Life-like rules have been classified in the same group of aesthetically and dynamically interesting CA rules characterized by their complex behaviors. However, a rigorous quantitative comparison among similarly classified Life-like rules has not yet been fully established. Here we show that Life is capable of maintaining as much complexity as similar rules while remaining the most parsimonious. In other words, Life contains a consistent amount of complexity throughout its evolution, with the smallest number of rule conditions compared to other Life-like rules. We also found that the complexity of higher-density Life-like rules, which themselves contain the Life rule as a subset, forms a distinct concave density-complexity relationship, from which an optimal-complexity candidate is proposed. Our results also support the notion that Life functions as the basic ingredient for cultivating the balance between structure and randomness to maintain complexity in 2D CA for low- and high-density regimes, especially over many iterations. This work highlights the genius of John Horton Conway and serves as a testament to his timeless marvel, which is referred to simply as: Life.
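The Life rule (B3/S23: a dead cell is born with exactly three live neighbours; a live cell survives with two or three) is simple enough to state in a few lines. A minimal NumPy sketch of one synchronous update, with periodic boundaries assumed for illustration, applied to the classic glider pattern:

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update (B3/S23) with periodic boundaries."""
    # Sum the eight shifted copies of the grid to count live neighbours.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)
    survives = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survives).astype(np.uint8)

# A glider: after four generations it reappears shifted one cell diagonally,
# illustrating how complex moving structures emerge from the simple local rule.
glider = np.zeros((8, 8), dtype=np.uint8)
glider[0, 1] = glider[1, 2] = glider[2, 0] = glider[2, 1] = glider[2, 2] = 1
after4 = life_step(life_step(life_step(life_step(glider))))
```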


Author(s):  
Tiziano Dalmonte ◽  
Björn Lellmann ◽  
Nicola Olivetti ◽  
Elaine Pimentel

Abstract We present some hypersequent calculi for all systems of the classical cube and their extensions with axioms $T$, $P$ and $D$ and, for every $n \geq 1$, rule $RD_n^+$. The calculi are internal as they only employ the language of the logic, plus additional structural connectives. We show that the calculi are complete with respect to the corresponding axiomatization by a syntactic proof of cut elimination. Then, we define a terminating proof search strategy in the hypersequent calculi and show that it is optimal for coNP-complete logics. Moreover, we show that from every failed proof of a formula or hypersequent it is possible to directly extract a countermodel of it in the bi-neighbourhood semantics of polynomial size for coNP logics, and for regular logics also in the relational semantics. We finish the paper by giving a translation between hypersequent rule applications and derivations in a labelled system for the classical cube.


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Shengnan Zhao ◽  
Xiangfu Song ◽  
Han Jiang ◽  
Ming Ma ◽  
Zhihua Zheng ◽  
...  

Oblivious transfer (OT) is a cryptographic primitive originally used to transfer a collection of messages from the sender to the receiver in an oblivious manner. An OT extension protocol reduces expensive asymmetric operations by first running a small number of OT instances and then using only cheap symmetric operations. While most earlier works discuss the security model or the communication and computation complexity of OT in the general case, we focus on concrete application scenarios, especially those where the sender in the OT protocol is a database with limited computation and interaction capability. In this paper, we propose a generic outsourced OT extension protocol (OTex) that outsources all of the sender's asymmetric operations to a semihonest server so as to suit the specific scenarios above. We give OTex a standard security definition, and the proposed protocol is proven secure in the semihonest model. In OTex, the sender works on the fly and performs only symmetric operations locally. Regardless of the number of OT rounds executed and the length of the messages sent, our protocol achieves optimal complexity. Besides, OTex can be used to construct higher-level protocols, such as private membership test (PMT) and private set intersection (PSI). We believe our OTex construction may serve as a building block in other applications as well.
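The reason OT extension gets away with cheap symmetric operations is that once a random OT correlation exists (sender holds two random pads, receiver holds one of them plus a choice bit), a chosen-message 1-out-of-2 OT needs only XORs. The following sketch simulates that derandomization step in the clear; it is an illustration of the idea, not the OTex protocol, and all function names are hypothetical:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def random_ot(n: int):
    """Simulate a random OT correlation (normally produced by base OTs):
    the sender gets two random pads (r0, r1); the receiver gets a random
    choice bit c and the corresponding pad r_c, but learns nothing of the other."""
    r0, r1 = secrets.token_bytes(n), secrets.token_bytes(n)
    c = secrets.randbelow(2)
    return (r0, r1), (c, r1 if c else r0)

def receiver_choose(b, c):
    # Receiver wants m_b but holds pad r_c: ask the sender to flip if b != c.
    return b ^ c

def sender_transfer(m0, m1, pads, flip):
    # Sender swaps its pads if asked, then masks each message with one pad.
    r0, r1 = pads
    if flip:
        r0, r1 = r1, r0
    return xor(m0, r0), xor(m1, r1)

def receiver_recover(b, ciphertexts, r_c):
    # The receiver's pad unmasks exactly the chosen ciphertext.
    return xor(ciphertexts[b], r_c)

pads, (c, r_c) = random_ot(16)
m0, m1 = b"message zero....", b"message one....."
b = 1
cts = sender_transfer(m0, m1, pads, receiver_choose(b, c))
recovered = receiver_recover(b, cts, r_c)
```

The ciphertext for the unchosen message stays masked by the pad the receiver never saw, and the flip bit reveals nothing about b because c is uniformly random.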


2020 ◽  
Vol 15 (1) ◽  
pp. 157-173
Author(s):  
Laszlo Csirmaz

Abstract Secret sharing is an important building block in cryptography. All explicit secret sharing schemes which are known to have optimal complexity are multi-linear, thus are closely related to linear codes. The dual of such a linear scheme, in the sense of duality of linear codes, gives another scheme for the dual access structure. These schemes have the same complexity, namely the largest share size relative to the secret size is the same. It is a long-standing open problem whether this fact is true in general: the complexity of any access structure is the same as the complexity of its dual. We give a partial answer to this question. An almost perfect scheme allows negligible errors, both in the recovery and in the independence. There exists an almost perfect ideal scheme on 174 participants whose complexity is strictly smaller than that of its dual.
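As a concrete instance of a linear scheme, a minimal Shamir threshold-sharing sketch over a prime field: every share is a single field element, so the complexity (largest share size relative to secret size) is 1, i.e. the scheme is ideal. The prime, parameters, and function names are illustrative only:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def share(secret, t, n):
    """Shamir (t-out-of-n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at points 1..n. Each share is one field element."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^(P-2) = den^-1
    return secret

shares = share(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])
```

Linearity here means shares are linear functions of the secret and the randomness, which is exactly what places such schemes in the multi-linear class discussed above.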


2020 ◽  
Vol 2 (2) ◽  
pp. 32-40
Author(s):  
David M Garner

Introduction: Approximate Entropy (ApEn) is a widely used metric to evaluate the chaotic response and irregularities of RR intervals from an electrocardiogram. We applied the metric to estimate these responses in subjects with type 1 diabetes mellitus (DM1). As a technique, however, it has one key problem: the appropriate choice of the tolerance (r) and the embedding dimension (M). We attempted to overcome this drawback by applying different combinations of the two parameters to detect the optimum. Methods: We studied 46 subjects split into two equal groups: DM1 and control. To evaluate autonomic modulation, the heart rate was measured for 30 min in a supine position without any physical, sensory, or pharmacological stimuli. For the time series, ApEn was applied with set values of r (0.1→0.5 in intervals of 0.1) and M (1→5 in intervals of 1), and the differences between the two groups and their effect sizes by two measures (Cohen’s ds and Hedges’s gs) were computed. Results: The highest value of statistical significance accomplished for the effect sizes (ES) for any of the combinations performed was -0.7137 for Cohen’s ds and -0.7015 for Hedges’s gs with M = 2 and r = 0.08. Conclusion: ApEn was able to identify the reduction in chaotic response in DM1 subjects. Still, ApEn is relatively unreliable as a mathematical marker to determine this.
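ApEn compares how often length-M templates of the series stay within tolerance r of each other against the same count for length-(M+1) templates. A minimal NumPy implementation (Chebyshev distance, self-matches included, ApEn = Φ(M) − Φ(M+1)); the synthetic series and the common r = 0.2·SD convention shown are illustrative, not the study's data:

```python
import numpy as np

def approximate_entropy(x, m, r):
    """Approximate Entropy of a 1-D series x with embedding dimension m
    and tolerance r (in the same units as x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(m):
        # All length-m template vectors of the series
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-norm) distance between every pair of templates
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # C_i: fraction of templates within tolerance r (self-match included,
        # so the argument of log is never zero)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A regular (sine) and an irregular (white-noise) series: ApEn should be
# markedly higher for the irregular one.
regular = np.sin(np.linspace(0, 8 * np.pi, 300))
noise = np.random.default_rng(1).standard_normal(300)
apen_regular = approximate_entropy(regular, m=2, r=0.2 * regular.std())
apen_noise = approximate_entropy(noise, m=2, r=0.2 * noise.std())
```

The sensitivity of the result to the chosen (M, r) pair is precisely the parameter-selection problem the study investigates.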

