Physical Portrayal of Computational Complexity

2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
Arto Annila

Computational complexity is examined using the principle of increasing entropy. Treating computation as a physical process from an initial instance to the final acceptance is motivated because information requires physical representations and because many natural processes complete in nondeterministic polynomial time (NP). An irreversible process with three or more degrees of freedom is found intractable when, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving a problem in the class NP, decisions among alternatives affect the sets of decisions that become available subsequently. The state space of a nondeterministic finite automaton thus evolves due to the computation itself and hence cannot be efficiently contracted using a deterministic finite automaton. Conversely, when solving problems in the class P, the set of states does not depend on the computational history and hence can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. It is therefore concluded that the state set of class P is inherently smaller than the state set of class NP. Since the computational time needed to contract a given set is proportional to dissipation, the computational complexity class P is a proper (strict) subset of NP.
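The contrast drawn here between nondeterministic and deterministic automata can be made concrete with the textbook subset construction, which contracts the reachable state sets of an NFA into the states of an equivalent DFA. The Python sketch below is only a generic illustration of that contraction, not the paper's thermodynamic formalism; the toy NFA and its transition table are invented for demonstration.

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Standard subset construction: contract an NFA into an equivalent DFA.

    delta maps (state, symbol) -> set of successor states.
    Returns the reachable DFA states (frozensets of NFA states),
    the DFA transition table, and the accepting DFA states.
    """
    start_set = frozenset([start])
    dfa_delta = {}
    worklist = [start_set]
    seen = {start_set}
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            successors = frozenset(chain.from_iterable(
                delta.get((q, symbol), set()) for q in current))
            dfa_delta[(current, symbol)] = successors
            if successors not in seen:
                seen.add(successors)
                worklist.append(successors)
    dfa_accepting = {s for s in seen if s & accepting}
    return seen, dfa_delta, dfa_accepting

# Toy NFA over {0, 1} accepting strings whose second-to-last symbol is 1.
alphabet = {"0", "1"}
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"},
         ("b", "0"): {"c"}, ("b", "1"): {"c"}}
dfa_states, dfa_delta, dfa_accepting = nfa_to_dfa(alphabet, delta, "a", {"c"})
print(len(dfa_states), "reachable DFA states (out of 2**3 = 8 possible subsets)")
```

In the worst case all 2^n subsets become reachable, which is why determinizing an n-state NFA can be exponentially expensive.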

2018 ◽  
Vol 29 (02) ◽  
pp. 315-329 ◽  
Author(s):  
Timothy Ng ◽  
David Rappaport ◽  
Kai Salomaa

The neighbourhood of a language L with respect to an additive distance consists of all strings that have distance at most the given radius from some string of L. We show that the worst-case deterministic state complexity of a radius-r neighbourhood of a language recognized by an n-state nondeterministic finite automaton A is (r + 2)^n. In the case where A is deterministic, we get the same lower bound for the state complexity of the neighbourhood if we use an additive quasi-distance. The lower bound constructions use an alphabet of size linear in n. We show that the worst-case state complexity of the set of strings that contain a substring within distance r from a string recognized by A is (r + 2)^(n-2) + 1.
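As a brute-force illustration of the neighbourhood notion (not the automaton construction analyzed in the paper), the following Python sketch decides membership in the radius-r neighbourhood of a small finite language under the Levenshtein edit distance, the standard example of an additive distance; the sample language, strings, and radius are invented.

```python
def edit_distance(u, v):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(v) + 1))
    for i, cu in enumerate(u, start=1):
        curr = [i]
        for j, cv in enumerate(v, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (cu != cv)))  # substitution
        prev = curr
    return prev[-1]

def in_neighbourhood(w, language, radius):
    """True if w is within the given radius of some string of the language."""
    return any(edit_distance(w, x) <= radius for x in language)

language = {"abba", "baab"}                              # toy finite language
print(in_neighbourhood("abca", language, radius=1))      # True: one substitution
print(in_neighbourhood("bbbb", language, radius=1))      # False at radius 1
```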


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hold back the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then recast the Fermi–Dirac distribution as a correction function for normalizing voxel intensities and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time on a low-cost hardware architecture. Even though global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
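The abstract does not give the exact parameterization of the correction function, so the NumPy sketch below only illustrates the general idea under assumptions of ours: intensities are min-max scaled, passed through a Fermi–Dirac-shaped response 1 / (exp((μ − x)/T) + 1) with an assumed pseudo chemical potential μ and temperature T, and weak responses are zeroed out to mimic the filtering of insignificant cluster components. The parameter values and the sign convention are placeholders, not the authors'.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, temperature=0.1, cutoff=0.05):
    """Illustrative Fermi-Dirac-style intensity correction (assumed form)."""
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)    # min-max scale to [0, 1]
    if mu is None:
        mu = float(np.median(v))                        # assumed pseudo chemical potential
    corrected = 1.0 / (np.exp((mu - v) / temperature) + 1.0)
    corrected[corrected < cutoff] = 0.0                 # drop insignificant components
    return corrected

rng = np.random.default_rng(0)
toy_volume = rng.random((4, 64, 64))                    # stand-in for an MRI volume
out = fermi_dirac_correction(toy_volume)
print(out.shape, float(out.min()), float(out.max()))
```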


2021 ◽  
Vol 11 (5) ◽  
pp. 2346
Author(s):  
Alessandro Tringali ◽  
Silvio Cocuzza

The minimization of energy consumption is of the utmost importance in space robotics. For redundant manipulators tracking a desired end-effector trajectory, most of the proposed solutions are based on locally optimal inverse kinematics methods. On the one hand, these methods are suitable for real-time implementation; on the other hand, they often provide solutions quite far from the globally optimal one and, moreover, are prone to singularities. In this paper, a novel inverse kinematics method for redundant manipulators is presented, which overcomes the above-mentioned issues and is suitable for real-time implementation. The proposed method is based on the optimization of the kinetic energy integral over a limited subset of future end-effector path points, making the manipulator joints move in the direction of minimum kinetic energy. The proposed method is tested by simulating a three-degree-of-freedom (DOF) planar manipulator in a number of test cases, and its performance is compared to the classical pseudoinverse solution and to a globally optimal method. The proposed method outperforms the pseudoinverse-based one and proves able to avoid singularities. Furthermore, it provides a solution very close to the globally optimal one at a much lower computational time, compatible with real-time implementation.
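A full reproduction of the windowed optimization is beyond an abstract, but its single-step building block, joint rates that minimize kinetic energy for a prescribed end-effector velocity, is the classical inertia-weighted pseudoinverse, sketched below. The Jacobian, inertia matrix, and target velocity are placeholders of ours, not the paper's 3-DOF test case.

```python
import numpy as np

def min_kinetic_energy_joint_rates(J, M, x_dot):
    """Joint rates minimizing 0.5 * qdot^T M qdot subject to J qdot = x_dot.

    This is the inertia-weighted pseudoinverse solution:
    qdot = M^-1 J^T (J M^-1 J^T)^-1 x_dot.
    """
    Minv = np.linalg.inv(M)
    JMJt = J @ Minv @ J.T
    return Minv @ J.T @ np.linalg.solve(JMJt, x_dot)

# Placeholder 3-DOF planar arm: a Jacobian at some configuration and a
# made-up joint-space inertia matrix.
J = np.array([[-0.8, -0.5, -0.2],
              [ 1.2,  0.7,  0.3]])
M = np.diag([2.0, 1.0, 0.5])
x_dot = np.array([0.1, 0.0])           # desired end-effector velocity

q_dot = min_kinetic_energy_joint_rates(J, M, x_dot)
print(q_dot, J @ q_dot)                # J @ q_dot reproduces x_dot
```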


2003 ◽  
Vol 125 (4) ◽  
pp. 234-241 ◽  
Author(s):  
Vincent Y. Blouin ◽  
Michael M. Bernitsas ◽  
Denby Morrison

In structural redesign (inverse design), selection of the number and type of performance constraints is a major challenge. This issue is directly related to the computational effort and, most importantly, to the success of the optimization solver in finding a solution. These issues are the focus of this paper, which provides and discusses techniques that can help designers formulate a well-posed integrated complex redesign problem. LargE Admissible Perturbations (LEAP) is a general methodology which solves redesign problems of complex structures with, among others, free vibration, static deformation, and forced response amplitude constraints. The existing algorithm, referred to as the Incremental Method, is improved in this paper for problems with static and forced response amplitude constraints. The new algorithm, referred to as the Direct Method, offers a comparable level of accuracy at lower computational time and provides robustness in solving large-scale redesign problems in the presence of damping, nonstructural mass, and fluid-structure interaction effects. Common redesign problems include several natural frequency constraints and forced response amplitude constraints at various frequencies of excitation. Several locations on the structure and several degrees of freedom can be constrained simultaneously. The designer must exercise judgment and physical intuition to limit the number of constraints and, consequently, the computational time. Strategies and guidelines are discussed, and the techniques are presented and applied to a 2,694-degree-of-freedom offshore tower.


Author(s):  
Vincent Delos ◽  
Santiago Arroyave-Tobón ◽  
Denis Teissandier

In mechanical design, tolerance zones and contact gaps can be represented by sets of geometric constraints. For computing the accumulation of possible manufacturing defects, these sets have to be summed and/or intersected according to the assembly architecture. The advantage of this approach is its robustness for treating even over-constrained mechanisms, i.e., mechanisms in which some degrees of freedom are suppressed in a redundant way. However, the sum of constraints, which must be computed when simulating the accumulation of defects in serial joints, is a very time-consuming operation. In this work, we compare three methods for summing sets of constraints using polyhedral objects. The difference between them lies in the way the degrees of freedom (DOFs), or invariances, of joints and features are treated. The first method virtually limits the DOFs of the toleranced features and joints to turn the polyhedra into polytopes and avoid manipulating unbounded objects. Even though this approach makes the sum possible, it also introduces bounding or cap facets, which increase the complexity of the operand sets; this complexity grows after each operation until it becomes far too significant. The second method addresses this problem by cleaning the calculated polytope after each sum, keeping the effects of DOF propagation under control. The third method is new and based on the identification of the subspace in which the projections of the operands are bounded sets. Calculating the sum in this subspace significantly reduces the complexity of the operands and, consequently, the computational time. After presenting the geometric properties on which the approaches rely, we demonstrate them on an industrial case; we then compare the computation times and verify that all the methods give equal results.
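For readers unfamiliar with the operation, the sum of two bounded sets of constraints is a Minkowski sum of polytopes; a minimal SciPy sketch of the classical vertex-based construction (sum all vertex pairs, then take the convex hull) is given below on made-up operands. This is only a generic illustration, not the cap-facet or projection-based machinery compared in the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(vertices_a, vertices_b):
    """Minkowski sum of two convex polytopes given by their vertices.

    The sum of convex sets is the convex hull of all pairwise vertex sums.
    Returns the vertices of the resulting polytope.
    """
    sums = np.array([a + b for a in vertices_a for b in vertices_b])
    hull = ConvexHull(sums)
    return sums[hull.vertices]

# Made-up operands: a unit box and a small tetrahedron (vertex representations).
box = np.array([[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)])
tet = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 0.5]])

summed = minkowski_sum(box, tet)
print(len(summed), "vertices in the summed polytope")
```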


Water ◽  
2021 ◽  
Vol 13 (20) ◽  
pp. 2847
Author(s):  
Feng Zhang ◽  
Li Zhang ◽  
Yanshuang Xie ◽  
Zhiyuan Wang ◽  
Shaoping Shang

This work investigates the dynamic behaviors of floating structures with moorings using open-source software for smoothed particle hydrodynamics. DualSPHysics permits us to use graphics processing units to run simulations involving complex calculations at high resolution within reasonable computational time. A free damped oscillation was simulated, and its results were compared with theoretical data to validate the numerical model developed. The simulated three degrees of freedom (3-DoF) (surge, heave, and pitch) of a rectangular floating box show excellent consistency with experimental data. MoorDyn was coupled with DualSPHysics to include a mooring simulation. Finally, we modelled and simulated a real mariculture platform on the coast of China. We simulated the 3-DoF of this mariculture platform under a typical annual wave and a Typhoon Dujuan wave. The motion was light and gentle under the typical annual wave but vigorous under the Typhoon Dujuan wave. Experiments at different tidal water levels revealed an earlier motion response and a smaller motion range during high tide. The results reveal that DualSPHysics combined with MoorDyn is an adaptive scheme for simulating a coupled fluid–solid–mooring system. This work provides support for disaster warning, emergency evacuation, and proper engineering design.
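The validation step mentioned above compares the simulated free damped oscillation against theory; the standard underdamped single-degree-of-freedom solution can be evaluated as in the sketch below. Mass, damping, stiffness, and initial displacement are placeholder values, not those used in the paper.

```python
import numpy as np

def free_damped_oscillation(t, m, c, k, x0, v0=0.0):
    """Underdamped single-DOF response x(t) for m*x'' + c*x' + k*x = 0."""
    wn = np.sqrt(k / m)                   # natural frequency
    zeta = c / (2.0 * np.sqrt(k * m))     # damping ratio (must be < 1 here)
    wd = wn * np.sqrt(1.0 - zeta**2)      # damped natural frequency
    A = x0
    B = (v0 + zeta * wn * x0) / wd
    return np.exp(-zeta * wn * t) * (A * np.cos(wd * t) + B * np.sin(wd * t))

t = np.linspace(0.0, 10.0, 200)
x = free_damped_oscillation(t, m=50.0, c=10.0, k=200.0, x0=0.1)
print(float(x[0]), float(x.min()), float(x.max()))
```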


Author(s):  
Xiang Kong ◽  
Qizhe Xie ◽  
Zihang Dai ◽  
Eduard Hovy

Mixture of Softmaxes (MoS) has been shown to be effective at addressing the expressiveness limitation of Softmax-based models. Despite this known advantage, MoS is practically limited by its large consumption of memory and computational time due to the need to compute multiple Softmaxes. In this work, we set out to unleash the power of MoS in practical applications by investigating improved word coding schemes, which can effectively reduce the vocabulary size and hence relieve the memory and computation burden. We show that both BPE and our proposed Hybrid-LightRNN lead to improved encoding mechanisms that can halve the time and memory consumption of MoS without performance losses. With MoS, we achieve an improvement of 1.5 BLEU scores on the IWSLT 2014 German-to-English corpus and an improvement of 0.76 CIDEr score on image captioning. Moreover, on the larger WMT 2014 machine translation dataset, our MoS-boosted Transformer yields a 29.6 BLEU score for English-to-German and a 42.1 BLEU score for English-to-French, outperforming the single-Softmax Transformer by 0.9 and 0.4 BLEU scores respectively and achieving the state-of-the-art result on the WMT 2014 English-to-German task.
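For context, Mixture of Softmaxes replaces the single output softmax with a weighted mixture of K softmax distributions computed from separate projections of the hidden state, which is where the extra memory and compute come from. The NumPy sketch below shows that forward computation with invented dimensions; it is not the paper's Transformer implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_prior, W_proj, W_out, num_mixtures):
    """MoS output distribution for one hidden state h.

    p(w) = sum_k pi_k * softmax(W_out @ tanh(W_proj[k] @ h))[w]
    """
    pi = softmax(W_prior @ h)                                   # mixture weights, shape (K,)
    comps = np.stack([softmax(W_out @ np.tanh(W_proj[k] @ h))   # per-component vocab dists
                      for k in range(num_mixtures)])
    return pi @ comps                                           # weighted mixture over vocab

rng = np.random.default_rng(0)
d_model, vocab, K = 16, 100, 4                                  # invented sizes
h = rng.standard_normal(d_model)
W_prior = rng.standard_normal((K, d_model)) * 0.1
W_proj = rng.standard_normal((K, d_model, d_model)) * 0.1
W_out = rng.standard_normal((vocab, d_model)) * 0.1
p = mixture_of_softmaxes(h, W_prior, W_proj, W_out, K)
print(p.shape, float(p.sum()))                                  # (100,) 1.0
```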


2019 ◽  
Vol 139 (3) ◽  
pp. 393-406
Author(s):  
Sarah Cogos ◽  
Samuel Roturier ◽  
Lars Östlund

In Sweden, prescribed burning was trialed as early as the 1890s for forest regeneration purposes. However, the origins of prescribed burning in Sweden are commonly attributed to Joel Efraim Wretlind, forest manager in the State Forest district of Malå, Västerbotten County, from 1920 to 1952. To more fully understand the role he played in the development of prescribed burning and the extent of his burning, we examined historical records from the State Forest Company's archive and Wretlind's personal archive. The data showed that at least 11,208 ha were burned through prescribed burning between 1921 and 1970, representing 18.7% of the Malå state-owned forest area. Wretlind thus created a new forestry-driven fire regime, reaching, during peak years, extents close to those of historical fire regimes before the fire suppression era and much greater than present-day burning. His use of prescribed fire to regenerate forests served as a guide for many other forest managers, spreading to all of northern Sweden during the 1950s and 1960s. Our analysis of Wretlind's latest accounts also shows how he stood against the evolution of modern forestry to defend a forestry system based on the reproduction of natural processes such as fire.


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Ali Rahim Taleqani ◽  
Chrysafis Vogiatzis ◽  
Jill Hough

In this work, we investigate a new paradigm for dock-less bike sharing. Recently, it has become essential to accommodate connected and free-floating bicycles in modern bike-sharing operations. This change comes with an increase in coordination cost, as bicycles are no longer checked in and out from bike-sharing stations that are fully equipped to handle the volume of requests; instead, bicycles can be checked in and out from virtually anywhere. In this paper, we propose a new framework for combining traditional bike stations with locations that can serve as free-floating bike-sharing stations. The framework focuses on identifying highly centralized k-clubs (i.e., connected subgraphs of restricted diameter). The restricted diameter reduces coordination costs, as dock-less bicycles can only be found in specific locations. In addition, we use closeness centrality, as this metric allows quick access to dock-less bike sharing while, at the same time, optimizing the reach of service to bikers/customers. For the proposed problem, we first derive its computational complexity and show that it is NP-hard (by reduction from the 3-SATISFIABILITY problem), and then provide an integer programming formulation. Due to its computational complexity, the problem cannot be solved exactly in a large-scale setting, such as that of an urban area. Hence, we provide a greedy heuristic approach that is shown to run in reasonable computational time. We also present and analyze a case study in two cities in the state of North Dakota: Casselton and Fargo. Our work concludes with a cost-benefit analysis of both models (docked vs. dock-less), suggesting the potential advantages of the proposed model.
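As a small illustration of the two ingredients named above, a k-club (a vertex set whose induced subgraph is connected with diameter at most k) and closeness centrality, the NetworkX sketch below checks the k-club property of a candidate set and scores it by total closeness. The toy graph, candidate set, and scoring choice are invented and do not reproduce the paper's integer program or greedy heuristic.

```python
import networkx as nx

def is_k_club(G, nodes, k):
    """True if the induced subgraph is connected with diameter at most k."""
    H = G.subgraph(nodes)
    return nx.is_connected(H) and nx.diameter(H) <= k

def closeness_score(G, nodes):
    """Sum of closeness centralities (computed in G) over the candidate set."""
    closeness = nx.closeness_centrality(G)
    return sum(closeness[v] for v in nodes)

# Invented street-network-like toy graph.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 5), (5, 6)])
candidate = {0, 1, 2, 3}
print(is_k_club(G, candidate, k=2))          # True: the induced 4-cycle has diameter 2
print(round(closeness_score(G, candidate), 3))
```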


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Junghwan Song ◽  
Kwanhyung Lee ◽  
Hwanjin Lee

Biclique cryptanalysis is an attack that reduces the computational complexity of key recovery by finding a biclique, i.e., a complete bipartite graph. We present a single-key full-round attack on Crypton-256 and mCrypton-128 using biclique cryptanalysis. In this paper, 4-round bicliques are constructed for Crypton-256 and mCrypton-128, and these bicliques are used to recover the master key for the full rounds of Crypton-256 and mCrypton-128 with computational complexities of 2^253.78 and 2^126.5, respectively. This is the first known single-key full-round attack on Crypton-256, and our result on mCrypton-128 improves, in terms of computational time complexity, on the known biclique cryptanalysis result for mCrypton-128, which constructs 3-round bicliques.

