Computational Complexity: Recently Published Documents

TOTAL DOCUMENTS: 3973 (five years: 1012)
H-INDEX: 76 (five years: 8)

Automatica, 2022, Vol. 136, pp. 110083
Author(s): Yanwen Mao, Aritra Mitra, Shreyas Sundaram, Paulo Tabuada

2022, Vol. 23 (1), pp. 1-35
Author(s): Manuel Bodirsky, Marcello Mamino, Caterina Viola

Valued constraint satisfaction problems (VCSPs) are a large class of combinatorial optimisation problems. The computational complexity of a VCSP depends on the set of cost functions allowed in the input. Recently, the computational complexity of all VCSPs for finite sets of cost functions over finite domains has been classified. Many natural optimisation problems, however, cannot be formulated as VCSPs over a finite domain. We initiate the systematic investigation of the complexity of infinite-domain VCSPs with piecewise linear homogeneous (PLH) cost functions. Such VCSPs can be solved in polynomial time if the cost functions are improved by fully symmetric fractional operations of all arities. We show this by reducing the problem to a finite-domain VCSP that can be solved using the basic linear programming (BLP) relaxation. It follows that VCSPs for submodular PLH cost functions can be solved in polynomial time; in fact, we show that submodular PLH cost functions form a maximally tractable class of PLH cost functions.
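As a rough illustration of the basic linear programming (BLP) relaxation mentioned above, the sketch below builds the standard BLP of a toy finite-domain VCSP with two binary cost functions and solves it with scipy. The instance, the domain, and all names are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: basic linear programming (BLP) relaxation of a toy
# finite-domain VCSP, solved with scipy.optimize.linprog.
# The instance (two binary cost functions over three 0/1 variables) is
# purely illustrative and not taken from the paper.
import numpy as np
from scipy.optimize import linprog

domain = [0, 1]
n_vars = 3
# Each constraint: (scope, cost table indexed by (a, b)).
constraints = [
    ((0, 1), {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0}),
    ((1, 2), {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 3.0, (1, 1): 0.0}),
]

# LP variables: unary marginals mu_v(a) and pairwise marginals mu_c(a, b).
unary_idx = {(v, a): i for i, (v, a) in enumerate(
    (v, a) for v in range(n_vars) for a in domain)}
pair_idx, offset = {}, len(unary_idx)
for c, (scope, _) in enumerate(constraints):
    for a in domain:
        for b in domain:
            pair_idx[(c, a, b)] = offset
            offset += 1
n_lp_vars = offset

# Objective: expected cost under the pairwise marginals.
cost = np.zeros(n_lp_vars)
for c, (_, table) in enumerate(constraints):
    for (a, b), w in table.items():
        cost[pair_idx[(c, a, b)]] = w

A_eq, b_eq = [], []
# Each unary marginal sums to 1.
for v in range(n_vars):
    row = np.zeros(n_lp_vars)
    for a in domain:
        row[unary_idx[(v, a)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)
# Pairwise marginals are consistent with the unary ones.
for c, ((u, v), _) in enumerate(constraints):
    for a in domain:                      # sum_b mu_c(a, b) = mu_u(a)
        row = np.zeros(n_lp_vars)
        for b in domain:
            row[pair_idx[(c, a, b)]] = 1.0
        row[unary_idx[(u, a)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for b in domain:                      # sum_a mu_c(a, b) = mu_v(b)
        row = np.zeros(n_lp_vars)
        for a in domain:
            row[pair_idx[(c, a, b)]] = 1.0
        row[unary_idx[(v, b)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * n_lp_vars, method="highs")
print("BLP lower bound on the optimal VCSP value:", res.fun)
```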


Sensors, 2022, Vol. 22 (2), pp. 676
Author(s): Vamsi K. Amalladinne, Jamison R. Ebert, Jean-Francois Chamberland, Krishna R. Narayanan

Unsourced random access (URA) has emerged as a pragmatic framework for next-generation distributed sensor networks. Within URA, concatenated coding structures are often employed to ensure that the central base station can accurately recover the set of transmitted codewords during a given transmission period. Many URA algorithms employ independent inner and outer decoders, which can help reduce computational complexity at the expense of degraded performance. In this article, an enhanced decoding algorithm is presented for concatenated coding structures that pair a wide range of inner codes with an outer tree-based code. It is shown that this algorithmic enhancement has the potential to simultaneously improve error performance and decrease the computational complexity of the decoder. The enhanced decoding algorithm is applied to two existing URA algorithms, and its performance benefits are characterized. Findings are supported by numerical simulations.
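For intuition about the outer tree-based code in such concatenated structures, here is a minimal, generic sketch: a message is split into fragments, and each later fragment carries parity bits formed from random GF(2) combinations of the preceding data bits, so a decoder can stitch fragments across slots by checking parity consistency. The fragment sizes, parity allocation, and function names are illustrative assumptions and do not reproduce the construction used in the article.

```python
# Minimal sketch of a generic outer tree code: a message is split into
# fragments and each later fragment appends GF(2) parity bits computed from
# the preceding data bits.  All sizes here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
DATA_BITS = [8, 6, 6, 4]          # data bits per slot (hypothetical)
PARITY_BITS = [0, 2, 2, 4]        # parity bits per slot (hypothetical)

# One fixed random GF(2) parity matrix per slot, shared by encoder/decoder.
PARITY_MATRICES = [
    rng.integers(0, 2, size=(sum(DATA_BITS[:i]), PARITY_BITS[i]))
    for i in range(len(DATA_BITS))
]

def encode(message_bits):
    """Split a message into slots and append GF(2) parity bits."""
    assert len(message_bits) == sum(DATA_BITS)
    slots, consumed = [], 0
    for i, k in enumerate(DATA_BITS):
        data = message_bits[consumed:consumed + k]
        prefix = message_bits[:consumed]            # all earlier data bits
        parity = (prefix @ PARITY_MATRICES[i]) % 2 if PARITY_BITS[i] else np.array([], dtype=int)
        slots.append(np.concatenate([data, parity]).astype(int))
        consumed += k
    return slots

def parity_consistent(slots):
    """Check whether a candidate sequence of fragments is self-consistent."""
    data_so_far = np.array([], dtype=int)
    for i, frag in enumerate(slots):
        data, parity = frag[:DATA_BITS[i]], frag[DATA_BITS[i]:]
        expected = (data_so_far @ PARITY_MATRICES[i]) % 2 if PARITY_BITS[i] else parity
        if not np.array_equal(parity, expected):
            return False
        data_so_far = np.concatenate([data_so_far, data])
    return True

msg = rng.integers(0, 2, size=sum(DATA_BITS))
print(parity_consistent(encode(msg)))   # True for a correctly stitched message
```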


2022
Author(s): Kuan-Jung Chiang, Chi Man Wong, Feng Wan, Tzyy-Ping Jung, Masaki Nakanishi

Numerical simulations with synthetic data were conducted.


Author(s): Cheng Huang, Xiaoming Huo

Testing for independence plays a fundamental role in many statistical techniques. Among the nonparametric approaches, distance-based methods (such as distance-correlation-based hypothesis testing for independence) have many advantages over the alternatives. A known limitation of distance-based methods is that their computational complexity can be high. In general, when the sample size is n, a distance-based method, which typically requires computing all pairwise distances, has computational complexity of order O(n^2). Recent advances have shown that in the univariate case a fast method with O(n log n) computational complexity and O(n) memory requirement exists. In this paper, we introduce an independence test based on random projection and distance correlation, which achieves nearly the same power as the state-of-the-art distance-based approach, works in the multivariate case, and enjoys O(nK log n) computational complexity and O(max{n, K}) memory requirement, where K is the number of random projections. Note that the savings are achieved when K < n / log n. We name our method Randomly Projected Distance Covariance (RPDC). The theoretical statistical analysis takes advantage of random projection techniques rooted in contemporary machine learning. Numerical experiments demonstrate the efficiency of the proposed method relative to numerous competitors.
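A minimal sketch of the random-projection idea behind RPDC is given below: project X and Y onto random directions and average the univariate distance covariances of the projections. For readability it uses the plain O(n^2) pairwise-distance formula in the univariate step, whereas the paper's complexity gain relies on an O(n log n) univariate algorithm; function names and the toy data are illustrative.

```python
# Minimal sketch of randomly projected distance covariance (RPDC).
# The univariate step below uses the plain O(n^2) pairwise-distance formula;
# the paper's speed-up comes from an O(n log n) univariate algorithm,
# which is omitted here for brevity.
import numpy as np

def _dcov_sq(x, y):
    """Squared sample distance covariance of two univariate samples."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-centre the distance matrices, then average their product.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

def rpdc(X, Y, K=50, seed=0):
    """Average univariate distance covariance over K random projections."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(K):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        vals.append(_dcov_sq(X @ u, Y @ v))
    return float(np.mean(vals))

# Toy check: dependent samples should score higher than independent ones.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
Y_dep = X ** 2 + 0.1 * rng.standard_normal((500, 3))
Y_ind = rng.standard_normal((500, 3))
print(rpdc(X, Y_dep), rpdc(X, Y_ind))
```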


2022
Author(s): Diego Argüello Ron, Pedro Jorge Freire De Carvalho Sourza, Jaroslaw E. Prilepsky, Morteza Kamalian-Kopae, Antonio Napoli, ...

The deployment of artificial neural network (NN)-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. However, this is a highly challenging problem, mainly due to the computational complexity of the NNs required for the efficient equalization of nonlinear optical channels with large memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed while keeping an acceptable performance level. In this work, we address this problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), and address its complexity reduction for 30 GBd transmission over 1000 km of standard single-mode fiber. We demonstrate that it is feasible to reduce the equalizer’s memory by up to 87.12% and its complexity by up to 91.5% without noticeable performance degradation. In addition, we accurately define the computational complexity of a compressed NN-based equalizer in the digital signal processing (DSP) sense and examine the impact of different CPU and GPU settings on the power consumption and latency of the compressed equalizer. We also verify the developed technique experimentally, using two standard edge-computing hardware units: Raspberry Pi 4 and Nvidia Jetson Nano.
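To make the two compression steps named above concrete, the sketch below applies magnitude-based weight pruning and post-training dynamic quantization to a small multi-layer perceptron in PyTorch. The layer sizes, sparsity level, and quantization settings are placeholders and do not correspond to the configuration used in the paper.

```python
# Minimal PyTorch sketch of the two compression steps named in the abstract:
# magnitude-based weight pruning and post-training dynamic quantization of a
# small MLP.  Layer sizes, sparsity, and dtype are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

mlp = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# 1) Prune 80% of the smallest-magnitude weights in every linear layer,
#    then make the pruning permanent (removes the re-parametrisation).
for module in mlp.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")

# 2) Dynamic post-training quantization: weights of linear layers stored as
#    int8, activations quantized on the fly during inference.
quantized_mlp = torch.quantization.quantize_dynamic(
    mlp, {nn.Linear}, dtype=torch.qint8
)

# The compressed model is used exactly like the original one.
x = torch.randn(4, 20)          # e.g. a batch of received-signal features
print(quantized_mlp(x).shape)   # torch.Size([4, 2])
```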


2022, Vol. 5 (1)
Author(s): Kirill P. Kalinin, Natalia G. Berloff

A promising approach to achieving computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem, yet not all problem instances are equally hard to optimise. Given that the operational principles of Ising machines are suited to the structure of some problems but not others, we propose to identify computationally simple instances with an ‘optimisation simplicity criterion’. Neuromorphic architectures based on optical, photonic, and electronic systems can naturally operate to optimise instances satisfying this criterion, which are therefore often chosen to illustrate the computational advantages of new Ising machines. As an example, we show that the Ising model on the Möbius ladder graph is ‘easy’ for Ising machines. By rewiring the Möbius ladder graph into random 3-regular graphs, we probe an intermediate computational complexity between the P and NP-hard classes with several numerical methods. Significant fractions of polynomially simple instances are further found for a wide range of small-size models, from spin glasses to maximum cut problems. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.
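For a concrete picture of the model discussed above, the sketch below builds the Ising Hamiltonian on a Möbius ladder graph and finds its ground state by brute force for a small number of spins; unit antiferromagnetic couplings are assumed purely for illustration.

```python
# Minimal sketch: the Ising Hamiltonian H(s) = sum_{(i,j) in E} J_ij s_i s_j
# on a Moebius ladder graph, minimised by brute force for a small instance.
# Unit antiferromagnetic couplings (J_ij = 1) are an illustrative assumption.
from itertools import product

def moebius_ladder_edges(n):
    """Moebius ladder M_n: a 2n-cycle plus edges between opposite vertices."""
    cycle = [(i, (i + 1) % (2 * n)) for i in range(2 * n)]
    rungs = [(i, i + n) for i in range(n)]
    return cycle + rungs

def ising_energy(spins, edges, J=1.0):
    return sum(J * spins[i] * spins[j] for i, j in edges)

n = 5                                   # 10 spins -> 2^10 configurations
edges = moebius_ladder_edges(n)
best = min(
    (ising_energy(s, edges), s)
    for s in product((-1, +1), repeat=2 * n)
)
print("ground-state energy:", best[0])
print("one ground state:   ", best[1])
```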


2022
Author(s): Juan Pablo Franco, Peter Bossaerts, Carsten Murawski

Many everyday tasks require people to solve computationally complex problems. However, little is known about the effects of computational hardness on the neural processes associated with solving such problems. Here, we draw on computational complexity theory to address this issue. We performed an experiment in which participants solved several instances of the 0-1 knapsack problem, a combinatorial optimization problem, while undergoing ultra-high-field (7T) functional magnetic resonance imaging (fMRI). Instances varied in two task-independent measures of intrinsic computational hardness: complexity and proof hardness. We characterise a network of brain regions, including the anterior insula, dorsal anterior cingulate cortex and the intra-parietal sulcus/angular gyrus, whose activation was correlated with both measures, albeit in distinct ways. Activation and connectivity changed dynamically as a function of complexity and proof hardness, in line with theoretical computational requirements. Overall, our results suggest that computational complexity theory provides a suitable framework for studying the effects of computational hardness on the neural processes associated with solving complex cognitive tasks.
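For reference, the 0-1 knapsack problem that participants solved can be stated and solved exactly with a short dynamic program, as in the sketch below; the instance shown is made up for illustration.

```python
# Minimal sketch of the 0-1 knapsack problem used as the experimental task:
# choose a subset of items maximising total value subject to a weight limit.
# The instance below is made up; participants solved instances of this form.
def knapsack(values, weights, capacity):
    """Standard O(n * capacity) dynamic program for 0-1 knapsack."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

values = [10, 7, 12, 5, 9]
weights = [4, 3, 6, 2, 5]
print(knapsack(values, weights, capacity=10))   # optimal total value
```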

