Computational Complexity
Recently Published Documents

TOTAL DOCUMENTS: 6200 (five years: 2711)
H-INDEX: 86 (five years: 20)

Author(s):  
Huiping Guo ◽  
Hongru Li

Abstract: Decomposition hybrid algorithms with a recursive framework, which recursively decompose the structural learning task into subtasks to reduce computational complexity, are employed to learn Bayesian network (BN) structure. Merging rules are commonly adopted as the combination method in the combination step. The direction-determination rule of merging rules, which keeps v-structures unchanged before and after combination in order to orient edges in the whole structure, is problematic: it breaks down when spurious v-structures appear, and it is hard to operate in practice. We therefore adopt a novel approach to direction determination and propose a two-stage combination method. In the first stage, we determine the nodes and links of edges by merging rules and use permutation and combination to orient contradictory edges. In the second stage, we restrict edges between nodes that do not satisfy the decomposition property and their parent nodes by determining the target domain according to the decomposition property. Simulation experiments on four networks show that the proposed algorithm obtains BN structures with higher accuracy than competing algorithms. Finally, the proposed algorithm is applied to the thickening process of gold hydrometallurgy to solve a practical problem.
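The v-structure pattern at the heart of the direction-determination problem can be made concrete with a small sketch (hypothetical code, not the authors' implementation): a collider a → c ← b counts as a v-structure only when a and b are non-adjacent, which is exactly the pattern a combination step must preserve.

```python
from itertools import combinations

def v_structures(parents):
    """V-structures a -> c <- b (a, b non-adjacent) in a DAG given as
    a dict mapping each node to the set of its parents."""
    def adjacent(x, y):
        return x in parents.get(y, set()) or y in parents.get(x, set())
    found = set()
    for c, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            if not adjacent(a, b):
                found.add((a, c, b))
    return found

# X -> Z <- Y is a v-structure because X and Y are non-adjacent;
# Z -> W has a single parent and contributes nothing.
dag = {"X": set(), "Y": set(), "Z": {"X", "Y"}, "W": {"Z"}}
print(v_structures(dag))  # prints {('X', 'Z', 'Y')}
```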


2022 ◽  
Author(s):  
Kuan-Jung Chiang ◽  
Chi Man Wong ◽  
Feng Wan ◽  
Tzyy-Ping Jung ◽  
Masaki Nakanishi

Numerical simulations with synthetic data were conducted.


Author(s):  
В.О. Жилинский ◽  
Л.Г. Гагарина

The article provides an overview of methods and algorithms for forming a working constellation of navigation satellites as part of the GNSS positioning, navigation, and timing (PNT) problem. The emergence of new orbital constellations and the development of earlier GNSS generations increase both the number of navigation satellites and the number of radio signals emitted by each satellite, so the satellite-selection problem is an important component of the PNT service. We review works devoted to typical algorithms for forming a working constellation, as well as modern algorithms built on elements of machine-learning theory. We present the relationship between user coordinate errors, pseudorange errors, and the spatial configuration of the satellites and the user. Three research directions are identified among the reviewed algorithms: 1) finding the optimal satellite constellation, the one that minimizes a chosen geometric dilution-of-precision metric; 2) finding quasi-optimal constellations in order to reduce the computational complexity of selection when many satellites are visible; 3) operating in a combined mode using radio signals of several GNSS simultaneously. The features of the algorithms' implementations, their advantages, and their disadvantages are presented. In conclusion, we recommend changing the approach to assessing algorithm performance and argue that both the satellites' geometric configuration and the pseudorange errors must be taken into account when forming the working constellation.
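The link between satellite geometry and positioning error that the review emphasizes is captured by the dilution-of-precision metric. A minimal sketch (illustrative, not from the paper, with made-up line-of-sight vectors): GDOP is computed from the geometry matrix H of unit line-of-sight vectors as sqrt(trace((HᵀH)⁻¹)), so a well-spread constellation scores lower than a clustered one.

```python
import numpy as np

def gdop(los):
    """GDOP from an (m, 3) array of unit line-of-sight vectors
    (receiver -> satellite); the fourth column models the clock bias."""
    H = np.hstack([los, np.ones((los.shape[0], 1))])  # geometry matrix
    Q = np.linalg.inv(H.T @ H)                        # covariance shape
    return float(np.sqrt(np.trace(Q)))

# Four well-spread satellites (roughly tetrahedral) vs. four near zenith.
spread = np.array([[0.0, 0.0, 1.0],
                   [0.943, 0.0, 0.333],
                   [-0.471, 0.816, 0.333],
                   [-0.471, -0.816, 0.333]])
clustered = np.array([[0.0, 0.0, 1.0],
                      [0.1, 0.0, 0.995],
                      [0.0, 0.1, 0.995],
                      [-0.1, 0.0, 0.995]])
print(gdop(spread) < gdop(clustered))  # prints True
```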


Author(s):  
Cheng Huang ◽  
Xiaoming Huo

Testing for independence plays a fundamental role in many statistical techniques. Among the nonparametric approaches, distance-based methods (such as distance-correlation-based hypothesis testing for independence) have many advantages over the alternatives. A known limitation of distance-based methods is their computational complexity: for sample size n, a distance-based method typically requires computing all pairwise distances, at a cost of O(n²). Recent advances have shown that in the univariate case, a fast method with O(n log n) computational complexity and O(n) memory requirement exists. In this paper, we introduce an independence test based on random projection and distance correlation, which achieves nearly the same power as the state-of-the-art distance-based approach, works in the multivariate case, and enjoys O(nK log n) computational complexity and O(max{n, K}) memory requirement, where K is the number of random projections. Note that a saving is achieved when K < n/log n. We name our method Randomly Projected Distance Covariance (RPDC). The statistical theoretical analysis draws on random-projection techniques rooted in contemporary machine learning. Numerical experiments demonstrate the efficiency of the proposed method relative to numerous competitors.
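The projection-then-univariate-test idea can be sketched in a few lines (a simplified illustration, not the paper's implementation: the univariate step below uses the plain O(n²) pairwise estimator, where the paper substitutes the O(n log n) fast method, and function names are invented).

```python
import numpy as np

def dcov_sq(x, y):
    """Sample distance covariance (squared) of two univariate samples,
    via the plain O(n^2) double-centered pairwise-distance estimator."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

def rpdc(X, Y, K=20, seed=0):
    """Average univariate distance covariance over K random 1-D
    projections of the multivariate samples X (n, p) and Y (n, q)."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(K):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        vals.append(dcov_sq(X @ u, Y @ v))
    return float(np.mean(vals))
```

As a sanity check, strongly dependent pairs score higher than independent ones: with Y = X + small noise, rpdc(X, Y) exceeds rpdc(X, Y') for an independent Y'.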


2022 ◽  
Vol 5 (1) ◽  
Author(s):  
Kirill P. Kalinin ◽  
Natalia G. Berloff

Abstract: A promising approach to achieving computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. Minimising the Ising Hamiltonian is known to be an NP-hard problem, yet not all problem instances are equally hard to optimise. Given that the operational principles of Ising machines suit the structure of some problems but not others, we propose to identify computationally simple instances with an 'optimisation simplicity criterion'. Neuromorphic architectures based on optical, photonic, and electronic systems can naturally optimise instances satisfying this criterion, which are therefore often chosen to illustrate the computational advantages of new Ising machines. As an example, we show that the Ising model on the Möbius ladder graph is 'easy' for Ising machines. By rewiring the Möbius ladder graph into random 3-regular graphs, we probe an intermediate computational complexity between the P and NP-hard classes with several numerical methods. Significant fractions of polynomially simple instances are further found across a wide range of small models, from spin glasses to maximum-cut problems. A principled way of distinguishing easy from hard instances within the same NP-hard class of problems can be a starting point for developing a standardised procedure to evaluate the performance of emerging physical simulators and physics-inspired algorithms.
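At small sizes the Möbius ladder instances can be checked directly. A hedged sketch (illustrative brute force, not the paper's method): build the Möbius ladder on 2n vertices (a 2n-cycle plus "rung" edges between opposite vertices) and enumerate all spin configurations of the antiferromagnetic Ising energy E(s) = J Σ_(i,j) s_i s_j.

```python
from itertools import product

def mobius_ladder_edges(n):
    """Möbius ladder on 2n vertices: a 2n-cycle plus rungs i -- i+n."""
    N = 2 * n
    cycle = [(i, (i + 1) % N) for i in range(N)]
    rungs = [(i, i + n) for i in range(n)]
    return cycle + rungs

def ising_ground_energy(edges, N, J=1.0):
    """Brute-force minimum of E(s) = J * sum over edges of s_i * s_j
    (antiferromagnetic for J > 0) over all 2^N spin configurations."""
    return min(J * sum(s[i] * s[j] for i, j in edges)
               for s in product((-1, 1), repeat=N))

# 8-vertex Möbius ladder: non-bipartite, so not all 12 edges can be
# simultaneously satisfied; the ground state leaves two frustrated edges.
print(ising_ground_energy(mobius_ladder_edges(4), 8))  # prints -8.0
```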


2022 ◽  
Author(s):  
Diego Argüello Ron ◽  
Pedro Jorge Freire De Carvalho Sourza ◽  
Jaroslaw E. Prilepsky ◽  
Morteza Kamalian-Kopae ◽  
Antonio Napoli ◽  
...  

Abstract: The deployment of neural-network-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. It is, however, a highly challenging problem, mainly due to the computational complexity of the neural networks (NNs) required for efficient equalization of nonlinear optical channels with large memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed while keeping an acceptable performance level. In this work, we address this problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), and reduce its complexity for 30 GBd transmission over 1000 km of standard single-mode fiber. We demonstrate that the equalizer's memory can be reduced by up to 87.12% and its complexity by up to 91.5% without noticeable performance degradation. In addition, we accurately define the computational complexity of a compressed NN-based equalizer in the digital-signal-processing (DSP) sense and examine the impact of different CPU and GPU settings on the power consumption and latency of the compressed equalizer. We also verify the developed technique experimentally, using two standard edge-computing hardware units: the Raspberry Pi 4 and the Nvidia Jetson Nano.
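The two compression techniques the paper applies can be sketched generically (hypothetical NumPy code on a random weight matrix, not the authors' equalizer): magnitude pruning zeroes the smallest-magnitude weights, and symmetric uniform quantization snaps the survivors to a b-bit grid.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

def quantize_uniform(W, bits=8):
    """Symmetric uniform quantization to a `bits`-bit grid."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # stand-in for an MLP layer
W_c = quantize_uniform(magnitude_prune(W, 0.8), bits=8)
print(f"sparsity: {np.mean(W_c == 0):.2f}")  # prints sparsity: 0.80
```

In a real equalizer the pruned-and-quantized weights would then be stored in a sparse, integer format, which is where the memory and complexity savings come from.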


2022 ◽  
Vol 2 ◽  
Author(s):  
Xiaohu Zhao ◽  
Yuanyuan Zou ◽  
Shaoyuan Li

This paper investigates the multi-agent persistent monitoring problem via a novel distributed submodular receding horizon control approach. To approximate the global monitoring performance, the original persistent monitoring objective is divided, using submodularity, into several local objectives in a receding horizon framework, and the optimal trajectory of each agent is obtained by taking neighborhood information into account. Specifically, the optimization horizon of each local objective is derived from the local target states and the information received from neighboring agents. Based on the submodularity of each local objective, a distributed greedy algorithm is proposed. As a result, each agent coordinates with its neighbors asynchronously and optimizes its trajectory independently, which reduces the computational complexity while approaching the global performance. Conditions are established under which the performance loss relative to the global objective remains bounded. Finally, simulation results show the effectiveness of the proposed method.
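The greedy step at the heart of such submodular schemes can be illustrated on weighted coverage, a standard monotone submodular objective (a hypothetical example with made-up target sets, not the paper's monitoring objective): repeatedly pick the candidate with the largest marginal gain, which carries the classic (1 − 1/e) approximation guarantee under a cardinality constraint.

```python
def greedy_cover(candidates, k):
    """Pick up to k candidate sets greedily by marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = {name: len(s - covered)
                 for name, s in candidates.items() if name not in chosen}
        name = max(gains, key=gains.get)
        if gains[name] == 0:
            break  # no remaining candidate adds coverage
        chosen.append(name)
        covered |= candidates[name]
    return chosen, covered

# Hypothetical monitoring targets reachable by four candidate trajectories.
targets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
print(greedy_cover(targets, 2))  # prints (['c', 'a'], {1, 2, 3, 4, 5, 6, 7})
```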


2022 ◽  
Vol 183 (1-2) ◽  
pp. 125-167
Author(s):  
Ronny Tredup

For a fixed type of Petri nets τ, τ-SYNTHESIS is the task of finding, for a given transition system A, a Petri net N of type τ (τ-net, for short) whose reachability graph is isomorphic to A, if one exists. The decision version of this search problem is called τ-SOLVABILITY. If an input A allows a positive decision, then it is called τ-solvable, and a sought net N τ-solves A. As is well known, A is τ-solvable if and only if it has the so-called τ-event state separation property (τ-ESSP, for short) and the τ-state separation property (τ-SSP, for short). Whether A has the τ-ESSP or the τ-SSP defines decision problems as well. In this paper, for all b ∈ ℕ, we completely characterize the computational complexity of τ-SOLVABILITY, τ-ESSP, and τ-SSP for the types of pure b-bounded Place/Transition-nets, b-bounded Place/Transition-nets, and their corresponding ℤ_{b+1}-extensions.


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Su Min Hoi ◽  
Ean Hin Ooi ◽  
Irene Mei Leng Chew ◽  
Ji Jinn Foo

Abstract: A 3D stationary particle tracking velocimetry (SPTV) system with a unique recursive corrective algorithm has been established to detect instantaneous regional fluid-flow characteristics. The veracity of SPTV is corroborated by an actual displacement-measurement validation, which gives a maximum percentage deviation of about 0.8%, supporting the accuracy of the current SPTV system in 3D position detection. More importantly, the SPTV-detected velocity fluctuations are highly repeatable. In this study, SPTV is shown to capture the nature of chaotic fractal-grid-induced regional turbulence, namely: the high turbulence intensity attributed to multi-length-scale wake interactions, the Kolmogorov −5/3 decay law, vortex shedding, and Gaussian flow undulations immediately leeward of the grid followed by non-Gaussian behaviour further downstream. Moreover, by comparing the flow fields of a plate-fin array under no-grid and fractal-grid-generated turbulence, SPTV reveals, for the latter, more vigorous turbulence intensity, a smaller regional integral length scale, and energetic vortex shedding at higher frequency, particularly between fins. This allows the detailed thermofluid interplays behind plate-fin heat-sink heat-transfer augmentation to be unravelled. The novelty of SPTV lies in its simplicity, its use of low-cost off-the-shelf components, and, most remarkably, its low computational complexity in detecting fundamental characteristics of turbulent fluid flow.

