serial algorithm
Recently Published Documents

TOTAL DOCUMENTS: 29 (last five years: 6)
H-INDEX: 7 (last five years: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Bingjie Zhang ◽  
Yiming Liu

With the rapid development of the information society, a large amount of vague or uncertain English interpretation information appears in daily life, and processing such uncertain information is an important research topic in artificial intelligence. In this paper, we combine the three-branch concept lattice and linguistic values with the digital elevation model and propose a three-branch fuzzy linguistic concept lattice together with an attribute approximation method. We also propose an improved serial algorithm for sink accumulation. The improved algorithm changes the order in which cells are processed: the accumulation of one "sub-basin" is computed completely before the cells of the next "sub-basin" are processed, until all cells have been handled. This reduces the memory overhead of the calculation, reduces the pressure of cells entering and leaving the queue, and improves computational efficiency. Compared with the commonly used recursive and non-recursive accumulation algorithms, the improved algorithm is about 17% faster than the non-recursive algorithm at the 10^6-cell level, and the recursive algorithm takes about 3 times as long as the improved algorithm. Because the serial sink accumulation algorithm is an important building block of the parallel computation of accumulation, and the improved algorithm has a shorter execution time, this paper applies the proposed improved serial accumulation algorithm to the parallel computation of accumulation.
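For orientation, the following is a minimal sketch of the baseline non-recursive (queue-based) accumulation that the abstract compares against; the sub-basin ordering that constitutes the paper's improvement is not reproduced, and the D8 receiver grid is an assumed input.

```python
from collections import deque
import numpy as np

def flow_accumulation(receiver):
    """Non-recursive flow accumulation on a D8 grid.

    receiver[r][c] = (r2, c2), the index of the downstream cell, or None for outlets.
    """
    rows, cols = len(receiver), len(receiver[0])
    acc = np.ones((rows, cols))                 # each cell contributes itself
    indeg = np.zeros((rows, cols), dtype=int)   # number of upstream neighbours
    for r in range(rows):
        for c in range(cols):
            if receiver[r][c] is not None:
                r2, c2 = receiver[r][c]
                indeg[r2, c2] += 1
    # start from cells with no upstream contributors (drainage heads)
    q = deque((r, c) for r in range(rows) for c in range(cols) if indeg[r, c] == 0)
    while q:                                    # topological sweep downstream
        r, c = q.popleft()
        if receiver[r][c] is None:
            continue
        r2, c2 = receiver[r][c]
        acc[r2, c2] += acc[r, c]                # push accumulated amount downstream
        indeg[r2, c2] -= 1
        if indeg[r2, c2] == 0:                  # all upstream inputs received
            q.append((r2, c2))
    return acc
```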


2021 ◽  
Vol 80 (2) ◽  
pp. 347-375
Author(s):  
Timotej Hrga ◽  
Janez Povh

Abstract: We present a parallel semidefinite-based exact solver for Max-Cut, the problem of finding the cut with maximum weight in a given graph. The algorithm uses the branch-and-bound paradigm and applies the alternating direction method of multipliers (ADMM) as the bounding routine to solve the basic semidefinite relaxation strengthened by a subset of hypermetric inequalities. The benefit of the new approach is a computationally cheaper update rule for the dual variable associated with the inequality constraints. We provide a theoretical convergence proof for the algorithm as well as extensive computational experiments showing that our algorithm outperforms state-of-the-art approaches. Furthermore, by combining algorithmic ingredients from the serial algorithm, we develop an efficient distributed parallel solver based on MPI.
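As a rough illustration of the bounding idea, the sketch below solves the basic Max-Cut semidefinite relaxation max <L/4, X> subject to diag(X) = 1, X PSD, with a generic ADMM splitting; it is not the authors' solver, it omits the hypermetric inequalities, and its update rules differ from the dual update the paper proposes.

```python
import numpy as np

def maxcut_sdp_bound(L, rho=1.0, iters=500):
    """Upper bound on Max-Cut from the basic SDP relaxation via a simple ADMM."""
    n = L.shape[0]
    C = L / 4.0
    X, Z, U = np.eye(n), np.eye(n), np.zeros((n, n))
    for _ in range(iters):
        X = Z - U + C / rho              # X-step: unconstrained minimizer ...
        np.fill_diagonal(X, 1.0)         # ... then enforce diag(X) = 1
        w, V = np.linalg.eigh(X + U)     # Z-step: projection onto the PSD cone
        Z = (V * np.clip(w, 0, None)) @ V.T
        U += X - Z                       # scaled dual update
    return np.sum(C * Z)                 # approximate SDP bound on the max cut

# Example: 5-cycle with unit weights; the optimal cut is 4 and the basic SDP
# bound is roughly 4.5, so the printed value should land close to that.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
print(maxcut_sdp_bound(L))
```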


2021 ◽  
pp. 1-1
Author(s):  
Sirawit Khittiwitchayakul ◽  
Watid Phakphisut ◽  
Pornchai Supnithi

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7123
Author(s):  
Jakub Niedzwiedzki ◽  
Adam Niewola ◽  
Piotr Lipinski ◽  
Piotr Swaczyna ◽  
Aleksander Bobinski ◽  
...  

In this paper, we introduce a real-time parallel-serial algorithm for autonomous robot positioning in GPS-denied, dark environments, such as caves and mine galleries. To achieve a good complexity-accuracy trade-off, we fuse data from light detection and ranging (LiDAR) and an inertial measurement unit (IMU). The proposed algorithm’s main novelty is that, unlike most algorithms, it applies an extended Kalman filter (EKF) to each LiDAR scan point and calculates the location relative to a triangular mesh. We also introduce three implementations of the algorithm: serial, parallel, and parallel-serial. The first implementation verifies the correctness of our approach but is too slow for real-time execution. The second implements a well-known parallel data-fusion scheme but is still too slow for our application. The third and final implementation, combined with state-of-the-art GPU data structures, achieves real-time performance. According to our experimental findings, our algorithm outperforms the reference Gaussian mixture model (GMM) localization algorithm in accuracy by a factor of two.
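The measurement model against the triangular mesh is specific to the paper, so the sketch below only shows the generic EKF predict/update steps that such a per-point filter would invoke once per LiDAR point; the models f and h and their Jacobians F and H are assumed inputs.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f with Jacobian F and process noise Q."""
    x = f(x)
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, h, H, R):
    """Correct (x, P) with measurement z, measurement model h, Jacobian H, noise R.

    In a per-point scheme, z would be one LiDAR point's observation and h(x)
    its prediction relative to the map (here, a triangular mesh).
    """
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```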


2020 ◽  
Vol 10 (3) ◽  
pp. 1184 ◽  
Author(s):  
Fanxing Li ◽  
Wei Yan ◽  
Fupin Peng ◽  
Simo Wang ◽  
Jialin Du

Phase retrieval based on random phase modulation can eliminate the ambiguity and stagnation problems in reconstruction. However, the two existing reconstruction algorithms for the random phase modulation method each have drawbacks: the serial algorithm from the spread-spectrum phase retrieval method converges rapidly but has poor noise immunity, while the parallel framework suppresses noise but converges slowly. Here, we propose a random phase modulation phase retrieval method based on a serial–parallel cascaded reconstruction framework that achieves both high-quality imaging and rapid convergence. The proposed serial–parallel cascaded method uses the result of the serial stage as the initialization of the subsequent parallel stage. Simulations and experiments demonstrate that the proposed cascaded method inherits the strengths of both the serial and the parallel algorithm. Finally, we analyze the effect of the number of serial iterations on the reconstruction performance to find the optimal allocation of iterations between the two stages.
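A minimal 1-D sketch of such a serial-then-parallel cascade follows, assuming measurements m_k = |FFT(x · exp(i·phi_k))| for K known random phase masks phi_k; the actual spread-spectrum method and its parallel framework differ in detail.

```python
import numpy as np

def serial_pass(x, masks, mags):
    """One serial sweep: update the estimate with one modulation at a time."""
    for phi, m in zip(masks, mags):
        Y = np.fft.fft(x * np.exp(1j * phi))
        Y = m * np.exp(1j * np.angle(Y))            # enforce measured magnitude
        x = np.fft.ifft(Y) * np.exp(-1j * phi)      # undo the phase modulation
    return x

def parallel_pass(x, masks, mags):
    """One parallel sweep: all modulations use the same estimate, then average."""
    estimates = []
    for phi, m in zip(masks, mags):
        Y = np.fft.fft(x * np.exp(1j * phi))
        Y = m * np.exp(1j * np.angle(Y))
        estimates.append(np.fft.ifft(Y) * np.exp(-1j * phi))
    return np.mean(estimates, axis=0)               # averaging suppresses noise

def cascaded_retrieval(masks, mags, n, n_serial=20, n_parallel=200):
    x = np.random.randn(n) + 1j * np.random.randn(n)    # random initialization
    for _ in range(n_serial):
        x = serial_pass(x, masks, mags)              # fast-converging stage
    for _ in range(n_parallel):
        x = parallel_pass(x, masks, mags)            # noise-robust refinement
    return x
```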


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3177
Author(s):  
Fabrício Costa ◽  
Glauberto Leilson Albuquerque ◽  
Luiz Felipe Silveira ◽  
Carlos Valderrama ◽  
Samuel Xavier-de-Souza

Acquisition is the most time-consuming step performed by a Global Navigation Satellite System (GNSS) receiver. Its objective is to detect which satellites are transmitting and to estimate the code phase and Doppler frequency shift of their signals. It is the step with the highest computational complexity, especially for signals subject to large Doppler shifts, so improving acquisition performance has a large impact on the overall performance of GNSS reception. In this paper, we present a two-step Global Positioning System (GPS) acquisition algorithm whose first step performs an incremental correlation to find a coarse phase-frequency pair and whose second step, triggered by the variance of the largest correlation values, refines the result of the first. The proposed strategy, based on the conventional time-domain serial algorithm, reduces the average execution time of the acquisition process to about 1/5 of that of conventional acquisition while keeping the same modest logic hardware requirements and achieving slightly better success and false-positive rates. Additionally, the new method reduces memory usage by a factor proportional to the signal's sampling frequency. Together, these advantages over conventional acquisition significantly improve the overall performance and cost of GPS receivers.
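For reference, the sketch below implements the conventional time-domain serial-search acquisition that the two-step method builds on, not the proposed incremental, variance-triggered scheme; ca_code is an assumed, already-sampled C/A code replica and fs the sampling frequency.

```python
import numpy as np

def serial_acquisition(signal, ca_code, fs, dopplers):
    """Exhaustive serial search over Doppler bins and code phases.

    Returns (correlation peak, code phase in samples, Doppler in Hz).
    """
    n = len(ca_code)
    t = np.arange(n) / fs
    best = (0.0, None, None)
    for fd in dopplers:
        carrier = np.exp(-2j * np.pi * fd * t)      # wipe off carrier + Doppler
        base = signal[:n] * carrier
        for tau in range(n):                        # try every code phase
            corr = np.abs(np.dot(base, np.roll(ca_code, tau)))
            if corr > best[0]:
                best = (corr, tau, fd)
    return best
```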


2018 ◽  
Vol 18 (13&14) ◽  
pp. 1095-1114
Author(s):  
Zongyuan Zhang ◽  
Zhijin Guan ◽  
Hong Zhang ◽  
Haiying Ma ◽  
Weiping Ding

In order to realize the linear nearest neighbor (LNN) architecture of quantum circuits and reduce the quantum cost of linear reversible quantum circuits, a method for synthesizing and optimizing linear reversible quantum circuits based on matrix multiplication of the structure of the quantum circuit is proposed. The method obtains the matrix representation of a linear quantum circuit by multiplying the matrices of the different parts of the whole circuit. LNN realization by adding SWAP gates is proposed, and the equivalence of two ways of adding the SWAP gates is proved. Elimination rules for the SWAP gates between two overlapping adjacent quantum gates in different cases are proposed, which reduce the quantum cost of quantum circuits after the LNN architecture is realized. We also propose an algorithm based on parallel processing to effectively reduce the time consumption for large-scale quantum circuits. Experiments show that the quantum cost is improved by 34.31% on average and that the GPU-based algorithm achieves a speed-up of about 4 times over the CPU-based algorithm. For the large-scale benchmark circuits in RevLib, the average time optimization ratio of the parallel algorithm compared with the serial algorithm is 95.57%.
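A small sketch of the matrix view used above: a linear reversible circuit on n qubits is an invertible n x n matrix over GF(2), a CNOT is an elementary row-addition matrix, a SWAP is a permutation matrix, and composing gates is matrix multiplication mod 2. The example checks that wrapping a non-adjacent CNOT in SWAPs yields an LNN-compliant equivalent; the paper's elimination rules and GPU parallelization are not reproduced.

```python
import numpy as np

def cnot(n, control, target):
    M = np.eye(n, dtype=int)
    M[target, control] = 1          # x_target <- x_target XOR x_control
    return M

def swap(n, a, b):
    M = np.eye(n, dtype=int)
    M[[a, b]] = M[[b, a]]           # exchange the two wires
    return M

def compose(*gates):
    """Multiply gate matrices over GF(2); later gates act after earlier ones."""
    M = np.eye(gates[0].shape[0], dtype=int)
    for g in gates:
        M = (g @ M) % 2
    return M

# Non-adjacent CNOT(0, 2) on a line of 3 qubits made nearest-neighbour:
# SWAP(1, 2) . CNOT(0, 1) . SWAP(1, 2) has the same GF(2) matrix as CNOT(0, 2).
n = 3
lnn = compose(swap(n, 1, 2), cnot(n, 0, 1), swap(n, 1, 2))
assert (lnn == cnot(n, 0, 2)).all()
```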


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 477
Author(s):  
Jianqiang Hao ◽  
Jianzhi Sun ◽  
Yi Chen ◽  
Qiang Cai ◽  
Li Tan

This paper provides a full theoretical and experimental analysis of a serial algorithm for the point-in-polygon test that requires less running time than previous algorithms and can handle all degenerate cases. The serial algorithm quickly determines whether a point is inside or outside a polygon and accurately determines the contours of the input polygon. It describes all degenerate cases and provides a corresponding solution to each of them to ensure stability and reliability. This also creates the prerequisites and basis for our novel Boolean operations algorithm, which inherits all the benefits of the serial algorithm. Using geometric probability and the straight-line equation F(P) = (y_i − y_{i+1})(x_p − x_i) − (y_i − y_p)(x_{i+1} − x_i), both of our algorithms are optimized so that they avoid division operations and do not need to compute any intersection points. Our algorithms are applicable to any polygon, including self-intersecting polygons and polygons with holes nested to any depth. They do not have to sort the vertices clockwise or counterclockwise beforehand and can therefore process all edges one by one in any order, which allows a parallel implementation of each algorithm to be made very easily. We also prove several theorems guaranteeing the correctness of the algorithms. To speed up the operations, we assign each vector a number code and derive two iterative formulas using differential calculus. The experimental results as well as the theoretical proof show that our serial algorithm for the point-in-polygon test is optimal and that the time complexities of all our algorithms are linear. Our methods can be extended to three-dimensional space; in particular, they can be applied to 3D printing to improve its performance.
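A minimal even-odd (ray-crossing) sketch built on the edge function F(P) quoted above follows; it needs no division and computes no intersection points, but, unlike the paper's algorithm, it does not handle degenerate cases such as the query point lying exactly on an edge or vertex.

```python
def point_in_polygon(xp, yp, poly):
    """Even-odd test; poly is a list of (x, y) vertices in traversal order."""
    inside = False
    n = len(poly)
    for i in range(n):
        (xi, yi), (xj, yj) = poly[i], poly[(i + 1) % n]
        # F > 0 means P lies to the left of the directed edge i -> i+1
        F = (yi - yj) * (xp - xi) - (yi - yp) * (xj - xi)
        if yi <= yp < yj and F > 0:      # upward edge crossing the ray to the right of P
            inside = not inside
        elif yj <= yp < yi and F < 0:    # downward edge crossing the ray to the right of P
            inside = not inside
    return inside

print(point_in_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(point_in_polygon(1.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # False
```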


2015 ◽  
Vol 713-715 ◽  
pp. 1712-1715
Author(s):  
Hui Wang

We present a novel and powerful parallel algorithm, PMFI, for mining all the maximal frequent itemsets from a large database. PMFI uses several techniques to drastically reduce the I/O overhead. The key principle is to use prefix-based equivalence classes to decompose the search space and to distribute the work among the processors by equivalence-class weights. The database is re-represented in vertical format, so that frequency counting can be done by simple tid-list intersection operations. PMFI is based on a novel serial algorithm, MaxMining, which uses a multiple-level backtrack pruning strategy, so that each processor can count the maximal frequent itemsets independently by selectively replicating pieces of the database. These techniques eliminate the need for synchronization. A dynamic load-balancing scheme is also applied in PMFI, which is expected to achieve better performance.
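A small sketch of the vertical-format counting idea: each item keeps the set of transaction ids (its tid-list) in which it occurs, and the support of an itemset is the size of the intersection of its items' tid-lists. The maximal-itemset search, the MaxMining pruning, and the load balancing of PMFI are not reproduced.

```python
def vertical_format(transactions):
    """Build item -> tid-list (set of transaction ids) from a horizontal database."""
    tidlists = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidlists.setdefault(item, set()).add(tid)
    return tidlists

def support(itemset, tidlists):
    """Support of an itemset = size of the intersection of its items' tid-lists."""
    tids = set.intersection(*(tidlists[i] for i in itemset))
    return len(tids)

db = [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"b", "c"}]
tl = vertical_format(db)
print(support({"a", "c"}, tl))   # 2 transactions contain both a and c
```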

