Closest Pair Algorithms in 2D Space: A Commentary on Complexity and Reductions

Author(s):  
Anand Sunder

One of the most challenging problems in computational geometry is finding the closest pair among n given points. Brute-force algorithms [1] and divide-and-conquer algorithms [1] have been verified, and the lowest complexity, O(n log n), is attributed to the latter class, while the worst case for the former is O(n²). We propose a method of partitioning the set of n points based on the least-area rectangle that can circumscribe them.
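For reference, a minimal Python sketch of the O(n²) brute-force baseline the abstract compares against (the function name and example points are mine, not the authors'):

import math
from itertools import combinations

def closest_pair_brute_force(points):
    # O(n^2) baseline: measure every pair and keep the smallest distance.
    best = (math.inf, None, None)
    for p, q in combinations(points, 2):
        d = math.dist(p, q)  # Euclidean distance (Python 3.8+)
        if d < best[0]:
            best = (d, p, q)
    return best

# Example: the closest pair here is (1, 1) and (2, 2).
print(closest_pair_brute_force([(0, 0), (1, 1), (2, 2), (5, 5)]))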

Author(s):  
Rhowel M. Dellosa ◽  
Arnel C Fajardo ◽  
Ruji P. Medina

This paper introduces an algorithm to solve the closest pair of points problem in a 2D plane based on dynamic warping. The algorithm computes all the distances between the set of points P(x, y) and a reference point R(i, j), records the results in a grid, and finally determines the minimum distance through a series of schematic steps. Results show that the algorithm achieves fewer comparisons in determining the closest pair of points than the brute-force and divide-and-conquer methods.
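The abstract does not spell out the schematic steps, so the following Python sketch illustrates only the bookkeeping it describes: computing the distance from every point to the reference point, recording the results, and taking the minimum (the names are illustrative, not the paper's):

import math

def reference_distances(points, ref):
    # Record the distance from each point P(x, y) to the reference R(i, j).
    return {p: math.dist(p, ref) for p in points}

def nearest_to_reference(points, ref):
    grid = reference_distances(points, ref)
    return min(grid, key=grid.get)

points = [(3, 4), (1, 1), (6, 2)]
print(nearest_to_reference(points, (0, 0)))  # -> (1, 1)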


2010 ◽  
Vol 09 (02) ◽  
pp. 241-256 ◽  
Author(s):  
ELEONORA GUERRINI ◽  
EMMANUELA ORSINI ◽  
MASSIMILIANO SALA

The most important families of nonlinear codes are systematic. A brute-force check is the only known method to compute their weight distribution and distance distribution; on the other hand, it also outputs all closest word pairs in the code. In the black-box complexity model, the check is optimal among closest-pair algorithms. In this paper, we provide a Gröbner basis technique to compute the weight/distance distribution of any systematic nonlinear code. Our technique also outputs all closest pairs. Unlike the check, our method can be extended to work on code families.
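As a point of comparison, the brute-force check the authors benchmark against can be sketched in a few lines of Python (assuming the code is given as a list of equal-length binary words; the names are mine):

from itertools import combinations
from collections import Counter

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def brute_force_check(code):
    # Tally d(u, v) over all codeword pairs (the distance distribution)
    # and collect every pair attaining the minimum distance.
    dist = Counter()
    best, closest = float("inf"), []
    for u, v in combinations(code, 2):
        d = hamming(u, v)
        dist[d] += 1
        if d < best:
            best, closest = d, [(u, v)]
        elif d == best:
            closest.append((u, v))
    return dist, best, closest

code = ["0000", "0111", "1011", "1101"]
print(brute_force_check(code))  # minimum distance 2, three closest pairs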


Author(s):  
James H. Critchley ◽  
Kurt S. Anderson

Optimal, time-efficient parallel computation methods for large multibody system dynamics are defined and investigated in detail. Comparative observations demonstrate significant deficiencies in operating regions of practical importance, and a new parallel algorithm is generated to address them. The new method of Recursive Coordinate Reduction Parallelism (RCRP) outperforms or directly reduces to the fastest general multibody algorithms available for small parallel resources, and obtains O(log_k(n)) time complexity in the presence of larger parallel arrays. Performance of this method relative to the Divide and Conquer Algorithm is illustrated with an operations count for the worst case of a multibody chain system.


2007 ◽  
Vol 22 (4) ◽  
pp. 532-540 ◽  
Author(s):  
Minghui Jiang ◽  
Joel Gillespie

2020 ◽  
Vol 12 (1) ◽  
pp. 52-58
Author(s):  
Fenina Adline Twince Tobing ◽  
James Ronald Tambunan

Abstract— A comparison of algorithms is needed to determine how efficient an algorithm is. This study compares the efficiency of two existing sorting strategies, brute force and divide and conquer. The brute-force algorithms tested are bubble sort and selection sort; the divide-and-conquer algorithms tested are quick sort and merge sort. Each algorithm was tested on data sets of 50 up to 100,000 elements, implemented in the JavaScript programming language. The results show that quick sort, a divide-and-conquer algorithm, has good efficiency and a fast running time, while bubble sort, a brute-force algorithm, has poor efficiency and a long running time. Keywords – efficiency, algorithm, brute force, divide and conquer, bubble sort, selection sort, quick sort, merge sort
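The study's tests were written in JavaScript; the following Python analogue (my own sketch, not the authors' harness) shows how such a comparison can be reproduced:

import random
import time

def bubble_sort(a):
    # Brute-force strategy: repeatedly swap adjacent out-of-order items, O(n^2).
    a = a[:]
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    # Divide-and-conquer strategy: partition around a pivot, O(n log n) on average.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

data = [random.randint(0, 10**6) for _ in range(5000)]
for sort in (bubble_sort, quick_sort):
    t0 = time.perf_counter()
    sort(data)
    print(sort.__name__, round(time.perf_counter() - t0, 3), "s")

Even at 5000 elements the gap is large; at the study's upper bound of 100,000 elements the quadratic algorithms become impractical.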


2021 ◽  
Author(s):  
Sterling Baird ◽  
Taylor Sparks

A large collection of element-wise planar densities for compounds obtained from the Materials Project is calculated using brute-force computational geometry methods. We demonstrate that the element-wise max lattice plane densities can be useful as machine learning features. The methods described here are implemented in an open-source Mathematica package hosted at https://github.com/sgbaird/LatticePlane.


Author(s):  
W. F. Smyth

Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output.
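To make the notion of a repetition concrete, here is a deliberately naive Python sketch (mine, not the paper's) that reports every square, i.e. two copies of the same block back to back, by brute force:

def squares(x):
    # O(n^3) brute force: test every start position and block length.
    n = len(x)
    found = []
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if x[i:i + half] == x[i + half:i + 2 * half]:
                found.append((i, x[i:i + 2 * half]))
    return found

print(squares("CGACGATTT"))  # [(0, 'CGACGA'), (6, 'TT'), (7, 'TT')]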


2010 ◽  
Vol 10 (03) ◽  
pp. 327-341 ◽  
Author(s):  
P. KARTHIGAIKUMAR ◽  
K. BASKARAN

Information security has always been important in all aspects of life as technology controls various operations. Cryptography provides a layer of security in cases where the medium of transmission is susceptible to interception, by translating a message into a form that cannot be read by an unauthorized third party. All non-quantum transmission media known today are capable of being intercepted in one way or another. This paper implements a novel, partially pipelined, robust architecture of the Blowfish algorithm in hardware. Blowfish has no known effective cryptanalysis; the best proven attack against it to date is an exhaustive brute-force attack. This makes Blowfish an attractive cryptographic algorithm, since it is not susceptible to any reasonable attack. A hardware implementation of Blowfish would be a powerful tool for any mobile device, or any technology requiring strong encryption. The proposed design uses the core_slow library for worst-case scenario analysis and attains an encryption speed of 2670 Mbits/s and a decryption speed of 2642 Mbits/s. The area is 5986 LUTs and the power is a mere 77 mW.
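The paper's contribution is the hardware architecture itself; for readers who want to experiment with the algorithm in software, Blowfish is available in the Python cryptography package (a minimal sketch, assuming a library version that still exposes the class here; the newest releases move it to the hazmat.decrepit module):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(range(16))  # Blowfish accepts key sizes from 32 to 448 bits
iv = bytes(8)           # 64-bit block size, so CBC needs an 8-byte IV
                        # (use a random IV in practice, not zeros)
cipher = Cipher(algorithms.Blowfish(key), modes.CBC(iv))

enc = cipher.encryptor()
plaintext = b"8byteblk" * 2  # CBC input length must be a multiple of 8 bytes
ciphertext = enc.update(plaintext) + enc.finalize()

dec = cipher.decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext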


1988 ◽  
Vol 11 (3) ◽  
pp. 275-288
Author(s):  
Jyrki Katajainen ◽  
Markku Koppinen

Recently Rex Dwyer [D87] presented an algorithm which constructs a Delaunay triangulation for a planar set of N sites in O(N log log N) expected time and O(N log N) worst-case time. We show that a slight modification of his algorithm preserves the worst-case running time but has only O(N) average running time. The method is a hybrid which combines the cell technique with the divide-and-conquer algorithm of Guibas & Stolfi [GS85]. First a square grid of about √N by √N is placed on the set of sites. The grid forms about N cells (buckets), each of which is implemented as a list of the sites which fall into the corresponding square of the grid. A Delaunay triangulation of the generally rather few sites within each cell is constructed with the Guibas & Stolfi algorithm. Then the triangulations are merged, four by four, in a quadtree-like order.
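Only the bucketing step lends itself to a short illustration; this Python sketch (my own, omitting the Guibas & Stolfi triangulation and the quadtree-like merge) shows how the roughly √N-by-√N grid distributes the sites into about N cells:

import math
from collections import defaultdict

def bucket_sites(sites):
    # Place a k-by-k grid (k ~ sqrt(N), hence ~N cells) over the
    # bounding box of the sites and list the sites in each cell.
    n = len(sites)
    k = max(1, round(math.sqrt(n)))
    xs, ys = zip(*sites)
    xmin, ymin = min(xs), min(ys)
    w = (max(xs) - xmin) or 1.0  # avoid division by zero for degenerate input
    h = (max(ys) - ymin) or 1.0
    cells = defaultdict(list)
    for (x, y) in sites:
        i = min(k - 1, int(k * (x - xmin) / w))
        j = min(k - 1, int(k * (y - ymin) / h))
        cells[(i, j)].append((x, y))
    return cells

print(bucket_sites([(0.1, 0.2), (0.9, 0.8), (0.5, 0.5), (0.52, 0.48)]))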

