Quicker Sort Algorithm: Upgrading time complexity of Quick Sort to Linear Logarithmic

Author(s):  
Sandeep Kumar Budhani ◽  
Naveen Tewari ◽  
Mukesh Joshi ◽  
Kshitij Kala
2020 ◽  
Vol 11 (2) ◽  
pp. 95-102
Author(s):  
I Nyoman Aditya Yudiswara ◽  
Abba Suganda

Processor technology currently tends to increase the number of cores rather than the clock speed. This development is very useful and becomes an opportunity to improve the performance of sequential algorithms that are executed by only one core. This paper discusses a sorting algorithm that is executed in parallel by several logical CPUs or cores using the OpenMP library. The algorithm, named QDM Sort, is a combination of the sequential quick sort algorithm and a double merge algorithm. This study uses a data parallelism approach to design the parallel algorithm from the sequential algorithms. The data used in this study, both unsorted and already sorted, are of integer type and are stored in advance in a file. The parameter measured to determine the performance of the QDM Sort algorithm is speedup. When the amount of data is large (above 4096 elements) and the number of threads in QDM Sort equals the number of logical CPUs, the QDM Sort algorithm achieves a better speedup than the other parallel sorting algorithms discussed in this study. For small amounts of data it is still better to use a sequential sorting algorithm.
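The abstract does not include the authors' QDM Sort code; the following is a minimal C++/OpenMP sketch of the general data-parallel pattern it describes: each thread sorts its own chunk, and the sorted chunks are then pairwise merged. The use of std::sort for the per-chunk quick sort and the chunking scheme are assumptions for illustration only.

```cpp
// Sketch of the chunk-sort-then-merge pattern (not the authors' QDM Sort code).
#include <algorithm>
#include <vector>
#include <omp.h>

void parallel_chunk_sort(std::vector<int>& data) {
    int threads = omp_get_max_threads();
    std::size_t n = data.size();
    std::vector<std::size_t> bounds(threads + 1);
    for (int t = 0; t <= threads; ++t)
        bounds[t] = n * t / threads;                 // chunk boundaries

    // Phase 1: each logical CPU sorts its own chunk (data parallelism).
    #pragma omp parallel for
    for (int t = 0; t < threads; ++t)
        std::sort(data.begin() + bounds[t], data.begin() + bounds[t + 1]);

    // Phase 2: pairwise merge of sorted chunks until one sorted run remains.
    for (int step = 1; step < threads; step *= 2) {
        #pragma omp parallel for
        for (int t = 0; t < threads; t += 2 * step) {
            std::size_t lo  = bounds[t];
            std::size_t mid = bounds[std::min(t + step, threads)];
            std::size_t hi  = bounds[std::min(t + 2 * step, threads)];
            std::inplace_merge(data.begin() + lo, data.begin() + mid, data.begin() + hi);
        }
    }
}
```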


2014 ◽  
Vol 701-702 ◽  
pp. 24-29
Author(s):  
Jun Zhang ◽  
Yong Ping Gao ◽  
Yue Shun He ◽  
Xue Yuan Wang

The two-way merge sort algorithm has good time efficiency and has been used widely. The algorithm can be improved in speed and efficiency by exploiting its inherent parallelism via the parallel processing capacity of multi-core processors and the convenient programming interface of OpenMP. The time complexity is improved to O(n log2 n / TNUM), i.e., it is inversely proportional to the number of parallel threads TNUM. The experimental results show that the improved two-way merge sort algorithm is much more efficient than the traditional one.
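As a rough illustration of how a two-way merge sort can be parallelized with OpenMP (this is not the authors' implementation), the sketch below runs the two recursive halves as OpenMP tasks up to a fixed depth, so the work is spread over the available threads. The cut-off depth of 4 is an arbitrary assumption.

```cpp
// Illustrative OpenMP task-based two-way merge sort (sketch only).
#include <algorithm>
#include <vector>
#include <omp.h>

static void merge_sort(std::vector<int>& a, std::size_t lo, std::size_t hi, int depth) {
    if (hi - lo < 2) return;
    std::size_t mid = lo + (hi - lo) / 2;
    if (depth > 0) {
        // Spawn the two halves as tasks while spare threads are likely available.
        #pragma omp task shared(a)
        merge_sort(a, lo, mid, depth - 1);
        #pragma omp task shared(a)
        merge_sort(a, mid, hi, depth - 1);
        #pragma omp taskwait
    } else {
        merge_sort(a, lo, mid, 0);
        merge_sort(a, mid, hi, 0);
    }
    std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
}

void parallel_merge_sort(std::vector<int>& a) {
    #pragma omp parallel
    #pragma omp single
    merge_sort(a, 0, a.size(), 4);   // depth 4 => up to 16 concurrent tasks
}
```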


2014 ◽  
Vol 2014 ◽  
pp. 1-10
Author(s):  
Niraj Kumar Singh ◽  
Soubhik Chakraborty ◽  
Dheeresh Kumar Mallick

We present a new and improved worst case complexity model for quick sort, y_worst(n, t_d) = b_0 + b_1·n² + g(n, t_d) + ε, where the LHS gives the worst case time complexity, n is the input size, t_d is the frequency of sample elements, and g(n, t_d) is a function of both the input size n and the parameter t_d. The rest of the terms, arising due to linear regression, have their usual meanings. We claim this to be an improvement over the conventional model, namely y_worst(n) = b_0 + b_1·n + b_2·n² + ε, which stems from the worst case O(n²) complexity for this algorithm.
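For readability, the two regression models contrasted in the abstract can be restated side by side in standard notation (this is only a restatement of the formulas above, not additional material from the paper):

```latex
\begin{align}
  y_{\text{worst}}(n)      &= b_0 + b_1 n + b_2 n^2 + \varepsilon
      && \text{(conventional worst-case model)} \\
  y_{\text{worst}}(n, t_d) &= b_0 + b_1 n^2 + g(n, t_d) + \varepsilon
      && \text{(proposed parameterized model)}
\end{align}
```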


Sorting is an essential concept in the study of data structures. There are many sorting algorithms that can sort the elements of a given array or list. Counting sort is a sorting algorithm with one of the best time complexities, as it runs in linear time; however, the counting sort algorithm only works for positive integers. In this paper, an extension of the counting sort algorithm is proposed that can sort real numbers as well as integers, both positive and negative.
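The abstract does not spell out the proposed extension, so the sketch below only shows one standard way of handling negative integers in counting sort: shifting every value by the minimum so all count-array indices are non-negative. Handling real numbers would require an additional value-to-bucket mapping, which the abstract does not describe; the function name here is an assumption.

```cpp
// Counting sort extended to negative integers via an offset by the minimum value
// (a common technique; not necessarily the paper's exact method).
#include <algorithm>
#include <vector>

std::vector<int> counting_sort_with_negatives(const std::vector<int>& a) {
    if (a.empty()) return {};
    int lo = *std::min_element(a.begin(), a.end());
    int hi = *std::max_element(a.begin(), a.end());
    std::vector<std::size_t> count(static_cast<std::size_t>(hi - lo) + 1, 0);
    for (int v : a) ++count[v - lo];                 // shift so indices start at 0
    std::vector<int> out;
    out.reserve(a.size());
    for (std::size_t i = 0; i < count.size(); ++i)   // emit each value count[i] times
        out.insert(out.end(), count[i], lo + static_cast<int>(i));
    return out;
}
```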


Petir ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 159-169
Author(s):  
Endang Sunandar

There are various data sorting methods that we know of, including the Bubble Sort, Selection Sort, Insertion Sort, Quick Sort, Shell Sort, Heap Sort, and Radix Sort methods. Each of these methods has its own advantages and disadvantages, and the choice among them is determined by need. Each method has a different algorithm, and different algorithms affect the execution time. One interesting algorithm to implement on two variant models of data sorting is the Bubble Sort algorithm, because this algorithm has a fairly long and detailed process flow for producing an ordered data sequence from a previously unordered one. The two data sorting variant models implemented using the Bubble Sort algorithm are: the ascending variant, moving from left to right, and the descending variant, also moving from left to right. The tool used to implement the Bubble Sort algorithm is the Java programming language.
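The paper implements the two variants in Java; purely as an illustration of the idea (and in the same language as the other sketches in this listing), the C++ sketch below shows a left-to-right Bubble Sort once for ascending order and once for descending order.

```cpp
// Two left-to-right Bubble Sort variants, as described in the abstract (sketch only).
#include <utility>
#include <vector>

void bubble_sort_ascending(std::vector<int>& a) {
    for (std::size_t pass = 0; pass + 1 < a.size(); ++pass)
        for (std::size_t i = 0; i + 1 < a.size() - pass; ++i)
            if (a[i] > a[i + 1]) std::swap(a[i], a[i + 1]);   // larger value bubbles right
}

void bubble_sort_descending(std::vector<int>& a) {
    for (std::size_t pass = 0; pass + 1 < a.size(); ++pass)
        for (std::size_t i = 0; i + 1 < a.size() - pass; ++i)
            if (a[i] < a[i + 1]) std::swap(a[i], a[i + 1]);   // smaller value bubbles right
}
```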


Data sorting has many advantages and applications in software and web development. Search engines use sorting techniques to sort results before they are presented to the user. The words in a dictionary are in sorted order so that words can be found easily. Many sorting algorithms are used across domains to perform some operation and obtain the desired output, but some sorting algorithms take a long time to sort the data, and this large running time can be a liability for the operation. Every sorting algorithm uses a different technique to sort the given data. Stooge sort is a sorting algorithm that sorts the data recursively and takes comparatively more time than many other sorting algorithms. Stooge sort works recursively to sort the data elements, whereas the Optimized Stooge sort does not use a recursive process. In this paper, we propose Optimized Stooge sort to reduce the time complexity of Stooge sort. The running time of Optimized Stooge sort is greatly reduced compared to the Stooge sort algorithm. The existing research focuses on reducing the running time of Stooge sort. Our results show that Optimized Stooge sort is faster than the Stooge sort algorithm.
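The abstract does not describe the non-recursive Optimized Stooge sort itself, so only the classic recursive baseline being optimized is sketched here for reference; the function name and interface are assumptions.

```cpp
// Classic recursive Stooge sort (the baseline the paper optimizes), sorting a[lo..hi].
#include <algorithm>
#include <vector>

void stooge_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (a[lo] > a[hi]) std::swap(a[lo], a[hi]);   // put endpoints in order
    if (hi - lo + 1 > 2) {
        std::size_t third = (hi - lo + 1) / 3;
        stooge_sort(a, lo, hi - third);           // sort the first two thirds
        stooge_sort(a, lo + third, hi);           // sort the last two thirds
        stooge_sort(a, lo, hi - third);           // sort the first two thirds again
    }
}
```

Called as stooge_sort(v, 0, v.size() - 1) on a non-empty vector; its roughly O(n^2.71) running time is what makes the reported optimization worthwhile.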


Author(s):  
Santosh Kumar Sahu ◽  
Sanjay Kumar Jena ◽  
Manish Verma

Outliers in a database are the objects that deviate from the rest of the dataset by some measure. The Nearest Neighbor Outlier Factor is considered to measure the degree of outlier-ness of an object in the dataset. Unlike other methods such as the Local Outlier Factor, this approach considers the interest of a point from both its neighbors and its reverse neighbors before an object comes into consideration. We have observed that the GBBK algorithm, which is based on K-NN, uses quick sort to find the k nearest neighbors, which takes O(N log N) time. In the proposed method, the search is performed K times and completes in O(KN) time to find the k nearest neighbors (k << log N). As a result, the proposed method improves the time complexity. The NSL-KDD and Fisher iris datasets are used, and the experimental results are compared with the GBBK method. Both methods produce the same results, but the proposed method takes less time for computation.
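To make the O(KN) claim concrete, the sketch below illustrates the general idea of scanning the distance array K times and extracting the next-smallest distance on each pass, instead of sorting all N distances in O(N log N). The names and interface are assumptions for illustration, not the authors' code.

```cpp
// K linear scans over N distances => O(KN) selection of the k nearest neighbors.
#include <vector>

std::vector<std::size_t> k_nearest(const std::vector<double>& dist, std::size_t k) {
    std::vector<char> taken(dist.size(), 0);
    std::vector<std::size_t> neighbors;
    for (std::size_t pass = 0; pass < k && pass < dist.size(); ++pass) {   // K passes
        std::size_t best = dist.size();                                    // "not found" sentinel
        for (std::size_t i = 0; i < dist.size(); ++i)                      // O(N) scan
            if (!taken[i] && (best == dist.size() || dist[i] < dist[best]))
                best = i;
        taken[best] = 1;                       // mark this neighbor as selected
        neighbors.push_back(best);
    }
    return neighbors;                          // indices of the k nearest points
}
```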

