An Enhanced Bidirectional Insertion Sort Over Classical Insertion Sort

Author(s):  
A. Kalaivani ◽  
K. Swetha

Sorting is a technique used to arrange data in a specific order: numerically, in ascending or descending order, or alphabetically for words. In this paper, we propose an efficient sorting algorithm, the Enhanced Bidirectional Insertion Sort, developed from the insertion sort concept. A comparative analysis of the proposed algorithm against the selection sort and insertion sort algorithms is carried out. Compared to insertion sort, the proposed algorithm performs fewer comparisons in both the worst-case and the average-case running time. The proposed algorithm also works efficiently for duplicated elements, which is a further improvement, and the results are demonstrated.
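
For reference, the classical insertion sort that the proposed algorithm builds on can be sketched in C as follows (a minimal sketch of the textbook algorithm; the paper's bidirectional variant itself is not reproduced here):

#include <stddef.h>

/* Classical insertion sort: grows a sorted prefix one element at a time,
 * shifting larger elements right to make room for the current key. */
void insertion_sort(int a[], size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];   /* shift right */
            j--;
        }
        a[j] = key;            /* drop the key into its slot */
    }
}

A bidirectional variant maintains the sorted region so that insertions can proceed from both ends, reducing the number of shifts; the authors' exact strategy is given in the paper.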

Data sorting has many advantages and applications in software and web development. Search engines use sorting techniques to order results before they are presented to the user, and the words in a dictionary are kept in sorted order so that they can be found easily. Many sorting algorithms are used across domains to perform operations and obtain the desired output, but some of them take a long time to sort the data, and this cost can make an operation impractical. Every sorting algorithm uses a different technique to sort the given data. Stooge sort is a sorting algorithm that sorts the data recursively and takes comparatively more time than many other sorting algorithms. Whereas Stooge sort works recursively to sort the data elements, the Optimized Stooge sort does not use recursion. In this paper, we propose Optimized Stooge sort to reduce the time complexity of Stooge sort. The running time of Optimized Stooge sort is greatly reduced compared to the Stooge sort algorithm. The existing research focuses on reducing the running time of Stooge sort, and our results show that Optimized Stooge sort is faster than the Stooge sort algorithm.
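
For context, the classical recursive Stooge sort that the optimization targets can be sketched in C as follows (the textbook algorithm; the non-recursive Optimized Stooge sort itself is not reproduced here):

#include <stddef.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Textbook Stooge sort on a[lo..hi]: sort the first two thirds, then the
 * last two thirds, then the first two thirds again. Its running time is
 * O(n^(log 3 / log 1.5)), roughly O(n^2.71), hence slower than most sorts. */
void stooge_sort(int a[], size_t lo, size_t hi) {
    if (a[lo] > a[hi])
        swap_int(&a[lo], &a[hi]);
    if (hi - lo + 1 > 2) {
        size_t t = (hi - lo + 1) / 3;
        stooge_sort(a, lo, hi - t);   /* first two thirds */
        stooge_sort(a, lo + t, hi);   /* last two thirds  */
        stooge_sort(a, lo, hi - t);   /* first two thirds again */
    }
}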


Author(s):  
Nirupma Pathak ◽  
Shubham Tiwari

In this paper, we present a double-ended selection sorting technique. Both theoretical and programmatic analyses show that the proposed advanced selection sort algorithm enhances the performance of selection sort: it is much faster because it selects the minimum and maximum elements simultaneously, with a possible execution speed-up of up to 30%. The code for this algorithm is written in the C programming language; since C is a popular language, the concept of this sorting algorithm is easy for everyone to understand. Results and discussion show a higher level of performance for the sorting algorithm. It can be argued theoretically that the algorithm reduces the number of steps compared with selection sort and moves O(N^2) sorting toward O(N log N) sorting.
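
A minimal C sketch of the simultaneous min/max selection idea (an illustration of the technique, not the paper's own C code):

#include <stddef.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Double-ended selection sort: each pass finds both the minimum and the
 * maximum of the unsorted middle region and places them at its two ends,
 * roughly halving the number of passes of plain selection sort. */
void double_selection_sort(int a[], size_t n) {
    if (n == 0) return;
    for (size_t lo = 0, hi = n - 1; lo < hi; lo++, hi--) {
        size_t min = lo, max = lo;
        for (size_t i = lo + 1; i <= hi; i++) {
            if (a[i] < a[min]) min = i;
            if (a[i] > a[max]) max = i;
        }
        swap_int(&a[lo], &a[min]);
        if (max == lo)          /* the old maximum was just moved to min */
            max = min;
        swap_int(&a[hi], &a[max]);
    }
}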


10.37236/6354 ◽  
2017 ◽  
Vol 24 (2) ◽  
Author(s):  
Carsten Schneider ◽  
Robin Sulzgruber

The Novelli-Pak-Stoyanovskii algorithm is a sorting algorithm for Young tableaux of a fixed shape that was originally devised to give a bijective proof of the hook-length formula. We obtain new asymptotic results on the average case and worst case complexity of this algorithm as the underlying shape tends to a fixed limit curve. Furthermore, using the summation package Sigma we prove an exact formula for the average case complexity when the underlying shape consists of only two rows. We thereby answer questions posed by Krattenthaler and Müller.
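
For readers unfamiliar with it, the hook-length formula mentioned above counts the standard Young tableaux of a fixed shape (the standard statement, in the usual notation):

% Number of standard Young tableaux of shape \lambda with n cells,
% where h(c) is the hook length of cell c.
f^{\lambda} = \frac{n!}{\prod_{c \in \lambda} h(c)}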


In the era of new technology, we have huge amounts of data to deal with, and arranging such data remains a big challenge. This research paper covers two sorting algorithms, Heap Sort and Insertion Sort, together with a performance analysis of their running time and complexity. The paper includes the algorithms and their implementation in the Java programming language. For the results of this study, the two sorting algorithms are compared on inputs of different sizes at running time: Large, Average, and Small. For Large data, 100 integers are passed in the array; for Average data, 50 integers; and for Small data, 10 integers. This checks which sorting technique is efficient for the given input data. To identify the efficiency of the algorithms on this data, three cases are used: Best, Average, and Worst. The results of this analysis are shown with the help of graphs indicating how much time both algorithms take to produce the desired output.
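
A minimal C sketch of the heap sort side of the comparison (insertion sort is sketched earlier in this listing; the study's own Java implementation is not reproduced):

#include <stddef.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Restore the max-heap property for the subtree rooted at i in a[0..n-1]. */
static void sift_down(int a[], size_t n, size_t i) {
    for (;;) {
        size_t largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        swap_int(&a[i], &a[largest]);
        i = largest;
    }
}

/* Heap sort: build a max-heap, then repeatedly move the root to the end. */
void heap_sort(int a[], size_t n) {
    if (n < 2) return;
    for (size_t i = n / 2; i-- > 0; )   /* heapify, bottom-up */
        sift_down(a, n, i);
    for (size_t end = n - 1; end > 0; end--) {
        swap_int(&a[0], &a[end]);       /* largest element goes to the end */
        sift_down(a, end, 0);
    }
}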


Algorithms ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 294
Author(s):  
Frantisek Franek ◽  
Michael Liut

There are two reasons to have an efficient algorithm for identifying all right-maximal Lyndon substrings of a string: firstly, Bannai et al. introduced in 2015 a linear algorithm to compute all runs of a string that relies on knowing all right-maximal Lyndon substrings of the input string, and secondly, Franek et al. showed in 2017 a linear equivalence of sorting suffixes and sorting right-maximal Lyndon substrings of a string, inspired by a novel suffix sorting algorithm of Baier. In 2016, Franek et al. presented a brief overview of algorithms for computing the Lyndon array that encodes the knowledge of right-maximal Lyndon substrings of the input string. Among those presented were two well-known algorithms for computing the Lyndon array: a quadratic in-place algorithm based on the iterated Duval algorithm for Lyndon factorization and a linear algorithmic scheme based on linear suffix sorting, computing the inverse suffix array, and applying to it the next smaller value algorithm. Duval’s algorithm works for strings over any ordered alphabet, while for linear suffix sorting, a constant or an integer alphabet is required. The authors at that time were not aware of Baier’s algorithm. In 2017, our research group proposed a novel algorithm for the Lyndon array. Though the proposed algorithm is linear in the average case and has O(n log(n)) worst-case complexity, it is interesting as it emulates the fast Fourier algorithm’s recursive approach and introduces τ-reduction, which might be of independent interest. In 2018, we presented a linear algorithm to compute the Lyndon array of a string inspired by Phase I of Baier’s algorithm for suffix sorting. This paper presents the theoretical analysis of these two algorithms and provides empirical comparisons of both of their C++ implementations with respect to the iterated Duval algorithm.
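
For concreteness, Duval's algorithm for Lyndon factorization, whose iterated application yields the quadratic in-place Lyndon array computation referenced above, can be sketched in C as follows (a standard rendering of the textbook algorithm, not the cited implementation):

#include <stdio.h>
#include <string.h>

/* Duval's algorithm: factor s into its Lyndon factorization
 * s = w1 w2 ... wk with w1 >= w2 >= ... >= wk and each wi a Lyndon word,
 * in O(n) time. Prints the start position and length of each factor. */
void duval_factorization(const char *s) {
    size_t n = strlen(s), i = 0;
    while (i < n) {
        size_t j = i + 1, k = i;
        while (j < n && s[k] <= s[j]) {
            if (s[k] < s[j]) k = i;   /* current prefix is a Lyndon word */
            else k++;                 /* current prefix is periodic */
            j++;
        }
        while (i <= k) {              /* emit factors of length j - k */
            printf("factor at %zu, length %zu\n", i, j - k);
            i += j - k;
        }
    }
}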


2020 ◽  
Vol 64 (7) ◽  
pp. 1197-1224
Author(s):  
Florian Stober ◽  
Armin Weiß

MergeInsertion, also known as the Ford-Johnson algorithm, is a sorting algorithm which, up to today, for many input sizes achieves the best known upper bound on the number of comparisons. Indeed, it gets extremely close to the information-theoretic lower bound. While the worst-case behavior is well understood, only little is known about the average case. This work takes a closer look at the average case behavior. In particular, we establish an upper bound of $n \log n - 1.4005n + o(n)$ comparisons. We also give an exact description of the probability distribution of the length of the chain a given element is inserted into and use it to approximate the average number of comparisons numerically. Moreover, we compute the exact average number of comparisons for n up to 148. Furthermore, we experimentally explore the impact of different decision trees for binary insertion. To conclude, we conduct experiments showing that a slightly different insertion order leads to a better average case and we compare the algorithm to Manacher’s combination of merging and MergeInsertion as well as to the recent combined algorithm with (1,2)-Insertionsort by Iwama and Teruyama.
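
The binary insertion step MergeInsertion relies on (inserting an element into a sorted chain with a logarithmic number of comparisons) can be sketched in C as follows; a generic illustration, not the paper's tuned decision trees:

#include <stddef.h>

/* Insert key into the sorted prefix a[0..len-1] by binary search, costing
 * about ceil(log2(len + 1)) comparisons regardless of where key lands. */
void binary_insert(int a[], size_t len, int key) {
    size_t lo = 0, hi = len;            /* insertion point lies in [lo, hi] */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] <= key) lo = mid + 1;
        else hi = mid;
    }
    for (size_t j = len; j > lo; j--)   /* shift to make room */
        a[j] = a[j - 1];
    a[lo] = key;
}

MergeInsertion schedules the insertions so that each chain length is just below a power of two, which is what keeps the total comparison count close to the information-theoretic bound.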


Sorting is a basic activity in the field of computer science, commonly used when searching for information and data. The main goal of sorting is to make records easier to edit, delete, search, and so on; it organizes the given data into a chosen sequence. There are many sorting algorithms, such as insertion sort, bubble sort, radix sort, and heap sort. Bubble sort and insertion sort are clearly described with algorithms and examples. In this paper, a performance analysis of bubble sort and insertion sort is carried out by measuring their running times. The time complexities have been measured by implementing the algorithms in the Rust and Python languages and observing the best, average, and worst cases. A flowchart shows the complete workflow of this study. The results are shown graphically, and the time complexity is presented in tabular form. We compare the efficiency of the bubble sort and insertion sort algorithms on the Rust and Python platforms. Rust is more efficient than Python for both bubble and insertion sort; moreover, insertion sort is observed to be more efficient than bubble sort.
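
For reference, the bubble sort side of the comparison can be sketched in C (a generic rendering with the usual early-exit optimization; the study's Rust and Python sources are not reproduced):

#include <stdbool.h>
#include <stddef.h>

/* Bubble sort: repeatedly swap adjacent out-of-order pairs; stop early
 * once a full pass makes no swaps (hence O(n) best case on sorted input). */
void bubble_sort(int a[], size_t n) {
    bool swapped = true;
    while (swapped && n > 1) {
        swapped = false;
        for (size_t i = 1; i < n; i++) {
            if (a[i - 1] > a[i]) {
                int t = a[i - 1]; a[i - 1] = a[i]; a[i] = t;
                swapped = true;
            }
        }
        n--;   /* the largest element of this pass is now in place */
    }
}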


2008 ◽  
Vol 19 (09) ◽  
pp. 1443-1458 ◽  
Author(s):  
DOMINIK STRZAŁKA ◽  
FRANCISZEK GRABOWSKI

Tsallis entropy, introduced in 1988, is considered to open new possibilities for constructing a generalized thermodynamical basis for statistical physics, extending classical Boltzmann–Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors would like to present a new view of the problem of analyzing the computational complexity of algorithms, taking as an example the possible thermodynamical basis of the sorting process and its dynamical behavior. A classical approach to analyzing the amount of resources needed for algorithmic computation is based on the assumption that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered, in order to minimize the dependency on specific instances. Meanwhile, this article shows that the process can be governed by long-range dependencies with a thermodynamical basis expressed by the specific shapes of probability distributions. The classical approach cannot describe all properties of processes (especially the dynamical behavior of algorithms) that can appear during algorithmic processing on a computer, even if one takes average-case analysis of computational complexity into account. The importance of this problem is still neglected, especially if one realizes two important things. First, computer systems nowadays also work in an interactive mode, and a proper thermodynamical basis is needed to better understand their possible behavior. Second, computers are, from a mathematical point of view, Turing machines, but in reality they have physical implementations that need energy for processing, so the problem of entropy production appears. That is why a thermodynamical analysis of the possible behavior of the simple insertion sort algorithm is given here.
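
For reference, the Tsallis entropy referred to above is defined as follows (the standard definition; k is a positive constant, the p_i are probabilities, and q is the entropic index):

% Tsallis entropy; recovers the Boltzmann–Gibbs entropy
% S = -k \sum_i p_i \ln p_i in the limit q -> 1.
S_q = k \, \frac{1 - \sum_i p_i^{\,q}}{q - 1}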

