Efficient Data Sorting: Introducing a New Method

Author(s):  
Raghavendra Devidas ◽  
Aishwarya Kulkarni

The efficiency of a data sorting algorithm is a key factor in the speed of data processing and searching. The best known efficiency of comparison-based sorting algorithms has been O(N log N) for N terms. All of the well-known sorting algorithms use various techniques to sort data, and the basis for most of these is comparing the data terms with each other. In this manuscript, we introduce a new approach for sorting data. This method is postulated to have the highest efficiency yet achieved by a sorting algorithm. We achieve this by sorting data without comparing the data terms, or by obtaining the results of data comparison without comparing the terms explicitly.
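The abstract does not reproduce the method itself, so the sketch below shows one established way to sort without comparing terms to each other (counting sort); it illustrates the general idea of non-comparison sorting, not necessarily the authors' technique.

```python
def counting_sort(data, max_value):
    """Sort non-negative integers without comparing elements to each other.

    A minimal sketch of non-comparison sorting (counting sort), shown only
    to illustrate the idea; the authors' method is not reproduced here.
    """
    counts = [0] * (max_value + 1)
    for x in data:                    # tally occurrences instead of comparing pairs
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([4, 1, 3, 1, 0], max_value=4))  # [0, 1, 1, 3, 4]
```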

Data sorting has many advantages and applications in software and web development. Search engines use sorting techniques to sort results before they are presented to the user. The words in a dictionary are kept in sorted order so that words can be found easily. Many sorting algorithms are used across domains to perform some operation and obtain the desired output, but some sorting algorithms take a long time to sort the data, and this cost can make them impractical for the operation. Every sorting algorithm uses a different technique to sort the given data. Stooge sort is a sorting algorithm that sorts the data recursively and takes comparatively more time than many other sorting algorithms. Stooge sort works recursively to sort the data elements, whereas the Optimized Stooge sort does not use a recursive process. In this paper, we propose Optimized Stooge sort to reduce the time complexity of Stooge sort. The running time of Optimized Stooge sort is greatly reduced compared to the Stooge sort algorithm. The existing research focuses on reducing the running time of Stooge sort. Our results show that Optimized Stooge sort is faster than the Stooge sort algorithm.
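For reference, the classical recursive Stooge sort that the paper optimizes is sketched below; the optimized, non-recursive variant itself is not reproduced in the abstract.

```python
def stooge_sort(a, lo=0, hi=None):
    """Classical recursive Stooge sort, roughly O(n^2.71) running time.

    Shown for reference only; the paper's Optimized Stooge sort removes
    the recursion, but its exact form is not given in the abstract.
    """
    if hi is None:
        hi = len(a) - 1
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    if hi - lo + 1 > 2:
        third = (hi - lo + 1) // 3
        stooge_sort(a, lo, hi - third)   # sort the initial two-thirds
        stooge_sort(a, lo + third, hi)   # sort the final two-thirds
        stooge_sort(a, lo, hi - third)   # sort the initial two-thirds again
    return a

print(stooge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```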


2020 ◽  
Vol 12 (2) ◽  
pp. 96-103
Author(s):  
Desi Anggreani ◽  
Aji Prasetya Wibawa ◽  
Purnawansyah Purnawansyah ◽  
Herman Herman

Sorting algorithms are among the most heavily used algorithms. Many sorting algorithms have emerged; in this study the researchers examined three of them: Insertion Sort, Selection Sort, and Merge Sort. The study compares execution time and memory usage while varying the amount of input data for each algorithm. The data used in this study are ukhuwah NET network bandwidth usage records, collected in the Faculty of Computer Science, in the form of double data types. After implementation and analysis, in terms of execution time the Merge Sort algorithm sorted the data fastest, with an average execution time of 108.593777 ms on 3000 data items. On the same amount of data, the slowest was the Selection Sort algorithm, with an execution time of 144.498144 ms. In terms of memory usage at 3000 data items, the Merge Sort algorithm had the highest memory usage of the three algorithms at 21.444 MB, while the other two algorithms used 20.837 MB and 20.325 MB, respectively.
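A minimal sketch of how such a comparison can be instrumented in Python (time.perf_counter for elapsed time, tracemalloc for peak memory); the harness and random stand-in data below are illustrative, not the study's actual code or bandwidth data set.

```python
import random
import time
import tracemalloc

def benchmark(sort_fn, data):
    """Measure elapsed time (ms) and peak memory (MB) of one sorting run.

    A generic instrumentation sketch; the study's own measurement code
    and data set are not reproduced here.
    """
    sample = list(data)                      # copy so each run sorts fresh input
    tracemalloc.start()
    start = time.perf_counter()
    sort_fn(sample)
    elapsed_ms = (time.perf_counter() - start) * 1000
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed_ms, peak / 1e6

data = [random.random() for _ in range(3000)]  # stand-in for the bandwidth records
print(benchmark(sorted, data))                 # swap in insertion/selection/merge sort
```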


2021 ◽  
Author(s):  
Shahriar Shirvani Moghaddam ◽  
Kiaksar Shirvani Moghaddam

Designing an efficient data sorting algorithm that requires less time and space is essential for large data sets in wireless networks, the Internet of things, data mining systems, computer science, and communications engineering. This paper proposes a low-complexity data sorting algorithm that distinguishes sorted/similar data, makes independent subarrays, and sorts the subarrays' data using one of the popular sorting algorithms. It is proved that the mean-based pivot is as efficient as the median-based pivot for making equal-length subarrays. The numerical analyses indicate slight improvements in the elapsed time and the number of swaps of the proposed serial Merge-based and Quick-based algorithms compared with the conventional ones for low/high-variance integer/non-integer uniform/Gaussian data at different data lengths. Moreover, using the gradual data extraction feature, the sorted parts can be extracted sequentially before the sorting process ends. Making independent subarrays also provides a general framework for parallel realization of sorting algorithms with separate parts. Simulation results indicate the effectiveness of the proposed parallel Merge-based and Quick-based algorithms relative to the conventional serial and multi-core parallel algorithms. Finally, the complexity of the proposed algorithm in both serial and parallel realizations is analyzed, showing an impressive improvement.
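A minimal sketch of the mean-based pivoting idea: split the data around the arithmetic mean into independent subarrays, sort each with a conventional algorithm, and concatenate. The paper's detection of sorted/similar runs and its gradual data extraction feature are not reproduced here.

```python
from statistics import fmean

def mean_pivot_sort(data):
    """Split around the mean into independent subarrays, then sort each.

    A sketch of the mean-based pivot idea only; the paper's handling of
    sorted/similar runs and gradual data extraction are omitted.
    """
    if len(data) <= 1:
        return list(data)
    pivot = fmean(data)
    left = [x for x in data if x <= pivot]
    right = [x for x in data if x > pivot]
    if not right:                # all elements equal: nothing left to split
        return sorted(left)
    # The subarrays are independent, so they can be sorted in parallel;
    # concatenation is globally ordered because left <= pivot < right.
    return sorted(left) + sorted(right)

print(mean_pivot_sort([7, 1, 5, 5, 2, 9]))  # [1, 2, 5, 5, 7, 9]
```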


1992 ◽  
Vol 26 (9-11) ◽  
pp. 2345-2348 ◽  
Author(s):  
C. N. Haas

A new method for the quantitative analysis of multiple toxicity data is described and illustrated using a data set on metal exposure to copepods. Positive interactions are observed for Ni-Pb and Pb-Cr, with weak negative interactions observed for Ni-Cr.


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. This method combines two techniques, the transformed-transformer and alpha power transformation approaches, allowing for tremendous flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed shapes, are observed for the related failure rate function, making it more tractable for many modeling applications. Some significant mathematical features of the suggested distribution are determined. Estimates of the unknown parameters of the proposed distribution are obtained using the maximum likelihood method. Furthermore, some numerical studies were carried out in order to evaluate the estimation performance. Three practical datasets are considered to analyze the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
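Since the alpha power Weibull-exponential density is not reproduced in the abstract, the sketch below illustrates the maximum-likelihood step on a plain two-parameter Weibull as a stand-in; the negative log-likelihood plus numerical-optimizer pattern carries over once the actual density is substituted.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, x):
    """Negative log-likelihood of a two-parameter Weibull(k, lam).

    Stand-in target: the paper's alpha power Weibull-exponential density
    is not reproduced here, but the MLE pattern is the same.
    """
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = x / lam
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

rng = np.random.default_rng(seed=1)
sample = 2.0 * rng.weibull(1.5, size=500)    # true shape 1.5, true scale 2.0
fit = minimize(weibull_nll, x0=[1.0, 1.0], args=(sample,), method="Nelder-Mead")
print(fit.x)                                 # estimates near (1.5, 2.0)
```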


2021 ◽  
pp. 000276422110216
Author(s):  
Kazimierz M. Slomczynski ◽  
Irina Tomescu-Dubrow ◽  
Ilona Wysmulek

This article proposes a new approach to analyzing protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world's nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets not a priori designed as comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that variation in the source data, especially deviations from standards of survey documentation, data processing, and computer files (proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use), is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of measures of survey quality on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.


1992 ◽  
Vol 101 (1) ◽  
pp. 51-60 ◽  
Author(s):  
Eiji Yanagisawa ◽  
Ken Yanagisawa ◽  
Jay B. Horowitz ◽  
Lawrence J. Mambrino

A new approach to microlaryngeal surgery using a specially designed video microlaryngoscope with a rigid endoscopic telescope and an attached video camera was introduced by Kantor et al in 1990. The ability to document on video and perform surgery of the larynx by viewing a high-resolution television image was demonstrated. This method was recommended over the standard microscopic technique for increased visibility with greater depth of field, unimpeded instrument access, instant documentation, and superior teaching value. The authors tried this new method and the standard microscopic technique at the same sitting on a series of patients. This paper compares these two techniques and discusses their advantages and disadvantages. Although the new method has many advantages, the standard microscopic technique remains a valuable method in laryngeal surgery.


2004 ◽  
Vol 61 (7) ◽  
pp. 1269-1284 ◽  
Author(s):  
R.I.C. Chris Francis ◽  
Steven E Campana

In 1985, Boehlert (Fish. Bull. 83: 103–117) suggested that fish age could be estimated from otolith measurements. Since that time, a number of inferential techniques have been proposed and tested in a range of species. A review of these techniques shows that all are subject to at least one of four types of bias. In addition, they all focus on assigning ages to individual fish, whereas the estimation of population parameters (particularly proportions at age) is usually the goal. We propose a new flexible method of inference based on mixture analysis, which avoids these biases and makes better use of the data. We argue that the most appropriate technique for evaluating the performance of these methods is a cost–benefit analysis that compares the cost of the estimated ages with that of the traditional annulus count method. A simulation experiment is used to illustrate both the new method and the cost–benefit analysis.
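As a rough illustration of the mixture idea (not the authors' exact model), a Gaussian mixture fitted to otolith measurements yields age-class proportions directly as the mixture weights, without assigning an age to each individual fish; the simulated age classes and their parameters below are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical otolith measurements drawn from three simulated age classes;
# the means, spreads, and proportions below are illustrative only.
rng = np.random.default_rng(seed=2)
otolith = np.concatenate([
    rng.normal(2.0, 0.3, size=500),   # age 1
    rng.normal(3.1, 0.4, size=300),   # age 2
    rng.normal(4.0, 0.5, size=200),   # age 3
]).reshape(-1, 1)

# The mixture weights estimate proportions at age for the whole population,
# which is the quantity of interest, rather than per-fish age assignments.
gmm = GaussianMixture(n_components=3, random_state=0).fit(otolith)
print(np.sort(gmm.weights_))  # estimated proportions, roughly 0.2 / 0.3 / 0.5
```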

