On the Average Running Time of Odd–Even Merge Sort

1997 ◽  
Vol 22 (2) ◽  
pp. 329-346 ◽  
Author(s):  
Christine Rüb
Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1306
Author(s):  
Elsayed Badr ◽  
Sultan Almotairi ◽  
Abdallah El Ghamry

In this paper, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. Numerical results indicate that the proposed algorithm outperforms the secant, the trisection, the Newton–Raphson, the bisection and the regula falsi methods, as well as the hybrid of the last two methods proposed by Sabharwal, with regard to the number of iterations and the average running time.
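The abstract does not give the blend's exact update rule. As one illustrative reading, a hybrid can evaluate both trisection points and the regula falsi point each iteration and keep the smallest sub-interval that still brackets the root. A minimal Python sketch under that assumption (the function and parameter names are ours, not the paper's):

```python
def blended_root(f, a, b, tol=1e-10, max_iter=100):
    """Illustrative trisection / false-position blend (not the paper's
    exact algorithm): each iteration evaluates the two trisection points
    and the regula falsi point, then keeps the smallest sub-interval
    that still brackets the root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed by [a, b]"
    for _ in range(max_iter):
        t1, t2 = a + (b - a) / 3, b - (b - a) / 3   # trisection points
        xf = (a * fb - b * fa) / (fb - fa)          # regula falsi point
        pts = [a] + sorted({t1, t2, xf}) + [b]
        for lo, hi in zip(pts, pts[1:]):            # first sign change wins
            flo, fhi = f(lo), f(hi)
            if flo == 0.0:
                return lo
            if fhi == 0.0:
                return hi
            if flo * fhi < 0:
                a, b, fa, fb = lo, hi, flo, fhi
                break
        if b - a < tol:
            break
    return (a * fb - b * fa) / (fb - fa)
```

Because the trisection points cap every sub-interval at one third of the current bracket, the interval shrinks geometrically even when the false-position point stalls near one endpoint.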


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Yixin Zhou ◽  
Zhen Guo

With the advent of the era of big data (BD), people's living standards and lifestyles have changed greatly, and their expectations of the service industry's level of service keep rising. Customers' personalized needs and private customization have become hot issues in current research. Service enterprises are the core of the service industry, and optimizing the service-industry supply network and allocating its tasks reasonably are a focus of research at home and abroad. Against the background of BD, this paper takes the optimization of the service-industry supply network as its research object and studies the task-allocation optimization of that network based on an analysis of customers' personalized demand and user behavior. The paper optimizes the service-industry supply chain network with a genetic algorithm (GA), designing the genetic operators so as to avoid premature convergence and improve the algorithm's operating efficiency. The experimental results show that when m = 8 and n = 40, the average running time of the improved GA is 54.1 s; the algorithm's network optimization runs very fast and is also more stable.
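The abstract does not specify the chromosome encoding or the operators. As a hypothetical illustration of GA-based task allocation, the sketch below assigns n tasks to m providers so as to minimize the maximum provider load, using elitist selection, one-point crossover, and random-reset mutation (all names and the fitness function are our assumptions, not the paper's model):

```python
import random

def ga_assign(costs, m, pop=40, gens=120, pmut=0.1, seed=1):
    """Toy GA for task allocation: chromosome[i] = provider of task i,
    fitness = maximum provider load (to be minimized)."""
    rng = random.Random(seed)
    n = len(costs)

    def load(ch):
        buckets = [0.0] * m
        for task, prov in enumerate(ch):
            buckets[prov] += costs[task]
        return max(buckets)

    popn = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=load)                      # elitist selection
        elite = popn[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < pmut:              # mutation preserves diversity
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        popn = elite + children
    best = min(popn, key=load)
    return best, load(best)
```

Keeping the elite half unchanged each generation is one common way to guard against the premature convergence the abstract mentions.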


2019 ◽  
Vol 12 (1) ◽  
pp. 117-132
Author(s):  
Xiao Yu ◽  
Xiang Li ◽  
Huihui Deng ◽  
Yuchen Tang ◽  
Zhepeng Hou ◽  
...  

In bioinformatics, when mining osmotic-stress response genes, it is crucial to verify by computer the experimental data obtained in the course of complex experiments. This paper takes Arabidopsis thaliana as the experimental crop, designs a technology roadmap, and, drawing on functional programming skills, designs the algorithms. After a program predicts the transcription start site, the promoter sequence is extracted and simplified, and different alignment methods are classified. The promoter sequence is then compared with the cis-elements and processed further with a formula to obtain a probability value P, which helps experts and scholars judge, on the basis of probability, the correlation with osmotic stress. The chromosome sequences were obtained from GenBank database files, and the cis-element sequences associated with osmotic stress were collected from the TRANSFAC and TRRD databases. The authors used not only Arabidopsis promoters as experimental data but also a variety of eukaryotic promoters for comparison, including the rice OsNHX1 promoter and the cotton GhNHX1 promoter. Of the data obtained in the biological laboratory, 70% were verified in the course of running the program. When the P value is close to 0.8, the promoter is treated as containing osmotic-stress cis-elements and the gene's expression as induced by osmotic stress. For Arabidopsis, cotton and rice, the program's average running times were 51 s, 72 s and 114 s, respectively.
The same data were also processed with commonly used bioinformatics gene-mining algorithms, the MEME algorithm and the BioProspector algorithm; for all systems, average running time grows with data size. The MEME algorithm's running time increases from 60 s to 198 s and BioProspector's from 45 s to 150 s, while the model used here took 50 s, 75 s, 110 s and 135 s. Among the three algorithms, the model used here is therefore the most optimized: it maintains accuracy while offering higher speed and stability.
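The abstract does not reproduce the matching formula. As a hypothetical sketch of the comparison step only, the snippet below counts occurrences of each cis-element motif in a promoter sequence and reports the fraction of motifs present; the motif strings and the score are illustrative, not the paper's P value:

```python
def motif_scan(promoter, motifs):
    """Count exact occurrences of each cis-element motif in a promoter
    and return (per-motif hit counts, fraction of motifs present)."""
    promoter = promoter.upper()
    hits = {m: promoter.count(m.upper()) for m in motifs}
    score = sum(1 for c in hits.values() if c > 0) / len(motifs)
    return hits, score
```

A real pipeline would also allow degenerate IUPAC bases and overlapping matches; `str.count` finds non-overlapping exact matches only.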


2018 ◽  
Vol 8 (12) ◽  
pp. 2548 ◽  
Author(s):  
Dianlong You ◽  
Xindong Wu ◽  
Limin Shen ◽  
Yi He ◽  
Xu Yuan ◽  
...  

Online feature selection is a challenging topic in data mining. It aims to reduce the dimensionality of streaming features by removing irrelevant and redundant features in real time. Existing works, such as Alpha-investing and Online Streaming Feature Selection (OSFS), have been proposed to serve this purpose, but they have drawbacks, including low prediction accuracy and high running time when the streaming features exhibit characteristics such as low redundancy and high relevance. In this paper, we propose a novel online streaming feature selection algorithm, named ConInd, that uses a three-layer filtering strategy to process streaming features with the aim of overcoming such drawbacks. Through three-layer filtering, i.e., null-conditional independence, single-conditional independence, and multi-conditional independence, we can obtain an approximate Markov blanket with high accuracy and low running time. To validate its efficiency, we implemented the proposed algorithm and tested its performance on prevalent datasets, i.e., NIPS 2003 and the Causality Workbench. Through extensive experimental results, we demonstrated that ConInd offers significant improvements in prediction accuracy and running time compared to Alpha-investing and OSFS. ConInd offers 5.62% higher average prediction accuracy than Alpha-investing, with a 53.56% lower average running time than OSFS when the dataset is lowly redundant and highly relevant. In addition, the average number of features selected by ConInd is 242% less than that for Alpha-investing.
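The abstract names the three layers but not the tests behind them. As a toy sketch in that spirit (correlation-based tests, our own threshold, and the third layer omitted; none of this is ConInd's actual implementation), the first two layers can be illustrated like this:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def conind_stream(features, y, alpha=0.1):
    """Toy three-layer streaming filter in the spirit of ConInd.
    Layer 1 drops features uncorrelated with the target
    (null-conditional); layer 2 drops features whose partial correlation
    with the target, given any single already-selected feature, vanishes
    (single-conditional).  The multi-conditional third layer, which would
    condition on subsets of the selected set, is omitted for brevity."""
    selected = {}
    for name, x in features:
        if abs(pearson(x, y)) < alpha:           # layer 1: irrelevant
            continue
        redundant = False
        for z in selected.values():              # layer 2: redundant
            r_xy, r_xz, r_zy = pearson(x, y), pearson(x, z), pearson(z, y)
            denom = math.sqrt(max((1 - r_xz ** 2) * (1 - r_zy ** 2), 1e-12))
            if abs((r_xy - r_xz * r_zy) / denom) < alpha:
                redundant = True
                break
        if not redundant:
            selected[name] = x
    return list(selected)
```

The layered order matters for running time: the cheap null-conditional test discards most irrelevant features before any conditional test is attempted.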


2020 ◽  
Vol 12 (1) ◽  
pp. 52-58
Author(s):  
Fenina Adline Twince Tobing ◽  
James Ronald Tambunan

Abstract— Algorithm comparison is needed to determine the efficiency of an algorithm. This study compares the efficiency of two existing sorting strategies: brute force and divide and conquer. The brute-force algorithms tested are bubble sort and selection sort; the divide-and-conquer algorithms tested are quick sort and merge sort. The method used in this study is to run tests with 50 to 100,000 data items for each algorithm, implemented in the JavaScript programming language. The results show that quick sort, with its divide-and-conquer strategy, has good efficiency and fast running time, while bubble sort, with its brute-force strategy, has poor efficiency and long running time. Keywords – efficiency, algorithm, brute force, divide and conquer, bubble sort, selection sort, quick sort, merge sort
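The study's tests were written in JavaScript; as a self-contained illustration of the same comparison, here is a minimal Python benchmark of the two extremes it reports, bubble sort (brute force) versus quick sort (divide and conquer):

```python
import random
import time

def bubble_sort(a):
    """O(n^2) brute-force sort: repeatedly swap adjacent out-of-order pairs."""
    a = a[:]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    """Divide-and-conquer sort: partition around a pivot, recurse on parts."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

def time_it(sort_fn, data):
    """Wall-clock time of one sort call, in seconds."""
    start = time.perf_counter()
    sort_fn(data)
    return time.perf_counter() - start
```

On random inputs the O(n log n) quick sort pulls far ahead of the O(n²) bubble sort as n grows toward the study's 100,000-item upper bound.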


2022 ◽  
Vol 2146 (1) ◽  
pp. 012037
Author(s):  
Ying Zou

Abstract Aiming at the problems of high complexity and low accuracy in visual depth-map feature recognition, this study designs a recognition algorithm based on the principal component direction depth gradient histogram (PCA-HODG). To obtain a high-quality depth map, the parallax of the visual image must be computed. To obtain a quantized regional shape histogram, edge detection and gradient calculation are carried out on the depth map; its dimensionality is then reduced using the principal components, and reduced again with a sliding-window detection method to realize the feature extraction of the depth map. The results show that, compared with other algorithms, the PCA-HODG algorithm designed in this study improves average classification accuracy and significantly reduces average running time. This shows that the algorithm can cut running time through dimensionality reduction, extract depth-map features more accurately, and has good robustness.
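The exact binning and windowing of PCA-HODG are in the paper. As a rough sketch of the two ingredients its name combines, the snippet below builds a gradient-orientation histogram over a depth map, weighted by gradient magnitude, and reduces a set of such descriptors with PCA via the SVD; the bin count and normalization are our assumptions:

```python
import numpy as np

def hodg_feature(depth, bins=8):
    """Illustrative depth-gradient orientation histogram: orientations in
    [0, pi) are binned, each pixel weighted by its gradient magnitude."""
    gy, gx = np.gradient(depth.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)               # normalized histogram

def pca_reduce(X, k):
    """Project row-vector descriptors X onto their top-k principal
    components, computed from the SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

A sliding-window variant would compute one such histogram per window and concatenate them before the PCA step, which is where the second dimensionality reduction the abstract mentions would apply.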

