TOP HILL METHOD - A NEW SORTING APPROACH TO REDUCE EXECUTION TIME

2018 ◽  
Vol 12 (4) ◽  
pp. 28
Author(s):  
Debabrat Bharali ◽  
Sandeep Kumar Sharma


2015 ◽  
Vol 734 ◽  
pp. 472-475
Author(s):  
Wei Jin ◽  
Xiao Rong Zhao

Clustering analysis plays an important role in scientific research and commercial applications. The K-means algorithm is a widely used partitioning method in clustering. In this method, the number of clusters is predefined, and the technique is highly dependent on the initial identification of elements that represent the clusters well. As datasets grow rapidly in scale, it becomes difficult to apply K-means to massive data. To address this problem, a refined initial points algorithm is provided: by refining the initial conditions, it can reduce execution time and improve solutions for large data. The experiments demonstrate that sample-based K-means is more stable and more accurate.
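The refinement idea lends itself to a compact illustration. Below is a minimal sketch (our own, not the paper's code) of sample-based initialization in the spirit of Bradley-and-Fayyad-style refinement: K-means is run on several small random subsamples, the resulting centers are pooled and clustered once more, and the outcome seeds K-means on the full dataset. All function names and parameters here are illustrative assumptions.

```python
# A minimal sketch of sample-based refinement of K-means initial centers.
# Names and parameters are illustrative, not from the cited paper.
import numpy as np

def kmeans(X, centers, n_iter=20):
    """Plain Lloyd iterations from the given initial centers."""
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its points (keep old center if empty).
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return centers, labels

def refined_init(X, k, n_samples=10, sample_size=200, rng=None):
    """Estimate good initial centers from small random subsamples
    instead of clustering the full dataset up front."""
    rng = rng or np.random.default_rng(0)
    candidate_centers = []
    for _ in range(n_samples):
        sub = X[rng.choice(len(X), size=min(sample_size, len(X)), replace=False)]
        init = sub[rng.choice(len(sub), size=k, replace=False)]
        c, _ = kmeans(sub, init.copy())
        candidate_centers.append(c)
    # Cluster the pooled candidate centers once more; the result is a
    # stable starting point for K-means on the full data.
    pool = np.vstack(candidate_centers)
    refined, _ = kmeans(pool, pool[:k].copy())
    return refined

# Usage: cluster a large dataset starting from the refined centers.
X = np.random.default_rng(1).normal(size=(100_000, 2))
centers, labels = kmeans(X, refined_init(X, k=5))
```

Because each refinement run touches only a few hundred points, its cost is negligible next to a full-data pass, which is where the execution-time savings for massive datasets would come from.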


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 67
Author(s):  
Jin Nakabe ◽  
Teruhiro Mizumoto ◽  
Hirohiko Suwa ◽  
Keiichi Yasumoto

As the number of users who cook their own food increases, there is increasing demand for an optimal cooking procedure for multiple dishes; however, the optimal procedure varies from user to user due to differences in each user's cooking skill and environment. In this paper, we propose a system for presenting optimal cooking procedures that enables parallel cooking of multiple recipes. We formulate the problem of deciding optimal cooking procedures as a task scheduling problem by creating a task graph for each recipe. To reduce execution time, we propose two extensions to the preprocessing and bounding operations of PDF/IHS, a sequential optimization algorithm for the task scheduling problem, each taking the characteristics of cooking into account. We confirmed that the proposed algorithm reduces execution time by up to 44% compared to the base PDF/IHS, and that its execution time grows only about 900-fold even when the number of required searches grows 10,000-fold. In addition, an experiment with three recipes and 10 participants each confirmed that by following the optimal cooking procedure for a given menu, actual cooking time was reduced by up to 13 min (14.8%) compared to when users cooked freely.
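The bounding operation is where the search time is saved. The sketch below (our own toy model, not PDF/IHS itself) shows the flavor: cooking steps form a task DAG, hands-free steps such as simmering run in the background while the cook works on something else, and a depth-first branch-and-bound search prunes any ordering whose optimistic lower bound (time the cook is already committed to, plus all remaining hands-on minutes) cannot beat the best makespan found so far. Task names, durations, and the hands_free flag are invented for illustration.

```python
# Toy branch-and-bound scheduling of a cooking task graph.
# NOT the PDF/IHS algorithm; an illustration of the bounding idea only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    minutes: int
    hands_free: bool          # True if the step cooks unattended
    deps: tuple = ()          # names of prerequisite steps

TASKS = {t.name: t for t in [
    Task("chop",   5, False),
    Task("boil",  10, True,  ("chop",)),
    Task("fry",    8, False, ("chop",)),
    Task("plate",  2, False, ("boil", "fry")),
]}

def search(done, finish, cook_free, best):
    """done: {name: finish_time}; finish: current makespan;
    cook_free: when the cook becomes available; best: best makespan so far."""
    if len(done) == len(TASKS):
        return min(best, finish)
    # Lower bound: the cook still has to perform every remaining hands-on minute.
    remaining = sum(t.minutes for t in TASKS.values()
                    if t.name not in done and not t.hands_free)
    if cook_free + remaining >= best:
        return best                     # prune this branch
    for t in TASKS.values():
        if t.name in done or any(d not in done for d in t.deps):
            continue
        ready = max([done[d] for d in t.deps], default=0)
        # Hands-free steps start as soon as their prerequisites finish;
        # hands-on steps also wait for the cook.
        start = ready if t.hands_free else max(ready, cook_free)
        end = start + t.minutes
        done[t.name] = end
        new_cook = cook_free if t.hands_free else end
        best = search(done, max(finish, end), new_cook, best)
        del done[t.name]                # backtrack
    return best

print("optimal makespan:", search({}, 0, 0, float("inf")), "minutes")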


1996 ◽  
Vol 24 (1) ◽  
pp. 6-11
Author(s):  
Oh-Young Kwon ◽  
Gi-Ho Park ◽  
Tack-Don Han

2020 ◽  
Vol 8 (6) ◽  
pp. 2227-2235

In this article, we provide a novel model to address the issue of webpage access prediction. In particular, our main approach aims to reduce execution time by reducing the sequence space. The solution combines calculating PageRank values of sequences in sequence databases with analyzing sequences from these shortened databases. To evaluate the solution, we chose K-fold validation with K = 10, randomizing the dataset 10 times; the system then calculated the average PageRank values of the sequences. Next, with acceptable accuracy (with the dataset size reduced by up to 30% through PageRank calculation), we performed next-page access prediction by analyzing 1000 sequences. Experimental results on the real FIFA dataset show that our proposed approach substantially outperforms previous approaches in terms of prediction execution time.
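As a rough illustration of the pruning idea (our own simplification, not the paper's exact model), the sketch below scores pages with PageRank over the click-transition graph built from the sequence database, keeps the top 70% of sequences by average page score, mirroring the reported reduction of up to 30%, and predicts the next page from the shrunken database with simple successor counts.

```python
# PageRank-based shrinking of a click-sequence database, followed by a
# simple next-page prediction. The matching rule (last-page successor
# counts) is our own simplification, not the paper's prediction model.
from collections import defaultdict

def pagerank(edges, d=0.85, iters=50):
    """Power iteration on a dict {node: [successors]}."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    pr = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for u, vs in edges.items():
            for v in vs:
                nxt[v] += d * pr[u] / len(vs)
        pr = nxt
    return pr

def prune(sequences, keep_ratio=0.7):
    """Keep the top sequences by average PageRank of their pages."""
    edges = defaultdict(list)
    for seq in sequences:
        for u, v in zip(seq, seq[1:]):
            edges[u].append(v)
    pr = pagerank(edges)
    scored = sorted(sequences, key=lambda s: sum(pr[p] for p in s) / len(s),
                    reverse=True)
    return scored[:int(len(scored) * keep_ratio)]

def predict_next(sequences, current_page):
    """Most frequent successor of current_page in the pruned database."""
    counts = defaultdict(int)
    for seq in sequences:
        for u, v in zip(seq, seq[1:]):
            if u == current_page:
                counts[v] += 1
    return max(counts, key=counts.get) if counts else None

db = [["home", "news", "sports"], ["home", "sports"], ["news", "sports", "home"]]
print(predict_next(prune(db), "home"))
```

Shrinking the database before prediction is what cuts execution time: the successor scan runs over 30% fewer sequences while the highest-ranked (most informative) ones are retained.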


2012 ◽  
pp. 380-406
Author(s):  
Nurcin Celik ◽  
Esfandyar Mazhari ◽  
John Canby ◽  
Omid Kazemi ◽  
Parag Sarfare ◽  
...  

Simulating large-scale systems usually entails exhaustive computational powers and lengthy execution times. The goal of this research is to reduce execution time of large-scale simulations without sacrificing their accuracy by partitioning a monolithic model into multiple pieces automatically and executing them in a distributed computing environment. While this partitioning allows us to distribute required computational power to multiple computers, it creates a new challenge of synchronizing the partitioned models. In this article, a partitioning methodology based on a modified Prim’s algorithm is proposed to minimize the overall simulation execution time considering 1) internal computation in each of the partitioned models and 2) time synchronization between them. In addition, the authors seek to find the most advantageous number of partitioned models from the monolithic model by evaluating the tradeoff between reduced computations vs. increased time synchronization requirements. In this article, epoch- based synchronization is employed to synchronize logical times of the partitioned simulations, where an appropriate time interval is determined based on the off-line simulation analyses. A computational grid framework is employed for execution of the simulations partitioned by the proposed methodology. The experimental results reveal that the proposed approach reduces simulation execution time significantly while maintaining the accuracy as compared with the monolithic simulation execution approach.
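To make the Prim-style growing concrete, here is a hedged toy version (not the authors' modified Prim's algorithm, and it omits the epoch-based synchronization entirely): nodes carry internal computation cost, edges carry time-synchronization cost, and each of k partitions repeatedly absorbs the unassigned neighbor that is cheapest to synchronize with, with the lightest-loaded partition growing first to keep computation balanced. All names and weights are illustrative.

```python
# Toy Prim-style growing of k partitions over a weighted model graph.
# Our own illustration of the idea, not the paper's algorithm.
import heapq

def grow_partitions(comp, edges, k):
    """comp: {node: computation cost}; edges: {(u, v): sync cost}, undirected.
    Returns {node: partition id}."""
    adj = {n: {} for n in comp}
    for (u, v), w in edges.items():
        adj[u][v] = adj[v][u] = w
    # Seed partitions with the k most computation-heavy nodes.
    seeds = sorted(comp, key=comp.get, reverse=True)[:k]
    part = {s: i for i, s in enumerate(seeds)}
    load = {i: comp[s] for i, s in enumerate(seeds)}
    # One frontier heap per partition: (sync cost of connecting edge, node).
    heaps = {i: [(w, v) for v, w in adj[s].items()] for i, s in enumerate(seeds)}
    for h in heaps.values():
        heapq.heapify(h)
    while len(part) < len(comp):
        # Grow the lightest partition first: absorb its cheapest-to-sync neighbor.
        for i in sorted(load, key=load.get):
            while heaps[i] and heaps[i][0][1] in part:
                heapq.heappop(heaps[i])       # neighbor already assigned
            if heaps[i]:
                w, v = heapq.heappop(heaps[i])
                part[v] = i
                load[i] += comp[v]
                for nb, nw in adj[v].items():
                    if nb not in part:
                        heapq.heappush(heaps[i], (nw, nb))
                break
        else:
            # Disconnected leftover: attach any node to the lightest partition.
            v = next(n for n in comp if n not in part)
            i = min(load, key=load.get)
            part[v] = i
            load[i] += comp[v]
            for nb, nw in adj[v].items():
                if nb not in part:
                    heapq.heappush(heaps[i], (nw, nb))
    return part

comp = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 3, "F": 4}
edges = {("A","B"): 1, ("B","C"): 6, ("C","D"): 1,
         ("D","E"): 5, ("E","F"): 1, ("A","F"): 6}
print(grow_partitions(comp, edges, k=2))
```

Raising k trims each partition's computation load but adds cut edges that must be synchronized every epoch, which is exactly the tradeoff the authors evaluate when choosing the number of partitioned models.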

