reduce execution time
Recently Published Documents

TOTAL DOCUMENTS: 25 (FIVE YEARS: 10)
H-INDEX: 3 (FIVE YEARS: 0)

Water ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 2536
Author(s):  
Jinbo Lin ◽  
Hongfei Mao ◽  
Weiye Ding ◽  
Baozhu Jia ◽  
Xinxiang Pan ◽  
...  

Hydraulic jumps are rapid transitions from supercritical to subcritical flow and generally occur in rivers or spillways. Owing to their high energy dissipation rate, hydraulic jumps are widely applied as energy dissipators in hydraulic projects. To achieve efficient and accurate simulations of 2D hydraulic jumps in open channels, a parallel Weakly Compressible Smoothed Particle Hydrodynamics (WCSPH) model with a Shepard density filter was established in this study. The model was parallelized with OpenMP to reduce execution time. To reduce execution time further, a suitable and efficient scheduling strategy was selected for the parallel numerical model by comparing parallel speed-ups under different OpenMP scheduling strategies. Two test cases, uniform flow in open channels and hydraulic jumps with different inflow conditions, were then investigated to validate the model. The comparison of water depth and velocity fields between the numerical results and the analytical solution generally showed good agreement, with only a minor discrepancy in conjugate water depths. At a low inflow Froude number, the numerical results showed free-surface undulation with decreasing amplitude, which is more consistent with physical reality. At a high inflow Froude number, the Shepard filter was able to smooth the pressure fields of the hydraulic jumps. Moreover, analysis of the model's performance for different particle numbers showed that the parallel speed-up generally reached the theoretical maximum acceleration.
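
As a rough illustration of the scheduling comparison described above (this is not the authors' code, and Python's multiprocessing is only an analogy for OpenMP's schedule clause), the sketch below times an unevenly loaded particle loop under different chunk-assignment policies:

```python
# Illustrative sketch, not the authors' code: comparing work-distribution
# strategies in Python's multiprocessing as an analogy for OpenMP scheduling.
# `particle_work` is a hypothetical stand-in for an SPH interaction kernel
# with uneven per-particle cost.
import time
from multiprocessing import Pool

def particle_work(i):
    # Uneven workload: some particles have many more neighbours than others.
    acc = 0.0
    for k in range(1, (i % 1000) + 1):
        acc += 1.0 / k
    return acc

def run(chunksize, n=200_000, workers=4):
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        # chunksize=1 behaves like dynamic scheduling (tasks handed out as
        # workers free up); large chunks behave like static scheduling.
        pool.map(particle_work, range(n), chunksize=chunksize)
    return time.perf_counter() - t0

if __name__ == "__main__":
    for cs in (1, 64, 4096):
        print(f"chunksize={cs:5d}: {run(cs):.2f} s")
```

In OpenMP itself the equivalent choice is made with `#pragma omp parallel for schedule(static|dynamic|guided, chunk)`.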


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 67
Author(s):  
Jin Nakabe ◽  
Teruhiro Mizumoto ◽  
Hirohiko Suwa ◽  
Keiichi Yasumoto

As the number of users who cook their own food increases, there is growing demand for an optimal cooking procedure for multiple dishes; however, the optimal procedure varies from user to user due to differences in each user's cooking skill and environment. In this paper, we propose a system that presents optimal cooking procedures enabling parallel cooking of multiple recipes. We formulate the problem of deciding optimal cooking procedures as a task scheduling problem by creating a task graph for each recipe. To reduce execution time, we propose two extensions to the preprocessing and bounding operation of PDF/IHS, a sequential optimization algorithm for the task scheduling problem, each taking the characteristics of cooking into account. We confirmed that the proposed algorithm can reduce execution time by up to 44% compared to the base PDF/IHS, and that it limits the growth in execution time to about 900 times even when the number of required searches increases 10,000-fold. In addition, an experiment with three recipes and 10 participants each confirmed that by following the optimal cooking procedure for a certain menu, actual cooking time was reduced by up to 13 min (14.8%) compared to when users cooked freely.
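
To make the task-graph formulation concrete, here is a minimal sketch (the recipe steps, durations, and greedy priority are hypothetical, and this is not the PDF/IHS algorithm itself) of scheduling recipe steps where hands-free steps such as simmering can overlap with attended work:

```python
# Minimal sketch with hypothetical recipe data; this greedy pass is not
# the PDF/IHS algorithm, it only illustrates the task-graph model.
from dataclasses import dataclass, field

@dataclass
class Task:
    minutes: int
    hands_free: bool = False                 # True: step needs no attention
    deps: list = field(default_factory=list)

tasks = {
    "chop":  Task(5),
    "boil":  Task(12, hands_free=True, deps=["chop"]),
    "fry":   Task(8, deps=["chop"]),
    "plate": Task(2, deps=["boil", "fry"]),
}

def schedule(tasks):
    finish, cook_free, pending = {}, 0.0, set(tasks)
    while pending:
        ready = [n for n in pending
                 if all(d in finish for d in tasks[n].deps)]
        # Heuristic priority: start hands-free steps first so they can
        # run in the background while the cook does attended work.
        name = min(ready, key=lambda n: (not tasks[n].hands_free,
                                         tasks[n].minutes))
        t = tasks[name]
        start = max([cook_free] + [finish[d] for d in t.deps])
        finish[name] = start + t.minutes
        cook_free = start if t.hands_free else finish[name]
        pending.remove(name)
    return finish

plan = schedule(tasks)
print(plan, "makespan:", max(plan.values()))  # 19 min vs 27 min sequential
```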


Author(s):  
Dário Ribeiros ◽  
Paula Ventura ◽  
Silvia Fernandes

The chapter exemplifies process innovation challenges and trends through a case study of stock management at an important gas distribution enterprise in Portugal, involving the analysis and improvement of this process. Stock management is critical to delivering value to other processes such as sales, which has led to a focus on improving the inventory process. As it involves sub-value chains, this work highlights a comparison between the current process and its proposed redesign. The DMAIC method (define, measure, analyze, improve, control) is systematically applied, and new data emerge from tests made in the company's ERP (enterprise resource planning) system. The improved process tends to greatly reduce execution time, as well as the number of actors and the amount of information circulating outside the system. Other aspects are studied in line with new trends in ERP platforms driven by cloud computing.


2020 ◽  
Vol 20 (1) ◽  
pp. e02
Author(s):  
Erica Soledad Montes de Oca ◽  
Remo Suppi ◽  
Laura Cristina De Gisuti ◽  
Marcelo Naiouf

The increase in temperature caused by climate change has resulted in the rapid dissemination of infectious diseases. Given the alert over the current situation, the World Health Organization (WHO) has declared a state of health emergency, highlighting the severity of the situation in some countries. For this reason, developing knowledge and tools that can help control and eradicate the vectors propagating these diseases is of the utmost importance. High-performance modeling and simulation can be used to produce knowledge and strategies for predicting infections, guiding actions, and training health and civil protection agents. The model developed in this research work is aimed at assisting the decision-making process for disease prevention and control, as well as evaluating the reproduction and predicting the evolution of the Aedes aegypti mosquito, the transmitting vector of the dengue, Zika, and chikungunya diseases. Since a large number of simulation runs is required to achieve results with statistical variability, a GPU has been used; this platform has enough computational power to reduce execution time while maintaining lower energy consumption. Different scenarios and experiments are proposed to corroborate the benefits of the proposed architecture.
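
As a toy illustration of why this workload suits a GPU (not the authors' model; the population dynamics and parameters below are invented for the example), many stochastic replicas can advance in lockstep as one array, which is exactly the shape that maps efficiently onto GPU hardware:

```python
# Illustrative sketch, not the authors' model: many stochastic replicas of
# a toy population model advance in lockstep as one array. Swapping numpy
# for a GPU array library (e.g. CuPy) keeps this code shape on the GPU.
import numpy as np

rng = np.random.default_rng(0)
replicas, days = 10_000, 120
r, K = 0.15, 1_000.0           # hypothetical growth rate / carrying capacity

pop = np.full(replicas, 50.0)  # initial population per replica
for _ in range(days):
    growth = r * pop * (1.0 - pop / K)
    noise = rng.normal(0.0, 5.0, size=replicas)  # environmental variability
    pop = np.clip(pop + growth + noise, 0.0, None)

# Statistical variability across runs comes directly from the replicas.
print(f"mean={pop.mean():.1f}  std={pop.std():.1f}")
```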


2020 ◽  
Vol 21 (2) ◽  
Author(s):  
Borislava Vrigazova ◽  
Ivan Ivanov

Cross-validation is often used to split input data into training and test sets when fitting support vector machines (SVM). The two most commonly used versions are tenfold and leave-one-out cross-validation; another common resampling method is the random train/test split. The advantage of these methods is that they avoid overfitting and perform model selection. However, they can increase the computational time for fitting SVM as the size of the dataset grows. In this research, we propose an alternative for fitting SVM, which we call the tenfold bootstrap for support vector machines. This resampling procedure can significantly reduce execution time even for a large number of observations, while preserving the model's accuracy. With this finding, we propose a solution to the problem of slow execution time when fitting support vector machines on big datasets.
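
A minimal sketch of the idea, assuming scikit-learn (the authors' exact tenfold bootstrap procedure may differ): each of ten rounds fits the SVM on a bootstrap sample and evaluates it on the out-of-bag observations.

```python
# Sketch of a tenfold bootstrap for SVM, assuming scikit-learn; the
# authors' exact procedure may differ. Each round trains on a bootstrap
# sample and tests on the out-of-bag observations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

scores = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)  # out-of-bag test set
    model = SVC().fit(X[idx], y[idx])
    scores.append(model.score(X[oob], y[oob]))

print(f"tenfold-bootstrap accuracy: {np.mean(scores):.3f}")
```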


2020 ◽  
Vol 8 (6) ◽  
pp. 2227-2235

In this article, we provide a novel model to address the issue of webpage access prediction. In particular, our main approach aims to reduce execution time by reducing the sequence space. The solution combines calculation of PageRank values of sequences in sequence databases with analysis of sequences from the resulting shortened sequence databases. To evaluate the solution, we chose K-fold validation with K = 10, randomizing the dataset 10 times; the system then calculated the average PageRank values of the sequences. Next, with acceptable accuracy (the datasets were reduced in size by up to 30% through the PageRank calculation), we performed next-page access prediction by analysing 1000 sequences. Experimental results on the real FIFA dataset show that our proposed approach is much better than previous approaches in terms of prediction execution time.
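
As an illustrative sketch of the pruning idea (the page names, click sequences, and cutoff below are hypothetical, not the paper's data or exact method), pages can be scored by PageRank over the transition graph, and sequences containing low-ranked pages dropped before prediction:

```python
# Illustrative sketch of the pruning idea; page names, sequences, and the
# cutoff are hypothetical, and this is not the paper's exact method.
import networkx as nx

sequences = [["home", "news", "sports"],
             ["home", "sports", "scores"],
             ["home", "shop"],
             ["news", "sports", "scores"]]

G = nx.DiGraph()
for seq in sequences:
    G.add_edges_from(zip(seq, seq[1:]))           # page-to-page transitions

rank = nx.pagerank(G)
keep = {p for p, r in rank.items() if r >= 0.12}  # hypothetical cutoff

# Keep only sequences made entirely of sufficiently ranked pages.
pruned = [s for s in sequences if all(p in keep for p in s)]
print(f"kept {len(pruned)}/{len(sequences)} sequences:", pruned)
```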


Author(s):  
Paul Tawo Bukie ◽  
Chinedu Leonard Udeze ◽  
Iwara Ofem Obono ◽  
Edim Bassey Edim

With the existence of several programming languages such as C/C++, Java, C#, LISP, Prolog, Python, Simula, F#, Go, Haskell, Scala, Ruby, Dart, Swift, and Groovy, and diverse paradigms such as structured, object-oriented, list, aspect-oriented, service-oriented, web, mobile, and logic programming, there is a need to perform an exhaustive comparative analysis of the various compilers and environments before choosing an implementation technology in software engineering. Compiler optimization helps to reduce execution time by making use of high-speed processor registers, thereby eliminating redundant computation. This paper reports a series of performance analyses of some popular programming languages, including Java, C++, Python, and PHP. Programs involving recursive and iterative functions, such as the factorial of large numbers and binary search over large arrays, were run on the various platforms, with execution times recorded in milliseconds and presented in a chart. This can aid in selecting the appropriate language for a given application domain.
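
In the spirit of those experiments, a small timing harness might look like the following (the workloads and sizes are illustrative, and only the Python side is shown); times are reported in milliseconds, as in the paper:

```python
# A small timing harness in the spirit of the experiments described above;
# workloads and sizes are illustrative. Times are printed in milliseconds.
import sys
import time
from bisect import bisect_left

sys.setrecursionlimit(5_000)

def fact_rec(n):
    return 1 if n <= 1 else n * fact_rec(n - 1)

def fact_iter(n):
    acc = 1
    for k in range(2, n + 1):
        acc *= k
    return acc

def timed(label, fn, *args, reps=100):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(*args)
    ms = (time.perf_counter() - t0) * 1000 / reps
    print(f"{label:12s} {ms:10.4f} ms")

data = list(range(1_000_000))
timed("fact_rec", fact_rec, 2000)      # recursive factorial of a large n
timed("fact_iter", fact_iter, 2000)    # iterative factorial of the same n
timed("bin_search", bisect_left, data, 987_654)  # binary search, large array
```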


Author(s):  
Wenjun Tang ◽  
Rong Chen ◽  
Shikai Guo

In recent years, crowdsourcing has gradually become a promising way of enlisting netizens to accomplish small tasks, or even complex jobs, through crowdsourcing workflows that decompose them into small tasks published sequentially on crowdsourcing platforms. One of the significant challenges in this process is determining the parameters for task publishing. Some existing techniques apply constraint solving to select optimal task parameters so that the total cost of completing all tasks is minimized. However, experimental results show that computational complexity makes these tools unsuitable for large-scale problems because of their excessive execution time. Taking into account the real-time requirements of crowdsourcing, this study uses a heuristic algorithm with four heuristic strategies to solve the problem and reduce execution time. The experimental results show that the proposed heuristic strategies produce good-quality approximate solutions in an acceptable timeframe.
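
As a hedged sketch of one possible greedy strategy (not necessarily any of the paper's four; all tasks, rewards, and time estimates are hypothetical), each task could be assigned the cheapest parameter setting whose estimated completion time still fits the remaining workflow deadline:

```python
# Hedged sketch of one possible greedy strategy, not necessarily any of
# the paper's four; all tasks, rewards, and time estimates are hypothetical.
candidates = {   # task -> [(reward in $, estimated completion hours), ...]
    "label":     [(0.05, 10.0), (0.10, 4.0), (0.20, 1.5)],
    "verify":    [(0.02, 8.0), (0.08, 3.0), (0.15, 1.0)],
    "summarize": [(0.50, 6.0), (1.00, 2.0)],
}
DEADLINE_H = 12.0                # time budget for the whole workflow

def greedy_plan(cands, deadline):
    plan, hours_left = {}, deadline
    # Handle tasks with the fewest parameter options first.
    for task in sorted(cands, key=lambda t: len(cands[t])):
        feasible = [c for c in cands[task] if c[1] <= hours_left]
        if not feasible:
            return None          # heuristic failed; a real solver backtracks
        reward, hours = min(feasible)      # cheapest feasible setting
        plan[task] = (reward, hours)
        hours_left -= hours
    return plan

print(greedy_plan(candidates, DEADLINE_H))
```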


Cloud computing, in very simple terms, is where a company uses someone else's computing services (usually over the internet) instead of running that software on its own computers. Today, cloud computing plays an important role in service-oriented technologies. Its main purpose is to allow consumers and businesses to use applications without installation, to access their personal files from any computer with internet access, and to compute and save resources easily and efficiently. Recent approaches focus on data expression processing and search. To improve cloud performance, it is necessary to optimize processing time. Our research provides a comprehensive overview of the different models and methods used to optimize queries in order to reduce execution time and improve resource utilization. We surveyed query optimization research for the classic SQL and MapReduce platforms.
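
As a toy example of one classic optimization such surveys cover (the data and timings here are hypothetical, and pandas stands in for a real query engine), pushing a filter below a join reduces the data flowing through the expensive operator and thus the execution time:

```python
# Toy example with hypothetical data; pandas stands in for a query engine.
# Pushing the filter below the join shrinks the join's input.
import time
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
orders = pd.DataFrame({"cust": rng.integers(0, 100_000, 2_000_000),
                       "amount": rng.random(2_000_000)})
custs = pd.DataFrame({"cust": np.arange(100_000),
                      "region": rng.integers(0, 50, 100_000)})

t0 = time.perf_counter()
naive = orders.merge(custs, on="cust")       # join everything ...
naive = naive[naive.region == 7]             # ... then filter
t1 = time.perf_counter()
pushed = orders.merge(custs[custs.region == 7], on="cust")  # filter first
t2 = time.perf_counter()

assert len(naive) == len(pushed)
print(f"filter-after-join {t1 - t0:.2f}s  filter-pushed {t2 - t1:.2f}s")
```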

