A Novel Approach to Publishing Tasks for Collaboratively Crowdsourcing Workflows

Author(s):  
Wenjun Tang ◽  
Rong Chen ◽  
Shikai Guo

In recent years, crowdsourcing has gradually become a promising way of enlisting netizens to accomplish small tasks, or even complex jobs through crowdsourcing workflows that decompose them into small tasks published sequentially on crowdsourcing platforms. One of the significant challenges in this process is determining the parameters for task publishing. Some existing techniques apply constraint solving to select optimal task parameters so that the total cost of completing all tasks is minimized. However, experimental results show that the computational complexity of these tools makes them unsuitable for large-scale problems because of their excessive execution time. Taking into account the real-time requirements of crowdsourcing, this study uses a heuristic algorithm with four heuristic strategies to solve the problem and thereby reduce execution time. The experimental results show that the proposed heuristic strategies produce good-quality approximate solutions in an acceptable timeframe.
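
The abstract does not spell out the four heuristic strategies; as a minimal sketch of the general idea, assuming a toy completion-probability model and hypothetical names (Task, completion_prob, choose_rewards), a greedy heuristic might pick, for each task, the cheapest reward level expected to get the task completed:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    difficulty: float  # hypothetical difficulty score in [0, 1]

def completion_prob(task: Task, reward: float) -> float:
    """Toy model: a higher reward and a lower difficulty raise the
    chance that crowd workers complete the task."""
    return min(1.0, reward / (1.0 + 5.0 * task.difficulty))

def choose_rewards(tasks, candidate_rewards, min_prob=0.9):
    """Greedy heuristic: for each task, pick the cheapest reward whose
    estimated completion probability clears min_prob."""
    plan = {}
    for task in tasks:
        feasible = [r for r in sorted(candidate_rewards)
                    if completion_prob(task, r) >= min_prob]
        # Fall back to the largest reward if no level qualifies.
        plan[task.name] = feasible[0] if feasible else max(candidate_rewards)
    return plan

tasks = [Task("label-image", 0.2), Task("verify-label", 0.6)]
print(choose_rewards(tasks, candidate_rewards=[0.5, 1.0, 2.0, 4.0]))
# {'label-image': 2.0, 'verify-label': 4.0}
```

A constraint solver would search the joint parameter space exactly; a heuristic of this kind trades the optimality guarantee for per-task decisions that run in near-linear time.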


2014 ◽  
Vol 31 (04) ◽  
pp. 1450022 ◽  
Author(s):  
ALEXANDER ENGAU

We present two recent integer programming models in molecular biology and study practical reformulations for computing solutions to some of these problems. Extending previously tested linearization techniques, we formulate corresponding semidefinite relaxations and discuss practical rounding strategies to find good feasible approximate solutions. Our computational results highlight the possible advantages and remaining challenges of this approach, especially on large-scale problems.
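
The specific models and rounding strategies are not given in the abstract; as one standard example of rounding a semidefinite relaxation, the sketch below applies Goemans–Williamson-style random-hyperplane rounding to a PSD solution matrix X of a {-1, +1} program (the score x^T X x used to rank trials is an assumption chosen for illustration):

```python
import numpy as np

def round_sdp_solution(X: np.ndarray, trials: int = 100, seed: int = 0):
    """Random-hyperplane rounding of an SDP relaxation solution.

    X is assumed to be the optimal matrix of a semidefinite relaxation
    of a {-1, +1} integer program, with X[i, i] == 1.  We factor
    X = V V^T and round each row of V by its sign against random
    hyperplanes, keeping the best of several trials.
    """
    # Eigendecomposition gives a factor V with X = V V^T (clip tiny
    # negative eigenvalues caused by numerical noise).
    w, U = np.linalg.eigh(X)
    V = U * np.sqrt(np.clip(w, 0.0, None))
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(trials):
        r = rng.standard_normal(X.shape[0])
        x = np.sign(V @ r)
        x[x == 0] = 1.0  # break ties deterministically
        if best is None or x @ X @ x > best @ X @ best:
            best = x
    return best
```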


Author(s):  
Minh N. Bùi ◽  
Patrick L. Combettes

We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem, in the sense that each operator is used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator that acts on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. The latter feature is of critical importance in large-scale problems. The weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
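
The paper's saddle operator acts on a product space tailored to its full multivariate model; for orientation only, in the classical two-operator special case of finding x with 0 ∈ Ax + L*B(Lx), the associated saddle (Kuhn–Tucker) operator on the primal-dual space is

```latex
S \colon (x, v) \mapsto \bigl( Ax + L^{*}v \bigr) \times \bigl( B^{-1}v - Lx \bigr)
```

and (x, v) is a zero of S exactly when v ∈ B(Lx) and 0 ∈ Ax + L*v, so the primal component x solves the original inclusion.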


2012 ◽  
pp. 380-406
Author(s):  
Nurcin Celik ◽  
Esfandyar Mazhari ◽  
John Canby ◽  
Omid Kazemi ◽  
Parag Sarfare ◽  
...  

Simulating large-scale systems usually entails extensive computational power and lengthy execution times. The goal of this research is to reduce the execution time of large-scale simulations without sacrificing their accuracy by automatically partitioning a monolithic model into multiple pieces and executing them in a distributed computing environment. While this partitioning allows us to distribute the required computational power across multiple computers, it creates a new challenge of synchronizing the partitioned models. In this article, a partitioning methodology based on a modified Prim's algorithm is proposed to minimize the overall simulation execution time, considering 1) internal computation in each of the partitioned models and 2) time synchronization between them. In addition, the authors seek the most advantageous number of partitioned models by evaluating the tradeoff between reduced computation and increased time-synchronization requirements. Epoch-based synchronization is employed to synchronize the logical times of the partitioned simulations, where an appropriate time interval is determined from off-line simulation analyses. A computational grid framework is employed to execute the simulations partitioned by the proposed methodology. The experimental results reveal that the proposed approach reduces simulation execution time significantly while maintaining accuracy, compared with the monolithic simulation execution approach.
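
The modification of Prim's algorithm is not detailed in the abstract; a minimal sketch of the underlying idea, with a hypothetical graph representation, grows a partition by absorbing the neighbor connected by the heaviest communication edge, so that heavy traffic stays internal and only light edges (cheap to synchronize) are cut:

```python
import heapq

def prim_partition(graph, seed, max_size):
    """Grow one partition from `seed`, Prim-style: repeatedly absorb
    the outside node joined by the heaviest edge, keeping heavy
    communication inside the partition.

    `graph` maps node -> {neighbor: communication_weight}.
    """
    part = {seed}
    # Max-heap via negated weights.
    frontier = [(-w, nbr) for nbr, w in graph[seed].items()]
    heapq.heapify(frontier)
    while frontier and len(part) < max_size:
        _, node = heapq.heappop(frontier)
        if node in part:
            continue
        part.add(node)
        for nbr, w in graph[node].items():
            if nbr not in part:
                heapq.heappush(frontier, (-w, nbr))
    return part

g = {"a": {"b": 9, "c": 1}, "b": {"a": 9, "d": 8},
     "c": {"a": 1}, "d": {"b": 8}}
print(prim_partition(g, "a", max_size=3))  # {'a', 'b', 'd'}
```

Running this repeatedly on the remaining nodes would yield a full partitioning; the paper additionally balances internal computation against the synchronization cost of the cut edges.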


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 984
Author(s):  
Mahmoud S. Alrawashdeh ◽  
Seba A. Migdady ◽  
Ioannis K. Argyros

We present some new results on the fractional decomposition method (FDM), a method well suited to fractional calculus applications. We also explore exact and approximate solutions to fractional differential equations. The Caputo derivative is used because it allows traditional initial and boundary conditions to be included in the formulation of the problem, which is of great significance for large-scale problems. The study outlines the significant features of the FDM. The relation between the natural transform and the Laplace transform is a symmetrical one. Our work can be considered an alternative to existing techniques and will have wide applications in science and engineering.
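
For reference, the Caputo fractional derivative of order α, with n − 1 < α < n, is

```latex
{}^{C}\!D_{t}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-\tau)^{\,n-\alpha-1}\, f^{(n)}(\tau)\, d\tau
```

and its dependence on the integer-order derivatives f^{(n)} is what lets traditional initial and boundary conditions be imposed directly.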


Author(s):  
Sumanth Dathathri ◽  
Nikos Arechiga ◽  
Sicun Gao ◽  
Richard M. Murray

We propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. The proposed approach decomposes the original set of constraints into smaller subsets and uses learning algorithms to propose sequences of abstractions that take the form of conjunctions of classifiers. The core procedure is a refinement loop that keeps improving the learned results based on counterexamples obtained from easy-to-solve partial constraints. Experiments show that the proposed techniques significantly improve the performance of state-of-the-art constraint solvers on many challenging benchmarks. The mechanism is capable of producing intermediate symbolic abstractions that are also important for many applications and for understanding the internal structure of hard constraint solving problems.
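
The abstract describes the core loop only at a high level; a schematic sketch, in which learner, sample_counterexample, and the data representation are all placeholders rather than the paper's actual components:

```python
def refine_abstraction(constraints, learner, sample_counterexample,
                       max_rounds=50):
    """Generic learn-and-refine loop in the spirit described above:
    fit a classifier-based abstraction, then ask an easy-to-solve
    partial constraint for a counterexample the current abstraction
    misclassifies, and retrain on it until none remains.
    """
    examples = []   # accumulated (point, label) pairs
    model = None
    for _ in range(max_rounds):
        cex = sample_counterexample(constraints, model)
        if cex is None:            # abstraction consistent: done
            return model
        examples.append(cex)
        model = learner(examples)  # refit conjunction of classifiers
    return model
```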


2019 ◽  
Vol 64 ◽  
pp. 987-1023
Author(s):  
Allan R. Leite ◽  
Fabricio Enembreck

The distributed constraint optimization problem (DCOP) has emerged as one of the most promising coordination techniques in multiagent systems. However, because DCOP is known to be NP-hard, existing DCOP techniques are often unsuitable for large-scale applications, which require distributed and scalable algorithms that cope with severely limited computing and communication resources. In this paper, we present a novel approach to providing approximate solutions for large-scale, complex DCOPs. This approach introduces concepts from the synchronization of coupled oscillators to speed up convergence towards high-quality solutions. We propose a new anytime local search DCOP algorithm, called Coupled Oscillator OPTimization (COOPT), in which agents iteratively solve a DCOP by exchanging local information that brings them to consensus. We empirically evaluate COOPT on constraint networks involving hundreds of variables with different topologies, domains, and densities. Our experimental results demonstrate that COOPT outperforms other incomplete state-of-the-art DCOP algorithms, especially in terms of the agents' communication cost and solution quality.
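
COOPT's update rule is not given in the abstract; for intuition about the coupled-oscillator synchronization it borrows, here is one Euler step of the classical Kuramoto model, in which each oscillator is pulled toward the others' phases until the population reaches consensus:

```python
import math

def kuramoto_step(phases, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator drifts at
    its natural frequency omega[i] and is pulled toward the mean field
    of the other phases with coupling strength K."""
    n = len(phases)
    new = []
    for i in range(n):
        coupling = sum(math.sin(pj - phases[i]) for pj in phases) / n
        new.append(phases[i] + dt * (omega[i] + K * coupling))
    return new

phases = [0.0, 2.0, 4.0]
omega = [1.0, 1.1, 0.9]
for _ in range(1000):
    phases = kuramoto_step(phases, omega, K=2.0)
print(phases)  # phases cluster as the oscillators synchronize
```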


2018 ◽  
Vol 12 (4) ◽  
pp. 28
Author(s):  
Debabrat Bharali ◽  
Sandeep Kumar Sharma
