Effective Service Composition in Large Scale Service Market

2012 ◽  
Vol 9 (1) ◽  
pp. 74-94 ◽  
Author(s):  
Xianzhi Wang ◽  
Zhongjie Wang ◽  
Xiaofei Xu

The web has undergone a tremendous shift from an information repository to a provisioning platform for services. Service composition is an effective means of constructing coarse-grained solutions by dynamically aggregating a set of services to satisfy complex requirements, but traditional composition suffers a dramatic drop in efficiency when determining the optimal solution over the large number of services available in an Internet-based service market. Most current approaches search for the optimal composition by real-time computation, so composition efficiency depends heavily on the adopted algorithms. To eliminate this deficiency, this paper proposes a semi-empirical composition approach that extracts empirical evidence from historical experience to guide solution-space reduction for real-time service selection. Service communities and historical requirements are organized into clusters based on similarity measurement, and the probabilistic correspondences between the two types of clusters are identified by statistical analysis. For each new request, its hosting requirement cluster is identified and the corresponding service clusters are determined by Bayesian inference. Concrete services are then selected from the reduced solution space to constitute the final composition. Timing strategies for re-clustering and the handling of special cases in clustering ensure continual adaptation of the approach to a changing environment. Rather than relying solely on real-time computation, the approach distinguishes itself from traditional methods by combining the empirical and real-time perspectives.
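A minimal sketch of the cluster-correspondence idea described above (the counts, cluster labels, and function names are hypothetical illustrations, not taken from the paper): given how often requirements in a cluster were historically served by services from each service cluster, Bayesian inference with a smoothing prior picks the most probable service clusters for a new request, shrinking the space searched in real time.

```python
import numpy as np

# Hypothetical co-occurrence counts: rows = requirement clusters,
# cols = service clusters; counts[r, s] = how many historical
# requirements in cluster r were satisfied by services from cluster s.
counts = np.array([
    [40,  5,  1],
    [ 3, 30, 10],
    [ 2,  8, 25],
], dtype=float)

def service_cluster_posterior(req_cluster: int, alpha: float = 1.0) -> np.ndarray:
    """P(service cluster | requirement cluster) with Laplace smoothing."""
    row = counts[req_cluster] + alpha
    return row / row.sum()

def reduced_solution_space(req_cluster: int, threshold: float = 0.1) -> list[int]:
    """Keep only service clusters whose posterior exceeds the threshold."""
    post = service_cluster_posterior(req_cluster)
    return [s for s, p in enumerate(post) if p >= threshold]

# A new request assigned to requirement cluster 1 is routed to the few
# service clusters that historically served similar requirements.
print(reduced_solution_space(1))  # [1, 2]
```

Concrete service selection then runs only over the services in the surviving clusters, which is the solution-space reduction the abstract describes.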

2014 ◽  
Vol 8 (4) ◽  
pp. 2025-2032
Author(s):  
Hu Jingjing ◽  
Ma Siying ◽  
Zhao Xing ◽  
Cao Yinyin

Author(s):  
Claudio Contardo ◽  
Jorge A. Sefair

We present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely, an attacker and a follower) interact sequentially. The follower must solve an optimization problem that has been previously perturbed by a series of attacking actions led by the attacker. These attacking actions aim to augment the cost of the decision variables of the follower's optimization problem. The objective, from the attacker's viewpoint, is to choose an attacking strategy that degrades as much as possible the quality of the optimal solution attainable by the follower. The progressive approximation mechanism consists of the iterative solution of an interdiction problem in which the attacker's actions are restricted to a subset of the whole solution space, together with a pricing subproblem invoked to prove the optimality of the attacking strategy. This scheme is especially useful when the optimal solutions to the follower's subproblem intersect with the decision space of the attacker in only a small number of decision variables. In such cases, the progressive approximation method can solve interdiction games that are otherwise intractable for classical methods. We illustrate the efficiency of our approach on the shortest path, 0-1 knapsack, and facility location interdiction games. Summary of Contribution: In this article, we present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two noncooperative players (namely, an attacker and a follower) interact sequentially. We exploit the discrete nature of this interdiction game to design an effective algorithmic framework that improves the performance of general-purpose solvers. Our algorithm combines elements from mathematical programming and computer science, including a metaheuristic algorithm, a binary search procedure, a cutting-planes algorithm, and supervalid inequalities. Although we illustrate our results on three specific problems (shortest path, 0-1 knapsack, and facility location), our algorithmic framework can be extended to a broader class of interdiction problems.
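A toy illustration of the restrict-and-price control flow on a tiny shortest-path interdiction instance (the graph, parameters, brute-force inner solves, and pricing rule are simplified stand-ins, not the authors' algorithm): the restricted problem only considers attacking edges seen so far, and the pricing step adds edges used by the follower's best response until none remain.

```python
from itertools import combinations

# Tiny directed graph: edge -> base cost. The attacker may add DELTA to
# at most K edge costs; the follower then takes the cheapest s-t path.
EDGES = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 2, ("b", "t"): 2, ("s", "t"): 5}
PATHS = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")], [("s", "t")]]
K, DELTA = 2, 3

def path_cost(path, attacked):
    return sum(EDGES[e] + (DELTA if e in attacked else 0) for e in path)

def follower(attacked):
    """Follower's best response: the cheapest s-t path under the attack."""
    return min(PATHS, key=lambda p: path_cost(p, attacked))

def restricted_attack(active):
    """Best attack restricted to the edge subset `active` (brute force)."""
    best, best_val = frozenset(), path_cost(follower(frozenset()), frozenset())
    for r in range(1, min(K, len(active)) + 1):
        for atk in map(frozenset, combinations(active, r)):
            val = path_cost(follower(atk), atk)
            if val > best_val:
                best, best_val = atk, val
    return best

active, attack = set(), frozenset()
while True:
    attack = restricted_attack(active)
    # Pricing step: add edges used by the follower's response that the
    # restricted problem has never been allowed to attack.
    new = set(follower(attack)) - active
    if not new:
        break  # no candidate edges remain: the attack is optimal
    active |= new
print(sorted(attack), path_cost(follower(attack), attack))
```

The loop converges quickly here precisely because the follower's optimal paths touch only a few attackable edges, which mirrors the condition under which the paper reports the method is most effective.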


2021 ◽  
Vol 15 (3) ◽  
pp. 1-25
Author(s):  
Chen Chen ◽  
Ruiyue Peng ◽  
Lei Ying ◽  
Hanghang Tong

The connectivity of networks has been widely studied in many high-impact applications, ranging from immunization, critical infrastructure analysis, and social network mining to bioinformatic system studies. Regardless of the end application domain, connectivity minimization has always been a fundamental task for effectively controlling the functioning of the underlying system. The combinatorial nature of the connectivity minimization problem imposes an exponential computational complexity on finding the optimal solution, which is intractable in large systems. To tackle this computational barrier, greedy algorithms are extensively used to obtain a near-optimal solution by exploiting the diminishing-returns property of the problem. Despite this empirical success, the theoretical and algorithmic challenges of the problem remain wide open. On the theoretical side, the intrinsic hardness and the approximability of the general connectivity minimization problem are still unknown except for a few special cases. On the algorithmic side, existing algorithms struggle to balance optimization quality against computational efficiency. In this article, we address these two challenges by (1) proving that the general connectivity minimization problem is NP-hard and that (1 − 1/e) is the best approximation ratio attainable by any polynomial-time algorithm, and (2) proposing the algorithm CONTAIN and its variant CONTAIN+, which balance optimization effectiveness and computational efficiency for eigen-function-based connectivity minimization problems in large networks.
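A minimal sketch of the diminishing-returns greedy baseline the abstract refers to (not CONTAIN itself; the eigen-score heuristic is a standard first-order estimate): repeatedly delete the edge whose removal most reduces the leading eigenvalue of the adjacency matrix, approximated by the product of the leading-eigenvector entries at its endpoints.

```python
import numpy as np

def greedy_edge_deletion(A: np.ndarray, k: int) -> list[tuple[int, int]]:
    """Greedy connectivity-minimization baseline: remove k edges, each
    time the one with the largest eigen-score u[i]*u[j], a first-order
    estimate of the drop in the leading eigenvalue of A."""
    A = A.astype(float).copy()
    removed = []
    for _ in range(k):
        vals, vecs = np.linalg.eigh(A)
        u = np.abs(vecs[:, -1])              # leading eigenvector
        edges = np.argwhere(np.triu(A) > 0)  # undirected edges, i < j
        if len(edges) == 0:
            break
        i, j = max(map(tuple, edges), key=lambda e: u[e[0]] * u[e[1]])
        A[i, j] = A[j, i] = 0.0
        removed.append((i, j))
    return removed

# Small demo: a 4-node clique loses its most "central" edges first.
A = np.ones((4, 4)) - np.eye(4)
print(greedy_edge_deletion(A, 2))
```

Recomputing a full eigendecomposition per deletion is exactly the cost/quality tension the abstract describes; CONTAIN's contribution is avoiding that trade-off at scale.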


2021 ◽  
Vol 51 (5) ◽  
pp. 373-390
Author(s):  
Hao Yi Ong ◽  
Daniel Freund ◽  
Davide Crapis

Drivers on the Lyft ride-share platform do not always know where the areas of supply shortage are in real time. This lack of information hurts both riders trying to find a ride and drivers trying to maximize their earnings opportunities. Lyft’s Personal Power Zone (PPZ) product helps the company maintain high levels of service on the platform by influencing the spatial distribution of drivers in real time via monetary incentives that encourage them to reposition their vehicles. The underlying system that powers the product has two main components: (1) a novel “escrow mechanism” that tracks available incentive budgets tied to locations within a city in real time, and (2) an algorithm that solves the stochastic driver-positioning problem to maximize short-run revenue from riders’ fares. The optimization problem is a multiagent dynamic program that is too complicated to solve optimally for our large-scale application. Our approach is to decompose it into two subproblems. The first determines the set of drivers to incentivize and where to incentivize them to position themselves. The second determines how to fund each incentive using the escrow budget. By formulating these as two convex programs, we are able to use commercial solvers that find the optimal solution in a matter of seconds. Rolled out over little more than a year to all 320 cities in which Lyft operates, the system now generates millions of bonuses every week that incentivize hundreds of thousands of active drivers to position themselves optimally in anticipation of ride requests. Together, the PPZ product and its underlying algorithms represent a paradigm shift in how Lyft drivers drive and generate earnings on the platform. Its direct business impact has been a 0.5% increase in incremental bookings, amounting to tens of millions of dollars per year. In addition, the product has brought about significant improvements to the driver and rider experience on the platform. These include statistically significant reductions in pick-up times and ride cancellations. Finally, internal surveys reveal that the vast majority of drivers prefer PPZs over the legacy system.
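A highly simplified sketch of the budget-feasible incentive-allocation step as a linear program (scipy is assumed; all numbers, the objective, and the zone structure are illustrative, not Lyft's formulation, which the abstract describes as two convex programs derived from a multiagent dynamic program): spend bonus dollars per zone to maximize expected incremental revenue, subject to per-zone escrow caps and a citywide budget.

```python
import numpy as np
from scipy.optimize import linprog

# Toy incentive allocation: choose bonus spend x[z] per zone.
revenue_per_dollar = np.array([1.8, 1.3, 1.1, 0.9])  # marginal value by zone
escrow_budget = np.array([50.0, 80.0, 60.0, 40.0])   # per-zone escrow caps
citywide_budget = 150.0

res = linprog(
    c=-revenue_per_dollar,                  # linprog minimizes, so negate
    A_ub=np.ones((1, 4)), b_ub=[citywide_budget],
    bounds=list(zip(np.zeros(4), escrow_budget)),
)
print(res.x)  # fills zones in decreasing order of marginal value
```

The escrow caps play the role the abstract assigns to the escrow mechanism: they keep each location's spend inside its tracked budget while the solver allocates the citywide total in seconds.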


2010 ◽  
Vol 23 (3) ◽  
pp. 273-286 ◽  
Author(s):  
Nouraddin Alhagi ◽  
Maher Hawash ◽  
Marek Perkowski

This paper presents a new algorithm, MP (multiple pass), to synthesize large reversible binary circuits without ancilla bits. The well-known MMD algorithm for the synthesis of reversible circuits requires storing a truth table (or a Reed-Muller (RM) transform) as a 2^n vector to represent a reversible function of n variables. This representation prohibits the synthesis of large functions. In MP, however, we do not store such an exponentially growing data structure. The values of minterms are calculated in MP dynamically, one by one, from a set of logic equations that specify the reversible circuit to be designed. This allows for the synthesis of large-scale reversible circuits (30 bits), which is not possible with any existing algorithm. In addition, our unique multi-pass approach, in which the circuit is synthesized with various, yet specific, minterm orders, yields a quasi-optimal solution. The algorithm returns a description of the quasi-optimal circuit with respect to gate count or to its 'quantum cost'. Although the synthesis process in MP is relatively slow, the solution is found in real time for smaller circuits of 8 bits or fewer.
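A minimal sketch of the storage idea just described (the example equations are hypothetical; this is not the MP algorithm itself): each minterm's output is evaluated on demand from the logic equations specifying the reversible function, so no 2^n vector is ever materialized.

```python
# Evaluate output minterms one by one from logic equations instead of
# storing the full 2**n truth table.

def reversible_spec(x: int, n: int) -> int:
    """Example 3-bit reversible function given by logic equations:
    a Toffoli-like map y2 = x2 XOR (x0 AND x1); other bits pass through."""
    x0, x1, x2 = x & 1, (x >> 1) & 1, (x >> 2) & 1
    y2 = x2 ^ (x0 & x1)
    return x0 | (x1 << 1) | (y2 << 2)

def minterms(n: int):
    """Generate (input, output) pairs lazily; memory use is constant in n,
    which is what makes streaming over a 30-bit function feasible."""
    for x in range(2 ** n):
        yield x, reversible_spec(x, n)

for x, y in minterms(3):
    print(f"{x:03b} -> {y:03b}")
```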


2020 ◽  
Author(s):  
Waleed Bahgat ◽  
Mahmoud A. Salam ◽  
Ahmed Atwan ◽  
Mahmoud Badawy ◽  
Eman El-Daydamony

Abstract It has recently become a critical issue for business companies to provide software development in a service-based conceptual style. The composition of web services is investigated as a powerful technology for service-oriented computing. It offers great opportunities to improve IT industries and business processes by forming new value-added services that satisfy users' complex requirements. Unfortunately, the service composition process faces many challenges. These include the difficulty of satisfying users' complex demands, maintaining performance that matches the quality of service (QoS) requirements, and reducing the search space when QoS values are missing or changeable. Accordingly, this paper proposes a cloud-based QoS provisioning service composition (CQPC) framework to address these challenges. To prove the concept and the applicability of the CQPC framework, a Hybrid Bio-Inspired QoS Provisioning (HBIQP) technique is presented for the operation of the CQPC framework modules. The solution space is reduced by applying skyline concepts, which shortens execution time and retains only reliable and relevant services. The CQPC framework is equipped with two proposed algorithms: (i) the modified highly accurate prediction (MHAP) algorithm, which enhances the prediction of the QoS values of the services participating in the composition process, and (ii) the MapReduce Fruit Fly Particle Swarm Optimization (MR-FPSO) algorithm, which handles the composition of web services for large-scale data in the cloud environment. The experimental results demonstrate that the HBIQP technique outperforms other state-of-the-art techniques in terms of average fitness value, accuracy, and execution time.
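A minimal sketch of the skyline reduction mentioned above (illustrative, not the CQPC implementation; the service names and QoS values are hypothetical): a candidate service is kept only if no other service dominates it, i.e., is at least as good on every QoS attribute and strictly better on at least one. Dominated services can never appear in an optimal composition, so discarding them shrinks the search space safely.

```python
# Skyline (Pareto) filtering over QoS vectors; lower is better for all
# attributes here (e.g., latency in ms, cost per call).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Keep only the Pareto-optimal services."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

candidates = {"s1": (120, 0.9), "s2": (80, 1.4), "s3": (150, 1.0), "s4": (80, 0.9)}
kept = skyline(list(candidates.values()))
print([name for name, qos in candidates.items() if qos in kept])  # ['s4']
```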


2020 ◽  
Vol 34 (04) ◽  
pp. 5117-5124 ◽  
Author(s):  
Xiaolong Ma ◽  
Fu-Ming Guo ◽  
Wei Niu ◽  
Xue Lin ◽  
Jian Tang ◽  
...  

Model compression techniques for Deep Neural Networks (DNNs) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method. There are currently two mainstream pruning approaches, representing two extremes of pruning regularity: non-structured, fine-grained pruning can achieve high sparsity and accuracy but is not hardware friendly; structured, coarse-grained pruning exploits hardware-efficient structures but suffers an accuracy drop when the pruning rate is high. In this paper, we introduce PCONV, which adds a new sparsity dimension: fine-grained pruning patterns inside coarse-grained structures. PCONV comprises two types of sparsity: sparse convolution patterns (SCPs), generated by intra-convolution-kernel pruning, and connectivity sparsity, generated by inter-convolution-kernel pruning. Essentially, SCPs enhance accuracy due to their special vision properties, and connectivity sparsity increases the pruning rate while maintaining a balanced workload in filter computation. To deploy PCONV, we develop a novel compiler-assisted DNN inference framework that executes PCONV models in real time without accuracy compromise, which cannot be achieved in prior work. Our experimental results show that PCONV outperforms three state-of-the-art end-to-end DNN frameworks, TensorFlow-Lite, TVM, and Alibaba Mobile Neural Network, with speedups of up to 39.2×, 11.4×, and 6.3×, respectively, with no accuracy loss. Mobile devices can thus achieve real-time inference on large-scale DNNs.
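A minimal sketch of the two sparsity types just described (the pattern set and selection rule are simplified stand-ins, not PCONV's trained selection): each surviving 3×3 kernel keeps only the four entries of its best-matching pattern (intra-kernel pattern sparsity), while whole low-norm kernels are removed entirely (inter-kernel connectivity sparsity).

```python
import numpy as np

# Four example 3x3 patterns, each keeping 4 of 9 kernel entries.
PATTERNS = [np.array(p).reshape(3, 3) for p in (
    [0,1,0, 1,1,0, 0,1,0],
    [0,1,0, 0,1,1, 0,1,0],
    [0,1,0, 1,1,1, 0,0,0],
    [0,0,0, 1,1,1, 0,1,0],
)]

def prune(weights: np.ndarray, keep_ratio: float = 0.75) -> np.ndarray:
    """weights: (out_ch, in_ch, 3, 3) convolution tensor."""
    out = np.zeros_like(weights)
    norms = np.linalg.norm(weights.reshape(*weights.shape[:2], -1), axis=-1)
    cut = np.quantile(norms, 1 - keep_ratio)     # connectivity sparsity
    for o, i in np.argwhere(norms >= cut):
        k = weights[o, i]
        best = max(PATTERNS, key=lambda p: np.abs(k * p).sum())
        out[o, i] = k * best                     # pattern (SCP) sparsity
    return out

w = np.random.randn(8, 4, 3, 3)
print((prune(w) != 0).mean())  # fraction of weights kept after pruning
```

Because every kept kernel has the same number of nonzeros, the per-filter workload stays balanced, which is the property the abstract credits for hardware efficiency.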


Energies ◽  
2019 ◽  
Vol 12 (22) ◽  
pp. 4320 ◽  
Author(s):  
Michael Short ◽  
Sergio Rodriguez ◽  
Richard Charlesworth ◽  
Tracey Crosbie ◽  
Nashwan Dawood

Demand response (DR) involves economic incentives aimed at balancing energy demand during critical demand periods. In doing so, DR offers the potential to assist with grid balancing, integrate renewable energy generation, and improve energy network security. Buildings account for roughly 40% of global energy consumption; the building stock therefore offers a largely untapped resource for DR. Heating, ventilation and air conditioning (HVAC) systems provide one of the largest potential sources of DR in buildings. However, coordinating the real-time aggregated response of multiple HVAC units across large numbers of buildings and stakeholders poses a challenging problem. Leveraging the concepts of Industry 4.0, this paper presents a large-scale decentralized discrete optimization framework to address this problem. Specifically, the paper first focuses on the real-time dispatch problem for individual HVAC units in the presence of a tertiary DR program. The dispatch problem is formulated as a non-linear constrained predictive control problem, and an efficient dynamic programming (DP) algorithm with fixed memory and computation-time overheads is developed for its solution in real time on individual HVAC units. Subsequently, in order to coordinate dispatch among multiple HVAC units in parallel by a DR aggregator, a flexible and efficient allocation/reallocation DP algorithm is developed to extract the cost-optimal solution and generate dispatch instructions for individual units. Accurate baselining at the individual-unit and aggregated levels for post-settlement is treated as an integrated component of the presented algorithms. A number of calibrated simulation studies and practical experimental tests verify and illustrate the performance of the proposed schemes. The results show that the distributed optimization algorithm provides a scalable, flexible solution that helps deliver aggregated tertiary DR from HVAC systems for both aggregators and individual customers. The paper concludes with a discussion of future work.
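A minimal sketch of a fixed-memory DP for single-unit dispatch under a DR price signal (the thermal model, prices, comfort band, and cost function are illustrative assumptions, not the paper's formulation): a backward pass over a discretized temperature grid keeps only two value arrays alive at once, so memory and per-slot computation stay fixed regardless of the horizon, which is the property the abstract highlights for embedded real-time use.

```python
COMFORT = (20.0, 24.0)                          # allowed zone band, degC
PRICES = [0.10, 0.10, 0.45, 0.45, 0.12, 0.10]   # price per slot (DR peak mid-horizon)
TEMPS = [20.0 + 0.5 * i for i in range(13)]     # state grid, 20.0 .. 26.0
INF = float("inf")

def step(temp: float, on: int) -> float:
    """Toy thermal model: ambient adds 0.5 degC; cooling removes 1.0."""
    t = temp + 0.5 - (1.0 if on else 0.0)
    return min(max(t, TEMPS[0]), TEMPS[-1])

def dispatch(start: float = 22.0, power: float = 3.0) -> list[int]:
    # Backward pass: value[t] = cheapest feasible cost-to-go from temp t.
    value = {t: 0.0 for t in TEMPS}
    policy = []
    for price in reversed(PRICES):
        new_value, decision = {}, {}
        for t in TEMPS:
            options = [(price * power * on + value[step(t, on)], on)
                       for on in (1, 0)
                       if COMFORT[0] <= step(t, on) <= COMFORT[1]]
            new_value[t], decision[t] = min(options) if options else (INF, 0)
        value, policy = new_value, [decision] + policy
    # Forward pass: roll the optimal policy out from the start state.
    t, plan = start, []
    for decision in policy:
        plan.append(decision[t])
        t = step(t, decision[t])
    return plan

print(dispatch())  # cools in a cheap early slot, idles through the DR peak
```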

