Effective Cost Models for Predicting Web Query Execution Cost

Classical query optimizers rely on sophisticated cost models to estimate the cost of executing a query and its operators. Using this cost model, the optimizer creates an efficient global plan for executing a given query. This cost modeling facility is difficult to implement in Web query engines because many local data sources are unwilling to share metadata due to confidentiality concerns. In this work, efficient and effective cost modeling techniques for Web query engines are proposed. These techniques do not force local data sources to reveal their metadata; instead, they employ a learning mechanism to estimate the cost of executing a given local query. Two cost modeling algorithms, a Poisson cost model and an Exponential cost model, are presented. Empirical results over real-world datasets demonstrate the efficiency and effectiveness of the new cost models.
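The learning mechanism can be pictured as fitting a simple latency distribution per source from observed query times. A minimal sketch, assuming generic distributional estimators (not the paper's actual algorithms):

```python
import statistics

class LearnedCostModel:
    """Per-source cost estimator learned from observed latencies only,
    with no metadata from the source. Illustrative sketch; the
    estimators are generic assumptions, not the paper's algorithms."""

    def __init__(self):
        self.samples = []

    def observe(self, latency_s):
        self.samples.append(latency_s)

    def exponential_cost(self):
        # Exponential model: the MLE rate is 1/mean, so the expected
        # per-query cost is simply the sample mean.
        return statistics.mean(self.samples)

    def poisson_cost(self, unit_s=0.5):
        # Poisson model: treat each query as a count of fixed-size
        # cost units; lambda is the mean count per query.
        counts = [round(s / unit_s) for s in self.samples]
        return statistics.mean(counts) * unit_s

model = LearnedCostModel()
for t in [0.8, 1.2, 1.0, 1.4, 0.6]:   # observed query latencies (seconds)
    model.observe(t)
```

After a handful of observed queries, either estimator prices the next local query without requiring any metadata from the source.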

Author(s):  
Elvira Albert ◽  
Jesús Correas ◽  
Pablo Gordillo ◽  
Guillermo Román-Díez ◽  
Albert Rubio

Abstract We present the main concepts, components, and usage of Gasol, a Gas AnalysiS and Optimization tooL for Ethereum smart contracts. Gasol offers a wide variety of cost models that allow inferring the gas consumption associated with selected types of EVM instructions and/or the number of times such bytecode instructions are executed. Among others, we have cost models to measure only storage opcodes, to measure a selected family of gas-consuming opcodes following Ethereum's classification, to estimate the cost of a selected program line, etc. After choosing the desired cost model and the function of interest, Gasol returns to the user an upper bound on the cost of this function. As gas consumption is often dominated by the instructions that access storage, Gasol uses the gas analysis to detect under-optimized storage patterns and includes an (optional) automatic optimization of the selected function. Our tool can be used within an Eclipse plugin that displays the gas and instruction bounds and, when applicable, the gas-optimized function.
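The opcode-family style of cost model can be illustrated with a toy upper-bound estimator that sums worst-case execution counts times per-opcode gas. The gas figures and interface below are simplified assumptions, not Gasol's actual tables or API:

```python
# Simplified per-opcode gas figures -- rough placeholders, not Gasol's tables.
GAS = {"SSTORE": 20000, "SLOAD": 2100, "ADD": 3, "MLOAD": 3}

def gas_upper_bound(exec_bounds, family=None):
    """Sum (worst-case execution count) x (per-opcode gas), optionally
    restricted to a chosen opcode family, mirroring the idea of
    family-selective cost models."""
    return sum(GAS[op] * n for op, n in exec_bounds.items()
               if family is None or op in family)

# Upper bounds on how often each opcode can run in the function of interest.
bounds = {"SSTORE": 2, "SLOAD": 3, "ADD": 10}
storage_gas = gas_upper_bound(bounds, family={"SSTORE", "SLOAD"})
total_gas = gas_upper_bound(bounds)
```

Restricting the sum to the storage family shows why storage dominates: here the two storage opcodes account for nearly all of the bound.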


Author(s):  
Amy Lujan

In recent years, the possibility of panels replacing wafers in some fan-out applications has been a topic of interest. Questions of cost and yield continue to arise even as the industry appears to be moving full steam ahead. While large panels allow more packages to be produced at once, the cost does not scale simply with how many more packages can be generated from a panel than from a wafer. This analysis begins by breaking down the types of cost and discusses how each type is impacted (or not) by the shift from wafer to panel. Activity-based cost modeling is used; this is a detailed, bottom-up approach that accounts for each type of cost in each activity of a process flow. Two complete cost models were constructed for this analysis. A variety of package sizes are analyzed, and multiple panel sizes are included as well. For each set of activities in the fan-out process flow, there is an explanation of how the process changes with the move to panel, including assumptions related to throughput, equipment price, and materials. The cost reduction that may be achieved at each package and panel size is presented for each processing segment. The focus of this analysis is on the details of each segment of the process flow, but results for the total cost of various packages are also presented. There is also a section of analysis related to the impact of yield on the competitiveness of panel processing.
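Activity-based costing as described can be sketched as a sum of per-activity costs spread over the packages on a carrier (wafer or panel). All throughput and price figures below are hypothetical, chosen only to show why cost does not scale linearly with package count:

```python
def cost_per_package(activities, packages_per_carrier):
    """Activity-based cost sketch: each activity contributes machine
    and labor cost per carrier (hourly cost divided by carrier
    throughput) plus per-carrier materials; the carrier total is then
    spread over the packages it holds. Figures are hypothetical."""
    carrier_cost = sum(a["hourly_cost"] / a["carriers_per_hour"]
                       + a["materials_per_carrier"]
                       for a in activities)
    return carrier_cost / packages_per_carrier

# One illustrative activity; a real flow would list every step.
wafer_flow = [{"hourly_cost": 120.0, "carriers_per_hour": 20,
               "materials_per_carrier": 4.0}]
panel_flow = [{"hourly_cost": 120.0, "carriers_per_hour": 6,
               "materials_per_carrier": 20.0}]

wafer_cost = cost_per_package(wafer_flow, packages_per_carrier=500)
panel_cost = cost_per_package(panel_flow, packages_per_carrier=2500)
```

The panel holds 5x the packages but runs at lower throughput with higher materials cost per carrier, so the per-package saving is far smaller than 5x.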


1999 ◽  
Vol 103 (1026) ◽  
pp. 383-388 ◽  
Author(s):  
K. Gantois ◽  
A. J. Morris

Abstract The paper describes a metal and composite recurrent cost model of a large civil aircraft wing structure for a multidisciplinary design, analysis and optimisation (MDO) environment. The work was part of a recent European MDO project (BE95-2056) that investigated methods for the integration of structures, aerodynamics, dynamics and manufacturing cost at the preliminary design stage. The paper discusses the cost modelling approach, which is based on parametric and process cost model methods, and the integration of the cost models into an MDO process. Results for the cost models are shown. A framework has been successfully developed which allows the incorporation of manufacturing cost models into an MDO environment. It allows a designer to evaluate cost changes with respect to specific design changes such as rib pitch, stringer pitch, wing area and wing sweep.
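A parametric cost model of this kind can be pictured as deriving part counts from geometry and multiplying them by per-part process costs. The functional form and every coefficient below are hypothetical placeholders, not the MDO project's model:

```python
def recurrent_wing_cost(span_m, area_m2, rib_pitch_m, stringer_pitch_m,
                        rib_cost=900.0, stringer_cost_per_m=60.0,
                        cover_cost_per_m2=250.0):
    """Parametric recurrent-cost sketch: part counts follow from the
    geometry (ribs ~ span / rib pitch, stringer run length ~ area /
    stringer pitch) and are multiplied by per-part process costs.
    All coefficients are hypothetical placeholders."""
    n_ribs = span_m / rib_pitch_m
    stringer_length_m = area_m2 / stringer_pitch_m
    return (n_ribs * rib_cost
            + stringer_length_m * stringer_cost_per_m
            + area_m2 * cover_cost_per_m2)

baseline = recurrent_wing_cost(span_m=30.0, area_m2=150.0,
                               rib_pitch_m=0.8, stringer_pitch_m=0.15)
wider_ribs = recurrent_wing_cost(span_m=30.0, area_m2=150.0,
                                 rib_pitch_m=1.0, stringer_pitch_m=0.15)
```

Exposing rib pitch and stringer pitch as parameters is what lets an MDO loop trade manufacturing cost against structural and aerodynamic objectives.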


Author(s):  
Maira Bruck ◽  
Navid Goudarzi ◽  
Peter Sandborn

The cost of energy is an increasingly important issue in the world as renewable energy resources are growing in demand. Performance-based energy contracts are designed to keep the price of energy as low as possible while controlling the risk for both parties (i.e., the Buyer and the Seller). Price and risk are often balanced using complex Power Purchase Agreements (PPAs). Since wind is not a constant supply source, to keep risk low, wind PPAs contain clauses that require the purchase and sale of energy to fall within reasonable limits. However, the existence of those limits also creates pressure on prices causing increases in the Levelized Cost of Energy (LCOE). Depending on the variation in capacity factor (CF), the power generator (the Seller) may find that the limitations on power purchasing given by the utility (the Buyer) are not favorable and will result in higher costs of energy than predicted. Existing cost models do not take into account energy purchase limitations or variations in energy production when calculating an LCOE. A new cost model is developed to evaluate the price of electricity from wind energy under a PPA contract. This study develops a method that an energy Seller can use to negotiate delivery penalties within their PPA. This model has been tested on a controlled wind farm and with real wind farm data. The results show that LCOE depends on the limitations on energy purchase within a PPA contract as well as the expected performance characteristics associated with wind farms.
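The effect of purchase limits on LCOE can be sketched by discounting only the energy the Buyer actually pays for. This is a deliberately simplified cap-only sketch of the delivery-limit effect, not the paper's full model (which also covers penalties and minimum deliveries); all figures are hypothetical:

```python
def lcoe_with_ppa_cap(lifecycle_cost, yearly_energy_mwh, max_buy_mwh,
                      discount_rate=0.07):
    """LCOE when the PPA pays only for energy up to a yearly purchase
    cap; production above the cap earns nothing. Simplified sketch of
    the delivery-limit effect on the levelized cost."""
    discounted_energy = sum(min(e, max_buy_mwh) / (1 + discount_rate) ** t
                            for t, e in enumerate(yearly_energy_mwh, start=1))
    return lifecycle_cost / discounted_energy

energy = [100.0, 120.0, 90.0, 110.0]            # hypothetical yearly output
uncapped = lcoe_with_ppa_cap(50_000.0, energy, max_buy_mwh=1e9)
capped = lcoe_with_ppa_cap(50_000.0, energy, max_buy_mwh=100.0)
```

Tightening the cap strands energy in high-wind years, shrinking the denominator and raising the LCOE even though costs are unchanged.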


2012 ◽  
Vol 2012 (1) ◽  
pp. 000012-000017
Author(s):  
Chet Palesko ◽  
Alan Palesko

Demands on the electronics industry for smaller, better, and cheaper packages have made the supply chain more complex. Outsourcing, new technologies, and increasing performance requirements make designing and building the right product for the right price more difficult than ever. We will present a framework for understanding and managing the supply chain through cost modeling. Cost models that accurately reflect the cost impact of technology and design decisions enable a more precise understanding of supply chain behavior. Cost models can show the extra cost of adding a layer, the expected savings from relaxing design rules, or the cost of package-on-package assembly compared to 3D packaging with through silicon vias (TSVs). The models also provide context for understanding the "should cost" of a product and the path to achieving it. Since the guidance from cost models is based on actual supplier cost drivers and pricing behavior, designer cost reduction efforts will result in higher savings compared to not using the cost models. Without cost models, designers risk missing their suppliers' real cost drivers and, therefore, the opportunity to decrease cost. This cost modeling framework allows designers to realize the lowest cost product by matching the right design with the right supplier. It is a method for understanding a design decision's cost impact: a design change, a supplier change, or even the impact of new technology.


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1352
Author(s):  
Felipe Castro-Medina ◽  
Lisbeth Rodríguez-Mazahua ◽  
Asdrúbal López-Chau ◽  
Jair Cervantes ◽  
Giner Alor-Hernández ◽  
...  

Fragmentation is a design technique widely used in multimedia databases because it substantially reduces response times by lowering the execution cost of each operation performed. Multimedia databases contain data whose main characteristic is their large size; database administrators therefore face a challenge of great importance, since they must account for the varied properties of this non-trivial data. Over time, these databases undergo changes in their access patterns. Different fragmentation techniques presented in related studies show adequate workflows; however, some do not consider changes in access patterns. This paper aims to provide an in-depth review of the literature related to dynamic fragmentation of multimedia databases, to identify the main challenges, technologies employed, types of fragmentation used, and characteristics of the cost models. This review provides valuable information for database administrators by showing the essential characteristics needed to perform proper fragmentation and to improve the performance of fragmentation schemes. Cost reduction is one of the most desirable properties of fragmentation methods; to this end, the surveyed works include cost models covering different qualities. In this analysis, the set of characteristics used in the cost model of each work is presented to facilitate the creation of a new cost model that includes the most used qualities. In addition, the different datasets or reference points used in the testing stage of each analyzed work are presented.


2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Prasanth Chatarasi ◽  
Hyoukjun Kwon ◽  
Angshuman Parashar ◽  
Michael Pellauer ◽  
Tushar Krishna ◽  
...  

A spatial accelerator’s efficiency depends heavily on both its mapper and cost models to generate optimized mappings for various operators of DNN models. However, existing cost models lack a formal boundary over their input programs (operators) for accurate and tractable cost analysis of the mappings, which makes adapting the cost models to new operators challenging. We consider the recently introduced Maestro Data-Centric (MDC) notation and its analytical cost model to address this challenge, because any mapping expressed in the notation is precisely analyzable using the MDC’s cost model. In this article, we characterize the set of input operators and their mappings expressed in the MDC notation by introducing a set of conformability rules. The outcome of these rules is that any loop nest that is perfectly nested with affine tensor subscripts and without conditionals is conformable to the MDC notation. A majority of the primitive operators in deep learning are such loop nests. In addition, our rules enable us to automatically translate a mapping expressed in loop nest form to MDC notation and use the MDC’s cost model to guide upstream mappers. Our conformability rules over the input operators result in a structured mapping space of the operators, which enables us to introduce a mapper based on our decoupled off-chip/on-chip approach to accelerate mapping space exploration. Our mapper decomposes the original higher-dimensional mapping space of operators into two lower-dimensional off-chip and on-chip subspaces and then optimizes the off-chip subspace followed by the on-chip subspace. We implemented our overall approach in a tool called Marvel, and a benefit of our approach is that it applies to any operator conformable with the MDC notation. We evaluated Marvel over major DNN operators and compared it with past optimizers.
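The decoupling idea can be sketched as a two-stage search: fix the best off-chip mapping first, then search the on-chip subspace under it, turning a product-sized space into a sum-sized one. The option names and cost numbers below are made up for illustration, not MDC's actual cost model:

```python
def decoupled_search(off_options, on_options, off_cost, on_cost):
    """Two-stage mapping search: optimize the off-chip subspace first,
    then the on-chip subspace under the chosen off-chip mapping.
    Evaluates len(off) + len(on) candidates instead of the
    len(off) * len(on) candidates of a joint search."""
    best_off = min(off_options, key=off_cost)
    best_on = min(on_options, key=lambda on: on_cost(best_off, on))
    return best_off, best_on

# Hypothetical costs: off-chip cost as a DRAM-traffic proxy per tiling,
# on-chip cost per loop order under the chosen tiling.
off_costs = {"tile64": 10.0, "tile128": 6.0, "tile256": 9.0}
on_costs = {("tile128", "KC"): 4.0, ("tile128", "CK"): 2.0,
            ("tile128", "XY"): 5.0}

mapping = decoupled_search(list(off_costs), ["KC", "CK", "XY"],
                           off_costs.get,
                           lambda off, on: on_costs[(off, on)])
```

The trade-off is that the stages are optimized greedily rather than jointly, which is why the decomposition must be justified by the structure of the mapping space.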


2008 ◽  
Vol 8 (3) ◽  
pp. 393-409 ◽  
Author(s):  
EDNA RUCKHAUS ◽  
EDUARDO RUIZ ◽  
MARÍA-ESTHER VIDAL

Abstract We address the problem of answering Web ontology queries efficiently. An ontology is formalized as a deductive ontology base (DOB), a deductive database that comprises the ontology's inference axioms and facts. A cost-based query optimization technique for DOB is presented. A hybrid cost model is proposed to estimate the cost and cardinality of basic and inferred facts. Cardinality and cost of inferred facts are estimated using an adaptive sampling technique, while techniques from traditional relational cost models are used for estimating the cost of basic facts and conjunctive ontology queries. Finally, we implement a dynamic-programming optimization algorithm to identify query evaluation plans that minimize the number of intermediate inferred facts. We modeled a subset of the Web Ontology Language (OWL Lite) as a DOB and performed an experimental study to analyze the predictive capacity of our cost model and the benefits of the query optimization technique. Our study has been conducted over synthetic and real-world Web ontology language ontologies and shows that the techniques are accurate and improve query performance.
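Sampling-based cardinality estimation for inferred facts can be pictured as probing random instantiations and scaling up the hit rate. A minimal sketch, simplified to a fixed sample size rather than the adaptive stopping rule described above:

```python
import random

def sampled_cardinality(population, predicate, sample_size, seed=0):
    """Estimate |{x in population : predicate(x)}| from a random
    sample, in the spirit of sampling-based cardinality estimation
    for inferred facts (simplified: fixed sample size instead of an
    adaptive stopping rule)."""
    rng = random.Random(seed)
    hits = sum(predicate(rng.choice(population)) for _ in range(sample_size))
    return hits / sample_size * len(population)

# Toy fact base: estimate how many of 1000 candidate facts are derivable,
# using a trivial stand-in predicate (true cardinality here is 500).
facts = list(range(1000))
estimate = sampled_cardinality(facts, lambda x: x % 2 == 0, sample_size=400)
```

The appeal for DOBs is that the estimator never materializes all inferred facts; it only tests whether sampled candidates are derivable.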


2019 ◽  
Vol 35 (6) ◽  
pp. 258-269
Author(s):  
Casey R. Tak ◽  
Jaewhan Kim ◽  
Karen Gunning ◽  
Catherine M. Sherwin ◽  
Nancy A. Nickman ◽  
...  

Background: Rates of zoster vaccination in US adults aged 60+ were approximately 30.6% in 2015. Out-of-pocket cost-sharing has been identified as a major barrier to vaccination for patients. To date, herpes zoster vaccine cost-sharing requirements for adults aged 60 to 64 have not been described. Objective: To compare the cost-sharing requirements for zoster vaccination in adults aged 60 to 64 and adults aged 65+. Methods: A retrospective cohort design examined pharmacy claims for zoster vaccination from the Utah All Payer Claims Database for adults aged 60+. Descriptive statistics and a two-part cost model compared cost-sharing requirements for adults aged 60 to 64 and adults aged 65+. Results: Of the 30 293 zoster vaccine claims, 13 398 (45.8%) had no cost-sharing, 1716 (5.9%) had low cost-sharing (defined as $1 to less than $30), and 14 133 (48.3%) had high cost-sharing (defined as $30 or more). In the cost models, adults aged 65+ had higher odds of any cost-sharing (odds ratio = 39.86) and 29% higher cost-sharing as compared with adults aged 60 to 64. Conclusions: Adults aged 60 to 64 encounter lower cost-sharing requirements than adults aged 65+. Providers should be cognizant of this dynamic and encourage zoster vaccination prior to the age of 65.
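A two-part cost model of this form combines a logit for whether any cost-sharing occurs with a positive-amount part, so the expected cost is the product of the two. The sketch below uses hypothetical coefficients, not the study's estimates:

```python
import math

def two_part_expected_cost(x, logit_coefs, amount_coefs):
    """Two-part model sketch: a logit gives the probability of any
    cost-sharing, a log-linear part gives the expected amount given
    that cost-sharing occurs, and E[cost] = p * E[amount | amount > 0].
    Coefficients are hypothetical, not the study's estimates."""
    z = logit_coefs[0] + sum(b * v for b, v in zip(logit_coefs[1:], x))
    p_any = 1.0 / (1.0 + math.exp(-z))
    log_amount = amount_coefs[0] + sum(b * v for b, v in zip(amount_coefs[1:], x))
    return p_any * math.exp(log_amount)

# Single covariate: x = [1] flags an adult aged 65+, x = [0] aged 60-64.
cost_65_plus = two_part_expected_cost([1], (-1.0, 2.0), (3.0, 0.25))
cost_60_64 = two_part_expected_cost([0], (-1.0, 2.0), (3.0, 0.25))
```

Splitting the model this way handles the large mass of zero-cost claims (45.8% here) that a single regression on cost would fit poorly.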


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2954 ◽  
Author(s):  
Sudheer Kumar Battula ◽  
Saurabh Garg ◽  
Ranesh Kumar Naha ◽  
Parimala Thulasiraman ◽  
Ruppa Thulasiram

Fog computing aims to support applications requiring low latency and high scalability by using resources at the edge level. In general, fog computing comprises several autonomous mobile or static devices that share their idle resources to run different services. The providers of these devices also need to be compensated based on their device usage. In any fog-based resource-allocation problem, both cost and performance need to be considered for generating an efficient resource-allocation plan. Estimating the cost of using fog devices prior to the resource allocation helps to minimize the cost and maximize the performance of the system. In the fog computing domain, recent research works have proposed various resource-allocation algorithms without considering the compensation to resource providers or the cost estimation of the fog resources. Moreover, the existing cost models in similar paradigms such as the cloud are not suitable for fog environments, as scaling heterogeneous, autonomous resources with a variety of offerings is much more complicated. To fill this gap, this study first proposes a micro-level compensation cost model and then proposes a new resource-allocation method based on the cost model, which benefits both providers and users. Experimental results show that the proposed algorithm ensures better resource-allocation performance and lowers application processing costs when compared to the existing best-fit algorithm.
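A micro-level compensation model can be pictured as paying each provider per unit of every resource a task consumes, and then allocating each task to the cheapest device that can fit it. The device names, resource types, and prices below are hypothetical, and this greedy allocator stands in for, rather than reproduces, the paper's method:

```python
def device_cost(device, task):
    """Micro-level compensation sketch: the provider is paid per unit
    of each resource the task consumes (unit prices are hypothetical)."""
    return sum(task[r] * device["price"][r] for r in task)

def allocate(devices, task):
    """Pick the cheapest device that can still fit the task."""
    feasible = [d for d in devices
                if all(d["free"][r] >= task[r] for r in task)]
    return min(feasible, key=lambda d: device_cost(d, task)) if feasible else None

devices = [
    {"name": "edge-a", "free": {"cpu": 2, "mem_gb": 4},
     "price": {"cpu": 0.05, "mem_gb": 0.01}},
    {"name": "edge-b", "free": {"cpu": 4, "mem_gb": 8},
     "price": {"cpu": 0.03, "mem_gb": 0.02}},
]
task = {"cpu": 2, "mem_gb": 2}
chosen = allocate(devices, task)
```

Unlike plain best-fit, which only considers capacity, the cost-aware choice here prefers the device whose compensation price for this task's resource mix is lowest.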

