Marvel: A Data-Centric Approach for Mapping Deep Learning Operators on Spatial Accelerators

2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Prasanth Chatarasi ◽  
Hyoukjun Kwon ◽  
Angshuman Parashar ◽  
Michael Pellauer ◽  
Tushar Krishna ◽  
...  

A spatial accelerator’s efficiency depends heavily on both its mapper and cost models to generate optimized mappings for the various operators of DNN models. However, existing cost models lack a formal boundary over their input programs (operators) for accurate and tractable cost analysis of the mappings, which makes it hard to adapt the cost models to new operators. We consider the recently introduced Maestro Data-Centric (MDC) notation and its analytical cost model to address this challenge, because any mapping expressed in the notation is precisely analyzable using the MDC’s cost model. In this article, we characterize the set of input operators and their mappings expressed in the MDC notation by introducing a set of conformability rules. The outcome of these rules is that any loop nest that is perfectly nested with affine tensor subscripts and without conditionals is conformable to the MDC notation. A majority of the primitive operators in deep learning are such loop nests. In addition, our rules enable us to automatically translate a mapping expressed in loop nest form into the MDC notation and use the MDC’s cost model to guide upstream mappers. Our conformability rules over the input operators result in a structured mapping space of the operators, which enables us to introduce a mapper based on our decoupled off-chip/on-chip approach to accelerate mapping space exploration. Our mapper decomposes the original higher-dimensional mapping space of operators into two lower-dimensional off-chip and on-chip subspaces and then optimizes the off-chip subspace followed by the on-chip subspace. We implemented our overall approach in a tool called Marvel, and a benefit of our approach is that it applies to any operator conformable with the MDC notation. We evaluated Marvel over major DNN operators and compared it with past optimizers.
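As a concrete illustration of the class of operators these rules admit (a hypothetical sketch, not code from the paper), a 2D convolution can be written as a perfectly nested loop with affine tensor subscripts and no conditionals:

```python
import numpy as np

def conv2d_loop_nest(inputs, weights):
    """2D convolution written as a perfectly nested loop with affine tensor
    subscripts and no conditionals -- the shape of operator that the
    conformability rules admit into the MDC notation."""
    N, C, H, W = inputs.shape          # batch, input channels, rows, cols
    K, _, R, S = weights.shape         # output channels, filter rows/cols
    P, Q = H - R + 1, W - S + 1        # output rows/cols (no padding, stride 1)
    outputs = np.zeros((N, K, P, Q))
    for n in range(N):
        for k in range(K):
            for c in range(C):
                for p in range(P):
                    for q in range(Q):
                        for r in range(R):
                            for s in range(S):
                                # all subscripts (p + r, q + s, ...) are affine
                                outputs[n, k, p, q] += (
                                    inputs[n, c, p + r, q + s] * weights[k, c, r, s]
                                )
    return outputs
```

Mappings of such a nest, expressed in loop nest form, are what the rules allow to be translated into the MDC notation and analyzed with its cost model.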

Author(s):  
Elvira Albert ◽  
Jesús Correas ◽  
Pablo Gordillo ◽  
Guillermo Román-Díez ◽  
Albert Rubio

Abstract We present the main concepts, components, and usage of Gasol, a Gas AnalysiS and Optimization tooL for Ethereum smart contracts. Gasol offers a wide variety of cost models that allow inferring the gas consumption associated with selected types of EVM instructions and/or inferring the number of times that such types of bytecode instructions are executed. Among others, we have cost models to measure only storage opcodes, to measure a selected family of gas-consumption opcodes following Ethereum’s classification, to estimate the cost of a selected program line, etc. After choosing the desired cost model and the function of interest, Gasol returns to the user an upper bound of the cost for this function. As the gas consumption is often dominated by the instructions that access the storage, Gasol uses the gas analysis to detect under-optimized storage patterns and includes an (optional) automatic optimization of the selected function. Our tool can be used within an Eclipse plugin which displays the gas and instruction bounds and, when applicable, the gas-optimized function.
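As a rough illustration of what a storage-focused cost model measures (a hypothetical sketch with assumed gas weights, not Gasol's analysis, which infers symbolic upper bounds rather than counting a concrete trace), one can weight storage opcodes along an execution path with gas-schedule constants:

```python
# Hypothetical sketch: bound the gas of a bytecode path by counting storage
# opcodes and weighting them with illustrative gas-schedule constants.
# The real EVM schedule depends on the hard fork and on warm/cold access.
STORAGE_GAS = {"SLOAD": 2100, "SSTORE": 20000}   # illustrative weights only

def storage_gas_upper_bound(opcode_path):
    """Sum the gas of storage opcodes along one execution path."""
    return sum(STORAGE_GAS.get(op, 0) for op in opcode_path)

print(storage_gas_upper_bound(["PUSH1", "SLOAD", "PUSH1", "SSTORE"]))  # 22100
```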


1999 ◽  
Vol 103 (1026) ◽  
pp. 383-388 ◽  
Author(s):  
K. Gantois ◽  
A. J. Morris

Abstract The paper describes a metal and composite recurrent cost model of a large civil aircraft wing structure for a multidisciplinary design, analysis and optimisation (MDO) environment. The work was part of a recent European MDO project (BE95-2056) which investigated methods for the integration of structures, aerodynamics, dynamics and manufacturing cost at the preliminary design stage. The paper discusses the cost modelling approach, which is based on parametric and process cost model methods, and the integration of the cost models into an MDO process. Results for the cost models are shown. A framework has been successfully developed which allows the incorporation of manufacturing cost models into an MDO environment. It allows a designer to evaluate cost changes with respect to specific design changes such as rib pitch, stringer pitch, wing area and wing sweep.
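A parametric cost model of this kind typically expresses manufacturing cost as a function of a few driving design variables such as rib pitch, stringer pitch, wing area and sweep. The sketch below is purely hypothetical: the functional form and coefficients are assumptions for illustration, not the project's model:

```python
def wing_recurring_cost(wing_area_m2, rib_pitch_m, stringer_pitch_m, sweep_deg,
                        c_area=1200.0, c_rib=850.0, c_stringer=40.0, c_sweep=0.015):
    """Hypothetical parametric recurring-cost estimate (arbitrary currency units).
    Smaller rib/stringer pitch implies more parts and assembly cost; higher sweep
    adds a manufacturing-complexity penalty. Coefficients are illustrative only."""
    span_proxy = wing_area_m2 ** 0.5            # crude length driver
    n_ribs = span_proxy / rib_pitch_m
    n_stringers = span_proxy / stringer_pitch_m
    base = c_area * wing_area_m2
    assembly = c_rib * n_ribs + c_stringer * n_stringers
    complexity = 1.0 + c_sweep * sweep_deg
    return (base + assembly) * complexity
```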


Author(s):  
Maira Bruck ◽  
Navid Goudarzi ◽  
Peter Sandborn

The cost of energy is an increasingly important issue in the world as renewable energy resources are growing in demand. Performance-based energy contracts are designed to keep the price of energy as low as possible while controlling the risk for both parties (i.e., the Buyer and the Seller). Price and risk are often balanced using complex Power Purchase Agreements (PPAs). Since wind is not a constant supply source, to keep risk low, wind PPAs contain clauses that require the purchase and sale of energy to fall within reasonable limits. However, the existence of those limits also creates pressure on prices causing increases in the Levelized Cost of Energy (LCOE). Depending on the variation in capacity factor (CF), the power generator (the Seller) may find that the limitations on power purchasing given by the utility (the Buyer) are not favorable and will result in higher costs of energy than predicted. Existing cost models do not take into account energy purchase limitations or variations in energy production when calculating an LCOE. A new cost model is developed to evaluate the price of electricity from wind energy under a PPA contract. This study develops a method that an energy Seller can use to negotiate delivery penalties within their PPA. This model has been tested on a controlled wind farm and with real wind farm data. The results show that LCOE depends on the limitations on energy purchase within a PPA contract as well as the expected performance characteristics associated with wind farms.
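The standard LCOE calculation divides discounted lifetime cost by discounted delivered energy. The sketch below (a simplified illustration, not the paper's model) adds the PPA feature the abstract describes: a cap on how much energy the Buyer will purchase each year, so production above the cap contributes nothing to the denominator and the effective LCOE rises:

```python
def lcoe_with_purchase_cap(capex, annual_opex, annual_energy_mwh,
                           purchase_cap_mwh, discount_rate, years):
    """Simplified LCOE ($/MWh): discounted lifetime cost divided by the
    discounted energy the Buyer will actually purchase. Energy above the
    PPA cap earns nothing, which raises the effective LCOE."""
    cost = capex
    energy = 0.0
    for t in range(1, years + 1):
        d = (1 + discount_rate) ** t
        cost += annual_opex / d
        energy += min(annual_energy_mwh, purchase_cap_mwh) / d
    return cost / energy

# Example (illustrative numbers): the cap binds, so the LCOE is higher
# than it would be if all produced energy were purchased.
print(lcoe_with_purchase_cap(1.5e8, 3.0e6, 350_000, 300_000, 0.07, 20))
```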


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1352
Author(s):  
Felipe Castro-Medina ◽  
Lisbeth Rodríguez-Mazahua ◽  
Asdrúbal López-Chau ◽  
Jair Cervantes ◽  
Giner Alor-Hernández ◽  
...  

Fragmentation is a design technique widely used in multimedia databases because it substantially reduces response times and lowers the execution cost of each operation performed. Multimedia databases include data whose main characteristic is their large size, so database administrators face a significant challenge: they must account for the different qualities of non-trivial data. These databases undergo changes in their access patterns over time. Different fragmentation techniques presented in related studies show adequate workflows; however, some do not account for changes in access patterns. This paper aims to provide an in-depth review of the literature on dynamic fragmentation of multimedia databases, in order to identify the main challenges, the technologies employed, the types of fragmentation used, and the characteristics of the cost models. This review provides valuable information for database administrators by showing the essential characteristics of proper fragmentation and how to improve the performance of fragmentation schemes. Cost reduction is one of the most desired properties of fragmentation methods. To this end, the surveyed works include cost models covering different qualities. In this analysis, the set of characteristics used in the cost model of each work is presented to facilitate the creation of a new cost model that includes the most commonly used qualities. In addition, the data sets or reference points used in the testing stage of each analyzed work are presented.


2008 ◽  
Vol 8 (3) ◽  
pp. 393-409 ◽  
Author(s):  
EDNA RUCKHAUS ◽  
EDUARDO RUIZ ◽  
MARÍA-ESTHER VIDAL

Abstract We address the problem of answering Web ontology queries efficiently. An ontology is formalized as a deductive ontology base (DOB), a deductive database that comprises the ontology's inference axioms and facts. A cost-based query optimization technique for DOBs is presented. A hybrid cost model is proposed to estimate the cost and cardinality of basic and inferred facts. The cardinality and cost of inferred facts are estimated using an adaptive sampling technique, while techniques from traditional relational cost models are used to estimate the cost of basic facts and conjunctive ontology queries. Finally, we implement a dynamic-programming optimization algorithm to identify query evaluation plans that minimize the number of intermediate inferred facts. We modeled a subset of the Web Ontology Language (OWL) Lite as a DOB and performed an experimental study to analyze the predictive capacity of our cost model and the benefits of the query optimization technique. Our study, conducted over synthetic and real-world OWL ontologies, shows that the techniques are accurate and improve query performance.
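A minimal sketch of the adaptive-sampling idea (an assumed formulation, not the paper's exact estimator): probe random candidate bindings of an inferred predicate and scale the observed hit rate, stopping once the estimate stabilizes, instead of materializing all inferred facts:

```python
import random

def estimate_cardinality(candidate_bindings, holds, target_rel_error=0.1, min_samples=30):
    """Adaptive sampling: test random candidate bindings of an inferred
    predicate, scale the hit rate to the size of the candidate space, and
    stop once a crude relative-error estimate falls below the target."""
    hits, n = 0, 0
    N = len(candidate_bindings)
    while n < N:
        binding = random.choice(candidate_bindings)
        n += 1
        hits += bool(holds(binding))          # holds() checks derivability
        if n >= min_samples and hits > 0:
            p = hits / n
            rel_err = ((p * (1 - p) / n) ** 0.5) / p
            if rel_err < target_rel_error:
                break
    return (hits / max(n, 1)) * N             # estimated number of inferred facts
```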


Classical query optimizers rely on sophisticated cost models to estimate the cost of executing a query and its operators. Using this cost model, the optimizer creates an efficient global plan that is then used to execute a given query. This cost modeling facility is difficult to implement in Web query engines because many local data sources may be unwilling to share metadata due to confidentiality concerns. In this work, efficient and effective cost modeling techniques for Web query engines are proposed. These techniques do not force the local data sources to reveal their metadata but employ a learning mechanism to estimate the cost of executing a given local query. Two cost modeling algorithms, namely the Poisson cost model and the Exponential cost model, are presented. Empirical results over real-world datasets demonstrate the efficiency and effectiveness of the new cost models.
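A hedged sketch of such a learning-based approach (the fitting procedure below is an assumption and does not reproduce the paper's algorithms): sample queries against a local source, record their costs, and fit the single parameter of each model by maximum likelihood:

```python
import math

def fit_poisson(costs):
    """MLE for a Poisson cost model: lambda equals the sample mean,
    which also serves as the predicted (expected) cost."""
    return sum(costs) / len(costs)

def fit_exponential(costs):
    """MLE for an Exponential cost model: rate = 1 / sample mean."""
    return len(costs) / sum(costs)

def exponential_quantile(rate, q):
    """Cost budget within which a fraction q of queries should finish."""
    return -math.log(1 - q) / rate

observed = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0]          # sampled response times (s), illustrative
rate = fit_exponential(observed)
print(1 / rate, exponential_quantile(rate, 0.95))  # predicted mean cost and 95th percentile
```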


2019 ◽  
Vol 35 (6) ◽  
pp. 258-269
Author(s):  
Casey R. Tak ◽  
Jaewhan Kim ◽  
Karen Gunning ◽  
Catherine M. Sherwin ◽  
Nancy A. Nickman ◽  
...  

Background: Rates of zoster vaccination in US adults aged 60+ were approximately 30.6% in 2015. Out-of-pocket cost-sharing has been identified as a major barrier to vaccination for patients. To date, herpes zoster vaccine cost-sharing requirements for adults aged 60 to 64 have not been described. Objective: Compare the cost-sharing requirements for zoster vaccination in adults aged 60 to 64 and adults aged 65+. Methods: A retrospective cohort design examined pharmacy claims for zoster vaccination from the Utah All Payer Claims Database for adults aged 60+. Descriptive statistics and a 2-part cost model compared cost-sharing requirements for adults aged 60 to 64 and adults aged 65+. Results: Of the 30 293 zoster vaccine claims, 13 398 (45.8%) had no cost-sharing, 1716 (5.9%) had low cost-sharing (defined as $1 to less than $30), and 14 133 (48.3%) had high cost-sharing (defined as $30 or more). In the cost models, adults aged 65+ had higher odds of any cost-sharing (odds ratio = 39.86) and 29% higher cost-sharing than adults aged 60 to 64. Conclusions: Adults aged 60 to 64 encounter lower cost-sharing requirements than adults aged 65+. Providers should be cognizant of this dynamic and encourage zoster vaccination prior to the age of 65.
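A 2-part cost model of this general type is commonly fit as a logistic regression for whether any cost-sharing is paid, followed by a regression on the positive amounts. The sketch below is a generic illustration using statsmodels, not the study's code:

```python
import numpy as np
import statsmodels.api as sm

def fit_two_part_model(X, cost):
    """Generic two-part cost model. Part 1 models Pr(cost > 0) with logistic
    regression; part 2 models the positive costs (here a log-linear OLS).
    X is a 2D numpy array of covariates, cost a 1D numpy array of amounts."""
    X = sm.add_constant(X)
    any_cost = (cost > 0).astype(float)
    part1 = sm.Logit(any_cost, X).fit(disp=0)
    positive = cost > 0
    part2 = sm.OLS(np.log(cost[positive]), X[positive]).fit()
    return part1, part2

# Expected cost for a new covariate row x (with constant added):
#   E[cost | x] = Pr(cost > 0 | x) * E[cost | cost > 0, x]
```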


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2954 ◽  
Author(s):  
Sudheer Kumar Battula ◽  
Saurabh Garg ◽  
Ranesh Kumar Naha ◽  
Parimala Thulasiraman ◽  
Ruppa Thulasiram

Fog computing aims to support applications requiring low latency and high scalability by using resources at the edge level. In general, fog computing comprises several autonomous mobile or static devices that share their idle resources to run different services. The providers of these devices also need to be compensated based on their device usage. In any fog-based resource-allocation problem, both cost and performance need to be considered for generating an efficient resource-allocation plan. Estimating the cost of using fog devices prior to the resource allocation helps to minimize the cost and maximize the performance of the system. In the fog computing domain, recent research works have proposed various resource-allocation algorithms without considering the compensation to resource providers and the cost estimation of the fog resources. Moreover, the existing cost models in similar paradigms, such as the cloud, are not suitable for fog environments, as scaling different autonomous resources with heterogeneous and varied offerings is much more complicated. To fill this gap, this study first proposes a micro-level compensation cost model and then proposes a new resource-allocation method based on the cost model, which benefits both providers and users. Experimental results show that the proposed algorithm ensures better resource-allocation performance and lowers application processing costs when compared to the existing best-fit algorithm.
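A micro-level compensation model of this flavor charges for the fine-grained resources a task actually consumes on a provider's device. The sketch below is an assumed formulation (the linear form, unit prices, and the greedy allocation step are illustrative, not the paper's algorithm):

```python
from dataclasses import dataclass

@dataclass
class FogDeviceRates:
    """Illustrative per-unit compensation rates offered by a device provider."""
    cpu_per_core_second: float
    mem_per_gb_second: float
    net_per_gb: float

def task_compensation(rates, cpu_core_seconds, mem_gb_seconds, gb_transferred):
    """Micro-level cost of one task on one fog device (assumed linear model)."""
    return (rates.cpu_per_core_second * cpu_core_seconds
            + rates.mem_per_gb_second * mem_gb_seconds
            + rates.net_per_gb * gb_transferred)

def cheapest_feasible_device(devices, demand):
    """One cost-aware allocation step: among devices with enough free capacity,
    pick the one with the lowest estimated compensation (illustrative only)."""
    feasible = [d for d in devices
                if d["free_cores"] >= demand["cores"] and d["free_mem_gb"] >= demand["mem_gb"]]
    return min(feasible,
               key=lambda d: task_compensation(d["rates"],
                                               demand["cpu_core_seconds"],
                                               demand["mem_gb"] * demand["duration_s"],
                                               demand["gb_transferred"]),
               default=None)
```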


2018 ◽  
Vol 20 (2) ◽  
pp. 125-148 ◽  
Author(s):  
Simon Forge ◽  
Lara Srivastava

Purpose Tariffs for international mobile roaming (IMR) are often viewed by governments as an additional tax on international trade and on tourism. IMR customer bills may appear arbitrary and sometimes excessive. The purpose of this paper is therefore to set out a pragmatic approach to assessing international charges for mobile roaming, making use of a realistic cost model of the international roaming process and its cost elements, at a level that is useful to regulatory authorities and operators. Design/methodology/approach The discussion is based on industry practices for handling voice calls and data sessions within the mobile network operator (MNO) business model, drawing on industry sources. The basic mechanisms use two common constructs from business analysis – business processes and use-cases – to provide a simplified form of activity-based costing. This provides a model suitable for national regulatory authorities to move towards cost-based IMR tariffs. Findings Using a perspective on costs based on a bottom-up survey procedure for eliciting the key information, the paper presents the cost elements for the various IMR network components and business processes, with an approach suitable for analysing both wholesale and retail pricing. Research limitations/implications The method is specifically designed to overcome the key problem of such approaches: the limitations set by differences in network technologies, network topology, operational scale and engineering, as well as in MNO business models and accounting practices, which would otherwise preclude the method presented here from being vendor neutral. Practical implications Vendor and network-engineering neutrality implies the approach can be used to compare different MNOs in terms of the validity of their IMR charges and whether they are cost based. Social implications The societal impact of so-called “bill-shock” has become quite common, increasingly for data sessions. The cost model presented here was developed with the intention of improving the accountability and transparency of the mobile roaming market. It thus assists in the introduction of cost-based tariffs over an economic region such as the European Union. Originality/value The paper examines the practical implications of building large-scale cost models for assessing real IMR costs, a modelling exercise that has not been seen elsewhere in terms of its approach and its neutrality as to MNO structure and assets.
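A simplified activity-based-costing sketch (the activity names and unit costs are assumptions, purely for illustration) of how the cost of a roamed voice call can be built up from the business-process steps it consumes:

```python
# Hypothetical activity-based costing sketch for one roamed voice call.
# Activities and unit costs are illustrative assumptions, not survey data.
ACTIVITY_UNIT_COST = {
    "visited_network_airtime_per_min": 0.020,
    "international_transit_per_min": 0.010,
    "home_network_routing_per_min": 0.005,
    "wholesale_billing_per_event": 0.002,
}

def roamed_call_cost(minutes):
    """Cost of a roamed call as the sum of the activities it consumes."""
    per_minute = (ACTIVITY_UNIT_COST["visited_network_airtime_per_min"]
                  + ACTIVITY_UNIT_COST["international_transit_per_min"]
                  + ACTIVITY_UNIT_COST["home_network_routing_per_min"])
    return per_minute * minutes + ACTIVITY_UNIT_COST["wholesale_billing_per_event"]

print(roamed_call_cost(10))  # cost of a 10-minute roamed call under the assumed rates
```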


2021 ◽  
Author(s):  
Yann Haddad ◽  
Michaël Defferrard ◽  
Gionata Ghiggi

Ensemble predictions are essential to characterize the forecast uncertainty and the likelihood of an event occurring. Stochasticity in predictions comes from data and model uncertainty. In deep learning (DL), data uncertainty can be approached by training an ensemble of DL models on data subsets or by performing data augmentations (e.g., random or singular value decomposition (SVD) perturbations). Model uncertainty is typically addressed by training a DL model multiple times from different weight initializations (DeepEnsemble) or by training sub-networks by dropping weights (Dropout). Dropout is cheap but less effective, while DeepEnsemble is computationally expensive.

We propose instead to tackle model uncertainty with SWAG (Maddox et al., 2019), a method to learn stochastic weights whose sampling allows drawing hundreds of forecast realizations at a fraction of the cost required by DeepEnsemble. In the context of data-driven weather forecasting, we demonstrate that the SWAG ensemble i) has better deterministic skill than a single DL model trained in the usual way, and ii) approaches the deterministic and probabilistic skill of DeepEnsemble at a fraction of the cost. Finally, multiSWAG (SWAG applied on top of DeepEnsemble models) provides a trade-off between computational cost, model diversity, and performance.

We believe that the method we present will become a common tool to generate large ensembles at a fraction of the current cost. Additionally, the possibility of sampling DL models allows the design of data-driven/emulated stochastic model components and sub-grid parameterizations.

Reference

Maddox, W. J., Garipov, T., Izmailov, P., Vetrov, D., and Wilson, A. G., 2019: A Simple Baseline for Bayesian Uncertainty in Deep Learning. arXiv:1902.02476
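A minimal diagonal-SWAG sketch in PyTorch (following the general recipe of Maddox et al., 2019, but simplified: the low-rank covariance term and the forecasting model itself are omitted): track running first and second moments of the weights along the SGD trajectory, then sample weight vectors to draw ensemble members cheaply:

```python
import torch

class DiagonalSWAG:
    """Simplified SWAG: running first/second moments of the flattened weights,
    sampled as a diagonal Gaussian. The full method also keeps a low-rank
    deviation matrix; this is only a minimal sketch."""
    def __init__(self, model):
        self.n = 0
        flat = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
        self.mean = torch.zeros_like(flat)
        self.sq_mean = torch.zeros_like(flat)

    def collect(self, model):
        """Call periodically during the tail of SGD training."""
        w = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
        self.n += 1
        self.mean += (w - self.mean) / self.n
        self.sq_mean += (w * w - self.sq_mean) / self.n

    def sample_into(self, model, scale=0.5):
        """Draw one weight sample and load it into the model in place."""
        var = torch.clamp(self.sq_mean - self.mean ** 2, min=1e-30)
        w = self.mean + scale * var.sqrt() * torch.randn_like(self.mean)
        torch.nn.utils.vector_to_parameters(w, model.parameters())
```

Each call to sample_into yields one ensemble member; averaging the forecasts of many such members approximates the SWAG predictive distribution at a fraction of the cost of retraining.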

