cost metrics
Recently Published Documents

TOTAL DOCUMENTS: 75 (five years: 26)
H-INDEX: 11 (five years: 2)

2022 · Vol 3 (1) · pp. 1-14
Author(s): Alexandru Paler, Robert Basmadjian

Quantum circuits are difficult to simulate, and their automated optimisation is complex as well. Significant optimisations have so far been achieved manually (pen and paper) rather than by software. This is the first in-depth study of the cost of compiling and optimising large-scale quantum circuits with state-of-the-art quantum software. We propose a hierarchy of cost metrics covering the quantum software stack and use energy as the long-term cost of operating hardware. We quantify optimisation costs by estimating the energy consumed by a CPU performing the quantum circuit optimisation. We use QUANTIFY, a tool based on Google Cirq, to optimise bucket-brigade QRAM and multiplication circuits with between 32 and 8,192 qubits. Although our classical optimisation methods have polynomial complexity, we observe that their energy cost grows extremely fast with the number of qubits. We profile the methods and software and provide evidence of high constant costs associated with the operations performed during optimisation. These costs stem from dynamically typed programming languages and the generic data structures used in the background. We conclude that state-of-the-art quantum software frameworks have to massively improve their scalability to be practical for large circuits.
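The energy figure described in the abstract is, at its core, wall-clock optimisation time multiplied by the power the CPU draws while doing the work. A minimal Python sketch of that idea follows; it is not the QUANTIFY tool, and it uses a made-up peephole cancellation pass and an assumed 65 W package power purely for illustration.

```python
# Minimal sketch (not QUANTIFY): estimate the classical energy cost of a
# circuit-optimisation pass as (wall-clock time) x (assumed CPU power draw).
import time
import random

CPU_POWER_WATTS = 65.0  # assumption: nominal package power of the CPU doing the work

def cancel_adjacent_inverses(gates):
    """Toy peephole pass: drop adjacent identical self-inverse gates (e.g. X.X = I)."""
    out = []
    for gate in gates:
        if out and out[-1] == gate and gate[0] in ("X", "Z", "H"):
            out.pop()            # the pair cancels to identity
        else:
            out.append(gate)
    return out

# Illustrative random circuit over `n` qubits, stored as a flat gate list.
n = 1024
gates = [(random.choice(["X", "Z", "H"]), random.randrange(n)) for _ in range(200_000)]

start = time.perf_counter()
optimised = cancel_adjacent_inverses(gates)
elapsed = time.perf_counter() - start

energy_joules = elapsed * CPU_POWER_WATTS
print(f"{len(gates) - len(optimised)} gates removed, "
      f"{elapsed:.3f} s of CPU time = approx. {energy_joules:.1f} J at {CPU_POWER_WATTS} W")
```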


2021 · Vol 2021 · pp. 1-5
Author(s): K. Mahalakshmi, K. Kousalya, Himanshu Shekhar, Aby K. Thomas, L. Bhagyalakshmi, ...

Cloud storage is a potential replacement for physical disk drives, offered as a prominent outsourcing service. A threat from an untrusted server affects the security and integrity of the data. The central difficulty is that data integrity and the cost of communication and computation are directly proportional: stronger integrity guarantees cost more. It is hence necessary to develop a model that trades off data integrity against cost metrics in the cloud environment. In this paper, we develop an integrity verification mechanism that combines a cryptographic solution with algebraic signatures. The model uses the elliptic curve digital signature algorithm (ECDSA) to verify the outsourced data. The scheme further resists malicious attacks, including forgery, replacement, and replay attacks. Symmetric encryption guarantees the privacy of the data. A simulation is conducted to test the efficacy of the algorithm in maintaining data integrity at reduced cost. The performance of the entire model is tested against existing methods in terms of communication cost, computation cost, and overhead cost. The simulation results show that the proposed method achieves a computation cost reduced by 0.25% and a communication cost reduced by 0.21% compared with other public auditing schemes.
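The ECDSA step can be illustrated with a short sketch. This is not the paper's full auditing protocol (the algebraic-signature and symmetric-encryption parts are omitted); it only shows a data owner signing an outsourced block and a verifier checking it, using the widely available Python `cryptography` package.

```python
# Hedged sketch of the ECDSA piece only: the data owner signs an outsourced block
# and the auditor later verifies the block it fetched from the cloud.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())   # data owner's key pair
public_key = private_key.public_key()

block = b"outsourced data block #42"                    # illustrative payload
signature = private_key.sign(block, ec.ECDSA(hashes.SHA256()))

# Verification fails if the stored block was modified or the signature was forged.
try:
    public_key.verify(signature, block, ec.ECDSA(hashes.SHA256()))
    print("block intact")
except InvalidSignature:
    print("block tampered or signature forged")
```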


2021
Author(s): John E.T. Bistline

Abstract: Modeling tools are increasingly used to inform and evaluate proposed power sector climate and clean electricity policies such as renewable portfolio and clean electricity standards, carbon pricing, emissions caps, and tax incentives. However, claims about economic and environmental impacts often lack transparency and may be based on incomplete metrics that can obscure differences in policy design. This paper examines model-based metrics used to assess the economic efficiency impacts of prospective electric sector policies. The appropriateness of alternative metrics varies by context, model, audience, and application, depending on the prioritization of comprehensiveness, measurability, transparency, and credible precision. This paper provides guidance for the modeling community on calculating and communicating cost metrics, and for consumers of model outputs on interpreting these economic indicators. Using an illustrative example of clean electricity standards in the U.S. power sector, model outputs highlight strengths and limitations of different cost metrics. Transformations of power systems toward lower-carbon resources and zero-marginal-cost generation may shift when and where system costs are incurred. Because these changes may not be appropriately reflected in commonly reported metrics such as wholesale energy prices, decomposing system costs across standard reporting categories is a more robust reporting practice. Ultimately, providing better metrics is only one element in a portfolio of transparency-related practices; although insufficient by itself, such reporting can help move dialogues in more productive directions and encourage better modeling practices.
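As a toy illustration of the reporting practice suggested above, the following sketch decomposes an annual system cost across standard categories instead of quoting a single wholesale-price figure. All category names and numbers are hypothetical, not results from the paper.

```python
# Illustrative-only decomposition of annual power-system costs (made-up values):
# reporting category-level costs is more informative than a single wholesale-price
# metric when capital-heavy, zero-marginal-cost resources shift where costs land.
costs_busd = {                      # billions of dollars per year, hypothetical
    "capital recovery":       42.0,
    "fixed O&M":              11.5,
    "variable O&M":            3.2,
    "fuel":                    9.8,
    "transmission & dist.":   18.4,
}

total = sum(costs_busd.values())
for category, value in costs_busd.items():
    print(f"{category:<22} {value:6.1f} B$  ({value / total:5.1%})")
print(f"{'total system cost':<22} {total:6.1f} B$")
```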


Author(s): Naeem Maroof, Ali Y. Al-Zahrani

In the modern blockchain and artificial intelligence era, energy efficiency has become one of the most important design concerns. Approximate computing is an evolving field that promises an energy-accuracy trade-off. Many applications tolerate small degradations in results, so tasks such as image and video processing are candidates to benefit from approximate computing. In this paper, we propose a new approach for designing approximate adders and for further optimizing their accuracy and cost metrics. Our approach is based on minimizing the errors introduced when cascading more than one 1-bit adder. We insert [Formula: see text] at specific locations to achieve a reasonable circuit minimization and reduce the [Formula: see text] cost. We compare our design with the exact adder and with relevant state-of-the-art approximate adders. Through analysis and simulations, we show that our approach provides higher accuracy and far better performance than other designs. The proposed double-bit approximate adder provides more than 25% savings in gate count compared with the exact adder, has a mean absolute error of 0.25, the lowest among all the reference approximate adders, and reduces the power-delay product by more than 60% compared with the exact adder. When employed for image filtering, the proposed design achieves a [Formula: see text] of 96%, a [Formula: see text] of 95%, and a [Formula: see text] of 91% relative to the exact results, while the second-best approximate adder only achieves 64%, 54%, and 71% on these image quality metrics, respectively.
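An error figure such as a 0.25 mean absolute error can be reproduced for any candidate design by exhaustive simulation. The sketch below is not the paper's adder (its construction sits behind the formula placeholders above); it uses a generic lower-bits-OR approximation purely to show how the metric is computed.

```python
# Generic sketch, not the paper's design: approximate an n-bit add by OR-ing the
# k least-significant bits and adding the remaining bits exactly, then measure the
# mean absolute error exhaustively over all input pairs.
from itertools import product

def approx_add(a, b, n_bits=8, k=2):
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)          # approximate lower part: bitwise OR
    high = ((a >> k) + (b >> k)) << k      # exact upper part, no carry-in from low
    return (high | low) & ((1 << (n_bits + 1)) - 1)

n_bits, k = 4, 2
errors = [abs(approx_add(a, b, n_bits, k) - (a + b))
          for a, b in product(range(2 ** n_bits), repeat=2)]
print(f"mean absolute error = {sum(errors) / len(errors):.3f} "
      f"over all {len(errors)} input pairs")
```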


2021
Author(s): Mark Kuster

Where practicable, the total end-to-end test-and-calibration program cost would serve as the ultimate measurement quality metric (MQM). Total cost includes both the capitalization and ongoing costs that support product quality (sometimes called cost of quality) and the consequence costs (sometimes called cost of poor quality) that result from imperfect measurements and products. End-to-end means capturing costs from the entire traceability chain: from measurement standards to end products. Minimizing this MQM, the total end-to-end cost (TETEC), equates to optimizing measurement quality assurance (MQA). Lacking easily available measurement and performance data automatically fed to modeling software, organizations have found cost metrics unimaginable or impracticable, so their measurement programs instead target more easily computed MQMs, such as false-accept risk or simpler proxies thereof, setting minimum but sub-optimal quality levels. However, modern computing systems and software, such as laboratory management systems with testpoint-level traceability, are rapidly approaching the point at which the TETEC MQM will become practicable. Preparing for this eventuality, the NCSLI 173 Metrology Practices Committee has developed models that relate costs to measurement program information such as product specifications, test and measurement uncertainties, calibration intervals, and reliability targets. Applications include not only optimizing overall program MQA but also estimating the value of metrology and the return on equipment investments, selecting instruments, designing test and calibration processes, and designing products. This paper applies the cost models to case studies and examples to illustrate some applications.
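A toy version of the TETEC idea is sketched below, assuming (as the abstract suggests) that the total reduces at its simplest to cost-of-quality terms plus consequence costs scaled by false-accept risk. The function and all figures are illustrative, not the committee's models.

```python
# Hedged sketch (not the NCSLI 173 committee's models): a toy total end-to-end cost
# adding cost-of-quality terms to consequence costs driven by false-accept risk.
def total_end_to_end_cost(capitalization, annual_calibration, years,
                          units_shipped, false_accept_risk, cost_per_field_failure):
    cost_of_quality = capitalization + annual_calibration * years
    cost_of_poor_quality = units_shipped * false_accept_risk * cost_per_field_failure
    return cost_of_quality + cost_of_poor_quality

# A tighter calibration programme costs more to run but lowers false-accept risk;
# the optimum is whichever scenario minimizes the combined metric.
loose = total_end_to_end_cost(50_000, 8_000, 5, 100_000, 0.020, 150)
tight = total_end_to_end_cost(50_000, 20_000, 5, 100_000, 0.004, 150)
print(f"loose programme: ${loose:,.0f}   tight programme: ${tight:,.0f}")
```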


2021
Author(s): Collin Delker

Where practicable, the total end-to-end test-and-calibration program cost would serve as the ultimate measurement quality metric (MQM). Total cost includes both the capitalization and ongoing costs that support product quality (sometimes called cost of quality) and the consequence costs (sometimes called cost of poor quality) that result from imperfect measurements and products. End-to-end means capturing costs from the entire traceability chain: from measurement standards to end products. Minimizing this MQM, the total end-to-end cost (TETEC), equates to optimizing measurement quality assurance (MQA). Lacking easily available measurement and performance data automatically fed to modeling software, organizations have found cost metrics unimaginable or impracticable, so their measurement programs instead target more easily computed MQMs, such as false-accept risk or simpler proxies thereof, setting minimum but sub-optimal quality levels. However, modern computing systems and software, such as laboratory management systems with testpoint-level traceability, are rapidly approaching the point at which the TETEC MQM will become practicable. Preparing for this eventuality, the NCSLI 173 Metrology Practices Committee has developed models that relate costs to measurement program information such as product specifications, test and measurement uncertainties, calibration intervals, and reliability targets. Applications include not only optimizing overall program MQA but also estimating the value of metrology and the return on equipment investments, selecting instruments, designing test and calibration processes, and designing products. This paper applies the cost models to case studies and examples to illustrate some applications.


Author(s): Christopher M. Purdy, Alena J. Raymond, Jason T. DeJong, Alissa Kendall, Christopher Krage, ...

The life cycle impacts of site characterization, an important component of most geotechnical engineering projects, are typically not considered in practice, nor have they been studied in detail. A life cycle sustainability assessment (LCSA) was performed to evaluate the environmental and economic impacts of several common site investigation methods. The potential impacts of these methods were computed to provide normalized metrics for the mobilization, drilling, sampling and/or testing, and borehole sealing phases of the life cycle. These environmental impact and cost metrics were then applied to a 'typical' 30 m exploration to compare different site investigation methods. Next, the metrics were used to assess the impacts of small and midsized industry investigation programs and to examine how impacts scale with project size. Scenario analyses were then performed on the midsized project to consider how different mobilization choices, grouting materials, and exploration methods influence total impacts. Collectively, this study provides a reference and framework that allows practitioners to assess environmental impacts in parallel with cost when designing site investigation scopes of work.
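A rough sketch of how such normalized metrics scale: per-metre impact and cost factors (hypothetical values, not the study's) are multiplied out for a single 30 m exploration and then for a small program of borings.

```python
# Hypothetical per-metre factors (not the study's values): scale normalized impact
# and cost metrics across life-cycle phases to one 30 m exploration, then to a program.
phases = {                       # (kg CO2e per metre, $ per metre), illustrative only
    "mobilization":     (1.5, 10.0),
    "drilling":         (6.0, 55.0),
    "sampling/testing": (2.0, 40.0),
    "borehole sealing": (1.2, 12.0),
}

depth_m, borings = 30.0, 8       # one 'typical' exploration, small program of 8 borings
co2_per_boring = sum(co2 * depth_m for co2, _ in phases.values())
cost_per_boring = sum(cost * depth_m for _, cost in phases.values())
print(f"per boring : {co2_per_boring:7.1f} kg CO2e, ${cost_per_boring:,.0f}")
print(f"program    : {co2_per_boring * borings:7.1f} kg CO2e, ${cost_per_boring * borings:,.0f}")
```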


Sensors · 2021 · Vol 21 (15) · pp. 5102
Author(s): Saleha Sikandar, Naveed Khan Baloch, Fawad Hussain, Waqar Amin, Yousaf Bin Zikria, ...

Mapping application task graphs onto the intellectual property (IP) cores of a network-on-chip (NoC) is an NP-hard problem. Network performance depends mainly on an effective and efficient mapping technique and on the optimization of performance and cost metrics; these metrics include power, reliability, area, thermal distribution, and delay. We introduce a state-of-the-art NoC mapping technique based on the sailfish optimization algorithm (SFOA). The proposed algorithm minimizes NoC power dissipation, empirically applies a shared k-nearest-neighbor clustering approach, and produces mappings more quickly on six standard benchmarks. The experimental results indicate that the proposed technique outperforms existing nature-inspired metaheuristic approaches, especially on large application task graphs.
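The sketch below is not the sailfish optimizer itself; it only shows the standard communication-cost metric that such mapping heuristics minimize, namely traffic volume weighted by Manhattan hop distance on a 2D mesh.

```python
# Standard NoC communication-cost metric (not SFOA): traffic volume times Manhattan
# hop distance on a 2D mesh. `mapping[task] = (x, y)` places each task on a mesh tile.
def communication_cost(task_edges, mapping):
    """task_edges: list of (src_task, dst_task, volume_in_bits)."""
    cost = 0
    for src, dst, volume in task_edges:
        (x1, y1), (x2, y2) = mapping[src], mapping[dst]
        hops = abs(x1 - x2) + abs(y1 - y2)      # XY-routing distance
        cost += volume * hops
    return cost

edges = [("A", "B", 120), ("B", "C", 300), ("A", "C", 80)]    # toy task graph
mapping = {"A": (0, 0), "B": (0, 1), "C": (1, 1)}             # candidate placement
print("communication cost:", communication_cost(edges, mapping))
```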


Author(s): Masahiro Sugiyama, Shinichiro Fujimori, Kenichi Wada, Ken Oshiro, Etsushi Kato, ...

Abstract: In June 2019, Japan submitted its mid-century strategy to the United Nations Framework Convention on Climate Change and pledged 80% emissions cuts by 2050. The strategy has not, however, gone through a systematic analysis. The present study, the Stanford Energy Modeling Forum (EMF) 35 Japan Model Intercomparison Project (JMIP), employs five energy-economic and integrated assessment models to evaluate Japan's nationally determined contribution and mid-century strategy. EMF 35 JMIP conducts a suite of sensitivity analyses on dimensions including emissions constraints, technology availability, and demand projections. The results confirm that Japan needs to deploy all of its mitigation strategies at a substantial scale, including energy efficiency, electricity decarbonization, and end-use electrification. Moreover, they suggest that, in the absence of structural changes in the economy, heavy industry will be among the hardest sectors to decarbonize. Partitioning of the sum of squares based on a two-way analysis of variance (ANOVA) reconfirms that mitigation strategies such as energy efficiency and electrification are fairly robust across models and scenarios, but that the cost metrics are uncertain. There is a wide gap in policy strength and breadth between the current policy instruments and those suggested by the models. Japan should strengthen its climate action in all aspects of society and the economy to achieve its long-term target.
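The variance-partitioning step can be sketched directly: for a models-by-scenarios grid of a cost metric (illustrative numbers only), the total sum of squares splits into model, scenario, and residual components, as in a two-way ANOVA with one observation per cell.

```python
# Minimal sketch of the sum-of-squares partition for a models x scenarios grid of a
# cost metric. Values are made up; rows = models, columns = scenarios.
import numpy as np

y = np.array([
    [1.8, 2.6, 3.9],
    [1.5, 2.9, 4.4],
    [2.2, 3.1, 5.0],
])

grand = y.mean()
row_means, col_means = y.mean(axis=1), y.mean(axis=0)

ss_model    = y.shape[1] * ((row_means - grand) ** 2).sum()
ss_scenario = y.shape[0] * ((col_means - grand) ** 2).sum()
ss_residual = ((y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
ss_total    = ((y - grand) ** 2).sum()

print(f"model {ss_model:.3f} + scenario {ss_scenario:.3f} + residual {ss_residual:.3f}"
      f" = total {ss_total:.3f}")
```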


2021 · Vol 16 (2) · pp. 77-83
Author(s): Michael I. Ellenbogen, Laura Prichett, Pamela T. Johnson, Daniel J. Brotman

OBJECTIVE: We developed a diagnostic overuse index that identifies hospitals with high levels of diagnostic intensity by comparing negative diagnostic testing rates for common diagnoses. METHODS: We prospectively identified candidate overuse metrics, each defined as the percentage of patients with a particular diagnosis who underwent a potentially unnecessary diagnostic test. We used data from seven states participating in the State Inpatient Databases. Candidate metrics were tested for temporal stability and internal consistency. Using mixed-effects ordinal regression and adjusting for regional and hospital characteristics, we compared results of our index with three Dartmouth health service area-level utilization metrics and three Medicare county-level cost metrics. RESULTS: The index comprised five metrics with good temporal stability and internal consistency. It correlated with five of the six prespecified overuse measures. Among the Dartmouth metrics, our index correlated most closely with physician reimbursement, with an odds ratio of 2.02 (95% CI, 1.11-3.66) of being in a higher tertile of the overuse index when comparing tertiles 3 and 1 of this Dartmouth metric. Among the Medicare county-level metrics, our index correlated most closely with standardized costs of procedures per capita, with an odds ratio of 2.03 (95% CI, 1.21-3.39) of being in a higher overuse index tertile when comparing tertiles 3 and 1 of this metric. CONCLUSIONS: We developed a novel, preliminary overuse index. This index is derived from readily available administrative data and shows some promise for measuring overuse of diagnostic testing at the hospital level.
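A much simplified sketch of the tertile comparison (not the paper's mixed-effects ordinal regression): bin hospitals into tertiles of a synthetic overuse index and a loosely correlated utilization metric, then compute a crude odds ratio of landing in the top overuse tertile for tertile 3 versus tertile 1 of the comparison metric.

```python
# Simplified, synthetic-data sketch of a tertile 3 vs tertile 1 odds ratio; the paper's
# actual estimates come from adjusted mixed-effects ordinal regression.
import numpy as np

rng = np.random.default_rng(0)
overuse = rng.normal(size=300)                             # synthetic overuse index
utilization = 0.5 * overuse + rng.normal(size=300)         # loosely correlated metric

def tertile(x):
    return np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))  # 0, 1, 2

o_t, u_t = tertile(overuse), tertile(utilization)
top = o_t == 2
a = np.sum(top & (u_t == 2)); b = np.sum(~top & (u_t == 2))   # exposed: utilization T3
c = np.sum(top & (u_t == 0)); d = np.sum(~top & (u_t == 0))   # unexposed: utilization T1
print(f"crude odds ratio (T3 vs T1) = {(a * d) / (b * c):.2f}")
```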

