cost model
Recently Published Documents





2022 ◽  
Vol 30 (7) ◽  
pp. 0-0

With the rise of cloud computing, big data, and Internet of Things technology, intelligent manufacturing is leading the transformation of manufacturing modes and the industrial upgrading of the manufacturing industry, and has become the commanding height of a new round of global manufacturing competition. Based on a literature review of intelligent manufacturing and intelligent supply chains, a total factor production cost model for intelligent manufacturing and its formal expression are proposed. Based on an analysis of the model, 12 first-level indicators and 29 second-level indicators, spanning the production line, workshop/factory, enterprise, and enterprise-collaboration levels, are proposed to evaluate the intelligent manufacturing capability of a supply chain. The article also studies the layout advantages and spatial agglomeration characteristics of the intelligent manufacturing supply chain, providing a useful reference for enterprises and policy makers in decision-making.

2022 ◽  
Vol 13 (2) ◽  
pp. 1-28
Yan Tang ◽  
Weilong Cui ◽  
Jianwen Su

A business process (workflow) is an assembly of tasks to accomplish a business goal. Real-world workflow models often need to change due to new laws and policies, changes in the environment, and so on. To understand the inner workings of a business process and facilitate changes, workflow logs can enable inspecting, monitoring, diagnosing, analyzing, and improving the design of a complex workflow. Querying workflow logs, however, is still mostly an ad hoc practice for workflow managers. In this article, we focus on the problem of querying workflow logs with respect to both control-flow and dataflow properties. We develop a query language based on “incident patterns” that allows the user to query workflow logs directly instead of having to translate such queries into database operations. We provide the formal semantics of our language and a query evaluation algorithm. By deriving an accurate cost model, we develop an optimization mechanism to accelerate query evaluation. Our experimental results demonstrate the effectiveness of the optimization, which achieves up to a 50× speedup over an adaptation of an existing evaluation method.
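As a rough illustration of the idea, the sketch below matches a hypothetical "incident pattern" (an ordered list of task events, each with a data predicate) against a workflow log. The event fields, pattern shape, and matching semantics are illustrative assumptions, not the query language defined in the article.

```python
# Hypothetical sketch: matching an "incident pattern" over a workflow log.
# Field names and semantics are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    case_id: str          # workflow instance identifier
    task: str             # control-flow: which task fired
    data: dict = field(default_factory=dict)  # dataflow: attribute snapshot

def matches(trace: list[Event],
            pattern: list[tuple[str, Callable[[dict], bool]]]) -> bool:
    """True if the trace contains the pattern's tasks in order, each
    satisfying its data predicate (other events may interleave)."""
    i = 0
    for ev in trace:
        task, pred = pattern[i]
        if ev.task == task and pred(ev.data):
            i += 1
            if i == len(pattern):
                return True
    return False

log = [
    Event("c1", "submit", {"amount": 1200}),
    Event("c1", "review", {"amount": 1200}),
    Event("c1", "approve", {"amount": 1200, "approver": "mgr"}),
]

# "a submit with amount > 1000 eventually followed by an approve"
pattern = [("submit", lambda d: d["amount"] > 1000),
           ("approve", lambda d: True)]
print(matches(log, pattern))  # True
```

A real evaluator would avoid rescanning the log per pattern; the article's cost model and optimization address exactly that kind of overhead.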

2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Prasanth Chatarasi ◽  
Hyoukjun Kwon ◽  
Angshuman Parashar ◽  
Michael Pellauer ◽  
Tushar Krishna ◽  

A spatial accelerator’s efficiency depends heavily on both its mapper and its cost model to generate optimized mappings for the various operators of DNN models. However, existing cost models lack a formal boundary over their input programs (operators) for accurate and tractable cost analysis of mappings, which makes it hard to adapt the cost models to new operators. We consider the recently introduced Maestro Data-Centric (MDC) notation and its analytical cost model to address this challenge, because any mapping expressed in the notation is precisely analyzable using the MDC’s cost model. In this article, we characterize the set of input operators and their mappings expressed in the MDC notation by introducing a set of conformability rules. The outcome of these rules is that any loop nest that is perfectly nested, with affine tensor subscripts and without conditionals, is conformable to the MDC notation; a majority of the primitive operators in deep learning are such loop nests. In addition, our rules enable us to automatically translate a mapping expressed in loop-nest form into MDC notation and to use the MDC’s cost model to guide upstream mappers. The conformability rules over the input operators yield a structured mapping space, which enables us to introduce a mapper based on our decoupled off-chip/on-chip approach to accelerate mapping-space exploration. Our mapper decomposes the original higher-dimensional mapping space of an operator into two lower-dimensional off-chip and on-chip subspaces, and then optimizes the off-chip subspace followed by the on-chip subspace. We implemented our overall approach in a tool called Marvel; a benefit of this approach is that it applies to any operator conformable with the MDC notation. We evaluated Marvel on major DNN operators and compared it with past optimizers.
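For illustration, the loop nest below (a 1D convolution, with shapes invented for this sketch) has exactly the form the conformability rules describe: it is perfectly nested (all work happens in the innermost body), every tensor subscript is an affine function of the loop indices, and the body contains no conditionals.

```python
# Example of a conformable loop nest: perfectly nested, affine tensor
# subscripts, no conditionals. Shapes are illustrative assumptions.

import numpy as np

K, C, W, R = 4, 3, 16, 3            # out channels, in channels, width, filter taps
inp = np.random.rand(C, W)
wgt = np.random.rand(K, C, R)
out = np.zeros((K, W - R + 1))

for k in range(K):                  # output channel
    for c in range(C):              # input channel
        for x in range(W - R + 1):  # output position
            for r in range(R):      # filter tap
                # subscripts k, c, x + r, and r are all affine in (k, c, x, r)
                out[k, x] += inp[c, x + r] * wgt[k, c, r]
```

A loop nest with, say, a data-dependent `if` in the body, or a subscript like `inp[c, x * x]`, would fall outside this class and not be analyzable by the MDC cost model under these rules.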

2023 ◽  
Vol 1 (1) ◽  
pp. 1
Sholiq Sholiq ◽  
Ragesa Mario Junior ◽  
Apol Subriadi

2022 ◽  
Vol 114 ◽  
pp. 103557
Jonathan D. Ogland-Hand ◽  
Ryan M. Kammer ◽  
Jeffrey A. Bennett ◽  
Kevin M. Ellett ◽  
Richard S. Middleton

Mohammad Mehdi Alemi ◽  
Athulya A. Simon ◽  
Jack Geissinger ◽  
Alan T. Asbeck

Despite several attempts to quantify the metabolic savings resulting from the use of passive back-support exoskeletons (BSEs), no study has modeled the metabolic change from wearing an exoskeleton during lifting. The objectives of this study were to: 1) quantify the metabolic reductions due to the VT-Lowe's exoskeleton during lifting; and 2) provide a comprehensive model to estimate the metabolic reductions from using a passive BSE. In this study, 15 healthy adults (13M, 2F) aged 20 to 34 years (mean = 25.33, SD = 4.43) performed repeated freestyle lifting and lowering of an empty box and of a box loaded with 20% of their bodyweight. Oxygen consumption and metabolic expenditure data were collected. A model for metabolic expenditure was developed and fitted to the experimental data of two prior studies and to the without-exoskeleton experimental results; the model was then modified to reflect the effect of the exoskeleton. The experimental results revealed that the VT-Lowe's exoskeleton significantly lowered oxygen consumption by ~9% for an empty box and ~8% for a 20%-bodyweight box, corresponding to net metabolic cost reductions of ~12% and ~9%, respectively. The mean metabolic difference (i.e., without-exo minus with-exo) and its 95% confidence interval were 0.36 (0.20-0.52) Watts/kg for 0% bodyweight and 0.43 (0.18-0.69) Watts/kg for 20% bodyweight. Our modeling predictions for with-exoskeleton conditions were precise, with absolute freestyle prediction errors below 2.1%. The model developed in this study can be adapted to different study designs and can assist researchers in improving the designs of future lifting exoskeletons.
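As a back-of-the-envelope check of how the absolute differences relate to the percentage reductions, the snippet below divides the reported mean differences by baseline net metabolic rates implied by those percentages. The baseline values are inferred assumptions for illustration, not numbers reported in the study.

```python
# Relating absolute metabolic differences (W/kg) to relative reductions.
# The baselines below are implied by difference / reduction, i.e. they
# are my assumptions, not values reported in the article.

def percent_reduction(diff_w_per_kg, baseline_w_per_kg):
    """Relative reduction implied by an absolute difference and a baseline."""
    return 100.0 * diff_w_per_kg / baseline_w_per_kg

# 0% bodyweight: 0.36 W/kg difference against an implied ~3.0 W/kg baseline
print(round(percent_reduction(0.36, 3.0)))  # ~12, matching the ~12% reported
# 20% bodyweight: 0.43 W/kg difference against an implied ~4.8 W/kg baseline
print(round(percent_reduction(0.43, 4.8)))  # ~9, matching the ~9% reported
```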

2022 ◽  
James T Bates ◽  
Christopher W Kelly ◽  
Joshua E Lane

ABSTRACT Introduction Exsanguination is the leading cause of preventable death on the battlefield and in austere environments. Multiple courses have been developed to save lives by stopping hemorrhage. Training for this requires simulation models; however, many models are expensive, preventing the further expansion of this life-saving training. We present a low-cost model for hemorrhage training and realistic moulage based on simple medical supplies and grocery-store meat. Materials and Methods Wound-packing training used a block of pork shoulder roast with an incision simulating a wound and IV tubing connected to a syringe of fake blood. Hemostasis was achieved when the student properly packed the wound, tamponading the bleeding. For wound moulage, remaining pork roast was attached to patient actors or mannequins and adorned with fake blood, creating wounds with the appearance and feel of real tissue. Results Tactical Combat Casualty Care (TCCC) training was completed at a small military medical facility with a start-up cost of less than $70 and a single course as cheap as $15. These methods have been used to establish other TCCC training centers while keeping costs low. Conclusions We present low-cost models for simulating massive hemorrhage, using pork roast for wound packing and realistic moulage. These methods can be applied to other hemorrhage training courses such as TCCC, Advanced Wilderness Life Support, and Stop the Bleed.

2022 ◽  
M. Asif Naeem ◽  
Wasiullah Waqar ◽  
Farhaan Mirza ◽  
Ali Tahir

Abstract Semi-stream join is an emerging research problem in the domain of near-real-time data warehousing. A semi-stream join is a join between a fast stream (S) and a slow disk-based relation (R). In the modern era of technology, huge amounts of data are generated daily and need to be analyzed promptly to support sound business decisions. With this in mind, a well-known algorithm called CACHEJOIN (Cache Join) was proposed. Its limitation is that it does not deal efficiently with frequently changing trends in the stream data. To overcome this limitation, in this paper we propose TinyLFU-CACHEJOIN, a modified version of the original CACHEJOIN algorithm designed to enhance its performance. TinyLFU-CACHEJOIN employs an intelligent strategy that keeps in the cache only those records of R that have a high hit rate in S. This mechanism allows it to cope with sudden and abrupt trend changes in S. We developed a cost model for our TinyLFU-CACHEJOIN algorithm and validated it empirically. We also compared the performance of our proposed TinyLFU-CACHEJOIN algorithm with the existing CACHEJOIN algorithm on a skewed synthetic dataset. The experiments showed that TinyLFU-CACHEJOIN significantly outperforms CACHEJOIN.
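A minimal sketch of the admission idea, assuming a TinyLFU-style filter in front of the join cache: a disk record of R is admitted only when its key is hotter in S than the coldest cached key. The class, method names, and exact frequency bookkeeping are illustrative assumptions, not the paper's implementation (real TinyLFU uses an approximate, aging frequency sketch rather than exact counters).

```python
# Sketch of TinyLFU-style admission for a semi-stream join cache.
# Exact Counter stands in for TinyLFU's approximate frequency sketch.

from collections import Counter

class TinyLFUJoinCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}          # key -> cached R tuple
        self.freq = Counter()    # stream-side (S) access frequencies

    def record_stream_key(self, key):
        """Called for every arriving stream tuple's join key."""
        self.freq[key] += 1

    def admit(self, key, r_tuple):
        """Offer a disk-resident R tuple to the cache."""
        if key in self.cache:
            return
        if len(self.cache) < self.capacity:
            self.cache[key] = r_tuple
            return
        victim = min(self.cache, key=lambda k: self.freq[k])
        if self.freq[key] > self.freq[victim]:   # admission filter
            del self.cache[victim]
            self.cache[key] = r_tuple

cache = TinyLFUJoinCache(capacity=2)
for k in ["a", "a", "a", "b", "c", "c"]:
    cache.record_stream_key(k)
cache.admit("a", {"id": "a"})
cache.admit("b", {"id": "b"})
cache.admit("c", {"id": "c"})     # evicts "b": "c" is hotter in S
print(sorted(cache.cache))        # ['a', 'c']
```

The admission filter is what lets the cache track trend changes in S: a newly hot key quickly accumulates enough frequency to displace a stale resident.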

2022 ◽  
Tracey Ziev ◽  
Erfan Rasouli ◽  
Ines Noelly-Tano ◽  
Ziheng Wu ◽  
Srujana Yarasi Rao ◽  

Developing low-cost, high-efficiency heat exchangers (HXs) for use in concentrated solar power (CSP) is critical to reducing CSP costs. However, the extreme operating conditions in CSP systems present a challenge for typical high-efficiency HX manufacturing processes. We describe a process-based cost model (PBCM) to estimate the cost of fabricating an HX for this application using additive manufacturing (AM). The PBCM is designed to assess the effectiveness of different designs, process choices, and manufacturing innovations in reducing HX cost. We describe HX design and AM process modifications that reduce HX cost from a baseline of $780/kW-th to $570/kW-th. We further evaluate the impact of current alternative and potential future technologies on HX cost, and identify a pathway to reduce HX cost further, to $270/kW-th.
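A process-based cost model of this kind can be sketched as a per-step cost roll-up in which scrap at each step inflates the cost of everything spent upstream of it; the step names, costs, and yields below are invented for illustration and are not the article's inputs or results.

```python
# Sketch of a process-based cost model (PBCM) roll-up. To ship one good
# unit, step i must process 1 / (yield_i * ... * yield_n) units, so each
# step's cost is divided by the yield of itself and all downstream steps.
# All names and numbers are illustrative assumptions.

def pbcm_unit_cost(steps):
    """steps: ordered list of dicts with per-unit 'cost' and 'yield' in (0, 1]."""
    total, downstream_yield = 0.0, 1.0
    for step in reversed(steps):          # walk from last step backwards
        downstream_yield *= step["yield"]
        total += step["cost"] / downstream_yield
    return total

hx_process = [
    {"name": "AM build",       "cost": 300.0, "yield": 0.90},
    {"name": "heat treatment", "cost": 60.0,  "yield": 0.95},
    {"name": "inspection",     "cost": 25.0,  "yield": 1.00},
]
print(round(pbcm_unit_cost(hx_process), 2))  # cost per good HX unit
```

Because early steps are divided by the product of all later yields, the model makes visible how a yield improvement late in the process lowers the effective cost of every upstream step.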

2022 ◽  
Vol 6 (1) ◽  
pp. 18
James Clarke ◽  
Alistair McIlhagger ◽  
Dorian Dixon ◽  
Edward Archer ◽  
Glenda Stewart ◽  

Lack of cost information is a barrier to the acceptance of 3D woven preforms as reinforcements for composite materials, compared with 2D preforms. A parametric, resource-based technical cost model (TCM) was developed for 3D woven preforms, based on a novel relationship equating manufacturing time and 3D preform complexity. Manufacturing time, and therefore cost, was found to scale with complexity for seventeen bespoke manufactured 3D preforms. Two sub-models were derived, one for a Weavebird loom and one for a Jacquard loom; for each loom, there was a strong correlation between preform complexity and manufacturing time. For a large, highly complex preform, the Jacquard loom is more efficient, so the preform cost will be much lower than for the Weavebird. Provided production is continuous, learning, whether by human agency or by an autonomous loom-control algorithm, can reduce preform cost for one or both looms to a commercially acceptable level. The TCM framework could incorporate appropriate learning curves with digital-twin/multivariate analysis so that the cost per preform of bespoke 3D woven fabrics for customised products with low production rates may be predicted with greater accuracy. A more accurate model could highlight resources such as tooling, labour and material for targeted cost reduction.
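One way such a learning curve could plug into the TCM is Wright's law, under which unit manufacturing time falls by a fixed fraction with every doubling of cumulative output; since cost scales with manufacturing time in the TCM, time reductions translate directly into cost reductions. The 85% learning rate and times below are illustrative assumptions, not values from the article.

```python
# Wright's-law learning curve: t_n = t_1 * n**log2(r) for learning rate r,
# so each doubling of cumulative output multiplies unit time by r.
# Rate and first-unit time are illustrative assumptions.

import math

def unit_time(t_first_hours, n, learning_rate=0.85):
    """Manufacturing time of the n-th preform under Wright's law."""
    b = math.log2(learning_rate)   # negative exponent
    return t_first_hours * n ** b

t1 = unit_time(10.0, 1)
t8 = unit_time(10.0, 8)            # three doublings: 10 * 0.85**3
print(round(t1, 3), round(t8, 3))
```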
