execution mode
Recently Published Documents

Total documents: 28 (last five years: 9)
H-index: 3 (last five years: 1)

Author(s): Angelo Garofalo, Gianmarco Ottavi, Alfio di Mauro, Francesco Conti, Giuseppe Tagliavini, ...

2021, Vol 2 (1), pp. 1-25
Author(s): Srinivasan Iyengar, Stephen Lee, David Irwin, Prashant Shenoy, Benjamin Weil

Buildings consume over 40% of the total energy in modern societies, and improving their energy efficiency can significantly reduce our energy footprint. In this article, we present WattScale, a data-driven approach to identify the least energy-efficient buildings from a large population of buildings in a city or a region. Unlike previous methods such as least-squares that use point estimates, WattScale uses Bayesian inference to capture the stochasticity in daily energy usage by estimating the distribution of the parameters that affect a building, and then compares them with similar homes in a given population. WattScale also incorporates a fault detection algorithm to identify the underlying causes of energy inefficiency. We validate our approach using ground truth data from different geographical locations, which showcases its applicability in various settings. WattScale has two execution modes, (i) individual and (ii) region-based, which we highlight using two case studies. For the individual execution mode, we present results from a city containing more than 10,000 buildings and show that over half of them are inefficient in one way or another, indicating significant potential for energy improvement measures. Additionally, we identify the probable causes of inefficiency and find that 41%, 23.73%, and 0.51% of homes have a poor building envelope, heating system faults, and cooling system faults, respectively. For the region-based execution mode, we show that WattScale can be extended to millions of homes in the U.S. thanks to the recent availability of representative energy datasets.
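To make the Bayesian step concrete, the following minimal sketch (the model form and all numbers are illustrative assumptions of this sketch, not WattScale's actual formulation) fits a simple degree-day model of daily energy use with a hand-rolled Metropolis sampler, so the heating slope comes out as a posterior distribution rather than a least-squares point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: energy = base load + heating slope * heating degree days + noise.
hdd = rng.uniform(0, 20, size=365)                      # heating degree days per day
energy = 12.0 + 0.9 * hdd + rng.normal(0, 2.0, 365)     # kWh per day (toy numbers)

def log_posterior(theta):
    """Log of (flat prior x Gaussian likelihood) for theta = (base, slope, sigma)."""
    base, slope, sigma = theta
    if sigma <= 0:
        return -np.inf
    resid = energy - (base + slope * hdd)
    return -len(energy) * np.log(sigma) - 0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis: yields a posterior *distribution* over the parameters,
# unlike a least-squares fit, which would return only point estimates.
theta = np.array([10.0, 0.5, 1.0])
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0, [0.2, 0.02, 0.05])
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)
samples = np.array(samples[5000:])                      # drop burn-in

lo, hi = np.percentile(samples[:, 1], [5, 95])
print(f"heating slope: 90% credible interval [{lo:.2f}, {hi:.2f}] kWh per degree-day")
# A home whose slope interval sits far above that of comparable homes would be
# flagged as a candidate for a building-envelope or heating-system fault.
```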


2020, Vol 4 (4), pp. 32
Author(s): Tamas Foldi, Chris von Csefalvay, Nicolas A. Perez

The new barrier mode in Apache Spark allows distributed deep learning training to be embedded as a Spark stage to simplify the distributed training workflow. In Spark, a task in a stage does not depend on any other task in the same stage and can therefore be scheduled independently. However, several algorithms require more sophisticated inter-task communication, similar to the MPI paradigm. By combining distributed message passing (using asynchronous network IO), OpenJDK's new auto-vectorization, and Spark's barrier execution mode, we can add non-map/reduce-based algorithms, such as Cannon's distributed matrix multiplication, to Spark. We document an efficient distributed matrix multiplication using Cannon's algorithm, which significantly improves on the performance of the existing MLlib implementation. Used within a barrier task, the algorithm described herein results in up to a 24% performance increase on a 10,000 × 10,000 square matrix with a significantly lower memory footprint. Applications of efficient matrix multiplication include, among others, accelerating the training and implementation of deep convolutional neural network-based workloads, and such efficient algorithms can therefore play a ground-breaking role in the faster and more efficient execution of even the most complicated machine learning tasks.
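Cannon's algorithm itself can be illustrated without Spark: the sketch below is a serial NumPy simulation of the block grid (block sizes and data are assumed for illustration; this is not the authors' barrier-mode implementation), showing the initial skew of the A and B blocks followed by the shift-multiply-accumulate rounds that the paper maps onto barrier tasks.

```python
import numpy as np

def cannon_matmul(A, B, q):
    """Simulate Cannon's algorithm on a q x q grid of blocks (serially)."""
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % q == 0
    s = n // q                                  # block size

    def blk(M, i, j):
        return M[i*s:(i+1)*s, j*s:(j+1)*s].copy()

    Ab = [[blk(A, i, j) for j in range(q)] for i in range(q)]
    Bb = [[blk(B, i, j) for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((s, s)) for _ in range(q)] for _ in range(q)]

    # Initial alignment: shift row i of A left by i, column j of B up by j.
    Ab = [[Ab[i][(j + i) % q] for j in range(q)] for i in range(q)]
    Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]

    # q rounds: local multiply-accumulate, then shift A left by 1 and B up by 1.
    # In the Spark version, each round would end with a barrier() so that all
    # tasks exchange their blocks in lock-step.
    for _ in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]

    return np.block(Cb)

rng = np.random.default_rng(1)
A, B = rng.random((6, 6)), rng.random((6, 6))
assert np.allclose(cannon_matmul(A, B, q=3), A @ B)
```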


2020, Vol ahead-of-print (ahead-of-print)
Author(s): Sayyid Ali Banihashemi, Mohammad Khalilzadeh

Purpose: The purpose of this paper is to evaluate the efficiency of project activities under different execution modes for the optimization of the time-cost-quality and environmental impacts trade-off problem.
Design/methodology/approach: This paper presents a parallel Data Envelopment Analysis (DEA) method for evaluating project activities with different execution modes in order to select the best execution mode and find a trade-off between the objectives. According to the nature of the project activities, outputs are categorized as desirable (quality) or undesirable (time, cost and environmental impacts) and are analyzed with the DEA model. To rank efficient execution modes, the ideal and anti-ideal virtual units method is used. The proposed model is implemented on a real case of a rural water supply construction project to demonstrate its validity.
Findings: The findings show that using the efficient execution mode for each activity leads to an optimal trade-off between the four project objectives (time, cost, quality and environmental impacts).
Practical implications: This study helps project managers and practitioners choose the most efficient execution modes of project activities, taking time, cost, quality and environmental impacts into account.
Originality/value: In this paper, in addition to the time and cost optimization of construction projects, quality factors and environmental impacts are considered. Furthermore, to the authors' knowledge, there is no existing method for evaluating the efficiency of project activities; the efficiency of different activity modes is evaluated here for the first time in order to select the most efficient modes. This research can assist project managers in choosing the most appropriate execution modes for activities, so as to ultimately accomplish the project with the lowest time, cost and environmental impacts along with the highest quality.
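As a rough illustration of the DEA building block, the sketch below scores four hypothetical execution modes with a plain input-oriented CCR multiplier model, treating time, cost and environmental impact as inputs and quality as the output; the figures and the simplifications (no undesirable-output treatment, no ideal/anti-ideal ranking) are assumptions of this sketch, not the authors' parallel DEA model.

```python
import numpy as np
from scipy.optimize import linprog

# Each row is one candidate execution mode of an activity (toy numbers):
# inputs to minimise: [time (days), cost (k$), environmental impact (index)]
# output to maximise: quality (index)
inputs = np.array([[10.0, 5.0, 3.0],
                   [ 7.0, 8.0, 2.0],
                   [12.0, 4.0, 6.0],
                   [ 9.0, 6.0, 2.5]])
outputs = np.array([[80.0], [95.0], [70.0], [85.0]])

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of mode o via the multiplier LP:
       max u*y_o  s.t.  v*x_o = 1,  u*y_j - v*x_j <= 0 for all j,  u, v >= 0."""
    n_out, n_in = outputs.shape[1], inputs.shape[1]
    # Decision vector z = [u (output weights), v (input weights)]; linprog minimises.
    c = np.concatenate([-outputs[o], np.zeros(n_in)])
    A_ub = np.hstack([outputs, -inputs])           # u*y_j - v*x_j <= 0
    b_ub = np.zeros(len(inputs))
    A_eq = np.concatenate([np.zeros(n_out), inputs[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return -res.fun

for o in range(len(inputs)):
    print(f"mode {o}: efficiency = {ccr_efficiency(o):.3f}")
# Modes scoring 1.0 lie on the efficient frontier; inefficient modes could then
# be ranked further, e.g. against ideal/anti-ideal virtual units as in the paper.
```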


2019, Vol 46 (7), pp. 581-600
Author(s): Bahaa Hussein, Osama Moselhi

This study introduces a newly developed method for optimized time-cost trade-off under uncertainty. It identifies the optimal execution mode for each project activity that minimizes the overall project cost and (or) duration while satisfying a specified joint confidence level for both time and cost. The method uses an evolutionary-based algorithm along with design of experiments and blocking techniques. The developed method accounts for managerial flexibility in the selection of execution modes, which accommodates the experience-based judgement of project managers in this process. Hence, the second component of the developed method is a completely randomized experiment module that depicts the main effect of changing an activity's mode on the project's total cost and overall duration. The method provides the decision-maker with a guideline for making well-informed implementation strategies. The results obtained demonstrate the benefits and accuracy of the developed method and its applicability to large-scale projects.
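The joint time-cost confidence level that such a method must satisfy can be estimated by simulation; the sketch below checks one hypothetical selection of execution modes against a deadline and a budget using triangular distributions (all activities, distributions and thresholds are invented for illustration; this is not the authors' evolutionary algorithm or experiment module).

```python
import numpy as np

rng = np.random.default_rng(42)

# Three activities in series; the chosen execution mode of each has uncertain
# duration (days) and cost (k$), modelled with triangular distributions (toy numbers).
selection = {   # activity -> ((dur low, mode, high), (cost low, mode, high))
    "excavation": ((4, 5, 8), (10, 12, 16)),
    "foundation": ((6, 7, 11), (20, 24, 30)),
    "piping":     ((3, 4, 6), (8, 9, 13)),
}
deadline, budget, runs = 20.0, 50.0, 100_000

def joint_confidence(modes):
    """P(total duration <= deadline AND total cost <= budget) for a mode selection."""
    total_dur = np.zeros(runs)
    total_cost = np.zeros(runs)
    for (d_lo, d_md, d_hi), (c_lo, c_md, c_hi) in modes.values():
        total_dur += rng.triangular(d_lo, d_md, d_hi, runs)
        total_cost += rng.triangular(c_lo, c_md, c_hi, runs)
    return np.mean((total_dur <= deadline) & (total_cost <= budget))

print(f"joint time-cost confidence: {joint_confidence(selection):.2%}")
# An optimiser (evolutionary or otherwise) would search over alternative mode
# selections and keep only those whose joint confidence meets the required level.
```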


2019, Vol 06 (01), pp. 69-90
Author(s): Jarosław Wikarek, Paweł Sitek

Scheduling and resource allocation problems are widespread in many areas of today's technology and management. Their different forms and structures appear in production, logistics, software engineering, computer networks, project and human resources management, services, etc. The literature (problem classifications, scheduling and resource allocation models, solution methods) is vast and exhaustive. In practice, however, classical scheduling problems with fixed structures and standard constraints (precedence, disjunctive, etc.) are rare. Practical scheduling problems also include logical and nonlinear constraints, and they use nonstandard criteria for evaluating schedules. Indeed, in many cases decision makers are interested in the feasibility and/or optimality of a given schedule under specified conditions formulated as general and/or specific questions. Thus, there is a need for a programming framework that facilitates the modeling and solving of a variety of diverse scheduling problems. Such a framework should be able to (a) model any type of constraint, (b) pose questions/criteria relating to the schedule execution mode, and (c) be highly effective in finding solutions (schedule development). This paper proposes a constraint-based declarative programming framework for modeling and solving scheduling problems that satisfies the above requirements. It was built in a Constraint Logic Programming (CLP) environment and supported with Mathematical Programming (MP). The functionality and effectiveness of the framework are presented with an illustrative example of the resource-constrained scheduling problem with additional resources.
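As a small taste of what a declarative scheduling model looks like, the sketch below states a toy resource-constrained problem and asks an optimality question (minimum makespan); it uses Google OR-tools CP-SAT rather than the CLP/MP environment of the paper, and the task data are assumed for illustration.

```python
from ortools.sat.python import cp_model

# Tasks: (duration, demand on a single renewable resource of capacity 3).
tasks = [(3, 2), (2, 1), (4, 2), (2, 2), (3, 1)]
capacity = 3
horizon = sum(d for d, _ in tasks)

model = cp_model.CpModel()
starts, ends, intervals, demands = [], [], [], []
for k, (dur, dem) in enumerate(tasks):
    start = model.NewIntVar(0, horizon, f"start_{k}")
    end = model.NewIntVar(0, horizon, f"end_{k}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"task_{k}"))
    starts.append(start)
    ends.append(end)
    demands.append(dem)

# Declarative resource constraint: at every time point, the summed demand of
# the tasks in progress must not exceed the resource capacity.
model.AddCumulative(intervals, demands, capacity)

# One example "question" posed to the model: what is the minimum makespan?
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("makespan:", solver.Value(makespan))
    for k, start in enumerate(starts):
        print(f"task {k} starts at {solver.Value(start)}")
```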


Author(s): Constantin Anghelache, Sorinel Căpușneanu, Dan Ioan Topor, Andreea Marin-Pantelescu

This chapter highlights the contribution of an IT program used in cost accounting and its management according to target costing (TC), and its impact on the business strategy of an economic entity. The authors present the historical evolution of TC, its implementation steps, and the methodological steps involved in management accounting. The characteristics of a software program specifically designed for cost accounting and TC management, including its design, implementation stages, and execution mode, are presented. The soundness of a managerial decision rests on the provision of real, accurate, and reliable information that can be obtained and analyzed with this software program. The theoretical and methodological aspects presented are based on the existing literature and on university and specialist studies from all over the world. Through the authors' contribution, a new conceptual-empirical framework is created to discuss issues that impact the business environment of economic entities.
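For readers unfamiliar with the arithmetic behind TC, the short sketch below shows the core relation on purely illustrative figures (not taken from the chapter): the target cost is derived from the market price and the required margin, and the cost gap is what such a software program would track.

```python
# Core target-costing relation on illustrative figures (not from the chapter):
# target cost = anticipated selling price - required profit margin.
selling_price = 250.00      # competitive market price per unit
required_margin = 0.20      # 20% profit margin demanded by the business strategy
estimated_cost = 215.00     # current engineering cost estimate per unit

target_cost = selling_price * (1 - required_margin)     # 200.00 per unit
cost_gap = estimated_cost - target_cost                 # 15.00 to be engineered out

print(f"target cost: {target_cost:.2f}")
print(f"cost reduction needed: {cost_gap:.2f} per unit")
```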


2018, Vol 19 (3), pp. 259-274
Author(s): Beata Bylina, Jaroslaw Bylina

Efficient thread mapping relies upon matching the behaviour of the application with the characteristics of the system. The main aim of this paper is to evaluate the influence of OpenMP thread mapping on the performance of matrix factorisations on the Intel Xeon Phi coprocessor and on hybrid CPU-MIC platforms. The authors consider parallel LU factorisations with and without pivoting, both from Intel's Math Kernel Library (MKL). The results show that the choice of thread affinity, the number of threads, and the execution mode have a measurable impact on the performance and scalability of the LU factorisations.
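A generic way to reproduce this kind of measurement (the settings below are common Intel OpenMP/MKL environment variables and a SciPy LU call, not the authors' exact configuration or their Xeon Phi offload setup) is to fix the thread mapping before the math library loads and then time the factorisation.

```python
import os, time

# Thread mapping must be fixed before the OpenMP runtime inside MKL starts,
# i.e. before NumPy/SciPy are imported (assumes a NumPy/SciPy build linked
# against MKL with the Intel OpenMP threading layer).
os.environ["OMP_NUM_THREADS"] = "16"
os.environ["MKL_NUM_THREADS"] = "16"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact"   # try also: scatter, balanced

import numpy as np
from scipy.linalg import lu_factor

n = 4000
A = np.random.default_rng(0).random((n, n))

t0 = time.perf_counter()
lu, piv = lu_factor(A)      # LU factorisation with partial pivoting (LAPACK getrf)
print(f"LU of {n}x{n}: {time.perf_counter() - t0:.2f} s")
# Re-running with different KMP_AFFINITY values and thread counts exposes the
# effect of thread mapping that the paper measures.
```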

