Timed Negotiations

Author(s):  
S. Akshay ◽  
Blaise Genest ◽  
Loïc Hélouët ◽  
Sharvik Mital

Negotiations were introduced in [6] as a model for concurrent systems with multiparty decisions. What makes negotiations very appealing is that they are one of the very few non-trivial concurrent models in which several interesting problems, such as soundness, i.e. absence of deadlocks, can be solved in PTIME [3]. In this paper, we introduce the model of timed negotiations and consider the problem of computing the minimum and the maximum execution times of a negotiation. The latter can be solved using the algorithm of [10] for computing costs in negotiations, but, surprisingly, the minimum execution time cannot. This paper proposes new algorithms to compute both the minimum and the maximum execution time, which work in much more general classes of negotiations than [10], which only considered sound and deterministic negotiations. Further, we uncover the precise complexities of these questions, ranging from PTIME to $\Delta_2^P$-complete. In particular, we show that computing the minimum execution time is more complex than computing the maximum execution time in most classes of negotiations we consider.
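For intuition only, the sketch below abstracts an acyclic, deterministic negotiation as a DAG of atomic negotiations with fixed durations and computes the maximum execution time as a longest weighted path. This is a simplifying assumption for illustration, not the algorithm of the paper; the structure and durations are hypothetical.

```python
from functools import cache

# Hypothetical negotiation abstracted as a DAG: each atom has a duration and a
# set of predecessor atoms that must finish before it can start.
duration = {"n_init": 1, "n_a": 3, "n_b": 2, "n_final": 1}
predecessors = {"n_init": [], "n_a": ["n_init"], "n_b": ["n_init"],
                "n_final": ["n_a", "n_b"]}

@cache
def finish_time(node):
    # An atom can start only after the slowest of its predecessors has finished,
    # so the latest finish time of the final atom is the maximum execution time.
    start = max((finish_time(p) for p in predecessors[node]), default=0)
    return start + duration[node]

print(finish_time("n_final"))   # 1 + 3 + 1 = 5 time units
```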

1988 ◽  
Vol 11 (1) ◽  
pp. 1-19
Author(s):  
Andrzej Rowicki

The purpose of this paper is to consider an algorithm for preemptive scheduling on two-processor systems with identical processors. Computations submitted to the system are composed of dependent tasks with arbitrary execution times, contain no loops, and have only one output. We assume that preemption times are completely unconstrained and that preemptions consume no time. Moreover, the algorithm determines the total execution time of the computation. It has been proved that this algorithm is optimal, that is, the total execution time of the computation (the schedule length) is minimized.
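As a hedged illustration of the quantities involved (not the paper's algorithm), the sketch below computes two classical lower bounds on the schedule length for two processors with preemption: half of the total work and the length of the longest chain of dependent tasks. The task set is hypothetical.

```python
from functools import cache

# Hypothetical task set: execution times and precedence (task -> predecessors).
exec_time = {"t1": 4.0, "t2": 3.0, "t3": 2.0, "t4": 5.0}
preds = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}

@cache
def chain_length(task):
    # Length of the longest dependency chain ending at `task`.
    return exec_time[task] + max((chain_length(p) for p in preds[task]), default=0.0)

total_work = sum(exec_time.values())
critical_path = max(chain_length(t) for t in exec_time)

# Any valid two-processor preemptive schedule is at least this long.
lower_bound = max(total_work / 2, critical_path)
print(lower_bound)   # max(14/2, 12) = 12.0
```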


2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Aaron Kite-Powell ◽  
Michael Coletta ◽  
Jamie Smimble

Objective: The objective of this work is to describe the use and performance of the NSSP ESSENCE system by analyzing the structured query language (SQL) logs generated by users of the National Syndromic Surveillance Program’s (NSSP) Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE).

Introduction: As system users develop queries within ESSENCE, they step through the user interface to select the data sources and parameters needed for their query. Then they select from the available output options (e.g., time series, table builder, data details). These activities execute a SQL query on the database, the majority of which are saved in a log so that system developers can troubleshoot problems. Secondarily, these data can be used as a form of web analytics to describe user query choices, query volume, and query execution time, and to develop an understanding of ESSENCE query patterns.

Methods: ESSENCE SQL query logs were extracted from April 1, 2016 to August 23rd, 2017. Overall query volume was assessed by summarizing the volume of queries over time (e.g., by hour, day, and week) and by Site. To better understand system performance, the mean, median, and maximum query execution times were summarized over time and by Site. SQL query text was parsed so that we could isolate: 1) syndromes queried, 2) sub-syndromes queried, 3) keyword categories queried, and 4) free-text query terms used. Syndromes, sub-syndromes, and keyword categories were tabulated in total and by Site. Frequencies of free-text query terms were analyzed using n-grams, word clouds, and term co-occurrence relationships. Term co-occurrence network graphs were used to visualize the structure of and relationships among terms.

Results: There were a total of 354,101 SQL queries generated by users of ESSENCE between April 1, 2016 and August 23rd, 2017. Over this entire time period there was a weekly mean of 4,785 SQL queries performed by users. When looking at 2017 data through August 23rd, this figure increases to a mean of 7,618 SQL queries per week, and since May 2017 the mean number of SQL queries has increased to 10,485 per week. The maximum number of user-generated SQL queries in a week was 29,173. The mean, median, and maximum query execution times for all data were 0.61 minutes, 0 minutes, and 365 minutes, respectively. When looking at only queries with a free-text component, the mean query execution time increases slightly to 0.94 minutes, though the median is still 0 minutes. The peak usage period, based on the number of SQL queries performed, is between 12:00pm and 3:00pm EST.

Conclusions: The use of NSSP ESSENCE has grown since implementation. This is the first time the ESSENCE system has been used at a national level with this volume of data and number of users. Our focus to date has been on successfully on-boarding new Sites so that they can benefit from use of the available tools, providing trainings to new users, and optimizing ESSENCE performance. Routine analysis of the ESSENCE SQL logs can assist us in understanding how the system is being used, how well it is performing, and in evaluating our system optimization efforts.
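A minimal sketch of the kind of log summarization described above, using pandas; the file name and column names (timestamp, site, exec_minutes) are assumptions for illustration, not the actual ESSENCE log schema.

```python
import pandas as pd

# Hypothetical log extract: one row per SQL query, with an execution timestamp,
# the originating Site, and the query execution time in minutes.
logs = pd.read_csv("essence_sql_log.csv", parse_dates=["timestamp"])

# Weekly query volume, as in the mean-queries-per-week figures reported above.
weekly_volume = logs.resample("W", on="timestamp").size()
print(weekly_volume.mean(), weekly_volume.max())

# Mean, median, and maximum execution time, overall and by Site.
print(logs["exec_minutes"].agg(["mean", "median", "max"]))
print(logs.groupby("site")["exec_minutes"].agg(["mean", "median", "max"]))

# Peak usage period by hour of day.
print(logs["timestamp"].dt.hour.value_counts().sort_index())
```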


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Xiangrong Liu ◽  
Ziming Li ◽  
Juan Suo ◽  
Ying Ju ◽  
Juan Liu ◽  
...  

Tissue P systems are a class of parallel and distributed models; a feature of traditional tissue P systems is that the execution time of certain biological processes is very sensitive to environmental factors that may be hard to control. In this work, we construct a family of tissue P systems that works independently of the values associated with the execution times of the rules. Furthermore, we present a time-free efficient solution to the multidimensional 0-1 knapsack problem by timed recognizer tissue P systems.
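To make the target problem concrete, the brute-force sketch below solves a small multidimensional 0-1 knapsack instance; it illustrates the problem itself, not the tissue P system construction, and the profits, weights, and capacities are hypothetical.

```python
from itertools import product

# Hypothetical multidimensional 0-1 knapsack instance: each item has a profit
# and a weight in every dimension; every dimension has its own capacity.
profits    = [6, 10, 12, 7]
weights    = [(1, 3), (2, 1), (3, 2), (2, 2)]   # (dimension-1 weight, dimension-2 weight)
capacities = (5, 4)

best = 0
for choice in product((0, 1), repeat=len(profits)):      # every subset of items
    load = tuple(sum(c * w[d] for c, w in zip(choice, weights))
                 for d in range(len(capacities)))
    if all(l <= cap for l, cap in zip(load, capacities)):
        best = max(best, sum(c * p for c, p in zip(choice, profits)))

print(best)   # maximum profit of any subset respecting both capacity constraints
```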


2021 ◽  
Author(s):  
Stefan Draskovic ◽  
Rehan Ahmed ◽  
Pengcheng Huang ◽  
Lothar Thiele

Mixed-criticality systems often need to fulfill safety standards that dictate different requirements for each criticality level, for example given in the ‘probability of failure per hour’ format. A recent trend suggests designing this kind of system by jointly scheduling tasks of different criticality levels on a shared platform. When this is done, the usual assumption is that tasks of lower criticality are degraded when a higher-criticality task needs more resources, for example when it overruns a bound on its execution time. However, how to quantify the impact this degradation has on the overall system is not well understood. Meanwhile, to improve schedulability and to avoid over-provisioning of resources due to overly pessimistic worst-case execution time estimates of higher-criticality tasks, a new paradigm has emerged in which tasks’ execution times are modeled with random variables. In this paper, we analyze a system with probabilistic execution times and propose metrics that are inspired by safety standards. Among these metrics are the probability of deadline miss per hour, the expected time before degradation happens, and the duration of the degradation. We argue that these quantities provide a holistic view of the system’s operation and schedulability.
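As a hedged sketch of one such metric, the snippet below estimates a per-hour deadline-miss probability by Monte Carlo for a single periodic task with a random execution time. The lognormal execution-time model, period, and deadline are hypothetical, and the independence of jobs is an assumption; this is not the paper's analysis.

```python
import random

# Hypothetical periodic task: period = deadline = 10 ms, execution time drawn
# from a lognormal distribution (illustrative stand-in for a measured profile).
period_ms, deadline_ms = 10.0, 10.0
samples = 1_000_000
misses = sum(random.lognormvariate(1.5, 0.5) > deadline_ms for _ in range(samples))

p_miss_per_job = misses / samples
jobs_per_hour = 3_600_000 / period_ms

# Assuming independent jobs, the probability of at least one miss in an hour.
# (For the very small per-job probabilities required in practice, naive Monte
# Carlo is insufficient and analytic or EVT-based tail methods are used instead.)
p_miss_per_hour = 1.0 - (1.0 - p_miss_per_job) ** jobs_per_hour
print(p_miss_per_job, p_miss_per_hour)
```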


Author(s):  
Katembo Kituta Ezéchiel ◽  
Shri Kant ◽  
Ruchi Agarwal

While replicating data over a decentralized Peer-to-Peer (P2P) network, transactions broadcasting updates arising from different peers run simultaneously, so that a destination peer replica can be updated concurrently, which always causes transaction and data conflicts. Moreover, during data migration, connectivity interruptions and network overload corrupt running transactions, so that destination peers can end up with duplicated, improper, or missing data; hence replicas remain inconsistent. Different methodological approaches have been combined to solve these problems: the audit log technique to capture the changes made to data; the algorithmic method to design and analyse algorithms; and the statistical method to analyse the performance of the new algorithms and to design prediction models of the execution time based on other parameters. A graphical user interface prototype has been designed in C# to implement these new algorithms and obtain a database synchronizer-mediator. A series of experiments showed that the new algorithms were effective. Thus, the hypothesis that “The execution time of replication and reconciliation transactions totally depends on independent factors.” has been confirmed.
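A minimal sketch of audit-log-based reconciliation under a last-writer-wins policy; the conflict rule, log format, and data are assumptions for illustration, and the paper's algorithms and C# prototype are not reproduced here.

```python
# Hypothetical audit-log entries captured at each peer:
# (record_key, timestamp, operation, value). Reconciliation merges the logs and
# keeps, per key, the most recent operation (last-writer-wins, an assumption).
peer_a_log = [("user:1", 5, "update", "alice"), ("user:2", 7, "insert", "bob")]
peer_b_log = [("user:1", 9, "update", "alicia"), ("user:3", 4, "insert", "carol")]

def reconcile(*logs):
    latest = {}
    for key, ts, op, value in sorted((e for log in logs for e in log),
                                     key=lambda e: e[1]):
        latest[key] = (ts, op, value)       # later timestamps overwrite earlier ones
    return {k: v for k, (ts, op, v) in latest.items() if op != "delete"}

print(reconcile(peer_a_log, peer_b_log))
# {'user:3': 'carol', 'user:1': 'alicia', 'user:2': 'bob'}
```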


2015 ◽  
Vol 37 ◽  
pp. 230 ◽  
Author(s):  
Azad Noori ◽  
Farzad Moradi

There are several routes to go from point A to point B in many computer games, and the computer player has to choose the best one. To do this, pathfinding algorithms are used. Several algorithms have been proposed for routing in games, and their general challenges are high memory consumption and long execution times. Due to these problems, the development and introduction of new algorithms continues. In the first part of this article, in addition to the basic and important algorithms in use, the new algorithm BIDDFS is introduced. In the second part, these algorithms are simulated in various modes on a 2D grid and compared based on their efficiency (memory consumption and execution time). The simulated algorithms include: Dijkstra, IDDFS, BIDDFS, BFS (breadth-first search), Greedy Best First Search, IDA*, A*, Jump Point Search, and HPA*.
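For illustration of the comparison setup (not the BIDDFS algorithm itself), here is a BFS shortest-path search on a small 2D grid with rough execution-time and memory-footprint measurements; the grid and obstacles are hypothetical.

```python
import time
from collections import deque

# Hypothetical 2D grid: 0 = free cell, 1 = obstacle.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]

def bfs_path_length(start, goal):
    # Breadth-first search: optimal on unweighted grids, but it stores a frontier
    # and a visited set whose size drives its memory consumption.
    frontier, dist = deque([start]), {start: 0}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return dist[(r, c)], len(dist)          # path length, cells stored
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    return None, len(dist)

t0 = time.perf_counter()
length, cells_stored = bfs_path_length((0, 0), (3, 3))
print(length, cells_stored, time.perf_counter() - t0)
```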


2020 ◽  
Vol 245 ◽  
pp. 05037
Author(s):  
Caterina Marcon ◽  
Oxana Smirnova ◽  
Servesh Muralidharan

Experimental observations and advanced computer simulations in High Energy Physics (HEP) paved the way for the recent discoveries at the Large Hadron Collider (LHC) at CERN. Currently, Monte Carlo simulations account for a very significant amount of the computational resources of the Worldwide LHC Computing Grid (WLCG). The current growth in available computing performance will not be enough to fulfill the expected demand for the forthcoming High Luminosity run (HL-LHC). More efficient simulation codes are therefore required. This study focuses on evaluating the impact of different build methods on the simulation execution time. The Geant4 toolkit, the standard simulation code for the LHC experiments, consists of a set of libraries which can be either dynamically or statically linked to the simulation executable. Dynamic libraries are currently the preferred build method. In this work, three versions of the GCC compiler, namely 4.8.5, 6.2.0 and 8.2.0, have been used. In addition, a comparison between four optimization levels (Os, O1, O2 and O3) has also been performed. Static builds for all the GCC versions considered exhibit a reduction in execution times of about 10%. Switching to a newer GCC version results in an average 30% improvement in the execution time regardless of the build type. In particular, a static build with GCC 8.2.0 leads to an improvement of about 34% with respect to the default configuration (GCC 4.8.5, dynamic, O2). The different GCC optimization flags do not affect the execution times.


2020 ◽  
Vol 27 (2) ◽  
pp. 218-233
Author(s):  
Mark G. Gonopolskiy ◽  
Alevtina B. Glonina

The paper presents an algorithm for worst-case response time (WCRT) estimation for multiprocessor systems with fixed-priority preemptive schedulers and interval uncertainty of task execution times. Each task has a unique priority within its processor, a period, an execution time interval [BCET, WCET], and can have data dependencies on other tasks. If a decrease in the execution time of a task A can lead to an increase in the response time of another task B, then task A is called an anomalous task for task B. According to the chosen approach, in order to estimate a task’s WCRT, two steps should be performed. The first is to construct a set of anomalous tasks for the given task using the proposed algorithm. The paper provides the algorithm and the proof of its correctness. The second is to find the WCRT estimate using a genetic algorithm. The proposed approach has been implemented in software as a Python 3 program. A set of experiments has been carried out to compare the proposed method, in terms of precision and speed, with two well-known WCRT estimation methods: the method that does not take interval uncertainty into account (assuming that the execution time of a given task is equal to its WCET) and the brute-force method. The results of the experiments show that, in contrast to the brute-force method, the proposed method is applicable to the analysis of real-scale computing systems and also achieves greater precision than the method that does not take interval uncertainty into account.
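For context, below is a sketch of the classical fixed-priority response-time recurrence, R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. This corresponds to the baseline that ignores interval uncertainty by using each task's WCET; the task parameters are hypothetical, and the paper's anomalous-task construction and genetic algorithm are not shown.

```python
from math import ceil

# Hypothetical task set on one processor, ordered from highest to lowest
# priority: (WCET, period).
tasks = [(1.0, 4.0), (2.0, 6.0), (3.0, 12.0)]

def response_time(i):
    wcet_i = tasks[i][0]
    r = wcet_i
    while True:
        interference = sum(ceil(r / period) * wcet
                           for wcet, period in tasks[:i])   # higher-priority tasks
        r_next = wcet_i + interference
        if r_next == r:
            return r            # fixed point reached: worst-case response time
        r = r_next

print([response_time(i) for i in range(len(tasks))])   # [1.0, 3.0, 10.0]
```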


2019 ◽  
Vol 9 (7) ◽  
pp. 1438
Author(s):  
HanBit Kim ◽  
Seokhie Hong ◽  
HeeSeok Kim

A masking method is a widely known countermeasure against side-channel attacks. To apply a masking method to cryptosystems consisting of Boolean and arithmetic operations, such as ARX (Addition, Rotation, XOR) block ciphers, a masking conversion algorithm must be used. Masking conversion algorithms can be classified into two categories: “Boolean to Arithmetic (B2A)” and “Arithmetic to Boolean (A2B)”. The A2B algorithm generally requires more execution time than the B2A algorithm. Using pre-computation tables, the A2B algorithm substantially reduces its execution time, although it requires additional space in RAM. At CHES 2012, B. Debraize proposed a conversion algorithm that somewhat reduced the memory cost of using pre-computation tables. However, it still requires 2^(k+1) entries of length (k+1) bits, where k denotes the size of the processed data. In this paper, we propose a low-memory A2B masking conversion algorithm that requires only 2^k entries of k bits. Our contributions are three-fold. First, we show specifically how to reduce the pre-computation table entries from (k+1) bits to k bits; as a result, the memory used for the pre-computation table is reduced from 2^(k+1) entries of (k+1) bits to 2^k entries of k bits. Second, we optimize the execution times of the pre-computation phase and the conversion phase, and determine that our pre-computation algorithm requires approximately half the operations of Debraize’s algorithm. The results of the 8/16/32-bit simulations show improved speed in the pre-computation phase and the conversion phase as compared to Debraize’s results. Finally, we verify the security of the algorithm against side-channel attacks as well as the soundness of the proposed algorithm.
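For orientation only, the snippet below shows what an A2B conversion computes functionally: it turns arithmetic shares (A, r) with x = A + r mod 2^k into Boolean shares (x', r') with x = x' XOR r'. This naive version recombines the secret and is therefore not side-channel secure; it is not the proposed table-based algorithm.

```python
import secrets

K = 8                      # word size in bits (k in the abstract)
MASK = (1 << K) - 1

def a2b_naive(a_share, r):
    # Functional reference only: recombining x leaks it through side channels,
    # which is exactly what real A2B conversion algorithms avoid.
    x = (a_share + r) & MASK           # x = A + r mod 2^k
    r_new = secrets.randbits(K)        # fresh Boolean mask
    return x ^ r_new, r_new            # Boolean shares: x = x' xor r'

# Round trip on a random secret.
x = secrets.randbits(K)
r = secrets.randbits(K)
a_share = (x - r) & MASK               # arithmetic masking of x
x_bool, r_bool = a2b_naive(a_share, r)
assert (x_bool ^ r_bool) == x
```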


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 314 ◽  
Author(s):  
Matteo Fusi ◽  
Fabio Mazzocchetti ◽  
Albert Farres ◽  
Leonidas Kosmidis ◽  
Ramon Canal ◽  
...  

Some high performance computing (HPC) applications exhibit increasing real-time requirements, which call for effective means to predict the distribution of their high execution times. This is a new challenge for HPC applications but a well-known problem for real-time embedded applications, where solutions already exist, although they target low-performance systems running single-threaded applications. In this paper, we show how some performance validation and measurement-based practices for real-time execution time prediction can be leveraged in the context of HPC applications on high-performance platforms, thus enabling reliable means to obtain real-time guarantees for those applications. In particular, the proposed methodology coordinately uses techniques that randomly explore the potential timing behavior of the application together with Extreme Value Theory (EVT) to predict rare (and high) execution times and, eventually, derive probabilistic Worst-Case Execution Time (pWCET) curves. We demonstrate the effectiveness of this approach for an acoustic wave inversion application used for geophysical exploration.
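A rough sketch of the EVT step under a peaks-over-threshold model, fitting a Generalized Pareto distribution to execution-time exceedances with SciPy; the measurement vector, threshold choice, and independence of samples are assumptions, and this is only one simplified variant of a pWCET workflow, not the paper's methodology.

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical execution-time measurements (seconds), e.g. from randomized runs.
rng = np.random.default_rng(0)
times = rng.gamma(shape=9.0, scale=0.1, size=5000)

# Peaks-over-threshold: model only the tail above a high empirical quantile.
threshold = np.quantile(times, 0.95)
exceedances = times[times > threshold] - threshold
c, loc, scale = genpareto.fit(exceedances, floc=0.0)

# pWCET-style curve: probability that a run exceeds a candidate bound t.
def exceedance_probability(t):
    tail_fraction = exceedances.size / times.size      # P(time > threshold)
    return tail_fraction * genpareto.sf(t - threshold, c, loc=loc, scale=scale)

for bound in (1.5, 2.0, 2.5):
    print(bound, exceedance_probability(bound))
```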

