shared variables
Recently Published Documents

TOTAL DOCUMENTS: 92 (five years: 9)
H-INDEX: 10 (five years: 0)

2021 ◽ Vol 15
Author(s): Usama Riaz, Fuleah A. Razzaq, Shiang Hu, Pedro A. Valdés-Sosa

Finding the common principal component (CPC) for ultra-high dimensional data is a multivariate technique used to discover the latent structure of the covariance matrices of shared variables measured under k ≥ 2 conditions. Common eigenvectors are assumed for the covariance matrices of all conditions, with only the eigenvalues being specific to each condition. Stepwise CPC computes a limited number of these CPCs sequentially, as its name indicates, and is therefore less time-consuming. This method becomes unfeasible when the number of variables p is ultra-high, since storing k covariance matrices requires O(kp²) memory. Many dimensionality reduction algorithms have been improved to avoid explicit covariance calculation and storage (covariance-free). Here we propose a covariance-free stepwise CPC, which only requires O(kn) memory, where n is the total number of examples. Thus, for n ≪ p, the new algorithm shows clear advantages: it computes components quickly, with low consumption of machine resources. We validate our method, CFCPC, on the classical Iris data. We then show that CFCPC allows extracting the shared anatomical structure of EEG and MEG source spectra across a frequency range of 0.01–40 Hz.
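The covariance-free idea can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' stepwise CFCPC algorithm: it extracts one leading direction shared across k conditions by power iteration, evaluating every covariance-vector product directly on the data matrices so that no p × p matrix is ever stored. All function and variable names are invented for this example.

```python
import numpy as np

def covariance_free_component(Xs, n_iter=200, seed=0):
    """Leading common direction across conditions, found without ever
    forming a p x p covariance matrix: each product C_k @ w is evaluated
    as X_k.T @ (X_k @ w) / n_k, so no O(k * p^2) storage is needed."""
    p = Xs[0].shape[1]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(p)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # power iteration on the pooled covariance operator, covariance-free
        v = sum(X.T @ (X @ w) / X.shape[0] for X in Xs)
        w = v / np.linalg.norm(v)
    return w

# toy usage with two conditions (k = 2) of centered data
rng = np.random.default_rng(1)
X1 = rng.standard_normal((100, 500)); X1 -= X1.mean(axis=0)
X2 = rng.standard_normal((80, 500));  X2 -= X2.mean(axis=0)
print(covariance_free_component([X1, X2]).shape)  # (500,)
```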


Author(s): Nguyen Ngoc Khai, Truong Anh Hoang, Dang Duc Hanh

Estimating the memory required by complex programs is a well-known research topic. In this work, we build a type system to statically estimate the memory bounds required by shared variables in software transactional memory (STM) programs. This work extends our previous work with additional language features, such as explicitly declared shared variables and primitive types, and it allows the loop body to contain any statement rather than requiring it to be well-typed as in our previous works. The new type system also has better compositionality than existing type systems.


2021
Author(s): Ayleen Schinko, Walter Vogler, Johannes Gareis, N. Tri Nguyen, Gerald Lüttgen

Interface theories based on Interface Automata (IA) are formalisms for the component-based specification of concurrent systems. Extensions of their basic synchronization mechanism permit the modelling of data, but they are studied in more complex settings involving modal transition systems or do not abstract from internal computation. In this article, we show how de Alfaro and Henzinger's original IA theory can be conservatively extended by shared-memory data, without sacrificing simplicity or imposing restrictions. Our extension, IA for shared Memory (IAM), decorates transitions with pre- and post-conditions over algebraic expressions on shared variables, which are taken into account by IA's notion of component compatibility. Simplicity is preserved because IAM can be embedded into IA and thus accurately lifts IA's compatibility concept to shared memory. We also provide a ground semantics for IAM, demonstrating that our abstract handling of data within IA's open-systems view is faithful to the standard treatment of data in closed systems.
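As a purely illustrative aside (this is not the IAM formalism, and all names are hypothetical), the central idea of decorating transitions with pre- and post-conditions over shared variables can be sketched as follows: a transition is enabled only when its precondition holds in the current shared-memory state, and firing it updates the shared variables.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, int]  # shared variables and their current values

@dataclass
class Transition:
    label: str
    pre: Callable[[State], bool]    # precondition over shared variables
    post: Callable[[State], State]  # effect on shared variables

def fire(state: State, t: Transition) -> State:
    """Fire a transition if its precondition holds in the current shared state."""
    if not t.pre(state):
        raise ValueError(f"{t.label}: precondition violated")
    return t.post(state)

# hypothetical example: a producer writes a value only when the buffer is empty
produce = Transition(
    label="produce!",
    pre=lambda s: s["full"] == 0,
    post=lambda s: {**s, "buf": 42, "full": 1},
)
print(fire({"buf": 0, "full": 0}, produce))  # {'buf': 42, 'full': 1}
```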


Author(s): Ying Sheng, Yoni Zohar, Christophe Ringeissen, Andrew Reynolds, Clark Barrett, ...

We make two contributions to the study of polite combination in satisfiability modulo theories. The first is a separation between politeness and strong politeness, obtained by presenting a polite theory that is not strongly polite. This result shows that proving strong politeness (which is often harder than proving politeness) is sometimes needed in order to use polite combination. The second contribution is an optimization of the polite combination method, obtained by borrowing from the Nelson-Oppen method. The Nelson-Oppen method is based on guessing arrangements over shared variables; in contrast, polite combination requires an arrangement over all variables of the shared sorts. We show that when using polite combination, if the other theory is stably infinite with respect to a shared sort, only the shared variables of that sort need be considered in arrangements, as in the Nelson-Oppen method. The time required to reason about arrangements is exponential in the worst case, so reducing the number of variables considered has the potential to improve performance significantly. We show preliminary evidence for this by demonstrating a speed-up on a smart contract verification benchmark.
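To make the notion of an arrangement concrete: an arrangement fixes which of the considered variables are equal and which are distinct, i.e., it is essentially a partition of the variable set into equivalence classes. The toy sketch below, which is not taken from the paper, enumerates all such partitions and shows how quickly their number (a Bell number) grows with the number of variables, which is why restricting arrangements to the shared variables of a single sort matters.

```python
def partitions(items):
    """Yield all set partitions (arrangements) of a list of variables."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # put `first` into an existing block...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        # ...or into a block of its own
        yield [[first]] + smaller

shared = ["x", "y"]                      # shared variables of the relevant sort
all_of_sort = ["x", "y", "u", "v", "w"]  # all variables of that sort
print(len(list(partitions(shared))))       # 2 arrangements
print(len(list(partitions(all_of_sort))))  # 52 arrangements (Bell(5))
```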


2020 ◽ Vol 15 (10) ◽ pp. 35
Author(s): Claudio Pinto

The measurement of the relative efficiency of a production process with the DEA approach treats the process itself as a "black box" that uses inputs to produce outputs. In reality, many production processes consist of many activities grouped into phases and interconnected with each other. For this reason, modeling a production process as a network system whose sub-parts are interconnected in different ways is certainly closer to reality. The NDEA approach, born within the DEA methodology, has developed several models to measure the relative efficiency of network systems, such as independent, connected, and relational models. The relational models differ from the other two in that they measure the relative efficiency of the entire process and of its parts once the operations between the parts of the system have been taken into account. In this paper, besides modeling a four-stage production process with shared variables, we propose a relational NDEA model, in its multiplicative version, under different preference systems for the distribution of resources between sub-processes, in order to measure their relative efficiency. We solve the model on non-real (synthetic) data. Our conclusions are that 1) a four-stage production process can represent numerous real processes, 2) the proposed NDEA model can therefore be used in many different applications, and 3) the system of preferences on the distribution of resources among sub-processes influences the measurement of relative efficiency both for the whole process and for its sub-processes.
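For readers less familiar with DEA, the sketch below shows only the classical single-stage, input-oriented CCR multiplier model solved as a linear program. It is the building block that network DEA generalizes, not the four-stage relational model proposed in the paper, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR multiplier model for DMU j0.

    X: (m x n) inputs, Y: (s x n) outputs, columns are DMUs.
    maximize  u'y_j0   s.t.  v'x_j0 = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0
    """
    m, n = X.shape
    s = Y.shape[0]
    # decision vector z = [u (s entries), v (m entries)]; linprog minimizes, so negate
    c = np.concatenate([-Y[:, j0], np.zeros(m)])
    A_ub = np.hstack([Y.T, -X.T])  # u'y_j - v'x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j0]]).reshape(1, -1)  # v'x_j0 = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun  # efficiency score in (0, 1]

X = np.array([[2.0, 3.0, 4.0]])   # one input, three DMUs
Y = np.array([[1.0, 2.0, 3.0]])   # one output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # [0.667, 0.889, 1.0]
```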


2019 ◽ Vol 26 (4) ◽ pp. 475-487
Author(s): Igor S. Anureev

Reflex is a process-oriented language that supports the design of easy-to-maintain control software for programmable logic controllers. The language has been successfully used in several reliability-critical control systems, e.g., the control software for a silicon single-crystal growth furnace and an electronic equipment control system. Currently, the main goal of the Reflex language project is to develop formal verification methods for Reflex programs in order to guarantee increased reliability of the software created on its basis. The paper presents the formal operational semantics of Reflex programs extended by annotations describing the formal specification of software requirements, as a necessary basis for the application of such methods. A brief overview of the Reflex language is given, and a simple example of its use, a control program for a hand dryer, is provided. The concepts of the environment and of variables shared with the environment are defined, which allows abstracting from specific input/output ports. Annotation types are defined that specify restrictions on the values of the variables at program launch, restrictions on the environment (in particular, on the control object), invariants of the control cycle, and pre- and postconditions of external functions used in Reflex programs. Annotated Reflex also uses the standard annotations assume, assert and havoc. To model time constraints on the execution of processes in certain states, the operational semantics of annotated Reflex programs uses a global clock as well as local clocks of the separate processes, whose time is measured in the number of iterations of the control cycle. It stores a complete history of the changes of the values of shared variables for a more precise description of the time properties of the program and its environment. The semantics takes into account the infinity of the program execution cycle, the logic of process transitions from state to state, and the interaction of processes with each other and with the environment. Extending the formal operational semantics of the Reflex language to annotations simplifies the proof of correctness of the transformation approach to deductive verification of Reflex programs developed by the authors, which transforms an annotated Reflex program into an annotated program in a very limited subset of the C language. It does so by reducing a complex proof that program requirements are preserved during the transformation to a simpler proof that the original and the resulting annotated programs are equivalent with respect to their operational semantics.
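Since the abstract mentions a hand-dryer control program as its running example, here is a loose sketch of the process-oriented control-cycle idea it refers to: a process moves between states once per control-cycle iteration, and a local clock counted in iterations enforces a timeout. This is plain Python for illustration only, not Reflex syntax, and the timeout constant is invented.

```python
# Illustrative only: a hand-dryer control process in the process-oriented style
# described above (states, a control cycle, time measured in iterations).
WAITING, DRYING = "WAITING", "DRYING"
TIMEOUT_TICKS = 5  # local-clock bound, in control-cycle iterations (made up)

def step(state, local_clock, hands_detected):
    """One iteration of the control cycle; returns (state, clock, dryer_on)."""
    if state == WAITING:
        if hands_detected:
            return DRYING, 0, True      # start drying, reset the local clock
        return WAITING, 0, False
    # state == DRYING
    if hands_detected:
        return DRYING, 0, True          # keep drying while hands are present
    if local_clock + 1 >= TIMEOUT_TICKS:
        return WAITING, 0, False        # switch off after the timeout
    return DRYING, local_clock + 1, True

# simulate a few control-cycle iterations with a scripted sensor trace
state, clock = WAITING, 0
for tick, hands in enumerate([False, True, True] + [False] * 6):
    state, clock, dryer_on = step(state, clock, hands)
    print(tick, state, dryer_on)
```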


2019 ◽ Vol 66
Author(s): Gilles Pesant

The distinctive driving force of constraint programming for solving combinatorial problems has been privileged access to problem structure through the high-level models it uses. From that exposed structure, in the form of so-called global constraints, powerful inference algorithms have shared information between constraints by propagating it through shared variables' domains, traditionally by removing unsupported values. This paper investigates a richer propagation medium made possible by recent work on counting solutions inside constraints. Beliefs about individual variable-value assignments are exchanged between constraints and iteratively adjusted. The approach generalizes standard support propagation and aims to converge to the true marginal distributions of the solutions over individual variables. Its advantage over standard belief propagation is that higher-level models featuring large-arity (global) constraints do not tend to create as many cycles, which are known to be problematic for convergence. The necessary architectural changes to a constraint programming solver are described, and an empirical study of the proposal is conducted on its implementation. We find that it provides close approximations to the true marginals and that it significantly improves search guidance.
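To illustrate what counting solutions inside constraints provides, the sketch below computes, by brute-force enumeration on a toy instance, the exact marginal distribution of variable-value assignments over the solution set of a single constraint; such marginals are the kind of beliefs the proposed propagation exchanges and iteratively adjusts. This is only an illustration, not the dedicated counting algorithms used inside a solver.

```python
from itertools import product
from collections import Counter, defaultdict

def marginals(domains, constraint):
    """Exact marginals of variable-value pairs over a constraint's solution set.

    domains: dict var -> iterable of values; constraint: assignment dict -> bool.
    Brute-force enumeration, only sensible for toy instances.
    """
    variables = list(domains)
    counts = defaultdict(Counter)
    total = 0
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if constraint(assignment):
            total += 1
            for var, val in assignment.items():
                counts[var][val] += 1
    return {v: {val: c / total for val, c in counts[v].items()} for v in variables}

# alldifferent(x, y, z) over small domains
doms = {"x": [1, 2], "y": [1, 2, 3], "z": [1, 2, 3]}
alldiff = lambda a: len(set(a.values())) == len(a)
print(marginals(doms, alldiff))
```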


Entropy ◽ 2019 ◽ Vol 21 (8) ◽ pp. 805
Author(s): Tommaso Bolognesi

Integrated Information Theory (IIT) is most typically applied to Boolean nets, a state transition model in which system parts cooperate by sharing state variables. By contrast, in process algebra, whose semantics can also be formulated in terms of (labeled) state transitions, system parts ("processes") cooperate by sharing transitions with matching labels, according to interaction patterns expressed by suitable composition operators. Despite this substantial difference, asking how much additional information is provided by the integration of the interacting partners, above and beyond the sum of their independent contributions, is perfectly legitimate for both types of cooperation. We therefore collect statistical data about ϕ (integrated information) for pairs of Boolean nets that cooperate by three alternative mechanisms: shared variables, the standard choice for Boolean nets, and two forms of shared transition inspired by two process algebras. We name these mechanisms α, β and γ. Quantitative characterizations of all of them are obtained by considering three alternative execution modes, namely synchronous, asynchronous and "hybrid", by exploring the full range of possible coupling degrees in all three cases, and by considering two possible definitions of ϕ based on two alternative notions of distribution distance.
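As a toy illustration of cooperation through shared state variables (the α mechanism) and of the synchronous versus asynchronous execution modes mentioned above, consider the tiny two-part Boolean net below. It does not compute ϕ, and the particular update functions are invented.

```python
import random
random.seed(0)

# Two parts of a Boolean net coupled through the shared variables a and b:
# part 1 updates a, part 2 updates b, and each reads the other's variable.
update = {
    "a": lambda s: s["a"] ^ s["b"],   # part 1: XOR coupling
    "b": lambda s: s["a"] or s["b"],  # part 2: OR coupling
}

def step(state, mode="synchronous"):
    if mode == "synchronous":
        # every part reads the old state and all variables update together
        return {v: f(state) for v, f in update.items()}
    # asynchronous: one randomly chosen part updates per step
    v = random.choice(list(update))
    return {**state, v: update[v](state)}

s = {"a": True, "b": False}
for _ in range(4):
    s = step(s, mode="synchronous")
    print(s)
```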


2019 ◽ Vol 34 (s1) ◽ pp. s38-s39
Author(s): Abdullah A Alhadhira, Michael S Molloy, Alexander Hart, Fadi Issa, Bader Alossaimi, ...

Introduction: Human stampedes (HS) occur at religious mass gatherings. Religious events have a higher rate of morbidity and mortality than other events that experience HS. This study is a subset analysis of religious-event HS data regarding the physics principles involved in HS and the associated event morbidity and mortality.
Aim: To analyze reports of religious HS to determine the initiating physics principles and the associated morbidity and mortality.
Methods: Thirty-four reports of religious HS were analyzed to find shared variables. Thirty-three (97.1%) were written media reports with photographic, drawn, or video documentation; 29 (85.3%) cited footage/photographs and 1 (2.9%) was not associated with visual evidence. Descriptive phrases associated with physics principles contributing to the onset of HS, together with morbidity data, were extracted and analyzed to evaluate their frequency before, during, and after events.
Results: Thirty-four (39.1%) of the HS reports found in the literature review were associated with religious HS. Of these, 83% took place in an open space and 82.3% were associated with population density changes. 82.3% of events were associated with architectural nozzles (small streets, alleys, etc.). 100% showed loss of XY-axis motion and 89% reached an average velocity of zero. 100% showed loss of proxemics and 91% had associated Z-axis displacement (falls). The minimum reported attendance for a religious HS was 3,000. 100% of religious HS had reported mortality at the event and 56% had further associated morbidity.
Discussion: HS are deadly events at religious mass gatherings. Religious events are often recurring, planned gatherings in specific geographic locations. They are frequently associated with an increase in population density, loss of proxemics and velocity, followed by Z-axis displacements, leading to injury and death. This is frequently due to architectural nozzles, which those organizing religious mass gatherings can predict and use to mitigate future events.


2018 ◽ Vol 2018 ◽ pp. 1-14
Author(s): Longmei Nan, Xiaoyang Zeng, Yiran Du, Zibin Dai, Lin Chen

To address the complex relationships among variables and the difficulty of extracting shared variables from nonlinear Boolean functions (NLBFs), an association logic model of the variables is established using the classical Apriori rule mining algorithm, and an association analysis is launched during shared variable extraction (SVE). This work transforms the SVE problem into a traveling salesman problem (TSP) and proposes an SVE method based on particle swarm optimization (SVE-PSO) that combines association rule mining with swarm intelligence to improve the efficiency of SVE. Then, according to the shared variables extracted from various NLBFs, the distribution of the shared variables is derived, and two corresponding hardware circuits, Element A and Element B, based on cascaded lookup-table (LUT) structures are proposed to process the various NLBFs. Experimental results show that SVE via the SVE-PSO method is significantly more efficient than the classical association rule mining algorithms: it finds 80.41% of the rules while requiring only 21.47% of the operation time of the Apriori method at 200 iterations. In addition, the area utilization of Element A and Element B for NLBFs with different degrees of parallelism is measured and compared with other methods. The results show that the overall performance of Element A and Element B is significantly better than that of other methods. The proposed SVE-PSO method and the two cascaded LUT-structure circuits can be widely used in coarse-grained reconfigurable cryptographic processors, or in application-specific instruction-set cryptographic processors, to improve the performance of NLBF processing and mapping.
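The role of association rule mining in this setting can be sketched with a toy, Apriori-flavoured example: counting how often pairs of variables co-occur across the product terms of several NLBFs singles out variables that are good candidates for sharing between cascaded LUT elements. The snippet below is only an illustration of that first counting pass, not the SVE-PSO algorithm, and the example functions are invented.

```python
from itertools import combinations
from collections import Counter

# each NLBF is given as a list of product terms, each term a set of variables
nlbfs = [
    [{"x1", "x2"}, {"x2", "x3", "x5"}, {"x1", "x5"}],
    [{"x2", "x5"}, {"x1", "x2", "x4"}],
]

def frequent_pairs(funcs, min_support=2):
    """Apriori-style first pass: variable pairs co-occurring in >= min_support terms."""
    counts = Counter()
    for f in funcs:
        for term in f:
            for pair in combinations(sorted(term), 2):
                counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

print(frequent_pairs(nlbfs))
# pairs such as ('x1', 'x2') and ('x2', 'x5') recur across functions and are
# natural candidates for shared variables between the cascaded LUT elements
```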

