A type safe state abstraction for coordination in Java-like languages

2008 ◽  
Vol 45 (7-8) ◽  
pp. 479-536 ◽  
Author(s):  
Ferruccio Damiani ◽  
Elena Giachino ◽  
Paola Giannini ◽  
Sophia Drossopoulou


2017 ◽  
Vol 76 (3) ◽  
pp. 594-604 ◽  
Author(s):  
Yongtai Ren ◽  
Jiping Yao ◽  
Dongyang Xu ◽  
Jing Wang

Regional water safety systems are affected by social, economic, ecological, hydrological and other factors, and these effects are complicated and variable. Studying water safety systems is crucial to promoting the coordinated development of regional water safety systems and anthropogenic processes. A similarity cloud model is therefore developed to simulate the evolution mechanisms of these fuzzy, complex regional water security systems and to cope with the uncertainty associated with the indices used in water safety index systems. The model's cloud generator converts between a qualitative cloud image and quantitative cloud characteristic values, and a stochastic weight assignment method is used to determine the weights of the evaluation indices. The results of case studies show that Jiansanjiang's water safety systems were in a safe state in 2002–2011, whereas the water safety systems in the arid area of Yinchuan City were in a dangerous state in 2006–2007 because of climatic factors and a lack of effective water and soil resource protection. These results are consistent with the actual situations of the study areas, and the proposed model gives decision makers a tool for better understanding the security issues associated with regional water safety systems.
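The abstract's key computational ingredient, the cloud generator that links a qualitative cloud image to quantitative characteristic values, follows the standard cloud-model formulation with expectation Ex, entropy En and hyper-entropy He. The sketch below is a minimal forward normal cloud generator in that standard form; it is not the authors' similarity cloud model, and the characteristic values in `main` are purely hypothetical.

```java
import java.util.Random;

/** Minimal forward normal cloud generator: turns the quantitative
 *  characteristics (Ex, En, He) of a qualitative concept into cloud drops. */
public class ForwardCloudGenerator {
    private final Random rng = new Random();

    /** Generates n cloud drops as pairs (sample value, membership degree). */
    public double[][] drops(double ex, double en, double he, int n) {
        double[][] out = new double[n][2];
        for (int i = 0; i < n; i++) {
            // The entropy is itself randomized by the hyper-entropy He;
            // abs guards against a (rare) non-positive sample.
            double enPrime = Math.abs(en + he * rng.nextGaussian());
            double x = ex + enPrime * rng.nextGaussian();
            double mu = Math.exp(-Math.pow(x - ex, 2) / (2 * enPrime * enPrime));
            out[i][0] = x;
            out[i][1] = mu;
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical characteristic values for a single water-safety index.
        double[][] cloud = new ForwardCloudGenerator().drops(0.7, 0.1, 0.01, 5);
        for (double[] drop : cloud) {
            System.out.printf("x = %.3f, membership = %.3f%n", drop[0], drop[1]);
        }
    }
}
```

The backward cloud generator, which estimates (Ex, En, He) from a set of drops, provides the opposite direction of the qualitative-quantitative transformation mentioned in the abstract.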


2018 ◽  
Vol 4 (2) ◽  
pp. 149-154
Author(s):  
Aleksey Kulikov ◽  
Andrey Lepyokhin ◽  
Vitaly Polunichev

The purpose of the work was to optimize the parameters of a spillage system equipped with a gas-pressure hydroaccumulator for a ship pressurized water reactor in a loss-of-coolant accident. The water-gas ratio in the hydroaccumulator and the hydraulic resistance of the path between the hydroaccumulator and the reactor were optimized for the designed hydroaccumulator geometric volume. The main dynamic processes were described with a mathematical model and analysed computationally. A series of numerical calculations was carried out to simulate the dynamics of the coolant level in the reactor during the accident while varying the optimized parameters. Estimates of the minimum and maximum coolant levels were obtained as functions of the initial water-gas ratio in the hydroaccumulator for different diameters of the flow restrictor on the path between the hydroaccumulator and the reactor. These results were obtained subject to the restrictive conditions that, during spillage, the coolant level must remain above the core and below the blowdown nozzle: the first condition keeps the core in a safe state, and the second excludes blowdown of the coolant water. The optimization goal was to maximize the time interval in which both conditions are satisfied simultaneously. The authors propose methods for selecting the optimal spillage system parameters that provide the maximum time for the core to remain in a safe state during a loss-of-coolant accident at the designed hydroaccumulator volume. These methods can also be used for assessments at the early stages of reactor plant design.
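Read as an optimization problem, the study varies two parameters (the initial water-gas ratio and the flow-restrictor diameter, which sets the hydraulic resistance) and maximizes the time during which the coolant level stays above the core top and below the blowdown nozzle. The sketch below shows only that outer search; `coolantLevel` is a crude placeholder standing in for the authors' mathematical model, and every constant and name is hypothetical.

```java
/** Toy parameter search for the spillage system described above. Only the
 *  search structure is meaningful; the level model and numbers are placeholders. */
public class SpillageOptimizer {

    static final double CORE_TOP_M = 1.0;   // hypothetical elevation of the core top, m
    static final double NOZZLE_M   = 2.5;   // hypothetical elevation of the blowdown nozzle, m

    /** Placeholder coolant-level trajectory (NOT the authors' thermohydraulic model). */
    static double coolantLevel(double t, double waterGasRatio, double restrictorDiam) {
        double injection = 30.0 * waterGasRatio * restrictorDiam;  // spillage contribution
        double leak = 0.002 * t;                                   // slow loss through the break
        return 2.2 + injection * (1.0 - Math.exp(-0.05 * t)) - leak;
    }

    /** Length of the time window (s) in which the level stays above the core
     *  top and below the blowdown nozzle. */
    static double safeWindow(double ratio, double diam) {
        double window = 0.0;
        for (int t = 0; t <= 1800; t++) {
            double level = coolantLevel(t, ratio, diam);
            if (level > CORE_TOP_M && level < NOZZLE_M) window += 1.0;
        }
        return window;
    }

    public static void main(String[] args) {
        double bestRatio = 0.0, bestDiam = 0.0, bestWindow = -1.0;
        for (int r = 1; r <= 9; r++) {             // water-gas ratio 0.1 .. 0.9
            for (int d = 10; d <= 50; d += 5) {    // restrictor diameter 10 .. 50 mm
                double ratio = r / 10.0, diam = d / 1000.0;
                double w = safeWindow(ratio, diam);
                if (w > bestWindow) { bestWindow = w; bestRatio = ratio; bestDiam = diam; }
            }
        }
        System.out.printf("best water-gas ratio %.1f, restrictor diameter %.3f m, safe window %.0f s%n",
                bestRatio, bestDiam, bestWindow);
    }
}
```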


Author(s):  
Carlos Diuk ◽  
Michael Littman

Reinforcement learning (RL) deals with the problem of an agent that must learn, through its interactions with an environment, how to behave so as to maximize its utility (Sutton & Barto, 1998; Kaelbling, Littman & Moore, 1996). Reinforcement learning problems are usually formalized as Markov Decision Processes (MDPs), which consist of a finite set of states and a finite set of actions that the agent can perform. At any given point in time, the agent is in a certain state and picks an action; it then observes the new state this action leads to and receives a reward signal. The goal of the agent is to maximize its long-term reward. In this standard formalization, no particular structure or relationship between states is assumed. However, learning in environments with extremely large state spaces is infeasible without some form of generalization. Exploiting the underlying structure of a problem enables such generalization and has long been recognized as an important aspect of representing sequential decision tasks (Boutilier et al., 1999). Hierarchical reinforcement learning is the subfield of RL that deals with the discovery and/or exploitation of this underlying structure. Two main ideas come into play in hierarchical RL. The first is to break a task into a hierarchy of smaller subtasks, each of which can be learned faster and more easily than the whole problem; subtasks can also be performed multiple times in the course of achieving the larger task, reusing accumulated knowledge and skills. The second is to use state abstraction within subtasks: not every subtask needs to be concerned with every aspect of the state space, so some states can be abstracted away and treated as the same for the purposes of the given subtask.
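To make the state-abstraction idea concrete, the sketch below applies an ordinary tabular Q-learning update in an abstract state space: a user-supplied abstraction function maps raw states that are interchangeable for the current subtask onto the same abstract state. This is a generic illustration, not a specific hierarchical RL algorithm such as MAXQ or options, and all names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Tabular Q-learning over abstract states: the abstraction maps raw states
 *  that are equivalent for the current subtask onto a single abstract state. */
public class AbstractQLearner<S, A> {
    private final Map<String, Double> q = new HashMap<>();
    private final Function<S, String> abstraction;   // phi: raw state -> abstract state id
    private final double alpha, gamma;

    public AbstractQLearner(Function<S, String> abstraction, double alpha, double gamma) {
        this.abstraction = abstraction;
        this.alpha = alpha;
        this.gamma = gamma;
    }

    private String key(S s, A a) {
        return abstraction.apply(s) + "|" + a;
    }

    /** Standard Q-learning update, applied in the abstract state space. */
    public void update(S s, A a, double reward, S next, Iterable<A> nextActions) {
        double best = Double.NEGATIVE_INFINITY;
        for (A a2 : nextActions) {
            best = Math.max(best, q.getOrDefault(key(next, a2), 0.0));
        }
        if (best == Double.NEGATIVE_INFINITY) best = 0.0;   // terminal state: no actions
        double old = q.getOrDefault(key(s, a), 0.0);
        q.put(key(s, a), old + alpha * (reward + gamma * best - old));
    }

    public double value(S s, A a) {
        return q.getOrDefault(key(s, a), 0.0);
    }
}
```

For example, in a rooms-style navigation subtask the abstraction might map every cell of a room to the room's identifier, so experience gathered anywhere in that room updates the same abstract entries.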


2020 ◽  
Vol 28 (3) ◽  
pp. 1189-1212
Author(s):  
Martin Zimmermann ◽  
Franz Wotawa

Systems that can adapt themselves in the case of faults or changing environmental conditions are of growing interest for industry, especially for the automotive industry in view of autonomous driving. In autonomous driving, it is vital to have a system that can cope with faults so that it can reach a safe state. In this paper, we present an adaptive control method that can be used for this purpose. The method selects alternative actions so that given goal states can be reached, provided a certain degree of redundancy is available. The action selection is based on weight models that are adapted over time and capture the success rate of the individual actions. Besides the method, we present a Java implementation and its validation on two case studies motivated by the requirements of the autonomous driving domain. We show that the presented approach is applicable both in the case of environmental changes and in the case of faults occurring during operation. In the latter case, the method provides adaptive behavior very close to the optimal selection.
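The abstract does not spell out the weight models, so the sketch below is only a minimal illustration of the idea, not the authors' published algorithm: each alternative action carries a weight, the controller selects the applicable alternative with the highest weight, and after execution the weight is adapted toward the action's observed success rate. All class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal weight-based action selection: only the idea of adapting action
 *  weights from observed success rates, not the authors' published method. */
public class WeightedActionSelector {
    private final Map<String, Double> weight = new HashMap<>();   // action -> weight
    private final Map<String, int[]> stats = new HashMap<>();     // action -> {successes, trials}

    public void registerAction(String action) {
        weight.put(action, 1.0);                 // optimistic start: untried actions get chosen
        stats.put(action, new int[] {0, 0});
    }

    /** Selects the applicable alternative with the highest current weight. */
    public String select(Iterable<String> applicable) {
        String best = null;
        for (String a : applicable) {
            if (best == null || weight.getOrDefault(a, 0.0) > weight.getOrDefault(best, 0.0)) {
                best = a;
            }
        }
        return best;
    }

    /** Adapts the weight toward the observed success rate of the action. */
    public void feedback(String action, boolean success) {
        int[] s = stats.computeIfAbsent(action, k -> new int[] {0, 0});
        s[1]++;
        if (success) s[0]++;
        weight.put(action, (double) s[0] / s[1]);
    }
}
```

Starting all weights at 1.0 is an optimistic initialization, so untried alternatives are explored before the selector settles on the empirically most successful one.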


2011 ◽  
Vol 105-107 ◽  
pp. 1727-1730
Author(s):  
Yu Juan Tang ◽  
Jiong Wang

At present, the explosion isolators of fuze safety systems achieve a restorable safe state either by using a stepping motor or by relying on the continued movement of a slider. The former is restricted by the volume of the stepping motor and is vulnerable to electromagnetic interference; the latter has a complex structure with several independent components and is neither compact nor reversible. In view of this, an action-reversible mechanism based on a piezoelectric actuator is proposed to realize reversibility of the fuze safety system. The principle of the piezoelectric actuator is described, the drive mechanism is designed, and its feasibility is analysed. The study shows that the mechanism has practical value owing to its compact structure, small volume and reliable actuation.
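Purely as an illustration of the safe-state restorability the paper targets (and not of the mechanical design itself), the hypothetical sketch below models the safety logic as a two-state machine in which the actuator-driven transition is reversible, unlike a one-way slider mechanism.

```java
/** Abstract view of a reversible fuze safety interlock: unlike a one-way
 *  slider mechanism, an actuator-driven design can return from ARMED to SAFE. */
public class ReversibleSafetyLogic {
    enum State { SAFE, ARMED }

    private State state = State.SAFE;

    /** Hypothetical command to the piezoelectric actuator; here just a stub. */
    private void driveActuator(boolean engage) {
        System.out.println(engage ? "actuator: engage" : "actuator: retract");
    }

    public void arm()    { driveActuator(true);  state = State.ARMED; }

    /** Restores the safe state, i.e. the reversibility the paper aims at. */
    public void disarm() { driveActuator(false); state = State.SAFE; }

    public State state() { return state; }

    public static void main(String[] args) {
        ReversibleSafetyLogic fuze = new ReversibleSafetyLogic();
        fuze.arm();
        fuze.disarm();                            // reversible: back to SAFE
        System.out.println("final state: " + fuze.state());
    }
}
```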

