A BELIEF LINGUISTIC RULE BASED INFERENCE METHODOLOGY FOR HANDLING DECISION MAKING PROBLEM IN QUALITATIVE NATURE

Author(s):  
ALBERTO CALZADA ◽  
J. LIU ◽  
R.M. RODRIGUEZ ◽  
L. MARTINEZ


Author(s):
Junfeng Zhang ◽  
Qing Xue

In a tactical wargame, the decisions of the artificial intelligence (AI) commander are critical to the final combat result. Because of the fog of war, AI commanders face unknown and hidden battlefield information, have an incomplete understanding of the situation, and find it difficult to formulate appropriate tactical strategies. Traditional knowledge- and rule-based decision-making methods lack flexibility and autonomy, so making flexible, autonomous decisions in complex battlefield situations remains a difficult problem. This paper aims to solve the AI commander's decision-making problem using deep reinforcement learning (DRL). We develop a tactical wargame as the research environment; it contains a built-in scripted AI and supports a machine-versus-machine combat mode. On this basis, we design an end-to-end actor–critic framework for commander decision making in which a convolutional neural network represents the battlefield situation, and we use reinforcement learning to explore different tactical strategies. Finally, we run a combat experiment between a DRL-based agent and a rule-based agent in a jungle terrain scenario. The results show that the AI commander using the actor–critic method successfully learns to achieve a higher score in the tactical wargame, and that the DRL-based agent attains a higher winning ratio than the rule-based agent.
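
The paper itself does not publish code; the following is a minimal PyTorch sketch of the kind of end-to-end actor–critic network the abstract describes, with a shared CNN encoder over a grid-shaped battlefield situation map and a discrete action head. Channel counts, layer sizes, and the observation shape are illustrative assumptions, not the authors' architecture.

    # Minimal actor-critic sketch (PyTorch). All shapes and sizes are
    # illustrative assumptions, not taken from the paper.
    import torch
    import torch.nn as nn

    class CommanderActorCritic(nn.Module):
        def __init__(self, in_channels=8, n_actions=16):
            super().__init__()
            # Shared CNN encoder for the battlefield situation map.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 64*4*4 features
            )
            self.actor = nn.Linear(64 * 4 * 4, n_actions)  # policy logits
            self.critic = nn.Linear(64 * 4 * 4, 1)         # state value

        def forward(self, obs):
            h = self.encoder(obs)
            return self.actor(h), self.critic(h)

    net = CommanderActorCritic()
    obs = torch.zeros(1, 8, 32, 32)  # one 8-channel 32x32 situation map
    logits, value = net(obs)
    action = torch.distributions.Categorical(logits=logits).sample()

In an actor–critic setup like this, the sampled action drives the wargame environment while the value head supplies the baseline for the policy-gradient update.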


2021 ◽  
Vol 11 (4) ◽  
pp. 1660 ◽  
Author(s):  
Ivan Marović ◽  
Monika Perić ◽  
Tomáš Hanák

In construction project management, uncertainty can be minimized and the best possible project performance achieved during the procurement process, which involves selecting an optimal contractor according to "the most economically advantageous tender." Because resources are limited, decision-makers are often pulled apart by conflicting demands from various stakeholders, and the challenge of addressing these demands simultaneously can be modelled as a multi-criteria decision-making problem. The aim of this paper is to show that the analytic hierarchy process (AHP), together with PROMETHEE, can cope with such a problem. As a result of their synergy, a decision support concept for selecting the optimal contractor (DSC-CONT) is proposed that: (a) allows the incorporation of opposing stakeholders' demands; (b) increases the transparency and consistency of the decision-making process; (c) enhances the legitimacy of the final outcome; and (d) is a scientific approach with great potential for application to similar decision-making problems where sustainable decisions are needed.
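
To illustrate how the two methods combine (the paper describes the DSC-CONT concept, not code), here is a hedged NumPy sketch: AHP derives criteria weights from a pairwise-comparison matrix via its principal eigenvector, and PROMETHEE II then ranks contractors by net outranking flow. The comparison matrix, criteria, and contractor scores below are invented examples, not data from the study.

    # Hedged sketch of AHP-weighted PROMETHEE II; all input numbers are
    # invented for illustration, not taken from the paper.
    import numpy as np

    # AHP: pairwise comparison of 3 criteria (e.g. price, experience,
    # duration), using Saaty's 1-9 scale.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()  # criteria weights from the principal eigenvector

    # PROMETHEE II: 4 contractors scored on the 3 criteria
    # (all criteria normalized so that higher is better).
    X = np.array([[7.0, 5.0, 6.0],
                  [6.0, 8.0, 4.0],
                  [8.0, 4.0, 7.0],
                  [5.0, 7.0, 5.0]])
    n = len(X)
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Usual preference function: 1 if strictly better on a
            # criterion, else 0, weighted by the AHP weights.
            pref = w @ (X[i] > X[j]).astype(float)
            phi[i] += pref
            phi[j] -= pref
    phi /= (n - 1)                       # net outranking flows
    print("ranking:", np.argsort(-phi))  # best contractor first

The AHP step encodes the stakeholders' trade-offs once, and PROMETHEE II turns them into a complete ranking, which is the division of labour the DSC-CONT concept exploits.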


2021 ◽  
pp. 1-15
Author(s):  
TaiBen Nan ◽  
Haidong Zhang ◽  
Yanping He

The overwhelming majority of existing decision-making methods that incorporate the Pythagorean fuzzy set (PFS) are based on aggregation operators, and their logical foundation is imperfect. We therefore attempt to establish two decision-making methods based on the Pythagorean fuzzy multiple I method. This paper is devoted to the full implication multiple I method based on the PFS. We first propose the concepts of the Pythagorean t-norm, the Pythagorean t-conorm, the residual Pythagorean fuzzy implication operator (RPFIO), the Pythagorean fuzzy biresiduum, and a degree of similarity between PFSs based on the Pythagorean fuzzy biresiduum. In addition, the full implication multiple I method for Pythagorean fuzzy modus ponens (PFMP) is established, and its reversibility and continuity properties are analyzed. Finally, a practical problem is discussed to demonstrate the effectiveness of the Pythagorean fuzzy full implication multiple I method in a decision-making problem, and the advantages of the new method over existing methods are explained. Overall, the proposed methods are based on logical reasoning, so they can express decision information more accurately and completely.
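
The paper's operators act on full Pythagorean membership/non-membership pairs; as a hedged, simplified illustration of the residuation idea they build on, here is the Łukasiewicz t-norm transported through the squaring map x ↦ x² (a common way to obtain "Pythagorean-style" analogues on [0,1]), together with its residual implication and biresiduum. This is a sketch of the general construction only, not the paper's exact RPFIO.

    # Hedged sketch: Lukasiewicz t-norm transported via x -> x^2, a simple
    # "Pythagorean-style" example of residuation; not the paper's operators.
    import math

    def t_norm(a, b):
        # T(a,b) = sqrt(max(a^2 + b^2 - 1, 0))
        return math.sqrt(max(a * a + b * b - 1.0, 0.0))

    def implication(a, b):
        # Residuum: R(a,b) = sup{c : T(a,c) <= b}
        #         = min(1, sqrt(1 - a^2 + b^2))
        return min(1.0, math.sqrt(1.0 - a * a + b * b))

    def biresiduum(a, b):
        # Degree to which a and b are equivalent; equals 1 iff a == b.
        # A similarity degree between fuzzy values can be built from this.
        return min(implication(a, b), implication(b, a))

    # Modus ponens flavour: from an antecedent degree a and a rule degree
    # r, infer a consequent degree via the t-norm.
    a, r = 0.9, 0.8
    print(t_norm(a, r), implication(a, 0.7), biresiduum(0.9, 0.7))

The residuum is the largest degree c with T(a, c) ≤ b, which is what makes inference by residuation reversible in the sense the abstract analyzes.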


2021 ◽  
Vol 31 (3) ◽  
pp. 1-26
Author(s):  
Aravind Balakrishnan ◽  
Jaeyoung Lee ◽  
Ashish Gaurav ◽  
Krzysztof Czarnecki ◽  
Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is variously infeasible. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator, WiseSim. WiseMove is a framework to study safety and other aspects of RL for autonomous driving. WiseSim accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in WiseSim contribute the most to the transfer problem. These errors, even when naively modelled in WiseMove, yield an RL policy that performs better in WiseSim than a hand-crafted rule-based policy. Applying domain randomization to the environment in WiseMove yields an even better policy. The final RL policy reduces the failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy does, having learned that its measurement is unreliable.
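
The abstract credits two techniques with closing the sim-to-sim gap: naively modelled perception noise and per-episode domain randomization of the training environment. The following Gym-style wrapper is a minimal sketch of both, assuming a 1-D observation vector whose first component is velocity; the class name, the `friction` attribute, and the noise levels are invented for illustration and are not WiseMove's API.

    # Hedged sketch of perception-noise injection plus domain
    # randomization in a Gym-style wrapper; names and ranges are invented.
    import numpy as np
    import gymnasium as gym

    class NoisyRandomizedEnv(gym.Wrapper):
        def __init__(self, env, vel_noise_std=0.5, dropout_prob=0.05):
            super().__init__(env)
            self.vel_noise_std = vel_noise_std  # naive velocity-error model
            self.dropout_prob = dropout_prob    # chance a reading is missing
            self.rng = np.random.default_rng()

        def reset(self, **kwargs):
            # Domain randomization: resample an environment parameter each
            # episode (assumes the underlying env exposes such an attribute).
            self.env.unwrapped.friction = self.rng.uniform(0.7, 1.3)
            obs, info = self.env.reset(**kwargs)
            return self._perturb(obs), info

        def step(self, action):
            obs, reward, terminated, truncated, info = self.env.step(action)
            return self._perturb(obs), reward, terminated, truncated, info

        def _perturb(self, obs):
            obs = np.asarray(obs, dtype=np.float32).copy()
            obs[0] += self.rng.normal(0.0, self.vel_noise_std)  # noisy velocity
            if self.rng.random() < self.dropout_prob:
                obs[0] = 0.0                                    # dropped reading
            return obs

Training against corrupted and randomized observations like these is what pushes a policy to discount the unreliable velocity signal, consistent with the behaviour the abstract reports.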

