Ontology and Reinforcement Learning Based Intelligent Agent Automatic Penetration Test

Author(s):  
Kexiang Qian ◽  
Daojuan Zhang ◽  
Peng Zhang ◽  
Zhihong Zhou ◽  
Xiuzhen Chen ◽  
...  
Author(s):  
Grzegorz Musiolik

Artificial intelligence is evolving rapidly and will have a great impact on society in the future. One important question that still cannot be answered satisfactorily is whether the decisions of an intelligent agent can be predicted. From this follows the more general question of whether such agents are controllable and whether future robotic applications can be safe. This chapter shows that unpredictable systems are very common in mathematics and physics, even when the underlying mathematical structure is very simple. It also shows that such unpredictability can emerge for intelligent agents in reinforcement learning, especially for complex tasks with many input parameters. An observer would not be able to distinguish this unpredictability from free will on the part of the agent. This raises ethical questions and safety issues, which are briefly presented.
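
The chapter's claim that very simple mathematical structures can be unpredictable is easy to demonstrate. The following is an illustrative sketch, not taken from the chapter: the logistic map is a one-line recurrence, yet at r = 4 it is chaotic, so two trajectories starting a millionth apart diverge to order one within a few dozen steps.

```python
# Illustrative sketch (not from the chapter): the logistic map
# x_{n+1} = r * x_n * (1 - x_n) is fully deterministic and trivially simple,
# yet for r = 4 a tiny perturbation of the initial condition grows until the
# two trajectories are uncorrelated, making long-horizon prediction impossible.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)   # initial condition perturbed by 1e-6

for n in (0, 10, 25, 50):
    print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.6f}")
# The gap grows from 1e-6 to order 1 even though the governing rule is known exactly.
```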


2022 ◽  
Vol 9 ◽  
Author(s):  
Wenbo Song ◽  
Wei Sheng ◽  
Dong Li ◽  
Chong Wu ◽  
Jun Ma

The topology of complex networks evolves dynamically over time. How to model the internal mechanism driving these structural changes is a key problem in the field of complex networks. Models such as WS, NW, and BA usually assume that the evolution of the network structure is driven by nodes' passive behavior under restrictive rules. In reality, however, network nodes are intelligent individuals that actively update their relations based on experience and environment. To overcome this limitation, we construct a network model based on deep reinforcement learning, named NMDRL. In the new model, each node in the complex network is regarded as an intelligent agent that interacts with the agents around it to refresh its relationships at every moment. Extensive experiments show that our model not only generates networks with scale-free and small-world properties, but also reveals how community structures emerge and evolve. The proposed NMDRL model is helpful for studying propagation, game, and cooperation behaviors in networks.
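
The abstract does not spell out the learning rule, but the core idea it states, each node acting as an agent that refreshes its own links from experience, can be sketched with tabular Q-learning. The state, action, and reward choices below are illustrative assumptions, not the authors' deep-RL implementation.

```python
import random
from collections import defaultdict

class NodeAgent:
    """One node as a learning agent; a toy stand-in for the paper's deep-RL agent."""
    def __init__(self, node_id, alpha=0.1, gamma=0.9, eps=0.2):
        self.id, self.alpha, self.gamma, self.eps = node_id, alpha, gamma, eps
        self.q = defaultdict(float)                           # Q[(state, peer)]

    def act(self, state, peers):
        if random.random() < self.eps:                        # explore
            return random.choice(peers)
        return max(peers, key=lambda p: self.q[(state, p)])   # exploit

    def learn(self, s, a, r, s2, peers):
        best_next = max(self.q[(s2, p)] for p in peers)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def degree(edges, v):
    return sum(1 for e in edges if v in e)

nodes = list(range(10))
edges = set()
agents = {v: NodeAgent(v) for v in nodes}
for step in range(1000):
    v = random.choice(nodes)
    peers = [p for p in nodes if p != v]
    s = degree(edges, v)                     # assumed state: the node's own degree
    p = agents[v].act(s, peers)              # action: toggle the link to peer p
    edges.symmetric_difference_update({tuple(sorted((v, p)))})
    r = -abs(degree(edges, v) - 3)           # assumed reward: prefer degree near 3
    agents[v].learn(s, p, r, degree(edges, v), peers)
```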


2021 ◽  
Vol 24 (68) ◽  
pp. 1-20
Author(s):  
Jorge E Camargo ◽  
Rigoberto Sáenz

To measure the impact of the curriculum learning technique on a reinforcement learning training setup, several experiments were designed with different training curricula adapted to the video game chosen as a case study. All of them were then executed on a selected game simulation platform, using two reinforcement learning algorithms and the mean cumulative reward as a performance measure. Results suggest that curriculum learning has a significant impact on the training process, increasing training times in some cases and decreasing them by up to 40% in others.
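
The abstract reports the comparison at a high level only; a minimal sketch of such a setup could look like the following, where `make_env` and `train` are hypothetical stand-ins, since the abstract names neither the game, the platform, nor the two algorithms.

```python
# Hypothetical sketch of the experimental design described above.

def run_curriculum(make_env, train, stages, steps_per_stage):
    """Train on progressively harder versions of the same task."""
    agent = None
    for difficulty in stages:                # e.g. [0.2, 0.5, 1.0]
        agent = train(make_env(difficulty), agent, steps=steps_per_stage)
    return agent

def run_baseline(make_env, train, total_steps):
    """Train on the full-difficulty task from the start."""
    return train(make_env(1.0), None, steps=total_steps)

# Both regimes get the same total step budget, so the mean cumulative reward on
# the final task is directly comparable between curriculum and baseline runs.
agent = run_curriculum(
    make_env=lambda d: {"difficulty": d},                          # dummy env
    train=lambda env, agent, steps: {"env": env, "steps": steps},  # dummy trainer
    stages=[0.2, 0.5, 1.0],
    steps_per_stage=10_000,
)
```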


Author(s):  
Joseph Kim ◽  
Matthew E. Woicik ◽  
Matthew C. Gombolay ◽  
Sung-Hyun Son ◽  
Julie A. Shah

We envision an intelligent agent that analyzes conversations during human team meetings in order to infer the team’s plan, with the purpose of providing decision support to strengthen that plan. We present a novel learning technique to infer teams' final plans directly from a processed form of their planning conversation. Our method employs reinforcement learning to train a model that maps features of the discussed plan and patterns of dialogue exchange among participants to a final, agreed-upon plan. We employ planning domain models to efficiently search the large space of possible plans, and the costs of candidate plans serve as the reinforcement signal. We demonstrate that our technique successfully infers plans within a variety of challenging domains, with higher accuracy than prior art. With our domain-independent feature set, we empirically demonstrate that our model trained on one planning domain can be applied to successfully infer team plans within a novel planning domain.
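
A minimal REINFORCE-style sketch of the idea stated above, candidate plans scored by a softmax policy over dialogue-derived features with negative plan cost as the reward, might look as follows. The feature and cost functions are hypothetical stand-ins, not the paper's actual model.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_update(w, candidates, feats, cost, lr=0.05):
    """One policy-gradient step. w: weight vector; feats[i]: feature vector of
    candidate plan i (assumed form); cost(plan): the planning domain model's
    cost, whose negative serves as the reinforcement signal."""
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in feats]
    probs = softmax(scores)
    i = random.choices(range(len(candidates)), weights=probs)[0]  # sample a plan
    reward = -cost(candidates[i])                # cheaper plan -> higher reward
    # Gradient of log pi(i) for a linear softmax policy: f_i - E[f]
    expected = [sum(p * f[k] for p, f in zip(probs, feats)) for k in range(len(w))]
    return [wi + lr * reward * (feats[i][k] - expected[k]) for k, wi in enumerate(w)]
```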


2021 ◽  
Vol 11 (23) ◽  
pp. 11134
Author(s):  
Luis Orlando Philco ◽  
Luis Marrone ◽  
Emily Estupiñan

Coverage is an important factor for the effective transmission of data in wireless sensor networks. The formation of coverage holes in the network normally degrades its performance and reduces its lifetime. In this paper, a multi-intelligent-agent-enabled reinforcement-learning-based coverage hole detection and recovery scheme (MiA-CODER) is proposed to overcome the existing challenges related to network coverage. Initially, the formation of coverage holes is prevented by optimizing the energy consumption in the network. This is done by constructing an unequal Sierpinski cluster-tree topology (USCT), with the cluster head selected by multi-objective black widow optimization (MoBWo) to facilitate effective data transmission. The energy consumption of the nodes is further minimized by dynamic sleep scheduling, in which Tsallis-entropy-enabled Bayesian probability (TE2BP) is implemented to switch nodes between active and sleep modes. Coverage holes, whether inside a cluster or between clusters, are then detected using the virtual sector-based hole detection (ViSHD) protocol. Once detection is complete, the BS starts the hole repair process using a multi-agent SARSA algorithm, which selects the optimal mobile node and relocates it to cover the hole. This enhances the coverage of the network and achieves better quality of sensing (QoSensing). The proposed approach is simulated in NS-3.26 and evaluated in terms of coverage rate, number of dead nodes, average energy consumption, and throughput.
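
The abstract names multi-agent SARSA as the repair mechanism; a minimal sketch of the on-policy SARSA core is given below. The state/action encoding (coverage hole to candidate mobile node) and the reward design are illustrative assumptions, as the abstract does not specify the MDP.

```python
import random
from collections import defaultdict

class SarsaAgent:
    """On-policy SARSA; one such agent per hole-repair decision (assumed setup)."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)              # Q[(state, action)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state, actions):
        """Epsilon-greedy choice of a candidate mobile node for this hole."""
        if random.random() < self.eps:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2, a2):
        """SARSA update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
        td = r + self.gamma * self.q[(s2, a2)] - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td

# A plausible reward (an assumption, not from the paper): coverage restored by
# the chosen mobile node minus a penalty proportional to its travel distance.
```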

