Deterministic Learning
Recently Published Documents

TOTAL DOCUMENTS: 179 (five years: 44)
H-INDEX: 17 (five years: 3)

Mathematics, 2021, Vol. 9 (23), pp. 3062
Author(s): Meng-Ta Chung, Shui-Lien Chen

The goal of an exam in cognitive diagnostic assessment is to uncover whether an examinee has mastered certain attributes. Different cognitive diagnosis models (CDMs) have been developed for this purpose. The core of these CDMs is the Q-matrix, an item-to-attribute mapping traditionally designed by domain experts. An expert-designed Q-matrix is not without issues: domain experts might neglect some attributes or disagree about the inclusion of some entries. It is therefore of practical importance to develop an automated method for estimating the Q-matrix. This research proposes a deterministic learning algorithm for estimating the Q-matrix. To obtain a sensible binary Q-matrix, a dichotomizing method is also devised. Results from the simulation study show that the proposed method for estimating the Q-matrix is useful. The empirical study analyzes the ECPE data, and the estimated Q-matrix is compared with the expert-designed one. All analyses in this research are carried out in R.
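The abstract describes the dichotomizing step only at a high level. The sketch below shows one common way such a step can work, thresholding a continuous item-by-attribute estimate into a binary Q-matrix; the threshold of 0.5, the helper name dichotomize_q, and the example matrix are illustrative assumptions, not the authors' actual procedure or data.

```python
# Hedged sketch: dichotomize a continuous Q-matrix estimate by thresholding.
# The threshold and the example values are assumptions for illustration only.
import numpy as np

def dichotomize_q(q_hat, threshold=0.5):
    """Map a continuous item-by-attribute estimate to a binary Q-matrix."""
    return (np.asarray(q_hat) >= threshold).astype(int)

# Hypothetical estimate for three items and two attributes.
q_hat = np.array([[0.91, 0.12],
                  [0.47, 0.78],
                  [0.05, 0.66]])
print(dichotomize_q(q_hat))
# [[1 0]
#  [0 1]
#  [0 1]]
```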


Author(s): Wolfram Barfuss

A dynamical systems perspective on multi-agent learning, based on the link between evolutionary game theory and reinforcement learning, provides an improved, qualitative understanding of the emerging collective learning dynamics. However, confusion exists about how this dynamical systems account of multi-agent learning should be interpreted. In this article, I propose to embed the dynamical systems description of multi-agent learning into different abstraction levels of cognitive analysis. The purpose of this work is to make the connections between these levels explicit in order to gain improved insight into multi-agent learning. I demonstrate the usefulness of this framework with the general and widespread class of temporal-difference reinforcement learning. I find that its deterministic dynamical systems description follows a minimum free-energy principle and unifies a boundedly rational account of game theory with decision-making under uncertainty. I then propose an online sample-batch temporal-difference algorithm, characterized by the combination of a memory batch and separated state-action value estimation. I find that this algorithm serves as a micro-foundation of the deterministic learning equations: its learning trajectories approach those of the deterministic learning equations under large batch sizes. Ultimately, this framework of embedding a dynamical systems description into different abstraction levels gives guidance on how to unleash the full potential of the dynamical systems approach to multi-agent learning.
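To make the sample-batch idea concrete, here is a minimal sketch of a TD(0) state-value update applied to a stored memory batch of transitions rather than a single online sample. The toy chain, learning rate, and function name are assumptions for illustration; this is not the article's exact algorithm, which combines a memory batch with separated state-action value estimation.

```python
# Hedged sketch: TD(0) value updates applied over a stored memory batch.
# The toy three-state chain and the hyperparameters are illustrative assumptions.
import numpy as np

def batch_td0_update(V, batch, alpha=0.1, gamma=0.9):
    """Apply one TD(0) update per (state, reward, next_state) transition."""
    V = V.copy()
    for s, r, s_next in batch:
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error
    return V

# Toy chain 0 -> 1 -> 2, where state 2 is terminal (its value stays 0).
V = np.zeros(3)
batch = [(0, 0.0, 1), (1, 1.0, 2), (0, 0.0, 1), (1, 1.0, 2)]
for _ in range(200):            # repeated passes over the same memory batch
    V = batch_td0_update(V, batch)
print(V)                        # approaches [0.9, 1.0, 0.0]
```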


2021, Vol. 31 (04), pp. 2150051
Author(s): Xunde Dong, Cong Wang

The Gray–Scott model is one of the best-known reaction–diffusion models and exhibits a wealth of spatiotemporal chaotic behavior; it is commonly used to study spatiotemporal chaos. In this paper, a novel method is proposed for identifying the Gray–Scott model via deterministic learning and interpolation. The method consists of two phases: a local identification phase and a global identification phase. Local identification is achieved using the finite difference method and deterministic learning. Based on the local identification results, an interpolation method is then employed to obtain the global identification. Numerical experiments show the feasibility and effectiveness of the proposed method.
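For context, the Gray–Scott dynamics being identified can be simulated with a simple explicit finite-difference scheme, as in the minimal sketch below. The diffusion rates, feed and kill parameters, grid size, and initial perturbation are common illustrative choices, not the values used in the paper.

```python
# Hedged sketch: explicit finite-difference integration of the Gray–Scott
# reaction–diffusion model. Parameter values are illustrative assumptions.
import numpy as np

def laplacian(z):
    """Five-point Laplacian with periodic boundaries (unit grid spacing)."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray–Scott equations."""
    uvv = u * v * v
    u_next = u + dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
    v_next = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u_next, v_next

# Uniform field with a small perturbed square in the centre of a 100x100 grid.
n = 100
u, v = np.ones((n, n)), np.zeros((n, n))
u[45:55, 45:55], v[45:55, 45:55] = 0.50, 0.25
for _ in range(2000):
    u, v = gray_scott_step(u, v)
print(u.min(), u.max())  # spatial patterns have formed in (u, v)
```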


2021, pp. 1-12
Author(s): Qian Yin, Bingxin Xu, Kaiyan Zhou, Ping Guo
