Introduction of a new dataset and method for location predicting based on deep learning in wargame

2021 ◽  
pp. 1-16
Author(s):  
Man Liu ◽  
Hongjun Zhang ◽  
Wenning Hao ◽  
Xiuli Qi ◽  
Kai Cheng ◽  
...  

It is a challenge for existing artificial intelligence algorithms to deal with the incomplete information of computer tactical wargames in military research, and one effective approach is to exploit game replays through data mining or supervised learning. However, open-source datasets of wargame replays are extremely rare, which obstructs the development of research on computer wargames. In this paper, we release a dataset of wargame replays for prediction algorithms under incomplete information. Specifically, we propose a dataset processing method for deep learning and a network model for predicting enemy locations. We first introduce the criteria and methods for data preprocessing, parsing, and feature extraction, and then predefine the training and test sets for deep learning. Furthermore, we design a new network model for predicting enemy locations, with multi-head input, multi-head output, CNN, and GRU layers to handle the multi-agent and long-term memory problems. The experimental results demonstrate that our method achieves a good top-50 accuracy of 84.9%. Finally, we open-source the dataset and methods at https://github.com/daman043/AAGWS-Wargame-master.
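
To make the described architecture concrete, the following PyTorch sketch shows one way a multi-head-input, multi-head-output CNN + GRU predictor of enemy locations could be wired together. It is a minimal illustration only: the number of agents, channel counts, and map size are assumptions, not the authors' published configuration (see the linked repository for that).

```python
# Minimal sketch (not the authors' exact model) of a multi-head CNN + GRU
# predictor for enemy locations. Agent count, channels and map size are
# illustrative assumptions.
import torch
import torch.nn as nn

class LocationPredictor(nn.Module):
    def __init__(self, n_agents=6, in_channels=8, map_size=66, hidden=128):
        super().__init__()
        # One CNN encoder per input head (e.g. per observed feature-map group).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),      # -> (batch*time, 64, 1, 1)
            )
            for _ in range(n_agents)
        ])
        # GRU over time steps to retain long-term memory of past observations.
        self.gru = nn.GRU(input_size=64 * n_agents, hidden_size=hidden,
                          batch_first=True)
        # One output head per enemy agent: a distribution over map cells.
        self.heads = nn.ModuleList([nn.Linear(hidden, map_size * map_size)
                                    for _ in range(n_agents)])

    def forward(self, x):
        # x: (batch, time, n_agents, channels, H, W)
        b, t, a, c, h, w = x.shape
        feats = []
        for i, enc in enumerate(self.encoders):
            f = enc(x[:, :, i].reshape(b * t, c, h, w)).reshape(b, t, 64)
            feats.append(f)
        seq = torch.cat(feats, dim=-1)            # (batch, time, 64 * n_agents)
        out, _ = self.gru(seq)                    # (batch, time, hidden)
        last = out[:, -1]                         # use the final time step
        return [head(last) for head in self.heads]  # per-agent location logits
```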

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations, such as Convolutional Neural Networks (CNNs), on resource-limited embedded devices is an active area of research. In order to run an optimized deep neural network model with the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. An Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations on a RISC-V Virtual Platform implemented in SystemC, enabling rapid and diverse analysis of deep learning operations on embedded devices based on the recently emerging RISC-V processor. The developed RISC-V-based DLA prototype can analyze the hardware requirements of a given CNN data set through the configuration of the CNN DLA architecture, and RISC-V compiled software can run on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed by examining the DLA architecture across various data sets.
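
As a rough illustration of the kind of hardware-requirement analysis such a configurable DLA enables, the short Python sketch below tallies per-layer multiply-accumulate counts and weight/feature-map footprints for a small Darknet-style CNN. It is not part of the SystemC platform; the layer shapes are assumptions chosen for the example.

```python
# Illustrative only: per-layer compute and memory requirements that a
# configurable CNN accelerator would have to satisfy. Layer shapes follow a
# small Darknet-style network and are assumptions for the sake of the example.
def conv_layer_cost(in_ch, out_ch, k, out_h, out_w, bytes_per_elem=2):
    macs = in_ch * out_ch * k * k * out_h * out_w          # multiply-accumulates
    weight_bytes = in_ch * out_ch * k * k * bytes_per_elem
    ofmap_bytes = out_ch * out_h * out_w * bytes_per_elem   # output feature map
    return macs, weight_bytes, ofmap_bytes

layers = [  # (in_ch, out_ch, kernel, out_h, out_w)
    (3, 16, 3, 224, 224),
    (16, 32, 3, 112, 112),
    (32, 64, 3, 56, 56),
]

total_macs = 0
for i, cfg in enumerate(layers):
    macs, w_bytes, o_bytes = conv_layer_cost(*cfg)
    total_macs += macs
    print(f"layer {i}: {macs/1e6:.1f} MMACs, "
          f"weights {w_bytes/1024:.1f} KiB, ofmap {o_bytes/1024:.1f} KiB")
print(f"total: {total_macs/1e9:.2f} GMACs")
```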


2012 ◽  
Vol 25 (4) ◽  
pp. 617-626 ◽  
Author(s):  
Barbara Carretti ◽  
Erika Borella ◽  
Silvia Fostinelli ◽  
Michela Zavagnin

Background: A growing number of studies are attempting to understand how effective cognitive interventions may be for patients with amnestic mild cognitive impairment (aMCI), particularly in relation to their memory problems. Methods: The present study aimed to explore the benefits of a working memory (WM) training program in aMCI patients. Patients (N = 20) were randomly assigned to two training programs: the experimental group practiced a verbal WM task, while the active control group took part in educational activities on memory. Results: The aMCI patients completing the WM training obtained specific gains in the trained task, with some transfer effects on other WM measures (visuospatial WM) and on processes involved in or related to WM, e.g., fluid intelligence (the Cattell test) and long-term memory. This was not the case for the aMCI control group, who showed only a very limited improvement. Conclusion: This pilot study suggests that WM training could be a valuable method for improving cognitive performance in aMCI patients, possibly delaying the onset of Alzheimer's disease.


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 519
Author(s):  
Saad Alqithami ◽  
Rahmat Budiarto ◽  
Musaad Alzahrani ◽  
Henry Hexmoor

Due to the complexity of an open multi-agent system, agents’ interactions are instantiated spontaneously, resulting in beneficial collaborations for mutual actions that are beyond any single agent’s current capabilities. Repeated patterns of interaction shape a feature of their organizational structure when those agents self-organize for a long-term objective. This paper therefore aims to provide an understanding of social capital in organizations that are open-membership multi-agent systems, with an emphasis in our formulation on the dynamic network of social interactions that, in part, elucidates evolving structures and impromptu network topologies. We model an open source project as an organizational network and provide definitions and formulations that correlate the proposed mechanism of social capital with the achievement of an organizational charter, for example, optimized productivity. To empirically evaluate our model, we conducted a case study of an open source software project to demonstrate how social capital can be created and measured within this type of organization. The results indicate that the values of social capital are positively related to agents’ productivity and to the successful completion of the project.
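
A minimal sketch, assuming a toy event log, of how repeated agent interactions in an open source project could be accumulated into an edge-weighted network and summarized with simple structural proxies (density, weighted degree). The paper's actual social-capital formulation is richer; this only illustrates the bookkeeping.

```python
# Toy illustration (not the paper's formulation): accumulate repeated agent
# interactions into an edge-weighted network and report simple proxies.
from collections import defaultdict
from itertools import combinations

interactions = [            # hypothetical events: who collaborated with whom
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("alice", "bob"), ("dave", "bob"),
]

weights = defaultdict(int)
for a, b in interactions:
    weights[frozenset((a, b))] += 1     # repeated patterns accumulate weight

agents = {x for pair in weights for x in pair}
possible_edges = len(list(combinations(agents, 2)))
density = len(weights) / possible_edges if possible_edges else 0.0

degree = defaultdict(int)
for pair, w in weights.items():
    for agent in pair:
        degree[agent] += w              # weighted degree as a crude proxy

print(f"network density: {density:.2f}")
for agent, d in sorted(degree.items()):
    print(f"{agent}: weighted degree {d}")
```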


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1672
Author(s):  
Sebastian Raubitzek ◽  
Thomas Neubauer

Measures of signal complexity, such as the Hurst exponent, the fractal dimension, and the spectrum of Lyapunov exponents, are used in time series analysis to estimate the persistency, anti-persistency, fluctuations, and predictability of the data under study. They have proven beneficial for time series prediction with machine and deep learning, indicating which features may be relevant for predicting time series and for establishing complexity features. Further, the performance of machine learning approaches can be improved by taking into account the complexity of the data under study, e.g., by adapting the employed algorithm to the inherent long-term memory of the data. In this article, we provide a review of complexity and entropy measures in combination with machine learning approaches. We give a comprehensive review of relevant publications that suggest using fractal or complexity-measure concepts to improve existing machine or deep learning approaches. Additionally, we evaluate applications of these concepts and examine whether they can be helpful in predicting and analyzing time series using machine and deep learning. Finally, we give a list of a total of six ways to combine machine learning and measures of signal complexity, as found in the literature.
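
As a concrete example of one such complexity measure used as a feature, the sketch below estimates the Hurst exponent by rescaled-range (R/S) analysis; the window sizes and the synthetic test series are illustrative assumptions, not taken from the review.

```python
# Minimal sketch: estimate the Hurst exponent via rescaled-range (R/S)
# analysis so it can be fed to a machine learning model as an extra feature.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = chunk - chunk.mean()
            z = np.cumsum(dev)                      # cumulative deviation
            r = z.max() - z.min()                   # range of the deviation
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)         # H is the fitted slope
    return slope

rng = np.random.default_rng(0)
series = rng.normal(size=2048)                      # white noise: H near 0.5
print(f"estimated Hurst exponent: {hurst_rs(series):.2f}")
```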


Hippocampus ◽  
2002 ◽  
Vol 12 (5) ◽  
pp. 637-647 ◽  
Author(s):  
Gayle M. Wittenberg ◽  
Megan R. Sullivan ◽  
Joe Z. Tsien

2021 ◽  
Author(s):  
Marie Lods ◽  
Pierre Mortessagne ◽  
Emilie Pacary ◽  
Geoffrey Terral ◽  
Fanny Farrugia ◽  
...  

Hippocampal adult neurogenesis is involved in many memory processes, from learning to remembering and forgetting. However, whether or not the stimulation of adult neurogenesis can improve memory performance remains unclear. Here, using a chemogenetic approach that combines selective tagging and specific activation of distinct adult-born neuron populations, we demonstrate that this activation can improve the accuracy and strength of remote memory. These results open up new avenues for remedying memory problems that may arise over time.


2021 ◽  
Author(s):  
Allison L. Clouthier ◽  
Gwyneth B. Ross ◽  
Matthew P. Mavor ◽  
Isabel Coll ◽  
Alistair Boyle ◽  
...  

The purpose of this work was to develop an open-source deep learning-based algorithm for motion capture marker labelling that can be trained on measured or simulated marker trajectories. In the proposed algorithm, a deep neural network including recurrent layers is trained on measured or simulated marker trajectories. Labels are assigned to markers using the Hungarian algorithm and a predefined generic marker set is used to identify and correct mislabeled markers. The algorithm was first trained and tested on measured motion capture data. Then, the algorithm was trained on simulated trajectories and tested on data that included movements not contained in the simulated data set. The ability to improve accuracy using transfer learning to update the neural network weights based on labelled motion capture data was assessed. The effect of occluded and extraneous markers on labelling accuracy was also examined. Labelling accuracy was 99.6% when trained on measured data and 92.8% when trained on simulated trajectories, but could be improved to up to 98.8% through transfer learning. Missing or extraneous markers reduced labelling accuracy, but results were comparable to commercial software. The proposed labelling algorithm can be used to accurately label motion capture data in the presence of missing and extraneous markers and accuracy can be improved as data are collected, labelled, and added to the training set. The algorithm and user interface can reduce the time and manual effort required to label optical motion capture data, particularly for those with limited access to commercial software.
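
The assignment step mentioned above can be illustrated with a small sketch: given per-marker label probabilities produced by a network, SciPy's implementation of the Hungarian algorithm (linear_sum_assignment) picks the one-to-one labelling with the highest total probability. The marker names and probability matrix here are hypothetical.

```python
# Sketch of the label-assignment step: rows = detected markers, columns =
# labels in a predefined marker set; the Hungarian algorithm maximizes the
# total assignment probability. The probabilities below are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

marker_set = ["LASI", "RASI", "LKNE", "RKNE"]        # hypothetical marker names
probs = np.array([
    [0.80, 0.05, 0.10, 0.05],
    [0.10, 0.75, 0.05, 0.10],
    [0.05, 0.10, 0.70, 0.15],
    [0.05, 0.10, 0.15, 0.70],
])

rows, cols = linear_sum_assignment(-probs)           # negate to maximize
for detected, label_idx in zip(rows, cols):
    print(f"detected marker {detected} -> {marker_set[label_idx]} "
          f"(p={probs[detected, label_idx]:.2f})")
```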


Author(s):  
Beth Vonnahme

Citizens are continuously inundated with political information. How do citizens process that information for use in decision-making? Political psychologists have generally thought of information processing as proceeding through a series of stages: (1) exposure and attention; (2) comprehension; (3) encoding, interpretation, and elaboration; (4) organization and storage in memory; and (5) retrieval. This processing of information relies heavily on two key structures: working memory and long-term memory. Working memory actively processes incoming information whereas long-term memory is the storage structure of the brain. The most widely accepted organizational scheme for long-term memory is the associative network model. In this model, information stored in long-term memory is organized as a series of connected nodes. Each node in the network represents a concept with links connecting the various concepts. The links between nodes represent beliefs about the connection between concepts. These links facilitate retrieval of information through a process known as spreading activation. Spreading activation moves information from long-term memory to working memory. When cued nodes are retrieved from memory, they activate linked nodes thereby weakly activating further nodes and so forth. Repeatedly activated nodes are the most likely to be retrieved from long-term memory for use in political decision-making. The concept of an associative network model of memory has informed a variety of research avenues, but several areas of inquiry remain underdeveloped. Specifically, many researchers rely on an associative network model of memory without questioning the assumptions and implications of the model. Doing so might further inform our understanding of information processing in the political arena. Further, voters are continuously flooded with political and non-political information; thus, exploring the role that the larger information environment can play in information processing is likely to be a fruitful path for future inquiry. Finally, little attention has been devoted to the various ways a digital information environment alters the way citizens process political information. In particular, the instantaneous and social nature of digital information may short-circuit information processing.
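
As an illustration of the spreading-activation mechanism described above (not code from the article), the toy Python sketch below cues one concept node and lets activation decay as it propagates across linked nodes; the concept network and link strengths are made up.

```python
# Toy spreading activation over an associative network: cueing a node
# activates its neighbours, which in turn weakly activate theirs, with
# activation decaying at each hop. Network and strengths are hypothetical.
links = {
    "taxes": {"economy": 0.8, "candidate A": 0.5},
    "economy": {"jobs": 0.7, "candidate A": 0.4},
    "candidate A": {"debate": 0.6},
    "jobs": {},
    "debate": {},
}

def spread_activation(cue, steps=2, decay=0.5):
    activation = {cue: 1.0}
    frontier = {cue: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, strength in links.get(node, {}).items():
                gain = act * strength * decay
                activation[neighbour] = activation.get(neighbour, 0.0) + gain
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + gain
        frontier = next_frontier
    return activation

# Nodes with the highest activation are the most likely to be retrieved
# from long-term memory into working memory for use in a decision.
for node, act in sorted(spread_activation("taxes").items(), key=lambda kv: -kv[1]):
    print(f"{node}: {act:.2f}")
```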

