Simulation Based Training for the Future Force Warrior

Author(s):  
Walter Warwick ◽  
Rick Archer ◽  
Alan Brockett ◽  
Patty McDermott

In this paper we describe techniques we have adopted to develop a computer-based, outcome-driven simulator to train digital information skills for small unit leaders of the Army's Future Force Warrior program. We begin by contrasting attempts to engender “virtual realism” in simulation-based training against attempts to engender cognitive realism by way of the branching storylines at the heart of an outcome-driven simulation. We next present an example of how such an approach might be applied to train digital information skills before turning to a more general discussion of the problems that such an approach entails, namely, crafting an engaging story while minimizing the combinatorial explosion in a branching storyline. We describe how we have dealt with these problems both by streamlining storylines and by decoupling student input from the branching process. Finally, we allude to a software tool we have created that allows the training developer to author and execute such outcome-driven simulations.
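
The decoupling idea lends itself to a compact illustration. Below is a minimal sketch, assuming a simple graph-of-scenes representation: free-form student input is first classified into a small set of outcome categories, and only the category selects the branch, so the storyline grows roughly linearly rather than combinatorially. All node names, prompts, and the classification rule are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    """One scene in an outcome-driven branching storyline."""
    prompt: str
    # Branches are keyed by a small set of outcome categories,
    # not by raw student input, which keeps the tree from exploding.
    branches: dict = field(default_factory=dict)

def classify(student_input: str) -> str:
    """Map free-form student input onto a coarse outcome category.
    (Illustrative rule; a real trainer would score doctrinal criteria.)"""
    report = student_input.lower()
    if "grid" in report and "enemy" in report:
        return "complete_report"
    if "enemy" in report:
        return "partial_report"
    return "no_report"

# A three-node storyline: every input funnels into one of three
# outcome branches, so authoring effort grows with scenes, not inputs.
debrief = StoryNode("Mission debrief.")
ambush  = StoryNode("Your unit is ambushed en route.")
contact = StoryNode(
    "You spot enemy movement. Send a digital spot report.",
    {"complete_report": debrief,
     "partial_report": ambush,
     "no_report": ambush},
)

node = contact
while node.branches:
    print(node.prompt)
    node = node.branches[classify(input("> "))]
print(node.prompt)
```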

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2881
Author(s):  
Muath Alrammal ◽  
Munir Naveed ◽  
Georgios Tsaramirsis

The use of innovative and sophisticated malware definitions poses a serious threat to computer-based information systems. Such malware is adaptive to existing security solutions and often works without detection. Once malware completes its malicious activity, it self-destructs and leaves no obvious signature for detection and forensic purposes. The detection of such sophisticated malware is very challenging and a non-trivial task because of the malware's new patterns of exploiting vulnerabilities. Any security solution requires an equal level of sophistication to counter such attacks. In this paper, a novel reinforcement model based on Monte-Carlo simulation, called eRBCM, is explored to develop a security solution that can detect new and sophisticated network malware definitions. The new model is trained on several kinds of malware and can generalize the malware detection functionality. The model is evaluated using a benchmark set of malware. The results show that eRBCM can identify a variety of malware with high accuracy.
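
The abstract does not spell out eRBCM's internals, so the following is only a generic first-visit Monte-Carlo control sketch of the broader technique it names: episodes are traffic traces, the agent chooses to flag or allow at each step, and a terminal reward for a correct verdict updates state-action values. The states, features, and rewards here are illustrative assumptions, not the authors' design.

```python
import random
from collections import defaultdict

# Generic first-visit Monte-Carlo control sketch (NOT eRBCM itself; the
# paper's model details are not reproduced in the abstract). States are
# coarse traffic-feature buckets, actions are {flag, allow}, and the reward
# at episode end is +1 for a correct verdict, -1 otherwise.

ACTIONS = ("flag", "allow")
Q = defaultdict(float)   # Q[(state, action)] value estimates
N = defaultdict(int)     # visit counts for incremental means
EPSILON = 0.1            # exploration rate

def policy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def run_episode(trace, is_malicious):
    """trace: feature buckets for one flow; returns visited pairs + reward."""
    visited, verdict = [], "allow"
    for state in trace:
        action = policy(state)
        visited.append((state, action))
        if action == "flag":
            verdict = "flag"
            break
    correct = (verdict == "flag") == is_malicious
    return visited, (1.0 if correct else -1.0)

def train(dataset, episodes=10_000):
    for _ in range(episodes):
        trace, label = random.choice(dataset)
        visited, reward = run_episode(trace, label)
        for pair in set(visited):            # first-visit update
            N[pair] += 1
            Q[pair] += (reward - Q[pair]) / N[pair]

# Toy labelled traces ("beacon" behaviour is malicious here by construction).
data = [(["syn_burst", "beacon"], True), (["http_get", "dns"], False)]
train(data)
```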


2019 ◽  
Vol 70 (11) ◽  
pp. 3942-3946
Author(s):  
Gabriel Radulescu ◽  
Diana Cursaru

Obtaining commercial lubricating oils by an industrial method is a process of considerable complexity, requiring very careful attention to final product quality. In this field, any new mixing compound, any new additive, and any process improvement is more than welcome. Using so-called optimal mixing recipes, in order to obtain commercial lubricating oils from base oils and the corresponding additives, is a common way to lower production cost and increase product quality. This paper proposes an original software tool, developed by the authors, which produces these recipes based on explicitly given final mixture properties. The application is built around nonlinear programming and runs in the MATLAB® environment. It is a remarkably robust application, with good functionality and accuracy. Its performance has been proved both in theory and in practice, through laboratory experimental tests.
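
As a rough illustration of recipe optimization (not the authors' tool, which uses nonlinear programming), the simplest case, where mixture properties blend linearly in the component fractions, reduces to a linear program: minimize blend cost subject to property bounds and a unit-sum constraint. All component names and numbers below are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal blending sketch: choose base-oil/additive fractions x to minimise
# cost, subject to the mixture meeting a property target, assuming the
# property blends linearly (real mixing rules are often nonlinear).

cost = np.array([1.0, 1.4, 6.0])          # $/kg: base A, base B, additive
viscosity = np.array([30.0, 80.0, 50.0])  # property value of each component

# Mixture viscosity must be >= 46, written as -(v . x) <= -46 for linprog.
A_ub = np.array([-viscosity])
b_ub = np.array([-46.0])

A_eq = np.ones((1, 3))                    # fractions sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1), (0, 1), (0, 0.05)]      # cap the additive at 5 %

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)                     # optimal fractions and blend cost
```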


2019 ◽  
Vol 38 (2) ◽  
pp. 90-93
Author(s):  
K.S. Sahana ◽  
Ghulam Jeelani Qadiri ◽  
Prakash R.M. Saldanha

Introduction: Internship is a very critical period of medical undergraduate education, during which a student evolves into a doctor. The objective of this study was to assess interns at the end of their paediatric postings. Materials and Methods: Interns' knowledge and skills were assessed at the end of their postings in the must-know areas. Assessment was conducted by trained faculty, and interns were given an orientation about it. Methods of assessment included OSCE and simulation-based assessment using standardized patients and computer-based model-driven simulators. Feedback was given to the students immediately at the end of their exam. Results: A total of 202 interns participated in the exam over a period of two years. Newborn assessment was conducted most frequently (22.7%) and interpretation of investigations least frequently (7.9%); the remaining stations were assessed in roughly equal proportion. The highest score was observed in the vaccines section (7.5) and the lowest in procedures (5.5). Conclusion: Interns were found to be weaker in procedural, communication, and clinical scenario judgement skills, which will help in planning future training of interns.


Author(s):  
Daniel Süpke ◽  
Jorge Marx Gómez ◽  
Ralf Isenmann

Web 2.0 driven sustainability reporting describes an emerging digital approach, powered by Web 2.0 technologies, for companies communicating sustainability issues. Such a computer-based application overcomes the limitations of orthodox methods and provides an array of specific capabilities to improve sustainability communication both for companies (reporters) and their various stakeholders (report readers), that is, along interactivity, customisation, reporting à la carte, stakeholder dialogue, and participation. This chapter outlines this up-and-coming sustainability reporting approach along three categories: (i) media-specific trends in sustainability reporting are observed; (ii) new opportunities that Web 2.0 technologies offer for corporate sustainability reporting are identified; and (iii) the concept and implementation of a software tool for sustainability reporting à la carte are presented, making clear the movement away from early reporting stages towards the more advanced stage of a Web 2.0 driven approach.


Author(s):  
Edgard Benítez-Guerrero ◽  
Omar Nieva-García

The vast amounts of digital information stored in databases and other repositories represent a challenge for finding useful knowledge. Traditional methods for turning data into knowledge based on manual analysis reach their limits in this context, and for this reason, computer-based methods are needed. Knowledge Discovery in Databases (KDD) is the semi-automatic, nontrivial process of identifying valid, novel, potentially useful, and understandable knowledge (in the form of patterns) in data (Fayyad, Piatetsky-Shapiro, Smyth & Uthurusamy, 1996). KDD is an iterative and interactive process with several steps: understanding the problem domain, data preprocessing, pattern discovery, and pattern evaluation and usage. For discovering patterns, Data Mining (DM) techniques are applied.
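
A toy walk-through of those steps might look as follows, with illustrative choices at each stage: scaling as the preprocessing step, k-means clustering as the DM technique for pattern discovery, and a silhouette score as a crude pattern evaluation.

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Illustrative pass through the KDD steps named above; the "problem
# domain" here is simply finding natural groups in measurement data.
X, _ = load_iris(return_X_y=True)

# Step: data preprocessing - put features on a common scale.
X_scaled = StandardScaler().fit_transform(X)

# Step: pattern discovery - apply a DM technique (k-means clustering).
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Step: pattern evaluation - judge whether the patterns are valid/useful.
print("silhouette:", silhouette_score(X_scaled, model.labels_))
```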


2004 ◽  
Vol 41 (A) ◽  
pp. 273-280 ◽  
Author(s):  
Marvin K. Nakayama ◽  
Perwez Shahabuddin ◽  
Karl Sigman

Using a known fact that a Galton–Watson branching process can be represented as an embedded random walk, together with a result of Heyde (1964), we first derive finite exponential moment results for the total number of descendants of an individual. We use this basic and simple result to prove analogous results for the population size at time t and the total number of descendants by time t in an age-dependent branching process. This has applications in justifying the interchange of expectation and derivative operators in simulation-based derivative estimation for generalized semi-Markov processes. Next, using the result of Heyde (1964), we show that, in a stable GI/GI/1 queue, the length of a busy period and the number of customers served in a busy period have finite exponential moments if and only if the service time does.
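
For context, the embedded-random-walk representation invoked here can be stated in its standard form (the paper's precise use of Heyde (1964) is not reproduced):

```latex
Let $\xi_1, \xi_2, \dots$ be i.i.d. copies of the offspring count and set
\[
  S_0 = 1, \qquad S_n = S_{n-1} + \xi_n - 1 .
\]
Then the total progeny (the total number of individuals ever born,
counting the ancestor) is the hitting time
\[
  T = \inf\{\, n \ge 1 : S_n = 0 \,\},
\]
so exponential moment bounds for $T$ follow from hitting-time results
for the random walk with step distribution $\xi - 1$.
```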


2014 ◽  
Vol 15 (6) ◽  
pp. 513-526
Author(s):  
Wanjing Xiu ◽  
Yuan Liao

Transmission lines are essential components of electric power grids. Diverse power system applications and simulation-based studies require transmission line parameters, including series resistance, reactance, and shunt susceptance; accurate parameters are pivotal in ensuring the accuracy of analyses and reliable system operation. Commercial software packages for performing power system studies usually have their own databases that store the power system model, including line parameters. When there is a physical system model change, the corresponding component in the database of the software packages needs to be modified. Manually updating line parameters is tedious and error-prone. This paper proposes a solution for streamlining the calculation of line parameters and the updating of their values in the respective software databases. The algorithms used for calculating the values of line parameters are described. The software developed for implementing the solution is described, and typical results are presented. The proposed solution was developed for a utility and has the potential to be put into use by other utilities.
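
The paper's algorithms are not reproduced in this abstract; as a minimal sketch of what such a calculation involves, the textbook formulas for a transposed three-phase line give per-unit-length parameters from conductor geometry. A utility-grade tool would also handle bundled conductors, earth return, and temperature-dependent resistance; the numbers below are illustrative.

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi        # vacuum permeability, H/m

def line_parameters(gmd_m, gmr_m, radius_m, r_ohm_per_m, freq_hz=60.0):
    """Per-phase, per-metre R, X, B of a transposed three-phase line
    (standard textbook formulas, not the paper's algorithms)."""
    # Series inductance and reactance.
    L = (MU0 / (2 * math.pi)) * math.log(gmd_m / gmr_m)   # H/m
    x = 2 * math.pi * freq_hz * L                          # ohm/m
    # Shunt capacitance and susceptance.
    C = 2 * math.pi * EPS0 / math.log(gmd_m / radius_m)    # F/m
    b = 2 * math.pi * freq_hz * C                          # S/m
    return r_ohm_per_m, x, b

# Example: 10 m GMD with typical conductor values (illustrative numbers).
r, x, b = line_parameters(gmd_m=10.0, gmr_m=0.0095, radius_m=0.0120,
                          r_ohm_per_m=7e-5)
print(f"R={r:.2e} ohm/m  X={x:.2e} ohm/m  B={b:.2e} S/m")
```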


2018 ◽  
Author(s):  
Taylor Royalty ◽  
Andrew D. Steen

We applied simulation-based approaches to characterize how microbial community structure influences the amount of sequencing effort required to reconstruct metagenomes assembled from short-read sequences. An initial analysis evaluated the quantity, completion, and contamination of complete-metagenome-assembled-genome (complete-MAG) equivalents, a bioinformatic-pipeline-normalized metric for MAG quantity, as a function of sequencing effort on four preexisting sequence read datasets taken from a maize soil, an estuarine sediment, the surface ocean, and the human gut. These datasets were subsampled to varying degrees of completeness in order to simulate the effect of sequencing effort on MAG retrieval. Modeling suggested that sequencing efforts beyond what is typical in published experiments (1 to 10 Gbp) would generate diminishing returns in terms of MAG binning. A second analysis explored the theoretical relationship between sequencing effort and the proportion of available metagenomic DNA sequenced during a sequencing experiment as a function of community richness, evenness, and genome size. Simulations from this analysis demonstrated that while community richness and evenness influenced the amount of sequencing required to sequence a community metagenome to exhaustion, the effort necessary to sequence an individual genome to a target fraction of exhaustion depended only on the relative abundance of the corresponding organism and its genome size. A software tool, GRASE, was created to help investigators further explore this relationship. Re-evaluating the relationship between sequencing effort and binning success in the context of the relative abundance of genomes, as opposed to base pairs, provides a framework for designing sequencing experiments based on the relative abundance of microbes in an environment rather than on arbitrary levels of sequencing effort.
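
The key claim of the second analysis can be illustrated with the classical Lander-Waterman coverage expectation, a simplification assumed for this sketch (GRASE's exact formulation is described in the paper, not reproduced below): the expected fraction of a genome recovered depends only on its mean depth, which is total sequencing effort times relative abundance divided by genome size.

```python
import math

def expected_fraction_covered(effort_bp, rel_abundance, genome_bp):
    """Expected fraction of one genome recovered under the classical
    Lander-Waterman model (an assumption for illustration, not
    necessarily GRASE's formulation).
    effort_bp: total base pairs sequenced for the whole community."""
    coverage = effort_bp * rel_abundance / genome_bp   # mean depth
    return 1.0 - math.exp(-coverage)

def effort_for_target(target_fraction, rel_abundance, genome_bp):
    """Invert the model: effort needed to cover a target fraction of one
    genome. Depends only on relative abundance and genome size, matching
    the abstract's observation."""
    coverage = -math.log(1.0 - target_fraction)
    return coverage * genome_bp / rel_abundance

# Example: 95 % of a 4 Mbp genome at 1 % relative abundance.
print(f"{effort_for_target(0.95, 0.01, 4e6):.2e} bp")   # ~1.2e9 bp
```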

