MEnDiGa: A Minimal Engine for Digital Games

2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Filipe M. B. Boaventura ◽  
Victor T. Sarinho

Game engines create a strong dependence of the games developed on the implementation resources they provide. Feature modeling is a technique that captures the commonalities and variabilities identified during domain analysis to provide a basis for the automated configuration of concrete products. This paper presents the Minimal Engine for Digital Games (MEnDiGa), a simplified collection of game assets based on game features that is capable of building small and casual games regardless of their implementation resources. It provides minimal features in a representative hierarchy of spatial and game elements, along with basic behaviors and event support related to game logic features. It also provides code modules to represent, interpret, and adapt game features, enabling the execution of configured games on multiple game platforms. As a proof of concept, a clone of the Doodle Jump game was developed using MEnDiGa assets and compared with the original game version. As a result, a new G-factor-based approach to game construction is provided, which is able to separate the core game elements from the implementation itself in an independent, reusable, and large-scale way.

2008 ◽  
Vol 2008 ◽  
pp. 1-7 ◽  
Author(s):  
Ahmed BinSubaih ◽  
Steve Maddock

Game assets are portable between games. The games themselves are, however, dependent on the game engine they were developed on. Middleware has attempted to address this by, for instance, separating the AI from the core game engine. Our work takes this further by separating the game from the game engine, making it portable between game engines. The game elements that we make portable are the game logic, the object model, and the game state, which represent the game's brain and which we collectively refer to as the game factor, or G-factor. We achieve this using an architecture based on a service-oriented approach. We present an overview of this architecture and its use in developing games. The evaluation demonstrates that the architecture does not affect performance unduly, adds little development overhead, is scalable, and supports modifiability.


Author(s):  
Willem Vos ◽  
Petter Norli ◽  
Emilie Vallee

This paper describes a novel technique for the detection of cracks in pipelines. The proposed in-line inspection (ILI) technique has the ability to detect crack features at arbitrary angles in the pipeline: axial, circumferential, and any angle in between. This ability is novel to the current ILI technology offering and also adds value by detecting cracks in deformed pipes (i.e. in dents) and cracks associated with the girth weld (mid-weld cracks, rapid-cooling cracks, and cracks parallel to the weld). Furthermore, the technology is suitable for detecting cracks in spiral-welded pipes, both parallel and perpendicular to the spiral weld. Integrity issues around most of the features described above are not addressed by current ILI tools, often forcing operators to perform hydrostatic tests to ensure pipeline safety. The technology described here is based on wideband ultrasound in-line inspection tools that are already in operation. They are designed for the inspection of structures operating in challenging environments, such as offshore pipelines. Adjustments to the front-end analog system and data collection from a grid of transducers allow the tools to detect cracks in any orientation in the line. Changes to the test set-up are described, along with the theoretical background behind crack detection. The historical development of the technology is presented, including early laboratory testing and proof of concept. The proof-of-concept data are compared with the theoretical predictions. A detailed set of results is presented, from tests performed on samples sourced from North America and Europe that contain SCC features. Results from ongoing testing are also presented, involving large-scale testing on SCC features in gas-filled pipe spools.


2016 ◽  
Vol 17 (3) ◽  
pp. 913-938 ◽  
Author(s):  
Daniela Rabiser ◽  
Herbert Prähofer ◽  
Paul Grünbacher ◽  
Michael Petruzelka ◽  
Klaus Eder ◽  
...  

Author(s):  
Adilson Vahldick ◽  
Maria J. Marcelino ◽  
António J. Mendes

Casual games are characterized by their fast learning curve. Casual game tasks are usually short and of increasing difficulty. This seems an interesting approach for learning and practicing introductory computer programming concepts, particularly for students who face difficulties. Many of the serious games intended to support computer programming learning are commercial and aimed at children. Also, only a few of those described in the literature are available to teachers. This chapter describes the development of a new game that aims to support introductory computer programming learning, and its pilot study with three undergraduate introductory classes. The chapter proposes a set of design principles that might be useful in the development of casual games to support computer programming learning. These principles resulted from the experiment and include game features that were considered important to engage students and to improve some students' computer programming skills.


Author(s):  
Vincent Breton ◽  
Eddy Caron ◽  
Frederic Desprez ◽  
Gael Le Mahec

As grids become more and more attractive for solving complex problems with high computational and storage requirements, bioinformatics is starting to be ported to large-scale platforms. The BLAST kernel, one of the main cornerstones of high-performance genomics, was one of the first applications ported to such platforms. However, while a simple parallelization was enough for the first proof of concept, its use on production platforms requires more optimized algorithms. In this chapter, we review existing parallelization and “gridification” approaches as well as related issues such as data management and replication, and present a case study using the DIET middleware over the Grid’5000 experimental platform.


Author(s):  
Imran Muhammad ◽  
Fatemeh Hoda Moghimi ◽  
Nyree J. Taylor ◽  
Bernice Redley ◽  
Lemai Nguyen ◽  
...  

Based on initial pre-clinical data and results from focus group studies, proof of concept for an intelligent operational planning and support tool (IOPST) for nursing in acute healthcare contexts has been demonstrated. However, moving from a simulated context to a large-scale clinical trial brings potential challenges associated with the many complexities and multiple people-technology interactions involved. To enable an in-depth and rich analysis of such a context, this paper contends that an Actor-Network Theory (ANT) lens is a prudent analytical option, as discussed below.


2017 ◽  
Vol 31 (4) ◽  
pp. 73-102 ◽  
Author(s):  
Abhijit Banerjee ◽  
Rukmini Banerji ◽  
James Berry ◽  
Esther Duflo ◽  
Harini Kannan ◽  
...  

The promise of randomized controlled trials is that evidence gathered through the evaluation of a specific program helps us, possibly after several rounds of fine-tuning and multiple replications in different contexts, to inform policy. However, critics have pointed out that a potential constraint in this agenda is that results from small “proof-of-concept” studies run by nongovernment organizations may not apply to policies that can be implemented by governments on a large scale. After discussing the potential issues, this paper describes the journey from the original concept to the design and evaluation of scalable policy. We do so by evaluating a series of strategies that aim to integrate the nongovernment organization Pratham’s “Teaching at the Right Level” methodology into elementary schools in India. The methodology consists of reorganizing instruction based on children’s actual learning levels, rather than on a prescribed syllabus, and has previously been shown to be very effective when properly implemented. We present evidence from randomized controlled trials involving some designs that failed to produce impacts within the regular schooling system but still helped shape subsequent versions of the program. As a result of this process, two versions of the program were developed that successfully raised children’s learning levels using scalable models in government schools. We use this example to draw general lessons about using randomized controlled trials to design scalable policies.


Author(s):  
Christian Rauch ◽  
Thomas Hörmann ◽ 
Sebastian Jagsch ◽  
Raimund Almbauer

Much attention has been paid recently by research and development engineers to performing multi-physics calculations. One way to do this is to couple commercial tools for examining complex systems. Since the proposal of a software architecture for coupling programs, published in a previous paper, significant changes have led to improved performance for large-scale industrial applications. This architecture is described, and as a proof of concept a simulation is conducted by coupling two commercial solvers. The speed-up of the new system is presented. The simulation results are then compared with measurements of surface temperatures of the exhaust system of an actual sports utility vehicle (SUV), and conclusions are drawn. The proposed architecture is easily adaptable to various programs, as it is implemented in C++ and changes for a specific code can be restricted to a few classes.


2019 ◽  
Vol 875 ◽  
Author(s):  
Jianqing Huang ◽  
Hecong Liu ◽  
Weiwei Cai

Online in situ prediction of 3-D flame evolution has long been desired and is considered the Holy Grail of the combustion community. Recent advances in computational power have facilitated the development of computational fluid dynamics (CFD), which can be used to predict flame behaviours. However, even the most advanced CFD techniques are still incapable of online in situ prediction of practical flames due to the enormous computational costs involved. In this work, we aim to combine a state-of-the-art experimental technique (time-resolved volumetric tomography) with deep learning algorithms for rapid prediction of 3-D flame evolution. Proof-of-concept experiments suggest that the evolution of both a laminar diffusion flame and a typical non-premixed turbulent swirl-stabilized flame can be predicted faithfully on a time scale of the order of milliseconds, which can be further reduced by simply using a few more GPUs. We believe this is the first time that online in situ prediction of 3-D flame evolution has become feasible, and we expect this method to be extremely useful, as in most application scenarios the online in situ prediction of even the large-scale flame features is already useful for effective flame control.


2020 ◽  
Vol 9 (2) ◽  
pp. 163-188
Author(s):  
Martti Havukainen ◽  
Teemu H. Laine ◽  
Timo Martikainen ◽  
Erkki Sutinen
