Convergence: Commodity flight simulation and the future

2008 ◽  
Vol 112 (1136) ◽  
pp. 599-607
Author(s):  
K. Takeda ◽  
S. J. Newman ◽  
J. Kenny ◽  
M. Zyskowski

Abstract: The development of commodity flight simulation, in the form of PC game technology, continues to advance at a rapid pace. Indeed, the software industry is now driven primarily by the requirements of gaming, digital media, and other entertainment applications. This is largely due to the commoditisation of computer hardware, which is apparent in recent trends in central processing unit and graphics processor development. The flight simulation industry has benefited from this trend of hardware commoditisation, and will continue to do so for the foreseeable future. It has, however, yet to fully realise the potential for leveraging commodity-off-the-shelf (COTS) software. In this paper the opportunities presenting themselves for the next 25 years of flight simulation are discussed, as the requirements of the aviation and games software industries converge. A SWOT (strengths-weaknesses-opportunities-threats) analysis of the commodity flight simulation software industry is presented, covering flight modelling, scenery generation, multiplayer technology, artificial intelligence, mission planning, and event handling. Issues such as data portability, economics, licensing, intellectual property, interoperability, developer extensibility, robustness, qualification, and maintainability are addressed. Microsoft Flight Simulator is used as a case study of how commodity flight simulation has been extended to include extensive programmatic access to its core engine. Examples are given of how the base platform of this application can be extended by third-party developers and of the power this extensibility model provides to the industry. The paper highlights particular technology trends in the commodity flight simulation industry and the fidelity that commodity flight simulations can provide, and gives a high-level overview of their strengths and weaknesses.
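As an illustration of the extensibility model described above, the minimal Python sketch below shows how a third-party add-on might read and react to simulation variables through a SimConnect-style interface. The `SimClient` class and its methods are hypothetical stand-ins, not the actual SimConnect API; only the variable names are modelled on documented simulation variables, and the stubbed state dictionary replaces a live simulator connection.

```python
class SimClient:
    """Hypothetical stand-in for a SimConnect-style connection, stubbed with a dict."""

    def __init__(self, state):
        self._state = dict(state)

    def get(self, variable):
        # A real add-on would request this variable from the running simulator.
        return self._state[variable]

    def set(self, variable, value):
        # A real add-on would write the value back to the simulator.
        self._state[variable] = value


def altitude_callout(sim, threshold_ft=1000.0):
    """Example add-on logic: issue a callout when descending through a threshold."""
    altitude = sim.get("PLANE ALTITUDE")        # variable names modelled on SimConnect variables
    vertical_speed = sim.get("VERTICAL SPEED")  # negative when descending
    if altitude < threshold_ft and vertical_speed < 0:
        return f"{int(threshold_ft)} feet"
    return ""


sim = SimClient({"PLANE ALTITUDE": 950.0, "VERTICAL SPEED": -8.0})
print(altitude_callout(sim))  # "1000 feet"
```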

2016 ◽  
Vol 6 (1) ◽  
pp. 79-90
Author(s):  
Łukasz Syrocki ◽  
Grzegorz Pestka

Abstract: A ready-to-use set of functions is provided to facilitate solving the generalized eigenvalue problem for symmetric matrices, in order to efficiently calculate eigenvalues and eigenvectors using Compute Unified Device Architecture (CUDA) technology from NVIDIA. An integral part of CUDA is a high-level programming environment that enables tracking of code executed both on the central processing unit and on the graphics processing unit. The presented matrix structures allow an analysis of the advantages of using graphics processors in such calculations.
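For a concrete reference point, the sketch below solves the same generalized symmetric eigenvalue problem, A x = λ B x, on the CPU with SciPy; the functions described in the paper perform the equivalent computation with CUDA on the GPU. The matrix size and random data are illustrative only.

```python
import numpy as np
from scipy.linalg import eigh

# Build a small symmetric A and symmetric positive-definite B for A x = lambda B x.
rng = np.random.default_rng(0)
n = 512
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)  # ensure positive definiteness

# Generalized symmetric eigenproblem on the CPU; the paper's contribution is
# performing the equivalent computation on the GPU with CUDA.
eigenvalues, eigenvectors = eigh(A, B)

# Residual check: ||A V - B V diag(lambda)|| should be small.
residual = np.linalg.norm(A @ eigenvectors - B @ eigenvectors * eigenvalues)
print(f"max eigenvalue: {eigenvalues[-1]:.4f}, residual norm: {residual:.2e}")
```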


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
David Couturier ◽  
Michel R. Dagenais

As computation schemes evolve and many new tools become available to programmers to enhance the performance of their applications, many programmers have started to look towards highly parallel platforms such as the graphics processing unit (GPU). Offloading computations that can take advantage of the GPU architecture is a technique that has proven fruitful in recent years. This technology enhances the speed and responsiveness of applications. As a side effect, it also reduces the power requirements of those applications, and therefore extends portable devices' battery life and helps computing clusters run more power efficiently. Many performance analysis tools such as LTTng, strace, and SystemTap already allow central processing unit (CPU) tracing and help programmers use CPU resources more efficiently. On the GPU side, tools such as Nvidia's Nsight, AMD's CodeXL, and the third-party TAU and VampirTrace allow tracing of Application Programming Interface (API) calls and OpenCL kernel execution. These tools are useful but completely separate, and none of them allows a unified CPU-GPU tracing experience. We propose an extension to the existing scalable and highly efficient LTTng tracing platform to allow unified tracing of the GPU alongside LTTng's full CPU tracing capabilities.
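The conceptual pattern behind API-level tracing, recording an entry and an exit event with timestamps around each call, can be sketched in a few lines. The sketch below is illustrative only: real tracers such as LTTng use low-overhead static tracepoints and binary trace buffers, not Python decorators, and the traced function here is a stand-in for an OpenCL enqueue call.

```python
import functools
import time

TRACE_LOG = []  # a real tracer flushes events to an efficient on-disk trace format


def traced(api_name):
    """Record entry/exit timestamps around an API call, mimicking how a userspace
    tracer instruments GPU driver calls. Conceptual only, not the LTTng API."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            TRACE_LOG.append((api_name, "entry", time.perf_counter_ns()))
            try:
                return func(*args, **kwargs)
            finally:
                TRACE_LOG.append((api_name, "exit", time.perf_counter_ns()))
        return wrapper
    return decorator


@traced("clEnqueueNDRangeKernel")
def enqueue_kernel(work_items):
    # Stand-in for a GPU kernel launch.
    return sum(range(work_items))


enqueue_kernel(100_000)
for event in TRACE_LOG:
    print(event)
```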


2021 ◽  
Vol 12 ◽  
Author(s):  
Sergio Gálvez ◽  
Federico Agostini ◽  
Javier Caselli ◽  
Pilar Hernandez ◽  
Gabriel Dorado

New high-performance computing architectures have recently been developed for commercial central processing units (CPUs). Yet, this has not improved the execution time of widely used bioinformatics applications such as BLAST+. This is due to a lack of optimization between the foundations of the existing algorithms and the internals of the hardware, which prevents full advantage being taken of the available CPU cores. To exploit the new architectures, algorithms must be revised and redesigned; usually, they are rewritten from scratch. BLVector adapts the high-level concepts of BLAST+ to x86 architectures with AVX-512 to harness their capabilities. A deep, comprehensive study has been carried out to optimize the approach, with a significant reduction in execution time. BLVector reduces the execution time of BLAST+ when aligning up to mid-size protein sequences (∼750 amino acids). The gain in real-scenario cases is 3.2-fold. When applied to longer proteins, BLVector consumes more time than BLAST+, but retrieves a much larger set of results. BLVector and BLAST+ are fine-tuned heuristics. Therefore, the relevant results returned by both are the same, although they behave differently, especially when performing alignments with low scores. Hence, they can be considered complementary bioinformatics tools.
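The data-parallel idea BLVector relies on, scoring many residue comparisons per instruction, can be illustrated with a small vectorized sketch. The example below uses NumPy array operations as a stand-in for AVX-512 lanes; the toy alphabet and substitution matrix are made up for illustration and are not the tool's actual data structures.

```python
import numpy as np

# Toy alphabet and substitution matrix (stand-in for BLOSUM); the real tool works
# on the full protein alphabet with 512-bit registers rather than NumPy arrays.
ALPHABET = "ACDE"
SUB = np.array([[ 4, -1, -2, -1],
                [-1,  9, -3, -4],
                [-2, -3,  6,  2],
                [-1, -4,  2,  5]], dtype=np.int32)


def encode(seq):
    return np.fromiter((ALPHABET.index(c) for c in seq), dtype=np.int64)


def ungapped_diagonal_score(query, subject, offset):
    """Score one ungapped diagonal: many residue pairs are scored in a single
    vectorized operation, the same pattern an AVX-512 implementation exploits."""
    q, s = encode(query), encode(subject)
    n = min(len(q), len(s) - offset)
    return int(SUB[q[:n], s[offset:offset + n]].sum())


print(ungapped_diagonal_score("ACDE", "CACDEA", offset=1))  # 24
```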


JURTEKSI ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 47-52
Author(s):  
Gus Oka Ciptahadi

Abstract: The Introduction to Information Technology practicum is a general course found at every IT campus in Indonesia. To introduce hardware to students, lecturers always need a considerable amount of time, because they must open the central processing unit (CPU) case and remove other components so that they can be explained to students. The inefficiency of opening and removing computer hardware components while the course is in progress is the main problem the researchers address in this study. In addition, the computer components to be explained are quite small, so students cannot see them clearly during the teaching and learning process. The proposed solution uses 3D animation technology, which provides a more realistic appearance. A study entitled SINDO JOURNALIST 3D FILM ANIMATION, in the journal Art Design and Culture, volume 2, no. 1, March 2017, which the researchers reviewed, likewise explains that the use of 3D design technology can produce vivid and realistic visuals close to their original form [1]. The 3D animation in this study was developed using the Research and Development (R&D) method. The researchers conclude that they have succeeded in creating a 3D-animation-based information medium for introducing computer hardware in the Introduction to Information Technology practicum, with the resulting animation tested using black-box testing. Keywords: animation; hardware; research and development


2017 ◽  
Vol 4 (2) ◽  
pp. 69-74
Author(s):  
Fransiscus Ati Halim

The purpose of this research is to produce simulation software capable of processing interrupt instructions and I/O operations, so that in the future it can contribute to developing a kernel. Interrupt and I/O operations are necessary in the development of a kernel system. The kernel is the medium through which hardware and software communicate. However, little application software exists to help learners understand the interrupt process. In managing the hardware, conditions sometimes arise in the system that need the attention of the processor, or in this case the kernel managing the hardware. In response to such a condition, the system issues an interrupt request to handle it. I/O operations are needed because a computer system consists not only of a CPU and memory but also of other devices such as I/O devices. This paper elaborates application software for learning about interrupts. With interrupt instructions and I/O operations included, the simulation program better represents the processes that happen in a real computer. In this case, the program is able to run interrupt instructions and I/O operations, and other changes run as expected. In line with its main purpose, this simulation may lead towards developing a kernel for an operating system. The results of instruction testing show that 90% of instructions run properly. When executing instructions, the simulation program still has a bug following the execution of Jump and conditional Jump. Index Terms: Interrupt; I/O; Kernel; Operating System
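To make the interrupt-handling behaviour concrete, the sketch below shows a toy fetch-execute loop that services a pending interrupt before each instruction and performs a simple I/O write. All names and opcodes are illustrative assumptions, not the simulator described in the paper, which models x86-style instructions in far more detail.

```python
class ToyCPU:
    """Minimal fetch-execute loop that services interrupts between instructions."""

    def __init__(self, program, interrupt_vector):
        self.program = program                    # list of (opcode, operand) tuples
        self.interrupt_vector = interrupt_vector  # irq number -> handler function
        self.pending_irqs = []
        self.pc = 0
        self.acc = 0
        self.io_port = {}                         # simple memory-mapped I/O ports

    def raise_irq(self, irq):
        self.pending_irqs.append(irq)

    def step(self):
        # 1. Service a pending interrupt before fetching the next instruction.
        if self.pending_irqs:
            self.interrupt_vector[self.pending_irqs.pop(0)](self)
        # 2. Fetch and execute.
        opcode, operand = self.program[self.pc]
        self.pc += 1
        if opcode == "LOAD":
            self.acc = operand
        elif opcode == "ADD":
            self.acc += operand
        elif opcode == "OUT":                     # I/O operation: write accumulator to a port
            self.io_port[operand] = self.acc
        elif opcode == "HALT":
            return False
        return True


def timer_handler(cpu):
    cpu.io_port["timer_ticks"] = cpu.io_port.get("timer_ticks", 0) + 1


cpu = ToyCPU([("LOAD", 5), ("ADD", 3), ("OUT", 0x60), ("HALT", 0)], {0: timer_handler})
cpu.raise_irq(0)
while cpu.step():
    pass
print(cpu.io_port)  # {'timer_ticks': 1, 96: 8}
```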


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1239
Author(s):  
Yung-Hao Tung ◽  
Hung-Chuan Wei ◽  
Yen-Wu Ti ◽  
Yao-Tung Tsou ◽  
Neetesh Saxena ◽  
...  

Software-defined networking (SDN) is a new networking architecture with a centralized control mechanism. SDN has proven successful in improving not only network performance but also security. However, centralized control in the SDN architecture is associated with new security vulnerabilities. In particular, user datagram protocol (UDP) flooding attacks can be easily launched and cause serious packet-transmission delays, controller-performance loss, and even network shutdown. With applications in the Internet of Things (IoT) field in mind, this study considers UDP flooding attacks in SDN and proposes two lightweight countermeasures. The first method sometimes sacrifices address resolution protocol (ARP) requests to achieve a high level of security. In the second method, although some packets must be sacrificed during an attack before the defense starts, detection of the network state can prevent normal packets from being sacrificed. When a network attack is blocked, traffic from the affected port is blocked directly without affecting normal ports. The performance and security of the proposed methods were confirmed by means of extensive experiments. Compared with implementing no defense, or implementing similar defense methods, after a simulated UDP flooding attack our proposed method performed better in terms of available bandwidth, central processing unit (CPU) consumption, and network delay time.
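A conceptual sketch of the per-port, threshold-based blocking idea follows. It is a simplified Python illustration with made-up thresholds, not the authors' exact countermeasures and not tied to any particular SDN controller API, but it shows how only the affected ingress port is blocked while normal ports keep forwarding.

```python
from collections import defaultdict


class UdpFloodGuard:
    """Conceptual per-port UDP flood detector: count packets per monitoring window
    and block any port that exceeds a threshold. Illustrative only; the paper's
    countermeasures act inside an SDN controller on flow rules."""

    def __init__(self, threshold_pps=1000, window_s=1.0):
        self.threshold = threshold_pps * window_s
        self.counts = defaultdict(int)
        self.blocked = set()

    def on_udp_packet(self, ingress_port):
        if ingress_port in self.blocked:
            return "drop"
        self.counts[ingress_port] += 1
        if self.counts[ingress_port] > self.threshold:
            self.blocked.add(ingress_port)   # only the offending port is blocked
            return "drop"
        return "forward"

    def end_of_window(self):
        self.counts.clear()                  # reset counters each monitoring window


guard = UdpFloodGuard(threshold_pps=5, window_s=1.0)
actions = [guard.on_udp_packet(ingress_port=3) for _ in range(8)]
print(actions)  # first five forwarded, remainder dropped
```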


2011 ◽  
Vol 130-134 ◽  
pp. 1085-1091
Author(s):  
Cheng Ma ◽  
Tian Yuan Xiao ◽  
Wen Hui Fan ◽  
Hong Bo Sun ◽  
Ying Chao Yue

As a well-known standard for distributed simulation, the High Level Architecture (HLA) has been adopted as the basic framework of most distributed interactive simulation (DIS) systems. At the same time, DIS always involves simulation models from multiple disciplines, supported by different software packages, and this software is not always compatible with HLA. For example, although widely used for mechanical kinetics and kinematics simulations, ADAMS, a multi-body kinetics simulation package, cannot directly support HLA. To address this issue, this paper analyses the redevelopment of legacy systems and models (such as ADAMS models) in a DIS environment and proposes two encapsulation methods, based on third-party software and on user-defined subroutines respectively. A case study demonstrates the feasibility of the proposed methods, and a brief comparison is given in the conclusion.
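The encapsulation idea can be sketched as a wrapper that advances a legacy model in lock step with federation time and publishes its state. The sketch below is a hypothetical Python illustration: the class and method names are assumptions, and a real implementation would go through an HLA runtime infrastructure and the ADAMS co-simulation or user-subroutine interfaces.

```python
class LegacyAdamsModel:
    """Stand-in for a legacy multi-body kinetics model driven through an external
    interface; the real coupling would use third-party co-simulation software or
    user-defined subroutines."""

    def __init__(self):
        self.position = 0.0
        self.velocity = 1.0

    def advance(self, dt):
        self.position += self.velocity * dt
        return {"position": self.position, "velocity": self.velocity}


class FederateWrapper:
    """Hypothetical federate-style wrapper: advances the legacy model with
    federation time and publishes its outputs as attribute updates."""

    def __init__(self, model, publish):
        self.model = model
        self.publish = publish  # callback standing in for RTI attribute updates

    def time_advance(self, federation_time, dt):
        state = self.model.advance(dt)
        self.publish(federation_time, state)


updates = []
wrapper = FederateWrapper(LegacyAdamsModel(), lambda t, s: updates.append((t, s)))
for step in range(3):
    wrapper.time_advance(federation_time=step * 0.1, dt=0.1)
print(updates)
```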


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4043-4046

Objectives: DLR’s real-time human-in-the-loop Space Flight Simulator needed an enhancement of its transonic and supersonic behavior for an advanced concept of a suborbital, hypersonic, winged passenger transport called the SpaceLiner. Methods/Statistical analysis: A simulation model with geometry-modeled flight dynamics has been developed for the commercial flight simulation software X-Plane. The presented solution is based on a real-time flight-dynamics corrector application that takes table-based aerodynamic coefficients from Computational Fluid Dynamics (CFD) model experiments and overwrites X-Plane’s internal flight dynamics in the supersonic and hypersonic regimes. Findings: Although compressible flow effects are considered using the Prandtl-Glauert correction, the SpaceLiner X-Plane simulation model needed deeper investigation of its transonic and supersonic behavior, given that transonic effects in X-Plane amount only to an empirical Mach-divergent drag increase and that, under supersonic conditions, the airfoil is treated as a diamond shape of appropriate thickness ratio. Whereas X-Plane’s internal flight simulation engine delivers a high level of realism under subsonic conditions, significant deviations from the SpaceLiner aerodynamic reference database were identified in the supersonic and hypersonic regimes. Improved accuracy was observed for two Mach test cases when the corrector application was used. Using X-Plane while maintaining constant accuracy throughout the subsonic, supersonic, and hypersonic regimes can be achieved with the presented corrector application. Application/Improvements: X-Plane’s wireframe model approach was successfully fused with table-based lookup processing, delivering a constant high level of realism throughout the whole Mach range.
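The corrector's central mechanism, replacing the simulator's internal coefficients with table-based CFD data, can be illustrated with a minimal interpolation sketch. The Mach grid and drag-coefficient values below are placeholders, not SpaceLiner data, and the single-variable lookup stands in for the full multi-dimensional coefficient tables.

```python
import numpy as np

# Placeholder drag-coefficient table indexed by Mach number (illustrative values).
# The real corrector interpolates multi-variable CFD coefficient tables and
# overwrites X-Plane's internal flight dynamics with the result.
MACH_GRID = np.array([0.8, 1.2, 2.0, 5.0, 10.0])
CD_TABLE  = np.array([0.030, 0.055, 0.045, 0.035, 0.030])


def corrected_drag(mach, dynamic_pressure_pa, reference_area_m2):
    """Interpolate the drag coefficient from the table and return drag in newtons."""
    cd = np.interp(mach, MACH_GRID, CD_TABLE)
    return cd * dynamic_pressure_pa * reference_area_m2


print(f"{corrected_drag(mach=3.0, dynamic_pressure_pa=20_000, reference_area_m2=900):.0f} N")
```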


2021 ◽  
Vol 15 (1) ◽  
pp. 20
Author(s):  
Robert Karamagi

Phishing has become the most convenient technique that hackers use nowadays to gain access to protected systems. This is because cybersecurity has evolved: even low-cost systems with minimal security investment now require quite advanced and sophisticated mechanisms to penetrate technically. Systems today are equipped with at least some level of security, imposed by security firms with a very high level of expertise in managing common and well-known attacks, and this decreases the possible technical attack surface. Nation-states or advanced persistent threats (APTs), organized crime, and black hats possess the finances and skills to penetrate many different systems. However, they are always in need of computing resources, such as central processing unit (CPU) time and random-access memory (RAM), so they normally hack and hook computers into a botnet. This may allow them to perform dangerous distributed denial of service (DDoS) attacks and run brute-force cracking algorithms, which are highly CPU intensive. They may also use the zombie or drone systems they have hacked to hide their location on the net and gain anonymity by bouncing between those systems many times a minute. Phishing allows them to extend their pool of compromised systems and increase their power. For an ordinary hacker without the money to invest in sophisticated techniques, exploiting the human factor, the weakest link in security, comes in handy. The possibility of successfully manipulating a human into releasing the security that they set up makes the hacker's life very easy, because they do not have to break into the system by force; the owner will simply open the door for them. The objective of this research is to review factors that enhance phishing and improve the probability of its success. We have found that hackers rely on triggering the emotional responses of their victims through their phishing attacks. We have applied artificial intelligence to detect the emotion associated with a phrase or sentence. Our model achieved good accuracy, which could be improved with a larger dataset containing more emotional sentiments for various phrases and sentences. Our technique may be used to check for emotional manipulation in suspicious emails and improve the confidence interval of suspected phishing emails.
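As an illustration of the kind of emotion classification described, the sketch below trains a tiny text classifier with scikit-learn and applies it to a suspicious phrase. The training sentences, labels, and model choice are assumptions for demonstration; they are not the dataset or model used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; a real model would use a large labeled corpus of
# emotional sentiments, as the study notes its accuracy would improve with one.
texts = [
    "Your account will be suspended immediately unless you act now",
    "Congratulations, you have won a prize, claim it today",
    "Please find attached the minutes from yesterday's meeting",
    "The quarterly report is ready for your review",
]
labels = ["fear", "excitement", "neutral", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

suspect = ["Urgent: verify your password now or lose access to your account"]
print(model.predict(suspect))  # emotion label used as a phishing-manipulation signal
```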


Author(s):  
Doruk Bozağaç ◽  
Gülşah Karaduman ◽  
Ahmet Kara ◽  
M Nedim Alpdemir

In this paper we introduce a framework for parallel and distributed execution of simulations (Sim-PETEK), a middleware for minimizing the total run time of batch runs and Monte Carlo trials. Sim-PETEK proposes a generic solution for applications in the simulation domain, improving on our previous work on parallelizing simulation runs in a single-node, multiple-central-processing-unit (CPU) setting. Our new framework aims at managing a heterogeneous computational resource pool consisting of multiple CPU nodes distributed over a potentially geographically dispersed network, through a service-oriented middleware layer compliant with the Web Services Resource Framework standard, thereby providing a scalable and flexible architecture for simulation software developers. What differentiates Sim-PETEK from a general-purpose, Grid-based job-distribution middleware is a number of simulation-specific aspects regarding the specification, distribution, monitoring, result collection, and aggregation of simulation runs. These aspects are prevalent in the structure of the messages and in the protocol of interaction, both among the constituent services of the framework and within the interfaces exposed to external clients.
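The run-distribution and aggregation pattern that Sim-PETEK generalizes can be sketched with a local worker pool. The example below farms out Monte Carlo replications across CPU cores and aggregates the results; it is a conceptual stand-in using Python multiprocessing, not the framework's service-oriented, WSRF-compliant middleware.

```python
import random
from multiprocessing import Pool


def run_replication(seed):
    """One Monte Carlo trial of a toy simulation (estimate pi by dart throwing),
    standing in for a full simulation run dispatched to a compute node."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(100_000))
    return 4.0 * hits / 100_000


def aggregate(results):
    """Result-collection and aggregation step, performed centrally."""
    return sum(results) / len(results)


if __name__ == "__main__":
    # Distribute the batch of replications across local CPU cores; the middleware
    # generalizes this to a geographically dispersed, service-oriented resource pool.
    with Pool() as pool:
        estimates = pool.map(run_replication, range(16))
    print(f"aggregated estimate: {aggregate(estimates):.4f}")
```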

