Massively multi-user online platform for large-scale applications

2021 ◽
Author(s):  
Allen Yen-Cheng Yu

Many large-scale online applications enable thousands of users to access their services simultaneously. However, the overall service quality of an online application usually degrades as the number of users increases because, traditionally, centralized server architectures do not scale well. To provide better Quality of Service (QoS), service architectures such as Grid computing can be used; this type of architecture offers service scalability by utilizing heterogeneous hardware resources. In this thesis, a novel design of Grid computing middleware, the Massively Multi-user Online Platform (MMOP), which integrates Peer-to-Peer (P2P) structured overlays, is proposed. The objectives of this design are to offer scalability and system design flexibility, simplify the development of distributed applications, and improve QoS by following specified policy rules. A Massively Multiplayer Online Game (MMOG) has been created to validate the functionality and performance of MMOP. The simulation results demonstrate that MMOP is a high-performance, scalable servicing and computing middleware.
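The abstract does not specify which structured overlay MMOP builds on, but the defining mechanism of any structured P2P overlay is consistent hashing of keys onto nodes. The sketch below is a minimal, hypothetical illustration of that mechanism in Python; the `Ring` class and the game-zone key are inventions for illustration, not MMOP's API:

```python
import hashlib
from bisect import bisect

def h(key: str) -> int:
    """Hash a key onto a 160-bit identifier ring (SHA-1, as in Chord)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring: each node owns the arc of key space
    up to its own identifier, so node joins and leaves only remap O(1/n)
    of the keys, which is what lets structured overlays scale."""

    def __init__(self, nodes):
        self._ring = sorted((h(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        ids = [i for i, _ in self._ring]
        idx = bisect(ids, h(key)) % len(self._ring)  # wrap around the ring
        return self._ring[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.node_for("game-zone:42"))  # node responsible for this world region
```

Mapping game-world regions to overlay keys in this way is one common route to the scalability the thesis targets: load spreads across whatever heterogeneous nodes are present, with no central dispatcher.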



The classical planar Metal Oxide Semiconductor Field Effect Transistor (MOSFET) is fabricated by oxidation of a semiconductor, namely silicon. In the current generation, an advanced class of devices, 3D system-architecture FETs, has been introduced for high performance and low power consumption. Because of Short Channel Effects (SCE), silicon (Si) FETs cannot be scaled below 10 nm. Hence various measures, in methods, principles, and geometries, are taken to continue scaling the semiconductor. CMOS using alternative channel materials, such as Ge and III-Vs, on silicon substrates is a highly anticipated technique for developing nanowire structures. Considering these issues, in this paper we develop a simulation model that provides accurate results based on gate layout and multi-gate nanowire (NW) FETs, so that scaling can be extended by a few nanometres and performance limits gradually improve. The model is developed in SILVACO and tests the behaviour of the FET with different gate oxide materials.
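The paper's actual SILVACO deck is not reproduced here, but the role of the gate oxide material can be illustrated with the textbook parallel-plate relations: a higher-k dielectric delivers the same gate capacitance at a larger physical thickness, which suppresses gate leakage. A back-of-the-envelope sketch; the permittivity values are standard textbook numbers, not the paper's data:

```python
# Compare gate dielectrics at a fixed physical thickness: a higher-k oxide
# yields a higher gate capacitance per area and a smaller equivalent oxide
# thickness (EOT), which is why high-k materials enable sub-10 nm scaling.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

dielectrics = {"SiO2": 3.9, "Si3N4": 7.5, "HfO2": 22.0}  # relative permittivity k

t_phys = 2e-9  # physical oxide thickness: 2 nm (illustrative value)
for name, k in dielectrics.items():
    c_ox = k * EPS0 / t_phys   # gate capacitance per unit area, F/m^2
    eot = t_phys * 3.9 / k     # equivalent thickness if the oxide were SiO2
    print(f"{name:5s}  Cox = {c_ox*1e3:.2f} mF/m^2   EOT = {eot*1e9:.2f} nm")
```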


Author(s):  
Ishwarya S ◽  
S. Kuzhalvaimozhi

This paper is about how an application is maintained and monitored using an Azure CI pipeline. Maintaining and monitoring the quality of software plays an important role in a company's growth and performance. This is achieved using DevOps. A few years ago, the agile methodology played a major role in the industry: software was deployed on a monthly, quarterly, or annual basis, which is time-consuming. Industries are now moving towards the DevOps methodology, in which software is deployed multiple times a day. This methodology enables an organization to constantly and reliably add new features and automatically deploy them across various platforms or environments, in order to deliver high-performance, quality-assured products. Continuous Integration and Continuous Delivery/Continuous Deployment are the pillars of DevOps and are the industry's continuous software development practices. By automating the build, test, and deployment of software, CI/CD bridges the gap between development and operations teams. This paper also concentrates on how the Test-Driven Development features of .NET technologies support quality maintenance and monitoring of the application.
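The actual pipeline would be Azure Pipelines YAML with .NET build and test tasks; as a language-neutral illustration of the CI/CD principle the paper describes (each stage runs only if the previous one passed, so a failing test can never reach deployment), here is a toy stage runner in Python. The shell commands are placeholders, not the paper's configuration:

```python
import subprocess
import sys

# Toy CI runner: run build -> test -> deploy in order and stop at the
# first failure, so a red test always blocks deployment. The commands
# are placeholders for real restore/build/test/deploy tasks.
STAGES = [
    ("build",  "echo building..."),
    ("test",   "echo running unit tests..."),
    ("deploy", "echo deploying..."),
]

for name, cmd in STAGES:
    result = subprocess.run(cmd, shell=True)
    if result.returncode != 0:
        print(f"stage '{name}' failed; aborting pipeline")
        sys.exit(result.returncode)
    print(f"stage '{name}' passed")
```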


Author(s):  
Yuan-Shun Dai ◽  
Jack Dongarra

Grid computing is a recently developed technology for complex systems with large-scale resource sharing, wide-area communication, and multi-institutional collaboration. Grid reliability is hard to analyze and model because of the system's size, complexity, and stiffness. This chapter therefore introduces Grid computing technology, presents the different types of failures in a grid system, models grid reliability with star and tree structures, and finally studies optimization problems for grid task partitioning and allocation. The chapter presents models for star topologies considering data dependence and tree structures considering failure correlation. Evaluation tools and algorithms are developed, evolved from the universal generating function and graph theory, and failure correlation and data dependence are incorporated into the model. Numerical examples illustrate the modeling and analysis.
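As a toy preview of the star-topology reliability model: if the root dispatches subtasks over unreliable links and all subtasks must succeed, task reliability is a product over the chosen resources. The sketch below assumes independent failures and invented probabilities; the chapter's full model additionally handles data dependence and failure correlation, which this ignores:

```python
from math import prod

# Star topology: the root dispatches subtasks to leaf resources. A subtask
# succeeds only if its resource and its link both work; the task succeeds
# only if every subtask does (independence assumed in this sketch).
resources = {          # name: (resource reliability, link reliability)
    "A": (0.99, 0.95),
    "B": (0.97, 0.99),
    "C": (0.95, 0.98),
}

def task_reliability(chosen):
    return prod(r * l for r, l in (resources[c] for c in chosen))

print(f"all three: {task_reliability(['A', 'B', 'C']):.4f}")
print(f"A+B only : {task_reliability(['A', 'B']):.4f}")
```

Partitioning over fewer, more reliable resources can beat wider partitioning, which is exactly the trade-off the chapter's task-allocation optimization explores.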


2020 ◽  
Vol 496 (1) ◽  
pp. 629-637
Author(s):  
Ce Yu ◽  
Kun Li ◽  
Shanjiang Tang ◽  
Chao Sun ◽  
Bin Ma ◽  
...  

ABSTRACT Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time domain astronomy. Due to the rapid growth of data volume, traditional manual methods are becoming infeasible for continuously analysing the accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series data sets. To support high-performance parallel processing of large-scale data sets, AstroCatR uses an extract-transform-load (ETL) pre-processing module to create sky zone files and balance the workload. The matching module uses an overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. At the same time, the module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3× faster than methods using relational database management systems at matching massive catalogues.
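AstroCatR's source is not shown in the abstract, but its zone-based matching idea can be sketched: bucket reference objects by declination zone, then match each new detection only against its own zone and the two neighbouring zones (the overlap). The following is a simplified, hypothetical flat-sky version that ignores RA wrap-around and spherical geometry:

```python
from collections import defaultdict

ZONE_HEIGHT = 0.05   # zone height in degrees (illustrative value)
RADIUS = 0.001       # match radius in degrees, a few arcseconds

def zone_of(dec):
    return int(dec // ZONE_HEIGHT)

def build_index(objects):
    """Bucket reference objects (id, ra, dec) by declination zone."""
    index = defaultdict(list)
    for obj_id, ra, dec in objects:
        index[zone_of(dec)].append((obj_id, ra, dec))
    return index

def match(detection, index):
    """Find a reference object within RADIUS, searching only the
    detection's zone and its two neighbours instead of the whole sky."""
    ra, dec = detection
    z = zone_of(dec)
    for zone in (z - 1, z, z + 1):
        for obj_id, ora, odec in index.get(zone, []):
            if (ra - ora) ** 2 + (dec - odec) ** 2 <= RADIUS ** 2:
                return obj_id
    return None  # unmatched: start a new time series for a new object

index = build_index([("star-1", 10.0000, -30.0000)])
print(match((10.0002, -30.0001), index))  # -> star-1
```

Because each zone can be matched independently, this layout also gives the natural unit for the workload balancing the ETL module performs.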


2011 ◽  
Vol 3 (2) ◽  
pp. 44-58 ◽  
Author(s):  
Meriem Meddeber ◽  
Belabbas Yagoubi

A computational grid is a widespread computing environment that provides huge computational power for large-scale distributed applications. One of the most important issues in such an environment is resource management. Task assignment, as a part of resource management, has a considerable effect on grid middleware performance. In grid computing, a task's execution time depends on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. This paper proposes a hybrid assignment strategy for dependent tasks in grids that integrates static and dynamic assignment techniques. The grid is considered a set of clusters, each formed by a set of computing elements and a cluster manager. The main objective is to arrive at a task assignment method that achieves minimum response time and reduces the transfer cost induced by task transfers, while respecting the dependency constraints.
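The paper's hybrid strategy itself is not reproduced here; the sketch below illustrates only a common static baseline for dependent-task assignment: walk the DAG in topological order and greedily place each task on the machine that gives the earliest finish time, charging a transfer cost whenever a predecessor's output must move between machines. All times and costs are invented:

```python
from graphlib import TopologicalSorter

# DAG of dependent tasks: task -> set of predecessors.
deps = {"t1": set(), "t2": {"t1"}, "t3": {"t1"}, "t4": {"t2", "t3"}}
exec_time = {"t1": 4, "t2": 3, "t3": 5, "t4": 2}  # same on every machine, for brevity
machines = ["m1", "m2"]
TRANSFER = 2  # cost paid when a predecessor's output crosses machines

ready_at = {m: 0 for m in machines}   # when each machine becomes free
placed, finish = {}, {}

for task in TopologicalSorter(deps).static_order():
    best = None
    for m in machines:
        # earliest start: machine free AND all inputs arrived (with transfer cost)
        start = max([ready_at[m]] +
                    [finish[p] + (TRANSFER if placed[p] != m else 0)
                     for p in deps[task]])
        end = start + exec_time[task]
        if best is None or end < best[0]:
            best = (end, m)
    finish[task], placed[task] = best
    ready_at[best[1]] = best[0]

print(placed, finish)  # assignment and finish time of each task
```

A dynamic component, as in the paper's hybrid scheme, would revisit such placements at run time as actual machine loads diverge from the static estimates.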


2008 ◽  
Vol 05 (02) ◽  
pp. 273-287
Author(s):  
LI CHEN ◽  
HIROSHI OKUDA

This paper describes a parallel visualization library for large-scale datasets developed in the HPC-MW project. Three parallel frameworks are provided in the library to satisfy the different requirements of applications. The library is applicable to a variety of mesh types, covering particles, structured grids, and unstructured grids. Many techniques have been employed to improve the quality of the visualization. High speedup has been achieved through hardware-oriented optimization strategies on different platforms, from PC clusters to the Earth Simulator. Good results have been obtained on several typical parallel platforms, demonstrating the feasibility and effectiveness of the library.


1996 ◽  
Vol 07 (03) ◽  
pp. 295-303 ◽  
Author(s):  
P. D. CODDINGTON

Large-scale Monte Carlo simulations require high-quality random number generators to ensure correct results. The contrapositive of this statement is also true: the quality of random number generators can be tested by using them in large-scale Monte Carlo simulations. We have tested many commonly used random number generators with high-precision Monte Carlo simulations of the 2-d Ising model using the Metropolis, Swendsen-Wang, and Wolff algorithms. This work is being extended to the testing of random number generators for parallel computers. The results of these tests are presented, along with recommendations for random number generators for high-performance computers, particularly for lattice Monte Carlo simulations.
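The flavour of such a test can be sketched as follows: run Metropolis updates of the 2-d Ising model at the exactly known critical coupling and compare a measured observable (here the energy per site, exactly -sqrt(2) in the thermodynamic limit) with theory; a biased generator shows up as a systematic deviation far beyond the statistical error. A minimal sketch using Python's built-in generator, not the generators tested in the paper:

```python
import math
import random

L = 16  # lattice side; production tests use much larger lattices and runs
BETA = math.log(1 + math.sqrt(2)) / 2  # exact critical coupling of the 2-d Ising model
spins = [[1] * L for _ in range(L)]

def metropolis_sweep(rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # energy change from flipping spin (i, j), periodic boundaries
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn
        if dE <= 0 or rng.random() < math.exp(-BETA * dE):
            spins[i][j] = -spins[i][j]

def energy_per_site():
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return e / (L * L)

rng = random.Random(12345)
for sweep in range(2000):
    metropolis_sweep(rng)
# One sample; a real test averages many measurements, estimates the error,
# and compares against the exact value -sqrt(2) ~ -1.414 (up to finite-size effects).
print(energy_per_site())
```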


2016 ◽  
Vol 53 (2) ◽  
pp. 234-249 ◽  
Author(s):  
Fabrice Burlot ◽  
Rémi Richard ◽  
Helene Joncheray

The conditions for high performance have changed considerably over the last few years. Athletes must spend more time training and competing, devote considerable time to mental, physical, and nutritional professionals, and still respond to other constraints such as studying, spending time with family and friends, and preserving quality of life. In this context, and building on the work of Rosa, we question the capacity of elite athletes to combine all these constraints, namely to manage the acceleration of their pace of life in order to achieve ever more, and better, within the same unit of time. To address this issue, we interviewed 42 French high-level athletes who train at the National Institute of Sport, Expertise and Performance (INSEP). The results show that, to meet their goals, athletes implement arrangement and adjustment strategies aimed at making the time they have wholly useful and efficient. This time constraint keeps athletes in a perpetual state of tension, on the verge of a good or a poor life. The paper shows how the question of time, and particularly the acceleration of the pace of life, is vital for modern sporting performance.

