ALCS — A high-performance high-availability DB/DC monitor

Author(s):  
S. J. Hobson


Author(s):  
Yanish Pradhananga ◽  
Pothuraju Rajarajeswari

The evolution of the Internet of Things (IoT) has brought several challenges to existing hardware, network, and application development, including real-time streaming and batch big-data processing, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. Many new technologies and strategies are being developed to address these issues. Tiarrah Computing integrates the concepts of cloud computing, fog computing, and edge computing. Its main objectives are to decouple application deployment and to achieve high performance, flexible application development, high availability, ease of development, and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of these technologies, shows how to design applications around them, and elaborates how to overcome most of the existing challenges.
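The decoupling idea can be illustrated with a small sketch that routes latency-sensitive events to an edge handler and defers everything else to a cloud batch queue. The tier names, the latency budget, and the handlers are assumptions for illustration only, not the paper's actual Tiarrah Computing implementation.

```python
# Illustrative edge/cloud event routing: handle urgent events locally,
# queue the rest for batch processing in the cloud (assumed policy).
import queue


def handle_at_edge(event):
    # Low-latency path: act on the event where it was produced.
    print(f"edge : handled {event['id']} in place")


def forward_to_cloud(event, cloud_batch):
    # High-latency path: buffer the event for batch/big-data analytics.
    cloud_batch.put(event)
    print(f"cloud: queued {event['id']} for batch analytics")


def route(event, cloud_batch, latency_budget_ms=50):
    # Route by how quickly the event must be acted upon.
    if event["deadline_ms"] <= latency_budget_ms:
        handle_at_edge(event)
    else:
        forward_to_cloud(event, cloud_batch)


if __name__ == "__main__":
    cloud_batch = queue.Queue()
    for e in [{"id": "temp-alarm", "deadline_ms": 10},
              {"id": "daily-log", "deadline_ms": 60000}]:
        route(e, cloud_batch)
```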


Author(s):  
Khaled Ahmed Nagaty

In this article the author explains the classes of e-commerce business models and their advantages and disadvantages. He discusses the important issues and problems facing e-commerce web sites and how to build a successful e-commerce web site using security, privacy, and authentication techniques, maintenance guidelines, collection of user information for personalization, and a multi-tier architecture to achieve high performance and high availability.
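A minimal sketch of the multi-tier idea follows: the presentation tier talks only to the business tier, which owns authentication and calls the data tier. All class and method names are illustrative assumptions, not code from the article; an in-memory dict stands in for a replicated database.

```python
# Illustrative three-tier separation with salted password hashing.
import hashlib
import os


class DataTier:
    """Persistence layer: a real deployment would use a replicated
    database; here an in-memory dict stands in."""

    def __init__(self):
        self._users = {}

    def save_user(self, name, digest, salt):
        self._users[name] = (digest, salt)

    def load_user(self, name):
        return self._users.get(name)


class BusinessTier:
    """Application logic: registration and authentication decisions."""

    def __init__(self, data):
        self.data = data

    def register(self, name, password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self.data.save_user(name, digest, salt)

    def authenticate(self, name, password):
        record = self.data.load_user(name)
        if record is None:
            return False
        digest, salt = record
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return candidate == digest


class WebTier:
    """Presentation layer: never touches the data tier directly."""

    def __init__(self, logic):
        self.logic = logic

    def login(self, name, password):
        return "welcome" if self.logic.authenticate(name, password) else "denied"


if __name__ == "__main__":
    app = WebTier(BusinessTier(DataTier()))
    app.logic.register("alice", "s3cret")
    print(app.login("alice", "s3cret"))  # welcome
    print(app.login("alice", "wrong"))   # denied
```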


2009 ◽  
pp. 389-420
Author(s):  
Brian Goodman ◽  
Maheshwar Inampudi ◽  
James Doran

In this chapter, we introduce five practices that help build scalable, resilient Web applications. In 2004, IBM launched its expertise location system, bringing together two legacy systems and transforming employees' ability to find and connect with their extensive network. This chapter reviews five of the many issues that challenge enterprise Web applications: resource contention, managing transactions, application resiliency, geographic diversity, and exception perception management. Using the IBM expertise location system as context, we present five key methods that mitigate these risks and achieve high availability and high performance goals.
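One of these tactics, application resiliency, can be sketched as a retry-with-backoff wrapper that degrades gracefully instead of exposing a raw failure to the user. The function and service names below are assumptions for illustration, not IBM's implementation.

```python
# Illustrative resiliency tactic: retry with exponential backoff and a
# graceful fallback instead of surfacing a raw error to the user.
import random
import time


def call_with_resilience(operation, retries=3, base_delay=0.1, fallback=None):
    """Try an unreliable remote call a few times, then degrade gracefully."""
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError:
            # Back off so retries do not pile onto a contended resource.
            time.sleep(base_delay * (2 ** attempt))
    # Exception perception management: return a degraded answer rather
    # than a stack trace.
    return fallback


def flaky_directory_lookup():
    # Stand-in for a call to a remote expertise-location service.
    if random.random() < 0.7:
        raise ConnectionError("directory temporarily unavailable")
    return ["alice", "bob"]


if __name__ == "__main__":
    experts = call_with_resilience(flaky_directory_lookup, fallback=[])
    print("experts found:", experts)
```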


2012 ◽  
Vol 460 ◽  
pp. 313-316 ◽  
Author(s):  
Yong Qiang Zhang ◽  
Wen Ming Li

This paper aims to improve the performance of the MySQL relational database in highly concurrent reads and writes, efficient storage and access of mass data, scalability, and high availability, and presents a high-performance cluster architecture based on MySQL and NoSQL. The architecture uses MySQL for its strengths with relational data and NoSQL for its strengths in storage, saving enterprises development and maintenance costs. The combination of MySQL and NoSQL brings new ideas to database development for Web 2.0.
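A minimal sketch of this hybrid read path follows, with sqlite3 and a Python dict standing in for MySQL and the NoSQL store so the example stays self-contained; in the architecture described above those would be the MySQL cluster and a NoSQL key-value tier.

```python
# Illustrative hybrid read path: relational data lives in SQL, hot reads
# are served from a key-value (NoSQL-style) layer.
import sqlite3

kv_store = {}  # stand-in for a NoSQL key-value store


def get_user(conn, user_id):
    key = f"user:{user_id}"
    # 1. Serve high-concurrency reads from the key-value layer when possible.
    if key in kv_store:
        return kv_store[key]
    # 2. Fall back to the relational store, then populate the key-value layer.
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is not None:
        kv_store[key] = {"id": row[0], "name": row[1]}
    return kv_store.get(key)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(get_user(conn, 1))  # first read hits SQL
    print(get_user(conn, 1))  # second read hits the key-value layer
```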


Author(s):  
Rizki Dewantara ◽  
Siska Iskandar ◽  
Agung Fatwanto

High-performance academic information systems and high-availability services are a requirement in every university, not least to anticipate server damage and failures that disrupt network performance. The failover computer cluster method is applied to two servers: a primary server as the main server and a secondary server as the backup server. Four stages are carried out: first, installation and configuration of supporting software; second, installation and configuration of the failover cluster; third, installation and configuration of Distributed Replicated Block Device (DRBD); fourth, server testing with siege and nettool. This research tests the server before and after adding high availability: if the main server has a system failure, it automatically fails over to the backup server to minimize data-access failures for users. The system uses the Ubuntu 16.04 LTS operating system. From the tests, two kinds of data are acquired: packet data and response time (ms). The packet data show an average of 233.3 packets sent, 228.3 packets received, and 2.3 packets lost, while the average response time is 59.7 ms, with a 2.7 ms minimum average and a 633.8 ms maximum average. Each packet sent is 120 B.
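A minimal sketch of the failover behaviour being tested follows: a monitor polls the primary server and, when it stops responding, promotes the secondary. The hostnames, the DRBD resource name, and the promotion commands are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative failover monitor: poll the primary web port and promote
# the secondary node after repeated failures.
import socket
import time

PRIMARY = ("primary.example.edu", 80)   # assumed address of the main server
CHECK_INTERVAL_S = 5
FAILURES_BEFORE_FAILOVER = 3


def primary_is_up(addr, timeout=2.0):
    """Return True if a TCP connection to the primary web port succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


def promote_secondary():
    # On the backup node this is where the replicated DRBD volume would be
    # promoted and the web service started, for example:
    #   drbdadm primary r0
    #   systemctl start apache2
    print("promoting secondary to primary (commands elided in this sketch)")


if __name__ == "__main__":
    failures = 0
    while failures < FAILURES_BEFORE_FAILOVER:
        failures = 0 if primary_is_up(PRIMARY) else failures + 1
        time.sleep(CHECK_INTERVAL_S)
    promote_secondary()
```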

