Handbook of Research on Scalable Computing Technologies

Published By IGI Global

9781605666617, 9781605666624

Author(s):  
Yang Xiang ◽  
Daxin Tian

Network security applications such as intrusion detection systems (IDSs), firewalls, anti-virus/spyware systems, anti-spam systems, and security visualisation applications are all computing-intensive. They rely heavily on deep packet inspection, which examines the content of each network packet’s payload. Today these security applications cannot keep pace with already-deployed broadband Internet speeds; that is, processing power lags far behind bandwidth. Recently, the development of multi-core processors has brought more processing power. Multi-core processors represent a major evolution in computing hardware technology. While two years ago most network processors and personal computer microprocessors had a single-core configuration, the majority of current microprocessors contain dual or quad cores, and the number of cores per die is expected to grow exponentially over time. The purpose of this chapter is to discuss research on using multi-core technologies to parallelize deep packet inspection algorithms, and how such an approach improves the performance of deep packet inspection applications. This will eventually give a security system the capability of real-time packet inspection, thereby significantly improving the overall security of the current Internet infrastructure.
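The core idea of parallelizing deep packet inspection can be illustrated with a toy signature-matching example fanned out across worker processes. This is a minimal sketch, not the chapter's algorithm: the signature set, payloads, and worker count are invented, and real DPI engines use far more sophisticated multi-pattern matching (e.g., Aho-Corasick automata) than naive substring search.

```python
# Hypothetical sketch: parallel deep packet inspection across cores.
from concurrent.futures import ProcessPoolExecutor

SIGNATURES = [b"malware", b"exploit", b"badstring"]  # toy signature set

def inspect(payload: bytes) -> bool:
    """Return True if any known signature occurs in the packet payload."""
    return any(sig in payload for sig in SIGNATURES)

def parallel_inspect(payloads, workers=4):
    """Fan packet payloads out to worker processes, one per core."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(inspect, payloads, chunksize=64))
```

Because payloads are independent, inspection is embarrassingly parallel; the harder engineering problems the chapter addresses (flow affinity, shared state, load balance across cores) are deliberately left out of this sketch.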


Author(s):  
Sudha Gunturu ◽  
Xiaolin Li ◽  
Laurence Tianruo Yang

This chapter studies a load scheduling strategy with near-optimal processing time, designed to exploit the computational characteristics of DNA sequence alignment algorithms, specifically the Needleman-Wunsch algorithm. Following divisible load scheduling theory, an efficient load scheduling strategy is designed for large-scale networks so that the overall processing time of the sequencing tasks is minimized. In this study, the load distribution depends on the length of the sequence and the number of processors in the network, and the total processing time is also affected by communication link speed. Several cases are considered by varying the sequences, communication and computation speeds, and number of processors. Through simulation and numerical analysis, the study demonstrates that, for a constant sequence length, the processing time for the job decreases as the number of processors in the network increases, and a minimum overall processing time is achieved.
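For reference, the Needleman-Wunsch algorithm the chapter parallelizes is a classic dynamic program over an (n+1) x (m+1) score table. A minimal single-machine version is sketched below (the scoring parameters are the textbook defaults, not values from the chapter; the chapter's contribution is distributing slices of this computation, which is not shown here).

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic O(n*m) dynamic program."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # b[:j] aligned against gaps only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]
```

Each cell depends only on its left, upper, and upper-left neighbours, which is what makes the table divisible into chunks along anti-diagonals or row bands for distributed processing.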


Author(s):  
Jiehan Zhou ◽  
Zhonghong Ou ◽  
Junzhao Sun ◽  
Mika Rautiainen ◽  
Mika Ylianttila

Community Coordinated Multimedia (CCM) envisions a novel paradigm that enables users to consume multiple media by requesting multimedia-intensive Web services via diverse display devices, converged networks, and heterogeneous platforms within a virtual, open, and collaborative community. These trends yield new requirements for CCM middleware. This chapter aims to systematically and extensively describe the middleware challenges and opportunities in realizing the CCM paradigm by reviewing middleware activities from four viewpoints: mobility-aware, multimedia-driven, service-oriented, and community-coordinated.


Author(s):  
King Tin Lam ◽  
Cho-Li Wang

Web application servers, being today’s enterprise application backbone, have given rise to a wealth of J2EE-based clustering technologies. Most of them, however, need complex configuration and excessive programming effort to retrofit applications for cluster-aware execution. This chapter proposes a clustering approach based on the distributed Java virtual machine (DJVM). A DJVM is a collection of extended JVMs that enables parallel execution of a multithreaded Java application over a cluster. A DJVM achieves transparent clustering and resource virtualization, delivering the virtue of a single system image (SSI). The authors evaluate this approach by porting Apache Tomcat to their JESSICA2 DJVM and identify scalability issues arising from fine-grained object sharing coupled with intensive synchronization among distributed threads. By leveraging relaxed cache coherence protocols, they are able to overcome the scalability barriers and harness the power of the DJVM’s global object space design to significantly outperform existing clustering techniques for cache-centric web applications.


Author(s):  
Alan A. Bertossi ◽  
M. Cristina Pinotti ◽  
Phalguni Gupta

The server allocation problem arises in isolated infostations, where mobile users passing through the coverage area require immediate high-bit-rate communications such as web surfing, file transfer, voice messaging, email, and fax. Given a set of service requests, each characterized by a temporal interval and a category, an integer k, and an integer hc for each category c, the problem consists of assigning a server to each request in such a way that at most k mutually simultaneous requests are assigned to the same server at the same time, of which at most hc are of category c, and the minimum number of servers is used. Since this problem is computationally intractable, a scalable 2-approximation online algorithm is exhibited. Generalizations of the problem are considered, which contain bin packing, multiprocessor scheduling, and interval graph coloring as special cases, and which admit scalable online algorithms providing constant approximations.
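The flavor of the allocation problem can be conveyed with a greedy first-fit sketch over interval requests. This is an illustrative simplification, not the chapter's 2-approximation algorithm; the request tuples and capacity parameters below are invented for the example.

```python
def allocate(requests, k, h):
    """Greedy first-fit allocation of (start, end, category) interval
    requests to servers: at most k simultaneous requests per server,
    of which at most h[c] may be of category c.
    Returns the list of servers, each a list of assigned requests."""
    servers = []
    for start, end, cat in sorted(requests):
        placed = False
        for srv in servers:
            # Requests on this server still active when the new one starts.
            live = [r for r in srv if r[0] < end and start < r[1]]
            if len(live) < k and sum(r[2] == cat for r in live) < h[cat]:
                srv.append((start, end, cat))
                placed = True
                break
        if not placed:
            servers.append([(start, end, cat)])  # open a new server
    return servers
```

Because requests are processed in order of start time, every "live" request on a server overlaps the new one at its start instant, so the capacity checks are exact at that instant; proving a constant approximation ratio for such greedy schemes is the substance of the chapter.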


Author(s):  
Yuan-Shun Dai ◽  
Jack Dongarra

Grid computing is a newly developed technology for complex systems with large-scale resource sharing, wide-area communication, and multi-institutional collaboration. Grid reliability is hard to analyze and model because of the grid’s scale, complexity, and stiffness. This chapter therefore introduces grid computing technology, presents the different types of failures in a grid system, models grid reliability with star and tree structures, and finally studies optimization problems for grid task partitioning and allocation. The chapter then presents models for a star topology considering data dependence and a tree structure considering failure correlation. Evaluation tools and algorithms are developed based on the universal generating function and graph theory. Failure correlation and data dependence are then incorporated into the model. Numerical examples illustrate the modeling and analysis.
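As a baseline for the star-topology model, the simplest case is worth stating: if a root distributes subtasks to N nodes and each subtask needs its link and its node to work, then under independent failures the probability that every subtask completes is a plain product. This sketch deliberately ignores the failure correlation and data dependence that the chapter's fuller models capture; all probabilities are illustrative.

```python
from math import prod

def star_reliability(link_p, node_p):
    """Probability that all subtasks in a star-topology grid complete,
    assuming independent link/node failures: the product over nodes i
    of P(link i up) * P(node i up). link_p and node_p are parallel
    lists of per-node reliabilities."""
    return prod(l * n for l, n in zip(link_p, node_p))
```

For example, two nodes each reached over a 0.9-reliable link with perfectly reliable processors give 0.9 * 0.9 = 0.81. Correlated failures (a shared switch, say) break the independence assumption, which is exactly why the chapter resorts to universal generating functions rather than simple products.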


Author(s):  
Wei Shen ◽  
Qing-An Zeng

An integrated heterogeneous wireless and mobile network (IHWMN) is built by combining different types of wireless and mobile networks (WMNs) in order to provide more comprehensive services, such as high bandwidth with wide coverage. In an IHWMN, a mobile terminal equipped with multiple network interfaces can connect to any available network, or even to multiple networks at the same time. The terminal can also change its connection from one network to another while keeping its communication alive. Although the IHWMN is very promising and a strong candidate for future WMNs, it introduces many issues because different types of networks or systems need to be integrated to provide seamless service to mobile users. In this chapter, the authors focus on some major issues in IHWMNs. Several novel network selection strategies and resource management schemes are also introduced to provide better resource allocation for this new network architecture.
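Network selection in an IHWMN is commonly framed as scoring each candidate network on weighted attributes and attaching to the best. The following weighted-sum sketch is a generic illustration of that idea, not one of the chapter's schemes; the attribute names, values, and weights are all invented.

```python
def select_network(candidates, weights):
    """Pick the candidate network with the highest weighted score.
    candidates: {name: {attribute: normalized value in [0, 1]}},
    where higher is better for every attribute (so 'cost' here would
    really mean cheapness). weights: {attribute: weight}."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(candidates, key=lambda name: score(candidates[name]))
```

A real selection strategy would also weigh handoff cost, signal prediction, and load, and would re-evaluate as the terminal moves; the weighted sum is just the simplest member of that family.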


Author(s):  
Hai Jiang ◽  
Yanqing Ji

Computation mobility enables running programs to move among machines and is the essence of performance gains, fault tolerance, and increased system throughput. State-carrying code (SCC) is a software mechanism that achieves such computation mobility by saving and retrieving computation states during normal program execution in heterogeneous multi-core/many-core clusters. This chapter analyzes the pros and cons of different state saving/retrieving mechanisms. To achieve a portable, flexible, and scalable solution, SCC adopts the application-level thread migration approach. Major deployment features are explained, and one example system, MigThread, is used to illustrate implementation details. Future trends are outlined to show how SCC can evolve into a complete lightweight virtual machine. New high-productivity languages might step in to raise SCC to the language level. With SCC, more thorough resource utilization is expected.
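The application-level state saving that SCC relies on can be illustrated, very loosely, in a few lines: the program itself captures its live variables at a safe point, serializes them, and resumes from the restored state. This is a local toy, not MigThread; a real SCC system would ship the serialized state to another machine and resume the thread there.

```python
import pickle

def compute(limit, state=None):
    """Sum the integers 1..limit, checkpointing application-level state
    halfway through. The 'migration' here is simulated by serializing
    the live variables and resuming from the deserialized copy."""
    i, total = (0, 0) if state is None else (state["i"], state["total"])
    while i < limit:
        i += 1
        total += i
        if i == limit // 2:                      # simulated migration point
            blob = pickle.dumps({"i": i, "total": total})  # save state
            return compute(limit, pickle.loads(blob))      # resume from it
    return total
```

The point of the application-level approach, as the chapter argues, is that only named program variables are captured, so the saved state is portable across heterogeneous machines, unlike raw stack or register snapshots.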


Author(s):  
Yifeng Zhu ◽  
Hong Jiang

This chapter discusses the false rates of Bloom filters in a distributed environment. A Bloom filter (BF) is a space-efficient data structure supporting probabilistic membership queries. In distributed systems, a Bloom filter is often used to summarize local services or objects, and this filter is replicated to remote hosts, allowing them to perform fast membership queries without contacting the original host. However, when the services or objects change, the remote Bloom filter replica may become stale. This chapter analyzes the impact of staleness on the false-positive and false-negative rates of membership queries on a Bloom filter replica. An efficient update control mechanism is then proposed, based on the analytical results, to minimize the update overhead. The chapter validates the analytical models and the update control mechanism through simulation experiments.
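For readers unfamiliar with the data structure itself: a Bloom filter sets k hash-derived bits in an m-bit array per inserted item, and a query reports "present" only if all k bits are set, so a fresh filter can report false positives (with probability roughly (1 - e^(-kn/m))^k for n items) but never false negatives; staleness of a replica, the chapter's subject, is what introduces false negatives. A minimal sketch, with m and k chosen arbitrarily:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # May return a false positive; never a false negative when fresh.
        return all(self.bits[pos] for pos in self._positions(item))
```

Deleting an item cannot clear bits (they may be shared with other items), which is why replicas of a changing set go stale in both directions: removed items keep answering "present" (false positives) and newly added items answer "absent" (false negatives) until the next update.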


Author(s):  
Priyadarsi Nanda ◽  
Xiangjian He

The evolution of the Internet and its technologies has brought tremendous growth in business, education, research, and more over the last four decades. With the dramatic advances in multimedia technologies and the increasing popularity of real-time applications, Quality of Service (QoS) support in the Internet has recently been in great demand. Driven by the deployment of such applications over the Internet in recent years, and by the need to manage them efficiently with a desired QoS in mind, researchers have been pursuing a major shift from the Internet’s Best Effort (BE) model to a service-oriented model. Such efforts have resulted in Integrated Services (IntServ), Differentiated Services (DiffServ), Multi-Protocol Label Switching (MPLS), Policy-Based Networking (PBN), and many more technologies. In reality, however, such models have been deployed only in certain areas of the Internet, not everywhere, and many of them also face scalability problems when dealing with huge numbers of traffic flows with varied priority levels. As a result, an architecture that addresses the scalability problem while satisfying end-to-end QoS remains a big open issue for the Internet. In this chapter, the authors propose a policy-based architecture which they believe can achieve scalability while offering end-to-end QoS in the Internet.
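A standard building block underneath DiffServ-style per-class traffic handling is the token bucket, which enforces a sustained rate while tolerating bursts; one bucket per traffic class is a common way to police class-level policy. The sketch below is a generic illustration, not part of the authors' proposed architecture, and the rate/burst parameters are arbitrary.

```python
class TokenBucket:
    """Token-bucket policer: admit a packet only if enough tokens have
    accumulated. Tokens refill at `rate` per time unit, capped at
    `burst`, so short bursts up to `burst` packets are tolerated."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        # Refill tokens for the elapsed time, then try to spend `size`.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

With rate 1 and burst 2, two back-to-back packets at time 0 are admitted, a third is dropped, and by time 1 one token has refilled, so a fourth packet passes: exactly the sustained-rate-plus-burst behavior a per-class policer needs.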

