Encyclopedia of Internet Technologies and Applications
Published by IGI Global
ISBN: 9781591409939, 9781591409946
Total documents: 100
H-index: 3

Author(s):  
Antonios Alexiou ◽  
Dimitrios Antonellis ◽  
Christos Bouras

Wi-Fi, short for “wireless fidelity,” is a term for certain types of wireless local area network (WLAN) that use specifications in the 802.11 family. In general, wireless technologies are used to replace or extend common wired networks. They provide all the functionality of wired LANs, but without the physical constraints of the wire itself, and their wireless nature inherently allows easy implementation of broadcast/multicast services. When used with portable computing devices (e.g., notebook computers), wireless LANs are also known as cordless LANs, a term that emphasizes the elimination of both power cord and network cable (Tanenbaum, 2003).


Author(s):  
Stamatis Karnouskos

As we move towards service-oriented complex infrastructures, what is needed is security, robustness, and intelligence distributed within the network. Modern systems are too complicated to be centrally administered; therefore, the need for approaches that provide autonomic characteristics and are able to sustain themselves is evident. We present here one approach towards this goal, namely how dynamic infrastructures can be built on mobile agents (MA) and active networks (AN). The two concepts share common ground at the architectural level, which makes it interesting to combine them into a more sophisticated framework for building dynamic systems. We argue that this combination yields more autonomous systems that can effectively possess, at least to some degree, self-* features such as self-management and self-healing, which, in conjunction with cooperation capabilities, will lead to the deployment of dynamic infrastructures that autonomously identify and adapt to external/internal events. As an example, the implementation of an autonomous network-based security service is analyzed, which shows that denial-of-service attacks can be managed by the network itself, intelligently and in an autonomic fashion.


Author(s):  
Nelson Luís Saldanha da Fonseca ◽  
Neila Fernanda Michel

In response to a series of collapses due to congestion on the Internet in the mid-’80s, congestion control was added to the transmission control protocol (TCP) (Jacobson, 1988), thus allowing individual connections to control the amount of traffic they inject into the network. This control involves regulating the size of the congestion window (cwnd) to impose a limit on the size of the transmission window. In the most widely deployed TCP variant on the Internet, TCP Reno (Allman, Floyd, & Partridge, 2002), changes in congestion window size are driven by the loss of segments. The congestion window is increased by 1/cwnd for each acknowledgement (ack) received, and reduced to half upon the loss of a segment, in a pattern known as additive increase multiplicative decrease (AIMD). Although this congestion control mechanism was derived at a time when line speeds were on the order of 56 kbps, it has performed remarkably well given that the speed, size, load, and connectivity of the Internet have increased by approximately six orders of magnitude in the past 15 years. However, the AIMD pattern of window growth seriously limits the efficient operation of TCP Reno over high-capacity links, making the transport layer the network bottleneck. This text explains the major challenges involved in using TCP for high-speed networks and briefly describes some of the variations of TCP designed to overcome these challenges.
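The AIMD dynamics described above can be sketched in a few lines. This is a simplified model of the congestion-avoidance phase only (slow start, timeouts, and fast recovery are omitted), with the window measured in segments:

```python
def aimd_step(cwnd, segment_lost):
    """One update of TCP Reno's congestion window (in segments).

    Each ack grows the window by 1/cwnd, so a full window of acks adds
    roughly one segment per round trip (additive increase); a detected
    loss halves the window (multiplicative decrease), never below 1.
    """
    if segment_lost:
        return max(cwnd / 2.0, 1.0)
    return cwnd + 1.0 / cwnd

# Trace the window over 100 acks, with a single loss at step 50.
cwnd = 10.0
trace = []
for step in range(100):
    cwnd = aimd_step(cwnd, segment_lost=(step == 50))
    trace.append(cwnd)
```

The slow 1/cwnd growth after each halving is exactly what makes recovery take so long on high-capacity links: the larger the bandwidth-delay product, the more round trips are needed to climb back.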


Author(s):  
Sergio Gutiérrez ◽  
Abelardo Pardo ◽  
Carlos Delgado Kloos

A swarm may be defined as a population of interacting elements that is able to optimize some global objective through collaborative search of a space (Kennedy, 2001). The elements may be very simple machines or very complex living beings, but two restrictions must be observed: the elements are limited to local interactions, and the interaction is usually performed not directly but indirectly, through the environment. The property that makes swarms interesting is their self-organizing behaviour; in other words, the fact that many simple processes can lead to complex results.
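A minimal sketch of this idea is particle swarm optimization, the algorithm associated with the Kennedy (2001) reference above. The parameter values (inertia 0.7, cognitive and social coefficients 1.5) and the one-dimensional test function are illustrative choices, not prescribed by the text:

```python
import random

def pso_minimize(f, n_particles=20, iters=100, lo=-10.0, hi=10.0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimization sketch.

    Each particle uses only local information (its own best-seen
    position) plus the swarm's best position, which is shared much
    like a mark left in the environment.
    """
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                    # personal best positions
    gbest = min(pbest, key=f)         # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda x: x * x)  # swarm settles near the minimum at 0
```

Each update rule is trivially simple, yet the population as a whole reliably homes in on the optimum, which is the self-organizing behaviour the paragraph describes.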


Author(s):  
Chia-Chu Chiang

Software maintenance is an inevitable process due to program evolution (Lehman & Belady, 1985). Adaptive maintenance (Schneidewind, 1987) is an activity used to adapt software to new environments or new requirements arising from the evolving needs of new platforms, new operating systems, new software, and evolving business requirements. For example, companies have been adapting their legacy systems to Web-enabled ways of doing business that could not have been imagined even a decade ago (Khosrow-Pour & Herman, 2001; Werthner & Ricci, 2004).


Author(s):  
Adetola Oredope ◽  
Antonio Liotta

The IP multimedia subsystem (IMS) specifies a service-centric framework for converged, all-IP networks. This promises to provide the long-awaited environment for deploying technology-neutral services over fixed, wireless, and cellular networks, known as third generation (3G) networks. Since its initial proposal in 1999, the IMS has gone through different stages of development, from its initial Release 5 up to the current Release 7.


Author(s):  
Dumitru Roman ◽  
Ioan Toma ◽  
Dieter Fensel

Service-oriented computing (SOC) is the new emerging paradigm for distributed computing, especially in the areas of e-business and e-work processing. It has evolved from object-oriented and component-based computing to enable the building of scalable and agile networks of collaborating business applications distributed within and across organizational boundaries; what counts for customers are the services, not the specific software or hardware components used to implement them. In this context, services become the next level of abstraction in the process of creating systems that enable the automation of e-business and e-work.


Author(s):  
Kevin Curran ◽  
Gary Gumbleton

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), states that, “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” (Berners-Lee, 2001). The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents, roaming from page to page, can readily carry out sophisticated tasks for users. The Semantic Web (SW) is a vision of the Web in which information is linked up in such a way that machines can process it more easily. It is generating interest not just because Tim Berners-Lee is advocating it, but because it aims to solve the problem of information being locked away in HTML documents, from which humans can easily extract information but machines cannot. We discuss the Semantic Web here.
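The “well-defined meaning” the quote refers to rests on a simple data model: statements expressed as subject–predicate–object triples. A toy sketch makes the contrast with HTML concrete; the URIs and vocabulary terms below are illustrative placeholders, not real ontology identifiers:

```python
# Knowledge as (subject, predicate, object) triples rather than
# free-form HTML text. All identifiers here are made up for illustration.
triples = {
    ("http://example.org/TimBL", "rdf:type", "foaf:Person"),
    ("http://example.org/TimBL", "foaf:name", "Tim Berners-Lee"),
    ("http://example.org/W3C", "ex:directedBy", "http://example.org/TimBL"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    playing the role a variable plays in a query language."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A machine can now answer "who directs the W3C?" without scraping HTML:
directors = [o for (_, _, o) in match(p="ex:directedBy")]
```

Because the relationship is stated explicitly rather than buried in prose, a software agent can follow it mechanically, which is precisely what the HTML-only Web makes difficult.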


Author(s):  
Christos Bouras ◽  
Apostolos Gkamas ◽  
Dimitris Primpas ◽  
Kostas Stamos

IP networks are built around the idea of best-effort networking, which makes no guarantees regarding the delivery, speed, and accuracy of the transmitted data. While this model is suitable for a large number of applications, and works well for almost all applications when the network load is low (and therefore there is no congestion), two main factors lead to the need for an additional capability of quality of service guarantees. One is the fact that an increasing number of Internet applications involve real-time and other multimedia data, which have stricter service requirements if they are to satisfy the user. The other is that Internet usage is steadily increasing, and although the network infrastructure is also updated often, it is not always certain that network resource offerings will stay ahead of usage demand. To deal with this situation, the IETF has developed two architectures that enable QoS-based handling of data flows in IP networks. This article describes and compares these two architectures.
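A building block common to QoS architectures of this kind is the token bucket, used to decide whether a flow's packets conform to an agreed traffic profile. The sketch below is a generic illustration of the mechanism, not code from either IETF specification, and the rate and burst numbers are arbitrary:

```python
class TokenBucket:
    """Hedged sketch of a token-bucket traffic policer.

    Tokens (here, bytes of credit) accumulate at a fixed rate up to a
    maximum depth (the permitted burst). A packet conforms if enough
    tokens are available when it arrives.
    """

    def __init__(self, rate, burst):
        self.rate = rate          # tokens (bytes) added per second
        self.burst = burst        # bucket depth: max tokens held
        self.tokens = burst       # start with a full bucket
        self.last = 0.0           # time of the previous check (seconds)

    def conforms(self, packet_bytes, now):
        """True if a packet of this size fits the profile at time `now`."""
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True           # in-profile: forward or mark as premium
        return False              # out-of-profile: drop, delay, or remark

tb = TokenBucket(rate=1000, burst=1500)   # 1000 B/s sustained, 1500 B burst
```

What a network does with the non-conforming verdict is exactly where the two IETF architectures differ: one reserves resources per flow so the profile can be guaranteed, the other marks packets into aggregate classes that routers treat differently.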


Author(s):  
António Nogueira ◽  
Paulo Salvador ◽  
Rui Valadas ◽  
António Pacheco

This article addresses the use of Markovian models, based on discrete-time MMPPs (dMMPPs), for modeling IP traffic. In order to describe the packet arrival process, we will present three traffic models that were designed to capture self-similar behavior over multiple time scales. The first model is based on a parameter fitting procedure that matches both the autocovariance and marginal distribution of the counting process (Salvador, 2003). The dMMPP is constructed as a superposition of two-state dMMPPs (2-dMMPPs) designed to match the autocovariance function, and one dMMPP designed to match the marginal distribution. The second model is a superposition of MMPPs, each one describing a different time scale (Nogueira, 2003a). The third model is obtained as the equivalent to a hierarchical construction process that, starting at the coarsest time scale, successively decomposes MMPP states into new MMPPs to incorporate the characteristics offered by finer time scales (Nogueira, 2003b). Both the second and third models are constructed by fitting the distribution of packet counts at a given number of time scales.
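The 2-dMMPP building block used by the first model can be sketched as a hidden two-state Markov chain that modulates a per-slot Poisson packet count. The transition probabilities and rates below are illustrative, not fitted parameters from any of the cited procedures:

```python
import random

def poisson(lam):
    """Draw a Poisson variate (Knuth's multiplication method)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_2dmmpp(slots, p01=0.05, p10=0.02, lam=(1.0, 10.0)):
    """Sketch of a two-state discrete-time MMPP (2-dMMPP).

    A hidden Markov chain switches between a low-rate state (0) and a
    high-rate state (1); in each time slot the packet count is Poisson
    with the current state's rate. Parameters are illustrative.
    """
    state, counts = 0, []
    for _ in range(slots):
        counts.append(poisson(lam[state]))
        flip = random.random()
        if state == 0 and flip < p01:
            state = 1                 # burst begins
        elif state == 1 and flip < p10:
            state = 0                 # burst ends
    return counts

counts = simulate_2dmmpp(1000)        # per-slot packet counts
```

Superposing several such processes, each with transitions an order of magnitude slower than the last, is what lets the fitted models mimic self-similar behavior across multiple time scales: slot-level counts are simply summed across the component chains.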

