Advances in Wireless Technologies and Telecommunication - Multidisciplinary Perspectives on Telecommunications, Wireless Systems, and Mobile Computing
Latest Publications

Total Documents: 13 (Five Years: 0)
H-Index: 0 (Five Years: 0)
Published By: IGI Global
ISBN: 9781466647152, 9781466647169

Author(s): Nurul I. Sarkar, Yash Dole

This chapter reports on the performance of voice and video traffic over two popular backbone network technologies, namely Gigabit Ethernet (GbE) and Asynchronous Transfer Mode (ATM). ATM networks are used by many universities and organizations for their unique characteristics, such as scalability and guaranteed Quality of Service (QoS), especially for voice and video applications. Gigabit Ethernet matches ATM functionality by providing higher bandwidth at much lower cost, with less complexity and easier integration into existing Ethernet technologies. It is therefore useful to compare these two technologies against various network performance metrics to find out which performs better for transporting voice and video conferencing traffic. This chapter provides an in-depth performance analysis and comparison of GbE and ATM networks through extensive OPNET-based simulation. The authors measure QoS parameters such as voice and video throughput, end-to-end delay, and voice jitter. The analysis and simulation results reported in this chapter provide some insights into the performance of GbE and ATM backbone networks and may help network researchers and engineers in selecting the best technology for deploying backbone campus and corporate networks.
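
The chapter's measurements come from OPNET simulations; purely as an illustration of the QoS metrics named above, the sketch below (an assumed helper, not the chapter's model) computes mean end-to-end delay and RFC 3550-style interarrival jitter from per-packet send and receive timestamps.

```python
# Illustrative only: mean end-to-end delay and RFC 3550-style jitter
# from per-packet (send_time, recv_time) pairs. Not the chapter's OPNET setup.

def qos_metrics(packets):
    """packets: list of (send_time, recv_time) tuples in seconds, in send order."""
    delays = [recv - send for send, recv in packets]
    mean_delay = sum(delays) / len(delays)

    # RFC 3550 interarrival jitter: running smoothed mean of |D(i-1, i)|,
    # where D is the difference in transit time of consecutive packets.
    jitter = 0.0
    for prev, curr in zip(delays, delays[1:]):
        jitter += (abs(curr - prev) - jitter) / 16.0
    return mean_delay, jitter

# Example: three voice packets sent every 20 ms with slightly varying delay.
pkts = [(0.000, 0.030), (0.020, 0.052), (0.040, 0.071)]
print(qos_metrics(pkts))  # mean delay ~0.031 s, small positive jitter
```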


Author(s): Robert van Wessel, Henk J. de Vries

We all take the ubiquity of the Internet for granted: anyone, anywhere, anytime, any device, any connection, any app…but for how long? Is the future of the Internet really at stake? Discussions about control of the Internet, its architecture, and the applications running on it started more than a decade ago (Blumenthal & Clark, 2001). This topic is becoming more and more important for citizens, businesses, and governments across the world. In its original set-up, the architecture of the Internet did not favor one application over another and was based on the net neutrality principle (Wu, 2003). However, architectures should be understood as an “alternative way of influencing economic systems” (Van Schewick, 2010); they should not be a substitute for politics (Agre, 2003). The architecture is laid down in standards, and therefore discussions about the future of the Internet should also address the role of standards. This is what this chapter aims to do.


Author(s):  
Askin Erdem Gundogdu ◽  
Erkan Afacan

There has been great interest in wireless power transmission since 2007, when a novel approach was presented by a group of scientists at MIT. With this technique, power can be transmitted over a range of a couple of meters with high efficiency; however, to use this technique in everyday life with high efficiency and long transfer range, small structured devices and new design techniques are strongly required. In this chapter, an investigation of supplying energy by frequency sweeping is presented. The experimental results show that energy can be supplied to multiple devices almost at the same time. If the range of the frequency sweep is increased, the number of devices can be increased as well, at the cost of a slight loss of energy efficiency in the transfer system. The authors hope that the proposed technique inspires designers and the market.
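
The chapter's experimental setup is not reproduced here; as a rough, entirely hypothetical illustration of the sweeping idea, the sketch below assumes a Lorentzian-shaped coupling efficiency around each receiver's resonant frequency and estimates how much energy each receiver collects during one sweep of the transmitter across a band. All frequencies, powers, and the efficiency model are assumptions, not the authors' measurements.

```python
# Hypothetical illustration of supplying several resonant receivers by sweeping
# the transmitter frequency across a band. Values and the coupling model are
# assumed, not taken from the chapter's experiments.

def sweep_energy(receivers, f_start, f_stop, steps, dwell_s, tx_power_w, bandwidth):
    """receivers: dict name -> resonant frequency (Hz).
    Returns energy (J) delivered to each receiver during one sweep, assuming a
    Lorentzian efficiency peak of width `bandwidth` around each resonance."""
    energy = {name: 0.0 for name in receivers}
    for i in range(steps):
        f = f_start + (f_stop - f_start) * i / (steps - 1)
        for name, f_res in receivers.items():
            eff = 1.0 / (1.0 + ((f - f_res) / bandwidth) ** 2)  # assumed coupling
            energy[name] += tx_power_w * eff * dwell_s
    return energy

devices = {"sensor_a": 6.5e6, "sensor_b": 7.0e6, "sensor_c": 7.4e6}  # hypothetical
print(sweep_energy(devices, 6.0e6, 8.0e6, steps=200, dwell_s=0.001,
                   tx_power_w=10.0, bandwidth=50e3))
```

Widening the sweep range (f_start to f_stop) lets more resonant frequencies, and hence more devices, be served per sweep, while each device receives energy for a smaller fraction of the sweep, which mirrors the efficiency trade-off described above.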


Author(s): Barbara Holland

New technologies pose new challenges when libraries build their virtual collections. With the growth and popularity of e-books and other portable devices, collections can no longer be evaluated purely on the basis of content. Today, there is a growing trend toward seeking information using mobile devices. Libraries can extend new types of services to users of mobile devices and develop, license, or otherwise make available scholarly content configured for mobile devices. Libraries will soon become part of an institutional planning process for the development of services for mobile devices. Only users can indicate whether these platforms will be used as mobile tools for study or as entertainment devices. In 2009-2010, the University of Technology Library (now part of Aalto University), in collaboration with the Usability Research Group, surveyed various e-book readers. Furthermore, from 2009-2010 the California Lutheran University ran a two-semester pilot to explore how course use of e-readers affects student learning. To improve access to digital assets at the Norwegian National Library, an Android app was created and tested for mobile use. In addition, two military educational schools conducted a study of current mobile device ownership and use by their students. Survey results revealed that a majority of students say they would engage in mobile learning if it were available. This chapter examines these surveys and emerging trends in digital libraries, mobile devices, and mobile learning.


Author(s):  
V. Goswami ◽  
S. S. Patra ◽  
G. B. Mund

In cloud computing, the virtualization of IT infrastructure enables the consolidation and pooling of IT resources so that they can be shared across diverse applications, offsetting the limitation of shrinking resources and growing business needs. Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software; it extends Information Technology's existing capabilities. In the last few years, cloud computing has grown from a promising business concept to one of the fastest growing segments of the IT industry. For the commercial success of this new computing paradigm, the ability to deliver guaranteed Quality of Service is crucial. Based on the Service Level Agreement, requests are processed in the cloud centers in different modes. This chapter deals with Quality of Service and the optimal management of cloud centers with different arrival modes. For this purpose, the authors consider a finite-buffer multi-server queueing system in which client requests have different arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines and that different modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for optimal values of the system's parameters.
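
The authors' exact model with distinct arrival modes is not reproduced here; as a minimal sketch of evaluating a finite-buffer, multi-server cloud center, the code below computes standard M/M/c/K performance measures (blocking probability, mean number in system, mean response time) for assumed arrival and service rates.

```python
from math import factorial

# Illustrative M/M/c/K evaluation of a finite-buffer, multi-server cloud center.
# Not the chapter's multi-mode model; the rates below are assumed.

def mmck_measures(lam, mu, c, K):
    """lam: arrival rate, mu: per-VM service rate, c: virtual machines,
    K: system capacity (in service + waiting). Returns (P_block, L, W)."""
    a = lam / mu
    # Unnormalized steady-state probabilities p_n for n requests in the system.
    p = [a**n / factorial(n) if n < c else a**n / (factorial(c) * c**(n - c))
         for n in range(K + 1)]
    norm = sum(p)
    p = [x / norm for x in p]

    p_block = p[K]                           # request rejected when buffer is full
    L = sum(n * p[n] for n in range(K + 1))  # mean number of requests in system
    lam_eff = lam * (1 - p_block)            # accepted (effective) arrival rate
    W = L / lam_eff                          # mean response time via Little's law
    return p_block, L, W

print(mmck_measures(lam=40.0, mu=5.0, c=10, K=25))  # hypothetical rates
```

Measures like these would feed the cost function that a genetic algorithm, as mentioned above, searches over when tuning the number of virtual machines and the buffer size.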


Author(s):  
Abdelaali Chaoub ◽  
Elhassane Ibn-Elhaj

Cognitive Radio networks and channel sharing are emerging as a new paradigm of communication in multimedia and wireless networks. In this chapter, the authors consider a secondary use application that consists of carrying TDMA traffic between the mobile and the base station for GSM networks over a Cognitive Radio network. Packets are therefore lost not only because of primary traffic interruptions; the authors also consider opportunistic spectrum sharing, the cornerstone of the Cognitive Radio concept, as the major cause of collisions between several Secondary Users. The authors introduce their specific collision model and propose a solution that creates many highly reliable secondary user links, using an algorithm introduced here, to alleviate traffic collisions. They evaluate the secondary traffic transmission performance in terms of spectral efficiency and outline the gains achieved by the proposed solution. Finally, the authors highlight a recent trend in the CR literature, which consists of leveraging TV white spaces to deploy and enhance next generation cellular networks such as LTE.
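
The authors' specific collision model is not detailed in this summary; purely as a generic illustration of collisions under opportunistic sharing, the sketch below assumes N secondary users each pick one of M vacant channels uniformly at random and estimates how often a tagged user collides with another secondary user.

```python
import random

# Generic illustration of secondary-user collisions under opportunistic sharing:
# N secondary users independently pick one of M vacant channels at random.
# This is NOT the chapter's collision model, only an assumed baseline.

def collision_probability(n_users, n_channels, trials=100_000):
    """Monte Carlo estimate of the probability that a tagged secondary user
    shares its chosen channel with at least one other secondary user."""
    collisions = 0
    for _ in range(trials):
        picks = [random.randrange(n_channels) for _ in range(n_users)]
        if picks.count(picks[0]) > 1:   # the tagged user is picks[0]
            collisions += 1
    return collisions / trials

# Closed form under the same uniform-choice assumption: 1 - (1 - 1/M)**(N - 1).
print(collision_probability(5, 20), 1 - (1 - 1/20)**4)
```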


Author(s):  
Amine Dhraief ◽  
Imen Mahjri ◽  
Abdelfettah Belghith

Wireless Sensor Networks (WSNs) have recently emerged as a prominent technology for many civilian and military applications in both rural and urban environments. Area coverage configuration is an efficient method for alleviating the nodes' limited energy supply in high-density WSNs. It consists of selecting as few active sensors as possible from all deployed nodes while ensuring sufficient sensing coverage of the monitored region. Several coverage configuration protocols have been developed; most of them presume the availability of precise knowledge about node locations and sensing ranges. Relaxing these conservative assumptions might affect the performance of coverage configuration protocols. In this chapter, the authors examine the impact of location errors, irregular sensing ranges, and packet losses on the Coverage Configuration Protocol (CCP). More precisely, the authors focus on the impact of using this protocol in a real application: precision agriculture, where farmers need to cover the entire terrain with sensors in order to rapidly detect and localize spots requiring chemical treatment.
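
CCP's internal eligibility rule is not reproduced here; as a simple illustration of what sensing coverage means, and of how location error can distort it, the sketch below samples the field on a grid and reports the fraction of points within range of at least one sensor. Positions, sensing range, and error magnitude are assumed values, not the chapter's experimental settings.

```python
import random

# Illustration of area coverage and the effect of location error.
# All numbers below are assumed, not taken from the chapter's CCP experiments.

def coverage_fraction(sensors, sensing_range, field, grid_step):
    """sensors: list of (x, y) positions; field: (width, height).
    Returns the fraction of grid points within sensing_range of some sensor."""
    width, height = field
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any((x - sx)**2 + (y - sy)**2 <= sensing_range**2 for sx, sy in sensors):
                covered += 1
            x += grid_step
        y += grid_step
    return covered / total

random.seed(1)
true_pos = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]
# Reported positions differ from true positions because of location error.
noisy_pos = [(x + random.gauss(0, 5), y + random.gauss(0, 5)) for x, y in true_pos]

print("coverage from true positions    :", coverage_fraction(true_pos, 15, (100, 100), 2))
print("coverage from reported positions:", coverage_fraction(noisy_pos, 15, (100, 100), 2))
```

The gap between the two figures hints at why a protocol that trusts reported positions, like CCP, may over- or under-estimate the region it actually covers.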


Author(s):  
C. Poongodi ◽  
A. M. Natarajan

Intermittently Connected Mobile Networks (ICMNs) are a kind of wireless network in which, due to node mobility and lack of connectivity, there may be disconnections among the nodes for long periods. To deal with such networks, a store-carry-forward method is adopted for routing. This method buffers messages in each node for a long time until a forwarding opportunity arises, and multiple replicas are made of each message. This results in increased network overhead and high resource consumption because of uncontrolled replication, which occurs due to the lack of global knowledge about the messages and the forwarding nodes. The authors introduce a simple new scheme that applies a knapsack policy-based replication strategy when replicating the messages residing in a node's buffer. The number of replications is controlled by appropriately selecting messages based on the total count of replications already made and the message size. In addition, messages are selected for forwarding based on the relay node's goodness in contacting the destination and the remaining buffer size of that relay node. Therefore, useful replications are made based on the dynamic environment of the network, which reduces the network overhead, resource consumption, and delivery delay and, in turn, increases the delivery ratio.
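
The authors' exact knapsack policy is only summarized above; as a minimal sketch of the idea, the code below ranks buffered messages by an assumed utility (favoring messages that have few replicas so far, penalizing large messages) and greedily packs them into the relay's remaining buffer space. The field names and the utility formula are illustrative assumptions.

```python
# Illustrative knapsack-style selection of messages to replicate during a contact.
# The utility formula and field names are assumptions, not the authors' policy.

def select_for_replication(messages, remaining_buffer):
    """messages: list of dicts with 'id', 'size' (bytes), 'replicas' (count so far).
    Greedy knapsack: prefer high utility per byte, stop when the relay's
    remaining buffer space is exhausted."""
    def utility(m):
        # Fewer existing replicas -> more valuable to replicate again.
        return 1.0 / (1 + m["replicas"])

    ranked = sorted(messages, key=lambda m: utility(m) / m["size"], reverse=True)
    chosen, used = [], 0
    for m in ranked:
        if used + m["size"] <= remaining_buffer:
            chosen.append(m["id"])
            used += m["size"]
    return chosen

buffer_msgs = [
    {"id": "m1", "size": 5000,  "replicas": 0},
    {"id": "m2", "size": 20000, "replicas": 3},
    {"id": "m3", "size": 8000,  "replicas": 1},
]
print(select_for_replication(buffer_msgs, remaining_buffer=15000))  # ['m1', 'm3']
```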


Author(s): Natarajan Meghanathan

In the first half of the chapter, the authors provide a comprehensive description of two broad categories of data gathering algorithms for wireless sensor networks, the classical energy-unaware algorithms and the modern energy-aware algorithms, and present an exhaustive performance comparison of representative algorithms from both categories. While the first half of the chapter focuses on a static sink (located outside, on the network boundary), the second half explores the use of mobile sinks that gather data by stopping in the vicinity of the sensor nodes. As a first step, the authors investigate the performance of three different strategies for developing sink mobility models for delay- and energy-efficient data gathering in static wireless sensor networks. The three strategies differ in how the next stop for data gathering is determined: randomly choosing a sensor node that is yet to be covered (Random), choosing the sensor node that has the maximum number of uncovered neighbor nodes (Max-Density), and choosing the sensor node that has the largest value for the product of the number of uncovered neighbor nodes and the residual energy (Max-Density-Energy). Based on the simulation results, the authors recommend the random node selection strategy as the better basis for sink mobility models (with minimal deployment overhead), rather than keeping track of the number of uncovered neighbor nodes per node and the residual energy available at the nodes.
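
As a compact sketch of the three next-stop rules described above, the following code picks the sink's next stop under each strategy; the node records (neighbor sets and residual energies) are assumed to be available and are illustrative only.

```python
import random

# Sketch of the three next-stop rules described above: Random, Max-Density,
# and Max-Density-Energy. Node data structures are assumed for illustration.

def next_stop(strategy, uncovered, neighbors, energy):
    """uncovered: set of node ids not yet covered.
    neighbors: dict node -> set of neighbor node ids.
    energy: dict node -> residual energy."""
    def uncovered_neighbors(n):
        return len(neighbors[n] & uncovered)

    if strategy == "Random":
        return random.choice(sorted(uncovered))
    if strategy == "Max-Density":
        return max(uncovered, key=uncovered_neighbors)
    if strategy == "Max-Density-Energy":
        return max(uncovered, key=lambda n: uncovered_neighbors(n) * energy[n])
    raise ValueError(strategy)

nbrs = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
res_energy = {1: 0.9, 2: 0.3, 3: 0.8, 4: 0.5}
uncov = {1, 2, 3, 4}
for s in ("Random", "Max-Density", "Max-Density-Energy"):
    print(s, "->", next_stop(s, uncov, nbrs, res_energy))
```

Only the Random rule needs no per-node neighbor or energy bookkeeping, which is the deployment-overhead argument behind the authors' recommendation.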


Author(s):  
Ismail Ali ◽  
Sandro Moiron ◽  
Martin Fleury ◽  
Mohammed Ghanbari

Intra-refresh macroblocks and data partitioning are two error-resilience tools aimed at video streaming over wireless networks. Intra-refresh macroblocks avoid the repetitive delays associated with periodic intra-coded frames while also arresting temporal error propagation. Data partitioning divides a compressed data stream according to the importance of the data, allowing packet prioritization schemes to be designed. This chapter reviews these and other error-resilience tools from the H.264 codec. As an illustration of the use of these tools, the chapter demonstrates a wireless access scheme that selectively drops packets carrying intra-refresh macroblocks. This counter-intuitive scheme actually results in better video quality than if packets containing transform coefficients were selectively dropped. Dropping only occurs in the presence of wireless network congestion; at other times the intra-coded macroblocks protect the video against random bit errors. Any packet dropping takes place under IEEE 802.11e, a quality-of-service addition to the IEEE 802.11 standard for wireless LANs. The chapter shows that, with this scheme, it is possible to gain up to 2 dB in video quality during congestion over assigning a stream to a single IEEE 802.11e access category. The scheme is shown to be consistently advantageous in indoor and outdoor wireless scenarios over other ways of assigning the partitioned data packets to different access categories. The chapter also contains a review of other research ideas using intra-refresh macroblocks and data partitioning, as well as a look at the research outlook now that High Efficiency Video Coding (HEVC) has been released.
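
The chapter's scheme is built on H.264 data partitioning over IEEE 802.11e; the sketch below is only a schematic of the selective-drop policy described above, with congestion detection reduced to an assumed queue-length threshold and packet types reduced to assumed tags, not the authors' implementation.

```python
from collections import deque

# Schematic of the selective-drop idea described above: under congestion,
# packets carrying intra-refresh macroblocks are dropped first, while
# partition-A packets (headers and motion vectors) are kept.
# The threshold and packet fields are assumed for illustration.

CONGESTION_THRESHOLD = 50   # queued packets; assumed congestion indicator

def enqueue(queue, packet):
    """packet: dict with 'seq' and 'kind' in
    {'partition_A', 'partition_B', 'partition_C', 'intra_refresh'}."""
    congested = len(queue) >= CONGESTION_THRESHOLD
    if congested and packet["kind"] == "intra_refresh":
        return False            # the counter-intuitive drop, applied only under congestion
    queue.append(packet)
    return True

q = deque()
for seq in range(60):
    kind = "intra_refresh" if seq % 4 == 0 else "partition_A"
    enqueue(q, {"seq": seq, "kind": kind})
print(len(q), "packets queued,", 60 - len(q), "intra-refresh packets dropped")
```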

