Performance Studies of FTP, Voice and Video over ATM-Wireless Backbone Network

Author(s):  
Nurul I. Sarkar ◽  
Ritchie Qi ◽  
Akbar Hossain

Asynchronous Transfer Mode (ATM) is a high-speed networking technology designed to support real-time applications such as voice and video over both wired and wireless networks. This type of network is used by medium-to-large organizations and Internet service providers as a backbone network to carry data traffic over long distances with a guaranteed quality of service (QoS). The guaranteed QoS is achieved through a point-to-point link between end users. While the performance of ATM over wired networks has been studied extensively, the performance of real-time traffic over an ATM-wireless extension has not yet been fully explored. It is useful to compare the performance of an ATM network with and without a wireless extension against various network performance metrics to determine the effect of the wireless extension on system performance.

2021 ◽  
Vol 13 (3) ◽  
pp. 63
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions hinder users from experiencing uninterrupted high-quality video conferencing. This paper presents a novel approach to estimating the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating under the 802.11g/n/ac/ax standards on both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers’ privacy while reducing the operational costs of monitoring and data analytics.
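The classification approach described above can be illustrated with a deliberately simple sketch: a nearest-centroid classifier over a few hypothetical 802.11 link metrics (RSSI, retry rate, PHY rate). The feature set, values, and classifier choice are illustrative assumptions, not the paper's actual model or dataset.

```python
# Hypothetical sketch: classify perceived video-conferencing quality
# ("good" vs "poor") from 802.11 link metrics, in the spirit of the
# paper's approach. Features and thresholds are illustrative only.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(sample, centroids):
    """Nearest-centroid rule over (label, centroid) pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lc: dist2(sample, lc[1]))[0]

# Training rows: (RSSI dBm, retry rate %, PHY rate Mbit/s) per AP report.
good = [(-45, 2, 300), (-50, 3, 260), (-48, 1, 280)]
poor = [(-80, 25, 30), (-75, 20, 45), (-82, 30, 20)]

centroids = [("good", centroid(good)), ("poor", centroid(poor))]
print(classify((-47, 2, 290), centroids))  # a strong link
print(classify((-78, 22, 35), centroids))  # a weak, retry-heavy link
```

A production model would of course use a richer feature set and a trained classifier rather than raw centroids; the sketch only shows how AP-side 802.11 statistics can drive a quality label without inspecting user traffic.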


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 621
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video on demand (VoD) services such as YouTube have generated considerable volumes of Internet traffic in homes and buildings in recent years. While Internet service providers deploy fiber and recent wireless technologies such as 802.11ax to support high bandwidth requirements, the best-effort nature of 802.11 networks and variable wireless medium conditions hinder users from experiencing maximum quality during video streaming. Hence, Internet service providers (ISPs) have an interest in monitoring the perceived quality of service (PQoS) on customer premises in order to avoid customer dissatisfaction and churn. Since existing approaches for estimating PQoS or quality of experience (QoE) require external measurement of generic network performance parameters, this paper presents a novel approach to estimating the PQoS of video streaming using only 802.11-specific network performance parameters collected from wireless access points. This study produced datasets comprising 802.11n/ac/ax-specific network performance parameters labelled with PQoS in the form of mean opinion scores (MOS) to train machine learning algorithms. As a result, we achieved classification accuracies of 93–99% in estimating PQoS by monitoring only 802.11 parameters on off-the-shelf Wi-Fi access points. Furthermore, the 802.11 parameters used in the machine learning model were analyzed to identify the cause of quality degradation detected on the Wi-Fi networks. Finally, ISPs can utilize the results of this study to provide predictable and measurable wireless quality by implementing non-intrusive monitoring of customers’ perceived quality. In addition, this approach reduces customers’ privacy concerns while reducing the operational cost of analytics for ISPs.
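Labelling training data with MOS values for a classifier implies mapping continuous scores onto discrete quality classes. A minimal sketch of such a mapping; the band edges are illustrative and not taken from the paper:

```python
def mos_to_class(mos):
    """Map a mean opinion score (1.0-5.0) to a coarse PQoS label.
    The band edges here are illustrative, not the paper's."""
    if not 1.0 <= mos <= 5.0:
        raise ValueError("MOS must be in [1.0, 5.0]")
    if mos >= 4.0:
        return "good"
    if mos >= 3.0:
        return "fair"
    return "poor"

print(mos_to_class(4.3), mos_to_class(3.5), mos_to_class(1.8))
```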


Author(s):  
Noor Nateq Alfaisaly ◽  
Suhad Qasim Naeem ◽  
Azhar Hussein Neama

Worldwide Interoperability for Microwave Access (WiMAX) is an 802.16 wireless standard that delivers high speed, providing a data rate of up to 100 Mbps and a coverage area of up to 50 km. Voice over Internet protocol (VoIP) is flexible and offers low-cost telephony for clients over IP. However, many challenges must still be addressed to provide a stable, good-quality voice connection over the Internet. The effects of parameters such as the multipath channel model and bandwidth on a star-trajectory WiMAX network were evaluated under a scenario consisting of four cells, each containing one mobile node and one base station. Network performance metrics such as throughput and MOS were used to identify the best-performing VoIP codecs. Performance was analyzed with OPNET 14.5. The results show that disabling the multipath channel model gave better performance than using the ITU Pedestrian A model: throughput was approximately 1600 packets/sec at 15 dB and approximately 1300 packets/sec at -1 dB, and the MOS with the multipath channel model disabled was likewise higher than with the ITU Pedestrian A model.
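VoIP MOS values such as those compared here are commonly derived from the ITU-T G.107 E-model, which converts a transmission rating factor R into an estimated MOS. A sketch of that standard conversion follows; whether OPNET computes its MOS statistic by exactly this path is an assumption:

```python
def r_to_mos(r):
    """ITU-T G.107 E-model: convert rating factor R to estimated MOS.
    R <= 0 maps to the floor MOS of 1.0; R >= 100 to the ceiling 4.5."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# R = 93.2 is the default for an unimpaired narrowband call.
print(round(r_to_mos(93.2), 2))
```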


M/C Journal ◽  
1998 ◽  
Vol 1 (1) ◽  
Author(s):  
Nick Caldwell

The Australian Broadcasting Corporation is in the midst of significant change as a result of budgetary pressures from the government and the challenge of the oncoming digital age. Lack of funding and dwindling resources have forced the ABC to shut down many of its regional services and to outsource many of its formerly in-house productions. However, there do appear to be some ways in which the ABC might meet, as the rhetoric goes, "the challenge of the digital era". Traditionally, the role of the ABC has included the provision of comprehensive coverage of, and service for, the whole of Australia, including regions that would be economically unfeasible for commercial operations to penetrate. Recently, however, budgetary cuts have eroded this role substantially, with the axing of state-based current affairs and the cessation of Radio Triple J's planned expansion into regional Australia. The Internet has provided a potential, if problematic, stop-gap solution, through the launch of the ABC's online news service. Internet-based news solutions have few of the production-end overheads of the television service. There are no expensive studio set-ups, no presenters, no cameras, just text that can be quickly keyed into the system and formatted for instantaneous, non-linear delivery. I should note at this point that currently, this "delivery" is in the passive sense of the word: users must search out the content and download it onto their machines. In Internet jargon, this is called "pull" technology. New technologies being developed promise to "push" the content automatically and directly to a user's computer. The ABC's implementation, taking advantage of all these benefits, is text-based, comprehensive, updated constantly, and easy to use. Currently, however, delivery of Internet-based content is tied to the existing phone network, and with most Internet service providers based in state capitals, regional Internet access is hindered by the cost of long-distance calls.
The potential exists, nonetheless, for the ABC to achieve truly national coverage by methods that bypass existing structures. The planned shift by Australian TV networks to digital transmission has the potential to enable new possibilities for public broadcasting. A digital infrastructure could allow information and programming to be cheaply produced at the local level, then recompiled centrally and redistributed across the country. The convergence of computer and television will enable a greater variety of content to be sent to the home -- and, possibly, sent back out again in an altered form. Such a transformation of the way we experience television may well alter the concept of public broadcasting beyond recognition, if not render it obsolete. However, these possibilities, although reasonable given projected advances in technology, so far largely remain fantasy due to the debate over regulation between the Federal government and the commercial networks. It remains to be seen whether the ABC will be able to take advantage of the new opportunities.
Citation reference for this article
MLA style: Nick Caldwell. "Looking to a Digital Future: Thoughts on the New ABC." M/C Journal 1.1 (1998). [your date of access] <http://www.media-culture.org.au/9807/abc.php>.
Chicago style: Nick Caldwell, "Looking to a Digital Future: Thoughts on the New ABC," M/C Journal 1, no. 1 (1998), <http://www.media-culture.org.au/9807/abc.php> ([your date of access]).
APA style: Nick Caldwell. (1998) Looking to a digital future: thoughts on the new ABC. M/C Journal 1(1). <http://www.media-culture.org.au/9807/abc.php> ([your date of access]).


2018 ◽  
Vol 31 (1) ◽  
pp. 181-198 ◽  
Author(s):  
Michael F. Frimpon ◽  
Ebenezer Adaku

Purpose The rising proportion of internet users in Sub-Saharan Africa and the lack of analytical techniques, as decision support systems, for choosing among alternative internet service providers (ISPs) underpin this study. The purpose of this paper is to propose an approach for evaluating high-speed internet service offered by ISPs in a sub-Saharan African country. Design/methodology/approach Using a sample size of 150, pairwise comparisons of two ISPs along five criteria of cost, usability, support, reliability and speed were performed by ten-person groups of university students working in various organizations in Ghana and undertaking an online Six Sigma course. Geometric means were employed to aggregate the scores within the 15 groups, and these scores were then normalized and used as input into an analytic hierarchy process (AHP) grid. Findings The results show that consumers of internet services place the greatest emphasis on the cost attribute of internet provision in their decision making. On the other hand, consumers place the least emphasis on the support provided by ISPs when deciding among alternative ISPs. Originality/value This study has sought to provide an analytical framework for assessing the quality of service provided by alternative ISPs in a developing economy’s context. The evaluating criteria in this framework also reveal the key consumer requirements in internet service provision in a developing economy’s environment. This, to a large extent, will inform the marketing strategies of existing ISPs in Ghana as well as prospective ones intending to enter the Ghanaian market. Besides, the National Communication Authority, the regulator of communication services provision in Ghana, will be informed about the performances of the ISPs along five performance criteria. This is expected to aid its regulatory functions.
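The aggregation step described, geometric means of group judgments followed by normalization into priority weights, can be sketched for a single criterion as follows. The judgment values below are made up for illustration and are not the study's data:

```python
# Sketch of AHP-style aggregation: per-group pairwise scores combined
# via geometric mean, then normalized into priority weights.
import math

def geometric_mean(xs):
    """Geometric mean via logs, robust for long products."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# 15 hypothetical group judgments of "ISP A vs ISP B" on the cost
# criterion, on Saaty's 1-9 scale (>1 favours A, <1 favours B).
group_judgments = [3, 5, 3, 1, 7, 3, 5, 1/3, 3, 5, 3, 3, 1, 5, 3]
a_over_b = geometric_mean(group_judgments)

# Normalize the 2x2 comparison into priority weights summing to 1.
w_a = a_over_b / (1 + a_over_b)
w_b = 1 / (1 + a_over_b)
print(round(w_a, 3), round(w_b, 3))
```

With five criteria, the same aggregation would run per criterion, and the criterion weights themselves would come from a second pairwise-comparison matrix in the AHP grid.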


Author(s):  
Nurul I. Sarkar ◽  
Yash Dole

This chapter aims to report on the performance of voice and video traffic over two popular backbone network technologies, namely Gigabit Ethernet (GbE) and Asynchronous Transfer Mode (ATM). ATM networks are being used by many universities and organizations for their unique characteristics such as scalability and guaranteed Quality of Service (QoS), especially for voice and video applications. Gigabit Ethernet matches ATM functionality by providing higher bandwidth at much lower cost, with less complexity and easier integration into existing Ethernet technologies. It is useful to be able to compare these two technologies against various network performance metrics to find out which performs better for transporting voice and video conferencing traffic. This chapter provides an in-depth performance analysis and comparison of GbE and ATM networks through extensive OPNET-based simulations. The authors measure Quality of Service (QoS) parameters such as voice and video throughput, end-to-end delay, and voice jitter. The analysis and simulation results reported in this chapter provide some insights into the performance of GbE and ATM backbone networks. This chapter may help network researchers and engineers select the best technology for deploying campus and corporate backbone networks.
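One of the measured QoS parameters, voice jitter, is commonly estimated with the RFC 3550 interarrival-jitter filter, a smoothed average of packet transit-time variation. A minimal sketch follows; whether OPNET's jitter statistic uses exactly this estimator is an assumption:

```python
def rtp_jitter(transit_times):
    """RFC 3550 interarrival jitter: for each consecutive packet pair,
    take the absolute difference in transit time (arrival time minus
    send timestamp) and fold it into a 1/16-gain running estimate.
    Input and output are in the same time unit, e.g. milliseconds."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

print(rtp_jitter([10, 10, 10]))   # steady transit -> zero jitter
print(rtp_jitter([10, 26]))       # one 16 ms swing -> 1 ms estimate
```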


Author(s):  
Maria Löblich

Internet neutrality—usually net(work) neutrality—encompasses the idea that all data packets that circulate on the Internet should be treated equally, without discriminating between users, types of content, platforms, sites, applications, equipment, or modes of communication. The debate about this normative principle revolves around the Internet as a set of distribution channels and how and by whom these channels can be used to control communication. The controversy was spurred by advancements in technology, the increased usage of bandwidth-intensive services, and the changing economic interests of Internet service providers. Internet service providers are not only important technical but also central economic actors in the management of the Internet’s architecture. They seek to increase revenue, to recover the cost of sizable infrastructure upgrades, and to expand their business models. This has consequences for the net neutrality principle, for individual users, and for corporate content providers. Where Internet service providers become content providers themselves, net neutrality proponents fear that providers may exclude competitor content, distribute it poorly and more slowly, and require competitors to pay for using high-speed networks. Net neutrality is not only a debate on infrastructure business models carried out in economic expert circles. On the contrary, and despite its technical character, it has become an issue in the public debate, framed not only in economic but also in political and social terms. The main dividing line in the debate is whether net neutrality regulation is necessary or not and what scope net neutrality obligations should have. The Federal Communications Commission (FCC) in the United States passed new net neutrality rules in 2015 and strengthened its legal underpinning regarding the regulation of Internet service providers (ISPs).
With the Telecoms Single Market Regulation, there will for the first time be European Union–wide legislation for net neutrality, though not without recent dilution of its requirements. From a communication studies perspective, Internet neutrality is an issue because it relates to a number of topics addressed in communication research, including communication rights, diversity of media ownership, media distribution, user control, and consumer protection. The connection between communication studies and the legal and economic bodies of research that dominate the net neutrality literature is largely underexplored. The study of net neutrality would benefit from such a linkage.


Author(s):  
Mohamed Wahba ◽  
Robert Leary ◽  
Nicolás Ochoa-Lleras ◽  
Jariullah Safi ◽  
Sean Brennan

This paper presents implementation details and performance metrics for software developed to connect the Robot Operating System (ROS) with Simulink Real-Time (SLRT). The communication takes place over the User Datagram Protocol (UDP), which allows for fast transmission of large amounts of data between the two systems. We use SLRT’s built-in UDP communication and binary packing blocks to send and receive the data over a network. We use implementation metrics from several examples to illustrate the effectiveness and drawbacks of this bridge in a real-time environment. The time latency of the bridge is analyzed by performing loop-back tests and obtaining statistics of the time delay. A proof-of-concept experiment is presented in which two physically distant laboratories ran a driver-in-the-loop system. This work provides recommendations for implementing data integrity measures, as well as the potential to use the system with other applications that demand high-speed real-time communication.
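The loop-back latency test described can be sketched as a minimal UDP round-trip probe: timestamp a datagram, echo it back locally, and measure the elapsed time. The port choice, payload layout, and single-packet measurement are illustrative simplifications of the paper's setup:

```python
# Minimal UDP loop-back latency probe: a local echo thread returns
# the timestamped datagram, and the round-trip time is measured.
import socket
import struct
import threading
import time

def echo_once(sock):
    """Receive one datagram and send it straight back."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))  # ephemeral port on loopback
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
t0 = time.perf_counter()
cli.sendto(struct.pack("!d", t0), srv.getsockname())  # send timestamp
payload, _ = cli.recvfrom(1024)
rtt_ms = (time.perf_counter() - struct.unpack("!d", payload)[0]) * 1e3
print(f"loopback RTT: {rtt_ms:.3f} ms")
```

A real characterization would repeat this over many packets and report the delay distribution, as the loop-back statistics in the paper do, rather than a single sample.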


2003 ◽  
Vol 02 (04) ◽  
pp. 683-692 ◽  
Author(s):  
DENNIS GUSTER ◽  
CHANGSOO SOHN ◽  
PAUL SAFONOV ◽  
DAVID ROBINSON

Technological advances such as high-speed Ethernet and ATM have provided a means for business organizations to employ high-performance networking. However, few studies have been conducted to verify these architectures' typical performance in a business environment. This study analyzed the network performance of high-speed Ethernet and ATM when they were configured as LAN backbones. The results revealed that ATM exhibited performance superior to high-speed Ethernet, but when adjustments were made for differences in line speed, the throughput was similar. In addition to analyzing empirical data about each technology's performance, the advantages and limitations of using ATM in a business network are discussed.


This paper presents a novel area-efficient fast Fourier transform (FFT) for real-time compressive sensing (CS) reconstruction. Among the various methodologies used for CS reconstruction, the greedy orthogonal matching pursuit (OMP) approach provides an accurate solution, but at the cost of considerable computational overhead. Several computationally intensive arithmetic operations, such as complex matrix multiplication, are required to formulate the correlation vectors, making hardware implementations of this algorithm highly complex and power-consuming. Computational complexity becomes especially important in complex FFT models that must meet different operational standards and system requirements. In general, real-time applications require FFT transforms that offer high-speed computation with the least possible complexity overhead in order to support a wide range of applications. This paper presents a hardware-efficient FFT computation technique with twiddle-factor normalization for correlation optimization in OMP. Experimental results are provided to validate the performance metrics of the proposed normalization techniques against complexity- and energy-related issues. The proposed method is verified with an FPGA synthesizer and validated against currently available comparative analyses.
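The correlation step at the heart of OMP, which the proposed FFT technique accelerates, can be sketched in its simplest form: pick the dictionary atom most correlated with the residual, record its coefficient, subtract its contribution, and repeat. The orthonormal dictionary below collapses the per-iteration least-squares solve to a dot product, an illustrative simplification of the general algorithm:

```python
# Illustrative OMP over an orthonormal dictionary: the greedy
# correlation search is the step that dominates the hardware cost.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def omp_orthonormal(y, atoms, k):
    """Recover k coefficients of y over an orthonormal dictionary."""
    residual = list(y)
    coeffs = {}
    for _ in range(k):
        # Correlate the residual with every unused atom; take the best.
        idx = max((i for i in range(len(atoms)) if i not in coeffs),
                  key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[idx])
        coeffs[idx] = c
        # Remove the selected atom's contribution from the residual.
        residual = [r - c * a for r, a in zip(residual, atoms[idx])]
    return coeffs

# A 2-sparse signal over the standard basis of R^4.
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
y = [0.0, 3.0, 0.0, -1.5]
print(omp_orthonormal(y, atoms, 2))  # expect {1: 3.0, 3: -1.5}
```

For general (non-orthonormal) dictionaries, each iteration re-solves a least-squares problem over all selected atoms, which is exactly the matrix-heavy workload the paper's FFT-based correlation optimization targets.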

