Dissecting Cloud Gaming Performance with DECAF

Author(s): Hassan Iqbal, Ayesha Khalid, Muhammad Shahzad

Cloud gaming platforms have witnessed tremendous growth over the past two years, with a number of large Internet companies, including Amazon, Facebook, Google, Microsoft, and Nvidia, publicly launching their own platforms. While cloud gaming platforms continue to grow, visibility into their performance and relative comparison is lacking, largely due to the absence of systematic measurement methodologies that can be generally applied. As such, in this paper, we implement DECAF, a methodology to systematically analyze and dissect the performance of cloud gaming platforms across different game genres and game platforms. DECAF is highly automated and requires minimal manual intervention. By applying DECAF, we measure the performance of three commercial cloud gaming platforms, Google Stadia, Amazon Luna, and Nvidia GeForce Now, and uncover a number of important findings. First, we find that processing delays in the cloud comprise the majority of the total round-trip delay experienced by users, accounting for as much as 73.54% of total user-perceived delay. Second, we find that the video streams delivered by cloud gaming platforms are characterized by high variability in bitrate, frame rate, and resolution. Platforms struggle to consistently serve 1080p/60 frames-per-second streams across different game genres, even when the available bandwidth is 8-20× that of the platform's recommended settings. Finally, we show that the platforms exhibit performance cliffs, reacting poorly to packet losses and in some cases dramatically reducing the delivered bitrate, by up to 6.6×, when loss rates increase from 0.1% to 1%. Our work has important implications for cloud gaming platforms and opens the door for further research on comprehensive measurement methodologies for cloud gaming.
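To illustrate the kind of delay dissection the abstract describes, here is a minimal Python sketch (our illustration, not DECAF's code; the component names and the per-frame numbers are hypothetical) that decomposes user-perceived delay into network, cloud-processing, and client-side components and reports the processing share:

```python
from dataclasses import dataclass

@dataclass
class FrameDelay:
    network_ms: float      # round-trip network delay for the frame
    processing_ms: float   # game logic + rendering + encoding in the cloud
    client_ms: float       # decoding + display on the client device

    @property
    def total_ms(self) -> float:
        return self.network_ms + self.processing_ms + self.client_ms

# Hypothetical per-frame measurements, roughly in the regime reported above.
frames = [FrameDelay(18.0, 62.0, 9.0), FrameDelay(20.0, 70.0, 11.0)]
share = sum(f.processing_ms for f in frames) / sum(f.total_ms for f in frames)
print(f"cloud processing share of user-perceived delay: {share:.1%}")
```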

2021, Vol 7 (6), pp. eabb7118
Author(s): E. Harris, E. Diaz-Pines, E. Stoll, M. Schloter, S. Schulz, ...

Nitrous oxide is a powerful greenhouse gas whose atmospheric growth rate has accelerated over the past decade. Most anthropogenic N2O emissions result from soil N fertilization; the applied nitrogen is converted to N2O via oxic nitrification and anoxic denitrification pathways. Drought-affected soils are expected to be well oxygenated; however, using high-resolution isotopic measurements, we found that denitrifying pathways dominated N2O emissions during a severe drought imposed on managed grassland. This was due to a reversible, drought-induced enrichment of nitrogen-bearing organic matter on soil microaggregates, and it suggested a strong role for chemo- or codenitrification. Throughout rewetting, denitrification dominated emissions, despite high variability in fluxes. Total N2O flux and the denitrification contribution were significantly higher during rewetting than for control plots over the same soil moisture range. The observed feedbacks between climate-change-induced shifts in precipitation and N2O emission pathways are sufficient to account for the accelerating N2O growth rate observed over the past decade.


Author(s): Chetna Laroiya, Vijay Bhushan Aggarwal

Implementing IoT-based healthcare for improved quality of life requires dealing with sensor and communication technologies. In this article, the authors propose an approach to analyse real-time data streams from a patient's body-surface sensors within a small sliding-window frame. Time-series analysis of the sensor data is effective in reducing the round-trip delay between the patient and the medical server. Two at-sensor algorithms for detecting odd measurements are proposed, based on joint probability and joint conditional probability. The proposed algorithms are designed to be SQL-compliant, since, as is evident in the literature, at-sensor UDBMSs offer only elementary capabilities and support a meagre subset of SQL.
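As a rough illustration of the idea, the following Python sketch flags "odd" readings inside a small sliding window using an empirical joint probability over discretized vitals. The window length, bin widths, threshold, and the choice of heart rate and SpO2 as the two signals are all assumptions, not values from the article:

```python
from collections import Counter, deque

WINDOW = 32        # sliding-window length (assumed)
THRESHOLD = 0.05   # joint-probability cutoff below which a reading is "odd"

window = deque(maxlen=WINDOW)  # recent discretized (heart-rate, SpO2) pairs

def discretize(hr, spo2):
    """Bin raw readings so joint probabilities can be estimated from counts."""
    return (int(hr) // 20, int(spo2) // 5)

def is_odd(hr, spo2):
    """Flag a reading whose empirical joint probability P(hr_bin, spo2_bin),
    estimated over the current window, falls below THRESHOLD.
    (A conditional variant would use P(spo2_bin | hr_bin) instead.)"""
    key = discretize(hr, spo2)
    p_joint = Counter(window)[key] / len(window) if window else 1.0
    window.append(key)
    return p_joint < THRESHOLD

# Normal readings followed by an anomalous one.
for hr, spo2 in [(72, 98), (75, 97), (74, 98), (73, 98), (140, 80)]:
    print(hr, spo2, "odd" if is_odd(hr, spo2) else "normal")
```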


Author(s): Ali Baghani, Reza Zahiri Azar, Septimiu Salcudean, Robert Rohling

The past two decades have witnessed the development of a new medical imaging modality: tissue elastography. The contrast in the images produced by an elastography system is based on tissue elasticity; hence, these images are called elastograms. Tissue elasticity is of clinical interest because it is often correlated with pathology [1]. Different approaches to tissue elastography have emerged [2, 3]. In this article, we report a tissue elastography system, implemented on an ultrasound machine, that provides consistent elastograms of a commercial quality-assurance elastography phantom. The system uses our previously developed high-frame-rate sequencing and phase compensation techniques to measure axial and lateral motion at a typical frame rate of 1.25 kHz [4]. The system applies a direct inversion algorithm to the curl of the displacement field to reconstruct elasticity. The most important benefit of this method is that the resulting elastograms do not depend on the boundary conditions or on the shape, size, or position of the exciter; as a result, they have fewer artifacts originating from these factors. The curl of the displacement has been used before in magnetic resonance elastography (MRE), together with direct inversion of the wave equation [5], and promising results have been obtained.
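To make the curl-plus-direct-inversion idea concrete, here is a minimal numerical sketch (our illustration under assumed density, excitation frequency, and grid, not the authors' implementation). Taking the curl of the measured displacement phasors removes the compressional component; the remaining shear field q then satisfies the Helmholtz relation mu * lap(q) = -rho * omega^2 * q, from which mu follows algebraically at each pixel:

```python
import numpy as np

RHO = 1000.0                 # tissue density, kg/m^3 (assumed)
OMEGA = 2 * np.pi * 200.0    # excitation frequency of 200 Hz (assumed)

def curl_z(ux, uy, dx, dy):
    """Out-of-plane curl q = dU_y/dx - dU_x/dy of the displacement phasors.
    Taking the curl removes the compressional part of the wave field."""
    return np.gradient(uy, dx, axis=1) - np.gradient(ux, dy, axis=0)

def laplacian(q, dx, dy):
    d2x = np.gradient(np.gradient(q, dx, axis=1), dx, axis=1)
    d2y = np.gradient(np.gradient(q, dy, axis=0), dy, axis=0)
    return d2x + d2y

def shear_modulus(ux, uy, dx, dy, eps=1e-12):
    """Pointwise least-squares solution of mu * lap(q) = -rho * omega^2 * q."""
    q = curl_z(ux, uy, dx, dy)
    lap = laplacian(q, dx, dy)
    return -RHO * OMEGA**2 * np.real(q * np.conj(lap)) / (np.abs(lap)**2 + eps)

# Quick self-check with a synthetic plane shear wave u_y = exp(i*k*x),
# which should invert to mu = rho * omega^2 / k^2 (3 kPa here).
x = np.linspace(0.0, 0.1, 200)
X, _ = np.meshgrid(x, x)
k = OMEGA / np.sqrt(3000.0 / RHO)         # wave number for mu = 3 kPa
uy = np.exp(1j * k * X)
ux = np.zeros_like(uy)
mu = shear_modulus(ux, uy, x[1] - x[0], x[1] - x[0])
print(mu[100, 100])                       # ~3000 Pa away from the edges
```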


Author(s): Carlos Flores-Cortés, Raymundo Buenrostro-Mariscal, Antonio Guerrero-Ibañez, Fermín Estrada-González, Jesus Sandoval-Orozco

Wireless Sensor Networks (WSNs) have enormous potential for investigating oceanographic problems such as the impact of industrial, touristic, and commercial activities in coastal areas. However, ocean waves, fog, humidity, and other environmental conditions make communication between nodes difficult. This paper presents an on-site evaluation of the performance of an IEEE 802.15.4 WSN. In particular, received signal strength indication (RSSI), throughput, round-trip delay time, and efficiency rate are evaluated. Different configurations were tested, and the results show which settings performed better in these environments.
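One of the evaluated metrics, round-trip delay time, can be probed with a short sketch like the one below (our illustration, not the authors' testbed code; the host, port, payload size, and probe count are assumptions). It sends small frames to a UDP echo service on a node and reports RTT statistics and the loss rate:

```python
import socket
import statistics
import time

HOST, PORT = "192.168.0.42", 9000   # assumed address of the echo node
PAYLOAD = b"x" * 32                 # small frame, in the spirit of 802.15.4 payloads
N_PROBES = 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for _ in range(N_PROBES):
    t0 = time.perf_counter()
    sock.sendto(PAYLOAD, (HOST, PORT))
    try:
        sock.recvfrom(64)                                 # wait for the echoed frame
        rtts.append((time.perf_counter() - t0) * 1000.0)  # ms
    except socket.timeout:
        pass                                              # count as a lost probe

if rtts:
    loss = 1 - len(rtts) / N_PROBES
    print(f"RTT mean={statistics.mean(rtts):.2f} ms, "
          f"median={statistics.median(rtts):.2f} ms, loss={loss:.1%}")
```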


Electronics, 2020, Vol 9 (1), pp. 136
Author(s): Imtiaz Mahmud, Geon-Hwan Kim, Tabassum Lubna, You-Ze Cho

With the aim of improved throughput and reduced delay, Google proposed the bottleneck bandwidth and round-trip time (BBR) congestion control algorithm in 2016. In contrast to traditional loss-based congestion control algorithms, it is designed to operate without forming bottleneck queues or incurring packet losses. However, we find unexpected behaviour in BBR during testbed experiments and network simulator 3 (NS-3) simulations: in congested network scenarios, we observe huge packet losses, excessive retransmissions, and large queue formation at the bottleneck. We believe this is because BBR sends extra data during bandwidth probing without considering network conditions, and because it lacks a proper recovery mechanism. In a congested network, the extra data creates a large queue at the bottleneck, which persists because the drain time is insufficient. BBR lacks a mechanism to detect such large bottleneck queues, cannot respond properly to a critical congestion situation, and therefore suffers from excessive retransmissions. Based on these observations, we propose a derivative of BBR, called "BBR with advanced congestion detection (BBR-ACD)", that reduces the excessive retransmissions without losing BBR's merits. We propose a novel method that determines an actual congestion situation by considering both packet loss and the delay-gradient of the round-trip time, and we implement a proper recovery mechanism to handle such situations. Through extensive testbed experiments and NS-3 simulations, we confirmed that BBR-ACD reduces retransmissions by about 50% while improving the total goodput of the network.
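The detection idea can be sketched in a few lines of Python (an illustration of the loss-plus-delay-gradient principle the abstract describes, not the BBR-ACD implementation; the class, window length, and threshold are our assumptions). A loss is treated as real congestion only when the RTT trend over a recent window is rising, so isolated random losses do not trigger recovery:

```python
from collections import deque

GRADIENT_THRESH = 0.0   # ms per sample; a positive slope means the queue is building

class CongestionDetector:
    def __init__(self, window=8):
        self.rtts = deque(maxlen=window)  # recent RTT samples in ms

    def on_ack(self, rtt_ms):
        self.rtts.append(rtt_ms)

    def delay_gradient(self):
        """Average per-sample change of RTT over the window."""
        if len(self.rtts) < 2:
            return 0.0
        samples = list(self.rtts)
        diffs = [b - a for a, b in zip(samples, samples[1:])]
        return sum(diffs) / len(diffs)

    def on_loss(self):
        """Treat a loss as actual congestion only when RTT is trending up."""
        return self.delay_gradient() > GRADIENT_THRESH

det = CongestionDetector()
for rtt in [40, 41, 44, 49, 55]:   # RTT creeping up: a queue is forming
    det.on_ack(rtt)
print("congestion" if det.on_loss() else "random loss")
```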

