On the Performance Comparisons of Native and Clientless Real-Time Screen-Sharing Technologies

Author(s):  
Chun-ying Huang ◽  
Yun-chen Cheng ◽  
Guan-zhang Huang ◽  
Ching-ling Fan ◽  
Cheng-hsin Hsu

Real-time screen-sharing provides users with ubiquitous access to remote applications, such as computer games, movie players, and desktop applications (apps), anywhere and anytime. In this article, we study the performance of different screen-sharing technologies, which can be classified into native and clientless ones. The native ones dictate that users install special-purpose software, while the clientless ones directly run in web browsers. In particular, we conduct extensive experiments in three steps. First, we identify a suite of the most representative native and clientless screen-sharing technologies. Second, we propose a systematic measurement methodology for comparing screen-sharing technologies under diverse and dynamic network conditions using different performance metrics. Last, we conduct extensive experiments and perform in-depth analysis to quantify the performance gap between clientless and native screen-sharing technologies. We found that our WebRTC-based implementation achieves the best overall performance. More precisely, it consumes a maximum of 3 Mbps bandwidth while reaching a high decoding ratio and delivering good video quality. Moreover, it leads to a steadily high decoding ratio and video quality under dynamic network conditions. By presenting the very first rigorous comparisons of the native and clientless screen-sharing technologies, this article will stimulate more exciting studies on the emerging clientless screen-sharing technologies.
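The metrics the abstract names, decoding ratio and consumed bandwidth, can be computed from simple per-frame capture logs. The sketch below is a minimal illustration of those two computations; the log format (byte size and decode-success flag per frame) is an assumption, not the authors' actual instrumentation.

```python
# Hypothetical per-frame log entries: (bytes_received, decoded_ok) per frame.
# Decoding ratio = decoded frames / received frames;
# bitrate = total received bits / capture duration.

def decoding_ratio(frames):
    """Fraction of received frames the client decoded successfully."""
    if not frames:
        return 0.0
    return sum(1 for _, ok in frames if ok) / len(frames)

def average_bitrate_mbps(frames, duration_s):
    """Average received bitrate in Mbps over the capture window."""
    total_bits = 8 * sum(size for size, _ in frames)
    return total_bits / duration_s / 1e6

# Four frames, one of which failed to decode -> ratio 0.75.
frames = [(50_000, True), (52_000, True), (48_000, False), (51_000, True)]
ratio = decoding_ratio(frames)
mbps = average_bitrate_mbps(frames, duration_s=0.133)
```

Tracking both metrics together, as the study does, matters because a client can keep bandwidth low simply by dropping frames; only the pair reveals the real trade-off.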

Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has the potential for use in architecture and design, where buildings can be superimposed in existing locations to render 3D representations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion. This refers to hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers, led by Dr Tomohiro Fukuda, are tackling the issue of occlusion in MR. They are currently developing an MR system that realises real-time occlusion by harnessing deep learning to achieve an outdoor landscape design simulation using a semantic segmentation technique. This methodology can be used to automatically estimate the visual environment prior to and after construction projects.


2021 ◽  
Vol 21 (4) ◽  
pp. 1-22
Author(s):  
Safa Otoum ◽  
Burak Kantarci ◽  
Hussein Mouftah

Volunteer computing, in which the owners of Internet-connected devices (laptops, PCs, smart devices, etc.) volunteer them as storage and computing resources, has become an essential mechanism for resource management in numerous applications. The growth of the volume and variety of data traffic on the Internet raises concerns about the robustness of cyberphysical systems, especially for critical infrastructures. Therefore, implementing an efficient Intrusion Detection System (IDS) for gathering such sensory data has gained vital importance. In this article, we present a comparative study of Artificial Intelligence (AI)-driven intrusion detection systems for wirelessly connected sensors that track crucial applications. Specifically, we present an in-depth analysis of the use of machine learning, deep learning and reinforcement learning solutions to recognise intrusive behavior in the collected traffic. We evaluate the proposed mechanisms using the KDD'99 dataset of real attacks in our simulations. Results present the performance metrics for three different IDSs, namely the Adaptively Supervised and Clustered Hybrid IDS (ASCH-IDS), Restricted Boltzmann Machine-based Clustered IDS (RBC-IDS), and Q-learning based IDS (Q-IDS), in detecting malicious behaviors. We also present the performance of different reinforcement learning techniques such as State-Action-Reward-State-Action learning (SARSA) and Temporal Difference learning (TD). Through simulations, we show that Q-IDS achieves a higher detection rate than SARSA-IDS and TD-IDS.
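The key distinction between the reinforcement learning techniques compared here is the update rule: Q-learning bootstraps off-policy from the greedy next action, while SARSA bootstraps on-policy from the action actually taken. The sketch below shows the two tabular updates; the toy state/action names and the alpha/gamma values are illustrative assumptions, not the article's configuration.

```python
# Tabular Q-learning vs. SARSA updates on a dict-of-dicts Q-table.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy: bootstrap from the best action available in s_next."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy: bootstrap from the action actually taken in s_next."""
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

# Toy IDS-style table: two traffic states, two responses.
Q = {s: {a: 0.0 for a in ("pass", "flag")} for s in ("normal", "attack")}
q_learning_update(Q, "attack", "flag", r=1.0, s_next="normal")
```

With all values initialised to zero, a reward of 1.0 moves the updated entry to alpha * r = 0.1; the two rules diverge only once the next-state values differ.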


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Saaveethya Sivakumar ◽  
Alpha Agape Gopalai ◽  
King Hann Lim ◽  
Darwin Gouwanda ◽  
Sunita Chauhan

Abstract: This paper presents a wavelet neural network (WNN) based method to reduce reliance on wearable kinematic sensors in gait analysis. Wearable kinematic sensors hinder real-time outdoor gait monitoring applications due to drawbacks caused by multiple sensor placements and sensor offset errors. The proposed WNN method uses vertical Ground Reaction Forces (vGRFs) measured from foot kinetic sensors as inputs to estimate ankle, knee, and hip joint angles. Salient vGRF inputs are extracted from primary gait event intervals. These selected gait inputs facilitate future integration with smart insoles for real-time outdoor gait studies. The proposed concept potentially reduces the number of body-mounted kinematic sensors used in gait analysis applications, hence leading to simplified sensor placement and control circuitry without deteriorating the overall performance.
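A wavelet neural network replaces the usual sigmoid hidden units with translated and dilated copies of a mother wavelet. The sketch below shows a generic one-hidden-layer WNN forward pass with a Mexican-hat wavelet; it illustrates the structure only, and the wavelet choice, layer sizes, and parameters are assumptions rather than the authors' trained model.

```python
import math

def mexican_hat(t):
    """Mexican-hat mother wavelet, a common WNN activation."""
    return (1 - t * t) * math.exp(-t * t / 2)

def wnn_forward(x, weights, translations, dilations, out_w, bias):
    """One hidden layer of wavelons: each applies the wavelet to a
    translated/dilated weighted sum of the inputs (e.g. vGRF features),
    and the output (e.g. a joint angle estimate) is a linear combination."""
    hidden = []
    for w, b, a in zip(weights, translations, dilations):
        z = sum(wi * xi for wi, xi in zip(w, x))
        hidden.append(mexican_hat((z - b) / a))
    return bias + sum(v * h for v, h in zip(out_w, hidden))
```

In training, the translations and dilations are learned alongside the weights, which is what lets a compact network localise the sharp transients around gait events.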


2021 ◽  
Vol 14 (5) ◽  
pp. 785-798
Author(s):  
Daokun Hu ◽  
Zhiwen Chen ◽  
Jianbing Wu ◽  
Jianhua Sun ◽  
Hao Chen

Persistent memory (PM) is increasingly being leveraged to build hash-based indexing structures featuring cheap persistence, high performance, and instant recovery, especially with the recent release of Intel Optane DC Persistent Memory Modules. However, most of them are evaluated on DRAM-based emulators with unrealistic assumptions, or focus on the evaluation of specific metrics with important properties sidestepped. Thus, it is essential to understand how well the proposed hash indexes perform on real PM and how they differ from each other when a wider range of performance metrics is considered. To this end, this paper provides a comprehensive evaluation of persistent hash tables. In particular, we focus on the evaluation of six state-of-the-art hash tables, including Level hashing, CCEH, Dash, PCLHT, Clevel, and SOFT, on real PM hardware. Our evaluation was conducted using a unified benchmarking framework and representative workloads. Besides characterizing common performance properties, we also explore how hardware configurations (such as PM bandwidth, CPU instructions, and NUMA) affect the performance of PM-based hash tables. With our in-depth analysis, we identify design trade-offs and good paradigms in prior art, and suggest desirable optimizations and directions for the future development of PM-based hash tables.


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Mohammad Zohaib

Dynamic difficulty adjustment (DDA) is a method of automatically modifying a game's features, behaviors, and scenarios in real time, depending on the player's skill, so that the player does not feel bored when the game is very simple or frustrated when it is very difficult. The intent of DDA is to keep the player engrossed till the end and to provide him/her with a challenging experience. In traditional games, difficulty levels increase linearly or stepwise during the course of the game. Features such as frequency, starting levels, or rates can be set only at the beginning of the game by choosing a level of difficulty. This can, however, result in a negative experience for players as they try to follow a predecided learning curve. DDA attempts to solve this problem by presenting a customized solution for gamers. This paper provides a review of current approaches to DDA.
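The simplest family of DDA approaches the review covers is a feedback rule: keep the player's recent success rate inside a target band, raising difficulty when they win too often and lowering it when they lose too often. The sketch below is a minimal version of such a rule; the band and step size are illustrative assumptions, not values from any surveyed system.

```python
def adjust_difficulty(difficulty, success_rate, target=(0.4, 0.7), step=0.1):
    """Nudge difficulty (clamped to [0, 1]) so the player's recent
    success rate stays inside the target band: too many wins -> harder,
    too many losses -> easier, otherwise leave it alone."""
    low, high = target
    if success_rate > high:
        difficulty += step
    elif success_rate < low:
        difficulty -= step
    return min(max(difficulty, 0.0), 1.0)
```

Running this after every encounter yields the stepwise-but-personalized curve that distinguishes DDA from a fixed difficulty setting chosen at the start of the game.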


Energies ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 4543
Author(s):  
Michael Spiegel ◽  
Eric Veith ◽  
Thomas Strasser

Multi-microgrids address the need for a resilient, sustainable, and cost-effective electricity supply by providing a coordinated operation of individual networks. Due to local generation, dynamic network topologies, and islanding capabilities of hosted microgrids or groups thereof, various new fault mitigation and optimization options emerge. However, this great flexibility brings new challenges, such as complex failure modes that must be considered for resilient operation. This work systematically reviews scheduling approaches which significantly influence the feasibility of mitigation options before a failure is encountered. An in-depth analysis of identified key contributions covers aspects such as the mathematical apparatus, failure models, and validation, to highlight the current methodical spectrum and to identify future perspectives. Despite the common optimization-based framework, a broad variety of scheduling approaches is revealed. However, none of the key contributions provides practical insights beyond lab validation, and considerable effort is required until the approaches can show their full potential in practical implementations. It is expected that the great level of detail will guide further research in improving and validating existing scheduling concepts and, in the long run, aid engineers in choosing the most suitable options for increasingly resilient power systems.


2021 ◽  
Author(s):  
Rahul Johari ◽  
Tanvi Gautam

Abstract: Natural calamities leave people helpless by creating situations such as network breakdown, loss of communication, intermittent connectivity, and dynamic network topology. In such situations, a dynamic and intermittent routing scheme is essential to keep communication possible. TCP/IP becomes futile under these circumstances, as it works best for static nodes and a pre-defined network topology in which source and destination nodes first establish a communication link with each other. An alternative is the DTN protocol, which possesses the characteristics needed to withstand such scenarios: dynamic network topology, intermittent connectivity, frequent path breaks, and a store-carry-forward fashion of message delivery. In this paper, we thoroughly investigate a forest fire dataset (Uttarakhand) by implementing it in the ONE simulator with the Epidemic, Prophet, Spray and Wait, HBPR, and GAER routing protocols. An extensive investigation of these real-world traces has been carried out with OppNet routing protocols against mobility models, namely Shortest Path Map-Based, Random Direction, Random Walk, Random Waypoint, and Cluster Movement, for the network performance metrics packet delivery ratio, packet overhead ratio, and average latency, with the application of the K-means clustering machine learning algorithm. With the help of this analysis, we explore the characteristics of the real-world traces and identify the areas in which network performance can be improved.
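K-means, as applied here to routing performance metrics, groups scenarios with similar (delivery ratio, overhead, latency) profiles. The sketch below is a plain dependency-free version of the algorithm; the example vectors are made up for illustration and do not come from the forest fire traces.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on metric vectors, e.g. one
    (delivery ratio, overhead ratio, latency) tuple per scenario."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers

centers = kmeans([(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)], k=2)
```

Once scenarios are clustered, inspecting which protocol/mobility-model combinations dominate each cluster is what points to the areas where performance can be improved.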


2019 ◽  
Vol 8 (2) ◽  
pp. 3800-3804

Many scheduling schemes exist for multilevel scheduling. This paper compares such schemes by measuring the average waiting time and average turnaround time; if these are minimized, overall performance improves. The comparison covers three scheduling schemes: Enhanced Dynamic Multilevel Packet scheduling (EDMP), Circular Wait Dynamic Multilevel Packet scheduling (CW-DMP), and Starvation-Free Dynamic Multilevel Packet scheduling (SF-DMP). All three schemes use three priority levels: priority level 1 (Pr1), priority level 2 (Pr2), and priority level 3 (Pr3). Pr1 comprises real-time tasks, Pr2 contains non-real-time remote tasks, and non-real-time local tasks are placed in Pr3. In each scheme, every priority level uses its own scheduling technique to schedule its tasks. The comparison is based on the waiting time and turnaround time of the tasks, from which the average waiting time and average turnaround time are calculated.
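The two comparison metrics are standard: a task's turnaround time is its completion time minus its arrival time, and its waiting time is turnaround minus burst time. A minimal sketch for first-come-first-served order, assuming all tasks arrive at time 0 (a simplification; the surveyed schemes use per-level policies):

```python
def fcfs_metrics(burst_times):
    """Average waiting and turnaround time for tasks served in
    first-come-first-served order, all arriving at time 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in burst_times:
        waiting.append(clock)      # time spent queued before service
        clock += burst
        turnaround.append(clock)   # completion time == turnaround here
    n = len(burst_times)
    return sum(waiting) / n, sum(turnaround) / n

# Bursts 5, 3, 8 -> waiting 0, 5, 8 (avg 13/3); turnaround 5, 8, 16 (avg 29/3).
avg_wait, avg_tat = fcfs_metrics([5, 3, 8])
```

Evaluating EDMP, CW-DMP, and SF-DMP amounts to computing these same averages over the task orderings each scheme produces per priority level.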


Author(s):  
Ivan Vasilevich Artamonov

In the course of developing and implementing information technologies, it is necessary to measure the performance of the designed and improved business processes. The system of performance metrics developed for such analysis is determined by the specific nature of a business process, while its quality depends on the analyst's experience. Current technologies do not provide a method for objectively measuring the throughput efficiency of future business processes, being either too primitive or too complex for real-world enterprise models. Some experience in performance measurement has been accumulated in the theory of manufacturing systems, computer hardware and software, queuing theory, and quality of service of business processes. Summing up these achievements, it is possible to create a general-purpose, abstract set of performance parameters that can be applied to any business process and used for in-depth analysis of processes in systems of simulation modelling and business process management. The set consists of four groups, evaluating efficiency by operation time, quantitative parameters, employee workload, and compliance with standards and conventions. For these parameters, a number of boundary values have been developed; reaching them leads to undesirable effects. In addition, a definition of dangerous events has been proposed to detect abnormal, out-of-bound process behavior or states that cause business process failure.

