Estimating PQoS of Video Conferencing on Wi-Fi Networks Using Machine Learning

2021
Vol 13 (3)
pp. 63
Author(s):
Maghsoud Morshedi
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions hinder users from experiencing uninterrupted high-quality video conferencing. This paper presents a novel approach to estimate the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating under the 802.11g/n/ac/ax standards in both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers’ privacy while reducing the operational costs of monitoring and data analytics.
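
The abstract describes training a classifier on AP-side 802.11 parameters labelled with PQoS levels. The sketch below illustrates that pipeline in miniature with a random forest; the feature names (rssi, phy_rate, retry_rate, channel_util), the synthetic data, and the three-level labels are illustrative assumptions, not the paper's actual feature set or dataset.

```python
# Minimal sketch: classify PQoS level from hypothetical 802.11 AP features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(-90, -30, n),   # rssi (dBm) -- placeholder feature
    rng.uniform(6, 1200, n),    # phy_rate (Mbps) -- placeholder feature
    rng.uniform(0, 0.5, n),     # retry_rate (fraction of retransmitted frames)
    rng.uniform(0, 1, n),       # channel_util (fraction of airtime busy)
])
y = rng.integers(0, 3, n)       # placeholder labels: 0=poor, 1=acceptable, 2=good

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```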

Sensors
2021
Vol 21 (2)
pp. 621
Author(s):
Maghsoud Morshedi
Josef Noll

Video on demand (VoD) services such as YouTube have generated considerable volumes of Internet traffic in homes and buildings in recent years. While Internet service providers deploy fiber and recent wireless technologies such as 802.11ax to support high bandwidth requirements, the best-effort nature of 802.11 networks and variable wireless medium conditions hinder users from experiencing maximum quality during video streaming. Hence, Internet service providers (ISPs) have an interest in monitoring the perceived quality of service (PQoS) in customer premises in order to avoid customer dissatisfaction and churn. Since existing approaches for estimating PQoS or quality of experience (QoE) require external measurement of generic network performance parameters, this paper presents a novel approach to estimate the PQoS of video streaming using only 802.11-specific network performance parameters collected from wireless access points. This study produced datasets comprising 802.11n/ac/ax-specific network performance parameters labelled with PQoS in the form of mean opinion scores (MOS) to train machine learning algorithms. As a result, we achieved classification accuracies of up to 93–99% in estimating PQoS by monitoring only 802.11 parameters on off-the-shelf Wi-Fi access points. Furthermore, the 802.11 parameters used in the machine learning model were analyzed to identify the cause of quality degradation detected on the Wi-Fi networks. Finally, ISPs can utilize the results of this study to provide predictable and measurable wireless quality by implementing non-intrusive monitoring of customers’ perceived quality. In addition, this approach reduces customers’ privacy concerns while reducing the operational cost of analytics for ISPs.
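
This abstract adds a root-cause step: inspecting which 802.11 parameters the trained model relies on. A rough, self-contained sketch of that idea follows; the data, feature names, and the feature-to-cause mapping in the final comment are placeholders, not the authors' published analysis.

```python
# Sketch: rank the 802.11 features a trained model relies on as a rough
# pointer to the likely cause of quality degradation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
feature_names = ["rssi", "phy_rate", "retry_rate", "channel_util"]
X = rng.normal(size=(500, len(feature_names)))  # placeholder 802.11 samples
y = rng.integers(1, 6, 500)                     # placeholder MOS labels (1-5)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda p: p[1], reverse=True):
    print(f"{name}: {score:.3f}")
# A dominant retry_rate would suggest interference or contention, whereas a
# dominant rssi would suggest coverage problems (illustrative mapping only).
```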


The World Health Organization’s (WHO) 2018 report on diabetes states that the number of diabetes cases increased from 108 million in 1980 to 422 million. The fact sheet shows a major increase in the prevalence of diabetes among adults (aged 18 years and older), from 4.7% to 8.5%. Major health hazards caused by diabetes include kidney failure, heart disease, blindness, stroke, and lower-limb amputation. This article applies supervised machine learning algorithms to the Pima Indian Diabetes dataset to explore patterns of risk using predictive models. Predictive model construction is based on the following supervised machine learning algorithms: Naïve Bayes, Decision Tree, Random Forest, Gradient Boosted Tree, and Tree Ensemble. Further, analytical patterns for these predictive models are presented based on several performance parameters, including accuracy, precision, recall, and F-measure.
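
A minimal sketch of the comparison the article describes, assuming the Pima Indian Diabetes dataset is available locally as a CSV with a binary `Outcome` column (the path `pima_diabetes.csv` is hypothetical); the Tree Ensemble variant is omitted here for brevity.

```python
# Sketch: compare several supervised classifiers on the Pima dataset using
# accuracy, precision, recall, and F-measure.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("pima_diabetes.csv")  # hypothetical local path
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosted Tree": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred):.3f}",
          f"rec={recall_score(y_te, pred):.3f}",
          f"F1={f1_score(y_te, pred):.3f}")
```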


10.29007/d18s
2020
Author(s):
Vincent Jaouen
Guillaume Dardenne
Florent Tixier
Éric Stindel
Dimitris Visvikis

Due to their sensitivity to acquisition parameters, medical images such as magnetic resonance imaging (MRI), positron emission tomography (PET), or computed tomography (CT) images often suffer from a kind of variability unrelated to diagnostic power, often known as the center effect (CE). This is especially true in MRI, where units are arbitrary and image values can strongly depend on subtle variations in the pulse sequences [1]. Due to the CE, it is particularly difficult in various medical imaging applications to 1) pool data coming from several centers or 2) train machine learning algorithms requiring large homogeneous training sets. There is therefore a clear need for image standardization techniques aiming at reducing this effect. Considerable improvements in image synthesis have been achieved over recent years using (deep) machine learning. Models based on generative adversarial networks (GANs) now enable the generation of high-definition images capable of fooling the human eye [2]. These methods are increasingly used in medical imaging for various cross-modality (image-to-image) applications such as MR-to-CT synthesis [3]. However, they have seldom been used for the purpose of image standardization, i.e., for reducing the CE [4]. In this work, we explore the potential advantage of embedding a standardization step using GANs prior to knee bone tissue classification in MRI. We consider image standardization as a within-domain image synthesis problem, where our objective is to learn a mapping between a domain D consisting of heterogeneous images and a reference domain R showing one or several images with the desired image characteristics. Preliminary results suggest a beneficial impact of such a standardization step on segmentation performance.
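
A toy PyTorch sketch of the within-domain synthesis idea: a generator G maps images from the heterogeneous domain D toward the reference domain R, while a discriminator tries to tell G's outputs from real reference images. The tiny network sizes, single-channel 64x64 inputs, plain GAN loss, and L1 content term are all simplifying assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(  # toy image-to-image generator (D -> R)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
Dnet = nn.Sequential(  # discriminator: real reference vs. standardized output
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Dnet.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

x_d = torch.randn(8, 1, 64, 64)  # placeholder batch from heterogeneous domain D
x_r = torch.randn(8, 1, 64, 64)  # placeholder batch from reference domain R

# Discriminator step: real reference images -> 1, standardized images -> 0.
fake = G(x_d).detach()
loss_d = bce(Dnet(x_r), torch.ones(8, 1)) + bce(Dnet(fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator, plus an L1 content term that keeps
# anatomy intact while intensities shift toward the reference domain.
out = G(x_d)
loss_g = bce(Dnet(out), torch.ones(8, 1)) + 10.0 * nn.functional.l1_loss(out, x_d)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```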


2020
Vol 2020
pp. 1-14
Author(s):
Mariano Di Martino
Peter Quax
Wim Lamotte

Zero-rating is a technique whereby internet service providers (ISPs) allow consumers to utilize a specific website without it counting against their internet data plan. Implementing zero-rating requires a website identification method that is accurate, efficient, and reliable enough to be applied to live network traffic. In this paper, we examine existing website identification methods with the objective of applying them to zero-rating. Furthermore, we demonstrate the ineffectiveness of these methods against modern encryption protocols such as Encrypted SNI and DNS over HTTPS, and therefore show that ISPs will not be able to maintain current zero-rating approaches in the near future. To address this concern, we present “Open-Knock,” a novel approach that is capable of accurately identifying a zero-rated website, thwarts free-riding attacks, and is sustainable on the increasingly encrypted web. In addition, our approach does not require plaintext protocols or preprocessed fingerprints upfront. Finally, our experimental analysis unveils that we are able to recover the correct domain name from the IP address for websites in the Tranco top-6000 list with an accuracy of 50.5%, thereby outperforming current state-of-the-art approaches.
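
To make the IP-to-domain problem concrete, here is a deliberately naive baseline using reverse DNS (PTR) lookups. This is not the Open-Knock technique; it mainly illustrates why naive identification falls short, since CDN-hosted sites share IP addresses and typically return provider-owned PTR records rather than the site's own domain.

```python
# Naive baseline: map an IP address to a hostname via a reverse DNS lookup.
import socket

def ip_to_domain(ip: str) -> str | None:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None  # no PTR record for this address

for ip in ["8.8.8.8", "1.1.1.1"]:
    print(ip, "->", ip_to_domain(ip))
```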


2020
Vol 7 (2)
pp. 205395172095158
Author(s):  
Baptiste Kotras

This paper focuses on the conception and use of machine-learning algorithms for marketing. In recent years, specialized service providers as well as in-house data scientists have increasingly been using machine learning to predict consumer behavior for large companies. Predictive marketing thus revives the old dream of one-to-one, perfectly adjusted selling techniques, now at an unprecedented scale. How do predictive marketing devices change the way corporations know and model their customers? Drawing on STS and the sociology of quantification, I propose to study the original ambivalence that characterizes the promise of mass personalization, i.e., algorithmic processes in which the precise adjustment of predictions to unique individuals involves the computation of massive datasets. By studying algorithms in practice, I show how the active embedding of preexisting local consumer knowledge and punctual de-personalization mechanisms are key to the epistemic and organizational success of predictive marketing. This paper argues for the study of algorithms in their contexts and suggests new perspectives on algorithmic objectivity.


Author(s):  
RajKishore Sahni

The upsurge in the volume of unwanted emails, called spam, has created an intense need for the development of more dependable and robust anti-spam filters. Machine learning methods have recently been used to successfully detect and filter spam emails. We present a systematic review of some of the popular machine-learning-based email spam filtering approaches. Our review covers the important concepts, attempts, efficiency, and research trends in spam filtering. The preliminary discussion in the study background examines the application of machine learning techniques to the email spam filtering processes of leading internet service providers (ISPs) such as Gmail, Yahoo, and Outlook. We discuss the general email spam filtering process and the various efforts by different researchers to combat spam through the use of machine learning techniques. Our review compares the strengths and drawbacks of existing machine learning approaches and identifies open research problems in spam filtering. We recommend deep learning and deep adversarial learning as future techniques that can effectively handle the menace of spam emails.
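
A minimal sketch of the classical ML spam-filtering pipeline this review surveys: bag-of-words features feeding a multinomial naive Bayes classifier. The toy messages are placeholders; real filters train on large labelled corpora.

```python
# Sketch: bag-of-words + multinomial naive Bayes spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_msgs = [
    "win a free prize now", "cheap meds limited offer",        # spam
    "meeting moved to 3pm", "please review the attached report",  # ham
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(train_msgs, train_labels)
print(spam_filter.predict(["free offer, claim your prize"]))  # -> [1] (spam)
```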


2018
Author(s):
Adam Hakim
Shira Klorfeld
Tal Sela
Doron Friedman
Maytal Shabat-Simon
...  

A basic aim of marketing research is to predict consumers’ preferences and the success of marketing campaigns in the general population. However, traditional behavioral measurements have various limitations, calling for novel measurements to improve predictive power. In this study, we used neural signals measured with electroencephalography (EEG) to overcome these limitations. We recorded the EEG signals of subjects as they watched commercials for six food products. We introduce a novel approach in which, instead of using one type of EEG measure, we combine several measures and use state-of-the-art machine learning algorithms to predict subjects’ individual future preferences over the products and the commercials’ population success, as measured by their YouTube metrics. As a benchmark, we acquired measurements of the commercials’ effectiveness using a standard questionnaire commonly used in marketing research. We reached 68.5% accuracy in predicting between the most and least preferred items and a lower-than-chance RMSE score for predicting the rank-order preferences of all six products. We also predicted the commercials’ population success better than chance. Most importantly, we demonstrate for the first time that, for all of our predictions, the EEG measurements increased the predictive power of the questionnaires. Our analysis methods and results show great promise for utilizing EEG measures by managers, marketing practitioners, and researchers as a valuable tool for predicting subjects’ preferences and marketing campaigns’ success.
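
A rough sketch of the multi-measure idea: concatenating several EEG-derived features per trial and cross-validating a classifier that predicts most- vs. least-preferred items. The feature names, dimensions, and random data are illustrative placeholders, not the authors' actual EEG measures.

```python
# Sketch: combine multiple EEG measures into one feature matrix and predict
# preference with cross-validated logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120
band_power = rng.normal(size=(n_trials, 5))     # e.g. power in 5 frequency bands
asymmetry = rng.normal(size=(n_trials, 1))      # e.g. frontal alpha asymmetry
erp_amplitude = rng.normal(size=(n_trials, 3))  # e.g. selected ERP components

X = np.hstack([band_power, asymmetry, erp_amplitude])  # combined measures
y = rng.integers(0, 2, n_trials)  # 1 = most preferred, 0 = least preferred

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```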


2021
Vol 11 (1)
Author(s):
Duy Ngoc Nguyen
Tuoi Thi Phan
Phuc Do

Sentiment classification using deep learning algorithms has achieved good results when tested on popular datasets. However, it remains challenging to build a corpus on new topics that can train machine learning algorithms for sentiment classification with high confidence. This study proposes a method, called knowledge processing and representation based on ontology (KPRO), that embeds knowledge from an ontology of opinion datasets into the word-embedding layer of deep learning algorithms for sentiment classification, so that the significant features of the dataset are represented there. Unlike methods that lexically encode the corpus or add information to it, this method adds a representation of the raw data based on expert knowledge captured in the ontology. Once the data carries rich knowledge of the topic, the efficiency of the machine learning algorithms is significantly enhanced. The method is therefore also applicable to embedding knowledge in datasets in other languages. Test results show that deep learning methods achieved considerably higher accuracy when trained on datasets processed with the KPRO method than on datasets not processed by it. This method is thus a novel approach to improving the accuracy of deep learning algorithms and increasing the reliability of new datasets, making them ready for mining.
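
A rough sketch of the general idea behind enriching an embedding layer with ontology-derived knowledge: each token's learned embedding is concatenated with a frozen feature vector looked up from a toy "ontology" (here, polarity and aspect flags). The ontology, dimensions, and fusion-by-concatenation are illustrative assumptions, not the paper's exact KPRO construction.

```python
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "battery": 1, "excellent": 2, "terrible": 3, "screen": 4}
# Toy ontology knowledge: one extra feature vector per token (polarity, is_aspect).
ontology = torch.tensor([
    [0.0, 0.0],   # <pad>
    [0.0, 1.0],   # battery: aspect term
    [1.0, 0.0],   # excellent: positive polarity
    [-1.0, 0.0],  # terrible: negative polarity
    [0.0, 1.0],   # screen: aspect term
])

class KnowledgeEmbedding(nn.Module):
    def __init__(self, vocab_size, dim, knowledge):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.register_buffer("knowledge", knowledge)  # frozen ontology features

    def forward(self, token_ids):
        # Concatenate learned embeddings with ontology features per token.
        return torch.cat([self.emb(token_ids), self.knowledge[token_ids]], dim=-1)

layer = KnowledgeEmbedding(len(vocab), dim=8, knowledge=ontology)
ids = torch.tensor([[1, 2], [4, 3]])  # "battery excellent", "screen terrible"
print(layer(ids).shape)  # torch.Size([2, 2, 10]) -> ready for an LSTM/CNN classifier
```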


2020
Vol 12 (35)
pp. 4303-4309
Author(s):
Gustavo Larios
Gustavo Nicolodelli
Matheus Ribeiro
Thalita Canassa
Andre R. Reis
...  

A novel approach to distinguishing soybean seed vigor based on Fourier transform infrared spectroscopy (FTIR) combined with chemometric methods is presented.
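
For illustration, a brief sketch of a typical FTIR-plus-chemometrics workflow: dimensionality reduction on the spectra followed by a simple discriminant model separating vigor classes. The random spectra and binary vigor labels are stand-ins, not the paper's data or its exact chemometric method.

```python
# Sketch: PCA on (placeholder) FTIR spectra + linear discriminant analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 900))  # 60 seed samples x 900 wavenumber points
vigor = rng.integers(0, 2, 60)        # 0 = low vigor, 1 = high vigor (placeholder)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(model, spectra, vigor, cv=5).mean())
```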

