Quality of Experience Models for Multimedia Streaming

Author(s):  
Vlado Menkovski ◽  
Georgios Exarchakos ◽  
Antonio Liotta ◽  
Antonio Cuadra Sánchez

Understanding how quality is perceived by viewers of multimedia streaming services is essential for efficient management of those services. Quality of Experience (QoE) is a subjective metric that quantifies the perceived quality and is crucial in optimizing the tradeoff between quality and resources. However, accurate estimation of QoE often entails cumbersome subjective studies that are long and expensive to execute. In this regard, the authors present a QoE estimation methodology for developing Machine Learning prediction models based on initial restricted-size subjective tests. Experimental results on subjective data from streaming multimedia tests show that the Machine Learning models outperform other statistical methods, achieving accuracy greater than 90%. These models are suitable for real-time use due to their low computational complexity. Despite their high accuracy, these models are static and cannot adapt to environmental change. To maintain the accuracy of the prediction models, the authors have adopted Online Learning techniques that update the models with data from subjective viewer feedback. This method provides accurate and adaptive QoE prediction models that are an indispensable component of a QoE-aware management service.
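The online-learning step described above can be sketched as a logistic-regression QoE classifier updated one viewer-feedback sample at a time. This is a minimal illustrative sketch, not the authors' actual model: the two features (normalised bitrate and packet-loss rate) and the synthetic labeling rule are assumptions introduced here.

```python
import math
import random

class OnlineQoEModel:
    """Tiny logistic-regression QoE classifier updated one sample at a time."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Probability that the viewer rates the session as acceptable."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        """One SGD step on a single (features, label) viewer-feedback pair."""
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(0)
model = OnlineQoEModel(n_features=2)
for _ in range(2000):
    bitrate = random.random()              # normalised bitrate (assumed feature)
    loss = random.random()                 # normalised packet-loss rate
    label = 1 if bitrate - loss > 0 else 0 # synthetic "acceptable QoE" rule
    model.update([bitrate, loss], label)
```

After these incremental updates, the model scores a high-bitrate, low-loss session well above a low-bitrate, high-loss one, which is the adaptive behaviour the abstract describes.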


Author(s):  
Raed Shatnawi

BACKGROUND: Fault data is vital to predicting fault-proneness in large systems. Predicting faulty classes helps in allocating the appropriate testing resources for future releases. However, current fault data face challenges such as unlabeled instances and data imbalance, which degrade the performance of prediction models. Data imbalance arises because the majority of classes are labeled as not faulty whereas the minority are labeled as faulty. AIM: The research proposes to improve fault prediction using software metrics in combination with threshold values. Statistical techniques are proposed to improve the quality of the datasets and therefore the quality of the fault prediction. METHOD: Threshold values of object-oriented metrics are used to label classes as faulty to improve the fault prediction models. The resulting datasets are used to build prediction models using five machine learning techniques. The use of threshold values is validated on ten large object-oriented systems. RESULTS: Models are built for the datasets with and without the use of thresholds. The combination of thresholds with machine learning has improved the fault prediction models significantly for all five classifiers. CONCLUSION: Threshold values can be used to label software classes as fault-prone and can improve machine learners in predicting fault-prone classes.
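The threshold-based labeling step can be illustrated as follows. WMC, CBO and RFC are standard object-oriented metrics, but the threshold values here are hypothetical placeholders, not the validated thresholds from the paper.

```python
# Hypothetical threshold values for three object-oriented metrics
# (illustrative only; not the thresholds validated in the study).
THRESHOLDS = {"wmc": 20, "cbo": 9, "rfc": 40}

def label_fault_prone(metrics):
    """Label a class fault-prone if any metric meets or exceeds its threshold."""
    return any(metrics[m] >= t for m, t in THRESHOLDS.items())

classes = [
    {"name": "Parser", "wmc": 25, "cbo": 4,  "rfc": 30},
    {"name": "Logger", "wmc": 5,  "cbo": 2,  "rfc": 12},
    {"name": "Router", "wmc": 12, "cbo": 11, "rfc": 55},
]
labels = {c["name"]: label_fault_prone(c) for c in classes}
```

The resulting boolean labels would then serve as the target variable for the machine learning classifiers described in the METHOD section.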


Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, health care, insurance and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Among the attributes sought in quality software are correctness, product quality, scalability, completeness and absence of bugs. However, quality standards differ from one organization to another, so it is preferable to apply software metrics to measure the quality of software. Attributes gathered from source code through software metrics can serve as input to a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, this study surveys the application of machine learning to software defect data gathered from previous research works.
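As a minimal illustration of gathering attributes from source code through software metrics, the sketch below computes a few simple metrics for a Python snippet. The chosen metrics (lines of code, comment ratio, longest line) are illustrative assumptions, not a standard metric suite.

```python
def extract_metrics(source: str) -> dict:
    """Compute a few simple source-code metrics usable as predictor features."""
    lines = [ln.strip() for ln in source.splitlines()]
    code = [ln for ln in lines if ln and not ln.startswith("#")]
    comments = [ln for ln in lines if ln.startswith("#")]
    return {
        "loc": len(code),                                   # non-comment lines
        "comment_ratio": len(comments) / max(len(lines), 1),
        "max_line_len": max((len(ln) for ln in lines), default=0),
    }

sample = "# add two numbers\ndef add(a, b):\n    return a + b\n"
metrics = extract_metrics(sample)
```

A vector of such per-module metrics is what a defect predictor would consume as its feature input.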


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus (DM) is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions, the number of diabetic incidences is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can help only in inferential and descriptive analysis, whereas AI-based machine learning models can provide actionable prediction models for faster and more accurate diagnosis of complications associated with DM. Conclusion: The integration of AI-based analytics techniques such as machine learning and deep learning into clinical medicine will result in improved disease management through faster disease detection and cost reduction for disease treatment.
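A minimal example of the "actionable prediction model" contrasted here with purely descriptive statistics is a one-feature decision stump that learns a cut-off from data. The HbA1c values and retinopathy outcomes below are synthetic illustrative values, not clinical data.

```python
def best_stump(xs, ys):
    """Find the single-feature threshold that best separates outcomes."""
    best = (None, 0.0)  # (threshold, accuracy)
    for t in sorted(set(xs)):
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(ys)
        best = max(best, (t, acc), key=lambda p: p[1])
    return best

# Synthetic HbA1c values (%) and retinopathy outcome (1 = present),
# invented for illustration only.
hba1c = [5.6, 6.1, 6.8, 7.5, 8.2, 9.0, 9.8, 10.5]
retino = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, acc = best_stump(hba1c, retino)
```

Unlike a descriptive summary, the learned threshold directly yields a decision rule ("flag patients at or above this value for screening") that can be acted upon.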


2021 ◽  
Vol 48 (4) ◽  
pp. 41-44
Author(s):  
Dena Markudova ◽  
Martino Trevisan ◽  
Paolo Garza ◽  
Michela Meo ◽  
Maurizio M. Munafo ◽  
...  

With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. Thus, it is fundamental that the network adopts traffic management policies that ensure appropriate Quality of Experience to users of RTC applications. A key step for this is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real-time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
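The idea of building NLP-style features from the domains contacted before a call can be sketched as a bag-of-tokens profile per application matched by cosine similarity. The application names and domains below are hypothetical, and the paper's actual system uses richer NLP features and a trained classifier rather than this nearest-profile rule.

```python
from collections import Counter

def domain_tokens(domains):
    """Bag of dot-separated tokens from the domains contacted before a call."""
    toks = Counter()
    for d in domains:
        toks.update(d.lower().split("."))
    return toks

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token bags."""
    dot = sum(a[t] * b[t] for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-application token profiles built from training traces.
profiles = {
    "AppA": domain_tokens(["api.appa.example", "turn.appa.example"]),
    "AppB": domain_tokens(["login.appb.example", "relay.appb.example"]),
}

def classify(domains):
    """Assign the observed pre-call domains to the closest application profile."""
    obs = domain_tokens(domains)
    return max(profiles, key=lambda app: cosine(obs, profiles[app]))
```

Because only the control-plane domain lookups are used, such a scheme stays independent of each application's RTP media implementation, as the abstract emphasises.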


Work ◽  
2021 ◽  
pp. 1-12
Author(s):  
Zhang Mengqi ◽  
Wang Xi ◽  
V.E. Sathishkumar ◽  
V. Sivakumar

BACKGROUND: Nowadays, smart cities are growing steadily, deploying many information and communication technologies to maximize the quality of services. Even though the smart city concept provides many valuable services, security management is still one of the major issues due to shared threats and activities. To overcome these problems, smart-city security factors should be analyzed continuously to eliminate unwanted activities and thereby enhance the quality of services. OBJECTIVES: To address this problem, active machine learning techniques are used to predict the quality of services in the smart city and manage security-related issues. In this work, a deep reinforcement learning (DRL) concept is used to learn the features of the smart city; the learning process captures the activities of the entire city. City information is gathered with the help of security robots called cobalt robots. Newly incoming features of the smart city are examined through a modular deep neural network (MDNN). RESULTS: The system successfully predicts unwanted activity in the smart city by dividing the collected data into smaller subsets, which reduces complexity and improves the overall security management process. The efficiency of the system is evaluated using experimental analysis. CONCLUSION: This exploratory study is conducted with 200 obstacles placed in the smart city, and the introduced DRL-with-MDNN approach attains the best results for security maintenance.
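A heavily simplified stand-in for the reinforcement learning component can be sketched with tabular Q-learning on a toy one-dimensional patrol task: a robot learns to reach a goal cell while paying a penalty for crossing an obstacle. The states, rewards and obstacle layout are assumptions for illustration, far simpler than the paper's DRL-with-MDNN system.

```python
import random

# Toy tabular Q-learning stand-in for a DRL patrol agent:
# a 1-D corridor of 5 cells, goal at cell 4, penalised obstacle at cell 2.
random.seed(1)
N, GOAL, OBSTACLE = 5, 4, 2
ACTIONS = (-1, +1)                       # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Environment transition: clipped move, reward, episode-done flag."""
    s2 = min(max(s + a, 0), N - 1)
    reward = 1.0 if s2 == GOAL else (-1.0 if s2 == OBSTACLE else -0.1)
    return s2, reward, s2 == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration
for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)   # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2, r, done = step(s, a)
        # Standard Q-learning update.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy per cell after training.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N)]
```

After training, the greedy policy moves right toward the goal from every cell before it, despite the obstacle penalty along the way.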


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 318
Author(s):  
Merima Kulin ◽  
Tarik Kazaz ◽  
Eli De Poorter ◽  
Ingrid Moerman

This paper presents a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack: PHY, MAC and network. First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning to help non-machine learning experts understand all discussed techniques. Then, a comprehensive review is presented on works employing ML-based approaches to optimize wireless communication parameter settings to achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into radio analysis, MAC analysis and network prediction approaches, followed by subcategories within each. Finally, open challenges and broader perspectives are discussed.


2021 ◽  
pp. postgradmedj-2020-139352
Author(s):  
Simon Allan ◽  
Raphael Olaiya ◽  
Rasan Burhan

Cardiovascular disease (CVD) is one of the leading causes of death across the world. CVD can lead to angina, heart attacks, heart failure, strokes and, eventually, death, among many other serious conditions. Early intervention for those at a higher risk of developing CVD, typically with statin treatment, leads to better health outcomes. For this reason, clinical prediction models (CPMs) have been developed to identify those at a high risk of developing CVD so that treatment can begin at an earlier stage. Currently, CPMs are built around statistical analysis of factors linked to developing CVD, such as body mass index and family history. The emerging field of machine learning (ML) in healthcare, using computer algorithms that learn from a dataset without explicit programming, has the potential to outperform the CPMs available today. ML has already shown exciting progress in the detection of skin malignancies, bone fractures and many other medical conditions. In this review, we analyse and explain the CPMs currently in use with comparisons to their developing ML counterparts. We have found that although the newest non-ML CPMs are effective, ML-based approaches consistently outperform them. However, the literature needs further development before ML should be implemented in place of current CPMs.
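The statistical CPMs described above are typically points-based risk scores. The toy score below illustrates that structure only; the factors, point values and threshold are invented for illustration and have no clinical validity.

```python
def cvd_points(age, bmi, smoker, family_history):
    """Toy points-based clinical prediction model (illustrative, not clinical)."""
    pts = 0
    pts += (age - 40) // 10 if age > 40 else 0  # points per decade over 40
    pts += 2 if bmi >= 30 else (1 if bmi >= 25 else 0)
    pts += 3 if smoker else 0
    pts += 2 if family_history else 0
    return pts

def high_risk(pts, threshold=5):
    """Flag a patient for early intervention above an arbitrary points cut-off."""
    return pts >= threshold
```

An ML counterpart would learn both the weights and the interactions between factors from data, rather than fixing them by expert consensus, which is where the reviewed studies find their performance advantage.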


2018 ◽  
Vol 27 (03) ◽  
pp. 1850011 ◽  
Author(s):  
Athanasios Tagaris ◽  
Dimitrios Kollias ◽  
Andreas Stafylopatis ◽  
Georgios Tagaris ◽  
Stefanos Kollias

Neurodegenerative disorders, such as Alzheimer’s and Parkinson’s, constitute a major factor in long-term disability and are becoming an increasingly serious concern in developed countries. As there are, at present, no effective therapies, early diagnosis along with avoidance of misdiagnosis seems to be critical in ensuring a good quality of life for patients. In this sense, the adoption of computer-aided-diagnosis tools can offer significant assistance to clinicians. In the present paper, we first provide a comprehensive recording of medical examinations relevant to those disorders. Then, a review is conducted concerning the use of Machine Learning techniques in supporting diagnosis of neurodegenerative diseases, with reference to commonly used medical datasets. Special attention has been given to the field of Deep Learning. In addition, we communicate the launch of a newly created dataset for Parkinson’s disease, containing epidemiological, clinical and imaging data, which will be publicly available to researchers for benchmarking purposes. To assess the potential of the new dataset, an experimental study in Parkinson’s diagnosis is carried out, based on state-of-the-art Deep Neural Network architectures and yielding very promising accuracy results.
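The forward pass of a small feedforward network of the kind used in such diagnosis studies can be sketched as below. The layer sizes and weights are arbitrary illustrative values, not a trained Parkinson's model, and real Deep Neural Network architectures are far larger.

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """One fully connected layer: y = W x + b."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(v):
    """Turn raw layer outputs into class probabilities."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical network: 3 clinical features -> 2 hidden units -> 2 classes
# (e.g. Parkinson's / healthy); all weights are illustrative placeholders.
W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

def predict(x):
    h = relu(dense(x, W1, b1))
    return softmax(dense(h, W2, b2))

p = predict([1.0, 0.5, 0.2])
```

Training would adjust `W1`, `b1`, `W2`, `b2` by backpropagation on labeled examples; only the inference step is shown here.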

