On the use of reference monitors in subjective testing for HDTV

Author(s):  
Christian Keimel ◽  
Klaus Diepold
Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1843
Author(s):  
Jelena Vlaović ◽  
Snježana Rimac-Drlje ◽  
Drago Žagar

The MPEG Dynamic Adaptive Streaming over HTTP (MPEG DASH) standard ensures interoperability between different streaming services and the highest possible video quality under changing network conditions. The solutions described in the available literature that focus on video segmentation are mostly proprietary, demand a large amount of computational power, lack the methodology, model notation, or information needed for reproduction, or do not consider the spatial and temporal activity of video sequences. This paper presents a new model for selecting the optimal parameters and number of representations for video encoding and segmentation, based on a measure of the spatial and temporal activity of the video content. The model was developed for the H.264 encoder, using the Structural Similarity Index Measure (SSIM) as the objective quality metric, and Spatial Information (SI) and Temporal Information (TI) as measures of video spatial and temporal activity. The methodology used to develop the mathematical model is also presented in detail, so that the model can be adapted to another type of encoder or a different set of encoding parameters. The efficiency of the segmentation produced by the proposed model was tested using the Basic Adaptation Algorithm (BAA) and the Segment Aware Rate Adaptation (SARA) algorithm in two different network scenarios. Compared to the segmentation available in the relevant literature, the segmentation based on the proposed model obtains better SSIM values in 92% of cases, and subjective testing showed that it achieves better results in 83.3% of cases.


1941 ◽  
Vol 18 (5) ◽  
pp. 239
Author(s):  
Coates W R

2000 ◽  
Vol 6 (2_suppl) ◽  
pp. 16-18 ◽  
Author(s):  
Alan M Dyer ◽  
Angus H Kirk

Traditional methods of performing refractions depend on a trained refractionist being present with the subject and conducting an interactive form of subjective testing. A fully automated refraction system was installed in 13 optical dispensaries, and after 15 months the patient and statistical information was gathered. The data from all operators were consistent, suggesting a lack of operator effect on the refraction results. The mean of the SD of subjective sphere measurements was 0.2, slightly less than a quarter dioptre, which would be an acceptable level of accuracy for ordering corrective lenses. The present study suggests an absence of operator influence on the results of the refractions, and a degree of consistency and accuracy compatible with the prescription of lenses.


2020 ◽  
Vol 10 (10) ◽  
pp. 3662 ◽  
Author(s):  
Abdul Wahab ◽  
Nafi Ahmad ◽  
John Schormans

In addition to the traditional Quality of Service (QoS) metrics of latency, jitter and Packet Loss Ratio (PLR), Quality of Experience (QoE) is now widely accepted as a numerical proxy for the actual user experience. The literature has reported many mathematical mappings between QoE and QoS, where the QoS parameters are measured by the network providers using sampling. Previous research has focussed on sampling errors in QoS measurements. However, the propagation of these sampling errors in QoS through to the QoE values has not been evaluated before. This is important: without knowing how sampling errors propagate through to QoE estimates, there is no understanding of the precision of the estimates of QoE, only of the average QoE value. In this paper, we used industrially acquired measurements of PLR and jitter to evaluate the sampling errors. Additionally, we evaluated the correlation between these QoS measurements, as this correlation affects the errors propagating to the estimated QoE. Focusing on Video-on-Demand (VoD) applications, we use subjective testing and regression to map QoE metrics onto PLR and jitter. The resulting mathematical functions, and the theory of error propagation, were used to evaluate the error propagated to QoE, represented as confidence interval width. Using UK government guidelines for sampling in a busy hour, our results indicate that confidence intervals around the estimated Mean Opinion Score (MOS) rating of QoE can range from MOS = 1 to MOS = 4 at the targeted operating points of the QoS parameters. These results offer a new perspective on QoE evaluation and are of potentially great significance to all organisations that need to estimate the QoE of VoD applications precisely.
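The error-propagation step can be sketched with the standard first-order (delta-method) formula for a function of two correlated variables, Var(Q) ≈ (∂Q/∂p)²Var(p) + (∂Q/∂j)²Var(j) + 2(∂Q/∂p)(∂Q/∂j)Cov(p, j). The linear MOS mapping and its coefficients below are purely hypothetical placeholders, not the regression functions fitted in the paper:

```python
import math

def mos_ci_width(var_plr, var_jitter, cov_plr_jitter, z=1.96):
    """Propagate QoS sampling variance to a ~95% confidence-interval
    width on the estimated MOS, via first-order error propagation.

    ASSUMED mapping for illustration only: MOS = a - b*PLR - c*jitter,
    so the partial derivatives are the constants -b and -c.
    """
    a, b, c = 4.5, 40.0, 0.05      # hypothetical regression coefficients
    dmos_dplr = -b                 # ∂MOS/∂PLR
    dmos_djit = -c                 # ∂MOS/∂jitter
    var_mos = (dmos_dplr**2 * var_plr
               + dmos_djit**2 * var_jitter
               + 2 * dmos_dplr * dmos_djit * cov_plr_jitter)
    return 2 * z * math.sqrt(var_mos)  # full CI width around the MOS estimate
```

With perfectly measured QoS (zero variances) the interval collapses to a point; larger sampling variance, or positive PLR-jitter covariance combined with same-sign sensitivities, widens the interval, which is how sampling error surfaces as imprecision in the MOS estimate.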

