ANALYTICAL MODELS BASED DISCRETE-TIME QUEUEING FOR THE CONGESTED NETWORK

Author(s):  
MOFLEH AL-DIABAT ◽  
HUSSEIN ABDEL-JABER ◽  
FADI THABTAH ◽  
OSMAN ABOU-RABIA ◽  
MAHMOUD KISHTA

Congestion is one of the most widely studied problems in computer networks; it occurs when the demand for network resources exceeds the buffer capacity. Many active queue management techniques, such as BLUE and RED, have been proposed in the literature to control congestion at an early stage. In this paper, we propose two discrete-time queueing network analytical models that drop arriving packets preemptively when the network becomes congested. The first model is based on Lambda Decreasing: it raises the dropping probability from one value to a higher one in steps, according to the buffer length. The second proposed model drops packets linearly, based on the current queue length. We compare the performance of both models with the original BLUE to determine which of these methods offers better quality of service. The comparison is carried out in terms of packet dropping probability, average queue length, throughput ratio, average queueing delay, and packet loss rate.
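For illustration, here is a minimal sketch of the linear dropping idea described above, assuming a finite buffer whose drop probability grows in proportion to the current queue length; the function and parameter names are illustrative, not the paper's notation.

```python
import random

def linear_drop_probability(queue_length: int, capacity: int, max_drop: float = 1.0) -> float:
    """Drop probability grows linearly with the current queue length."""
    return min(max_drop, max_drop * queue_length / capacity)

def on_packet_arrival(queue: list, capacity: int) -> bool:
    """Return True if the arriving packet is accepted, False if it is dropped."""
    p_drop = linear_drop_probability(len(queue), capacity)
    if len(queue) >= capacity or random.random() < p_drop:
        return False          # dropped preventively or because the buffer is full
    queue.append("pkt")
    return True
```

The Lambda Decreasing variant would replace the linear function with a step function that jumps to a higher dropping probability as the buffer length crosses successive thresholds.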

Author(s):  
Hussein Abdel-Jaber ◽  
Fadi Thabtah ◽  
Mike Woodward

Congestion control is among the primary topics in computer networks, and random early detection (RED) is one of its most common techniques. Nevertheless, RED suffers from drawbacks, in particular when its "average queue length" sits below the buffer's "minimum threshold" position, which lets the router buffer overflow quickly. To address this issue, this paper proposes two discrete-time queue analytical models that use the instantaneous queue length as the congestion measure. This yields smaller mean queue length (mql) and average queueing delay values than RED and ultimately reduces buffer overflow. A comparison between RED and the proposed analytical models was conducted to identify which model offers better performance. The proposed models outperform classic RED with regard to the mql and average queueing delay measures when congestion exists. This work also compares one of the proposed models (RED-Linear) with another analytical model, threshold-based linear reduction of arrival rate (TLRAR). The mql, average queueing delay, and packet loss probability of TLRAR deteriorate when heavy congestion occurs, whereas the results of our RED-Linear are unaffected, which demonstrates the superiority of our model.
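For context, a textbook-style sketch of RED's drop decision is shown below; the proposed models differ by feeding the instantaneous queue length into the drop rule instead of the exponentially weighted average. Parameter values are illustrative defaults, not the paper's settings.

```python
def red_drop_probability(avg: float, q: int,
                         w: float = 0.002,
                         min_th: float = 5.0, max_th: float = 15.0,
                         max_p: float = 0.1):
    """Return (updated average queue length, early-drop probability)."""
    avg = (1.0 - w) * avg + w * q          # exponentially weighted moving average
    if avg < min_th:
        p = 0.0                            # no early drop: the buffer may still overflow
    elif avg >= max_th:
        p = 1.0                            # drop every arriving packet
    else:
        p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, p
```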


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1949
Author(s):  
Lukas Sevcik ◽  
Miroslav Voznak

Video quality evaluation needs a combined approach that includes subjective and objective metrics, testing, and monitoring of the network. This paper deals with a novel approach to mapping quality of service (QoS) to quality of experience (QoE), using QoE metrics to determine user satisfaction limits and applying QoS tools to provide the minimum QoE expected by users. Our aim was to connect objective estimations of video quality with subjective estimations. A comprehensive tool for estimating the subjective evaluation is proposed. This new idea is based on the evaluation and marking of video sequences using a sentinel flag derived from spatial information (SI) and temporal information (TI) in individual video frames. The authors created a video database for quality evaluation and derived SI and TI from each video sequence for classifying the scenes. Video scenes from the database were evaluated by objective and subjective assessment. Based on the results, a new model for predicting subjective quality is defined and presented in this paper. This quality is predicted using an artificial neural network fed with the objective evaluation and the type of video sequence, defined by qualitative parameters such as resolution, compression standard, and bitstream. Furthermore, the authors created an optimum mapping function to define the threshold for the variable bitrate setting, based on the flag in the video that determines the type of scene in the proposed model. This function allows a bitrate to be allocated dynamically for a particular segment of the scene while maintaining the desired quality. Our proposed model can help video service providers increase the comfort of end users. The variable bitstream ensures consistent video quality and customer satisfaction while network resources are used effectively. The proposed model can also predict the appropriate bitrate based on the required quality of video sequences, defined using either objective or subjective assessment.
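As a minimal sketch of the SI/TI features used to classify the scenes, the following follows the usual ITU-T P.910-style definitions (spatial information from Sobel-filtered frames, temporal information from frame differences); the frame-handling details are assumptions.

```python
import cv2
import numpy as np

def spatial_temporal_information(frames):
    """frames: iterable of grayscale frames as 2-D numpy arrays; returns (SI, TI)."""
    si_values, ti_values, prev = [], [], None
    for frame in frames:
        f = frame.astype(np.float64)
        sobel_h = cv2.Sobel(f, cv2.CV_64F, 1, 0)
        sobel_v = cv2.Sobel(f, cv2.CV_64F, 0, 1)
        si_values.append(np.std(np.hypot(sobel_h, sobel_v)))  # spatial detail of this frame
        if prev is not None:
            ti_values.append(np.std(f - prev))                # motion between consecutive frames
        prev = f
    return max(si_values), (max(ti_values) if ti_values else 0.0)
```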


T-Comm ◽  
2020 ◽  
Vol 14 (10) ◽  
pp. 39-44
Author(s):  
Dmitriy O. Kupriyanov

Estimating quality of service parameters becomes even more valuable when using cloud compute services. This research proposes a mathematical model of a cloud-deployed web application under the Processor Sharing (PS) policy for a mono-service traffic type. The incoming request flow follows a Poisson distribution with intensity λ. Requests wait in a queue whose length is considered unlimited, so all requests must be served. A request in this system is an HTTP request carrying a special JSON payload; the payload size differs from request to request but lies within a narrow band of values (bytes or tens of bytes). A model of a cloud compute cluster was built, and the relative serving efficiency and relative bandwidth of a single request flow were calculated with it for different amounts of resource allocated to the processing of a single request. The dependence of these characteristics on the cluster load coefficient is demonstrated in charts, and conclusions are drawn on the behaviour of the cloud cluster's QoS parameters after a change in the input request flow size. The proposed model helps estimate quality of service parameters and adapt the infrastructure to an increased or decreased number of customer requests, and it can be used for architecting, deploying, and administrating web services.
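As an illustration of the Processor Sharing queue underlying the model: for an M/G/1-PS system with load ρ = λ/μ < 1, the expected sojourn time of a request with service demand x is x / (1 - ρ), so each request effectively receives a (1 - ρ) share of the server's speed. The sketch below only restates this classical result; the "relative bandwidth" interpretation is illustrative, not the paper's exact definition.

```python
def ps_mean_sojourn_time(service_demand: float, arrival_rate: float, service_rate: float) -> float:
    """M/G/1-PS: expected sojourn time of a request with the given service demand."""
    rho = arrival_rate / service_rate
    assert rho < 1.0, "the queue is unstable for rho >= 1"
    return service_demand / (1.0 - rho)

def ps_relative_bandwidth(arrival_rate: float, service_rate: float) -> float:
    """Effective fraction of full service speed seen by an individual request."""
    return 1.0 - arrival_rate / service_rate

# Illustrative sweep over the cluster load coefficient rho
for rho in (0.2, 0.5, 0.8, 0.95):
    print(rho, ps_mean_sojourn_time(1.0, rho, 1.0), ps_relative_bandwidth(rho, 1.0))
```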


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method for assessing the expected quality of a contributor in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method for assessing the expected quality of a contributor. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. A community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors, provided that the contributors' behavior fits the proposed model. Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts, primarily community tagging systems.
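A minimal sketch of the eigenvector step is given below. The paper's domination characteristic is not reproduced; the matrix entries are assumed to be precomputed positive scores d(i, j) expressing how strongly contributor i dominates contributor j on their shared images.

```python
import numpy as np

def expected_quality(domination: np.ndarray, iters: int = 1000, tol: float = 1e-9) -> np.ndarray:
    """Positive principal eigenvector of a pairwise domination matrix, via power iteration."""
    n = domination.shape[0]
    q = np.ones(n) / n                      # start from a uniform quality estimate
    for _ in range(iters):
        q_next = domination @ q
        q_next /= q_next.sum()              # keep the estimate normalized
        if np.linalg.norm(q_next - q, 1) < tol:
            break
        q = q_next
    return q                                # q[i] ~ expected quality of contributor i
```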


2021 ◽  
Vol 11 (6) ◽  
pp. 2838
Author(s):  
Nikitha Johnsirani Venkatesan ◽  
Dong Ryeol Shin ◽  
Choon Sung Nam

In the pharmaceutical field, early detection of lung nodules is indispensable for increasing patient survival. The quality of medical images can be enhanced by increasing the radiation dose, but a high radiation dose provokes cancer, which forces experts to use limited radiation; such limited radiation generates noise in CT scans. We propose an optimal convolutional neural network (CNN) model in which Gaussian noise is removed for better classification and increased training accuracy. An experimental demonstration on the 160 GB LUNA16 dataset shows that our proposed method exhibits superior results. Classification accuracy, specificity, sensitivity, precision, recall, F1 score, and area under the ROC curve (AUC) are taken as the evaluation metrics. We compared the performance of our proposed model on several platforms, namely Apache Spark, GPU, and CPU, to reduce the training time without compromising accuracy. Our results show that Apache Spark, integrated with a deep learning framework, is suitable for parallel training computation with high accuracy.
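As an illustrative sketch only (not the authors' architecture), such a pipeline can be pictured as Gaussian smoothing of the CT patches followed by a small Keras CNN classifier; the patch size, layer widths, and denoising sigma below are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from scipy.ndimage import gaussian_filter

def denoise(ct_patch):
    """Stand-in denoiser: simple Gaussian smoothing of a CT patch."""
    return gaussian_filter(ct_patch, sigma=1.0)

def build_nodule_classifier(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # nodule vs. non-nodule
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model
```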


2020 ◽  
Vol 12 (10) ◽  
pp. 4165 ◽  
Author(s):  
Dissakoon Chonsalasin ◽  
Sajjakaj Jomnonkwao ◽  
Vatanavongs Ratanavaraha

The airline industry in Thailand has grown enormously over the past decade. Competition among airline companies for market share and profit has been intense, requiring strong strategic capabilities. To increase the service quality of such companies, it is important for policymakers to identify factors related to the airline context. This study therefore presents empirical data on structural factors related to the loyalty of domestic airline passengers. Structural equation modeling was used to confirm the proposed model, with data collected through a questionnaire survey of 1600 airline passengers. The results indicate that satisfaction, trust, perceived quality, relationship, and airline image positively influenced loyalty at a statistical significance of α = 0.05. Moreover, the study found that expectation and perceived quality indirectly influenced loyalty. The findings provide a reference for airline operators to clearly understand the factors that motivate passenger loyalty, which can be used to develop sustainable marketing strategies and support competitiveness.
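For readers who want to reproduce this kind of analysis, a structural model of this form could be specified, for example, with the semopy package; the variable names, paths, and survey file below are illustrative placeholders rather than the paper's exact measurement model.

```python
import pandas as pd
import semopy

# Hypothetical path specification: loyalty driven by satisfaction, trust,
# perceived quality, relationship, and image, with expectation acting indirectly.
desc = """
loyalty ~ satisfaction + trust + perceived_quality + relationship + image
satisfaction ~ expectation + perceived_quality
"""

data = pd.read_csv("passenger_survey.csv")   # hypothetical survey data file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                        # path coefficients and p-values
```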


2021 ◽  
Vol 13 (3) ◽  
pp. 1-19
Author(s):  
Sreelakshmy I. J. ◽  
Binsu C. Kovoor

Image inpainting is an image-editing technique in which missing portions of an image are estimated and filled using available or external information. The proposed model implements a novel hybrid inpainting algorithm that adds the benefits of a diffusion-based inpainting method to an enhanced exemplar algorithm. The structure part of the image is handled by the diffusion-based method, followed by an adaptive patch-size exemplar inpainting. Owing to its hybrid nature, the proposed model exceeds the output quality obtained by applying the conventional methods individually. A new term, the coefficient of smoothness, is introduced in the model and used to compute the adaptive patch size for the enhanced exemplar method. An automatic mask generation module relieves the user of the burden of creating an additional mask input. Quantitative and qualitative evaluation was performed on images from various datasets. The results confirm that the proposed model is faster on smooth images. Moreover, it provides good-quality results when inpainting natural images containing both texture and structure regions.
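A minimal sketch of the two ingredients is shown below: the structure pass uses OpenCV's diffusion-style (Navier-Stokes) inpainting, while the coefficient-of-smoothness proxy and the patch-size rule are illustrative placeholders, not the paper's formulas.

```python
import cv2
import numpy as np

def coefficient_of_smoothness(gray: np.ndarray) -> float:
    """Crude smoothness proxy: inverse of the mean gradient magnitude."""
    gx = cv2.Sobel(gray.astype(np.float64), cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray.astype(np.float64), cv2.CV_64F, 0, 1)
    return 1.0 / (1.0 + np.mean(np.hypot(gx, gy)))

def adaptive_patch_size(smoothness: float, min_size: int = 5, max_size: int = 13) -> int:
    """Smoother regions get larger exemplar patches; the result is kept odd."""
    size = int(round(min_size + smoothness * (max_size - min_size)))
    return size | 1

def diffusion_inpaint(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Structure pass: diffusion-based inpainting (mask is an 8-bit binary image)."""
    return cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)
```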


2013 ◽  
Vol 694-697 ◽  
pp. 3675-3679
Author(s):  
Yi Xiang ◽  
Jun Peng ◽  
Qian Xiong ◽  
Liang Lei ◽  
Ming Ying You

Focusing on bilingual "Data Structure" classes, which in many colleges and universities suffer from a lack of qualified teachers, a steep learning curve, and poor teaching outcomes, this paper analyzes the problem and seeks to address the issue of teacher quality through the development of a network teaching platform and a supporting resource library for delivering high-quality bilingual teaching.


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Mingli Wang ◽  
Huikuan Gu ◽  
Jiang Hu ◽  
Jian Liang ◽  
Sisi Xu ◽  
...  

Background and purpose: To explore whether a highly refined dose-volume histogram (DVH) prediction model can improve the accuracy and reliability of knowledge-based volumetric modulated arc therapy (VMAT) planning for cervical cancer. Methods and materials: The proposed model underwent repeated refinement through progressive training as the training set grew from an initial 25 prior plans to 100 cases. The estimated DVHs derived from the prediction models of different training runs were compared in 35 new cervical cancer patients to analyze the effect of this interactive plan and model evolution method. The reliability and efficiency of knowledge-based planning (KBP) using this highly refined model in improving the consistency and quality of the VMAT plans were also evaluated. Results: The prediction ability was reinforced, in terms of normal-tissue sparing, as the number of refinements increased. With the enhanced prediction accuracy, more than 60% of automatic plan-6 (AP-6) plans (22/35) could be directly approved for clinical treatment without any manual revision. The plan quality scores for clinically approved plans (CPs) and manual plans (MPs) were on average 89.02 ± 4.83 and 86.48 ± 3.92 (p < 0.001). Knowledge-based planning significantly reduced the Dmean and V18Gy for the kidneys (L/R), and the Dmean, V30Gy, and V40Gy for the bladder, rectum, and femoral heads (L/R). Conclusion: The proposed model evolution method provides a practical way for KBP to enhance its prediction ability with minimal human intervention. This highly refined prediction model can better guide KBP in improving the consistency and quality of VMAT plans.
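For reference, the DVH quantities reported above (Dmean, V18Gy, V30Gy, V40Gy) follow the standard cumulative-DVH definition; the sketch below computes them from a 3-D dose grid and a binary structure mask and is not the paper's prediction model. The bin width and array names are assumptions.

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bin_width_gy: float = 0.1):
    """Return (dose bins, fraction of structure volume receiving at least that dose)."""
    d = dose[mask.astype(bool)]
    bins = np.arange(0.0, d.max() + bin_width_gy, bin_width_gy)
    volume_fraction = np.array([(d >= b).mean() for b in bins])
    return bins, volume_fraction

# Example usage with a hypothetical dose grid and kidney mask:
# bins, vf = cumulative_dvh(dose_grid, kidney_mask)
# v18_gy = vf[np.searchsorted(bins, 18.0)]               # V18Gy: volume fraction at >= 18 Gy
# d_mean = dose_grid[kidney_mask.astype(bool)].mean()    # Dmean over the structure
```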

