server selection
Recently Published Documents

Total documents: 226 (last five years: 33)
H-index: 21 (last five years: 2)

2022 ◽  
Vol 12 (1) ◽  
pp. 0-0

Nowadays, online social networks see a rapid expansion of multimedia services, and the huge volume of video content on offer makes it difficult for users to find what interests them. Different personalized recommendation systems have been proposed to address this problem, but they are not efficient and significantly slow down the video recommendation process. To overcome this difficulty, this paper proposes a context-extractor-based video recommendation system on the cloud. In addition, the system includes a server selection technique to handle overload and keep the load balanced. The paper explains the mechanism used to minimize network overhead; the recommendation process takes the users' context details into account and relies on a rule-based process together with several algorithms to achieve this objective. The videos are stored in the cloud, and the application dumps them into cloud storage through reading, copying, and storing operations.
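
The paper does not spell out the balancing rule behind its server selection technique; the following is only a minimal sketch, assuming a least-loaded policy over cloud video servers, with the class and field names invented for illustration.

```python
# Minimal sketch of a least-loaded server selection policy, assuming each
# cloud node reports its number of active video streams (illustrative only).
from dataclasses import dataclass

@dataclass
class VideoServer:
    name: str
    capacity: int          # maximum concurrent streams
    active_streams: int = 0

def select_server(servers):
    """Pick the server with the most spare capacity, or None if all are full."""
    candidates = [s for s in servers if s.active_streams < s.capacity]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s.capacity - s.active_streams)

if __name__ == "__main__":
    pool = [VideoServer("edge-1", 100, 80), VideoServer("edge-2", 100, 35)]
    print(select_server(pool).name)   # -> edge-2 (largest spare capacity)
```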


Author(s):  
Р.Я. ПИРМАГОМЕДОВ

This work addresses the problem of managing distributed computation in a wireless access network under highly dynamic resource availability. A method for selecting an edge server in a named data network is proposed, in which a wireless access point acts as a broker representing the interests of the mobile user. A simulation model implemented in the ndnSIM software environment makes it possible to evaluate the performance of the proposed solution experimentally. The numerical results of the experiment show that the developed solution significantly outperforms legacy network architectures in terms of latency reduction.
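
The abstract leaves the broker's decision rule open; the sketch below is an illustrative assumption (not the authors' ndnSIM code) in which the access point tracks a moving average of observed latency per edge server and forwards each request to the current best estimate.

```python
# Illustrative broker at the wireless access point: keeps an exponential
# moving average of measured latency per edge server and always selects
# the server with the lowest estimate. Names and parameters are assumed.
import random

class BrokerAP:
    def __init__(self, servers, alpha=0.3):
        self.alpha = alpha
        # Optimistic zero initialization makes the broker probe every server once.
        self.est_latency = {s: 0.0 for s in servers}   # EWMA of latency, ms

    def select_server(self):
        return min(self.est_latency, key=self.est_latency.get)

    def report(self, server, measured_ms):
        old = self.est_latency[server]
        self.est_latency[server] = (1 - self.alpha) * old + self.alpha * measured_ms

broker = BrokerAP(["edge-A", "edge-B", "edge-C"])
for _ in range(20):
    s = broker.select_server()
    broker.report(s, random.uniform(5, 50))            # simulated measurement
print(broker.est_latency)
```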


2021 ◽  
Author(s):  
Arun Nanthakumaran

Content delivery over the Internet is optimized for low user-perceived latency by designing a specialized Content Distribution Network (CDN). Content servers are connected through a content switch that distributes client requests among them to achieve load balancing across the servers. The content switch is located one hop away from the servers. Server load balancing is one factor in achieving low user-perceived latency and improving operational efficiency; traffic engineering is another, and it is generally integrated with server load balancing for CDN optimization. In this thesis we propose a request routing algorithm for a CDN that is designed to integrate server selection and traffic engineering functions in the request routing system. The CDN employs MPLS in the network for traffic engineering. The proposed algorithm optimizes content delivery for user-perceived latency by achieving server load balancing and network traffic load management among alternative paths. It also improves the operational efficiency of the CDN by eliminating bottleneck paths and increasing the utilization of underutilized servers and paths.
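
As a rough illustration of joint server selection and traffic engineering, the sketch below scores each candidate (server, MPLS path) pair by a weighted sum of server load and path utilization; the weights and scoring form are assumptions, not the thesis's exact algorithm.

```python
# Hedged sketch of a request-routing rule that jointly weighs server load
# and path utilization, in the spirit of the integration described above.
def route_request(servers, paths, w_server=0.5, w_path=0.5):
    """
    servers: dict name -> load in [0, 1]
    paths:   dict (switch, server) -> MPLS path utilization in [0, 1]
    Returns the (score, server, path) tuple with the lowest combined score.
    """
    best = None
    for (switch, server), util in paths.items():
        if server not in servers:
            continue
        score = w_server * servers[server] + w_path * util
        if best is None or score < best[0]:
            best = (score, server, (switch, server))
    return best

servers = {"s1": 0.9, "s2": 0.4}
paths = {("sw1", "s1"): 0.2, ("sw1", "s2"): 0.3}
print(route_request(servers, paths))   # prefers s2: lighter server outweighs path
```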


2021 ◽  
Vol 1 (1) ◽  
pp. 016-025
Author(s):  
Ouariach Soufiane ◽  
Khaldi Maha ◽  
Erradi Mohamed ◽  
Khaldi Mohamed

Through this article, which concerns the implementation of the Moodle e-learning platform on a server, we first present an example of a web server architecture and then propose the adopted architecture, which is based on Linux containers. Afterwards, we describe all the tools chosen for implementing the platform on a web server. We then illustrate, through figures, the installation of the different technological tools and of the Moodle platform. Finally, we present the configuration of our Moodle platform according to our needs.
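
A rough automation sketch of the container-based setup described above is shown here, assuming an LXD host; the container name, base image, and package list are illustrative assumptions rather than the exact stack used in the article.

```python
# Provision a Linux container for the Moodle web tier by shelling out to the
# LXD CLI. All names and packages below are illustrative assumptions.
import subprocess

CONTAINER = "moodle-web"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def provision():
    # Create a Linux container for the web tier.
    run(["lxc", "launch", "ubuntu:22.04", CONTAINER])
    # Install the usual web/database/PHP dependencies Moodle expects.
    run(["lxc", "exec", CONTAINER, "--", "apt-get", "update"])
    run(["lxc", "exec", CONTAINER, "--",
         "apt-get", "install", "-y", "apache2", "mariadb-server",
         "php", "php-mysql", "php-xml", "php-curl", "php-zip", "php-gd"])
    # Moodle itself would then be unpacked under the web root and configured
    # through its web installer, as the figures in the article illustrate.

if __name__ == "__main__":
    provision()
```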


2021 ◽  
Author(s):  
Sulthana Begam ◽  
Sangeetha M ◽  
Shanker N R

Abstract Software Defined Networking (SDN) manages data traffic in a Data Center Network (DCN). SDN improves the utilization of large-scale network resources and the performance of network applications. In SDN, load-balancing techniques optimize the data flow during transmission through server load deviation after dynamically evaluating the network status. However, load deviation in the network requires optimum server and routing-path selection with low time and complexity. In this paper, we propose a Multiple Regression Based Searching (MRBS) algorithm for optimum server and routing-path selection in a DCN. The appropriate server under heavy load conditions such as message spikes, different message frequencies, and unpredictable traffic patterns is selected through regression-based analysis and correlation of various server parameters, after detecting the type of traffic and load based on bandwidth. The parameters included in the regression model are load, response time, bandwidth, and server utilization. Moreover, a heuristic algorithm is combined with the regression model for efficient path selection. The proposed algorithm reduces delay and time by more than 85% compared with traditional algorithms, owing to stochastic gradient descent weight estimation.
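
The following is a simplified sketch in the spirit of MRBS, not the paper's exact formulation: a linear model trained with stochastic gradient descent predicts response time from (load, bandwidth, utilization), and the server with the lowest prediction is chosen. The synthetic data and feature choices are assumptions for illustration.

```python
# Regression-driven server selection sketch: fit y ≈ X·w + b with SGD,
# then pick the candidate server with the lowest predicted response time.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: columns = load, bandwidth, utilization; target = response time (ms)
X = rng.uniform(0, 1, size=(200, 3))
y = 20 * X[:, 0] - 5 * X[:, 1] + 15 * X[:, 2] + 10 + rng.normal(0, 1, 200)

w, b, lr = np.zeros(3), 0.0, 0.01
for epoch in range(50):
    for i in rng.permutation(len(X)):          # stochastic gradient descent
        err = (X[i] @ w + b) - y[i]
        w -= lr * err * X[i]
        b -= lr * err

def predict_response(features):
    return features @ w + b

# Candidate servers with current (load, bandwidth, utilization) measurements
servers = {"dcn-1": np.array([0.8, 0.5, 0.7]), "dcn-2": np.array([0.3, 0.9, 0.4])}
best = min(servers, key=lambda s: predict_response(servers[s]))
print(best)   # server with the lowest predicted response time
```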


Author(s):  
Zhuofan Liao ◽  
Jingsheng Peng ◽  
Bing Xiong ◽  
Jiawei Huang

Abstract With the combination of Mobile Edge Computing (MEC) and next-generation cellular networks, computation requests from end devices can be offloaded promptly and accurately by edge servers equipped on Base Stations (BSs). However, due to the densified heterogeneous deployment of BSs, an end device may be covered by more than one BS, which brings new challenges for the offloading decision, namely whether and where to offload computing tasks for low latency and energy cost. This paper formulates a multi-user-to-multi-server (MUMS) edge computing problem in ultra-dense cellular networks. The MUMS problem is divided and conquered in two phases: server selection and offloading decision. In the server selection phase, mobile users are grouped to one BS considering both physical distance and workload. After the grouping, the original problem is divided into parallel multi-user-to-one-server offloading decision subproblems. To obtain fast and near-optimal solutions for these subproblems, a distributed offloading strategy based on a binary-coded genetic algorithm is designed to produce an adaptive offloading decision. A convergence analysis of the genetic algorithm is given, and extensive simulations show that the proposed strategy significantly reduces the average latency and energy consumption of mobile devices. Compared with state-of-the-art offloading approaches, our strategy reduces the average delay by 56% and total energy consumption by 14% in ultra-dense cellular networks.
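
A toy binary-coded genetic algorithm for the offloading-decision subproblem is sketched below: each bit says whether a user offloads (1) to the group's edge server or computes locally (0). The cost model, operators, and parameters are illustrative assumptions, not the authors' exact scheme.

```python
# Binary-coded GA for an offloading decision vector; fitness is the negated
# total latency/energy cost so that higher fitness means a better decision.
import random

N_USERS = 8
LOCAL_COST = [random.uniform(2.0, 5.0) for _ in range(N_USERS)]   # cost if computed locally
OFFLOAD_COST = [random.uniform(0.5, 2.0) for _ in range(N_USERS)] # transmission + remote exec
SERVER_PENALTY = 0.3   # extra queueing cost per offloaded task (shared edge server)

def fitness(chrom):
    offloaded = sum(chrom)
    cost = sum(OFFLOAD_COST[i] + SERVER_PENALTY * offloaded if bit else LOCAL_COST[i]
               for i, bit in enumerate(chrom))
    return -cost

def crossover(a, b):
    point = random.randint(1, N_USERS - 1)     # single-point crossover
    return a[:point] + b[point:]

def mutate(chrom, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

population = [[random.randint(0, 1) for _ in range(N_USERS)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("offloading decision:", best, "total cost:", -fitness(best))
```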

