RADIUS Server Optimization for Bandwidth Allocation Management on Hotspot Networks (Optimasi Radius Server untuk Pengaturan Alokasi Bandwidth pada Jaringan Hotspot)

2021 ◽  
Vol 2 (2) ◽  
pp. 18-24
Author(s):  
I Gede Bagus Premana Putra ◽  
I Putu Agus Eka Pratama

A hotspot is one way of using wireless LAN technology to provide internet access, and it is commonly found in public areas such as libraries, campus internet parks, and offices. From the user's perspective, the key concern is security; from the provider's side, it is the regulation of bandwidth allocation, which keeps data transfer speeds on the network acceptable and prevents congested network traffic. A RADIUS server is one type of server that can be used to secure a hotspot because it supports various types of encryption. In this study, hotspot bandwidth allocation is optimized by integrating a RADIUS server with RouterOS. Bandwidth allocation is managed by setting the active time of a user account and by setting hotspot upload and download quotas for that account. The results of this study indicate that when a hotspot user account has exceeded its active time, or its upload and download quotas, the account is deleted from the list of hotspot user accounts or disabled.
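A minimal sketch of the kind of per-account enforcement logic the abstract describes, written here only for illustration. The `HotspotAccount` fields, the thresholds, and the mapping of which condition triggers deletion versus disabling are assumptions, not details taken from the paper.

```python
# Illustrative quota/uptime enforcement logic for hotspot accounts.
# Field names, thresholds, and the delete-vs-disable mapping are assumed.
from dataclasses import dataclass

@dataclass
class HotspotAccount:
    username: str
    uptime_used_s: int     # accumulated session time (seconds)
    uptime_limit_s: int    # allowed active time (seconds)
    bytes_up: int          # uploaded bytes counted so far
    bytes_down: int        # downloaded bytes counted so far
    quota_up: int          # upload quota in bytes
    quota_down: int        # download quota in bytes

def enforce(account: HotspotAccount) -> str:
    """Return the action to apply to an account: 'keep', 'disable', or 'delete'."""
    over_time = account.uptime_used_s >= account.uptime_limit_s
    over_quota = (account.bytes_up >= account.quota_up or
                  account.bytes_down >= account.quota_down)
    if over_time:
        return "delete"    # past its active time: remove from the user list
    if over_quota:
        return "disable"   # quota exhausted: keep the record but block access
    return "keep"

# Example: a user who exceeded the download quota but not the active time.
acct = HotspotAccount("guest01", 3_000, 86_400, 10_000_000, 2_200_000_000,
                      1_000_000_000, 2_000_000_000)
print(enforce(acct))  # -> "disable"
```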

2018 ◽  
Vol 8 (11) ◽  
pp. 2216
Author(s):  
Jiahui Jin ◽  
Qi An ◽  
Wei Zhou ◽  
Jiakai Tang ◽  
Runqun Xiong

Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. This problem is exacerbated in multicore server-based clusters, where multiple tasks running on the same server compete for the server’s network bandwidth. Existing approaches solve this problem by scheduling computational tasks near the input data and considering the server’s free time, data placements, and data transfer costs. However, such approaches usually set identical values for data transfer costs, even though a multicore server’s data transfer cost increases with the number of data-remote tasks. As a result, they minimize data-processing time ineffectively. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers. DynDL offers greater flexibility than existing approaches by using a set of non-decreasing functions to evaluate dynamic data transfer costs. We also propose online and offline algorithms (based on DynDL) that minimize data-processing time and adaptively adjust data locality. Although DynDL is NP-complete (nondeterministic polynomial-complete), we prove that the offline algorithm runs in quadratic time and generates optimal results for DynDL’s specific uses. Using a series of simulations and real-world executions, we show that our algorithms reduce data-processing time by 30% compared with algorithms that do not consider dynamic data transfer costs. Moreover, they can adaptively adjust data localities based on the server’s free time, data placement, and network bandwidth, and schedule tens of thousands of tasks within sub-second to second timescales.
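The sketch below illustrates the general idea of locality-aware scheduling with a data transfer cost that grows with the number of data-remote tasks already on a server; it is not DynDL's algorithm. The greedy rule, the unit task length, the cost function, and the server/placement names are all assumptions.

```python
# Greedy online assignment with a non-decreasing remote-transfer cost.
# Cost function, unit processing time, and example data are assumed.
def transfer_cost(remote_tasks_on_server: int) -> float:
    # Non-decreasing in the number of competing data-remote tasks.
    return 1.0 + 0.5 * remote_tasks_on_server

def schedule(tasks, servers, placement):
    """tasks: list of task ids; placement[t] = set of servers holding t's data."""
    load = {s: 0.0 for s in servers}     # queued work per server
    remote = {s: 0 for s in servers}     # data-remote tasks per server
    assignment = {}
    for t in tasks:
        best_s, best_finish = None, float("inf")
        for s in servers:
            local = s in placement[t]
            cost = 0.0 if local else transfer_cost(remote[s])
            finish = load[s] + 1.0 + cost   # unit processing time + transfer
            if finish < best_finish:
                best_s, best_finish = s, finish
        assignment[t] = best_s
        load[best_s] = best_finish
        if best_s not in placement[t]:
            remote[best_s] += 1
    return assignment

servers = ["s1", "s2"]
placement = {"t1": {"s1"}, "t2": {"s1"}, "t3": {"s2"}}
print(schedule(["t1", "t2", "t3"], servers, placement))
```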


2021 ◽  
Author(s):  
Walid Aljoby

Our work, DiffPerf, is a key enabler that represents a significant step forward in network softwarization. It supports agile and dynamic in-network bandwidth allocation in ISP-centric settings and is implemented on one of the largest community-led SDN platforms.


Author(s):  
Uma Nandhini D ◽  
Udhayakumar S ◽  
Latha Tamilselvan ◽  
Silviya Nancy J

Mobile computing is still in its infancy because of limits on computational power, battery lifetime, and storage capacity. These limitations hinder the growth of mobile computing, which in turn affects the growth of computationally intensive applications developed for mobile devices. To help execute complex applications on mobile devices, mobile cloud computing (MCC) emerged as a feasible solution. Offloading tasks from the mobile device to a cloud data center for storage and execution has gained popularity; however, issues related to network bandwidth, loss of mobile data connectivity, and connection setup limit the benefits offered by MCC. Cloudlet servers fill this gap by assisting the mobile cloud environment as edge devices, offering compute power to connected devices over high-speed wireless LAN connectivity. Implementing a cloudlet faces severe challenges in terms of storage, network sharing, and VM provisioning. Moreover, the number of devices connected to the cloudlet and its load conditions vary drastically, leading to unexpected bottlenecks in which server availability becomes an issue. Therefore, a scalable cloudlet, the Client Aware Scalable Cloudlet (CASC), is proposed; it uses linear regression analysis to predict expected load conditions for provisioning new virtual machines and performing resource migration.
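As a rough illustration of the prediction step described above, the following sketch fits a least-squares trend to recent load samples and extrapolates one interval ahead to decide whether to provision another VM. The window, the 80% headroom rule, the per-VM capacity, and the function names are assumptions and not the paper's CASC design.

```python
# Linear-regression load forecast used as a VM-provisioning trigger.
# Thresholds, capacities, and sample data are assumed values.
import numpy as np

def predict_next_load(load_history, horizon=1):
    """Fit load = a*t + b over the observed window and extrapolate."""
    t = np.arange(len(load_history))
    a, b = np.polyfit(t, np.asarray(load_history, dtype=float), 1)
    return a * (len(load_history) - 1 + horizon) + b

def should_provision_vm(load_history, capacity_per_vm=100.0, active_vms=2):
    predicted = predict_next_load(load_history)
    return predicted > 0.8 * capacity_per_vm * active_vms  # 80% headroom rule

samples = [120, 135, 150, 170, 190]    # e.g. connected-client load per interval
print(predict_next_load(samples))      # extrapolated next-interval load
print(should_provision_vm(samples))    # True if a new VM should be spun up
```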


2011 ◽  
Vol 403-408 ◽  
pp. 2628-2631
Author(s):  
Jiang Yi Shi ◽  
Kun Chen ◽  
Kang Li ◽  
Zhi Xiong Di

As Internet services have grown explosively, the demands on network bandwidth have become stringent. With the extraordinary growth in processing capability, memory access control has become a key factor affecting network processor performance. This paper proposes a storage management scheme for fast packet buffers in a network processor, which improves bandwidth utilization. Experimental results show that this approach remarkably improves the memory system access rate of the network processor.
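The abstract does not detail the scheme, so the sketch below only illustrates one common packet-buffer technique in the same spirit: spreading fixed-size buffer cells round-robin across memory banks so back-to-back accesses rarely hit the same bank. The bank count, cell size, and class names are assumptions, not the paper's design.

```python
# Bank-interleaved packet buffer pool (illustrative only; parameters assumed).
NUM_BANKS = 4
CELL_SIZE = 64  # bytes per buffer cell

class BufferPool:
    def __init__(self, cells_per_bank=1024):
        # one free list of cell indices per memory bank
        self.free = [list(range(cells_per_bank)) for _ in range(NUM_BANKS)]
        self.next_bank = 0

    def alloc_packet(self, length):
        """Allocate enough cells for a packet, rotating across banks."""
        cells = []
        for _ in range((length + CELL_SIZE - 1) // CELL_SIZE):
            bank = self.next_bank
            self.next_bank = (self.next_bank + 1) % NUM_BANKS
            if not self.free[bank]:
                raise MemoryError(f"bank {bank} exhausted")
            cells.append((bank, self.free[bank].pop()))
        return cells

    def free_packet(self, cells):
        for bank, cell in cells:
            self.free[bank].append(cell)

pool = BufferPool()
pkt = pool.alloc_packet(300)   # 300-byte packet spans 5 cells on rotating banks
print(pkt)
pool.free_packet(pkt)
```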


2017 ◽  
Vol 13 (02) ◽  
pp. 34
Author(s):  
Varun Tiwari ◽  
Avinash Keskar ◽  
NC Shivaprakash

Designing an Internet of Things (IoT) enabled environment requires integrating various things/devices. Integrating these devices requires a generalized approach, because they can use different communication protocols. In this paper, we propose generalized nodes for connecting various devices. These nodes can create a scalable local wireless network that connects to the cloud through a network gateway, and they support over-the-air programming so the network can be re-configured from the cloud. As the number of devices connected to the cloud increases, network traffic also increases. To reduce this traffic, we use different data transfer schemes for the network. We also propose an event-based data transfer scheme for situations where there is a low probability of change in the sensor value. Experimental results show that the event-based scheme reduces data traffic by up to 48% under practical conditions, without any loss of information, compared with priority-based data transfer. We also show that the proposed scheme is more reliable for data transfer in a large network, with a success rate of 99.5% measured over 200 minutes for 1201 data packets.
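A minimal sketch of event-based reporting of the kind described above: a node transmits a reading only when it differs from the last value sent, so unchanged samples generate no traffic and no information is lost. The deadband parameter, the `send` stub, and the sample values are assumptions, not the paper's protocol.

```python
# Event-based sensor reporting: transmit only on change (illustrative).
def make_event_reporter(send, deadband=0.0):
    last_sent = None
    def report(value):
        nonlocal last_sent
        if last_sent is None or abs(value - last_sent) > deadband:
            send(value)          # value changed: transmit a packet
            last_sent = value
            return True
        return False             # unchanged: suppress, nothing is lost
    return report

sent = []
report = make_event_reporter(send=sent.append)
for v in [20, 20, 20, 21, 21, 22]:   # raw samples from a slowly changing sensor
    report(v)
print(sent)   # [20, 21, 22] -> 3 packets instead of 6
```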


2018 ◽  
Vol 176 ◽  
pp. 01020
Author(s):  
Wang Chao ◽  
Zhang Dalong ◽  
Ran Xiaomin

To address link congestion caused by the shortage of network bandwidth resources at the user end, this paper first proposes a regional load-balancing idea. Then, for the problem of bandwidth resource allocation in regional load balancing, a bandwidth allocation model is established and a dynamic auction algorithm is proposed. The algorithm calculates link quality and stability by constructing a link model, and introduces the incentive degree of the auctioned bandwidth to the auctioneer to obtain the auction bidding function. Simulation results show that the algorithm can effectively improve the user's network status, reduce the service response delay, and increase throughput. It can also effectively prevent false bidding by auction users, so that the auction quote quickly converges to the maximum quote, reducing the number of auction rounds and the communication overhead.
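For illustration only, the following sketch shows a simple ascending-price bandwidth auction in which the price rises until the requested bandwidth fits within the shared capacity; it is not the paper's bidding function and does not model link quality, stability, or incentive degree. The valuations, price step, and capacities are assumed values.

```python
# Ascending-price bandwidth auction (illustrative; parameters assumed).
def demand(valuation, price, max_bw):
    """A user requests bandwidth only while the price is below its valuation."""
    return max_bw if price < valuation else 0.0

def run_auction(valuations, max_bw, capacity, price=0.1, step=0.1, rounds=100):
    for _ in range(rounds):
        total = sum(demand(v, price, max_bw) for v in valuations)
        if total <= capacity:
            break               # demand fits the shared capacity: stop
        price += step           # excess demand: raise the price and re-bid
    winners = [i for i, v in enumerate(valuations) if demand(v, price, max_bw) > 0]
    return price, winners

# Three users competing for 20 Mb/s of spare regional bandwidth, 10 Mb/s each.
price, winners = run_auction([1.0, 0.6, 0.3], max_bw=10.0, capacity=20.0)
print(round(price, 2), winners)   # price rises until only two bidders remain
```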

