Network bandwidth reservation method combining machine learning and linear programming

Author(s):  
Kouichi Genda


Author(s):  
Sarmad H. Ali ◽  
Osamah A. Ali ◽  
Samir C. Ajmi

In this research, we study the Simplex method, which successively improves a solution until the optimal solution is found, using different types of linear methods. The concept of linear separation is widely used in the study of machine learning. Through this study, we determine the better-performing approach by comparing the time consumed by the Quadric and Fisher methods.
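As an illustration only, the snippet below times a Fisher linear discriminant against a quadratic discriminant on synthetic data with scikit-learn. Interpreting "Fisher" as Fisher's linear discriminant and "Quadric" as quadratic discriminant analysis is an assumption, as are the dataset and estimators; this is not the authors' implementation.

```python
# Illustrative sketch only: timing a Fisher (linear) discriminant against a
# quadratic discriminant on synthetic data. Dataset and estimators are assumptions.
import time

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for name, model in [
    ("Fisher (LDA)", LinearDiscriminantAnalysis()),
    ("Quadric (QDA)", QuadraticDiscriminantAnalysis()),
]:
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    print(f"{name}: fitted in {elapsed:.4f} s, accuracy {model.score(X, y):.3f}")
```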


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Sha Lu ◽  
Zengxin Wei

The proximal point algorithm is a class of methods widely used in recent years for solving optimization problems and practical problems such as machine learning. In this paper, a framework for an accelerated proximal point algorithm is presented for convex minimization with linear constraints. The algorithm can be seen as an extension of Güler's methods for unconstrained optimization and linear programming problems. We prove that the sequence generated by the algorithm converges to a KKT solution of the original problem under appropriate conditions, with a convergence rate of O(1/k²).
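For intuition, here is a minimal sketch of a Güler-style accelerated proximal point iteration on an unconstrained convex quadratic, where the proximal step has a closed form. The problem data, proximal parameter, and iteration count are illustrative assumptions; this is not the paper's constrained algorithm.

```python
# A minimal sketch (not the paper's constrained algorithm): accelerated
# proximal point iteration on f(x) = 0.5 * x^T A x - b^T x.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + np.eye(30)           # symmetric positive definite
b = rng.standard_normal(30)
lam = 1.0                          # proximal parameter (assumed)

def prox(v):
    # argmin_x f(x) + (1/(2*lam)) * ||x - v||^2, solved in closed form
    return np.linalg.solve(A + np.eye(30) / lam, b + v / lam)

x = y = np.zeros(30)
t = 1.0
for k in range(200):
    x_next = prox(y)
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
    x, t = x_next, t_next

print("residual ||Ax - b|| =", np.linalg.norm(A @ x - b))
```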


2020 ◽  
Vol 17 (8) ◽  
pp. 3765-3769
Author(s):  
N. P. Ponnuviji ◽  
M. Vigilson Prem

Cloud computing has revolutionized Information Technology by allowing users to access a wide variety of resources across different applications at low cost. Resources are allocated with scalable, flexible, on-demand access in a virtualized manner, with reduced maintenance and lower infrastructure cost. Most resources are handled and managed by organizations over the Internet using different networking protocol standards and formats. Research and statistics have shown that existing technologies are prone to threats and vulnerabilities stemming from legacy protocol bugs, which pave the way for attackers to intrude in various ways. The most common of these attacks is the Distributed Denial of Service (DDoS) attack. This attack targets the cloud's performance and can cause serious damage to the entire cloud computing environment. In the DDoS attack scenario, compromised computers are used to transmit a large number of packets, injected with known and unknown exploits, to a server. A large portion of the network bandwidth of the users' cloud infrastructure is consumed, wasting enormous amounts of server time. In this paper, we propose a DDoS attack detection scheme based on the Random Forest algorithm to mitigate the DDoS threat. The algorithm is used together with signature detection techniques and generates a decision tree, which helps detect signature-based DDoS flooding attacks. We also applied other machine learning algorithms and analyzed the results they yielded.
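As a rough illustration of the detection step, the sketch below trains a Random Forest classifier on hypothetical, pre-extracted flow features labeled benign (0) or DDoS (1). The feature set, synthetic labels, and data split are assumptions, not the scheme evaluated in the paper.

```python
# Illustrative sketch only: Random Forest over hypothetical flow features
# (packet rate, byte rate, mean packet size, distinct source IPs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))                     # synthetic flow features
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)    # synthetic "flood" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```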


2021 ◽  
Author(s):  
Cesar Gomez ◽  
Abdallah Shami ◽  
Xianbing Wang

Network Telemetry (NT) is a crucial component in today's networks, as it provides network managers with important data about the status and behavior of network elements. NT data are then used to gain insights and rapidly take actions to improve network performance or avoid its degradation. Intuitively, the more data are collected, the better for the network managers. However, gathering and transporting excessive NT data can have an adverse effect, leading to a paradox: the data that are supposed to help actually damage network performance. This motivates a novel NT framework that dynamically adjusts the rate at which NT data are transmitted. In this work, we present an NT scheme that is traffic-aware, meaning that network elements collect and send NT data based on the type of traffic that they forward. The evaluation results of our Machine Learning-based mechanism show that it is possible to reduce the network bandwidth overhead produced by a conventional NT scheme by over 75%.
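The toy sketch below conveys the traffic-aware idea: a network element chooses its telemetry export interval based on the class of traffic it is currently forwarding. The class names and interval values are illustrative assumptions, not the authors' framework.

```python
# Minimal sketch of a traffic-aware telemetry export policy (assumed classes/intervals).
from dataclasses import dataclass

# Export interval (seconds) per traffic class: latency-sensitive traffic is
# monitored more aggressively than bulk transfers.
EXPORT_INTERVAL = {
    "real_time": 1.0,
    "interactive": 5.0,
    "bulk": 30.0,
}

@dataclass
class TelemetryPolicy:
    default_interval: float = 10.0

    def interval_for(self, traffic_class: str) -> float:
        return EXPORT_INTERVAL.get(traffic_class, self.default_interval)

policy = TelemetryPolicy()
for cls in ("real_time", "bulk", "unknown"):
    print(cls, "->", policy.interval_for(cls), "s between telemetry reports")
```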


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1176 ◽  
Author(s):  
Davy Preuveneers ◽  
Ilias Tsingenopoulos ◽  
Wouter Joosen

The application of artificial intelligence enhances the ability of sensor and networking technologies to realize smart systems that sense, monitor and automatically control our everyday environments. Intelligent systems and applications often automate decisions based on the outcome of certain machine learning models. They collaborate at an ever-increasing scale, ranging from smart homes and smart factories to smart cities. The best performing machine learning model, its architecture and parameters for a given task are ideally determined automatically through a hyperparameter tuning process. At the same time, edge computing is an emerging distributed computing paradigm that aims to bring computation and data storage closer to the location where they are needed, to save network bandwidth or reduce the latency of requests. The challenge we address in this work is that hyperparameter tuning does not take resource trade-offs into consideration when selecting the best model for deployment in smart environments. The most accurate model might be prohibitively expensive to evaluate computationally on a resource-constrained node at the edge of the network. We propose a multi-objective optimization solution to find acceptable trade-offs between model accuracy and resource consumption to enable the deployment of machine learning models in resource-constrained smart environments. We demonstrate the feasibility of our approach by means of an anomaly detection use case. Additionally, we evaluate the extent to which transfer learning techniques can be applied to reduce the amount of training required by reusing previous models, parameters and trade-off points from similar settings.
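To make the accuracy/resource trade-off concrete, the sketch below filters a set of candidate models down to its Pareto-optimal points (higher accuracy, lower resource cost). The candidate list and cost metric are made-up examples, not results or the method from the paper.

```python
# Illustrative sketch only: Pareto filtering of candidate models by
# (accuracy, resource cost). Candidate values are invented for illustration.
candidates = [
    {"name": "model_a", "accuracy": 0.95, "cost_mb": 512},
    {"name": "model_b", "accuracy": 0.93, "cost_mb": 128},
    {"name": "model_c", "accuracy": 0.90, "cost_mb": 32},
    {"name": "model_d", "accuracy": 0.89, "cost_mb": 64},  # dominated by model_c
]

def dominates(a, b):
    # a dominates b if it is at least as accurate and no more expensive,
    # and strictly better in at least one objective
    return (a["accuracy"] >= b["accuracy"] and a["cost_mb"] <= b["cost_mb"]
            and (a["accuracy"] > b["accuracy"] or a["cost_mb"] < b["cost_mb"]))

pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
for c in pareto:
    print(c["name"], "accuracy", c["accuracy"], "cost", c["cost_mb"], "MB")
```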


2011 ◽  
Vol 230-232 ◽  
pp. 793-797
Author(s):  
Wei Li ◽  
Chong Yang Deng

Machine learning and linear programming with time-dependent cost are two popular intelligent optimization tools for handling uncertainty in real-world problems, so combining these two technologies is quite attractive. This paper proposes an effective framework for dealing with uncertainty in practice, based on introducing a learning parameter into linear programming models.
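One simple way to picture such a combination (an assumption about the general idea, not the paper's framework) is to learn a time-dependent cost coefficient from historical data and then plug it into a linear program. The toy problem, regression model, and coefficient below are illustrative.

```python
# Minimal sketch: a cost coefficient is learned from data, then used in an LP.
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LinearRegression

# Step 1: learn a time-dependent unit cost from (time, observed cost) samples.
t = np.arange(10).reshape(-1, 1)
observed_cost = 2.0 * t.ravel() + 5.0 + np.random.default_rng(0).normal(0, 0.5, 10)
cost_model = LinearRegression().fit(t, observed_cost)
c1 = cost_model.predict([[12]])[0]        # predicted unit cost at a future time

# Step 2: use the learned coefficient in the LP
#   minimize  c1*x1 + 3*x2   s.t.  x1 + x2 >= 4,  x1, x2 >= 0
res = linprog(c=[c1, 3.0],
              A_ub=[[-1.0, -1.0]], b_ub=[-4.0],
              bounds=[(0, None), (0, None)])
print("learned c1 =", round(float(c1), 2), "-> optimal x =", res.x)
```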


Author(s):  
Hongjing Zhang ◽  
Ian Davidson

Recent work on explainable clustering allows describing clusters when the features are interpretable. However, much of modern machine learning focuses on complex data such as images, text, and graphs, where deep learning is used but the raw features of the data are not interpretable. This paper explores a novel setting for performing clustering on complex data while simultaneously generating explanations using interpretable tags. We propose deep descriptive clustering, which performs sub-symbolic representation learning on complex data while generating explanations based on symbolic data. We form good clusters by maximizing the mutual information between the empirical distribution on the inputs and the induced clustering labels. We generate explanations by solving an integer linear program that produces concise and orthogonal descriptions for each cluster. Finally, we allow the explanation to inform better clustering by proposing a novel pairwise loss with self-generated constraints that maximizes the consistency between the clustering and explanation modules. Experimental results on public data demonstrate that our model outperforms competitive baselines in clustering performance while offering high-quality cluster-level explanations.
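To illustrate the flavor of an ILP that selects concise, non-overlapping cluster descriptions, here is a small PuLP sketch that picks at most K tags per cluster while forbidding a tag from describing more than one cluster. The clusters, tags, coverage scores, and constraint choices are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only (not the paper's exact ILP): pick concise,
# orthogonal descriptive tags per cluster by maximizing coverage scores.
import pulp

clusters = ["c0", "c1"]
tags = ["outdoor", "animal", "vehicle", "urban"]
coverage = {                      # fraction of cluster members carrying the tag (made up)
    ("c0", "outdoor"): 0.9, ("c0", "animal"): 0.8, ("c0", "vehicle"): 0.1, ("c0", "urban"): 0.2,
    ("c1", "outdoor"): 0.4, ("c1", "animal"): 0.1, ("c1", "vehicle"): 0.9, ("c1", "urban"): 0.7,
}
K = 2                             # at most K tags per cluster (conciseness)

prob = pulp.LpProblem("descriptive_tags", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (clusters, tags), cat="Binary")

prob += pulp.lpSum(coverage[c, t] * x[c][t] for c in clusters for t in tags)
for c in clusters:                # conciseness: few tags per cluster
    prob += pulp.lpSum(x[c][t] for t in tags) <= K
for t in tags:                    # orthogonality: a tag describes at most one cluster
    prob += pulp.lpSum(x[c][t] for c in clusters) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in clusters:
    print(c, "->", [t for t in tags if x[c][t].value() == 1])
```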

