Automating Network Operation Centers with Superhuman Performance

2021 ◽  
Author(s):  
Sadi Altamimi ◽  
Basel Altamimi ◽  
Shervin Shirmohammadi ◽  
David Cote

Today's Network Operation Centres (NOC) consist of teams of network professionals responsible for monitoring their network's health and taking actions to maintain it. Most of these NOC actions are relatively complex and executed manually; only the simplest tasks can be automated with rules-based software. But today's networks are getting larger and more complex. Therefore, deciding what action to take in the face of non-trivial problems has essentially become an art that depends on the collective human intelligence of NOC technicians, specialized support teams organized by technology domain, and vendors' technical support. This model is getting increasingly expensive and inefficient, and the automation of all or at least some NOC tasks is now considered a desirable step towards autonomous and self-healing networks. In this article, we investigate whether such decisions can be taken by Artificial Intelligence instead of collective human intelligence, specifically by the Machine Learning method of Reinforcement Learning (RL), which has been shown to outperform humans in computer games. We build an Action Recommendation Engine (ARE) based on RL, train it with expert rules or by letting it explore outcomes by itself, and show that it can learn new and more efficient strategies that outperform expert rules designed by humans. In the face of network problems, the ARE can either quickly recommend actions to NOC technicians or autonomously take actions for fast recovery.

"This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible."
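
To illustrate the kind of approach the abstract describes, the following is a minimal, hypothetical sketch of a tabular Q-learning agent that learns which recovery action to recommend for a simplified network-health state. The states, actions, and reward model below are illustrative assumptions, not the authors' actual ARE environment or data.

# Hypothetical sketch of an RL-based action recommendation loop.
# States, actions, and rewards are invented for illustration only.
import random
from collections import defaultdict

STATES = ["healthy", "high_latency", "packet_loss", "link_down"]
ACTIONS = ["no_op", "reroute_traffic", "restart_interface", "escalate"]

def simulate(state, action):
    """Toy environment: returns (next_state, reward). Purely illustrative."""
    if state == "healthy":
        return ("healthy", 1.0 if action == "no_op" else -0.5)
    fixes = {"high_latency": "reroute_traffic",
             "packet_loss": "restart_interface",
             "link_down": "escalate"}
    if action == fixes[state]:
        return ("healthy", 1.0)   # problem resolved
    return (state, -1.0)          # problem persists

q = defaultdict(float)            # Q-table: (state, action) -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(5000):
    state = random.choice(STATES)
    for _ in range(10):
        if random.random() < epsilon:                          # explore
            action = random.choice(ACTIONS)
        else:                                                  # exploit
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = simulate(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

for s in STATES:                  # print the learned recommendation per state
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))

In this toy setting the agent converges to recommending the correct recovery action for each fault state; the paper's ARE addresses the same decision problem at the scale and complexity of a real network.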



2020 ◽  
Author(s):  
Lewis Mervin ◽  
Avid M. Afzal ◽  
Ola Engkvist ◽  
Andreas Bender

In the context of bioactivity prediction, the question of how to calibrate a score produced by a machine learning method into a reliable probability of binding to a protein target is not yet satisfactorily addressed. In this study, we compared the performance of three such calibration methods, namely Platt Scaling, Isotonic Regression and Venn-ABERS, in calibrating prediction scores from Naïve Bayes, Support Vector Machine and Random Forest models for ligand-target prediction, using bioactivity data available at AstraZeneca (40 million data points, i.e. compound-target pairs, across 2112 targets). Performance was assessed using Stratified Shuffle Split (SSS) and Leave 20% of Scaffolds Out (L20SO) validation.
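
As an illustration of two of the calibration methods named above, the sketch below uses scikit-learn's CalibratedClassifierCV to apply Platt Scaling (method="sigmoid") and Isotonic Regression to Random Forest scores under a stratified shuffle split. It is a minimal example on synthetic data, not the study's AstraZeneca data or protocol; Venn-ABERS is omitted because it is not part of scikit-learn.

# Minimal calibration comparison on synthetic data (not the study's setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(sss.split(X, y))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]

base = RandomForestClassifier(n_estimators=200, random_state=0)
for method in ("sigmoid", "isotonic"):   # "sigmoid" corresponds to Platt Scaling
    calibrated = CalibratedClassifierCV(base, method=method, cv=3)
    calibrated.fit(X_train, y_train)
    probs = calibrated.predict_proba(X_test)[:, 1]
    print(method, "Brier score:", brier_score_loss(y_test, probs))

A lower Brier score indicates better-calibrated probabilities; the study compares such calibrators across far larger ligand-target datasets and additional base models.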


2019 ◽  
Author(s):  
Hironori Takemoto ◽  
Tsubasa Goto ◽  
Yuya Hagihara ◽  
Sayaka Hamanaka ◽  
Tatsuya Kitamura ◽  
...  
