PerfGuard

2021 ◽  
Vol 14 (13) ◽  
pp. 3362-3375
Author(s):  
Remmelt Ammerlaan ◽  
Gilbert Antonius ◽  
Marc Friedman ◽  
H M Sajjad Hossain ◽  
Alekh Jindal ◽  
...  

Modern data processing systems require optimization at massive scale, and using machine learning to optimize these systems (ML-for-systems) has shown promising results. Unfortunately, ML-for-systems is subject to over-generalizations that fail to capture the large variety of workload patterns, and tends to improve the performance of certain subsets of the workload while regressing the performance of others. In this paper, we introduce a performance safeguard system, called PerfGuard, that designs pre-production experiments for deploying ML-for-systems. Instead of searching the entire space of query plans (a well-known, intractable problem), we focus on query plan deltas (a significantly smaller space). PerfGuard formalizes these differences and correlates plan deltas with important feedback signals, such as execution cost. We describe the deep learning architecture and the end-to-end pipeline in PerfGuard, which could be used with general relational databases. We show that this architecture improves on baseline models, and that our pipeline identifies key query plan components as major contributors to plan disparity. Offline experimentation shows PerfGuard to be a promising approach, with many opportunities for future improvement.
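The core idea of comparing plan deltas rather than whole plans can be illustrated with a toy sketch. This is not PerfGuard's implementation (the paper uses a learned deep encoding); the operator names and the dict-based plan representation below are invented for illustration.

```python
from collections import Counter

def plan_operators(plan):
    """Flatten a query plan tree (nested dicts) into a multiset of operator names."""
    ops = Counter([plan["op"]])
    for child in plan.get("children", []):
        ops += plan_operators(child)
    return ops

def plan_delta(plan_a, plan_b):
    """Operators added/removed between two plans -- a toy stand-in for
    plan-delta features; the real feature space is much richer."""
    a, b = plan_operators(plan_a), plan_operators(plan_b)
    return {"added": b - a, "removed": a - b}

# Two hypothetical plans: the optimizer swapped a hash join for a merge join.
before = {"op": "HashJoin", "children": [{"op": "Scan"}, {"op": "Scan"}]}
after = {"op": "MergeJoin",
         "children": [{"op": "Sort", "children": [{"op": "Scan"}]},
                      {"op": "Sort", "children": [{"op": "Scan"}]}]}
delta = plan_delta(before, after)
```

The delta (one join operator swapped, two sorts added) is far smaller than either plan, which is what makes correlating deltas with execution-cost regressions tractable.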

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 20313-20324 ◽  
Author(s):  
Martin Gjoreski ◽  
Anton Gradisek ◽  
Borut Budna ◽  
Matjaz Gams ◽  
Gregor Poglajen

2018 ◽  
Vol 7 (4.6) ◽  
pp. 296 ◽  
Author(s):  
S Rahul

This paper presents an overview of deep learning methodology and its applications to a variety of signal and data processing tasks. It discusses machine learning versus deep learning: which is better suited to the market, their dissimilarities, problem handling, interpretability, and how to choose between ML and DL, concluding that deep learning is a part of machine learning and machine learning is a part of artificial intelligence.


2021 ◽  
Vol 251 ◽  
pp. 03057
Author(s):  
Michael Andrews ◽  
Bjorn Burkle ◽  
Shravan Chaudhari ◽  
Davide Di Croce ◽  
Sergei Gleyzer ◽  
...  

Machine learning algorithms are gaining ground in high energy physics for applications in particle and event identification, physics analysis, detector reconstruction, simulation, and triggering. Currently, most data-analysis tasks at LHC experiments benefit from the use of machine learning. Incorporating these computational tools into the experimental framework presents new challenges. This paper reports on the implementation of end-to-end deep learning within the CMS software framework and on scaling end-to-end deep learning to multiple GPUs. The end-to-end deep learning technique combines deep learning algorithms with a low-level detector representation for particle and event identification. We demonstrate the end-to-end implementation on a top quark benchmark and perform studies with various hardware architectures, including single and multiple GPUs and the Google TPU.
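Scaling training across multiple GPUs typically means synchronous data parallelism: each device computes gradients on its own shard of events, the gradients are averaged, and every replica takes the same update step. A minimal list-based sketch of that averaging (the gradient values and learning rate below are invented; real frameworks do this with an allreduce over device tensors):

```python
def allreduce_mean(grads_per_gpu):
    """Average per-GPU gradient vectors -- the core of synchronous
    data-parallel training (toy, list-based version)."""
    n = len(grads_per_gpu)
    dim = len(grads_per_gpu[0])
    return [sum(g[i] for g in grads_per_gpu) / n for i in range(dim)]

def sgd_step(weights, grads_per_gpu, lr=0.1):
    """Every replica applies the same averaged gradient."""
    g = allreduce_mean(grads_per_gpu)
    return [w - lr * gi for w, gi in zip(weights, g)]

# Four hypothetical GPUs, each holding gradients from its own event shard.
weights = [1.0, -2.0]
shard_grads = [[0.2, 0.0], [0.4, 0.0], [0.2, 0.4], [0.0, 0.4]]
new_weights = sgd_step(weights, shard_grads)
```

Because every replica sees the same averaged gradient, the result is equivalent to training on the combined batch, which is what lets throughput scale with the number of devices.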


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 70590-70603 ◽  
Author(s):  
Martin Gjoreski ◽  
Matja Z Gams ◽  
Mitja Lustrek ◽  
Pelin Genc ◽  
Jens-U. Garbas ◽  
...  

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1461 ◽  
Author(s):  
Taeheum Cho ◽  
Unang Sunarya ◽  
Minsoo Yeo ◽  
Bosun Hwang ◽  
Yong Seo Koo ◽  
...  

Sleep scoring is the first step for diagnosing sleep disorders. A variety of chronic diseases related to sleep disorders could be identified using sleep-state estimation. This paper presents an end-to-end deep learning architecture using wrist actigraphy, called Deep-ACTINet, for automatic sleep-wake detection using only noise-canceled raw activity signals recorded during sleep, without a feature engineering step. As a benchmark test, the proposed Deep-ACTINet is compared with two conventional fixed-model-based sleep-wake scoring algorithms and four feature-engineering-based machine learning algorithms. The datasets were recorded from 10 subjects using three-axis accelerometer wristband sensors for eight hours in bed. The sleep recordings were analyzed using Deep-ACTINet and the conventional approaches, and the proposed end-to-end deep learning model achieved the highest average accuracy of 89.65%, recall of 92.99%, and precision of 92.09%. These values were approximately 4.74% and 4.05% higher than those of the traditional model-based and feature-based machine learning algorithms, respectively. In addition, the neuron outputs of Deep-ACTINet contained the most significant information for separating the asleep and awake states, as demonstrated by their high correlations with conventionally significant features. Deep-ACTINet was designed as a general model and thus has the potential to replace the actigraphy algorithms currently equipped in wristband wearable devices.
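"End-to-end on raw signals" means the first layers learn their own filters instead of consuming hand-crafted features; the basic building block is a 1-D convolution over the activity trace. A pure-Python sketch (the signal values and the edge-detecting kernel are invented, and Deep-ACTINet's actual filters are learned, not fixed):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution over a raw activity signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard rectified-linear nonlinearity applied after the convolution."""
    return [max(0.0, x) for x in xs]

# Hypothetical raw activity counts from a wrist accelerometer.
raw = [0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0]
edge_kernel = [1.0, -1.0]  # responds to decreases in activity
feature_map = relu(conv1d(raw, edge_kernel))
```

Stacking such convolution-plus-nonlinearity layers, with the kernels learned by backpropagation, is what removes the need for a separate feature engineering stage.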


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Qingyu Zhao ◽  
Ehsan Adeli ◽  
Kilian M. Pohl

Abstract: The presence of confounding effects (or biases) is one of the most critical challenges in using deep learning to advance discovery in medical imaging studies. Confounders affect the relationship between input data (e.g., brain MRIs) and output variables (e.g., diagnosis). Improper modeling of those relationships often results in spurious and biased associations. Traditional machine learning and statistical models minimize the impact of confounders by, for example, matching data sets, stratifying data, or residualizing imaging measurements. Alternative strategies are needed for state-of-the-art deep learning models that use end-to-end training to automatically extract informative features from large sets of images. In this article, we introduce an end-to-end approach for deriving features invariant to confounding factors while accounting for intrinsic correlations between the confounder(s) and the prediction outcome. The method does so by exploiting concepts from traditional statistical methods and recent fair machine learning schemes. We evaluate the method on predicting the diagnosis of HIV solely from Magnetic Resonance Images (MRIs), identifying morphological sex differences in adolescence using data from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), and determining bone age from X-ray images of children. The results show that our method can predict accurately while reducing biases associated with confounders. The code is available at https://github.com/qingyuzhao/br-net.
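The "residualizing" baseline the abstract contrasts with end-to-end invariant-feature learning can be shown concretely: regress the confounder out of a measurement with ordinary least squares and keep the residual. This is a sketch of the traditional baseline only, not of the paper's adversarial method; the age values and measurement are invented.

```python
def residualize(feature, confounder):
    """Remove the linear effect of a confounder from a feature via
    ordinary least squares with an intercept (traditional baseline)."""
    n = len(feature)
    mx = sum(confounder) / n
    my = sum(feature) / n
    sxx = sum((x - mx) ** 2 for x in confounder)
    sxy = sum((x - mx) * (y - my) for x, y in zip(confounder, feature))
    slope = sxy / sxx
    # Residual = observed value minus the value predicted from the confounder.
    return [y - (my + slope * (x - mx)) for x, y in zip(confounder, feature)]

# Hypothetical imaging measurement perfectly explained by age (the confounder):
ages = [10.0, 12.0, 14.0, 16.0]
measure = [2.0 * a + 1.0 for a in ages]
residuals = residualize(measure, ages)  # confounder-driven variation removed
```

After residualization, nothing age-related remains in the feature; the limitation, which motivates the paper's end-to-end approach, is that this only removes linear effects and must be applied per hand-extracted feature rather than inside a learned representation.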


2020 ◽  
Vol 10 (6) ◽  
pp. 1997
Author(s):  
Xin Shu ◽  
Chang Liu ◽  
Tong Li

The output of the tactile sensing array on a gripper can be used to predict grasping stability. Some methods make this decision from traditional tactile features, while more advanced methods build a prediction model using machine learning or deep learning. However, these methods are all limited to a specific sensing array and share two disadvantages. On the one hand, the models do not perform well on different sensors. On the other hand, they cannot perform inference over multiple sensors in an end-to-end manner. We therefore aim to find the internal relationships among different sensors and to infer the grasping stability of multiple sensors in an end-to-end way. In this paper, we propose MM-CNN (mask multi-head convolutional neural network), which predicts grasping stability from the output of multiple sensors with a weight-sharing mechanism. We train this model and evaluate it on our own collected datasets. The model achieves 99.49% and 94.25% prediction accuracy on two different sensing arrays, respectively. In addition, we show that the proposed structure also works with other CNN backbones and can be easily integrated.
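The masking idea that lets one shared-weight model serve differently sized sensing arrays can be sketched in one dimension: pad every array to a common length and carry a mask so pooling ignores the padding. This is a toy version under invented array sizes and readings; MM-CNN's actual masks operate on 2-D taxel grids inside a CNN.

```python
def pad_and_mask(readings, target_len):
    """Zero-pad a tactile reading to a shared length and return a validity
    mask, so one shared-weight model can consume arrays of different sizes."""
    pad = target_len - len(readings)
    padded = readings + [0.0] * pad
    mask = [1.0] * len(readings) + [0.0] * pad
    return padded, mask

def masked_mean(padded, mask):
    """Pool only the valid taxels, ignoring the zero padding."""
    return sum(p * m for p, m in zip(padded, mask)) / sum(mask)

sensor_a = [0.2, 0.4, 0.6]  # hypothetical 3-taxel array
sensor_b = [0.5, 0.5]       # hypothetical 2-taxel array
pa, ma = pad_and_mask(sensor_a, 4)
pb, mb = pad_and_mask(sensor_b, 4)
```

Because both padded inputs have the same shape, the same weights process both sensors, and the mask keeps the padding from biasing the pooled statistic.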


2021 ◽  
Vol 6 ◽  
pp. 248
Author(s):  
Paul Mwaniki ◽  
Timothy Kamanu ◽  
Samuel Akech ◽  
Dustin Dunsmuir ◽  
J. Mark Ansermino ◽  
...  

Background: The success of many machine learning applications depends on knowledge about the relationship between the input data and the task of interest (output), hindering the application of machine learning to novel tasks. End-to-end deep learning, which does not require intermediate feature engineering, has been recommended to overcome this challenge, but end-to-end deep learning models require large labelled training data sets often unavailable in many medical applications. In this study, we trained machine learning models to predict paediatric hospitalization given raw photoplethysmography (PPG) signals obtained from a pulse oximeter. We trained self-supervised learning (SSL) models for automatic feature extraction from PPG signals and assessed the utility of SSL in initializing end-to-end deep learning models trained on a small labelled data set with the aim of predicting paediatric hospitalization. Methods: We compared logistic regression models fitted using features extracted using SSL with end-to-end deep learning models initialized either randomly or using weights from the SSL model. We also compared the performance of SSL models trained on labelled data alone (n=1,031) with SSL trained using both labelled and unlabelled signals (n=7,578). Results: The SSL model trained on both labelled and unlabelled PPG signals produced features that were more predictive of hospitalization compared to the SSL model trained on labelled PPG only (AUC of logistic regression model: 0.78 vs 0.74). The end-to-end deep learning model had an AUC of 0.80 when initialized using the SSL model trained on all PPG signals, 0.77 when initialized using SSL trained on labelled data only, and 0.73 when initialized randomly. Conclusions: This study shows that SSL can improve the classification of PPG signals by either extracting features required by logistic regression models or initializing end-to-end deep learning models.
Furthermore, SSL can leverage larger unlabelled data sets to improve performance of models fitted using small labelled data sets.
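The AUC figures that anchor this comparison have a simple probabilistic reading: the chance that a randomly chosen hospitalized child receives a higher risk score than a randomly chosen non-hospitalized one. A minimal sketch of that Mann-Whitney form of AUC, with invented labels and scores standing in for two models' outputs:

```python
def auc(labels, scores):
    """AUC as the probability that a positive case outranks a negative
    one (ties count half) -- the Mann-Whitney form of the ROC area."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical hospitalization risk scores for the same five children
# from a randomly initialized and an SSL-initialized model:
y = [1, 1, 0, 0, 0]
random_init = [0.6, 0.4, 0.5, 0.3, 0.7]
ssl_init = [0.9, 0.7, 0.5, 0.3, 0.6]
```

Here `auc(y, random_init)` is 0.5 (no better than chance) while `auc(y, ssl_init)` is 1.0 (perfect ranking), mirroring how the study compares initializations on a single held-out metric.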


2018 ◽  
Vol 7 (4.6) ◽  
pp. 291
Author(s):  
S Rahul

This paper presents an overview of deep learning methodology and its applications to a variety of signal and data processing tasks. It discusses machine learning versus deep learning: which is better suited to the market, their dissimilarities, problem handling, interpretability, and how to choose between ML and DL, concluding that deep learning is a part of machine learning and machine learning is a part of artificial intelligence.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2851
Author(s):  
Obinna Izima ◽  
Ruairí Fréin ◽  
Ali Malik

A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performances of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey provides a review of studies that focus on ML techniques for predicting the QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting QoD for video are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, some research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; (4) self-healing networks and failure recovery. 
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has a lot of potential because they are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
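The "traditional ML" family the survey finds dominant can be as simple as a nearest-neighbour rule over QoD measurements. The feature set (throughput, loss, RTT) and the labelled sessions below are invented for illustration; they are not from any surveyed paper.

```python
def nearest_neighbour_label(train, query):
    """Classify a video session from QoD measurements with 1-nearest-neighbour,
    a stand-in for the traditional ML models the survey discusses."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Return the label of the closest labelled session.
    return min(train, key=lambda row: sq_dist(row[0], query))[1]

# Hypothetical (throughput_mbps, loss_pct, rtt_ms) -> quality label pairs.
sessions = [((5.0, 0.1, 30.0), "good"),
            ((0.8, 2.5, 120.0), "poor"),
            ((4.2, 0.3, 45.0), "good")]
label = nearest_neighbour_label(sessions, (1.0, 2.0, 110.0))
```

Models of this kind are easy to deploy and interpret, which is exactly the advantage over deep learning that the survey's findings highlight.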

