Adaptive Model Scheduling for Resource-efficient Data Labeling

2022 · Vol 16 (4) · pp. 1-22
Author(s):
Mu Yuan
Lan Zhang
Xiang-Yang Li
Lin-Zhuo Yang
Hui Xiong

Labeling data (e.g., labeling the people, objects, actions, and scene in images) comprehensively and efficiently is a widely needed but challenging task. Numerous models have been proposed to label various data, and many approaches have been designed to enhance the capability of deep learning models or to accelerate them. Unfortunately, a single machine-learning model is not powerful enough to extract various semantic information from data. In certain applications, such as image retrieval platforms and photo album management apps, it is often required to execute a collection of models to obtain sufficient labels. Given a data stream, a collection of applicable resource-hungry deep-learning models, limited computing resources, and stringent delay requirements, we design a novel approach to adaptively schedule a subset of these models to execute on each data item, aiming to maximize the value of the model output (e.g., the number of high-confidence labels). Achieving this goal is nontrivial since a model's output on any data item is content-dependent and unknown until we execute it. To tackle this, we propose an Adaptive Model Scheduling framework, consisting of (1) a deep reinforcement learning-based approach to predict the value of unexecuted models by mining semantic relationships among diverse models, and (2) two heuristic algorithms to adaptively schedule the model execution order under a deadline constraint or joint deadline-memory constraints, respectively. The proposed framework requires no prior knowledge of the data and works as a powerful complement to existing model optimization technologies. We conduct extensive evaluations on five diverse image datasets and 30 popular image labeling models to demonstrate the effectiveness of our design: it could save around 53% of execution time without losing any valuable labels.
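A minimal sketch of the deadline-constrained scheduling step described above: given predicted label values and known per-model execution times, greedily run the model with the best value-per-second ratio that still fits in the remaining budget. The value predictor (the paper's reinforcement-learning component) is abstracted away as a callable, and this greedy rule is a generic illustration under that assumption, not the authors' exact heuristic.

```python
# Greedy value-per-cost scheduling sketch; predict_value stands in for the
# paper's RL-based value predictor and is an assumption of this example.
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str
    exec_time: float  # measured average execution time in seconds

def schedule(models, predict_value, deadline, data_item):
    """Return the ordered subset of models to execute on one data item."""
    remaining, plan, executed = deadline, [], []
    candidates = list(models)
    while candidates:
        # Re-predict values each round: running one model changes what the others can add.
        scored = [(predict_value(m, data_item, executed) / m.exec_time, m)
                  for m in candidates if m.exec_time <= remaining]
        if not scored:
            break
        _, best = max(scored, key=lambda s: s[0])
        plan.append(best.name)
        executed.append(best)
        remaining -= best.exec_time
        candidates.remove(best)
    return plan
```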

Author(s):
Sachin Kumar
Rohan Asthana
Shashwat Upadhyay
Nidhi Upreti
Mohammad Akbar

Webology · 2021 · Vol 18 (2) · pp. 439-448
Author(s):
Parameswar Kanuparthi
Vaibhav Bejgam
V. Madhu Viswanatham

Agriculture is the primary sector of the Indian economy, contributing around 18 percent of overall GDP (Gross Domestic Product), and more than fifty percent of Indians come from an agricultural background. With a rapidly growing population, there is a need to substantially increase agricultural production in India. Rice is the most important crop for most people in India, yet it is among the crops most affected by disease, which reduces yields and leads to losses for farmers. A major challenge in cultivating rice is infection by diseases driven by factors such as environmental conditions, pesticide use, and natural disasters. Early detection of rice diseases can help farmers avoid such losses and achieve better yields. In this paper, we propose a new method that ensembles transfer learning models to detect rice plants and classify their diseases from images. The model detects the three most common rice crop diseases: brown spot, leaf smut, and bacterial leaf blight. Transfer learning uses pre-trained models and generally gives better accuracy on image datasets, while ensembling (combining two or more machine learning algorithms) reduces generalization error and makes the model more robust. The ensembling technique used in this paper is majority voting. We propose a novel model that ensembles three transfer learning models, InceptionV3, MobileNetV2, and DenseNet121, achieving an accuracy of 96.42%.
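A minimal sketch of the majority-voting ensemble over three ImageNet-pretrained backbones. The image size, the frozen-backbone head, and the training loop (omitted) are assumptions; the paper's exact fine-tuning and augmentation setup is not reproduced here.

```python
# Hard majority voting over three transfer-learning classifiers (sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, MobileNetV2, DenseNet121

NUM_CLASSES = 3  # brown spot, leaf smut, bacterial leaf blight

def build_classifier(backbone_fn, input_shape=(224, 224, 3)):
    """Attach a small classification head to a frozen ImageNet backbone."""
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    backbone.trainable = False
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
    model = Model(backbone.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

members = [build_classifier(fn) for fn in (InceptionV3, MobileNetV2, DenseNet121)]
# ... each member would be fine-tuned on the rice-disease images here ...

def majority_vote(models, images):
    """Each model casts one vote per image; ties fall to the lowest class id."""
    votes = np.stack([np.argmax(m.predict(images, verbose=0), axis=1) for m in models])
    return np.array([np.bincount(v, minlength=NUM_CLASSES).argmax() for v in votes.T])
```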


Diagnostics · 2020 · Vol 10 (6) · pp. 417
Author(s):
Mohammad Farukh Hashmi
Satyarth Katiyar
Avinash G Keskar
Neeraj Dhanraj Bokde
Zong Woo Geem

Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, it is a challenging task to examine chest X-rays. There is a need to improve the diagnosis accuracy. In this work, an efficient model for the detection of pneumonia trained on digital chest X-ray images is proposed, which could aid the radiologists in their decision making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from the state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This approach is a supervised learning approach in which the network predicts the result based on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier is able to outperform all the individual models. Finally, the model is evaluated, not only in terms of test accuracy, but also in the AUC score. The final proposed weighted classifier model is able to achieve a test accuracy of 98.43% and an AUC score of 99.76 on the unseen data from the Guangzhou Women and Children’s Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid the radiologists in the diagnosis process.
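A minimal sketch of the weighted-classifier idea: combine the softmax outputs of several fine-tuned networks using one weight per model chosen on validation data. The optimisation below is a simple random search over the probability simplex; the paper's exact weight-selection procedure may differ.

```python
# Weighted soft voting with weights picked on a validation set (sketch).
import numpy as np

def weighted_predict(prob_list, weights):
    """prob_list: list of (N, C) softmax outputs, one per model; weights sum to 1."""
    stacked = np.stack(prob_list)                  # (M, N, C)
    return np.tensordot(weights, stacked, axes=1)  # (N, C) combined probabilities

def search_weights(prob_list, y_val, trials=5000, seed=0):
    """Pick the weight vector that maximises validation accuracy."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(trials):
        w = rng.dirichlet(np.ones(len(prob_list)))  # random point on the simplex
        acc = (weighted_predict(prob_list, w).argmax(1) == y_val).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```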


Author(s):
S. Sasikala
S. J. Subhashini
P. Alli
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is a subset of machine learning: it technically is machine learning and functions in a similar way, but it has different capabilities. The main difference is that machine learning models improve progressively yet still need some guidance; if a machine learning model returns an inaccurate prediction, the programmer needs to fix the problem explicitly, whereas a deep learning model can correct itself. An automatic car-driving system is a good example of deep learning. Artificial intelligence, on the other hand, is distinct from machine learning and deep learning; both deep learning and machine learning are subsets of AI.


2021
Author(s):
Benjamin J. Arthur
Yun Ding
Medhini Sosale
Faduma Khalif
Elizabeth Kim
...

Many animals produce distinct sounds or substrate-borne vibrations, but these signals have proved challenging to segment with automated algorithms. We have developed SongExplorer, a web-browser-based interface wrapped around a deep-learning algorithm that supports an interactive workflow for (1) discovery of animal sounds, (2) manual annotation, (3) supervised training of a deep convolutional neural network, and (4) automated segmentation of recordings. Raw data can be explored by simultaneously examining song events, both individually and in the context of the entire recording, watching synced video, and listening to song. We provide a simple way to visualize many song events from large datasets within an interactive low-dimensional visualization, which facilitates the detection and correction of incorrectly labelled song events. The machine learning model we implemented displays higher accuracy than existing heuristic algorithms and accuracy similar to that of two expert human annotators. We show that SongExplorer allows rapid detection of all song types from new species and of novel song types in previously well-studied species.
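A hedged sketch of the "many song events in a low-dimensional view" idea: log-power spectrograms of short, equal-length audio snippets are flattened and projected with PCA. SongExplorer's actual pipeline (its CNN activations and interactive embedding) is not reproduced here; the sampling rate and spectrogram parameters are placeholders.

```python
# Project song-event snippets into 2-D for visual inspection (sketch).
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

def embed_events(snippets, fs=10_000, n_components=2):
    """snippets: list of equal-length 1-D audio arrays, one per detected song event."""
    feats = []
    for snip in snippets:
        _, _, sxx = spectrogram(snip, fs=fs, nperseg=256, noverlap=128)
        feats.append(np.log1p(sxx).ravel())  # log-power spectrogram as a flat vector
    return PCA(n_components=n_components).fit_transform(np.array(feats))

# Scatter-plotting the returned 2-D coordinates, coloured by label, makes
# mislabelled events stand out as points sitting in the wrong cluster.
```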


Author(s):
Mahmoud Hammad
Mohammed Al-Smadi
Qanita Bani Baker
Sa'ad A. Al-Zboon

Question-answering platforms serve millions of users seeking knowledge and solutions for their daily-life problems. However, many knowledge seekers face the challenge of finding the right answer among similar answered questions, and writers responding to asked questions feel that they need to repeat answers many times for similar questions. This research aims at tackling the problem of learning the semantic text similarity among different asked questions using deep learning. Three models are implemented to address this problem: (i) a supervised machine learning model using XGBoost trained with pre-defined features, (ii) an adapted Siamese-based deep learning recurrent architecture trained with pre-defined features, and (iii) a pre-trained deep bidirectional transformer based on the BERT model. The proposed models were evaluated using a reference Arabic dataset from the mawdoo3.com company. Evaluation results show that the BERT-based model outperforms the other two models with F1=92.99%, whereas the Siamese-based model comes in second place with F1=89.048%, and the XGBoost baseline achieves the lowest result with F1=86.086%.
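A hedged sketch of scoring question similarity with a BERT encoder: mean-pool the token embeddings of each question and compare the two vectors with cosine similarity. The paper fine-tunes a BERT pair classifier on Arabic data; the multilingual checkpoint and the pooling strategy here are assumptions, not the authors' exact setup.

```python
# Cosine similarity of mean-pooled BERT embeddings (sketch).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

@torch.no_grad()
def embed(questions):
    batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state    # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)    # mean over real (non-padding) tokens

def similarity(q1, q2):
    a, b = embed([q1, q2])
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()
```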


Author(s):
Daniel Zügner
Amir Akbarnejad
Stephan Günnemann

Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this extended abstract we summarize the key findings and contributions of our work, in which we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even when only a few perturbations are performed. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful given only limited knowledge about the graph.
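A hedged sketch of a greedy structure attack in the spirit described above: score each candidate edge flip by how much it lowers the target node's classification margin under a linearised two-layer GCN surrogate, and apply the best flips within a budget. This omits Nettack's incremental computations, feature attacks, and unnoticeability checks, and is an illustration rather than the authors' algorithm.

```python
# Greedy edge-flip attack against a linearised GCN surrogate (sketch).
import numpy as np

def normalize(adj):
    """Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a.sum(1))
    return a * d[:, None] * d[None, :]

def margin(adj, feats, weights, target, label):
    """Correct-class logit minus the best wrong-class logit for the target node."""
    a_hat = normalize(adj)
    logits = a_hat @ a_hat @ feats @ weights
    row = logits[target]
    return row[label] - np.max(np.delete(row, label))

def greedy_edge_attack(adj, feats, weights, target, label, budget=3):
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        best_flip, best_margin = None, margin(adj, feats, weights, target, label)
        for v in range(n):
            if v == target:
                continue
            adj[target, v] = adj[v, target] = 1 - adj[target, v]  # try flipping edge (target, v)
            m = margin(adj, feats, weights, target, label)
            if m < best_margin:
                best_flip, best_margin = v, m
            adj[target, v] = adj[v, target] = 1 - adj[target, v]  # undo the flip
        if best_flip is None:
            break
        adj[target, best_flip] = adj[best_flip, target] = 1 - adj[target, best_flip]
    return adj
```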


2021
Author(s):
Si Shen

BACKGROUND: A chief complaint is the initial, general, written description of a patient's symptoms provided during the hospital intake process. Improving the automatic classification of chief complaint text can improve the quality and efficiency of patients' hospital visits. OBJECTIVE: Using Chinese chief complaint data from the Information Centre of the Jiangsu Health Commission, we built models for automatically detecting the correct treating department and conducted various tests on those models using machine learning and deep learning. METHODS: The study tested and compared the performance of a traditional machine learning model, SVM, against the deep learning models Bi-LSTM, Bi-LSTM-CRF, At-Bi-LSTM-CRF, and Bi-GRU-CRF on the chief complaint text data. Both the traditional machine learning model and the deep learning models were trained and tested mainly on a Chinese character-based representation of the text. RESULTS: The Bi-LSTM performed better at the chief complaint classification task than the SVM, and the performance differences among the deep learning models were not obvious. The F-scores of the Bi-LSTM, Bi-LSTM-CRF, At-Bi-LSTM-CRF, and Bi-GRU-CRF models built for the experiment reach 88.10, 87.91, 88.14, and 87.98, respectively. CONCLUSIONS: The Bi-LSTM outperformed the SVM on the chief complaint classification task, while the differences among the deep learning models remained small.
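A minimal sketch of a character-level Bi-LSTM classifier of the kind compared in the study: characters are embedded, a bidirectional LSTM encodes the complaint, and the final hidden states feed a department classifier. The vocabulary size, dimensions, and number of departments are placeholders, not the paper's configuration.

```python
# Character-level Bi-LSTM chief-complaint classifier (sketch).
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=128, num_departments=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_departments)

    def forward(self, char_ids):                 # char_ids: (batch, seq_len) int64
        x = self.embed(char_ids)
        _, (h_n, _) = self.lstm(x)               # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)   # concat forward/backward final states
        return self.fc(h)                        # department logits

# Example: a batch of two complaints, already mapped to character ids and padded.
logits = BiLSTMClassifier()(torch.randint(1, 5000, (2, 30)))
```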

