A-iLearn: An adaptive incremental learning model for spoof fingerprint detection

2021, pp. 100210
Author(s):  
Shivang Agarwal ◽  
Ajita Rattani ◽  
C. Ravindranath Chowdary

SupportNet: solving catastrophic forgetting in class incremental learning with support data

2018
Author(s):  
Yu Li ◽  
Zhongxiao Li ◽  
Lizhong Ding ◽  
Yuhui Hu ◽  
Wei Chen ◽  
...  

Abstract

Motivation: In most biological data sets, the amount of data is regularly growing and the number of classes is continuously increasing. To deal with the new data from the new classes, one approach is to train a classification model, e.g., a deep learning model, from scratch on both the old and the new data. This approach is highly computationally costly, and the extracted features are likely to differ substantially from those extracted by the model trained on the old data alone, which leads to poor model robustness. Another approach is to fine-tune the model trained on the old data using the new data. However, this approach often cannot learn new knowledge without forgetting previously learned knowledge, a problem known as catastrophic forgetting. To our knowledge, this problem has not been studied in the field of bioinformatics despite its existence in many bioinformatic problems.

Results: Here we propose a novel method, SupportNet, to solve the catastrophic forgetting problem efficiently and effectively. SupportNet combines the strengths of deep learning and the support vector machine (SVM): the SVM is used to identify the support data within the old data, which are fed to the deep learning model together with the new data for further training, so that the model can review the essential information of the old data while learning the new information. Two powerful consolidation regularizers are applied to ensure the robustness of the learned model. Comprehensive experiments on various tasks, including enzyme function prediction, subcellular structure classification, and breast tumor classification, show that SupportNet drastically outperforms state-of-the-art incremental learning methods and reaches performance similar to that of a deep learning model trained from scratch on both the old and the new data.

Availability: Our program is accessible at: https://github.com/lykaust15/SupportNet.
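The rehearsal loop described in the abstract can be sketched briefly. The code below is an illustrative reconstruction under stated assumptions, not the authors' implementation (which is available at the GitHub link above): a scikit-learn MLPClassifier stands in for the deep model, the SVM is fit on raw inputs rather than on learned features, and the select_support_data helper, toy data, and class layout are all hypothetical.

import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def select_support_data(features, labels, cap=100):
    # Hypothetical helper: an SVM's support vectors are the
    # boundary-defining examples, so their indices pick out
    # the "support data" to rehearse on.
    svm = SVC(kernel="linear").fit(features, labels)
    return svm.support_[:cap]

rng = np.random.default_rng(0)

# Toy "old" task: two classes in a 20-D feature space.
X_old = rng.normal(size=(400, 20))
y_old = (X_old[:, 0] > 0).astype(int)

# Small incrementally trainable network standing in for the deep model.
net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
for _ in range(50):                      # initial training on the old classes
    net.partial_fit(X_old, y_old, classes=[0, 1, 2])

# A new class arrives later.
X_new = rng.normal(loc=3.0, size=(200, 20))
y_new = np.full(200, 2)

# Rehearsal: mix the new data with SVM-selected support data from the
# old set, so further training reviews the essential old information.
idx = select_support_data(X_old, y_old)
X_mix = np.vstack([X_new, X_old[idx]])
y_mix = np.concatenate([y_new, y_old[idx]])
for _ in range(50):
    net.partial_fit(X_mix, y_mix)

print("old-class accuracy:", net.score(X_old, y_old))
print("new-class accuracy:", net.score(X_new, y_new))

The design point the sketch preserves is the selection criterion: the SVM's support vectors are the examples that define the class boundaries, so rehearsing on them aims to retain the essential old information at a fraction of the cost of replaying the full old data set.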


Deep learning has driven a great number of recent advances in machine learning, especially in fields such as NLP and computer vision. These advances rest on supervised learning, in which a dataset is fixed in advance and the model is trained on it in full before making predictions. If new samples arrive later, the model must ordinarily be retrained from scratch, which is computationally costly. Incremental learning avoids this retraining by adding the new samples on top of the features already learnt by the pre-trained model, as sketched below. In this paper we propose a system that overcomes catastrophic forgetting by building on a pre-trained model in this way.
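As a rough illustration of building on a pre-trained model instead of retraining from scratch, the hypothetical PyTorch sketch below freezes a stand-in feature extractor and updates only a small classification head on the newly arrived samples; the backbone, head, shapes, and data are all illustrative stand-ins. Note that such naive fine-tuning on its own is precisely the setting in which catastrophic forgetting arises, which is why rehearsal schemes such as SupportNet's support data are applied on top of it.

import torch
import torch.nn as nn

torch.manual_seed(0)

# "Pre-trained" feature extractor (in practice its weights would come
# from the original training run); here a random stand-in.
backbone = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
head = nn.Linear(64, 3)                  # task head for the current classes

for p in backbone.parameters():          # freeze the previously learnt features
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X_new = torch.randn(128, 20)             # newly arrived samples
y_new = torch.randint(0, 3, (128,))

for _ in range(100):                     # cheap incremental update, no full retraining
    opt.zero_grad()
    loss = loss_fn(head(backbone(X_new)), y_new)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())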


2021, pp. 1-13
Author(s):  
Edmond Q. Wu ◽  
Chin-Teng Lin ◽  
Li-Min Zhu ◽  
Z. R. Tang ◽  
Yu-Wen Jie ◽  
...  

2020, Vol. 208, pp. 106460
Author(s):  
David Muñoz ◽  
Camilo Narváez ◽  
Carlos Cobos ◽  
Martha Mendoza ◽  
Francisco Herrera
