Sensor drift detection in SNG plant using auto-associative kernel regression

Author(s):  
Jae-Min Cha ◽  
Taekyoung Lee ◽  
Joon-Young Kim ◽  
Junguk Shin ◽  
Jinil Kim ◽  
...  

Chemosensors ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 78
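
The title above names auto-associative kernel regression (AAKR), a standard empirical-model technique for sensor drift detection: a query reading is reconstructed as a kernel-weighted average of historical fault-free exemplars, and a growing residual between the measured and reconstructed signals flags drift. The following is a minimal sketch of that general idea, not the paper's specific implementation; the bandwidth and the toy data are illustrative assumptions.

```python
import numpy as np

def aakr_reconstruct(X_memory, x_query, bandwidth=1.0):
    """Auto-associative kernel regression: reconstruct the expected
    (fault-free) sensor vector as a Gaussian-kernel-weighted average
    of memory exemplars. Drift appears as a growing residual between
    the measured and reconstructed signals."""
    # Squared Euclidean distance from the query to each memory vector
    d2 = np.sum((X_memory - x_query) ** 2, axis=1)
    # Gaussian kernel weights, normalized to sum to one
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum()
    # Weighted average of memory exemplars = reconstructed signal
    return w @ X_memory

# Toy usage: a memory of healthy 3-sensor readings, then a drifted query
rng = np.random.default_rng(0)
X_mem = rng.normal(0.0, 0.1, size=(200, 3)) + np.array([1.0, 2.0, 3.0])
healthy = np.array([1.0, 2.0, 3.0])
drifted = np.array([1.0, 2.0, 3.5])   # third sensor reading drifting high
res_h = np.linalg.norm(healthy - aakr_reconstruct(X_mem, healthy))
res_d = np.linalg.norm(drifted - aakr_reconstruct(X_mem, drifted))
# The drifted query's residual exceeds the healthy one, flagging the fault
```

In practice the residual is monitored over time and compared against a threshold (e.g. from the residual distribution on healthy data) to raise a drift alarm.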
Author(s):  
Jianhua Cao ◽  
Tao Liu ◽  
Jianjun Chen ◽  
Tao Yang ◽  
Xiuxiu Zhu ◽  
...  

Gas sensor drift is an important issue in electronic nose (E-nose) systems. This study addresses the setting that requires instant drift compensation over massive online E-nose responses. Recently, an active learning paradigm was introduced for this setting; however, it does not account for the "noisy label" problem caused by the unreliability of the labeling process in real applications. We therefore propose a class-label appraisal methodology, and an associated active learning framework, to assess and correct noisy labels. To evaluate the proposed methods, we used datasets from two E-nose systems. The experimental results show that the proposed methodology helps the E-noses achieve higher accuracy with less computation than the reference methods. We conclude that the proposed class-label appraisal mechanism is an effective means of enhancing the robustness of active learning-based E-nose drift compensation.
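
The abstract does not specify how the class-label appraisal works, so the sketch below shows one common way such a step can be realized, purely as an illustrative assumption: each queried label is checked against the majority vote of its k nearest neighbours in feature space, and labels that lose the vote are treated as noisy and corrected before retraining. The function name, the neighbour count, and the toy data are all hypothetical.

```python
import numpy as np

def appraise_labels(X, y, k=5):
    """Illustrative class-label appraisal: flag a label as noisy when it
    disagrees with the majority vote of its k nearest neighbours, and
    replace it with that majority label."""
    y = y.copy()
    # Pairwise Euclidean distances; exclude each point from its own vote
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    for i in range(len(y)):
        nn = np.argsort(D[i])[:k]        # k nearest neighbours of sample i
        votes = np.bincount(y[nn])
        maj = int(votes.argmax())
        if votes[maj] > k // 2 and maj != y[i]:
            y[i] = maj                   # relabel the suspected noisy sample
    return y

# Toy usage: two well-separated classes with one deliberately flipped label
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
y[0] = 1                                 # inject a noisy label
y_clean = appraise_labels(X, y, k=5)     # the flipped label is restored
```

A step like this is cheap relative to retraining and keeps label noise from the querying process out of the active learner's training set.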



2013 ◽  
Vol 25 (4) ◽  
pp. 829-853 ◽  
Author(s):  
Hira L. Koul ◽  
Weixing Song


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Abdulkadir Canatar ◽  
Blake Bordelon ◽  
Cengiz Pehlevan

Abstract  
A theoretical understanding of generalization remains an open problem for many machine learning models, including deep networks where overparameterization leads to better performance, contradicting the conventional wisdom from classical statistics. Here, we investigate generalization error for kernel regression, which, besides being a popular machine learning method, also describes certain infinitely overparameterized neural networks. We use techniques from statistical mechanics to derive an analytical expression for generalization error applicable to any kernel and data distribution. We present applications of our theory to real and synthetic datasets, and for many kernels including those that arise from training deep networks in the infinite-width limit. We elucidate an inductive bias of kernel regression to explain data with simple functions, characterize whether a kernel is compatible with a learning task, and show that more data may impair generalization when noisy or not expressible by the kernel, leading to non-monotonic learning curves with possibly many peaks.
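
For reference, the estimator whose generalization error this theory analyses is kernel (ridge) regression, f(x) = k(x, X)(K + λI)⁻¹y. The following is a minimal self-contained sketch with a Gaussian (RBF) kernel; the bandwidth, ridge parameter, and sine-fitting example are illustrative choices, not from the paper.

```python
import numpy as np

def krr_predict(X_train, y_train, X_test, bandwidth=0.5, ridge=1e-3):
    """Kernel ridge regression with a Gaussian (RBF) kernel:
    f(x) = k(x, X_train) @ (K + ridge * I)^{-1} @ y_train."""
    def rbf(A, B):
        # Squared distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2 * A @ B.T)
        return np.exp(-d2 / (2 * bandwidth**2))
    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
    return rbf(X_test, X_train) @ alpha

# Toy usage: learn sin(x) from 50 samples and evaluate on held-out points
X_tr = np.linspace(0.0, 2 * np.pi, 50).reshape(-1, 1)
y_tr = np.sin(X_tr).ravel()
X_te = np.linspace(0.1, 6.0, 25).reshape(-1, 1)
mse = np.mean((krr_predict(X_tr, y_tr, X_te) - np.sin(X_te).ravel()) ** 2)
```

The paper's learning-curve analysis tracks how the test error of exactly this kind of predictor evolves with the number of training samples, for a given kernel and data distribution.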







2014 ◽  
Vol 50 (2) ◽  
pp. 77-79 ◽  
Author(s):  
Haopeng Zhang ◽  
Zhiguo Jiang


2016 ◽  
Vol 123 ◽  
pp. 53-63 ◽  
Author(s):  
Kaibing Zhang ◽  
Xinbo Gao ◽  
Jie Li ◽  
Hongxing Xia

