incremental update
Recently Published Documents

TOTAL DOCUMENTS: 104 (five years: 15)
H-INDEX: 13 (five years: 1)

2022 ◽  
Vol 16 (2) ◽  
pp. 1-27
Author(s):  
Yang Yang ◽  
Hongchen Wei ◽  
Zhen-Qiang Sun ◽  
Guang-Yu Li ◽  
Yuanchun Zhou ◽  
...  

Open set classification (OSC) tackles the problem of determining whether data are in-class or out-of-class at inference time, when only a set of in-class examples is provided at training time. Traditional OSC methods usually train discriminative or generative models on the available in-class data and then use the pre-trained models to classify test data directly. However, these methods suffer from the embedding confusion problem: some out-of-class instances are mixed with in-class ones of similar semantics, making them difficult to classify. To solve this problem, we incorporate semi-supervised learning to develop a novel OSC algorithm, S2OSC, which combines out-of-class instance filtering and model re-training in a transductive manner. In detail, given a pool of newly arriving test data, S2OSC first filters out the most clearly out-of-class instances using the pre-trained model and annotates a super-class for them. Then, S2OSC trains a holistic classification model by combining the in-class and out-of-class labeled data with the remaining unlabeled test data in a semi-supervised paradigm. Furthermore, considering that data usually arrive as streams in real applications, we extend S2OSC into an incremental update framework (I-S2OSC) and adopt a knowledge memory regularization to mitigate the catastrophic forgetting problem during incremental updates. Despite the simplicity of the proposed models, experimental results show that S2OSC achieves state-of-the-art performance across a variety of OSC tasks, including 85.4% F1 on CIFAR-10 with only 300 pseudo-labels. We also demonstrate how S2OSC can be extended effectively to the incremental OSC setting with streaming data.
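The filtering step described in the abstract can be illustrated with a minimal confidence-threshold sketch. The function name, threshold value, and toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def filter_out_of_class(probs, threshold=0.6):
    """Flag test instances whose maximum in-class confidence is low.

    probs: (n, k) softmax outputs of the pre-trained in-class model.
    Returns a boolean mask: True marks instances treated as clearly
    out-of-class (they receive the super-class pseudo-label); the rest
    stay unlabeled for the semi-supervised re-training step.
    """
    return probs.max(axis=1) < threshold

# Toy pool of three test instances over two in-class categories.
probs = np.array([[0.90, 0.10],   # confident in-class
                  [0.20, 0.80],   # confident in-class
                  [0.55, 0.45]])  # ambiguous: likely out-of-class
mask = filter_out_of_class(probs)
```

In a sketch like this, the masked instances would be pseudo-labeled with the super-class and pooled with the in-class labels for the semi-supervised re-training stage.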


2021 ◽  
pp. 1-10
Author(s):  
Aamir Ali ◽  
Muhammad Asim

Generally, big interaction networks keep records of actors' interactions over a certain period. With the rapid growth in these networks' user bases, the demand for frequent subgraph mining over large databases is increasingly intense. However, most existing studies of frequent subgraph mining have not considered the temporal information of the graph. To fill this research gap, this article presents a novel temporal frequent subgraph-based mining algorithm (TFSBMA) using Spark. TFSBMA performs frequent subgraph mining with a minimum support threshold in a Spark environment. The proposed algorithm analyzes temporal frequent subgraphs (TFS) using the Frequent Subgraph Mining Based Using Spark (FSMBUS) method with a minimum support threshold and evaluates their frequencies in a temporal manner. Furthermore, based on the FSMBUS results, the study also computes TFS using an incremental update strategy. Experimental results show that the proposed algorithm can accurately and efficiently compute all TFS with their corresponding frequencies. In addition, we applied the proposed algorithm to a real-world dataset with artificial time information, which confirms its practical usability.
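The incremental update strategy can be sketched at a high level: rather than re-mining each snapshot from scratch, pattern counts are adjusted by what was added and removed between time windows, and the support threshold is re-applied to the updated counts. The pattern encoding and function names below are illustrative assumptions, not the TFSBMA implementation:

```python
from collections import Counter

def update_counts(counts, added, removed):
    """Adjust subgraph-pattern frequencies incrementally when a new
    time window arrives, instead of re-mining the whole graph."""
    counts.update(Counter(added))
    counts.subtract(Counter(removed))
    return counts

def frequent(counts, min_support):
    """Patterns meeting the minimum support threshold."""
    return {p for p, c in counts.items() if c >= min_support}

# Counts mined from the previous window; "A-B" encodes a toy pattern.
counts = Counter({"A-B": 3, "B-C": 1})
update_counts(counts, added=["B-C", "B-C"], removed=["A-B"])
```

With a minimum support of 3, only "B-C" remains frequent after the update, and the unchanged patterns never had to be recounted.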


Author(s):  
Xu Lu ◽  
Xuguang Wang

Abstract

Short-term spin-up for strong storms is a known difficulty for the operational Hurricane Weather Research and Forecasting (HWRF) model after assimilating high-resolution inner-core observations. Our previous study attributed this short-term intensity prediction issue to the incompatibility between the HWRF model and the data assimilation (DA) analysis. While improving the physics and resolution of the model was found helpful, this study focuses on further improving intensity predictions through the four-dimensional incremental analysis update (4DIAU).

In the traditional 4DIAU, increments are pre-determined by subtracting background forecasts from analyses. Such pre-determined increments implicitly assume linear evolution during the update, which is hardly valid for rapidly evolving hurricanes. To confirm this hypothesis, a corresponding 4D analysis nudging (4DAN) method, which uses online increments, is first compared with 4DIAU in an oscillation model. Then, variants of 4DIAU are proposed to improve its application to nonlinear systems. Next, 4DIAU, 4DAN, and their proposed improvements are implemented in the HWRF 4DEnVar DA system and investigated with Hurricane Patricia (2015).

Results from both the oscillation model and the HWRF model show that: (1) the pre-determined increments in 4DIAU can be detrimental when there are discrepancies between the updated and background forecasts during a nonlinear evolution; (2) 4DAN can improve the performance of the incremental update upon 4DIAU, but its improvements are limited by over-filtering; (3) relocating the initial background before the incremental update can improve the corresponding traditional methods; and (4) the feature-relative 4DIAU method improves the incremental update the most and produces the best track and intensity predictions for Patricia among all experiments.
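The core of an incremental analysis update can be sketched in a few lines: a pre-computed increment is split into equal portions and added during the model integration rather than all at once. The toy model below (function names and values are illustrative assumptions, not the HWRF implementation) also shows why the scheme leans on the linear-evolution assumption questioned above: only for a linear step do the portions reproduce the full increment exactly.

```python
def forecast_with_iau(x0, increment, n_steps, step_fn):
    """Integrate the model while adding a pre-computed analysis
    increment in equal portions (the IAU idea); adding it in one
    shot at the initial time is the standard intermittent update."""
    x = x0
    for _ in range(n_steps):
        x = step_fn(x) + increment / n_steps
    return x

# Linear toy model (persistence step), so the portions add up exactly.
x_final = forecast_with_iau(x0=0.0, increment=1.0, n_steps=4,
                            step_fn=lambda x: x)
```

Under a nonlinear `step_fn`, the state drifts away from the trajectory the increment was computed against, which is the discrepancy the variants above are designed to reduce.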


2021 ◽  
Author(s):  
Ruth E Timme ◽  
Maria Balkey ◽  
William Wolfgang ◽  
Errol Strain

PURPOSE: Guidance on how to populate NCBI's metadata packages, maximizing interoperability for foodborne pathogen surveillance. SCOPE: This protocol provides detailed instructions for populating the following two templates: 1. BioSample metadata: guidelines for populating the GenomeTrakr-extended pathogen package. 2. SRA metadata: NCBI's generic sequence metadata template for SRA submissions. Versions: v6: Added the One Health Enteric package presented at the IAFP 2021 meeting. v7: Updated the picklists in the GenomeTrakr-extended pathogen package, "GT-pathogen package-OHE v0.2.2.xlsx", and added an incremental update file for the DRAFT One Health Enteric Package that includes extensive edits compared to v6.


Author(s):  
Zhe-Hui Lin ◽  
Shu-Chih Yang ◽  
Eugenia Kalnay

The analysis correction made by data assimilation (DA) can introduce model shock or artificial signals, leading to degraded forecasts. In this study, we propose an Ensemble Transform Kalman Incremental Smoother (ETKIS) as an incremental update solution for ETKF-based algorithms. ETKIS not only shares the advantage of other incremental update schemes in improving the balance of the analysis but also provides effective incremental corrections, even under strongly nonlinear dynamics. Results with the shallow-water model show that ETKIS can smooth out the imbalance associated with the use of covariance localization. More importantly, ETKIS preserves moving signals better than the overly smoothed corrections derived by other incremental update schemes. Results from the Lorenz 3-variable model show that ETKIS and ETKF achieve similar accuracy at the end of the assimilation window, while the time-varying increments of ETKIS allow the ensemble to avoid strong corrections during strong nonlinearity. ETKIS shows benefits over 4DIAU by better capturing the evolving error and constraining the over-dispersive spread under long assimilation windows or high perturbation growth rates.
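The advantage of time-varying, online corrections over a fixed pre-determined schedule can be illustrated with a toy nudging loop: the correction at each step is recomputed from the current gap to the analysis, so it adapts as the state evolves. This is an analogy only, not the ETKIS algorithm; the gain value and function names are illustrative assumptions.

```python
def nudge_toward_analysis(x0, analysis, n_steps, step_fn, gain=0.5):
    """Online incremental correction: after each model step, nudge the
    state toward the analysis by a fraction of the *remaining* gap,
    rather than applying a fixed pre-computed increment schedule."""
    x = x0
    for _ in range(n_steps):
        x = step_fn(x)
        x += gain * (analysis - x)
    return x

# Persistence step: the remaining gap halves on every iteration.
x_final = nudge_toward_analysis(x0=0.0, analysis=1.0, n_steps=3,
                                step_fn=lambda x: x)
```

Because each correction depends on the current state, a scheme like this responds to nonlinear drift during the window, which a fixed increment schedule cannot do.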

