Shot Classification
Recently Published Documents


TOTAL DOCUMENTS: 234 (five years: 154)
H-INDEX: 15 (five years: 5)

2022 · Vol. 122 · pp. 108304
Author(s): Zhengping Hu, Zijun Li, Xueyu Wang, Saiyue Zheng

2021
Author(s): Yirui Wu, Benze Wu, Yunfei Zhang, Shaohua Wan

Abstract: With the development of 5G/6G, IoT, and cloud systems, the amount of data generated, transmitted, and processed keeps increasing, and fast, effective few-shot image classification becomes more and more important. However, many methods need a large number of samples to reach adequate performance, forcing the whole network to scale up in order to extract a large number of effective features, which reduces the efficiency of small-sample classification to a certain extent. To address these problems, we propose a data-augmentation method for few-shot classification: a dilated convolutional network with a built-in augmentation stage. The network can obtain the features required for image classification without increasing the number of samples, and it can exploit a large number of effective features without sacrificing efficiency. The cutout component augments the input image matrix by applying a zero mask over a fixed-size region. The FAU component uses dilated convolution and exploits sequential characteristics to improve the efficiency of the network. We conduct comparative experiments on the miniImageNet and CUB datasets; the proposed method outperforms the compared methods in both effectiveness and efficiency in the 1-shot and 5-shot settings.
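The abstract describes cutout-style augmentation (a fixed-area zero mask applied to the input image) but gives no implementation. Below is a minimal NumPy sketch of that idea under stated assumptions: a square mask at a random position on an H×W(×C) array; the function name and default mask size are illustrative, not taken from the paper.

```python
import numpy as np
from typing import Optional

def cutout(image: np.ndarray, mask_size: int = 16,
           rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Zero out a fixed-size square region at a random position (cutout-style augmentation)."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    # Pick a random mask centre, then clip the square to the image borders.
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x1, x2 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0  # the "fixed area 0 mask" mentioned in the abstract
    return out
```

In practice such a mask is applied on the fly during training so that the number of stored samples does not grow, which matches the abstract's claim of augmentation without increasing the sample count.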


2021
Author(s): Jiahao Wang, Bin Song, Dan Wang, Hao Qin

2021 · Vol. 11 (22) · pp. 10977
Author(s): Youngjae Lee, Hyeyoung Park

In developing a few-shot classification model using deep networks, the limited number of samples in each class makes it difficult to exploit the statistical characteristics of the class distributions. In this paper, we propose a method to address this difficulty by combining a probabilistic similarity based on intra-class statistics with a metric-based few-shot classification model. Noting that the probabilistic similarity estimated from intra-class statistics and the classifier of conventional few-shot classification models share a common assumption on the class distributions, we propose to apply the probabilistic similarity both to compute the loss for episodic learning of the embedding network and to classify unseen test data. By defining the probabilistic similarity as the probability density of the difference vector between two samples with the same class label, we obtain a more reliable estimate of similarity, especially when the number of classes is large. Through experiments on various benchmark datasets, we confirm that the probabilistic similarity improves classification performance, especially when the number of classes is large.
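As a rough illustration of the idea, the sketch below assumes the intra-class difference vectors follow a shared zero-mean Gaussian and scores a query embedding by the density of its difference to each class prototype. The density model, function names, and the use of prototypes are assumptions for illustration; the paper's exact formulation is not reproduced here. The fit step assumes at least one class with two or more support samples.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_difference_density(embeddings: np.ndarray, labels: np.ndarray):
    """Fit a zero-mean Gaussian to difference vectors of same-class support pairs (assumed model)."""
    diffs = []
    for c in np.unique(labels):
        xs = embeddings[labels == c]
        for i in range(len(xs)):
            for j in range(len(xs)):
                if i != j:
                    diffs.append(xs[i] - xs[j])
    diffs = np.stack(diffs)
    dim = diffs.shape[1]
    cov = np.cov(diffs, rowvar=False) + 1e-6 * np.eye(dim)  # regularize for few-shot stability
    return multivariate_normal(mean=np.zeros(dim), cov=cov)

def classify_query(query: np.ndarray, prototypes: np.ndarray, density) -> int:
    """Score each class by the density of (query - prototype) and return the best class index."""
    scores = [density.logpdf(query - p) for p in prototypes]
    return int(np.argmax(scores))
```

The same log-density can also serve as a training signal for episodic learning, which mirrors the abstract's point that one similarity is used both for the loss and for classifying unseen data.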


Author(s): Zhewei Weng, Chunyan Feng, Tiankui Zhang, Yutao Zhu, Zeren Chen

2021 · Vol. 10 (1) · pp. 44
Author(s): Bhargavi Mahesh, Teresa Scholz, Jana Streit, Thorsten Graunke, Sebastian Hettenkofer

Metal oxide (MOX) sensors offer a low-cost solution for detecting volatile organic compound (VOC) mixtures. However, their operation involves time-consuming heating cycles, which slows down data collection and classification. This work introduces a few-shot learning approach that enables rapid classification. In this approach, a model trained on several base classes is fine-tuned to recognize a novel class using a small number (n = 5, 25, 50, and 75) of randomly selected novel-class measurements (shots). The dataset comprises MOX sensor measurements of four different juices (apple, orange, currant, and multivitamin) and air, collected over 10-minute phases using a pulse heater signal. While a high average accuracy of 82.46% is obtained for five-class classification using 75 shots, the model's performance depends on the juice type. One-shot validation showed that not all measurements within a phase are representative, so shots must be selected carefully to achieve high classification accuracy. Error analysis revealed that some measurements are contaminated by the previously measured juice, a characteristic of MOX sensor data that is often overlooked and is equivalent to mislabeling. Three strategies are adopted to overcome this: (E1) fine-tuning after dropping the initial and final measurements of each phase, (E2) fine-tuning after dropping the first half of each phase, and (E3) pretraining with data from the second half of each phase. Each strategy performs best for a specific number of shots: E3 gives the highest performance for 5-shot learning (63.69% accuracy), E2 yields the best results for 25- and 50-shot learning (79% and 87.1% accuracy), and E1 performs best for 75-shot learning (88.6% accuracy). Error analysis also showed that, for all strategies, more than 50% of air misclassifications resulted from contamination, with E1 affected the least. This work demonstrates how strongly data quality can affect prediction performance, especially for few-shot classification methods, and that a data-centric approach can improve the results.
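To make the three data-centric strategies concrete, here is a small Python sketch of how measurements within a phase could be filtered before fine-tuning or pretraining. The split points (the trim_frac parameter) and the function name are illustrative assumptions; the abstract does not specify how many measurements each strategy drops.

```python
from typing import List, Sequence

def filter_phase(measurements: Sequence, strategy: str, trim_frac: float = 0.1) -> List:
    """Select which measurements of a 10-minute phase are kept for (pre)training.

    Strategy names follow the abstract (E1/E2/E3); trim_frac is an assumed value.
    """
    n = len(measurements)
    k = max(1, int(n * trim_frac))
    if strategy == "E1":           # drop the initial and final measurements of the phase
        return list(measurements[k:n - k])
    if strategy in ("E2", "E3"):   # keep only the second half (E2: fine-tuning, E3: pretraining)
        return list(measurements[n // 2:])
    return list(measurements)      # baseline: keep every measurement
```

Filtering like this discards the measurements most likely to be contaminated by the previously measured juice, which is the mislabeling-like effect the error analysis attributes most air misclassifications to.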

