Classifier-Guided Visual Correction of Noisy Labels for Image Classification Tasks

2020 ◽  
Vol 39 (3) ◽  
pp. 195-205 ◽  
Author(s):  
A. Bäuerle ◽  
H. Neumann ◽  
T. Ropinski


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Qiang Cai ◽  
Fenghai Li ◽  
Yifan Chen ◽  
Haisheng Li ◽  
Jian Cao ◽  
...  

Along with the strong representation ability of the convolutional neural network (CNN), image classification tasks have achieved considerable progress. However, the majority of works focus on designing complicated and redundant architectures to extract informative features for improving classification performance. In this study, we concentrate on rectifying the incomplete outputs of a CNN. Concretely, we propose an innovative image classification method based on Label Rectification Learning (LRL) through a kernel extreme learning machine (KELM). It consists of two steps: (1) preclassification, extracting incomplete labels through a pretrained CNN, and (2) label rectification, rectifying the generated incomplete labels with the KELM to obtain the rectified labels. Experiments conducted on publicly available datasets demonstrate the effectiveness of our method. Notably, our method is extensible and can be easily integrated with off-the-shelf networks to improve their performance.
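The abstract leaves the rectification step unspecified; below is a minimal, illustrative sketch of a standard kernel extreme learning machine (in the closed-form style of Huang et al.) applied to a CNN's softmax outputs. The RBF kernel, the hyperparameters C and gamma, and the use of softmax vectors as inputs are assumptions for illustration, not details confirmed by the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Pairwise RBF kernel between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class KELM:
    """Kernel extreme learning machine with a closed-form solution."""
    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        # X: (N, d) CNN soft outputs; T: (N, k) one-hot ground-truth labels.
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        # beta = (I/C + K)^(-1) T : regularized least squares in kernel space.
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta
```

Rectified labels would then be read off as np.argmax(KELM().fit(train_probs, onehot_labels).predict(test_probs), axis=1).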


Author(s):  
Po-Ming Lee ◽  
Tzu-Chien Hsiao

Recent studies have utilized the color, texture, and composition information of images to achieve affective image classification. However, features in the spatial-frequency domain, which have proven useful for traditional pattern recognition, have not yet been tested in this field. Furthermore, the experiments conducted by previous studies are not internationally comparable due to the experimental paradigms adopted. In addition, recent methodological advances, namely the Hilbert-Huang Transform (HHT) (i.e., Empirical Mode Decomposition (EMD) followed by the Hilbert Transform (HT)), have improved the resolution of frequency analysis. Hence, the goal of this research is to perform the affective image-classification task under a standard experimental paradigm introduced by psychologists, in order to produce internationally comparable and reproducible results, and also to explore the affective hidden patterns of images in the spatial-frequency domain. To accomplish these goals, multiple human-subject experiments were conducted in the laboratory. The Extended Classifier System (XCS) was used for model building because XCS has been applied to a wide range of classification tasks and has proven competitive in pattern recognition. To exploit the information in the spatial-frequency domain, the traditional EMD was extended to a two-dimensional version. In summary, the model built using XCS achieves an Area Under Curve (AUC) of 0.91 and an accuracy rate above 86%. The XCS results were compared with those of traditional machine-learning algorithms (e.g., the Radial-Basis Function Network (RBF Network)) that are normally used for classification tasks. Owing to proper selection of features for model building, user-independent findings were obtained; for example, horizontal visual stimulation was found to contribute more to emotion elicitation than vertical visual stimulation. The effects of hue, saturation, and brightness are also presented.
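As a concrete illustration of the HHT step described above, the sketch below computes the instantaneous amplitude and frequency of a single intrinsic mode function via the Hilbert transform. It assumes a 1-D IMF has already been produced by an EMD implementation (the paper's two-dimensional EMD is not reproduced here), and the sampling rate fs is an assumed acquisition parameter.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_features(imf, fs):
    """Instantaneous amplitude/frequency of one IMF via the Hilbert transform."""
    analytic = hilbert(imf)                        # analytic signal x + i*H[x]
    amplitude = np.abs(analytic)                   # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))          # continuous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
    # Summary statistics of the spatial-frequency content, usable as
    # classifier inputs (e.g., for an XCS rule-based model):
    return amplitude.mean(), inst_freq.mean(), inst_freq.std()
```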


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1594
Author(s):  
Haifeng Li ◽  
Xin Dou ◽  
Chao Tao ◽  
Zhixiang Wu ◽  
Jie Chen ◽  
...  

Image classification is a fundamental task in remote sensing image processing. In recent years, deep convolutional neural networks (DCNNs) have achieved significant breakthroughs in natural image recognition. The remote sensing field, however, still lacks a large-scale benchmark similar to ImageNet. In this paper, we propose a remote sensing image classification benchmark (RSI-CB) based on massive, scalable, and diverse crowdsourced data. Using crowdsourced data such as OpenStreetMap (OSM) data, ground objects in remote sensing images can be annotated effectively using points of interest, vector data from OSM, or other crowdsourced data. These annotated images can then be used in remote sensing image classification tasks. Based on this method, we construct a worldwide large-scale benchmark for remote sensing image classification. The benchmark has a large-scale geographical distribution and a large total number of images: it contains six categories with 35 sub-classes and more than 24,000 images of size 256 × 256 pixels. Its classification system of ground objects is defined according to the national standard of land-use classification in China and is inspired by the hierarchy mechanism of ImageNet. Finally, we conduct numerous experiments to compare RSI-CB with the SAT-4, SAT-6, and UC-Merced data sets. The experiments show that RSI-CB is more suitable as a benchmark for remote sensing image classification tasks than other benchmarks in the big data era and has many potential applications.
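The crowdsourced annotation idea is concrete enough to sketch. The snippet below assigns a georeferenced tile the majority category of the OSM points of interest falling inside its bounds; the Tile and POI structures, the category strings, and the min_votes threshold are hypothetical illustrations, not the RSI-CB construction pipeline.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class POI:            # one OpenStreetMap point of interest
    lon: float
    lat: float
    category: str     # e.g., "water_area", "transportation"

@dataclass
class Tile:           # geographic bounds of one 256 x 256 image tile
    west: float
    south: float
    east: float
    north: float

def label_tile(tile, pois, min_votes=3):
    """Label a tile by majority POI category, or None if evidence is thin."""
    inside = [p.category for p in pois
              if tile.west <= p.lon <= tile.east
              and tile.south <= p.lat <= tile.north]
    if len(inside) < min_votes:
        return None   # too little crowdsourced evidence; leave unlabeled
    return Counter(inside).most_common(1)[0][0]
```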


2018 ◽  
Vol 7 (2.24) ◽  
pp. 541
Author(s):  
Zainab Zaveri ◽  
Dhruv Gosain ◽  
Arul Prakash M

We present an optical compute engine implementing deep CNNs. CNNs are designed in an organized and hierarchical manner, with convolutional layers and subsampling layers alternating with each other; thus the intricacy of the data per layer escalates as we traverse the layered structure, which yields more efficient results when dealing with complex data sets and computations. CNNs are realised in a distinctive way and differ from other neural networks in how their convolutional and subsampling layers are organised. DCNNs give us very proficient results when it comes to image classification tasks. Recently, it has been understood that generalization is more important than a neural network's depth for more optimised image classification. Our feature extractors are learned in an unsupervised way; hence the results become more precise after every backpropagation and error-correction step.
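To make the alternating convolution/subsampling structure concrete, here is a minimal PyTorch sketch; the channel counts, the 32 × 32 input size, and the 10-class head are assumptions for illustration, and the optical realisation itself is not modeled.

```python
import torch
import torch.nn as nn

class AlternatingCNN(nn.Module):
    """Convolutional and subsampling layers alternate, as described above."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # subsampling layer
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # subsampling layer
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```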


2021 ◽  
Vol 15 ◽  
Author(s):  
Zhikui Chen ◽  
Xu Zhang ◽  
Wei Huang ◽  
Jing Gao ◽  
Suhua Zhang

Deep transfer learning aims at dealing with challenges in new tasks that have insufficient samples. In few-shot learning scenarios, however, the low diversity of the few known training samples makes them prone to be dominated by specificity, leading to one-sided local features instead of reliable global features of the actual categories they belong to. To alleviate this difficulty, we propose a cross-modal few-shot contextual transfer method that leverages contextual information as a supplement and learns context-aware transfer in few-shot image classification scenes, fully utilizing the information in heterogeneous data. The similarity measure in the image classification task is reformulated by fusing textual semantic modal information with visual semantic modal information extracted from images; this serves as a supplement and helps to inhibit sample specificity. In addition, to better extract local visual features and reorganize the recognition pattern, a deep transfer scheme is used to reuse a powerful feature extractor from a pre-trained model. Simulation experiments show that the introduction of cross-modal and intra-modal contextual information can effectively suppress the deviation in defining category features from few samples and improve the accuracy of few-shot image classification tasks.
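The reformulated similarity measure, fusing visual and textual modal information, can be sketched as follows. The cosine metric, the per-class prototypes built from the few support shots, and the mixing weight alpha are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_similarity(q_vis, p_vis, q_txt, p_txt, alpha=0.7):
    """Blend visual and textual similarity; alpha is an assumed weight."""
    return alpha * cosine(q_vis, p_vis) + (1 - alpha) * cosine(q_txt, p_txt)

def classify(q_vis, q_txt, prototypes):
    """prototypes: {label: (visual_proto, text_proto)} from the support set."""
    return max(prototypes,
               key=lambda c: fused_similarity(q_vis, prototypes[c][0],
                                              q_txt, prototypes[c][1]))
```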


2020 ◽  
Author(s):  
Ying Bi ◽  
Bing Xue ◽  
Mengjie Zhang

Feature extraction is essential for solving image classification by transforming low-level pixel values into high-level features. However, extracting effective features from images is challenging due to high variations across images in scale, rotation, illumination, and background. Existing methods often have a fixed model complexity and require domain expertise. Genetic programming, with its flexible representation, can find good solutions without the use of domain knowledge. This paper proposes a new genetic programming-based approach to automatically learning informative features for different image classification tasks. In the new approach, a number of image-related operators, including filters, pooling operators, and feature extraction methods, are employed as functions. A flexible program structure is developed to integrate different functions and terminals into a single tree/solution. The new approach can evolve solutions of variable depths to extract various numbers and types of features from the images. The new approach is examined on 12 image classification tasks of varying difficulty and compared with a large number of effective algorithms. The results show that the new approach achieves better classification performance than most benchmark methods. The analysis of the evolved programs/solutions and the visualisation of the learned features provide deep insights into the proposed approach.
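To make the flexible tree representation concrete, the sketch below evaluates one small, fixed program tree whose functions are image filters, a pooling operator, and a feature extractor. The operator set and the example tree are hypothetical stand-ins, not the paper's evolved function set.

```python
import numpy as np
from scipy import ndimage

# Tree functions: filters map image -> image; the extractor maps image -> features.
def gauss(img):     return ndimage.gaussian_filter(img, sigma=1.0)
def sobel(img):     return ndimage.sobel(img)
def maxpool(img):   return img[::2, ::2]          # simple stride-2 subsampling
def hist_feat(img): return np.histogram(img, bins=16, density=True)[0]

FUNCS = {"gauss": gauss, "sobel": sobel, "maxpool": maxpool, "hist_feat": hist_feat}

# One program/solution as a nested tuple, read inside-out:
# hist_feat(maxpool(sobel(gauss(image))))
PROGRAM = ("hist_feat", ("maxpool", ("sobel", ("gauss", "IMG"))))

def evaluate(node, image):
    """Recursively evaluate a program tree; 'IMG' is the input terminal."""
    if node == "IMG":
        return image
    name, child = node
    return FUNCS[name](evaluate(child, image))

features = evaluate(PROGRAM, np.random.rand(64, 64))   # a 16-D feature vector
```

In the evolutionary setting, trees like PROGRAM would be randomly generated, mutated, and recombined, with the classification performance of the extracted features serving as fitness.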


2021 ◽  
Vol 13 (23) ◽  
pp. 4823
Author(s):  
Cheng Shi ◽  
Yenan Dang ◽  
Li Fang ◽  
Zhiyong Lv ◽  
Huifang Shen

Multi-sensor images can provide supplementary information, usually leading to better performance in classification tasks. However, general deep neural network-based multi-sensor classification methods learn each sensor image separately, followed by stacked concatenation for feature fusion. This approach incurs a large training time cost and may cause insufficient feature fusion. Aiming at efficient multi-sensor feature extraction and fusion with a lightweight network, this paper proposes an attention-guided classification network (AGCNet), especially for multispectral (MS) and panchromatic (PAN) image classification. In the proposed method, a share-split network (SSNet), comprising a shared branch and multiple split branches, performs feature extraction for each sensor image: the shared branch learns the basis features of MS and PAN images with fewer learnable parameters, and the split branches extract the privileged features of each sensor image via multiple task-specific attention units. Furthermore, a selective classification network (SCNet) with a selective kernel unit is used for adaptive feature fusion. The proposed AGCNet can be trained in an end-to-end fashion without manual intervention. Experimental results are reported on four MS and PAN datasets and compared with state-of-the-art methods; the classification maps and accuracies show the superiority of the proposed AGCNet model.
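The share-split and selective-kernel ideas can be approximated in a short PyTorch sketch. The channel widths, the SE-style attention units, the assumption that MS and PAN inputs share spatial size, and the classification head are all illustrative choices, not the published AGCNet architecture.

```python
import torch
import torch.nn as nn

class AGCNetSketch(nn.Module):
    """Illustrative share-split feature extraction + selective fusion."""
    def __init__(self, ms_ch=4, pan_ch=1, mid=32, num_classes=10):
        super().__init__()
        # Per-sensor projections onto a common channel width.
        self.proj_ms = nn.Conv2d(ms_ch, mid, 1)
        self.proj_pan = nn.Conv2d(pan_ch, mid, 1)
        # Shared branch: basis features, weights shared across sensors.
        self.shared = nn.Sequential(nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU())
        # Split branches: task-specific channel attention per sensor.
        def attn():
            return nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                 nn.Conv2d(mid, mid, 1), nn.Sigmoid())
        self.attn_ms, self.attn_pan = attn(), attn()
        # Selective fusion: a softmax gate weights each branch per channel.
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(mid, 2 * mid, 1))
        self.head = nn.Linear(mid, num_classes)

    def forward(self, ms, pan):
        f_ms = self.shared(self.proj_ms(ms))
        f_pan = self.shared(self.proj_pan(pan))
        f_ms = f_ms * self.attn_ms(f_ms)        # privileged MS features
        f_pan = f_pan * self.attn_pan(f_pan)    # privileged PAN features
        w = torch.softmax(self.gate(f_ms + f_pan)
                          .view(-1, 2, f_ms.shape[1], 1, 1), dim=1)
        fused = w[:, 0] * f_ms + w[:, 1] * f_pan   # adaptive feature fusion
        return self.head(fused.mean(dim=(2, 3)))   # global pool + classifier
```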

