A Joint Optimization Framework of the Embedding Model and Classifier for Meta-Learning

2021, Vol 2021, pp. 1-11
Author(s): Zhongyu Liu, Xu Chu, Yan Lu, Wanli Yu, Shuguang Miao, ...

The aim of meta-learning is to train a machine to learn quickly and accurately. Improving the performance of meta-learning models is important both for solving small-sample problems and for progress toward general artificial intelligence. A previously proposed meta-learning method based on feature embedding performs well on few-shot problems: a pretrained deep convolutional neural network serves as the embedding model, and the output of one of its layers serves as the feature representation of samples. The main limitation of that method is that it neither fuses the embedding model's low-level texture features with its high-level semantic features nor jointly optimizes the embedding model and the classifier. Therefore, the current study proposes a multilayer adaptive joint training and optimization method for the embedding model. Its main characteristics are the use of a multilayer adaptive hierarchical loss to train the embedding model and the use of a quantum genetic algorithm to jointly optimize the embedding model and the classifier. Validation on multiple public meta-learning benchmark datasets shows that the proposed method achieves higher accuracy than multiple baseline methods.
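
The multilayer adaptive hierarchical loss lends itself to a short sketch. Below is a minimal PyTorch sketch of one way such a loss could be assembled: each tapped embedding layer feeds a small classification head, and learnable weights decide how much each layer's loss contributes. The class name, the per-layer linear heads, and the softmax weighting are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLayerAdaptiveLoss(nn.Module):
    """Hypothetical multilayer adaptive hierarchical loss (sketch)."""

    def __init__(self, layer_dims, num_classes):
        super().__init__()
        # One linear classifier head per tapped embedding layer.
        self.heads = nn.ModuleList(nn.Linear(d, num_classes) for d in layer_dims)
        # Learnable per-layer log-weights, normalized by softmax in forward().
        self.log_w = nn.Parameter(torch.zeros(len(layer_dims)))

    def forward(self, layer_feats, labels):
        # layer_feats: list of (batch, dim) pooled features, one per layer.
        w = F.softmax(self.log_w, dim=0)
        losses = [F.cross_entropy(h(f), labels)
                  for h, f in zip(self.heads, layer_feats)]
        # Weighted sum lets training adapt how much each level contributes.
        return sum(wi * li for wi, li in zip(w, losses))
```

In this reading, low-level (texture) and high-level (semantic) layers are fused through the weighted sum rather than by picking a single layer's output.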

AI, 2021, Vol 2 (2), pp. 195-208
Author(s): Gabriel Dahia, Maurício Pamplona Segundo

We propose a method that can perform one-class classification given only a small number of examples from the target class and none from the others. We formulate the learning of meaningful features for one-class classification as a meta-learning problem in which the meta-training stage repeatedly simulates one-class classification, using the classification loss of the chosen algorithm to learn a feature representation. To learn these representations, we require only multiclass data from similar tasks. We show how the Support Vector Data Description method can be used with our method, and we also propose a simpler variant based on Prototypical Networks that obtains comparable performance, indicating that learning feature representations directly from data may matter more than the choice of one-class algorithm. We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario, obtaining results similar to the state of the art in traditional one-class classification and improving upon one-class classification baselines employed in the few-shot setting.
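
The Prototypical-Networks variant suggests a compact scoring rule: average the embeddings of the few target-class examples into a prototype, then score a query by its negative distance to that prototype. The sketch below assumes a meta-trained embedding function is already available; `one_class_scores` is a hypothetical helper name, not the authors' API.

```python
import torch

def one_class_scores(embed, support_x, query_x):
    """Higher score = more likely to belong to the target class (sketch)."""
    # Prototype = mean embedding of the few target-class support examples.
    prototype = embed(support_x).mean(dim=0, keepdim=True)      # (1, dim)
    # Score queries by negative Euclidean distance to the prototype.
    dists = torch.cdist(embed(query_x), prototype)              # (n_query, 1)
    return -dists.squeeze(1)
```

A threshold on these scores (e.g., calibrated on held-out data) then yields the accept/reject decision of a one-class classifier.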


Information, 2021, Vol 12 (7), pp. 285
Author(s): Wenjing Yang, Liejun Wang, Shuli Cheng, Yongming Li, Anyu Du

Recently, deep learning to hash has been extensively applied to image retrieval due to its low storage cost and fast query speed. However, existing hashing methods that use a convolutional neural network (CNN) to extract image semantic features suffer from insufficient and imbalanced feature extraction: the extracted features contain no contextual information and lack relevance to one another. Furthermore, relaxing the hash codes during optimization leads to an inevitable quantization error. To solve these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) it introduces an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 module and consisting of a position attention module and a channel attention module, to extract the feature information of the image; (2) when computing the spatial and channel attention matrices, it integrates the column-wise average and maximum values of the feature-map matrix to strengthen the feature representation and fully exploit the features at each position; and (3) to reduce quantization error, it designs a new piecewise function to directly guide the discrete binary codes. Experiments on CIFAR-10, NUS-WIDE, and ImageNet-100 show that the DHIDA algorithm achieves better performance.
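
The quantization-error idea in point (3) can be illustrated with a small sketch. The paper's exact piecewise function is not reproduced here, so the clamp-based surrogate below is an assumption chosen only to show the mechanism: a function that is nearly binary in output yet keeps usable gradients near zero, paired with a penalty that pulls relaxed codes toward their binary targets.

```python
import torch

def piecewise_hash(u, t=0.5):
    """Piecewise surrogate for sign() (illustrative assumption, not DHIDA's
    published definition). Inside [-t, t] it is scaled-linear, so gradients
    flow near zero; outside it saturates at +/-1, so outputs stay near binary."""
    return torch.clamp(u / t, min=-1.0, max=1.0)

def quantization_loss(u):
    """Penalize the gap between relaxed codes and their binary targets."""
    return ((u - torch.sign(u)) ** 2).mean()
```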


Author(s): Weiguo Cao, Marc J. Pomeroy, Yongfeng Gao, Matthew A. Barish, Almas F. Abbasi, ...

Texture features have played an essential role in computer-aided diagnosis in medical imaging, and the gray-level co-occurrence matrix (GLCM)-based texture descriptor has become one of the most successful feature sets for these applications. This study aims to increase the potential of these features by introducing multi-scale analysis into the construction of the GLCM texture descriptor. We first introduce a new parameter, stride, into the definition of the GLCM. We then propose three multi-scaling GLCM models, one for each of its three parameters: (1) a learning model by multiple displacements, (2) a learning model by multiple strides (LMS), and (3) a learning model by multiple angles. These models increase the texture information by introducing more texture patterns, and they mitigate the direction-sparsity and dense-sampling problems of the traditional Haralick model. To further analyze the three parameters, we test the three models by performing classification on a dataset of 63 large polyp masses obtained from computed tomography colonography, consisting of 32 adenocarcinomas and 31 benign adenomas. Finally, the proposed methods are compared with several typical GLCM texture descriptors and one deep learning model. LMS obtains the highest performance, raising the prediction power, measured by the area under the receiver operating characteristic curve, to 0.9450 (standard deviation 0.0285), a significant improvement.
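
To make the three parameters concrete, here is a minimal NumPy sketch of a GLCM whose construction exposes all of them: the displacement (dr, dc) fixes the pixel-pair offset (and hence the angle), while stride is read here as a subsampling step over the reference pixels, which is our interpretation of the text rather than the paper's exact definition.

```python
import numpy as np

def glcm(img, dr, dc, stride=1, levels=8):
    """Co-occurrence matrix for offset (dr, dc), with reference pixels
    subsampled every `stride` rows/columns (sketch). `img` must already
    be quantized to integer gray levels in [0, levels)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for r in range(0, rows, stride):
        for c in range(0, cols, stride):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    return m / max(m.sum(), 1)   # normalize to joint probabilities
```

A multi-scale descriptor in the spirit of the three models would then pool Haralick statistics from GLCMs computed over several displacements, strides, or angles.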


2021, Vol 2083 (4), pp. 042005
Author(s): Xueyi Liu, Junhao Dong, Guangyu Tu

Wind turbines are among the most widely used pieces of mechanical equipment. To address the problem of wind turbine fault diagnosis, this paper analyzes the main factors affecting spindle speed and power generation during operation and determines the input and output parameters of a performance prediction model. The performance prediction model of the wind turbine is built with a generalized regression neural network (GRNN), and the smoothing factor of the GRNN is tuned by comparing the prediction accuracy of the model. Based on this model, a sliding data window is used to compute residual evaluation indices for wind turbine speed and power in real time; when an index continuously exceeds a preset threshold, the turbine is judged to be in an abnormal state. In addition, to obtain blades with better aerodynamic performance, a blade aerodynamic optimization method based on a quantum genetic algorithm is proposed. Bézier curve control points serve as design variables representing the continuous chord-length and twist-angle distributions of the blade; a blade shape optimization model that maximizes power is established; and the quantum genetic algorithm optimizes the chord length and twist angle under different constraints. Comparing the quantum genetic algorithm with a classical genetic algorithm under the same parameters and boundary conditions shows that the proposed method obtains a better aerodynamic blade shape and higher wind-energy capture efficiency. The overall approach makes up for the shortcomings of traditional fault diagnosis methods, improving the recognition rate of fault types and the accuracy of fault diagnosis.
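
The sliding-data-window residual check can be sketched compactly. Below is a minimal Python sketch of the monitoring rule as described: residuals between measured values and GRNN predictions are averaged over a window, and an alarm is raised when the index exceeds the preset threshold for several consecutive windows. The window length, threshold, and patience values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def detect_abnormal(measured, predicted, window=50, threshold=0.1, patience=3):
    """Return the start index of the first window at which the residual
    evaluation index has exceeded `threshold` for `patience` consecutive
    windows, or None if the signal looks normal (sketch)."""
    residuals = np.abs(np.asarray(measured, float) - np.asarray(predicted, float))
    over = 0
    for start in range(len(residuals) - window + 1):
        index = residuals[start:start + window].mean()  # residual evaluation index
        over = over + 1 if index > threshold else 0
        if over >= patience:          # index continuously exceeds the threshold
            return start
    return None
```

Requiring the exceedance to persist over consecutive windows is what filters one-off measurement noise from a genuine abnormal state.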


Smart Science, 2018, Vol 7 (1), pp. 16-27
Author(s): Wei-Ling Chen, Hsiang-Yueh Lai, Pi-Yun Chen, Chung-Dann Kan, Chia-Hung Lin

2017, Vol 15 (10), pp. 101203
Author(s): Meng Zheng, Ke Liu, Lihui Liu, Yanqiu Li
