Keywords Assignment to Fixed Image Region Segmentations Using Fuzzy SVMs

2012 ◽  
Vol 429 ◽  
pp. 236-241 ◽  
Author(s):  
Ye Ji ◽  
Yan Chen ◽  
Yan Cao ◽  
Li Li Qu

The basic idea of this paper is that multiple keywords can be assigned to an image through fixed region segmentation. We divide a single image into 4-level regions. For each region, a combined feature is extracted and fed into trained Fuzzy SVMs for classification; Fuzzy SVMs have been shown to generalize better than conventional SVMs. The classification values for each category are computed, and the keywords are assigned based on these values.
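As a rough illustration of the fixed-region idea (not the authors' code), the sketch below splits an image into a 4-level hierarchy of quadrant regions, each of which would then yield a feature vector for the per-category classifiers:

```python
import numpy as np

def quadtree_regions(img, levels=4):
    """Split an image into hierarchical quadrant regions.

    Level 1 is the whole image; each subsequent level doubles the grid
    in both dimensions, giving 1 + 4 + 16 + 64 = 85 regions for 4 levels.
    """
    h, w = img.shape[:2]
    regions = []
    for lv in range(levels):
        n = 2 ** lv                      # n x n grid at this level
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, (i + 1) * h // n
                c0, c1 = j * w // n, (j + 1) * w // n
                regions.append(img[r0:r1, c0:c1])
    return regions

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
regs = quadtree_regions(img)
print(len(regs))  # 85 regions across the 4 levels
```

Each region's combined feature would be scored by the trained classifiers, and keywords assigned from the highest-scoring categories.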

2013 ◽  
Vol 12 (6) ◽  
pp. 1168-1175 ◽  
Author(s):  
Gangyi Wang ◽  
Guanghui Ren ◽  
Lihui Jiang ◽  
Taifan Quan

2011 ◽  
Vol 135-136 ◽  
pp. 522-527 ◽  
Author(s):  
Gang Zhang ◽  
Shan Hong Zhan ◽  
Chun Ru Wang ◽  
Liang Lun Cheng

Ensemble pruning searches for a selective subset of members that performs as well as, or better than, the ensemble of all members. However, the accuracy/diversity pruning framework does not consider the generalization ability of the target ensemble, and the relationship between the two is not clear. In this paper, we prove that an ensemble formed from members with better generalization ability itself generalizes better. We adopt learning with both labeled and unlabeled data to improve the generalization ability of member learners: a data-dependent kernel determined by a set of unlabeled points is plugged into the individual kernel learners, and ensemble pruning is then carried out as in much previous work. The proposed method is suitable for both the single-instance and the multi-instance learning frameworks. Experimental results on 10 UCI data sets for single-instance learning and 4 data sets for multi-instance learning show that the subensemble formed by the proposed method is effective.
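To make the pruning step concrete, here is a minimal sketch (an assumption of mine, not the paper's algorithm) that keeps the k members with the best held-out accuracy; the paper additionally improves each member beforehand via a data-dependent kernel built from unlabeled points:

```python
def prune_ensemble(members, X_val, y_val, k):
    """Keep the k members with the best validation accuracy (simple
    accuracy-ordered pruning; accuracy/diversity frameworks add a
    diversity term on top of this)."""
    def acc(m):
        return sum(m(x) == y for x, y in zip(X_val, y_val)) / len(y_val)
    return sorted(members, key=acc, reverse=True)[:k]

# Toy members: threshold classifiers on a 1-D feature.
members = [lambda x, t=t: int(x > t) for t in (0.1, 0.5, 0.9)]
X_val = [0.2, 0.4, 0.6, 0.8]
y_val = [0, 0, 1, 1]
best = prune_ensemble(members, X_val, y_val, k=1)
print(best[0](0.7))  # the t=0.5 member is kept; it predicts 1 for x=0.7
```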


2021 ◽  
Vol 1971 (1) ◽  
pp. 012068
Author(s):  
Shufang Xu ◽  
Yaowen Fu ◽  
Xiaoyi Sun

2001 ◽  
Vol 24 (2) ◽  
pp. 221-235 ◽  
Author(s):  
Hui‐Yu Huang ◽  
Yung‐Sheng Chen ◽  
Wen‐Hsing Hsu

2018 ◽  
Vol 37 (3) ◽  
pp. 191 ◽  
Author(s):  
Jiří Dvořák ◽  
Jan Švihlík ◽  
Jan Kybic ◽  
Barbora Radochová ◽  
Jiří Janáček ◽  
...  

The present paper deals with the problem of volume estimation of individual objects from a single 2D view. Our main application is volume estimation of pancreatic (Langerhans) islets, where the single-2D-view constraint comes from the time and equipment limitations of the standard clinical procedure.

Two main approaches are followed in this paper. First, two regression-based methods are proposed, using a set of simple shape descriptors of the segmented image of the islet. Second, two example-based methods are proposed, based on a database of islets with known volume. For training and evaluation, islet volumes were determined by OPT microscopy and semi-automatic stereological volume estimation using so-called Fakir probes.

The performance of the single-image volume estimation methods is studied on a set of 99 islets from human donors. Further experiments were also performed on a stone dataset and on synthetic 3D shapes generated using a flexible stochastic particle model. The proposed methods are fast, and the experimental results show that in most situations they perform significantly better than the methods currently used in clinical practice, which are based on simple spherical or ellipsoidal models.
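A minimal sketch of the regression-based approach, under simplifying assumptions of mine (a single descriptor, projected area A, and spherical objects; the paper uses several descriptors and real OPT-measured volumes): for spheres, volume scales exactly as A**1.5, so a one-parameter least-squares fit recovers the mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
radii = rng.uniform(1.0, 3.0, size=50)
areas = np.pi * radii ** 2                 # 2D projected areas (descriptors)
volumes = 4.0 / 3.0 * np.pi * radii ** 3   # ground-truth volumes (spheres)

# Least-squares fit of c in the regressor V = c * A**1.5
X = areas ** 1.5
c = float(X @ volumes / (X @ X))
pred = c * X
rel_err = np.max(np.abs(pred - volumes) / volumes)
print(f"max relative error: {rel_err:.2e}")
```

For non-spherical shapes the single-descriptor model breaks down, which is why the paper adds further shape descriptors and example-based alternatives.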


2020 ◽  
Vol 10 (3) ◽  
pp. 955
Author(s):  
Taejun Kim ◽  
Han-joon Kim

Researchers frequently use visualizations such as scatter plots when trying to understand how random variables are related to each other, because a single image conveys numerous pieces of information. Dependency measures have been widely used to detect dependencies automatically, but they capture only a few aspects of a dependency, such as its strength and direction. Based on advances in applying deep learning to vision, we believe that convolutional neural networks (CNNs) can come to understand dependencies by analyzing visualizations, as humans do. In this paper, we propose a method that uses CNNs to extract dependency representations from 2D histograms. We carried out three kinds of experiments and found that CNNs can learn from visual representations. In the first experiment, we used a synthetic dataset to show that CNNs can perfectly classify eight types of dependency. Then, we showed that CNNs can predict correlations from 2D histograms of real datasets and visualized the learned dependency representation space. Finally, we applied our method and demonstrated that it performs better than the AutoLearn feature generation algorithm in terms of average classification accuracy, while generating half as many features.
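A minimal sketch of the input representation (the CNN itself is omitted; any image classifier could consume the result): samples of two random variables are binned into a fixed-size 2D histogram "image". Variable names and bin count here are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = 2 * x + rng.normal(scale=0.1, size=5000)   # strong linear dependency

# Fixed-size 2D histogram of the joint sample, normalized to [0, 1]
hist, _, _ = np.histogram2d(x, y, bins=32, density=True)
img = hist / hist.max()
print(img.shape)  # (32, 32) image fed to the classifier
```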


Author(s):  
Guang-Nan He ◽  
Yu-Bin Yang ◽  
Yao Zhang ◽  
Yang Gao ◽  
Lin Shang
