Multi-scale vector quantization with reconstruction trees

Author(s):  
Enrico Cecini ◽  
Ernesto De Vito ◽  
Lorenzo Rosasco

Abstract We propose and study a multi-scale approach to vector quantization (VQ). We develop an algorithm, dubbed reconstruction trees, inspired by decision trees. Here the objective is parsimonious reconstruction of unsupervised data, rather than classification. In contrast to more standard VQ methods, such as $k$-means, the proposed approach leverages a family of given partitions to quickly explore the data in a coarse-to-fine multi-scale fashion. Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm when the data are assumed to be sampled from a fixed unknown distribution. In this context, we derive both asymptotic and finite-sample results under suitable regularity assumptions on the distribution. As a special case, we consider the setting where the data-generating distribution is supported on a compact Riemannian submanifold. Tools from differential geometry and concentration of measure are useful in our analysis.
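To make the coarse-to-fine idea concrete, here is a minimal Python sketch of a reconstruction-tree-style quantizer, assuming dyadic median splits along coordinate axes as the given family of partitions (the paper's framework allows more general partition families; all names here are illustrative). Each leaf cell is represented by its sample mean, and a cell is refined only while its empirical distortion is large.

```python
import numpy as np

def reconstruction_tree(X, max_depth=6, min_points=20, tol=1e-3):
    """Coarse-to-fine quantization sketch: recursively split cells along
    the widest axis at the median, stopping when a cell's distortion
    (mean squared distance to its centroid) falls below `tol`.
    Returns the codebook (one codeword per leaf cell)."""
    codewords = []

    def split(points, depth):
        center = points.mean(axis=0)
        distortion = np.mean(np.sum((points - center) ** 2, axis=1))
        if depth >= max_depth or len(points) < min_points or distortion < tol:
            codewords.append(center)          # leaf: store the cell mean
            return
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))
        cut = np.median(points[:, axis])
        left = points[points[:, axis] <= cut]
        right = points[points[:, axis] > cut]
        if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
            codewords.append(center)
            return
        split(left, depth + 1)
        split(right, depth + 1)

    split(np.asarray(X, dtype=float), 0)
    return np.stack(codewords)

# Usage: quantize noisy samples from a circle (a 1-D submanifold of R^2).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(2000, 2))
codebook = reconstruction_tree(X)
print(codebook.shape)  # the number of codewords adapts to the data
```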

2020 ◽  
Vol 30 (12) ◽  
pp. 4676-4687
Author(s):  
Yifan Zuo ◽  
Yuming Fang ◽  
Yong Yang ◽  
Xiwu Shang ◽  
Qiang Wu

Author(s):  
Zhongguo Li ◽  
Magnus Oskarsson ◽  
Anders Heyden

Abstract The task of reconstructing detailed 3D human body models from images is interesting but challenging in computer vision due to the high degrees of freedom of the human body. This work proposes a coarse-to-fine method to reconstruct detailed 3D human bodies from multi-view images by combining Voxel Super-Resolution (VSR) with a learned implicit representation. First, coarse 3D models are estimated by learning a Pixel-aligned Implicit Function based on Multi-scale Features (MF-PIFu), which are extracted from the multi-view images by multi-stage hourglass networks. Then, taking the low-resolution voxel grids generated from the coarse 3D models as input, VSR is implemented by learning an implicit function through a multi-stage 3D convolutional neural network. Finally, refined, detailed 3D human body models are produced by VSR, which preserves details and reduces the false reconstructions of the coarse 3D models. Benefiting from the implicit representation, the training process in our method is memory efficient, and the detailed 3D human body produced by our method from multi-view images is represented as a continuous decision boundary with high-resolution geometry. In addition, the coarse-to-fine method based on MF-PIFu and VSR can simultaneously remove false reconstructions and preserve appearance details in the final reconstruction. In the experiments, our method quantitatively and qualitatively achieves competitive 3D human body models from images with various poses and shapes on both real and synthetic datasets.
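As a rough illustration of the pixel-aligned implicit representation (not the authors' MF-PIFu, which uses multi-scale hourglass features from multiple views plus a separate VSR network), a minimal single-view PyTorch sketch might look as follows; the surface is recovered as the 0.5 level set of the predicted occupancy.

```python
import torch
import torch.nn as nn

class PixelAlignedImplicitFn(nn.Module):
    """Toy pixel-aligned implicit function: for each 3D query point, an
    image feature sampled at the point's 2D projection is concatenated
    with the point's depth, and an MLP predicts occupancy in [0, 1].
    The surface is the 0.5 level set (the 'continuous decision boundary')."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, feat_map, uv, z):
        # feat_map: (B, C, H, W) image features; uv: (B, N, 2) projected
        # point coordinates in [-1, 1]; z: (B, N, 1) point depths.
        feat = nn.functional.grid_sample(
            feat_map, uv.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
        feat = feat.squeeze(-1).transpose(1, 2)              # (B, N, C)
        return self.mlp(torch.cat([feat, z], dim=-1))        # (B, N, 1)
```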


Author(s):  
Kun Zhang ◽  
Danny Crookes ◽  
Jim Diamond ◽  
Minrui Fei ◽  
Jianguo Wu ◽  
...  

2016 ◽  
pp. 1184-1228 ◽  
Author(s):  
Bhupesh Kumar Singh

The Genetic Algorithm (GA), a structured framework of metaheuristics, has been used in various tasks such as search, optimization, and machine learning. In theory there should be a sound framework for genetic algorithms that can interpret and explain the various facts associated with them. There are various theories of the working of GA, though all are subject to criticism. Hence the approach adopted here is that a legitimate theory of GA must be able to explain the learning process of GA (a special case of successive approximation). The analytical method of approximating a known function is to expand a complicated function into an infinite series of terms containing some simpler (or otherwise useful) function; such infinite approximations allow the error to be made arbitrarily small by taking a progressively greater number of terms into consideration. In the process of learning in an unknown environment, the function to be learned is known only through its values over the observation space. The problem of learning the possible form of the function is termed the experience problem. Various learning paradigms have established their legitimacy through the rigid-space interpretation of the concentration of measure and Dvoretzky's theorem. Hence it is proposed that the same criterion be applied to explain the learning capability of GA, that the various formalisms explaining the working of GA be evaluated against this criterion, and that this learning capability can be used to demonstrate the probable capability of GA to perform beyond the limit cast by the No Free Lunch theorem.
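For readers unfamiliar with the mechanics being theorized about, the following minimal Python sketch shows a canonical GA (tournament selection, one-point crossover, bit-flip mutation) on a toy objective. It illustrates the successive-approximation view, where each generation tends to reduce the error of the best solution found so far, but it is not tied to any particular formalism discussed in the chapter; the objective and parameters are illustrative.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=50, generations=100,
                      p_cross=0.9, p_mut=0.02):
    """Minimal GA: tournament selection, one-point crossover, bit-flip
    mutation. Each generation acts as one step of successive approximation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if random.random() < p_cross:              # one-point crossover
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                         # bit-flip mutation
                for i in range(n_bits):
                    if random.random() < p_mut:
                        c[i] ^= 1
            nxt += [c1, c2]
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)          # track the best so far
    return best

# Toy objective: maximize f(x) = x * (2^16 - x) over 16-bit integers;
# the optimum is at x = 2^15 = 32768.
f = lambda bits: (lambda x: x * (2**16 - x))(int("".join(map(str, bits)), 2))
print(int("".join(map(str, genetic_algorithm(f))), 2))  # close to 32768
```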


2019 ◽  
Vol 26 (2) ◽  
pp. 217-221 ◽  
Author(s):  
Chongyu Chen ◽  
Haoguang Huang ◽  
Chuangrong Chen ◽  
Zhuoqi Zheng ◽  
Hui Cheng

2013 ◽  
Vol 22 (01) ◽  
pp. 1250075 ◽  
Author(s):  
NAN YANG ◽  
HU-CHUAN LU ◽  
GUO-LIANG FANG ◽  
GANG YANG

In this paper, we propose an effective framework to automatically segment hard exudates (HEs) in fundus images. Our framework is based on a coarse-to-fine strategy: we first obtain a coarse result that may contain some negative samples, and then eliminate those negative samples step by step. In our framework, we make the most of the multi-channel information by employing a boosted soft segmentation algorithm. Additionally, we develop a multi-scale background subtraction method to obtain the coarse segmentation result. After subtracting the optic disc (OD) region from the coarse result, the HEs are extracted by an SVM classifier. The main contributions of this paper are: (1) we propose an efficient and robust framework for automatic HE segmentation; (2) we present a boosted soft segmentation algorithm to combine multi-channel information; (3) we employ a double ring filter to segment and adjust the OD region. We perform our experiments on the public DIARETDB1 dataset, which consists of 89 fundus images. The performance of our algorithm is assessed using both lesion-based and image-based criteria. Our experimental results show that the proposed algorithm is effective and robust.
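A minimal sketch of the multi-scale background subtraction step, as one might plausibly implement it (the paper's boosted soft segmentation, double ring filter, and SVM stages are omitted, and the scales and threshold here are illustrative): heavily blurred copies of the image at several scales serve as background estimates, and the union of per-scale detections gives the coarse candidate map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_background_subtraction(img, sigmas=(4, 8, 16), thresh=0.05):
    """Coarse candidate map for bright lesions: at each scale, a blurred
    copy of the image serves as the background estimate, and pixels that
    exceed it by `thresh` are kept. The union over scales is the coarse
    result, which later stages (OD removal, classification) would prune."""
    img = img.astype(float) / img.max()        # normalize intensities
    mask = np.zeros(img.shape, dtype=bool)
    for s in sigmas:
        background = gaussian_filter(img, sigma=s)
        mask |= (img - background) > thresh    # keep pixels above background
    return mask
```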


2018 ◽  
Vol 6 (1) ◽  
Author(s):  
Dominik Janzing ◽  
Bernhard Schölkopf

Abstract We study a model where one target variable $Y$ is correlated with a vector $\textbf{X}:=(X_1,\dots,X_d)$ of predictor variables that are potential causes of $Y$. We describe a method that infers to what extent the statistical dependences between $\textbf{X}$ and $Y$ are due to the influence of $\textbf{X}$ on $Y$ and to what extent due to a hidden common cause (confounder) of $\textbf{X}$ and $Y$. The method relies on concentration of measure results for large dimensions $d$ and an independence assumption stating that, in the absence of confounding, the vector of regression coefficients describing the influence of $\textbf{X}$ on $Y$ typically has ‘generic orientation’ relative to the eigenspaces of the covariance matrix of $\textbf{X}$. For the special case of a scalar confounder we show that confounding typically spoils this generic orientation in a characteristic way that can be used to quantitatively estimate the amount of confounding (subject to our idealized model assumptions).
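The following Python sketch illustrates the geometric idea in a simplified form (it is not the authors' estimator): regress $Y$ on $\textbf{X}$, express the coefficient vector in the eigenbasis of the covariance of $\textbf{X}$, and inspect how its weight distributes across eigen-directions. In the synthetic example, a hidden confounder drags the weight toward the top eigen-direction.

```python
import numpy as np

def eigenbasis_weights(X, Y):
    """Illustrative diagnostic: regress Y on X, express the coefficient
    vector in the eigenbasis of Cov(X), and return the eigenvalues with
    the normalized squared weight on each eigen-direction. Under generic
    orientation the weights are spread independently of the eigenvalues;
    systematic alignment with large eigenvalues hints at a confounder
    (a simplified reading of the paper's criterion)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean()
    cov = X.T @ X / len(X)
    a = np.linalg.solve(cov, X.T @ Y / len(X))   # regression coefficients
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = (eigvecs.T @ a) ** 2
    return eigvals, w / w.sum()

# Usage: d = 50 predictors with a hidden scalar confounder Z affecting
# both X and Y; generic orientation would put weight ~1/d = 0.02 on the
# top eigen-direction, while confounding inflates it noticeably.
rng = np.random.default_rng(1)
n, d = 5000, 50
Z = rng.normal(size=n)
X = rng.normal(size=(n, d)) + np.outer(Z, rng.normal(size=d))
Y = X @ rng.normal(size=d) / np.sqrt(d) + 3.0 * Z + rng.normal(size=n)
eigvals, w = eigenbasis_weights(X, Y)
print(w[np.argmax(eigvals)])  # weight on the top eigen-direction
```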


1965 ◽  
Vol 25 ◽  
pp. 121-142
Author(s):  
Minoru Kurita

In this paper we consider certain tensors associated with differentiable mappings of Riemannian manifolds and apply the results to a p-mapping, which is a special case of a subprojective mapping in affinely connected manifolds (cf. [1], [7]). The p-mapping in Riemannian manifolds is a generalization of a conformal mapping and a projective one. From the point of view of differential geometry, an analogy between these mappings is well known. On the other hand, it is interesting that a stereographic projection of a sphere onto a plane is conformal, while a central projection is projective, namely geodesic-preserving. This situation was partly clarified in [6]. The p-mapping defined in this paper gives a precise explanation of this and also affords a certain mapping in Euclidean space which includes a similarity mapping and an inversion as special cases.
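For concreteness, the contrast mentioned above can be written out with the standard formulas for the two projections of the unit sphere (a worked special case, not taken from the paper):

```latex
% Stereographic projection of S^2 \setminus \{N\} from the north pole
% N = (0,0,1) onto the plane z = 0 (conformal, i.e. angle-preserving):
\sigma(x,y,z) = \Bigl(\frac{x}{1-z},\ \frac{y}{1-z}\Bigr),
% Central (gnomonic) projection from the origin onto the plane z = 1
% (projective, i.e. geodesic-preserving: great circles map to lines):
\gamma(x,y,z) = \Bigl(\frac{x}{z},\ \frac{y}{z}\Bigr), \qquad z > 0.
```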

