The Condition Metric in the Space of Rectangular Full Rank Matrices

2010 ◽  
Vol 31 (5) ◽  
pp. 2580-2602 ◽  
Author(s):  
Paola Boito ◽  
Jean-Pierre Dedieu
2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve (AUC): AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that degrade both sCNN and mCNN performance, even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions admit closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression decreases or at best maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs that depends on task difficulty, compression method, and the number of training images.
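The claimed IO invariance to invertible linear processing can be checked empirically. The sketch below (a hypothetical numpy illustration, not the authors' code) samples two zero-mean Gaussian classes with unequal covariance, scores samples with the closed-form log-likelihood ratio, and verifies that the ideal observer's AUC is unchanged after an invertible linear transform of the data:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                     # small dimension for illustration

# Two zero-mean Gaussian classes with unequal covariance.
A0, A1 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
S0 = A0 @ A0.T + d * np.eye(d)
S1 = A1 @ A1.T + d * np.eye(d)
x0 = rng.multivariate_normal(np.zeros(d), S0, 2000)
x1 = rng.multivariate_normal(np.zeros(d), S1, 2000)

def llr(x, Sa, Sb):
    """Log-likelihood ratio log p_b(x)/p_a(x) for zero-mean Gaussians
    with covariances Sa, Sb."""
    Pa, Pb = np.linalg.inv(Sa), np.linalg.inv(Sb)
    quad = 0.5 * (np.einsum("ij,jk,ik->i", x, Pa, x)
                  - np.einsum("ij,jk,ik->i", x, Pb, x))
    const = 0.5 * (np.linalg.slogdet(Sa)[1] - np.linalg.slogdet(Sb)[1])
    return quad + const

def auc(s0, s1):
    """Empirical AUC: fraction of class-1 scores exceeding class-0 scores."""
    return np.mean(s1[:, None] > s0[None, :])

a_orig = auc(llr(x0, S0, S1), llr(x1, S0, S1))

# Invertible linear processing: transform the data and the class covariances.
T = rng.standard_normal((d, d))           # invertible with probability one
y0, y1 = x0 @ T.T, x1 @ T.T
S0t, S1t = T @ S0 @ T.T, T @ S1 @ T.T
a_trans = auc(llr(y0, S0t, S1t), llr(y1, S0t, S1t))
# a_orig and a_trans agree: the Jacobian of T cancels in the likelihood ratio.
```

The invariance is exact in principle (the determinant and quadratic terms involving T cancel), so the two empirical AUCs agree up to floating-point error.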


Author(s):  
Gerandy Brito ◽  
Ioana Dumitriu ◽  
Kameron Decker Harris

Abstract We prove an analogue of Alon’s spectral gap conjecture for random bipartite, biregular graphs. We use the Ihara–Bass formula to connect the non-backtracking spectrum to that of the adjacency matrix, employing the moment method to show there exists a spectral gap for the non-backtracking matrix. A by-product of our main theorem is that random rectangular zero-one matrices with fixed row and column sums are full rank with high probability. Finally, we illustrate applications to community detection, coding theory, and deterministic matrix completion.
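The full-rank by-product is easy to probe numerically. The sketch below (a hypothetical illustration, not the authors' construction) draws a zero-one matrix with fixed row and column sums by randomly pairing row and column "stubs" and rejecting pairings that would create a repeated entry, then reports its rank:

```python
import numpy as np

def biregular_01(m, n, d_row, d_col, rng, max_tries=1000):
    """Random m x n zero-one matrix with every row summing to d_row and
    every column to d_col (requires m * d_row == n * d_col)."""
    assert m * d_row == n * d_col
    row_stubs = np.repeat(np.arange(m), d_row)
    for _ in range(max_tries):
        col_stubs = rng.permutation(np.repeat(np.arange(n), d_col))
        M = np.zeros((m, n), dtype=int)
        np.add.at(M, (row_stubs, col_stubs), 1)  # pair stubs into entries
        if M.max() == 1:                         # reject repeated entries
            return M
    raise RuntimeError("no simple pairing found")

rng = np.random.default_rng(0)
M = biregular_01(20, 30, 3, 2, rng)       # 20 * 3 == 30 * 2 ones in total
print(np.linalg.matrix_rank(M))           # min(m, n) with high probability
```

The rejection loop is a simple stand-in for sampling a bipartite biregular graph; for these degrees a uniformly random pairing is simple with constant probability, so the loop terminates quickly.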


2013 ◽  
Vol 21 (1) ◽  
pp. 125-140 ◽  
Author(s):  
Ryan Bakker ◽  
Keith T. Poole

In this article, we show how to apply Bayesian methods to noisy ratio scale distances for both the classical similarities problem as well as the unfolding problem. Bayesian methods produce essentially the same point estimates as the classical methods, but are superior in that they provide more accurate measures of uncertainty in the data. Identification is nontrivial for this class of problems because a configuration of points that reproduces the distances is identified only up to a choice of origin, angles of rotation, and sign flips on the dimensions. We prove that fixing the origin and rotation is sufficient to identify a configuration in the sense that the corresponding maxima/minima are inflection points with full-rank Hessians. However, an unavoidable result is multiple posterior distributions that are mirror images of one another. This poses a problem for Markov chain Monte Carlo (MCMC) methods. The approach we take is to find the optimal solution using standard optimizers. The configuration of points from the optimizers is then used to isolate a single Bayesian posterior that can then be easily analyzed with standard MCMC methods.
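The identification problem described above is easy to see numerically: pairwise distances are invariant to translation, rotation, and sign flips of the configuration, so any such transform of a solution fits the data equally well. A minimal numpy sketch (hypothetical, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 2))           # a configuration of 6 points in 2D

def pdist(P):
    """Matrix of pairwise Euclidean distances between rows of P."""
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
F = np.diag([1.0, -1.0])                          # sign flip on dimension 2
t = np.array([3.0, -1.5])                         # change of origin

Y = (X @ R.T) @ F + t     # rotated, reflected, translated configuration
# pdist(X) and pdist(Y) are identical, so X and Y are indistinguishable
# from the distance data alone.
```

This is exactly why fixing the origin and rotation still leaves mirror-image posteriors, and why the article isolates a single mode before running MCMC.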

