We analyze algorithms for approximating a function f(x) = Φx mapping ℝ^d to ℝ^d using deep linear neural networks, that is, algorithms that learn a function h parameterized by matrices Θ_1, ..., Θ_L and defined by h(x) = Θ_L Θ_{L-1} ⋯ Θ_1 x. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least-squares matrix Φ, in the case where the initial hypothesis Θ_1 = ⋯ = Θ_L = I has excess loss bounded by a small enough constant. We also show that gradient descent fails to converge for Φ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If Φ is symmetric positive definite, we show that an algorithm that initializes Θ_i = I learns an ε-approximation of f using a number of updates polynomial in L, the condition number of Φ, and log(d/ε). In contrast, we show that if the least-squares matrix Φ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that Φ satisfies u^T Φ u > 0 for all u but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant u^T Θ_L Θ_{L-1} ⋯ Θ_1 u > 0 for all u, and the other that "balances" Θ_1, ..., Θ_L so that they have the same singular values.
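To make the setup concrete, the following is a minimal sketch of vanilla gradient descent with identity initialization on the population quadratic loss, which with isotropic inputs reduces (up to an additive constant) to (1/2)||Θ_L ⋯ Θ_1 − Φ||_F^2, so the population gradient of each layer is available in closed form. The depth, step size, iteration count, and target matrix below are illustrative assumptions, and the sketch omits the paper's regularizers; it is not the authors' exact procedure.

```python
import numpy as np

def layer_product(mats, d):
    """Return mats[-1] @ ... @ mats[0]; the identity for an empty list."""
    P = np.eye(d)
    for M in mats:
        P = M @ P
    return P

def gd_identity_init(Phi, L=5, eta=0.02, steps=5000):
    """Gradient descent on (1/2)||Theta_L ... Theta_1 - Phi||_F^2 with Theta_i = I at start."""
    d = Phi.shape[0]
    Thetas = [np.eye(d) for _ in range(L)]            # identity initialization
    for _ in range(steps):
        residual = layer_product(Thetas, d) - Phi     # end-to-end error Theta_L...Theta_1 - Phi
        grads = []
        for i in range(L):
            above = layer_product(Thetas[i + 1:], d)  # Theta_L ... Theta_{i+2}
            below = layer_product(Thetas[:i], d)      # Theta_i ... Theta_1 (layers below layer i+1)
            grads.append(above.T @ residual @ below.T)
        Thetas = [T - eta * g for T, g in zip(Thetas, grads)]
    return Thetas

# Example with a symmetric positive-definite target, the case for which the
# abstract states a polynomial convergence guarantee (hyperparameters are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Phi = A @ A.T / 8.0 + 0.5 * np.eye(4)                 # SPD, moderately conditioned
Thetas = gd_identity_init(Phi)
print(np.linalg.norm(layer_product(Thetas, 4) - Phi)) # end-to-end approximation error
```

In this sketch each layer receives the gradient Θ_{>i}^T (Θ_L ⋯ Θ_1 − Φ) Θ_{<i}^T, which is the population gradient under isotropic inputs; the negative-eigenvalue and non-symmetric cases discussed in the abstract would require the additional regularization the paper introduces.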