Under-resourced or overloaded? Rethinking working memory deficits in developmental language disorder
Dominant theoretical accounts of developmental language disorder (DLD) are unanimous in assuming working memory capacity limitations. In the current report, we present an alternative view: that working memory in DLD is not under-resourced but overloaded, because it operates on speech representations with low discriminability. This account is developed through computational simulations involving deep convolutional neural networks trained on spoken word spectrograms in which frequency information is either retained, to mimic typical development, or degraded, to mimic the spectral processing deficits identified among children with DLD. We assess not only spoken word recognition accuracy, predictive probability, and entropy (i.e., predictive distribution spread), but also use mean-field-theory-based manifold analysis to assess (i) the dimensionality of internal speech representations, and (ii) classification capacity, a measure of a network's ability to isolate any given internal speech representation, which we use as a proxy for attentional control. We show that instantiating a low-level frequency discrimination deficit results in the formation of internal speech representations with atypically high dimensionality, and that classification capacity is exhausted due to low representation separability. These representation and control deficits underpin not only lower performance accuracy but also greater uncertainty even when making accurate predictions in a simulated spoken word recognition task (i.e., predictive distributions with low maximum probability and high entropy), which replicates the response delays and word-finding difficulties often seen in DLD. Overall, these simulations demonstrate an integrated theoretical account of speech representation and processing in DLD in which working memory capacity limitations play no causal role.
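As a minimal illustration of the two uncertainty measures referenced above (maximum predictive probability and entropy of the predictive distribution), the sketch below computes both from a network's output logits. The function name and the example logit values are illustrative assumptions, not taken from the reported simulations.

```python
import numpy as np

def predictive_uncertainty(logits):
    """Summarise a network's predictive distribution for one input.

    Returns the maximum class probability (confidence in the top
    prediction) and the Shannon entropy (spread) of the softmax
    distribution. Higher entropy and lower maximum probability
    indicate greater uncertainty, even when the argmax prediction
    is correct.
    """
    # Numerically stable softmax over the logits
    z = logits - np.max(logits)
    p = np.exp(z) / np.sum(np.exp(z))
    max_prob = float(np.max(p))
    # Entropy in nats; small constant guards against log(0)
    entropy = float(-np.sum(p * np.log(p + 1e-12)))
    return max_prob, entropy

# A confident prediction: one logit dominates the others
print(predictive_uncertainty(np.array([8.0, 0.0, 0.0, 0.0])))
# An uncertain prediction: near-uniform logits spread probability mass
print(predictive_uncertainty(np.array([1.0, 0.9, 1.1, 1.0])))
```

Under this framing, a degraded-input network can produce the correct argmax while still showing a flatter distribution (lower maximum probability, higher entropy) than an intact-input network, which is the pattern interpreted as uncertainty in the abstract.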