Biological Correlates Of Machine Learning
The greatest successes of artificial intelligence are intelligent machines built on models of how neurons interact with each other. In creating these models, machine-learning practitioners divide intelligent behavior into two separate phases: training and inference. We, the public, encounter machine-learning engines while they operate in inference mode, interpreting our requests and images. To make them operational, their neural networks are first trained with algorithms that are widely tested, optimized, and regularly benchmarked against one another. The most successful training algorithms use back-propagation to train their neural networks. What has been learned about intelligence from these models is largely absent from biological models: creating the memories that underlie intelligent behavior occurs independently of network operations and requires network-level functions. This paper recasts memory research in the context of those two requirements and outlines novel biological correlates for training and inference modes, vector spaces, and error terms. Specific biological machinery is identified as holding the key to understanding memory creation: the operation of tripartite synapses and the role of astrocytes as normalization operators that manage synaptic plasticity.
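The separation the abstract draws between training and inference can be made concrete with a minimal sketch (not taken from the paper): a tiny two-layer network in which back-propagation of an error term adjusts the weights during training, while inference simply runs inputs through the fixed weights. The toy XOR task, network size, and learning rate here are illustrative choices, not anything specified in the source.

```python
# Minimal sketch of the training/inference split discussed above.
# Training: back-propagate an error term and update weights.
# Inference: weights are fixed; inputs flow forward to outputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = XOR(x1, x2) -- an illustrative task only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; the weights are the "memories" the network acquires.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Inference mode: fixed weights map inputs to outputs."""
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return h, out

def train_step(lr=1.0):
    """Training mode: one back-propagation update; returns mean squared error."""
    global W1, b1, W2, b2
    h, out = forward(X)
    err = out - y                      # the error term
    # Gradients via the chain rule (back-propagation through the sigmoids).
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
    return float((err ** 2).mean())

initial_loss = train_step()
for _ in range(5000):
    final_loss = train_step()

print(final_loss < initial_loss)
```

The point of the sketch is structural: all weight changes happen inside `train_step`, and once training stops, `forward` alone suffices. This mirrors the paper's claim that memory creation (training) is a distinct, network-level process separate from network operation (inference).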