We explore different ways in which the human visual system can adapt for perceiving and categorizing the environment. There are various accounts of supervised (categorical) and unsupervised perceptual learning, and different perspectives on the functional relationship between perception and categorization. We argue that common experimental designs are insufficient to differentiate between hypothesized perceptual learning mechanisms or to reveal their possible interplay. We propose a relatively underutilized way of studying potential categorical effects on perception, and we test the predictions of different perceptual learning models using a two-dimensional, interleaved categorization-plus-reconstruction task. We find evidence that human visual encodings adapt to the feature structure of the environment, allocate encoding resources according to categorization utility, and adapt to prevent miscategorizations.