Near-scale, reach-relevant environments, like work desks, restaurant place settings or lab benches, are the interface of our hand-based interactions with the world. How are our conceptual representations of these environments organized? For navigable-scale scenes, global properties such as openness, depth or naturalness have been identified, but the analogous organizing principles for reach-scale environments remain unknown. To uncover such principles, we obtained 1.25 million odd-one-out behavioral judgments on image triplets assembled from 990 reachspace images. Images were selected to comprehensively sample the variation both between and within reachspace categories. Using data-driven modeling, we generated a 30-dimensional embedding that predicts human similarity judgments among the images. First, examination of the embedding dimensions revealed key properties that distinguish among reachspaces, relating to their structural layout, affordances, visual appearance and functional roles. Second, clustering analyses performed over the embedding revealed four distinct, interpretable classes of reachspaces, with separate clusters for spaces related to food, electronics, analog activities, and storage or display. Finally, we found that the similarity structure among reachspace images was better predicted by the function of the spaces than by their locations, suggesting that reachspaces are largely conceptualized in terms of the actions they are designed to support. Altogether, these results reveal the behaviorally-relevant principles that structure our internal representations of reach-relevant environments.