The architecture of the cortical system underlying concept representation is a topic of intense debate. Much evidence supports the claim that concept retrieval selectively engages sensory, motor, and other neural systems involved in the acquisition of the retrieved concept, yet there is also strong evidence for involvement of high-level, supramodal cortical regions. A fundamental question about the organization of this system is whether modality-specific information originating from sensory and motor areas is integrated across multiple "convergence zones" or in a single centralized "hub". We used representational similarity analysis (RSA) of fMRI data to map brain regions where the similarity structure of neural patterns elicited by large sets of concepts matched the similarity structure predicted by a high-dimensional model of concept representation based on sensory, motor, affective, and other modal aspects of experience. Across two studies involving different sets of concepts, different participants, and different tasks, searchlight RSA revealed a distributed, bihemispheric network engaged in multimodal experiential representation, composed of high-level association cortex in anterior, lateral, and ventral temporal lobe; inferior parietal lobule; posterior cingulate gyrus and precuneus; and medial, dorsal, ventrolateral, and orbital prefrontal cortex. These regions closely resemble networks previously implicated in general semantic and "default mode" processing and are known to be high-level hubs for convergence of multimodal processing streams. Supplemented by an exploratory cluster analysis, these results indicate that the concept representation system consists of multiple, hierarchically organized convergence zones supporting multimodal integration of experiential information.
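The core RSA comparison described above can be illustrated with a minimal sketch: for each searchlight, build a representational dissimilarity matrix (RDM) from the neural patterns, build another from the experiential model's feature vectors, and correlate their upper triangles. This is a generic, hypothetical illustration of the technique, not the authors' analysis pipeline; the array shapes and the toy data are assumptions for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the row vectors (one row per concept)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_patterns, model_features):
    """Spearman correlation between the upper triangles of the neural
    RDM and the model-predicted RDM (the core RSA comparison)."""
    n = neural_patterns.shape[0]
    iu = np.triu_indices(n, k=1)          # unique concept pairs only
    neural_vec = rdm(neural_patterns)[iu]
    model_vec = rdm(model_features)[iu]
    rho, _ = spearmanr(neural_vec, model_vec)
    return rho

# Toy example (all values synthetic): 20 concepts, a 50-voxel
# searchlight, and a 10-dimensional experiential feature model.
rng = np.random.default_rng(0)
model = rng.normal(size=(20, 10))
# Neural patterns partially driven by the model features, plus noise.
neural = model @ rng.normal(size=(10, 50)) + 0.5 * rng.normal(size=(20, 50))
print(f"searchlight RSA score: {rsa_score(neural, model):.2f}")
```

In a whole-brain searchlight analysis, this score would be computed at every voxel neighborhood, yielding a map of where neural similarity structure matches the model's predicted similarity structure.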