Abstract

Deep convolutional neural networks (DCNNs) can now match and even outperform human performance on challenging complex tasks. However, it remains unknown whether DCNNs achieve human-like performance through human-like processes; that is, do DCNNs use internal representations similar to humans' to perform the task? Here we applied a reverse-correlation method to reconstruct the internal representations of DCNNs and human observers as they classified the gender of faces. Human observers and VGG-Face, a DCNN pre-trained for face identification, showed high similarity between their “classification images” in gender classification, suggesting that they rely on similar critical information for this task. Further analyses showed that this representational similarity was observed mainly at low spatial frequencies, which human studies have shown to be critical for gender classification. Importantly, prior task experience appears necessary for the similarity: VGG-Face, like humans, was pre-trained to process faces at the subordinate level (i.e., identification), whereas AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded at gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different hardware to process faces, they can use a similar representation, possibly arising from similar prior task experience, to achieve the same computational goal. Our study therefore provides the first empirical evidence supporting the hypothesis of implementation-independent representation.
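The reverse-correlation logic behind a classification image can be illustrated with a toy sketch: average the noise patterns that pushed an observer toward one response and subtract the average for the other response, recovering the observer's internal template. This is a minimal illustration, not the paper's actual pipeline; the trial counts, image size, and the simulated "observer" (a hidden linear template standing in for a human or DCNN) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: on each trial a stimulus is perturbed by white noise,
# and the observer labels it with one of two responses (e.g., male/female).
n_trials, size = 2000, 16  # toy dimensions, chosen for illustration
noises = rng.standard_normal((n_trials, size, size))

# Toy observer: responds "male" when the noise correlates positively with a
# hidden template (a stand-in for the observer's internal representation).
template = rng.standard_normal((size, size))
responses = (noises * template).sum(axis=(1, 2)) > 0

# Classification image: mean noise on "male" trials minus mean noise on
# "female" trials; this recovers the hidden template up to scale.
ci = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# Pixel-wise correlation between the recovered CI and the hidden template;
# an analogous correlation can compare CIs across observers (human vs. DCNN).
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(r > 0.5)
```

With enough trials the classification image correlates strongly with the generating template; comparing classification images between a human and a DCNN in the same way is what quantifies their representational similarity.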