Hand-selective visual regions represent how to grasp 3D tools for use: brain decoding during real actions
ABSTRACT
Most neuroimaging experiments that investigate how tools and their associated actions are represented in the brain use visual paradigms in which tools or hands are displayed as 2D images and no real movements are performed. These studies have discovered selective responses in occipito-temporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action-related processing, but this assumption has never been directly tested. To address this, we examined the responses of independently visually defined category-selective brain areas while participants directly manipulated 3D tools with their hands. Using real-action fMRI and multi-voxel pattern analysis, we found that representations of whether a 3D tool was being grasped appropriately for use (e.g., grasping a knife by its handle rather than its serrated edge) were decodable from hand-selective areas in occipito-temporal and parietal cortices, but not from tool-, object-, or body-selective areas, even where these partially overlapped. These representations were evoked automatically, even when there was no requirement for tool use and participants were naïve to object category (tools vs. non-tools). We further show that these effects were specific to actions with tools and absent for biomechanically matched actions with non-tool control objects. The lack of effects in tool-selective cortex challenges the long-standing assumption that brain activation for viewing tool images reflects the sensorimotor processing underlying tool manipulation. Instead, our results show that representations of how to grasp tools for use are automatically evoked in visual regions specialised for representing the human hand, the brain's primary tool for interacting with the world.