A sequential sampling account of semantic relatedness decisions
Semantic memory research often draws on decisions about the semantic relatedness of concepts. These decisions depend on cognitive processes of memory retrieval and choice formation. However, most previous research has focused on memory retrieval while neglecting the decision aspect. Here we propose the sequential sampling framework to account for choices and response times in semantic relatedness decisions. We focus on three popular sequential sampling models: the Race model, the Leaky Competing Accumulator model (LCA), and the Drift Diffusion Model (DDM). Using model simulations, we investigate whether and how these models account for two empirical benchmarks: the relatedness effect, denoting faster "related" than "unrelated" decisions when judging the relatedness of word pairs; and an inverted-U-shaped relationship between response time and the relatedness strength of word pairs. Our simulations show that the LCA and DDM, but not the Race model, can reproduce both effects. Furthermore, the LCA predicts a novel phenomenon: an inverted relatedness effect for weakly related word pairs. Reanalyzing a publicly available data set, we obtained credible evidence for such an inverted relatedness effect. These results provide strong support for sequential sampling models, and in particular the LCA, as a viable computational account of semantic relatedness decisions and suggest an important role for decision-related processes in (semantic) memory tasks.
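To make the LCA dynamics concrete, the following is a minimal simulation sketch of a two-accumulator Leaky Competing Accumulator applied to a relatedness decision. One accumulator gathers evidence for a "related" response, the other for "unrelated"; activations leak over time and mutually inhibit one another, and the first to reach a threshold determines the choice and response time. All parameter values (drift rates, leak, inhibition, noise, threshold) are illustrative assumptions chosen for demonstration, not the fitted estimates from the paper.

```python
import numpy as np

def simulate_lca(drift_related, drift_unrelated, leak=0.1, inhibition=0.1,
                 noise_sd=0.1, threshold=1.0, dt=0.01, max_time=10.0, rng=None):
    """Simulate one trial of a two-accumulator LCA.

    Returns (choice, rt), where choice 0 = "related" and 1 = "unrelated".
    Parameter values are illustrative assumptions, not fitted estimates.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)                           # accumulator activations
    drifts = np.array([drift_related, drift_unrelated])
    n_steps = int(max_time / dt)
    for step in range(1, n_steps + 1):
        # leaky accumulation with mutual inhibition (x[::-1] is the rival's
        # activation) plus Gaussian diffusion noise, Euler-Maruyama step
        dx = (drifts - leak * x - inhibition * x[::-1]) * dt \
             + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)           # activations floored at zero
        if x.max() >= threshold:
            return int(x.argmax()), step * dt
    return int(x.argmax()), max_time          # no crossing: report leader at timeout

# Illustrative comparison: a strongly related pair (large drift toward
# "related") should, on average, yield faster "related" decisions than a
# weakly related pair, echoing the relatedness-strength pattern in the text.
rng = np.random.default_rng(0)
strong = [simulate_lca(1.0, 0.1, rng=rng) for _ in range(200)]
weak = [simulate_lca(0.4, 0.3, rng=rng) for _ in range(200)]
mean_rt_strong = np.mean([rt for _, rt in strong])
mean_rt_weak = np.mean([rt for _, rt in weak])
```

Under these assumed parameters, the strongly related pair produces shorter mean response times than the weakly related pair, because its "related" accumulator outruns both leak and the rival's inhibition more quickly.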