Forced-choice (FC) personality measures are increasingly popular in research and applied contexts. To date, however, no method for detecting faking on this format has been both proposed and empirically tested. We introduce a new methodology for faking detection on FC measures, based on the assumption that individuals who fake try to approximate the ideal response on each block of items. Individuals’ responses are scored relative to the ideal using a model for rank-order data not previously applied to FC measures (the Generalized Mallows Model), and the scores are then used as predictors of faking in a regularized logistic regression. In Study 1, we test our approach using cross-validation and contrast generic with job-specific ideal responses. Study 2 replicates our methodology on two measures, one matched and one mismatched on item desirability. We achieved 80–92% balanced accuracy in detecting instructed faking, and the predicted probabilities of faking correlated with self-reported faking behavior. We discuss how this approach, which is driven by an attempt to capture the faking process itself, differs methodologically and theoretically from existing faking detection paradigms, as well as measure- and context-specific factors that affect accuracy.
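To make the scoring idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): each forced-choice block is scored by its Kendall tau distance from a hypothesized "ideal" rank order, and the per-block distances then serve as predictors of faking (e.g., in a regularized logistic regression). The block contents, ideal orderings, and responses below are invented for illustration; the paper itself uses the Generalized Mallows Model rather than raw Kendall distances.

```python
# Illustrative sketch: distance of each block response from an assumed
# ideal ranking. All item labels and rankings here are hypothetical.
from itertools import combinations

def kendall_distance(ranking, ideal):
    """Count item pairs ordered differently in the two rankings."""
    pos = {item: i for i, item in enumerate(ranking)}
    ipos = {item: i for i, item in enumerate(ideal)}
    return sum(
        1
        for a, b in combinations(ideal, 2)
        if (pos[a] - pos[b]) * (ipos[a] - ipos[b]) < 0
    )

# One respondent's rank orders on three 4-item blocks, plus assumed
# ideal (maximally desirable) orderings for those blocks.
ideal_blocks = [["A", "B", "C", "D"], ["E", "F", "G", "H"], ["I", "J", "K", "L"]]
responses    = [["A", "B", "C", "D"], ["F", "E", "G", "H"], ["L", "K", "J", "I"]]

features = [kendall_distance(r, i) for r, i in zip(responses, ideal_blocks)]
print(features)  # → [0, 1, 6]: identical, one swap, fully reversed
```

A feature vector like this (one distance per block) could then be passed to any penalized classifier; the Generalized Mallows Model additionally models position-wise weights on the ranking discrepancies rather than treating all pairwise inversions equally.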