Active learning is an important approach to reducing data-collection costs in inductive learning: it samples only the most informative instances for labeling. We focus here on the sampling criterion used to select these most informative instances. This paper makes three contributions. First, in contrast to the leading sampling strategy of halving the volume of the version space, we present a strategy that reduces the volume of the version space by more than half, under the assumption that the target function is drawn from a nonuniform distribution over the version space. Second, we propose the idea of sampling the instances that are most likely to be misclassified. Third, we develop a sampling method named CBMPMS (Committee Based Most Possible Misclassification Sampling), which samples the instances that have the largest probability of being misclassified by the current classifier. Compared with existing active learning methods, CBMPMS requires fewer sampling queries to reach the same classifier accuracy. Experiments show that the proposed method outperforms traditional sampling methods on most of the selected datasets.
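To make the sampling idea concrete, below is a minimal sketch of committee-based selection of the instances most likely to be misclassified. It is not the paper's exact CBMPMS formulation: it assumes scikit-learn decision trees trained on bootstrap resamples as committee members, and it scores each pool instance by one minus the maximum averaged class probability as a stand-in for the misclassification probability.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def query_most_probably_misclassified(X_labeled, y_labeled, X_pool,
                                      n_committee=5, n_queries=10,
                                      random_state=0):
    """Rank unlabeled pool instances by estimated misclassification probability.

    A committee of classifiers is trained on bootstrap resamples of the labeled
    data; each pool instance is scored by 1 - max averaged class probability,
    i.e. an estimate of the chance that the consensus prediction is wrong.
    (Illustrative proxy only; the CBMPMS scoring in the paper may differ.)
    """
    rng = np.random.RandomState(random_state)
    classes = np.unique(y_labeled)
    proba_sum = np.zeros((len(X_pool), len(classes)))

    for _ in range(n_committee):
        # Bootstrap resample the labeled set to obtain a diverse committee.
        idx = rng.choice(len(X_labeled), size=len(X_labeled), replace=True)
        member = DecisionTreeClassifier(random_state=rng.randint(1 << 30))
        member.fit(X_labeled[idx], y_labeled[idx])

        # Align this member's class-probability columns with the global class order,
        # since a bootstrap sample may not contain every class.
        member_proba = member.predict_proba(X_pool)
        proba = np.zeros_like(proba_sum)
        for j, c in enumerate(member.classes_):
            proba[:, np.searchsorted(classes, c)] = member_proba[:, j]
        proba_sum += proba

    avg_proba = proba_sum / n_committee
    # Estimated probability that the committee's consensus label is wrong.
    misclass_prob = 1.0 - avg_proba.max(axis=1)
    # Return indices of the pool instances to query, most uncertain first.
    return np.argsort(misclass_prob)[::-1][:n_queries]
```

In an active learning loop, the returned indices would be sent to an oracle for labeling, moved from the pool to the labeled set, and the committee retrained before the next round of queries.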