Calculating the sample size before collecting data is a tradition that goes back to Jacob Cohen. The most commonly asked question is: "How many subjects do we need to obtain a significant result, if an effect of a given size exists and the p-value is used to evaluate the hypothesis?" In the Bayesian framework, the analogous question is how many subjects are needed to obtain convincing evidence when the Bayes factor is used to evaluate the hypothesis. This paper proposes a solution that meets two goals: firstly, the Bayes factor exceeds a given threshold, and secondly, the probability that the Bayes factor exceeds that threshold reaches a required value. Researchers can express their expectations through order or sign constraints on the parameters of a linear regression model. For example, researchers may expect the regression coefficients to satisfy $\beta_1>\beta_2>\beta_3$, which is an order-constrained hypothesis, or they may expect a single coefficient to satisfy $\beta_1>0$, which is a sign hypothesis. The greatest advantage of such a specific hypothesis is that, compared to an unconstrained hypothesis, a smaller sample size is required to achieve the same probability that the Bayes factor exceeds the threshold. This article provides sample size tables for the null, order, sign, complement, and unconstrained hypotheses. To enhance applicability, an R package based on Monte Carlo simulation is developed, which allows psychologists to plan the sample size even if they do not have any statistical programming background.
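The two goals above — a Bayes factor that exceeds a threshold, with a required probability of doing so — can be illustrated as a Monte Carlo search over the sample size $n$. The sketch below is not the paper's R package; it assumes a toy setting: a simple linear regression with error variance fixed at 1, a sign hypothesis $H_1{:}\ \beta>0$ tested against its complement, and a Bayes factor computed from the posterior probability that $\beta>0$ under a flat prior, so that $\mathrm{BF} = p/(1-p)$. All function names and default values (`effect`, `threshold`, `eta`) are hypothetical.

```python
import math
import numpy as np

def bf_sign_vs_complement(y, x):
    """Bayes factor for H1: beta > 0 versus its complement beta < 0,
    using a normal posterior approximation for the regression slope
    (flat prior, error variance fixed at 1 for simplicity)."""
    sxx = np.dot(x, x)
    beta_hat = np.dot(x, y) / sxx                 # least-squares slope
    se = 1.0 / math.sqrt(sxx)                     # posterior sd under the flat prior
    # P(beta > 0 | data) from the normal posterior, via the complementary error function
    p_pos = 0.5 * math.erfc(-beta_hat / (se * math.sqrt(2.0)))
    p_pos = min(max(p_pos, 1e-12), 1.0 - 1e-12)   # guard against division by zero
    return p_pos / (1.0 - p_pos)

def required_n(effect=0.3, threshold=3.0, eta=0.8, nsim=1000, n_max=500, seed=1):
    """Smallest n (stepping upward) at which the Monte Carlo estimate of
    P(BF > threshold | true slope = effect) reaches the required value eta."""
    rng = np.random.default_rng(seed)
    for n in range(10, n_max + 1, 5):
        hits = 0
        for _ in range(nsim):
            x = rng.standard_normal(n)            # simulate a predictor
            y = effect * x + rng.standard_normal(n)  # simulate data under the true effect
            if bf_sign_vs_complement(y, x) > threshold:
                hits += 1
        if hits / nsim >= eta:                    # goal 2: probability of exceeding the threshold
            return n
    return None
```

Because the sign hypothesis rules out half of the parameter space in advance, the Bayes factor against the complement accumulates evidence faster than a test of an unconstrained hypothesis would, which is exactly why the specific hypotheses discussed in the paper require smaller samples.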