The Q-learning algorithm is widely used for traditional mobile robot navigation. However, traditional Q-learning suffers from the curse of dimensionality when applied to intelligent systems with continuous state spaces, and its learning activity and efficiency are low. To address these problems, we propose a new method, ARTQL, which combines an ART2 network with the traditional Q-learning algorithm. A novelty-driven learning mechanism is then introduced so that the ARTQL algorithm learns more actively and efficiently. With novelty-driven ARTQL, the Q-learning agent learns an incremental clustering model of the state space suited to the task it must complete, so that, in an unknown environment and without any prior knowledge, the agent can perform decision making and two-tier online learning over the clustered state-space model, continually interacting with the environment to improve its control strategy and to increase its learning accuracy, activity, and efficiency. Finally, mobile robot navigation simulation experiments show that, with the proposed algorithm, a mobile robot can continuously improve its navigation performance through highly autonomous interactive learning with the environment.
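The abstract does not give implementation details, so the following is only a minimal Python sketch of the two-tier idea it describes: an ART-style network incrementally clusters continuous observations into discrete states, a tabular Q-learner operates over the resulting cluster indices, and a novelty bonus drives more active exploration. The class name, the cosine-similarity matching rule, the prototype learning rate, and the inverse-square-root visit-count bonus are all illustrative assumptions standing in for the paper's actual ART2 matching dynamics and novelty-driven mechanism.

```python
import numpy as np

class ARTQLAgent:
    """Sketch of ARTQL: ART-style incremental clustering of a continuous
    state space plus tabular Q-learning over the cluster indices.
    Parameter names and values are assumptions, not the paper's."""

    def __init__(self, n_actions, vigilance=0.9, alpha=0.1, gamma=0.95,
                 epsilon=0.1, novelty_weight=0.5):
        self.n_actions = n_actions
        self.vigilance = vigilance      # similarity threshold for matching a cluster
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.novelty_weight = novelty_weight
        self.prototypes = []            # one prototype vector per cluster
        self.q_rows = []                # one row of Q-values per cluster
        self.visits = []                # visit counts, used for the novelty bonus

    def cluster(self, x):
        """Map a continuous observation to a cluster index, creating a new
        cluster when no existing prototype matches (incremental clustering)."""
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            sims = [self._similarity(x, p) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:
                # Move the winning prototype slightly toward the input.
                self.prototypes[best] += 0.1 * (x - self.prototypes[best])
                return best
        self.prototypes.append(x.copy())
        self.q_rows.append(np.zeros(self.n_actions))
        self.visits.append(0)
        return len(self.prototypes) - 1

    @staticmethod
    def _similarity(x, p):
        # Cosine similarity; ART2 uses a more elaborate matching rule.
        return float(x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12))

    def act(self, s):
        """Epsilon-greedy action selection over the cluster's Q-row."""
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q_rows[s]))

    def update(self, s, a, reward, s_next):
        """One Q-learning step; the novelty bonus rewards rarely visited
        clusters so the agent explores the state space more actively."""
        self.visits[s_next] += 1
        novelty = self.novelty_weight / np.sqrt(self.visits[s_next])
        target = reward + novelty + self.gamma * np.max(self.q_rows[s_next])
        self.q_rows[s][a] += self.alpha * (target - self.q_rows[s][a])
```

In this reading, the two tiers interact exactly as the abstract states: each environment step calls `cluster` to refine the state-space model and `update` to refine the control strategy, so both layers learn online and simultaneously from interaction alone.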