With the advent of cost-efficient depth cameras, many effective feature descriptors have been proposed for action recognition from depth sequences. However, most of them rely on a single type of feature and thus cannot capture the action information comprehensively; for example, some descriptors can represent the area where motion occurs but cannot describe the order in which an action is performed. In this paper, a new feature representation scheme combining different feature descriptors is proposed to capture multiple aspects of action cues simultaneously. First, a depth sequence is divided into a series of sub-sequences using a motion-energy-based spatial-temporal pyramid. For each sub-sequence, completed local binary pattern (CLBP) descriptors based on depth motion maps (DMMs) are computed using a patch-based strategy; in parallel, each sub-sequence is partitioned into spatial grids and polynormal descriptors are extracted from each grid sequence. Sparse representation vectors of the DMM-based CLBP and polynormal descriptors are then computed separately. After pooling, the final representation vector of each sample is generated and fed to the classifier. Finally, two different fusion strategies are applied to combine the two feature types. Extensive experiments on two benchmark datasets show that the proposed method outperforms recognition based on either single feature alone.
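To make the DMM step concrete, the following is a minimal sketch of a front-view depth motion map, computed as the accumulated absolute difference between consecutive depth frames. This is a simplified, assumed formulation for illustration only: the full method also projects each frame onto the side and top planes before accumulation, and the function name and threshold-free variant here are hypothetical.

```python
import numpy as np

def depth_motion_map(frames):
    """Front-view depth motion map (simplified sketch).

    frames: ndarray of shape (T, H, W), one depth map per frame.
    Returns an (H, W) map where pixels with more motion across the
    sequence accumulate larger values. Side/top views would first
    project the depth values onto the YZ/XZ planes (omitted here).
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Accumulate |frame_{i+1} - frame_i| over the whole sequence,
    # so static background stays near zero and moving regions
    # receive large values.
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)
```

CLBP (or any local texture descriptor) would then be computed over patches of this 2-D map rather than over the raw frames, which is what makes the patch-based strategy applicable.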