Convolutional neural networks (CNNs) have recently shown remarkable performance in automatic classification and medical image diagnosis. However, CNNs fail to recognize images that have been rotated or differently oriented, which limits their performance. This paper presents a new capsule network (CapsNet) based framework, the multi-lane atrous feature fusion capsule network (MLAF-CapsNet), for brain tumor type classification. The MLAF-CapsNet combines atrous (dilated) convolutions with contrast-limited adaptive histogram equalization (CLAHE): the atrous convolutions enlarge the receptive fields while preserving spatial representation, whereas CLAHE, an improved form of adaptive histogram equalization (AHE), is used as a base layer to enhance the input images. The proposed method is evaluated on whole-brain tumor and segmented tumor datasets, and its performance on the two datasets is explored and compared. On the original images of the two datasets, the MLAF-CapsNet achieves better accuracies (93.40% and 96.60%) and precisions (94.21% and 96.55%) in feature extraction than the traditional CapsNet (78.93% and 97.30%). With augmentation of the two datasets, the proposed method achieves the best accuracies (98.48% and 98.82%) and precisions (98.88% and 98.58%) in feature extraction compared to the traditional CapsNet. Our results indicate that the proposed method can successfully improve brain tumor classification and support radiologists in medical diagnosis.
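The atrous (dilated) convolutions mentioned above enlarge a filter's receptive field without adding parameters, by inserting zeros between the kernel taps. The following NumPy sketch illustrates this effect in isolation; it is not the authors' implementation, and the function names (`dilate_kernel`, `conv2d_valid`) are chosen here for illustration.

```python
import numpy as np

def dilate_kernel(kernel, rate):
    """Insert (rate - 1) zeros between taps of a square kernel.

    The parameter count is unchanged, but the effective receptive
    field grows to k + (k - 1) * (rate - 1).
    """
    k = kernel.shape[0]
    size = k + (k - 1) * (rate - 1)
    out = np.zeros((size, size), dtype=kernel.dtype)
    out[::rate, ::rate] = kernel
    return out

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D correlation, for demonstration only."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging kernel covers 3x3, 5x5, and 9x9 regions
# at dilation rates 1, 2, and 4, with the same 9 weights.
kernel = np.ones((3, 3)) / 9.0
for rate in (1, 2, 4):
    print(rate, dilate_kernel(kernel, rate).shape[0])
```

This spatial-coverage gain without parameter growth is what lets the multi-lane design fuse features at several receptive-field sizes while keeping the capsule layers' input resolution intact.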