Human Activity Recognition (HAR) has been widely addressed with deep learning techniques. However, most prior research applies a single, generic approach (signal processing and deep learning) to all human activities, including postures and gestures. These types of activity have highly diverse motion characteristics, which can be captured with wearable sensors placed on the user’s body: repetitive movements such as running or cycling exhibit periodic patterns over time and generate harmonics in the frequency domain; postures such as sitting or lying are characterized by a fixed position with occasional positional changes; and gestures, or non-repetitive movements, consist of an isolated movement usually performed by a limb. This work proposes a classifier module that performs an initial classification among these types of movement, allowing the most appropriate signal-processing and deep-learning approach to be applied afterwards to each type. The classifier was evaluated on the PAMAP2 and OPPORTUNITY datasets with a subject-wise cross-validation methodology. These datasets contain recordings from inertial sensors on the hands, arms, chest, hip, and ankles, collected in a non-intrusive way. On PAMAP2, the baseline approach of classifying the 12 activities using 5-s windows in the frequency domain achieved an accuracy of 85.26 ± 0.25%. An initial classifier module, however, could distinguish between repetitive movements and postures using 5-s windows with higher performance. A specific window size, signal format, and deep learning architecture were then used for each movement-type module, yielding a final accuracy of 90.09 ± 0.35% (an absolute improvement of 4.83%).
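The two-stage idea described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual pipeline: the stage-1 decision is approximated here by a simple spectral-peak heuristic (a periodic signal concentrates energy in a dominant frequency bin, a posture does not), and the stage-2 specialist models (`specialists`) are hypothetical stand-ins for the per-type deep networks evaluated in the paper.

```python
import numpy as np

def movement_type(window, threshold=0.1):
    """Stage 1 (illustrative heuristic, not the paper's classifier):
    label a 1-D sensor window as 'repetitive' or 'posture' based on how much
    spectral energy is concentrated in the dominant non-DC frequency bin."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    if spectrum.sum() == 0:          # perfectly constant signal -> posture
        return "posture"
    peak_ratio = spectrum.max() / spectrum.sum()
    return "repetitive" if peak_ratio > threshold else "posture"

def classify(window, specialists):
    """Stage 2: route the window to the specialist model for its movement type."""
    return specialists[movement_type(window)](window)

# Hypothetical specialist models, one per movement type (stand-ins only).
specialists = {
    "repetitive": lambda w: "running",
    "posture": lambda w: "sitting",
}

# Synthetic 5-s windows sampled at an assumed 100 Hz.
t = np.arange(500) / 100.0
running = np.sin(2 * np.pi * 2.4 * t)                      # strong 2.4 Hz periodicity
sitting = 0.01 * np.random.default_rng(0).standard_normal(500)  # near-constant signal

print(classify(running, specialists))   # → running
print(classify(sitting, specialists))   # → sitting
```

In the paper itself, stage 1 is a learned classifier and each stage-2 module uses its own window size, signal format, and architecture; the routing structure, however, is the same as in this sketch.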