Deep Learning (DL) models, already widespread across many domains, are increasingly applied to posture recognition. This work investigates four DL architectures for posture recognition: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), hybrid CNN-LSTM, and Transformer. Working postures from agriculture and construction were addressed as use cases, with an inertial dataset acquired while typical tasks of these sectors were simulated in circuits. Since model performance depends heavily on the choice of hyperparameters, a grid search was conducted to find the optimal combination. An extensive analysis of the effects of the hyperparameter combinations is presented, identifying general tendencies. Moreover, to shed light on the black-box nature of DL models, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) explainability method to the CNN's outputs to better understand the model's decision-making, in terms of the most important sensors and time steps for each window output. An innovative combination of CNN and LSTM was implemented for the hybrid architecture, using the convolutional feature maps as LSTM inputs and fusing both subnetworks' outputs with weights learned during training. All architectures successfully recognized the eight posture classes, with the best model of each architecture exceeding a 91.5% F1-score on the test set. The top F1-score of 94.31%, with an inference time of only 2.96 ms, was achieved by a hybrid CNN-LSTM model.
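To illustrate the fusion idea described above (convolutional feature maps fed to the LSTM, and the two subnetworks' outputs combined with weights learned during training), the following is a minimal PyTorch-style sketch. The layer sizes, number of inertial channels, window length, and softmax normalization of the fusion weights are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a hybrid CNN-LSTM with learned output fusion.
# Layer sizes, n_channels, and n_classes are assumptions for illustration.
import torch
import torch.nn as nn


class HybridCNNLSTM(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 8, hidden: int = 64):
        super().__init__()
        # CNN subnetwork: 1-D convolutions over the time axis of each window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.cnn_head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )
        # LSTM subnetwork fed with the convolutional feature maps.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.lstm_head = nn.Linear(hidden, n_classes)
        # Fusion weights for the two subnetwork outputs, learned during training.
        self.fusion = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x):  # x: (batch, n_channels, time_steps)
        feats = self.cnn(x)                            # (batch, 64, time_steps)
        cnn_logits = self.cnn_head(feats)              # (batch, n_classes)
        # Feature maps as LSTM inputs: (batch, time_steps, 64).
        lstm_out, _ = self.lstm(feats.transpose(1, 2))
        lstm_logits = self.lstm_head(lstm_out[:, -1])  # last time step
        w = torch.softmax(self.fusion, dim=0)          # keep weights normalized
        return w[0] * cnn_logits + w[1] * lstm_logits


# Example: a batch of 8 windows, 6 inertial channels, 100 time steps.
logits = HybridCNNLSTM()(torch.randn(8, 6, 100))
print(logits.shape)  # torch.Size([8, 8])
```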