The development of a Machine Learning (ML) model depends on many training variables. Both architecture-related variables, such as initial weights and hyperparameters, and general variables, such as datasets and framework versions, can affect model metrics and experiment reproducibility. An application cannot be considered trustworthy if it produces good results only in a specific environment. Therefore, to avoid reproducibility issues, good practices must be adopted. This paper reports a practical experience in developing a machine learning application using a workflow that ensures the reproducibility of experiments and, consequently, the application's reliability, while improving team productivity.