Gaussian mixture models (GMMs) are a statistically mature tool for clustering in data science, with numerous successful applications in science and engineering. The parameters of a GMM are typically estimated from training data with the iterative expectation-maximization (EM) algorithm, which requires the number of Gaussian components to be specified a priori. In this study, we propose two algorithms rooted in numerical algebraic geometry (NAG), namely, an area-based algorithm and a local maxima algorithm, to identify the optimal number of components. The area-based algorithm transforms several GMMs with varying numbers of components into sets of equivalent polynomial regression splines. It then uses homotopy continuation methods to evaluate the resulting splines and identify the number of components that is most compatible with the gradient data. The local maxima algorithm forms a set of polynomials by fitting a smoothing spline to a dataset. It then uses NAG to solve the system of first-derivative equations and locate the local maxima of the resulting smoothing spline, whose count gives the number of mixture components; the algorithm also identifies the locations of the centers of the Gaussian components. Using a real-world case study in automotive manufacturing and extensive simulations, we demonstrate that the performance of the proposed algorithms is comparable to that of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which are popular methods in the literature. We also show that the proposed algorithms are more robust than AIC and BIC when the Gaussian assumption is violated.
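To make the local maxima idea concrete, the following is a minimal sketch, not the paper's method: it fits a polynomial curve (a single-piece stand-in for the smoothing spline) to synthetic bimodal data with known modes at x = -2 and x = 2, solves the first-derivative equation for stationary points, and keeps those with negative second derivative as local maxima. The paper instead uses smoothing splines and homotopy-continuation solvers from NAG; `numpy`'s companion-matrix root finder is used here only for illustration.

```python
import numpy as np

# Synthetic bimodal "density" curve: a quartic with maxima at x = +/-2
# and a dip at x = 0 (a stand-in for a two-component mixture density).
x = np.linspace(-3.0, 3.0, 61)
y = -(x**2 - 4.0) ** 2 + 16.0

# Fit a degree-4 polynomial to the (x, y) data; the fit is essentially
# exact here because the data themselves are quartic.
p = np.polynomial.Polynomial.fit(x, y, deg=4)

# Stationary points: real roots of the first derivative p'(x).
dp = p.deriv()
stationary = dp.roots()
stationary = stationary[np.isclose(np.imag(stationary), 0.0)].real

# Classify with the second derivative: local maxima satisfy p''(x) < 0.
d2p = dp.deriv()
maxima = sorted(float(r) for r in stationary if d2p(r) < 0)

# The number of local maxima estimates the number of mixture components,
# and their locations estimate the component centers.
print(len(maxima), [round(m, 3) for m in maxima])
```

In this toy example, the two recovered maxima sit at the known mode locations, so the estimated number of components is two. For genuine spline systems with many polynomial pieces, the root-finding step is where the NAG machinery replaces the single-polynomial solver used above.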