Studying complex relations in multivariate datasets is a common task in psychological science. Recently, the Gaussian graphical model has emerged as an increasingly popular model for characterizing the conditional dependence structure of random variables. Although the graphical lasso ($\ell_1$-regularization; glasso) is the most well-known estimator across the sciences, it has several drawbacks that make it less than ideal for model selection. Alternative forms of regularization have since been developed specifically to overcome issues inherent to the $\ell_1$-penalty; to date, however, this information has not been synthesized. This paper provides a comprehensive survey of nonconvex regularization for directly estimating the inverse covariance matrix, spanning from the smoothly clipped absolute deviation (SCAD) penalty to continuous approximations of the $\ell_0$-penalty (i.e., best-subset selection). A common thread shared by these penalties is that they all enjoy the oracle properties, that is, they perform as though the \emph{true} generating model were known in advance. To examine whether these theoretical properties hold in practice, I conducted extensive numerical experiments, which indicated performance superior to glasso and more than competitive with non-regularized model selection, all while remaining computationally feasible for many variables. In addition, the important topics of tuning parameter selection and statistical inference in regularized models are reviewed. The penalties are then employed to estimate the dependence structure of post-traumatic stress disorder symptoms. The discussion includes several ideas for future research, along with a wealth of information to facilitate their study. I have implemented the methods in the
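To fix ideas, the following is a minimal sketch (in Python, not the paper's implementation) of the quantity at the heart of the Gaussian graphical model: zeros in the precision (inverse covariance) matrix encode conditional independencies, and standardizing its off-diagonal entries yields partial correlations. The data and sample size here are purely illustrative.

```python
import numpy as np

# Illustrative sketch: estimate the precision matrix without regularization
# and convert it to partial correlations. In a Gaussian graphical model,
# a zero partial correlation between two variables means they are
# conditionally independent given the remaining variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))      # n = 500 observations, p = 4 variables
S = np.cov(X, rowvar=False)        # sample covariance matrix
Theta = np.linalg.inv(S)           # non-regularized precision estimate
d = np.sqrt(np.diag(Theta))
pcor = -Theta / np.outer(d, d)     # standardize off-diagonal entries
np.fill_diagonal(pcor, 1.0)        # partial correlation matrix
```

The penalized estimators surveyed in the paper replace the plain inverse above with a regularized estimate, shrinking small entries of `Theta` exactly to zero so that the resulting graph is sparse.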