Sparse machine learning methods have provided substantial benefits to quantitative structure–property modeling, as they make models easier to interpret and often improve their predictivity. Sparsity is usually induced via Bayesian regularization with sparsity-inducing priors, or via expectation maximization algorithms employing sparse priors. The focus to date has been on using sparse methods to model continuous data and to carry out sparse feature selection. We describe the relevance vector machine (RVM), a sparse Bayesian analogue of the support vector machine (SVM), which is one of the most widely used classification machine learning methods in QSAR and QSPR. We illustrate the favorable properties of the RVM by modeling eight data sets using SVM, RVM, and another sparse Bayesian machine learning method, the Bayesian regularized artificial neural network with Laplacian prior (BRANNLP). We show that RVM models are substantially sparser than the corresponding SVM models while achieving similar or superior predictive performance.
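The sparsity contrast between the two model families can be sketched in a few lines. The snippet below is an illustrative toy example, not the paper's actual protocol or data: it uses scikit-learn's `ARDRegression` (automatic relevance determination, the same sparse Bayesian prior underlying the RVM) applied to an RBF kernel design matrix as a stand-in for an RVM, and compares the number of surviving basis functions ("relevance vectors") with the support-vector count of an SVM fit to the same synthetic data. The kernel width, noise level, and pruning threshold are all arbitrary choices for illustration.

```python
# Toy RVM-vs-SVM sparsity comparison (illustrative only; synthetic 1-D data).
# An RVM is approximated here by ARD regression on a kernel design matrix,
# since ARD uses the same sparsity-inducing Bayesian prior as the RVM.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)

# Kernel design matrix: one RBF basis function centered on each training point.
Phi = rbf_kernel(X, X, gamma=0.5)
rvm = ARDRegression().fit(Phi, y)          # ARD prior prunes most basis weights
n_rv = int(np.sum(np.abs(rvm.coef_) > 1e-3))  # surviving "relevance vectors"

svm = SVR(kernel="rbf", gamma=0.5).fit(X, y)
n_sv = len(svm.support_)                   # support vectors retained by the SVM

print(f"relevance vectors: {n_rv}, support vectors: {n_sv}")
```

In practice the ARD prior drives the weight on almost every basis function to zero, so the pruned model uses far fewer kernel centers than it was given, mirroring the RVM-versus-SVM sparsity result reported in the abstract.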