Accurate subject-to-template alignment requires deformation models with many degrees of freedom to account for the large anatomical variability across subjects. Without proper regularization, such models tend to match the images aggressively, often producing unrealistic transformations, especially in the presence of noise or pathologies such as lesions. To improve the robustness of deformable registration, we propose a novel framework that makes use of statistical deformation models (SDMs) for diffeomorphisms. We present a general approach to constructing such SDMs and detail how to use them to regularize a given transformation. To preserve the diffeomorphic property while making use of linear statistical models, we convert the deformation field into a stationary velocity field through the logarithm operator. To enable learning in a high-dimensional, low-sample-size setting, we model the high-dimensional velocity field as a collection of mutually constrained local velocity fields. For each local field, a low-dimensional representation is learned using principal component analysis (PCA). To capture possible dependencies across local transformations, canonical correlation analysis (CCA) is performed on each pair of local velocities in the learned low-dimensional space. Experiments on healthy brain images show that the model captures the normative variation of subject-to-template deformation fields with sub-millimeter accuracy. The method is validated on simulated brain lesion images and tested on real brain images with pathologies, producing significantly smoother and more robust results than its non-statistical counterpart.