Abstract

Neuroanatomical segmentation in T1-weighted magnetic resonance imaging of the brain is a prerequisite for quantitative morphological measurements, as well as an essential element of general pre-processing pipelines. While recent fully automated segmentation methods based on convolutional neural networks have shown great potential, these methods nonetheless suffer from severe performance degradation when there are mismatches between the training (source) and testing (target) domains (e.g. due to different scanner acquisition protocols, or due to anatomical differences in the respective populations under study). This work introduces a new method for unsupervised domain adaptation which improves performance in challenging cross-domain applications without requiring any additional annotations on the target domain. Using a previously validated state-of-the-art segmentation method based on a context-augmented convolutional neural network, we first demonstrate that networks with better domain generalizability can be trained using extensive data augmentation with label-preserving transformations that mimic differences between domains. Second, we incorporate unlabelled target-domain samples into training using a self-ensembling approach, demonstrating additional performance gains and narrowing the remaining gap relative to fully supervised training on the target domain.
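The self-ensembling approach mentioned above is commonly realized as a mean-teacher scheme: a teacher model is maintained as an exponential moving average (EMA) of the student's weights, and the student is penalized for disagreeing with the teacher on unlabelled target-domain inputs. The following is a minimal NumPy sketch of those two ingredients, not the paper's actual implementation; the function names, the EMA decay value, and the use of a mean-squared-error consistency term are illustrative assumptions.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Update the teacher as an exponential moving average of the student.

    teacher_t = alpha * teacher_{t-1} + (1 - alpha) * student_t
    (alpha is an assumed decay value; parameters are name->array dicts.)
    """
    return {name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
            for name in teacher_params}

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    """Unsupervised consistency term on unlabelled target-domain samples:
    mean squared error between student and teacher class-probability maps."""
    return float(np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2))

# Toy usage: per-voxel logits for a batch of 4 voxels and 3 tissue classes.
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(4, 3))
teacher_logits = student_logits + 0.1 * rng.normal(size=(4, 3))
loss = consistency_loss(student_logits, teacher_logits)

teacher = {"w": np.zeros(2)}
student = {"w": np.ones(2)}
teacher = ema_update(teacher, student, alpha=0.9)  # teacher["w"] becomes [0.1, 0.1]
```

In training, the total objective would combine the usual supervised segmentation loss on labelled source-domain data with this consistency term on unlabelled target-domain data, so the target domain shapes the network without requiring target annotations.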