Mini-batch Least-Squares Reverse Time Migration In A Deep Learning Framework
Migration techniques are an integral part of seismic imaging workflows. Least-squares reverse time migration (LSRTM) overcomes some of the shortcomings of conventional migration algorithms by compensating for uneven illumination and removing sampling artifacts, thereby increasing spatial resolution. However, the computational cost of iterative LSRTM is high, and convergence can be slow in complex media. We implement pre-stack LSRTM in a deep learning framework and adopt strategies from the data science domain to accelerate convergence. The proposed hybrid framework combines existing physics-based models with machine learning optimizers to achieve better solutions at lower cost. Using a time-domain formulation, we show that mini-batch gradients, computed from a subset of the total shots at each iteration, reduce the computational cost. The mini-batch approach not only reduces source cross-talk but is also less memory intensive. Combining mini-batch gradients with deep learning optimizers and loss functions improves the efficiency of LSRTM. Deep learning optimizers such as adaptive moment estimation (Adam) are generally well suited to noisy and sparse data. We compare different optimizers and demonstrate their efficacy in mitigating migration artifacts. To further accelerate the inversion, we adopt a regularized Huber loss function in conjunction with these optimizers. We apply these techniques to the 2D Marmousi and 3D SEG/EAGE salt models and show improvements over conventional LSRTM baselines. The proposed approach achieves higher spatial resolution in less computation time, as measured by qualitative and quantitative evaluation metrics.
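The core ingredients named in the abstract (mini-batch gradients over random shot subsets, an Adam optimizer, and a Huber loss) can be illustrated on a toy linear inverse problem. The sketch below is not the paper's wave-equation implementation: it stands in a linear operator `A` for the Born modeling operator, treats each row as one "shot", and uses hypothetical function and parameter names (`adam_minibatch_inversion`, `huber_loss_grad`, `delta`, `lr`) chosen for illustration only.

```python
import numpy as np

def huber_loss_grad(residual, delta=1.0):
    """Gradient of the Huber loss w.r.t. the residual:
    quadratic (linear gradient) for small residuals,
    linear (constant-magnitude gradient) for large outliers."""
    return np.where(np.abs(residual) <= delta, residual, delta * np.sign(residual))

def adam_minibatch_inversion(A, d, n_iter=2000, batch_size=8, lr=0.02,
                             beta1=0.9, beta2=0.999, eps=1e-8, delta=1.0, seed=0):
    """Fit d = A m with mini-batch Adam and a Huber data-misfit.
    Each row of A plays the role of one shot record; a random subset
    of shots is used to form the gradient at every iteration."""
    rng = np.random.default_rng(seed)
    n_shots, n_model = A.shape
    m = np.zeros(n_model)      # current model estimate (reflectivity analogue)
    v1 = np.zeros(n_model)     # first-moment (mean) estimate
    v2 = np.zeros(n_model)     # second-moment (uncentered variance) estimate
    for t in range(1, n_iter + 1):
        batch = rng.choice(n_shots, size=batch_size, replace=False)
        r = A[batch] @ m - d[batch]                       # mini-batch residual
        g = A[batch].T @ huber_loss_grad(r, delta) / batch_size
        v1 = beta1 * v1 + (1 - beta1) * g                 # Adam moment updates
        v2 = beta2 * v2 + (1 - beta2) * g**2
        m_hat = v1 / (1 - beta1**t)                       # bias correction
        v_hat = v2 / (1 - beta2**t)
        m -= lr * m_hat / (np.sqrt(v_hat) + eps)          # per-parameter step
    return m

# Usage on a synthetic problem: recover a known model from noiseless data.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 10))     # 64 "shots", 10 model parameters
m_true = rng.standard_normal(10)
d = A @ m_true
m_est = adam_minibatch_inversion(A, d)
```

Only `batch_size` rows of `A` are touched per iteration, which is the source of both the cost and memory savings the abstract claims; the Huber gradient additionally caps the influence of outlier residuals, which is why it pairs well with noisy data.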