One of the popular areas of modern applied nonlinear analysis is the study of variational inequalities. Many important problems of operations research and mathematical physics can be formulated as variational inequalities. With the advent of generative adversarial networks, interest in algorithms for solving variational inequalities has also grown in the machine learning community. This paper studies three new algorithms with Bregman projection for solving variational inequalities in Hilbert space. The first algorithm modifies the two-stage Bregman method with a low-cost step size update rule that does not require prior knowledge of the Lipschitz constant of the operator. The second algorithm, which we call the operator extrapolation algorithm, is obtained by replacing the Euclidean metric in the Malitsky–Tam method with the Bregman divergence. An attractive feature of this algorithm is that only one Bregman projection onto the feasible set is computed per iteration. The third algorithm is an adaptive version of the second, whose step size update rule requires neither knowledge of the Lipschitz constant nor evaluation of the operator at additional points. Convergence theorems are proved for variational inequalities with pseudo-monotone, Lipschitz-continuous, and sequentially weakly continuous operators acting in a Hilbert space.
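To make the operator extrapolation idea concrete, the following is a minimal sketch, not the paper's method: a Malitsky–Tam-style iteration in which the Euclidean projection is replaced by a Bregman projection generated by the entropic (KL) divergence on the probability simplex, where that projection has a closed-form multiplicative update. The test problem (a skew-symmetric rock–paper–scissors operator), the constant step size, and all names here are illustrative assumptions; in particular, the constant step stands in for the paper's adaptive rule.

```python
import numpy as np

def kl_bregman_step(x, g, lam):
    """Bregman projection step with the entropic (KL) divergence:
    argmin_{y in simplex} <lam*g, y> + KL(y, x).
    Closed-form multiplicative update, so no inner loop is needed."""
    z = x * np.exp(-lam * g)
    return z / z.sum()

def operator_extrapolation(F, x0, lam, iters):
    """Illustrative operator-extrapolation iteration: the extrapolated
    value 2*F(x_k) - F(x_{k-1}) is fed into a single Bregman projection,
    so each iteration needs only one projection onto the feasible set
    and one new operator evaluation. Constant step size is an
    assumption made for this sketch."""
    x, F_prev = x0, F(x0)
    for _ in range(iters):
        Fx = F(x)
        x = kl_bregman_step(x, 2.0 * Fx - F_prev, lam)  # extrapolated operator
        F_prev = Fx
    return x

# Toy monotone VI on the simplex: F(x) = M x with a skew-symmetric
# rock-paper-scissors matrix; its unique solution is the uniform point.
M = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])
F = lambda x: M @ x
x = operator_extrapolation(F, np.array([0.6, 0.3, 0.1]), lam=0.2, iters=5000)
```

On this toy problem the plain (non-extrapolated) mirror step is known to cycle around the equilibrium, while the extrapolated iterate drifts toward the uniform solution, which is exactly the behavior the operator extrapolation term is meant to provide.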