THIS PAPER IS ELIGIBLE FOR THE STUDENT PAPER AWARD.

We study the precise asymptotic behavior of stochastic mirror descent (SMD) algorithms for over-parameterized binary linear classification via regression. In this over-parameterized regime, the training loss has infinitely many global minima, which define a manifold of interpolating solutions. SMD exhibits implicit regularization and finds the interpolating solution that is closest to the initial weight vector in the Bregman divergence induced by the mirror's potential function. It has been empirically observed that different potentials lead to different generalization errors and different distributions of the weights. In this paper, we derive closed-form expressions for the distribution of the solution and characterize its generalization performance on data generated by a Gaussian mixture model (GMM). The theory closely matches empirical simulations and can provide insights into the generalization performance of SMD on nonlinear models, such as those arising in deep learning.
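
For context, the following is a minimal sketch of the standard SMD update and its known implicit-bias characterization for linear interpolation; the notation ($\psi$, $D_\psi$, $w_0$, $\eta$, $i_t$) is illustrative and not taken from the paper's body. With a strictly convex, differentiable mirror potential $\psi$ and per-sample loss $\mathcal{L}_{i_t}$, the update and the induced Bregman divergence read
\begin{align}
  \nabla\psi(w_{t+1}) &= \nabla\psi(w_t) - \eta\,\nabla \mathcal{L}_{i_t}(w_t), \\
  D_\psi(w, w') &= \psi(w) - \psi(w') - \langle \nabla\psi(w'),\, w - w' \rangle .
\end{align}
For linear models, when SMD initialized at $w_0$ converges to an interpolating solution (under suitable step-size conditions), that solution is
\begin{equation}
  w_\infty \;=\; \operatorname*{arg\,min}_{w \,:\, x_i^\top w = y_i \ \forall i}\; D_\psi(w, w_0),
\end{equation}
which is the "closest interpolator in Bregman divergence" property referred to above; e.g., $\psi(w) = \tfrac{1}{2}\|w\|_2^2$ recovers SGD and the minimum-$\ell_2$-distance interpolator.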