We prove the following discrete generalised Entropy Power Inequality (EPI) for isotropic log-concave sums of independent and identically distributed random vectors $X_1,\dots,X_{n+1}$ on $\mathbb{Z}^d$: $$ H\Bigl(\sum_{i=1}^{n+1}{X_i}\Bigr) \geq H\Bigl(\sum_{i=1}^{n}{X_i}\Bigr) + \frac{d}{2}\log{\Bigl(\frac{n+1}{n}\Bigr)} + o(1), $$ where the $o(1)$-term vanishes as $H(X_1) \to \infty$. Moreover, for the $o(1)$-term we obtain the rate of convergence $O\Bigl(H(X_1)\,e^{-\frac{1}{d}H(X_1)}\Bigr)$, where the implied constants depend on $d$ and $n$. This generalises to $\mathbb{Z}^d$ the one-dimensional result of the second named author (2023). As in dimension one, our strategy is to establish that the discrete entropy is close to the differential entropy of the sum after adding $n$ independent and identically distributed uniform random vectors on $[0,1]^d$, and then to apply the continuous EPI. However, in dimension $d \ge 2$, more involved tools from convex geometry are needed. One of our technical tools is a multi-dimensional analogue of a result of Bobkov, Marsiglietti and Melbourne (2022), which bounds the maximum probability of a log-concave p.m.f. in terms of the inverse of the determinant of its covariance matrix and may be of independent interest.
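In outline (a schematic of the strategy described above, not the precise statement; the notation $S_n := \sum_{i=1}^{n} X_i$ and the i.i.d. uniform random vectors $U_1,\dots,U_{n+1}$ on $[0,1]^d$, taken independent of the $X_i$, are introduced here only for illustration): the key approximation step asserts that $$ H(S_n) = h\Bigl(S_n + \sum_{i=1}^{n} U_i\Bigr) + o(1) = h\Bigl(\sum_{i=1}^{n}(X_i + U_i)\Bigr) + o(1), $$ where $h$ denotes differential entropy, and, since the $X_i + U_i$ are i.i.d. continuous random vectors, the generalised EPI for continuous i.i.d. summands (due to Artstein, Ball, Barthe and Naor) yields $$ h\Bigl(\sum_{i=1}^{n+1}(X_i + U_i)\Bigr) \geq h\Bigl(\sum_{i=1}^{n}(X_i + U_i)\Bigr) + \frac{d}{2}\log\Bigl(\frac{n+1}{n}\Bigr). $$ Combining the two displays and converting back to $H(S_{n+1})$ and $H(S_n)$ gives the stated inequality; the $o(1)$-terms in the approximation step are where the convex-geometric tools enter for $d \ge 2$.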