Approximate Message Passing (AMP)-type algorithms are widely used for signal recovery in high-dimensional noisy linear systems. Recently, a principle called Memory AMP (MAMP) emerged. Leveraging this principle, a Bayes-optimal MAMP algorithm, GD-MAMP, was developed, inheriting the strengths of the AMP and OAMP/VAMP algorithms. In this paper, we first provide an overflow-avoiding GD-MAMP (OA-GD-MAMP) to address the overflow problem that arises when some intermediate variables exceed the range of floating-point numbers. Second, we develop a complexity-reduced GD-MAMP (CR-GD-MAMP) that reduces the number of matrix-vector products per iteration by 1/3 (from 3 to 2) without significantly slowing down convergence.
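As a minimal, generic illustration of the overflow issue mentioned above (not the paper's actual OA-GD-MAMP scheme), the sketch below shows how a running product of hypothetical per-iteration coefficients can exceed the double-precision range after many iterations, and how accumulating in the log domain keeps the quantities finite. All coefficient values and names here are assumptions chosen purely for illustration.

```python
import numpy as np

# Generic illustration of floating-point overflow in long-memory iterations
# (NOT the paper's OA-GD-MAMP construction): if each iteration contributes a
# moderately large multiplicative factor, the running product can exceed the
# float64 range (~1.8e308) after enough iterations.

factors = np.full(600, 4.0e2)          # hypothetical per-iteration coefficients

naive = np.cumprod(factors)             # direct running product
print(np.isinf(naive[-1]))              # True: the product overflows to inf

# One standard workaround: keep the accumulation in the log domain and only
# exponentiate (or normalize) when the value is actually needed.
log_prod = np.cumsum(np.log(factors))   # finite for all iterations
print(np.isfinite(log_prod[-1]))        # True
```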