MO4.R9.4

Efficient Nonconvex Optimization for Two-way Sparse Reduced-Rank Regression

Cheng Cheng, Ziping Zhao, ShanghaiTech University, China

Session:
AMP, Sparsity and Sketching

Track:
11: Information Theory and Statistics

Location:
Lamda

Presentation Time:
Mon, 8 Jul, 17:25 - 17:45

Session Chair:
Ramji Venkataramanan, University of Cambridge

Abstract
We consider the problem of two-way sparse reduced-rank regression (TSRRR), whose goal is to estimate the coefficient matrix in a multiple-response linear regression model when that matrix is simultaneously low-rank and two-way sparse (i.e., row- and column-sparse). We formulate TSRRR as a nonconvex optimization problem and propose an efficient and scalable algorithm dubbed ScaledGDT (Scaled Gradient Descent with hard Thresholding). To demonstrate its efficiency and scalability, we prove that the iterates of ScaledGDT converge linearly, at a rate independent of the condition number of the coefficient matrix, to a region around the optimal solution whose radius is on the order of the statistical error. Moreover, the statistical error rate achieved by ScaledGDT is shown to be nearly minimax optimal, confirming its estimation accuracy. Simulations validate the competitive performance of the proposed algorithm compared with existing methods.
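The abstract does not spell out the update rule, but a scaled-gradient-descent-with-hard-thresholding scheme of the kind it describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the factorization C = U Vᵀ, the preconditioners (VᵀV)⁻¹ and (UᵀU)⁻¹ (the "scaled" steps that make convergence insensitive to the condition number of C), the row-wise hard-thresholding step enforcing two-way sparsity, and all function names, parameters, and the initialization are assumptions.

```python
import numpy as np

def hard_threshold_rows(M, s):
    """Keep the s rows of M with the largest l2 norms; zero out the rest.

    Row sparsity of U gives row sparsity of C = U V^T; row sparsity of V
    gives column sparsity of C.
    """
    norms = np.linalg.norm(M, axis=1)
    keep = np.argsort(norms)[-s:]
    out = np.zeros_like(M)
    out[keep] = M[keep]
    return out

def scaled_gdt(X, Y, r, s_row, s_col, eta=0.5, iters=300):
    """Hypothetical sketch of scaled gradient descent with hard thresholding
    for two-way sparse reduced-rank regression: Y ~ X C, with C = U V^T
    low-rank, row-sparse, and column-sparse.
    """
    n = X.shape[0]
    # Simplified spectral initialization: truncated SVD of X^T Y / n,
    # which approximates C when X^T X / n is close to the identity.
    Uf, S, Vt = np.linalg.svd(X.T @ Y / n, full_matrices=False)
    U = Uf[:, :r] * np.sqrt(S[:r])
    V = Vt[:r].T * np.sqrt(S[:r])
    eye = 1e-10 * np.eye(r)  # tiny ridge so the r x r inverses always exist
    for _ in range(iters):
        R = X @ U @ V.T - Y                   # residual of the fit
        gU = X.T @ R @ V / n                  # gradient of the loss w.r.t. U
        gV = R.T @ X @ U / n                  # gradient of the loss w.r.t. V
        # Scaled (preconditioned) gradient steps: the (V^T V)^{-1} and
        # (U^T U)^{-1} factors remove the dependence on cond(C).
        U_new = U - eta * gU @ np.linalg.inv(V.T @ V + eye)
        V_new = V - eta * gV @ np.linalg.inv(U.T @ U + eye)
        # Hard thresholding enforces the two-way sparsity of C.
        U = hard_threshold_rows(U_new, s_row)
        V = hard_threshold_rows(V_new, s_col)
    return U @ V.T
```

On well-conditioned synthetic data (Gaussian design, noiseless responses, true rank and sparsity levels supplied), this sketch recovers a planted two-way sparse low-rank coefficient matrix to small relative error; the per-iteration cost is dominated by the matrix products with X, plus two r x r inversions.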