Technical Program

Paper Detail

Paper ID: TN2.2
Paper Title: The Complexity of Learning Deeply-Sparse Signal Representations
Authors: Demba Ba, Harvard University, United States
Session: TN2: Mathematics of Deep Learning
Location: Salle Route du Rhum
Session Time: Tuesday, 17 December, 16:00 - 17:20
Presentation Time: Tuesday, 17 December, 16:20 - 16:40
Presentation: Lecture
Topic: Special Sessions: Mathematical Foundations of Deep Learning
Abstract: Two important problems in neuroscience are to understand 1) how the brain represents sensory signals hierarchically and 2) how populations of neurons encode stimuli and how this encoding relates to behavior. My talk will focus on the tools I have developed to answer the first question: first, because they provide theoretical insight into the complexity of learning deep neural networks, and second, because the framework behind these tools has implications for the principles of hierarchical processing in the brain. It is by now well understood that there is a strong parallel between deep neural network architectures and sparse recovery and estimation; namely, a deep neural network architecture with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable. I show that if the measurement matrices in the cascaded sparse coding model (a) satisfy the restricted isometry property (RIP) and (b) all have sparse columns except for the last, they can be recovered with high probability in the absence of noise using a sequential alternating-optimization algorithm. The method of choice in deep learning for this problem is to train a deep auto-encoder. My main result states that the complexity of learning this deep sparse coding model is given by the maximum, across layers, of the product of the number of active neurons (sparsity) and the embedding dimension (of the sparse vector). I will demonstrate the usefulness of these ideas by showing that one can train auto-encoders to learn interpretable convolutional dictionaries in two applications: deconvolution of electrophysiology data and image denoising.
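
The parallel between ReLU networks and cascaded sparse coding described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the speaker's implementation; the shapes, bias values, and the single-step pursuit per layer are all assumptions. It shows how one non-negative thresholding step per layer of the cascade reproduces a feed-forward ReLU network whose output is the (sparse) top-layer code.

    import numpy as np

    def relu(z):
        # Rectified linear unit.
        return np.maximum(z, 0.0)

    def encode_layer(x, A, bias):
        # One step of non-negative thresholded pursuit from a zero start:
        # z = ReLU(A^T x - bias). This is exactly a fully connected ReLU
        # layer with weights A^T, so stacking these steps yields a deep
        # ReLU network.
        return relu(A.T @ x - bias)

    def cascade_forward(x0, dictionaries, biases):
        # Assumed generative model: x_{l-1} = A_l z_l, with every
        # intermediate code z_l sparse and unobserved; only x_0 is
        # observed. Layer-by-layer inference of the codes is the
        # network's forward pass.
        z = x0
        for A, bias in zip(dictionaries, biases):
            z = encode_layer(z, A, bias)
        return z

    # Example with arbitrary (hypothetical) dimensions:
    rng = np.random.default_rng(0)
    A1 = rng.standard_normal((64, 128))
    A2 = rng.standard_normal((128, 256))
    x0 = rng.standard_normal(64)
    z2 = cascade_forward(x0, [A1, A2], biases=[0.1, 0.1])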
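
The sequential alternating-optimization algorithm referenced in the abstract can likewise be sketched schematically. The version below is a reconstruction under stated assumptions, not the paper's algorithm: it uses plain l1-regularized ISTA for the sparse coding step, a normalized least-squares dictionary update, and a noiseless batch of observations X0, and it does not check the conditions (RIP, sparse columns) on which the abstract's recovery guarantee depends.

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of the l1 norm.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista(X, A, lam, n_iter=50):
        # Sparse coding step: min_Z 0.5*||X - A@Z||_F^2 + lam*||Z||_1.
        L = max(np.linalg.norm(A, 2) ** 2, 1e-12)  # gradient Lipschitz constant
        Z = np.zeros((A.shape[1], X.shape[1]))
        for _ in range(n_iter):
            Z = soft_threshold(Z + A.T @ (X - A @ Z) / L, lam / L)
        return Z

    def dictionary_update(X, Z):
        # Least-squares dictionary update with column normalization.
        A = X @ np.linalg.pinv(Z)
        return A / np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)

    def alternating_min(X, n_atoms, lam, n_outer=20, seed=0):
        # Alternate between sparse coding (A fixed) and dictionary
        # update (Z fixed).
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((X.shape[0], n_atoms))
        A /= np.linalg.norm(A, axis=0, keepdims=True)
        for _ in range(n_outer):
            Z = ista(X, A, lam)
            A = dictionary_update(X, Z)
        return A, Z

    def sequential_recovery(X0, atoms_per_layer, lams):
        # Sequential scheme: recover A_1 from the observations, then
        # treat the estimated layer-1 codes as "observations" for
        # layer 2, and so on down the cascade.
        dictionaries, X = [], X0
        for n_atoms, lam in zip(atoms_per_layer, lams):
            A, Z = alternating_min(X, n_atoms, lam)
            dictionaries.append(A)
            X = Z
        return dictionaries

The sequential structure mirrors the cascade itself: once the first dictionary and its codes are estimated, those codes play the role of observations for the next layer.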