Background

A list of readings for this course

  1. Overview of the field from a practical point of view
  2. Understanding Deep Convolutional Networks by Mallat
  3. Understanding Deep Learning Requires Rethinking Generalization by Zhang et al.
  4. Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? by Giryes et al.
  5. Robust Large Margin Deep Neural Networks by Sokolić et al.
  6. Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems by Giryes et al.
  7. Understanding Trainable Sparse Coding via Matrix Factorization by Moreau and Bruna
  8. Convolutional Neural Networks Analyzed via Convolutional Sparse Coding by Papyan et al.
  9. Why are Deep Nets Reversible: A Simple Theory, With Implications for Training by Arora et al.
  10. Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network by Malgouyres and Landsberg
  11. Optimal Approximation with Sparsely Connected Deep Neural Networks by Bölcskei et al.

Additional material
