A list of readings for this course
Background
- Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images by Olshausen and Field
Overview of the field from a practical point of view
- ImageNet Classification with Deep Convolutional Neural Networks (AlexNet) by Krizhevsky et al.
- Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG) by Simonyan and Zisserman
- Going Deeper with Convolutions (GoogLeNet) by Szegedy et al.
- Deep Residual Learning for Image Recognition (ResNet) by He et al.
- Visualizing and Understanding Convolutional Networks by Zeiler and Fergus
- Auto-Encoding Variational Bayes by Kingma and Welling
- Generative Adversarial Networks by Goodfellow et al.
- Understanding Deep Convolutional Networks by Mallat
- Understanding Deep Learning Requires Rethinking Generalization by Zhang et al.
- Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? by Giryes et al.
- Robust Large Margin Deep Neural Networks by Sokolic et al.
- Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems by Giryes et al.
- Understanding Trainable Sparse Coding via Matrix Factorization by Moreau and Bruna
- Convolutional Neural Networks Analyzed via Convolutional Sparse Coding by Papyan et al.
- Why are Deep Nets Reversible: A Simple Theory, With Implications for Training by Arora et al.
- Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network by Malgouyres and Landsberg
- Optimal Approximation with Sparsely Connected Deep Neural Networks by Bölcskei et al.
Additional material
- Learning Functions: When Is Deep Better Than Shallow by Mhaskar et al.
- Convolutional Rectifier Networks as Generalized Tensor Decompositions by Cohen and Shashua
- A Probabilistic Theory of Deep Learning by Patel et al.