# Introduction to Deep Learning in PyTorch

These materials are, for the most part, excerpts from the slides of the EE-559 "Deep Learning" course that I will teach at EPFL next month. The PDF files on this page are licensed under a Creative Commons BY-NC-SA 4.0 International License.

Although there will be no practical session during this tutorial, you can use the virtual machine I prepared for the course if you want to try PyTorch yourself.

## Introduction

slides / handout

- Why learning
- From artificial neural networks to "Deep Learning"
- Current application domains
- Why does it work now?
- Implementing a deep network, PyTorch
- Learning from data
- Capacity
- Proper evaluation protocols

## PyTorch tensors

slides / handout

- PyTorch's tensors
- Example: linear regression
- Manipulating high-dimensional signals
- Broadcasting
- Tensor internals
- Using GPUs
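
The topics above can be sketched in a few lines of PyTorch. The shapes, values, and the small regression problem below are illustrative choices of mine, not examples taken from the course slides:

```python
import torch

# Basic tensor creation and inspection
x = torch.ones(2, 3)
print(x.size(), x.dtype)

# Broadcasting: a (3,) tensor is expanded to match (2, 3)
b = torch.tensor([1.0, 2.0, 3.0])
y = x + b

# Tensor internals: view() returns a tensor sharing the same storage
v = x.view(3, 2)
same_storage = v.data_ptr() == x.data_ptr()

# Linear regression by least squares: find w such that t ≈ X @ w
X = torch.randn(100, 3)
w_true = torch.tensor([1.0, -2.0, 0.5])
t = X @ w_true + 0.01 * torch.randn(100)
w = torch.linalg.lstsq(X, t.unsqueeze(1)).solution.squeeze()

# Using GPUs: move tensors to a device when one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
x_dev = x.to(device)
```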

## Multi-layer perceptrons, back-propagation, autograd

slides / handout

- A bit of history, the perceptron
- Limitations of linear classifiers
- Multi-Layer Perceptron
- Training and gradient descent
- Back-propagation
- DAG networks
- Autograd
- Weight sharing
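
As a rough sketch of what this lecture covers, here is a two-layer perceptron trained by plain gradient descent, with the backward pass computed by autograd. The layer sizes, learning rate, and synthetic data are illustrative choices, not the course's:

```python
import torch

torch.manual_seed(0)
x = torch.randn(16, 10)   # toy inputs
t = torch.randn(16, 1)    # toy targets

# Parameters of a two-layer perceptron, tracked by autograd
w1 = torch.randn(10, 8, requires_grad=True)
w2 = torch.randn(8, 1, requires_grad=True)

eta, losses = 1e-2, []
for _ in range(200):
    h = torch.tanh(x @ w1)               # hidden layer
    loss = ((h @ w2) - t).pow(2).mean()  # MSE loss
    loss.backward()                      # back-propagation via autograd
    with torch.no_grad():                # gradient step, outside the graph
        w1 -= eta * w1.grad
        w2 -= eta * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
    losses.append(loss.item())
```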

## Convolutional Neural Networks

slides / handout

- Convolutional layers
- Stride, padding, and dilation
- Pooling
- torch.nn.Module
- Creating a module
- Image classification, standard convnets
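
To give a flavor of creating a module, here is a minimal LeNet-style convnet for 1×28×28 inputs. The architecture and layer sizes are my own illustrative choices, not the standard convnets discussed in the course:

```python
import torch
from torch import nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    """A minimal convnet: two conv/pool stages, then two linear layers."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)   # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)  # 12x12 -> 8x8
        self.fc1 = nn.Linear(64 * 4 * 4, 200)
        self.fc2 = nn.Linear(200, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))  # -> 32x12x12
        x = F.relu(F.max_pool2d(self.conv2(x), 2))  # -> 64x4x4
        x = x.flatten(1)                            # -> 1024
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = SmallConvNet()
out = net(torch.randn(2, 1, 28, 28))
print(out.shape)  # torch.Size([2, 10])
```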

## Going deeper

slides / handout

- Batch processing
- Stochastic gradient descent
- Limitations of gradient descent
- Momentum and moment estimation
- torch.optim
- Full example
- Dropout
- Batch normalization
- Residual networks
- torch.utils.data.DataLoader
- Example of neuro-surgery and fine-tuning in PyTorch
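
Several of these pieces fit together in a standard training loop. Here is a sketch combining `torch.optim`, dropout, batch normalization, and a `DataLoader` on synthetic data; the model, dataset, and hyper-parameters are illustrative choices, not those used in the course:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
# Synthetic binary classification: the label is the sign of the first feature
X = torch.randn(256, 20)
y = (X[:, 0] > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(64, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for xb, yb in loader:          # mini-batches via the DataLoader
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()           # SGD with momentum
```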

## Generative models

slides / handout

- Visualizing the processing
- Maximum response samples
- Adversarial examples
- Autoencoders
- Deep Autoencoders
- Generative Adversarial Networks
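
As a taste of the autoencoder part, here is a minimal deep autoencoder for flattened 28×28 images. The layer sizes and latent dimension are illustrative assumptions, not taken from the course material:

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Encode a 784-d input down to a small latent code, then decode it back."""
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(),
            nn.Linear(128, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AutoEncoder()
x = torch.randn(4, 784)
z = ae.encoder(x)        # latent codes, shape (4, 32)
x_rec = ae(x)            # reconstructions, shape (4, 784)
```

Training it amounts to minimizing a reconstruction loss such as `nn.MSELoss()` between `x_rec` and `x`, with the same kind of loop as in the previous sections.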