EE-559 – Deep Learning (Spring 2019)


You can find here slides and a virtual machine for the course EE-559 “Deep Learning”, taught by François Fleuret at EPFL. This course covers the main deep learning tools and theoretical results, with examples in the PyTorch framework. Note that these slides are still a work in progress.

Last year's version provides handouts and 16h of voice-overs, but is structured slightly differently, and was developed for PyTorch 0.3.1, which differs substantially from PyTorch 0.4.x.

Thanks to Adam Paszke, Alexandre Nanchen, Xavier Glorot, Andreas Steiner, Matus Telgarsky, Diederik Kingma, Nikolaos Pappas, Soumith Chintala, and Shaojie Bai for their answers or comments.

Slides and practicals


The slide pdfs are the ones I use during the lectures. They are in landscape mode and include overlays and font coloring to facilitate the presentation. The handout pdfs are compiled without these fancy effects and with two slides per page in portrait mode to be more convenient for off-line reading and note-taking.

  1. Introduction.
    1. From artificial neural networks to deep learning. (slides, handout – 21 slides)
    2. Current applications and success. (slides, handout – 22 slides)
    3. What is really happening? (slides, handout – 13 slides)
    4. Tensor basics and linear regression. (slides, handout – 12 slides)
    5. High dimension tensors. (slides, handout – 14 slides)
    6. Tensor internals. (slides, handout – 5 slides)
  2. Machine learning fundamentals.
    1. Loss and risk. (slides, handout – 15 slides)
    2. Over and under fitting. (slides, handout – 24 slides)
    3. Bias-variance dilemma. (slides, handout – 10 slides)
    4. Proper evaluation protocols. (slides, handout – 6 slides)
    5. Basic clustering and embeddings. (slides, handout – 19 slides)
  3. Multi-layer perceptron and back-propagation.
    1. The perceptron. (slides, handout – 16 slides)
    2. Probabilistic interpretation of the linear classifier. (slides, handout – 8 slides)
    3. Limitations of linear classifiers and feature design. (slides, handout – 10 slides)
    4. Multi-Layer Perceptrons. (slides, handout – 9 slides)
    5. Gradient descent. (slides, handout – 13 slides)
    6. Back-propagation. (slides, handout – 11 slides)
  4. Graphs of operators, autograd, and convolutional layers.
    1. DAG networks. (slides, handout – 11 slides)
    2. Autograd. (slides, handout – 16 slides)
    3. PyTorch modules and batch processing. (slides, handout – 14 slides)
    4. Convolutions. (slides, handout – 23 slides)
    5. Pooling. (slides, handout – 7 slides)
    6. Writing a PyTorch module. (slides, handout – 10 slides)
  5. Initialization and optimization.
    1. Cross-entropy loss. (slides, handout – 9 slides)
    2. Stochastic gradient descent. (slides, handout – 17 slides)
    3. PyTorch optimizers. (slides, handout – 7 slides)
    4. $L_2$ and $L_1$ penalties. (slides, handout – 10 slides)
    5. Parameter initialization. (slides, handout – 22 slides)
    6. Architecture choice and training protocol. (slides, handout – 9 slides)
    7. Writing an autograd function. (slides, handout – 7 slides)
  6. Going deeper.
    1. Benefits of depth. (slides, handout – 9 slides)
    2. Rectifiers. (slides, handout – 7 slides)
    3. Dropout. (slides, handout – 12 slides)
    4. Batch normalization. (slides, handout – 15 slides)
    5. Residual networks. (slides, handout – 21 slides)
    6. Using GPUs. (slides, handout – 15 slides)
  7. Computer vision.
    1. Computer vision tasks. (slides, handout – 15 slides)
    2. Networks for image classification. (slides, handout – 36 slides)
    3. Networks for object detection. (slides, handout – 15 slides)
    4. Networks for semantic segmentation. (slides, handout – 8 slides)
    5. DataLoader and neuro-surgery. (slides, handout – 13 slides)
  8. Under the hood.
    1. Looking at parameters. (slides, handout – 11 slides)
    2. Looking at activations. (slides, handout – 21 slides)
    3. Visualizing the processing in the input. (slides, handout – 26 slides)
    4. Optimizing inputs. (slides, handout – 25 slides)
  9. Auto-encoders and generative models.
    1. Transposed convolutions. (slides, handout – 14 slides)
    2. Autoencoders. (slides, handout – 20 slides)
    3. Denoising and variational autoencoders. (slides, handout – 24 slides)
    4. Non-volume preserving networks. (slides, handout – 24 slides)
  10. Generative adversarial models.
    1. Generative Adversarial Networks. (slides, handout – 29 slides)
    2. Wasserstein GAN. (slides, handout – 16 slides)
    3. Conditional GAN and image translation. (slides, handout – 27 slides)
    4. Model persistence and checkpoints. (slides, handout – 9 slides)
  11. Recurrent models and NLP.
    1. Recurrent Neural Networks. (slides, handout – 23 slides)
    2. LSTM and GRU. (slides, handout – 17 slides)
    3. Word embeddings and translation. (slides, handout – 31 slides)





You may have to refer to the Python 3, Jupyter, and PyTorch documentations.

Practical session prologue

Helper python prologue for the practical sessions:

Argument parsing

This prologue parses command-line arguments as follows:

usage: [-h] [--full] [--tiny] [--force_cpu] [--seed SEED]
                [--cifar] [--data_dir DATA_DIR]

DLC prologue file for practical sessions.

optional arguments:
  -h, --help           show this help message and exit
  --full               Use the full set, can take ages
                       (default False)
  --tiny               Use a very small set for quick checks
                       (default False)
  --force_cpu          Keep tensors on the CPU, even if cuda is
                       available (default False)
  --seed SEED          Random seed (default 0, < 0 is no seeding)
  --cifar              Use the CIFAR data-set and not MNIST
                       (default False)
  --data_dir DATA_DIR  Where are the PyTorch data located (default
                       $PYTORCH_DATA_DIR or './data')

It sets the default tensor type to torch.cuda.FloatTensor if cuda is available (and --force_cpu is not set).
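That behavior can be sketched as follows (a simplified stand-in for the prologue's actual code; the plain boolean here replaces the parsed --force_cpu argument):

```python
import torch

# Mimic the prologue: default to CUDA tensors when a GPU is
# available and --force_cpu was not given.
force_cpu = False

if torch.cuda.is_available() and not force_cpu:
    torch.set_default_tensor_type(torch.cuda.FloatTensor)

# From here on, newly created tensors use the chosen default type
x = torch.zeros(3)  # a cuda.FloatTensor if the default was switched
```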

Loading data

The prologue provides the function

load_data(cifar = None, one_hot_labels = False, normalize = False, flatten = True)

which downloads the data when required, reshapes the images to 1d vectors if flatten is True, narrows the data to a small subset of samples if --full is not selected, and moves the Tensors to the GPU if cuda is available (and --force_cpu is not selected).

It returns a tuple of four tensors: train_data, train_target, test_data, and test_target.

If cifar is True, the data set used is CIFAR10; if it is False, MNIST is used; if it is None, the command-line argument --cifar decides.

If one_hot_labels is True, the targets are converted to a 2d torch.Tensor with as many columns as there are classes, filled with -1 everywhere except at the coefficients [n, y_n], which are equal to 1.
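This target encoding can be reproduced with a short sketch (the function name is hypothetical; the prologue implements its own version):

```python
import torch

def to_one_hot(targets, nb_classes):
    # Start from -1 everywhere, then set coefficient [n, y_n] to 1,
    # matching the encoding produced by one_hot_labels = True
    one_hot = torch.full((targets.size(0), nb_classes), -1.0)
    one_hot[torch.arange(targets.size(0)), targets] = 1.0
    return one_hot

y = torch.tensor([0, 2, 1])
encoded = to_one_hot(y, 3)
# rows: [1, -1, -1], [-1, -1, 1], [-1, 1, -1]
```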

If normalize is True, the data tensors are normalized according to the mean and standard deviation of the training set.
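The key point is that both sets are normalized with the training set's statistics, which can be sketched as (with stand-in random data in place of the real samples):

```python
import torch

torch.manual_seed(0)
train_data = torch.empty(1000, 784).normal_(2.0, 5.0)  # stand-in training set
test_data = torch.empty(1000, 784).normal_(2.0, 5.0)   # stand-in test set

# Compute statistics on the training data only, and apply them to both
mu, std = train_data.mean(), train_data.std()
train_data = train_data.sub(mu).div(std)
test_data = test_data.sub(mu).div(std)
```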

If flatten is True, the data tensors are flattened into 2d tensors of dimension N × D, discarding the image structure of the samples. Otherwise they are 4d tensors of dimension N × C × H × W.
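The flattening amounts to a single view that keeps the batch dimension and collapses the image structure, for example:

```python
import torch

# A batch of 1000 MNIST-like images: N x C x H x W
images = torch.empty(1000, 1, 28, 28).normal_()

# Keep the first dimension, collapse the rest into D = C * H * W
flat = images.view(images.size(0), -1)
print(flat.size())  # torch.Size([1000, 784])
```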

Minimal example

import dlc_practical_prologue as prologue

train_input, train_target, test_input, test_target = prologue.load_data()

print('train_input', train_input.size(), 'train_target', train_target.size())
print('test_input', test_input.size(), 'test_target', test_target.size())


data_dir ./data
* Using MNIST
** Reduce the data-set (use --full for the full thing)
** Use 1000 train and 1000 test samples
train_input torch.Size([1000, 784]) train_target torch.Size([1000])
test_input torch.Size([1000, 784]) test_target torch.Size([1000])

Virtual Machine

A Virtual Machine (VM) is software that simulates a complete computer. The one we provide here includes a Linux operating system and all the tools needed to use PyTorch from a web browser (Firefox or Chrome).


First download and install Oracle's VirtualBox, then download the virtual machine OVA package (large file, ~2.5 GB) and open it in VirtualBox with File → Import Appliance.

You should now see an entry in the list of VMs. The first time it starts, it provides a menu to choose the keyboard layout you want to use (you can force the configuration later by passing forcekbd to the kernel through GRUB).

If the VM does not start and VirtualBox complains that VT-x is not enabled, you have to activate the virtualization capabilities of your Intel CPU in the BIOS of your computer.

Using the VM

The VM automatically starts a JupyterLab on port 8888 and exports that port to the host. This means that you can access this JupyterLab with a web browser on the machine running VirtualBox at http://localhost:8888/ and use python notebooks, view files, start terminals, and edit source files. Typing !bye in a notebook or bye in a terminal will shut down the VM.

You can run a terminal and a text editor from inside the Jupyter notebook for exercises that require more than the notebook itself. Source files can be executed by running, in a terminal, the python command with the source file name as argument. Both can be started from the main Jupyter window.

Files saved in the VM are erased when the VM is re-installed, which happens for each session on the EPFL machines. So you should download the files you want to keep from the Jupyter notebook to your account and re-upload them later when you need them.

This VM also exports an ssh port to port 2022 on the host, which allows logging in with standard ssh clients on Linux and macOS, and with applications such as PuTTY on Windows. The default login is 'dave' with password 'dummy'; the root account uses the same password.


Note that computation will not be as fast as with PyTorch installed natively on your machine. In particular, the VM does not take advantage of a GPU if you have one.

Finally, please also note that this VM is configured in a convenient but highly insecure manner, with easy-to-guess passwords, including for the root account, and network-accessible, unprotected Jupyter notebooks.

This VM is built on Debian 9 “stretch”, with miniconda, PyTorch 0.4.1, TensorFlow 1.4.1, MNIST, CIFAR10, and many Python utility packages installed.

License of use

The materials on this page are licensed under the Creative Commons BY-NC-SA 4.0 International License.

More simply: I am okay with this material being used for regular academic teaching, but definitely not for a book / youtube loaded with ads / whatever monetization model I am not aware of.