


Loss surface of 1D MAML

Understanding Meta-Learning
Unlike traditional transfer or multi-task methods, meta-learning with MAML can learn an inductive bias that enables quick adaptation to new, unseen tasks. But at what cost? Together with Shariq Iqbal and Fei Sha, I took a closer look at the requirements MAML must satisfy to succeed in learning this inductive bias. This led to our AISTATS’21 paper, where we discuss some of our findings.
[at AISTATS, website, code]
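For context, MAML’s standard one-step objective, with inner learning rate \(\alpha\) and task distribution \(p(\tau)\), can be written as:

\[
\min_{\theta} \; \mathbb{E}_{\tau \sim p(\tau)} \left[ \mathcal{L}_{\tau}\!\left(\theta - \alpha \nabla_{\theta} \mathcal{L}_{\tau}(\theta)\right) \right]
\]

That is, the initialization \(\theta\) is judged not by its own loss, but by the loss it attains after one gradient step of adaptation on each task.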

IGT in action

Implicit Gradient Transport
In the Summer of 2018, I had the good fortune of visiting Mila, hosted by Ioannis Mitliagkas. Together with Nicolas Le Roux, Pierre-Antoine Manzagol, and Reza Babanezhad, we devised a simple yet effective algorithm for variance reduction in gradient-based optimization. The result, IGT, was accepted as a spotlight contribution at NeurIPS 2019.
[at NeurIPS, website, code, on twitter]

Schematics of Kleo

Kleo the Cat
In the Summer of 2017, Théo Denisart and I spent some time designing and programming a 3D-printed robotic cat for reinforcement learning while at the ValeroLab. What makes Kleo special is that its limbs are actuated via tendons, making it robust to failures but difficult to control. Matt Simon brilliantly covered our work in his WIRED article.
[on WIRED, video, website]


learn2learn logo

learn2learn is a meta-learning framework built on top of PyTorch. It provides practitioners with high-level meta-learning implementations, and researchers with low-level utilities to develop new meta-learning algorithms for the supervised and reinforcement learning settings. It is in active development, and was the (lucky) winner of the PyTorch Summer Hackathon. You can install it with pip install learn2learn.
[code, preprint, presentation]
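The high-level pattern such implementations wrap can be sketched in plain Python. Below is a toy 1D example with hand-derived gradients, not learn2learn’s actual API: each task is a quadratic \( (\theta - c)^2 \), the inner loop takes one adaptation step, and the meta-gradient differentiates through that step.

```python
import random

def task_grad(theta, c):
    """Gradient of the task loss (theta - c)^2."""
    return 2.0 * (theta - c)

def maml_meta_grad(theta, c, alpha):
    """One inner adaptation step, then differentiate through it.

    Chain rule: d/dtheta [loss(theta - alpha * grad)] =
    loss'(adapted) * (1 - 2 * alpha), since loss'' = 2 for a quadratic.
    """
    adapted = theta - alpha * task_grad(theta, c)  # inner loop
    return task_grad(adapted, c) * (1.0 - 2.0 * alpha)

random.seed(0)
theta, alpha, meta_lr = 5.0, 0.1, 0.05
for step in range(500):
    c = random.uniform(-1.0, 1.0)  # sample a task
    theta -= meta_lr * maml_meta_grad(theta, c, alpha)  # outer loop
print(f"meta-learned initialization: {theta:.3f}")
```

With tasks drawn symmetrically around zero, the learned initialization drifts toward the task mean, from which every task is reachable in one adaptation step.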

Cherry logo

Cherry is a reinforcement learning framework built on top of PyTorch. What differentiates cherry from other RL frameworks is that it does not provide any algorithm implementation! Instead, it provides utilities to make it easy for researchers to implement their own algorithms. It has been used in many settings (optimization, meta-learning, variance reduction) and is under active development. You can install it with pip install cherry-rl.
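As an illustration of the kind of building block such a framework supplies (a plain-Python sketch, not cherry’s actual API), here is the classic discounted-return computation that nearly every RL algorithm needs:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the discounted return G_t = r_t + gamma * G_{t+1} for each step."""
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    returns.reverse()
    return returns

# A 3-step episode with rewards 1, 1, 1 and gamma = 0.5:
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

Keeping utilities like this separate from any particular algorithm is what lets researchers assemble their own training loops.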

Randopt logo

Randopt is a Python package for machine learning experiment management, hyper-parameter optimization, and results visualization. It is in active development, and I, as well as others, have been using it for every machine learning project since November 2016. You can install it with pip install randopt.
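The core loop behind random hyper-parameter search can be sketched in a few lines of plain Python (a made-up toy objective, not randopt’s actual API):

```python
import random

def objective(lr, momentum):
    """Toy validation loss with a minimum near lr=0.01, momentum=0.9."""
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

random.seed(42)
results = []
for trial in range(100):
    # Sample hyper-parameters: log-uniform learning rate, uniform momentum.
    lr = 10 ** random.uniform(-4, 0)
    momentum = random.uniform(0.0, 1.0)
    results.append({"lr": lr, "momentum": momentum, "loss": objective(lr, momentum)})

best = min(results, key=lambda r: r["loss"])
print(best)
```

An experiment manager’s job is essentially to persist each entry of `results` to disk, tag it with metadata, and make the collection easy to query and plot later.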


Tooski logo

Tooski is the largest francophone website dedicated to the Ski World Cup. On it, you’ll find news and blogs about the Swiss Ski Team, the FIS World Cup circuit, and up-and-coming young skiers. Tooski began as an adventure in merging two of my passions: computers and skiing.