Posters - Sébastien M. R. Arnold

Policy Learning and Evaluation with Randomized Quasi-Monte Carlo
Poster for our work on reducing variance in reinforcement learning with randomized quasi-Monte Carlo (RQMC) sampling; a minimal sketch of the idea follows below.
AISTATS, 2022
[pdf]
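
For context, a minimal sketch of the effect the poster is about: estimating an expectation with plain Monte Carlo versus randomized quasi-Monte Carlo (a scrambled Sobol' sequence). The integrand, dimensions, and sample sizes below are placeholders, and this illustrates the variance gap rather than the paper's estimator.

```python
# Compare the variance of plain MC vs. RQMC estimates of E[f(X)], X ~ N(0, I).
import numpy as np
from scipy.stats import norm, qmc

def f(x):
    # Any smooth integrand stands in for an expected return here.
    return np.sin(x).sum(axis=-1)

d, n, reps = 4, 256, 200  # dimension, samples per estimate, replications
rng = np.random.default_rng(0)
mc_est, rqmc_est = [], []
for rep in range(reps):
    # Plain Monte Carlo: i.i.d. Gaussian samples.
    mc_est.append(f(rng.standard_normal((n, d))).mean())
    # RQMC: scrambled Sobol' points pushed through the Gaussian inverse CDF.
    u = qmc.Sobol(d=d, scramble=True, seed=rep).random(n)
    u = np.clip(u, 1e-12, 1 - 1e-12)  # guard the endpoints of ppf
    rqmc_est.append(f(norm.ppf(u)).mean())

print("MC   estimator variance:", np.var(mc_est))
print("RQMC estimator variance:", np.var(rqmc_est))  # typically far smaller
```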

Uniform Sampling Over Episode Difficulty
Poster for our work on sampling few-shot learning episodes uniformly over difficulty; a toy sketch of one such scheme follows below.
NeurIPS, 2021
[pdf]
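
To make the idea concrete, here is a toy stratification scheme that yields episodes whose difficulty is approximately uniform: bucket episodes by a scalar difficulty score, pick a bucket uniformly, then an episode within it. The difficulty scores here are synthetic, and this is a simple stand-in rather than the method from the paper.

```python
# Sample episodes uniformly over difficulty via bucketed stratification.
import random
from collections import defaultdict

def uniform_over_difficulty(episodes, difficulty, n_buckets=10):
    """episodes: list of ids; difficulty: id -> score in [0, 1)."""
    buckets = defaultdict(list)
    for ep in episodes:
        b = min(int(difficulty[ep] * n_buckets), n_buckets - 1)
        buckets[b].append(ep)
    non_empty = [b for b in buckets.values() if b]
    while True:  # uniform over buckets, then uniform within a bucket
        yield random.choice(random.choice(non_empty))

# Hypothetical usage; difficulty could be, e.g., 1 - a baseline's accuracy.
episodes = list(range(1000))
difficulty = {ep: random.random() for ep in episodes}
sampler = uniform_over_difficulty(episodes, difficulty)
batch = [next(sampler) for _ in range(32)]
```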

When MAML Can Adapt Fast and How to Assist When it Cannot
Poster for our work on understanding when MAML adapts fast and how to assist it when it cannot; a sketch of the basic MAML loop follows below.
AISTATS, 2021
[pdf]
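
For readers new to MAML, the loop below sketches the vanilla first-order variant on a toy regression family (y = a * x with a task-specific slope a). The task distribution and hyper-parameters are made up, and the poster's contribution sits on top of this basic scheme rather than inside it.

```python
# Minimal first-order MAML: adapt with one inner SGD step, then update the
# meta-parameter with the gradient of the post-adaptation (outer) loss.
import torch

torch.manual_seed(0)
w = torch.zeros(1, requires_grad=True)        # meta-parameter
meta_opt = torch.optim.SGD([w], lr=1e-2)
inner_lr = 0.1

def task_loss(param, a):
    x = torch.randn(16, 1)
    return ((param * x - a * x) ** 2).mean()

for step in range(500):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                        # meta-batch of tasks
        a = torch.randn(1)                    # sample a task: y = a * x
        # Inner loop: one SGD step away from the meta-parameters.
        g = torch.autograd.grad(task_loss(w, a), w)[0]
        w_adapted = w - inner_lr * g          # g carries no graph: first-order
        # Outer loss on fresh data from the same task.
        meta_loss = meta_loss + task_loss(w_adapted, a)
    meta_loss.backward()
    meta_opt.step()
```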

Reducing the variance in online optimization by transporting past gradients
Poster for our work on reducing variance in online optimization via implicit gradient transport (IGT); the update is sketched below.
NeurIPS, 2019
[pdf]
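
My reading of the IGT update, sketched on a noisy quadratic: evaluate the gradient at an extrapolated point, then fold it into a running average, so past gradients reduce variance without the staleness bias of plain averaging. The recursion below (with gamma_t = t / (t + 1)) is how I recall the paper's rule; treat it as a sketch and check the paper for the exact algorithm.

```python
# IGT-style update on f(x) = ||x||^2 / 2 with a noisy gradient oracle.
import numpy as np

rng = np.random.default_rng(0)
def grad(theta):
    return theta + 0.1 * rng.standard_normal(theta.shape)

alpha = 0.1
theta = np.ones(10)
theta_prev = theta.copy()
v = np.zeros_like(theta)
for t in range(1, 1001):
    gamma = t / (t + 1.0)
    # Transport: take the gradient at an extrapolated point.
    shifted = theta + (gamma / (1.0 - gamma)) * (theta - theta_prev)
    v = gamma * v + (1.0 - gamma) * grad(shifted)
    theta_prev = theta.copy()
    theta = theta - alpha * v
print("final distance to optimum:", np.linalg.norm(theta))
```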

cherry: A Reinforcement Learning Framework for Researchers
An overview of cherry, our reinforcement learning framework for researchers; the flavor of utility it provides is sketched below.
PyTorch Dev Conference, 2019
[pdf]
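
To give a flavor of the low-level utilities such a framework bundles, here is a discounted-return computation in plain Python; the function name and signature are mine, not cherry's API.

```python
# Discounted returns over a (possibly multi-episode) trajectory.
def discounted_returns(rewards, dones, gamma=0.99):
    """Backward pass; `dones` flags reset the return at episode boundaries."""
    returns, R = [], 0.0
    for r, done in zip(reversed(rewards), reversed(dones)):
        R = r + gamma * R * (1.0 - done)
        returns.append(R)
    return list(reversed(returns))

print(discounted_returns([1.0, 1.0, 1.0], [0.0, 0.0, 1.0]))
# [2.9701, 1.99, 1.0]
```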

learn2learn: A Meta-Learning Framework for Researchers
An overview of learn2learn, our meta-learning framework for researchers; a usage sketch follows below.
PyTorch Dev Conference, 2019
[pdf]
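
The clone/adapt pattern below is, to my knowledge, the core of how learn2learn's MAML wrapper is used; the task sampler is stubbed out, and the library's documentation is the authority on the current API.

```python
import torch
import learn2learn as l2l

model = torch.nn.Linear(10, 2)
maml = l2l.algorithms.MAML(model, lr=0.1)   # wraps the model
opt = torch.optim.Adam(maml.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def sample_task():                          # stub: replace with real tasks
    return torch.randn(8, 10), torch.randint(0, 2, (8,))

for step in range(100):
    opt.zero_grad()
    learner = maml.clone()                  # differentiable copy
    x, y = sample_task()                    # support set
    learner.adapt(loss_fn(learner(x), y))   # inner-loop step
    x, y = sample_task()                    # query set
    loss_fn(learner(x), y).backward()       # gradients flow back to maml
    opt.step()
```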

Managing Machine Learning Experiments
How to use randopt to manage machine learning experiments; the underlying workflow is sketched below.
PyCon, 2018
[pdf]
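
The workflow the talk covers boils down to: sample hyper-parameters, run, and persist parameters and results together so runs stay comparable. Here it is in plain Python with made-up names, rather than randopt's actual API.

```python
import json, pathlib, random

def run_experiment(lr):        # stand-in for a real training run
    return (lr - 0.01) ** 2    # pretend lower is better

results_dir = pathlib.Path("results")
results_dir.mkdir(exist_ok=True)
for i in range(10):
    lr = 10 ** random.uniform(-4, -1)       # sampled hyper-parameter
    record = {"lr": lr, "loss": run_experiment(lr)}
    (results_dir / f"run_{i:03d}.json").write_text(json.dumps(record))

runs = [json.loads(p.read_text()) for p in results_dir.glob("*.json")]
print("best run:", min(runs, key=lambda r: r["loss"]))
```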

Accelerating SGD for Distributed Deep Learning Using Approximated Hessian Matrix
Approximating the Hessian via finite differences of gradients in the distributed setting; the core trick is sketched below.
ICLR Workshop, 2017
[pdf]
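
The core trick is standard and easy to demonstrate: a Hessian-vector product can be approximated with a finite difference of gradients, Hv ≈ (∇f(θ + εv) − ∇f(θ)) / ε, so no second derivatives need to be formed, stored, or communicated in full. A quick check on a quadratic, whose Hessian is known exactly:

```python
import numpy as np

def hvp_fd(grad_f, theta, v, eps=1e-5):
    """Finite-difference Hessian-vector product."""
    return (grad_f(theta + eps * v) - grad_f(theta)) / eps

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T                      # f(x) = x^T A x / 2 has Hessian A
grad_f = lambda x: A @ x
theta, v = rng.standard_normal(5), rng.standard_normal(5)
print(np.allclose(hvp_fd(grad_f, theta, v), A @ v, atol=1e-4))  # True
```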