Talks - Sébastien M. R. Arnold

Quickly solving new tasks, with meta-learning and without
Thesis defense.
University of Southern California, Los Angeles, CA (Remote; December 2022)
[pdf]

Policy Learning and Evaluation with Randomized Quasi-Monte Carlo
Presentation of our work on randomized quasi-Monte Carlo (RQMC) methods for reinforcement learning.
AISTATS 2022, Virtual (Remote; March 2022)
[pdf, talk]

Uniform Sampling Over Episode Difficulty
Spotlight presentation of our work on sampling episodes in few-shot learning.
NeurIPS 2021, Virtual (Remote; December 2021)
EPFL’s NeurIPS 2021 mirror event, Lausanne, Switzerland (December 2021)
[pdf, talk (NeurIPS), talk (EPFL)]

To Transfer or To Adapt: A Study Through Few-Shot Learning
Overview of my recent research on adaptation and transfer in few-shot learning.
Google, Mountain View, CA (Remote; April 2021)
Amazon, Seattle, WA (August 2021)

When MAML Can Adapt Fast and How to Assist When it Cannot
SlidesLive presentation of our work on helping MAML learn to adapt.
AISTATS 2021, Virtual (Remote; April 2021)
[pdf, talk]

Reducing the Variance in Online Optimization by Transporting Past Gradients
Spotlight presentation of our work on implicit gradient transport.
NeurIPS 2019, Vancouver, Canada (December 2019)
[pdf, talk]

learn2learn: A Meta-Learning Framework
Short presentation of learn2learn and some applications of meta-learning.
PyTorch Dev Conference, San Francisco, CA (October 2019)
[pdf, talk]

Information Geometric Optimization
Tutorial on recent approaches using information geometric principles for optimization. Inspired by Yann Ollivier’s presentation and James Martens’ paper.
ShaLab reading group, Los Angeles, CA (October 2018)
[pdf]

Managing Machine Learning Experiments
Presentation of randopt and how to use it to manage machine learning experiments.
SoCal Python Meetup, Los Angeles, CA (May 2018)
[pdf]