Sébastien M. R. Arnold



I am a final-year doctoral student in the ShaLab, advised by Fei Sha. My research attempts to answer:

How can intelligent agents reuse and adapt their knowledge to quickly solve new tasks?

This question drives my work on multi-task and meta-learning, with a special focus on discovering inductive biases for transfer and adaptation.

I completed my undergraduate degree at USC, double-majoring in computer science and mathematics. During that time, I was fortunate to work on robotics with Francisco Valero-Cuevas and on optimization with Chunming Wang.

I am also an avid skier.

[Contact / Résumé / Semantic Scholar / GitHub / Twitter]


Note: I am seeking internships for Summer '22 (expected graduation: December '22).


News

Uniform Sampling over Episode Difficulty - August 12, 2021
Our preprint Uniform Sampling over Episode Difficulty, the outcome of my 2020 Amazon internship, is now available on arXiv. [arXiv]

Summer at Amazon Prime - April 23, 2021
I will be spending another summer at Amazon, with the Prime team in Seattle, WA.

When MAML Can Adapt Fast and How to Assist When It Cannot - January 22, 2021
Our manuscript on When MAML Can Adapt Fast and How to Assist When It Cannot was accepted at AISTATS 2021. An open-source implementation in learn2learn is now available.
[arXiv, pdf, web, code]

Summer at Amazon AI - April 1, 2020
I will be spending the summer at Amazon AI in Pasadena, CA.

Decoupling Adaptation from Modeling with Meta-Optimizers - November 17, 2019
Our preprint on Decoupling Adaptation from Modeling with Meta-Optimizers for Meta-Learning is available on arXiv. An open-source implementation in learn2learn is coming soon! [arXiv, pdf]

Variance of Policy Gradient - November 17, 2019
Our preprint on Analyzing the Variance of Policy Gradient Estimators for LQR was accepted at the OptRL workshop at NeurIPS. [arXiv, pdf]

Implicit Gradient Transport - September 5, 2019
Our paper on Reducing the Variance in Online Optimization by Transporting Past Gradients was accepted at NeurIPS 2019 as a spotlight contribution. [arXiv, pdf, website, code]

Open-Sourcing learn2learn - August 20, 2019
Our submission to the PyTorch Summer Hackathon won Best in Show! Check out the website to learn how easy it is to implement meta-learning algorithms with learn2learn. [website, code]
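
For a flavor of the library's interface, here is a minimal MAML-style training sketch. The model, batch sizes, learning rates, and the random tensors standing in for a real task sampler are all illustrative assumptions, not part of the original announcement:

    import torch
    import learn2learn as l2l

    # Toy regression model; any torch.nn.Module works.
    model = torch.nn.Sequential(
        torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
    maml = l2l.algorithms.MAML(model, lr=0.1)  # inner-loop learning rate
    opt = torch.optim.Adam(maml.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for step in range(100):
        opt.zero_grad()
        # Random support/query batches stand in for a real task sampler.
        x_s, y_s = torch.randn(8, 4), torch.randn(8, 1)
        x_q, y_q = torch.randn(8, 4), torch.randn(8, 1)
        learner = maml.clone()                     # differentiable copy
        learner.adapt(loss_fn(learner(x_s), y_s))  # inner-loop update
        loss_fn(learner(x_q), y_q).backward()      # meta-gradient
        opt.step()

The clone()/adapt() pattern keeps the inner-loop updates differentiable, so the outer backward() call computes the meta-gradient through the adaptation step.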

East European Summer School - June 5, 2019
I will be attending the East European Summer School this summer. Get in touch if you will be there too!
Edit: My poster got lucky and received the best theory poster award!