Séb Arnold

Updated December 13, 2019


I am a doctoral student in the ShaLab, supervised by Fei Sha. My research interests lie at the intersection of optimization and reinforcement learning, and they have led me to work on topics related to meta-learning.

I did my undergraduate studies at the University of Southern California, double majoring in Computer Science and Mathematics. My research there focused on robotics (with Francisco Valero-Cuevas) and mathematical optimization (with Chunming Wang).

I also like skiing (a lot).

[Contact, Resume, GitHub, Twitter]

[Photo: me and motorcycle hair]

News

Decoupling Adaptation from Modeling with Meta-Optimizers

Our preprint, Decoupling Adaptation from Modeling with Meta-Optimizers for Meta-Learning, is now available on arXiv. An open-source implementation in learn2learn is coming soon! [ArXiv, pdf]


Variance of Policy Gradient

Our preprint analyzing the variance of policy gradient estimators for LQR was accepted at the OptRL workshop at NeurIPS. [ArXiv, pdf]


Implicit Gradient Transport

Our paper on Reducing the variance in online optimization by transporting past gradients was accepted at NeurIPS as a spotlight contribution. [ArXiv, pdf, website, code]
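
In a nutshell, IGT replaces the stochastic gradient with a running average of past gradients, transported to the current iterate by evaluating each new gradient at an extrapolated point; the transport is exact for quadratic objectives. Schematically (the notation here is a sketch, see the paper for the precise statement):

\[ v_t = \gamma_t \, v_{t-1} + (1 - \gamma_t) \, \nabla f\!\left(\theta_t + \tfrac{\gamma_t}{1 - \gamma_t}\,(\theta_t - \theta_{t-1})\right), \qquad \theta_{t+1} = \theta_t - \alpha \, v_t, \qquad \gamma_t = \tfrac{t}{t+1}. \]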


Open-Sourcing learn2learn

Our submission to the PyTorch Summer Hackathon won best in show! Check out the website to learn how to easily implement meta-learning algorithms with learn2learn. [website, code]
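
For a taste of the API, here is a minimal MAML-style training sketch in the spirit of the learn2learn README; the model and the random tensors are hypothetical stand-ins for a real architecture and task sampler:

    import torch
    import learn2learn as l2l

    model = torch.nn.Linear(10, 2)             # any torch.nn.Module works here
    maml = l2l.algorithms.MAML(model, lr=0.1)  # wrap the model for fast adaptation
    opt = torch.optim.SGD(maml.parameters(), lr=1e-3)

    for task in range(10):
        learner = maml.clone()         # differentiable copy: adaptation stays in the graph
        x = torch.randn(8, 10)         # stand-in task inputs
        y = torch.randint(0, 2, (8,))  # stand-in task labels
        learner.adapt(torch.nn.functional.cross_entropy(learner(x), y))  # inner-loop step
        loss = torch.nn.functional.cross_entropy(learner(x), y)  # evaluate adapted model
        opt.zero_grad()
        loss.backward()                # backprop through the adaptation step
        opt.step()

Because clone() keeps the inner update inside the autograd graph, the outer optimizer step differentiates through adaptation, which is the crux of MAML-style meta-learning.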


East European Summer School

I will be attending the East European Summer School this summer. Get in touch if you will be there too!