Papers

* denotes equal contribution

Uniform Sampling over Episode Difficulty
S.M.R. Arnold*, G.S. Dhillon*, A. Ravichandran, S. Soatto, 2021, ArXiv
[ArXiv, pdf]

Embedding Adaptation is Still Needed for Few-Shot Learning
S.M.R. Arnold, F. Sha, 2021, ArXiv
[ArXiv, pdf]

When MAML Can Adapt Fast and How to Assist When It Cannot
S.M.R. Arnold, S. Iqbal, F. Sha, 2021, AISTATS
[ArXiv, pdf, website, code]

learn2learn: A Library for Meta-Learning Research
S.M.R. Arnold, P. Mahajan, D. Datta, I. Bunner, K.S. Zarkias, 2020, ArXiv
[ArXiv, pdf, website, code]

Analyzing the Variance of Policy Gradient Estimators for the Linear-Quadratic Regulator
J. Preiss, S.M.R. Arnold, C-Y. Wei, M. Kloft, 2019, NeurIPS OptRL Workshop
[ArXiv, pdf]

Reducing the variance in online optimization by transporting past gradients
S.M.R. Arnold, P.-A. Manzagol, R. Babanezhad, I. Mitliagkas, N. Le Roux, 2019, NeurIPS, Spotlight
[ArXiv, pdf, website, code]

Understanding the Variance of Policy Gradient Estimators in Reinforcement Learning
S.M.R. Arnold*, J. Preiss*, C-Y. Wei*, M. Kloft, 2019, SoCal Machine Learning Symposium, Best Poster
See the subsequent workshop submission for the updated preprint.

Shapechanger: Environments for Transfer Learning
S.M.R. Arnold, E. Pun, T. Denisart, F. Valero-Cuevas, 2017, SoCal Robotics Symposium
[ArXiv, pdf, website]

Accelerating SGD for Distributed Deep Learning Using an Approximated Hessian Matrix
S.M.R. Arnold, C. Wang, 2017, ICLR Workshop
[ArXiv, pdf]

A Performance Comparison between TRPO and CEM for Reinforcement Learning
S.M.R. Arnold, E. Chu, F. Valero-Cuevas, 2016, SoCal Machine Learning Symposium

A Greedy Algorithm to Cluster Specialists
S.M.R. Arnold, 2016, ArXiv
[ArXiv, pdf]