Senior Research Scientist, Waymo

Bio

I am currently a research scientist at Waymo (formerly the Google self-driving car project). Previously, I spent several years at Argo AI, Intel Labs, and Zillow Research. I received my Ph.D. from Georgia Tech, where I was advised by James Hays and Frank Dellaert. Prior to joining Georgia Tech, I completed my Bachelor’s and Master’s degrees in Computer Science at Stanford University, specializing in artificial intelligence.

[My CV]

Research

My interests revolve around generative modeling and machine learning for robotics and autonomy. Past and present research areas include image understanding, 3D perception, SLAM, and simulation. I’ve worked on self-driving vehicle research since 2017.

News

Past News

Teaching

Aside from research, another passion of mine is teaching. I enjoy creating teaching materials for topics related to computer vision, a field which relies heavily upon numerical optimization and statistical machine learning tools. A number of teaching modules I’ve written can be found below:

Module 1: Linear Algebra
Foundations: Linear Algebra Without the Agonizing Pain
Necessary background: projection, Gram-Schmidt, SVD
Fast Nearest Neighbors
Vectorizing nearest neighbors (with no for-loops!)
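As a taste of the vectorized nearest-neighbors trick, here is a minimal NumPy sketch (the function name, array shapes, and toy data are my own illustration, not taken from the post): all pairwise squared distances come from the expansion ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, with no Python loops.

```python
import numpy as np

def nearest_neighbors(X, Y):
    """For each row of X (N x D), return the index of its nearest row in Y (M x D).

    Uses ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 so that all N x M squared
    distances are computed with matrix operations and broadcasting, no for-loops.
    """
    x_sq = np.sum(X**2, axis=1, keepdims=True)    # (N, 1)
    y_sq = np.sum(Y**2, axis=1, keepdims=True).T  # (1, M)
    dists = x_sq - 2.0 * X @ Y.T + y_sq           # (N, M) squared distances
    return np.argmin(dists, axis=1)               # (N,) index of nearest row of Y

# Example: 5 query points and 100 database points in R^3.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(100, 3))
print(nearest_neighbors(X, Y))
```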
Module 2: Numerical Linear Algebra
Direct Methods for Solving Systems of Linear Equations
back-substitution and the LU, Cholesky, and QR factorizations
Conjugate Gradients
large systems of equations, Krylov subspaces, Cayley-Hamilton Theorem
Least-Squares
QR decomposition for least-squares, modified Gram-Schmidt, GMRES
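For flavor, here is a minimal sketch of the QR route to least-squares (the line-fitting example is my own illustration, not the post's): factor A = QR, then solve the triangular system R x = Q^T b.

```python
import numpy as np

def lstsq_qr(A, b):
    """Solve min ||Ax - b||_2 via the reduced QR factorization A = QR.

    Because Q has orthonormal columns, the normal equations collapse to the
    upper-triangular system R x = Q^T b, solved by back-substitution
    (np.linalg.solve is used here for brevity).
    """
    Q, R = np.linalg.qr(A)            # reduced QR: Q is (m, n), R is (n, n)
    return np.linalg.solve(R, Q.T @ b)

# Example: fit a line y = c0 + c1 * t to noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * t + 0.01 * rng.normal(size=t.shape)
A = np.stack([np.ones_like(t), t], axis=1)
print(lstsq_qr(A, y))                 # approximately [2.0, 3.0]
```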
Module 3: SVMs and Optimization
The Kernel Trick
a poorly taught but beautiful insight that makes SVMs work
Gauss-Newton Optimization in 10 Minutes
Derivation, trust-region variant (Levenberg-Marquardt), and a NumPy implementation (a short sketch follows this module)
Convex Optimization Without the Agonizing Pain
Constrained Optimization, Lagrangians, Duality, and Interior Point Methods
Subgradient Methods in 10 Minutes
Convex Optimization Part II
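Here is the sketch mentioned above: a bare-bones Gauss-Newton loop in NumPy. The exponential curve-fitting problem and the variable names are illustrative assumptions, not the post's own example; each iteration solves the normal equations (J^T J) dx = -J^T r.

```python
import numpy as np

def gauss_newton(residual_and_jac, x0, num_iters=20):
    """Minimal Gauss-Newton loop: at each step solve (J^T J) dx = -J^T r."""
    x = x0.copy()
    for _ in range(num_iters):
        r, J = residual_and_jac(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Toy problem: fit y = a * exp(b * t) to noisy data, unknowns x = (a, b).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.shape)

def residual_and_jac(x):
    a, b = x
    e = np.exp(b * t)
    r = a * e - y                         # residual vector
    J = np.stack([e, a * t * e], axis=1)  # Jacobian w.r.t. (a, b)
    return r, J

print(gauss_newton(residual_and_jac, x0=np.array([1.0, -1.0])))  # ~[2.0, -1.5]
```

The Levenberg-Marquardt variant described in the post simply damps the same system, solving (J^T J + lambda I) dx = -J^T r instead.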
Module 4: State Estimation
The Bayes Filter and Intro to State Estimation
linear dynamical systems, Bayes' rule, Bayesian estimation, and filtering
Lie Groups and Rigid Body Kinematics
SO(2), SO(3), SE(2), SE(3), and their Lie algebras
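As a small companion to the Lie groups module, here is a sketch of the SO(3) exponential map via the Rodrigues formula (the function names and small-angle tolerance are my own choices):

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding so(3) skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:                 # near the identity, use the first-order term
        return np.eye(3) + hat(w)
    K = hat(w / theta)                # unit-axis skew matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A rotation of 90 degrees about the z-axis maps the x-axis to the y-axis.
R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # ~[0, 1, 0]
```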
Module 5: Geometry and Camera Calibration
Stereo and Disparity
disparity maps, cost volume, MC-CNN
Epipolar Geometry and the Fundamental Matrix
simple ideas that are normally poorly explained
Visual Odometry
The essential matrix, Nistér's five-point algorithm, and a derivation of the epipolar constraint
Iterative Closest Point
registration, Sim(3) optimization, simple derivations and code examples
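To make the inner step of ICP concrete, here is a sketch of the SVD-based rigid alignment (Kabsch/Procrustes) that ICP solves once correspondences are fixed; the random test data and function name are illustrative:

```python
import numpy as np

def align_rigid(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q (both N x 3),
    assuming known correspondences, via the SVD-based Kabsch/Procrustes solution."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # rule out reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Example: recover a known rotation and translation from 10 random points.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthonormal matrix
if np.linalg.det(R_true) < 0:                        # ensure a proper rotation
    R_true[:, 0] *= -1
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = align_rigid(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))        # True [1. 2. 3.]
```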
Module 6: Convolutional Neural Networks
Backprop through a Conv Layer
Deriving backprop through a convolution with respect to either the kernel weights or the inputs
Generative Adversarial Networks (GANs)
Deriving the minimax and non-saturating losses, with a DCGAN implementation
PyTorch Tutorial
PyTorch tensor operations, initializing conv layers, groups, and custom modules
JAX Tutorial
Intro to JAX, Optax, Flax (Linen), and JAX training loops
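As a taste of a JAX training loop, here is a minimal sketch using plain jax.grad and a hand-rolled SGD update (Optax and Flax are left out for brevity; the linear model and toy data are my own illustration):

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    """A linear model y = x @ w + b; params is a (w, b) tuple."""
    w, b = params
    return x @ w + b

def loss_fn(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=0.1):
    """One SGD step: differentiate the loss w.r.t. params and update each leaf."""
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Toy data generated by y = 3x - 1.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 1))
y = 3.0 * x[:, 0] - 1.0

params = (jnp.zeros((1,)), jnp.zeros(()))
for _ in range(200):
    params = train_step(params, x, y)
print(params)    # w ~ [3.0], b ~ -1.0
```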
Module 7: Reinforcement Learning
Policy Gradients
intuition and simple derivations of REINFORCE and TRPO
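For intuition, here is a tiny NumPy sketch of the REINFORCE estimator on a softmax bandit policy (the bandit, hyperparameters, and lack of a baseline are illustrative choices, not the post's example):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 1.0])   # 3-armed bandit; arm 2 pays the most
theta = np.zeros(3)                      # softmax policy parameters
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)                  # sample an action from the policy
    r = true_means[a] + 0.1 * rng.normal()   # noisy reward
    grad_log_pi = -pi                        # d log pi(a) / d theta = onehot(a) - pi
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi            # REINFORCE: ascend E[r * grad log pi]

print(softmax(theta))   # most of the probability mass should land on arm 2
```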
Module 8: Geometric Data Analysis
Module 9: Message Passing Interface (MPI)