John Lambert


Bio

I am currently at Google DeepMind, working on Gemini. Previously, I was a senior research scientist at Waymo (formerly the Google self-driving car project), where I spent almost 3 years working on data-driven multi-agent simulation and generative modeling.

From 2017 to 2022, I worked at Argo AI, Intel Labs, and Zillow Research. I received my Ph.D. from Georgia Tech, where I was advised by James Hays and Frank Dellaert. Prior to joining Georgia Tech, I completed my Bachelor's and Master's degrees in Computer Science at Stanford University, specializing in artificial intelligence.

Research

My interests revolve around generative modeling and machine learning for multimodal reasoning, robotics, and autonomy. Past and present research areas include NLP, image understanding, 3D perception, SLAM, and simulation. I was involved in research for self-driving vehicle development from 2017 to 2024.

[My CV]

news

Sep 25, 2024 One paper (SceneDiffuser) was accepted to NeurIPS 2024.
Sep 1, 2024 Learn more about our recent work on diffusion-based world models for simulation from our CVPR ’24 workshop talk [YouTube link].
Jun 17, 2024 The 2024 Waymo Open Dataset Challenges have concluded. I served as a co-organizer; reports from the winners can be found here, along with a YouTube recording.

teaching

Aside from research, another passion of mine is teaching. I enjoy creating teaching materials on topics in statistical machine learning, computer vision, and numerical optimization. A number of the teaching modules I've written can be found below:
Module 1: Linear Algebra
Foundations: Linear Algebra Without the Agonizing Pain
Necessary background: projection, Gram-Schmidt, and the SVD
Fast Nearest Neighbors
Vectorizing nearest neighbors (with no for-loops!); a short sketch follows below
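
As a taste of the vectorization idea, here is a minimal NumPy sketch (toy data and function names of my own choosing, not code from the post) that computes all pairwise squared distances with broadcasting alone:

```python
import numpy as np

def pairwise_sq_dists(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """All pairwise squared Euclidean distances between rows of X (N,D) and Y (M,D).

    Uses the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, fully vectorized.
    """
    x_sq = np.sum(X ** 2, axis=1)[:, np.newaxis]  # (N, 1)
    y_sq = np.sum(Y ** 2, axis=1)[np.newaxis, :]  # (1, M)
    cross = X @ Y.T                               # (N, M)
    # Clamp tiny negatives caused by floating-point cancellation.
    return np.maximum(x_sq - 2.0 * cross + y_sq, 0.0)

X = np.random.randn(5, 3)
Y = np.random.randn(8, 3)
nn_idx = np.argmin(pairwise_sq_dists(X, Y), axis=1)  # nearest row of Y per row of X
```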
Module 2: Numerical Linear Algebra
Direct Methods for Solving Systems of Linear Equations
back-substitution and the LU, Cholesky, and QR factorizations (a short worked sketch follows this module)
Conjugate Gradients
large systems of equations, Krylov subspaces, and the Cayley-Hamilton theorem
Least-Squares
QR decomposition for least-squares, modified Gram-Schmidt, and GMRES
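
To make the direct-methods material concrete, the following is a small illustrative sketch (a toy example of my own, not code from the tutorial) pairing NumPy's QR factorization with hand-written back-substitution to solve a linear system:

```python
import numpy as np

def back_substitution(U: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve Ux = b for upper-triangular U by working from the last row up."""
    n = b.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} U_ij x_j) / U_ii
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.random.randn(4, 4)
b = np.random.randn(4)
Q, R = np.linalg.qr(A)             # A = QR, with R upper-triangular
x = back_substitution(R, Q.T @ b)  # Ax = b  =>  Rx = Q^T b
assert np.allclose(A @ x, b)
```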
Module 3: SVMs and Optimization
The Kernel Trick
a poorly taught but beautiful piece of insight that makes SVMs work
Gauss-Newton Optimization in 10 Minutes
Derivation, trust-region variant (Levenberg-Marquardt), and a NumPy implementation (see the sketch after this module)
Convex Optimization Without the Agonizing Pain
Constrained Optimization, Lagrangians, Duality, and Interior Point Methods
Subgradient Methods in 10 Minutes
Convex Optimization Part II
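
The Gauss-Newton post walks through a full NumPy implementation; as a standalone illustration (with a toy curve-fitting problem and names of my own choosing), the core loop might look like this for fitting y = exp(a*x + b):

```python
import numpy as np

def gauss_newton(x, y, theta, num_iters=20):
    """Fit y = exp(a*x + b) by Gauss-Newton: linearize the residuals,
    then solve the normal equations for the update step."""
    for _ in range(num_iters):
        a, b = theta
        pred = np.exp(a * x + b)
        r = pred - y                          # residuals, shape (N,)
        # Jacobian of residuals w.r.t. (a, b), shape (N, 2)
        J = np.stack([pred * x, pred], axis=1)
        # Gauss-Newton step: solve (J^T J) delta = -J^T r
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        theta = theta + delta
    return theta

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.exp(0.7 * x + 0.3) + 0.01 * rng.standard_normal(50)
theta = gauss_newton(x, y, theta=np.array([0.0, 0.0]))  # converges near (0.7, 0.3)
```

Levenberg-Marquardt (covered in the post) would simply add a damping term, solving (J^T J + lambda*I) delta = -J^T r instead.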
Module 4: State Estimation
The Bayes Filter and Intro to State Estimation
linear dynamical systems, Bayes' rule, Bayesian estimation, and filtering (a minimal filter sketch follows this module)
Lie Groups and Rigid Body Kinematics
SO(2), SO(3), SE(2), SE(3), Lie algebras
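
As a concrete instance of the Bayes filter under linear-Gaussian assumptions, here is a minimal scalar Kalman filter sketch (a toy example of mine, not code from the post):

```python
import numpy as np

def kalman_1d(zs, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Scalar Kalman filter with random-walk dynamics.

    Each step predicts (uncertainty grows by process noise q), then
    corrects with measurement z (noise variance r) via the Kalman gain.
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                # predict: state stays put, variance grows
        k = p / (p + r)          # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)      # update: blend prediction and measurement
        p = (1.0 - k) * p        # posterior variance shrinks after the update
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
zs = 2.0 + np.sqrt(0.5) * rng.standard_normal(100)  # noisy readings of a constant
xs = kalman_1d(zs)  # xs[-1] should settle close to 2.0
```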
Module 5: Geometry and Camera Calibration
Stereo and Disparity
disparity maps, cost volumes, and MC-CNN
Epipolar Geometry and the Fundamental Matrix
simple ideas that are normally poorly explained
Visual Odometry
the essential matrix, Nistér's five-point algorithm, and a derivation of the epipolar constraint
Iterative Closest Point
registration, Sim(3) optimization, simple derivations, and code examples (a Procrustes sketch follows)
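
The alignment step inside ICP, once correspondences are fixed, reduces to the orthogonal Procrustes problem; a short SVD-based sketch (a toy example of my own, assuming known correspondences) is:

```python
import numpy as np

def align_point_sets(P, Q):
    """Find rotation R and translation t minimizing sum_i ||R p_i + t - q_i||^2
    (the Kabsch / orthogonal Procrustes solution)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection fix: force det(R) = +1
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

rng = np.random.default_rng(2)
P = rng.standard_normal((30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = align_point_sets(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [1.0, -2.0, 0.5])
```

Full ICP simply alternates this closed-form alignment with re-estimating nearest-neighbor correspondences.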
Module 6: Convolutional Neural Networks
Backprop through a Conv Layer
Deriving backprop through convolution with respect to either the kernel weights or the inputs (a finite-difference check follows this module)
Generative Adversarial Networks (GANs)
Deriving minimax and non-saturating losses, DCGAN implementation
PyTorch Tutorial
PyTorch tensor operations, initializing conv layers, groups, and custom modules
JAX Tutorial
Intro to JAX, Optax, Flax and its Linen API, and training loops in JAX
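
As a sanity check on the conv-layer gradients derived in the first post above, here is a tiny 1-D example of my own (not code from the tutorial) that verifies the kernel gradient against finite differences:

```python
import numpy as np

def conv1d_valid(x, w):
    """'Valid' 1-D cross-correlation, as in a conv layer: out[i] = sum_k x[i+k] w[k]."""
    n, k = len(x), len(w)
    return np.array([x[i:i + k] @ w for i in range(n - k + 1)])

def conv1d_grad_w(x, grad_out):
    """Gradient of the loss w.r.t. the kernel: dL/dw[k] = sum_i grad_out[i] x[i+k].
    Note this is itself a cross-correlation of x with the upstream gradient."""
    return conv1d_valid(x, grad_out)

rng = np.random.default_rng(3)
x, w = rng.standard_normal(10), rng.standard_normal(3)
grad_out = rng.standard_normal(8)            # upstream gradient dL/dout
analytic = conv1d_grad_w(x, grad_out)

# Finite-difference check: perturb each kernel weight and measure the loss change.
eps = 1e-6
numeric = np.zeros_like(w)
for k in range(len(w)):
    w_plus = w.copy()
    w_plus[k] += eps
    numeric[k] = ((conv1d_valid(x, w_plus) - conv1d_valid(x, w)) @ grad_out) / eps
assert np.allclose(analytic, numeric, atol=1e-4)
```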
Module 7: Reinforcement Learning
Policy Gradients
intuition and simple derivations of REINFORCE and TRPO (a toy REINFORCE sketch follows)
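
To make REINFORCE concrete, here is a minimal sketch of the score-function gradient for a softmax policy on a toy multi-armed bandit (the problem setup and all names are my own, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
true_means = np.array([0.1, 0.5, 0.9])  # expected reward of each bandit arm
logits = np.zeros(3)                    # policy parameters
baseline, lr = 0.0, 0.1

for _ in range(2000):
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()         # softmax policy pi(a)
    a = rng.choice(3, p=probs)
    reward = true_means[a] + 0.1 * rng.standard_normal()
    # Score function for a softmax policy: grad_logits log pi(a) = one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    baseline = 0.9 * baseline + 0.1 * reward          # running baseline cuts variance
    logits += lr * (reward - baseline) * grad_log_pi  # REINFORCE ascent step

# probs should now concentrate on arm 2, the arm with the highest expected reward.
```

The running baseline is optional for correctness but is the standard variance-reduction trick discussed alongside REINFORCE.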
Module 8: Geometric Data Analysis
Module 9: Message Passing Interface (MPI)