Ph.D. Candidate, School of Interactive Computing
Georgia Institute of Technology


I am a Ph.D. student at Georgia Tech, where I have the good fortune to work with Professors James Hays and Frank Dellaert. I completed my Bachelor’s and Master’s degrees in Computer Science at Stanford University in 2018, specializing in artificial intelligence.

You can reach me at johnlambert AT gatech DOT edu. Some of my code can be found here.

[My CV]



Humans have an amazing ability to understand the world through their visual system, but designing automated systems that do the same remains difficult. We take for granted almost everything our visual system is capable of. While great progress has been made in 2D image understanding, the real world is 3D, not 2D, so reasoning in the 2D image plane alone is insufficient. The 3D world is high-dimensional, hard to reason about, and demands far more data to learn from.

My research interests revolve around geometric and semantic understanding of 3D environments. Accurate understanding of 3D environments will have enormous benefit for people all over the world, with implications for safer transportation and safer workplaces.


Aside from research, another passion of mine is teaching. I enjoy creating teaching materials for topics in computer vision, a field that relies heavily on numerical optimization and statistical machine learning tools. A number of teaching modules I’ve written can be found below:

Module 1: Linear Algebra
Linear Algebra Without the Agonizing Pain
Necessary Linear Algebra Overview
Fast Nearest Neighbors
Vectorizing nearest neighbors (with no for-loops!)
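To give a flavor of the nearest-neighbors write-up, here is a minimal NumPy sketch of the no-for-loops trick (function names are mine, not the tutorial's): expand ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 and let broadcasting assemble the full distance matrix.

```python
import numpy as np

def pairwise_sq_dists(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Squared Euclidean distances between rows of X (N, D) and Y (M, D),
    computed via ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 with no loops."""
    x_sq = (X ** 2).sum(axis=1)[:, np.newaxis]  # (N, 1)
    y_sq = (Y ** 2).sum(axis=1)[np.newaxis, :]  # (1, M)
    return x_sq - 2.0 * (X @ Y.T) + y_sq        # broadcasts to (N, M)

def nearest_neighbors(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Index of the nearest row of Y for each row of X."""
    return pairwise_sq_dists(X, Y).argmin(axis=1)

X, Y = np.random.rand(5, 3), np.random.rand(8, 3)
print(nearest_neighbors(X, Y))  # five indices into the rows of Y
```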
Module 2: Numerical Linear Algebra
Direct Methods for Solving Systems of Linear Equations
back-substitution and the LU, Cholesky, and QR factorizations
Conjugate Gradients
large systems of equations, Krylov subspaces, the Cayley-Hamilton theorem (a code sketch follows this module's list)
QR decomposition for least-squares, modified Gram-Schmidt, GMRES
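As a companion to the conjugate-gradients note, here is a bare-bones sketch of the method for symmetric positive-definite systems; the variable names follow the standard textbook presentation rather than anything specific to the tutorial.

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iters=None):
    """Solve Ax = b for symmetric positive-definite A; each iteration
    costs a single matrix-vector product with A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iters or len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradients(A, b))       # matches np.linalg.solve(A, b)
```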
Module 3: SVMs and Optimization
The Kernel Trick
a poorly taught but beautiful insight that makes SVMs work
Gauss-Newton Optimization in 10 Minutes
including the trust-region variant (Levenberg-Marquardt); a code sketch follows this module's list
Convex Optimization Without the Agonizing Pain
Constrained Optimization, Lagrangians, Duality, and Interior Point Methods
Subgradient Methods in 10 Minutes
Convex Optimization Part II
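To accompany the Gauss-Newton note above, here is a minimal sketch of the basic iteration on a toy curve-fitting problem; a comment marks where Levenberg-Marquardt damping would change the step. This is an illustrative sketch, not code from the tutorial.

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, x0, n_iters=10):
    """Gauss-Newton: repeatedly linearize r(x) and solve the normal
    equations (J^T J) dx = -J^T r.  Levenberg-Marquardt would instead
    solve (J^T J + lam * I) dx = -J^T r with an adaptive damping lam."""
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        r, J = residual_fn(x), jac_fn(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
    return x

# toy nonlinear least-squares problem: fit y = exp(a * t) to samples
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
residual_fn = lambda x: np.exp(x[0] * t) - y
jac_fn = lambda x: (t * np.exp(x[0] * t))[:, np.newaxis]  # dr/da, shape (20, 1)
print(gauss_newton(residual_fn, jac_fn, np.array([0.0])))  # converges to ~0.7
```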
Module 4: State Estimation
What is State Estimation? and the Bayes Filter
linear dynamical systems, Bayes' rule, Bayesian estimation, and filtering
Lie Groups and Rigid Body Kinematics
SO(2), SO(3), SE(2), SE(3), and their Lie algebras
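For the Lie groups module, here is a small sketch of the so(3) -> SO(3) exponential map via the Rodrigues formula, a standard construction written here from scratch.

```python
import numpy as np

def hat(w: np.ndarray) -> np.ndarray:
    """Map w in R^3 to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w: np.ndarray) -> np.ndarray:
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)               # near the identity, exp(W) ~ I
    W = hat(w)
    return (np.eye(3)
            + (np.sin(theta) / theta) * W
            + ((1.0 - np.cos(theta)) / theta ** 2) * (W @ W))

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))  # 90-degree rotation about z
print(np.round(R, 3))
```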
Module 5: Geometry and Camera Calibration
Stereo and Disparity
disparity maps, cost volume, MC-CNN
Epipolar Geometry and the Fundamental Matrix
simple ideas that are normally poorly explained
Visual Odometry
the essential matrix, Nistér's five-point algorithm, and a derivation of the epipolar constraint
Iterative Closest Point
registration, Sim(3) optimization, simple derivations and code examples
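To illustrate the core of ICP, here is a sketch of the SVD-based (Procrustes) alignment step solved in each iteration, assuming correspondences are already known; a full ICP loop would recompute nearest-neighbor correspondences (see Module 1) before each call.

```python
import numpy as np

def align_rigid(P: np.ndarray, Q: np.ndarray):
    """Best-fit rotation R and translation t with R @ P[i] + t ~ Q[i],
    via the SVD (Procrustes) solution used inside each ICP iteration."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_q - R @ mu_p
    return R, t

# sanity check: recover a known rigid motion
P = np.random.rand(100, 3)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = align_rigid(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```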
Module 6: Reinforcement Learning
Policy Gradients
intuition and simple derivations of REINFORCE and TRPO
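As a complement to the policy-gradients note, here is a PyTorch-flavored sketch of the REINFORCE surrogate loss; minimizing it yields the Monte-Carlo policy-gradient update (the return normalization is a common variance-reduction trick, not part of the bare estimator).

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Surrogate loss whose gradient is the REINFORCE estimator:
    L = -sum_t log pi(a_t | s_t) * G_t, with G_t the discounted return."""
    returns, G = [], 0.0
    for r in reversed(rewards):        # discounted returns, computed backwards
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(returns[::-1])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()

# usage: log_probs collected during a rollout, one per action taken
log_probs = [torch.tensor(-1.2, requires_grad=True),
             torch.tensor(-0.4, requires_grad=True)]
loss = reinforce_loss(log_probs, rewards=[0.0, 1.0])
loss.backward()
```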
Module 7: Convolutional Neural Networks
Backprop through a Conv Layer
deriving backprop through a convolution with respect to either the kernel weights or the inputs (checked in code after this module's list)
PyTorch Tutorial
PyTorch tensor operations, initializing conv layers, groups, custom modules
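Tying the last two items together, here is a PyTorch sketch (mine, not the tutorial's) that checks the hand-derived convolution gradients against autograd for a 1D conv: the kernel gradient is a cross-correlation of the input with the upstream gradient, and the input gradient is a "full" convolution of the upstream gradient with the flipped kernel.

```python
import torch
import torch.nn.functional as F

# 1D convolution (no padding, stride 1); PyTorch's conv is cross-correlation
x = torch.randn(8, requires_grad=True)   # input signal, length 8
w = torch.randn(3, requires_grad=True)   # kernel, length 3
y = F.conv1d(x.view(1, 1, -1), w.view(1, 1, -1)).squeeze()  # output, length 6
g = torch.randn_like(y)                  # upstream gradient dL/dy
y.backward(g)                            # autograd fills x.grad and w.grad

# dL/dw: cross-correlate the input with the upstream gradient
dw = F.conv1d(x.detach().view(1, 1, -1), g.view(1, 1, -1)).squeeze()

# dL/dx: "full" convolution of the upstream gradient with the flipped kernel,
# implemented as zero-padding by (K - 1) on both sides, then cross-correlating
dx = F.conv1d(F.pad(g.view(1, 1, -1), (2, 2)),
              w.detach().flip(0).view(1, 1, -1)).squeeze()

print(torch.allclose(dw, w.grad), torch.allclose(dx, x.grad))  # True True
```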
Module 8: Geometric Data Analysis
Module 9: Message Passing Interface (MPI)