Ph.D. Candidate
School of Interactive Computing
Georgia Institute of Technology

Bio

I am a Ph.D. student at Georgia Tech, advised by Professor James Hays. I completed my Bachelor’s and Master’s degrees in Computer Science at Stanford University in 2018, specializing in artificial intelligence.

You can reach me at johnlambert AT gatech DOT edu. Some of my code can be found here.

[My CV]

Research

Humans understand the world through their visual system with remarkable ease, yet designing automated systems that match this ability remains difficult; we take for granted almost everything our visual system can do. While great progress has been made in 2D image understanding, the real world is three-dimensional, so reasoning purely in the 2D image plane is insufficient. The 3D world, however, is high-dimensional and challenging, and learning to reason about it demands large amounts of data.

My research interests revolve around geometric and semantic understanding of 3D environments. Accurate 3D scene understanding could benefit people all over the world, with implications for safer transportation and safer workplaces.

Teaching

Aside from research, another passion of mine is teaching. I enjoy creating teaching materials for topics related to computer vision, a field which relies heavily upon numerical optimization and statistical machine learning tools. A number of teaching modules I’ve written can be found below:

Module 1: Linear Algebra
Linear Algebra Without the Agonizing Pain
Necessary Linear Algebra Overview
Fast Nearest Neighbors
Vectorizing nearest neighbors (with no for-loops!)
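As a taste of the vectorization trick in the nearest-neighbors post, here is a minimal NumPy sketch (function and variable names are my own, for illustration): all pairwise squared distances are computed at once via the expansion ||x − y||² = ||x||² − 2x·y + ||y||², with no Python for-loops.

```python
import numpy as np

def pairwise_sq_dists(X, Y):
    """All pairwise squared Euclidean distances between rows of X (n,d)
    and Y (m,d), via ||x-y||^2 = ||x||^2 - 2 x.y + ||y||^2 -- no loops."""
    x_sq = np.sum(X**2, axis=1)[:, np.newaxis]  # (n,1)
    y_sq = np.sum(Y**2, axis=1)[np.newaxis, :]  # (1,m)
    cross = X @ Y.T                             # (n,m)
    # clamp tiny negative values caused by floating-point cancellation
    return np.maximum(x_sq - 2.0 * cross + y_sq, 0.0)

def nearest_neighbors(X, Y):
    """Index of the nearest row of Y for each row of X."""
    return np.argmin(pairwise_sq_dists(X, Y), axis=1)
```

The broadcasting of the (n,1) and (1,m) norm vectors against the (n,m) cross-term is what removes the double loop.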
Module 2: Numerical Linear Algebra
Conjugate Gradients
large systems of equations, Krylov subspaces, Cayley-Hamilton Theorem
Module 3: SVMs and Optimization
The Kernel Trick
a poorly taught but beautiful insight that makes SVMs work
Gauss-Newton Optimization in 10 Minutes
Including Trust-Region Variant (Levenberg-Marquardt)
Convex Optimization Without the Agonizing Pain
Constrained Optimization, Lagrangians, Duality, and Interior Point Methods
Subgradient Methods in 10 Minutes
Convex Optimization Part II
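The Gauss-Newton iteration from this module fits in a few lines: linearize the residual, solve the normal equations JᵀJ Δx = −Jᵀr, and repeat. The sketch below is illustrative (names and the exponential-fit example are my own, not from the posts), and omits the Levenberg-Marquardt damping term for brevity.

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, theta0, num_iters=20):
    """Minimize 0.5*||r(theta)||^2 by repeatedly linearizing r and
    solving the Gauss-Newton normal equations J^T J dx = -J^T r."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(num_iters):
        r = residual_fn(theta)   # (m,)  residual vector
        J = jacobian_fn(theta)   # (m,p) Jacobian of the residual
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        theta = theta + dx
    return theta
```

For example, fitting y = exp(θx) to noiseless samples uses r(θ) = exp(θx) − y and J(θ) = x·exp(θx) as a column vector; the iteration converges quadratically near a zero-residual solution. Levenberg-Marquardt simply replaces JᵀJ with JᵀJ + λI to get a trust-region-like step.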
Module 4: State Estimation
What is State Estimation? and the Bayes Filter
linear dynamical systems, Bayes' rule, Bayesian estimation, and filtering
Lie Groups and Rigid Body Kinematics
SO(2), SO(3), SE(2), SE(3), Lie algebras
Module 5: Geometry and Camera Calibration
Epipolar Geometry and the Fundamental Matrix
simple ideas that are normally poorly explained
Iterative Closest Point
simple derivations and code examples
Structure From Motion
Deriving bundle adjustment
Module 6: Reinforcement Learning
Policy Gradients
intuition and simple derivations of REINFORCE, TRPO
Module 7: Geometric Data Analysis
Module 8: Message Passing Interface (MPI)