First order methods
Now we have only first order information from the oracle: for a query point $x$ it returns the value $f(x)$ and the gradient $\nabla f(x)$ (or a subgradient when $f$ is not differentiable).
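As a small illustration (not part of this page), here is a minimal sketch of what a first order oracle and the simplest method built on top of it look like on a toy quadratic. The names `first_order_oracle` and `gradient_descent`, the test problem, and the step size are illustrative assumptions, not code from the site.

```python
import numpy as np

# Toy strongly convex quadratic f(x) = 1/2 x^T A x - b^T x (illustrative choice).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

def first_order_oracle(x):
    """Return only the value f(x) and the gradient — all a first order method may use."""
    f = 0.5 * x @ A @ x - b @ x
    grad = A @ x - b
    return f, grad

def gradient_descent(x0, alpha=0.1, n_iter=100):
    """Plain gradient descent: x_{k+1} = x_k - alpha * grad f(x_k)."""
    x = x0.copy()
    for _ in range(n_iter):
        _, grad = first_order_oracle(x)
        x = x - alpha * grad
    return x

x_hat = gradient_descent(np.zeros(2))
print(x_hat, np.linalg.solve(A, b))  # iterate vs. exact minimizer A^{-1} b
```

For an $L$-smooth objective a fixed step size on the order of $1/L$ (here $L$ is the largest eigenvalue of $A$) is enough to guarantee convergence; the methods listed below refine this basic template with projections, mirror maps, stochastic gradients, or adaptive preconditioning.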
Materials
Visualization of optimization algorithms.
Table of contents
Gradient descent
Subgradient descent
Projected subgradient descent
Mirror descent
Stochastic gradient descent
Stochastic average gradient
ADAM: A Method for Stochastic Optimization
Lookahead Optimizer: $k$ steps forward, $1$ step back
Shampoo: Preconditioned Stochastic Tensor Optimization