Theory
Matrix calculus
Convex sets
Affine set
Convex set
Conic set
Projection
Convex function
Conjugate set
Conjugate function
Dual norm
Subgradient and subdifferential
Optimality conditions. KKT
Convex optimization problem
Duality
Rates of convergence
Methods
Line search
Binary search
Golden search
Inexact Line Search
Successive parabolic interpolation
Zero order methods
Bee algorithm
Nelder–Mead
Simulated annealing
First order methods
Gradient descent
Subgradient descent
Projected subgradient descent
Mirror descent
Stochastic gradient descent
Stochastic average gradient
ADAM: A Method for Stochastic Optimization
Lookahead Optimizer: k steps forward, 1 step back
Adaptive metric methods
Newton method
Quasi Newton methods
Conjugate gradients
Natural gradient descent
Linear Programming and simplex algorithm
Automatic differentiation
Exercises
Matrix calculus
Convex sets
Projection
Separation
Conjugate sets
Convex functions
Subgradient and subdifferential
Conjugate functions
General optimization problems
Duality
Rates of convergence
Line search
CVXPY library
Automatic differentiation
Zero order methods
First order methods
Uncategorized
Applications
A* algorithm for path finding
Deep learning
Minimum volume ellipsoid
Knapsack problem
Linear least squares
Maximum likelihood estimation
Neural network Lipschitz constant
Neural Network Loss Surface Visualization
Principal component analysis
Rendezvous problem
Travelling salesman problem
Total variation in-painting
Two way partitioning problem
Benchmarks
CNN on FashionMNIST
Linear Least Squares
Materials
Tutorials
Quick start to the Colab
Theory
This chapter provides information about the foundational terms and notation used in optimization.
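As a minimal sketch of the notation these topics revolve around (an illustration, not a definition taken from the chapter itself), a general constrained optimization problem is commonly written as

$$
\begin{aligned}
& \min_{x \in \mathbb{R}^n} && f_0(x) \\
& \text{s.t.} && f_i(x) \leq 0, \quad i = 1, \dots, m, \\
& && h_j(x) = 0, \quad j = 1, \dots, p.
\end{aligned}
$$

The topics below (convex sets and functions, KKT optimality conditions, duality) are stated for problems of this form, typically under convexity assumptions on $f_0$ and $f_i$ and affinity of $h_j$.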
Matrix calculus
Convex sets
Affine set
Convex set
Conic set
Projection
Convex function
Conjugate set
Conjugate function
Dual norm
Subgradient and subdifferential
Optimality conditions. KKT
Convex optimization problem
Duality
Rates of convergence