Summary

The classical problem of minimizing a finite sum of smooth convex functions is considered:

$$ \min_{x} \; g(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x). $$

This setting is common in deep learning, where the gradient of the loss function is computed over a huge number of data points, making each full-gradient iteration very expensive. The baseline remedy is to evaluate the loss and the corresponding gradient only on a small subset of indices from $\{1, \ldots, n\}$, which is usually referred to as stochastic gradient descent (SGD). The authors claim that the convergence rate of the proposed algorithm matches that of the full gradient descent method ($\mathcal{O}(1/k)$ for convex functions and a linear rate for strongly convex objectives), while the iteration cost remains the same as for the stochastic version.
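To make the baseline concrete, here is a minimal sketch of mini-batch SGD on a finite sum, assuming a hypothetical per-summand gradient oracle `grad_i(x, i)` (the function names and interface are illustrative, not from the paper):

```python
import numpy as np

def sgd(grad_i, x0, n, alpha=0.1, iters=1000, batch=1, seed=0):
    """Plain SGD on g(x) = (1/n) * sum_i f_i(x).

    grad_i(x, i) returns the gradient of a single summand f_i; each
    step touches only `batch` random indices, so the per-iteration
    cost is independent of n.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch)           # sample indices
        g = np.mean([grad_i(x, i) for i in idx], axis=0)
        x = x - alpha * g                              # stochastic step
    return x
```

For example, with least-squares summands $f_i(x) = \tfrac12 (a_i^\top x - b_i)^2$ the oracle is `lambda x, i: (A[i] @ x - b[i]) * A[i]`.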

The method itself takes the following form:

$$ x^{k+1} = x^k - \frac{\alpha_k}{n} \sum_{i=1}^{n} y_i^k, $$

where at each iteration only a random summand of the gradient is updated:

$$ y_i^k = \begin{cases} \nabla f_i(x^k), & i = i_k, \\ y_i^{k-1}, & \text{otherwise.} \end{cases} $$

  • There is a dependency on the number of summands $n$ in the bounds. However, it can be improved using a restart technique.
  • Empirical results are shown only for logistic regression with Tikhonov regularization on different datasets.
  • Batch and non-uniform sampling versions are also presented in the paper.
  • The first known paper that contains a proof of linear convergence for the strongly convex case at a stochastic iteration cost.
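The update above can be sketched in a few lines, assuming the same hypothetical gradient oracle `grad_i(x, i)` as before (an illustrative interface, not the authors' code); the key point is that the running average of stored gradients is maintained in $O(d)$ per step:

```python
import numpy as np

def sag(grad_i, x0, n, L, iters=1000, seed=0):
    """Sketch of the SAG iteration.

    y[i] stores the most recently computed gradient of summand f_i;
    each iteration refreshes one random entry and steps along the
    average of all stored gradients, so the per-iteration cost
    matches SGD rather than full gradient descent.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    y = np.zeros((n, x.size))        # stored per-summand gradients y_i
    g_avg = np.zeros(x.size)         # running mean (1/n) * sum_i y_i
    alpha = 1.0 / (16.0 * L)         # constant step from the paper's bound
    for _ in range(iters):
        i = int(rng.integers(n))
        g_new = grad_i(x, i)
        g_avg += (g_new - y[i]) / n  # update the mean in O(d), not O(n d)
        y[i] = g_new
        x = x - alpha * g_avg
    return x
```

Initializing `y` to zero is one common choice; the stored table costs $O(nd)$ memory, which is the main practical price of the method.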

Bounds

For a constant step size $\alpha = \dfrac{1}{16 L}$, where $L$ stands for the Lipschitz constant of the gradient of each function $f_i(x)$ (in practice, this means $L = \max\limits_{i=1, \ldots, n} L_i$), the following bounds hold:

$$ \mathbb{E}\left[ g(\bar{x}_k) \right] - g(x^\ast) \leq \frac{32 n}{k}\, C_0, $$

where $ C_0=g\left(x_0\right)-g\left(x^*\right)+\frac{4L}{n} \| x_0 - x^\ast\|^2 +\frac{\sigma^2}{16L}$ (with $\sigma^2$ the variance of the summand gradients at the optimum) in the convex case, and

$$ \mathbb{E}\left[ g(x_k) \right] - g(x^\ast) \leq \left( 1 - \min\left\{ \frac{\mu}{16 L}, \frac{1}{8 n} \right\} \right)^{k} C_0 $$

in the $\mu$-strongly convex case.
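As a worked example of the strongly convex rate, the snippet below computes how many iterations the geometric factor $\rho = \min\{\mu/(16L),\, 1/(8n)\}$ needs to drive the bound below a target $\varepsilon$ (illustrative arithmetic on the stated rate, not code from the paper):

```python
import math

def sag_iters_to_eps(mu, L, n, C0, eps):
    """Smallest k with (1 - rho)^k * C0 <= eps for the strongly
    convex SAG bound, rho = min(mu / (16 L), 1 / (8 n))."""
    rho = min(mu / (16.0 * L), 1.0 / (8.0 * n))
    return math.ceil(math.log(C0 / eps) / -math.log1p(-rho))
```

Note that for large $n$ the $1/(8n)$ term dominates, which is the $n$-dependence mentioned in the remarks above.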