• Media type: Other publication; Report; E-book
  • Title: Advances in low-memory subgradient optimization
  • Contributors: Dvurechensky, Pavel [Author]; Gasnikov, Alexander [Author]; Nurminski, Evgeni [Author]; Stonyakin, Fedor [Author]
  • Published: Weierstrass Institute for Applied Analysis and Stochastics publication server, 2020
  • Language: English
  • DOI: https://doi.org/10.20347/WIAS.PREPRINT.2676
  • Keywords: 90C30; article; 68Q25; Convex optimization -- composite optimization -- universal method -- mirror prox -- acceleration -- non-smooth optimization; 90C25
  • Notes: This data source also contains holdings records that do not lead to a full text.
  • Description: One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality, or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n², where n is the number of variables of the non-smooth problem. However, with the rapid development of ever more sophisticated models in industry, economics, finance, etc., such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area improved significantly over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N. Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly exposed to introduce the relevant fundamentals of non-smooth optimization. Special attention is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence of subgradient optimization compared to the smooth case, but there are modern techniques that allow non-smooth convex optimization problems to be solved faster than dictated by the lower complexity ...
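  • Example (sketch): as a minimal illustration of the kind of O(n)-memory, black-box subgradient scheme the description refers to, the following Python code runs a projected subgradient method with the classical diminishing step size gamma_k = R/(G*sqrt(k)); the function names, the constants R and G, and the toy problem are assumptions made here for illustration and are not the specific algorithms surveyed in the chapter.

    import numpy as np

    def subgradient_method(subgrad, project, x0, n_iters=1000, R=1.0, G=1.0):
        # Projected subgradient method with diminishing step gamma_k = R / (G * sqrt(k)),
        # which attains the O(1/sqrt(k)) rate typical of non-smooth convex problems.
        # Only the current iterate and the running best are stored: O(n) memory.
        x = np.asarray(x0, dtype=float)
        x_best, f_best = x.copy(), np.inf
        for k in range(1, n_iters + 1):
            f_x, g = subgrad(x)                 # objective value and one subgradient at x
            if f_x < f_best:                    # keep the best point seen so far
                f_best, x_best = f_x, x.copy()
            x = project(x - R / (G * np.sqrt(k)) * g)  # subgradient step + projection
        return x_best, f_best

    # Hypothetical toy problem: minimize f(x) = ||x - a||_1 over the Euclidean unit ball.
    a = np.array([2.0, -1.0, 0.5])
    subgrad = lambda x: (np.abs(x - a).sum(), np.sign(x - a))     # sign(.) is a valid subgradient
    project = lambda x: x / max(1.0, np.linalg.norm(x))           # projection onto the unit ball
    x_star, f_star = subgradient_method(subgrad, project, np.zeros(3), n_iters=5000)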