Small random initialization is akin to spectral learning : Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction

Title details


Stöger, Dominik ; Soltanolkotabi, Mahdi:
Small random initialization is akin to spectral learning : Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction.
In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021). - 2021

Full text

Abstract

Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on nonconvex losses with overparameterized models. Nevertheless, many aspects of optimization and generalization, and in particular the critical role of small random initialization, are not fully understood. In this paper, we take a step towards demystifying this role by proving that small random initialization followed by a few iterations of gradient descent behaves akin to popular spectral methods. We also show that this implicit spectral bias from small random initialization, which is provably more prominent for overparameterized models, also puts the gradient descent iterations on a particular trajectory towards solutions that are not only globally optimal but also generalize well. Concretely, we focus on the problem of reconstructing a low-rank matrix from a few measurements via a natural nonconvex formulation. In this setting, we show that the trajectory of the gradient descent iterations from small random initialization can be approximately decomposed into three phases: (I) a spectral or alignment phase where we show that the iterates have an implicit spectral bias akin to spectral initialization, allowing us to show that at the end of this phase the column space of the iterates and the underlying low-rank matrix are sufficiently aligned, (II) a saddle avoidance/refinement phase where we show that the trajectory of the gradient iterates moves away from certain degenerate saddle points, and (III) a local refinement phase where we show that after avoiding the saddles the iterates converge quickly to the underlying low-rank matrix. Underlying our analysis are insights for the analysis of overparameterized nonconvex optimization schemes that may have implications for computational problems beyond low-rank reconstruction.
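
The abstract describes gradient descent on a factorized, overparameterized formulation of low-rank matrix sensing, started from a small random initialization. The following NumPy sketch (not the authors' code; the dimensions, initialization scale alpha, step size, and iteration count are illustrative assumptions) shows what that setup looks like: measurements y_i = <A_i, X*> of a rank-r matrix X*, a factor U with k > r columns, and plain gradient descent on the squared measurement residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes: ambient dimension, true rank,
# overparameterized factor rank, and number of linear measurements.
n, r, k, m = 30, 2, 10, 500
alpha, eta, iters = 1e-6, 0.1, 1000   # small init scale, step size, GD iterations

# Ground-truth low-rank PSD matrix X* = U* U*^T.
U_star = rng.normal(size=(n, r)) / np.sqrt(n)
X_star = U_star @ U_star.T

# Symmetrized Gaussian measurement matrices A_i and measurements y_i = <A_i, X*>.
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('mij,ij->m', A, X_star)

# Small random initialization U_0 with scale alpha.
U = alpha * rng.normal(size=(n, k)) / np.sqrt(n)

# Gradient descent on f(U) = 1/(2m) * sum_i (<A_i, U U^T> - y_i)^2.
for _ in range(iters):
    residual = np.einsum('mij,ij->m', A, U @ U.T) - y             # <A_i, U U^T> - y_i
    grad = (2.0 / m) * np.einsum('m,mij,jk->ik', residual, A, U)  # gradient w.r.t. U
    U -= eta * grad

print("relative error:", np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```

With a small alpha, the early iterations barely change the loss while the iterates align with the top eigendirections of (1/m) * sum_i y_i A_i; this corresponds to the spectral/alignment phase described in the abstract, after which the iterates refine towards the underlying low-rank matrix.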

Further information

Publication type: Article in a book
Language of record: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Juniorprofessur für Data Science
Mathematisch-Geographische Fakultät > Mathematik > Mathematisches Institut für Maschinelles Lernen und Data Science (MIDS)
Open Access: Full text freely accessible?: Yes
Peer-reviewed article: Yes
Work originated at the KU: Yes
KU.edoc-ID: 30075
Deposited on: 05 May 2022 10:48
Last modified: 30 Sep 2024 10:44
URL of this record: https://edoc.ku.de/id/eprint/30075/