
Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing

Title information


Soltanolkotabi, Mahdi; Stöger, Dominik; Xie, Changzhi:
Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing.
In: Proceedings of Machine Learning Research. 195 (2023). - pp. 5140-5142.
ISSN 2640-3498

Abstract

Recently, there has been significant progress in understanding the convergence and generalization properties of gradient-based methods for training overparameterized learning models. However, many aspects, including the role of small random initialization and how the various parameters of the model are coupled during gradient-based updates to facilitate good generalization, remain largely mysterious. A series of recent papers have begun to study this role for non-convex formulations of symmetric Positive Semi-Definite (PSD) matrix sensing problems, which involve reconstructing a low-rank PSD matrix from a few linear measurements. The underlying symmetry/PSDness is crucial to existing convergence and generalization guarantees for this problem. In this paper, we study a general overparameterized low-rank matrix sensing problem where one wishes to reconstruct an asymmetric rectangular low-rank matrix from a few linear measurements. We prove that an overparameterized model trained via factorized gradient descent converges to the low-rank matrix generating the measurements. We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) coupling of the trajectory of gradient descent, where the factors are coupled in various ways throughout the gradient update trajectory, and (2) an algorithmic regularization property, where the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model. These two implicit properties in turn allow us to show that the gradient descent trajectory from small random initialization moves towards solutions that are both globally optimal and generalize well.
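
The procedure described in the abstract can be illustrated numerically. The following NumPy snippet is a minimal sketch, not the authors' implementation: it draws a random low-rank ground-truth matrix, takes Gaussian linear measurements, and runs factorized gradient descent on the overparameterized product U V^T from a small random initialization. All dimensions, the measurement count, the step size, and the initialization scale alpha are illustrative assumptions.

```python
# Minimal, illustrative sketch (not the paper's code): factorized gradient
# descent for asymmetric matrix sensing with small random initialization.
# Dimensions, step size, and initialization scale are assumptions chosen
# only so the example runs quickly.
import numpy as np

rng = np.random.default_rng(0)

n1, n2, true_rank = 30, 20, 2   # ground-truth matrix is n1 x n2 with rank true_rank
k = 5                           # overparameterized factor rank (k > true_rank)
m = 400                         # number of linear measurements (m < n1 * n2)
alpha = 1e-6                    # scale of the small random initialization
lr, steps = 0.1, 5000           # illustrative step size and iteration budget

# Ground-truth low-rank matrix (normalized) and Gaussian measurement matrices A_i
M_star = rng.standard_normal((n1, true_rank)) @ rng.standard_normal((true_rank, n2))
M_star /= np.linalg.norm(M_star, 2)
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
y = np.einsum('mij,ij->m', A, M_star)               # y_i = <A_i, M*>

# Small random initialization of both factors of the parameterization M = U V^T
U = alpha * rng.standard_normal((n1, k))
V = alpha * rng.standard_normal((n2, k))

for _ in range(steps):
    residual = np.einsum('mij,ij->m', A, U @ V.T) - y   # <A_i, U V^T> - y_i
    G = np.einsum('m,mij->ij', residual, A)             # gradient w.r.t. the product U V^T
    U, V = U - lr * G @ V, V - lr * G.T @ U             # factorized gradient step

print('relative recovery error:',
      np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star))
```

In this setup the two factors stay approximately balanced and the product drifts towards a low-rank solution, mirroring the two implicit properties described above; the printed relative error indicates how close the final iterate is to the matrix generating the measurements.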

Further information

Publication type: Article
Additional information: Journal issue title: The Thirty Sixth Annual Conference on Learning Theory, 12-15 July 2023, Bangalore, India
Keywords: asymmetric matrix sensing, factorized gradient descent, overparameterization, generalization with small random initialization, non-convex optimization
Language of entry: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Juniorprofessur für Data Science
Open Access: Is the full text freely available?: Yes
Peer-reviewed journal: Yes
Work created at the KU: Yes
KU.edoc ID: 32509
Deposited on: 02 Oct 2023 09:37
Last modified: 04 Oct 2023 11:26
URL of this record: https://edoc.ku.de/id/eprint/32509/