Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing

Bibliographic details


Soltanolkotabi, Mahdi; Stöger, Dominik; Xie, Changzhi:
Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing.
In: IEEE Transactions on Information Theory 71 (2025), no. 4, pp. 2991-3037.
ISSN 0018-9448; 1557-9654

Full text

Link to full text (external URL):
https://doi.org/10.1109/TIT.2025.3530335

Abstract

Recently, there has been significant progress in understanding the convergence and generalization properties of gradient-based methods for training overparameterized learning models. However, many aspects, including the role of small random initialization and how the various parameters of the model are coupled during gradient-based updates to facilitate good generalization, remain largely mysterious. A series of recent papers have begun to study this role for non-convex formulations of symmetric positive semi-definite (PSD) matrix sensing problems, which involve reconstructing a low-rank PSD matrix from a few linear measurements. The underlying symmetry/PSDness is crucial to existing convergence and generalization guarantees for this problem. In this paper, we study a general overparameterized low-rank matrix sensing problem where one wishes to reconstruct an asymmetric rectangular low-rank matrix from a few linear measurements. We prove that an overparameterized model trained via factorized gradient descent converges to the low-rank matrix generating the measurements. We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) coupling of the trajectory of gradient descent, where the factors are coupled in various ways throughout the gradient update trajectory, and (2) an algorithmic regularization property, where the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model. These two implicit properties in turn allow us to show that the gradient descent trajectory from small random initialization moves towards solutions that are both globally optimal and generalize well.
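To make the setup concrete, below is a minimal numerical sketch of the factorized gradient descent scheme the abstract describes: recovering a rank-r matrix X* = L* R*^T from Gaussian linear measurements y_i = <A_i, X*>, using an overparameterized factorization (k > r columns) started from small random initialization. All dimensions and hyperparameters (n1, n2, r, k, m, alpha, eta, T) are illustrative assumptions for this sketch, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions for this sketch, not from the paper):
# true rank r, overparameterized factor width k > r, and m < n1*n2
# linear measurements, so the problem is genuinely underdetermined.
n1, n2, r, k, m = 30, 20, 2, 5, 500

# Ground-truth low-rank matrix X* = L* R*^T, normalized to unit
# spectral norm so a constant step size is easy to pick.
L_star = rng.standard_normal((n1, r))
R_star = rng.standard_normal((n2, r))
X_star = L_star @ R_star.T
X_star /= np.linalg.norm(X_star, ord=2)

# Gaussian measurement operator and measurements y_i = <A_i, X*>;
# the 1/sqrt(m) scaling makes the operator roughly norm-preserving.
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
y = np.einsum('mij,ij->m', A, X_star)

# Small random initialization (scale alpha) of both factors, then
# plain gradient descent on f(L, R) = 0.5 * ||A(L R^T) - y||^2.
alpha, eta, T = 1e-3, 0.25, 1000
L = alpha * rng.standard_normal((n1, k))
R = alpha * rng.standard_normal((n2, k))

for t in range(T + 1):
    residual = np.einsum('mij,ij->m', A, L @ R.T) - y
    S = np.einsum('m,mij->ij', residual, A)  # gradient w.r.t. X = L R^T
    if t % 200 == 0:
        rel_err = np.linalg.norm(L @ R.T - X_star) / np.linalg.norm(X_star)
        print(f"iter {t:4d}  relative error {rel_err:.3e}")
    # Coupled factorized updates: each factor's step depends on the other.
    L, R = L - eta * S @ R, R - eta * S.T @ L

With a small initialization scale alpha, the iterates stay effectively low-rank even though k > r, which is the algorithmic regularization effect the abstract refers to; a large alpha or step size eta can break this behavior.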

Further details

Publication type: Article
Language of entry: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Juniorprofessur für Data Science
Mathematisch-Geographische Fakultät > Mathematik > Mathematisches Institut für Maschinelles Lernen und Data Science (MIDS)
DOI / URN / ID: 10.1109/TIT.2025.3530335
Open access (full text freely available?): No
Peer-reviewed journal: Yes
Publisher: IEEE
Work originated at KU: Yes
KU.edoc-ID: 34885
Deposited on: 22 Apr 2025 14:10
Last modified: 22 Apr 2025 14:10
URL of this record: https://edoc.ku.de/id/eprint/34885/