Phase Transitions in Rate Distortion Theory and Deep Learning

Bibliographic details

Grohs, Philipp ; Klotz, Andreas ; Voigtlaender, Felix:
Phase Transitions in Rate Distortion Theory and Deep Learning.
In: Foundations of Computational Mathematics. (16 November 2021). - 64 pp.
ISSN 1615-3375 ; 1615-3383

Full text

Open Access
Link to full text (external URL):
https://doi.org/10.1007/s10208-021-09546-4

Abstract

Rate distortion theory is concerned with optimally encoding a given signal class S using a budget of R bits, as R → ∞. We say that S can be compressed at rate s if we can achieve an error of O(R^{-s}) for encoding S; the supremal compression rate is denoted s^*(S). Given a fixed coding scheme, there usually are elements of S that are compressed at a higher rate than s^*(S) by the given coding scheme; we study the size of this set of signals. We show that for certain "nice" signal classes S, a phase transition occurs: we construct a probability measure P on S such that for every coding scheme C and any s > s^*(S), the set of signals encoded with error O(R^{-s}) by C forms a P-null-set. In particular, our results apply to balls in Besov and Sobolev spaces that embed compactly into L^2(Ω) for a bounded Lipschitz domain Ω. As an application, we show that several existing sharpness results concerning function approximation using deep neural networks are generically sharp.
We also provide quantitative and non-asymptotic bounds on the probability that a random f ∈ S can be encoded to within accuracy ε using R bits. This result is applied to the problem of approximately representing f ∈ S to within accuracy ε by a (quantized) neural network that is constrained to have at most W nonzero weights and is generated by an arbitrary "learning" procedure. We show that for any s > s^*(S) there are constants c, C such that, no matter how we choose the "learning" procedure, the probability of success is bounded from above by min{1, 2^{C·W·⌈log₂(1+W)⌉² − c·ε^{−1/s}}}.
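
A minimal LaTeX sketch of the two central statements of the abstract; the encoder/decoder notation (E_R, D_R) and the use of the L^2(Ω) error are illustrative assumptions, not notation fixed by the abstract itself:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Compression at rate s: the worst-case encoding error decays like R^{-s}.
% (E_R, D_R) is an assumed notation for an encoder/decoder pair using R bits.
A coding scheme $(E_R, D_R)_{R \in \mathbb{N}}$ compresses $\mathcal{S}$ at rate $s > 0$ if
\[
  \sup_{f \in \mathcal{S}} \bigl\| f - D_R(E_R(f)) \bigr\|_{L^2(\Omega)} = O(R^{-s})
  \quad \text{as } R \to \infty,
\]
and the supremal compression rate is
\[
  s^*(\mathcal{S}) = \sup \bigl\{ s > 0 : \mathcal{S} \text{ can be compressed at rate } s \bigr\}.
\]
% Quantitative bound from the abstract: for any s > s^*(S) there are constants
% c, C > 0 such that, for every "learning" procedure producing a quantized
% network with at most W nonzero weights, the probability that a random f in S
% is represented to within accuracy eps satisfies
\[
  \mathbb{P}\bigl[\,\text{success at accuracy } \varepsilon\,\bigr]
  \le \min \Bigl\{ 1,\; 2^{\, C \cdot W \lceil \log_2(1+W) \rceil^2 \,-\, c \cdot \varepsilon^{-1/s}} \Bigr\}.
\]
\end{document}

The exponent makes the phase transition visible: the bound is nontrivial exactly when c·ε^{−1/s} exceeds C·W·⌈log₂(1+W)⌉², i.e. once the requested accuracy outruns the weight budget W, the success probability decays exponentially in ε^{−1/s}.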

Further information

Publication type: Article
Language of record: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Lehrstuhl für Mathematik - Reliable Machine Learning
Mathematisch-Geographische Fakultät > Mathematik > Mathematisches Institut für Maschinelles Lernen und Data Science (MIDS)
DOI / URN / ID: 10.1007/s10208-021-09546-4
Open Access: Full text freely available?: Yes
Peer-reviewed journal: Yes
Publisher: Springer
Title originated at the KU: Yes
KU.edoc ID: 28769
Deposited on: 15 Nov 2021 08:42
Last modified: 06 Jun 2023 11:01
URL of this record: https://edoc.ku.de/id/eprint/28769/