
Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity Bounds for Neural Network Approximation Spaces

Bibliographic details

Grohs, Philipp ; Voigtlaender, Felix:
Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity Bounds for Neural Network Approximation Spaces.
2021. - 42 pp.

Full text

Open Access
Link to full text (external URL):
https://arxiv.org/abs/2104.02746

Abstract

We study the computational complexity of (deterministic or randomized) algorithms based on point samples for approximating or integrating functions that can be well approximated by neural networks. Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning. One of the most important problems in this field concerns the question of whether it is possible to realize theoretically provable neural network approximation rates by such algorithms. We answer this question in the negative by proving hardness results for the problems of approximation and integration on a novel class of neural network approximation spaces. In particular, our results confirm a conjectured and empirically observed theory-to-practice gap in deep learning. We complement our hardness results by showing that approximation rates of a comparable order of convergence are (at least theoretically) achievable.
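The algorithms studied in the abstract access the target function only through point samples. As a hedged illustration of that setting (not the paper's construction — the network size, learning rate, and target function below are arbitrary choices for demonstration), the following sketch fits a one-hidden-layer ReLU network to point samples of a target by full-batch gradient descent:

```python
import numpy as np

# Illustrative point-sample algorithm: the only access to the target
# function is through n sampled input/output pairs. All hyperparameters
# here are arbitrary demonstration choices, not taken from the paper.
rng = np.random.default_rng(0)

def target(x):
    return np.abs(x)  # a simple target; |x| is well approximated by ReLU nets

n = 200                                  # number of point samples
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y = target(x)

# One-hidden-layer ReLU network: f(x) = relu(x @ W1 + b1) @ W2 + b2
width = 32
W1 = rng.normal(scale=1.0, size=(1, width))
b1 = np.zeros(width)
W2 = rng.normal(scale=0.1, size=(width, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)     # hidden ReLU activations
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - y) ** 2))  # MSE before training

lr = 0.05
for _ in range(500):
    h, pred = forward(x)
    grad_out = 2.0 * (pred - y) / n      # d(MSE)/d(pred)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T
    grad_h[h <= 0] = 0.0                 # backprop through the ReLU
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(x)
loss = float(np.mean((pred - y) ** 2))   # MSE after training
print(f"MSE before: {loss0:.4f}, after: {loss:.4f}")
```

The paper's hardness results concern precisely what convergence rates any such sample-based scheme (deterministic or randomized) can achieve uniformly over neural network approximation spaces, not the behavior of one particular run like this one.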

Further details

Publication type: Preprint, working paper, discussion paper
Language of entry: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Lehrstuhl für Reliable Machine Learning
DOI / URN / ID: arXiv:2104.02746
Open Access: full text freely accessible?: Yes
Work created at the KU: No
KU.edoc ID: 29926
Deposited on: 30 Mar 2022 14:22
Last modified: 31 Mar 2022 13:09
URL of this record: https://edoc.ku.de/id/eprint/29926/