Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Bibliographic details

Petersen, Philipp; Voigtlaender, Felix:
Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
In: Neural Networks 108 (December 2018), pp. 296-330.
ISSN 1879-2782 ; 0893-6080

Full text

Link to full text (external URL):
https://doi.org/10.1016/j.neunet.2018.08.019

Abstract

We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
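
In symbols, the core result stated in the abstract can be summarized as follows; the notation $\mathcal{NN}(L, M)$ for the class of ReLU networks with $L$ layers and at most $M$ nonzero weights is introduced here for illustration and is not taken from the paper:

\[
  f \in \mathcal{E}^\beta(\mathbb{R}^d)
  \;\Longrightarrow\;
  \exists\, \Phi \in \mathcal{NN}\big(L(d,\beta),\, O(\varepsilon^{-2(d-1)/\beta})\big)
  \;\text{ with }\;
  \|f - \Phi\|_{L^2([-1/2,1/2]^d)} \leq \varepsilon,
\]

where, per the abstract, the weight count is optimal, and no depth below a constant multiple of $\beta/d$ achieves this rate; the constructed networks match that depth bound up to a log factor.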

Further details

Publication type: Article
Keywords: Deep neural networks, piecewise smooth functions, function approximation, sparse connectivity, metric entropy, curse of dimension
Language of the entry: English
University institutions: Mathematisch-Geographische Fakultät > Mathematik > Lehrstuhl für Mathematik - Wissenschaftliches Rechnen
Mathematisch-Geographische Fakultät > Mathematik > Lehrstuhl für Mathematik - Reliable Machine Learning
Mathematisch-Geographische Fakultät > Mathematik > Mathematisches Institut für Maschinelles Lernen und Data Science (MIDS)
DOI / URN / ID: 10.1016/j.neunet.2018.08.019
Open Access (full text freely accessible?): No
Peer-reviewed journal: Yes
Publisher: Elsevier
Title originated at the KU: Yes
KU.edoc ID: 23467
Deposited on: 22 Oct 2019 14:38
Last modified: 01 Jun 2023 15:42
URL for this record: https://edoc.ku.de/id/eprint/23467/