
Universal Approximation Theorem

In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest.

Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology.
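To make this concrete, one common way to state the classical arbitrary-width result (a sketch in the style of Cybenko's theorem; the exact hypotheses on the activation vary by version) is the following: for every continuous function $f : K \to \mathbb{R}$ on a compact set $K \subset \mathbb{R}^n$ and every $\varepsilon > 0$, there exist a width $N$ and parameters $\alpha_i, b_i \in \mathbb{R}$ and $w_i \in \mathbb{R}^n$ such that

\[
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon,
\]

where $\sigma$ is a fixed non-polynomial activation function (for example, a sigmoid).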

However, there are also a variety of results covering functions between non-Euclidean spaces, other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, radial basis functions, or neural networks with specific properties.

Most universal approximation theorems can be parsed into two classes:

 The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons (the "arbitrary width" case).
 The second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons (the "arbitrary depth" case). Both regimes are illustrated in the sketch after this list.

Universal approximation theorems imply that neural networks can represent a wide variety of interesting functions when given appropriate weights. On the other hand, they typically do not provide a construction for the weights, but merely state that such a construction is possible.
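In practice the weights are found numerically, most often by gradient-based training rather than by any construction from the theorems. The following is a minimal sketch under illustrative assumptions: a one-hidden-layer tanh network, full-batch gradient descent on mean-squared error, and sin(x) on the compact interval [-pi, pi] as the continuous target; the width, learning rate, and step count are arbitrary choices, not values prescribed by any theorem.

    import numpy as np

    rng = np.random.default_rng(1)

    # Continuous target on a compact interval, sampled on a grid.
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x).ravel()

    # One hidden layer with tanh activation; illustrative hyperparameters.
    width, lr, steps = 32, 0.05, 5000
    W1 = rng.normal(size=(width, 1))
    b1 = np.zeros(width)
    w2 = rng.normal(scale=0.1, size=width)
    b2 = 0.0

    n = len(x)
    for _ in range(steps):
        h = np.tanh(x @ W1.T + b1)       # hidden activations, (n, width)
        err = h @ w2 + b2 - y            # residual of the network output
        # Gradients of the mean-squared error, backpropagated by hand.
        g_w2 = h.T @ err / n
        g_b2 = err.mean()
        g_h = np.outer(err, w2) * (1.0 - h**2)   # through tanh
        g_W1 = g_h.T @ x / n
        g_b1 = g_h.mean(axis=0)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        w2 -= lr * g_w2; b2 -= lr * g_b2

    pred = np.tanh(x @ W1.T + b1) @ w2 + b2
    print("max |sin - net| on the grid:", np.abs(pred - y).max())

Increasing the width and the number of steps typically drives the grid error lower; the arbitrary-width theorems guarantee that weights achieving any desired accuracy exist, without telling us how to find these particular ones.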
