Can a single-layer network approximate an arbitrary function? [on hold]
Can a network with a single layer of $N$ neurons (where $N \le \infty$, no hidden layers) approximate any arbitrary function, so that this network's error approaches $0$ as $N$ approaches $\infty$?
machine-learning neural-networks
asked Sep 9 at 17:41 by Jack Bauer (new contributor), edited yesterday by amoeba
put on hold as off-topic by Firebug, Michael Chernick, kjetil b halvorsen, Hong Ooi, mkt 2 days ago
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "Self-study questions (including textbook exercises, old exam papers, and homework) that seek to understand the concepts are welcome, but those that demand a solution need to indicate clearly at what step help or advice are needed. For help writing a good self-study question, please visit the meta pages." – Firebug, Michael Chernick, kjetil b halvorsen, Hong Ooi, mkt
"no hidden layers" --> trick question
– Hong Ooi
2 days ago
I don't understand why this is closed as off-topic. Sounds like a clear on-topic question to me. I vote to reopen.
– amoeba
yesterday
That said, if your arbitrary function is a function from some input into the real numbers, then you must have only one output neuron, so it's not clear what you mean by the output $N$ approaching infinity. Your single-layer network is just a linear combination of the inputs passed through a specified nonlinearity (e.g. a sigmoid). That's all: one output neuron. The number of input neurons is given by the problem; it can't be modified at all.
– amoeba
yesterday
@amoeba please read the original revision and you will understand.
– Firebug
yesterday
@Firebug That's why I edited the question, so it would conform with the rules and not be closed
– user
21 hours ago
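For concreteness, here is a minimal numpy sketch of the network amoeba's comment describes, assuming a single sigmoid output neuron over a fixed set of input features (the function name and the numbers are illustrative, not from the thread):

```python
import numpy as np

def single_layer_net(x, w, b):
    """No hidden layer: one output neuron applying a sigmoid to a
    linear combination of the inputs (illustrative sketch)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# The number of input features is fixed by the problem; only w and b are learned.
x = np.array([0.2, -1.0, 3.5])   # example inputs
w = np.array([0.5, 1.2, -0.3])   # example weights
b = 0.1
print(single_layer_net(x, w, b))  # a single scalar output in (0, 1)
```

Whatever the output nonlinearity, the whole model is a single function of $w^\top x + b$, which is why it is unclear what letting $N$ grow would mean in this setting.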
2 Answers
3 votes, accepted
The Universal Approximation Theorem states that a neural network with one hidden layer can approximate continuous functions on compact subsets of $\mathbb{R}^n$, so no, not any arbitrary function.
answered Sep 9 at 17:44 by gunes (new contributor), edited Sep 9 at 17:47 by Sycorax
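As a rough illustration of what the theorem permits, here is a minimal numpy sketch (assumed choices: tanh hidden units with random input-to-hidden weights and a least-squares fit of the output layer; this is not the construction used in the theorem's proof). The maximum error in approximating a continuous function on a compact interval typically shrinks as hidden units are added:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on a compact interval.
x = np.linspace(-3.0, 3.0, 400)
y = np.sin(2.0 * x)

def one_hidden_layer_fit(x, y, n_hidden):
    """Fit a one-hidden-layer net: random tanh hidden units, with only the
    linear output layer trained by least squares. Illustrative only."""
    w = rng.normal(scale=3.0, size=n_hidden)    # random input->hidden weights
    b = rng.uniform(-3.0, 3.0, size=n_hidden)   # random hidden biases
    H = np.tanh(np.outer(x, w) + b)             # hidden activations, shape (len(x), n_hidden)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H @ coef

for n in (2, 10, 100):
    err = np.max(np.abs(one_hidden_layer_fit(x, y, n) - y))
    print(f"hidden units = {n:4d}, max |error| = {err:.4f}")
```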
A single-layer network is not equivalent to a neural network with one hidden layer if I understand it correctly.
– Jack Bauer
Sep 9 at 17:47
If he is talking about only having the output layer, i.e. without the hidden layer, it also can't model the function set described in the theorem.
– gunes
Sep 9 at 17:49
A NN with one hidden layer has a total of 2 layers, while a single-layer network looks something like this: wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/…
– Jack Bauer
Sep 9 at 17:49
The universal approximation theorems say that functions in a specified class can be approximated by networks of a specified class. They don't state what happens outside of these conditions (e.g. that other functions can't be approximated). So, I don't think this answers the question.
– user20160
2 days ago
"A well-known theorem says X, so a more general version Y is false" is not really a proper mathematical argument.
– Federico Poloni
2 days ago
4 votes
False: If there are no hidden layers, then your neural network will only be able to approximate linear functions, not any continuous function.
In fact, you need at least one hidden layer for a solution to the simple XOR problem (see this post and this one).
When you only have an input and an output layer, and no hidden layer, the output is just the activation function applied to an inner product of the input with the weights, hence you can only produce linearly separable solutions.
N.B. It does not matter what your activation functions are; the point is that no neural net without a hidden layer can solve the XOR problem, since its classes are not linearly separable.
answered 2 days ago by user, edited 9 hours ago
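To make the XOR point concrete, here is a minimal sketch (assumed setup: hard-threshold units and hand-picked weights; the random search is illustrative, not a proof of impossibility). No single threshold unit gets all four XOR points right, while two hidden threshold units plus an output unit do:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

def step(z):
    return (z > 0).astype(int)

# No hidden layer: a single threshold unit y = step(w.x + b).
# Random search over many weight settings never classifies all 4 points correctly.
best = 0.0
for _ in range(20000):
    w, b = rng.normal(size=2), rng.normal()
    best = max(best, np.mean(step(X @ w + b) == y))
print("best accuracy without a hidden layer:", best)  # tops out at 0.75

# One hidden layer of two threshold units solves XOR exactly
# (hand-picked weights: h1 = OR(x1, x2), h2 = AND(x1, x2), output fires when h1 and not h2).
H = np.column_stack([step(X @ np.array([1.0, 1.0]) - 0.5),
                     step(X @ np.array([1.0, 1.0]) - 1.5)])
y_hat = step(H @ np.array([1.0, -2.0]) - 0.5)
print("hidden-layer predictions:", y_hat, "accuracy:", np.mean(y_hat == y))
```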
Comments are not for extended discussion; this conversation has been moved to chat.
– gung♦
yesterday
The first sentence is wrong (-1) because neurons in the output layer can be nonlinear.
– amoeba
yesterday
@amoeba that is still a very small class of functions (nonlinear after a single linear transformation), so while technically wrong, it is wrong only on a technicality. Even with an arbitrary nonlinear output transformation, you are restricted to single-index models.
– guy
yesterday
@amoeba You are wrong: no matter what the neurons in the output layer are, the network will never learn to separate points that are not linearly separable; see the XOR example. You should not confuse the reader with wrong statements.
– user
9 hours ago
I never said anything about XOR. What I said is that if the output neuron is nonlinear, then clearly the function that the neural network represents will be nonlinear too. Example: one input neuron, $x$; one output neuron with a sigmoid nonlinearity. The neural network learns $f(x) = \text{sigmoid}(wx+b)$, a nonlinear function.
– amoeba
3 hours ago