Description
Currently the algorithm uses the `BackPropagationLearner` to build the weights of the synapses, essentially behaving like a one-layer neural network. This is also how the algorithm is described in the book.
Since the book provides no pseudocode, though, how should we go about implementing the algorithm? Currently it is treated exactly like a neural network, which technically isn't incorrect, but in practice it is cumbersome and unintuitive. For example, we iterate through the network's layers, when in reality we only need the activations of two layers: the input and the output.
As I will shortly work on updating the Perceptron section of the Notebook for the multi-class implementation, I will also take a look at the code. I would like your opinion on how to go about it:
a) Leave it as is, with the `BackPropagationLearner` building the trivial one-layer network. This might not be as intuitive as other approaches, since the Perceptron algorithm doesn't actually use the back-propagation algorithm in its entirety, and this might confuse readers.
b) Write the implementation without the `BackPropagationLearner`; the algorithm will learn the weights on its own (see the sketch below). This, in my opinion, is more intuitive and closer to the real workings of the algorithm.
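For option (b), this is roughly what I have in mind. It is only a sketch of a standalone multi-class perceptron with the classic update rule; the names `perceptron_learner`, `examples`, and `targets` are placeholders and don't match the repo's dataset API:

```python
import random

def perceptron_learner(examples, targets, epochs=100, learning_rate=0.01):
    """Sketch of a standalone multi-class perceptron (option b).
    `examples` is a list of feature vectors, `targets` the class index
    of each example. Both names are placeholders, not the repo's API."""
    num_features = len(examples[0])
    num_classes = len(set(targets))
    # One weight vector per class, initialized to small random values.
    weights = [[random.uniform(-0.5, 0.5) for _ in range(num_features)]
               for _ in range(num_classes)]

    for _ in range(epochs):
        for x, target in zip(examples, targets):
            # Score every class and predict the highest-scoring one.
            scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
            prediction = scores.index(max(scores))
            if prediction != target:
                # Perceptron update: reinforce the correct class,
                # penalize the wrongly predicted one.
                for i in range(num_features):
                    weights[target][i] += learning_rate * x[i]
                    weights[prediction][i] -= learning_rate * x[i]
    return weights
```

The point is that the weights are updated directly from the prediction error, with no gradient bookkeeping or layer iteration at all.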
Another thing I want to touch upon is the issue of the number of classes. In the literature, the Perceptron is usually treated as a single neuron that either fires or remains dormant, which makes it essentially a binary classifier. Do we want to have the Perceptron as a binary classifier, or go for the multi-class implementation we already have?
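To make the distinction concrete, the two prediction rules would look roughly like this (function names are hypothetical, just for illustration):

```python
def predict_binary(weights, x, threshold=0.0):
    # Binary perceptron: a single weight vector; fire (1) if the weighted
    # sum crosses the threshold, stay dormant (0) otherwise.
    return 1 if sum(w_i * x_i for w_i, x_i in zip(weights, x)) > threshold else 0

def predict_multiclass(weights_per_class, x):
    # Multi-class variant: one weight vector per class; predict the class
    # with the highest weighted sum (argmax).
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights_per_class]
    return scores.index(max(scores))
```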
PS: I will also take care of the algorithm's Notebook section once this is agreed on.