[MRG] RandomActivation #4703
Conversation
I think this would be more useful and more scikit-learn-like with a choice of activation function. It's still easy to throw a Ridge on the end, but not quite as fragmented. Also, I think this PR would be well illustrated by translating examples from #3306 into this model.
yah that's true! will update it
class RandomActivation(six.with_metaclass(ABCMeta, TransformerMixin)):
    def __init__(self, n_activated_features=10, weight_scale='auto',
                 activation='identity', intercept=True, random_state=None):
Maybe not identity because that would make it the same as RandomProjection more or less?
oh yeah, also I guess it wouldn't make sense to have a hidden layer with identity activation.
Btw, how about "RandomBasisFunction" as a name?
yah
I'd add support for different initialization/randomization algorithms, e.g. the Nguyen-Widrow algorithm would be a nice addition.
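For reference, the Nguyen-Widrow scheme mentioned here can be sketched in a few lines. This is a hedged sketch, not part of the PR: the function name and the exact scaling conventions (uniform weights rescaled to a common magnitude beta) are illustrative.

```python
import numpy as np

def nguyen_widrow_init(n_features, n_hidden, random_state=None):
    """Sketch of Nguyen-Widrow initialization: draw uniform random
    weights, then rescale each hidden unit's weight vector to a
    common magnitude beta = 0.7 * n_hidden ** (1 / n_features)."""
    rng = np.random.RandomState(random_state)
    beta = 0.7 * n_hidden ** (1.0 / n_features)
    # uniform weights in [-1, 1], one column per hidden unit
    weights = rng.uniform(-1, 1, size=(n_features, n_hidden))
    # rescale each column so its Euclidean norm equals beta
    norms = np.linalg.norm(weights, axis=0)
    weights = beta * weights / norms
    # biases drawn uniformly from [-beta, beta]
    biases = rng.uniform(-beta, beta, size=n_hidden)
    return weights, biases
```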
have you checked out the Travis error? And have you ported the examples and doc over?
yes, working on it - will push everything all at once. :) Unfortunately, I caught the flu and had to stay in bed for the last 3 days :( - it came at the worst time.
I'm sorry! Get well soon! [I've had some virus for the last 2 months, hurray ^^]
Thanks!! That's nice of you! :)
back on my feet! :) Will push a semi-complete version of this by tomorrow. I will basically use pipelining with Ridge to create examples very similar to those I had for extreme learning machines.
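The ELM-style pipelining described above can be approximated with pieces that already exist in scikit-learn. Since the RandomBasisFunction transformer from this PR is not merged, this sketch stands in GaussianRandomProjection plus a tanh FunctionTransformer for the random hidden layer; it is an illustration of the recipe, not the PR's code.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.random_projection import GaussianRandomProjection

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# random hidden layer: a random linear map followed by a tanh
# nonlinearity, then a Ridge readout -- the ELM-style recipe
model = make_pipeline(
    GaussianRandomProjection(n_components=100, random_state=0),
    FunctionTransformer(np.tanh),
    Ridge(alpha=1.0),
)
model.fit(X, y)
print(round(model.score(X, y), 3))
```

Only the Ridge at the end is actually trained; the random projection is fixed once drawn, which is what makes the whole pipeline cheap to fit.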
2) added doc in code
Added the varying hyperparameters example, doc, and sanity checks in code.
cheers.
Source: http://www3.ntu.edu.sg/home/egbhuang/pdf/ELM-Suvey-Huang-Gao.pdf
…h increasing number of hidden neurons
I wonder why this is a Travis error for Python 3. For Python 2.6, this works fine.
Implicit relative imports are no longer available in Python 3. See for instance http://stackoverflow.com/questions/12172791/changes-in-import-statement-python3 . I never use relative imports, but I think that is the problem here.
@sveitser awesome! this fixed it!
Why is there an "orphan" softmax function? It's not documented and it's not inside the ACTIVATIONS dict. |
@ekerazha I assume it has to do with the last layer of the random neural network algorithm... This PR is the first stage for the pipeline of a random neural network algorithm. I haven't really read that paper, but I assume the orphaned softmax will join the party later on ;)
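For context, a numerically stable softmax of the kind such an output-layer helper would typically implement looks like this (a sketch, not the PR's actual code):

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with the usual max-shift for numerical
    stability, so that np.exp never overflows for large inputs."""
    Z = Z - Z.max(axis=1, keepdims=True)  # shift each row by its max
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)
```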
There should definitely be an rst file with narrative documentation. It doesn't matter so much whether it is its own file; there will be a supervised file when the MLP is merged anyhow.
2) added doc in code
…h increasing number of hidden neurons
…nsupervised section of the documentation.
…ivation

Conflicts:
    examples/neural_networks/plot_random_neural_network.py
    sklearn/neural_network/__init__.py
    sklearn/neural_network/random_basis_function.py
Hi @IssamLaradji, first of all, great work on MLP - that was one of the first algorithms I used here. I just wanted to know if you are still working on this? Thanks.
Hi @maniteja123, thanks a lot! Unfortunately, I am in the crunch-time season of my PhD :( - therefore it might be a while before I can continue working on this. You are welcome to work on this if interested :)
Hi @IssamLaradji, thanks for the response. I am actually interested in working on this - thanks for the permission. All the best for your PhD thesis :)
I found this new "Extreme Learning Machines" implementation https://github.com/maestrotf/pyExtremeLM which is pretty nice.
Is there still interest in including this?
I think this is out of scope for scikit-learn these days.
This is meant to be the first stage of the pipeline for the random neural network algorithm [1]. It fits on the input data by considering the number of features and then randomly generates an `n_features x n_activated` coefficient matrix, where `n_activated` is the number of "hidden layer" features defined by the user. The coefficient matrix can be used to transform the input data to a different space.

[1] http://homepage.tudelft.nl/a9p19/papers/icpr_92_random.pdf
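The fit/transform behaviour described above can be sketched in a few lines. The class and attribute names here are illustrative, not the PR's API; the nonlinearity is fixed to tanh for brevity, where the PR proposes a configurable activation.

```python
import numpy as np

class RandomBasisSketch:
    """Minimal sketch of the transformer described above: fit draws a
    random (n_features, n_activated) coefficient matrix from the input
    shape; transform projects the data into the random "hidden" space
    and applies a fixed nonlinearity."""

    def __init__(self, n_activated=10, random_state=None):
        self.n_activated = n_activated
        self.random_state = random_state

    def fit(self, X):
        rng = np.random.RandomState(self.random_state)
        n_features = X.shape[1]
        # random, never-trained coefficients and intercepts
        self.coef_ = rng.normal(size=(n_features, self.n_activated))
        self.intercept_ = rng.normal(size=self.n_activated)
        return self

    def transform(self, X):
        # project and squash; a downstream estimator (e.g. Ridge)
        # would be fit on these activated features
        return np.tanh(X @ self.coef_ + self.intercept_)
```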