Support any synapse for learning rules #1095
Conversation
By analyzing the blame information on this pull request, we identified @tbekolay to be a potential reviewer.
Awesome, this is definitely something I've wanted to do for a while!
        
          
nengo/params.py (Outdated)

    def equal(self, instance_a, instance_b):
        return True  # otherwise __get__ throws an error
Just wondering out loud whether there's a better way to do __get__ that avoids having to do this. Also, I would bust these out into a separate commit since they're not directly related to the main change.
Not sure... but moved these changes to a separate commit.
Yeah, this is a great thing to do.
Force-pushed from 2b1c223 to 5a19d0f.
Updated the BCM/Oja/Voja rules and the changelog, made the docstrings consistent, and made use of the …

@tbekolay Why was this removed from the 2.2.0 release milestone?
Also, if we are happy with the way this uses the param system, then the same changes should be made to …
        
          
nengo/learning_rules.py (Outdated)

    args.append("learning_rate=%g" % self.learning_rate)
    if self.pre_tau != 0.005:                     # old line, removed
        args.append("pre_tau=%f" % self.pre_tau)  # old line, removed
    if self.pre_synapse != Lowpass(tau=0.005):    # new line, added
How can I access a param's default?
PES.pre_synapse.default
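For illustration, a minimal sketch of reading a class-level parameter default this way (assuming a nengo release where PES.pre_synapse exists, as the answer above suggests):

    import nengo

    # Parameter descriptors live on the class; .default holds the default value
    print(nengo.PES.pre_synapse.default)    # e.g. Lowpass(tau=0.005)
    print(nengo.PES.learning_rate.default)  # e.g. 0.0001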
Okay, this is now ready for review. To check that my improvements to … display the right thing: … Perhaps that whole notebook should be made into a series of unit tests? Also an important note: …
I haven't done a detailed review of this yet, but one discussion point from a quick look at the diff is that the API changes in this PR are backwards-incompatible, meaning that we'd need to do a 3.0.0 release when this is merged. Since we allow numbers as a shortcut for …
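For context, a sketch of the numeric shortcut presumably being referred to here (in nengo, a plain number passed as a synapse is coerced to a Lowpass with that time constant):

    import nengo

    with nengo.Network():
        a = nengo.Ensemble(10, 1)
        b = nengo.Ensemble(10, 1)
        # a plain number is shorthand for a Lowpass synapse with that tau
        nengo.Connection(a, b, synapse=0.005)
        nengo.Connection(a, b, synapse=nengo.Lowpass(0.005))  # equivalent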
Maybe we can save this for 3.0.0 and avoid the duplication of parameters altogether? For me it's more pain than it's worth to maintain these two code paths.
The deprecated parameters can be like this:

    @property
    def pre_tau(self):
        warnings.warn("Deprecated since 2.2.0; use pre_synapse instead")
        return self.pre_synapse

    @pre_tau.setter
    def pre_tau(self, val):
        warnings.warn("Deprecated since 2.2.0; use pre_synapse instead")
        self.pre_synapse = val

But either way, up to you!
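A self-contained sketch of that pattern (hypothetical Rule class, not nengo's actual implementation), runnable as-is:

    import warnings

    class Rule:
        def __init__(self, pre_synapse=0.005):
            self.pre_synapse = pre_synapse

        @property
        def pre_tau(self):
            warnings.warn("Deprecated since 2.2.0; use pre_synapse instead",
                          DeprecationWarning)
            return self.pre_synapse

        @pre_tau.setter
        def pre_tau(self, val):
            warnings.warn("Deprecated since 2.2.0; use pre_synapse instead",
                          DeprecationWarning)
            self.pre_synapse = val

    rule = Rule()
    rule.pre_tau = 0.01          # warns, but still sets pre_synapse
    assert rule.pre_synapse == 0.01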
    
Updated into a form ready for 3.0.0. Also looping in @neworderofjamie, since this will affect SpiNNaker if it gets in.
        
          
examples/usage/strings.ipynb (Outdated)

    print(nengo.BCM(learning_rate=1e-8, pre_synapse=0.01, post_synapse=0.005, theta_synapse=10.0))
    print(nengo.Oja())
    print(nengo.Oja(pre_tau=0.01, post_tau=0.005, beta=0.5, learning_rate=1e-5))      # old line, removed
    print(nengo.Oja(learning_rate=1e-5, pre_synapse=0.01, post_synapse=0.005, beta=0.5))  # new line, added
TODO: Add one for Voja? Also consider making these into unit tests?
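If one were added, it might look like the following (a sketch only; it assumes Voja gains the post_synapse parameter in this PR, and the values shown are arbitrary):

    import nengo

    print(nengo.Voja(learning_rate=1e-2, post_synapse=0.005))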
So, @arvoelke just let me know this branch exists, and it does exactly what @bjkomer and I were wanting with #1256. We'll try it out! Thank you! Also, if it is possible for this to be done in a backwards-compatible manner with a few deprecation warnings (and no large code duplication), then it would be great to do so, even if we do put off merging it until 3.0.0. I'll take a look and see if @tbekolay's suggestion of how to do it could end up being reasonable.
Force-pushed from 00c8783 to 7a18fc6.
Rebased this onto master and fixed things up. I also made it so that this change is backwards-compatible (as suggested by @tbekolay), so there's no need to wait for a major release. Should be ready for review.
    return args                                             # old line, removed
    return (('learning_rate', BCM.learning_rate.default),   # new lines, added
            ('pre_synapse', BCM.pre_synapse.default),
            ('post_synapse', self.pre_synapse),
I think this is a typo and should be ('post_synapse', BCM.post_synapse.default), at least for the sake of consistency?
Edit: I noticed this happens in all the instances of the learning rules. Is there a reason for this?
The post_synapse defaults to the value of pre_synapse, so that's the value we want to check against (to see if the user set it to something other than the default).
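A minimal sketch of that repr logic (hypothetical class and helper, not the PR's actual code): because post_synapse falls back to pre_synapse when unset, the effective default to compare against is the live pre_synapse value, not a fixed class default:

    # Hypothetical sketch of the argument-repr logic described above
    class FakeRule:
        learning_rate_default = 1e-9
        pre_synapse_default = 0.005

        def __init__(self, learning_rate=1e-9, pre_synapse=0.005, post_synapse=None):
            self.learning_rate = learning_rate
            self.pre_synapse = pre_synapse
            # post_synapse falls back to pre_synapse when not given
            self.post_synapse = pre_synapse if post_synapse is None else post_synapse

        def _argreprs(self):
            args = []
            if self.learning_rate != self.learning_rate_default:
                args.append("learning_rate=%g" % self.learning_rate)
            if self.pre_synapse != self.pre_synapse_default:
                args.append("pre_synapse=%r" % self.pre_synapse)
            # pre_synapse is the effective default for post_synapse
            if self.post_synapse != self.pre_synapse:
                args.append("post_synapse=%r" % self.post_synapse)
            return args

    print(FakeRule(post_synapse=0.01)._argreprs())  # ['post_synapse=0.01']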
    instance, size_in)  # IntParam validation

    ...

    class LearningRuleType(FrozenObject):
So the idea behind FrozenObjects is that they're not supposed to support Defaults. I remember the idea behind this was so that FrozenObjects could be used in multiple places (e.g. the same learning rule type on multiple connections) or copied without issue. But I can't remember the exact details as to why having Defaults and being Frozen could not go together.
Part of the reason might be because FrozenObjects themselves are often used as defaults for other things. For example, the default for Ensemble.neuron_type is LIF(). If LIF had its own defaults for tau_rc and tau_ref, then changing model.config[nengo.LIF].tau_rc could change the default tau_rc for an ensemble, but also change that default other places, too. Things could get complicated or have unintended consequences.
For learning rules, that's less of an issue, since they're currently not used as defaults anywhere that I know of. My main point is just that I had designed FrozenObject to be separate from the Default system, so if we're bringing them together sometimes, I just want to make sure that that makes sense and doesn't cause any unforeseen problems.
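A plain-Python illustration of the aliasing concern described above (hypothetical classes; nothing here is nengo's actual config machinery):

    class NeuronType:
        def __init__(self, tau_rc=0.02):
            self.tau_rc = tau_rc

    DEFAULT_NEURON = NeuronType()  # one instance shared as a default everywhere

    class Ensemble:
        def __init__(self, neuron_type=DEFAULT_NEURON):
            self.neuron_type = neuron_type

    a, b = Ensemble(), Ensemble()
    a.neuron_type.tau_rc = 0.05          # mutate the shared default...
    assert b.neuron_type.tau_rc == 0.05  # ...and every other user changes too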
That's the reason all the parameters are marked as readonly, so that they can be used with FrozenObject (this is explicitly checked for in FrozenObject, so I think it is part of the intended/expected behaviour). I think having everything be readonly resolves most of those concerns (since you can't change the values of a FrozenObject after using it somewhere else). But we could add some unit tests, if there are any use cases you have in mind that you think might produce weird behaviour.
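A sketch of the readonly behaviour being described (assuming current nengo, where assigning to a frozen parameter raises an error):

    import nengo

    rule = nengo.PES(pre_synapse=nengo.Alpha(0.01))
    try:
        rule.pre_synapse = nengo.Lowpass(0.005)  # readonly param; should raise
    except Exception as err:
        print(type(err).__name__)  # ReadonlyError in current nengo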
This LGTM! I made a few minor changes in fixup commits. Also note that the change to ObsoleteParam doesn't seem to be necessary anymore, so I removed it (there's a test for it in the history here, but that will be squashed in the merge).
If no objections, I'll squash and merge after lunch.
Could you please include a link to http://compneuro.uwaterloo.ca/publications/voelker2017c.html alongside the other tech report that is currently linked in the PES notebook? This gives useful information for understanding how different synapses on the error connection will affect the dynamics, and a heuristic for setting the learning rate. Thanks!
          
Absolutely, will do 👍
Deprecates learning rule tau parameters.

Co-authored-by: Daniel Rasmussen <[email protected]>
Description:
- Replaces pre_tau, post_tau, and theta_tau with pre_synapse, post_synapse, and theta_synapse, respectively, on all learning rules. This supports the use of arbitrary synapse models on the activities used for learning. (See the sketch after this list.)
- Makes the learning_rate parameter always first in the list, for consistency.
- Changes theta_tau to only do one level of filtering (it used to be filtered twice).
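For illustration, the parameter rename in practice (a sketch assuming the post-PR API; the values shown are arbitrary):

    import nengo

    # old (deprecated): nengo.PES(learning_rate=1e-4, pre_tau=0.01)
    # new:
    nengo.PES(learning_rate=1e-4, pre_synapse=nengo.Lowpass(0.01))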
Motivation and context:

We might not always want to filter the activities with a Lowpass. This allows arbitrary filters for learning (e.g. Alpha). I found this useful for RL in a dynamic context (the filter defines a kernel, which determines the state to update with respect to the reward). This also incidentally fixes a bug (missing issue) where we couldn't pass pre_tau=None or pre_tau=0 as a parameter.
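A sketch of what this enables (assumes the post-PR API; acceptance of None follows from the bug fix mentioned above):

    import nengo

    nengo.PES(learning_rate=1e-4, pre_synapse=nengo.Alpha(0.01))  # arbitrary synapse
    nengo.PES(learning_rate=1e-4, pre_synapse=None)               # no filtering at all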
How has this been tested?

A test was added for PES using pre_synapse=Alpha(...), which checks that the probed activities from the learning rule are identical to the alpha-filtered neural activities. The other learning rules use the same code paths (i.e. build_or_passthrough), which are also tested with Lowpass synapses.
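A hedged sketch in the spirit of that test (the probe setup and the "activities" probeable attribute are assumptions about the post-PR API, not the PR's actual test code):

    import numpy as np
    import nengo

    with nengo.Network() as net:
        stim = nengo.Node(np.sin)
        pre = nengo.Ensemble(50, 1)
        post = nengo.Ensemble(50, 1)
        nengo.Connection(stim, pre)
        conn = nengo.Connection(
            pre, post,
            learning_rule_type=nengo.PES(pre_synapse=nengo.Alpha(0.01)))
        nengo.Connection(post, conn.learning_rule)  # error signal
        p_raw = nengo.Probe(pre.neurons)            # unfiltered activities
        p_act = nengo.Probe(conn.learning_rule, "activities")

    with nengo.Simulator(net) as sim:
        sim.run(0.1)

    # The test described above checks that these two traces agree
    expected = nengo.Alpha(0.01).filt(sim.data[p_raw], dt=sim.dt)
    actual = sim.data[p_act]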
Where should a reviewer start?

Start in learning_rules.py, in particular the PES class. Then move on to builder/learning_rules.py.

How long should this take to review?
Types of changes:

Checklist:

Still to do:
- … ObsoleteParam.