Adding Spatial Transformer to Models #35
Conversation
transformer/README.md (Outdated)

@@ -0,0 +1,62 @@
# Spatial Transformer Network

Spatial Transformer Networks [1] allow us to attend specific regions of interest of an image while, on the same time, provide invariance to shapes and sizes of the resulting image patches. This can improve the accuracy of the CNN and discover meaningful discriminative regions of an image.
"at the same time"
Generally, can you wrap to 80 characters?
Sure thing! Thanks! I'll fix it in a follow-up commit.
Can you host the data files (ideally including the images) externally and point to them? The models repo will quickly become unmanageably large if we accept data files into it.

Sure! Let me move the data files accordingly.

Done, I removed all images and pointed to the cluttered MNIST dataset. Can you check again?
Can you please add the following license header to all files containing code? If you like, also add your name to the top-level AUTHORS file. Your authorship is of course also preserved in git.

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

Sure thing. I added the license header to each file containing code.
Thanks! Can you add a note to your readme saying which version of TensorFlow this works with (I'm assuming 0.7), and then squash your commits? Then I'll merge.
Shorten STN summary in README
relinked to data files
adding license header, editing AUTHORS file
adding tensorflow version
Done! Thanks for reviewing and happy April Fools' Day.

Thanks!
Spatial Transformer implementation as described in Jaderberg et al. 2015 [1]. The implementation is based on the Lasagne implementation [2] and adapted for TensorFlow.

The actual implementation can be found in transformer/spatial_transformer.py. Examples include a simple hello world example in transformer/example.py and a Spatial Transformer Network training on cluttered MNIST in transformer/cluttered_mnist.py.

[1] http://arxiv.org/abs/1506.02025
[2] https://github.com/skaae/transformer_network/blob/master/transformerlayer.py
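The two building blocks of a spatial transformer's sampling stage, an affine grid generator and a bilinear sampler, can be sketched in plain NumPy. This is an illustrative sketch only, not the code in transformer/spatial_transformer.py; the function names and the 2x3 `theta` matrix convention here are assumptions for exposition:

```python
import numpy as np

def affine_grid(theta, height, width):
    # Build a normalized sampling grid over [-1, 1] x [-1, 1] and
    # transform it by the 2x3 affine matrix theta.
    xs = np.linspace(-1.0, 1.0, width)
    ys = np.linspace(-1.0, 1.0, height)
    x, y = np.meshgrid(xs, ys)
    grid = np.stack([x.ravel(), y.ravel(), np.ones(height * width)])  # (3, H*W)
    return theta @ grid  # (2, H*W): normalized source coordinates

def bilinear_sample(image, coords, height, width):
    # Map normalized coordinates back to pixel indices and sample the
    # input image with bilinear interpolation.
    x = (coords[0] + 1.0) * (image.shape[1] - 1) / 2.0
    y = (coords[1] + 1.0) * (image.shape[0] - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    wx, wy = x - x0, y - y0
    out = (image[y0, x0] * (1 - wx) * (1 - wy)
           + image[y0, x0 + 1] * wx * (1 - wy)
           + image[y0 + 1, x0] * (1 - wx) * wy
           + image[y0 + 1, x0 + 1] * wx * wy)
    return out.reshape(height, width)

# An identity transform should reproduce the input image.
img = np.arange(16.0).reshape(4, 4)
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
warped = bilinear_sample(img, affine_grid(theta, 4, 4), 4, 4)
```

In the full network, `theta` would be predicted per input by a small localization network and the sampling made differentiable, which is what the TensorFlow implementation in this PR provides.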