add Tensor class #400
Conversation
tiny_dnn/core/framework/tensor.h
Outdated
U* value = &host_data_->at(shape_[1] * shape_[2] *
                           (shape_[3] * batch + depth) + height + width);
Is it really OK to calculate data position in this way?
If you unroll the formula you will see that it corresponds to accessing data with the following shape: NxWxHxD
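For comparison, here is a sketch of the conventional row-major offset for an N x D x H x W layout, where each coordinate is scaled by the strides of the trailing dimensions. The helper name and signature are illustrative, not the code under review:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical helper (not the PR's code): conventional row-major offset
// for an N x D x H x W layout. Each coordinate is multiplied by the product
// of the extents of all dimensions that vary faster than it.
inline std::size_t offset(std::size_t D, std::size_t H, std::size_t W,
                          std::size_t batch, std::size_t depth,
                          std::size_t y, std::size_t x) {
    return ((batch * D + depth) * H + y) * W + x;
}
```

Comparing such an unrolled form against the expression in the diff makes it easy to see which layout the indexing actually implements.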
tiny_dnn/core/framework/tensor.h
Outdated
void init_data(const cnn_size_t batch, const cnn_size_t width,
               const cnn_size_t height, const cnn_size_t depth) {
    host_data_ = data_ptr(new std::vector<U>(
        batch * width * height * depth, float_t(0.0)));
Everyone loves bacon and C-style casts.
What do you mean here? Probably this method could be made public and implement a resize routine.
               const cnn_size_t height, const cnn_size_t depth) {
    shape_[0] = batch;  shape_[1] = width;
    shape_[2] = height; shape_[3] = depth;
}
[0] : batch
[1] : width
[2] : height
[3] : depth
If so, what about using an enum?
How would you do it?
@edgarriba
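One possible answer, as an illustrative sketch (names are invented, not from the PR): name the shape axes with an enum so that `shape_[kWidth]` replaces the magic number `shape_[1]`:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative sketch: named indices for the four shape axes, so subscripts
// read shape_[kWidth] instead of shape_[1].
enum ShapeIndex : std::size_t { kBatch = 0, kWidth = 1, kHeight = 2, kDepth = 3 };

// Usage sketch:
//   std::array<std::uint32_t, 4> shape_;
//   shape_[kBatch] = batch;  shape_[kWidth] = width;
```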
tiny_dnn/core/framework/tensor.h
Outdated
// Checked version (throw exceptions for out-of-range error)
template<typename T>
T& at(const cnn_size_t batch,
      const cnn_size_t width,
I think the width parameter should be renamed to x.
tiny_dnn/core/framework/tensor.h
Outdated
template<typename T>
T& at(const cnn_size_t batch,
      const cnn_size_t width,
      const cnn_size_t height,
I think the height argument should be renamed to y.
tiny_dnn/core/framework/tensor.h
Outdated
template<typename T>
const T& at(const cnn_size_t batch,
            const cnn_size_t width,
In my opinion, the width parameter should be renamed to x.
tiny_dnn/core/framework/tensor.h
Outdated
template<typename T>
const T& at(const cnn_size_t batch,
            const cnn_size_t width,
            const cnn_size_t height,
From my perspective, the height parameter should be renamed to y here.
tiny_dnn/core/framework/tensor.h
Outdated
T* ptr(const cnn_size_t batch,
       const cnn_size_t width,
       const cnn_size_t height,
       const cnn_size_t depth) {
I propose renaming the arguments width to x and height to y here.
tiny_dnn/core/framework/tensor.h
Outdated
template<typename T>
const T* ptr(const cnn_size_t batch,
             const cnn_size_t width,
             const cnn_size_t height,
Please rename width to x and height to y.
tiny_dnn/core/framework/tensor.h
Outdated
// zero-overhead version (same performance as raw pointer access;
// has an assertion for out-of-range errors)
template<typename T>
T& operator[] (cnn_size_t index) {
Please add an assert statement to check the index parameter.
tiny_dnn/core/framework/tensor.h
Outdated
template<typename T>
const T& operator[] (cnn_size_t index) const {
Please add an assert statement to check the index parameter.
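A sketch of what the requested check could look like, on a simplified vector-backed class (the class and member names are illustrative, not the PR's exact code). The assert costs nothing in release builds because it compiles out under NDEBUG:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: bounds-asserting subscript operators for a
// vector-backed tensor-like class.
template <typename U>
class SmallTensor {
 public:
    explicit SmallTensor(std::size_t n) : data_(n) {}
    U& operator[](std::size_t index) {
        assert(index < data_.size());  // out-of-range check, removed by NDEBUG
        return data_[index];
    }
    const U& operator[](std::size_t index) const {
        assert(index < data_.size());
        return data_[index];
    }
 private:
    std::vector<U> data_;
};
```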
tiny_dnn/core/framework/tensor.h
Outdated
T* access_data(const cnn_size_t batch,
               const cnn_size_t width,
               const cnn_size_t height,
               const cnn_size_t depth) const {
Renaming width to x and height to y would be good.
tiny_dnn/core/framework/tensor.h
Outdated
               const cnn_size_t height,
               const cnn_size_t depth) const {
    if (batch > shape_[0] || width > shape_[1] ||
        height > shape_[2] || depth > shape_[3]) {
Instead of using shape_[magic_number_index], what about using getter methods such as batch(), width(), height(), depth()?
if (batch >= batch() || x >= width() || y >= height() || depth >= depth()) {
    nn_error("Fool! You have just ensured the doom of this world.");
}
batch or N? width or w or x? ...
@beru Besides, what do you think about the data container? E.g. in Caffe2 they use
acffaa0 to 146ae6f (Compare)
tiny_dnn/core/framework/tensor.h
Outdated
               const cnn_size_t d2,
               const cnn_size_t d3) const {
    if (d0 > shape_[0] || d1 > shape_[1] ||
        d2 > shape_[2] || d3 > shape_[3]) {
I think the comparison operators should be >= instead of >.
tiny_dnn/core/framework/tensor.h
Outdated
void resize(const U value = 0) {
    if (!host_data_) {
        host_data_ = std::unique_ptr<std::vector<U> >(
            new std::vector<U>(size(), value));
Probably, you can use std::make_unique here.
@beru make_unique is not supported until C++14:
http://stackoverflow.com/questions/17902405/how-to-implement-make-unique-function-in-c11
Workaround solution: 32e589c
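The usual C++11 workaround (the single-object form discussed in the linked Stack Overflow thread) is a one-line forwarding template. The name make_unique_11 is chosen here to avoid clashing with the real std::make_unique:

```cpp
#include <memory>
#include <utility>
#include <vector>

// C++11 stand-in for std::make_unique (which only arrived in C++14):
// perfect-forwards the arguments to T's constructor and wraps the result.
template <typename T, typename... Args>
std::unique_ptr<T> make_unique_11(Args&&... args) {
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```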
tiny_dnn/core/framework/tensor.h
Outdated
// Returns the tensor shape
std::vector<cnn_size_t> shape() const { return shape_; }
I guess const std::vector<cnn_size_t>& shape() const { return shape_; } would be better.
But I'm wondering whether the type of shape_ needs to be a std::vector container at all. An alternative type is std::array<cnn_size_t, 4>.
If we return a reference here, a user could "reshape" the tensor just by modifying this vector, but we would also need to resize and copy the data for a real reshape. Do we want this feature now?
If you return a const reference, the caller cannot modify the returned vector without using const_cast.
Besides, if you return the vector instance by value, another heap allocation and copy occur inside (without return value optimization).
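A sketch of the combination both reviewers converge on: a fixed-rank shape stored in a std::array and returned by const reference, so calling shape() performs no heap allocation or copy (the class name is illustrative):

```cpp
#include <array>
#include <cstdint>

// Illustrative sketch: rank is fixed at 4, so the shape can live in a
// std::array and be handed out by const reference. The caller can read it
// but cannot reshape the tensor through it.
class ShapeHolder {
 public:
    const std::array<std::uint32_t, 4>& shape() const { return shape_; }
 private:
    std::array<std::uint32_t, 4> shape_{{1, 2, 3, 4}};
};
```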
T& at(const cnn_size_t d0,
      const cnn_size_t d1,
      const cnn_size_t d2,
      const cnn_size_t d3) {
Are const type qualifiers for arguments really absolutely necessary?
@beru not really needed since we are passing arguments by value. However, for sanity and readability/maintenance I think it's a good practice to mark non-mutable arguments.
tiny_dnn/core/framework/tensor.h
Outdated
// Move constructor
Tensor(Tensor<U>&& other) = default;
The VS2013 compiler doesn't support defaulted move constructors. https://stackoverflow.com/questions/24573963/move-constructor-invalid-type-for-defaulted-constructor-vs-2013
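For VS2013 the move constructor has to be spelled out by hand. A sketch on a simplified class (the names are illustrative, not the PR's code):

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Illustrative sketch: hand-written move constructor, since VS2013 rejects
// "= default" on move members.
template <typename U>
class MiniTensor {
 public:
    MiniTensor() : host_data_(new std::vector<U>(4, U(0))) {}
    MiniTensor(MiniTensor&& other)
        : host_data_(std::move(other.host_data_)) {}  // steal the buffer
    std::size_t size() const { return host_data_ ? host_data_->size() : 0; }
 private:
    std::unique_ptr<std::vector<U> > host_data_;
};
```

After the move, the source object's unique_ptr is null, which is why size() guards against an empty pointer.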
 *
 * Data will be held by a std::vector with 64-byte alignment.
 */
explicit Tensor(const cnn_size_t d0,
You should use size_t here. cc @nyanp
Ok, but before that: are we going to support tensor serialization? (could be a cool feature :D) If so, I think that cnn_size_t, a.k.a. std::uint32_t, should be okay. Otherwise, I'll move to standard size_t.
tiny_dnn/core/framework/tensor.h
Outdated
for_i(true, res.size(), [&](size_t i) {
    const U tmp = src[i];
    res[i] = tmp == U(0.0) ? std::numeric_limits<U>::quiet_NaN() :
             this->operator[](i) / (tmp + std::numeric_limits<U>::min());
Now you don't need the addition.
Looks like VS somewhere tries to copy unique_ptr:
maybe because the destructor is missing? I don't have a MS environment to test :(
"Now you don't need addition": thx!
Adding a destructor might not suffice. If you add a destructor, the copy constructor and copy assignment are still generated (for backward compatibility with pre-C++11).
In your case, if a unique_ptr is a member, the generated copy (constructor|assignment) will invoke the copy (constructor|assignment) of unique_ptr, resulting in an error (since unique_ptr is uncopyable).
The only way is to implement your own copy constructor and copy assignment, or delete them if you want to make your class uncopyable.
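A sketch of that custom deep-copying copy constructor on a simplified class (illustrative names, not the PR's code). The compiler-generated copy would fail to compile, because it would try to copy the unique_ptr member itself:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative sketch: hand-written copy constructor that deep-copies the
// buffer held in a unique_ptr, instead of (illegally) copying the pointer.
template <typename U>
class CopyableTensor {
 public:
    explicit CopyableTensor(std::size_t n)
        : host_data_(new std::vector<U>(n, U(0))) {}
    CopyableTensor(const CopyableTensor& other)
        : host_data_(new std::vector<U>(*other.host_data_)) {}  // deep copy
    std::vector<U>& data() { return *host_data_; }
 private:
    std::unique_ptr<std::vector<U> > host_data_;
};
```

Mutating the copy must leave the original untouched, which is the property the deep copy buys over sharing or moving the buffer.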
On first glance the problem is that VS tries to copy the unique pointer in the copy constructor.
Custom copy constructor seems necessary anyway. @edgarriba @bhack @nyanp
@bhack Correct me if I'm wrong, but enforcing rule of zero together with
Yes exactly: "By using a unique_ptr we make our class non-copyable/assignable and identical in behavior to the previous examples."
You can still make a custom copy constructor which will work the
Yes, there was documentation enforcing this. And also http://stackoverflow.com/questions/29124592/copy-constructor-to-transfer-ownership-of-a-unique-ptr
/off topic/ @keryell Any news from the United Nations conference on "Bringing About HPC Open-Standards World Peace" ;)
/cc @gfursin We are designing the Tensor here before integrating GPU support and then going to CK. If you have any design advice you are welcome.
I'm closing this. You can find the valid pull request in #41
The new tensor PR is #411
@bhack Thank you for specifying the proposal for C++ standard. |
@beru: array_ref was well received. Probably for C++20, but anyway there is a special department in the C++ committee dedicated to bike-shedding, so it will probably end up slightly different... :-) @bhack: stop asking off-topic questions all over GitHub threads :-) But yes, everybody survived this SC16 panel...
@keryell We are living in an era of too many overlapping technological roadmaps, which creates fragmentation. The HPC standards domain is only part of this trend. Probably after C++20 all these discussions will be as useless as discussing an if statement. :)
Add the Tensor structure:
- std::vector<> with 64-byte alignment
- t.ptr<float_t>(), t.at<float_t>() and t[idx]
- reshape() and resize() routines
- toDevice() and fromDevice() routines