
Conversation

zihaomu
Member

@zihaomu zihaomu commented Sep 19, 2022

Related issues #22509 and #22442.

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@zihaomu zihaomu requested a review from rogday September 19, 2022 09:08
@zihaomu zihaomu linked an issue Sep 19, 2022 that may be closed by this pull request
@zihaomu zihaomu added the category: dnn (onnx) ONNX support issues in DNN module label Sep 19, 2022
@asmorkalov asmorkalov added the bug label Sep 19, 2022
@asmorkalov
Contributor

@zihaomu I propose touching some of the quantized models (per-layer tests) to cover the new case with a test too.

@asmorkalov asmorkalov added the pr: needs test New functionality requires minimal tests set label Sep 19, 2022
@zihaomu
Member Author

zihaomu commented Sep 19, 2022

@zihaomu I propose touching some of the quantized models (per-layer tests) to cover the new case with a test too.

Hi @asmorkalov, thanks for your code review. This patch stops relying on the node name when we parse quantized nodes; it doesn't introduce new features. In my opinion, the existing test cases are sufficient.

Member

@rogday rogday left a comment


Thank you for looking into this!

CV_Error(Error::StsNotImplemented, "Per-channel scales/zeropoints are not supported");
if (hasVariableInps)
{
LayerInfo layerInfo = layer_id.find(node_proto.input(i))->second;
Member


So, you look for any real input layer and check whether we created an internal *Int8 layer? I mean, someone could rename the layer or add a new one that doesn't follow that convention. Maybe we're better off with layerParams.set("depth", CV_8S) in each layer, or we could create a set of all integer layers. I think this should be discussed further with the rest of the team.

Member Author

@zihaomu zihaomu Sep 20, 2022


Thanks for your code review. OpenCV DNN needs to allocate the output blob according to the depth type (for example, a depth of CV_8S is allocated as an int8 Mat).

someone could rename the layer or add a new one that doesn't follow that convention.

It is possible. I agree with creating a set of all integer layers. I think a better way is for us to keep both the Int8 name suffix and a set. The set would only store the special layers that do not follow the naming rule, such as Quantize. For other layer types ending with Int8, we can keep the current implementation.
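The hybrid rule described above could be sketched roughly as follows; this is an illustrative sketch only, and the function and set names (isInt8Layer, specialInt8Layers) are hypothetical, not the actual OpenCV implementation:

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical sketch of the hybrid check: a layer is treated as an int8
// layer either when its type name ends with the "Int8" suffix (the current
// convention) or when it appears in a small set of special layers that do
// not follow that naming rule, such as Quantize.
static bool isInt8Layer(const std::string& layerType)
{
    // Special layers producing int8 output without the Int8 suffix
    // (example entries only).
    static const std::set<std::string> specialInt8Layers = {"Quantize"};

    const std::string suffix = "Int8";
    bool hasSuffix = layerType.size() >= suffix.size() &&
        layerType.compare(layerType.size() - suffix.size(),
                          suffix.size(), suffix) == 0;

    return hasSuffix || specialInt8Layers.count(layerType) > 0;
}
```

With this shape, the suffix convention keeps working for the common case, while the set absorbs the exceptions instead of forcing every layer to be listed.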

Member


I'm not sure relying on internal names is a good idea. Let's discuss this on Friday, maybe someone will come up with a better solution.

Member Author


Hi @rogday, please take a look. I have refactored the code.

Member

@rogday rogday left a comment


Thank you!

Ptr<Layer> preLayer = dstNet.getLayer(layerInfo.layerId);

for (int i = 0; i < node_proto.output_size(); i++)
if (layerInfo.depth == CV_8S && hasInt8Input(preLayer->type))
Member


So if we insert a layer that is unsupported at the moment (Gather, for example) after an integer node, it will set the depth to CV_8S and corrupt memory at runtime (reading int8 as float in this case)?

Member


What if we have a set of ONNX layers instead? E.g. in this set we would put QuantizeLinear, DequantizeLinear, QLinearConv, Transpose, Reshape, MaxPool, etc., and check that, if we have integer inputs, the current layer is in this set; otherwise we throw an error. What do you think?

Member Author

@zihaomu zihaomu Sep 29, 2022


So if we insert a layer that is unsupported at the moment (Gather, for example) after an integer node, it will set the depth to CV_8S and corrupt memory at runtime (reading int8 as float in this case)?

Yes, if we do this, it will corrupt memory. But this should not happen: a Gather-like layer should output the same format as its input. So if a Gather layer gets int8 input, it should be GatherInt8. And if the type of the GatherInt8 layer is Gather, we can add Gather to the list.

Member Author


What if we have a set of ONNX layers instead? E.g. in this set we would put QuantizeLinear, DequantizeLinear, QLinearConv, Transpose, Reshape, MaxPool, etc., and check that, if we have integer inputs, the current layer is in this set; otherwise we throw an error. What do you think?

I also considered a list of ONNX operators. Functionally, there is no essential difference between the two lists: one uses the original ONNX operator names and the other uses the mapped OpenCV names.
Considering future maintainability, using the original ONNX operator names is indeed better.
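The agreed-upon direction could look something like the following minimal sketch, assuming a set keyed by original ONNX operator names; the function name checkInt8Input and the exact set contents are illustrative (taken from the example list in the review), not the real importer code:

```cpp
#include <cassert>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical sketch: ONNX operators allowed to consume int8 inputs.
// The entries below are the examples mentioned in the review; the real
// importer may maintain a different list.
static const std::set<std::string> int8CapableOnnxOps = {
    "QuantizeLinear", "DequantizeLinear", "QLinearConv",
    "Transpose", "Reshape", "MaxPool"
};

// If a node receives an int8 input but its ONNX op type is not in the
// set, fail loudly instead of silently misreading int8 data as float.
static void checkInt8Input(const std::string& onnxOpType, bool hasInt8Input)
{
    if (hasInt8Input && int8CapableOnnxOps.count(onnxOpType) == 0)
        throw std::runtime_error("ONNX node '" + onnxOpType +
                                 "' does not support int8 inputs");
}
```

The point of keying on ONNX names rather than OpenCV layer names is that the list then tracks the ONNX specification directly, which is easier to maintain as new operators gain int8 support.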

@asmorkalov
Contributor

@zihaomu friendly reminder.

@zihaomu
Member Author

zihaomu commented Oct 7, 2022

@asmorkalov, sorry for the late reply; I've been on vacation recently and will be back at work tomorrow.

Member

@rogday rogday left a comment


Looks good, one question.

Member

@rogday rogday left a comment


LGTM! 👍

@asmorkalov asmorkalov removed the pr: needs test New functionality requires minimal tests set label Oct 17, 2022
@asmorkalov
Contributor

Discussed testing offline. Decided to merge without test.

@asmorkalov asmorkalov merged commit 02143cd into opencv:4.x Oct 17, 2022
@alalek alalek mentioned this pull request Jan 8, 2023
@asmorkalov asmorkalov added this to the 4.7.0 milestone Jan 23, 2023

Labels

bug category: dnn (onnx) ONNX support issues in DNN module

Projects

None yet

Development

Successfully merging this pull request may close these issues.

  • Fail to load onnx model quantized by Intel neural-compressor
  • Stop relying on the producer's naming of quantized layers to fetch its parameters

3 participants