Inference Engine backend can't handle when the model input size has different width and height #19781

@Quantizs

Description

System information (version)
  • OpenCV => 4.5.1
  • Operating System / Platform => Windows 64 Bit
  • Compiler => Visual Studio 2019
Detailed description

We have two custom YOLOv4 models.
One of them is a license plate detector using a 256 x 256 input size.
The other is a character detector whose input size has a different width and height (width = 416, height = 224).

When using the OpenCV backend, both the plate detector and the character detector return valid detections:
(attached images: detection results 1, 2, 3)

When we change the backend to Inference Engine, the license plate detector returns the same results as before, but the character detector's results go wrong:
(attached images: detection results 1w, 2w, 3w)

  • The backend was the only difference between the two executions.
  • The only difference between the two models is that one has the same input width and height and the other does not.
Steps to reproduce
