
Conversation

@MagellaX

  • Add CompatibleAdapters for SequenceInsert across opset transitions
  • Preserve sequence type information during version conversion
  • Fix shape inference errors after converting SequenceInsert models
  • Add comprehensive test coverage for SequenceInsert conversion

Resolves GitHub issue #3984: Version converter cannot convert SequenceInsert correctly due to missing sequence input type info.

@MagellaX MagellaX requested a review from a team as a code owner August 26, 2025 22:37
@github-project-automation github-project-automation bot moved this to In progress in PR Tracker Aug 26, 2025
@MagellaX MagellaX force-pushed the fix-sequence-insert-conversion branch 4 times, most recently from c1961b1 to 432591c Compare August 27, 2025 07:57
@codecov

codecov bot commented Aug 27, 2025

Codecov Report

❌ Patch coverage is 68.18182% with 14 lines in your changes missing coverage. Please review.
✅ Project coverage is 54.36%. Comparing base (9738ccc) to head (cf6147c).
✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
onnx/version_converter.py 66.66% 10 Missing and 4 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #7248      +/-   ##
==========================================
+ Coverage   54.33%   54.36%   +0.03%     
==========================================
  Files         511      511              
  Lines       31819    31862      +43     
  Branches     2848     2868      +20     
==========================================
+ Hits        17290    17323      +33     
- Misses      13753    13761       +8     
- Partials      776      778       +2     


@MagellaX
Author

@justinchuby any thoughts?

@MagellaX MagellaX force-pushed the fix-sequence-insert-conversion branch from 5909dce to 4e8db8c Compare August 27, 2025 16:09
titaiwangms and others added 5 commits August 27, 2025 21:43
### Description

This PR makes four op-level shape inference functions public.

```cpp
ONNX_API void RNNShapeInference(InferenceContext& ctx);
ONNX_API void convPoolShapeInference(
    InferenceContext& ctx,
    bool use_dilation,
    bool require_kernel_shape,
    int input1Idx,
    int input2Idx);
ONNX_API void convTransposeShapeInference(InferenceContext& ctx);
ONNX_API void globalPoolTypeShapeInference(InferenceContext& ctx);
```

### Motivation and Context

While working on the onnxruntime integration with ONNX==1.18, I found that these functions had been changed to static, even though they are used to help custom operators complete type and shape inference.

Publishing op-level shape inference functions can enable users
developing custom contrib ops to leverage existing shape inference
implementations when their operators have similar semantics to standard
ONNX ops, promoting code reuse and model consistency.

---------

Signed-off-by: Ti-Tai Wang <[email protected]>
Signed-off-by: Yash solanki <[email protected]>
- Add CompatibleAdapters for SequenceInsert across opset transitions
- Preserve sequence type information during version conversion
- Fix shape inference errors after converting SequenceInsert models
- Add comprehensive test coverage for SequenceInsert conversion

Resolves GitHub issue onnx#3984: Version converter cannot convert
SequenceInsert correctly due to missing sequence input type info.

Signed-off-by: Yash solanki <[email protected]>
- Add missing protobuf header include
- Fix protobuf repeated field access syntax
- Simplify SequenceInsert handling logic
- Remove problematic input value_info generation

These changes resolve the CI compilation failures while maintaining
the core functionality for sequence type preservation.

Signed-off-by: Yash solanki <[email protected]>
- Change double quotes to single quotes for string literals in version_converter.py
- Ensure proper spacing in inline comments in automatic_upgrade_test.py

These changes address Ruff-FORMAT linting warnings to ensure CI compliance.

Signed-off-by: Yash solanki <[email protected]>
@MagellaX MagellaX force-pushed the fix-sequence-insert-conversion branch from 8d96ef0 to a3c21c3 Compare August 27, 2025 16:20
@MagellaX MagellaX requested a review from a team as a code owner August 27, 2025 16:20
@justinchuby
Member

Thanks, will take a look soon

@gramalingam
Contributor

As far as I can see: the key limitation is that the C++ IR does not support types other than Tensor type. See here. So, the ideal solution would be to extend the Value struct to store type info about sequence types as well. Having said that, a workaround like in this PR (to externally save and restore sequence types) would work for now, but would probably be insufficient if we ever want to handle sequence types within specific version-conversion logic.

@MagellaX
Author

> As far as I can see: the key limitation is that the C++ IR does not support types other than Tensor type. See here. So, the ideal solution would be to extend the Value struct to store type info about sequence types as well. Having said that, a workaround like in this PR (to externally save and restore sequence types) would work for now, but would probably be insufficient if we ever want to handle sequence types within specific version-conversion logic.

I appreciate the pointer, and I agree. The core issue is that `Value` in the C++ IR only tracks tensor metadata (`elem_type_` / `sizes_`), so we have no place to persist sequence types during conversion. The save/restore shim in this PR is meant to unblock the immediate failure, but I share your concern that we'll need a deeper IR extension to model sequences cleanly. I'm happy to follow up with a design doc or an issue to track that larger change if that sounds reasonable.

@MagellaX
Author

Additionally, I am aware that the longer-term fix resides in the C++ IR, and I'm happy to collaborate on that follow-up once this issue is resolved. In the meantime, if you're comfortable with the latest patchset, could you take another look so we can get this merged?

```cpp
}
if (output->elemType() == TensorProto_DataType_UNDEFINED && output->sizes().empty()) {
  // Special handling for operations that produce sequence types
  if (node->kind().toString() == std::string("SequenceInsert")) {
```
Contributor

Why this special treatment for this one op? What about other sequence ops? It would be better to have a more general solution. Isn't the "preserved_sequence_types" handling below sufficient? Why do we need this?
