Releases: onnx/onnx
v1.19.1
Note
This patch release includes important bug fixes to the function definition of Attention-23/24 under the Group Query Attention mode and to the reference implementation of RotaryEmbedding-23.
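A minimal sketch, using the standard onnx.defs query API, to confirm that the affected operator versions are registered in the installed build; it does not exercise the corrected function bodies themselves:
```python
import onnx
from onnx import defs

# Sanity-check the installed build and the schemas touched by this patch.
print(onnx.__version__)  # expect "1.19.1"
for version in (23, 24):
    schema = defs.get_schema("Attention", version)
    print(schema.name, schema.since_version)  # Attention 23 / Attention 24
print(defs.get_schema("RotaryEmbedding", 23).since_version)  # 23
```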
All changes
- Avoid unnecessary re-generating of proto files (#7253) in #7306
- Require ml_dtypes>=0.5.0 (#7254) in #7307
- Cherry pick four attention PRs in #7315
- Update rotary_embedding reference implementation and tests (#7304, #7316) in #7313
- Override `__repr__` for some proto classes (#7259) in #7314 (see the example after this list)
- Add check for rc-candidates (Update create_release.yml) (#7261) in #7323
- Implement repr methods for Model/Graph/Function (#7320) in #7325
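A minimal sketch of the new repr behavior; the exact output of the new `__repr__` methods is an implementation detail, so treat the printed text as illustrative only:
```python
from onnx import TensorProto, helper

# Build a tiny model; 1.19.1 adds repr methods for Model/Graph/Function
# (#7325) and overrides __repr__ for some proto classes (#7314), so
# repr() is intended to give a readable summary of the proto.
node = helper.make_node("Relu", ["x"], ["y"])
graph = helper.make_graph(
    [node],
    "tiny",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 3])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 24)])
print(repr(model))
print(repr(model.graph))
```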
v1.19.0
ONNX v1.19.0 is now available with exciting new features! We would like to thank everyone who contributed to this release!
Please visit onnx.ai to learn more about ONNX and associated projects.
Key Updates
IR Version 12
- Added FLOAT8E8M0 type
ai.onnx Opset 24
- Added Swish op
- Added TensorScatter op and updated Attention op for in-place KV cache updates
- Enabled FLOAT8E8M0 for QuantizeLinear, DequantizeLinear, Cast, CastLike, Constant, ConstantOfShape, Identity, Reshape, Shape, Size, If, Loop, Scan, Flatten, Pad, Squeeze, Unsqueeze, and Transpose (see the sketch after this list).
- Enabled BF16 for TopK and SplitToSequence.
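As a quick illustration of the FLOAT8E8M0 enablement above, here is a minimal sketch that builds and checks a Cast-24 model targeting the new type, assuming the enum is exposed as `TensorProto.FLOAT8E8M0`:
```python
import onnx
from onnx import TensorProto, helper

# Cast float32 to the new FLOAT8E8M0 (exponent-only) element type.
node = helper.make_node("Cast", ["x"], ["y"], to=TensorProto.FLOAT8E8M0)
graph = helper.make_graph(
    [node],
    "cast_to_e8m0",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT8E8M0, [4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 24)])
onnx.checker.check_model(model)  # valid with IR version 12 / opset 24
```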
Other
- Added dependency on ml-dtypes
- The `BUILD_ONNX_PYTHON` symbol is deprecated (it will be removed in 1.20). Please use `ONNX_BUILD_PYTHON` instead.
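With ml-dtypes in place, small-float tensors can round-trip through numpy_helper without manual bit casting. A hedged sketch; exactly which dtypes `from_array` accepts depends on the installed onnx and ml_dtypes versions:
```python
import ml_dtypes
import numpy as np
from onnx import TensorProto, numpy_helper

# bfloat16 array -> TensorProto -> bfloat16 array, via the ml-dtypes dtypes.
x = np.array([1.0, 2.5, -3.0], dtype=ml_dtypes.bfloat16)
tensor = numpy_helper.from_array(x, name="x_bf16")
print(tensor.data_type == TensorProto.BFLOAT16)  # True
print(numpy_helper.to_array(tensor).dtype)       # bfloat16
```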
What's Changed
Breaking Changes and Deprecations
- Deprecate printable_graph in helper by @justinchuby in #6803
- Remove deprecated mapping constants by @justinchuby in #6914
- Remove re2 dependency by @cbourjau in #7083
- Use ml_dtypes everywhere by @justinchuby in #7089
Spec and Operator
- Clarify the `axes` input of [un]Squeeze to be 1D tensors by @justinchuby in #6888
- Clarify that variable shadowing is not allowed by @justinchuby in #6955
- Clarify Mod operator by @cbourjau in #6973
- Fix typo regarding Attention scale in the spec by @yuanyao-nv in #6984
- Clarify default value for `ratio` input of Dropout operator by @robertknight in #7032
- Correct `dtype` attribute docs for EyeLike operator by @robertknight in #7031
- Update float8 table for the Cast op spec by @justinchuby in #7085
- Document Multi-Device Configuration proto specifications in IR.md by @Copilot in #7056
- Add FLOAT8E8M0 data type by @yuanyao-nv in #7030
- Enable float8e8m0 for Q/DQ, and other ops by @yuanyao-nv in #7120
- Update the saturating behavior for E4M3FNUZ/E5M2FNUZ in Cast and CastLike by @justinchuby in #7130
- Fix ELU and Softplus operators to support tensors of any shape by @Copilot in #7136
- Fix Shape operator specification: correct range bounds and document start > end behavior by @Copilot in #7132
- Fix Attention 3D, reference implementation and c++ expansion by @xadupre in #7142
- Fix RMS norm function definition by @justinchuby in #7135
- Fix spec for ReduceSumSquare and other reduce ops when noop_with_empty_axes is set by @Copilot in #7137
- Add bf16 support to TopK and SplitToSequence by @gramalingam in #7158
- Add Swish operator by @isdanni in #7172
- Add TensorScatter op for in-place kv cache update by @yuanyao-nv in #7114
- Fix Resize operator document by @kcvlex in #6686
- Add kv_nonpad_seqlen input to Attention by @yuanyao-nv in #7164
Reference Implementation
- Fix Resize reference operator by @xadupre in #7105
- fixed erf for empty inputs by @konstantin-pueckler-qc in #7170
- Fix Softmax reference for inputs of length 1 by @konstantin-pueckler-qc in #7169
- Fix return type of HardSigmoid in reference implementation by @konstantin-pueckler-qc in #7168
- Fix hardmax reference implementation by @konstantin-pueckler-qc in #7167
- [Reference] Fix constant of shape when input value is 0d by @justinchuby in #7177
Utilities and Tools
- Support set schema inference function in python by @OYCN in #5940
- Improve model Extractor by @justinchuby in #6920
- Fix: prefixing of graphs when `rename_inputs=False`/`rename_outputs=False` by @KarelZe in #6994
- Fix Einsum shape inference segfault for scalar inputs by @Copilot in #7055
- Add support for constructing functions with graph attributes by @Copilot in #7112
- Make some op-level shape inference functions public by @titaiwangms in #7091
- Implement saturate_cast in numpy helper by @justinchuby in #7143
- Add `.txtpb` as a supported text proto format in serialization by @justinchuby in #7161
Build, CI and Tests
- Use ONNX_WERROR=ON in all jobs by @cyyever in #6825
- Cleanup CMake scripts by @cyyever in #6828
- Better support of sanitizers by @cyyever in #6826
- Remove the pull trigger in source dist test by @justinchuby in #6861
- Use CONFIG to find protobuf by @ktf in #6840
- Update to checkout submodules properly by @justinchuby in #6884
- Improve lint CI by @justinchuby in #6899
- Remove win arm64 from main.yml by @andife in #6906
- Enhance Build Process and CI Configuration (combine pipelines) by @andife in #6926
- Update and rename Install_test.yml to install_test.yml by @andife in #6950
- Update CMake to 3.24 and use LINK_LIBRARY:WHOLE_ARCHIVE by @cyyever in #6934
- replace requirements_release with requirements_release_build in codeql.yml by @andife in #6958
- Add backend node testing for Lpnormalization op by @jagadeeshvx in #6997
- Refine pybind11 integration by @cyyever in #7024
- Use pybind11_add_module by @cyyever in #7034
- Update input and output tensors in pb files to match the model by @amarin16 in #7074
- Generate the test data that should be in the repo but was omitted by @justinchuby in #7099
- Fix cast test cases by @justinchuby in #7102
- Test ArgMax with multiple maximal values and `select_last_index = 0` by @meilofveeningen-rl in #7104
- Add more test cases with attention by @xadupre in #7117
- Update CMakeLists.txt to prevent ICE protobuf failure by @justinchuby in #7121
- Fix shared module name when cross-compiling by @zboszor in #7026
- Add missing symbols for onnx-mlir by @Sunny-Anand in #7179
- improve condition in create_release.yml by @andife in #7196
Documentation
- Update CIPipelines.md by @justinchuby in #6885
- Update IRv10 brief docs by @justinchuby in #6963
- Fix some typos in documentation by @bentheiii in #7156
- [Docs] Add tip blocks referencing ir-py project for Python APIs by @Copilot in #7109
Other Changes
- Merge Windows CI jobs by @cyyever in #6827
- Improve source build testing by @andife in #6831
- use HEAD_SHA instead of HEAD_REF in auto_update_doc.yml by @mshudrak in #6809
- Fix clang-tidy warnings by @cyyever in #6832
- Update pypa / manylinux2014_x86_64 in release_linux_x86_64.yml to 2025.03.22-2 by @andife in #6833
- Fix the ONNX_BUILD_CUSTOM_PROTOBUF bug introduced by PR#6495. by @cainbit in #6830
- Bump protobuf to v30.1 by @cyyever in #6804
- Remove onnx/onnx-data_pb.h by @cyyever in #6836
- replace quansight-labs/setup-python with actions/setup-python by @ngoldbaum in #6837
- integrate release_mac_freethreading into release_mac by @andife in #6841
- Bump protobuf to v30.2 by @cyyever in #6839
- Improve pyproject.toml and add py.typed by @cyyever in #6843
- Bump ruff to 0.11 and types-protobuf to 5.29.1.20250315 by @cyyever in #6846
- Bump actions/setup-python from 5.4.0 to 5.5.0 by @dependabot[bot] in #6853
- Bump actions/upload-artifact from 4.6.1 to 4.6.2 by @dependabot[bot] in #6852
- Bump actions/download-artifact from 4.1.9 to 4.2.1 by @dependabot[bot] in #6856
- Bump clang-format to 20 by @cyyever in #6850
- Fix external data bug in version converter by @yuanyao-nv in #6847
- Bump github/codeql-action from 3.28.11 to 3.28.13 by @dependabot[bot] in http...
v1.18.0
ONNX v1.18.0 is now available with exciting new features! We would like to thank everyone who contributed to this release!
Please visit onnx.ai to learn more about ONNX and associated projects.
Key Updates
ai.onnx Opset 23
Attention, Cast, CastLike, Constant, ConstantOfShape, DequantizeLinear, Flatten, Identity, If, Loop, Pad, QuantizeLinear, RMSNormalization, Reshape, RotaryEmbedding, Scan, Shape, Size, Squeeze, Transpose, Unsqueeze
IR Version 11
- Added FLOAT4E2M1 and multi-device configuration support
- Relaxed naming requirements (#6652)
Python support
- Support Python 3.13
- Experimental support for Python 3.13t (Windows, Mac)
- Removed support for Python 3.8
- wheels for Windows Arm64
Build
- Minimum protobuf version is upgraded to v25.1
- A new option ONNX_BUILD_CUSTOM_PROTOBUF is added for CMake (#6495)
What's Changed
Breaking Changes and Deprecations
- Remove/raise exception when external file exists during onnx.save by @tonypottera24 in #6497
- Remove python 3.8 workflows (python 3.8 is eol) by @cyyever in #6434
- Deprecate all type casting functions by @justinchuby in #6639 => scheduled for removal in 1.20
- GroupNormalization-18 is now deprecated and replaced by GroupNormalization-23 due to an incorrect definition (#6358)
- `helper.split_complex_to_pairs` is now private. Users can consider duplicating the underlying logic for their own use (a sketch follows this list).
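For reference, the underlying logic is small; the following is a hypothetical re-implementation (not the removed helper itself), interleaving real and imaginary parts the way complex tensors are laid out in TensorProto fields:
```python
import numpy as np

def split_complex_to_pairs(values):
    # Hypothetical stand-in for the now-private helper: flatten a complex
    # array into [real0, imag0, real1, imag1, ...].
    flat = np.asarray(values).ravel()
    out = []
    for v in flat:
        out.extend((float(v.real), float(v.imag)))
    return out

print(split_complex_to_pairs(np.array([1 + 2j, 3 - 4j])))  # [1.0, 2.0, 3.0, -4.0]
```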
Spec and Operator
- Add FLOAT4E2M1 data type by @yuanyao-nv in #6318
- Add FLOAT4E2M1 support to relevant operators by @yuanyao-nv in #6283
- Fix typo in the GroupNorm description by @yuanyao-nv in #6358
- clarify clip with min>max by @AlexandreEichenberger in #6395
- Update IR spec to clarify optional input/output availability by @justinchuby in #6435
- Fix GlobalLpPool input types by @justinchuby in #6503
- Fix reference implementation for TopK by @neNasko1 in #6593
- Clarify that FLOAT4E2M1 can be in int32_data by @justinchuby in #6640
- Relax naming requirements in IR spec by @justinchuby in #6652
- Add Rotary Embedding op to ONNX opset 23 by @shubhambhokare1 in #6461
- Format rotary embedding documentation by @justinchuby in #6717
- Add RMSNormalization to ONNX opset 23 by @shubhambhokare1 in #6443
- Add Attention Op to ONNX Opset 23 by @shubhambhokare1 in #6501
- Add multi-device execution support in ONNX by @kevinch-nv in #6641
Reference Implementation
- Fix NonMaxSuppression default values in ReferenceEvaluator by @xadupre in #6354
- Update op_pool_common.py, add missing "| None" by @andife in #6421
- Allow 1D vector for w_scale in QLinearConv by @mcollinswisc in #6460
- Fix pooling pads issues by @titaiwangms in #6650
Utilities and Tools
- Fix shape inference for Squeeze-1,11 with dynamic input shape by @Yosshi999 in #6314
- Extend printer and parser to support invalid identifiers by @gramalingam in #6346
- Support node labels in parser and printer by @gramalingam in #6349
- Fix parser to handle empty optional parameters in edge cases by @gramalingam in #6427
- Optimize DFS by @tonypottera24 in #6440
- Separate the infer_shapes option from the check_model option by @tonypottera24 in #6441
- Support models larger than 2GB by @tonypottera24 in #6438
- Add missing CHECK_PARSER_STATUS by @cyyever in #6448
- Fix `MAXIMUM_PROTOBUF` size (2GiB, not 2GB) by @xenova in #6556
- Fix numpy_helper to_array by @justinchuby in #6638
- Add version converter softmax 13 -> 12 by @seungwoo-ji-03 in #6608
Build, CI and Tests
- Update Protobuf to latest version (Win) by @liqunfu in #6362
- Use CMake python module by @cyyever in #6381
- Revert Python finding logic and some CMake fixes for Windows by @cyyever in #6382
- Enable more compiler warnings by @cyyever in #6383
- Remove pypi publishing inside "release_" by @andife in #6483
- Fix g++-13 build errors by @cyyever in #6509
- Move to using protogen's .pyi file generator by @justinchuby in #6462
- Python 3.13: builds successful for all os by @andife in #6370
- Simplify CMake argument handling in setup.py by @cyyever in #6545
- Remove struct and leverage ml_dtypes in helper tests by @justinchuby in #6631
- Add cmake dependency only when system cmake is not available (backend version) by @mgorny in #6643
- Remove onnxruntime tests by @justinchuby in #6709
- python313t builds in main_freethreading.yml by @andife in #6706
- Improve CMake summary by @cyyever in #6704
- Add protobuf local build option by @cyyever in #6495
- Harmonize protobuf versions by upgrading to a minimum protobuf version of 25.1 and fix CI error by @cyyever in #6725
- Include `backend.py` in source distribution by @mgorny in #6755
Documentation
- Move INSTALL instruction to separate file by @andife in #6560
- Fix Pad operator example in Python API docs (correct attribute usage) by @kolasaniv1996 in #6702
Other Changes
- Remove unused variables by @cyyever in #6303
- Fix main url checks by @roborags in #6312
- Bumped main VERSION_NUMBER to 1.18.0 by @roborags in #6315
- Bump ai.onnx opset to 23 by @roborags in #6316
- Combine different release pipelines by the use of reusable workflows by @andife in #6277
- Bump actions/upload-artifact from 3 to 4 by @andife in #6319
- Fix missing secrets for publishing of onnxweekly by @andife in #6321
- Update main.yml (upgrade github actions download/upload artifact) by @andife in #6320
- BF: fix condition for publishing to testpypi (Update create_release.yml) by @andife in #6338
- Fix OOB in data propagation of math ops when input is broadcasted to zero by @Yosshi999 in #6323
- The latest protobuf pkg 5.28.0 is failing on Windows. use the one pre… by @liqunfu in #6342
- Set up codecov test analysis by @justinchuby in #6345
- Fix model extraction utility by @gramalingam in #6344
- Add missing matrix.target-architecture ? (Update release_mac.yml) by @andife in #6350
- Some performance fixes in C++ code, use of emplace instead of insert by @cyyever in #6304
- Fix some clang-tidy warnings by @cyyever in #6353
- BugFix: make sure that unique files names are used before uploading the wheels b...
v1.17.0
ONNX v1.17.0 is now available with exciting new features! We would like to thank everyone who contributed to this release!
Please visit onnx.ai to learn more about ONNX and associated projects.
Key Updates
ai.onnx Opset 22
- Update to support bfloat16:
- Acos, Acosh, Asin, Asinh, Atan, Atanh, AveragePool, Bernoulli, Conv, ConvTranspose, Cos, Cosh, DeformConv, Det, Dropout, Elu, EyeLike, GRU, GlobalAveragePool, GlobalLpPool, GlobalMaxPool, GridSample, HardSigmoid, HardSwish, InstanceNormalization, LSTM, LpNormalization, LpPool, MaxPool, MaxRoiPool, MaxUnpool, Mish, Multinomial, NegativeLogLikelihoodLoss, RNN, RandomNormal, RandomNormalLike, RandomUniform, RandomUniformLike, RoiAlign, Round, Selu, Sin, Sinh, Softplus, Softsign, Tan, ThresholdedRelu
Python Changes
- Support for numpy >= 2.0
Bug fixes and infrastructure improvements
- Fix Check URLs errors 5972
- Use CMAKE_PREFIX_PATH in finding libprotobuf 5975
- Bump main VERSION_NUMBER to 1.17.0 5968
- Fix source and pip tar.gz builds on s390x systems 5984
- Fix unique_name 5992
- Fix SegFault bug in shape inference 5990
- Fix onnx.compose when connecting subgraphs 5991
- Fix conversion from split 11 to split 18 6020
- Update error messages for NegativeLogLikelihoodLoss inference function 6021
- Generalize input/output number check in shape inference 6005
- Replace rank inference with shape inference for Einsum op 6010
- build from source instruction with latest cmake change 6038
- Handle OneHot's depth value during shape inference 5963
- Not to install cmake in pyproject.toml on Windows 6045
- fix a skipped shape infer code 6049
- Include the ".onnxtext" extension in supported serialization format 6051
- Allow ReferenceEvaluator to return intermediate results 6066
- Fix 1 typo in numpy_helper.py 6041
- Remove benchmarking code 6076
- Prevent crash on import after GCC 8 builds 6048
- Check graph outputs are defined 6083
- Enable additional ruff rules 6032
- Add missing shape inference check for DequantizeLinear 6080
- Add bfloat16 to all relevant ops 6099
- fix(ci): install python dependencies with --only-binary :all: in manylinux 6120
- fix: install google-re2 with --only-binary option 6129
- Specify axis parameter for DequantizeLinear when input rank is 1 6095
- Pin onnxruntime to 1.17.3 for release CIs 6143
- Fix INT4 TensorProto byte size is 5x larger than expected with negative values 6161
- Mitigate tarball directory traversal risks 6164
- Fix reference implementation for ScatterND with 4D tensors 6174
- Addition of group > 1 in test and in backend for ConvTranspose 6175
- Support for bfloat16 for binary, unary operators in reference implementation 6166
- Refactor windows workflow to work on standard windows 6190
- Fix a few crashes while running shape inference 6195
- Update onnx to work with numpy>=2.0 6196
- Use sets to improve performance of dfs search 6213
- Upgrade reuse to v4.0.0 6216
- Makes to_array, from_array support custom numpy dtype, support float16 type in parser 6170
- Handle functions in external data helper 6233
- Refactor safe extract method to fix issue 6215 6222
- move examples dir 6230
- Use MACOSX_DEPLOYMENT_TARGET=12.0 for macOS wheels 6242
- Handle the optional input in infer_node_outputs 6250
- Add check on dimensions in Gemm opset 6 6217
- Update broken URLs 6255
- The latest protobuf pkg 5.28.0 is failing on Windows. use the one pre… 6342
- Remove unused variables 6303
Test improvements
- Migrate CI to use Github Actions 6075
- Add shape inference test for custom op 6068
- chore(ci): build and test macOS universal2 wheels on macOS arm64 6117
- Fix input names for quantize/dequantize ONNX backend tests 6122
- Verify model deletion after testing 6127
- Better name for Github Action and fix Windows build on CI 6173
- Fix CI on Windows 3.12 6179
- Rename test name with duplicated names, add logic to check it does not happen again 6194
Documentation updates
v1.16.2
v1.16.1
ONNX v1.16.1 is a patch release based on v1.16.0.
Bug fixes
- Prevent crash on import after GCC 8 builds #6048
- Add missing shape inference check for DequantizeLinear #6080
- Fix input names for quantize/dequantize ONNX backend tests #6122
- fix a skipped shape infer code #6049
Please visit onnx.ai to learn more about ONNX and associated projects.
v1.16.0
ONNX v1.16.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Key Updates
ai.onnx Opset 21
- Update to support int4 and uint4:
- Update to support float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4 and uint4:
- Support blocked quantization. Support int4, uint4, int16, and uint16:
- Support bfloat16 and float16 scales. Support float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz quantized tensors:
- Add `stash_type` attribute and change input shape of `scale` and `bias` from (G) to (C) for GroupNormalization
ai.onnx.ml Opset 4
- Added new operator TreeEnsemble
IR Version 10
- Added support for UINT4, INT4 types
- GraphProto, FunctionProto, NodeProto, and TensorProto added a `metadata_props` field (see the sketch after this list)
- FunctionProto added a `value_info` field
- FunctionProto and NodeProto added an `overload` field to support overloaded functions
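A minimal sketch of using the new `metadata_props` field on a GraphProto (the same pattern applies to FunctionProto and NodeProto); the key/value strings here are arbitrary examples:
```python
import onnx
from onnx import TensorProto, helper

node = helper.make_node("Identity", ["x"], ["y"])
graph = helper.make_graph(
    [node],
    "meta_demo",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1])],
)
# metadata_props is a repeated StringStringEntryProto field added in IR version 10.
entry = graph.metadata_props.add()
entry.key, entry.value = "preprocessing", "normalized"
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 21)])
onnx.checker.check_model(model)
```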
Python Changes
- Support registering custom OpSchemas via the Python interface (see the sketch after this list)
- Support Python 3.12
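A hedged sketch of registering a custom OpSchema from Python; the constructor keywords and the `FormalParameter`/`register_schema` names below reflect my reading of the onnx.defs bindings and may need adjusting for your onnx version:
```python
from onnx import defs

# Define and register a schema for a custom op in a custom domain.
schema = defs.OpSchema(
    "CustomIdentity",   # op type
    "com.example",      # domain
    1,                  # since_version
    inputs=[defs.OpSchema.FormalParameter("input", "T")],
    outputs=[defs.OpSchema.FormalParameter("output", "T")],
    type_constraints=[("T", ["tensor(float)"], "float tensors only")],
)
defs.register_schema(schema)
print(defs.get_schema("CustomIdentity", domain="com.example").since_version)  # 1
```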
Security Updates
- Fix path sanitization bypass leading to arbitrary read (CVE-2024-27318)
- Fix Out of bounds read due to lack of string termination in assert (CVE-2024-27319)
Deprecation notice
- Deprecated using C++14 to compile ONNX from source. Use C++17 instead #5612
- Deprecated TreeEnsembleClassifier and TreeEnsembleRegressor
- Remove FormalParameter properties that were deprecated in ONNX 1.14 (#5074)
Bug fixes and infrastructure improvements
- Enable empty list of values as attribute (#5559)
- Add backward conversions from 18->17 for reduce ops (#5606)
- DFT-20 version converter (#5613)
- Fix version-converter to generate valid identifiers (#5628)
- Reserve removed proto fields (#5643)
- Cleanup shape inference implementation (#5596)
- Do not use LFS64 on non-glibc linux (#5669)
- Drop "one of" default attribute check in LabelEncoder (#5673)
- TreeEnsemble base values for the reference implementation (#5665)
- Parser/printer support external data format (#5688)
- [cmake] Place export target file in the correct directory (#5677)
- Bump CMAKE_CXX_STANDARD as 17 globally (#5612)
- Fix shape inference for DequantizeLinear (#5709)
- Fix swapped version numbers in version converter (#5734)
- Expose LexicalScopeContext in checker.py (#5693)
- Create in-memory large models without serializing large initializers through protobuf (#5685)
- Define `__all__` in onnx.reference (#5749)
- Add default for check_function & Use lexical_scope_ctx for readability (#5757)
- Make ReferenceEvaluator support ModelContainer (#5754)
- Fix reference implementation for loops with optional number of iterations (#5752)
- Print the actual and expected attribute types in checker (#5762)
- Resurrect check function context logic (#5778)
- Fix conversion to zero for E4M3FNUZ and E5M2FNUZ (#5764)
- Support Unicode file paths when loading an ONNX file (#5806)
- Removed unused string_view include (#5813)
- Use mac-release 10.15 (#5820)
- Process subgraphs in inliner (#5841)
- Enable unity(Jumbo) builds (#5768)
- Print tensor dtypes as strings in shape inference (#5856)
- Bump up IR_VERSION to 10 (#5860)
- Support Python 3.12 (#5743)
- Fix corner case where output size need to reduce by one in MaxPool (#5741)
- Bump Numpy minimal version to 1.20 (#5902)
- Fix endianness conversion in numpy_helper.to_array() (#5904)
- Add valueinfos field to FunctionProto (#5903)
- Remove deprecated properties from FormalParameter (#5921)
- Add proto support for overloaded functions (#5011)
- Add parser support for int4 types (#5934)
- Update proto to add metadata props (#5938)
- The latest Cmake 3.28.3 is failing with "Could NOT find Protobuf (missing: Protobuf_LIBRARIES)". Use Cmake 3.27.9 (#5951)
- Fix ReferenceEvaluator when run from a subclass (#5936)
Documentation updates
- Update top-k documentation (#5948)
- Updated docs for DynamicQuantizeLinear to be consistent with reference implementation (#5603)
- Clarify `cond` to If must contain a single element (#5617)
- Update README.md (#5630)
- Fix AffineGrid doc error - output shape shall have no 'C' in it (#5648)
- Use absolute link in README.md entirely (#5663)
- [Doc clarification] Added unidirectional text for LayerNorm (#5686)
- Add documentation for inliner (#5712)
- update release doc for tag creation (#5721)
- Doc: Add exception checks in check_model (#5736)
- Add `perm` length constraint in Transpose doc (#5857)
- Fix label encoder definition in schema (#5863)
- Update batchnorm documentation (number of outputs for training mode) (#5932)
- Q/DQ docs readability + 4bit info in onnx.proto (#5937)
Installation
You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.
Contributors
Thanks to these individuals for their contributions in this release since the last 1.15.0 release:
Aditya Goel, Adrian Lizarraga, Andreas Fehlner, Charles Volzka, Daniel Richard G, Danni, G. Ramalingam, Gal Hubara-Agam, Ilya Lavrenov, Justin Chu, Tabari Alexander, Takeshi Watanabe, WORLD PEACE, Wouter Deconinck, Xavier DuprΓ©, Yuan Yao, dependabot[bot], galagam, jslap-ubi, liqun Fu
v1.15.0
ONNX v1.15.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Key Updates
- Added new operators: ImageDecoder #5294, RegexFullMatch #5401, StringConcat #5350, StringSplit #5371, AffineGrid #5225, Gelu #5277
- Updated existing operators: ConstantOfShape #5390, GridSample #5010, ReduceMax #5539, ReduceMin #5539, IsNaN #5583, IsInf #5583, DFT #5514, LabelEncoder #5453
- New features, bug fixes, and document updates
ai.onnx opset version increased to 20 with the following changes:
- New Operators (ai.onnx):
- ImageDecoder a new ImageDecoder operator to be used in preprocessing models
- RegexFullMatch a new operator for regex matching that is commonly used in feature preprocessing
- StringConcat takes two string tensors as input and returns the elementwise concatenation of the strings in each tensor
- StringSplit takes a string tensor as input and splits each element based on a delimiter attribute and a maxsplit attribute
- AffineGrid Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta
- Gelu applies the Gaussian error linear unit function or its approximation to the input (see the sketch after this section)
- Operator Updates (ai.onnx):
- ConstantOfShape extends supported data types
- GridSample extends to ND data
- ReduceMax adds support for boolean
- ReduceMin adds support for boolean
- IsNaN adds support for float8 types
- IsInf adds support for float8 types
- DFT promotes axis from an attribute to an input
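A minimal sketch running the new Gelu-20 operator through the reference evaluator (see the Gelu entry above); shapes and values are arbitrary examples:
```python
import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# The "approximate" attribute selects the tanh approximation of GELU.
node = helper.make_node("Gelu", ["x"], ["y"], approximate="tanh")
graph = helper.make_graph(
    [node],
    "gelu_demo",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [3])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 20)])
ref = ReferenceEvaluator(model)
x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
print(ref.run(None, {"x": x})[0])  # roughly [-0.159, 0.0, 0.841]
```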
ai.onnx.ml opset version increased to 4 with the following changes:
- Operator Updates (ai.onnx.ml):
- LabelEncoder adds keys_as_tensor and values_as_tensor attributes
New functionality:
- Enable empty list of values as attribute PR#5559
- Update diff backend node tests for auto update doc PR#5604
- Enable pylint checks with Ruff and remove pylint from lintrunner PR#5589
- Getting onnx to treat `inf`/`-inf` as float literals PR#5528
- Create the onnxtxt serialization format PR#5524
- Support JSON as a serialization target PR#5523
- Support for parsing and printing empty list value as attribute PR#5516
- Add auto update doc pipeline to help developers update docs PR#5450
- Implement GELU as function op PR#5277
- Integrate function-inlining with version-conversion PR#5211
- Extend function type inference to handle missing optional parameters PR#5169
- Create repr functions for OpSchema PR#5117
- Utility to inline model-local functions PR#5105
- Faster reference implementation for operator Conv based on im2col PR#5069
- Support textproto as a serialization format PR#5112
ONNX now supports serializing to JSON, Text Proto as well as the ONNX Text Representation
Users are now able to serialize the model proto to a text format by specifying supported file extensions or supplying the format= argument in save_model.
For example
# model: onnx.ModelProto
onnx.save_model(model, "model.json")
will save the model as a JSON file.
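The format can also be requested explicitly instead of being inferred from the extension. A hedged sketch; the format name "onnxtxt" and the ".textproto" extension below are the ones I would expect to be registered, so verify them against your onnx version:
```python
import onnx

# model: onnx.ModelProto
onnx.save_model(model, "model.textproto")                   # format inferred from the extension
onnx.save_model(model, "model_as_text", format="onnxtxt")   # explicit format name
```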
Shape inference enhancements
- [Spec] output_shape for ConvTranspose should not have batch and channels PR#5400
- Infer rank where reshape shape is inferred PR#5327
Bug fixes and infrastructure improvements
- Do not use LFS64 on non-glibc linux PR#5669
- [Web] Use tensor_dtype_to_np_dtype instead of deprecated function PR#5593
- Reject absolute path when saving external data PR#5566
- Support Python editable builds PR#5558
- Test onnxruntime 1.15 with opset 19/IR 9 and fix test source distribution PR#5376
- Supports float 8 initializers in ReferenceEvaluator PR#5295
- Fix check_tensor to work with large models on UNIX PR#5286
- Fix check_tensor to work with large models on Windows PR#5227
- Transpose scalar shape inference PR#5204
- Enable RUFF as a formatter PR#5176
- correct averagepool kernel shape in dilation test case PR#5158
- Fix type constraints of Reshape(19) PR#5146
- Add github action to check urls are valid PR#5434
- Introduce optional cpplint in CI PR#5396
- Test the serialization API with custom serializers PR#5315
- [CI] Use ONNX Hub directly in test_model_zoo CI PR#5267
- Clean up setup.py in favor of pyproject.toml PR#4879
Documentation updates
- Merge the two contributing docs and create instructions for updating an op PR#5584
- [Doc] Update README.md regarding Protobuf update and fix typo in Slice-13 spec PR#5435
- Generate both onnx and onnx-ml operator docs when ONNX_ML=1 PR#5381
- Publish md files under docs/ to the documentation site PR#5312
- Update OpSchema docs to include new methods and classes PR#5297
- Fix missing examples in documentation for ai.onnx.ml PR#5228
- Modify OneHot operator explanation PR#5197
- Update CIPipelines.md PR#5157
- Extend python API documentation PR#5156
- Update sphinx to create markdown pages for operators PR#5137
Installation
You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.
python setup.py develop deprecation
Direct invocation of setup.py is deprecated following https://setuptools.pypa.io/en/latest/deprecated/commands.html. To build ONNX, users should switch to the following commands:
# Editable installation
# Before: python setup.py develop
# Now
pip install -e .
# Build wheel
# Before: python setup.py bdist_wheel
# Now
pip install --upgrade build
python -m build .
Contributors
Thanks to these individuals for their contributions in this release sinc...
v1.14.1
ONNX v1.14.1 is a patch release based on v1.14.0.
Bug fixes
- Fix `shape` data propagation function to handle missing optional parameters #5219
- Fix a couple of shape inference issues #5223
- Extend function type inference to handle missing optional parameters #5169
- Fix check_tensor to work with large models on Windows #5227
- Fix check_tensor to work with large models on UNIX #5286
v1.14.0
ONNX v1.14.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Opset 19 is released
New operators
DeformConv added in #4783
Operator extensions
Equal - Support for string data type added in #4828
AveragePool - New attribute dilations #4790
Pad - Added new wrap value to the mode attribute to support circular padding #4793 (see the sketch below)
Resize - Added half_pixel_symmetric to the coordinate_transformation_mode attribute #4862
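A minimal sketch of the new circular padding through the reference evaluator (see the Pad entry above); the input values are arbitrary and the reference implementation is assumed to support the wrap mode:
```python
import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

node = helper.make_node("Pad", ["x", "pads"], ["y"], mode="wrap")
graph = helper.make_graph(
    [node],
    "pad_wrap",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3]),
     helper.make_tensor_value_info("pads", TensorProto.INT64, [4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, None)],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
ref = ReferenceEvaluator(model)
x = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
pads = np.array([0, 2, 0, 2], dtype=np.int64)  # pad axis 1 by 2 on each side
print(ref.run(None, {"x": x, "pads": pads})[0])  # [[2. 3. 1. 2. 3. 1. 2.]]
```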
IR updates (bump to 9)
Backend tests
Replaced real models with light models in backend tests. #4861 #4960
Support Protobuf v21
Now ONNX supports Protobuf v21: #4956
Deprecation notice
- Python 3.7 support will be deprecated due to EOL in the next release: #5191
- The onnx-weekly package will be deprecated on TestPyPI. Please use it from PyPI instead: #4930
- Properties in FormalParameter will be deprecated in a future release. Please use the newer property names: #5074
- Variables from mapping.py will be deprecated and become private implementation details. Please use public functions to get corresponding types from helper.py instead: #4554
Installation notice
You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.
Contributors
Thanks to these individuals for their contributions in this release since last 1.13.0 release: @jcwchen, @andife, @gramalingam, @xadupre, @justinchuby, @liqunfu, @yuanyao-nv, @jbachurski, @p-wysocki, @prasanthpul, @jantonguirao, @take-cheeze, @smk2007, @AlexandreEichenberger, @snnn, @daquexian, @linkerzhang.