
Commit fcfa0de (parent e1cfe10): Revised PytorchAddExportSupport.md (minor changes)

File changed: tutorials/PytorchAddExportSupport.md (37 additions, 50 deletions)

## Failing to export a model in PyTorch
When you try to export a model, you may receive a message similar to the following:
```
UserWarning: ONNX export failed on elu because torch.onnx.symbolic.elu does not exist
RuntimeError: ONNX export failed: Couldn't export operator elu
```
The export fails because PyTorch does not support exporting the `elu` operator. If you've already reached out to the ONNX team but haven't received a response, you can add support for this yourself. The difficulty of doing this depends on your answers to the following questions:

### Determine how difficult it is to add support for the operator
#### Question 1: Is the operator you want standardized in ONNX?
Answer:
- **Yes.** Great! It will be straightforward to add support for the missing operator.
- **No.** In this case, it may be difficult to do the work by yourself.
  Check the [Standardization Section](#standardize_op).

#### Question 2: Can the ONNX operator be imported by the backend framework, such as Caffe2?
Answer:
- **Yes.** Terrific. We are able to run the exported ONNX model.
- **No.** In this situation, you can only export the model. Please contact the
  importer (such as onnx-caffe2) developers, as additional work is required.

### How to add support to export an operator in PyTorch
#### Condition 1: If the operator in PyTorch is an ATen operator...
To determine whether the operator is an ATen operator or not, check
`torch/csrc/autograd/generated/VariableType.h` (available within generated code in the PyTorch install dir). If you find the corresponding function in this header file, it's most likely an ATen operator.

**Define symbolic functions.** In this case, you should obey the following rules.
- Define the symbolic function in [`torch/onnx/symbolic.py`](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic.py). Make sure the
  function has the same name as the ATen operator/function defined in
  `VariableType.h`.
- The first parameter is always the exported ONNX graph.
- Parameter names must match the names in `VariableType.h` EXACTLY, because
  dispatch is done with keyword arguments.
- Parameter ordering does NOT necessarily match what is in `VariableType.h`.
  Tensors (inputs) are always first, followed by non-tensor arguments.
- In the symbolic function, we usually need to create an ONNX node in the graph.
  Here is an example to create a node for the `Elu` ONNX operator:
  `g.op("Elu", input, alpha_f=_scalar(alpha))`. More details are included in the
  [API section](#api).
- If the input argument is a tensor, but ONNX asks for a scalar, we have to
  explicitly do the conversion. The helper function `_scalar` can convert a
  scalar tensor into a Python scalar, and `_if_scalar_type_as` can turn a
  Python scalar into a PyTorch tensor.
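
To make the keyword-argument dispatch rule concrete, here is a minimal, self-contained sketch of how an exporter might look up and call a symbolic function by name. The `symbolic_registry`, `register`, and `dispatch` names are illustrative inventions for this sketch, not PyTorch's actual internals:

```python
# Hypothetical sketch of dispatch by keyword arguments (not PyTorch's real code).
symbolic_registry = {}

def register(fn):
    # Index symbolic functions by their name, mirroring the ATen operator name.
    symbolic_registry[fn.__name__] = fn
    return fn

@register
def elu(g, input, alpha, inplace=False):
    # Stand-in body: just record what would become an ONNX node.
    return ("Elu", input, alpha)

def dispatch(g, op_name, **kwargs):
    # Arguments are forwarded by keyword, which is why the symbolic
    # function's parameter names must exactly match the recorded names.
    return symbolic_registry[op_name](g, **kwargs)

node = dispatch(None, "elu", input="x", alpha=1.0, inplace=False)
```

Because the lookup forwards `**kwargs`, renaming a parameter (say `alpha` to `a`) would raise a `TypeError` at export time, which is the failure mode the naming rule above guards against.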

In the case of adding support for the operator `elu`, we can find the following declaration in `VariableType.h`:
```cpp
virtual Tensor elu(const Tensor & input, Scalar alpha, bool inplace) const override;
```
From the above, it can be determined that `elu` is implemented in the ATen library. So we can define a symbolic
function called `elu` in `torch/onnx/symbolic.py`, similar to the following:
```python
def elu(g, input, alpha, inplace=False):
    return g.op("Elu", input, alpha_f=_scalar(alpha))
```
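
To see these pieces work together without a PyTorch installation, here is a minimal sketch that mocks the graph object and the `_scalar` helper. The `Graph` class and this `_scalar` are simplified stand-ins for illustration, not the real implementations in `torch/onnx`:

```python
# Mock of the export graph, for illustration only.
class Graph:
    def op(self, opname, *args, **attrs):
        # Record the would-be ONNX node as a plain tuple.
        return (opname, args, attrs)

def _scalar(x):
    # Simplified stand-in: extract a plain Python number
    # (the real helper unwraps a zero-dimensional tensor).
    return float(x)

def elu(g, input, alpha, inplace=False):
    return g.op("Elu", input, alpha_f=_scalar(alpha))

g = Graph()
node = elu(g, "input_tensor", alpha=1.0)
# node records the op name, its tensor inputs, and its attributes.
```

Note the `alpha_f` spelling: the `_f` suffix marks the attribute as a float, following the type-suffix convention used by `g.op`.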

#### Condition 2: If the operator in PyTorch is not an ATen operator...
If you cannot find the corresponding function in `VariableType.h`,
this means you need to define the symbolic function in the PyTorch
Function class. For example, you need to create a `symbolic` function
for the operator `Dropout` in [torch/nn/_functions/dropout.py](https://github.com/pytorch/pytorch/blob/99037d627da68cdf53d3d0315deceddfadf03bba/torch/nn/_functions/dropout.py#L14).

**Define symbolic functions.** To define the symbolic functions for
non-ATen operators, the following rules should be obeyed.
- Create a symbolic function named `symbolic` in the corresponding Function class.
- The first parameter is always the exported ONNX graph.
- Parameter names except the first must match the names in `forward` EXACTLY.
- The output tuple size must match the outputs of `forward`.
- In the symbolic function, we usually need to create an ONNX node in the graph.
  Check the [API Section](#api) for more details.
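
The rules above can be sketched with a mock graph object. The `MockGraph` class is an illustrative stand-in, and this `Dropout.symbolic` body only approximates the real one in `torch/nn/_functions/dropout.py`:

```python
# Illustration only: a mock graph whose op() returns one node or, when
# outputs > 1, a tuple of (node, output_index) pairs.
class MockGraph:
    def op(self, name, *inputs, outputs=1, **attrs):
        node = (name, inputs, attrs)
        if outputs > 1:
            return tuple((node, i) for i in range(outputs))
        return node

class Dropout:
    @staticmethod
    def forward(ctx, input, p=0.5, train=False, inplace=False):
        raise NotImplementedError  # real computation omitted in this sketch

    @staticmethod
    def symbolic(g, input, p=0.5, train=False, inplace=False):
        # Parameter names after `g` match `forward` exactly, and the
        # number of returned values matches forward's outputs.
        r, _ = g.op("Dropout", input, ratio_f=p, is_test_i=int(not train),
                    outputs=2)
        return r

out = Dropout.symbolic(MockGraph(), "x", p=0.3, train=True)
```

The `outputs=2` argument reflects that the ONNX `Dropout` node historically produced a second (mask) output, which the symbolic function discards.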
## <a name="api"></a> Export related APIs in PyTorch
Symbolic functions should be implemented in Python. All of these functions interact with Python methods which are implemented via C++-Python bindings. The interface they provide looks like this:

```python
def operator/symbolic(g, *inputs):
```

[...]

getting it into ONNX proper, you can add your operator as an *experimental*
operator.
## More ONNX symbolic examples
- [ATen operators in symbolic.py](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic.py)
- [Index](https://github.com/pytorch/pytorch/blob/99037d627da68cdf53d3d0315deceddfadf03bba/torch/autograd/_functions/tensor.py#L24)
- [Negate](https://github.com/pytorch/pytorch/blob/99037d627da68cdf53d3d0315deceddfadf03bba/torch/autograd/_functions/basic_ops.py#L50)
- [ConstantPadNd](https://github.com/pytorch/pytorch/blob/99037d627da68cdf53d3d0315deceddfadf03bba/torch/nn/_functions/padding.py#L8)
