

Commit 51a30d0

Update 2021-3-4-pytorch-1.8-released.md
1 parent 1ccef11 commit 51a30d0

File tree

1 file changed: +5 -4 lines changed

_posts/2021-3-4-pytorch-1.8-released.md

+5 -4
@@ -25,9 +25,11 @@ The ```torch.linalg``` module, modeled after NumPy’s [np.linalg](https://numpy
 * [Documentation](https://pytorch.org/docs/1.8.0/linalg.html)

 ## [Beta] Python code Transformations with FX
-```torch.fx``` is a toolkit that allows you to write transformations over PyTorch Python code, modifying the behavior of the model without having to modify the original source code. Concretely, FX allows you to write transformations of the form ```transform(input_module : nn.Module)``` -> ```nn.Module```, where you can feed in a Module instance and get a transformed Module instance out of it.
+FX allows you to write transformations of the form ```transform(input_module : nn.Module)``` -> ```nn.Module```, where you can feed in a ```Module``` instance and get a transformed ```Module``` instance out of it.

-Because transforms are written with FX work on nn.Modules, the results of these transforms can be used in any place a normal Module can be used, including in training and with TorchScript.
+This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is releasing as a prototype contemporaneously with FX. Graph Mode Quantization automates the process of quantizing a neural net and does so by leveraging FX’s program capture, analysis and transformation facilities. We are also developing many other transformation products with FX and we are excited to share this powerful toolkit with the community.
+
+Because FX transforms consume and produce nn.Module instances, they can be used within many existing PyTorch workflows. This includes workflows that, for example, train in Python then deploy via TorchScript.

 Below is an FX transform example:

@@ -52,8 +54,7 @@ def transform(m: nn.Module,
     return torch.fx.GraphModule(m, graph)

 ```
-* [Documentation](https://pytorch.org/docs/1.8.0/fx.html)
-* Share feedback about FX on the [forums](https://discuss.pytorch.org/c/fx-functional-transformations/31) or [issue tracker](https://github.com/pytorch/pytorch/issues).
+You can read more about FX in the official documentation (https://pytorch.org/docs/master/fx.html). You can also find several examples of program transformations implemented using ```torch.fx``` [here](https://github.com/pytorch/examples/tree/master/fx). We are constantly improving FX and invite you to share any feedback you have about the toolkit on the forums (https://discuss.pytorch.org/) or issue tracker (https://github.com/pytorch/pytorch/issues).

 # Distributed Training
 The PyTorch 1.8 release added a number of new features as well as improvements to reliability and usability. Concretely, support for: [Stable level async error/timeout handling](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group) was added to improve NCCL reliability; and stable support for [RPC based profiling](https://pytorch.org/docs/stable/rpc.html). Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below:
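
For reference, below is a minimal, self-contained sketch of the ```transform(input_module : nn.Module)``` -> ```nn.Module``` pattern the edited paragraphs describe. The toy module and the add-to-mul rewrite rule are illustrative assumptions rather than the example from the post, but the overall structure (trace the module, edit the Graph, return a ```torch.fx.GraphModule```) matches the transform whose tail appears in the second hunk.

```python
import torch
import torch.nn as nn
import torch.fx

# Toy module, used only to exercise the transform below (illustrative assumption).
class M(nn.Module):
    def forward(self, x):
        return torch.add(x, x)

def transform(m: nn.Module) -> nn.Module:
    # Step 1: symbolically trace the module to capture its computation as a Graph.
    graph: torch.fx.Graph = torch.fx.symbolic_trace(m).graph

    # Step 2: modify the Graph; here an arbitrary rule swaps torch.add for torch.mul.
    for node in graph.nodes:
        if node.op == "call_function" and node.target == torch.add:
            node.target = torch.mul
    graph.lint()  # sanity-check the edited graph

    # Step 3: wrap the edited Graph back up as a runnable nn.Module.
    return torch.fx.GraphModule(m, graph)

transformed = transform(M())
print(transformed(torch.tensor([1.0, 2.0])))  # tensor([1., 4.]): x * x instead of x + x
```

Because the result is an ordinary ```nn.Module```, it can be trained, scripted with TorchScript, or fed into further transforms, which is the point the added paragraph about existing PyTorch workflows makes.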
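
The added paragraph also mentions the FX-based Graph Mode Quantization prototype. A hedged sketch of that post-training flow as it looked in the 1.8-era prototype API follows; the toy model, tensor shapes, and the choice of the ```fbgemm``` qconfig are assumptions, and the prototype API may differ in later releases.

```python
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

# Toy float model to quantize (illustrative only; post-training static
# quantization expects the model in eval mode).
float_model = nn.Sequential(nn.Linear(8, 8), nn.ReLU()).eval()

# An empty key applies the qconfig to the whole model.
qconfig_dict = {"": get_default_qconfig("fbgemm")}

prepared = prepare_fx(float_model, qconfig_dict)  # FX traces the model and inserts observers
prepared(torch.randn(16, 8))                      # calibration pass on representative data
quantized = convert_fx(prepared)                  # swap in quantized ops
print(quantized.code)                             # inspect the generated forward
```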
