```torch.fx``` is a toolkit that allows you to write transformations over PyTorch Python code, modifying the behavior of the model without having to modify the original source code. Concretely, FX allows you to write transformations of the form ```transform(input_module : nn.Module)``` -> ```nn.Module```, where you can feed in a Module instance and get a transformed Module instance out of it.
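To make the pattern concrete, here is a minimal sketch of such a transform. The toy ```MyModule``` and the choice to rewrite ```torch.add``` into ```torch.mul``` are hypothetical, chosen only to illustrate the ```transform(input_module : nn.Module)``` -> ```nn.Module``` shape using FX's symbolic tracing and direct graph manipulation:

```python
import torch
from torch import fx, nn

# Hypothetical toy module and transform, purely to illustrate the
# transform(nn.Module) -> nn.Module pattern described above.
class MyModule(nn.Module):
    def forward(self, x):
        return torch.add(x, x)

def transform(m: nn.Module) -> nn.Module:
    # Symbolically trace the module to capture its operations as an FX Graph.
    traced: fx.GraphModule = fx.symbolic_trace(m)
    for node in traced.graph.nodes:
        # `call_function` nodes invoke free functions such as torch.add.
        if node.op == "call_function" and node.target is torch.add:
            node.target = torch.mul  # rewrite add(x, x) into mul(x, x)
    traced.graph.lint()   # sanity-check the rewritten graph
    traced.recompile()    # regenerate the Python code behind the GraphModule
    return traced

new_module = transform(MyModule())
print(new_module(torch.tensor([1.0, 2.0])))  # tensor([1., 4.]) rather than [2., 4.]
```

The returned ```GraphModule``` is itself an ```nn.Module```, so callers never need to know whether a module has been transformed.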
This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is being released as a prototype contemporaneously with FX. Graph Mode Quantization automates the process of quantizing a neural net by leveraging FX’s program capture, analysis, and transformation facilities. We are also developing many other transformation products with FX, and we are excited to share this powerful toolkit with the community.
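As a rough sketch of what that automation looks like, the prototype FX graph mode quantization workflow at the time of this release centered on ```prepare_fx``` and ```convert_fx```; the toy ```nn.Sequential``` model and calibration loop below are placeholders, and the API has continued to evolve in later releases.

```python
import torch
from torch import nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

# A small float model standing in for a real network (placeholder).
float_model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)).eval()

# "" sets the global default qconfig; fbgemm targets x86 server CPUs.
qconfig_dict = {"": get_default_qconfig("fbgemm")}

# prepare_fx symbolically traces the model with FX and inserts observers.
prepared = prepare_fx(float_model, qconfig_dict)

# Calibrate by running representative data through the observed model.
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(4, 8))

# convert_fx rewrites the observed graph into a quantized GraphModule.
quantized = convert_fx(prepared)
print(quantized(torch.randn(4, 8)))
```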
Because FX transforms consume and produce ```nn.Module``` instances, they can be used within many existing PyTorch workflows. This includes workflows that, for example, train in Python and then deploy via TorchScript.
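For instance, continuing the hypothetical sketch above, the transformed module is an ordinary ```nn.Module``` (a ```GraphModule```), so it can be scripted and saved for deployment like any hand-written module:

```python
import torch

# `transform` and `MyModule` refer to the hypothetical FX sketch above.
transformed = transform(MyModule())

# GraphModules carry generated Python code for forward(), so TorchScript
# can script them just like a hand-written nn.Module.
scripted = torch.jit.script(transformed)
scripted.save("transformed_model.pt")
```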
You can read more about FX in the [official documentation](https://pytorch.org/docs/master/fx.html). You can also find several examples of program transformations implemented using ```torch.fx``` [here](https://github.com/pytorch/examples/tree/master/fx). We are constantly improving FX and invite you to share any feedback you have about the toolkit on the [forums](https://discuss.pytorch.org/c/fx-functional-transformations/31) or [issue tracker](https://github.com/pytorch/pytorch/issues).
# Distributed Training
The PyTorch 1.8 release added a number of new features as well as improvements to reliability and usability. Concretely, stable support for [async error/timeout handling](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group) was added to improve NCCL reliability, and [RPC-based profiling](https://pytorch.org/docs/stable/rpc.html) now has stable support. Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below:
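As a quick illustration of the communication hook mechanism mentioned above, here is a minimal sketch of registering the built-in FP16 gradient compression hook on a DDP-wrapped model; the process group setup, ```local_rank```, and the toy ```nn.Linear``` model are placeholders.

```python
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# Assumes each rank has already called something like:
#   dist.init_process_group("nccl", rank=rank, world_size=world_size)
local_rank = 0  # placeholder for this process's GPU index
net = nn.Linear(16, 4).to(local_rank)
ddp_model = DDP(net, device_ids=[local_rank])

# Gradients are cast to float16 before the all-reduce, roughly halving
# the communication volume, and cast back afterwards.
ddp_model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
```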