@@ -9,8 +9,8 @@ Prerequisites:
 - `RPC API documents <https://pytorch.org/docs/master/rpc.html>`__

 This tutorial uses two simple examples to demonstrate how to build distributed
-training with the `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__
-package which is first introduced as a prototype feature in PyTorch v1.4.
+training with the `torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__
+package, which was first introduced as an experimental feature in PyTorch v1.4.
 Source code of the two examples can be found in
 `PyTorch examples <https://github.com/pytorch/examples>`__.

@@ -36,19 +36,19 @@ paradigms. For example:
  machines.


-The `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ package
-can help with the above scenarios. In case 1, `RPC <https://pytorch.org/docs/master/rpc.html#rpc>`__
-and `RRef <https://pytorch.org/docs/master/rpc.html#rref>`__ allow sending data
+The `torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ package
+can help with the above scenarios. In case 1, `RPC <https://pytorch.org/docs/stable/rpc.html#rpc>`__
+and `RRef <https://pytorch.org/docs/stable/rpc.html#rref>`__ allow sending data
 from one worker to another while easily referencing remote data objects. In
-case 2, `distributed autograd <https://pytorch.org/docs/master/rpc.html#distributed-autograd-framework>`__
-and `distributed optimizer <https://pytorch.org/docs/master/rpc.html#module-torch.distributed.optim>`__
+case 2, `distributed autograd <https://pytorch.org/docs/stable/rpc.html#distributed-autograd-framework>`__
+and `distributed optimizer <https://pytorch.org/docs/stable/rpc.html#module-torch.distributed.optim>`__
 make executing the backward pass and the optimizer step behave as if it were
 local training. In the next two sections, we will demonstrate the APIs of
-`torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ using a
+`torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ using a
 reinforcement learning example and a language model example. Please note that
 this tutorial does not aim to build the most accurate or efficient models for
 the given problems; instead, its main goal is to show how to use the
-`torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ package to
+`torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ package to
 build distributed training applications.


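As a minimal sketch of these primitives (not part of the tutorial's examples;
the two-process setup, worker names, and use of ``torch.add`` are illustrative
assumptions), ``rpc_sync`` returns a remote call's result directly, while
``remote`` returns an ``RRef`` whose value can be fetched later with
``to_here()``:

.. code:: python

    import os

    import torch
    import torch.distributed.rpc as rpc
    import torch.multiprocessing as mp

    def run(rank, world_size):
        # Worker names are arbitrary; rank 0 acts as the caller here.
        rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
        if rank == 0:
            # rpc_sync blocks until the remote function returns its result.
            ret = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 1))
            # remote returns an RRef owned by worker1; to_here() fetches the
            # value only when the caller actually needs it.
            rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 2))
            print(ret, rref.to_here())
        rpc.shutdown()  # blocks until all outstanding RPC work completes

    if __name__ == "__main__":
        # Single-machine rendezvous; the address and port are assumptions.
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "29500"
        mp.spawn(run, args=(2,), nprocs=2)
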
@@ -289,10 +289,10 @@ observers. The agent serves as master by repeatedly calling ``run_episode`` and
 ``finish_episode`` until the running reward surpasses the reward threshold
 specified by the environment. All observers passively wait for commands
 from the agent. The code is wrapped by
-`rpc.init_rpc <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.init_rpc>`__ and
-`rpc.shutdown <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.shutdown>`__,
+`rpc.init_rpc <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.init_rpc>`__ and
+`rpc.shutdown <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.shutdown>`__,
 which initialize and terminate RPC instances, respectively. More details are
-available in the `API page <https://pytorch.org/docs/master/rpc.html>`__.
+available in the `API page <https://pytorch.org/docs/stable/rpc.html>`__.


 .. code:: python
@@ -442,7 +442,7 @@ takes a GPU tensor, you need to move it to the proper device explicitly.
 With the above sub-modules, we can now piece them together using RPC to
 create an RNN model. In the code below ``ps`` represents a parameter server,
 which hosts parameters of the embedding table and the decoder. The constructor
-uses the `remote <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.remote>`__
+uses the `remote <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.remote>`__
 API to create an ``EmbeddingTable`` object and a ``Decoder`` object on the
 parameter server, and locally creates the ``LSTM`` sub-module. During the
 forward pass, the trainer uses the ``EmbeddingTable`` ``RRef`` to find the
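
A minimal sketch of such a constructor follows, with plain ``nn.Embedding``
and ``nn.Linear`` standing in for the tutorial's ``EmbeddingTable`` and
``Decoder`` classes (the constructor arguments are assumptions):

.. code:: python

    import torch.nn as nn
    import torch.distributed.rpc as rpc

    class RNNModel(nn.Module):
        def __init__(self, ps, ntoken, ninp, nhid, nlayers):
            super().__init__()
            # remote() constructs each module on the parameter server "ps"
            # and returns an RRef pointing to the remote instance.
            self.emb_table_rref = rpc.remote(ps, nn.Embedding, args=(ntoken, ninp))
            self.decoder_rref = rpc.remote(ps, nn.Linear, args=(nhid, ntoken))
            # The LSTM sub-module is created locally on the trainer.
            self.rnn = nn.LSTM(ninp, nhid, nlayers)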