
Commit 8b0894e
Add github links to distributed tutorials (#1884)
* Added github link to dist_overview
* Added github link to ddp_tutorial
* Added github link to dist_tuto
* Added github link to FSDP_tutorial
* Added github link to process_group_cpp_extension_tutorial
* Added github link to rpc_tutorial
* Added github link to rpc_param_server_tutorial
* Added github link to dist_pipeline_parallel_tutorial
* Added github link to rpc_async_execution
* Added github link to rpc_ddp_tutorial
* Added github link to generic_join
1 parent ce86703 commit 8b0894e

11 files changed: 29 additions & 2 deletions
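Every hunk below adds the same two-line reStructuredText ``note`` directive near the top of a tutorial, linking to that tutorial's source file on GitHub. A representative instance, taken verbatim from the hunks that follow (``<path-to-tutorial>`` is a placeholder introduced here; each hunk substitutes its own file path):

    .. note::
       View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/<path-to-tutorial>.rst>`__.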

advanced_source/generic_join.rst

Lines changed: 4 additions & 1 deletion
@@ -3,6 +3,9 @@ Distributed Training with Uneven Inputs Using the Join Context Manager
 
 **Author**\ : `Andrew Gu <https://github.com/andwgu>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/advanced_source/generic_join.rst>`__.
+
 .. note:: ``Join`` is introduced in PyTorch 1.10 as a prototype feature. This
    API is subject to change.
 

@@ -444,4 +447,4 @@ Some key points to highlight:
 .. _Shard Optimizer States with ZeroRedundancyOptimizer: https://pytorch.org/tutorials/recipes/zero_redundancy_optimizer.html
 .. _DistributedDataParallel: https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html
 .. _join(): https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.join
-.. _ZeroRedundancyOptimizer: https://pytorch.org/docs/stable/distributed.optim.html
+.. _ZeroRedundancyOptimizer: https://pytorch.org/docs/stable/distributed.optim.html

advanced_source/rpc_ddp_tutorial.rst

Lines changed: 2 additions & 0 deletions
@@ -2,6 +2,8 @@ Combining Distributed DataParallel with Distributed RPC Framework
 =================================================================
 **Authors**: `Pritam Damania <https://github.com/pritamdamania87>`_ and `Yi Wang <https://github.com/SciPioneer>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/advanced_source/rpc_ddp_tutorial.rst>`__.
 
 This tutorial uses a simple example to demonstrate how you can combine
 `DistributedDataParallel <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__ (DDP)

beginner_source/dist_overview.rst

Lines changed: 2 additions & 0 deletions
@@ -2,6 +2,8 @@ PyTorch Distributed Overview
 ============================
 **Author**: `Shen Li <https://mrshenli.github.io/>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/beginner_source/dist_overview.rst>`__.
 
 This is the overview page for the ``torch.distributed`` package. The goal of
 this page is to categorize documents into different topics and briefly

intermediate_source/FSDP_tutorial.rst

Lines changed: 2 additions & 0 deletions
@@ -3,6 +3,8 @@ Getting Started with Fully Sharded Data Parallel(FSDP)
 
 **Author**: `Hamid Shojanazeri <https://github.com/HamidShojanazeri>`__, `Yanli Zhao <https://github.com/zhaojuanmao>`__, `Shen Li <https://mrshenli.github.io/>`__
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/FSDP_tutorial.rst>`__.
 
 Training AI models at a large scale is a challenging task that requires a lot of compute power and resources.
 It also comes with considerable engineering complexity to handle the training of these very large models.

intermediate_source/ddp_tutorial.rst

Lines changed: 3 additions & 0 deletions
@@ -4,6 +4,9 @@ Getting Started with Distributed Data Parallel
 
 **Edited by**: `Joe Zhu <https://github.com/gunandrose4u>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/ddp_tutorial.rst>`__.
+
 Prerequisites:
 
 - `PyTorch Distributed Overview <../beginner/dist_overview.html>`__

intermediate_source/dist_pipeline_parallel_tutorial.rst

Lines changed: 3 additions & 0 deletions
@@ -2,6 +2,9 @@ Distributed Pipeline Parallelism Using RPC
 ==========================================
 **Author**: `Shen Li <https://mrshenli.github.io/>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/dist_pipeline_parallel_tutorial.rst>`__.
+
 Prerequisites:
 
 - `PyTorch Distributed Overview <../beginner/dist_overview.html>`__

intermediate_source/dist_tuto.rst

Lines changed: 3 additions & 0 deletions
@@ -2,6 +2,9 @@ Writing Distributed Applications with PyTorch
 =============================================
 **Author**: `Séb Arnold <https://seba1511.com>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/dist_tuto.rst>`__.
+
 Prerequisites:
 
 - `PyTorch Distributed Overview <../beginner/dist_overview.html>`__

intermediate_source/process_group_cpp_extension_tutorial.rst

Lines changed: 2 additions & 0 deletions
@@ -3,6 +3,8 @@ Customize Process Group Backends Using Cpp Extensions
 
 **Author**: `Feng Tian <https://github.com/ftian1>`__, `Shen Li <https://mrshenli.github.io/>`__
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/process_group_cpp_extension_tutorial.rst>`__.
 
 Prerequisites:
 

intermediate_source/rpc_async_execution.rst

Lines changed: 3 additions & 1 deletion
@@ -2,6 +2,8 @@ Implementing Batch RPC Processing Using Asynchronous Executions
 ===============================================================
 **Author**: `Shen Li <https://mrshenli.github.io/>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/rpc_async_execution.rst>`__.
 
 Prerequisites:
 

@@ -520,4 +522,4 @@ Learn More
 - `Batch-Updating Parameter Server Source Code <https://github.com/pytorch/examples/blob/master/distributed/rpc/batch/parameter_server.py>`__
 - `Batch-Processing CartPole Solver <https://github.com/pytorch/examples/blob/master/distributed/rpc/batch/reinforce.py>`__
 - `Distributed Autograd <https://pytorch.org/docs/master/rpc.html#distributed-autograd-framework>`__
-- `Distributed Pipeline Parallelism <dist_pipeline_parallel_tutorial.html>`__
+- `Distributed Pipeline Parallelism <dist_pipeline_parallel_tutorial.html>`__

intermediate_source/rpc_param_server_tutorial.rst

Lines changed: 3 additions & 0 deletions
@@ -4,6 +4,9 @@ Implementing a Parameter Server Using Distributed RPC Framework
 
 **Author**\ : `Rohan Varma <https://github.com/rohan-varma>`_
 
+.. note::
+   View the source code for this tutorial in `github <https://github.com/pytorch/tutorials/blob/master/intermediate_source/rpc_param_server_tutorial.rst>`__.
+
 Prerequisites:
 
 - `PyTorch Distributed Overview <../beginner/dist_overview.html>`__
