# we can explore a couple of examples:

######################################################################
-# Saving the inputs
+# Saving the Inputs
# -------------------------------------------------------------------
# Consider this simple squaring function. It saves an input tensor
# for backward. Double backward works automatically when autograd
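For reference, here is a minimal sketch of the squaring function this hunk's prose describes, using the standard ``torch.autograd.Function`` pattern (the class name ``Square`` is an illustration; the file's actual definition sits outside this hunk):

.. code-block:: python

    import torch

    class Square(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Save the input via save_for_backward so autograd can
            # track it when backward is itself differentiated.
            ctx.save_for_backward(x)
            return x ** 2

        @staticmethod
        def backward(ctx, grad_out):
            x, = ctx.saved_tensors
            # d(x^2)/dx = 2x. This uses only differentiable ops, so
            # double backward works with no extra effort.
            return grad_out * 2 * x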
@@ -168,7 +168,7 @@ def sinh(x):

######################################################################
# Use torchviz to visualize the graph:
-
+#
# .. code-block:: python
#
#    out = sinh(x)
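The hunk cuts this code block short. A hedged completion of the torchviz call (``make_dot`` is torchviz's entry point; the exact arguments here are an assumption):

.. code-block:: python

    import torch
    import torchviz

    x = torch.randn(3, requires_grad=True)
    out = sinh(x)
    # show_saved=True also renders tensors stashed with
    # save_for_backward, making the saved intermediates visible.
    torchviz.make_dot(out, params={"x": x, "out": out},
                      show_attrs=True, show_saved=True)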
@@ -205,7 +205,7 @@ def backward(ctx, grad_out):
######################################################################
# Use torchviz to visualize the graph. Notice that `grad_x` is not
# part of the graph!
-
+#
# .. code-block:: python
#
#    out = SinhBad.apply(x)
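For context on why ``grad_x`` drops out of the graph: a function that stashes intermediates as plain attributes on ``ctx`` instead of using ``save_for_backward`` hides them from autograd. A sketch of that anti-pattern (the exact body of ``SinhBad`` is an assumption):

.. code-block:: python

    class SinhBad(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            expx = torch.exp(x)
            expnegx = torch.exp(-x)
            # Anti-pattern: plain attributes on ctx are invisible to
            # autograd, so the backward computation cannot be
            # re-traced for double backward.
            ctx.expx = expx
            ctx.expnegx = expnegx
            return (expx - expnegx) / 2

        @staticmethod
        def backward(ctx, grad_out):
            return grad_out * (ctx.expx + ctx.expnegx) / 2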
@@ -216,7 +216,7 @@ def backward(ctx, grad_out):
#    :width: 232

######################################################################
-# When Backward is not able to be Tracked by Autograd
+# When Backward is not Tracked
# -------------------------------------------------------------------
# Finally, let's consider an example where it may not be possible for
# autograd to track gradients for a function's backward at all.
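When backward itself cannot be traced, the usual remedy is to wrap it in a second custom Function whose backward is written out by hand. A sketch of that pattern, with names chosen to match the ``Cube.apply(x)`` call later in this diff (the actual implementation may differ):

.. code-block:: python

    class CubeBackward(torch.autograd.Function):
        @staticmethod
        def forward(ctx, grad_out, x):
            ctx.save_for_backward(x, grad_out)
            # An untrackable computation would live here; we return
            # the gradient of x**3 explicitly.
            return grad_out * 3 * x ** 2

        @staticmethod
        def backward(ctx, sav_grad_out):
            x, grad_out = ctx.saved_tensors
            # Hand-written gradients of grad_out * 3 * x**2 with
            # respect to grad_out and x.
            dgrad_out = sav_grad_out * 3 * x ** 2
            dx = sav_grad_out * grad_out * 6 * x
            return dgrad_out, dx

    class Cube(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x ** 3

        @staticmethod
        def backward(ctx, grad_out):
            x, = ctx.saved_tensors
            return CubeBackward.apply(grad_out, x)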
@@ -269,7 +269,7 @@ def backward(ctx, grad_out):

######################################################################
# Use torchviz to visualize the graph:
-
+#
# .. code-block:: python
#
#    out = Cube.apply(x)
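This final block is likewise truncated. One hedged way to check that the hand-wired backward is reachable is to differentiate once with ``create_graph=True`` and visualize the resulting gradient (the tutorial's actual call may differ):

.. code-block:: python

    out = Cube.apply(x)
    grad_x, = torch.autograd.grad(out.sum(), x, create_graph=True)
    # If backward is tracked, grad_x has a grad_fn and the rendered
    # graph extends through CubeBackward.
    torchviz.make_dot(grad_x, params={"x": x, "grad_x": grad_x})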