.. prototype_source/ios_gpu_workflow.rst

This tutorial introduces the steps to run your models on iOS GPU. We'll be using the mobilenetv2 model as an example.

Model Preparation
-----------------

Since GPUs consume weights in a different order, the first step is to convert our TorchScript model to a GPU-compatible model. This step is also known as "prepacking".
PyTorch with Metal
^^^^^^^^^^^^^^^^^^

To do that, we'll install a PyTorch nightly binary that includes the Metal backend. Go ahead and run the command below.
Alternatively, you can build a custom PyTorch binary from source that includes the Metal backend. Just check out the PyTorch source code from GitHub and run the command below.
The command above will build a custom PyTorch binary from master. The ``install`` argument simply tells ``setup.py`` to override the existing PyTorch installation on your desktop. Once the build finishes, open another terminal and check the PyTorch version to see whether the installation was successful. At the time of writing, the version is ``1.8.0a0+41237a4``. You might see different numbers depending on when you check out the code from master, but it should be greater than 1.7.0.
.. code:: python

   import torch
   torch.__version__ # 1.8.0a0+41237a4
Metal Compatible Model
^^^^^^^^^^^^^^^^^^^^^^

The next step is to convert the mobilenetv2 TorchScript model to a Metal compatible model. We'll be leveraging the ``optimize_for_mobile`` API from the ``torch.utils`` module, as shown below.

Those are all the ops we need to run the mobilenetv2 model on iOS GPU. Cool!

Use PyTorch iOS library with Metal
----------------------------------

The PyTorch iOS library with Metal support, ``LibTorch-Lite-Nightly``, is available in CocoaPods. You can read the `Using the Nightly PyTorch iOS Libraries in CocoaPods <https://pytorch.org/mobile/ios/#using-the-nightly-pytorch-ios-libraries-in-cocoapods>`_ section from the iOS tutorial for more details about its usage.
We also have the `HelloWorld-Metal example <https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld-Metal>`_ that shows how to connect all the pieces together.

First, make sure you have deleted the **build** folder from the "Model Preparation" step.

Note that ``IOS_ARCH`` tells the script to build an arm64 version of Libtorch-Lite. This is because in PyTorch, Metal is only available for iOS devices with the Apple A9 chip or above. Once the build finishes, follow the `Build PyTorch iOS libraries from source <https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source>`_ section from the iOS tutorial to set up the Xcode settings properly. Don't forget to copy ``./mobilenetv2_metal.pt`` to your Xcode project and modify the model file path accordingly.
Next, we need to make some changes in ``TorchModule.mm``.
We simply call ``.metal()`` to move our input tensor from CPU to GPU, and then call ``.cpu()`` to move the result back. Internally, ``.metal()`` copies the input data from the CPU buffer to a GPU buffer with a GPU-compatible memory format. When ``.cpu()`` is invoked, the GPU command buffer will be flushed and synced. After ``forward`` finishes, the final result will be copied from the GPU buffer back to a CPU buffer.
The last step is to add ``Accelerate.framework`` and ``MetalPerformanceShaders.framework`` to your Xcode project: open your project in Xcode, go to your project target's "General" tab, locate the "Frameworks, Libraries and Embedded Content" section, and click the "+" button.
If everything works fine, you should be able to see the inference results on your phone.