Those are all the ops we need to run the mobilenetv2 model on iOS GPU. Cool! Now that you have the ``mobilenetv2_metal.pt`` saved on your disk, let's move on to the iOS part.

Use PyTorch iOS Library with Metal
----------------------------------

The PyTorch iOS library with Metal support, `LibTorch-Lite-Nightly`, is available on CocoaPods. See the `Using the Nightly PyTorch iOS Libraries in CocoaPods <https://pytorch.org/mobile/ios/#using-the-nightly-pytorch-ios-libraries-in-cocoapods>`_ section of the iOS tutorial for details on its usage.
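
For example, pulling the pod into your app's Podfile looks like this (a minimal sketch; the ``HelloWorld`` target name is illustrative, substitute your own app target):

.. code:: ruby

   # Podfile for an app using the nightly Metal-enabled PyTorch iOS library
   target 'HelloWorld' do
     pod 'LibTorch-Lite-Nightly'
   end

Run ``pod install`` afterwards and open the generated ``.xcworkspace`` rather than the ``.xcodeproj``.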

We also have the `HelloWorld-Metal example <https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld-Metal>`_ that shows how to connect all the pieces together.

Note that if you run the HelloWorld-Metal example, you may notice that the results are slightly different from the `results <https://pytorch.org/mobile/ios/#install-libtorch-via-cocoapods>`_ we got from the CPU model in the iOS tutorial:

.. code:: shell

   - timber wolf, grey wolf, gray wolf, Canis lupus
   - malamute, malemute, Alaskan malamute
   - Eskimo dog, husky


This is because, by default, Metal computes in fp16 rather than fp32. The precision loss is expected.
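
The size of that precision loss can be illustrated outside of Metal with plain Python (a hedged illustration of fp16 rounding in general, not Metal-specific code):

.. code:: python

   import struct

   def fp16_round_trip(x: float) -> float:
       """Round a Python float through IEEE 754 half precision (fp16)."""
       # 'e' is the half-precision format character in the struct module.
       return struct.unpack('e', struct.pack('e', x))[0]

   print(fp16_round_trip(0.1))  # ~0.0999755859375, not exactly 0.1

Small relative errors like this accumulate through the layers of a network, which is why the top predicted classes can shuffle slightly between the fp16 and fp32 runs.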
Use LibTorch-Lite Built from Source
-----------------------------------


You can also build a custom LibTorch-Lite from source and use it to run GPU models on iOS with Metal. In this section, we'll use the `HelloWorld example <https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld>`_ to demonstrate this process.

First, make sure you have deleted the **build** folder from the "Model Preparation" step in the PyTorch root directory. Then run the command below:
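
The build invocation takes the following shape (a sketch: ``USE_PYTORCH_METAL=1`` enables the Metal backend and ``IOS_ARCH=arm64`` selects the architecture; check the current ``scripts/build_ios.sh`` in your PyTorch checkout for the exact flags your version expects):

.. code:: shell

   IOS_ARCH=arm64 USE_PYTORCH_METAL=1 ./scripts/build_ios.sh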

Note that ``IOS_ARCH`` tells the script to build an arm64 version of LibTorch-Lite. This is because in PyTorch, Metal is only available on iOS devices with an Apple A9 chip or above. Once the build finishes, follow the `Build PyTorch iOS libraries from source <https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source>`_ section of the iOS tutorial to set up the Xcode settings properly. Don't forget to copy ``./mobilenetv2_metal.pt`` to your Xcode project and modify the model file path accordingly.

Next, we need to make some changes in ``TorchModule.mm``.
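
The change follows this pattern (a sketch modeled on the HelloWorld example; the method name ``predictImage:``, the tensor shape, and the ``_impl`` module handle are illustrative assumptions, not the exact code in the repo):

.. code:: objective-c

   - (NSArray<NSNumber*>*)predictImage:(void*)imageBuffer {
     // Inference-only mode; no autograd bookkeeping is needed on device.
     c10::InferenceMode mode;
     // .metal() moves the input tensor from the CPU to the GPU (Metal) backend.
     at::Tensor tensor =
         torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat).metal();
     // Run the model, then bring the result back to the CPU with .cpu().
     at::Tensor outputTensor = _impl.forward({tensor}).toTensor().cpu();
     // ... convert outputTensor to an NSArray<NSNumber*> as in the CPU version.
     return nil;
   }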

As you can see, we simply call ``.metal()`` to move our input tensor from CPU to GPU.

The last step is to add the `Accelerate.framework` and the `MetalPerformanceShaders.framework` to your Xcode project (open your project in Xcode, go to your project target's "General" tab, locate the "Frameworks, Libraries and Embedded Content" section, and click the "+" button).

If everything works fine, you should be able to see the inference results on your phone.