Cuda-Enabled Pip Package (Linux x86-64 only) #3608
… github actions workflow for pycolmap CUDA Linux build, running workflows only when triggered
… of settings in pyproject.toml
Thank you for your contribution. IIUC, you suggest having a different package name for CUDA wheels. How would you then handle different versions of CUDA? PyTorch only uploads a single CUDA version to PyPI (12.8 currently) and hosts other versions and their dependencies externally (example: https://download.pytorch.org/whl/cu129). NVIDIA does have some CUDA shared libraries exposed as PyPI packages; are they useful for us?
That is not really tackled so far. One way to improve this might be to dynamically modify the pyproject.toml. Offering multiple CUDA versions would probably go even further than this. Maybe one option would be to build different wheels using container images with different CUDA versions, with different versions of those NVIDIA PyPI packages specified as dependencies in the pyproject.toml.

PyTorch also seems to build multiple wheels for multiple CUDA versions. However, they do not differentiate them by name. Instead, it seems like they host packages for different CUDA versions in different package indices. This probably requires self-hosting those alternative package indices, though.

I think an underlying problem is that wheel tags include only operating system and CPU architecture information, so dealing with things like different CUDA versions might always require getting a bit creative. However, I do not have much experience with building CUDA-enabled Python packages, so I would be more than happy to learn about alternative ideas.
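A minimal sketch of the kind of dynamic pyproject.toml modification discussed here. The function name rename_package and the regex-based approach are illustrative assumptions, not the actual script used in this PR:

```python
import re
from pathlib import Path


def rename_package(pyproject_path: str, new_name: str) -> None:
    """Rewrite the [project] name field so the CUDA wheel gets its own name."""
    path = Path(pyproject_path)
    text = path.read_text()
    # Replace only the first `name = "..."` line, i.e. the [project] table entry.
    text = re.sub(r'^name\s*=\s*".*?"', f'name = "{new_name}"',
                  text, count=1, flags=re.MULTILINE)
    path.write_text(text)
```

Such a script could be invoked in the workflow right before cibuildwheel runs, so the non-CUDA build is left untouched.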
Sure, I have basically added the Python script that you suggested above.
sarlinpe left a comment:
Thank you! The CUDA build increases the size of the pycolmap .so by 2x (54MB to 110MB), I guess that this is expected?
I guess an increase in size is expected. However, for me it looks like the .whl is only 62 MB in size; I think this can be seen in the summary at https://github.com/colmap/colmap/actions/runs/18450241678/attempts/2#summary-52636181721. Also, PyPI has an upper limit of 100 MB per file by default (https://docs.pypi.org/project-management/storage-limits/), so publishing something larger than this limit might require special permission from PyPI.
I was referring to the .so file (obtained by unpacking the wheel, which is just a zip file); the wheel size is indeed fine for PyPI (though 3x larger).
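Since a wheel is just a zip archive, the sizes of the bundled .so files can be inspected without unpacking it to disk. A small illustrative sketch (shared_lib_sizes is a hypothetical helper, not part of this PR):

```python
import zipfile


def shared_lib_sizes(wheel_path: str) -> dict[str, int]:
    """Return the uncompressed size in bytes of each .so bundled in a wheel."""
    with zipfile.ZipFile(wheel_path) as wheel:
        return {info.filename: info.file_size
                for info in wheel.infolist()
                if info.filename.endswith(".so")}
```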
@ahojnnes @B1ueber2y any opinion against merging this? |
B1ueber2y left a comment:
Thanks! Very nice feature.
Co-authored-by: Johannes Schönberger <[email protected]>
@Tobias314 Thank you for your work; we now have pycolmap-cuda12 on PyPI: https://pypi.org/project/pycolmap-cuda12/ Would you mind removing https://pypi.org/project/pycolmap-cuda/ (I believe it's owned by you) to prevent any confusion? Thank you!
Great to hear! |
Adds an additional job run to the build-pycolmap.yml GitHub action to build a Linux x86-64 Python wheel with CUDA support, named pycolmap_cuda. This is achieved by the following changes:

- Uses the sameli/manylinux_2_34_x86_64_cuda_12.8 container image for cibuildwheel.
- Dynamically modifies the pyproject.toml file during the CUDA build to name the resulting package pycolmap_cuda, avoiding a name collision with the non-CUDA pycolmap package.

TODO:
pycolmap-cuda-12 is used
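As a rough sketch of what such a job could look like in build-pycolmap.yml (an illustrative assumption, not the exact workflow added by this PR), cibuildwheel's CIBW_MANYLINUX_X86_64_IMAGE setting can point at the CUDA-enabled container image mentioned above:

```yaml
  build-pycolmap-cuda:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build CUDA wheel
        uses: pypa/[email protected]
        env:
          # CUDA-enabled manylinux image (image name taken from the PR description).
          CIBW_MANYLINUX_X86_64_IMAGE: sameli/manylinux_2_34_x86_64_cuda_12.8
          CIBW_ARCHS_LINUX: x86_64
```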