
Conversation

@Looong01

All test passed. It could be merged.

@codecov

codecov bot commented Aug 15, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 78.32%. Comparing base (bdee3ae) to head (b2f1dc4).
⚠️ Report is 65 commits behind head on master.

❌ Your project check has failed because the head coverage (78.32%) is below the target coverage (80.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #507      +/-   ##
==========================================
+ Coverage   70.13%   78.32%   +8.18%     
==========================================
  Files          37       53      +16     
  Lines        1671     2325     +654     
  Branches        0      189     +189     
==========================================
+ Hits         1172     1821     +649     
- Misses        499      502       +3     
- Partials        0        2       +2     


@Looong01
Author

Still needs some changes. Please wait...

@gpicciuca

Bumping! Why is there no feedback from the maintainers? No interest in supporting ROCm? I'd love to see this merged and have proper wheels released.

@akihironitta
Member

Hi @Looong01, thank you for working on this! We would definitely be happy to support this, but we would need to have CI set up for it. Would it be possible to add that in this PR?

> Why is there no feedback from the maintainers? No interest in supporting ROCm?

@gpicciuca I think we would definitely be happy to add support for ROCm, but we don't have access to AMD GPUs and don't have a good way to test ROCm builds at the moment. The number of AMD GPU users is still much smaller than that of NVIDIA GPU users, and given that we're a very small team, I don't see it as feasible for us to keep maintaining ROCm builds in the coming months or years (unless, e.g., the AMD team can provide assistance).
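For reference, a minimal sketch of what such a CI job could look like, using AMD's public ROCm dev Docker image. Note that the image tag, the wheel index URL, and the ROCm version are assumptions that would need verifying, and a self-hosted runner with an AMD GPU would be required to actually run GPU tests; a plain Ubuntu runner can only exercise the compile step:

```yaml
# Hypothetical GitHub Actions job for ROCm builds (tags/URLs are assumptions).
name: ROCm build

on: [push, pull_request]

jobs:
  build-rocm:
    runs-on: ubuntu-latest  # compile-only; GPU tests need a self-hosted AMD runner
    container:
      image: rocm/dev-ubuntu-22.04:6.4  # AMD's public ROCm dev image (tag assumed)
    steps:
      - uses: actions/checkout@v4
      - name: Install ROCm PyTorch and build
        run: |
          # Index URL assumed; check the PyTorch install matrix for the exact path.
          pip install torch --index-url https://download.pytorch.org/whl/rocm6.4
          pip install -e .
```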

@gpicciuca

> Why is there no feedback from the maintainers? No interest in supporting ROCm?

> @gpicciuca I think we would definitely be happy to add support for ROCm, but we don't have access to AMD GPUs and don't have a good way to test ROCm builds at the moment. The number of AMD GPU users is still much smaller than that of NVIDIA GPU users, and given that we're a very small team, I don't see it as feasible for us to keep maintaining ROCm builds in the coming months or years (unless, e.g., the AMD team can provide assistance).

I could probably lend a hand here, as I own an RX 9070 XT. I'm using Windows 10 + WSL2 with ROCm 6.4 installed according to AMD's instructions.

Also, have you tried pinging an AMD dev? I think they're active in some PyTorch repositories, so they could potentially contribute to PyG as well. But don't take my word for granted, as I don't work for AMD.
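As a small sanity check for a setup like the one above, the snippet below (a sketch relying only on PyTorch's public `torch.version.hip` attribute, which is set on ROCm builds and `None` on CUDA/CPU builds) reports whether the installed wheel targets ROCm:

```python
def rocm_hip_version():
    """Return PyTorch's HIP version string on ROCm builds, else None.

    ROCm wheels populate torch.version.hip (e.g. "6.4.x"), while CUDA/CPU
    wheels leave it as None; a missing torch install also returns None.
    """
    try:
        import torch
    except ImportError:
        return None
    return getattr(torch.version, "hip", None)


if __name__ == "__main__":
    hip = rocm_hip_version()
    print(f"ROCm build detected: HIP {hip}" if hip
          else "Not a ROCm build (or torch not installed)")
```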

@Looong01
Author

Looong01 commented Oct 4, 2025

Sorry, the matrix multiplication kernel still needs to be implemented with the Composable Kernel (CK) library instead of hipBLASLt. CK is hard to learn.

