Tags: pytorch/pytorch
[pallas backend] implement gpu tiles/mask for power of 2 (#167584) Pull Request resolved: #167584 Approved by: https://github.com/jansel
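The PR above gives no details here, but the power-of-two tile/mask technique its title names is commonly implemented along these lines. This is a hypothetical sketch of the general pattern, not code from the PR; the names `next_power_of_2` and `masked_tile_sum` are made up for illustration:

```python
# Hypothetical sketch of the power-of-two tile/mask pattern: GPU kernels
# often require tile sizes that are powers of two, so a dimension of
# arbitrary length n is padded up to the next power of two and a boolean
# mask marks which lanes hold real data.

def next_power_of_2(n: int) -> int:
    """Smallest power of two >= n (requires n >= 1)."""
    return 1 << (n - 1).bit_length()

def masked_tile_sum(values: list[float]) -> float:
    """Sum `values` as one padded power-of-two tile, ignoring pad lanes."""
    n = len(values)
    tile = next_power_of_2(n)
    padded = values + [0.0] * (tile - n)   # pad to power-of-two width
    mask = [i < n for i in range(tile)]    # True only for real lanes
    return sum(v for v, m in zip(padded, mask) if m)
```

Padding plus masking keeps the kernel shape static while the reduction still sees only the valid elements.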
[vision hash update] update the pinned vision hash (#167890) This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml). Update the pinned vision hash. Pull Request resolved: #167890 Approved by: https://github.com/pytorchbot
Improve char printing (#167899) This PR streams single characters directly instead of building temporary one-character strings. The changes were generated with (on fish shell) ``` sed -i -e 's/<< "\([^\\\']\)"/<< \'\1\'/g' (grep '<< "."' -r torch c10 aten -l) ``` and a few invalid changes were reverted afterwards. Pull Request resolved: #167899 Approved by: https://github.com/Skylion007
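The sed substitution above can be mirrored in Python to see what it matches: a double-quoted literal holding exactly one character (not a backslash or a quote) streamed with `<<` becomes a char literal. This re-expression is for illustration only and is not part of the PR:

```python
import re

# Same rewrite as the fish/sed one-liner above: `<< "x"` -> `<< 'x'`
# for any single character x that is not a backslash or a single quote,
# so the stream receives a char instead of a temporary string.
PATTERN = re.compile(r'<< "([^\\\'])"')

def quote_single_chars(line: str) -> str:
    return PATTERN.sub(r"<< '\1'", line)
```

Escape sequences such as `"\n"` are two source characters and are deliberately left untouched by the character class.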
[xpu][test] Migrated two test files to XPU (#166684) # Description Fixes #114850: ports the test-utils and schema-check tests to Intel GPU while keeping the original code style as much as possible. # Changes 1. Get the device type from the accelerator API via a get_devtype helper method. 2. Replace the requires-CUDA statements with device_type. 3. Add HAS_XPU and HAS_GPU checks to replace some of the existing device checks. Pull Request resolved: #166684 Approved by: https://github.com/ezyang, https://github.com/guangyey Co-authored-by: Yu, Guangye <[email protected]>
Only port test_custom_scan_op to XPU, to avoid breaking the MPS tests
Update on "[xpu][feature][Inductor XPU GEMM] Step 10/N: Enable XPU sycl-tla (Intel cutlass) backend." This PR switches the Inductor XPU backend scheduling from TritonScheduling to XPUCombinedScheduling, enabling support for both the CUTLASS and Triton backends. It also refactors test_cutlass_backend so it runs on XPU, and adds sycl-tla (Intel cutlass) to XPU CI. The CUTLASS XPU backend does not yet support epilogue fusion. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben [ghstack-poisoned]