
Tags: pytorch/pytorch

viable/strict/1763292167

[pallas backend] implement gpu tiles/mask for power of 2 (#167584)

Pull Request resolved: #167584
Approved by: https://github.com/jansel
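The commit title above mentions handling GPU tiles/masks for power-of-2 sizes. Pallas-style kernels typically want block shapes that are powers of two, so a common trick is to round a dimension up to the next power of two and mask out the padding lanes. A minimal pure-Python sketch of that idea (the helper names are illustrative, not the PR's actual code):

```python
def next_power_of_2(n: int) -> int:
    """Smallest power of two >= n (requires n >= 1)."""
    return 1 << (n - 1).bit_length()

def tile_and_mask(size: int):
    """Round `size` up to a power-of-2 tile and build a validity mask.

    Lanes at index >= size are padding and must be masked out so they do
    not contribute to the result (e.g. in a store or a reduction).
    """
    tile = next_power_of_2(size)
    mask = [i < size for i in range(tile)]
    return tile, mask

tile, mask = tile_and_mask(10)
print(tile, sum(mask))  # 16 valid-lane tile, 10 true entries
```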

viable/strict/1763286573

[vision hash update] update the pinned vision hash (#167890)

This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: #167890
Approved by: https://github.com/pytorchbot

trunk/2245d7d3b90162ae2958929a22c140537cfc4b42

Improve char printing (#167899)

This PR outputs chars to stream without building temporary strings.
The changes were generated with the following command (in the fish shell):
```fish
sed -i -e 's/<< "\([^\\\']\)"/<< \'\1\'/g' (grep '<< "."' -r torch c10 aten -l)
```
with a few invalid changes reverted by hand.
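The effect of the sed one-liner above can be replicated in Python with `re.sub`: rewrite a one-character C-string streamed with `<<` into a char literal, so no temporary `std::string` is built. The regex mirrors the sed pattern (a single character that is neither a backslash nor a quote); this is an illustrative reimplementation, not the PR's code:

```python
import re

# Matches `<< "X"` where X is one char that is not a backslash or quote,
# mirroring the sed pattern s/<< "\([^\\\']\)"/<< \'\1\'/g
PATTERN = re.compile(r'''<< "([^\\'])"''')

def char_streams(source: str) -> str:
    """Rewrite one-character string literals after `<<` into char literals."""
    return PATTERN.sub(r"<< '\1'", source)

# 'x' becomes a char literal; escapes and longer strings are left alone.
print(char_streams('os << "x" << "\\n" << "ab";'))
```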

Pull Request resolved: #167899
Approved by: https://github.com/Skylion007

trunk/98b94b90dda222b71bb07191def817873db4a977

[pallas backend] implement gpu tiles/mask for power of 2 (#167584)

Pull Request resolved: #167584
Approved by: https://github.com/jansel

trunk/5d99a795f54d6bf14e39ae12df58d760d4fd8984

[xpu][test] Migrated two test files to XPU (#166684)

# Description
Fixes #114850: port the test utils and schema-check tests to Intel GPU.
We enable Intel GPU support with the following changes, keeping as close to the original code style as possible:

# Changes
1. Get the device type from the accelerator via the get_devtype helper method.
2. Replace the requires_cuda statements with device_type.
3. Add HAS_XPU and HAS_GPU checks to replace some of the CUDA-only checks.
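The migration pattern described above — resolving the device type from whichever accelerator is present instead of hard-coding CUDA — can be sketched in plain Python. The flag values and the helper body here are illustrative stand-ins, not PyTorch's actual internals:

```python
# Illustrative availability flags standing in for torch's HAS_CUDA / HAS_XPU.
HAS_CUDA = False
HAS_XPU = True
HAS_GPU = HAS_CUDA or HAS_XPU  # tests gated on "any GPU" use this

def get_devtype() -> str:
    """Resolve the device type from the available accelerator, falling
    back to CPU, instead of hard-coding 'cuda' in every test."""
    if HAS_XPU:
        return "xpu"
    if HAS_CUDA:
        return "cuda"
    return "cpu"

device_type = get_devtype()
print(device_type)
```

A test then parametrizes on `device_type` and runs unchanged on CUDA or XPU machines.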

Pull Request resolved: #166684
Approved by: https://github.com/ezyang, https://github.com/guangyey

Co-authored-by: Yu, Guangye <[email protected]>

trunk/5cdbda140c1711b9fe8a6f999d1c465913e62345

[vision hash update] update the pinned vision hash (#167890)

This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: #167890
Approved by: https://github.com/pytorchbot

ciflow/xpu/167047

Only port test_custom_scan_op to XPU, to avoid breaking the MPS tests

ciflow/xpu/166436

Apply suggestion from @anmyachev

ciflow/xpu/161940

Update on "[xpu][feature][Inductor XPU GEMM] Step 10/N: Enable XPU sycl-tla (Intel cutlass) backend."


This PR officially replaces the Inductor XPU backend's scheduling, switching from TritonScheduling to XPUCombinedScheduling, which enables support for both the CUTLASS and Triton backends. It also refactors test_cutlass_backend to run on XPU, and sycl-tla (Intel cutlass) is added to XPU CI. Currently, the CUTLASS XPU backend does not yet support epilogue fusion.
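The combined-scheduling idea — one scheduler that routes each kernel to the specialized CUTLASS backend when it can handle the node and falls back to the general Triton backend otherwise — can be sketched as follows. The class and method names are hypothetical, not Inductor's real API:

```python
class TritonBackend:
    """General-purpose backend: handles any node."""
    def can_handle(self, node: str) -> bool:
        return True

    def codegen(self, node: str) -> str:
        return f"triton:{node}"

class CutlassBackend:
    """Specialized backend: only GEMM-shaped nodes (illustrative set)."""
    SUPPORTED = {"mm", "addmm"}

    def can_handle(self, node: str) -> bool:
        return node in self.SUPPORTED

    def codegen(self, node: str) -> str:
        return f"cutlass:{node}"

class CombinedScheduling:
    """Try the specialized backend first, fall back to the general one."""
    def __init__(self):
        self.backends = [CutlassBackend(), TritonBackend()]

    def codegen(self, node: str) -> str:
        for backend in self.backends:
            if backend.can_handle(node):
                return backend.codegen(node)
        raise RuntimeError(f"no backend for {node}")

sched = CombinedScheduling()
print(sched.codegen("mm"), sched.codegen("relu"))  # cutlass:mm triton:relu
```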

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]

ciflow/vllm/165274

Update vLLM commit hash