
Tags: ttrung149/pytorch

ciflow/xpu/135833

Update torch-xpu-ops pin (ATen XPU implementation) (pytorch#135647)

Release cycle for PyTorch 2.5:
1. Fixes a runtime error on Windows where torch_xpu_ops_unary_binary_kernels.dll fails to load because the binary size is too large.

Pull Request resolved: pytorch#135647
Approved by: https://github.com/EikanWang

ciflow/xpu/135827

Add memory_allocated API for device_interface

Unify the calling method.
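
For context on the tag above, a minimal sketch of how a per-backend allocated-memory query is typically invoked, using the long-standing torch.cuda API as the known reference point. The device_interface plumbing this commit touches is internal to PyTorch and not shown; the assumption is that it exposes an analogous memory_allocated hook for other backends such as XPU.

```python
import torch

# Minimal sketch: query the bytes of device memory currently held by tensors.
# torch.cuda.memory_allocated() is the established CUDA API; the commit above
# presumably adds a matching hook on the internal device_interface so other
# backends can be called uniformly (assumption, not confirmed by the source).
if torch.cuda.is_available():
    x = torch.empty(1024, 1024, device="cuda")  # ~4 MiB of float32
    print(torch.cuda.memory_allocated())        # bytes allocated on device 0
```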

ciflow/xpu/135818

Update

[ghstack-poisoned]

ciflow/xpu/135656

Update

[ghstack-poisoned]

ciflow/xpu/134556

reformat

ciflow/trunk/135882

Update

[ghstack-poisoned]

ciflow/trunk/135879

[Strobelight] Abstract out strobelight profile enablement

[ghstack-poisoned]

ciflow/trunk/135878

Allow common HOPs to be cached by fx graph cache

[ghstack-poisoned]

ciflow/trunk/135877

Allow fx graph caching of higher order operators (opt-in)

[ghstack-poisoned]

ciflow/trunk/135876

Add higher order operator name to the cache bypass exception

[ghstack-poisoned]
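
For context on the three fx-graph-cache tags above, a minimal sketch of enabling PyTorch's FX graph cache around a torch.compile call. TORCHINDUCTOR_FX_GRAPH_CACHE is the documented environment toggle for Inductor's cache; the opt-in caching of higher-order operators that these commits describe is internal and not exercised directly here, so treat that connection as an assumption.

```python
import os

# Enable Inductor's FX graph cache (documented env toggle) before compiling.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

import torch

@torch.compile
def f(x):
    # Compiled graphs become eligible for caching; per the commit subjects
    # above, eligibility extends to certain higher-order operators (opt-in),
    # which is not demonstrated in this sketch.
    return torch.sin(x) + torch.cos(x)

f(torch.randn(8))  # first call compiles; later identical runs can hit the cache
```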