
Tags: zmmwl/pytorch


ciflow/trunk/96182


Merge branch 'pytorch:master' into test-linalg

ciflow/trunk/96163

Retry CI android tests

ciflow/trunk/96142

forgot some things

ciflow/trunk/96121

Update on "Use maxint to bound integers."

We don't actually support arbitrary precision integers.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]
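A rough sketch of the bounding idea in this commit: clamp a symbolic integer's range to the int64 limits instead of treating it as unbounded. The `ValueRange` class and the clamping rule below are illustrative assumptions, not PyTorch's actual implementation:

```
# Illustrative sketch only: bound an "unbounded" symbolic integer by the
# int64 limits, since arbitrary-precision integers are not supported.
INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1

class ValueRange:  # hypothetical, not PyTorch's real class
    def __init__(self, lower=INT64_MIN, upper=INT64_MAX):
        # Clamp instead of allowing -inf/+inf bounds.
        self.lower = max(lower, INT64_MIN)
        self.upper = min(upper, INT64_MAX)

    def __repr__(self):
        return f"ValueRange({self.lower}, {self.upper})"

# A size-like symbol known only to be non-negative still gets a finite bound:
print(ValueRange(lower=0))  # ValueRange(0, 9223372036854775807)
```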

ciflow/trunk/96087

Update on "inductor: align baddbmm behavior with eager mode for beta=…

…0 and input has nan value"



For ```torch.baddbmm(input, mat1, mat2, beta=0)```, eager mode skips the ```input*beta``` term entirely when ```beta``` is zero (it always contributes zero; see https://pytorch.org/docs/stable/generated/torch.baddbmm.html?highlight=torch+baddbmm#torch.baddbmm), but Inductor does not: it produces a different value when ```input``` contains a ```nan``` or ```inf``` value:

```
import torch
import torch._dynamo

def fn_test(input, mat1, mat2):
    return torch.baddbmm(input, mat1, mat2, beta=0.0)

opt_fn = torch._dynamo.optimize("inductor")(fn_test)
a, b, c = [torch.rand((3, 2, 2)) for _ in range(3)]

real_out = fn_test(a, b, c)
a[:] = torch.nan  # with beta=0, poisoning the input should not change the result
compiled_out = opt_fn(a, b, c)

print(compiled_out)
print(real_out)
```
Before this PR, the output looks like this (the compiled result leaks ```nan``` while the eager result does not):

```
tensor([[[0.4272, 0.6037],
         [0.4279, 0.4219]],

        [[0.0838, 0.4873],
         [0.1210, 0.5516]],

        [[   nan,    nan],
         [   nan,    nan]]])
tensor([[[0.4272, 0.6037],
         [0.4279, 0.4219]],

        [[0.0838, 0.4873],
         [0.1210, 0.5516]],

        [[0.4985, 0.1072],
         [0.0857, 0.0186]]])

```
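A simplified reference for the eager semantics the PR aligns Inductor with: when ```beta == 0```, the ```input``` term is dropped entirely rather than multiplied by zero, so ```nan```/```inf``` in ```input``` never reaches the result. This is a sketch of the semantics, not Inductor's generated code:

```
import torch

def baddbmm_reference(input, mat1, mat2, *, beta=1.0, alpha=1.0):
    prod = alpha * torch.bmm(mat1, mat2)
    if beta == 0:
        # Skip the input term entirely; a naive `beta * input + prod`
        # would turn 0 * nan into nan.
        return prod
    return beta * input + prod

a = torch.full((3, 2, 2), torch.nan)
b, c = torch.rand(3, 2, 2), torch.rand(3, 2, 2)
assert not torch.isnan(baddbmm_reference(a, b, c, beta=0.0)).any()
```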

cc soumith voznesenskym penguinwu anijain2305 EikanWang jgong5 Guobing-Chen zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 desertfire

[ghstack-poisoned]

ciflow/trunk/96086

Update on "fix issue of baddbmm when out has nan value for beta=0"



Fix pytorch#96037.

cc jgong5 mingfeima sanchitintel ashokei jingxu10

[ghstack-poisoned]
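A minimal repro sketch of the class of bug being fixed here (the exact details are in pytorch#96037; the assumption below is that stale ```nan``` values in the ```out``` buffer leaked into the result when ```beta=0```):

```
import torch

inp = torch.rand(3, 2, 2)
mat1, mat2 = torch.rand(3, 2, 2), torch.rand(3, 2, 2)
out = torch.full((3, 2, 2), torch.nan)  # stale nan values in the out buffer
torch.baddbmm(inp, mat1, mat2, beta=0.0, out=out)
# With beta=0 the old contents of `out` must be overwritten, not mixed in.
assert not torch.isnan(out).any()
```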

ciflow/trunk/95940

Update on "Provide more informative kernel names in Inductor"


Before: `triton_fused_add_83_add_84_relu_13_squeeze_46_var_mean_15_14`
After: `triton_fused__native_batch_norm_legit_functional_convolution_relu_14`
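One plausible reading of the change, as a sketch: derive the fused kernel's name from the originating aten ops of the fused nodes rather than from per-buffer names like ```add_83```. The function below is hypothetical, not Inductor's actual naming code:

```
# Hypothetical sketch: name the kernel after the distinct originating aten
# ops of the fused nodes, capped at a few ops to keep names bounded.
def kernel_name(origin_ops: list[str], max_ops: int = 4) -> str:
    seen, ops = set(), []
    for op in origin_ops:
        if op not in seen:
            seen.add(op)
            ops.append(op)
    return "triton_fused_" + "_".join(sorted(ops)[:max_ops])

print(kernel_name(
    ["_native_batch_norm_legit_functional", "convolution", "relu", "relu"]
))
# triton_fused__native_batch_norm_legit_functional_convolution_relu
# (Inductor also appends a numeric kernel index, e.g. the trailing _14 above.)
```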

cc soumith voznesenskym yanboliang penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 desertfire

[ghstack-poisoned]

ciflow/trunk/95638

fix test case

ciflow/trunk/95461

Update on "[inductor] make `philox_rand_like` work with dynamic shapes"

[ghstack-poisoned]

ciflow/trunk/95416

enable sdpa test