DISABLED test_comprehensive_polygamma_polygamma_n_1_cuda_float16 (__main__.TestInductorOpInfoCUDA) #152470

Open
pytorch-bot bot opened this issue Apr 29, 2025 · 5 comments
Labels
  • high priority
  • module: flaky-tests: Problem is a flaky test in CI
  • module: inductor
  • oncall: pt2
  • skipped: Denotes a (flaky) test currently skipped in CI
  • triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


pytorch-bot bot commented Apr 29, 2025

Platforms: inductor

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.

Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will stay green even when the test fails, and the failures are harder to spot in the logs.
To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_comprehensive_polygamma_polygamma_n_1_cuda_float16 (a local search sketch follows this list)
  4. There should be several runs (flaky tests are rerun in CI) whose logs you can study.
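
As a convenience for step 3, here is a minimal local-search sketch in Python. It assumes you have already downloaded the raw workflow log; the file name logs.txt is a placeholder, not something the CI produces.

    # Hypothetical helper for step 3: search a downloaded raw log for the
    # failing test name. "logs.txt" is a placeholder for your local log file.
    test_name = "test_comprehensive_polygamma_polygamma_n_1_cuda_float16"

    with open("logs.txt", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if test_name in line:
                print(f"{lineno}: {line.rstrip()}")
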
Sample error message
Traceback (most recent call last):
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
    return test(*args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
    return fn(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
    fn(*args, **kwargs)
    ~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
    return fn(slf, *args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
    return fn(slf, *args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
    return fn(slf, *args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
    fn(*args, **kwargs)
    ~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
    fn(*args, **kwargs)
    ~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched
    return func(*newargs, **newkeywargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
    return func(*args, **kwds)
  File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
    return func(*args, **kwds)
  File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
    return func(*args, **kwds)
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
    raise e
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
    fn(self, device, dtype, op)
    ~~^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
    raise e
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
    self.check_model_gpu(
    ~~~~~~~~~~~~~~~~~~~~^
        fn,
        ^^^
    ...<2 lines>...
        **adjusted_kwargs,
        ^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
    return func(*args, **kwds)
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
    check_model(
    ~~~~~~~~~~~^
        self,
        ^^^^^
    ...<13 lines>...
        output_process_fn_grad=output_process_fn_grad,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
    actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
  File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
    return torch.autograd.grad(
           ~~~~~~~~~~~~~~~~~~~^
        flat_diff_results,
        ^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
        retain_graph=True,
        ^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad
    result = _engine_run_backward(
        outputs,
    ...<5 lines>...
        accumulate_grad=False,
    )
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        t_outputs, *args, **kwargs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
    )  # Calls into the C++ engine to run the backward pass
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply
    return user_fn(self, *args)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2163, in backward
    return impl_fn()
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2149, in impl_fn
    out = CompiledFunction._backward_impl(ctx, all_args)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2241, in _backward_impl
    CompiledFunction.compiled_bw = aot_config.bw_compiler(
                                   ~~~~~~~~~~~~~~~~~~~~~~^
        copy.deepcopy(bw_module), placeholder_list
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
    return self.compiler_fn(gm, example_inputs)
           ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
    disable(
    ~~~~~~~~
        bw_compiler_fn, reason="do not trace backward compiler function"
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    )(*args, **kwargs),
    ~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
    return fn(*args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
    return function(*args, **kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2201, in bw_compiler
    return inner_compile(
        gm,
    ...<5 lines>...
        boxed_forward_device_index=forward_device,
    )
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 726, in compile_fx_inner
    return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        gm,
        ^^^
        example_inputs,
        ^^^^^^^^^^^^^^^
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner
    raise InductorError(e, currentframe()).with_traceback(
        e.__traceback__
    ) from None
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner
    mb_compiled_graph = fx_codegen_and_compile(
        gm, example_inputs, inputs_to_check, **graph_kwargs
    )
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile
    return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile
    compiled_module = graph.compile_to_module()
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
    return self._compile_to_module()
           ~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
    mod = PyCodeCache.load_by_key_path(
        key,
    ...<2 lines>...
        attrs={**self.constants, **self.torchbind_constants},
    )
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
    mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
    exec(code, mod.__dict__, mod.__dict__)
    ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/tmpfcfyaga1/le/clerbfl5cvgzsi7mue3yy6hq2346emksh4svebdjys63r4nrnkef.py", line 76, in <module>
    async_compile.wait(globals())
    ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 448, in wait
    self._wait_futures(scope)
    ~~~~~~~~~~~~~~~~~~^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
    scope[key] = result.result()
                 ~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3508, in result
    return self.result_fn()
           ~~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
    kernel.precompile(
    ~~~~~~~~~~~~~~~~~^
        warm_cache_only=False,
        ^^^^^^^^^^^^^^^^^^^^^^
        reload_kernel=reload_kernel_in_parent,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        static_triton_bundle_key=CompiledTritonKernels.key(source_code),
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
    self._make_launchers()
    ~~~~~~~~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
    launchers.append(result.make_launcher())
                     ~~~~~~~~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
    self.reload_cubin_path()
    ~~~~~~~~~~~~~~~~~~~~~~^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
    raise RuntimeError(
        "Cubin file saved by TritonBundler not found at %s", cubin_location
    )
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp31qnqqmw/triton/OPN4Y6JTPZRK7W3V3VNIMP4B6FRUQAWHOA3KTF3NPIVJSIQC3SUA/triton_poi_fused_mul_0.cubin')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
    method(*args, **kwargs)
    ~~~~~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
    method(*args, **kwargs)
    ~~~~~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
    result = test(self, **param_kwargs)
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
    fn(*args, **kwargs)
    ~~^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
    raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], args=(1), kwargs={}, broadcasts_input=False, name='')
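
As an aside on the tuple-looking InductorError message above: the RuntimeError in triton_heuristics.py is raised with two positional arguments rather than a %-formatted string, so str() renders the argument tuple instead of an interpolated message. A minimal illustration (the .cubin path below is a placeholder):

    # RuntimeError("... %s", path) stores both arguments instead of
    # interpolating, so printing the exception shows the args tuple.
    try:
        raise RuntimeError(
            "Cubin file saved by TritonBundler not found at %s",  # format string left unapplied
            "/tmp/example.cubin",  # placeholder path
        )
    except RuntimeError as exc:
        print(exc)  # ('Cubin file saved by TritonBundler not found at %s', '/tmp/example.cubin')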

To execute this test, run the following from the base repo dir:
    PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_polygamma_polygamma_n_1_cuda_float16

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
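
For context, here is a minimal sketch of what the failing sample exercises, assuming a CUDA device is available. It mirrors the SampleInput above (a (5, 5) float16 tensor with n=1) and the compiled backward pass where the error surfaced; it is not the actual test harness.

    import torch

    # Sketch only (assumes CUDA): compile polygamma(1, x) for a float16 tensor
    # and run backward, mirroring SampleInput(input=Tensor[size=(5, 5)], args=(1)).
    x = torch.randn(5, 5, device="cuda", dtype=torch.float16, requires_grad=True)

    fn = torch.compile(lambda t: torch.special.polygamma(1, t))
    out = fn(x)
    out.sum().backward()  # the CI failure surfaced while compiling this backward pass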

Test file path: inductor/test_torchinductor_opinfo.py


cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov

pytorch-bot bot added the module: flaky-tests, oncall: pt2, and skipped labels on Apr 29, 2025

pytorch-bot bot commented Apr 29, 2025

Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:
  • Test name: test_comprehensive_polygamma_polygamma_n_1_cuda_float16 (__main__.TestInductorOpInfoCUDA)
  • Platforms for which to skip the test: inductor
  • Disabled by pytorch-bot[bot]

Within ~15 minutes, test_comprehensive_polygamma_polygamma_n_1_cuda_float16 (__main__.TestInductorOpInfoCUDA) will be disabled in PyTorch CI for these platforms: inductor. Please verify that your test name looks correct, e.g., test_cuda_assert_async (__main__.TestCuda).

To modify the platforms list, include a line like the one below in the issue body. If no platforms list is specified, the default action disables the test on all platforms.

Platforms: case-insensitive, list, of, platforms

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

How to re-enable a test

To re-enable the test globally, close the issue. To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate. To re-enable a test only for a PR, put Fixes #152470 in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell whether the test is still flaky on the PR.


pytorch-bot bot commented Apr 30, 2025

Another case of trunk flakiness has been found here. The list of platforms [inductor] appears to contain all the recently affected platforms [inductor]. Either the change didn't propagate fast enough or the disable bot might be broken.


pytorch-bot bot commented Apr 30, 2025

Another case of trunk flakiness has been found here. The list of platforms [inductor] appears to contain all the recently affected platforms [inductor]. Either the change didn't propagate fast enough or the disable bot might be broken.


pytorch-bot bot commented Apr 30, 2025

Another case of trunk flakiness has been found here. The list of platforms [inductor] appears to contain all the recently affected platforms [inductor]. Either the change didn't propagate fast enough or the disable bot might be broken.

masnesral added the module: inductor, high priority, and triaged labels on May 5, 2025
masnesral (Contributor) commented

hi pri per oncall runbook
