Memory leak when evaluating model on CPU with dynamic size tensor input. #29893

@seyong92

Description

πŸ› Bug

To Reproduce

Steps to reproduce the behavior:

  1. Build a simple network.
  2. Switch the model to eval mode and wrap inference in torch.no_grad().
  3. Evaluate the model with inputs whose size changes between calls.
  4. CPU memory usage grows steadily with each evaluation.
  5. A toy example that reproduces the error is attached:
    https://github.com/seyong92/pytorch_memory_leak_test/
    Run it with "python3 main.py -ds" (-ds makes the input tensor size change each iteration)
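The repro pattern above can be sketched as follows. This is a minimal, hypothetical stand-in for the model in the linked repository (the layer sizes and input lengths are illustrative, not taken from the actual repro):

```python
import torch
import torch.nn as nn

# Hypothetical small network; the real model lives in the linked repo.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(8, 1, kernel_size=3, padding=1),
)
model.eval()

with torch.no_grad():
    # Feed inputs whose length changes on every iteration. On affected
    # PyTorch versions, CPU memory grows with each new input shape.
    for length in (1000, 1001, 1002, 1003):
        x = torch.randn(1, 1, length)
        y = model(x)
        print(tuple(y.shape))  # output length tracks the input length
```

With a fixed input size (running without -ds in the linked repo), memory stays flat; only the changing shapes trigger the growth.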

Expected behavior

Memory usage should stay roughly constant across evaluations. Because it grows unexpectedly, I cannot run my code on the CPU. (The same code works fine on GPU.)
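One way to confirm the growth is to sample the process's peak resident set size between batches of evaluations. A stdlib-only sketch (Unix; note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS):

```python
import resource

def max_rss_kb() -> int:
    # Peak resident set size of this process so far.
    # Linux reports ru_maxrss in kilobytes (macOS reports bytes).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = max_rss_kb()
# ... run the model over many dynamic-size inputs here ...
after = max_rss_kb()
print(f"peak RSS grew by {after - before} kB")
```

On a leaking build, the delta keeps climbing as more distinct input shapes are evaluated; on a healthy build it plateaus after warm-up.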

Environment

  • PyTorch Version (e.g., 1.0): 1.2.0
  • OS (e.g., Linux): Ubuntu 16.04
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source): N/A (installed via pip; the repro is run with python3 main.py -ds)
  • Python version: 3.5 / 3.6 (both happen)
  • CUDA/cuDNN version: 10.1, 10.2 / 7
  • GPU models and configuration: GTX 1080 / GTX 1080 Ti / Titan Xp
  • Any other relevant information: It doesn't happen on PyTorch 0.4.1

cc @ezyang @gchanan @zou3519 @jerryzh168 @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh

Labels: high priority, module: mkldnn (related to Intel IDEEP / oneDNN integration), triage review, triaged