
Pandas Ran out of memory again! #69

Open
@saulshanabrook

Description

So the pandas test suite ran out of memory again in Kubernetes. It used up ~13 GB and was then killed, because the pods only have that much memory available.

I am a bit hesitant to just raise the pod memory limit again... If anyone knows whether this is a reasonable amount of memory for the pandas test suite to use (cc @datapythonista), that would be helpful! It's also possible that the tracing has some sort of memory leak that blows things up for pandas, although none of the other test suites seem to have this problem.

Maybe I can run the pandas test suite with some flags to skip the high-memory tests? These are the flags I'm currently using:

CMD [ "pytest", "pandas", "--skip-slow", "--skip-network", "--skip-db", "-m", "not single", "-r", "sxX", "--strict", "--suppress-tests-failed-exit-code" ]

I copied it from the test-fast script, or whatever that is, in the Pandas repo.
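
One option, just a sketch: if the memory-heavy tests can be tagged with a custom pytest marker, they could be deselected via `-m`. I don't know whether pandas already ships such a marker, so the `high_memory` marker name and the `test_huge_frame` example below are made up:

```python
# Hypothetical sketch -- not pandas' actual test setup; the marker name is made up.
import numpy as np
import pandas as pd
import pytest


# In conftest.py: register a "high_memory" marker so --strict doesn't
# reject it as an unknown marker.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "high_memory: test allocates a large amount of memory"
    )


# In a regular test module: tag the memory-hungry tests.
@pytest.mark.high_memory
def test_huge_frame():
    df = pd.DataFrame(np.zeros((10_000_000, 20)))
    assert df.shape == (10_000_000, 20)
```

With something like that in place, the `-m` expression in the CMD above would become `"not single and not high_memory"`. If pandas already has an equivalent marker or flag for this, even better.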
