Description
So the pandas test suite ran out of memory again in Kubernetes. It used about 13 GB and was then killed, since the pods only have that much memory available.
I am a bit hesitant to just raise the pod memory limit again. If anyone knows whether this is a reasonable amount of memory for pandas to use during testing (cc @datapythonista), that would be helpful! It's also possible that the tracing has some sort of memory leak that is blowing things up for pandas, although none of the other test suites seem to have the same problem.
Maybe I can run the pandas test suite with some flags to skip the high-memory tests? These are my current ones:
I copied them from the test-fast script (or whatever it's called) in the pandas repo.
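For reference, a hedged sketch of the kind of invocation I mean — the marker names (`slow`, `network`, `db`) are assumptions based on pandas' pytest markers, not necessarily the flags currently in use here:

```shell
# Sketch only: deselect tests pandas marks as slow or as needing
# network/database access, which tend to be the memory-heavy ones.
# Check the markers defined in pandas' config for the version under test.
python -m pytest pandas -m "not slow and not network and not db" -q
```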