multiprocessing Manager exceptions create cyclic references #106558

Closed
@pteromys

Description

Bug report

multiprocessing.managers uses convert_to_error(kind, result) to make a raisable exception out of result when a call has responded with some sort of error. If kind == "#ERROR", then result is already an exception and the caller raises it directly. But because result was created in a frame at or under the caller, this creates a reference cycle: result → result.__traceback__ → (some frame).f_locals['result'] → back to result.
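
For concreteness, here is a minimal standalone sketch of the mechanism (my own illustration, not manager code): raising an exception from a frame that also holds it in a local variable leaves the exception, its traceback, and that frame in a cycle that only the cyclic collector can reclaim.

import gc

def demo():
    # stands in for the unpickled '#ERROR' payload bound to a local
    result = RuntimeError('boom')
    try:
        raise result  # the traceback now references demo()'s frame
    except RuntimeError as exc:
        frame = exc.__traceback__.tb_frame
        # exc -> __traceback__ -> frame -> f_locals['result'] -> exc
        print(frame.f_locals['result'] is exc)  # True

gc.collect()  # clear any pre-existing garbage first
demo()
# the cycle above is unreachable but not freed by refcounting alone
print('cyclic garbage collected:', gc.collect())  # > 0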

In particular, every time I've used a manager queue I've expected frequent occurrences of queue.Empty, and the buildup of reference cycles sporadically wakes up the garbage collector and wrecks my hopes of consistent latency.
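
For context on why the buildup matters (again my own illustration, not from the manager code): the first GC generation is triggered by a threshold on net container allocations, so each leaked queue.Empty cycle nudges the program toward another collection pass.

import gc

# Default thresholds; the exact numbers can vary by build/configuration.
print(gc.get_threshold())   # typically (700, 10, 10)

# Each uncollected queue.Empty cycle leaves a handful of tracked objects
# (exception, traceback, frame) in the generation-0 count, so a steady
# stream of get_nowait() misses keeps tripping collections.
print(gc.get_count())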

I'm including an example script below. PR coming in a moment, so please let me know if I should expand the example into a test and bundle that in. (Please also feel free to tell me if this is a misuse of queue.Empty and I should buzz off.)
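
In case it helps, here is a rough sketch of what such a test might look like. The class and test names are hypothetical, and I'm assuming that gc.collect() returning 0 is an acceptable way to assert that no cycles were created:

import gc
import multiprocessing
import queue
import unittest

class TestManagerExceptionCycles(unittest.TestCase):  # hypothetical name
    def test_queue_empty_creates_no_reference_cycles(self):
        manager = multiprocessing.Manager()
        self.addCleanup(manager.shutdown)
        q = manager.Queue()
        gc.collect()  # start from a clean slate
        with self.assertRaises(queue.Empty):
            q.get_nowait()
        # If the proxy machinery leaks no cycles, nothing is collectable here.
        self.assertEqual(gc.collect(), 0)

if __name__ == '__main__':
    unittest.main()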

Your environment

  • CPython versions tested on: 3.11.3
  • Operating system and architecture: uname -a says Linux delia 6.3.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:42 +0000 x86_64 GNU/Linux

Minimal example

Output

net allocations:  0
got from queue:   [0, 1, 2, 3, 4]
net allocations:  23
garbage produced: Counter({<class 'traceback'>: 3, <class 'frame'>: 3, <class 'list'>: 1, <class '_queue.Empty'>: 1})
net allocations:  0

Script

#!/usr/bin/env python

import collections
import gc
import multiprocessing
import queue
import time


def sender(q):
    for i in range(5):
        q.put_nowait(i)

def get_all_available(q):
    result = []
    try:
        while True:
            result.append(q.get_nowait())
    except queue.Empty:
        ...
    return result

def main():
    q = multiprocessing.Manager().Queue()
    p = multiprocessing.Process(target=sender, args=(q,))
    p.start()

    # take control of gc
    gc.disable()
    gc.collect()
    gc.set_debug(gc.DEBUG_SAVEALL)
    time.sleep(0.1)  # just in case the new process took a while to create
    print('net allocations: ', gc.get_count()[0])

    # trigger a queue.Empty
    print('got from queue:  ', get_all_available(q))

    # check for collectable garbage and print it
    print('net allocations: ', gc.get_count()[0])
    gc.collect()
    print('garbage produced:', collections.Counter(type(x) for x in gc.garbage))
    gc.set_debug(0)
    gc.garbage.clear()
    gc.collect()
    print('net allocations: ', gc.get_count()[0])

    # clean up
    p.join()


if __name__ == '__main__':
    main()

Linked PRs
