diff --git a/internals/garbage-collector.rst b/internals/garbage-collector.rst
index 5b220bfe50..60d0c700b4 100644
--- a/internals/garbage-collector.rst
+++ b/internals/garbage-collector.rst
@@ -112,7 +112,7 @@ simple type cast from the original object: :code:`((PyGC_Head *)(the_object)-1)`
 As is explained later in the `Optimization: reusing fields to save memory`_ section,
 these two extra fields are normally used to keep doubly linked lists of all the
 objects tracked by the garbage collector (these lists are the GC generations, more on
-that in the `Optimization: generations`_ section), but they are also
+that in the `Optimization: incremental collection`_ section), but they are also
 reused to fulfill other purposes when the full doubly linked list structure is not
 needed as a memory optimization.
 
@@ -356,37 +356,77 @@ follows these steps in order:
    the reference counts fall to 0, triggering the destruction of all unreachable
    objects.
 
-Optimization: generations
-=========================
+Optimization: incremental collection
+====================================
 
-In order to limit the time each garbage collection takes, the GC
-implementation for the default build uses a popular optimization:
-generations. The main idea behind this concept is the assumption that most
-objects have a very short lifespan and can thus be collected soon after their
-creation. This has proven to be very close to the reality of many Python
+In order to limit the time each garbage collection takes, the GC implementation
+for the default build uses incremental collection with two generations.
+
+Generational garbage collection takes advantage of what is known as the weak
+generational hypothesis: Most objects die young.
+This has proven to be very close to the reality of many Python
 programs as many temporary objects are created and destroyed very quickly.
 
 To take advantage of this fact, all container objects are segregated into
-three spaces/generations. Every new
-object starts in the first generation (generation 0). The previous algorithm is
-executed only over the objects of a particular generation and if an object
-survives a collection of its generation it will be moved to the next one
-(generation 1), where it will be surveyed for collection less often. If
-the same object survives another GC round in this new generation (generation 1)
-it will be moved to the last generation (generation 2) where it will be
-surveyed the least often.
-
-The GC implementation for the free-threaded build does not use multiple
-generations. Every collection operates on the entire heap.
+two generations: young and old. Every new object starts in the young generation.
+In order to keep pause times down, scanning of the old generation of the heap
+occurs in increments. To keep track of what has been scanned, the old generation
+contains two lists: scanned and unscanned.
+
+To detect and collect all unreachable objects in the heap, the garbage collector
+must scan the whole heap. This whole-heap scan is called a full scavenge.
+
+To limit the time each garbage collection takes, the detection and collection
+algorithm is executed only on a portion of the heap called an increment.
+Over the course of a full scavenge, the increments cover the whole heap.
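+
+As a rough illustration (the exact counts are an implementation detail and will
+vary from run to run), the allocation counter that eventually triggers one of
+these collections can be observed with :func:`gc.get_count`. Its first element
+grows as new container objects are tracked and is reset each time the young
+generation is collected:
+
+.. code-block:: python
+
+    >>> import gc
+    >>> gc.collect()                 # start from a freshly collected heap
+    0
+    >>> before = gc.get_count()[0]   # young generation allocation counter
+    >>> junk = [[] for _ in range(100)]
+    >>> gc.get_count()[0] - before >= 100
+    True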
+
+Each increment, the portion of the heap scanned by a single collection, is made
+up of three parts:
+
+* The young generation
+* The old generation's least recently scanned objects
+* All objects reachable from those objects that have not yet been scanned during this full scavenge
+
+Any young generation objects surviving this collection are moved to the old generation,
+and reachable objects in the old generation remain in the old generation.
+The old generation is composed of two lists: scanned and unscanned
+(the implementation refers to the unscanned list as ``pending`` and the scanned
+list as ``visited``).
+Survivors are moved to the back of the scanned list, and the old part of each
+increment is taken from the front of the unscanned list.
+
+When a full scavenge starts, no objects in the heap are considered to have been scanned.
+When all objects in the heap have been scanned, the full scavenge is complete and
+all objects are considered unscanned again.
+
+In order to collect all unreachable cycles, each increment must contain all of
+an unreachable cycle, or none of it.
+In order to make sure that the whole of any unreachable cycle is contained in an
+increment, all unscanned objects reachable from any object in the increment must
+be included in the increment.
+Thus, to form a complete increment, we perform a
+`transitive closure <https://en.wikipedia.org/wiki/Transitive_closure>`_
+over reachable, unscanned objects from the initial increment.
+We can exclude scanned objects, as they must have been reachable when scanned.
+If a scanned object becomes part of an unreachable cycle after being scanned, it
+will not be collected during the current full scavenge, but it will be collected
+during the next one.
+
+.. note::
+
+   The GC implementation for the free-threaded build does not use incremental
+   collection. Every collection operates on the entire heap.
 
 In order to decide when to run, the collector keeps track of the number of object
 allocations and deallocations since the last collection. When the number of
-allocations minus the number of deallocations exceeds ``threshold_0``,
-collection starts. Initially only generation 0 is examined. If generation 0 has
-been examined more than ``threshold_1`` times since generation 1 has been
-examined, then generation 1 is examined as well. With generation 2,
-things are a bit more complicated; see :ref:`gc-oldest-generation` for
-more information. These thresholds can be examined using the
+allocations minus the number of deallocations exceeds ``threshold0``,
+collection starts. ``threshold1`` determines the fraction of the old
+generation that is included in each increment.
+The fraction is inversely proportional to ``threshold1``,
+as historically a larger ``threshold1`` meant that old generation
+collections were performed less frequently.
+``threshold2`` is ignored.
+
+These thresholds can be examined using the
 :func:`gc.get_threshold` function:
 
 .. code-block:: python
@@ -399,6 +439,10 @@ more information. These thresholds can be examined using the
 The content of these generations can be examined using the
 ``gc.get_objects(generation=NUM)`` function and collections can be triggered
 specifically in a generation by calling ``gc.collect(generation=NUM)``.
+Prior to 3.13, there were three generations. For that reason, the
+young generation is generation 0 and the old generation is generation 2.
+
+For compatibility, ``gc.get_objects()`` pretends there is a generation 1, but it
+is always empty.
 
 .. code-block:: python
 
     >>> import gc
     >>> class MyObj:
     ...     pass
     ...
-    # Move everything to the last generation so it's easier to inspect
-    # the younger generations.
+    # Move everything to the old generation so it's easier to inspect
+    # the young generation.
     >>> gc.collect()
     0
@@ -418,43 +462,24 @@ specifically in a generation by calling ``gc.collect(generation=NUM)``.
     >>> x = MyObj()
     >>> x.self = x
 
-    # Initially the object is in the youngest generation.
+    # Initially the object is in the young generation.
     >>> gc.get_objects(generation=0)
     [..., <__main__.MyObj object at 0x7fbcc12a3400>, ...]
 
     # After a collection of the youngest generation the object
-    # moves to the next generation.
+    # moves to the old generation.
     >>> gc.collect(generation=0)
     0
     >>> gc.get_objects(generation=0)
     []
     >>> gc.get_objects(generation=1)
+    []
+    >>> gc.get_objects(generation=2)
     [..., <__main__.MyObj object at 0x7fbcc12a3400>, ...]
 
-.. _gc-oldest-generation:
-
-Collecting the oldest generation
---------------------------------
-
-In addition to the various configurable thresholds, the GC only triggers a full
-collection of the oldest generation if the ratio ``long_lived_pending / long_lived_total``
-is above a given value (hardwired to 25%). The reason is that, while "non-full"
-collections (that is, collections of the young and middle generations) will always
-examine roughly the same number of objects (determined by the aforementioned
-thresholds) the cost of a full collection is proportional to the total
-number of long-lived objects, which is virtually unbounded. Indeed, it has
-been remarked that doing a full collection every <constant number> of object
-creations entails a dramatic performance degradation in workloads which consist
-of creating and storing lots of long-lived objects (for example, building a large list
-of GC-tracked objects would show quadratic performance, instead of linear as
-expected). Using the above ratio, instead, yields amortized linear performance
-in the total number of objects (the effect of which can be summarized thusly:
-"each full garbage collection is more and more costly as the number of objects
-grows, but we do fewer and fewer of them").
-
 Optimization: reusing fields to save memory
 ===========================================