 To enhance the usability of the .encode() method, the special
 casing of Unicode object return values was dropped (Unicode objects
 were auto-magically converted to string using the default encoding).
-
+
 Both methods will now return whatever the codec in charge of the
 requested encoding returns as object, e.g. Unicode codecs will
 return Unicode objects when decoding is requested ("äöü".decode("latin-1")
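The behaviour this hunk describes can be sketched briefly. The NEWS entry is about Python 2 string semantics; the Python 3 analogue below is my own illustration (not part of the commit) of the same principle: `.decode()` returns whatever object the codec in charge produces, and for text codecs that is a Unicode string.

```python
# Python 3 analogue: .decode() returns the codec's output object --
# for a text codec such as latin-1, a str (Unicode) object.
raw = b"\xe4\xf6\xfc"          # "äöü" encoded as Latin-1
text = raw.decode("latin-1")   # the codec returns a str object
print(type(text).__name__)     # str
print(text)                    # äöü

# Encoding goes the other way: str.encode() likewise returns
# whatever the codec produces, here a bytes object.
assert text.encode("latin-1") == raw
```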
@@ -116,13 +116,11 @@ Core |
 to crash if the element comparison routines for the dict keys and/or
 values mutated the dicts. Making the code bulletproof slowed it down.

-- Collisions in dicts now use polynomial division instead of multiplication
-  to generate the probe sequence, following an idea of Christian Tismer's.
-  This allows all bits of the hash code to come into play. It should have
-  little or no effect on speed in ordinary cases, but can help dramatically
-  in bad cases. For example, looking up every key in a dict d with
-  d.keys() = [i << 16 for i in range(20000)] is approximately 500x faster
-  now.
+- Collisions in dicts are resolved via a new approach, which can help
+  dramatically in bad cases. For example, looking up every key in a dict
+  d with d.keys() = [i << 16 for i in range(20000)] is approximately 500x
+  faster now. Thanks to Christian Tismer for pointing out the cause and
+  the nature of an effective cure (last December! better late than never).

 Library

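The pathological key set mentioned in the hunk above can be reproduced directly. The sketch below is my own illustration, using a hypothetical 2**15-entry table mask: it shows why keys of the form i << 16 defeat a probe sequence driven only by the low bits of the hash (small ints hash to themselves in CPython, and these hashes have all-zero low bits), and that a dict incorporating the fix handles them without trouble.

```python
# Why [i << 16 for i in range(20000)] is a worst case for a probe
# sequence that consults only the low bits of the hash: every key's
# hash is a multiple of 2**16, so masking with a small table mask
# sends all of them to the same initial slot.
keys = [i << 16 for i in range(20000)]
mask = (1 << 15) - 1                     # hypothetical table mask
slots = {hash(k) & mask for k in keys}
print(len(slots))                        # 1 -- every key lands on slot 0

# A probe sequence that mixes in the high bits of the hash (the point
# of the change above) disperses these keys; building and probing the
# dict is unproblematic in any modern CPython:
d = dict.fromkeys(keys)
assert all(k in d for k in keys)
```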