@@ -276,10 +276,10 @@ def _mk_bitmap(bits):
 # set is constructed. Then, this bitmap is sliced into chunks of 256
 # characters, duplicate chunks are eliminated, and each chunk is
 # given a number. In the compiled expression, the charset is
-# represented by a 16-bit word sequence, consisting of one word for
-# the number of different chunks, a sequence of 256 bytes (128 words)
+# represented by a 32-bit word sequence, consisting of one word for
+# the number of different chunks, a sequence of 256 bytes (64 words)
 # of chunk numbers indexed by their original chunk position, and a
-# sequence of chunks (16 words each).
+# sequence of 256-bit chunks (8 words each).

 # Compression is normally good: in a typical charset, large ranges of
 # Unicode will be either completely excluded (e.g. if only cyrillic
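The construction described above can be sketched roughly as follows. This is a minimal illustration, not the actual sre compiler code; the function name `compress_charset` and its data layout (32-byte `bytes` objects standing in for 256-bit chunks) are assumptions for the sketch.

```python
# Sketch (hypothetical, not CPython's implementation) of BIGCHARSET
# compression: slice a 65536-bit membership bitmap into 256 chunks of
# 256 bits each, deduplicate identical chunks, and record one chunk
# number per original chunk position.

def compress_charset(codepoints):
    """codepoints: iterable of BMP code points (0..0xFFFF)."""
    # One 256-bit chunk (32 bytes) per high byte of the code point.
    chunks = [bytearray(32) for _ in range(256)]
    for cp in codepoints:
        hi, lo = cp >> 8, cp & 0xFF
        chunks[hi][lo >> 3] |= 1 << (lo & 7)
    # Deduplicate: identical chunks share one chunk number.
    numbers = {}              # chunk contents -> chunk number
    mapping = bytearray(256)  # original position -> chunk number
    unique = []
    for pos, chunk in enumerate(chunks):
        key = bytes(chunk)
        if key not in numbers:
            numbers[key] = len(unique)
            unique.append(key)
        mapping[pos] = numbers[key]
    return len(unique), mapping, unique
```

In a typical charset most of the 256 chunks are the all-zeros chunk, so they all collapse to a single shared chunk and the compression pays off.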
@@ -292,9 +292,9 @@ def _mk_bitmap(bits):
 # less significant byte is a bit index in the chunk (just like the
 # CHARSET matching).

-# In UCS-4 mode, the BIGCHARSET opcode still supports only subsets
+# The BIGCHARSET opcode still supports only subsets
 # of the basic multilingual plane; an efficient representation
-# for all of UTF-16 has not yet been developed. This means,
+# for all of Unicode has not yet been developed. This means,
 # in particular, that negated charsets cannot be represented as
 # bigcharsets.

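The matching step described above (high byte selects a chunk number, low byte is a bit index within that chunk) can be sketched like this. The function name and argument layout are hypothetical; this mirrors the comment's description, not sre's actual engine code.

```python
# Sketch (hypothetical names) of BIGCHARSET membership testing: the more
# significant byte of the character picks a chunk number out of the
# 256-entry mapping; the less significant byte is a bit index within
# that 256-bit chunk.

def in_bigcharset(mapping, chunks, ch):
    """mapping: 256 chunk numbers; chunks: list of 32-byte bitmaps."""
    cp = ord(ch)
    if cp > 0xFFFF:
        # BIGCHARSET covers only the basic multilingual plane.
        return False
    hi, lo = cp >> 8, cp & 0xFF
    chunk = chunks[mapping[hi]]
    return bool(chunk[lo >> 3] & (1 << (lo & 7)))
```

Note how the BMP-only limit falls out directly: any code point above 0xFFFF has no slot in the 256-entry mapping, which is why negated charsets (which would have to match almost all of Unicode) cannot be represented this way.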