
Commit b5db4f1

github-actions[bot] authored and mattwang44 committed
sync with cpython 080d17e9
1 parent da7129e commit b5db4f1

File tree

1 file changed (+37, -38 lines)

library/tokenize.po

Lines changed: 37 additions & 38 deletions
@@ -7,7 +7,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Python 3.13\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-09-03 11:11+0800\n"
+"POT-Creation-Date: 2025-01-20 00:13+0000\n"
 "PO-Revision-Date: 2018-05-23 16:13+0000\n"
 "Last-Translator: Adrian Liaw <[email protected]>\n"
 "Language-Team: Chinese - TAIWAN (https://github.com/python/python-docs-zh-"
@@ -140,84 +140,83 @@ msgstr ""
 
 #: ../../library/tokenize.rst:94
 msgid ""
-"The reconstructed script is returned as a single string. The result is "
-"guaranteed to tokenize back to match the input so that the conversion is "
-"lossless and round-trips are assured. The guarantee applies only to the "
-"token type and token string as the spacing between tokens (column positions) "
-"may change."
+"The result is guaranteed to tokenize back to match the input so that the "
+"conversion is lossless and round-trips are assured. The guarantee applies "
+"only to the token type and token string as the spacing between tokens "
+"(column positions) may change."
 msgstr ""
 
-#: ../../library/tokenize.rst:100
+#: ../../library/tokenize.rst:99
 msgid ""
 "It returns bytes, encoded using the :data:`~token.ENCODING` token, which is "
 "the first token sequence output by :func:`.tokenize`. If there is no "
 "encoding token in the input, it returns a str instead."
 msgstr ""
 
-#: ../../library/tokenize.rst:105
+#: ../../library/tokenize.rst:104
 msgid ""
 ":func:`.tokenize` needs to detect the encoding of source files it tokenizes. "
 "The function it uses to do this is available:"
 msgstr ""
 
-#: ../../library/tokenize.rst:110
+#: ../../library/tokenize.rst:109
 msgid ""
 "The :func:`detect_encoding` function is used to detect the encoding that "
 "should be used to decode a Python source file. It requires one argument, "
 "readline, in the same way as the :func:`.tokenize` generator."
 msgstr ""
 
-#: ../../library/tokenize.rst:114
+#: ../../library/tokenize.rst:113
 msgid ""
 "It will call readline a maximum of twice, and return the encoding used (as a "
 "string) and a list of any lines (not decoded from bytes) it has read in."
 msgstr ""
 
-#: ../../library/tokenize.rst:118
+#: ../../library/tokenize.rst:117
 msgid ""
 "It detects the encoding from the presence of a UTF-8 BOM or an encoding "
 "cookie as specified in :pep:`263`. If both a BOM and a cookie are present, "
 "but disagree, a :exc:`SyntaxError` will be raised. Note that if the BOM is "
 "found, ``'utf-8-sig'`` will be returned as an encoding."
 msgstr ""
 
-#: ../../library/tokenize.rst:123
+#: ../../library/tokenize.rst:122
 msgid ""
 "If no encoding is specified, then the default of ``'utf-8'`` will be "
 "returned."
 msgstr ""
 
-#: ../../library/tokenize.rst:126
+#: ../../library/tokenize.rst:125
 msgid ""
 "Use :func:`.open` to open Python source files: it uses :func:"
 "`detect_encoding` to detect the file encoding."
 msgstr ""
 
-#: ../../library/tokenize.rst:132
+#: ../../library/tokenize.rst:131
 msgid ""
 "Open a file in read only mode using the encoding detected by :func:"
 "`detect_encoding`."
 msgstr ""
 
-#: ../../library/tokenize.rst:139
+#: ../../library/tokenize.rst:138
 msgid ""
 "Raised when either a docstring or expression that may be split over several "
 "lines is not completed anywhere in the file, for example::"
 msgstr ""
 
-#: ../../library/tokenize.rst:142
+#: ../../library/tokenize.rst:141
 msgid ""
 "\"\"\"Beginning of\n"
 "docstring"
 msgstr ""
 "\"\"\"Beginning of\n"
 "docstring"
 
-#: ../../library/tokenize.rst:145
+#: ../../library/tokenize.rst:144
 msgid "or::"
 msgstr "或是: ::"
 
-#: ../../library/tokenize.rst:147
+#: ../../library/tokenize.rst:146
 msgid ""
 "[1,\n"
 " 2,\n"
@@ -227,49 +226,49 @@ msgstr ""
 " 2,\n"
 " 3"
 
-#: ../../library/tokenize.rst:154
+#: ../../library/tokenize.rst:153
 msgid "Command-Line Usage"
 msgstr ""
 
-#: ../../library/tokenize.rst:158
+#: ../../library/tokenize.rst:157
 msgid ""
 "The :mod:`tokenize` module can be executed as a script from the command "
 "line. It is as simple as:"
 msgstr ""
 
-#: ../../library/tokenize.rst:161
+#: ../../library/tokenize.rst:160
 msgid "python -m tokenize [-e] [filename.py]"
 msgstr "python -m tokenize [-e] [filename.py]"
 
-#: ../../library/tokenize.rst:165
+#: ../../library/tokenize.rst:164
 msgid "The following options are accepted:"
 msgstr ""
 
-#: ../../library/tokenize.rst:171
+#: ../../library/tokenize.rst:170
 msgid "show this help message and exit"
 msgstr ""
 
-#: ../../library/tokenize.rst:175
+#: ../../library/tokenize.rst:174
 msgid "display token names using the exact type"
 msgstr ""
 
-#: ../../library/tokenize.rst:177
+#: ../../library/tokenize.rst:176
 msgid ""
 "If :file:`filename.py` is specified its contents are tokenized to stdout. "
 "Otherwise, tokenization is performed on stdin."
 msgstr ""
 
-#: ../../library/tokenize.rst:181
+#: ../../library/tokenize.rst:180
 msgid "Examples"
 msgstr "範例"
 
-#: ../../library/tokenize.rst:183
+#: ../../library/tokenize.rst:182
 msgid ""
 "Example of a script rewriter that transforms float literals into Decimal "
 "objects::"
 msgstr ""
 
-#: ../../library/tokenize.rst:186
+#: ../../library/tokenize.rst:185
 msgid ""
 "from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP\n"
 "from io import BytesIO\n"
@@ -312,11 +311,11 @@ msgid ""
 "    return untokenize(result).decode('utf-8')"
 msgstr ""
 
-#: ../../library/tokenize.rst:225
+#: ../../library/tokenize.rst:224
 msgid "Example of tokenizing from the command line. The script::"
 msgstr ""
 
-#: ../../library/tokenize.rst:227
+#: ../../library/tokenize.rst:226
 msgid ""
 "def say_hello():\n"
 "    print(\"Hello, World!\")\n"
@@ -328,15 +327,15 @@ msgstr ""
 "\n"
 "say_hello()"
 
-#: ../../library/tokenize.rst:232
+#: ../../library/tokenize.rst:231
 msgid ""
 "will be tokenized to the following output where the first column is the "
 "range of the line/column coordinates where the token is found, the second "
 "column is the name of the token, and the final column is the value of the "
 "token (if any)"
 msgstr ""
 
-#: ../../library/tokenize.rst:236
+#: ../../library/tokenize.rst:235
 msgid ""
 "$ python -m tokenize hello.py\n"
 "0,0-0,0: ENCODING 'utf-8'\n"
@@ -382,12 +381,12 @@ msgstr ""
 "4,11-4,12: NEWLINE '\\n'\n"
 "5,0-5,0: ENDMARKER ''"
 
-#: ../../library/tokenize.rst:260
+#: ../../library/tokenize.rst:259
 msgid ""
 "The exact token type names can be displayed using the :option:`-e` option:"
 msgstr ""
 
-#: ../../library/tokenize.rst:262
+#: ../../library/tokenize.rst:261
 msgid ""
 "$ python -m tokenize -e hello.py\n"
 "0,0-0,0: ENCODING 'utf-8'\n"
@@ -433,13 +432,13 @@ msgstr ""
 "4,11-4,12: NEWLINE '\\n'\n"
 "5,0-5,0: ENDMARKER ''"
 
-#: ../../library/tokenize.rst:286
+#: ../../library/tokenize.rst:285
 msgid ""
 "Example of tokenizing a file programmatically, reading unicode strings "
 "instead of bytes with :func:`generate_tokens`::"
 msgstr ""
 
-#: ../../library/tokenize.rst:289
+#: ../../library/tokenize.rst:288
 msgid ""
 "import tokenize\n"
 "\n"
@@ -455,11 +454,11 @@ msgstr ""
 "    for token in tokens:\n"
 "        print(token)"
 
-#: ../../library/tokenize.rst:296
+#: ../../library/tokenize.rst:295
 msgid "Or reading bytes directly with :func:`.tokenize`::"
 msgstr ""
 
-#: ../../library/tokenize.rst:298
+#: ../../library/tokenize.rst:297
 msgid ""
 "import tokenize\n"
 "\n"
