msgstr ""
"Project-Id-Version: Python 3.13\n"
"Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-09-03 11:11+0800\n"
+"POT-Creation-Date: 2025-01-20 00:13+0000\n"
"PO-Revision-Date: 2018-05-23 16:13+0000\n"
"Last-Translator: Adrian Liaw <[email protected]>\n"
"Language-Team: Chinese - TAIWAN (https://github.com/python/python-docs-zh-"
@@ -140,84 +140,83 @@ msgstr ""
#: ../../library/tokenize.rst:94
msgid ""
-"The reconstructed script is returned as a single string. The result is "
-"guaranteed to tokenize back to match the input so that the conversion is "
-"lossless and round-trips are assured. The guarantee applies only to the "
-"token type and token string as the spacing between tokens (column positions) "
-"may change."
+"The result is guaranteed to tokenize back to match the input so that the "
+"conversion is lossless and round-trips are assured. The guarantee applies "
+"only to the token type and token string as the spacing between tokens "
+"(column positions) may change."
msgstr ""

-#: ../../library/tokenize.rst:100
+#: ../../library/tokenize.rst:99
msgid ""
"It returns bytes, encoded using the :data:`~token.ENCODING` token, which is "
"the first token sequence output by :func:`.tokenize`. If there is no "
"encoding token in the input, it returns a str instead."
msgstr ""

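A minimal sketch of the round-trip guarantee the two entries above describe,
assuming an in-memory source (io.BytesIO stands in for a real file)::

    import io
    import tokenize

    source = b"x = 1  +  2\n"
    tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

    # untokenize() returns bytes here because the token stream starts
    # with an ENCODING token; only token types and strings are
    # guaranteed to survive, not the exact spacing.
    rebuilt = tokenize.untokenize(tokens)
    old = [(t.type, t.string) for t in tokens]
    new = [(t.type, t.string)
           for t in tokenize.tokenize(io.BytesIO(rebuilt).readline)]
    assert old == new
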
-#: ../../library/tokenize.rst:105
+#: ../../library/tokenize.rst:104
msgid ""
":func:`.tokenize` needs to detect the encoding of source files it tokenizes. "
"The function it uses to do this is available:"
msgstr ""

-#: ../../library/tokenize.rst:110
+#: ../../library/tokenize.rst:109
msgid ""
"The :func:`detect_encoding` function is used to detect the encoding that "
"should be used to decode a Python source file. It requires one argument, "
"readline, in the same way as the :func:`.tokenize` generator."
msgstr ""

-#: ../../library/tokenize.rst:114
+#: ../../library/tokenize.rst:113
msgid ""
"It will call readline a maximum of twice, and return the encoding used (as a "
"string) and a list of any lines (not decoded from bytes) it has read in."
msgstr ""

-#: ../../library/tokenize.rst:118
+#: ../../library/tokenize.rst:117
msgid ""
"It detects the encoding from the presence of a UTF-8 BOM or an encoding "
"cookie as specified in :pep:`263`. If both a BOM and a cookie are present, "
"but disagree, a :exc:`SyntaxError` will be raised. Note that if the BOM is "
"found, ``'utf-8-sig'`` will be returned as an encoding."
msgstr ""

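A short sketch of :func:`detect_encoding` reading an in-memory file that
carries a :pep:`263` cookie; the values in the comments are what CPython is
expected to report::

    import io
    import tokenize

    src = b"# -*- coding: latin-1 -*-\nx = 1\n"
    encoding, lines = tokenize.detect_encoding(io.BytesIO(src).readline)
    print(encoding)  # cookie names are normalized, e.g. 'iso-8859-1'
    print(lines)     # undecoded lines read: [b'# -*- coding: latin-1 -*-\n']
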
-#: ../../library/tokenize.rst:123
+#: ../../library/tokenize.rst:122
msgid ""
"If no encoding is specified, then the default of ``'utf-8'`` will be "
"returned."
msgstr ""

-#: ../../library/tokenize.rst:126
+#: ../../library/tokenize.rst:125
msgid ""
"Use :func:`.open` to open Python source files: it uses :func:"
"`detect_encoding` to detect the file encoding."
msgstr ""

-#: ../../library/tokenize.rst:132
+#: ../../library/tokenize.rst:131
msgid ""
"Open a file in read only mode using the encoding detected by :func:"
"`detect_encoding`."
msgstr ""

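A brief usage sketch of :func:`.open`, assuming a hello.py file exists next
to the script::

    import tokenize

    # Opens read-only in text mode, decoded with whatever encoding
    # detect_encoding() reports (BOM, cookie, or the 'utf-8' default).
    with tokenize.open('hello.py') as f:
        print(f.read())
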
-#: ../../library/tokenize.rst:139
+#: ../../library/tokenize.rst:138
msgid ""
"Raised when either a docstring or expression that may be split over several "
"lines is not completed anywhere in the file, for example::"
msgstr ""

-#: ../../library/tokenize.rst:142
+#: ../../library/tokenize.rst:141
msgid ""
"\"\"\"Beginning of\n"
"docstring"
msgstr ""
"\"\"\"Beginning of\n"
"docstring"

-#: ../../library/tokenize.rst:145
+#: ../../library/tokenize.rst:144
msgid "or::"
msgstr "或是: ::"

-#: ../../library/tokenize.rst:147
+#: ../../library/tokenize.rst:146
msgid ""
"[1,\n"
" 2,\n"
@@ -227,49 +226,49 @@ msgstr ""
" 2,\n"
" 3"

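A minimal sketch that feeds the unclosed list above to
:func:`generate_tokens` and catches the resulting :exc:`TokenError` (the
exact message varies across Python versions)::

    import io
    import tokenize

    try:
        list(tokenize.generate_tokens(io.StringIO('[1,\n 2,\n 3').readline))
    except tokenize.TokenError as err:
        print(err)  # e.g. ('EOF in multi-line statement', (3, 0))
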
-#: ../../library/tokenize.rst:154
+#: ../../library/tokenize.rst:153
msgid "Command-Line Usage"
msgstr ""

-#: ../../library/tokenize.rst:158
+#: ../../library/tokenize.rst:157
msgid ""
"The :mod:`tokenize` module can be executed as a script from the command "
"line. It is as simple as:"
msgstr ""

-#: ../../library/tokenize.rst:161
+#: ../../library/tokenize.rst:160
msgid "python -m tokenize [-e] [filename.py]"
msgstr "python -m tokenize [-e] [filename.py]"

-#: ../../library/tokenize.rst:165
+#: ../../library/tokenize.rst:164
msgid "The following options are accepted:"
msgstr ""

-#: ../../library/tokenize.rst:171
+#: ../../library/tokenize.rst:170
msgid "show this help message and exit"
msgstr ""

-#: ../../library/tokenize.rst:175
+#: ../../library/tokenize.rst:174
msgid "display token names using the exact type"
msgstr ""

-#: ../../library/tokenize.rst:177
+#: ../../library/tokenize.rst:176
msgid ""
"If :file:`filename.py` is specified its contents are tokenized to stdout. "
"Otherwise, tokenization is performed on stdin."
msgstr ""

-#: ../../library/tokenize.rst:181
+#: ../../library/tokenize.rst:180
msgid "Examples"
msgstr "範例"

-#: ../../library/tokenize.rst:183
+#: ../../library/tokenize.rst:182
msgid ""
"Example of a script rewriter that transforms float literals into Decimal "
"objects::"
msgstr ""

-#: ../../library/tokenize.rst:186
+#: ../../library/tokenize.rst:185
msgid ""
"from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP\n"
"from io import BytesIO\n"
@@ -312,11 +311,11 @@ msgid ""
"    return untokenize(result).decode('utf-8')"
msgstr ""

-#: ../../library/tokenize.rst:225
+#: ../../library/tokenize.rst:224
msgid "Example of tokenizing from the command line. The script::"
msgstr ""

-#: ../../library/tokenize.rst:227
+#: ../../library/tokenize.rst:226
msgid ""
"def say_hello():\n"
"    print(\"Hello, World!\")\n"
@@ -328,15 +327,15 @@ msgstr ""
"\n"
"say_hello()"

-#: ../../library/tokenize.rst:232
+#: ../../library/tokenize.rst:231
msgid ""
"will be tokenized to the following output where the first column is the "
"range of the line/column coordinates where the token is found, the second "
"column is the name of the token, and the final column is the value of the "
"token (if any)"
msgstr ""

-#: ../../library/tokenize.rst:236
+#: ../../library/tokenize.rst:235
msgid ""
"$ python -m tokenize hello.py\n"
"0,0-0,0: ENCODING 'utf-8'\n"
@@ -382,12 +381,12 @@ msgstr ""
"4,11-4,12: NEWLINE '\\n'\n"
"5,0-5,0: ENDMARKER ''"

-#: ../../library/tokenize.rst:260
+#: ../../library/tokenize.rst:259
msgid ""
"The exact token type names can be displayed using the :option:`-e` option:"
msgstr ""

-#: ../../library/tokenize.rst:262
+#: ../../library/tokenize.rst:261
msgid ""
"$ python -m tokenize -e hello.py\n"
"0,0-0,0: ENCODING 'utf-8'\n"
@@ -433,13 +432,13 @@ msgstr ""
"4,11-4,12: NEWLINE '\\n'\n"
"5,0-5,0: ENDMARKER ''"

-#: ../../library/tokenize.rst:286
+#: ../../library/tokenize.rst:285
msgid ""
"Example of tokenizing a file programmatically, reading unicode strings "
"instead of bytes with :func:`generate_tokens`::"
msgstr ""

-#: ../../library/tokenize.rst:289
+#: ../../library/tokenize.rst:288
msgid ""
"import tokenize\n"
"\n"
@@ -455,11 +454,11 @@ msgstr ""
"    for token in tokens:\n"
"        print(token)"

-#: ../../library/tokenize.rst:296
+#: ../../library/tokenize.rst:295
msgid "Or reading bytes directly with :func:`.tokenize`::"
msgstr ""

-#: ../../library/tokenize.rst:298
+#: ../../library/tokenize.rst:297
msgid ""
"import tokenize\n"
"\n"