
Commit 20b1bfa

Browse files
committed
Issue #26127: Fix links in tokenize documentation; patch by Silent Ghost
1 parent a3a5833 commit 20b1bfa

1 file changed

Lines changed: 7 additions & 7 deletions

File tree

Doc/library/tokenize.rst

Lines changed: 7 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -27,7 +27,7 @@ The primary entry point is a :term:`generator`:

 .. function:: tokenize(readline)

-   The :func:`tokenize` generator requires one argument, *readline*, which
+   The :func:`.tokenize` generator requires one argument, *readline*, which
    must be a callable object which provides the same interface as the
    :meth:`io.IOBase.readline` method of file objects. Each call to the
    function should return one line of input as bytes.
@@ -52,7 +52,7 @@ The primary entry point is a :term:`generator`:

    .. versionchanged:: 3.3
       Added support for ``exact_type``.

-:func:`tokenize` determines the source encoding of the file by looking for a
+:func:`.tokenize` determines the source encoding of the file by looking for a
 UTF-8 BOM or encoding cookie, according to :pep:`263`.

@@ -74,7 +74,7 @@ All constants from the :mod:`token` module are also exported from

 .. data:: ENCODING

    Token value that indicates the encoding used to decode the source bytes
-   into text. The first token returned by :func:`tokenize` will always be an
+   into text. The first token returned by :func:`.tokenize` will always be an
    ENCODING token.
@@ -96,17 +96,17 @@ write back the modified script.
    positions) may change.

    It returns bytes, encoded using the ENCODING token, which is the first
-   token sequence output by :func:`tokenize`.
+   token sequence output by :func:`.tokenize`.


-:func:`tokenize` needs to detect the encoding of source files it tokenizes. The
+:func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
 function it uses to do this is available:

 .. function:: detect_encoding(readline)

    The :func:`detect_encoding` function is used to detect the encoding that
    should be used to decode a Python source file. It requires one argument,
-   readline, in the same way as the :func:`tokenize` generator.
+   readline, in the same way as the :func:`.tokenize` generator.

    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (not decoded from bytes) it has read
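A short illustration (not part of the commit) of the `detect_encoding` behavior described in the hunk above: it reads at most two lines and returns the encoding plus the raw lines it consumed.

```python
import io
import tokenize

# A source file declaring its encoding via a PEP 263 cookie on the first line.
src = b"# -*- coding: latin-1 -*-\nprint('hi')\n"
encoding, lines = tokenize.detect_encoding(io.BytesIO(src).readline)

print(encoding)  # 'iso-8859-1' (detect_encoding normalizes 'latin-1')
print(lines)     # the single undecoded line that was read to find the cookie
```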
@@ -120,7 +120,7 @@ function it uses to do this is available:
    If no encoding is specified, then the default of ``'utf-8'`` will be
    returned.

-   Use :func:`open` to open Python source files: it uses
+   Use :func:`.open` to open Python source files: it uses
    :func:`detect_encoding` to detect the file encoding.