Commit 6b103f1 (parent 3d199af): Documentation for the pyclbr and tokenize modules.
2 files changed, 102 additions and 0 deletions.

Doc/lib/libpyclbr.tex (58 additions)
\section{\module{pyclbr} ---
         Python class browser information}

\declaremodule{standard}{pyclbr}
\modulesynopsis{Supports information extraction for a Python class
                browser.}
\sectionauthor{Fred L. Drake, Jr.}{[email protected]}

The \module{pyclbr} module can be used to determine some limited
information about the classes and methods defined in a module.  The
information provided is sufficient to implement a traditional
three-pane class browser.  The information is extracted from the
source code rather than from an imported module, so this module is
safe to use with untrusted source code.

\begin{funcdesc}{readmodule}{module\optional{, path}}
  % The 'inpackage' parameter appears to be for internal use only....
  Read a module and return a dictionary mapping class names to class
  descriptor objects.  The parameter \var{module} should be the name
  of a module as a string; it may be the name of a module within a
  package.  The \var{path} parameter should be a sequence, and is
  used to augment the value of \code{sys.path}, which is used to
  locate module source code.
\end{funcdesc}
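A minimal sketch of calling \function{readmodule()}; the standard
library module \code{queue} is used here purely as an example target,
and any importable module name would work:

```python
import pyclbr

# Read class information for a standard-library module.  The module
# name "queue" is only an illustration; any module on sys.path (or on
# the optional extra path sequence) can be browsed this way.
classes = pyclbr.readmodule("queue")
for name in sorted(classes):
    print(name)
```

Because the information comes from parsing the source rather than
importing it, this call has no side effects on the browsed module.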

\subsection{Class Descriptor Objects \label{pyclbr-class-objects}}

The class descriptor objects used as values in the dictionary returned
by \function{readmodule()} provide the following data members:

\begin{memberdesc}[class descriptor]{name}
  The name of the class.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{super}
  A list of class descriptors describing the immediate base classes
  of the class being described.  Classes which are named as
  superclasses but which are not discoverable by
  \function{readmodule()} are listed as strings giving the class
  names instead of as class descriptors.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{methods}
  A dictionary mapping method names to line numbers.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{file}
  The name of the file containing the class statement that defines
  the class.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{lineno}
  The line number of the class statement within the file named by
  \member{file}.
\end{memberdesc}
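The data members above can be read directly off each descriptor.  A
short sketch, again using the standard-library \code{queue} module
only as an example (note that entries in \member{super} may be either
descriptors or plain strings, so both cases are handled):

```python
import pyclbr

# Walk the descriptors and print the documented data members:
# name, super, lineno, and the methods dictionary's keys.
for name, desc in pyclbr.readmodule("queue").items():
    supers = [s if isinstance(s, str) else s.name for s in desc.super]
    print(desc.name, supers, desc.lineno, sorted(desc.methods))
```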

Doc/lib/libtokenize.tex (44 additions)
\section{\module{tokenize} ---
         Tokenizer for Python source}

\declaremodule{standard}{tokenize}
\modulesynopsis{Lexical scanner for Python source code.}
\moduleauthor{Ka Ping Yee}{}
\sectionauthor{Fred L. Drake, Jr.}{[email protected]}

The \module{tokenize} module provides a lexical scanner for Python
source code, implemented in Python.  The scanner in this module
returns comments as tokens as well, making it useful for implementing
``pretty-printers,'' including colorizers for on-screen displays.

The scanner is exposed via a single function:

\begin{funcdesc}{tokenize}{readline\optional{, tokeneater}}
  The \function{tokenize()} function accepts two parameters: one
  representing the input stream, and one providing an output
  mechanism for \function{tokenize()}.

  The first parameter, \var{readline}, must be a callable object
  which provides the same interface as the \method{readline()} method
  of built-in file objects (see section~\ref{bltin-file-objects}).
  Each call to the function should return one line of input as a
  string.

  The second parameter, \var{tokeneater}, must also be a callable
  object.  It is called with five parameters: the token type, the
  token string, a tuple \code{(\var{srow}, \var{scol})} specifying
  the row and column where the token begins in the source, a tuple
  \code{(\var{erow}, \var{ecol})} giving the ending position of the
  token, and the line on which the token was found.  The line passed
  is the \emph{logical} line; continuation lines are included.
\end{funcdesc}
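A sketch of a \var{tokeneater} callback receiving the five values
described above.  Note an assumption: the callback form of
\function{tokenize()} shown here is from this historical interface;
in current Python the module instead exposes the generator
\function{generate_tokens()}, which yields the same five values, so
the snippet drives the callback from that generator.

```python
import io
import tokenize

def tokeneater(type, string, start, end, line):
    # The five parameters documented above: token type, token string,
    # (srow, scol) start, (erow, ecol) end, and the logical line.
    print(tokenize.tok_name[type], repr(string), start, end)

source = "x = 1  # set x\n"
# generate_tokens() takes a readline-style callable returning one
# line per call, matching the readline parameter described above.
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    tokeneater(*tok)
```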

All constants from the \refmodule{token} module are also exported from
\module{tokenize}, as is one additional token type value that might be
passed to the \var{tokeneater} function by \function{tokenize()}:

\begin{datadesc}{COMMENT}
  Token value used to indicate a comment.
\end{datadesc}
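Since comments arrive as ordinary tokens tagged with
\constant{COMMENT}, a pretty-printer can filter them out by type.  A
small sketch (again using the modern generator interface rather than
the \var{tokeneater} callback):

```python
import io
import tokenize

source = "total = 0  # running sum\n"
# Keep only the COMMENT tokens from a piece of source code.
comments = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
    if tok.type == tokenize.COMMENT
]
print(comments)  # ['# running sum']
```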
