path: root/contrib/python/Twisted/py3/twisted/persisted/_tokenize.py
author    robot-piglet <[email protected]>    2025-06-22 18:50:56 +0300
committer robot-piglet <[email protected]>    2025-06-22 19:04:42 +0300
commit c7cbc6d480c5488ff6e921c709680fd2c1340a10 (patch)
tree   10843f44b67c0fb5717ad555556064095f701d8c /contrib/python/Twisted/py3/twisted/persisted/_tokenize.py
parent 26d391cdb94d2ce5efc8d0cc5cea7607dc363c0b (diff)
Intermediate changes
commit_hash:28750b74281710ec1ab5bdc2403c8ab24bdd164b
Diffstat (limited to 'contrib/python/Twisted/py3/twisted/persisted/_tokenize.py')
-rw-r--r--    contrib/python/Twisted/py3/twisted/persisted/_tokenize.py    16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/contrib/python/Twisted/py3/twisted/persisted/_tokenize.py b/contrib/python/Twisted/py3/twisted/persisted/_tokenize.py
index 2ae94292a04..aefd22587f8 100644
--- a/contrib/python/Twisted/py3/twisted/persisted/_tokenize.py
+++ b/contrib/python/Twisted/py3/twisted/persisted/_tokenize.py
@@ -15,11 +15,11 @@ It accepts a readline-like method which is called repeatedly to get the
 next line of input (or b"" for EOF). It generates 5-tuples with these
 members:
-    the token type (see token.py)
-    the token (a string)
-    the starting (row, column) indices of the token (a 2-tuple of ints)
-    the ending (row, column) indices of the token (a 2-tuple of ints)
-    the original line (string)
+    - the token type (see token.py)
+    - the token (a string)
+    - the starting (row, column) indices of the token (a 2-tuple of ints)
+    - the ending (row, column) indices of the token (a 2-tuple of ints)
+    - the original line (string)
 It is designed to match the working of the Python tokenizer exactly, except
 that it produces COMMENT tokens for comments and gives type OP for all
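The 5-tuple layout restated by this hunk can be seen directly with the standard-library tokenize module, of which twisted.persisted._tokenize is a vendored copy; a minimal sketch (the stdlib module stands in for the vendored one here):

```python
import io
import tokenize  # stdlib stand-in for the vendored twisted.persisted._tokenize

source = b"x = 1\n"
for tok in tokenize.tokenize(io.BytesIO(source).readline):
    # Each token is a 5-tuple: (type, string, start, end, line), where
    # start/end are (row, column) pairs and line is the original source line.
    print(tok.type, repr(tok.string), tok.start, tok.end, repr(tok.line))
```

The first token emitted is an ENCODING token; the NAME token for `x` then starts at row 1, column 0, matching the (row, column) convention in the docstring.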
@@ -446,9 +446,9 @@ def untokenize(iterable):
     only two tokens are passed, the resulting output is poor.
     Round-trip invariant for full input:
-        Untokenized source will match input source exactly
+        Untokenized source will match input source exactly
-    Round-trip invariant for limited input:
+    Round-trip invariant for limited input::
         # Output bytes will tokenize back to the input
         t1 = [tok[:2] for tok in tokenize(f.readline)]
         newcode = untokenize(t1)
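The limited-input round-trip invariant this hunk documents can be exercised end to end; a runnable sketch using the stdlib tokenize module (equivalent to the vendored copy), with io.BytesIO in place of the file `f` from the docstring:

```python
import io
import tokenize  # stdlib stand-in for the vendored twisted.persisted._tokenize

source = b"if x:\n    y = 2\n"
# Limited input: keep only the (type, string) pairs, as in the docstring.
t1 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(source).readline)]
newcode = tokenize.untokenize(t1)  # bytes; spacing may differ from `source`
# Round-trip invariant for limited input: the output tokenizes back to t1.
t2 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(newcode).readline)]
assert t1 == t2
```

With only (type, string) pairs the exact spacing of `source` is not recoverable, which is why the weaker invariant compares token streams rather than bytes.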
@@ -591,7 +591,7 @@ def tokenize(readline):
     must be a callable object which provides the same interface as the
     readline() method of built-in file objects. Each call to the function
     should return one line of input as bytes. Alternatively, readline
-    can be a callable function terminating with StopIteration:
+    can be a callable function terminating with StopIteration::
         readline = open(myfile, 'rb').__next__ # Example of alternate readline
     The generator produces 5-tuples with these members: the token type; the
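The alternate readline form this hunk documents also works with any bytes-line iterator, since iteration ends with StopIteration just like an exhausted file; a small sketch with the stdlib tokenize module (io.BytesIO stands in for the opened file from the docstring):

```python
import io
import tokenize  # stdlib stand-in for the vendored twisted.persisted._tokenize

buf = io.BytesIO(b"a = 1\nb = 2\n")
# Instead of buf.readline, pass the iterator's __next__: it also returns one
# bytes line per call and signals end of input by raising StopIteration.
toks = list(tokenize.tokenize(buf.__next__))
names = [t.string for t in toks if t.type == tokenize.NAME]
print(names)  # ['a', 'b']
```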