Python
Doc/reference
: contains the reST sources of the Python language reference https://docs.python.org/3/reference/
Doc/reference/compound_stmt.rst
: contains the description of the with statement
Grammar/python.gram
: Python Grammar File (the old-style grammar lives in Grammar/Grammar)
written as a Parsing Expression Grammar (PEG)
Notation:
* : repeat
+ : at-least-once repeat
[] : optional parts
| : alternative
() : grouping
ex: coffee
a cup is required, at least one espresso is required, water and milk are each optional, and milk comes in varieties such as full-fat, skimmed and soy
coffee: 'cup' ('espresso')+ ['water'] [milk]
milk: 'full-fat' | 'skimmed' | 'soy'
The same grammar can also be drawn as a railroad diagram.
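To see how the +, [] and | notation maps onto parsing decisions, here is a minimal hand-rolled matcher for the coffee grammar above (an illustrative sketch only; parse_coffee and its token-list input are invented and are not part of CPython):

MILK = {'full-fat', 'skimmed', 'soy'}  # milk: 'full-fat' | 'skimmed' | 'soy'

def parse_coffee(tokens):
    pos = 0
    # 'cup' is required
    if pos >= len(tokens) or tokens[pos] != 'cup':
        return False
    pos += 1
    # ('espresso')+ : at least one repetition
    espressos = 0
    while pos < len(tokens) and tokens[pos] == 'espresso':
        espressos += 1
        pos += 1
    if espressos == 0:
        return False
    # ['water'] : optional
    if pos < len(tokens) and tokens[pos] == 'water':
        pos += 1
    # [milk] : optional, one of the alternatives in the milk rule
    if pos < len(tokens) and tokens[pos] in MILK:
        pos += 1
    # the whole input must be consumed
    return pos == len(tokens)

print(parse_coffee(['cup', 'espresso', 'espresso', 'soy']))  # True
print(parse_coffee(['cup', 'water']))                        # False: no espresso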
Example
while statement:
The grammar has to account for assignment expressions, e.g. while letters := read(document, 10):, as well as the optional else: clause.
while_stmt[stmt_ty]:
| 'while' a=named_expression ':' b=block c=[else_block] { _Py_While(a, b, c, EXTRA) }
In the old-style Grammar file:
while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]
namedexpr_test: test [':=' test]
test: or_test ['if' or_test 'else' test] | lambdef
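To see what the while_stmt rule builds (its action calls _Py_While, which produces an ast.While node), one can parse a while/else with an assignment expression through the ast module. A small sketch; the source string is made up:

import ast

src = """
while letters := read(document, 10):
    print(letters)
else:
    print('done')
"""

node = ast.parse(src).body[0]
print(type(node).__name__)               # While
print(type(node.test).__name__)          # NamedExpr (the := assignment expression)
print(len(node.body), len(node.orelse))  # 1 1 (the block and the optional else_block)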
try statement:
try_stmt[stmt_ty]:
| 'try' ':' b=block f=finally_block { _Py_Try(b, NULL, NULL, f, EXTRA) }
| 'try' ':' b=block ex=except_block+ el=[else_block] f=[finally_block] { _Py_Try(b, ex, el, f, EXTRA) }
except_block[excepthandler_ty]:
| 'except' e=expression t=['as' z=NAME { z }] ':' b=block {
_Py_ExceptHandler(e, (t) ? ((expr_ty) t)->v.Name.id : NULL, b, EXTRA) }
| 'except' ':' b=block { _Py_ExceptHandler(NULL, NULL, b, EXTRA) }
finally_block[asdl_seq*]: 'finally' ':' a=block { a }
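Likewise, the b, ex, el and f pieces of try_stmt map onto the body, handlers, orelse and finalbody fields of ast.Try. A small sketch; the function names in the source string are invented:

import ast

src = """
try:
    risky()
except ValueError as err:
    handle(err)
else:
    ok()
finally:
    cleanup()
"""

node = ast.parse(src).body[0]
print(type(node).__name__)                        # Try
print([type(h).__name__ for h in node.handlers])  # ['ExceptHandler']
print(node.handlers[0].name)                      # err (the ['as' z=NAME] part)
print(len(node.orelse), len(node.finalbody))      # 1 1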
Parser Generator
Regenerating Grammar
small_stmt
We'll play around by rewriting small_stmt: change the 'pass' alternative to
('pass' | 'proceed') { _Py_Pass(EXTRA) }
then regenerate the parser with
make regen-pegen
>>> def example():
...     proceed
...
>>> example()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in example
NameError: name 'proceed' is not defined
At this point proceed still fails because only the parser sources were regenerated; the interpreter itself has to be rebuilt:
make -j2 -s
>>> def example():
...     proceed
...
>>> example()
>>>
TOKEN
Grammar/Tokens
: we'll go through this; tokenizing the demo file shows the tokens in use
$ ./python.exe -m tokenize -e cpython-internal/test_tokens.py
0,0-0,0: ENCODING 'utf-8'
1,0-1,18: COMMENT '# Demo application'
1,18-1,20: NL '\r\n'
2,0-2,3: NAME 'def'
2,4-2,15: NAME 'my_function'
2,15-2,16: LPAR '('
2,16-2,17: RPAR ')'
2,17-2,18: COLON ':'
2,18-2,20: NEWLINE '\r\n'
3,0-3,4: INDENT '    '
3,4-3,11: NAME 'proceed'
3,11-3,12: NEWLINE ''
4,0-4,0: DEDENT ''
4,0-4,0: ENDMARKER ''
Lib/tokenize.py
: the pure-Python implementation of the tokenize module used above
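The same token stream can also be produced programmatically via the tokenize module. A small sketch, with the demo source inlined as a string instead of read from the test file:

import io
import tokenize

source = "# Demo application\ndef my_function():\n    proceed\n"

# generate_tokens() takes a readline callable and yields TokenInfo tuples,
# similar to what `python -m tokenize` printed above (without -e, operator
# tokens such as ( and ) show up as the generic OP type).
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tok.start, tok.end, tokenize.tok_name[tok.type], repr(tok.string))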