This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
A Python library providing language extensions and utilities inspired by Lisp, Haskell, and functional programming. Three-tier architecture:

- Pure Python layer (`unpythonic/`): ~45 modules of functional utilities (curry, memoize, fold, TCO, conditions/restarts, dynamic variables, linked lists, etc.). No macro dependency.
- Macro layer (`unpythonic/syntax/`): syntactic macros via `mcpyrate`, providing cleaner syntax for let-bindings, autocurry, lazify, TCO, continuations, etc.
- Dialect layer (`unpythonic/dialects/`): full language variants (Lispython, Listhell, Pytkell) built on the macro layer.
Beyond the language-extension core, unpythonic also fills gaps in the Python standard library — cases where the stdlib almost gets it right, then punts at the last moment. `memoize` adds exception-replay machinery that `functools.lru_cache` lacks; the `scan`/`fold` suite brings Racket-level completeness to what `itertools` sketches; `env` supports several ABC protocols that `types.SimpleNamespace` doesn't. Smaller general-purpose utilities (e.g. `timer`, `si_prefix`, `environ_override`) also land here when they prove themselves as recurring needs across projects.
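To make the exception-replay idea concrete, here is a minimal stdlib-only sketch of a memoizer that caches raised exceptions as well as return values. This is an illustration of the concept only, not unpythonic's actual `memoize` implementation; the decorator name is hypothetical.

```python
# Hedged sketch: a memoizer that caches not just return values but also
# raised exceptions, replaying them on later calls with the same arguments.
# NOT unpythonic's actual `memoize` -- an illustration of the concept only.
import functools

def memoize_with_replay(f):
    cache = {}  # args -> ("ok", value) or ("raised", exception)
    @functools.wraps(f)
    def memoized(*args):
        if args not in cache:
            try:
                cache[args] = ("ok", f(*args))
            except Exception as exc:
                cache[args] = ("raised", exc)
        kind, payload = cache[args]
        if kind == "raised":
            raise payload  # replay the cached exception
        return payload
    return memoized

calls = []  # track how many times the underlying function actually runs

@memoize_with_replay
def risky(x):
    calls.append(x)
    if x < 0:
        raise ValueError("negative")
    return x * 2
```

Calling `risky(-1)` twice raises `ValueError` both times while running the body only once — the second raise is replayed from the cache, which `functools.lru_cache` does not do.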
Released as 2.0.0 in March 2026 (floor bump + `mcpyrate` 4.0.0 dependency). The public API (everything in `__all__`) should remain backward-compatible. Prefer non-breaking solutions when possible.
Uses PDM with pdm-backend. Python 3.10–3.14, also PyPy 3.11.
```bash
# Set up development environment
pdm install                 # creates .venv/ and installs deps
pdm use --venv in-project
```

Prefix commands with `pdm run` if the venv is not active.
The project venv is managed by PDM (`pdm venv create`, `pdm use --venv in-project`). To switch Python versions, remove the old venv and create a new one:

```bash
pdm venv remove in-project
pdm config venv.in_project true
pdm venv create 3.14        # or whichever version
pdm use --venv in-project
pdm install
```

**Critical:** Never compile `.py` files in this project using `py_compile`, `python -m compileall`, `--compile`, or any other mechanism that bypasses the macro expander. Stale `.pyc` files compiled without macro support will break macro imports (symptom: `ImportError: cannot import name 'macros' from 'mcpyrate.quotes'`). If this happens, clean the caches with `macropython -c unpythonic` and re-run.
Custom test framework (`unpythonic.test.fixtures`, not pytest). Tests use macros (`test[]`, `test_raises[]`) and conditions/restarts for reporting. The test runner does not need the `macropython` wrapper — it activates macros via `import mcpyrate.activate`. Note: the test framework is at `unpythonic/test/` (singular); actual tests are in `tests/` (plural) subdirectories.
```bash
# Run all tests (from repo root)
python runtests.py

# Run a single test module directly
python -c "import mcpyrate.activate; from unpythonic.tests.test_fun import runtests; runtests()"

# Run macro tests similarly
python -c "import mcpyrate.activate; from unpythonic.syntax.tests.test_letdo import runtests; runtests()"
```

Test suites discovered by `runtests.py`:

- `unpythonic/tests/test_*.py` — pure Python features
- `unpythonic/net/tests/test_*.py` — REPL server/client
- `unpythonic/syntax/tests/test_*.py` — macro features
- `unpythonic/dialects/tests/test_*.py` — dialect features
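A hypothetical sketch of how a runner could discover those four suites (this is not the actual `runtests.py` logic; the function and glob list are illustrative):

```python
# Hypothetical discovery sketch: glob the four tests/ directories for
# test_*.py and convert paths to dotted module names. Not the actual
# runtests.py implementation.
from pathlib import Path

SUITE_GLOBS = [
    "unpythonic/tests/test_*.py",
    "unpythonic/net/tests/test_*.py",
    "unpythonic/syntax/tests/test_*.py",
    "unpythonic/dialects/tests/test_*.py",
]

def discover(root="."):
    root = Path(root)
    paths = sorted(p for pat in SUITE_GLOBS for p in root.glob(pat))
    # e.g. unpythonic/tests/test_fun.py -> "unpythonic.tests.test_fun"
    return [".".join(p.relative_to(root).with_suffix("").parts) for p in paths]
```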
Each test module exports a `runtests()` function. Tests are grouped with `testset()` context managers.
Reading test results: the framework reports Pass/Fail/Error/Total (plus an optional `+ N Warn`) per testset. Nested testsets show hierarchy with indentation and asterisk depth (`**`, `****`, `******`, etc.). The distinction between Fail and Error is semantically load-bearing — see the next subsection.
Part of unpythonic's public API (`unpythonic.test.fixtures`, `unpythonic.test.runner`). Reusable by any project that writes macro-enabled Python tests. Rationale for not using pytest:
- pytest installs an import hook that rewrites `assert` statements (to give you the informative "assert x == 42 where x was 41" diagnostics you're used to).
- mcpyrate installs its own import hook to macro-expand source before compilation.
- Python only supports one source-rewriting import hook at a time; the two loaders can't be chained. So if you want both "nice assert messages" and "macro expansion", you have to pick one — and macro expansion is non-negotiable for code that uses macros.
`unpythonic.test.fixtures` is the answer: instead of overriding the `assert` keyword, it provides `test[expr]`, `test_raises[cls, expr]`, `test_signals[cls, expr]`, and `warn[msg]` macros that construct test assertions at the AST level and route results through mcpyrate's condition system. The result categories:
- Pass: the `test[...]` expression evaluated to a truthy value (or `test_raises[...]` saw exactly the expected exception, etc.). The test ran to completion and met its expectation.
- Fail: the test ran to completion, but the expectation was not met — `test[x == 42]` saw `x == 41`, or `test_raises[TypeError, ...]` saw the expression return normally. This is the "your code is wrong" category.
- Error: the test did not run to completion. An unhandled exception (or unhandled `error`/`cerror` condition) escaped the `test[...]` expression itself. This is the "the test infrastructure or the code under test crashed in a way the test didn't expect" category — semantically distinct from Fail, because the test never got to judge the expectation. An Error in CI means something is broken in a way that needs investigation, not just "the assertion didn't hold."
- Warn: advisory, emitted via `warn[msg]` (or by the runner itself for version-gated skips like "this test requires Python 3.14+, skipping on 3.13"). Does not count toward Pass/Fail/Error totals and does not fail the testset. Used for temporarily disabled tests, optional-dependency skips, and similar soft signals.
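The Pass/Fail/Error taxonomy can be sketched in plain Python. This is a simplified analogue for illustration only — the real framework implements it with macros and mcpyrate's condition system, not a `classify` helper:

```python
# Simplified plain-Python analogue of the Pass/Fail/Error taxonomy.
# The `classify` helper is hypothetical, purely to illustrate the semantics.
def classify(thunk):
    """Run a zero-argument callable returning the test expression's value."""
    try:
        result = thunk()
    except Exception:
        return "Error"  # the test never got to judge its expectation
    return "Pass" if result else "Fail"  # expectation met vs. not met
```

The key point: a falsy result is a judged expectation (Fail), while an escaped exception means the judgment never happened (Error).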
Capturing values with `the[]`: when a `test[]` fails, you want to see what the interesting subexpression actually evaluated to, not just "the assertion was falsy." The `the[...]` helper macro marks a subexpression for capture; at run time, when the test fires, the framework formats a failure message with the source text and captured value of each `the[]`. The name is chosen to mostly preserve English reading order at the use site (`test[the[x] == 42]` reads roughly as "test that the x equals 42"), and is also a nod to Common Lisp's `THE` special form — though CL's `THE` is a type-declaration construct, so it's a name pun, not a semantic port. Heads-up for grepping: `the` is a word-boundary nightmare; anchor searches with `\bthe\[`. Usage:
- `test[x == 42]` → on failure, auto-captures and reports `x` (leftmost term of a comparison).
- `test["green tea" == the[vert]]` → on failure, reports `vert` and its value.
- `test[f(the[a]) == g(the[b])]` → reports both `a` and `b`, in evaluation order. A `test[]` can contain any number of `the[]`, including nested (`the[outer(the[inner])]`).
- Default: if the top-level expression of `test[]` is a comparison and no explicit `the[]` is present, the leftmost term is implicitly wrapped — so `test[x == 42]` already reports `x` without you having to write `the[x]`. This is the common case.
- Use explicit `the[]` when you want to capture something other than the LHS of the top-level comparison — e.g. a subexpression inside a function call, a term in a non-comparison assertion, or multiple values at once.
- Compound LHS — choose the capture granularity:
  - Auto-capture wraps the LHS as-written. For `test[reply["status"] == "ok"]` that captures `reply["status"]`, and a failure shows `"failed"` — the leaf value — not the full dict.
  - Any explicit `the[]` anywhere in the expression disables auto-capture. `test[the[reply]["status"] == "ok"]` captures `reply` instead — the whole dict, useful for seeing a `"reason"` field the server attached alongside `"status": "failed"`.
  - Both are valid. Decide by "what value on failure would I actually want to see?", not "is `the[]` redundant?". Leaf is enough when it's self-explanatory (`timer.dt == 0.0`). Wrap the container when the leaf is lossy (`reply["status"] == "ok"` — `"failed"` doesn't tell you why).
- The helper is smart enough to skip trivial captures (literal values), so `test[4 in the[(1, 2, 3)]]` won't clutter the output with `(1, 2, 3) = (1, 2, 3)`.
- Not supported inside `test_raises`, `test_signals`, `fail`, `error`, or `warn` — only in `test[...]` and `with test:` blocks.
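The "implicitly wrap the leftmost term of a comparison" rule can be illustrated at the AST level with the stdlib `ast` module. This is a sketch of the rule only, not the actual `the[]` macro implementation, and `auto_capture_source` is a hypothetical name:

```python
# Stdlib-only sketch of the auto-capture rule: given the source of a test
# expression, report the leftmost term of a top-level comparison as the
# subexpression that would be captured. Not the actual macro implementation.
import ast

def auto_capture_source(expr_src):
    """Return the source of the auto-captured subexpression, or None."""
    tree = ast.parse(expr_src, mode="eval").body
    if isinstance(tree, ast.Compare):
        return ast.unparse(tree.left)  # leftmost term of the comparison
    return None  # not a comparison: nothing is implicitly wrapped
```

For example, `auto_capture_source("x == 42")` yields `"x"`, matching the documented default, while a non-comparison expression yields `None` (requiring an explicit `the[]` to capture anything).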
Debugging cheat sheet: a small number of Warns on CI is expected (optional dependencies, version gates). Fail means a real expectation mismatch — read the captured values from `the[]` in the message. Error is the one you should always look at first: it means control flow in the test went somewhere unexpected, and the count alone won't tell you where. The log above the summary line has the actual traceback.
```bash
ruff check <changed .py files>   # primary linter (config in pyproject.toml)
```

A legacy `flake8rc` is also present (used by Emacs flycheck, not by CI or CC).
- Regular code in `unpythonic/`, macros in `unpythonic/syntax/`, REPL networking in `unpythonic/net/`, dialects in `unpythonic/dialects/`.
- Tests are in `tests/` (plural) subdirectories under the code they test. The testing framework lives at `unpythonic/test/` (singular).
- Each module declares `__all__` explicitly for public API. The top-level `__init__.py` re-exports via star imports.
- Import style: use `from ... import ...` (not `import ...`). The from-import syntax is mandatory for macro imports and used consistently throughout. Don't rename unpythonic features with `as` — macro code depends on the original bare names.
- No star imports in user code (only in the top-level `__init__.py` for re-export).
- Curry-friendly signatures: parameters that change least often go on the left. Use `def f(func, thing0, *things)` (not `def f(func, *things)`) when at least one `thing` is required, so `curry` knows when to trigger.
def f(func, thing0, *things)(notdef f(func, *things)) when at least onethingis required, socurryknows when to trigger. - Macros are the nuclear option: Only make a macro when a regular function can't do the job. Prefer a pure-Python core with a thin macro layer for UX.
- Macro `**kw` passing: use `dyn` (dynamic variables) to pass mcpyrate `**kw` arguments through to syntax transformers, rather than threading them through parameter lists.
- Variable names: descriptive but compact. Prefer `theconstant` over `node` when the type matters, `thebody` over `b` when scope is more than a few lines. Avoid generic names like `tmp`, `data`, `x` unless scope is trivially small. In test code using the `the[]` macro, avoid `the`-prefixed names — `the[theconstant]` isn't English. Use e.g. `constant_node` instead.
- Line width ~110 characters. Docstrings in reStructuredText.
- Module size target: ~100–300 SLOC, rough max ~700 lines. Some modules are longer when appropriate (e.g. `syntax/tailtools.py` at ~1600 lines). Never split just because the line count was exceeded.
- Type annotations: new code should include type annotations. Existing unannotated code will be gradually updated. Some deeply Lispy parts (curry, TCO, conditions/restarts) may resist typing.
- Top-level re-exports: most modules re-export via `from .module import *` in `__init__.py`. A small number of names use explicit aliased imports when the module-level name reads naturally at its own level but needs qualification at the top level (e.g. `environ.override` → `environ_override`, `lispylet.let` → `ordered_let`).
- Dependencies: avoid external dependencies. `mcpyrate` is the only allowed external dep and must remain strictly optional for the pure-Python layer.
- `curry` has cross-cutting behavior — grep for it when investigating interactions. `@generic` (multiple dispatch) similarly has cross-cutting concerns.
- The `lazify` macro: also grep for `passthrough_lazy_args` and `maybe_force_args`.
- The `continuations` macro builds on `tco` — read `tco` first when studying continuations.
- `unpythonic.syntax.scopeanalyzer` implements lexical scope analysis for macros that interact with Python's scoping rules (notably `let`).
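The curry-friendly-signature rationale can be sketched with a toy curry that fires once all *required* positional parameters are supplied. This is a stdlib-only illustration of the convention, not unpythonic's actual `curry`; `toy_curry` and `mymap` are hypothetical names:

```python
# Why `def f(func, thing0, *things)` beats `def f(func, *things)` for
# currying: a toy curry that calls through only once every parameter
# WITHOUT a default has been supplied. With a bare `*things` signature,
# nothing is required, so partial application has no trigger point.
# NOT unpythonic's actual `curry`.
import inspect

def toy_curry(f, *collected):
    required = sum(
        1 for p in inspect.signature(f).parameters.values()
        if p.kind is p.POSITIONAL_OR_KEYWORD and p.default is p.empty
    )
    if len(collected) >= required:
        return f(*collected)                     # enough args: call through
    return lambda *more: toy_curry(f, *collected, *more)  # keep collecting

def mymap(func, seq0, *seqs):  # curry-friendly: two required parameters
    return list(map(func, seq0, *seqs))

double_all = toy_curry(mymap, lambda x: 2 * x)   # waits for seq0
```

Because `seq0` is a required parameter, `toy_curry` knows to wait for it; had `mymap` been declared `def mymap(func, *seqs)`, the call would have fired immediately with no sequences.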