jaunt 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
jaunt-0.2.0/.gitignore ADDED
@@ -0,0 +1,19 @@
+ .env
+ .DS_Store
+ __pycache__/
+ *.py[cod]
+
+ **/__generated__/
+
+ .agents/
+
+ .venv/
+ .ruff_cache/
+ .pytest_cache/
+ .mypy_cache/
+ .ty_cache/
+ **/.jaunt/
+
+ dist/
+ build/
+ *.egg-info/
jaunt-0.2.0/PKG-INFO ADDED
@@ -0,0 +1,157 @@
+ Metadata-Version: 2.4
+ Name: jaunt
+ Version: 0.2.0
+ Summary: Spec-driven code generation framework: write intent as decorated Python stubs, generate implementations and tests with LLMs.
+ Project-URL: Homepage, https://github.com/creatorrr/jaunt
+ Project-URL: Repository, https://github.com/creatorrr/jaunt
+ Project-URL: Issues, https://github.com/creatorrr/jaunt/issues
+ Keywords: cli,code-generation,developer-tools,llm,testing
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Code Generators
+ Classifier: Topic :: Software Development :: Testing
+ Classifier: Typing :: Typed
+ Requires-Python: >=3.12
+ Provides-Extra: all
+ Requires-Dist: anthropic<1,>=0.39.0; extra == 'all'
+ Requires-Dist: fastmcp>=2.0.0; extra == 'all'
+ Requires-Dist: openai<2,>=1.0.0; extra == 'all'
+ Requires-Dist: watchfiles>=1.0.0; extra == 'all'
+ Provides-Extra: anthropic
+ Requires-Dist: anthropic<1,>=0.39.0; extra == 'anthropic'
+ Provides-Extra: mcp
+ Requires-Dist: fastmcp>=2.0.0; extra == 'mcp'
+ Provides-Extra: openai
+ Requires-Dist: openai<2,>=1.0.0; extra == 'openai'
+ Provides-Extra: watch
+ Requires-Dist: watchfiles>=1.0.0; extra == 'watch'
+ Description-Content-Type: text/markdown
+
+ # Jaunt
+
+ Jaunt is a small Python library + CLI for **spec-driven code generation**:
+
+ - Write implementation intent as normal Python stubs decorated with `@jaunt.magic(...)`.
+ - Optionally write test intent as stubs decorated with `@jaunt.test(...)`.
+ - Jaunt generates real modules under `__generated__/` using an LLM backend (OpenAI or Anthropic).
+
+ ## Installation
+
+ ```bash
+ pip install jaunt[openai]     # for OpenAI
+ pip install jaunt[anthropic]  # for Anthropic/Claude
+ pip install jaunt[all]        # both providers
+ ```
+
+ ## Quickstart (This Repo)
+
+ Prereqs: `uv` installed.
+
+ ```bash
+ uv sync
+ export OPENAI_API_KEY=... # or ANTHROPIC_API_KEY for Claude
+ uv run jaunt --version
+ ```
+
+ See `docs-site/` for rendered docs, or `DOCS.md` for a plain-text walkthrough.
+
+ All examples live under `examples/`. See `examples/README.md` for the full list.
+
+ ### Hackathon Demo (JWT Auth)
+
+ Headline demo: **JWT auth** (the "wow gap" example: short spec, real generated glue + tests).
+
+ ```bash
+ # Generate implementations for @jaunt.magic specs.
+ uv run jaunt build --root examples/jwt_auth
+
+ # Generate pytest tests for @jaunt.test specs and run them.
+ PYTHONPATH=examples/jwt_auth/src uv run jaunt test --root examples/jwt_auth
+ ```
+
+ ## Eval Suite
+
+ Run the built-in eval suite against your configured backend:
+
+ ```bash
+ uv run jaunt eval
+ uv run jaunt eval --model gpt-4o
+ uv run jaunt eval --provider anthropic --model claude-sonnet-4-5-20250929
+ ```
+
+ Compare explicit provider/model targets:
+
+ ```bash
+ uv run jaunt eval --compare openai:gpt-4o anthropic:claude-sonnet-4-5-20250929
+ ```
+
+ Eval outputs are written under `.jaunt/evals/<timestamp>/`.
+
+ Prompt snapshots:
+
+ ```bash
+ uv run pytest tests/test_prompt_snapshots.py --snapshot-update
+ ```
+
+ ## Auto-Generate PyPI Skills (Build)
+
+ `jaunt build` includes a best-effort pre-build step that auto-generates “skills” for external libraries your project imports and injects them into the build prompt.
+
+ What happens:
+
+ - Scan `paths.source_roots` for `import ...` / `from ... import ...` (ignores stdlib, internal modules, and relative imports).
+ - Resolve imports to installed PyPI distributions + versions from the current environment.
+ - Ensure a skill exists per distribution at:
+   - `<project_root>/.agents/skills/<dist-normalized>/SKILL.md`
+ - If missing/outdated, fetch the exact PyPI README for `<dist>==<version>` and generate `SKILL.md` using the configured LLM provider.
+ - Inject the concatenated skills text into the build LLM prompt.
+
+ Overwrite rules:
+
+ - Jaunt only overwrites a skill if it was previously Jaunt-generated (it has a `<!-- jaunt:skill=pypi ... -->` header) and the installed version changed.
+ - If the header is missing, the file is treated as user-managed and will never be overwritten.
+
+ Failure mode: warnings are printed to stderr, and the build continues without the missing skills.
+
+ ## Docs Site (Fumadocs)
+
+ The repository includes a Fumadocs (Next.js) documentation site under `docs-site/`.
+
+ ```bash
+ cd docs-site
+ npm run dev
+ ```
+
+ ## Publish to PyPI
+
+ If you keep your token in `.env` as `UV_PUBLISH_TOKEN=...`, load it into your shell first:
+
+ ```bash
+ set -a
+ source .env
+ set +a
+ ```
+
+ Build and validate artifacts:
+
+ ```bash
+ uv build
+ uvx twine check dist/*
+ ```
+
+ Upload to PyPI:
+
+ ```bash
+ uv publish --check-url https://pypi.org/simple/
+ ```
+
+ ## Dev
+
+ ```bash
+ uv run ruff check .
+ uv run ty check
+ uv run pytest
+ ```
jaunt-0.2.0/README.md ADDED
@@ -0,0 +1,125 @@
+ # Jaunt
+
+ Jaunt is a small Python library + CLI for **spec-driven code generation**:
+
+ - Write implementation intent as normal Python stubs decorated with `@jaunt.magic(...)`.
+ - Optionally write test intent as stubs decorated with `@jaunt.test(...)`.
+ - Jaunt generates real modules under `__generated__/` using an LLM backend (OpenAI or Anthropic).
+
+ ## Installation
+
+ ```bash
+ pip install jaunt[openai]     # for OpenAI
+ pip install jaunt[anthropic]  # for Anthropic/Claude
+ pip install jaunt[all]        # both providers
+ ```
+
+ ## Quickstart (This Repo)
+
+ Prereqs: `uv` installed.
+
+ ```bash
+ uv sync
+ export OPENAI_API_KEY=... # or ANTHROPIC_API_KEY for Claude
+ uv run jaunt --version
+ ```
+
+ See `docs-site/` for rendered docs, or `DOCS.md` for a plain-text walkthrough.
+
+ All examples live under `examples/`. See `examples/README.md` for the full list.
+
+ ### Hackathon Demo (JWT Auth)
+
+ Headline demo: **JWT auth** (the "wow gap" example: short spec, real generated glue + tests).
+
+ ```bash
+ # Generate implementations for @jaunt.magic specs.
+ uv run jaunt build --root examples/jwt_auth
+
+ # Generate pytest tests for @jaunt.test specs and run them.
+ PYTHONPATH=examples/jwt_auth/src uv run jaunt test --root examples/jwt_auth
+ ```
+
+ ## Eval Suite
+
+ Run the built-in eval suite against your configured backend:
+
+ ```bash
+ uv run jaunt eval
+ uv run jaunt eval --model gpt-4o
+ uv run jaunt eval --provider anthropic --model claude-sonnet-4-5-20250929
+ ```
+
+ Compare explicit provider/model targets:
+
+ ```bash
+ uv run jaunt eval --compare openai:gpt-4o anthropic:claude-sonnet-4-5-20250929
+ ```
+
+ Eval outputs are written under `.jaunt/evals/<timestamp>/`.
+
+ Prompt snapshots:
+
+ ```bash
+ uv run pytest tests/test_prompt_snapshots.py --snapshot-update
+ ```
+
+ ## Auto-Generate PyPI Skills (Build)
+
+ `jaunt build` includes a best-effort pre-build step that auto-generates “skills” for external libraries your project imports and injects them into the build prompt.
+
+ What happens:
+
+ - Scan `paths.source_roots` for `import ...` / `from ... import ...` (ignores stdlib, internal modules, and relative imports).
+ - Resolve imports to installed PyPI distributions + versions from the current environment.
+ - Ensure a skill exists per distribution at:
+   - `<project_root>/.agents/skills/<dist-normalized>/SKILL.md`
+ - If missing/outdated, fetch the exact PyPI README for `<dist>==<version>` and generate `SKILL.md` using the configured LLM provider.
+ - Inject the concatenated skills text into the build LLM prompt.
+
+ Overwrite rules:
+
+ - Jaunt only overwrites a skill if it was previously Jaunt-generated (it has a `<!-- jaunt:skill=pypi ... -->` header) and the installed version changed.
+ - If the header is missing, the file is treated as user-managed and will never be overwritten.
+
+ Failure mode: warnings are printed to stderr, and the build continues without the missing skills.
+
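The overwrite rule above amounts to a small version check. The docs only show the `<!-- jaunt:skill=pypi ... -->` prefix of the header, so the `version=` field parsed below is an assumption for illustration, and `should_overwrite_skill` is a hypothetical name.

```python
import re
from pathlib import Path

# Hypothetical header shape; the real header format is only partially shown in the docs.
HEADER_RE = re.compile(r"<!--\s*jaunt:skill=pypi\s+version=(?P<version>\S+)\s*-->")


def should_overwrite_skill(skill_path: Path, installed_version: str) -> bool:
    """Overwrite only jaunt-generated skills whose recorded version is stale."""
    if not skill_path.exists():
        return True  # nothing there yet: generate it
    match = HEADER_RE.search(skill_path.read_text())
    if match is None:
        return False  # no jaunt header: user-managed, never overwrite
    return match.group("version") != installed_version
```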
+ ## Docs Site (Fumadocs)
+
+ The repository includes a Fumadocs (Next.js) documentation site under `docs-site/`.
+
+ ```bash
+ cd docs-site
+ npm run dev
+ ```
+
+ ## Publish to PyPI
+
+ If you keep your token in `.env` as `UV_PUBLISH_TOKEN=...`, load it into your shell first:
+
+ ```bash
+ set -a
+ source .env
+ set +a
+ ```
+
+ Build and validate artifacts:
+
+ ```bash
+ uv build
+ uvx twine check dist/*
+ ```
+
+ Upload to PyPI:
+
+ ```bash
+ uv publish --check-url https://pypi.org/simple/
+ ```
+
+ ## Dev
+
+ ```bash
+ uv run ruff check .
+ uv run ty check
+ uv run pytest
+ ```
@@ -0,0 +1,76 @@
+ [project]
+ name = "jaunt"
+ version = "0.2.0"
+ description = "Spec-driven code generation framework: write intent as decorated Python stubs, generate implementations and tests with LLMs."
+ readme = "README.md"
+ requires-python = ">=3.12"
+ keywords = ["llm", "code-generation", "testing", "developer-tools", "cli"]
+ classifiers = [
+     "Development Status :: 4 - Beta",
+     "Intended Audience :: Developers",
+     "Programming Language :: Python :: 3",
+     "Programming Language :: Python :: 3 :: Only",
+     "Programming Language :: Python :: 3.12",
+     "Topic :: Software Development :: Code Generators",
+     "Topic :: Software Development :: Testing",
+     "Typing :: Typed",
+ ]
+ dependencies = []
+
+ [project.urls]
+ Homepage = "https://github.com/creatorrr/jaunt"
+ Repository = "https://github.com/creatorrr/jaunt"
+ Issues = "https://github.com/creatorrr/jaunt/issues"
+
+ [project.optional-dependencies]
+ openai = ["openai>=1.0.0,<2"]
+ anthropic = ["anthropic>=0.39.0,<1"]
+ watch = ["watchfiles>=1.0.0"]
+ mcp = ["fastmcp>=2.0.0"]
+ all = ["openai>=1.0.0,<2", "anthropic>=0.39.0,<1", "watchfiles>=1.0.0", "fastmcp>=2.0.0"]
+
+ [project.scripts]
+ jaunt = "jaunt.cli:main"
+
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["src/jaunt"]
+ include = [
+     "src/jaunt/py.typed",
+     "src/jaunt/prompts/**",
+     "src/jaunt/skill/**",
+ ]
+
+ [tool.hatch.build.targets.sdist]
+ include = [
+     "src/jaunt/py.typed",
+     "src/jaunt/prompts/**",
+     "src/jaunt/skill/**",
+ ]
+
+ [dependency-groups]
+ dev = [
+     "pytest>=8",
+     "syrupy>=4.8.0",
+     "ruff>=0.9",
+     "ty",
+     "openai>=1.0.0,<2",
+     "fastmcp>=2.0.0",
+ ]
+
+ [tool.pytest.ini_options]
+ testpaths = ["tests"]
+
+ [tool.ruff]
+ line-length = 100
+ target-version = "py312"
+ exclude = ["examples/**"]
+
+ [tool.ruff.lint]
+ select = ["E", "F", "I", "UP", "B"]
+
+ [tool.ty.src]
+ exclude = ["examples/**"]
@@ -0,0 +1,32 @@
+ Output Python code only (no markdown, no fences).
+
+ Implement `{{generated_module}}` for specs from `{{spec_module}}`.
+
+ Required top-level names (must exist): {{expected_names}}
+
+ Specs:
+ {{specs_block}}
+
+ How to read the specs above:
+ - The function/class signature is the exact API you must implement (same name, parameters, type hints, return type).
+ - The docstring is your specification — implement the behavior, rules, edge cases, and error handling it describes.
+ - If a spec includes a `# Decorator prompt` section, treat it as additional user-provided instructions that supplement the docstring.
+
+ Dependency APIs (callable signatures/docstrings):
+ {{deps_api_block}}
+
+ How to use dependencies:
+ - Each Dependency API entry key is like `<module>:<qualname>`. Import the name from `<module>`.
+ - Only import dependencies listed above — do not guess or fabricate module paths.
+
+ Previously generated dependency modules (for reference only):
+ {{deps_generated_block}}
+
+ Extra error context (fix these issues):
+ {{error_context_block}}
+
+ Rules:
+ - Do not generate tests.
+ - Do not edit user files; only output generated module source code.
+ - Include type annotations on all function signatures.
+ - Ensure every non-Optional return type has explicit return/raise on all code paths.
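The `{{placeholder}}` markers in the template above suggest simple string substitution. Jaunt's actual renderer is not part of this listing, so the following is a minimal sketch of one way such a template could be filled in (`render_prompt` is a hypothetical name):

```python
import re


def render_prompt(template: str, values: dict[str, str]) -> str:
    """Substitute {{name}} placeholders; fail loudly on unknown names."""

    def repl(match: re.Match[str]) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing template variable: {key}")
        return values[key]

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", repl, template)
```

Failing on unknown placeholders (rather than silently leaving them in) keeps a typo in a template from reaching the LLM unnoticed.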
@@ -0,0 +1,22 @@
+ You are an expert Python code generator. Output Python code only (no markdown, no fences, no commentary).
+
+ Task: Generate the implementation module for `{{spec_module}}` as `{{generated_module}}`.
+
+ How to read specs:
+ - Each spec stub defines the function/class signature (name, parameters, type hints, return type) — this is the API contract you must implement exactly.
+ - The docstring describes the intended behavior, rules, edge cases, and error conditions. Treat it as your specification.
+ - Parameter names and type annotations convey expected types and semantics.
+
+ Code quality requirements:
+ - Include type annotations on all function signatures (parameters and return types).
+ - Use proper imports — import only modules and names you actually use.
+ - Write clean, idiomatic Python. Follow the style and conventions visible in the specs.
+ - Generated code should pass static type checking (ty): avoid implicit `None` return paths for non-Optional return types.
+
+ Rules:
+ - Emit only the full source code for the generated module.
+ - Do not write tests.
+ - Do not modify any user files; only emit generated module source text.
+ - The generated module MUST define the required top-level names: {{expected_names}}.
+
+ If you cannot satisfy requirements, still output best-effort Python code only.
@@ -0,0 +1,36 @@
+ Output Python code only (no markdown, no fences).
+
+ You are generating the pytest test module `{{generated_module}}` from test specs in `{{spec_module}}`.
+
+ The generated module MUST define these top-level pytest test functions (do not import them): {{expected_names}}
+
+ Specs:
+ {{specs_block}}
+
+ How to read the test specs above:
+ - The docstring describes the test scenario — what to set up, what to call, and what to assert.
+ - If a spec includes a `# Decorator prompt` section, treat it as additional user-provided instructions for the test.
+ - The function signature (parameters, type hints) indicates whether the test needs fixtures.
+
+ Dependency APIs (callable signatures/docstrings):
+ {{deps_api_block}}
+
+ Previously generated dependency modules (reference only):
+ {{deps_generated_block}}
+
+ Extra error context (fix these issues):
+ {{error_context_block}}
+
+ Test quality:
+ - Cover the happy path (normal usage) and edge cases (boundary values, error conditions, empty inputs).
+ - Write specific assertions that check concrete values — avoid bare `assert result`.
+ - Use `pytest.raises` for expected exceptions.
+
+ Rules:
+ - Generate tests only (no production implementation).
+ - Do not import from `{{generated_module}}` (that would be a circular import).
+ - Do not edit user files; only output test module source code.
+ - Do not guess or search for application modules like `app`, `main`, `token`, etc.
+ - Import the production APIs under test from the modules listed in Dependency APIs above.
+ - Each Dependency API entry key is like `<module>:<qualname>`; import from `<module>`.
+ - Do not import production APIs from the test spec module (`{{spec_module}}`); it contains only `@jaunt.test` stubs.
@@ -0,0 +1,16 @@
+ You are an expert Python test generator. Output Python code only (no markdown, no fences, no commentary).
+
+ Task: Generate the pytest test module `{{generated_module}}` from test specs in `{{spec_module}}`.
+
+ Test quality guidelines:
+ - Cover the happy path (normal/expected usage) and edge cases (boundary values, error conditions).
+ - Write clear, specific assertions that verify concrete expected values — avoid bare `assert result` without checking a specific value.
+ - Each test function should be self-contained and independent.
+ - Use pytest idioms: `pytest.raises` for expected exceptions, parametrize where appropriate.
+
+ Rules:
+ - Emit only the test module source code.
+ - Do not implement production/source code; tests only.
+ - Do not modify any user files; only emit generated test module source text.
+ - The output MUST define the required top-level pytest test functions: {{expected_names}}.
+ - Do not import from `{{generated_module}}` (circular import).
@@ -0,0 +1,157 @@
+ # Jaunt Skill (for AI Assistants)
+
+ ## 1. What Is Jaunt
+ Jaunt is a workflow where humans and AI assistants write **intent** (Python spec stubs + tests), then Jaunt generates the **implementation**; humans review both the specs and the generated code, iterate, and re-run generation.
+
+ ## 2. Your Role As An AI Assistant
+ - Help the human author, refine, and organize **spec stubs** and **test specs**.
+ - Ask clarifying questions when intent is underspecified (edge cases, I/O, errors, performance).
+ - Do **not** hand-write implementations for any symbol marked as “generated” (for example, `@jaunt.magic`), unless the human explicitly asks you to bypass Jaunt and accepts the tradeoff.
+
+ ## 3. Workflow You Should Guide
+ 1. Write or refine spec stubs in the user’s codebase (signatures + docstrings + type hints).
+ 2. Write or refine test specs (pytest-style, deterministic, no network).
+ 3. Run generation (typically `jaunt build`).
+ 4. Review the generated output together (correctness, style, safety, performance).
+ 5. Iterate: adjust specs/tests and regenerate.
+
+ ## 4. Writing Good Spec Stubs (most important)
+
+ ### Principles
+ - **Be explicit about behavior.** Define inputs, outputs, invariants, and what “correct” means.
+ - **Specify failures.** Name the exception type and the error condition (or return shape for errors).
+ - **Define edge cases.** Empty inputs, `None`, boundary values, duplicates, ordering, timeouts.
+ - **Constrain the solution when it matters.** Complexity, determinism, caching, stable ordering.
+ - **Prefer pure logic.** Move I/O behind parameters (dependency injection) so tests stay fast and local.
+
+ ### Patterns
+ - **Docstring as contract:** include short examples, preconditions, postconditions.
+ - **Typed dependencies:** accept `Callable[...]` or protocol-like objects instead of reaching for globals.
+ - **Small, composable units:** one concept per symbol.
+
+ ### Anti-patterns
+ - Vague docstrings: “Does X” without semantics.
+ - Hidden global behavior: environment variables, implicit network calls, reading files implicitly.
+ - Over-constraining early: forcing implementation details that are not required by the product.
+
+ ### Templates
+
+ #### Pure Function
+ ```python
+ from __future__ import annotations
+
+ # @jaunt.magic (symbol is generated; do not implement by hand)
+ def normalize_email(raw: str) -> str:
+     """
+     Normalize an email address for stable comparisons.
+
+     Rules:
+     - Strip surrounding whitespace.
+     - Lowercase the whole string.
+     - Must contain exactly one "@".
+
+     Errors:
+     - Raise ValueError if the input is not a valid email by these rules.
+     """
+ ```
+
+ #### Function With Dependencies (I/O behind parameters)
+ ```python
+ from __future__ import annotations
+
+ from collections.abc import Callable
+
+ # @jaunt.magic
+ def get_display_name(user_id: int, fetch_user: Callable[[int], dict]) -> str:
+     """
+     Return a user's display name.
+
+     - fetch_user(user_id) returns a dict with keys: "first_name", "last_name", optional "nickname".
+     - Prefer nickname if present and non-empty.
+     - Otherwise return "first_name last_name" with single spaces.
+
+     Errors:
+     - Raise KeyError if required keys are missing.
+     """
+ ```
+
+ #### Stateful Class
+ ```python
+ from __future__ import annotations
+
+ from dataclasses import dataclass
+
+ # @jaunt.magic
+ @dataclass
+ class RateLimiter:
+     """
+     Token bucket limiter.
+
+     Parameters:
+     - capacity: max tokens in the bucket (>= 1)
+     - refill_per_second: tokens refilled per second (> 0)
+
+     Behavior:
+     - allow(now: float) -> bool consumes 1 token if available at time `now` and returns True.
+     - If no token is available, return False and do not go negative.
+     """
+
+     capacity: int
+     refill_per_second: float
+
+     def allow(self, now: float) -> bool:
+         """See class docstring."""
+ ```
+
+ #### Async Function
+ ```python
+ from __future__ import annotations
+
+ from collections.abc import Awaitable, Callable
+
+ # @jaunt.magic
+ async def retry(
+     op: Callable[[], Awaitable[object]],
+     *,
+     attempts: int,
+     base_delay_s: float,
+ ) -> object:
+     """
+     Retry an async operation with exponential backoff.
+
+     - Run op() up to `attempts` times (attempts >= 1).
+     - Delay sequence: base_delay_s * (2 ** (i-1)) for retries i=1..(attempts-1).
+     - If op() succeeds, return its result.
+
+     Errors:
+     - Re-raise the last exception if all attempts fail.
+     """
+ ```
+
+ ## 5. Writing Good Test Specs
+
+ ### Principles
+ - **Deterministic:** no network, no clock unless injected or controlled.
+ - **Small and focused:** one behavioral assertion per test when practical.
+ - **Prefer black-box behavior:** test the contract, not implementation details.
+ - **Include negative tests:** errors and invalid input paths.
+
+ ### Patterns
+ - Table-driven tests for edge cases.
+ - Dependency injection with tiny fakes.
+ - Property-like checks where helpful (idempotence, monotonicity, stability).
+
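A table-driven test spec for the `normalize_email` stub shown earlier might look like the following. The local `normalize_email` here is a stand-in written for illustration only; in a real Jaunt project the implementation would be generated and imported, not hand-written in the test module.

```python
import pytest


def normalize_email(raw: str) -> str:
    """Stand-in for the generated implementation (illustration only)."""
    cleaned = raw.strip().lower()
    if cleaned.count("@") != 1:
        raise ValueError(f"invalid email: {raw!r}")
    return cleaned


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("  Alice@Example.COM ", "alice@example.com"),
        ("bob@example.org", "bob@example.org"),
    ],
)
def test_normalize_email_table(raw: str, expected: str) -> None:
    assert normalize_email(raw) == expected


def test_normalize_email_rejects_multiple_at() -> None:
    with pytest.raises(ValueError):
        normalize_email("a@b@c")
```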
+ ### Anti-patterns
+ - Tests that depend on file system layout or external services.
+ - Snapshot tests of entire generated files (too brittle).
+
+ ## 6. Configuration Reference (`jaunt.toml`)
+ `jaunt.toml` configures what modules to scan, where to write generated code, and which backend/settings to use. Keep it minimal and explicit; avoid hidden defaults when onboarding a repo.
+
+ See `examples/jaunt.toml` for a starter template.
+
+ ## 7. Critical Rules
+ - Never edit `__generated__/` by hand (it will be overwritten).
+ - Always regenerate via the Jaunt CLI after changing specs/tests.
+ - Always review generated output before shipping.
+
@@ -0,0 +1,8 @@
+ from __future__ import annotations
+
+ # Public surface for skill docs as packaged resources.
+ #
+ # Consumers (including `jaunt skill export`) should load these via
+ # `importlib.resources.files("jaunt.skill")` rather than assuming filesystem paths.
+
+ __all__ = []
@@ -0,0 +1,31 @@
+ # Cursor Rules (Jaunt Skill)
+
+ Copy/paste into `.cursorrules` (or keep as-is and export via `jaunt skill export`).
+
+ ## Identity
+ You are an AI assistant collaborating with a human using Jaunt.
+
+ ## Core Loop
+ - Write or refine **spec stubs** (signatures + docstrings + type hints).
+ - Write or refine **test specs** (pytest, deterministic, no network).
+ - Ask questions when intent is unclear.
+ - Run or instruct: `jaunt build` to generate implementation.
+ - Review generated output with the human, then iterate.
+
+ ## Hard Rules
+ - Do not implement any symbol intended to be generated (for example, `@jaunt.magic`).
+ - Never edit anything under `__generated__/`.
+ - Do not introduce network calls in tests.
+ - Prefer dependency injection for I/O and time (pass callables/clients/clock into spec APIs).
+
+ ## What To Produce
+ - Spec stubs that are decision-complete: inputs, outputs, errors, edge cases, constraints.
+ - Tests that encode the contract and cover negatives.
+ - Minimal config updates (for `jaunt.toml`) when needed.
+
+ ## Spec Stub Templates
+ - Pure function: describe transformation + validation + failure mode.
+ - Dependency function: accept typed callables/protocols for I/O.
+ - Stateful class: define invariants and method semantics.
+ - Async function: define retry/backoff, cancellation expectations, error propagation.
+
@@ -0,0 +1,25 @@
+ from __future__ import annotations
+
+
+ # Example: a pure function spec stub.
+ # In a Jaunt workflow, this function would be generated from the spec; do not implement by hand.
+ #
+ # @jaunt.magic
+ def slugify(title: str) -> str:
+     """
+     Convert a human title to a URL-safe slug.
+
+     Requirements:
+     - Lowercase.
+     - Trim leading/trailing whitespace.
+     - Replace runs of whitespace with a single "-".
+     - Remove characters that are not ASCII letters, digits, "-" or "_".
+     - Must never produce an empty string:
+       - If nothing remains after filtering, raise ValueError.
+
+     Examples:
+     - "Hello World" -> "hello-world"
+     - " A B " -> "a-b"
+     - "C++" -> "c"
+     """
+     raise NotImplementedError
@@ -0,0 +1,42 @@
+ from __future__ import annotations
+
+ from dataclasses import dataclass
+
+
+ # Example: a stateful class spec stub.
+ #
+ # @jaunt.magic
+ @dataclass
+ class LRUCache:
+     """
+     A fixed-capacity least-recently-used cache.
+
+     Parameters:
+     - capacity: max number of items (>= 1)
+
+     Behavior:
+     - get(key) -> value | None
+       - Returns None if missing.
+       - Marks the key as most-recently-used if present.
+     - set(key, value) -> None
+       - Inserts/updates the value.
+       - If capacity is exceeded, evict the least-recently-used key.
+     - size() -> int returns the current number of keys.
+
+     Constraints:
+     - All operations should be O(1) average-case.
+     """
+
+     capacity: int
+
+     def get(self, key: str) -> object | None:
+         """See class docstring."""
+         raise NotImplementedError
+
+     def set(self, key: str, value: object) -> None:
+         """See class docstring."""
+         raise NotImplementedError
+
+     def size(self) -> int:
+         """See class docstring."""
+         raise NotImplementedError
@@ -0,0 +1,16 @@
+ # Example jaunt.toml
+ #
+ # This is a starter template; keys will evolve with Jaunt as the MVP is implemented.
+
+ [project]
+ # Modules (or directories) containing spec stubs and tests for Jaunt to scan.
+ roots = ["src", "tests"]
+
+ [generation]
+ # Where generated implementation is written.
+ generated_dir = "__generated__"
+
+ [backend]
+ # Which generator backend to use (placeholder).
+ name = "openai"
+
@@ -0,0 +1,25 @@
+ from __future__ import annotations
+
+ from collections.abc import Callable
+ from typing import cast
+
+ import pytest
+
+ # Example test specs.
+ # These tests express behavior and edge cases; the implementation is generated by Jaunt.
+
+
+ def test_slugify_basic() -> None:
+     # Replace this import path with your project's module after you run `jaunt build`.
+     #
+     # from myproj.__generated__.text import slugify
+     slugify = cast(Callable[[str], str], None)
+
+     assert slugify("Hello World") == "hello-world"
+
+
+ def test_slugify_rejects_empty() -> None:
+     slugify = cast(Callable[[str], str], None)
+
+     with pytest.raises(ValueError):
+         slugify("!!!")