textprompts 0.0.1__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- textprompts-0.0.1/PKG-INFO +493 -0
- textprompts-0.0.1/README.md +469 -0
- textprompts-0.0.1/pyproject.toml +62 -0
- textprompts-0.0.1/src/textprompts/__init__.py +29 -0
- textprompts-0.0.1/src/textprompts/_parser.py +139 -0
- textprompts-0.0.1/src/textprompts/cli.py +31 -0
- textprompts-0.0.1/src/textprompts/config.py +110 -0
- textprompts-0.0.1/src/textprompts/errors.py +18 -0
- textprompts-0.0.1/src/textprompts/loaders.py +113 -0
- textprompts-0.0.1/src/textprompts/models.py +42 -0
- textprompts-0.0.1/src/textprompts/placeholder_utils.py +138 -0
- textprompts-0.0.1/src/textprompts/py.typed +0 -0
- textprompts-0.0.1/src/textprompts/safe_string.py +110 -0
- textprompts-0.0.1/src/textprompts/savers.py +66 -0
Metadata-Version: 2.4
Name: textprompts
Version: 0.0.1
Summary: Minimal text-based prompt-loader with TOML front-matter
Keywords: prompts,toml,frontmatter,template
Author: Jan Siml
Author-email: Jan Siml <49557684+svilupp@users.noreply.github.com>
License-Expression: MIT
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Dist: pydantic~=2.7
Requires-Dist: tomli>=1.0.0 ; python_full_version < '3.11'
Requires-Python: >=3.11
Project-URL: Bug Tracker, https://github.com/svilupp/textprompts/issues
Project-URL: Documentation, https://github.com/svilupp/textprompts#readme
Project-URL: Homepage, https://github.com/svilupp/textprompts
Description-Content-Type: text/markdown

# textprompts

> **So simple it's not even worth vibe-coding - yet it just makes so much sense.**

Are you tired of vendors trying to sell you fancy UIs for prompt management that just make your system more confusing and harder to debug? Isn't it nice to just have your prompts **next to your code**?

But then you worry: *Did my formatter change my prompt? Are those spaces at the beginning actually part of the prompt, or just indentation?*

**textprompts** solves this elegantly: treat your prompts as **text files** and keep your linters and formatters away from them.

## Why textprompts?

- ✅ **Prompts live next to your code** - no external systems to manage
- ✅ **Git is your version control** - diff, branch, and experiment with ease
- ✅ **No formatter headaches** - your prompts stay exactly as you wrote them
- ✅ **Minimal markup** - just TOML front-matter when you need metadata (or none at all, if you prefer!)
- ✅ **Zero dependencies** - well, almost (just Pydantic)
- ✅ **Safe formatting** - catch missing variables before they cause problems
- ✅ **Works with everything** - OpenAI, Anthropic, local models, function calls

## Installation

```bash
uv add textprompts  # or pip install textprompts
```

## Quick Start

**Super simple by default** - textprompts just loads text files with optional metadata:

1. **Create a prompt file** (`greeting.txt`):

```
---
title = "Customer Greeting"
version = "1.0.0"
description = "Friendly greeting for customer support"
---

Hello {customer_name}!

Welcome to {company_name}. We're here to help you with {issue_type}.

Best regards,
{agent_name}
```

2. **Load and use it** (no configuration needed):

```python
import textprompts

# Just load it - works with or without metadata
prompt = textprompts.load_prompt("greeting.txt")

# Use it safely - all placeholders must be provided
message = prompt.body.format(
    customer_name="Alice",
    company_name="ACME Corp",
    issue_type="billing question",
    agent_name="Sarah"
)

print(message)

# Or use partial formatting when needed
partial = prompt.body.format(
    customer_name="Alice",
    company_name="ACME Corp",
    skip_validation=True
)
# Result: "Hello Alice!\n\nWelcome to ACME Corp. We're here to help you with {issue_type}.\n\nBest regards,\n{agent_name}"
```

**Even simpler** - no metadata required:

```python
# simple_prompt.txt contains just: "Analyze this data: {data}"
prompt = textprompts.load_prompt("simple_prompt.txt")  # Just works!
result = prompt.body.format(data="sales figures")
```

## Core Features

### Safe String Formatting

Never ship a prompt with missing variables again:

```python
from textprompts import SafeString

template = SafeString("Hello {name}, your order {order_id} is {status}")

# ✅ Strict formatting - all placeholders must be provided
result = template.format(name="Alice", order_id="12345", status="shipped")

# ❌ This catches the error by default
try:
    result = template.format(name="Alice")  # Missing order_id and status
except ValueError as e:
    print(f"Error: {e}")  # Missing format variables: ['order_id', 'status']

# ✅ Partial formatting - replace only what you have
partial = template.format(name="Alice", skip_validation=True)
print(partial)  # "Hello Alice, your order {order_id} is {status}"
```
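
The strict check is easy to picture with the standard library's `string.Formatter`; a minimal sketch of the idea (an illustration, not the package's actual implementation):

```python
from string import Formatter

def strict_format(template: str, **kwargs: str) -> str:
    # Collect every named placeholder that appears in the template
    placeholders = {name for _, name, _, _ in Formatter().parse(template) if name}
    missing = sorted(placeholders - kwargs.keys())
    if missing:
        raise ValueError(f"Missing format variables: {missing}")
    return template.format(**kwargs)

print(strict_format(
    "Hello {name}, your order {order_id} is {status}",
    name="Alice", order_id="12345", status="shipped",
))
# Hello Alice, your order 12345 is shipped
```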

### Bulk Loading

Load entire directories of prompts:

```python
from textprompts import load_prompts

# Load all prompts from a directory
prompts = load_prompts("prompts/", recursive=True)

# Create a lookup
prompt_dict = {p.meta.title: p for p in prompts if p.meta}
greeting = prompt_dict["Customer Greeting"]
```

### Simple & Flexible Metadata Handling

textprompts is designed to be **super simple** by default - just load text files, with optional metadata when available. No configuration needed!

```python
import textprompts

# Default behavior: load metadata if available, otherwise just use the file content
prompt = textprompts.load_prompt("my_prompt.txt")  # Just works!

# Three modes available for different use cases:
# 1. IGNORE (default): treat as a simple text file, use the filename as the title
textprompts.set_metadata("ignore")  # Super simple file loading
prompt = textprompts.load_prompt("prompt.txt")  # No metadata parsing
print(prompt.meta.title)  # "prompt" (from filename)

# 2. ALLOW: load metadata if present, don't worry if it's incomplete
textprompts.set_metadata("allow")  # Flexible metadata loading
prompt = textprompts.load_prompt("prompt.txt")  # Loads any metadata found

# 3. STRICT: require complete metadata for production use
textprompts.set_metadata("strict")  # Prevent errors in production
prompt = textprompts.load_prompt("prompt.txt")  # Must have title, description, version

# Override per prompt when needed
prompt = textprompts.load_prompt("prompt.txt", meta="strict")
```

**Why this design?**
- **Default = Simple**: No configuration needed, just load files
- **Flexible**: Add metadata when you want structure
- **Production-Safe**: Use strict mode to catch missing metadata before deployment

## Real-World Examples

### OpenAI Integration

```python
import openai
from textprompts import load_prompt

system_prompt = load_prompt("prompts/customer_support_system.txt")
user_prompt = load_prompt("prompts/user_query_template.txt")

response = openai.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {
            "role": "system",
            "content": system_prompt.body.format(
                company_name="ACME Corp",
                support_level="premium"
            )
        },
        {
            "role": "user",
            "content": user_prompt.body.format(
                query="How do I return an item?",
                customer_tier="premium"
            )
        }
    ]
)
```

### Function Calling (Tool Definitions)

Yes, you can version control your function schemas too:

```
# tools/search_products.txt
---
title = "Product Search Tool"
version = "2.1.0"
description = "Search our product catalog"
---

{
  "type": "function",
  "function": {
    "name": "search_products",
    "description": "Search for products in our catalog",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Search query for products"
        },
        "category": {
          "type": "string",
          "enum": ["electronics", "clothing", "books"],
          "description": "Product category to search within"
        },
        "max_results": {
          "type": "integer",
          "default": 10,
          "description": "Maximum number of results to return"
        }
      },
      "required": ["query"]
    }
  }
}
```

```python
import json

import openai
from textprompts import load_prompt

# Load and parse the tool definition
tool_prompt = load_prompt("tools/search_products.txt")
tool_schema = json.loads(tool_prompt.body)

# Use with OpenAI
response = openai.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Find me some electronics"}],
    tools=[tool_schema]
)
```

### Environment-Specific Prompts

```python
import os
from textprompts import load_prompt

env = os.getenv("ENVIRONMENT", "development")
system_prompt = load_prompt(f"prompts/{env}/system.txt")

# prompts/development/system.txt - verbose logging
# prompts/production/system.txt - concise responses
```

### Prompt Versioning & Experimentation

```python
from textprompts import load_prompt

# Easy A/B testing
prompt_version = "v2"  # or "v1", "experimental", etc.
prompt = load_prompt(f"prompts/{prompt_version}/system.txt")

# Git handles the rest:
# git checkout experiment-branch
# git diff main -- prompts/
```

## File Format

textprompts uses TOML front-matter (optional), followed by your prompt content:

```
---
title = "My Prompt"
version = "1.0.0"
author = "Your Name"
description = "What this prompt does"
created = "2024-01-15"
tags = ["customer-support", "greeting"]
---

Your prompt content goes here.

Use {variables} for templating.
```

### Metadata Modes

Choose the right level of strictness for your use case:

1. **IGNORE** (default) - Simple text file loading; the filename becomes the title
2. **ALLOW** - Load metadata if present, don't worry about completeness
3. **STRICT** - Require complete metadata (title, description, version) for production safety

```python
# Set globally
textprompts.set_metadata("ignore")  # Default: simple file loading
textprompts.set_metadata("allow")   # Flexible: load any metadata
textprompts.set_metadata("strict")  # Production: require complete metadata

# Or override per prompt
prompt = textprompts.load_prompt("file.txt", meta="strict")
```

## API Reference

### `load_prompt(path, *, meta=None)`

Load a single prompt file.

- `path`: Path to the prompt file
- `meta`: Metadata handling mode - `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or their string equivalents. `None` uses the global config.

Returns a `Prompt` object with:
- `prompt.meta`: Metadata from TOML front-matter (always present)
- `prompt.body`: The prompt content as a `SafeString`
- `prompt.path`: Path to the original file

### `load_prompts(*paths, recursive=False, glob="*.txt", meta=None, max_files=1000)`

Load multiple prompts from files or directories.

- `*paths`: Files or directories to load
- `recursive`: Search directories recursively (default: `False`)
- `glob`: File pattern to match (default: `"*.txt"`)
- `meta`: Metadata handling mode - `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or their string equivalents. `None` uses the global config.
- `max_files`: Maximum number of files to process (default: 1000)

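
The `recursive` and `glob` parameters map directly onto `pathlib`; a sketch of the directory walk they imply (an illustration, not the package's source):

```python
from pathlib import Path

def find_prompt_files(root: str, pattern: str = "*.txt",
                      recursive: bool = False, max_files: int = 1000) -> list[Path]:
    base = Path(root)
    matches = base.rglob(pattern) if recursive else base.glob(pattern)
    # Cap the result set; the library's exact max_files behavior may differ
    return sorted(matches)[:max_files]
```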
### `set_metadata(mode)` / `get_metadata()`

Set or get the global metadata handling mode.

- `mode`: `MetadataMode.STRICT`, `MetadataMode.ALLOW`, `MetadataMode.IGNORE`, or their string equivalents

```python
import textprompts

# Set global mode
textprompts.set_metadata(textprompts.MetadataMode.STRICT)
textprompts.set_metadata("allow")  # A string also works

# Get current mode
current_mode = textprompts.get_metadata()
```

### `save_prompt(path, content)`

Save a prompt to a file.

- `path`: Path to save the prompt file
- `content`: Either a string (creates a template with the required fields) or a `Prompt` object

```python
from textprompts import save_prompt

# Save a simple prompt with a metadata template
save_prompt("my_prompt.txt", "You are a helpful assistant.")

# Save a Prompt object with full metadata
save_prompt("my_prompt.txt", prompt_object)
```

### `SafeString`

A string subclass that validates `format()` calls:

```python
from textprompts import SafeString

template = SafeString("Hello {name}, you are {role}")

# Strict formatting (default) - all placeholders required
result = template.format(name="Alice", role="admin")  # ✅ Works
result = template.format(name="Alice")                # ❌ Raises ValueError

# Partial formatting - replace only the available placeholders
partial = template.format(name="Alice", skip_validation=True)  # ✅ "Hello Alice, you are {role}"

# Access placeholder information
print(template.placeholders)  # {'name', 'role'}
```
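
The `skip_validation=True` behaviour can be pictured with `str.format_map` and a mapping that leaves unknown keys alone; a minimal sketch (not the library's actual implementation):

```python
class KeepMissing(dict):
    # format_map calls __missing__ for absent keys; re-emit the placeholder untouched
    def __missing__(self, key: str) -> str:
        return "{" + key + "}"

template = "Hello {name}, you are {role}"
print(template.format_map(KeepMissing(name="Alice")))
# Hello Alice, you are {role}
```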

## Error Handling

textprompts provides specific exception types, exported alongside the metadata-mode helpers:

```python
from textprompts import (
    TextPromptsError,      # Base exception
    FileMissingError,      # File not found
    MissingMetadataError,  # No TOML front-matter when required
    InvalidMetadataError,  # Invalid TOML syntax
    MalformedHeaderError,  # Malformed front-matter structure
    MetadataMode,          # Metadata handling mode enum
    set_metadata,          # Set global metadata mode
    get_metadata           # Get global metadata mode
)
```

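
The names above suggest a conventional hierarchy rooted at `TextPromptsError`, so catching the base class covers every library-specific failure; sketched as plain subclasses (an illustration, not the package's source):

```python
class TextPromptsError(Exception):
    """Base class for textprompts errors."""

class FileMissingError(TextPromptsError):
    """Prompt file does not exist."""

class MissingMetadataError(TextPromptsError):
    """Required TOML front-matter is absent."""

# One except-clause handles any of them:
try:
    raise FileMissingError("greeting.txt not found")
except TextPromptsError as err:
    print(f"prompt loading failed: {err}")
# prompt loading failed: greeting.txt not found
```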
## CLI Tool

textprompts includes a CLI for quick prompt inspection:

```bash
# View a single prompt
textprompts show greeting.txt

# List all prompts in a directory
textprompts list prompts/ --recursive

# Validate prompts
textprompts validate prompts/
```

## Best Practices

1. **Organize by purpose**: Group related prompts in folders
   ```
   prompts/
   ├── customer-support/
   ├── content-generation/
   └── code-review/
   ```

2. **Use semantic versioning**: Version your prompts like code
   ```
   version = "1.2.0"  # major.minor.patch
   ```

3. **Document your variables**: List expected variables in descriptions
   ```
   description = "Requires: customer_name, issue_type, agent_name"
   ```

4. **Test your prompts**: Write unit tests for critical prompts
   ```python
   def test_greeting_prompt():
       prompt = load_prompt("greeting.txt")
       result = prompt.body.format(customer_name="Test", skip_validation=True)
       assert "Test" in result
   ```

5. **Use environment-specific prompts**: Different prompts for dev/prod
   ```python
   env = os.getenv("ENV", "development")
   prompt = load_prompt(f"prompts/{env}/system.txt")
   ```

## Why Not Just Use String Templates?

You could, but then you lose:
- **Metadata tracking** (versions, authors, descriptions)
- **Safe formatting** (catch missing variables)
- **Organized storage** (searchable, documentable)
- **Version control benefits** (proper diffs, blame, history)
- **Tooling support** (CLI, validation, testing)

## Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## License

MIT License - see [LICENSE](LICENSE) for details.

---

**textprompts** - Because your prompts deserve better than being buried in code strings. 🚀