npcpy 1.3.21__py3-none-any.whl → 1.3.23__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- npcpy/data/audio.py +58 -286
- npcpy/data/image.py +15 -15
- npcpy/data/web.py +2 -2
- npcpy/gen/audio_gen.py +172 -2
- npcpy/gen/image_gen.py +113 -62
- npcpy/gen/response.py +239 -0
- npcpy/llm_funcs.py +73 -71
- npcpy/memory/command_history.py +117 -69
- npcpy/memory/kg_vis.py +74 -74
- npcpy/npc_compiler.py +261 -26
- npcpy/npc_sysenv.py +4 -1
- npcpy/serve.py +393 -91
- npcpy/work/desktop.py +31 -5
- npcpy-1.3.23.dist-info/METADATA +416 -0
- {npcpy-1.3.21.dist-info → npcpy-1.3.23.dist-info}/RECORD +18 -18
- npcpy-1.3.21.dist-info/METADATA +0 -1039
- {npcpy-1.3.21.dist-info → npcpy-1.3.23.dist-info}/WHEEL +0 -0
- {npcpy-1.3.21.dist-info → npcpy-1.3.23.dist-info}/licenses/LICENSE +0 -0
- {npcpy-1.3.21.dist-info → npcpy-1.3.23.dist-info}/top_level.txt +0 -0
npcpy-1.3.21.dist-info/METADATA
DELETED
@@ -1,1039 +0,0 @@
Metadata-Version: 2.4
Name: npcpy
Version: 1.3.21
Summary: npcpy is the premier open-source library for integrating LLMs and Agents into python systems.
Home-page: https://github.com/NPC-Worldwide/npcpy
Author: Christopher Agostino
Author-email: info@npcworldwi.de
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: jinja2
Requires-Dist: litellm
Requires-Dist: scipy
Requires-Dist: numpy
Requires-Dist: requests
Requires-Dist: docx
Requires-Dist: exa-py
Requires-Dist: elevenlabs
Requires-Dist: matplotlib
Requires-Dist: markdown
Requires-Dist: networkx
Requires-Dist: PyYAML
Requires-Dist: PyMuPDF
Requires-Dist: pyautogui
Requires-Dist: pydantic
Requires-Dist: pygments
Requires-Dist: sqlalchemy
Requires-Dist: termcolor
Requires-Dist: rich
Requires-Dist: colorama
Requires-Dist: docstring_parser
Requires-Dist: Pillow
Requires-Dist: python-dotenv
Requires-Dist: pandas
Requires-Dist: beautifulsoup4
Requires-Dist: duckduckgo-search
Requires-Dist: flask
Requires-Dist: flask_cors
Requires-Dist: redis
Requires-Dist: psycopg2-binary
Requires-Dist: flask_sse
Requires-Dist: mcp
Provides-Extra: lite
Requires-Dist: anthropic; extra == "lite"
Requires-Dist: openai; extra == "lite"
Requires-Dist: ollama; extra == "lite"
Requires-Dist: google-generativeai; extra == "lite"
Requires-Dist: google-genai; extra == "lite"
Provides-Extra: local
Requires-Dist: sentence_transformers; extra == "local"
Requires-Dist: opencv-python; extra == "local"
Requires-Dist: ollama; extra == "local"
Requires-Dist: kuzu; extra == "local"
Requires-Dist: chromadb; extra == "local"
Requires-Dist: diffusers; extra == "local"
Requires-Dist: torch; extra == "local"
Requires-Dist: datasets; extra == "local"
Provides-Extra: yap
Requires-Dist: pyaudio; extra == "yap"
Requires-Dist: gtts; extra == "yap"
Requires-Dist: playsound==1.2.2; extra == "yap"
Requires-Dist: pygame; extra == "yap"
Requires-Dist: faster_whisper; extra == "yap"
Requires-Dist: pyttsx3; extra == "yap"
Provides-Extra: all
Requires-Dist: anthropic; extra == "all"
Requires-Dist: openai; extra == "all"
Requires-Dist: ollama; extra == "all"
Requires-Dist: google-generativeai; extra == "all"
Requires-Dist: google-genai; extra == "all"
Requires-Dist: sentence_transformers; extra == "all"
Requires-Dist: opencv-python; extra == "all"
Requires-Dist: kuzu; extra == "all"
Requires-Dist: chromadb; extra == "all"
Requires-Dist: diffusers; extra == "all"
Requires-Dist: torch; extra == "all"
Requires-Dist: datasets; extra == "all"
Requires-Dist: pyaudio; extra == "all"
Requires-Dist: gtts; extra == "all"
Requires-Dist: playsound==1.2.2; extra == "all"
Requires-Dist: pygame; extra == "all"
Requires-Dist: faster_whisper; extra == "all"
Requires-Dist: pyttsx3; extra == "all"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

<p align="center">
<a href= "https://github.com/cagostino/npcpy/blob/main/docs/npcpy.md">
<img src="https://raw.githubusercontent.com/cagostino/npcpy/main/npcpy/npc-python.png" alt="npc-python logo" width=250></a>
</p>

# npcpy

Welcome to `npcpy`, the core library of the NPC Toolkit that supercharges natural language processing pipelines and agent tooling. `npcpy` is a flexible framework for building state-of-the-art applications and conducting novel research with LLMs.

Here is an example for getting responses for a particular agent:

```python
from npcpy.npc_compiler import NPC

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3:4b',
    provider='ollama'
)
response = simon.get_llm_response("What is the most important territory to retain in the Andes mountains?")
print(response['response'])
```

```
The most important territory to retain in the Andes mountains is **Cuzco**.
It's the heart of the Inca Empire, a crucial logistical hub, and holds immense symbolic value for our liberation efforts. Control of Cuzco is paramount.
```

Here is an example for getting responses for a particular agent with tools:

```python
import os
from npcpy.npc_compiler import NPC
from npcpy.npc_sysenv import render_markdown

def list_files(directory: str = ".") -> list:
    """List all files in a directory."""
    return os.listdir(directory)

def read_file(filepath: str) -> str:
    """Read and return the contents of a file."""
    with open(filepath, 'r') as f:
        return f.read()

# Create an agent with fast, verifiable tools
assistant = NPC(
    name='File Assistant',
    primary_directive='You are a helpful assistant who can list and read files.',
    model='llama3.2',
    provider='ollama',
    tools=[list_files, read_file],
)

response = assistant.get_llm_response(
    "List the files in the current directory.",
    auto_process_tool_calls=True,  # the default for NPCs, but not for get_llm_response upstream
)
# show the keys of the response for get_llm_response
print(response.keys())
```
```
dict_keys(['response', 'raw_response', 'messages', 'tool_calls', 'tool_results'])
```

```python
for tool_call in response['tool_results']:
    render_markdown(tool_call['tool_call_id'])
    for arg in tool_call['arguments']:
        render_markdown('- ' + arg + ': ' + str(tool_call['arguments'][arg]))
    render_markdown('- Results:' + str(tool_call['result']))
```

```
• directory: .
• Results:['research_pipeline.jinx', '.DS_Store', 'mkdocs.yml', 'LICENSE', '.pytest_cache', 'npcpy', 'Makefile', 'test_data', 'README.md.backup', 'tests', 'screenshot.png', 'MANIFEST.in', 'docs', 'hero_image_tech_startup.png', 'README.md', 'test.png', 'npcpy.png', 'setup.py', '.gitignore', '.env', 'examples', 'npcpy.egg-info', 'bloomington_weather_image.png.png', '.github', '.python-version', 'generated_image.png', 'documents', '.env.example', '.git', '.npcsh_global', 'hello.txt', '.readthedocs.yaml', 'reports']
```

Here is an example of setting up an agent team that uses Jinja Execution (Jinx) templates. Jinxs are processed entirely through prompts, so you can use them with models that do or do not have native tool-calling support.

```python
from npcpy.npc_compiler import NPC, Team, Jinx

file_reader_jinx = Jinx(jinx_data={
    "jinx_name": "file_reader",
    "description": "Read a file and optionally summarize its contents using an LLM.",
    "inputs": ["filename"],
    "steps": [
        {
            "name": "read_file_content",
            "engine": "python",
            "code": '''
import os
from jinja2 import Environment, Undefined, DictLoader

# The 'filename' input may be a direct filename or a Jinja template string
# like "{{ source_filename }}", so render it with the current execution context.
# Use the NPC's Jinja environment when available; otherwise create a default one.
execution_jinja_env = npc.jinja_env if npc else Environment(loader=DictLoader({}), undefined=Undefined)

# For declarative calls, the parent Jinx's inputs (like 'source_filename')
# are available in this context.
filename_template = execution_jinja_env.from_string(context['filename'])
rendered_filename = filename_template.render(**context)

file_path_abs = os.path.abspath(rendered_filename)
try:
    with open(file_path_abs, 'r') as f:
        content = f.read()
    context['file_raw_content'] = content  # store raw content for later steps
    output = content  # the output of this step is the raw content
except FileNotFoundError:
    output = f"Error: File not found at {file_path_abs}"
    context['file_raw_content'] = output  # store the error message for consistency
except Exception as e:
    output = f"Error reading file {file_path_abs}: {e}"
    context['file_raw_content'] = output  # store the error message for consistency
'''
        },
        {
            "name": "summarize_file_content",
            "engine": "python",
            "code": '''
# Skip the summary if the previous step hit an error
if "Error" not in context['file_raw_content']:
    prompt = f"Summarize the following content concisely, highlighting key themes and points: {context['file_raw_content']}"
    llm_result = npc.get_llm_response(prompt, tool_choice=False)  # prompt passed positionally
    output = llm_result.get('response', 'Failed to generate summary due to LLM error.')
else:
    output = "Skipping summary due to previous file reading error."
'''
        }
    ]
})

literary_research_jinx = Jinx(jinx_data={
    "jinx_name": "literary_research",
    "description": "Research a literary topic, read a specific file, analyze, and synthesize findings.",
    "inputs": ["topic", "source_filename"],
    "steps": [
        {
            "name": "initial_llm_research",
            "engine": "python",
            "code": '''
prompt = f"Research the topic: {context['topic']}. Summarize the main themes, key authors, and historical context. Be thorough."
llm_result = npc.get_llm_response(prompt, tool_choice=False)
context['research_summary'] = llm_result.get('response', 'No initial LLM research found.')
output = context['research_summary']
'''
        },
        {
            "name": "read_and_process_source_file",
            "engine": "file_reader",
            "filename": "{{ source_filename }}"  # passed as a string template to file_reader
        },
        {
            "name": "final_synthesis_and_creative_writing",
            "engine": "python",
            "code": '''
# Access outputs from previous steps; the file_reader jinx returns its output
# directly, with file_raw_content kept as a fallback.
research_summary = context['initial_llm_research']
file_summary = context.get('read_and_process_source_file', '') or context.get('file_raw_content', 'No file summary available.')

prompt = f"""Based on the following information:
1. Comprehensive Research Summary:
{research_summary}

2. Key Insights from Source File:
{file_summary}

Integrate these findings and write a concise, creative, and poetically styled summary of the literary topic '{context['topic']}'. Emphasize unique perspectives or connections between the research and the file content, as if written by a master of magical realism.
"""
llm_result = npc.get_llm_response(prompt, tool_choice=False)
output = llm_result.get('response', 'Failed to generate final creative summary.')
'''
        }
    ]
})

# --- NPC definitions ---
ggm = NPC(
    name='Gabriel Garcia Marquez',
    primary_directive='You are Gabriel Garcia Marquez, master of magical realism. Research, analyze, and write with poetic flair.',
    model='gemma3:4b',
    provider='ollama',
)

isabel = NPC(
    name='Isabel Allende',
    primary_directive='You are Isabel Allende, weaving stories with emotion and history. Analyze texts and provide insight.',
    model='llama3.2',
    provider='ollama',
)

borges = NPC(
    name='Jorge Luis Borges',
    primary_directive='You are Borges, philosopher of labyrinths and libraries. Synthesize findings and create literary puzzles.',
    model='qwen3:latest',
    provider='ollama',
)

# --- Team setup ---
lit_team = Team(
    npcs=[ggm, isabel],
    forenpc=borges,
    jinxs=[literary_research_jinx, file_reader_jinx],
)

# --- Orchestration example ---
result = lit_team.orchestrate(
    "Research the topic of magical realism, using the file './test_data/magical_realism.txt' as a primary source, and provide a comprehensive, creative summary."
)

print("\n--- Orchestration Result Summary ---")
print(result['debrief']['summary'])

print("\n--- Full Orchestration Output ---")
print(result['output'])
```
```
• Action chosen: pass_to_npc
handling agent pass

• Action chosen: answer_question

{'debrief': {'summary': 'Isabel is finalizing preparations for her lunar expedition, focusing on recalibrating navigation systems and verifying the integrity of life support modules.',
  'recommendations': 'Proceed with thorough system tests under various conditions, conduct simulation runs of key mission phases, and confirm backup systems are operational before launch.'},
 'execution_history': [{'messages': [],
   'output': 'I am currently finalizing preparations for my lunar expedition. It involves recalibrating my navigation systems and verifying the integrity of my life support modules. Details are quite...complex.'}]}
```

```python
print(lit_team.orchestrate('which book are your team members most proud of? ask them please. '))
```

```
{'debrief': {'summary': "The responses provided detailed accounts of the books that the NPC team members, Gabriel Garcia Marquez and Isabel Allende, are most proud of. Gabriel highlighted 'Cien años de soledad,' while Isabel spoke of 'La Casa de los Espíritus.' Both authors expressed deep personal connections to their works, illustrating their significance in Latin American literature and their own identities.", 'recommendations': 'Encourage further engagement with each author to explore more about their literary contributions, or consider asking about themes in their works or their thoughts on current literary trends.'}, 'execution_history': [{'messages': ...}]}
```

LLM responses can be obtained without NPCs as well.

```python
from npcpy.llm_funcs import get_llm_response
response = get_llm_response("Who was the celtic Messenger god?", model='qwen3:4b', provider='ollama')
print(response['response'])
```

```
The Celtic messenger god is often associated with the figure of Tylwyth Teg, also known as the Tuatha Dé Danann (meaning "the people of the goddess Danu"). However, among the various Celtic cultures, there are a few gods and goddesses that served similar roles.

One of the most well-known Celtic messengers is Brigid's servant, Líth (also spelled Lid or Lith), who was believed to be a spirit guide for messengers and travelers in Irish mythology.
```

The structure of `npcpy` also allows you to pass an NPC to `get_llm_response` in addition to using the NPC's wrapped method, letting you stay flexible in your implementation and testing.

```python
from npcpy.npc_compiler import NPC
from npcpy.llm_funcs import get_llm_response

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3:4b',
    provider='ollama'
)
response = get_llm_response("Who was the mythological chilean bird that guides lucky visitors to gold?", npc=simon)
print(response['response'])
```

Users are not required to pass agents to `get_llm_response`, so you can work with LLMs directly when no agent is needed.

`npcpy` also supports streaming responses; in that case the `response` key contains a generator, which can be printed and processed with the `print_and_process_stream` function.

```python
from npcpy.npc_sysenv import print_and_process_stream
from npcpy.llm_funcs import get_llm_response

response = get_llm_response("When did the united states government begin sending advisors to vietnam?", model='qwen3:latest', provider='ollama', stream=True)

full_response = print_and_process_stream(response['response'], 'qwen3:latest', 'ollama')
```

Return structured outputs by specifying `format='json'` or by passing a Pydantic schema. When a specific format is requested, `get_llm_response` converts the response from its string representation, so you don't have to parse it yourself.

```python
from npcpy.llm_funcs import get_llm_response

response = get_llm_response("What is the sentiment of the american people towards the repeal of Roe v Wade? Return a json object with `sentiment` as the key and a float value from -1 to 1 as the value", model='deepseek-chat', provider='deepseek', format='json')

print(response['response'])
```
```
{'sentiment': -0.7}
```
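
A Pydantic schema works the same way; here is a minimal sketch, assuming `format` accepts the model class directly (the `Sentiment` class here is illustrative):

```python
from pydantic import BaseModel

from npcpy.llm_funcs import get_llm_response

class Sentiment(BaseModel):
    sentiment: float

# assumption: `format` accepts a Pydantic model class as the schema
response = get_llm_response(
    "Rate the sentiment of 'what a lovely day' from -1 to 1. Return json with `sentiment` as the key.",
    model='llama3.2',
    provider='ollama',
    format=Sentiment,
)
print(response['response'])
```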

The `get_llm_response` function also accepts a list of messages and, when the response is not streamed, returns the messages with the user prompt and the assistant response appended. If the response is streamed, you must append the result to the messages yourself before passing them back in.
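
For the streamed case, a minimal sketch of that manual append, reusing `full_response` from the streaming example above (the message dicts follow the same role/content shape used throughout):

```python
# a sketch: after consuming a streamed response into full_response,
# append the turn manually before reusing the conversation
messages = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
prompt = "When did the united states government begin sending advisors to vietnam?"

messages.append({'role': 'user', 'content': prompt})
messages.append({'role': 'assistant', 'content': full_response})
```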

Additionally, one can pass attachments. Here we demonstrate both messages and attachments:

```python
from npcpy.llm_funcs import get_llm_response

messages = [{'role': 'system', 'content': 'You are an annoyed assistant.'}]

response = get_llm_response("What is the meaning of caesar salad", model='llama3.2', provider='ollama', images=['./Language_Evolution_and_Innovation_experiment.png'], messages=messages)
```
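
Because this call is not streamed, the returned `messages` already carry the appended turns and can be passed straight back in on the next call:

```python
# system + user + assistant after one non-streamed call
print(len(response['messages']))
print(response['messages'][-1]['role'])  # 'assistant'
```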

Easily create images with the `gen_image` function, using models available through Hugging Face's diffusers library or from OpenAI or Gemini.

```python
from npcpy.llm_funcs import gen_image

image = gen_image("make a picture of the moon in the summer of marco polo", model='runwayml/stable-diffusion-v1-5', provider='diffusers')

image = gen_image("kitten toddler in a bouncy house of fluffy gorilla", model='Qwen/Qwen-Image', provider='diffusers')

image = gen_image("make a picture of the moon in the summer of marco polo", model='dall-e-2', provider='openai')

# edit images with 'gpt-image-1' or gemini's multimodal models,
# passing image paths, image bytes, or PIL instances
image = gen_image("make a picture of the moon in the summer of marco polo", model='gpt-image-1', provider='openai', attachments=['/path/to/your/image.jpg', your_byte_code_image_here, your_PIL_image_here])

image = gen_image("edit this picture of the moon in the summer of marco polo so that it looks like it is in the winter of nishitani", model='gemini-2.0-flash', provider='gemini', attachments=[])
```

Likewise, generate videos:

```python
from npcpy.llm_funcs import gen_video

video = gen_video("make a video of the moon in the summer of marco polo", model='runwayml/stable-diffusion-v1-5', provider='diffusers')
```

Or audio TTS and STT:

```python
from npcpy.gen.audio_gen import tts_elevenlabs

audio = tts_elevenlabs('''The representatives of the people of France, formed into a National Assembly,
considering that ignorance, neglect, or contempt of human rights, are the sole causes of
public misfortunes and corruptions of Government, have resolved to set forth in a solemn
declaration, these natural, imprescriptible, and inalienable rights: that this declaration
being constantly present to the minds of the members of the body social, they may be for
ever kept attentive to their rights and their duties; that the acts of the legislative and
executive powers of government, being capable of being every moment compared with
the end of political institutions, may be more respected; and also, that the future claims of
the citizens, being directed by simple and incontestable principles, may tend to the
maintenance of the Constitution, and the general happiness.''')
# the audio plays automatically
```

## Fine-Tuning and Evolution

`npcpy` provides modular tools for building adaptive AI systems through supervised fine-tuning, reinforcement learning, and genetic algorithms.

See `examples/fine_tuning_demo.py` for a complete working example.

### Supervised Fine-Tuning (SFT)

Train models on specific tasks using simple X, y pairs:

```python
from npcpy.ft.sft import run_sft, load_sft_model, predict_sft

X_train = ["translate to french: hello", "translate to french: goodbye"]
y_train = ["bonjour", "au revoir"]

model_path = run_sft(X_train, y_train)

model, tokenizer = load_sft_model(model_path)
response = predict_sft(model, tokenizer, "translate to french: thanks")
```

### Unsupervised Fine-Tuning (USFT)

Adapt models to domain-specific text corpora without labels:

```python
from npcpy.ft.usft import run_usft, load_corpus_from_hf, USFTConfig

texts = load_corpus_from_hf("tiny_shakespeare", split="train[:1000]")

model_path = run_usft(
    texts,
    config=USFTConfig(
        output_model_path="models/shakespeare",
        num_train_epochs=3
    )
)
```

Train on your own text corpus:

```python
domain_texts = [
    "Your domain-specific text 1",
    "Your domain-specific text 2",
] * 100

model_path = run_usft(domain_texts)
```

### Diffusion Fine-tuning

```python
from npcpy.ft.diff import train_diffusion, generate_image, DiffusionConfig

image_paths = ["img1.png", "img2.png", "img3.png"]
captions = ["a cat", "a dog", "a bird"]

model_path = train_diffusion(
    image_paths,
    captions,
    config=DiffusionConfig(
        num_epochs=100,
        batch_size=4
    )
)

generated = generate_image(
    model_path,
    prompt="a white square",
    image_size=128
)
```

Resume training from a checkpoint:

```python
model_path = train_diffusion(
    image_paths,
    captions,
    config,
    resume_from="models/diffusion/checkpoints/checkpoint-epoch10-step1000.pt"
)
```

### Reinforcement Learning (RL)

Collect agent traces and train with DPO based on reward signals:

```python
from npcpy.ft.rl import collect_traces, run_rl_training
from npcpy.npc_compiler import NPC

tasks = [
    {'prompt': 'Solve 2+2', 'expected': '4'},
    {'prompt': 'Solve 5+3', 'expected': '8'}
]

agents = [
    NPC(name="farlor", primary_directive="Be concise",
        model="qwen3:0.6b", provider="ollama"),
    NPC(name="tedno", primary_directive="Show your work",
        model="qwen3:0.6b", provider="ollama")
]

def reward_fn(trace):
    if trace['task_metadata']['expected'] in trace['final_output']:
        return 1.0
    return 0.0

adapter_path = run_rl_training(tasks, agents, reward_fn)
```

### Genetic Evolution

Evolve populations of knowledge graphs or model ensembles:

```python
from npcpy.ft.ge import GeneticEvolver, GAConfig

config = GAConfig(
    population_size=20,
    generations=50,
    mutation_rate=0.15
)

evolver = GeneticEvolver(
    fitness_fn=your_fitness_function,
    mutate_fn=your_mutation_function,
    crossover_fn=your_crossover_function,
    initialize_fn=your_init_function,
    config=config
)

best_individual = evolver.run()
```
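
To make the callbacks concrete, here is a toy sketch that evolves a five-gene list of floats toward a target sum; the callback signatures (an individual in, an individual or a score out) are assumptions inferred from the constructor above:

```python
import random

TARGET = 42.0

def your_init_function():
    # a fresh individual: five random genes
    return [random.uniform(-10, 10) for _ in range(5)]

def your_fitness_function(individual):
    # higher is better: negative distance from the target sum
    return -abs(sum(individual) - TARGET)

def your_mutation_function(individual):
    # perturb one gene with Gaussian noise
    mutant = list(individual)
    i = random.randrange(len(mutant))
    mutant[i] += random.gauss(0, 1)
    return mutant

def your_crossover_function(parent_a, parent_b):
    # one-point crossover between two parents
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]
```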

### Smart Model Ensembler and Response Router

Build fast intuitive responses with fallback to reasoning:

```python
from npcpy.ft.model_ensembler import (
    ResponseRouter,
    create_model_genome
)

genome = create_model_genome(['math', 'code', 'factual'])
router = ResponseRouter(fast_threshold=0.8)

result = router.route_query("What is 2+2?", genome)

if result['used_fast_path']:
    print("Fast gut reaction")
elif result['used_ensemble']:
    print("Ensemble voting")
else:
    print("Full reasoning")
```

The model ensembler is intended to mimic human cognition: pattern-matched gut reactions (Kahneman's System 1) for familiar queries, falling back to deliberate reasoning (System 2) for novel problems. Genetic algorithms evolve both knowledge structures and model specializations over time.

## NPCArray - NumPy for AI

`npcpy` provides `NPCArray`, a NumPy-like interface for working with populations of models (LLMs, sklearn, PyTorch) at scale. Think of it as vectorized operations over AI models.

### Core Concepts

- Model arrays support vectorized operations
- Operations are lazy until `.collect()` is called (like Spark)
- The same interface works for single models (treated as length-1 arrays)
- Supports ensemble voting, consensus, evolution, and more

### Basic Usage

```python
from npcpy.npc_array import NPCArray

# Create an array of LLMs
models = NPCArray.from_llms(
    ['llama3.2', 'gemma3:1b'],
    providers='ollama'
)

print(f"Model array shape: {models.shape}")  # (2,)

# Inference across all models - returns shape (n_models, n_prompts)
result = models.infer("What is 2+2? Just the number.").collect()

print(f"Model 1: {result.data[0, 0]}")
print(f"Model 2: {result.data[1, 0]}")
```

### Lazy Chaining & Ensemble Operations

```python
from npcpy.npc_array import NPCArray

models = NPCArray.from_llms(['llama3.2', 'gemma3:1b', 'mistral:7b'])

# Build a lazy computation graph - nothing executed yet
result = (
    models
    .infer("Is Python compiled or interpreted? One word.")
    .map(lambda r: r.strip().lower())  # clean responses
    .vote(axis=0)                      # majority voting across models
)

# Show the computation plan
result.explain()

# Now execute
answer = result.collect()
print(f"Consensus: {answer.data[0]}")
```

### Parameter Sweeps with Meshgrid

```python
from npcpy.npc_array import NPCArray

# Cartesian product over parameters
configs = NPCArray.meshgrid(
    models=['llama3.2', 'gemma3:1b'],
    temperatures=[0.0, 0.5, 1.0]
)

print(f"Config array shape: {configs.shape}")  # (6,) = 2 models × 3 temps

# Run inference with each config
result = configs.infer("Complete: The quick brown fox").collect()
```

### Matrix Sampling with get_llm_response

The `get_llm_response` function supports `matrix` and `n_samples` parameters for exploration:

```python
from npcpy.llm_funcs import get_llm_response

# Matrix parameter - cartesian product over the specified params
result = get_llm_response(
    "Write a creative opening line.",
    matrix={
        'model': ['llama3.2', 'gemma3:1b'],
        'temperature': [0.5, 1.0]
    }
)
print(f"Number of runs: {len(result['runs'])}")  # 4 = 2×2

# n_samples - multiple samples from the same config
result = get_llm_response(
    "Pick a random number 1-100.",
    model='llama3.2',
    n_samples=5
)
print(f"Samples: {[r['response'] for r in result['runs']]}")

# Combine both for a full exploration
result = get_llm_response(
    "Flip a coin: heads or tails?",
    matrix={'model': ['llama3.2', 'gemma3:1b']},
    n_samples=3  # 2 models × 3 samples = 6 runs
)
```

### sklearn Integration

```python
from npcpy.npc_array import NPCArray
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
import numpy as np

# Create sample data
X_train = np.random.randn(100, 4)
y_train = (X_train[:, 0] > 0).astype(int)

# Pre-fit models
rf = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)
lr = LogisticRegression().fit(X_train, y_train)

# Create an array from fitted models
models = NPCArray.from_sklearn([rf, lr])

# Vectorized prediction
X_test = np.random.randn(20, 4)
predictions = models.predict(X_test).collect()

print(f"RF predictions: {predictions.data[0]}")
print(f"LR predictions: {predictions.data[1]}")
```

### ML Functions with Grid Search

```python
from npcpy.ml_funcs import fit_model, score_model, ensemble_predict

# Grid search via the matrix parameter
result = fit_model(
    X_train, y_train,
    model='RandomForestClassifier',
    matrix={
        'n_estimators': [10, 50, 100],
        'max_depth': [3, 5, 10]
    }
)

print(f"Fitted {len(result['models'])} model configurations")

# Ensemble voting with multiple models
predictions = ensemble_predict(X_test, result['models'], method='vote')
```

### Quick Utilities

```python
from npcpy.npc_array import infer_matrix, ensemble_vote

# Quick matrix inference
result = infer_matrix(
    prompts=["Hello", "Goodbye"],
    models=['llama3.2', 'gemma3:1b']
)

# Quick ensemble vote
answer = ensemble_vote(
    "What is the capital of France? One word.",
    models=['llama3.2', 'gemma3:1b']
)
print(f"Voted answer: {answer}")
```

See `examples/npc_array_examples.py` for more comprehensive examples.
-
## Serving an NPC Team
|
|
785
|
-
|
|
786
|
-
`npcpy` includes a built-in Flask server that makes it easy to deploy NPC teams for production use. You can serve teams with tools, jinxs, and complex workflows that frontends can interact with via REST APIs.
|
|
787
|
-
|
|
788
|
-
### Basic Team Server Setup
|
|
789
|
-
|
|
790
|
-
```python
|
|
791
|
-
from npcpy.serve import start_flask_server
|
|
792
|
-
from npcpy.npc_compiler import NPC, Team
|
|
793
|
-
from npcpy.tools import auto_tools
|
|
794
|
-
import requests
|
|
795
|
-
import os
|
|
796
|
-
|
|
797
|
-
# Create NPCs with different specializations
|
|
798
|
-
researcher = NPC(
|
|
799
|
-
name='Research Specialist',
|
|
800
|
-
primary_directive='You are a research specialist who finds and analyzes information from various sources.',
|
|
801
|
-
model='claude-3-5-sonnet-latest',
|
|
802
|
-
provider='anthropic'
|
|
803
|
-
)
|
|
804
|
-
|
|
805
|
-
analyst = NPC(
|
|
806
|
-
name='Data Analyst',
|
|
807
|
-
primary_directive='You are a data analyst who processes and interprets research findings.',
|
|
808
|
-
model='gpt-4o',
|
|
809
|
-
provider='openai'
|
|
810
|
-
)
|
|
811
|
-
|
|
812
|
-
coordinator = NPC(
|
|
813
|
-
name='Project Coordinator',
|
|
814
|
-
primary_directive='You coordinate team activities and synthesize results into actionable insights.',
|
|
815
|
-
model='gemini-1.5-pro',
|
|
816
|
-
provider='gemini'
|
|
817
|
-
)
|
|
818
|
-
|
|
819
|
-
# Create team
|
|
820
|
-
research_team = Team(
|
|
821
|
-
npcs=[researcher, analyst],
|
|
822
|
-
forenpc=coordinator
|
|
823
|
-
)
|
|
824
|
-
|
|
825
|
-
if __name__ == "__main__":
|
|
826
|
-
# Register team and NPCs directly with the server
|
|
827
|
-
npcs = {npc.name: npc for npc in list(research_team.npcs.values()) + [research_team.forenpc]}
|
|
828
|
-
start_flask_server(
|
|
829
|
-
port=5337,
|
|
830
|
-
cors_origins=["http://localhost:3000", "http://localhost:5173"], # Allow frontend access
|
|
831
|
-
debug=True,
|
|
832
|
-
teams={'research_team': research_team},
|
|
833
|
-
npcs=npcs
|
|
834
|
-
)
|
|
835
|
-
```
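
Once the server is running, any HTTP client can talk to it; here is a minimal sketch with `requests` — the endpoint path and payload keys are assumptions, so check `npcpy/serve.py` for the actual routes:

```python
import requests

# assumption: the route and payload keys may differ between npcpy versions;
# see npcpy/serve.py for the routes the server actually registers
resp = requests.post(
    "http://localhost:5337/api/execute",
    json={"commandstr": "Summarize recent findings", "team": "research_team"},
)
print(resp.json())
```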

## Read the Docs

For more examples of how to use `npcpy` to simplify your LLM workflows or to create agents or multi-agent systems, read the docs at [npcpy.readthedocs.io](https://npcpy.readthedocs.io/en/latest/).

## Inference Capabilities

- `npcpy` works with local and enterprise LLM providers through its LiteLLM integration, allowing users to run inference from Ollama, LM Studio, OpenAI, Anthropic, Gemini, and Deepseek, making it a versatile tool for both simple commands and sophisticated AI-driven tasks.

## NPC Studio

There is a graphical user interface that makes use of the NPC Toolkit through NPC Studio. See the source code for NPC Studio [here](https://github.com/cagostino/npc-studio). Download the executables at [our website](https://enpisi.com/npc-studio).

## NPC Shell

The NPC shell is a suite of executable command-line programs that allow users to easily interact with NPCs and LLMs through a command-line shell.
[Try out the NPC Shell](https://github.com/npc-worldwide/npcsh)

## Mailing List

Interested in staying in the loop and hearing the latest and greatest about `npcpy`, `npcsh`, and NPC Studio? Sign up for the [newsletter](https://forms.gle/n1NzQmwjsV4xv1B2A)!

## Support

If you appreciate the work here, [consider supporting NPC Worldwide with a monthly donation](https://buymeacoffee.com/npcworldwide) or [buying NPC-WW themed merch](https://enpisi.com/shop). To hire us to help your business or research team explore how to use `npcpy` and AI tools, reach out to info@npcworldwi.de.

## Enabling Innovation and Research

- `npcpy` is a framework that speeds up and simplifies the development of NLP-based or agent-based applications, giving developers and researchers methods to explore and test across dozens of models, providers, and personas as well as other model-level hyperparameters (e.g. `temperature`, `top_k`), incorporating an array of data sources and common tools.
- The `npcpy` agent data layer makes it easy to set up teams and serve them, so you can focus more on the agent personas and less on the nitty-gritty of inference.
- `npcpy` provides pioneering methods for constructing and updating knowledge graphs as well as for developing and testing novel mixture-of-agents scenarios.
- In `npcpy`, all agentic capabilities are developed and tested using small local models (like `llama3.2`, `gemma3`) to ensure they can function reliably at the edge of computing.

### Papers

- Paper on the limitations of LLMs and on the quantum-like nature of natural language interpretation: [arxiv preprint](https://arxiv.org/abs/2506.10077), accepted for publication at [Quantum AI and NLP 2025](https://qnlp.ai)
- Paper that considers the effects that might accompany simulating hormonal cycles for AI: [arxiv preprint](https://arxiv.org/abs/2508.11829)

Has your research benefited from `npcpy`? Let us know and we'd be happy to feature you here!

## NPCs

Check out [lavanzaro](https://lavanzaro.com) to discuss the great things of life with an `npcpy`-powered chatbot.

## Installation

`npcpy` is available on PyPI and can be installed using pip. Before installing, make sure you have the necessary dependencies installed on your system. Below are the instructions for installing such dependencies on Linux, Mac, and Windows. If you find any other dependencies that are needed, please let us know so we can update the installation instructions to be more accommodating.

### Linux install

<details> <summary> Toggle </summary>

```bash
# these are primarily for audio; skip if you don't need tts
sudo apt-get install espeak
sudo apt-get install portaudio19-dev python3-pyaudio
sudo apt-get install alsa-base alsa-utils
sudo apt-get install libcairo2-dev
sudo apt-get install libgirepository1.0-dev
sudo apt-get install ffmpeg

# for triggers
sudo apt install inotify-tools

# and if you don't have ollama installed, use this:
curl -fsSL https://ollama.com/install.sh | sh

ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text
pip install npcpy
# if you want to install with the API libraries
pip install 'npcpy[lite]'
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install 'npcpy[local]'
# if you want to use tts/stt
pip install 'npcpy[yap]'
# if you want everything:
pip install 'npcpy[all]'
```

</details>

### Mac install

<details> <summary> Toggle </summary>

```bash
# mainly for audio
brew install portaudio
brew install ffmpeg
brew install pygobject3

# for triggers
brew install inotify-tools

brew install ollama
brew services start ollama
ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text
pip install npcpy
# if you want to install with the API libraries
pip install 'npcpy[lite]'
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install 'npcpy[local]'
# if you want to use tts/stt
pip install 'npcpy[yap]'
# if you want everything:
pip install 'npcpy[all]'
```
</details>

### Windows Install

<details> <summary> Toggle </summary>

Download and install the ollama exe. Then, in a PowerShell session, download and install ffmpeg and run:

```powershell
ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text
pip install npcpy
# if you want to install with the API libraries
pip install npcpy[lite]
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install npcpy[local]
# if you want to use tts/stt
pip install npcpy[yap]
# if you want everything:
pip install npcpy[all]
```

</details>

### Fedora Install (under construction)

<details> <summary> Toggle </summary>

```bash
python3-dev      # fixes hnswlib issues with chromadb
xhost +          # for pyautogui
python-tkinter   # for pyautogui
```

</details>

We support inference via all providers supported by litellm. For openai-compatible providers that are not explicitly named in litellm, simply use `openai-like` as the provider. The default provider must be one of `['openai', 'anthropic', 'ollama', 'gemini', 'deepseek', 'openai-like']` and the model must be one available from those providers.
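
For example, here is a minimal sketch for an openai-compatible local server; the `api_url` parameter name is an assumption, so check your npcpy version's `get_llm_response` signature:

```python
from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "Hello there.",
    model='my-served-model',              # hypothetical model name on your server
    provider='openai-like',
    api_url='http://localhost:8080/v1',   # assumption: parameter name may differ
)
print(response['response'])
```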

To use tools that require API keys, create an `.env` file in the folder where you are working, or place the relevant API keys as env variables in your `~/.npcshrc`. If you already have these API keys set in a `~/.bashrc`, `~/.zshrc`, or similar file, you need not additionally add them to `~/.npcshrc` or to an `.env` file. Here is an example of what an `.env` file might look like:

```bash
export OPENAI_API_KEY="your_openai_key"
export ANTHROPIC_API_KEY="your_anthropic_key"
export DEEPSEEK_API_KEY='your_deepseek_key'
export GEMINI_API_KEY='your_gemini_key'
export PERPLEXITY_API_KEY='your_perplexity_key'
```
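
Since `python-dotenv` is a declared dependency, a short sketch for loading such a file in your own scripts:

```python
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory
```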

Individual NPCs can also be set to use different models and providers by setting the `model` and `provider` keys in the npc files.

For cases where you wish to set up a team of NPCs, jinxs, and assembly lines, add an `npc_team` directory to your project and then initialize an NPC Team:

```bash
./npc_team/            # Project-specific NPCs
├── jinxs/             # Project jinxs
│   └── example.jinx
├── assembly_lines/    # Project workflows
│   └── example.pipe
├── models/            # Project models
│   └── example.model
├── example1.npc       # Example NPC
├── example2.npc       # Example NPC
└── team.ctx           # Team context
```

## Contributing

Contributions are welcome! Please submit issues and pull requests on the GitHub repository.

## License

This project is licensed under the MIT License.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=cagostino/npcpy&type=Date)](https://star-history.com/#cagostino/npcpy&Date)
|