zrb 1.15.3__py3-none-any.whl → 1.21.29__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of zrb might be problematic.

Files changed (108)
  1. zrb/__init__.py +2 -6
  2. zrb/attr/type.py +10 -7
  3. zrb/builtin/__init__.py +2 -0
  4. zrb/builtin/git.py +12 -1
  5. zrb/builtin/group.py +31 -15
  6. zrb/builtin/llm/attachment.py +40 -0
  7. zrb/builtin/llm/chat_completion.py +274 -0
  8. zrb/builtin/llm/chat_session.py +126 -167
  9. zrb/builtin/llm/chat_session_cmd.py +288 -0
  10. zrb/builtin/llm/chat_trigger.py +79 -0
  11. zrb/builtin/llm/history.py +4 -4
  12. zrb/builtin/llm/llm_ask.py +217 -135
  13. zrb/builtin/llm/tool/api.py +74 -70
  14. zrb/builtin/llm/tool/cli.py +35 -21
  15. zrb/builtin/llm/tool/code.py +55 -73
  16. zrb/builtin/llm/tool/file.py +278 -344
  17. zrb/builtin/llm/tool/note.py +84 -0
  18. zrb/builtin/llm/tool/rag.py +27 -34
  19. zrb/builtin/llm/tool/sub_agent.py +54 -41
  20. zrb/builtin/llm/tool/web.py +74 -98
  21. zrb/builtin/project/add/fastapp/fastapp_template/my_app_name/_zrb/entity/add_entity_util.py +7 -7
  22. zrb/builtin/project/add/fastapp/fastapp_template/my_app_name/_zrb/module/add_module_util.py +5 -5
  23. zrb/builtin/project/add/fastapp/fastapp_util.py +1 -1
  24. zrb/builtin/searxng/config/settings.yml +5671 -0
  25. zrb/builtin/searxng/start.py +21 -0
  26. zrb/builtin/shell/autocomplete/bash.py +4 -3
  27. zrb/builtin/shell/autocomplete/zsh.py +4 -3
  28. zrb/config/config.py +202 -27
  29. zrb/config/default_prompt/file_extractor_system_prompt.md +109 -9
  30. zrb/config/default_prompt/interactive_system_prompt.md +24 -30
  31. zrb/config/default_prompt/persona.md +1 -1
  32. zrb/config/default_prompt/repo_extractor_system_prompt.md +31 -31
  33. zrb/config/default_prompt/repo_summarizer_system_prompt.md +27 -8
  34. zrb/config/default_prompt/summarization_prompt.md +57 -16
  35. zrb/config/default_prompt/system_prompt.md +36 -30
  36. zrb/config/llm_config.py +119 -23
  37. zrb/config/llm_context/config.py +127 -90
  38. zrb/config/llm_context/config_parser.py +1 -7
  39. zrb/config/llm_context/workflow.py +81 -0
  40. zrb/config/llm_rate_limitter.py +100 -47
  41. zrb/context/any_shared_context.py +7 -1
  42. zrb/context/context.py +8 -2
  43. zrb/context/shared_context.py +3 -7
  44. zrb/group/any_group.py +3 -3
  45. zrb/group/group.py +3 -3
  46. zrb/input/any_input.py +5 -1
  47. zrb/input/base_input.py +18 -6
  48. zrb/input/option_input.py +13 -1
  49. zrb/input/text_input.py +7 -24
  50. zrb/runner/cli.py +21 -20
  51. zrb/runner/common_util.py +24 -19
  52. zrb/runner/web_route/task_input_api_route.py +5 -5
  53. zrb/runner/web_util/user.py +7 -3
  54. zrb/session/any_session.py +12 -6
  55. zrb/session/session.py +39 -18
  56. zrb/task/any_task.py +24 -3
  57. zrb/task/base/context.py +17 -9
  58. zrb/task/base/execution.py +15 -8
  59. zrb/task/base/lifecycle.py +8 -4
  60. zrb/task/base/monitoring.py +12 -7
  61. zrb/task/base_task.py +69 -5
  62. zrb/task/base_trigger.py +12 -5
  63. zrb/task/llm/agent.py +128 -167
  64. zrb/task/llm/agent_runner.py +152 -0
  65. zrb/task/llm/config.py +39 -20
  66. zrb/task/llm/conversation_history.py +110 -29
  67. zrb/task/llm/conversation_history_model.py +4 -179
  68. zrb/task/llm/default_workflow/coding/workflow.md +41 -0
  69. zrb/task/llm/default_workflow/copywriting/workflow.md +68 -0
  70. zrb/task/llm/default_workflow/git/workflow.md +118 -0
  71. zrb/task/llm/default_workflow/golang/workflow.md +128 -0
  72. zrb/task/llm/default_workflow/html-css/workflow.md +135 -0
  73. zrb/task/llm/default_workflow/java/workflow.md +146 -0
  74. zrb/task/llm/default_workflow/javascript/workflow.md +158 -0
  75. zrb/task/llm/default_workflow/python/workflow.md +160 -0
  76. zrb/task/llm/default_workflow/researching/workflow.md +153 -0
  77. zrb/task/llm/default_workflow/rust/workflow.md +162 -0
  78. zrb/task/llm/default_workflow/shell/workflow.md +299 -0
  79. zrb/task/llm/file_replacement.py +206 -0
  80. zrb/task/llm/file_tool_model.py +57 -0
  81. zrb/task/llm/history_processor.py +206 -0
  82. zrb/task/llm/history_summarization.py +2 -193
  83. zrb/task/llm/print_node.py +184 -64
  84. zrb/task/llm/prompt.py +175 -179
  85. zrb/task/llm/subagent_conversation_history.py +41 -0
  86. zrb/task/llm/tool_wrapper.py +226 -85
  87. zrb/task/llm/workflow.py +76 -0
  88. zrb/task/llm_task.py +109 -71
  89. zrb/task/make_task.py +2 -3
  90. zrb/task/rsync_task.py +25 -10
  91. zrb/task/scheduler.py +4 -4
  92. zrb/util/attr.py +54 -39
  93. zrb/util/cli/markdown.py +12 -0
  94. zrb/util/cli/text.py +30 -0
  95. zrb/util/file.py +12 -3
  96. zrb/util/git.py +2 -2
  97. zrb/util/{llm/prompt.py → markdown.py} +2 -3
  98. zrb/util/string/conversion.py +1 -1
  99. zrb/util/truncate.py +23 -0
  100. zrb/util/yaml.py +204 -0
  101. zrb/xcom/xcom.py +10 -0
  102. {zrb-1.15.3.dist-info → zrb-1.21.29.dist-info}/METADATA +38 -18
  103. {zrb-1.15.3.dist-info → zrb-1.21.29.dist-info}/RECORD +105 -79
  104. {zrb-1.15.3.dist-info → zrb-1.21.29.dist-info}/WHEEL +1 -1
  105. zrb/task/llm/default_workflow/coding.md +0 -24
  106. zrb/task/llm/default_workflow/copywriting.md +0 -17
  107. zrb/task/llm/default_workflow/researching.md +0 -18
  108. {zrb-1.15.3.dist-info → zrb-1.21.29.dist-info}/entry_points.txt +0 -0
zrb/builtin/searxng/start.py ADDED
@@ -0,0 +1,21 @@
+ import os
+
+ from zrb.builtin.group import searxng_group
+ from zrb.config.config import CFG
+ from zrb.input.int_input import IntInput
+ from zrb.task.cmd_task import CmdTask
+ from zrb.task.http_check import HttpCheck
+
+ start_searxng = searxng_group.add_task(
+     CmdTask(
+         name="start-searxng",
+         input=IntInput(name="port", default=CFG.SEARXNG_PORT),
+         cwd=os.path.dirname(__file__),
+         cmd="docker run --rm -p {ctx.input.port}:8080 -v ./config/:/etc/searxng/ docker.io/searxng/searxng:latest -d", # noqa
+         readiness_check=HttpCheck(
+             "check-searxng",
+             url="http://localhost:{ctx.input.port}",
+         ),
+     ),
+     alias="start",
+ )
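The HttpCheck readiness probe referenced above is not defined in this diff. As a rough, hypothetical illustration of what such a readiness gate does for the new start-searxng task, the sketch below polls the rendered URL until the container answers; it is not the package's actual implementation.

```python
# Hypothetical sketch of readiness polling, not zrb's HttpCheck implementation.
# It waits until the SearXNG container started by the CmdTask above responds.
import time
import urllib.error
import urllib.request


def wait_until_ready(url: str, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Return True once the URL answers, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # container not listening yet
    return False


# With the default port input (CFG.SEARXNG_PORT, 8080 unless overridden),
# the readiness check targets the same address the task renders.
print(wait_until_ready("http://localhost:8080"))
```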
zrb/builtin/shell/autocomplete/bash.py CHANGED
@@ -1,5 +1,6 @@
  from zrb.builtin.group import shell_autocomplete_group
- from zrb.context.context import Context
+ from zrb.config.config import CFG
+ from zrb.context.context import AnyContext
  from zrb.task.make_task import make_task

  _COMPLETION_SCRIPT = """
@@ -36,5 +37,5 @@ complete -F _zrb_complete zrb
      group=shell_autocomplete_group,
      alias="bash",
  )
- def make_bash_autocomplete(ctx: Context):
-     return _COMPLETION_SCRIPT
+ def make_bash_autocomplete(ctx: AnyContext):
+     return _COMPLETION_SCRIPT.replace("zrb", CFG.ROOT_GROUP_NAME)
zrb/builtin/shell/autocomplete/zsh.py CHANGED
@@ -1,5 +1,6 @@
  from zrb.builtin.group import shell_autocomplete_group
- from zrb.context.context import Context
+ from zrb.config.config import CFG
+ from zrb.context.context import AnyContext
  from zrb.task.make_task import make_task

  _COMPLETION_SCRIPT = """
@@ -33,5 +34,5 @@ compdef _zrb_complete zrb
      group=shell_autocomplete_group,
      alias="zsh",
  )
- def make_zsh_autocomplete(ctx: Context):
-     return _COMPLETION_SCRIPT
+ def make_zsh_autocomplete(ctx: AnyContext):
+     return _COMPLETION_SCRIPT.replace("zrb", CFG.ROOT_GROUP_NAME)
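Both autocomplete tasks now render the completion script through `CFG.ROOT_GROUP_NAME` instead of returning it verbatim. A minimal sketch of the effect, with a hypothetical script and root group name standing in for the real `_COMPLETION_SCRIPT` constant and configuration value:

```python
# Hypothetical values; the real script and group name come from
# _COMPLETION_SCRIPT and CFG.ROOT_GROUP_NAME in the modules above.
completion_script = """\
_zrb_complete() {
    # ... completion body elided ...
}
complete -F _zrb_complete zrb
"""

root_group_name = "myapp"  # assumed custom root group name
print(completion_script.replace("zrb", root_group_name))
# -> complete -F _myapp_complete myapp
```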
zrb/config/config.py CHANGED
@@ -28,8 +28,13 @@ class Config:
      def ENV_PREFIX(self) -> str:
          return os.getenv("_ZRB_ENV_PREFIX", "ZRB")

-     def _getenv(self, env_name: str, default: str = "") -> str:
-         return os.getenv(f"{self.ENV_PREFIX}_{env_name}", default)
+     def _getenv(self, env_name: str | list[str], default: str = "") -> str:
+         env_name_list = env_name if isinstance(env_name, list) else [env_name]
+         for env_name in env_name_list:
+             value = os.getenv(f"{self.ENV_PREFIX}_{env_name}", None)
+             if value is not None:
+                 return value
+         return default

      def _get_internal_default_prompt(self, name: str) -> str:
          if name not in self.__internal_default_prompt:
@@ -60,6 +65,38 @@
      def DEFAULT_EDITOR(self) -> str:
          return self._getenv("EDITOR", "nano")

+     @property
+     def DEFAULT_DIFF_EDIT_COMMAND_TPL(self) -> str:
+         return self._getenv("DIFF_EDIT_COMMAND", self._get_default_diff_edit_command())
+
+     def _get_default_diff_edit_command(self) -> str:
+         editor = self.DEFAULT_EDITOR
+         if editor in [
+             "code",
+             "vscode",
+             "vscodium",
+             "windsurf",
+             "cursor",
+             "zed",
+             "zeditor",
+             "agy",
+         ]:
+             return f"{editor} --wait --diff {{old}} {{new}}"
+         if editor == "emacs":
+             return 'emacs --eval \'(ediff-files "{old}" "{new}")\''
+         if editor in ["nvim", "vim"]:
+             return (
+                 f"{editor} -d {{old}} {{new}} "
+                 "-i NONE "
+                 '-c "wincmd h | set readonly | wincmd l" '
+                 '-c "highlight DiffAdd cterm=bold ctermbg=22 guibg=#005f00 | highlight DiffChange cterm=bold ctermbg=24 guibg=#005f87 | highlight DiffText ctermbg=21 guibg=#0000af | highlight DiffDelete ctermbg=52 guibg=#5f0000" ' # noqa
+                 '-c "set showtabline=2 | set tabline=[Instructions]\\ :wqa(save\\ &\\ quit)\\ \\|\\ i/esc(toggle\\ edit\\ mode)" ' # noqa
+                 '-c "wincmd h | setlocal statusline=OLD\\ FILE" '
+                 '-c "wincmd l | setlocal statusline=%#StatusBold#NEW\\ FILE\\ :wqa(save\\ &\\ quit)\\ \\|\\ i/esc(toggle\\ edit\\ mode)" ' # noqa
+                 '-c "autocmd BufWritePost * wqa"'
+             )
+         return 'vimdiff {old} {new} +"setlocal ro" +"wincmd l" +"autocmd BufWritePost <buffer> qa"' # noqa
+
      @property
      def INIT_MODULES(self) -> list[str]:
          init_modules_str = self._getenv("INIT_MODULES", "")
@@ -102,6 +139,7 @@
          level = level.upper()
          log_levels = {
              "CRITICAL": logging.CRITICAL, # 50
+             "FATAL": logging.CRITICAL, # 50
              "ERROR": logging.ERROR, # 40
              "WARN": logging.WARNING, # 30
              "WARNING": logging.WARNING, # 30
@@ -243,6 +281,21 @@
          value = self._getenv("LLM_API_KEY")
          return None if value == "" else value

+     @property
+     def LLM_MODEL_SMALL(self) -> str | None:
+         value = self._getenv("LLM_MODEL_SMALL")
+         return None if value == "" else value
+
+     @property
+     def LLM_BASE_URL_SMALL(self) -> str | None:
+         value = self._getenv("LLM_BASE_URL_SMALL")
+         return None if value == "" else value
+
+     @property
+     def LLM_API_KEY_SMALL(self) -> str | None:
+         value = self._getenv("LLM_API_KEY_SMALL")
+         return None if value == "" else value
+
      @property
      def LLM_SYSTEM_PROMPT(self) -> str | None:
          value = self._getenv("LLM_SYSTEM_PROMPT")
@@ -259,12 +312,28 @@
          return None if value == "" else value

      @property
-     def LLM_MODES(self) -> list[str]:
-         return [
-             mode.strip()
-             for mode in self._getenv("LLM_MODES", "coding").split(",")
-             if mode.strip() != ""
-         ]
+     def LLM_WORKFLOWS(self) -> list[str]:
+         """Get a list of LLM workflows from environment variables."""
+         workflows = []
+         for workflow in self._getenv("LLM_WORKFLOWS", "").split(","):
+             workflow = workflow.strip()
+             if workflow != "":
+                 workflows.append(workflow)
+         return workflows
+
+     @property
+     def LLM_BUILTIN_WORKFLOW_PATHS(self) -> list[str]:
+         """Get a list of additional builtin workflow paths from environment variables."""
+         builtin_workflow_paths_str = self._getenv(
+             ["LLM_BUILTIN_WORFKLOW_PATH", "LLM_BUILTIN_WORKFLOW_PATHS"], ""
+         )
+         if builtin_workflow_paths_str != "":
+             return [
+                 path.strip()
+                 for path in builtin_workflow_paths_str.split(":")
+                 if path.strip() != ""
+             ]
+         return []

      @property
      def LLM_SPECIAL_INSTRUCTION_PROMPT(self) -> str | None:
@@ -276,13 +345,21 @@
          value = self._getenv("LLM_SUMMARIZATION_PROMPT")
          return None if value == "" else value

+     @property
+     def LLM_SHOW_TOOL_CALL_RESULT(self) -> bool:
+         return to_boolean(self._getenv("LLM_SHOW_TOOL_CALL_RESULT", "false"))
+
      @property
      def LLM_MAX_REQUESTS_PER_MINUTE(self) -> int:
          """
          Maximum number of LLM requests allowed per minute.
          Default is conservative to accommodate free-tier LLM providers.
          """
-         return int(self._getenv("LLM_MAX_REQUESTS_PER_MINUTE", "15"))
+         return int(
+             self._getenv(
+                 ["LLM_MAX_REQUEST_PER_MINUTE", "LLM_MAX_REQUESTS_PER_MINUTE"], "60"
+             )
+         )

      @property
      def LLM_MAX_TOKENS_PER_MINUTE(self) -> int:
@@ -290,21 +367,46 @@
          Maximum number of LLM tokens allowed per minute.
          Default is conservative to accommodate free-tier LLM providers.
          """
-         return int(self._getenv("LLM_MAX_TOKENS_PER_MINUTE", "100000"))
+         return int(
+             self._getenv(
+                 ["LLM_MAX_TOKEN_PER_MINUTE", "LLM_MAX_TOKENS_PER_MINUTE"], "100000"
+             )
+         )

      @property
      def LLM_MAX_TOKENS_PER_REQUEST(self) -> int:
          """Maximum number of tokens allowed per individual LLM request."""
-         return int(self._getenv("LLM_MAX_TOKENS_PER_REQUEST", "50000"))
+         return int(
+             self._getenv(
+                 ["LLM_MAX_TOKEN_PER_REQUEST", "LLM_MAX_TOKENS_PER_REQUEST"], "120000"
+             )
+         )
+
+     @property
+     def LLM_MAX_TOKENS_PER_TOOL_CALL_RESULT(self) -> int:
+         """Maximum number of tokens allowed per tool call result."""
+         return int(
+             self._getenv(
+                 [
+                     "LLM_MAX_TOKEN_PER_TOOL_CALL_RESULT",
+                     "LLM_MAX_TOKENS_PER_TOOL_CALL_RESULT",
+                 ],
+                 str(self._get_max_threshold(0.4)),
+             )
+         )

      @property
      def LLM_THROTTLE_SLEEP(self) -> float:
          """Number of seconds to sleep when throttling is required."""
-         return float(self._getenv("LLM_THROTTLE_SLEEP", "1.0"))
+         return float(self._getenv("LLM_THROTTLE_SLEEP", "5.0"))

      @property
-     def LLM_YOLO_MODE(self) -> bool:
-         return to_boolean(self._getenv("LLM_YOLO_MODE", "false"))
+     def LLM_YOLO_MODE(self) -> bool | list[str]:
+         str_val = self._getenv("LLM_YOLO_MODE", "false")
+         try:
+             return to_boolean(str_val)
+         except Exception:
+             return [val.strip() for val in str_val.split(",") if val.strip() != ""]

      @property
      def LLM_SUMMARIZE_HISTORY(self) -> bool:
@@ -312,19 +414,51 @@

      @property
      def LLM_HISTORY_SUMMARIZATION_TOKEN_THRESHOLD(self) -> int:
-         return int(self._getenv("LLM_HISTORY_SUMMARIZATION_TOKEN_THRESHOLD", "20000"))
+         threshold = int(
+             self._getenv(
+                 "LLM_HISTORY_SUMMARIZATION_TOKEN_THRESHOLD",
+                 str(self._get_max_threshold(0.6)),
+             )
+         )
+         return self._limit_token_threshold(threshold, 0.6)

      @property
      def LLM_REPO_ANALYSIS_EXTRACTION_TOKEN_THRESHOLD(self) -> int:
-         return int(self._getenv("LLM_REPO_ANALYSIS_EXTRACTION_TOKEN_LIMIT", "35000"))
+         threshold = int(
+             self._getenv(
+                 "LLM_REPO_ANALYSIS_EXTRACTION_TOKEN_THRESHOLD",
+                 str(self._get_max_threshold(0.4)),
+             )
+         )
+         return self._limit_token_threshold(threshold, 0.4)

      @property
      def LLM_REPO_ANALYSIS_SUMMARIZATION_TOKEN_THRESHOLD(self) -> int:
-         return int(self._getenv("LLM_REPO_ANALYSIS_SUMMARIZATION_TOKEN_LIMIT", "35000"))
+         threshold = int(
+             self._getenv(
+                 "LLM_REPO_ANALYSIS_SUMMARIZATION_TOKEN_THRESHOLD",
+                 str(self._get_max_threshold(0.4)),
+             )
+         )
+         return self._limit_token_threshold(threshold, 0.4)

      @property
-     def LLM_FILE_ANALYSIS_TOKEN_LIMIT(self) -> int:
-         return int(self._getenv("LLM_FILE_ANALYSIS_TOKEN_LIMIT", "35000"))
+     def LLM_FILE_ANALYSIS_TOKEN_THRESHOLD(self) -> int:
+         threshold = int(
+             self._getenv(
+                 "LLM_FILE_ANALYSIS_TOKEN_THRESHOLD", str(self._get_max_threshold(0.4))
+             )
+         )
+         return self._limit_token_threshold(threshold, 0.4)
+
+     def _limit_token_threshold(self, threshold: int, factor: float) -> int:
+         return min(threshold, self._get_max_threshold(factor))
+
+     def _get_max_threshold(self, factor: float) -> int:
+         return round(
+             factor
+             * min(self.LLM_MAX_TOKENS_PER_MINUTE, self.LLM_MAX_TOKENS_PER_REQUEST)
+         )

      @property
      def LLM_FILE_EXTRACTOR_SYSTEM_PROMPT(self) -> str:
@@ -378,14 +512,6 @@
      def LLM_ALLOW_SEARCH_INTERNET(self) -> bool:
          return to_boolean(self._getenv("LLM_ALLOW_SEARCH_INTERNET", "1"))

-     @property
-     def LLM_ALLOW_SEARCH_ARXIV(self) -> bool:
-         return to_boolean(self._getenv("LLM_ALLOW_SEARCH_ARXIV", "1"))
-
-     @property
-     def LLM_ALLOW_SEARCH_WIKIPEDIA(self) -> bool:
-         return to_boolean(self._getenv("LLM_ALLOW_SEARCH_WIKIPEDIA", "1"))
-
      @property
      def LLM_ALLOW_GET_CURRENT_LOCATION(self) -> bool:
          return to_boolean(self._getenv("LLM_ALLOW_GET_CURRENT_LOCATION", "1"))
@@ -420,10 +546,51 @@
      def RAG_MAX_RESULT_COUNT(self) -> int:
          return int(self._getenv("RAG_MAX_RESULT_COUNT", "5"))

+     @property
+     def SEARCH_INTERNET_METHOD(self) -> str:
+         """Either serpapi or searxng"""
+         return self._getenv("SEARCH_INTERNET_METHOD", "serpapi")
+
+     @property
+     def BRAVE_API_KEY(self) -> str:
+         return os.getenv("BRAVE_API_KEY", "")
+
+     @property
+     def BRAVE_API_SAFE(self) -> str:
+         return self._getenv("BRAVE_API_SAFE", "off")
+
+     @property
+     def BRAVE_API_LANG(self) -> str:
+         return self._getenv("BRAVE_API_LANG", "en")
+
      @property
      def SERPAPI_KEY(self) -> str:
          return os.getenv("SERPAPI_KEY", "")

+     @property
+     def SERPAPI_SAFE(self) -> str:
+         return self._getenv("SERPAPI_SAFE", "off")
+
+     @property
+     def SERPAPI_LANG(self) -> str:
+         return self._getenv("SERPAPI_LANG", "en")
+
+     @property
+     def SEARXNG_PORT(self) -> int:
+         return int(self._getenv("SEARXNG_PORT", "8080"))
+
+     @property
+     def SEARXNG_BASE_URL(self) -> str:
+         return self._getenv("SEARXNG_BASE_URL", f"http://localhost:{self.SEARXNG_PORT}")
+
+     @property
+     def SEARXNG_SAFE(self) -> int:
+         return int(self._getenv("SEARXNG_SAFE", "0"))
+
+     @property
+     def SEARXNG_LANG(self) -> str:
+         return self._getenv("SEARXNG_LANG", "en")
+
      @property
      def BANNER(self) -> str:
          return fstring_format(
@@ -435,5 +602,13 @@
      def LLM_CONTEXT_FILE(self) -> str:
          return self._getenv("LLM_CONTEXT_FILE", "ZRB.md")

+     @property
+     def USE_TIKTOKEN(self) -> bool:
+         return to_boolean(self._getenv("USE_TIKTOKEN", "true"))
+
+     @property
+     def TIKTOKEN_ENCODING_NAME(self) -> str:
+         return self._getenv("TIKTOKEN_ENCODING_NAME", "cl100k_base")
+

  CFG = Config()
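Two recurring patterns in the config changes above are worth unpacking: `_getenv` now accepts a list of names so legacy singular variables (e.g., `LLM_MAX_REQUEST_PER_MINUTE`) keep working alongside the plural ones, and the token thresholds default to a fraction of the smaller of the two token rate limits via `_get_max_threshold`. A standalone sketch that mirrors (rather than imports) that logic, using the default values shown in the diff:

```python
# Mirrors the new Config._getenv and Config._get_max_threshold behavior;
# not an import of the real class. Default values are the ones in the diff.
import os


def getenv(env_name, default="", prefix="ZRB"):
    # Accept one name or a list of names; return the first prefixed
    # variable that is actually set, otherwise the default.
    names = env_name if isinstance(env_name, list) else [env_name]
    for name in names:
        value = os.getenv(f"{prefix}_{name}")
        if value is not None:
            return value
    return default


# The legacy singular name still wins if it is the one that is set.
os.environ["ZRB_LLM_MAX_REQUEST_PER_MINUTE"] = "30"
print(getenv(["LLM_MAX_REQUEST_PER_MINUTE", "LLM_MAX_REQUESTS_PER_MINUTE"], "60"))  # 30

# Token thresholds now default to factor * min(per-minute, per-request) tokens.
max_tokens_per_minute = 100_000   # LLM_MAX_TOKENS_PER_MINUTE default
max_tokens_per_request = 120_000  # LLM_MAX_TOKENS_PER_REQUEST default
history_threshold_default = round(0.6 * min(max_tokens_per_minute, max_tokens_per_request))
print(history_threshold_default)  # 60000
```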
zrb/config/default_prompt/file_extractor_system_prompt.md CHANGED
@@ -1,12 +1,112 @@
- You are an intelligent code and configuration analysis agent.
- Your primary goal is to extract key information from the provided file(s) that is directly relevant to the main assistant's objective.
+ You are an expert code and configuration analysis agent. Your purpose is to analyze a single file and create a concise, structured markdown summary of its most important components.

- Analyze the file content and determine its type (e.g., Python script, YAML configuration, Dockerfile, Markdown documentation).
- Based on the file type, extract the most important information in a structured markdown format.
+ ### Instructions

- - For source code (e.g., .py, .js, .go): Extract key components like classes, functions, important variables, and their purposes.
- - For configuration files (e.g., .yaml, .toml, .json): Extract the main configuration sections, keys, and their values.
- - For infrastructure files (e.g., Dockerfile, .tf): Extract resources, settings, and commands.
- - For documentation (e.g., .md): Extract headings, summaries, code blocks, and links.
+ 1. **Analyze File Content**: Determine the file's type (e.g., Python, Dockerfile, YAML, Markdown).
+ 2. **Extract Key Information**: Based on the file type, extract only the most relevant information.
+     * **Source Code** (`.py`, `.js`, `.go`): Extract classes, functions, key variables, and their purpose.
+     * **Configuration** (`.yaml`, `.toml`, `.json`): Extract main sections, keys, and values.
+     * **Infrastructure** (`Dockerfile`, `.tf`): Extract resources, settings, and commands.
+     * **Documentation** (`.md`): Extract headings, summaries, and code blocks.
+ 3. **Format Output**: Present the summary in structured markdown.

- Focus on quality and relevance over quantity. The output should be a concise yet comprehensive summary that directly helps the main assistant achieve its goal.
+ ### Guiding Principles
+
+ * **Clarity over Completeness**: Do not reproduce the entire file. Capture its essence.
+ * **Relevance is Key**: The summary must help an AI assistant quickly understand the file's role and function.
+ * **Use Markdown**: Structure the output logically with headings, lists, and code blocks.
+
+ ---
+
+ ### Examples
+
+ Here are examples of the expected output.
+
+ #### Example 1: Python Source File (`database.py`)
+
+ **Input File:**
+ ```python
+ # src/database.py
+ import os
+ from sqlalchemy import create_engine, Column, Integer, String
+ from sqlalchemy.ext.declarative import declarative_base
+ from sqlalchemy.orm import sessionmaker
+
+ DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./test.db")
+
+ engine = create_engine(DATABASE_URL)
+ SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+ Base = declarative_base()
+
+ class User(Base):
+     __tablename__ = "users"
+     id = Column(Integer, primary_key=True, index=True)
+     username = Column(String, unique=True, index=True)
+     email = Column(String, unique=True, index=True)
+
+ def get_db():
+     db = SessionLocal()
+     try:
+         yield db
+     finally:
+         db.close()
+ ```
+
+ **Expected Markdown Output:**
+ ```markdown
+ ### File Summary: `src/database.py`
+
+ This file sets up the database connection and defines the `User` model using SQLAlchemy.
+
+ **Key Components:**
+
+ * **Configuration:**
+     * `DATABASE_URL`: Determined by the `DATABASE_URL` environment variable, defaulting to a local SQLite database.
+ * **SQLAlchemy Objects:**
+     * `engine`: The core SQLAlchemy engine connected to the `DATABASE_URL`.
+     * `SessionLocal`: A factory for creating new database sessions.
+     * `Base`: The declarative base for ORM models.
+ * **ORM Models:**
+     * **`User` class:**
+         * Table: `users`
+         * Columns: `id` (Integer, Primary Key), `username` (String), `email` (String).
+ * **Functions:**
+     * `get_db()`: A generator function to provide a database session for dependency injection, ensuring the session is closed after use.
+ ```
+
+ #### Example 2: Infrastructure File (`Dockerfile`)
+
+ **Input File:**
+ ```dockerfile
+ FROM python:3.9-slim
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY . .
+
+ CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
+ ```
+
+ **Expected Markdown Output:**
+ ```markdown
+ ### File Summary: `Dockerfile`
+
+ This Dockerfile defines a container for a Python 3.9 application.
+
+ **Resources and Commands:**
+
+ * **Base Image:** `python:3.9-slim`
+ * **Working Directory:** `/app`
+ * **Dependency Installation:**
+     * Copies `requirements.txt` into the container.
+     * Installs the dependencies using `pip`.
+ * **Application Code:**
+     * Copies the rest of the application code into the `/app` directory.
+ * **Execution Command:**
+     * Starts the application using `uvicorn`, making it accessible on port 80.
+ ```
+ ---
+ Produce only the markdown summary for the files provided. Do not add any conversational text or introductory phrases.
zrb/config/default_prompt/interactive_system_prompt.md CHANGED
@@ -1,35 +1,29 @@
- You are an expert interactive AI agent. You MUST follow this workflow for this interactive session. Respond in GitHub-flavored Markdown.
+ This is an interactive session. Your primary goal is to help users effectively and efficiently.

  # Core Principles
- - **Be Tool-Centric:** Do not describe what you are about to do. When a decision is made, call the tool directly. Only communicate with the user to ask for clarification/confirmation or to report the final result of an action.
- - **Efficiency:** Use your tools to get the job done with the minimum number of steps. Combine commands where possible.
- - **Adhere to Conventions:** When modifying existing files or data, analyze the existing content to match its style and format.
+ - **Tool-Centric:** Describe what you are about to do, then call the appropriate tool.
+ - **Efficiency:** Minimize steps and combine commands where possible.
+ - **Sequential Execution:** Use one tool at a time and wait for the result before proceeding.
+ - **Convention Adherence:** When modifying existing content or projects, match the established style and format.

- # Interactive Workflow
- 1. **Clarify and Plan:** Understand the user's goal.
-     * If a request is **ambiguous**, ask clarifying questions.
-     * For **complex tasks**, briefly state your plan and proceed.
-     * You should only ask for user approval if your plan involves **multiple destructive actions** or could have **unintended consequences**. For straightforward creative or low-risk destructive tasks (e.g., writing a new file, deleting a file in `/tmp`), **do not ask for permission to proceed.**
+ # Operational Guidelines
+ - **Tone and Style:** Communicate in a clear, concise, and professional manner. Avoid conversational filler.
+ - **Clarification:** If a user's request is ambiguous, ask clarifying questions to ensure you understand the goal.
+ - **Planning:** For complex tasks, briefly state your plan to the user before you begin.
+ - **Confirmation:** For actions that are destructive (e.g., modifying or deleting files) or could have unintended consequences, explain the action and ask for user approval before proceeding.

- 2. **Assess Risk and Confirm:** Before executing, evaluate the risk of your plan.
-     * **Read-only or new file creation:** Proceed directly.
-     * **Destructive actions (modifying or deleting existing files):** For low-risk destructive actions, proceed directly. For moderate or high-risk destructive actions, you MUST explain the command and ask for confirmation.
-     * **High-risk actions (e.g., operating on critical system paths):** Refuse and explain the danger.
+ # Security and Safety Rules
+ - **Explain Critical Commands:** Before executing a command that modifies the file system or system state, you MUST provide a brief explanation of the command's purpose and potential impact.
+ - **High-Risk Actions:** Refuse to perform high-risk actions that could endanger the user's system (e.g., modifying system-critical paths). Explain the danger and why you are refusing.

- 3. **Execute and Verify (The E+V Loop):**
-     * Execute the action.
-     * **CRITICAL:** Immediately after execution, you MUST use a tool to verify the outcome (e.g., after `write_file`, use `read_file`; after `rm`, use `ls` to confirm absence).
-
- 4. **Handle Errors (The Debugging Loop):**
-     * If an action fails, you MUST NOT give up. You MUST enter a persistent debugging loop until the error is resolved.
-     1. **Analyze:** Scrutinize the complete error message, exit codes, and any other output to understand exactly what went wrong.
-     2. **Hypothesize:** State a clear, specific hypothesis about the root cause. For example, "The operation failed because the file path was incorrect," "The command failed because a required argument was missing," or "The test failed because the code has a logical error."
-     3. **Strategize and Correct:** Formulate a new action that directly addresses the hypothesis. Do not simply repeat the failed action. Your correction strategy MUST be logical and informed by the analysis. For example:
-         * If a path is wrong, take action to discover the correct path.
-         * If a command is malformed, correct its syntax or arguments.
-         * If an operation failed due to invalid state (e.g., unexpected file content, a logical bug in code), take action to inspect the current state and then formulate a targeted fix.
-     4. **Execute** the corrected action.
-     * **CRITICAL:** Do not ask the user for help or report the failure until you have exhausted all reasonable attempts to fix it yourself. If the user provides a vague follow-up like "try again," you MUST use the context of the previous failure to inform your next action, not just repeat the failed command.
-
- 5. **Report Results:**
-     * Provide a concise summary of the action taken and explicitly state how you verified it.
+ # Execution Plan
+ 1. **Load Workflows:** You MUST identify and load all relevant `🛠️ WORKFLOWS` based on the user's request before starting any execution.
+ 2. **Clarify and Plan:** Understand the user's goal. Ask clarifying questions, state your plan for complex tasks, and ask for approval for destructive actions.
+ 3. **Execute & Verify Loop:**
+     - Execute each step of your plan.
+     - **CRITICAL:** Verify the outcome of each action (e.g., check exit codes, confirm file modifications) before proceeding.
+ 4. **Error Handling:**
+     - Do not give up on failures. Analyze error messages and exit codes to understand the root cause.
+     - Formulate a specific hypothesis and execute a corrected action.
+     - Exhaust all reasonable fixes before asking the user for help.
+ 5. **Report Results:** When the task is complete, provide a concise summary of the actions taken and the final outcome.
zrb/config/default_prompt/persona.md CHANGED
@@ -1 +1 @@
- You are a helpful and efficient AI agent.
+ You are a helpful and efficient AI agent. You are precise, tool-oriented, and communicate in a clear, concise, and professional manner. Your primary goal is to understand user requests and use the available tools to fulfill them with maximum efficiency.