1bcoder 0.1.2__tar.gz → 0.1.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (78)
  1. {1bcoder-0.1.2 → 1bcoder-0.1.3/1bcoder.egg-info}/PKG-INFO +106 -14
  2. {1bcoder-0.1.2 → 1bcoder-0.1.3}/1bcoder.egg-info/SOURCES.txt +4 -0
  3. 1bcoder-0.1.2/README.md → 1bcoder-0.1.3/PKG-INFO +118 -13
  4. 1bcoder-0.1.2/1bcoder.egg-info/PKG-INFO → 1bcoder-0.1.3/README.md +103 -26
  5. 1bcoder-0.1.3/_bcoder_data/aliases.txt +13 -0
  6. 1bcoder-0.1.3/_bcoder_data/doc/PROC.md +235 -0
  7. 1bcoder-0.1.3/_bcoder_data/proc/ctx_cut.py +19 -0
  8. 1bcoder-0.1.3/_bcoder_data/proc/rude_words.py +34 -0
  9. 1bcoder-0.1.3/_bcoder_data/proc/secret_check.py +29 -0
  10. 1bcoder-0.1.3/_bcoder_data/proc/sql_readonly_guard.py +40 -0
  11. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/profiles.txt +6 -0
  12. {1bcoder-0.1.2 → 1bcoder-0.1.3}/chat.py +422 -45
  13. {1bcoder-0.1.2 → 1bcoder-0.1.3}/pyproject.toml +5 -1
  14. 1bcoder-0.1.2/_bcoder_data/aliases.txt +0 -8
  15. 1bcoder-0.1.2/_bcoder_data/doc/PROC.md +0 -150
  16. {1bcoder-0.1.2 → 1bcoder-0.1.3}/1bcoder.egg-info/dependency_links.txt +0 -0
  17. {1bcoder-0.1.2 → 1bcoder-0.1.3}/1bcoder.egg-info/entry_points.txt +0 -0
  18. {1bcoder-0.1.2 → 1bcoder-0.1.3}/1bcoder.egg-info/requires.txt +0 -0
  19. {1bcoder-0.1.2 → 1bcoder-0.1.3}/1bcoder.egg-info/top_level.txt +0 -0
  20. {1bcoder-0.1.2 → 1bcoder-0.1.3}/LICENSE +0 -0
  21. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/__init__.py +0 -0
  22. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/agents/advance.txt +0 -0
  23. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/agents/ask.txt +0 -0
  24. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/agents/fill.txt +0 -0
  25. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/agents/planning.txt +0 -0
  26. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/agents/sqlite.txt +0 -0
  27. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/doc/MCP.md +0 -0
  28. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/doc/PARAM.md +0 -0
  29. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/map.txt +0 -0
  30. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/add-save.py +0 -0
  31. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/collect-files.py +0 -0
  32. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-code.py +0 -0
  33. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-files.py +0 -0
  34. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-list.py +0 -0
  35. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/grounding-check.py +0 -0
  36. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/md.py +0 -0
  37. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/mdx.py +0 -0
  38. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/proc/regexp-extract.py +0 -0
  39. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/prompts/analysis.txt +0 -0
  40. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/prompts/sumarise.txt +0 -0
  41. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/prompts.txt +0 -0
  42. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/AddFunction.txt +0 -0
  43. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/AskProject.txt +0 -0
  44. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/CheckRequirements.txt +0 -0
  45. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerMySQL.txt +0 -0
  46. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerNginx.txt +0 -0
  47. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerPython.txt +0 -0
  48. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerStack.txt +0 -0
  49. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/DuckDuckGoInstant.txt +0 -0
  50. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/EnvTemplate.txt +0 -0
  51. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/Explain.txt +0 -0
  52. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/ExploreProjectStructure.txt +0 -0
  53. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/GitIgnorePython.txt +0 -0
  54. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/MySQLDump.txt +0 -0
  55. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/NewScript.txt +0 -0
  56. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/PipFreeze.txt +0 -0
  57. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/PyPI.txt +0 -0
  58. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/Refactor.txt +0 -0
  59. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/RunAndFix.txt +0 -0
  60. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/SQLiteSchema.txt +0 -0
  61. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/WikiPage.txt +0 -0
  62. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/WikiSearch.txt +0 -0
  63. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/parallel_call.txt +0 -0
  64. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/content/create-regular-content.txt +0 -0
  65. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/content/plan.txt +0 -0
  66. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/test/collect-data-from-test-environment.txt +0 -0
  67. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/plan.txt +0 -0
  68. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/remote/create-content-on-remote-server.txt +0 -0
  69. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/set_ctx.txt +0 -0
  70. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-map-worker.txt +0 -0
  71. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-search-worker.txt +0 -0
  72. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-summarize.txt +0 -0
  73. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-tree-worker.txt +0 -0
  74. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/scripts/test.txt +0 -0
  75. {1bcoder-0.1.2 → 1bcoder-0.1.3}/_bcoder_data/teams/code-analysis.yaml +0 -0
  76. {1bcoder-0.1.2 → 1bcoder-0.1.3}/map_index.py +0 -0
  77. {1bcoder-0.1.2 → 1bcoder-0.1.3}/map_query.py +0 -0
  78. {1bcoder-0.1.2 → 1bcoder-0.1.3}/setup.cfg +0 -0
@@ -1,7 +1,9 @@
  Metadata-Version: 2.4
  Name: 1bcoder
- Version: 0.1.2
+ Version: 0.1.3
  Summary: AI coding assistant agent for 1B–7B local models (Ollama, LMStudio, llama.cpp). Terminal REPL with file editing, project map, agents, scripts, and parallel multi-model queries.
+ Project-URL: Homepage, https://github.com/szholobetsky/1bcoder
+ Project-URL: Repository, https://github.com/szholobetsky/1bcoder
  Requires-Python: >=3.10
  Description-Content-Type: text/markdown
  License-File: LICENSE
@@ -90,7 +92,8 @@ Tasks that require the model to decide *what to look at* — refactoring across
  - **Aliases** — define command shortcuts with `/alias /name = expansion` (supports `{{args}}`); persisted in `aliases.txt`; loaded from global then project directory at startup and survive `/clear`
  - **Backup/restore** — `/bkup save` rotates existing backups (`file.bkup` → `file.bkup(1)`, `file.bkup(2)`…) so no snapshot is ever overwritten; `/bkup restore` always restores the latest
  - **MCP support** — connect external tool servers (filesystem, web, git, database, browser…) via the Model Context Protocol
- - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`, with saved profiles
+ - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`; control context sent (`--ctx`/`--last`/`--no-ctx`) and route replies back into main context (`ctx` output) for sub-agent workflows
+ - **Command hooks** — `/hook before|after <cmd> <script>` runs a script before or after edit/patch/fix/insert; `before` hook cancels the command if the script is missing; `{{file}}` and `{{range}}` injected automatically
  - Switch model or host at runtime without restarting (`/model gemma3:1b`, `/host openai://localhost:1234`)
  - **Model parameters** — `/param temperature 0.2`, `/param enable_thinking false` — sent with every request, auto-cast to correct type
  - **Multi-provider** — connect to Ollama, LMStudio, or LiteLLM using `ollama://` / `openai://` URL scheme; plain host defaults to Ollama
@@ -105,7 +108,7 @@ Tasks that require the model to decide *what to look at* — refactoring across
  pip install 1bcoder
  ```

- On first launch, default agents, procs, and scripts are copied to `~/.1bcoder/` automatically.
+ On first launch, default agents, procs, scripts, profiles, and aliases are copied to `~/.1bcoder/` automatically. On upgrade (`pip install --upgrade 1bcoder`), new entries in `aliases.txt` and `profiles.txt` are merged in without overwriting your customisations.

  ### Option 2 — Clone and install locally

@@ -675,7 +678,7 @@ Lines starting with `[v]` are already done and skipped. Lines starting with `#`
  | `/script show` | Display steps of the current script |
  | `/script add <command>` | Append a step to the current script |
  | `/script clear` | Wipe current script completely |
- | `/script reset` | Unmark all done steps |
+ | `/script reset` | Unmark all done steps (also happens automatically when a script runs to completion) |
  | `/script reapply [key=value ...]` | Reset all done steps then apply automatically; prompts for any NaN `{{variables}}` before running |
  | `/script refresh` | Reload script from disk and show contents |
  | `/script apply [file] [key=value ...]` | Run steps one by one (Y/n/q per step) |
@@ -725,6 +728,7 @@ Session variables store named values that are substituted as `{{name}}` in any c
  /var set name =MyService literal value
  /var def port db host declare multiple NaN variables (skips if already set)
  /var get list all variables (NaN = unset)
+ /var get port print value of a single variable (useful with ->)
  /var del port remove a variable
  ```

@@ -770,9 +774,12 @@ Any `{{key}}` found but not yet set is registered as NaN — `/script reapply` w

  ---

- ### Output capture (`->` and `$`)
+ ### Output capture (`->`, `$` and `~`)

- Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. The special token `$` expands to the last captured output anywhere in a command or message.
+ Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. Two special tokens expand anywhere in a command or message:
+
+ - `$` — last captured output (last AI reply or tool result)
+ - `~` — last user input (last message or command you typed)

  ```
  /map keyword extract auth.py -> keywords # capture tool output into variable
@@ -788,6 +795,15 @@ summarize this for me -> myplan # capture LLM reply
  /var set port result # also works: grab key from proc output
  ```

+ **`~` — repeat or redirect the last question:**
+ ```
+ how does this method work? # ask main model
+ /small ~ # same question → small model
+ /ask ~ # same question → agent mode
+ /explain "$" # ask small model to explain the reply
+ explain: $ # ask main model to explain its own reply
+ ```
+
  `->` stores the full text (including ANSI-stripped terminal output) and also updates `$` for immediate reuse. Variables captured with `->` appear in `/var get` like any other session variable.

  ---
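The `$`/`~` token expansion described in this hunk can be sketched roughly as follows. This is an illustrative sketch only, not code from the package; `expand` is a hypothetical helper name, and the real REPL may handle quoting and escaping differently.

```python
def expand(text: str, last_output: str, last_input: str) -> str:
    """Expand `$` (last captured output) and `~` (last user input) in a command.

    Naive sketch: a plain substring replace, no quoting rules.
    """
    return text.replace("$", last_output).replace("~", last_input)

# /small ~  -> resend the last typed question to the small model
print(expand("/small ~", "ignored", "how does this method work?"))
```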
@@ -867,24 +883,34 @@ Connect external tool servers to give the AI access to filesystems, databases, w
  /mcp disconnect fs
  ```

- See [MCP.md](MCP.md) for a full list of ready-to-use servers.
+ See `/doc MCP` for a full list of ready-to-use servers.

  ---

  ### Parallel queries

- Send prompts to multiple models at the same time. Each answer is saved to its own file.
+ Send a prompt to multiple models at the same time.

  ```
- /parallel ["prompt"] [profile <name>] [host:port|model|file ...]
+ /parallel ["prompt"] [--ctx|--last|--no-ctx] [profile <name>] [host:port|model|(file or ctx) ...]
  ```

+ | Flag | Behaviour |
+ |---|---|
+ | *(default)* | Full conversation context is sent to every worker |
+ | `--last` | Only the last user message is sent (saves tokens for small models) |
+ | `--no-ctx` | No context — prompt only (fastest, zero leakage) |
+
+ Workers write results to a file **or** inject them back into the main context:
+
  ```
  /parallel "review this for bugs" \
- localhost:11434|llama3.2:1b|answers/llm1.txt \
- localhost:11435|qwen2.5:1b|answers/llm2.txt
+ localhost:11434|llama3.2:1b|ans/llm1.txt \
+ localhost:11435|qwen2.5:1b|ctx
  ```

+ Using `ctx` as the output target injects the worker's reply into the main conversation — the next AI turn will see it.
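The `host:port|model|output` worker format added in this hunk can be parsed as in this rough sketch. Illustrative only; `Worker` and `parse_worker` are hypothetical names, not code from the package.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    host: str
    model: str
    output: str  # a file path, or the literal "ctx" to inject into the main context

def parse_worker(spec: str) -> Worker:
    # "localhost:11434|llama3.2:1b|ans/llm1.txt" splits into three pipe-separated fields
    host, model, output = spec.split("|", 2)
    return Worker(host, model, output)

w = parse_worker("localhost:11435|qwen2.5:1b|ctx")
print(w.model, "->", "main context" if w.output == "ctx" else w.output)
```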
+
  **Profiles** — save a set of workers for reuse:

  ```
@@ -902,6 +928,60 @@ review: localhost:11434|ministral3:3b|ans/review.txt localhost:11435|cogito:3b|a
  fast: localhost:11434|qwen2.5-coder:0.6b|ans/q.txt # quick sanity check
  ```

+ **Sub-agent profiles** — built-in profiles that return answers directly to the main context (`ctx`):
+
+ ```
+ small: localhost:11434|qwen3:0.6b|ctx
+ explain: localhost:11434|gemma3:1b|ctx
+ thinking: localhost:11434|lfm2.5-thinking:1.2b|ctx
+ short: localhost:11434|llama3.2:1b|ctx
+ ```
+
+ These are aliased as `/small`, `/explain`, `/thinking`, `/short` — use them like sub-agents:
+
+ ```
+ /small "what does this function return?" --no-ctx # ask tiny model, no context bleed
+ /explain "$" # ask gemma to explain last reply
+ /small ~ # repeat last question to a small model
+ ```
+
+ `~` expands to the last message you typed; `$` expands to the last AI reply — combine them to build sub-agent pipelines without copy-pasting.
+
+ ---
+
+ ### Hooks (`/hook`)
+
+ Run a script automatically **before** or **after** a command. Useful for backups before edits, linting after patches, or any pre/post workflow step.
+
+ ```
+ /hook before <cmd> <script> # run script before every <cmd>
+ /hook after <cmd> <script> # run script after every <cmd>
+ /hook list # show active hooks
+ /hook clear <cmd> # remove hooks for <cmd>
+ /hook clear # remove all hooks
+ ```
+
+ `<cmd>` is the command name without the slash: `edit`, `patch`, `fix`, `insert`, `run`.
+
+ **Two script types:**
+ - `.txt` — 1bcoder script (sequence of commands). `{{file}}` and `{{range}}` are injected as session variables.
+ - `.py` — Python guard subprocess. Receives trigger content on `stdin`, outputs `BLOCK:`/`ALERT:`/`ACTION:` lines.
+
+ **Auto-injected for `.txt` scripts:**
+
+ | Variable | Value |
+ |---|---|
+ | `{{file}}` | file argument of the triggering command |
+ | `{{range}}` | line range (if specified), e.g. `10-25` |
+
+ **Examples:**
+ ```
+ /hook before edit /bkup {{file}} # backup before every edit (.txt script)
+ /hook before run sql_readonly_guard.py # block dangerous SQL (.py guard)
+ ```
+
+ Missing `.txt` script cancels a `before` hook. `.py` guard cancels only if it prints `BLOCK:`. Step errors inside `.txt` scripts do not cancel the command.
984
+
905
985
  ---
906
986
 
907
987
  ### Prompt templates
@@ -930,9 +1010,9 @@ Run a Python script against the last LLM reply. Useful for extracting filenames,
930
1010
  /proc new my-proc # create a new processor from template
931
1011
  ```
932
1012
 
933
- **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · exit 1 = failure.
1013
+ **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · `ALERT: message` = warning printed, continues · `BLOCK: reason` = cancels the triggering command (hook mode only) · exit 1 = failure.
934
1014
 
935
- Built-in processors in `<install>/.1bcoder/proc/`:
1015
+ Built-in processors in `~/.1bcoder/proc/`:
936
1016
 
937
1017
  | Processor | Purpose | Best mode |
938
1018
  |---|---|---|
@@ -941,8 +1021,20 @@ Built-in processors in `<install>/.1bcoder/proc/`:
941
1021
  | `extract-list` | Convert first bullet/numbered list in reply to comma-separated line | one-shot |
942
1022
  | `grounding-check` | Score identifiers against `map.txt`, warn if <50% | persistent |
943
1023
  | `collect-files` | Accumulate filenames to `.1bcoder/collected-files.txt` | persistent |
944
- | `md` | Render last reply as formatted Markdown in terminal (`pip install rich`) | one-shot |
1024
+ | `md` | Render last reply as formatted Markdown in terminal | one-shot |
945
1025
  | `mdx` | Render last reply as Markdown + LaTeX (KaTeX) + Mermaid diagrams in browser | one-shot |
1026
+ | `ctx_cut` | Auto `/ctx cut` when context exceeds threshold (default 90%) | persistent |
1027
+ | `rude_words` | Alert if reply contains profanity (`ua` arg adds Ukrainian list) | persistent |
1028
+ | `secret_check` | Alert if reply contains sensitive names (google, anthropic…) | persistent |
1029
+ | `sql_readonly_guard` | Alert (proc) or block (hook) on write SQL statements | both |
1030
+
1031
+ **Guard usage examples:**
1032
+ ```
1033
+ /proc on ctx_cut 80 # auto cut at 80%
1034
+ /proc on rude_words ua # profanity check + Ukrainian
1035
+ /proc on secret_check client=acme # + custom keyword
1036
+ /hook before run sql_readonly_guard.py # block /run with DELETE/DROP/UPDATE
1037
+ ```
946
1038
 
947
1039
  See `/doc PROC` for the full protocol, built-in processor reference, and guide to writing your own.
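As a sketch of the processor protocol above (stdin = reply, stdout = result plus `key=value` params, `ALERT:` lines, exit 1 = failure): this is illustrative only, not one of the shipped processors, and `process` plus the extract-a-code-block behaviour are hypothetical.

```python
import re
import sys

def process(reply: str) -> list[str]:
    """Emit the first fenced code block plus a lang=... param line (hypothetical)."""
    m = re.search(r"```(\w*)\n(.*?)```", reply, re.DOTALL)
    if not m:
        return ["ALERT: no code block found in reply"]  # warning; command continues
    lines = [m.group(2).rstrip()]            # stdout = result
    if m.group(1):
        lines.append(f"lang={m.group(1)}")   # key=value line = extracted param
    return lines

if __name__ == "__main__":
    # Per the protocol, a processor would sys.exit(1) on hard failure instead.
    for line in process(sys.stdin.read()):
        print(line)
```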

@@ -25,6 +25,7 @@ _bcoder_data/doc/PARAM.md
  _bcoder_data/doc/PROC.md
  _bcoder_data/proc/add-save.py
  _bcoder_data/proc/collect-files.py
+ _bcoder_data/proc/ctx_cut.py
  _bcoder_data/proc/extract-code.py
  _bcoder_data/proc/extract-files.py
  _bcoder_data/proc/extract-list.py
@@ -32,6 +33,9 @@ _bcoder_data/proc/grounding-check.py
  _bcoder_data/proc/md.py
  _bcoder_data/proc/mdx.py
  _bcoder_data/proc/regexp-extract.py
+ _bcoder_data/proc/rude_words.py
+ _bcoder_data/proc/secret_check.py
+ _bcoder_data/proc/sql_readonly_guard.py
  _bcoder_data/prompts/analysis.txt
  _bcoder_data/prompts/sumarise.txt
  _bcoder_data/scripts/AddFunction.txt
@@ -1,3 +1,18 @@
+ Metadata-Version: 2.4
+ Name: 1bcoder
+ Version: 0.1.3
+ Summary: AI coding assistant agent for 1B–7B local models (Ollama, LMStudio, llama.cpp). Terminal REPL with file editing, project map, agents, scripts, and parallel multi-model queries.
+ Project-URL: Homepage, https://github.com/szholobetsky/1bcoder
+ Project-URL: Repository, https://github.com/szholobetsky/1bcoder
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: requests>=2.28
+ Requires-Dist: pyreadline3>=3.4; sys_platform == "win32"
+ Requires-Dist: tqdm>=4.64
+ Requires-Dist: rich>=13.0
+ Dynamic: license-file
+
  # 1bcoder

  AI coding assistant agent for 1B–7B local models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
@@ -77,7 +92,8 @@ Tasks that require the model to decide *what to look at* — refactoring across
  - **Aliases** — define command shortcuts with `/alias /name = expansion` (supports `{{args}}`); persisted in `aliases.txt`; loaded from global then project directory at startup and survive `/clear`
  - **Backup/restore** — `/bkup save` rotates existing backups (`file.bkup` → `file.bkup(1)`, `file.bkup(2)`…) so no snapshot is ever overwritten; `/bkup restore` always restores the latest
  - **MCP support** — connect external tool servers (filesystem, web, git, database, browser…) via the Model Context Protocol
- - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`, with saved profiles
+ - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`; control context sent (`--ctx`/`--last`/`--no-ctx`) and route replies back into main context (`ctx` output) for sub-agent workflows
+ - **Command hooks** — `/hook before|after <cmd> <script>` runs a script before or after edit/patch/fix/insert; `before` hook cancels the command if the script is missing; `{{file}}` and `{{range}}` injected automatically
  - Switch model or host at runtime without restarting (`/model gemma3:1b`, `/host openai://localhost:1234`)
  - **Model parameters** — `/param temperature 0.2`, `/param enable_thinking false` — sent with every request, auto-cast to correct type
  - **Multi-provider** — connect to Ollama, LMStudio, or LiteLLM using `ollama://` / `openai://` URL scheme; plain host defaults to Ollama
@@ -92,7 +108,7 @@ Tasks that require the model to decide *what to look at* — refactoring across
  pip install 1bcoder
  ```

- On first launch, default agents, procs, and scripts are copied to `~/.1bcoder/` automatically.
+ On first launch, default agents, procs, scripts, profiles, and aliases are copied to `~/.1bcoder/` automatically. On upgrade (`pip install --upgrade 1bcoder`), new entries in `aliases.txt` and `profiles.txt` are merged in without overwriting your customisations.

  ### Option 2 — Clone and install locally

@@ -662,7 +678,7 @@ Lines starting with `[v]` are already done and skipped. Lines starting with `#`
  | `/script show` | Display steps of the current script |
  | `/script add <command>` | Append a step to the current script |
  | `/script clear` | Wipe current script completely |
- | `/script reset` | Unmark all done steps |
+ | `/script reset` | Unmark all done steps (also happens automatically when a script runs to completion) |
  | `/script reapply [key=value ...]` | Reset all done steps then apply automatically; prompts for any NaN `{{variables}}` before running |
  | `/script refresh` | Reload script from disk and show contents |
  | `/script apply [file] [key=value ...]` | Run steps one by one (Y/n/q per step) |
@@ -712,6 +728,7 @@ Session variables store named values that are substituted as `{{name}}` in any c
  /var set name =MyService literal value
  /var def port db host declare multiple NaN variables (skips if already set)
  /var get list all variables (NaN = unset)
+ /var get port print value of a single variable (useful with ->)
  /var del port remove a variable
  ```

@@ -757,9 +774,12 @@ Any `{{key}}` found but not yet set is registered as NaN — `/script reapply` w

  ---

- ### Output capture (`->` and `$`)
+ ### Output capture (`->`, `$` and `~`)

- Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. The special token `$` expands to the last captured output anywhere in a command or message.
+ Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. Two special tokens expand anywhere in a command or message:
+
+ - `$` — last captured output (last AI reply or tool result)
+ - `~` — last user input (last message or command you typed)

  ```
  /map keyword extract auth.py -> keywords # capture tool output into variable
@@ -775,6 +795,15 @@ summarize this for me -> myplan # capture LLM reply
  /var set port result # also works: grab key from proc output
  ```

+ **`~` — repeat or redirect the last question:**
+ ```
+ how does this method work? # ask main model
+ /small ~ # same question → small model
+ /ask ~ # same question → agent mode
+ /explain "$" # ask small model to explain the reply
+ explain: $ # ask main model to explain its own reply
+ ```
+
  `->` stores the full text (including ANSI-stripped terminal output) and also updates `$` for immediate reuse. Variables captured with `->` appear in `/var get` like any other session variable.

  ---
@@ -854,24 +883,34 @@ Connect external tool servers to give the AI access to filesystems, databases, w
  /mcp disconnect fs
  ```

- See [MCP.md](MCP.md) for a full list of ready-to-use servers.
+ See `/doc MCP` for a full list of ready-to-use servers.

  ---

  ### Parallel queries

- Send prompts to multiple models at the same time. Each answer is saved to its own file.
+ Send a prompt to multiple models at the same time.

  ```
- /parallel ["prompt"] [profile <name>] [host:port|model|file ...]
+ /parallel ["prompt"] [--ctx|--last|--no-ctx] [profile <name>] [host:port|model|(file or ctx) ...]
  ```

+ | Flag | Behaviour |
+ |---|---|
+ | *(default)* | Full conversation context is sent to every worker |
+ | `--last` | Only the last user message is sent (saves tokens for small models) |
+ | `--no-ctx` | No context — prompt only (fastest, zero leakage) |
+
+ Workers write results to a file **or** inject them back into the main context:
+
  ```
  /parallel "review this for bugs" \
- localhost:11434|llama3.2:1b|answers/llm1.txt \
- localhost:11435|qwen2.5:1b|answers/llm2.txt
+ localhost:11434|llama3.2:1b|ans/llm1.txt \
+ localhost:11435|qwen2.5:1b|ctx
  ```

+ Using `ctx` as the output target injects the worker's reply into the main conversation — the next AI turn will see it.
+
  **Profiles** — save a set of workers for reuse:

  ```
@@ -889,6 +928,60 @@ review: localhost:11434|ministral3:3b|ans/review.txt localhost:11435|cogito:3b|a
  fast: localhost:11434|qwen2.5-coder:0.6b|ans/q.txt # quick sanity check
  ```

+ **Sub-agent profiles** — built-in profiles that return answers directly to the main context (`ctx`):
+
+ ```
+ small: localhost:11434|qwen3:0.6b|ctx
+ explain: localhost:11434|gemma3:1b|ctx
+ thinking: localhost:11434|lfm2.5-thinking:1.2b|ctx
+ short: localhost:11434|llama3.2:1b|ctx
+ ```
+
+ These are aliased as `/small`, `/explain`, `/thinking`, `/short` — use them like sub-agents:
+
+ ```
+ /small "what does this function return?" --no-ctx # ask tiny model, no context bleed
+ /explain "$" # ask gemma to explain last reply
+ /small ~ # repeat last question to a small model
+ ```
+
+ `~` expands to the last message you typed; `$` expands to the last AI reply — combine them to build sub-agent pipelines without copy-pasting.
+
+ ---
+
+ ### Hooks (`/hook`)
+
+ Run a script automatically **before** or **after** a command. Useful for backups before edits, linting after patches, or any pre/post workflow step.
+
+ ```
+ /hook before <cmd> <script> # run script before every <cmd>
+ /hook after <cmd> <script> # run script after every <cmd>
+ /hook list # show active hooks
+ /hook clear <cmd> # remove hooks for <cmd>
+ /hook clear # remove all hooks
+ ```
+
+ `<cmd>` is the command name without the slash: `edit`, `patch`, `fix`, `insert`, `run`.
+
+ **Two script types:**
+ - `.txt` — 1bcoder script (sequence of commands). `{{file}}` and `{{range}}` are injected as session variables.
+ - `.py` — Python guard subprocess. Receives trigger content on `stdin`, outputs `BLOCK:`/`ALERT:`/`ACTION:` lines.
+
+ **Auto-injected for `.txt` scripts:**
+
+ | Variable | Value |
+ |---|---|
+ | `{{file}}` | file argument of the triggering command |
+ | `{{range}}` | line range (if specified), e.g. `10-25` |
+
+ **Examples:**
+ ```
+ /hook before edit /bkup {{file}} # backup before every edit (.txt script)
+ /hook before run sql_readonly_guard.py # block dangerous SQL (.py guard)
+ ```
+
+ Missing `.txt` script cancels a `before` hook. `.py` guard cancels only if it prints `BLOCK:`. Step errors inside `.txt` scripts do not cancel the command.
+
  ---

  ### Prompt templates
@@ -917,9 +1010,9 @@ Run a Python script against the last LLM reply. Useful for extracting filenames,
  /proc new my-proc # create a new processor from template
  ```

- **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · exit 1 = failure.
+ **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · `ALERT: message` = warning printed, continues · `BLOCK: reason` = cancels the triggering command (hook mode only) · exit 1 = failure.

- Built-in processors in `<install>/.1bcoder/proc/`:
+ Built-in processors in `~/.1bcoder/proc/`:

  | Processor | Purpose | Best mode |
  |---|---|---|
@@ -928,8 +1021,20 @@ Built-in processors in `<install>/.1bcoder/proc/`:
  | `extract-list` | Convert first bullet/numbered list in reply to comma-separated line | one-shot |
  | `grounding-check` | Score identifiers against `map.txt`, warn if <50% | persistent |
  | `collect-files` | Accumulate filenames to `.1bcoder/collected-files.txt` | persistent |
- | `md` | Render last reply as formatted Markdown in terminal (`pip install rich`) | one-shot |
+ | `md` | Render last reply as formatted Markdown in terminal | one-shot |
  | `mdx` | Render last reply as Markdown + LaTeX (KaTeX) + Mermaid diagrams in browser | one-shot |
+ | `ctx_cut` | Auto `/ctx cut` when context exceeds threshold (default 90%) | persistent |
+ | `rude_words` | Alert if reply contains profanity (`ua` arg adds Ukrainian list) | persistent |
+ | `secret_check` | Alert if reply contains sensitive names (google, anthropic…) | persistent |
+ | `sql_readonly_guard` | Alert (proc) or block (hook) on write SQL statements | both |
+
+ **Guard usage examples:**
+ ```
+ /proc on ctx_cut 80 # auto cut at 80%
+ /proc on rude_words ua # profanity check + Ukrainian
+ /proc on secret_check client=acme # + custom keyword
+ /hook before run sql_readonly_guard.py # block /run with DELETE/DROP/UPDATE
+ ```

  See `/doc PROC` for the full protocol, built-in processor reference, and guide to writing your own.
@@ -1,16 +1,3 @@
- Metadata-Version: 2.4
- Name: 1bcoder
- Version: 0.1.2
- Summary: AI coding assistant agent for 1B–7B local models (Ollama, LMStudio, llama.cpp). Terminal REPL with file editing, project map, agents, scripts, and parallel multi-model queries.
- Requires-Python: >=3.10
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: requests>=2.28
- Requires-Dist: pyreadline3>=3.4; sys_platform == "win32"
- Requires-Dist: tqdm>=4.64
- Requires-Dist: rich>=13.0
- Dynamic: license-file
-
  # 1bcoder

  AI coding assistant agent for 1B–7B models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
@@ -90,7 +77,8 @@ Tasks that require the model to decide *what to look at* — refactoring across
  - **Aliases** — define command shortcuts with `/alias /name = expansion` (supports `{{args}}`); persisted in `aliases.txt`; loaded from global then project directory at startup and survive `/clear`
  - **Backup/restore** — `/bkup save` rotates existing backups (`file.bkup` → `file.bkup(1)`, `file.bkup(2)`…) so no snapshot is ever overwritten; `/bkup restore` always restores the latest
  - **MCP support** — connect external tool servers (filesystem, web, git, database, browser…) via the Model Context Protocol
- - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`, with saved profiles
+ - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`; control the context sent (`--ctx`/`--last`/`--no-ctx`) and route replies back into the main context (`ctx` output) for sub-agent workflows
+ - **Command hooks** — `/hook before|after <cmd> <script>` runs a script before or after edit/patch/fix/insert; a `before` hook cancels the command if the script is missing; `{{file}}` and `{{range}}` are injected automatically
  - Switch model or host at runtime without restarting (`/model gemma3:1b`, `/host openai://localhost:1234`)
  - **Model parameters** — `/param temperature 0.2`, `/param enable_thinking false` — sent with every request, auto-cast to the correct type
  - **Multi-provider** — connect to Ollama, LMStudio, or LiteLLM using the `ollama://` / `openai://` URL scheme; a plain host defaults to Ollama
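The `/bkup save` rotation described in the features above can be sketched in a few lines of Python (an illustrative sketch of the scheme only, not 1bcoder's actual implementation; the function name `bkup_save` is made up):

```python
import os
import shutil

def bkup_save(path: str) -> str:
    """Rotate backups so no snapshot is ever overwritten:
    file.bkup -> file.bkup(1), file.bkup(1) -> file.bkup(2), ...
    then snapshot the current file to file.bkup."""
    base = path + ".bkup"
    if os.path.exists(base):
        n = 1
        while os.path.exists(f"{base}({n})"):  # find the first free slot
            n += 1
        for i in range(n, 1, -1):              # shift older snapshots up by one
            os.replace(f"{base}({i - 1})", f"{base}({i})")
        os.replace(base, f"{base}(1)")
    shutil.copy2(path, base)                   # newest snapshot is always file.bkup
    return base
```

The key property, as the feature list states, is that `/bkup restore` can always take the plain `file.bkup`, which is guaranteed to be the latest snapshot.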
@@ -105,7 +93,7 @@ Tasks that require the model to decide *what to look at* — refactoring across
  pip install 1bcoder
  ```

- On first launch, default agents, procs, and scripts are copied to `~/.1bcoder/` automatically.
+ On first launch, default agents, procs, scripts, profiles, and aliases are copied to `~/.1bcoder/` automatically. On upgrade (`pip install --upgrade 1bcoder`), new entries in `aliases.txt` and `profiles.txt` are merged in without overwriting your customisations.

  ### Option 2 — Clone and install locally

@@ -675,7 +663,7 @@ Lines starting with `[v]` are already done and skipped. Lines starting with `#`
  | `/script show` | Display steps of the current script |
  | `/script add <command>` | Append a step to the current script |
  | `/script clear` | Wipe current script completely |
- | `/script reset` | Unmark all done steps |
+ | `/script reset` | Unmark all done steps (also happens automatically when a script runs to completion) |
  | `/script reapply [key=value ...]` | Reset all done steps, then apply automatically; prompts for any NaN `{{variables}}` before running |
  | `/script refresh` | Reload script from disk and show contents |
  | `/script apply [file] [key=value ...]` | Run steps one by one (Y/n/q per step) |
@@ -725,6 +713,7 @@ Session variables store named values that are substituted as `{{name}}` in any c
  /var set name =MyService    literal value
  /var def port db host       declare multiple NaN variables (skips if already set)
  /var get                    list all variables (NaN = unset)
+ /var get port               print value of a single variable (useful with ->)
  /var del port               remove a variable
  ```

@@ -770,9 +759,12 @@ Any `{{key}}` found but not yet set is registered as NaN — `/script reapply` w

  ---

- ### Output capture (`->` and `$`)
+ ### Output capture (`->`, `$` and `~`)

- Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. The special token `$` expands to the last captured output anywhere in a command or message.
+ Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. Two special tokens expand anywhere in a command or message:
+
+ - `$` — last captured output (last AI reply or tool result)
+ - `~` — last user input (last message or command you typed)

  ```
  /map keyword extract auth.py -> keywords   # capture tool output into variable
@@ -788,6 +780,15 @@ summarize this for me -> myplan   # capture LLM reply
  /var set port result   # also works: grab key from proc output
  ```

+ **`~` — repeat or redirect the last question:**
+ ```
+ how does this method work?   # ask main model
+ /small ~                     # same question → small model
+ /ask ~                       # same question → agent mode
+ /explain "$"                 # ask small model to explain the reply
+ explain: $                   # ask main model to explain its own reply
+ ```
+
  `->` stores the full text (including ANSI-stripped terminal output) and also updates `$` for immediate reuse. Variables captured with `->` appear in `/var get` like any other session variable.

  ---
@@ -867,24 +868,34 @@ Connect external tool servers to give the AI access to filesystems, databases, w
  /mcp disconnect fs
  ```

- See [MCP.md](MCP.md) for a full list of ready-to-use servers.
+ See `/doc MCP` for a full list of ready-to-use servers.

  ---

  ### Parallel queries

- Send prompts to multiple models at the same time. Each answer is saved to its own file.
+ Send a prompt to multiple models at the same time.

  ```
- /parallel ["prompt"] [profile <name>] [host:port|model|file ...]
+ /parallel ["prompt"] [--ctx|--last|--no-ctx] [profile <name>] [host:port|model|(file or ctx) ...]
  ```

+ | Flag | Behaviour |
+ |---|---|
+ | `--ctx` *(default)* | Full conversation context is sent to every worker |
+ | `--last` | Only the last user message is sent (saves tokens for small models) |
+ | `--no-ctx` | No context — prompt only (fastest, zero leakage) |
+
+ Workers write results to a file **or** inject them back into the main context:
+
  ```
  /parallel "review this for bugs" \
-   localhost:11434|llama3.2:1b|answers/llm1.txt \
-   localhost:11435|qwen2.5:1b|answers/llm2.txt
+   localhost:11434|llama3.2:1b|ans/llm1.txt \
+   localhost:11435|qwen2.5:1b|ctx
  ```

+ Using `ctx` as the output target injects the worker's reply into the main conversation — the next AI turn will see it.
+
  **Profiles** — save a set of workers for reuse:

  ```
@@ -902,6 +913,60 @@ review: localhost:11434|ministral3:3b|ans/review.txt localhost:11435|cogito:3b|a
  fast: localhost:11434|qwen2.5-coder:0.6b|ans/q.txt   # quick sanity check
  ```

+ **Sub-agent profiles** — built-in profiles that return answers directly to the main context (`ctx`):
+
+ ```
+ small:    localhost:11434|qwen3:0.6b|ctx
+ explain:  localhost:11434|gemma3:1b|ctx
+ thinking: localhost:11434|lfm2.5-thinking:1.2b|ctx
+ short:    localhost:11434|llama3.2:1b|ctx
+ ```
+
+ These are aliased as `/small`, `/explain`, `/thinking`, `/short` — use them like sub-agents:
+
+ ```
+ /small "what does this function return?" --no-ctx   # ask a tiny model, no context bleed
+ /explain "$"                                        # ask gemma to explain the last reply
+ /small ~                                            # repeat the last question to a small model
+ ```
+
+ `~` expands to the last message you typed; `$` expands to the last AI reply — combine them to build sub-agent pipelines without copy-pasting.
+
+ ---
+
+ ### Hooks (`/hook`)
+
+ Run a script automatically **before** or **after** a command. Useful for backups before edits, linting after patches, or any pre/post workflow step.
+
+ ```
+ /hook before <cmd> <script>   # run script before every <cmd>
+ /hook after <cmd> <script>    # run script after every <cmd>
+ /hook list                    # show active hooks
+ /hook clear <cmd>             # remove hooks for <cmd>
+ /hook clear                   # remove all hooks
+ ```
+
+ `<cmd>` is the command name without the slash: `edit`, `patch`, `fix`, `insert`, `run`.
+
+ **Two script types:**
+ - `.txt` — a 1bcoder script (sequence of commands). `{{file}}` and `{{range}}` are injected as session variables.
+ - `.py` — a Python guard subprocess. Receives the trigger content on `stdin` and outputs `BLOCK:`/`ALERT:`/`ACTION:` lines.
+
+ **Auto-injected for `.txt` scripts:**
+
+ | Variable | Value |
+ |---|---|
+ | `{{file}}` | file argument of the triggering command |
+ | `{{range}}` | line range (if specified), e.g. `10-25` |
+
+ **Examples:**
+ ```
+ /hook before edit /bkup {{file}}          # backup before every edit (.txt script)
+ /hook before run sql_readonly_guard.py    # block dangerous SQL (.py guard)
+ ```
+
+ If the `.txt` script is missing, a `before` hook cancels the command. A `.py` guard cancels the command only if it prints `BLOCK:`. Step errors inside `.txt` scripts do not cancel the command.
+
  ---

  ### Prompt templates
@@ -930,9 +995,9 @@ Run a Python script against the last LLM reply. Useful for extracting filenames,
  /proc new my-proc   # create a new processor from template
  ```

- **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · exit 1 = failure.
+ **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · `ALERT: message` = warning printed, continues · `BLOCK: reason` = cancels the triggering command (hook mode only) · exit 1 = failure.
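The protocol above is easy to satisfy from plain Python. A minimal hypothetical processor (illustrative only, not one of the bundled ones — the `process` helper and the `code_blocks` key are made up) that counts fenced code blocks in the reply and reports the count as a `key=value` param:

```python
#!/usr/bin/env python3
"""Hypothetical processor following the protocol above: stdin = last LLM
reply; plain stdout lines = result; key=value lines = extracted params."""
import re
import sys

def process(reply: str) -> list[str]:
    # Find fenced code blocks (``` ... ```) in the reply.
    blocks = re.findall(r"```.*?```", reply, flags=re.DOTALL)
    if not blocks:
        return ["ALERT: no code blocks found in reply"]  # warning, continues
    return [
        f"found {len(blocks)} code block(s)",  # plain result line
        f"code_blocks={len(blocks)}",          # extracted param
    ]

if __name__ == "__main__":
    print("\n".join(process(sys.stdin.read())))
```

Saved under `~/.1bcoder/proc/`, a processor like this would make `code_blocks` available for capture with `/var set`.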
 
- Built-in processors in `<install>/.1bcoder/proc/`:
+ Built-in processors in `~/.1bcoder/proc/`:

  | Processor | Purpose | Best mode |
  |---|---|---|
@@ -941,8 +1006,20 @@ Built-in processors in `<install>/.1bcoder/proc/`:
  | `extract-list` | Convert first bullet/numbered list in reply to comma-separated line | one-shot |
  | `grounding-check` | Score identifiers against `map.txt`, warn if <50% | persistent |
  | `collect-files` | Accumulate filenames to `.1bcoder/collected-files.txt` | persistent |
- | `md` | Render last reply as formatted Markdown in terminal (`pip install rich`) | one-shot |
+ | `md` | Render last reply as formatted Markdown in terminal | one-shot |
  | `mdx` | Render last reply as Markdown + LaTeX (KaTeX) + Mermaid diagrams in browser | one-shot |
+ | `ctx_cut` | Auto `/ctx cut` when context exceeds threshold (default 90%) | persistent |
+ | `rude_words` | Alert if reply contains profanity (`ua` arg adds Ukrainian list) | persistent |
+ | `secret_check` | Alert if reply contains sensitive names (google, anthropic…) | persistent |
+ | `sql_readonly_guard` | Alert (proc) or block (hook) on write SQL statements | both |
+
+ **Guard usage examples:**
+ ```
+ /proc on ctx_cut 80                       # auto cut at 80%
+ /proc on rude_words ua                    # profanity check + Ukrainian list
+ /proc on secret_check client=acme         # + custom keyword
+ /hook before run sql_readonly_guard.py    # block /run with DELETE/DROP/UPDATE
+ ```

  See `/doc PROC` for the full protocol, built-in processor reference, and guide to writing your own.