1bcoder 0.1.1__tar.gz → 0.1.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (78)
  1. {1bcoder-0.1.1 → 1bcoder-0.1.3/1bcoder.egg-info}/PKG-INFO +140 -38
  2. {1bcoder-0.1.1 → 1bcoder-0.1.3}/1bcoder.egg-info/SOURCES.txt +4 -0
  3. 1bcoder-0.1.1/README.md → 1bcoder-0.1.3/PKG-INFO +151 -37
  4. 1bcoder-0.1.1/1bcoder.egg-info/PKG-INFO → 1bcoder-0.1.3/README.md +136 -49
  5. 1bcoder-0.1.3/_bcoder_data/aliases.txt +13 -0
  6. 1bcoder-0.1.3/_bcoder_data/doc/PROC.md +235 -0
  7. 1bcoder-0.1.3/_bcoder_data/proc/ctx_cut.py +19 -0
  8. 1bcoder-0.1.3/_bcoder_data/proc/rude_words.py +34 -0
  9. 1bcoder-0.1.3/_bcoder_data/proc/secret_check.py +29 -0
  10. 1bcoder-0.1.3/_bcoder_data/proc/sql_readonly_guard.py +40 -0
  11. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/profiles.txt +6 -0
  12. {1bcoder-0.1.1 → 1bcoder-0.1.3}/chat.py +579 -123
  13. {1bcoder-0.1.1 → 1bcoder-0.1.3}/pyproject.toml +6 -1
  14. 1bcoder-0.1.1/_bcoder_data/aliases.txt +0 -8
  15. 1bcoder-0.1.1/_bcoder_data/doc/PROC.md +0 -150
  16. {1bcoder-0.1.1 → 1bcoder-0.1.3}/1bcoder.egg-info/dependency_links.txt +0 -0
  17. {1bcoder-0.1.1 → 1bcoder-0.1.3}/1bcoder.egg-info/entry_points.txt +0 -0
  18. {1bcoder-0.1.1 → 1bcoder-0.1.3}/1bcoder.egg-info/requires.txt +0 -0
  19. {1bcoder-0.1.1 → 1bcoder-0.1.3}/1bcoder.egg-info/top_level.txt +0 -0
  20. {1bcoder-0.1.1 → 1bcoder-0.1.3}/LICENSE +0 -0
  21. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/__init__.py +0 -0
  22. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/agents/advance.txt +0 -0
  23. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/agents/ask.txt +0 -0
  24. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/agents/fill.txt +0 -0
  25. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/agents/planning.txt +0 -0
  26. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/agents/sqlite.txt +0 -0
  27. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/doc/MCP.md +0 -0
  28. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/doc/PARAM.md +0 -0
  29. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/map.txt +0 -0
  30. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/add-save.py +0 -0
  31. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/collect-files.py +0 -0
  32. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-code.py +0 -0
  33. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-files.py +0 -0
  34. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/extract-list.py +0 -0
  35. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/grounding-check.py +0 -0
  36. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/md.py +0 -0
  37. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/mdx.py +0 -0
  38. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/proc/regexp-extract.py +0 -0
  39. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/prompts/analysis.txt +0 -0
  40. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/prompts/sumarise.txt +0 -0
  41. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/prompts.txt +0 -0
  42. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/AddFunction.txt +0 -0
  43. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/AskProject.txt +0 -0
  44. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/CheckRequirements.txt +0 -0
  45. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerMySQL.txt +0 -0
  46. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerNginx.txt +0 -0
  47. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerPython.txt +0 -0
  48. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/DockerStack.txt +0 -0
  49. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/DuckDuckGoInstant.txt +0 -0
  50. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/EnvTemplate.txt +0 -0
  51. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/Explain.txt +0 -0
  52. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/ExploreProjectStructure.txt +0 -0
  53. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/GitIgnorePython.txt +0 -0
  54. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/MySQLDump.txt +0 -0
  55. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/NewScript.txt +0 -0
  56. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/PipFreeze.txt +0 -0
  57. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/PyPI.txt +0 -0
  58. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/Refactor.txt +0 -0
  59. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/RunAndFix.txt +0 -0
  60. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/SQLiteSchema.txt +0 -0
  61. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/WikiPage.txt +0 -0
  62. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/WikiSearch.txt +0 -0
  63. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/parallel_call.txt +0 -0
  64. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/content/create-regular-content.txt +0 -0
  65. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/content/plan.txt +0 -0
  66. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/personal/test/collect-data-from-test-environment.txt +0 -0
  67. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/plan.txt +0 -0
  68. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/remote/create-content-on-remote-server.txt +0 -0
  69. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/set_ctx.txt +0 -0
  70. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-map-worker.txt +0 -0
  71. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-search-worker.txt +0 -0
  72. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-summarize.txt +0 -0
  73. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/team-tree-worker.txt +0 -0
  74. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/scripts/test.txt +0 -0
  75. {1bcoder-0.1.1 → 1bcoder-0.1.3}/_bcoder_data/teams/code-analysis.yaml +0 -0
  76. {1bcoder-0.1.1 → 1bcoder-0.1.3}/map_index.py +0 -0
  77. {1bcoder-0.1.1 → 1bcoder-0.1.3}/map_query.py +0 -0
  78. {1bcoder-0.1.1 → 1bcoder-0.1.3}/setup.cfg +0 -0
@@ -1,6 +1,9 @@
  Metadata-Version: 2.4
  Name: 1bcoder
- Version: 0.1.1
+ Version: 0.1.3
+ Summary: AI coding assistant agent for 1B–7B local models (Ollama, LMStudio, llama.cpp). Terminal REPL with file editing, project map, agents, scripts, and parallel multi-model queries.
+ Project-URL: Homepage, https://github.com/szholobetsky/1bcoder
+ Project-URL: Repository, https://github.com/szholobetsky/1bcoder
  Requires-Python: >=3.10
  Description-Content-Type: text/markdown
  License-File: LICENSE
@@ -12,15 +15,7 @@ Dynamic: license-file
 
  # 1bcoder
 
- AI-assisted code editor designed for small (1B parameter) language models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
-
- ---
-
- **(c) 2026 Stanislav Zholobetskyi**
- Institute for Information Recording, National Academy of Sciences of Ukraine, Kyiv
-
- *Created as part of PhD research on the topic:
- «Intelligent Technology for Software Development and Maintenance Support»*
+ AI coding assistant agent for 1B–7B models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
 
  ---
 
@@ -93,11 +88,12 @@ Tasks that require the model to decide *what to look at* — refactoring across
  - **`/plan <goal>`** — planning agent: researches the project, writes a natural-language step-by-step plan to `plan.txt`; run `/agent <task> plan plan.txt` to execute it step by step
  - **`/fill`** — fill agent: reads NaN session variables, scans project for `.var` files and config files, sets each value automatically
  - **Session variables** — `{{name}}` placeholders substituted in any command; save/load from `.var` files for offline reuse without loading files into context
- - **Project config** — `/config save` persists session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml`; auto-loaded on startup when `auto: true`
+ - **Project config** — `/config save` persists session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml`; `/config save global` saves to `~/.1bcoder/config.yml`; on startup, the first config with `auto: true` (local → global) is applied automatically
  - **Aliases** — define command shortcuts with `/alias /name = expansion` (supports `{{args}}`); persisted in `aliases.txt`; loaded from global then project directory at startup and survive `/clear`
  - **Backup/restore** — `/bkup save` rotates existing backups (`file.bkup` → `file.bkup(1)`, `file.bkup(2)`…) so no snapshot is ever overwritten; `/bkup restore` always restores the latest
  - **MCP support** — connect external tool servers (filesystem, web, git, database, browser…) via the Model Context Protocol
- - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`, with saved profiles
+ - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`; control context sent (`--ctx`/`--last`/`--no-ctx`) and route replies back into main context (`ctx` output) for sub-agent workflows
+ - **Command hooks** — `/hook before|after <cmd> <script>` runs a script before or after edit/patch/fix/insert; `before` hook cancels the command if the script is missing; `{{file}}` and `{{range}}` injected automatically
  - Switch model or host at runtime without restarting (`/model gemma3:1b`, `/host openai://localhost:1234`)
  - **Model parameters** — `/param temperature 0.2`, `/param enable_thinking false` — sent with every request, auto-cast to correct type
  - **Multi-provider** — connect to Ollama, LMStudio, or LiteLLM using `ollama://` / `openai://` URL scheme; plain host defaults to Ollama
@@ -112,7 +108,7 @@ Tasks that require the model to decide *what to look at* — refactoring across
  pip install 1bcoder
  ```
 
- On first launch, default agents, procs, and scripts are copied to `~/.1bcoder/` automatically.
+ On first launch, default agents, procs, scripts, profiles, and aliases are copied to `~/.1bcoder/` automatically. On upgrade (`pip install --upgrade 1bcoder`), new entries in `aliases.txt` and `profiles.txt` are merged in without overwriting your customisations.
 
  ### Option 2 — Clone and install locally
 
@@ -184,15 +180,15 @@ pip install -e .
  python chat.py
  ```
 
- On startup a numbered list of available Ollama models is shown — type the number to select one. Use `--model` to skip the prompt.
+ On startup, 1bcoder checks for a config with `auto: true` (local `.1bcoder/config.yml` first, then `~/.1bcoder/config.yml`) and connects to the host and model stored there. If no config is found, it connects to local Ollama and prompts for a model. Use `--model` or `--host` to override.
 
  ### CLI options
 
  ```
  1bcoder [--host URL] [--model NAME] [--init] [--scriptapply SCRIPT] [--param KEY=VALUE]
 
- --host URL            Host URL — supports ollama:// and openai:// schemes (default: http://localhost:11434)
- --model NAME          Skip model selection, use this model directly
+ --host URL            Host URL — supports ollama:// and openai:// schemes (default: from config or http://localhost:11434)
+ --model NAME          Model to use; overrides config (shows list if not available on host)
  --init                Create .1bcoder/ scaffold in the current directory
  --scriptapply SCRIPT  Run a script file non-interactively, then exit
  --param KEY=VALUE     Plan parameter substitution (repeatable)
@@ -682,7 +678,7 @@ Lines starting with `[v]` are already done and skipped. Lines starting with `#`
  | `/script show` | Display steps of the current script |
  | `/script add <command>` | Append a step to the current script |
  | `/script clear` | Wipe current script completely |
- | `/script reset` | Unmark all done steps |
+ | `/script reset` | Unmark all done steps (also happens automatically when a script runs to completion) |
  | `/script reapply [key=value ...]` | Reset all done steps then apply automatically; prompts for any NaN `{{variables}}` before running |
  | `/script refresh` | Reload script from disk and show contents |
  | `/script apply [file] [key=value ...]` | Run steps one by one (Y/n/q per step) |
@@ -732,6 +728,7 @@ Session variables store named values that are substituted as `{{name}}` in any c
  /var set name =MyService   literal value
  /var def port db host      declare multiple NaN variables (skips if already set)
  /var get                   list all variables (NaN = unset)
+ /var get port              print value of a single variable (useful with ->)
  /var del port              remove a variable
  ```
 
@@ -777,9 +774,12 @@ Any `{{key}}` found but not yet set is registered as NaN — `/script reapply` w
 
  ---
 
- ### Output capture (`->` and `$`)
+ ### Output capture (`->`, `$` and `~`)
 
- Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. The special token `$` expands to the last captured output anywhere in a command or message.
+ Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. Two special tokens expand anywhere in a command or message:
+
+ - `$` — last captured output (last AI reply or tool result)
+ - `~` — last user input (last message or command you typed)
 
  ```
  /map keyword extract auth.py -> keywords   # capture tool output into variable
@@ -795,23 +795,43 @@ summarize this for me -> myplan # capture LLM reply
  /var set port result       # also works: grab key from proc output
  ```
 
+ **`~` — repeat or redirect the last question:**
+ ```
+ how does this method work?   # ask main model
+ /small ~                     # same question → small model
+ /ask ~                       # same question → agent mode
+ /explain "$"                 # ask small model to explain the reply
+ explain: $                   # ask main model to explain its own reply
+ ```
+
  `->` stores the full text (including ANSI-stripped terminal output) and also updates `$` for immediate reuse. Variables captured with `->` appear in `/var get` like any other session variable.
 
  ---
 
  ### Project config (`/config`)
 
- Save and restore session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml` in the current working directory. Useful for project-specific presets that are too large to fit in model context.
+ Save and restore session state (host, model, ctx, params, vars, procs). Two config locations are supported:
+
+ - **Local** — `.1bcoder/config.yml` in the current working directory (project-specific)
+ - **Global** — `~/.1bcoder/config.yml` (user-wide default for all projects)
+
+ **Startup priority:** on launch without `--host`/`--model`, 1bcoder checks local config first, then global. The first one with `auto: true` wins. If neither has `auto: true`, it connects to local Ollama and prompts for a model.
 
  ```
- /config save                 # save all current state
- /config save host            # save only host
- /config save model           # save only model
- /config save vars            # save only vars
- /config load                 # restore from config.yml
- /config show                 # print config.yml contents
- /config auto on              # auto-load on every startup in this directory
- /config auto off             # disable auto-load
+ /config save                 # save all current state to local config
+ /config save global          # save all current state to global config
+ /config save host            # save only host to local config
+ /config save global host     # save only host to global config
+ /config save model           # save only model to local config
+ /config save global model    # save only model to global config
+ /config save vars            # save only vars to local config
+ /config load                 # restore from local config
+ /config load global          # restore from global config
+ /config show                 # print local config contents
+ /config show global          # print global config contents
+ /config auto on              # enable auto-load in local config
+ /config auto on global       # enable auto-load in global config
+ /config auto off             # disable auto-load in local config
  ```
 
  **Selective delete:**
@@ -824,7 +844,7 @@ Save and restore session state (host, model, ctx, params, vars, procs) to `.1bco
  /config del proc collect-files   # remove one proc
  ```
 
- **Config file format** (`.1bcoder/config.yml`):
+ **Config file format** (`.1bcoder/config.yml` or `~/.1bcoder/config.yml`):
  ```yaml
  auto: true
  host: ollama://localhost:11434
@@ -840,7 +860,7 @@ procs:
  - collect-files output.txt
  ```
 
- When `auto: true`, the config is applied automatically after the startup banner — host, model, ctx, params, vars, and procs are restored without any command.
+ When `auto: true`, host and model are used at startup to connect; ctx, params, vars, and procs are also restored.
 
  ---
 
@@ -863,24 +883,34 @@ Connect external tool servers to give the AI access to filesystems, databases, w
  /mcp disconnect fs
  ```
 
- See [MCP.md](MCP.md) for a full list of ready-to-use servers.
+ See `/doc MCP` for a full list of ready-to-use servers.
 
  ---
 
  ### Parallel queries
 
- Send prompts to multiple models at the same time. Each answer is saved to its own file.
+ Send a prompt to multiple models at the same time.
 
  ```
- /parallel ["prompt"] [profile <name>] [host:port|model|file ...]
+ /parallel ["prompt"] [--ctx|--last|--no-ctx] [profile <name>] [host:port|model|(file or ctx) ...]
  ```
 
+ | Flag | Behaviour |
+ |---|---|
+ | `--ctx` *(default)* | Full conversation context is sent to every worker |
+ | `--last` | Only the last user message is sent (saves tokens for small models) |
+ | `--no-ctx` | No context — prompt only (fastest, zero leakage) |
+
+ Workers write results to a file **or** inject them back into the main context:
+
  ```
  /parallel "review this for bugs" \
-   localhost:11434|llama3.2:1b|answers/llm1.txt \
-   localhost:11435|qwen2.5:1b|answers/llm2.txt
+   localhost:11434|llama3.2:1b|ans/llm1.txt \
+   localhost:11435|qwen2.5:1b|ctx
  ```
 
+ Using `ctx` as the output target injects the worker's reply into the main conversation — the next AI turn will see it.
+
  **Profiles** — save a set of workers for reuse:
 
  ```
@@ -898,6 +928,60 @@ review: localhost:11434|ministral3:3b|ans/review.txt localhost:11435|cogito:3b|a
  fast: localhost:11434|qwen2.5-coder:0.6b|ans/q.txt   # quick sanity check
  ```
 
+ **Sub-agent profiles** — built-in profiles that return answers directly to the main context (`ctx`):
+
+ ```
+ small: localhost:11434|qwen3:0.6b|ctx
+ explain: localhost:11434|gemma3:1b|ctx
+ thinking: localhost:11434|lfm2.5-thinking:1.2b|ctx
+ short: localhost:11434|llama3.2:1b|ctx
+ ```
+
+ These are aliased as `/small`, `/explain`, `/thinking`, `/short` — use them like sub-agents:
+
+ ```
+ /small "what does this function return?" --no-ctx   # ask tiny model, no context bleed
+ /explain "$"                                        # ask gemma to explain last reply
+ /small ~                                            # repeat last question to a small model
+ ```
+
+ `~` expands to the last message you typed; `$` expands to the last AI reply — combine them to build sub-agent pipelines without copy-pasting.
+
+ ---
+
+ ### Hooks (`/hook`)
+
+ Run a script automatically **before** or **after** a command. Useful for backups before edits, linting after patches, or any pre/post workflow step.
+
+ ```
+ /hook before <cmd> <script>   # run script before every <cmd>
+ /hook after <cmd> <script>    # run script after every <cmd>
+ /hook list                    # show active hooks
+ /hook clear <cmd>             # remove hooks for <cmd>
+ /hook clear                   # remove all hooks
+ ```
+
+ `<cmd>` is the command name without the slash: `edit`, `patch`, `fix`, `insert`, `run`.
+
+ **Two script types:**
+ - `.txt` — 1bcoder script (sequence of commands). `{{file}}` and `{{range}}` are injected as session variables.
+ - `.py` — Python guard subprocess. Receives trigger content on `stdin`, outputs `BLOCK:`/`ALERT:`/`ACTION:` lines.
+
+ **Auto-injected for `.txt` scripts:**
+
+ | Variable | Value |
+ |---|---|
+ | `{{file}}` | file argument of the triggering command |
+ | `{{range}}` | line range (if specified), e.g. `10-25` |
+
+ **Examples:**
+ ```
+ /hook before edit /bkup {{file}}          # backup before every edit (.txt script)
+ /hook before run sql_readonly_guard.py    # block dangerous SQL (.py guard)
+ ```
+
+ A missing `.txt` script cancels a `before` hook. A `.py` guard cancels only if it prints `BLOCK:`. Step errors inside `.txt` scripts do not cancel the command.
+
  ---
 
  ### Prompt templates
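The `.py` guard protocol described in this release's hooks section (trigger content on `stdin`, `BLOCK:`/`ALERT:` lines on `stdout`) can be sketched as follows. This is an illustrative guard, not the shipped `sql_readonly_guard.py`; the statement list and messages are made up for the example:

```python
# Hypothetical hook guard following the documented proc/hook protocol:
# it reads the trigger content from stdin and prints "BLOCK: reason"
# to cancel the command, or "ALERT: message" to warn and continue.
import re
import sys

# Illustrative write-statement list; the real shipped guard may differ.
WRITE_STMTS = ("insert", "update", "delete", "drop", "alter", "truncate")


def check(text: str) -> list[str]:
    """Return protocol lines (BLOCK:/ALERT:) for the given trigger content."""
    out = []
    for stmt in WRITE_STMTS:
        if re.search(rf"\b{stmt}\b", text, re.IGNORECASE):
            out.append(f"BLOCK: write statement '{stmt.upper()}' detected")
    if not out and "select *" in text.lower():
        out.append("ALERT: unbounded SELECT *; consider an explicit column list")
    return out


if __name__ == "__main__":
    # Hook mode: 1bcoder pipes the triggering command's content in here.
    for line in check(sys.stdin.read()):
        print(line)
```

Registered with `/hook before run <script>.py`, a guard like this cancels the `/run` only when it emits a `BLOCK:` line; plain output and `ALERT:` lines leave the command running.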
@@ -926,9 +1010,9 @@ Run a Python script against the last LLM reply. Useful for extracting filenames,
  /proc new my-proc   # create a new processor from template
  ```
 
- **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · exit 1 = failure.
+ **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · `ALERT: message` = warning printed, continues · `BLOCK: reason` = cancels the triggering command (hook mode only) · exit 1 = failure.
 
- Built-in processors in `<install>/.1bcoder/proc/`:
+ Built-in processors in `~/.1bcoder/proc/`:
 
  | Processor | Purpose | Best mode |
  |---|---|---|
@@ -937,8 +1021,20 @@ Built-in processors in `<install>/.1bcoder/proc/`:
  | `extract-list` | Convert first bullet/numbered list in reply to comma-separated line | one-shot |
  | `grounding-check` | Score identifiers against `map.txt`, warn if <50% | persistent |
  | `collect-files` | Accumulate filenames to `.1bcoder/collected-files.txt` | persistent |
- | `md` | Render last reply as formatted Markdown in terminal (`pip install rich`) | one-shot |
+ | `md` | Render last reply as formatted Markdown in terminal | one-shot |
  | `mdx` | Render last reply as Markdown + LaTeX (KaTeX) + Mermaid diagrams in browser | one-shot |
+ | `ctx_cut` | Auto `/ctx cut` when context exceeds threshold (default 90%) | persistent |
+ | `rude_words` | Alert if reply contains profanity (`ua` arg adds Ukrainian list) | persistent |
+ | `secret_check` | Alert if reply contains sensitive names (google, anthropic…) | persistent |
+ | `sql_readonly_guard` | Alert (proc) or block (hook) on write SQL statements | both |
+
+ **Guard usage examples:**
+ ```
+ /proc on ctx_cut 80                       # auto cut at 80%
+ /proc on rude_words ua                    # profanity check + Ukrainian
+ /proc on secret_check client=acme         # + custom keyword
+ /hook before run sql_readonly_guard.py    # block /run with DELETE/DROP/UPDATE
+ ```
 
  See `/doc PROC` for the full protocol, built-in processor reference, and guide to writing your own.
 
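The one-shot side of the same protocol (`stdin` = last LLM reply, plain `stdout` = result, `key=value` lines = extracted session variables) can be sketched like this. The port-extraction logic and variable name are hypothetical, not one of the shipped processors:

```python
# Minimal one-shot processor sketch following the documented protocol:
# stdin carries the last LLM reply; any "key=value" line on stdout is
# picked up as a session variable, other lines are shown as the result.
import re
import sys


def process(reply: str) -> list[str]:
    """Extract a port number from the reply as a key=value protocol line."""
    out = []
    m = re.search(r"port\s*[:=]\s*(\d+)", reply, re.IGNORECASE)
    if m:
        out.append(f"port={m.group(1)}")  # becomes session variable {{port}}
    else:
        out.append("no port found in reply")
        # a real processor would exit(1) here to signal failure
    return out


if __name__ == "__main__":
    print("\n".join(process(sys.stdin.read())))
```

Run once with `/proc <name>` after a reply, the extracted `{{port}}` would then be usable in any later command, the same way the built-in extractors populate variables.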
@@ -1187,3 +1283,9 @@ For human input, the corrected command is shown with `[fix?]` and you are asked
  | text-generation-webui | Linux / Win | `--api` flag | 5000 | `openai://` | oobabooga UI, needs `--api` flag to expose OpenAI endpoint |
  | TabbyAPI | Linux / Win | built-in | 5000 | `openai://` | Focused on exl2/GPTQ quantized models, low VRAM |
  | vLLM | Linux | built-in | 8000 | `openai://` | Production server, high throughput, requires significant VRAM |
+
+ ---
+
+ **(c) 2026 Stanislav Zholobetskyi**
+ Institute for Information Recording, National Academy of Sciences of Ukraine, Kyiv
+ *PhD research: «Intelligent Technology for Software Development and Maintenance Support»*
@@ -25,6 +25,7 @@ _bcoder_data/doc/PARAM.md
  _bcoder_data/doc/PROC.md
  _bcoder_data/proc/add-save.py
  _bcoder_data/proc/collect-files.py
+ _bcoder_data/proc/ctx_cut.py
  _bcoder_data/proc/extract-code.py
  _bcoder_data/proc/extract-files.py
  _bcoder_data/proc/extract-list.py
@@ -32,6 +33,9 @@ _bcoder_data/proc/grounding-check.py
  _bcoder_data/proc/md.py
  _bcoder_data/proc/mdx.py
  _bcoder_data/proc/regexp-extract.py
+ _bcoder_data/proc/rude_words.py
+ _bcoder_data/proc/secret_check.py
+ _bcoder_data/proc/sql_readonly_guard.py
  _bcoder_data/prompts/analysis.txt
  _bcoder_data/prompts/sumarise.txt
  _bcoder_data/scripts/AddFunction.txt
@@ -1,14 +1,21 @@
- # 1bcoder
-
- AI-assisted code editor designed for small (1B parameter) language models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
-
- ---
+ Metadata-Version: 2.4
+ Name: 1bcoder
+ Version: 0.1.3
+ Summary: AI coding assistant agent for 1B–7B local models (Ollama, LMStudio, llama.cpp). Terminal REPL with file editing, project map, agents, scripts, and parallel multi-model queries.
+ Project-URL: Homepage, https://github.com/szholobetsky/1bcoder
+ Project-URL: Repository, https://github.com/szholobetsky/1bcoder
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: requests>=2.28
+ Requires-Dist: pyreadline3>=3.4; sys_platform == "win32"
+ Requires-Dist: tqdm>=4.64
+ Requires-Dist: rich>=13.0
+ Dynamic: license-file
 
- **(c) 2026 Stanislav Zholobetskyi**
- Institute for Information Recording, National Academy of Sciences of Ukraine, Kyiv
+ # 1bcoder
 
- *Created as part of PhD research on the topic:
- «Intelligent Technology for Software Development and Maintenance Support»*
+ AI coding assistant agent for 1B–7B models running locally via [Ollama](https://ollama.com), [LMStudio](https://lmstudio.ai), or [LiteLLM](https://litellm.ai).
 
  ---
 
@@ -81,11 +88,12 @@ Tasks that require the model to decide *what to look at* — refactoring across
  - **`/plan <goal>`** — planning agent: researches the project, writes a natural-language step-by-step plan to `plan.txt`; run `/agent <task> plan plan.txt` to execute it step by step
  - **`/fill`** — fill agent: reads NaN session variables, scans project for `.var` files and config files, sets each value automatically
  - **Session variables** — `{{name}}` placeholders substituted in any command; save/load from `.var` files for offline reuse without loading files into context
- - **Project config** — `/config save` persists session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml`; auto-loaded on startup when `auto: true`
+ - **Project config** — `/config save` persists session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml`; `/config save global` saves to `~/.1bcoder/config.yml`; on startup, the first config with `auto: true` (local → global) is applied automatically
  - **Aliases** — define command shortcuts with `/alias /name = expansion` (supports `{{args}}`); persisted in `aliases.txt`; loaded from global then project directory at startup and survive `/clear`
  - **Backup/restore** — `/bkup save` rotates existing backups (`file.bkup` → `file.bkup(1)`, `file.bkup(2)`…) so no snapshot is ever overwritten; `/bkup restore` always restores the latest
  - **MCP support** — connect external tool servers (filesystem, web, git, database, browser…) via the Model Context Protocol
- - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`, with saved profiles
+ - **Parallel queries** — send prompts to multiple models simultaneously with `/parallel`; control context sent (`--ctx`/`--last`/`--no-ctx`) and route replies back into main context (`ctx` output) for sub-agent workflows
+ - **Command hooks** — `/hook before|after <cmd> <script>` runs a script before or after edit/patch/fix/insert; `before` hook cancels the command if the script is missing; `{{file}}` and `{{range}}` injected automatically
  - Switch model or host at runtime without restarting (`/model gemma3:1b`, `/host openai://localhost:1234`)
  - **Model parameters** — `/param temperature 0.2`, `/param enable_thinking false` — sent with every request, auto-cast to correct type
  - **Multi-provider** — connect to Ollama, LMStudio, or LiteLLM using `ollama://` / `openai://` URL scheme; plain host defaults to Ollama
@@ -100,7 +108,7 @@ Tasks that require the model to decide *what to look at* — refactoring across
  pip install 1bcoder
  ```
 
- On first launch, default agents, procs, and scripts are copied to `~/.1bcoder/` automatically.
+ On first launch, default agents, procs, scripts, profiles, and aliases are copied to `~/.1bcoder/` automatically. On upgrade (`pip install --upgrade 1bcoder`), new entries in `aliases.txt` and `profiles.txt` are merged in without overwriting your customisations.
 
  ### Option 2 — Clone and install locally
 
@@ -172,15 +180,15 @@ pip install -e .
  python chat.py
  ```
 
- On startup a numbered list of available Ollama models is shown — type the number to select one. Use `--model` to skip the prompt.
+ On startup, 1bcoder checks for a config with `auto: true` (local `.1bcoder/config.yml` first, then `~/.1bcoder/config.yml`) and connects to the host and model stored there. If no config is found, it connects to local Ollama and prompts for a model. Use `--model` or `--host` to override.
 
  ### CLI options
 
  ```
  1bcoder [--host URL] [--model NAME] [--init] [--scriptapply SCRIPT] [--param KEY=VALUE]
 
- --host URL            Host URL — supports ollama:// and openai:// schemes (default: http://localhost:11434)
- --model NAME          Skip model selection, use this model directly
+ --host URL            Host URL — supports ollama:// and openai:// schemes (default: from config or http://localhost:11434)
+ --model NAME          Model to use; overrides config (shows list if not available on host)
  --init                Create .1bcoder/ scaffold in the current directory
  --scriptapply SCRIPT  Run a script file non-interactively, then exit
  --param KEY=VALUE     Plan parameter substitution (repeatable)
@@ -670,7 +678,7 @@ Lines starting with `[v]` are already done and skipped. Lines starting with `#`
  | `/script show` | Display steps of the current script |
  | `/script add <command>` | Append a step to the current script |
  | `/script clear` | Wipe current script completely |
- | `/script reset` | Unmark all done steps |
+ | `/script reset` | Unmark all done steps (also happens automatically when a script runs to completion) |
  | `/script reapply [key=value ...]` | Reset all done steps then apply automatically; prompts for any NaN `{{variables}}` before running |
  | `/script refresh` | Reload script from disk and show contents |
  | `/script apply [file] [key=value ...]` | Run steps one by one (Y/n/q per step) |
@@ -720,6 +728,7 @@ Session variables store named values that are substituted as `{{name}}` in any c
  /var set name =MyService    literal value
  /var def port db host       declare multiple NaN variables (skips if already set)
  /var get                    list all variables (NaN = unset)
+ /var get port               print value of a single variable (useful with ->)
  /var del port               remove a variable
  ```
 
@@ -765,9 +774,12 @@ Any `{{key}}` found but not yet set is registered as NaN — `/script reapply` w
 
  ---
 
- ### Output capture (`->` and `$`)
+ ### Output capture (`->`, `$` and `~`)
 
- Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. The special token `$` expands to the last captured output anywhere in a command or message.
+ Any command — LLM reply, tool output, or proc result — can be captured into a session variable using the `->` suffix. Two special tokens expand anywhere in a command or message:
+
+ - `$` — last captured output (last AI reply or tool result)
+ - `~` — last user input (last message or command you typed)
 
  ```
  /map keyword extract auth.py -> keywords    # capture tool output into variable
@@ -783,23 +795,43 @@ summarize this for me -> myplan # capture LLM reply
  /var set port result                        # also works: grab key from proc output
  ```
 
+ **`~` — repeat or redirect the last question:**
+ ```
+ how does this method work?    # ask main model
+ /small ~                      # same question → small model
+ /ask ~                        # same question → agent mode
+ /explain "$"                  # ask small model to explain the reply
+ explain: $                    # ask main model to explain its own reply
+ ```
+
  `->` stores the full text (including ANSI-stripped terminal output) and also updates `$` for immediate reuse. Variables captured with `->` appear in `/var get` like any other session variable.
 
  ---
 
  ### Project config (`/config`)
 
- Save and restore session state (host, model, ctx, params, vars, procs) to `.1bcoder/config.yml` in the current working directory. Useful for project-specific presets that are too large to fit in model context.
+ Save and restore session state (host, model, ctx, params, vars, procs). Two config locations are supported:
+
+ - **Local** — `.1bcoder/config.yml` in the current working directory (project-specific)
+ - **Global** — `~/.1bcoder/config.yml` (user-wide default for all projects)
+
+ **Startup priority:** on launch without `--host`/`--model`, 1bcoder checks the local config first, then the global one. The first config with `auto: true` wins. If neither has `auto: true`, 1bcoder connects to local Ollama and prompts for a model.
 
  ```
- /config save                   # save all current state
- /config save host              # save only host
- /config save model             # save only model
- /config save vars              # save only vars
- /config load                   # restore from config.yml
- /config show                   # print config.yml contents
- /config auto on                # auto-load on every startup in this directory
- /config auto off               # disable auto-load
+ /config save                   # save all current state to local config
+ /config save global            # save all current state to global config
+ /config save host              # save only host to local config
+ /config save global host       # save only host to global config
+ /config save model             # save only model to local config
+ /config save global model      # save only model to global config
+ /config save vars              # save only vars to local config
+ /config load                   # restore from local config
+ /config load global            # restore from global config
+ /config show                   # print local config contents
+ /config show global            # print global config contents
+ /config auto on                # enable auto-load in local config
+ /config auto on global         # enable auto-load in global config
+ /config auto off               # disable auto-load in local config
  ```
 
  **Selective delete:**
@@ -812,7 +844,7 @@ Save and restore session state (host, model, ctx, params, vars, procs) to `.1bco
  /config del proc collect-files # remove one proc
  ```
 
- **Config file format** (`.1bcoder/config.yml`):
+ **Config file format** (`.1bcoder/config.yml` or `~/.1bcoder/config.yml`):
  ```yaml
  auto: true
  host: ollama://localhost:11434
@@ -828,7 +860,7 @@ procs:
  - collect-files output.txt
  ```
 
- When `auto: true`, the config is applied automatically after the startup banner — host, model, ctx, params, vars, and procs are restored without any command.
+ When `auto: true`, host and model are used at startup to connect; ctx, params, vars, and procs are also restored.
 
  ---
 
@@ -851,24 +883,34 @@ Connect external tool servers to give the AI access to filesystems, databases, w
  /mcp disconnect fs
  ```
 
- See [MCP.md](MCP.md) for a full list of ready-to-use servers.
+ See `/doc MCP` for a full list of ready-to-use servers.
 
  ---
 
  ### Parallel queries
 
- Send prompts to multiple models at the same time. Each answer is saved to its own file.
+ Send a prompt to multiple models at the same time.
 
  ```
- /parallel ["prompt"] [profile <name>] [host:port|model|file ...]
+ /parallel ["prompt"] [--ctx|--last|--no-ctx] [profile <name>] [host:port|model|(file or ctx) ...]
  ```
 
+ | Flag | Behaviour |
+ |---|---|
+ | `--ctx` *(default)* | Full conversation context is sent to every worker |
+ | `--last` | Only the last user message is sent (saves tokens for small models) |
+ | `--no-ctx` | No context — prompt only (fastest, zero leakage) |
+
+ Workers write results to a file **or** inject them back into the main context:
+
  ```
  /parallel "review this for bugs" \
-   localhost:11434|llama3.2:1b|answers/llm1.txt \
-   localhost:11435|qwen2.5:1b|answers/llm2.txt
+   localhost:11434|llama3.2:1b|ans/llm1.txt \
+   localhost:11435|qwen2.5:1b|ctx
  ```
 
+ Using `ctx` as the output target injects the worker's reply into the main conversation — the next AI turn will see it.
+
  **Profiles** — save a set of workers for reuse:
 
  ```
@@ -886,6 +928,60 @@ review: localhost:11434|ministral3:3b|ans/review.txt localhost:11435|cogito:3b|a
  fast: localhost:11434|qwen2.5-coder:0.6b|ans/q.txt   # quick sanity check
  ```
 
+ **Sub-agent profiles** — built-in profiles that return answers directly to the main context (`ctx`):
+
+ ```
+ small:    localhost:11434|qwen3:0.6b|ctx
+ explain:  localhost:11434|gemma3:1b|ctx
+ thinking: localhost:11434|lfm2.5-thinking:1.2b|ctx
+ short:    localhost:11434|llama3.2:1b|ctx
+ ```
+
+ These are aliased as `/small`, `/explain`, `/thinking`, `/short` — use them like sub-agents:
+
+ ```
+ /small "what does this function return?" --no-ctx   # ask tiny model, no context bleed
+ /explain "$"                                        # ask gemma to explain last reply
+ /small ~                                            # repeat last question to a small model
+ ```
+
+ `~` expands to the last message you typed; `$` expands to the last AI reply — combine them to build sub-agent pipelines without copy-pasting.
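The two-token substitution described above can be sketched in a few lines of Python. This is an editorial illustration of the documented semantics, not the shipped implementation — the function name and the plain string replacement are assumptions (real quoting rules may differ):

```python
def expand(command: str, last_output: str, last_input: str) -> str:
    """Expand the documented tokens: `$` -> last captured output
    (AI reply or tool result), `~` -> last user input."""
    # Naive substitution, per the doc's "expands anywhere" rule.
    return command.replace("$", last_output).replace("~", last_input)

# /small ~   re-sends your last question to the small model
print(expand("/small ~", "", "what does this function return?"))
# /explain "$"   asks the small model to explain the last reply
print(expand('/explain "$"', "It returns the parsed config dict.", ""))
```

Chaining the two — ask, then `/explain "$"` — is what the sub-agent pipelines below rely on.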
+
+ ---
+
+ ### Hooks (`/hook`)
+
+ Run a script automatically **before** or **after** a command. Useful for backups before edits, linting after patches, or any pre/post workflow step.
+
+ ```
+ /hook before <cmd> <script>    # run script before every <cmd>
+ /hook after <cmd> <script>     # run script after every <cmd>
+ /hook list                     # show active hooks
+ /hook clear <cmd>              # remove hooks for <cmd>
+ /hook clear                    # remove all hooks
+ ```
+
+ `<cmd>` is the command name without the slash: `edit`, `patch`, `fix`, `insert`, `run`.
+
+ **Two script types:**
+ - `.txt` — 1bcoder script (sequence of commands). `{{file}}` and `{{range}}` are injected as session variables.
+ - `.py` — Python guard subprocess. Receives trigger content on `stdin`, outputs `BLOCK:`/`ALERT:`/`ACTION:` lines.
+
+ **Auto-injected for `.txt` scripts:**
+
+ | Variable | Value |
+ |---|---|
+ | `{{file}}` | file argument of the triggering command |
+ | `{{range}}` | line range (if specified), e.g. `10-25` |
+
+ **Examples:**
+ ```
+ /hook before edit /bkup {{file}}         # backup before every edit (.txt script)
+ /hook before run sql_readonly_guard.py   # block dangerous SQL (.py guard)
+ ```
+
+ A missing `.txt` script cancels a `before` hook. A `.py` guard cancels only if it prints `BLOCK:`. Step errors inside `.txt` scripts do not cancel the command.
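To make the `.py` guard contract concrete, here is a minimal sketch. It follows the protocol stated above (trigger content on `stdin`, a `BLOCK:` line cancels, silence lets the command proceed); the pattern list and function name are assumptions for illustration, not the shipped `sql_readonly_guard.py`:

```python
import re
import sys

# Write statements we refuse to pass through to /run (illustrative list).
WRITE_SQL = re.compile(r"\b(DELETE|DROP|UPDATE|INSERT|ALTER|TRUNCATE)\b",
                       re.IGNORECASE)

def verdict(command: str) -> str:
    """Return a `BLOCK: reason` line for write SQL, else empty string.

    Printing `ALERT: ...` instead would only warn without cancelling.
    """
    m = WRITE_SQL.search(command)
    if m:
        return f"BLOCK: write statement '{m.group(1).upper()}' is not allowed"
    return ""  # empty output = command proceeds

if __name__ == "__main__":
    out = verdict(sys.stdin.read())  # stdin = the triggering command
    if out:
        print(out)
```

Installed with `/hook before run my_guard.py`, this would cancel any `/run` whose command line contains a write statement.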
+
  ---
 
  ### Prompt templates
@@ -914,9 +1010,9 @@ Run a Python script against the last LLM reply. Useful for extracting filenames,
  /proc new my-proc              # create a new processor from template
  ```
 
- **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · exit 1 = failure.
+ **Processor protocol:** `stdin` = last LLM reply · `stdout` = result · `key=value` lines = extracted params · `ACTION: /command` = confirmed and executed (run mode only) · `ALERT: message` = warning printed, continues · `BLOCK: reason` = cancels the triggering command (hook mode only) · exit 1 = failure.
 
- Built-in processors in `<install>/.1bcoder/proc/`:
+ Built-in processors in `~/.1bcoder/proc/`:
 
  | Processor | Purpose | Best mode |
  |---|---|---|
@@ -925,8 +1021,20 @@ Built-in processors in `<install>/.1bcoder/proc/`:
  | `extract-list` | Convert first bullet/numbered list in reply to comma-separated line | one-shot |
  | `grounding-check` | Score identifiers against `map.txt`, warn if <50% | persistent |
  | `collect-files` | Accumulate filenames to `.1bcoder/collected-files.txt` | persistent |
- | `md` | Render last reply as formatted Markdown in terminal (`pip install rich`) | one-shot |
+ | `md` | Render last reply as formatted Markdown in terminal | one-shot |
  | `mdx` | Render last reply as Markdown + LaTeX (KaTeX) + Mermaid diagrams in browser | one-shot |
+ | `ctx_cut` | Auto `/ctx cut` when context exceeds threshold (default 90%) | persistent |
+ | `rude_words` | Alert if reply contains profanity (`ua` arg adds Ukrainian list) | persistent |
+ | `secret_check` | Alert if reply contains sensitive names (google, anthropic…) | persistent |
+ | `sql_readonly_guard` | Alert (proc) or block (hook) on write SQL statements | both |
+
+ **Guard usage examples:**
+ ```
+ /proc on ctx_cut 80                      # auto cut at 80%
+ /proc on rude_words ua                   # profanity check + Ukrainian
+ /proc on secret_check client=acme        # + custom keyword
+ /hook before run sql_readonly_guard.py   # block /run with DELETE/DROP/UPDATE
+ ```
 
  See `/doc PROC` for the full protocol, built-in processor reference, and guide to writing your own.
 
@@ -1175,3 +1283,9 @@ For human input, the corrected command is shown with `[fix?]` and you are asked
  | text-generation-webui | Linux / Win | `--api` flag | 5000 | `openai://` | oobabooga UI, needs `--api` flag to expose OpenAI endpoint |
  | TabbyAPI | Linux / Win | built-in | 5000 | `openai://` | Focused on exl2/GPTQ quantized models, low VRAM |
  | vLLM | Linux | built-in | 8000 | `openai://` | Production server, high throughput, requires significant VRAM |
+
+ ---
+
+ **(c) 2026 Stanislav Zholobetskyi**
+ Institute for Information Recording, National Academy of Sciences of Ukraine, Kyiv
+ *PhD research: «Intelligent Technology for Software Development and Maintenance Support»*