@halfagiraf/clawx 0.2.2 → 0.2.3

Files changed (2)
  1. package/README.md +43 -0
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -649,6 +649,49 @@ Here's an excerpt from a real session. The agent searched for coding models, che
  an Ollama Modelfile for your chosen model?
  ```
 
+ Ask it to create a Modelfile and it does the rest — inspects the model's GGUF files, reads the README for the prompt format, and generates a complete Ollama setup:
+
+ ```
+ > Yes, create an Ollama Modelfile for NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF
+
+ hf_model_info { "model_id": "NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF" }
+
+ GGUF files (6):
+ LFM2.5-1.2B-Nova-Function-Calling.Q2_K.gguf
+ LFM2.5-1.2B-Nova-Function-Calling.Q4_K_M.gguf
+ LFM2.5-1.2B-Nova-Function-Calling.Q8_0.gguf
+ ...
+
+ hf_readme { "model_id": "NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF" }
+
+ ChatML Native: Uses the standard <|im_start|> format
+ JSON Reliability: 97% syntax accuracy
+ Expected output: <tool_call>{"name": "...", "arguments": {...}}</tool_call>
+
+ [writes Modelfile]
+ FROM ./LFM2.5-1.2B-Nova-Function-Calling.Q4_K_M.gguf
+ TEMPLATE """<|im_start|>system
+ {{ .System }}<|im_end|>
+ {{ range .Messages }}<|im_start|>{{ .Role }}
+ {{ .Content }}<|im_end|>
+ {{ end }}<|im_start|>assistant
+ """
+ PARAMETER temperature 0.1
+ PARAMETER num_ctx 4096
+ PARAMETER stop <|im_end|>
+ PARAMETER stop <|im_start|>
+
+ [writes download_model.ps1] — PowerShell script to download the GGUF
+ [writes test_function_calling.py] — Python test script to verify it works
+
+ Quick Start:
+ .\download_model.ps1
+ ollama create lfm-nova -f Modelfile
+ ollama run lfm-nova "Calculate area of circle with radius 5"
+
+ Expected VRAM: ~0.8GB — plenty of room on your 12GB RTX 3060
+ ```
+
  Scout uses the same model/provider flags as the main TUI. You can run it with a local model (`-m qwen2.5-coder:7b-instruct -p ollama`) or a cloud API (`-p deepseek`). The text tool parser works in scout mode too, so models that output tool calls as text (like Qwen) will still work.
 
  ### Basic REPL commands
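
The session above has the agent generate a `test_function_calling.py` to verify the model's output; the script itself is not shown in the diff, but a minimal sketch of such a check, assuming the `<tool_call>{"name": ..., "arguments": ...}</tool_call>` format the model's README documents (the function name `parse_tool_call` and the example payload are illustrative, not from the package):

```python
import json
import re


def parse_tool_call(text: str):
    """Extract the first <tool_call>...</tool_call> block and decode its JSON payload.

    Returns the decoded dict, or None if the model emitted no tool call.
    """
    match = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))


# Example model output in the format the README describes
output = '<tool_call>{"name": "circle_area", "arguments": {"radius": 5}}</tool_call>'
call = parse_tool_call(output)
print(call["name"], call["arguments"])
```

A real test would run `ollama run lfm-nova` (or hit the Ollama HTTP API) and feed the raw response through a parser like this, failing if no well-formed tool call comes back.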
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@halfagiraf/clawx",
- "version": "0.2.2",
+ "version": "0.2.3",
  "description": "Terminal-first coding agent — runs locally with Ollama, DeepSeek, OpenAI, or any OpenAI-compatible endpoint",
  "type": "module",
  "bin": {