loreguard-cli 0.3.2__tar.gz → 0.3.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: loreguard-cli
-Version: 0.3.2
+Version: 0.3.3
 Summary: Local inference client for Loreguard NPCs
 Project-URL: Homepage, https://loreguard.com
 Project-URL: Documentation, https://github.com/beyond-logic-labs/loreguard-cli#readme
@@ -149,14 +149,15 @@ loreguard-cli
 
 ## Supported Models
 
-| Model ID | Name | Size | Notes |
-|----------|------|------|-------|
-| `qwen3-4b-instruct` | Qwen3 4B Instruct | 2.8 GB | **Recommended** |
-| `llama-3.2-3b-instruct` | Llama 3.2 3B | 2.0 GB | Fast |
-| `qwen3-8b` | Qwen3 8B | 5.2 GB | Higher quality |
-| `meta-llama-3-8b-instruct` | Llama 3 8B | 4.9 GB | General purpose |
+Works with any `.gguf` model. Tested with the following model families:
 
-Or use any `.gguf` model with `--model /path/to/model.gguf`.
+- **Qwen** - Recommended for best quality/speed balance
+- **Llama** - Meta's open models
+- **GPT** - GPT-style open models
+- **RNJ** - Specialized models
+- **Violet Lotus** - Community fine-tunes
+
+Use any model with `--model /path/to/model.gguf`.
 
 ## Use Cases
 
@@ -116,14 +116,15 @@ loreguard-cli
 
 ## Supported Models
 
-| Model ID | Name | Size | Notes |
-|----------|------|------|-------|
-| `qwen3-4b-instruct` | Qwen3 4B Instruct | 2.8 GB | **Recommended** |
-| `llama-3.2-3b-instruct` | Llama 3.2 3B | 2.0 GB | Fast |
-| `qwen3-8b` | Qwen3 8B | 5.2 GB | Higher quality |
-| `meta-llama-3-8b-instruct` | Llama 3 8B | 4.9 GB | General purpose |
-
-Or use any `.gguf` model with `--model /path/to/model.gguf`.
+Works with any `.gguf` model. Tested with the following model families:
+
+- **Qwen** - Recommended for best quality/speed balance
+- **Llama** - Meta's open models
+- **GPT** - GPT-style open models
+- **RNJ** - Specialized models
+- **Violet Lotus** - Community fine-tunes
+
+Use any model with `--model /path/to/model.gguf`.
 
 ## Use Cases
 
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "loreguard-cli"
-version = "0.3.2"
+version = "0.3.3"
 description = "Local inference client for Loreguard NPCs"
 readme = "README.md"
 license = "MIT"
@@ -48,8 +48,6 @@ def main():
 "--hidden-import", "src.models_registry",
 "--hidden-import", "src.cli",
 "--hidden-import", "src.npc_chat",
-"--hidden-import", "rich",
-"--hidden-import", "rich.console",
 "--hidden-import", "httpx",
 "--hidden-import", "websockets",
 "--hidden-import", "aiofiles",
@@ -68,11 +66,11 @@ def main():
 
 if result.returncode == 0:
 dist_path = root / "dist" / name
-print(f"\n Build successful!")
-print(f" Binary: {dist_path}")
-print(f" Size: {dist_path.stat().st_size / 1024 / 1024:.1f} MB")
+print(f"\n[OK] Build successful!")
+print(f" Binary: {dist_path}")
+print(f" Size: {dist_path.stat().st_size / 1024 / 1024:.1f} MB")
 else:
-print("\n Build failed!")
+print("\n[FAILED] Build failed!")
 sys.exit(1)
 
 
File without changes