@vtstech/pi-status 1.1.4 → 1.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +27 -23
  2. package/package.json +2 -2
  3. package/status.js +36 -7
package/README.md CHANGED
@@ -1,8 +1,8 @@
  # @vtstech/pi-status

- System monitor / status bar extension for the [Pi Coding Agent](https://github.com/badlogic/pi-mono).
+ System monitor extension for the [Pi Coding Agent](https://github.com/badlogic/pi-mono).

- Replaces the Pi footer with a unified status bar showing system metrics, model info, and generation params.
+ Adds composable named status items to the framework footer using `ctx.ui.setStatus()`. Each metric gets its own named slot so it coexists cleanly with other extensions' status items.

  ## Install

@@ -12,35 +12,39 @@ pi install "npm:@vtstech/pi-status"

  ## How It Works

- Automatically loaded — no commands needed. Displays a 2-line status bar at the bottom of the Pi interface.
+ Automatically loaded — no commands needed. Slots are rendered in the framework footer alongside framework items (model name, session tokens, context usage). All labels use dimmed coloring; all values use green highlighting.

- **Line 1 (conf):**
+ CPU/RAM/Swap are only shown when using a local Ollama provider (not for cloud/remote). For cloud providers, system metrics are omitted.
+
+ **Example (local Ollama):**
  ```
- qwen3.5:0.8b · ~/.pi/agent · medium · CPU 9%
+ CtxMax:41k RespMax:16.4k Resp 2m3s CPU 12% RAM 2.2G/15.1G SEC:MAX Prompt: 2840 chr 393 tok pi:0.66.1
  ```

- **Line 2 (load):**
+ **Example (cloud provider, basic mode):**
  ```
- qwen3.5:0.8b · M:33k · S:9.0%/128k · RAM 2.2G/15.1G · Resp 5m24s · temp:0.0 · max:16384
+ CtxMax:128k RespMax:16.4k Resp 1m22s SEC:BASIC Prompt: 2840 chr 393 tok pi:0.66.1
  ```

- CPU/RAM/Swap are only shown when using a local Ollama provider (not for cloud/remote).
+ ## Status Slots
+
+ Slots are updated every 5 seconds (1 second for active tool timing). Render order is deterministic — all slots are managed through `flushStatus()`.

- ## What's Displayed
+ | Slot | Description | Condition |
+ |------|-------------|-----------|
+ | **CtxMax** | Native model context window from Ollama `/api/show` (k-notation) | Local or remote Ollama |
+ | **RespMax** | Max response/completion tokens with k-notation (e.g., `16k`) | After first provider request |
+ | **Resp** | Agent loop duration (e.g., `2m3s`) | After first agent cycle |
+ | **CPU%** | Per-core CPU usage delta | Local Ollama only |
+ | **RAM** | Used/total system memory | Local Ollama only |
+ | **Swap** | Used/total swap space | Local only, when active |
+ | **Generation params** | Temperature, top_p, top_k, num_predict, context size, reasoning_effort (dimmed) | After first provider request |
+ | **SEC** | Security mode indicator (`SEC:BASIC`/`SEC:MAX`) + session-scoped blocked count + 3s flash on block event | Always shown |
+ | **Active tool** | Live elapsed timer with `>` indicator | While a tool is running |
+ | **Prompt** | System prompt size as `chars chr tokens tok` | After first agent start |
+ | **Pi version** | `pi:0.66.1` (dimmed, always last) | Always shown |

- - **Working directory** compact `~`-relative path
- - **Git branch** — current branch name (cached)
- - **Active model** — the model Pi is currently using
- - **Thinking level** — shown when active (off is hidden)
- - **Context usage** — percentage and window size (`5.6%/128k`)
- - **CPU%** — per-core delta (updates every 3s)
- - **RAM** — used/total
- - **Swap** — shown only when active
- - **Loaded model** — Ollama model in memory via `/api/ps` (cached 15s)
- - **Response time** — agent loop duration
- - **Generation params** — temperature, top_p, top_k, max tokens, num_predict, context size
- - **Security indicator** — 3s flash on blocked tools + persistent blocked count
- - **Active tool timing** — live elapsed timer for running tool
+ All slots are cleared on `session_shutdown`. Metrics that the framework already provides (model name, session tokens, context usage, thinking level) are intentionally omitted to avoid duplication.

  ## Links

@@ -49,4 +53,4 @@ CPU/RAM/Swap are only shown when using a local Ollama provider (not for cloud/re

  ## License

- MIT — [VTSTech](https://www.vts-tech.org)
+ MIT — [VTSTech](https://www.vts-tech.org)
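The slot pattern the new README describes is small enough to sketch. Below is a minimal, illustrative example of the same approach — the event names and `setStatus()` calls mirror the status.js diff further down, while the CPU sampler and handler wiring are assumptions, not the package's actual code:

```js
// Illustrative sketch only: one named slot per metric, so other extensions'
// status items are never overwritten. Passing undefined clears a slot.
export default function exampleStatus(pi) {
  let ctxUi;
  pi.on("agent_start", (_event, ctx) => {
    ctxUi = ctx.ui; // capture the UI handle the framework passes in
  });
  const timer = setInterval(() => {
    // Hypothetical metric; the real extension computes a per-core CPU delta.
    ctxUi?.setStatus("status-cpu", `CPU ${sampleCpuPercent()}%`);
  }, 5e3); // 5-second refresh, matching STATUS_UPDATE_INTERVAL_MS
  pi.on("session_shutdown", () => {
    clearInterval(timer);
    ctxUi?.setStatus("status-cpu", void 0); // slots are cleared on shutdown
  });
}

function sampleCpuPercent() {
  return 12; // stand-in value for the sketch
}
```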
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@vtstech/pi-status",
-   "version": "1.1.4",
+   "version": "1.1.5",
    "description": "System monitor / status bar extension for Pi Coding Agent",
    "main": "status.js",
    "keywords": ["pi-extensions"],
@@ -14,7 +14,7 @@
      "url": "https://github.com/VTSTech/pi-coding-agent"
    },
    "dependencies": {
-     "@vtstech/pi-shared": "1.1.4"
+     "@vtstech/pi-shared": "1.1.5"
    },
    "peerDependencies": {
      "@mariozechner/pi-coding-agent": ">=0.66"
package/status.js CHANGED
@@ -5,6 +5,7 @@ import os from "node:os";
  import { getOllamaBaseUrl, fetchModelContextLength, readModelsJson } from "@vtstech/pi-shared/ollama";
  import { fmtBytes, fmtDur } from "@vtstech/pi-shared/format";
  import { debugLog } from "@vtstech/pi-shared/debug";
+ import { getSecurityMode } from "@vtstech/pi-shared/security";
  var STATUS_UPDATE_INTERVAL_MS = 5e3;
  var TOOL_TIMER_INTERVAL_MS = 1e3;
  function status_temp_default(pi) {
@@ -165,13 +166,14 @@ function status_temp_default(pi) {
      } else {
        ctxUi.setStatus("status-params", void 0);
      }
+     const secMode = getSecurityMode();
      const now = Date.now();
      if (securityFlashTool && now < securityFlashUntil) {
-       ctxUi.setStatus("status-sec", `${dim2("SEC:")}${green2(String(blockedCount))} ${dim2("(blocked: " + securityFlashTool + ")")}`);
+       ctxUi.setStatus("status-sec", `${dim2("SEC:")}${green2(String(blockedCount))} ${dim2("(" + secMode.toUpperCase() + ")")} ${dim2("(blocked: " + securityFlashTool + ")")}`);
      } else if (blockedCount > 0) {
-       ctxUi.setStatus("status-sec", `${dim2("SEC:")}${green2(String(blockedCount))}`);
+       ctxUi.setStatus("status-sec", `${dim2("SEC:")}${green2(String(blockedCount))} ${dim2("(" + secMode.toUpperCase() + ")")}`);
      } else {
-       ctxUi.setStatus("status-sec", void 0);
+       ctxUi.setStatus("status-sec", `${dim2("SEC:")}${green2(secMode.toUpperCase())}`);
      }
      if (activeTool && activeToolStart > 0) {
        const elapsed = performance.now() - activeToolStart;
@@ -259,15 +261,42 @@ function status_temp_default(pi) {
    });
    pi.on("before_provider_request", (event) => {
      lastPayload = event.payload;
+     measurePromptFromPayload(lastPayload);
    });
+   function measurePromptFromPayload(payload) {
+     if (!payload || cachedPromptText) return;
+     const theme = ctxTheme;
+     const dim2 = (s) => theme?.fg?.("dim", s) ?? s;
+     const green2 = (s) => theme?.fg?.("success", s) ?? s;
+     try {
+       const messages = payload.messages;
+       if (!messages?.length) return;
+       const sysMsg = messages.find((m) => m.role === "system") ?? messages[0];
+       if (!sysMsg?.content) return;
+       const chr = sysMsg.content.length;
+       const tok = sysMsg.content.split(/\s+/).filter(Boolean).length;
+       cachedPromptText = `${dim2("Prompt:")} ${green2(`${chr} chr ${tok} tok`)}`;
+       debugLog("status", `system prompt measured from payload: ${chr} chars, ~${tok} words`);
+       flushStatus();
+     } catch (err) {
+       debugLog("status", "failed to measure prompt from payload", err);
+     }
+   }
    pi.on("agent_start", async (_event, ctx) => {
      agentStartTime = performance.now();
      try {
        const prompt = ctx.getSystemPrompt();
-       const chr = prompt.length;
-       const tok = prompt.split(/\s+/).filter(Boolean).length;
-       cachedPromptText = `${dim("Prompt:")} ${green(`${chr} chr ${tok} tok`)}`;
-     } catch {
+       if (prompt) {
+         const chr = prompt.length;
+         const tok = prompt.split(/\s+/).filter(Boolean).length;
+         cachedPromptText = `${dim("Prompt:")} ${green(`${chr} chr ${tok} tok`)}`;
+         debugLog("status", `system prompt measured via getSystemPrompt(): ${chr} chars, ~${tok} words`);
+       }
+     } catch (err) {
+       debugLog("status", "getSystemPrompt() not available, will measure from payload", err);
+     }
+     if (!cachedPromptText && lastPayload) {
+       measurePromptFromPayload(lastPayload);
      }
      flushStatus();
    });
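Net effect of the status.js changes: prompt measurement now follows a fallback order — first `ctx.getSystemPrompt()` on `agent_start`, then the system message of the last `before_provider_request` payload. A condensed sketch of that chain (`measurePrompt` and `estimateTokens` are illustrative names; the diff inlines the same expressions):

```js
// Condensed, illustrative view of the fallback chain the diff introduces.
function estimateTokens(text) {
  // Whitespace word count as a rough token proxy, as in the diff.
  return text.split(/\s+/).filter(Boolean).length;
}

function measurePrompt(ctx, lastPayload) {
  try {
    const prompt = ctx.getSystemPrompt();
    if (prompt) return { chr: prompt.length, tok: estimateTokens(prompt) };
  } catch {
    // Hosts without getSystemPrompt() fall through to the payload path.
  }
  const messages = lastPayload?.messages;
  const sysMsg = messages?.find((m) => m.role === "system") ?? messages?.[0];
  if (!sysMsg?.content) return null; // nothing to measure yet
  return { chr: sysMsg.content.length, tok: estimateTokens(sysMsg.content) };
}
```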