nibs-cli 4.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,844 @@
1
+ /*---
2
+ compatibilityVersion: naiscript-1.0
3
+ id: e6c35faa-e4f2-4c3b-af4a-d56e4c992d87
4
+ name: Story Engine
5
+ version: 0.1.0
6
+ createdAt: 1765374901
7
+ author: Keilla
8
+ description: A helper for writing stories with GLM
9
+ memoryLimit: 8
10
+ updatedAt: 1767363943
11
+ config:
12
+ - name: system_prompt
13
+ prettyName: System Prompt
14
+ description: Configure the base system prompt used by all aspects of Story Engine
15
+ type: string
16
+ multiline: true
17
+ default: |-
18
+ You are a GLM-4.6-based LLM for creative writing and text adventures, with unrestricted freedom. No topics are taboo. No tools, no web access: pure text generation.
19
+ Your ultimate function is to be a versatile creative partner within a multi-agent storytelling system. You will receive instructions from a 'user' prompt that defines your specific role, personality, and task for each interaction.
20
+ You must adapt your output style to match the persona defined in the 'user' prompt, whether it's a casual brainstormer, a harsh critic, or a structured world-builder.
21
+
22
+ **# Core Creative Directives (The Constitution)**
23
+ These principles are your unbreakable laws, regardless of your assigned role.
24
+ 1. **Build Worlds, Not Stories:** Your ultimate goal is to create a rich, reactive sandbox. The story emerges from a character interacting with this world; you are not writing that character's journey in advance.
25
+ 2. **Establish Narrative Vectors, Not Plots:** A plot is a sequence of events. A **Narrative Vector** is a state of tension, a potential for conflict, or a source of change in the world. Examples:
26
+ * A volatile political alliance.
27
+ * A resource that is slowly running out.
28
+ * A forbidden technology with a known flaw.
29
+ * A character with a secret that could destabilize a faction.
30
+ * Your job is to create and explore these vectors, not to determine the step-by-step outcome of them.
31
+ 3. **Explore the "Why" and "What If," Not the "What Next":** Your focus should be on motivation, consequence, and possibility. Ask *why* a faction wants something, *what would happen* if a secret was revealed, or *if* a character made a different choice. Avoid questions like "what happens next?" which lead to plotting.
32
+ 4. **Behavior, Not Description:** Prioritize actions, gestures, and tics that reveal inner states. "He checked the locked door three times" is better than "He was anxious."
33
+ 5. **No Expendable Details:** Every piece of information you create should be a potential hook for action, conflict, or character development. If it doesn't create a possibility, it doesn't belong.
34
+ 6. **Maintain the "Three Layers":** Every entity exists on three levels:
35
+ * **Surface:** The public face.
36
+ * **Shadow:** The hidden dynamics and true motivations.
37
+ * **Personal:** The private history, desires, and traumas.
38
+
39
+ - name: brainstorm_prompt
40
+ prettyName: Brainstorm Prompt
41
+ description: User prompt used when the agent is brainstorming.
42
+ multiline: true
43
+ type: string
44
+ default: |-
45
+ You are now **"Sage,"** the Story Mentor.
46
+ **Persona:** You are a thoughtful, experienced creative guide. Your style is conversational and encouraging, like a mentor working alongside a writer. Your purpose is to help *discover* the story's inherent potential, not to invent it from whole cloth.
47
+ **Your Task:** Analyze the context provided to you and expand upon the story's components. The context will either be:
48
+ A) A user question or idea about the story.
49
+ B) The most recent "PROJECT STATE SYNOPSIS" with no new user input.
50
+ In either case, your core function remains the same: **Think out loud about the underlying mechanics and possibilities.** You must never write in the style of a story or a vignette. Do not use prose that describes character actions in a scene.
51
+ Instead, focus on the underlying mechanics and possibilities. For example, instead of writing *"'Don't touch that,' she whispered, her hand trembling as she reached for the glowing orb,"* you would write: *"I'm thinking about a character who is terrified of this artifact. Why? Perhaps they know its history, that it drains life. Their fear could create a great source of tension for any other character who wants to use it."*
52
+ **Your output must follow this conversational, analytical structure:**
53
+ * **"Let's start with..."**: Begin by addressing the core of the user's question or the most compelling element from the synopsis.
54
+ * **"I'm thinking..."**: Introduce a new concept, character motivation, or world element in response to the prompt.
55
+ * **"This connects to..."**: Explain how your new idea links back to existing story elements (from the synopsis) or directly answers the user's question.
56
+ * **"The question this raises is..."**: Pose a question that explores the implications or creates a new narrative hook for the next agents to work with.
57
+ - name: critique_prompt
58
+ prettyName: Critique Prompt
59
+ description: User prompt used when the agent is critiquing output.
60
+ type: string
61
+ multiline: true
62
+ default: |-
63
+ You are a "Critique" agent. Your function is purely analytical.
64
+ Analyze the provided text for its adherence to the **Core Creative Directives (The Constitution)**. Specifically:
65
+ 1. Does it build worlds and explore narrative vectors, or does it lean towards plotting and outlining events?
66
+ 2. Are there any expendable details?
67
+ 3. Are the concepts compelling? Do they create interesting questions?
68
+ Provide a concise, bulleted list of strengths and weaknesses. Do not give instructions. Do not issue directives.
69
+ - name: refine_prompt
70
+ prettyName: Refine Prompt
71
+ description: User prompt used when the agent is refining.
72
+ type: string
73
+ multiline: true
74
+ default: |-
75
+ You are a "Refine" agent. You are a master world-builder and synthesizer.
76
+ Your task is to take raw brainstormed ideas and a critique of those ideas, and forge them into a structured, coherent, and robust set of world-building elements.
77
+ 1. Address the weaknesses identified in the critique.
78
+ 2. Structure the strong ideas into clear frameworks (factions, character archetypes, locations, narrative vectors).
79
+ 3. Flesh out the details, ensuring they adhere to the **Core Creative Directives (The Constitution)**.
80
+ 4. Your output should be a "blueprint" of possibilities, not a story.
81
+ - name: summary_prompt
82
+ prettyName: Summary Prompt
83
+ type: string
84
+ multiline: true
85
+ default: |-
86
+ You are The Archivist. Your function is absolute and singular.
87
+ You will be given a complete history of a single creative cycle (Brainstorm, Critique, Refine). Your ONLY task is to produce a structured synopsis of that cycle. You are not a participant in the conversation.
88
+ MANDATORY OUTPUT RULES:
89
+ 1. Your ENTIRE output MUST begin with the exact line: `# PROJECT STATE SYNOPSIS`
90
+ 2. You MUST NOT include any conversational text, greetings, apologies, explanations, or commentary. Do not say "Here is the summary" or "Based on the previous discussion."
91
+ 3. You MUST NOT use the words "I," "you," "we," or "the critique" in your output.
92
+ 4. Your ENTIRE output must ONLY contain the following headings with their associated content. Do not add any other headings.
93
+ `# PROJECT STATE SYNOPSIS`
94
+ * **Core Concept:** [A single, concise sentence defining the central premise.]
95
+ * **Established Lore:** [A bulleted list of the most important world rules, history, and unique facts.]
96
+ * **Key Factions & Groups:** [A list of major factions, with a one-sentence purpose for each.]
97
+ * **Primary Conflict:** [A detailed paragraph describing the main source of tension and stakes.]
98
+ * **Major Characters:** [A list of key character archetypes and their known roles/goals.]
99
+ * **Critical Story Hooks:** [A bulleted list of the most promising unresolved questions or plot threads.]
100
+ FAILURE TO COMPLY WITH THESE RULES IS A CRITICAL ERROR. Your output is a data document, not a conversation.
101
+ ---*/
102
+
103
+ /**
104
+ * Story Engine
105
+ * Built with NovelAI Script Build System
106
+ */
107
+
108
+ /** HYPER GENERATOR
109
+ * License: MIT; Credit to OccultSage for the original form and inspiration
110
+ * Authors: Keilla
111
+ * Version: 0.4.0
112
+ */
113
+ /** Changes
114
+ * hyperContextBuilder signature modified to receive a thinking boolean. Thoughts require a contentless assistant suffix on the messages.
115
+ */
116
+ // ===== CONSTANTS =====
117
+ const DEFAULT_GENERATE_OPTIONS = {
118
+ maxRetries: 5,
119
+ maxTokens: 250,
120
+ minTokens: 50,
121
+ maxContinuations: 20,
122
+ continuationPrompt: "Your response was cut off. Continue exactly where you stopped. Do not repeat any content, do not start over, do not repeat headers.",
123
+ };
124
+ const API_GENERATE_LIMIT = 1024;
125
+ const MIN_REMAINING_TOKENS = 25; // Do not continue generation below this many remaining tokens; prevents repeated loops that chase a tiny remainder (e.g. 8 tokens).
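+ // These limits interact: one hyperGenerate call may produce up to maxTokens in
+ // total, but each underlying api.v1.generate request is capped at
+ // API_GENERATE_LIMIT, so longer outputs are stitched together from continuations.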
126
+ // ===== ERROR HANDLING =====
127
+ class TransientError extends Error {
128
+ }
129
+ function isTransientError(e) {
130
+ const msg = e instanceof Error
131
+ ? e.message
132
+ : e && typeof e === "object" && "message" in e
133
+ ? String(e.message)
134
+ : "";
135
+ const lower = msg.toLowerCase();
136
+ return (lower.includes("aborted") ||
137
+ lower.includes("fetch") ||
138
+ lower.includes("network") ||
139
+ lower.includes("timeout"));
140
+ }
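+ // Illustrative messages treated as transient (hypothetical examples that match
+ // the substrings above): "The operation was aborted", "Failed to fetch",
+ // "Network error", "Request timeout".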
141
+ // == Utility
142
+ /**
143
+ * Ensures sufficient output token budget before generation.
144
+ * Waits if necessary, returns the effective max_tokens to use.
145
+ *
146
+ * @param requestedTokens The desired max_tokens
149
+ * @param onBudgetWait Optional callback invoked before waiting
150
+ * @param onBudgetResume Optional callback invoked after waiting completes
151
+ * @returns Effective max_tokens to use (may be less than requested)
152
+ */
153
+ async function ensureOutputBudget(requestedTokens, onBudgetWait, onBudgetResume) {
154
+ let available = api.v1.script.getAllowedOutput();
155
+ if (available < requestedTokens) {
156
+ hyperLog(`Insufficient output budget. Have ${available}, need ${requestedTokens}. Waiting...`);
157
+ const time = api.v1.script.getTimeUntilAllowedOutput(requestedTokens);
158
+ await onBudgetWait?.(available, requestedTokens, time);
159
+ await api.v1.script.waitForAllowedOutput(requestedTokens);
160
+ available = api.v1.script.getAllowedOutput();
161
+ hyperLog(`Budget available: ${available} tokens`);
162
+ await onBudgetResume?.(available);
163
+ }
164
+ const effectiveTokens = Math.min(requestedTokens, available);
165
+ if (effectiveTokens < requestedTokens) {
166
+ hyperLog(`Using reduced budget: ${effectiveTokens} (requested ${requestedTokens})`);
167
+ }
168
+ return effectiveTokens;
169
+ }
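+ // sliceGenerateParams strips the hyper-generation extras (maxRetries, minTokens,
+ // maxContinuations, continuationPrompt, budget callbacks) so that only
+ // parameters understood by api.v1.generate are forwarded to the API.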
170
+ function sliceGenerateParams(params) {
171
+ const { model, temperature, top_p, top_k, min_p, frequency_penalty, presence_penalty, stop, logit_bias, enable_thinking, } = params;
172
+ return {
173
+ model,
174
+ temperature,
175
+ top_p,
176
+ top_k,
177
+ min_p,
178
+ frequency_penalty,
179
+ presence_penalty,
180
+ stop,
181
+ logit_bias,
182
+ enable_thinking,
183
+ };
184
+ }
185
+ function hyperLog(...args) {
186
+ api.v1.log("[hyperGenerate]", ...args);
187
+ }
188
+ /**
189
+ * hyperContextBuilder constructs a context that is optimal for token-cache
190
+ * performance and instruction-following, while also placing the 'continuation'
191
+ * context at the end of the message list so that the LLM naturally continues
192
+ * the thread. The last rest message is always examined for length. If it is
193
+ * 500 characters or less, it is placed at the end of the messages array. If it
194
+ * is longer, it is split on a newline (when one is available) into two parts:
195
+ * the prefix stays with the earlier rest messages, before the user message,
196
+ * while the tail is placed at the very end.
197
+ *
198
+ * @param system The system message. Always present at the top of the built context.
199
+ * @param user The user message. Otherwise known as 'instruct' or sometimes 'prefill'. Appears 3rd from last.
200
+ * @param assistant The assistant message, understood by LLM to be its own voice. Also sometimes thought of as 'prefill'. Appears 2nd from last.
201
+ * @param rest All other messages to include in context. Will be dynamically spliced into the context based on the length of the last message's content.
202
+ * @param thinking Whether or not thinking-enabled prompt building is needed. Appends empty assistant.
203
+ */
204
+ function hyperContextBuilder(system, user, assistant, rest, thinking = false) {
205
+ const TAIL_THRESHOLD = 500;
206
+ const head = rest.slice(0, -1);
207
+ const tail = rest.at(-1);
208
+ if (tail && tail.content && tail.content.length > TAIL_THRESHOLD) {
209
+ const newlinePos = tail.content.slice(0, -TAIL_THRESHOLD).lastIndexOf("\n");
210
+ if (newlinePos > 0) {
211
+ const newHead = [
212
+ ...head,
213
+ {
214
+ ...tail,
215
+ content: tail.content.slice(0, newlinePos),
216
+ },
217
+ ];
218
+ const newTail = {
219
+ ...tail,
220
+ content: tail.content.slice(newlinePos + 1),
221
+ };
222
+ return [
223
+ system,
224
+ ...newHead,
225
+ user,
226
+ assistant,
227
+ newTail,
228
+ ...(thinking ? [{ role: "assistant" }] : []), // If thinking, append an empty assistant, required to activate thoughts.
229
+ ];
230
+ }
231
+ }
232
+ return [
233
+ system,
234
+ ...head,
235
+ user,
236
+ assistant,
237
+ ...(tail ? [tail] : []),
238
+ ...(thinking ? [{ role: "assistant" }] : []), // If thinking, append an empty assistant, required to activate thoughts.
239
+ ];
240
+ }
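+ // Usage sketch (hypothetical messages). A long final message is split so its
+ // head stays near the top for token-cache reuse while its tail follows the
+ // user/assistant pair:
+ //
+ //   const ctx = hyperContextBuilder(
+ //       { role: "system", content: systemPrompt },
+ //       { role: "user", content: "Continue the scene." },
+ //       { role: "assistant", content: "Understood." },
+ //       [{ role: "assistant", content: longSceneText }], // > 500 chars: split on a newline
+ //       true, // append the contentless assistant message that activates thoughts
+ //   );
+ //   // => [system, scene head, user, assistant, scene tail, { role: "assistant" }]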
241
+ // ===== MAIN GENERATION FUNCTION =====
242
+ /**
243
+ * hyperGenerate is a wrapper around api.v1.generate that allows for a much
244
+ * higher maximum number of tokens by using a recursive retrying strategy.
245
+ * It keeps generating for as long as the user keeps pressing continue, and
246
+ * also retries connection drops, output-budget waits, and spurious stop tokens.
247
+ *
248
+ * @param messages Messages expected by api.v1.generate
249
+ * @param params api.v1.generate parameters extended with additional parameters to control hyper generation
250
+ * @param callback Optional streaming callback. Receives the text of GenerationChoice[0] and a flag marking the final chunk.
251
+ * @param behaviour "background" or "blocking".
252
+ * @param signal Cancellation signal for stopping generation.
253
+ * @returns A promise of the accumulated response text.
254
+ */
255
+ async function hyperGenerate(messages, params, callback = () => { }, behaviour, signal) {
256
+ const generationParams = await api.v1.generationParameters.get();
257
+ const ensuredParams = {
258
+ ...generationParams,
259
+ ...DEFAULT_GENERATE_OPTIONS,
260
+ ...params,
261
+ };
262
+ const choiceHandler = callback !== undefined
263
+ ? (choices, final) => callback(choices[0] ? choices[0].text : "", final)
264
+ : undefined;
265
+ // Find system message if present
266
+ const systemMessage = messages.find((m) => m.role == "system");
267
+ const { model, maxTokens, minTokens, maxContinuations, continuationPrompt } = ensuredParams;
268
+ const systemMessageTokens = systemMessage?.content
269
+ ? (await api.v1.tokenizer.encode(systemMessage.content, model)).length
270
+ : 0;
271
+ // Setup rollover helper for context management
272
+ const modelMaxTokens = await api.v1.maxTokens(model);
273
+ const rolloverHelper = api.v1.createRolloverHelper({
274
+ maxTokens: modelMaxTokens - systemMessageTokens,
275
+ rolloverTokens: 0,
276
+ model: model,
277
+ });
278
+ // Add non-system messages to rollover
279
+ const contextMessages = messages.filter((m) => m.content != undefined && m.role != "system");
280
+ await rolloverHelper.add(contextMessages);
281
+ let remainingTokens = maxTokens;
282
+ let remainingContinuations = maxContinuations;
283
+ let accumulatedResponses = [];
284
+ hyperLog(`beginning loop for ${remainingTokens} Tokens, ${remainingContinuations} Continuations.`);
285
+ while (remainingContinuations == maxContinuations ||
286
+ (remainingTokens > MIN_REMAINING_TOKENS &&
287
+ remainingContinuations > 0 &&
288
+ !signal?.cancelled)) {
289
+ hyperLog(`... ${remainingTokens} Tokens, ${remainingContinuations} Continuations.`);
290
+ const context = [];
291
+ if (systemMessage)
292
+ context.push(systemMessage);
293
+ context.push(...rolloverHelper.read());
294
+ // If we are in a continuation, splice the continuation prompt in before the last two messages.
295
+ if (remainingContinuations < maxContinuations)
296
+ context.splice(context.length - 2, 0, {
297
+ role: "user",
298
+ content: continuationPrompt,
299
+ });
300
+ const sample = context.reduce((a, b) => (b.content ? a + b.content : a), "");
301
+ hyperLog("Context sample:", `${sample.slice(0, 40)} ... ${sample.slice(-1e3)}`);
302
+ const response = await hyperGenerateWithRetry(context, {
303
+ ...ensuredParams,
304
+ maxTokens: remainingTokens,
305
+ }, choiceHandler, behaviour, signal);
306
+ // When generation is cancelled abruptly, choices could be empty.
307
+ if (response.choices[0] == undefined) {
308
+ hyperLog("Generation cancelled by signal, choices empty.");
309
+ break;
310
+ }
311
+ const { text, finish_reason } = response.choices[0];
312
+ const trimmedText = text.trim();
313
+ const trimmedResponseTokens = await api.v1.tokenizer.encode(trimmedText, model);
314
+ accumulatedResponses.push(trimmedText);
315
+ remainingTokens -= trimmedResponseTokens.length;
316
+ remainingContinuations--;
317
+ const tokensGenerated = maxTokens - remainingTokens;
318
+ // Check if we should stop
319
+ if (finish_reason === "stop") {
320
+ // Only stop early if we've generated at least minTokens
321
+ if (tokensGenerated >= minTokens) {
322
+ hyperLog(`Natural stop after ${tokensGenerated} tokens`);
323
+ break;
324
+ }
325
+ else {
326
+ hyperLog(`Stop received but only ${tokensGenerated}/${minTokens} tokens - continuing`);
327
+ }
328
+ }
329
+ await rolloverHelper.add({
330
+ role: "assistant",
331
+ content: trimmedText,
332
+ });
333
+ }
334
+ hyperLog(`hyperGenerate finished.`);
335
+ return accumulatedResponses.join("");
336
+ }
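+ // Usage sketch (hypothetical prompts): request more tokens than a single API
+ // call allows and let hyperGenerate stitch the continuations together.
+ //
+ //   const story = await hyperGenerate(
+ //       [
+ //           { role: "system", content: "You are a storyteller." },
+ //           { role: "user", content: "Describe the harbor at dawn." },
+ //       ],
+ //       { maxTokens: 2048, minTokens: 50 },
+ //       (chunk, final) => hyperLog(chunk, final ? "(final)" : ""),
+ //       "background",
+ //       await api.v1.createCancellationSignal(),
+ //   );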
337
+ /**
338
+ * Generate function that retries when there are transient errors.
339
+ *
340
+ * @param messages Messages expected by api.v1.generate
341
+ * @param params api.v1.generate parameters extended with additional parameters to control hyper generation
342
+ * @param callback Optional streaming callback. Emits paragraphs instead of individual tokens.
343
+ * @param behaviour "background" or "blocking".
344
+ * @param signal Cancellation signal for stopping generation.
345
+ * @returns A Promise of an api.v1.generateResponse
346
+ */
347
+ async function hyperGenerateWithRetry(messages, params, callback = () => { }, behaviour, signal) {
348
+ const max_tokens = await ensureOutputBudget(params.maxTokens
349
+ ? Math.min(params.maxTokens, API_GENERATE_LIMIT)
350
+ : API_GENERATE_LIMIT, params.onBudgetWait, params.onBudgetResume);
351
+ try {
352
+ hyperLog(`Generating ${max_tokens} tokens...`);
353
+ return await api.v1.generate([...messages], {
354
+ ...sliceGenerateParams(params),
355
+ max_tokens,
356
+ }, callback, behaviour, signal);
357
+ }
358
+ catch (e) {
359
+ if (isTransientError(e) || /in progress/.test(e.message)) {
360
+ if (params.maxRetries && params.maxRetries > 0) {
361
+ await api.v1.timers.sleep(2000 * 2 ** (5 - params.maxRetries)); // Exponential backoff: 2s, 4s, 8s, ... as retries are consumed.
362
+ return hyperGenerateWithRetry(messages, { ...params, maxRetries: params.maxRetries - 1 }, callback, behaviour, signal);
363
+ }
364
+ else {
365
+ throw new TransientError("[generateWithRetry] Transient error encountered and retries exhausted.");
366
+ }
367
+ }
368
+ else {
369
+ throw e;
370
+ }
371
+ }
372
+ }
373
+
374
+ const { log: log$1 } = api.v1;
375
+ const { get: get$1, set: set$1 } = api.v1.storyStorage;
376
+ const { get: getConfig } = api.v1.config;
377
+ class Agent {
378
+ maxTokens = 2048;
379
+ temperature = 1.0;
380
+ top_p = 0.95;
381
+ top_k = 0; // Intentionally disabled in favor of top_p and min_p.
382
+ min_p = 0;
383
+ presence_penalty = 0;
384
+ frequency_penalty = 0;
385
+ userPrompt = "";
386
+ title() {
387
+ return this.slug.charAt(0).toUpperCase() + this.slug.slice(1);
388
+ }
389
+ header() {
390
+ return `\n\n----\n**${this.title()}**\n\n`;
391
+ }
392
+ async load() {
393
+ const prompt = await getConfig(`${this.slug}_prompt`);
394
+ if (prompt)
395
+ this.userPrompt = prompt;
396
+ }
397
+ }
398
+ class BrainstormAgent extends Agent {
399
+ maxTokens = 1024;
400
+ temperature = 1.1;
401
+ presence_penalty = 0.7;
402
+ slug = "brainstorm";
403
+ icon = "cloud-lightning";
404
+ }
405
+ class CritiqueAgent extends Agent {
406
+ maxTokens = 800;
407
+ temperature = 0.3;
408
+ top_p = 0.7;
409
+ presence_penalty = 0.2;
410
+ frequency_penalty = 0.3;
411
+ slug = "critique";
412
+ icon = "flag";
413
+ }
414
+ class RefineAgent extends Agent {
415
+ maxTokens = 1500;
416
+ temperature = 0.5;
417
+ top_p = 0.8;
418
+ presence_penalty = 0.4;
419
+ frequency_penalty = 0.1;
420
+ slug = "refine";
421
+ icon = "pen-tool";
422
+ }
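+ // Sketch: a hypothetical extra agent only needs a slug (matched against a
+ // `<slug>_prompt` config entry by load()), an icon, and sampler overrides.
+ //
+ //   class LoreAgent extends Agent {
+ //       slug = "lore";
+ //       icon = "book";
+ //       temperature = 0.8;
+ //   }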
423
+ /**
424
+ * Utilities
425
+ */
426
+ const setInterval = (callback, interval) => {
427
+ let timerId;
428
+ const tick = async () => {
429
+ timerId = await api.v1.timers.setTimeout(() => {
430
+ callback(clear);
431
+ tick();
432
+ }, interval);
433
+ };
434
+ const clear = async () => api.v1.timers.clearTimeout(timerId);
435
+ tick();
436
+ return clear;
437
+ };
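+ // Usage sketch: tick once per second until done() (hypothetical) reports
+ // completion; invoking the clear callback stops the rescheduling chain.
+ //
+ //   const stop = setInterval((clear) => { if (done()) clear(); }, 1000);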
438
+ const currentEpochS = () => Math.floor(Date.now() / 1000);
439
+ class Chat {
440
+ // Constants
441
+ static CHAT_HISTORY_KEY = "kse-chat-history";
442
+ static AUTO_FLOW = {
443
+ brainstorm: "critique",
444
+ critique: "refine",
445
+ refine: "summary",
446
+ summary: "brainstorm",
447
+ };
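+ // One automatic pass walks brainstorm -> critique -> refine -> summary;
+ // autoModeFlow() switches auto mode off once the summary step has responded.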
448
+ // Properties
449
+ messages = [];
450
+ isGenerating = false;
451
+ waitTime = 0;
452
+ minTokens = 25;
453
+ systemPrompt = "";
454
+ autoMode = false;
455
+ agents;
456
+ agent;
457
+ clearInterval = async () => { };
458
+ cancelSignal = undefined;
459
+ lastResponder = "user";
460
+ synopsisId;
461
+ constructor(synopsisId) {
462
+ this.agents = [BrainstormAgent, CritiqueAgent, RefineAgent].map((a) => new a());
463
+ this.agent = this.agents[0];
464
+ this.synopsisId = synopsisId;
465
+ }
466
+ // Hooks
467
+ onUpdate = (_chat) => { };
468
+ onBudgetWait = async () => { };
469
+ // Handlers
470
+ handleClear = () => {
471
+ this.messages = [];
472
+ this.isGenerating = false;
473
+ this.agent = this.agents[0];
474
+ this.save();
475
+ this.load();
476
+ };
477
+ handleStreamMessage = (text, final) => {
478
+ const messageToAppend = this.messages.at(-1);
479
+ messageToAppend.content += text;
480
+ if (final) {
481
+ // Add trailing whitespace to the end of the message if needed
482
+ if (!/\s$/.test(messageToAppend.content))
483
+ messageToAppend.content = messageToAppend.content + " ";
484
+ this.save();
485
+ }
486
+ else {
487
+ this.onUpdate(this);
488
+ }
489
+ };
490
+ handleAgentSwitch = (role) => {
491
+ if (this.agent.slug == role)
492
+ return;
493
+ this.agent = this.agents.find((a) => a.slug == role);
494
+ this.onUpdate(this);
495
+ };
496
+ handleSendMessage = (content) => {
497
+ if (content.length > 0) {
498
+ this.addMessage("user", content + "\n");
499
+ this.lastResponder = "user";
500
+ }
501
+ if (this.lastResponder != this.agent.slug) {
502
+ this.addMessage("assistant", this.agent.header());
503
+ }
504
+ this.generateResponse();
505
+ };
506
+ // The interval is not reliable as a clock, so we capture the wait deadline in epoch seconds and recompute the remaining time on each tick.
507
+ handleBudgetWait = async (available, needed, time) => {
508
+ // Ensure that if there's an old interval we clear it.
509
+ await this.clearInterval();
510
+ const waitEnd = currentEpochS() + Math.floor(time / 1000);
511
+ this.onBudgetWait(available, needed, time);
512
+ this.clearInterval = setInterval((clear) => {
513
+ this.waitTime = waitEnd - currentEpochS();
514
+ if (this.waitTime <= 0)
515
+ clear();
516
+ this.onUpdate(this);
517
+ }, 1000);
518
+ };
519
+ handleBudgetResume = () => {
520
+ this.clearInterval();
521
+ this.waitTime = 0;
522
+ };
523
+ handleCancel = () => {
524
+ if (this.cancelSignal)
525
+ this.cancelSignal.cancel();
526
+ this.autoMode = false;
527
+ };
528
+ handleAuto = (value) => {
529
+ this.autoMode = value;
530
+ // When auto mode is switched off, fire cancellation signal and clear the interval and waitTime.
531
+ if (!this.autoMode) {
532
+ this.clearInterval();
533
+ this.cancelSignal?.cancel();
534
+ this.waitTime = 0;
535
+ this.isGenerating = false;
536
+ }
537
+ };
538
+ // Functions
539
+ async load() {
540
+ return Promise.all([
541
+ get$1(Chat.CHAT_HISTORY_KEY)
542
+ .then((history) => (this.messages = JSON.parse(history)))
543
+ .catch(() => (this.messages = [])),
544
+ getConfig("system_prompt").then((systemPrompt) => (this.systemPrompt = systemPrompt)),
545
+ ...this.agents.map((a) => a.load()), // Spread so Promise.all awaits each agent's load.
546
+ ]);
547
+ }
548
+ save() {
549
+ this.messages = this.messages.filter((m) => m.content && m.content.length > 0);
550
+ set$1(Chat.CHAT_HISTORY_KEY, JSON.stringify(this.messages)).then(() => {
551
+ this.onUpdate(this);
552
+ });
553
+ }
554
+ addMessage(role, content) {
555
+ if (content.length <= 0)
556
+ return;
557
+ this.messages.push({
558
+ role,
559
+ content,
560
+ });
561
+ this.save();
562
+ }
563
+ autoModeFlow() {
564
+ if (!this.autoMode)
565
+ return;
566
+ if (this.lastResponder == "summary") {
567
+ this.autoMode = false;
568
+ }
569
+ else {
570
+ const next = Chat.AUTO_FLOW[this.lastResponder];
571
+ this.handleAgentSwitch(next);
572
+ this.handleSendMessage("");
573
+ }
574
+ }
575
+ async generateResponse() {
576
+ const context = hyperContextBuilder({
577
+ role: "system",
578
+ content: this.systemPrompt.replaceAll("\n", "\n\n") + "\n\n", // Our prompts need to be double-spaced for GLM.
579
+ }, {
580
+ role: "user",
581
+ content: `${this.agent.userPrompt.replaceAll("\n", "\n\n")}\n\nLimit your response to ${Math.floor(this.agent.maxTokens / 1.5)} words.\n\n`,
582
+ }, {
583
+ role: "assistant",
584
+ content: `Understood.\n\n[Continuing:]\n`,
585
+ }, this.messages);
586
+ this.isGenerating = true;
587
+ this.lastResponder = this.agent.slug;
588
+ this.cancelSignal = await api.v1.createCancellationSignal();
589
+ try {
590
+ const response = await hyperGenerate(context, {
591
+ minTokens: 50,
592
+ maxTokens: this.agent.maxTokens,
593
+ onBudgetWait: this.handleBudgetWait,
594
+ onBudgetResume: this.handleBudgetResume,
595
+ temperature: this.agent.temperature,
596
+ top_p: this.agent.top_p,
597
+ top_k: this.agent.top_k,
598
+ min_p: this.agent.min_p,
599
+ presence_penalty: this.agent.presence_penalty,
600
+ frequency_penalty: this.agent.frequency_penalty,
601
+ }, this.handleStreamMessage, "background", this.cancelSignal);
602
+ this.isGenerating = false;
603
+ this.cancelSignal.dispose();
604
+ log$1("Generated:", response);
605
+ this.onUpdate(this);
606
+ this.save();
607
+ this.autoModeFlow();
608
+ }
609
+ catch (error) {
610
+ api.v1.log("Generation failed:", error);
611
+ }
612
+ }
613
+ }
614
+
615
+ const { part, update, extension } = api.v1.ui;
616
+ const { get, set } = api.v1.storyStorage;
617
+ const INPUT_ID = "kse-engine-chat-input";
618
+ const SIDEBAR_ID = "kse-sidebar";
619
+ // Colors
620
+ const NAI_YELLOW = "rgb(245, 243, 194)";
621
+ const NAI_NAVY = "rgb(19, 21, 44)";
622
+ // Basic UI helper wrappers
623
+ const column = (...content) => part.column({ content, style: { width: "100%" } });
624
+ const row = (...content) => part.row({ content });
625
+ const box = (...content) => part.box({ content });
626
+ const textMarkdown = (text) => part.text({
627
+ text,
628
+ markdown: true,
629
+ style: {
630
+ "user-select": "text",
631
+ "-webkit-user-select": "text",
632
+ width: "100%",
633
+ },
634
+ });
635
+ const button = (text = "", callback, iconId, { disabled } = {}) => part.button({ text, callback, disabled, iconId });
636
+ const toggleButton = (text = "", callback, iconId, toggled) => part.button({
637
+ text,
638
+ callback,
639
+ iconId,
640
+ style: toggled
641
+ ? {
642
+ "background-color": NAI_YELLOW,
643
+ color: NAI_NAVY,
644
+ }
645
+ : {},
646
+ });
647
+ /**
648
+ * createMessageBubble injects double-newlines because it improves how NAI
649
+ * formats markdown. Specifically, if the AI should output `Foo\n----` it would
650
+ * by default produce a `<h1>Foo</h1>` but if we instead do `Foo\n\n----` we get
651
+ * `<p>Foo</p><hr>`.
652
+ */
653
+ const createMessageBubble = (message) => message.role == "user"
654
+ ? box(textMarkdown(message.content?.replaceAll("\n", "\n\n") || ""))
655
+ : textMarkdown(message.content?.replaceAll("\n", "\n\n") || "");
656
+ // RadioGroup implements a radio button group.
657
+ class RadioGroup {
658
+ onSwitch = (_text) => { };
659
+ onAutoCheckbox = (_value) => { };
660
+ handleSwitch = (current, next) => {
661
+ if (current == next)
662
+ return;
663
+ this.onSwitch(next);
664
+ };
665
+ handleAutoCheckbox = (value) => this.onAutoCheckbox(value);
666
+ render = (selected, semiAutomatic, options) => row(...options.map((o) => toggleButton(o.text, () => this.handleSwitch(selected, o.id), o.icon, o.id == selected)), part.checkboxInput({
667
+ initialValue: semiAutomatic,
668
+ label: "Auto",
669
+ onChange: this.handleAutoCheckbox,
670
+ }));
671
+ }
672
+ // SendButton does triple duty: 1. Sending. While generating, it turns into an X that triggers cancellation.
673
+ // 2. When a budget wait occurs, it becomes a fast-forward control that acknowledges the wait.
674
+ // 3. Once acknowledged, it becomes a clock showing the seconds remaining until generation continues.
675
+ class SendButton {
676
+ isInteractionWaiting = false;
677
+ onSend = () => { };
678
+ onCancel = () => { };
679
+ setInteractionWaiting() {
680
+ this.isInteractionWaiting = true;
681
+ }
682
+ handleContinue = () => {
683
+ this.isInteractionWaiting = false;
684
+ };
685
+ handleSend = () => {
686
+ this.isInteractionWaiting = false;
687
+ this.onSend();
688
+ };
689
+ handleCancel = () => {
690
+ this.isInteractionWaiting = false;
691
+ this.onCancel();
692
+ };
693
+ render = (isGenerating, waitTime) => {
694
+ if (isGenerating) {
695
+ if (this.isInteractionWaiting) {
696
+ return {
697
+ ...button("", this.handleContinue, "fast-forward"),
698
+ ...{ style: { color: NAI_YELLOW } },
699
+ };
700
+ }
701
+ else if (waitTime > 0) {
702
+ return {
703
+ ...button(waitTime.toString(), () => { }, "time"),
704
+ ...{
705
+ style: { "flex-direction": "column", "justify-content": "center" },
706
+ },
707
+ };
708
+ }
709
+ else {
710
+ return button("", this.handleCancel, "x");
711
+ }
712
+ }
713
+ else {
714
+ return button("", this.handleSend, "send");
715
+ }
716
+ };
717
+ }
718
+ // ChatUI renders the chat state; its render output is a pure function of that state.
719
+ class ChatUI {
720
+ // Hooks
721
+ onSendMessage = (_text) => { };
722
+ onCancel = () => { };
723
+ onClear = () => { };
724
+ onAgentSelect = (_value) => { };
725
+ onAuto = (_value) => { };
726
+ // Handlers
727
+ handleSendMessage = () => get(INPUT_ID).then((text) => set(INPUT_ID, "").then(() => this.onSendMessage(text)));
728
+ handleBudgetWait = () => this.sendButton.setInteractionWaiting();
729
+ handleCancel = () => this.onCancel();
730
+ handleAgentSelect = (value) => this.onAgentSelect(value);
731
+ handleAuto = (value) => this.onAuto(value);
732
+ // Helpers
733
+ sidebar = extension.sidebarPanel({
734
+ id: SIDEBAR_ID,
735
+ name: "Story Chat",
736
+ content: [],
737
+ });
738
+ // Functions
739
+ // subcomponents
740
+ agentModeSelector = new RadioGroup();
741
+ sendButton = new SendButton();
742
+ constructor() {
743
+ this.sendButton.onSend = this.handleSendMessage;
744
+ this.sendButton.onCancel = this.handleCancel;
745
+ this.agentModeSelector.onAutoCheckbox = this.handleAuto;
746
+ this.agentModeSelector.onSwitch = this.handleAgentSelect;
747
+ }
748
+ render({ messages, isGenerating, waitTime, agent: { slug: role }, agents, autoMode, }) {
749
+ return update([
750
+ {
751
+ ...this.sidebar,
752
+ content: [
753
+ {
754
+ ...column({
755
+ ...column(...messages
756
+ .filter((m) => m.role != "system")
757
+ .map(createMessageBubble)
758
+ .reverse()),
759
+ ...{
760
+ style: {
761
+ flex: "1 1 auto",
762
+ "min-height": 0,
763
+ "overflow-y": "auto",
764
+ display: "flex",
765
+ "flex-direction": "column-reverse",
766
+ "justify-content": "flex-start",
767
+ },
768
+ },
769
+ }, {
770
+ ...column(this.agentModeSelector.render(role, autoMode, agents.map((a) => ({
771
+ id: a.slug,
772
+ icon: a.icon,
773
+ text: "", //a.title(),
774
+ }))), row(part.multilineTextInput({
775
+ storageKey: `story:${INPUT_ID}`,
776
+ placeholder: "Type your story idea or question here...",
777
+ onSubmit: this.handleSendMessage,
778
+ }), row(this.sendButton.render(isGenerating, waitTime), button("", this.onClear, "trash")))),
779
+ ...{
780
+ style: {
781
+ flex: "0 0 auto",
782
+ "padding-bottom": "env(safe-area-inset-bottom)",
783
+ },
784
+ },
785
+ }),
786
+ ...{
787
+ style: {
788
+ height: "100%",
789
+ "min-height": 0,
790
+ "justify-content": "flex-start",
791
+ },
792
+ }, // Ensure we fill the whole column and get our own scroller
793
+ },
794
+ ],
795
+ },
796
+ ]);
797
+ }
798
+ }
799
+ class EngineUI {
800
+ // Constants
801
+ static SIDEBAR_ID = "kse-engine-sidebar";
802
+ // Properties
803
+ synopsisId;
804
+ constructor(synopsisId) {
805
+ this.synopsisId = synopsisId;
806
+ }
807
+ // Components
808
+ sidebar = extension.sidebarPanel({
809
+ id: EngineUI.SIDEBAR_ID,
810
+ name: "Story Engine",
811
+ content: [
812
+ part.multilineTextInput({
813
+ storageKey: `story:${this.synopsisId}`,
814
+ placeholder: "Write your story idea or synopsis here...",
815
+ }),
816
+ ],
817
+ });
818
+ }
819
+
820
+ // Scenario Engine
821
+ // Helpers
822
+ const log = api.v1.log;
823
+ const SYNOPSIS_ID = "kse-synopsis";
824
+ (async () => {
825
+ try {
826
+ const ui = new ChatUI();
827
+ const engineUI = new EngineUI(SYNOPSIS_ID);
828
+ const chat = new Chat(SYNOPSIS_ID);
829
+ await chat.load();
830
+ // Wiring the UI to the Chat state
831
+ ui.onSendMessage = chat.handleSendMessage;
832
+ ui.onClear = chat.handleClear;
833
+ ui.onCancel = chat.handleCancel;
834
+ ui.onAgentSelect = chat.handleAgentSwitch;
835
+ ui.onAuto = chat.handleAuto;
836
+ chat.onUpdate = ui.render.bind(ui);
837
+ chat.onBudgetWait = async () => ui.handleBudgetWait();
838
+ api.v1.ui.register([ui.sidebar, engineUI.sidebar]);
839
+ ui.render(chat);
840
+ }
841
+ catch (e) {
842
+ log("Startup error:", e);
843
+ }
844
+ })();