chainlesschain 0.37.7 → 0.37.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -102,6 +102,100 @@ chainlesschain doctor
 
 Checks: Node.js version, npm, Docker, Docker Compose, Git, config directory, binary installation, setup status, port availability, disk space.
 
+ ---
+
+ ## Headless Commands
+
+ These commands work without the desktop GUI, using core packages directly.
+
+ ### `chainlesschain db <action>`
+
+ Database management.
+
+ ```bash
+ chainlesschain db init                 # Initialize the database
+ chainlesschain db init --path ./my.db  # Custom database path
+ chainlesschain db info                 # Show database info (driver, tables, size)
+ chainlesschain db info --json          # JSON output
+ chainlesschain db backup [output]      # Create backup
+ chainlesschain db restore <backup>     # Restore from backup
+ ```
+
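When no output path is given, `db backup` writes to a timestamped file under the data directory. The filename pattern below is lifted from the command implementation later in this diff; the `dataDir` value is illustrative only (the real one comes from the bootstrap context):

```javascript
// Default backup filename used by `chainlesschain db backup`:
// ISO timestamp with ':' and '.' replaced by '-', appended to a fixed prefix.
const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
const dataDir = "/home/user/.chainlesschain"; // illustrative; really ctx.env.dataDir
const backupPath = `${dataDir}/chainlesschain.db.backup.${timestamp}`;
console.log(backupPath);
```

This keeps backups sortable by creation time and safe to use as filenames on all platforms.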
+ ### `chainlesschain note <action>`
+
+ Note and knowledge base management.
+
+ ```bash
+ chainlesschain note add "My Note" -c "Content here" -t "tag1,tag2"
+ chainlesschain note list               # List recent notes
+ chainlesschain note list --category dev --tag important
+ chainlesschain note show <id>          # Show note by ID prefix
+ chainlesschain note search "keyword"   # Full-text search
+ chainlesschain note delete <id>        # Soft delete
+ ```
+
+ ### `chainlesschain chat`
+
+ Start an interactive AI chat session with streaming output.
+
+ ```bash
+ chainlesschain chat                    # Default: Ollama qwen2:7b
+ chainlesschain chat --model llama3     # Use different model
+ chainlesschain chat --provider openai --api-key sk-...
+ chainlesschain chat --agent            # Agentic mode (can read/write files)
+ ```
+
+ Slash commands in chat: `/exit`, `/model`, `/provider`, `/clear`, `/history`, `/help`
+
+ ### `chainlesschain ask <question>`
+
+ Single-shot AI question (non-interactive).
+
+ ```bash
+ chainlesschain ask "What is WebRTC?"
+ chainlesschain ask "Explain this code" --model gpt-4o --provider openai
+ chainlesschain ask "Hello" --json      # JSON output with question/answer/model
+ ```
+
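The `--json` envelope is a flat object with `question`, `answer`, `model`, and `provider` fields (see the `ask` command source later in this diff), which makes it easy to consume from scripts. A minimal Node sketch, using a hand-written sample payload in place of real CLI output:

```javascript
// Shape of `chainlesschain ask --json` output: { question, answer, model, provider }.
// The payload below is a hand-written sample standing in for captured stdout.
const stdout = JSON.stringify(
  {
    question: "What is WebRTC?",
    answer: "A browser standard for real-time peer-to-peer media and data.",
    model: "qwen2:7b",
    provider: "ollama",
  },
  null,
  2,
);

// A consumer would typically capture stdout (e.g. via child_process.execFile)
// and parse it like this:
const { question, answer, model, provider } = JSON.parse(stdout);
console.log(`[${provider}/${model}] ${question} -> ${answer}`);
```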
+ ### `chainlesschain llm <action>`
+
+ LLM provider management.
+
+ ```bash
+ chainlesschain llm models              # List installed Ollama models
+ chainlesschain llm models --json       # JSON output
+ chainlesschain llm test                # Test Ollama connectivity
+ chainlesschain llm test --provider openai --api-key sk-...
+ ```
+
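`llm models` reads Ollama's `/api/tags` response, an object of the form `{ models: [{ name, size }] }`, and prints sizes in GB (see the llm command source later in this diff). A small sketch of that formatting step, with sample data standing in for a live Ollama server:

```javascript
// Sample /api/tags payload; a real one would come from
// fetch("http://localhost:11434/api/tags").
const data = {
  models: [
    { name: "qwen2:7b", size: 4_431_000_000 },
    { name: "llama3", size: 4_661_000_000 },
  ],
};

// Same formatting as the CLI: padded name plus size in GB to one decimal.
const lines = data.models.map(
  (m) => `${m.name.padEnd(30)} (${(m.size / 1e9).toFixed(1)}GB)`,
);
console.log(lines.join("\n"));
```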
+ ### `chainlesschain agent` (alias: `a`)
+
+ Start an agentic AI session — the AI can read/write files, run shell commands, search the codebase, and invoke 138 built-in skills.
+
+ ```bash
+ chainlesschain agent                   # Default: Ollama qwen2:7b
+ chainlesschain a --model llama3        # Short alias
+ chainlesschain agent --provider openai --api-key sk-...
+ ```
+
+ Built-in tools: `read_file`, `write_file`, `edit_file`, `run_shell`, `search_files`, `list_dir`, `run_skill`, `list_skills`
+
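Conceptually, each of those tools is a named function the model can request by name with arguments. The sketch below is purely illustrative: only the tool names mirror the package; the registry, stubbed handlers, and `runTool` helper are invented here, and the real dispatch lives in the agent REPL.

```javascript
// Hypothetical tool registry keyed by the tool names the README lists.
// Handlers are stubs; real ones would touch the filesystem or shell.
const tools = {
  list_dir: () => ["README.md", "package.json"], // stubbed result
  read_file: (args) => `contents of ${args.path}`, // stubbed result
};

// Dispatch one model-issued call of the form { tool, args }.
function runTool(call) {
  const tool = tools[call.tool];
  if (!tool) throw new Error(`Unknown tool: ${call.tool}`);
  return tool(call.args ?? {});
}

console.log(runTool({ tool: "read_file", args: { path: "README.md" } }));
```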
+ ### `chainlesschain skill <action>`
+
+ Manage and run 138 built-in AI skills.
+
+ ```bash
+ chainlesschain skill list              # List all skills grouped by category
+ chainlesschain skill list --category automation
+ chainlesschain skill list --tag code --runnable
+ chainlesschain skill list --json       # JSON output
+ chainlesschain skill categories        # Show category breakdown
+ chainlesschain skill info code-review  # Detailed skill info + docs
+ chainlesschain skill info code-review --json
+ chainlesschain skill search "browser"  # Search by keyword
+ chainlesschain skill run code-review "Review this function..."
+ ```
+
 ## Global Options
 
 ```bash
@@ -171,12 +265,23 @@ Configuration is stored at `~/.chainlesschain/config.json`. The CLI creates and
 ```bash
 cd packages/cli
 npm install
- npm test                 # Run all tests
- npm run test:unit        # Unit tests only
- npm run test:integration # Integration tests
- npm run test:e2e         # End-to-end tests
+ npm test                 # Run all tests (117 tests across 18 files)
+ npm run test:unit        # Unit tests only (10 files)
+ npm run test:integration # Integration tests (3 files)
+ npm run test:e2e         # End-to-end tests (5 files)
 ```
 
+ ### Test Coverage
+
+ | Category           | Files  | Tests   | Status          |
+ | ------------------ | ------ | ------- | --------------- |
+ | Unit — lib modules | 8      | 52      | All passing     |
+ | Unit — commands    | 3      | 31      | All passing     |
+ | Unit — runtime     | 1      | 6       | All passing     |
+ | Integration        | 3      | 7       | All passing     |
+ | E2E                | 3      | 21      | All passing     |
+ | **Total**          | **18** | **117** | **All passing** |
+
 ## License
 
 MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "chainlesschain",
-  "version": "0.37.7",
+  "version": "0.37.9",
   "description": "CLI for ChainlessChain - install, configure, and manage your personal AI management system",
   "type": "module",
   "bin": {
@@ -45,7 +45,12 @@
     "@inquirer/prompts": "^7.2.0",
     "chalk": "^5.4.1",
     "ora": "^8.1.1",
-    "semver": "^7.6.3"
+    "semver": "^7.6.3",
+    "@chainlesschain/core-env": "0.1.0",
+    "@chainlesschain/shared-logger": "0.1.0",
+    "@chainlesschain/core-db": "0.1.0",
+    "@chainlesschain/core-config": "0.1.0",
+    "@chainlesschain/core-infra": "0.1.0"
   },
   "devDependencies": {
     "vitest": "^3.1.1"
@@ -0,0 +1,30 @@
+ /**
+  * Agentic AI assistant - Claude Code style
+  * chainlesschain agent [--model] [--provider]
+  *
+  * User describes what they want in natural language.
+  * AI reads files, writes code, runs commands, and explains what it's doing.
+  */
+
+ import { startAgentRepl } from "../repl/agent-repl.js";
+
+ export function registerAgentCommand(program) {
+   program
+     .command("agent")
+     .alias("a")
+     .description(
+       "Start an agentic AI session (reads/writes files, runs commands)",
+     )
+     .option("--model <model>", "Model name", "qwen2:7b")
+     .option("--provider <provider>", "LLM provider (ollama, openai)", "ollama")
+     .option("--base-url <url>", "API base URL")
+     .option("--api-key <key>", "API key")
+     .action(async (options) => {
+       await startAgentRepl({
+         model: options.model,
+         provider: options.provider,
+         baseUrl: options.baseUrl,
+         apiKey: options.apiKey,
+       });
+     });
+ }
@@ -0,0 +1,114 @@
+ /**
+  * Single-shot AI question command
+  * chainlesschain ask "What is..." [--model qwen2:7b] [--provider ollama] [--json]
+  */
+
+ import ora from "ora";
+ import chalk from "chalk";
+ import { logger } from "../lib/logger.js";
+
+ /**
+  * Send a single question to an LLM provider
+  * @param {string} question
+  * @param {object} options
+  * @returns {Promise<string>}
+  */
+ async function queryLLM(question, options = {}) {
+   const provider = options.provider || "ollama";
+   const model = options.model || "qwen2:7b";
+   const baseUrl = options.baseUrl || "http://localhost:11434";
+
+   if (provider === "ollama") {
+     const response = await fetch(`${baseUrl}/api/generate`, {
+       method: "POST",
+       headers: { "Content-Type": "application/json" },
+       body: JSON.stringify({
+         model,
+         prompt: question,
+         stream: false,
+       }),
+     });
+
+     if (!response.ok) {
+       throw new Error(
+         `Ollama error: ${response.status} ${response.statusText}`,
+       );
+     }
+
+     const data = await response.json();
+     return data.response;
+   } else if (provider === "openai") {
+     const apiKey = options.apiKey || process.env.OPENAI_API_KEY;
+     if (!apiKey)
+       throw new Error("OpenAI API key required (--api-key or OPENAI_API_KEY)");
+
+     const apiBase = options.baseUrl || "https://api.openai.com/v1";
+     const response = await fetch(`${apiBase}/chat/completions`, {
+       method: "POST",
+       headers: {
+         "Content-Type": "application/json",
+         Authorization: `Bearer ${apiKey}`,
+       },
+       body: JSON.stringify({
+         model: model || "gpt-4o-mini",
+         messages: [{ role: "user", content: question }],
+       }),
+     });
+
+     if (!response.ok) {
+       throw new Error(
+         `OpenAI error: ${response.status} ${response.statusText}`,
+       );
+     }
+
+     const data = await response.json();
+     return data.choices[0].message.content;
+   }
+
+   throw new Error(`Unsupported provider: ${provider}`);
+ }
+
+ export function registerAskCommand(program) {
+   program
+     .command("ask")
+     .description("Ask a question to the AI (single-shot)")
+     .argument("<question>", "The question to ask")
+     .option("--model <model>", "Model name", "qwen2:7b")
+     .option("--provider <provider>", "LLM provider (ollama, openai)", "ollama")
+     .option("--base-url <url>", "API base URL")
+     .option("--api-key <key>", "API key")
+     .option("--json", "Output as JSON")
+     .action(async (question, options) => {
+       const spinner = ora("Thinking...").start();
+       try {
+         const answer = await queryLLM(question, {
+           model: options.model,
+           provider: options.provider,
+           baseUrl: options.baseUrl,
+           apiKey: options.apiKey,
+         });
+
+         spinner.stop();
+
+         if (options.json) {
+           console.log(
+             JSON.stringify(
+               {
+                 question,
+                 answer,
+                 model: options.model,
+                 provider: options.provider,
+               },
+               null,
+               2,
+             ),
+           );
+         } else {
+           logger.log(answer);
+         }
+       } catch (err) {
+         spinner.fail(`Failed: ${err.message}`);
+         process.exit(1);
+       }
+     });
+ }
@@ -0,0 +1,35 @@
+ /**
+  * Interactive AI chat command
+  * chainlesschain chat [--model qwen2:7b] [--provider ollama] [--agent]
+  */
+
+ import { startChatRepl } from "../repl/chat-repl.js";
+ import { startAgentRepl } from "../repl/agent-repl.js";
+
+ export function registerChatCommand(program) {
+   program
+     .command("chat")
+     .description("Start an interactive AI chat session")
+     .option("--model <model>", "Model name", "qwen2:7b")
+     .option("--provider <provider>", "LLM provider (ollama, openai)", "ollama")
+     .option("--base-url <url>", "API base URL")
+     .option("--api-key <key>", "API key")
+     .option(
+       "--agent",
+       "Agentic mode - AI can read/write files and run commands (like Claude Code)",
+     )
+     .action(async (options) => {
+       const replOptions = {
+         model: options.model,
+         provider: options.provider,
+         baseUrl: options.baseUrl,
+         apiKey: options.apiKey,
+       };
+
+       if (options.agent) {
+         await startAgentRepl(replOptions);
+       } else {
+         await startChatRepl(replOptions);
+       }
+     });
+ }
@@ -0,0 +1,152 @@
+ /**
+  * Database management commands
+  * chainlesschain db init|info|backup|restore|migrate
+  */
+
+ import ora from "ora";
+ import chalk from "chalk";
+ import { logger } from "../lib/logger.js";
+ import { bootstrap, shutdown } from "../runtime/bootstrap.js";
+
+ export function registerDbCommand(program) {
+   const db = program.command("db").description("Database management");
+
+   // db init
+   db.command("init")
+     .description("Initialize the database")
+     .option("--path <path>", "Custom database path")
+     .option("--force", "Overwrite existing database")
+     .action(async (options) => {
+       const spinner = ora("Initializing database...").start();
+       try {
+         const ctx = await bootstrap({
+           dbPath: options.path,
+           verbose: program.opts().verbose,
+         });
+
+         if (!ctx.db) {
+           spinner.fail("Failed to initialize database");
+           process.exit(1);
+         }
+
+         const info = ctx.db.getInfo();
+         spinner.succeed("Database initialized");
+         logger.log(` Path: ${chalk.cyan(info.path)}`);
+         logger.log(` Driver: ${chalk.cyan(info.driver)}`);
+         logger.log(` Tables: ${chalk.cyan(info.tableCount)}`);
+
+         await shutdown();
+       } catch (err) {
+         spinner.fail(`Database init failed: ${err.message}`);
+         process.exit(1);
+       }
+     });
+
+   // db info
+   db.command("info")
+     .description("Show database information")
+     .option("--json", "Output as JSON")
+     .action(async (options) => {
+       try {
+         const ctx = await bootstrap({ verbose: program.opts().verbose });
+
+         if (!ctx.db) {
+           logger.error("No database available");
+           process.exit(1);
+         }
+
+         const info = ctx.db.getInfo();
+
+         if (options.json) {
+           console.log(JSON.stringify(info, null, 2));
+         } else {
+           logger.log(chalk.bold("Database Info:"));
+           logger.log(` Path: ${chalk.cyan(info.path)}`);
+           logger.log(` Driver: ${chalk.cyan(info.driver)}`);
+           logger.log(
+             ` Encrypted: ${info.encrypted ? chalk.green("Yes") : chalk.gray("No")}`,
+           );
+           logger.log(` Size: ${chalk.cyan(info.fileSizeMB + " MB")}`);
+           logger.log(` Tables: ${chalk.cyan(info.tableCount)}`);
+           if (info.tables.length > 0) {
+             logger.log(
+               ` Table list: ${chalk.gray(info.tables.slice(0, 10).join(", "))}${info.tables.length > 10 ? "..." : ""}`,
+             );
+           }
+         }
+
+         await shutdown();
+       } catch (err) {
+         logger.error(`Failed to get db info: ${err.message}`);
+         process.exit(1);
+       }
+     });
+
+   // db backup
+   db.command("backup")
+     .description("Create database backup")
+     .argument("[output]", "Backup file path")
+     .action(async (output) => {
+       const spinner = ora("Creating backup...").start();
+       try {
+         const ctx = await bootstrap({ verbose: program.opts().verbose });
+
+         if (!ctx.db) {
+           spinner.fail("No database to backup");
+           process.exit(1);
+         }
+
+         const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
+         const backupPath =
+           output || `${ctx.env.dataDir}/chainlesschain.db.backup.${timestamp}`;
+
+         ctx.db.backup(backupPath);
+         spinner.succeed(`Backup created: ${chalk.cyan(backupPath)}`);
+
+         await shutdown();
+       } catch (err) {
+         spinner.fail(`Backup failed: ${err.message}`);
+         process.exit(1);
+       }
+     });
+
+   // db restore
+   db.command("restore")
+     .description("Restore database from backup")
+     .argument("<backup>", "Backup file path")
+     .option("--force", "Skip confirmation")
+     .action(async (backup, options) => {
+       try {
+         const fs = await import("fs");
+         if (!fs.existsSync(backup)) {
+           logger.error(`Backup file not found: ${backup}`);
+           process.exit(1);
+         }
+
+         if (!options.force) {
+           const { confirm } = await import("@inquirer/prompts");
+           const ok = await confirm({
+             message: "This will overwrite the current database. Continue?",
+           });
+           if (!ok) {
+             logger.info("Restore cancelled");
+             return;
+           }
+         }
+
+         const ctx = await bootstrap({
+           skipDb: true,
+           verbose: program.opts().verbose,
+         });
+         const dbPath = `${ctx.env.dataDir}/chainlesschain.db`;
+
+         fs.copyFileSync(backup, dbPath);
+         logger.success(`Database restored from ${chalk.cyan(backup)}`);
+
+         await shutdown();
+       } catch (err) {
+         logger.error(`Restore failed: ${err.message}`);
+         process.exit(1);
+       }
+     });
+ }
@@ -0,0 +1,137 @@
+ /**
+  * LLM management commands
+  * chainlesschain llm list|test|models
+  */
+
+ import ora from "ora";
+ import chalk from "chalk";
+ import { logger } from "../lib/logger.js";
+
+ export function registerLlmCommand(program) {
+   const llm = program.command("llm").description("LLM provider management");
+
+   // llm models - list available models
+   llm
+     .command("models")
+     .description("List available models from the current provider")
+     .option("--provider <provider>", "LLM provider", "ollama")
+     .option("--base-url <url>", "API base URL", "http://localhost:11434")
+     .option("--json", "Output as JSON")
+     .action(async (options) => {
+       const spinner = ora("Fetching models...").start();
+       try {
+         if (options.provider === "ollama") {
+           const response = await fetch(`${options.baseUrl}/api/tags`);
+           if (!response.ok) {
+             throw new Error(`Ollama not reachable: ${response.status}`);
+           }
+           const data = await response.json();
+           spinner.stop();
+
+           if (options.json) {
+             console.log(JSON.stringify(data.models, null, 2));
+           } else {
+             if (!data.models || data.models.length === 0) {
+               logger.info("No models installed. Run: ollama pull qwen2:7b");
+             } else {
+               logger.log(chalk.bold(`Models (${data.models.length}):\n`));
+               for (const m of data.models) {
+                 const size = m.size
+                   ? chalk.gray(`(${(m.size / 1e9).toFixed(1)}GB)`)
+                   : "";
+                 logger.log(` ${chalk.cyan(m.name.padEnd(30))} ${size}`);
+               }
+             }
+           }
+         } else {
+           spinner.fail(
+             `Model listing not supported for provider: ${options.provider}`,
+           );
+         }
+       } catch (err) {
+         spinner.fail(`Failed to list models: ${err.message}`);
+         process.exit(1);
+       }
+     });
+
+   // llm test - test connectivity
+   llm
+     .command("test")
+     .description("Test LLM provider connectivity")
+     .option("--provider <provider>", "LLM provider", "ollama")
+     .option("--model <model>", "Model to test", "qwen2:7b")
+     .option("--base-url <url>", "API base URL", "http://localhost:11434")
+     .option("--api-key <key>", "API key")
+     .action(async (options) => {
+       const spinner = ora(`Testing ${options.provider}...`).start();
+       try {
+         const start = Date.now();
+
+         if (options.provider === "ollama") {
+           const response = await fetch(`${options.baseUrl}/api/generate`, {
+             method: "POST",
+             headers: { "Content-Type": "application/json" },
+             body: JSON.stringify({
+               model: options.model,
+               prompt: "Say hi in one word.",
+               stream: false,
+             }),
+           });
+
+           if (!response.ok) {
+             throw new Error(`HTTP ${response.status}`);
+           }
+
+           const data = await response.json();
+           const elapsed = Date.now() - start;
+
+           spinner.succeed(
+             `${chalk.green("Connected")} to Ollama (${options.model}) in ${elapsed}ms`,
+           );
+           logger.log(
+             ` Response: ${chalk.gray(data.response.trim().substring(0, 100))}`,
+           );
+         } else if (options.provider === "openai") {
+           const apiKey = options.apiKey || process.env.OPENAI_API_KEY;
+           if (!apiKey) throw new Error("API key required");
+
+           const url =
+             options.baseUrl !== "http://localhost:11434"
+               ? options.baseUrl
+               : "https://api.openai.com/v1";
+
+           const response = await fetch(`${url}/chat/completions`, {
+             method: "POST",
+             headers: {
+               "Content-Type": "application/json",
+               Authorization: `Bearer ${apiKey}`,
+             },
+             body: JSON.stringify({
+               model: options.model || "gpt-4o-mini",
+               messages: [{ role: "user", content: "Say hi in one word." }],
+               max_tokens: 10,
+             }),
+           });
+
+           if (!response.ok) {
+             throw new Error(`HTTP ${response.status}`);
+           }
+
+           const data = await response.json();
+           const elapsed = Date.now() - start;
+
+           spinner.succeed(
+             `${chalk.green("Connected")} to OpenAI (${options.model || "gpt-4o-mini"}) in ${elapsed}ms`,
+           );
+           logger.log(
+             ` Response: ${chalk.gray(data.choices[0].message.content.trim())}`,
+           );
+         } else {
+           spinner.fail(`Unknown provider: ${options.provider}`);
+         }
+       } catch (err) {
+         spinner.fail(`Test failed: ${err.message}`);
+         process.exit(1);
+       }
+     });
+ }