askimo 1.0.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,57 +1,71 @@
+ <p align="center">
+   <img width="400" height="400" alt="Askimo"
+     src="https://github.com/user-attachments/assets/cbf2ab5d-5a07-45a2-9109-6a7bc22ea878" />
+ </p>
+
  # Askimo

- A CLI tool for communicating with AI providers (Perplexity, OpenAI, Anthropic).
+ A CLI tool for communicating with AI providers.
+
+ **Supported providers:** Perplexity · OpenAI · Anthropic · xAI (Grok)
+
+ ---

- ## Installation
+ ## 📦 Installation

  ```bash
  npm install -g askimo
  ```

- ## Configuration
+ ## ⚙️ Configuration

  Create a config file at `~/.askimo/config`:

- ```
+ ```bash
  # API Keys (at least one required)
  PERPLEXITY_API_KEY=your-perplexity-key
  OPENAI_API_KEY=your-openai-key
  ANTHROPIC_API_KEY=your-anthropic-key
+ XAI_API_KEY=your-xai-key

  # Optional settings
  DEFAULT_PROVIDER=perplexity
  PERPLEXITY_MODEL=sonar
  OPENAI_MODEL=gpt-4o
  ANTHROPIC_MODEL=claude-sonnet-4-20250514
+ XAI_MODEL=grok-4
  ```

- ## Usage
+ ---

- ### Ask a single question
+ ## 🚀 Usage
+
+ ### Quick question

  ```bash
  askimo "What is the capital of France?"
  ```

- The `ask` command is the default, so you can omit it:
+ ### Choose a provider

- ```bash
- askimo ask "What is the capital of France?"
- ```
-
- ### Provider flags
+ | Flag | Provider             |
+ |------|----------------------|
+ | `-p` | Perplexity (default) |
+ | `-o` | OpenAI               |
+ | `-a` | Anthropic            |
+ | `-x` | xAI (Grok)           |

  ```bash
- askimo "question" -p   # Use Perplexity (default)
- askimo "question" -o   # Use OpenAI
- askimo "question" -a   # Use Anthropic
+ askimo "explain quantum computing" -o   # Use OpenAI
+ askimo "write a haiku" -a               # Use Anthropic
+ askimo "what's happening today?" -x     # Use xAI Grok
  ```

  ### Continue a conversation

  ```bash
- askimo "follow up question" -c 1   # Continue last conversation
- askimo "follow up question" -c 2   # Continue second-to-last
+ askimo "tell me more" -c 1   # Continue last conversation
+ askimo "go deeper" -c 2      # Continue second-to-last
  ```

  ### JSON output
@@ -60,36 +74,55 @@ askimo "follow up question" -c 2 # Continue second-to-last
  askimo "question" --json
  ```

- Returns structured JSON with provider, model, question, response, and sources (for Perplexity).
+ ### Pipe content
+
+ ```bash
+ cat code.js | askimo "explain this code"
+ echo "hello world" | askimo "translate to French"
+ ```
+
+ ### Read from file
+
+ ```bash
+ askimo -f code.js "what does this do"
+ askimo -f error.log "find the bug"
+ ```

  ### Interactive chat

  ```bash
- askimo chat
- askimo chat -o     # Chat with OpenAI
- askimo chat -c 1   # Continue last conversation
+ askimo chat        # Start new chat
+ askimo chat -o     # Chat with OpenAI
+ askimo chat -x     # Chat with xAI Grok
+ askimo chat -c 1   # Continue last conversation
  ```

- Type `exit` or press `Ctrl+C` to quit.
+ Type `exit` or `Ctrl+C` to quit.

- ### List available models
+ ### List models

  ```bash
- askimo models      # List all providers
- askimo models -p   # Perplexity only
- askimo models -o   # OpenAI only
- askimo models -a   # Anthropic only
+ askimo models      # All providers
+ askimo models -p   # Perplexity only
+ askimo models -x   # xAI only
  ```

- ## Features
+ ---
+
+ ## ✨ Features

- - Streaming responses
- - Conversation history (saved to `~/.askimo/conversations/`)
- - Source citations (Perplexity)
- - Multiple AI providers
- - Configurable default models
+ | Feature        | Description                                       |
+ |----------------|---------------------------------------------------|
+ | Streaming      | Real-time response output                         |
+ | Piping         | Pipe content via stdin                            |
+ | File input     | Read content from files with `-f`                 |
+ | Citations      | Source links with Perplexity                      |
+ | History        | Conversations saved to `~/.askimo/conversations/` |
+ | Multi-provider | Switch between AI providers easily                |

- ## Development
+ ---
+
+ ## 🛠️ Development

  ```bash
  npm install
@@ -97,6 +130,8 @@ npm test
  npm run lint
  ```

- ## License
+ ---
+
+ ## 📄 License

  Apache-2.0
package/index.mjs CHANGED
@@ -4,6 +4,7 @@ import { Command } from 'commander'
  import { startChat } from './lib/chat.mjs'
  import { ensureDirectories, loadConfig } from './lib/config.mjs'
  import { createConversation, loadConversation, saveConversation } from './lib/conversation.mjs'
+ import { buildMessage, readFile, readStdin } from './lib/input.mjs'
  import { DEFAULT_MODELS, determineProvider, getProvider, listModels } from './lib/providers.mjs'
  import { generateResponse, outputJson, streamResponse } from './lib/stream.mjs'
  import pkg from './package.json' with { type: 'json' }
@@ -15,14 +16,32 @@ program.name('askimo').description('CLI tool for communicating with AI providers
  program
    .command('ask', { isDefault: true })
    .description('Ask a single question')
-   .argument('<question>', 'The question to ask')
+   .argument('[question]', 'The question to ask (can also pipe content via stdin)')
    .option('-p, --perplexity', 'Use Perplexity AI (default)')
    .option('-o, --openai', 'Use OpenAI')
    .option('-a, --anthropic', 'Use Anthropic Claude')
+   .option('-x, --xai', 'Use xAI Grok')
    .option('-j, --json', 'Output as JSON instead of streaming')
    .option('-c, --continue <n>', 'Continue conversation N (1=last, 2=second-to-last)', Number.parseInt)
+   .option('-f, --file <path>', 'Read content from file')
    .action(async (question, options) => {
      try {
+       const stdinContent = await readStdin()
+       const fileContent = options.file ? await readFile(options.file) : null
+
+       if (stdinContent && options.file) {
+         console.error('Error: Cannot use both piped input and --file flag')
+         process.exit(1)
+       }
+
+       const content = stdinContent || fileContent
+       const message = buildMessage(question, content)
+
+       if (!message) {
+         console.error('Error: No question provided. Use: askimo "question" or pipe content')
+         process.exit(1)
+       }
+
        const config = await loadConfig()
        await ensureDirectories()

@@ -46,22 +65,22 @@ program

        conversation.messages.push({
          role: 'user',
-         content: question
+         content: message
        })

        let responseText

        if (options.json) {
-         const { text, sources } = await generateResponse(model, conversation.messages)
+         const { text, sources, duration } = await generateResponse(model, conversation.messages)
          responseText = text
          conversation.messages.push({
            role: 'assistant',
            content: responseText
          })
          await saveConversation(conversation, existingPath)
-         outputJson(conversation, responseText, sources)
+         outputJson(conversation, responseText, sources, duration)
        } else {
-         responseText = await streamResponse(model, conversation.messages)
+         responseText = await streamResponse(model, conversation.messages, modelName)
          conversation.messages.push({
            role: 'assistant',
            content: responseText
@@ -80,6 +99,7 @@ program
    .option('-p, --perplexity', 'Use Perplexity AI (default)')
    .option('-o, --openai', 'Use OpenAI')
    .option('-a, --anthropic', 'Use Anthropic Claude')
+   .option('-x, --xai', 'Use xAI Grok')
    .option('-c, --continue <n>', 'Continue conversation N (1=last, 2=second-to-last)', Number.parseInt)
    .action(async (options) => {
      try {
@@ -102,6 +122,7 @@ program
    .option('-p, --perplexity', 'Show only Perplexity models')
    .option('-o, --openai', 'Show only OpenAI models')
    .option('-a, --anthropic', 'Show only Anthropic models')
+   .option('-x, --xai', 'Show only xAI models')
    .action(async (options) => {
      try {
        const config = await loadConfig()
@@ -110,8 +131,9 @@ program
        if (options.perplexity) providers.push('perplexity')
        if (options.openai) providers.push('openai')
        if (options.anthropic) providers.push('anthropic')
+       if (options.xai) providers.push('xai')

-       const toShow = providers.length === 0 ? ['perplexity', 'openai', 'anthropic'] : providers
+       const toShow = providers.length === 0 ? ['perplexity', 'openai', 'anthropic', 'xai'] : providers

        const results = await Promise.all(
          toShow.map(async (provider) => ({
package/lib/chat.mjs CHANGED
@@ -42,7 +42,7 @@ async function startChat(model, providerName, modelName, continueN = null) {
    })

    console.log('')
-   const responseText = await streamResponse(model, conversation.messages)
+   const responseText = await streamResponse(model, conversation.messages, modelName)

    conversation.messages.push({
      role: 'assistant',
package/lib/input.mjs ADDED
@@ -0,0 +1,55 @@
+ import fs from 'node:fs/promises'
+
+ async function readStdin() {
+   if (process.stdin.isTTY) {
+     return null
+   }
+
+   // In non-TTY environments, check if data is available with a short timeout
+   // to avoid hanging when no data is being piped
+   return new Promise((resolve) => {
+     const chunks = []
+     let hasData = false
+
+     const timeout = setTimeout(() => {
+       if (!hasData) {
+         process.stdin.removeAllListeners()
+         process.stdin.pause()
+         resolve(null)
+       }
+     }, 10)
+
+     process.stdin.on('readable', () => {
+       let chunk = process.stdin.read()
+       while (chunk !== null) {
+         hasData = true
+         chunks.push(chunk)
+         chunk = process.stdin.read()
+       }
+     })
+
+     process.stdin.on('end', () => {
+       clearTimeout(timeout)
+       if (chunks.length === 0) {
+         resolve(null)
+       } else {
+         const content = Buffer.concat(chunks).toString('utf8').trim()
+         resolve(content || null)
+       }
+     })
+   })
+ }
+
+ async function readFile(filePath) {
+   const content = await fs.readFile(filePath, 'utf8')
+   return content.trim() || null
+ }
+
+ function buildMessage(prompt, content) {
+   if (prompt && content) {
+     return `${prompt}:\n\n${content}`
+   }
+   return content || prompt || null
+ }
+
+ export { readStdin, readFile, buildMessage }
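The new `buildMessage` helper above is small enough to exercise on its own. A standalone copy (assuming a Node ESM runtime; the function body is taken verbatim from the diff) shows the precedence it implements:

```javascript
// Copy of buildMessage from lib/input.mjs as added in 1.2.0:
// a prompt plus piped/file content are joined with a colon separator,
// either alone passes through unchanged, and neither yields null
// (which index.mjs then treats as "no question provided").
function buildMessage(prompt, content) {
  if (prompt && content) {
    return `${prompt}:\n\n${content}`
  }
  return content || prompt || null
}

console.log(buildMessage('explain this', 'const x = 1')) // "explain this:\n\nconst x = 1"
console.log(buildMessage(null, 'piped content'))         // "piped content"
console.log(buildMessage('what is 2+2', null))           // "what is 2+2"
console.log(buildMessage('', ''))                        // null
```

Note that empty strings are falsy here, which is why `readStdin` and `readFile` normalize trimmed-empty input to `null` rather than `''`.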
package/lib/providers.mjs CHANGED
@@ -1,11 +1,13 @@
  import { createAnthropic } from '@ai-sdk/anthropic'
  import { createOpenAI } from '@ai-sdk/openai'
  import { createPerplexity } from '@ai-sdk/perplexity'
+ import { createXai } from '@ai-sdk/xai'

  const DEFAULT_MODELS = {
    perplexity: 'sonar',
    openai: 'gpt-4o',
-   anthropic: 'claude-sonnet-4-20250514'
+   anthropic: 'claude-sonnet-4-20250514',
+   xai: 'grok-4'
  }

  // Perplexity doesn't have a models list API, so we hardcode these
@@ -17,6 +19,20 @@ const PERPLEXITY_MODELS = [
    { id: 'sonar-deep-research', description: 'Deep research sessions' }
  ]

+ // xAI doesn't have a public models list API, so we hardcode these
+ const XAI_MODELS = [
+   { id: 'grok-4-1-fast-reasoning', description: 'Grok 4.1 fast with reasoning' },
+   { id: 'grok-4-1-fast-non-reasoning', description: 'Grok 4.1 fast without reasoning' },
+   { id: 'grok-code-fast-1', description: 'Grok optimized for code' },
+   { id: 'grok-4-fast-reasoning', description: 'Grok 4 fast with reasoning' },
+   { id: 'grok-4-fast-non-reasoning', description: 'Grok 4 fast without reasoning' },
+   { id: 'grok-4-0709', description: 'Grok 4 flagship model' },
+   { id: 'grok-3-mini', description: 'Lightweight Grok 3 model' },
+   { id: 'grok-3', description: 'Grok 3 base model' },
+   { id: 'grok-2-vision-1212', description: 'Grok 2 with vision capabilities' },
+   { id: 'grok-2-image-1212', description: 'Image generation model' }
+ ]
+
  async function fetchOpenAiModels(apiKey) {
    const response = await fetch('https://api.openai.com/v1/models', {
      // biome-ignore lint/style/useNamingConvention: headers use standard capitalization
@@ -71,6 +87,9 @@ async function listModels(provider, config) {
      return fetchAnthropicModels(apiKey)
    }

+   case 'xai':
+     return XAI_MODELS
+
    default:
      throw new Error(`Unknown provider: ${provider}`)
  }
@@ -117,6 +136,19 @@ function getProvider(providerName, config) {
        modelName
      }
    }
+   case 'xai': {
+     const apiKey = config.XAI_API_KEY
+     if (!apiKey) {
+       throw new Error('XAI_API_KEY not found in config')
+     }
+     const modelName = config.XAI_MODEL || DEFAULT_MODELS.xai
+     const xai = createXai({ apiKey })
+     return {
+       model: xai(modelName),
+       name: 'xai',
+       modelName
+     }
+   }
    default:
      throw new Error(`Unknown provider: ${providerName}`)
  }
@@ -126,9 +158,10 @@ function determineProvider(options, config = {}) {
    if (options.openai) return 'openai'
    if (options.anthropic) return 'anthropic'
    if (options.perplexity) return 'perplexity'
+   if (options.xai) return 'xai'

    const defaultProvider = config.DEFAULT_PROVIDER?.toLowerCase()
-   if (defaultProvider && ['perplexity', 'openai', 'anthropic'].includes(defaultProvider)) {
+   if (defaultProvider && ['perplexity', 'openai', 'anthropic', 'xai'].includes(defaultProvider)) {
      return defaultProvider
    }

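One subtlety in `determineProvider`: the flags are checked in a fixed order (`-o`, `-a`, `-p`, then the new `-x`), so if multiple flags are combined, the earlier check wins regardless of the order given on the command line. A minimal standalone sketch of the selection logic (the final fallback to `'perplexity'` is an assumption, since the hunk above ends before it) behaves like this:

```javascript
// Sketch mirroring the provider-selection logic from lib/providers.mjs (1.2.0).
function determineProvider(options, config = {}) {
  if (options.openai) return 'openai'
  if (options.anthropic) return 'anthropic'
  if (options.perplexity) return 'perplexity'
  if (options.xai) return 'xai'

  // Config fallback: case-insensitive, restricted to known providers
  const defaultProvider = config.DEFAULT_PROVIDER?.toLowerCase()
  if (defaultProvider && ['perplexity', 'openai', 'anthropic', 'xai'].includes(defaultProvider)) {
    return defaultProvider
  }
  return 'perplexity' // assumed fallback; not visible in the hunk above
}

console.log(determineProvider({ xai: true }))                   // "xai"
console.log(determineProvider({ openai: true, xai: true }))     // "openai" (flag check order wins)
console.log(determineProvider({}, { DEFAULT_PROVIDER: 'XAI' })) // "xai" (config is case-insensitive)
```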
package/lib/stream.mjs CHANGED
@@ -1,6 +1,13 @@
  import { generateText, streamText } from 'ai'

- async function streamResponse(model, messages) {
+ function formatDuration(ms) {
+   if (ms < 1000) return `${ms}ms`
+   return `${(ms / 1000).toFixed(1)}s`
+ }
+
+ async function streamResponse(model, messages, modelName) {
+   const startTime = Date.now()
+
    const result = streamText({
      model,
      messages
@@ -29,27 +36,35 @@ async function streamResponse(model, messages) {
      })
    }

+   // Display status line
+   const duration = Date.now() - startTime
+   process.stdout.write(`\n\x1b[2m${modelName} · ${formatDuration(duration)}\x1b[0m\n`)
+
    return fullText
  }

  async function generateResponse(model, messages) {
+   const startTime = Date.now()
+
    const { text, sources } = await generateText({
      model,
      messages
    })

-   return { text, sources }
+   const duration = Date.now() - startTime
+   return { text, sources, duration }
  }

- function buildJsonOutput(conversation, response, sources) {
-   const lastUserMessage = conversation.messages[conversation.messages.length - 1]
+ function buildJsonOutput(conversation, response, sources, duration) {
+   const lastUserMessage = conversation.messages.findLast((m) => m.role === 'user')
    const output = {
      provider: conversation.provider,
      model: conversation.model,
      question: lastUserMessage?.content || '',
      response,
      conversationId: conversation.id,
-     messageCount: conversation.messages.length + 1
+     messageCount: conversation.messages.length + 1,
+     durationMs: duration
    }

    if (sources?.length > 0) {
@@ -59,8 +74,8 @@ function buildJsonOutput(conversation, response, sources) {
    return output
  }

- function outputJson(conversation, response, sources) {
-   const output = buildJsonOutput(conversation, response, sources)
+ function outputJson(conversation, response, sources, duration) {
+   const output = buildJsonOutput(conversation, response, sources, duration)
    console.log(JSON.stringify(output, null, 2))
  }

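The status line added to `streamResponse` depends on the new `formatDuration` helper, which switches units at one second. Copied out of the diff above, it can be checked directly:

```javascript
// Copy of formatDuration from lib/stream.mjs (1.2.0): values under
// 1000ms print as milliseconds, longer ones as seconds to one decimal.
function formatDuration(ms) {
  if (ms < 1000) return `${ms}ms`
  return `${(ms / 1000).toFixed(1)}s`
}

console.log(formatDuration(250))   // "250ms"
console.log(formatDuration(1500))  // "1.5s"
console.log(formatDuration(61234)) // "61.2s"
```

Durations over a minute stay in seconds (e.g. `61.2s`), which keeps the helper trivial at the cost of less readable long timings.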
package/package.json CHANGED
@@ -1,9 +1,21 @@
  {
    "name": "askimo",
-   "version": "1.0.0",
+   "version": "1.2.0",
    "description": "A CLI tool for communicating with AI providers (Perplexity, OpenAI, Anthropic)",
    "license": "Apache-2.0",
    "author": "Amit Tal",
+   "keywords": [
+     "cli",
+     "ai",
+     "llm",
+     "perplexity",
+     "openai",
+     "anthropic",
+     "claude",
+     "gpt",
+     "chatbot",
+     "terminal"
+   ],
    "type": "module",
    "bin": {
      "askimo": "./index.mjs"
@@ -27,6 +39,7 @@
    "@ai-sdk/anthropic": "^2.0.53",
    "@ai-sdk/openai": "^2.0.76",
    "@ai-sdk/perplexity": "^2.0.21",
+   "@ai-sdk/xai": "^2.0.40",
    "@inquirer/input": "^5.0.2",
    "ai": "^5.0.106",
    "commander": "^14.0.2"
package/test/input.mjs ADDED
@@ -0,0 +1,53 @@
+ import test from 'ava'
+ import { buildMessage } from '../lib/input.mjs'
+
+ test('buildMessage combines prompt and content with colon format', (t) => {
+   const result = buildMessage('explain this', 'const x = 1')
+   t.is(result, 'explain this:\n\nconst x = 1')
+ })
+
+ test('buildMessage returns content only when no prompt', (t) => {
+   const result = buildMessage(null, 'some content')
+   t.is(result, 'some content')
+ })
+
+ test('buildMessage returns content only when prompt is undefined', (t) => {
+   const result = buildMessage(undefined, 'some content')
+   t.is(result, 'some content')
+ })
+
+ test('buildMessage returns prompt only when no content', (t) => {
+   const result = buildMessage('what is 2+2', null)
+   t.is(result, 'what is 2+2')
+ })
+
+ test('buildMessage returns prompt only when content is undefined', (t) => {
+   const result = buildMessage('what is 2+2', undefined)
+   t.is(result, 'what is 2+2')
+ })
+
+ test('buildMessage returns null when both are null', (t) => {
+   const result = buildMessage(null, null)
+   t.is(result, null)
+ })
+
+ test('buildMessage returns null when both are undefined', (t) => {
+   const result = buildMessage(undefined, undefined)
+   t.is(result, null)
+ })
+
+ test('buildMessage handles empty string prompt as falsy', (t) => {
+   const result = buildMessage('', 'content')
+   t.is(result, 'content')
+ })
+
+ test('buildMessage handles empty string content as falsy', (t) => {
+   const result = buildMessage('prompt', '')
+   t.is(result, 'prompt')
+ })
+
+ test('buildMessage preserves multiline content', (t) => {
+   const content = 'line 1\nline 2\nline 3'
+   const result = buildMessage('summarize', content)
+   t.is(result, 'summarize:\n\nline 1\nline 2\nline 3')
+ })
package/test/stream.mjs CHANGED
@@ -45,6 +45,17 @@ test('buildJsonOutput extracts question from last user message', (t) => {
    t.is(output.question, 'Second question')
  })

+ test('buildJsonOutput finds user message even when assistant message is last', (t) => {
+   const conversation = createMockConversation({
+     messages: [
+       { role: 'user', content: 'My question' },
+       { role: 'assistant', content: 'My answer' }
+     ]
+   })
+   const output = buildJsonOutput(conversation, 'response')
+   t.is(output.question, 'My question')
+ })
+
  test('buildJsonOutput returns empty question when no messages', (t) => {
    const conversation = createMockConversation({ messages: [] })
    const output = buildJsonOutput(conversation, 'response')