risicare 0.2.1 → 0.2.2

This diff compares the content of publicly available package versions as released to a supported registry. It is provided for informational purposes only and reflects the changes between versions exactly as they appear in the public registry.
Files changed (2)
  1. package/README.md +131 -45
  2. package/package.json +17 -7
package/README.md CHANGED
@@ -1,57 +1,147 @@
-# Risicare
+# risicare
 
-Self-healing observability for AI agents. Captures decision-level traces, diagnoses failures, and deploys fixes — automatically.
+AI agent observability and self-healing for Node.js and TypeScript.
 
 [![npm version](https://img.shields.io/npm/v/risicare.svg)](https://www.npmjs.com/package/risicare)
-[![Node.js](https://img.shields.io/node/v/risicare.svg)](https://www.npmjs.com/package/risicare)
+[![npm downloads](https://img.shields.io/npm/dm/risicare.svg)](https://www.npmjs.com/package/risicare)
+[![TypeScript](https://img.shields.io/badge/TypeScript-ready-blue.svg)](https://www.npmjs.com/package/risicare)
+[![License: MIT](https://img.shields.io/npm/l/risicare.svg)](https://opensource.org/licenses/MIT)
 
-## Quick Start
+Monitor your AI agents in production. Trace every LLM call, detect errors automatically, and get AI-generated fixes — with 3 lines of setup.
+
+## Quickstart
 
 ```bash
 npm install risicare
 ```
 
 ```typescript
-import { init } from 'risicare';
+import { init, agent, shutdown } from 'risicare';
+import { patchOpenAI } from 'risicare/openai';
+import OpenAI from 'openai';
 
+// 1. Initialize
 init({
   apiKey: 'rsk-...',
   endpoint: 'https://app.risicare.ai',
 });
 
-// That's it. LLM calls are now traced automatically.
+// 2. Patch your LLM client
+const openai = patchOpenAI(new OpenAI());
+
+// 3. Wrap your agent — all LLM calls inside are traced automatically
+const myAgent = agent({ name: 'research-agent' }, async (query: string) => {
+  const response = await openai.chat.completions.create({
+    model: 'gpt-4',
+    messages: [{ role: 'user', content: query }],
+  });
+  return response.choices[0].message.content;
+});
+
+// Run it — traces appear in your dashboard instantly
+const result = await myAgent('What is quantum computing?');
+await shutdown();
 ```
 
-## Provider Instrumentation
+**That's it.** Your agent's LLM calls, latency, token usage, and costs now appear in the [Risicare dashboard](https://app.risicare.ai).
+
+## Features
+
+- **12 LLM providers** — OpenAI, Anthropic, Google, Mistral, Groq, Cohere, Together, Ollama, HuggingFace, Cerebras, Bedrock, Vercel AI
+- **4 framework integrations** — LangChain, LangGraph, Instructor, LlamaIndex
+- **Self-healing** — Automatic error diagnosis and AI-generated fix suggestions
+- **Evaluation scores** — Rate agent quality with `score()` and 13 built-in scorers
+- **Streaming support** — `tracedStream()` for async iterator tracing
+- **Context propagation** — Automatic across `async/await`, `Promise`, `setTimeout`, `EventEmitter`
+- **Zero runtime dependencies** — No bloat in your node_modules
+- **Dual CJS/ESM** — Works with `require()` and `import`
+- **Full TypeScript** — Strict types and IntelliSense out of the box
+- **Non-blocking** — Async batch export with circuit breaker and retry
+- **Zero overhead when disabled** — Frozen NOOP_SPAN singleton, no allocations
 
-Wrap your provider client to auto-capture all LLM calls:
+## LLM Providers
 
 ```typescript
 import { patchOpenAI } from 'risicare/openai';
-import OpenAI from 'openai';
+import { patchAnthropic } from 'risicare/anthropic';
+import { patchGoogle } from 'risicare/google';
+// ... and 9 more
 
 const openai = patchOpenAI(new OpenAI());
-// All chat.completions.create and embeddings calls are now traced.
+// Every call is now traced: model, tokens, latency, cost
 ```
 
-### Supported Providers
+All 12 providers:
 
-| Provider | Import |
-|----------|--------|
-| OpenAI | `risicare/openai` |
-| Anthropic | `risicare/anthropic` |
-| Vercel AI SDK | `risicare/vercel-ai` |
+`openai` · `anthropic` · `google` · `mistral` · `groq` · `cohere` · `together` · `ollama` · `huggingface` · `cerebras` · `bedrock` · `vercel-ai`
 
-## Decorators
+## Framework Integrations
+
+```typescript
+// LangChain
+import { RisicareCallbackHandler } from 'risicare/langchain';
+const handler = new RisicareCallbackHandler();
+await chain.invoke(input, { callbacks: [handler] });
+
+// LangGraph
+import { instrumentLangGraph } from 'risicare/langgraph';
+const tracedGraph = instrumentLangGraph(compiledGraph);
+
+// Instructor
+import { patchInstructor } from 'risicare/instructor';
+const client = patchInstructor(instructor);
+
+// LlamaIndex
+import { RisicareLlamaIndexHandler } from 'risicare/llamaindex';
+```
 
-Structure your traces with agent identity and decision phases:
+## Core API
 
 ```typescript
-import { agent, session, traceThink, traceDecide, traceAct } from 'risicare';
+import {
+  init, shutdown,                     // Lifecycle
+  agent, session,                     // Identity & grouping
+  traceThink, traceDecide, traceAct,  // Decision phases
+  reportError, score,                 // Self-healing & evaluation
+  tracedStream,                       // Streaming
+} from 'risicare';
+
+init({ apiKey, endpoint })                // Initialize SDK
+agent({ name }, fn)                       // Wrap function with agent identity
+session({ sessionId, userId }, fn)        // Group traces into user sessions
+traceThink('analyze', async () => {...})  // Tag reasoning phase
+traceDecide('choose', async () => {...})  // Tag decision phase
+traceAct('execute', async () => {...})    // Tag action phase
+reportError(error)                        // Report caught errors for diagnosis
+score(traceId, 'quality', 0.92)           // Record evaluation score [0.0-1.0]
+tracedStream(asyncIterable, 'stream')     // Trace async iterators
+await shutdown()                          // Flush pending spans and close
+```
+
+## Self-Healing
+
+When your agent fails, Risicare automatically:
+
+1. **Classifies** the error (154 codes across TOOL, MEMORY, REASONING, OUTPUT, etc.)
+2. **Diagnoses** the root cause using AI analysis
+3. **Generates** a fix you can review and apply
+
+```typescript
+try {
+  await myAgent(input);
+} catch (error) {
+  reportError(error); // Triggers automatic diagnosis pipeline
+}
+```
+
+## Decision Phases
+
+Structure your traces to see how your agent thinks, decides, and acts:
 
+```typescript
 const myAgent = agent({ name: 'planner', role: 'coordinator' }, async (input) => {
   const analysis = await traceThink('analyze', async () => {
-    return await openai.chat.completions.create({ ... });
+    return await openai.chat.completions.create({ /* ... */ });
   });
 
   const decision = await traceDecide('choose-action', async () => {
@@ -62,38 +152,34 @@ const myAgent = agent({ name: 'planner', role: 'coordinator' }, async (input) =>
     return executeAction(decision);
   });
 });
-
-// Wrap in a session for user-level tracking
-const result = await session({ userId: 'user-123' }, () => myAgent(input));
 ```
 
-## Features
-
-- **Zero runtime dependencies** — no bloat in your node_modules
-- **Dual CJS/ESM** — works with CommonJS `require()` and ES module `import`
-- **Full TypeScript** — strict types, generics, and IntelliSense out of the box
-- **Non-blocking** — async batch export with circuit breaker and retry
-- **Context propagation** — automatic across `async/await`, `Promise`, `setTimeout`, and `EventEmitter`
-- **Zero overhead when disabled** — frozen NOOP_SPAN singleton, no allocations
+## Sessions
 
-## Progressive Integration
+Group traces from the same user conversation:
 
-| Tier | Effort | What You Get |
-|------|--------|-------------|
-| **Tier 0** | `RISICARE_TRACING=true` (env var) | Auto-instrument all LLM calls |
-| **Tier 1** | `import { init } from 'risicare'` | Explicit config, custom endpoint |
-| **Tier 2** | `agent()` wrapper | Agent identity and hierarchy |
-| **Tier 3** | `session()` wrapper | User session tracking |
-| **Tier 4** | `traceThink / traceDecide / traceAct` | Decision phase visibility |
-| **Tier 5** | `traceMessage / traceDelegate` | Multi-agent communication |
+```typescript
+const result = await session(
+  { sessionId: 'sess-abc123', userId: 'user-456' },
+  () => myAgent(userMessage)
+);
+```
 
 ## Requirements
 
-- Node.js >= 18.0.0
-- TypeScript >= 5.0 (optional, for type checking)
+- Node.js 18+
+- TypeScript 5.0+ (optional, types included)
 
-## Links
+## Documentation
 
-- [Documentation](https://risicare.ai/docs)
-- [Dashboard](https://app.risicare.ai)
+- [Full docs](https://risicare.ai/docs)
+- [Quickstart guide](https://risicare.ai/docs/start/quickstart)
+- [JS SDK reference](https://risicare.ai/docs/reference/js-sdk)
+- [LLM providers](https://risicare.ai/docs/instrument/providers)
+- [Self-healing](https://risicare.ai/docs/heal)
+- [Evaluations](https://risicare.ai/docs/observe/evaluations)
 - [Python SDK](https://pypi.org/project/risicare/)
+
+## License
+
+[MIT](LICENSE)
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "risicare",
-  "version": "0.2.1",
-  "description": "JavaScript/TypeScript SDK for RisicareAgent Self-Healing Infrastructure",
+  "version": "0.2.2",
+  "description": "AI agent observability and self-healing for Node.js: trace LLM calls, detect errors, get AI-generated fixes",
   "type": "module",
   "main": "./dist/index.cjs",
   "module": "./dist/index.js",
@@ -209,22 +209,32 @@
     "@types/node": "^20.0.0"
   },
   "keywords": [
-    "risicare",
     "ai",
-    "agents",
+    "llm",
     "observability",
     "tracing",
-    "self-healing",
+    "opentelemetry",
     "openai",
     "anthropic",
     "langchain",
-    "llm"
+    "agent",
+    "self-healing",
+    "monitoring",
+    "gpt",
+    "claude",
+    "gemini",
+    "ai-agent",
+    "llm-observability",
+    "agentic-ai"
   ],
   "license": "MIT",
-  "homepage": "https://risicare.ai",
+  "homepage": "https://risicare.ai/docs",
   "repository": {
     "type": "git",
     "url": "https://github.com/risicare/risicare",
     "directory": "packages/risicare-sdk-js"
+  },
+  "bugs": {
+    "url": "https://github.com/risicare/risicare/issues"
   }
 }
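The README's Core API table lists `tracedStream(asyncIterable, 'stream')` for tracing async iterators. A minimal sketch of how such a pass-through wrapper can work; the `onEnd` callback and stats shape are hypothetical stand-ins (the real SDK reports to its own exporter, not a user callback):

```typescript
// Sketch of an async-iterator tracing wrapper in the spirit of
// tracedStream(asyncIterable, name): every chunk is yielded through
// unchanged while the wrapper counts chunks; the finally block fires
// even if the consumer breaks out early.
async function* tracedStream<T>(
  source: AsyncIterable<T>,
  name: string,
  onEnd: (stats: { name: string; chunks: number }) => void,
): AsyncGenerator<T> {
  let chunks = 0;
  try {
    for await (const chunk of source) {
      chunks += 1;
      yield chunk; // pass-through: consumers see the original stream
    }
  } finally {
    onEnd({ name, chunks });
  }
}

// Stand-in for an LLM token stream.
async function* tokens(): AsyncGenerator<string> {
  yield 'Hello';
  yield ', ';
  yield 'world';
}

(async () => {
  const parts: string[] = [];
  for await (const t of tracedStream(tokens(), 'completion', (s) =>
    console.log(`${s.name}: ${s.chunks} chunks`),
  )) {
    parts.push(t);
  }
  console.log(parts.join('')); // Hello, world
})();
```

The key design point is that tracing lives in `try`/`finally` around the `for await` loop, so the wrapper observes the stream's lifetime without buffering or mutating it.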