mdcontext 0.0.1 → 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.changeset/README.md +28 -0
- package/.changeset/config.json +11 -0
- package/.claude/settings.local.json +25 -0
- package/.github/workflows/ci.yml +83 -0
- package/.github/workflows/claude-code-review.yml +44 -0
- package/.github/workflows/claude.yml +85 -0
- package/.github/workflows/release.yml +113 -0
- package/.tldrignore +112 -0
- package/BACKLOG.md +338 -0
- package/CONTRIBUTING.md +186 -0
- package/NOTES/NOTES +44 -0
- package/README.md +434 -11
- package/biome.json +36 -0
- package/cspell.config.yaml +14 -0
- package/dist/chunk-23UPXDNL.js +3044 -0
- package/dist/chunk-2W7MO2DL.js +1366 -0
- package/dist/chunk-3NUAZGMA.js +1689 -0
- package/dist/chunk-7TOWB2XB.js +366 -0
- package/dist/chunk-7XOTOADQ.js +3065 -0
- package/dist/chunk-AH2PDM2K.js +3042 -0
- package/dist/chunk-BNXWSZ63.js +3742 -0
- package/dist/chunk-BTL5DJVU.js +3222 -0
- package/dist/chunk-HDHYG7E4.js +104 -0
- package/dist/chunk-HLR4KZBP.js +3234 -0
- package/dist/chunk-IP3FRFEB.js +1045 -0
- package/dist/chunk-KHU56VDO.js +3042 -0
- package/dist/chunk-KRYIFLQR.js +88 -0
- package/dist/chunk-LBSDNLEM.js +287 -0
- package/dist/chunk-MNTQ7HCP.js +2643 -0
- package/dist/chunk-MUJELQQ6.js +1387 -0
- package/dist/chunk-MXJGMSLV.js +2199 -0
- package/dist/chunk-N6QJGC3Z.js +2636 -0
- package/dist/chunk-OBELGBPM.js +1713 -0
- package/dist/chunk-OT7R5XTA.js +3192 -0
- package/dist/chunk-P7X4RA2T.js +106 -0
- package/dist/chunk-PIDUQNC2.js +3185 -0
- package/dist/chunk-POGCDIH4.js +3187 -0
- package/dist/chunk-PSIEOQGZ.js +3043 -0
- package/dist/chunk-PVRT3IHA.js +3238 -0
- package/dist/chunk-QNN4TT23.js +1430 -0
- package/dist/chunk-RE3R45RJ.js +3042 -0
- package/dist/chunk-S7E6TFX6.js +803 -0
- package/dist/chunk-SG6GLU4U.js +1378 -0
- package/dist/chunk-SJCDV2ST.js +274 -0
- package/dist/chunk-SYE5XLF3.js +104 -0
- package/dist/chunk-T5VLYBZD.js +103 -0
- package/dist/chunk-TOQB7VWU.js +3238 -0
- package/dist/chunk-VFNMZ4ZQ.js +3228 -0
- package/dist/chunk-VVTGZNBT.js +1629 -0
- package/dist/chunk-W7Q4RFEV.js +104 -0
- package/dist/chunk-XTYYVRLO.js +3190 -0
- package/dist/chunk-Y6MDYVJD.js +3063 -0
- package/dist/cli/main.d.ts +1 -0
- package/dist/cli/main.js +5458 -0
- package/dist/index.d.ts +653 -0
- package/dist/index.js +79 -0
- package/dist/mcp/server.d.ts +1 -0
- package/dist/mcp/server.js +472 -0
- package/dist/schema-BAWSG7KY.js +22 -0
- package/dist/schema-E3QUPL26.js +20 -0
- package/dist/schema-EHL7WUT6.js +20 -0
- package/docs/019-USAGE.md +625 -0
- package/docs/020-current-implementation.md +364 -0
- package/docs/021-DOGFOODING-FINDINGS.md +175 -0
- package/docs/BACKLOG.md +80 -0
- package/docs/CONFIG.md +1123 -0
- package/docs/DESIGN.md +439 -0
- package/docs/ERRORS.md +383 -0
- package/docs/PROJECT.md +88 -0
- package/docs/ROADMAP.md +407 -0
- package/docs/summarization.md +320 -0
- package/docs/test-links.md +9 -0
- package/justfile +40 -0
- package/package.json +74 -9
- package/pnpm-workspace.yaml +5 -0
- package/research/INDEX.md +315 -0
- package/research/code-review/README.md +90 -0
- package/research/code-review/cli-error-handling-review.md +979 -0
- package/research/code-review/code-review-validation-report.md +464 -0
- package/research/code-review/main-ts-review.md +1128 -0
- package/research/config-analysis/01-current-implementation.md +470 -0
- package/research/config-analysis/02-strategy-recommendation.md +428 -0
- package/research/config-analysis/03-task-candidates.md +715 -0
- package/research/config-analysis/033-research-configuration-management.md +828 -0
- package/research/config-analysis/034-research-effect-cli-config.md +1504 -0
- package/research/config-analysis/04-consolidated-task-candidates.md +277 -0
- package/research/config-docs/SUMMARY.md +357 -0
- package/research/config-docs/TEST-RESULTS.md +776 -0
- package/research/config-docs/TODO.md +542 -0
- package/research/config-docs/analysis.md +744 -0
- package/research/config-docs/fix-validation.md +502 -0
- package/research/config-docs/help-audit.md +264 -0
- package/research/config-docs/help-system-analysis.md +890 -0
- package/research/dogfood/consolidated-tool-evaluation.md +373 -0
- package/research/dogfood/strategy-a/a-synthesis.md +184 -0
- package/research/dogfood/strategy-a/a1-docs.md +226 -0
- package/research/dogfood/strategy-a/a2-amorphic.md +156 -0
- package/research/dogfood/strategy-a/a3-llm.md +164 -0
- package/research/dogfood/strategy-b/b-synthesis.md +228 -0
- package/research/dogfood/strategy-b/b1-architecture.md +207 -0
- package/research/dogfood/strategy-b/b2-gaps.md +258 -0
- package/research/dogfood/strategy-b/b3-workflows.md +250 -0
- package/research/dogfood/strategy-c/c-synthesis.md +451 -0
- package/research/dogfood/strategy-c/c1-explorer.md +192 -0
- package/research/dogfood/strategy-c/c2-diver-memory.md +145 -0
- package/research/dogfood/strategy-c/c3-diver-control.md +148 -0
- package/research/dogfood/strategy-c/c4-diver-failure.md +151 -0
- package/research/dogfood/strategy-c/c5-diver-execution.md +221 -0
- package/research/dogfood/strategy-c/c6-diver-org.md +221 -0
- package/research/effect-cli-error-handling.md +845 -0
- package/research/effect-errors-as-values.md +943 -0
- package/research/errors-task-analysis/00-consolidated-tasks.md +207 -0
- package/research/errors-task-analysis/cli-commands-analysis.md +909 -0
- package/research/errors-task-analysis/embeddings-analysis.md +709 -0
- package/research/errors-task-analysis/index-search-analysis.md +812 -0
- package/research/frontmatter/COMMENTS-ARE-SKIPPED.md +149 -0
- package/research/frontmatter/LLM-CODE-NAVIGATION.md +276 -0
- package/research/issue-review.md +603 -0
- package/research/llm-summarization/agent-cli-tools-2026.md +1082 -0
- package/research/llm-summarization/alternative-providers-2026.md +1428 -0
- package/research/llm-summarization/anthropic-2026.md +367 -0
- package/research/llm-summarization/claude-cli-integration.md +1706 -0
- package/research/llm-summarization/cli-integration-patterns.md +3155 -0
- package/research/llm-summarization/openai-2026.md +473 -0
- package/research/llm-summarization/openai-compatible-providers-2026.md +1022 -0
- package/research/llm-summarization/opencode-cli-integration.md +1552 -0
- package/research/llm-summarization/prompt-engineering-2026.md +1426 -0
- package/research/llm-summarization/prototype-results.md +56 -0
- package/research/llm-summarization/provider-switching-patterns-2026.md +2153 -0
- package/research/llm-summarization/typescript-llm-libraries-2026.md +2436 -0
- package/research/mdcontext-error-analysis.md +521 -0
- package/research/mdcontext-pudding/00-EXECUTIVE-SUMMARY.md +282 -0
- package/research/mdcontext-pudding/01-index-embed.md +956 -0
- package/research/mdcontext-pudding/02-search-COMMANDS.md +142 -0
- package/research/mdcontext-pudding/02-search-SUMMARY.md +146 -0
- package/research/mdcontext-pudding/02-search.md +970 -0
- package/research/mdcontext-pudding/03-context.md +779 -0
- package/research/mdcontext-pudding/04-navigation-and-analytics.md +803 -0
- package/research/mdcontext-pudding/04-tree.md +704 -0
- package/research/mdcontext-pudding/05-config.md +1038 -0
- package/research/mdcontext-pudding/06-links-summary.txt +87 -0
- package/research/mdcontext-pudding/06-links.md +679 -0
- package/research/mdcontext-pudding/07-stats.md +693 -0
- package/research/mdcontext-pudding/BUG-FIX-PLAN.md +388 -0
- package/research/mdcontext-pudding/P0-BUG-VALIDATION.md +167 -0
- package/research/mdcontext-pudding/README.md +168 -0
- package/research/mdcontext-pudding/TESTING-SUMMARY.md +128 -0
- package/research/npm_publish/011-npm-workflow-research-agent2.md +792 -0
- package/research/npm_publish/012-npm-workflow-research-agent1.md +530 -0
- package/research/npm_publish/013-npm-workflow-research-agent3.md +722 -0
- package/research/npm_publish/014-npm-workflow-synthesis.md +556 -0
- package/research/npm_publish/031-npm-workflow-task-analysis.md +134 -0
- package/research/research-quality-review.md +834 -0
- package/research/semantic-search/002-research-embedding-models.md +490 -0
- package/research/semantic-search/003-research-rag-alternatives.md +523 -0
- package/research/semantic-search/004-research-vector-search.md +841 -0
- package/research/semantic-search/032-research-semantic-search.md +427 -0
- package/research/semantic-search/embedding-text-analysis.md +156 -0
- package/research/semantic-search/multi-word-failure-reproduction.md +171 -0
- package/research/semantic-search/query-processing-analysis.md +207 -0
- package/research/semantic-search/root-cause-and-solution.md +114 -0
- package/research/semantic-search/threshold-validation-report.md +69 -0
- package/research/semantic-search/vector-search-analysis.md +63 -0
- package/research/task-management-2026/00-synthesis-recommendations.md +295 -0
- package/research/task-management-2026/01-ai-workflow-tools.md +416 -0
- package/research/task-management-2026/02-agent-framework-patterns.md +476 -0
- package/research/task-management-2026/03-lightweight-file-based.md +567 -0
- package/research/task-management-2026/04-established-tools-ai-features.md +541 -0
- package/research/task-management-2026/linear/01-core-features-workflow.md +771 -0
- package/research/task-management-2026/linear/02-api-integrations.md +930 -0
- package/research/task-management-2026/linear/03-ai-features.md +368 -0
- package/research/task-management-2026/linear/04-pricing-setup.md +205 -0
- package/research/task-management-2026/linear/05-usage-patterns-best-practices.md +605 -0
- package/research/test-path-issues.md +276 -0
- package/review/ALP-76/1-error-type-design.md +962 -0
- package/review/ALP-76/2-error-handling-patterns.md +906 -0
- package/review/ALP-76/3-error-presentation.md +624 -0
- package/review/ALP-76/4-test-coverage.md +625 -0
- package/review/ALP-76/5-migration-completeness.md +440 -0
- package/review/ALP-76/6-effect-best-practices.md +755 -0
- package/scripts/apply-branch-protection.sh +47 -0
- package/scripts/branch-protection-templates.json +79 -0
- package/scripts/prototype-summarization.ts +346 -0
- package/scripts/rebuild-hnswlib.js +58 -0
- package/scripts/setup-branch-protection.sh +64 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/active-provider.json +7 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/bm25.json +541 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/bm25.meta.json +5 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/config.json +8 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/embeddings/openai_text-embedding-3-small_512/vectors.bin +0 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/embeddings/openai_text-embedding-3-small_512/vectors.meta.bin +0 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/indexes/documents.json +60 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/indexes/links.json +13 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/.mdcontext/indexes/sections.json +1197 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/configuration-management.md +99 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/distributed-systems.md +92 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/error-handling.md +78 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/failure-automation.md +55 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/job-context.md +69 -0
- package/src/__tests__/fixtures/semantic-search/multi-word-corpus/process-orchestration.md +99 -0
- package/src/cli/argv-preprocessor.test.ts +210 -0
- package/src/cli/argv-preprocessor.ts +202 -0
- package/src/cli/cli.test.ts +627 -0
- package/src/cli/commands/backlinks.ts +54 -0
- package/src/cli/commands/config-cmd.ts +642 -0
- package/src/cli/commands/context.ts +285 -0
- package/src/cli/commands/duplicates.ts +122 -0
- package/src/cli/commands/embeddings.ts +529 -0
- package/src/cli/commands/index-cmd.ts +480 -0
- package/src/cli/commands/index.ts +16 -0
- package/src/cli/commands/links.ts +52 -0
- package/src/cli/commands/search.ts +1281 -0
- package/src/cli/commands/stats.ts +149 -0
- package/src/cli/commands/tree.ts +128 -0
- package/src/cli/config-layer.ts +176 -0
- package/src/cli/error-handler.test.ts +235 -0
- package/src/cli/error-handler.ts +655 -0
- package/src/cli/flag-schemas.ts +341 -0
- package/src/cli/help.ts +588 -0
- package/src/cli/index.ts +9 -0
- package/src/cli/main.ts +435 -0
- package/src/cli/options.ts +41 -0
- package/src/cli/shared-error-handling.ts +199 -0
- package/src/cli/typo-suggester.test.ts +105 -0
- package/src/cli/typo-suggester.ts +130 -0
- package/src/cli/utils.ts +259 -0
- package/src/config/file-provider.test.ts +320 -0
- package/src/config/file-provider.ts +273 -0
- package/src/config/index.ts +72 -0
- package/src/config/integration.test.ts +667 -0
- package/src/config/precedence.test.ts +277 -0
- package/src/config/precedence.ts +451 -0
- package/src/config/schema.test.ts +414 -0
- package/src/config/schema.ts +603 -0
- package/src/config/service.test.ts +320 -0
- package/src/config/service.ts +243 -0
- package/src/config/testing.test.ts +264 -0
- package/src/config/testing.ts +110 -0
- package/src/core/index.ts +1 -0
- package/src/core/types.ts +113 -0
- package/src/duplicates/detector.test.ts +183 -0
- package/src/duplicates/detector.ts +414 -0
- package/src/duplicates/index.ts +18 -0
- package/src/embeddings/embedding-namespace.test.ts +300 -0
- package/src/embeddings/embedding-namespace.ts +947 -0
- package/src/embeddings/heading-boost.test.ts +222 -0
- package/src/embeddings/hnsw-build-options.test.ts +198 -0
- package/src/embeddings/hyde.test.ts +272 -0
- package/src/embeddings/hyde.ts +264 -0
- package/src/embeddings/index.ts +10 -0
- package/src/embeddings/openai-provider.ts +414 -0
- package/src/embeddings/pricing.json +22 -0
- package/src/embeddings/provider-constants.ts +204 -0
- package/src/embeddings/provider-errors.test.ts +967 -0
- package/src/embeddings/provider-errors.ts +565 -0
- package/src/embeddings/provider-factory.test.ts +240 -0
- package/src/embeddings/provider-factory.ts +225 -0
- package/src/embeddings/provider-integration.test.ts +788 -0
- package/src/embeddings/query-preprocessing.test.ts +187 -0
- package/src/embeddings/semantic-search-threshold.test.ts +508 -0
- package/src/embeddings/semantic-search.ts +1270 -0
- package/src/embeddings/types.ts +359 -0
- package/src/embeddings/vector-store.ts +708 -0
- package/src/embeddings/voyage-provider.ts +313 -0
- package/src/errors/errors.test.ts +845 -0
- package/src/errors/index.ts +533 -0
- package/src/index/ignore-patterns.test.ts +354 -0
- package/src/index/ignore-patterns.ts +305 -0
- package/src/index/index.ts +4 -0
- package/src/index/indexer.ts +684 -0
- package/src/index/storage.ts +260 -0
- package/src/index/types.ts +147 -0
- package/src/index/watcher.ts +189 -0
- package/src/index.ts +30 -0
- package/src/integration/search-keyword.test.ts +678 -0
- package/src/mcp/server.ts +612 -0
- package/src/parser/index.ts +1 -0
- package/src/parser/parser.test.ts +291 -0
- package/src/parser/parser.ts +394 -0
- package/src/parser/section-filter.test.ts +277 -0
- package/src/parser/section-filter.ts +392 -0
- package/src/search/__tests__/hybrid-search.test.ts +650 -0
- package/src/search/bm25-store.ts +366 -0
- package/src/search/cross-encoder.test.ts +253 -0
- package/src/search/cross-encoder.ts +406 -0
- package/src/search/fuzzy-search.test.ts +419 -0
- package/src/search/fuzzy-search.ts +273 -0
- package/src/search/hybrid-search.ts +448 -0
- package/src/search/path-matcher.test.ts +276 -0
- package/src/search/path-matcher.ts +33 -0
- package/src/search/query-parser.test.ts +260 -0
- package/src/search/query-parser.ts +319 -0
- package/src/search/searcher.test.ts +280 -0
- package/src/search/searcher.ts +724 -0
- package/src/search/wink-bm25.d.ts +30 -0
- package/src/summarization/cli-providers/claude.ts +202 -0
- package/src/summarization/cli-providers/detection.test.ts +273 -0
- package/src/summarization/cli-providers/detection.ts +118 -0
- package/src/summarization/cli-providers/index.ts +8 -0
- package/src/summarization/cost.test.ts +139 -0
- package/src/summarization/cost.ts +102 -0
- package/src/summarization/error-handler.test.ts +127 -0
- package/src/summarization/error-handler.ts +111 -0
- package/src/summarization/index.ts +102 -0
- package/src/summarization/pipeline.test.ts +498 -0
- package/src/summarization/pipeline.ts +231 -0
- package/src/summarization/prompts.test.ts +269 -0
- package/src/summarization/prompts.ts +133 -0
- package/src/summarization/provider-factory.test.ts +396 -0
- package/src/summarization/provider-factory.ts +178 -0
- package/src/summarization/types.ts +184 -0
- package/src/summarize/budget-bugs.test.ts +620 -0
- package/src/summarize/formatters.ts +419 -0
- package/src/summarize/index.ts +20 -0
- package/src/summarize/summarizer.test.ts +275 -0
- package/src/summarize/summarizer.ts +597 -0
- package/src/summarize/verify-bugs.test.ts +238 -0
- package/src/types/huggingface-transformers.d.ts +66 -0
- package/src/utils/index.ts +1 -0
- package/src/utils/tokens.test.ts +142 -0
- package/src/utils/tokens.ts +186 -0
- package/tests/fixtures/cli/.mdcontext/active-provider.json +7 -0
- package/tests/fixtures/cli/.mdcontext/config.json +8 -0
- package/tests/fixtures/cli/.mdcontext/embeddings/openai_text-embedding-3-small_512/vectors.bin +0 -0
- package/tests/fixtures/cli/.mdcontext/embeddings/openai_text-embedding-3-small_512/vectors.meta.bin +0 -0
- package/tests/fixtures/cli/.mdcontext/indexes/documents.json +33 -0
- package/tests/fixtures/cli/.mdcontext/indexes/links.json +12 -0
- package/tests/fixtures/cli/.mdcontext/indexes/sections.json +247 -0
- package/tests/fixtures/cli/README.md +9 -0
- package/tests/fixtures/cli/api-reference.md +11 -0
- package/tests/fixtures/cli/getting-started.md +11 -0
- package/tests/integration/embed-index.test.ts +712 -0
- package/tests/integration/search-context.test.ts +469 -0
- package/tests/integration/search-semantic.test.ts +522 -0
- package/tsconfig.json +26 -0
- package/vitest.config.ts +16 -0
- package/vitest.setup.ts +12 -0
@@ -0,0 +1,2436 @@
# TypeScript LLM Libraries: Comprehensive Research (2026)

**Research Date:** January 26, 2026
**Purpose:** Evaluate TypeScript alternatives to Python's LiteLLM for unified LLM provider access
**Use Case:** Code summarization API with multi-provider support

---

## Executive Summary

This research evaluates TypeScript/JavaScript libraries that provide unified interfaces to multiple LLM providers (similar to Python's LiteLLM). After comprehensive analysis, **Vercel AI SDK** emerges as the top recommendation for mdcontext's API provider abstraction, with **OpenRouter** as a compelling alternative for maximum flexibility.

### Top Recommendations

1. **Vercel AI SDK** - Best overall for production TypeScript applications
2. **OpenRouter** - Best for maximum provider choice with minimal code
3. **Instructor-js** - Best for structured output-focused applications
4. **LiteLLM Proxy** - Best when Python infrastructure is available
---

## 1. Vercel AI SDK

**GitHub:** [vercel/ai](https://github.com/vercel/ai)
**NPM:** `ai` (20M+ monthly downloads)
**License:** MIT
**Maintenance:** Extremely Active (2026)

### Overview

The Vercel AI SDK is the leading TypeScript toolkit for building AI applications, created by the Next.js team. It provides a provider-agnostic architecture with 25+ official integrations and 30+ community providers.

### Provider Support

**Official First-Party Providers (25+):**
- OpenAI (`@ai-sdk/openai`) - GPT series
- Anthropic (`@ai-sdk/anthropic`) - Claude models
- Google (`@ai-sdk/google`) - Gemini models
- Mistral (`@ai-sdk/mistral`)
- Amazon Bedrock (`@ai-sdk/amazon-bedrock`)
- Azure OpenAI (`@ai-sdk/azure`)
- Groq (`@ai-sdk/groq`)
- DeepSeek (via OpenAI-compatible provider)

**Community Providers (30+):**
- Ollama
- OpenRouter (`@openrouter/ai-sdk-provider`)
- Cloudflare Workers AI
- Together AI
- Fireworks AI
- And many more

**OpenAI-Compatible Support:**
Any self-hosted or hosted endpoint that implements the OpenAI API specification works via the OpenAI-compatible provider (LM Studio, Heroku, etc.).
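As a sketch, wiring up a local LM Studio endpoint might look like this (the `@ai-sdk/openai-compatible` package, the `createOpenAICompatible` factory, LM Studio's default port, and the model name are assumptions to verify against the current docs):

```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

// Point the provider at any OpenAI-compatible endpoint;
// LM Studio serves one on localhost:1234 by default.
const lmstudio = createOpenAICompatible({
  name: 'lmstudio',
  baseURL: 'http://localhost:1234/v1',
});

const result = await generateText({
  model: lmstudio('qwen2.5-coder-7b-instruct'), // whatever model is loaded locally
  prompt: 'Summarize this code...',
});
```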
### Key Features

#### Unified API
Switch providers by changing a single line - the model string:
```typescript
import { generateText } from 'ai';

// OpenAI
const openaiResult = await generateText({
  model: 'openai/gpt-4o',
  prompt: 'Summarize this code...'
});

// Anthropic - just change the model string
const anthropicResult = await generateText({
  model: 'anthropic/claude-opus-4.5',
  prompt: 'Summarize this code...'
});

// Google Gemini
const geminiResult = await generateText({
  model: 'google/gemini-2.0-flash',
  prompt: 'Summarize this code...'
});
```
#### Streaming Support
First-class streaming with Server-Sent Events (SSE):
```typescript
import { streamText } from 'ai';

const result = await streamText({
  model: 'anthropic/claude-opus-4.5',
  prompt: 'Summarize this codebase...'
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```

#### Structured Outputs
Generate typed JSON with schema validation:
```typescript
import { generateObject } from 'ai';
import { z } from 'zod';

const CodeSummarySchema = z.object({
  summary: z.string(),
  keyFunctions: z.array(z.string()),
  complexity: z.enum(['low', 'medium', 'high']),
  suggestedImprovements: z.array(z.string())
});

const result = await generateObject({
  model: 'openai/gpt-4o',
  schema: CodeSummarySchema,
  prompt: 'Analyze this code: ...'
});

// result.object is fully typed
console.log(result.object.complexity); // TypeScript knows this is 'low' | 'medium' | 'high'
```
#### AI SDK 6 Features (Latest - 2026)

**Unified Tool Calling + Structured Outputs:**
```typescript
// Multi-step tool calling with structured output at the end
const result = await generateText({
  model: 'anthropic/claude-opus-4.5',
  tools: {
    analyzeDependencies: {
      description: 'Analyze package dependencies',
      parameters: z.object({ packageJson: z.string() })
    }
  },
  output: {
    type: 'object',
    schema: CodeAnalysisSchema
  },
  prompt: 'Analyze this codebase and provide structured insights'
});
```

**Agent Abstraction:**
```typescript
// Define reusable agents
const codeAnalyzer = createAgent({
  model: 'anthropic/claude-opus-4.5',
  instructions: 'You are an expert code analyzer',
  tools: { analyzeDependencies, detectPatterns }
});

// Use across application with type-safe streaming
const result = await codeAnalyzer.run(userPrompt);
```

**Schema Library Flexibility:**
Supports any schema library implementing Standard JSON Schema V1:
- Zod
- Valibot
- JSON Schema
- Custom implementations
### TypeScript Support

**Quality:** Excellent - TypeScript-first design
**Type Inference:** Full type inference from schemas to outputs
**Codebase:** 75.3% TypeScript

### API Design & Ergonomics

**Pros:**
- Intuitive function names (`generateText`, `streamText`, `generateObject`)
- Minimal boilerplate
- Consistent API across all providers
- Excellent documentation at [ai-sdk.dev](https://ai-sdk.dev/)
- Clear error messages

**Cons:**
- Retry logic doesn't respect provider-specific `retry-after` headers (uses exponential backoff)
- Default `maxRetries` is only 2
- Some advanced features require understanding the Vercel ecosystem
### Maintenance Status (2026)

- **GitHub Stars:** 21,000+
- **Contributors:** Large, active team
- **Dependencies:** 89,000+ projects
- **Recent Updates:** AI SDK 6 released with major improvements
- **NPM Downloads:** 20M+ monthly
- **Last Update:** Continuously updated (multiple releases in Jan 2026)
### Provider-Specific Quirks Handling

**Rate Limiting:**
- Built-in exponential backoff
- Default 2 retries (configurable with `maxRetries`)
- **Issue:** Doesn't respect `retry-after` headers from providers
- **Workaround:** Set `maxRetries: 0` and implement custom retry logic
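The workaround above can be sketched as a minimal wrapper that honors `Retry-After` when present and falls back to exponential backoff. Reading `responseHeaders` off the thrown error matches the general shape of the SDK's `APICallError`, but treat that property (and this wrapper as a whole) as an assumption to verify against your SDK version:

```typescript
// Compute the next retry delay: prefer the provider's Retry-After header
// (in seconds), otherwise fall back to capped exponential backoff.
function retryDelayMs(
  headers: Record<string, string> | undefined,
  attempt: number,
  baseMs = 1000,
  capMs = 30_000
): number {
  const retryAfter = headers?.['retry-after'];
  const seconds = retryAfter ? Number(retryAfter) : NaN;
  if (Number.isFinite(seconds) && seconds >= 0) {
    return Math.min(seconds * 1000, capMs);
  }
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Usage sketch: call the SDK with maxRetries: 0 inside fn, and loop here.
// (err.responseHeaders is an assumed error shape - verify against your version.)
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      if (attempt + 1 >= maxAttempts) throw err;
      const delay = retryDelayMs(err?.responseHeaders, attempt);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```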
**Authentication:**
- Provider-specific auth handled per provider package
- Environment variables or direct API key configuration
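For example (a sketch; `createOpenAI` and `createAnthropic` are the factory functions exposed by the provider packages, and the env var names below are the SDK's documented defaults - verify both against the current docs):

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';

// By default each provider reads its own env var
// (OPENAI_API_KEY, ANTHROPIC_API_KEY); keys can also be passed explicitly.
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
```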
**Retries:**
```typescript
const result = await generateText({
  model: 'openai/gpt-4o',
  maxRetries: 5, // Override default
  prompt: 'Summarize...'
});
```
### Production Readiness

**Strengths:**
- Battle-tested by major companies
- Used in production by thousands of applications
- Comprehensive error handling
- Monitoring integration with the Vercel platform

**Considerations:**
- Best when paired with Vercel infrastructure
- Some features optimized for Next.js (but works standalone)
### Code Example: Code Summarization

```typescript
import { generateText, generateObject, streamText } from 'ai';
import { z } from 'zod';

// Simple generation
async function summarizeCode(code: string, provider: 'openai' | 'anthropic') {
  const modelMap = {
    openai: 'gpt-4o',
    anthropic: 'claude-opus-4.5'
  };

  const result = await generateText({
    model: `${provider}/${modelMap[provider]}`,
    maxRetries: 3,
    prompt: `Analyze and summarize this code:\n\n${code}`
  });

  return result.text;
}

// Structured output
const SummarySchema = z.object({
  overview: z.string(),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    complexity: z.enum(['low', 'medium', 'high'])
  })),
  dependencies: z.array(z.string()),
  recommendations: z.array(z.string())
});

async function structuredCodeSummary(code: string) {
  const result = await generateObject({
    model: 'anthropic/claude-opus-4.5',
    schema: SummarySchema,
    prompt: `Provide detailed analysis: ${code}`
  });

  return result.object; // Fully typed
}

// Streaming for real-time feedback
async function streamingSummary(code: string) {
  const result = await streamText({
    model: 'openai/gpt-4o',
    prompt: `Summarize: ${code}`
  });

  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}
```
### Pros for mdcontext

✅ Industry-leading TypeScript support
✅ 20M+ monthly downloads - proven stability
✅ Unified API - change providers with one line
✅ Excellent structured output with Zod
✅ First-class streaming support
✅ Active development (AI SDK 6 just released)
✅ Comprehensive documentation
✅ Works standalone (Node.js, not just Next.js)

### Cons for mdcontext

❌ Retry logic doesn't respect provider `retry-after` headers
❌ Requires installing separate packages per provider
❌ Some features optimized for the Vercel platform
❌ More opinionated than pure client libraries

### Recommendation for mdcontext

**Rating: 9.5/10 - HIGHLY RECOMMENDED**

**Best for:** Production TypeScript applications requiring multi-provider LLM access with excellent developer experience.

**Use when:** You want industry-standard tooling with great TypeScript support, structured outputs, and streaming capabilities.
---

## 2. LangChain.js

**GitHub:** [langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs)
**NPM:** `langchain`
**License:** MIT
**Maintenance:** Active (2026)

### Overview

LangChain.js is the JavaScript/TypeScript counterpart to the popular Python LangChain framework. It provides a comprehensive framework for building LLM-powered applications with emphasis on chaining operations and RAG patterns.

### Provider Support

**Major Providers:**
- OpenAI (GPT series)
- Anthropic (Claude)
- Google (Gemini)
- Cohere
- Azure OpenAI
- Hugging Face
- Ollama (local models)
- 50+ total integrations

**Unique Features:**
- Document loaders (PDF, web, databases)
- Vector store integrations (Pinecone, Weaviate, Chroma, etc.)
- Memory systems for conversational agents
### Key Features

#### Model Abstraction
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// OpenAI
const openaiModel = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0.7
});

// Anthropic - similar interface
const claudeModel = new ChatAnthropic({
  modelName: "claude-opus-4.5",
  temperature: 0.7
});

// Use with same API
const response = await openaiModel.invoke([
  { role: "user", content: "Summarize this code..." }
]);
```
|
|
364
|
+

#### Chains and Agents
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { LLMChain } from "langchain/chains";

const template = "Summarize this code with focus on {aspect}:\n\n{code}";
const prompt = new PromptTemplate({
  template,
  inputVariables: ["aspect", "code"]
});

const chain = new LLMChain({
  llm: new ChatOpenAI({ modelName: "gpt-4o" }),
  prompt
});

const result = await chain.call({
  aspect: "performance implications",
  code: sourceCode
});
```

#### Streaming
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ streaming: true });

const stream = await model.stream("Summarize this code...");

for await (const chunk of stream) {
  console.log(chunk.content);
}
```

#### Structured Output
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const SummarySchema = z.object({
  overview: z.string(),
  complexity: z.enum(["low", "medium", "high"])
});

const model = new ChatOpenAI({ modelName: "gpt-4o" });
const structured = model.withStructuredOutput(SummarySchema);

const result = await structured.invoke("Analyze this code...");
// result is typed according to schema
```

### TypeScript Support

**Quality:** Good - Full TypeScript rewrite
**Type Inference:** Good, improving with recent versions
**Codebase:** 95.6% TypeScript

### API Design & Ergonomics

**Pros:**
- Comprehensive framework with many utilities
- Strong RAG and document processing capabilities
- Extensive ecosystem of integrations
- Familiar for Python LangChain users

**Cons:**
- More verbose than AI SDK
- Requires understanding LangChain concepts (chains, agents, prompts)
- More boilerplate code for simple tasks
- Tool calling more complex (requires `createReactAgent`)

### Maintenance Status (2026)

- **GitHub Stars:** 16,800+
- **Forks:** 3,000+
- **Contributors:** 1,065+
- **Dependencies:** 49,100+ projects
- **Recent Updates:** Active releases in 2026
- **Compatibility:** Node.js 20.x/22.x/24.x, Cloudflare Workers, Deno, Bun, browsers

### Provider-Specific Quirks Handling

**Rate Limiting:**
- Provider-specific implementations
- Configurable retry logic per model

**Authentication:**
- Environment variables
- Direct API key configuration
- Azure AD integration for Azure OpenAI

**Retries:**
```typescript
const model = new ChatOpenAI({
  maxRetries: 3,
  timeout: 60000
});
```

### Production Readiness

**Strengths:**
- Battle-tested framework
- Used by LinkedIn, Uber, Klarna, GitLab (via LangGraph)
- LangSmith integration for monitoring
- Comprehensive error handling

**Considerations:**
- Larger bundle size than simpler alternatives
- Learning curve for framework concepts
- May be overkill for simple LLM calls

### Code Example: Code Summarization

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { PromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

// Simple summarization with provider switching
class CodeSummarizer {
  private models = {
    openai: new ChatOpenAI({ modelName: "gpt-4o" }),
    anthropic: new ChatAnthropic({ modelName: "claude-opus-4.5" })
  };

  async summarize(code: string, provider: 'openai' | 'anthropic') {
    const model = this.models[provider];

    const result = await model.invoke([
      {
        role: "system",
        content: "You are an expert code analyst."
      },
      {
        role: "user",
        content: `Summarize this code:\n\n${code}`
      }
    ]);

    return result.content;
  }
}

// Structured output
const SummarySchema = z.object({
  overview: z.string(),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string()
  })),
  complexity: z.enum(["low", "medium", "high"])
});

async function structuredSummary(code: string) {
  const model = new ChatOpenAI({ modelName: "gpt-4o" });
  const structured = model.withStructuredOutput(SummarySchema);

  const result = await structured.invoke(
    `Analyze this code: ${code}`
  );

  return result; // Fully typed
}

// Chain-based approach
import { LLMChain } from "langchain/chains";

const summaryTemplate = `
You are analyzing code. Focus on {aspect}.

Code:
{code}

Provide a detailed summary.
`;

const prompt = new PromptTemplate({
  template: summaryTemplate,
  inputVariables: ["aspect", "code"]
});

const chain = new LLMChain({
  llm: new ChatOpenAI({ modelName: "gpt-4o" }),
  prompt
});

const result = await chain.call({
  aspect: "performance and scalability",
  code: sourceCode
});
```

### Pros for mdcontext

✅ Comprehensive framework with RAG capabilities
✅ 16.8k stars - proven reliability
✅ Extensive provider support
✅ Strong document processing features
✅ LangSmith monitoring integration
✅ Large community (1,065+ contributors)

### Cons for mdcontext

❌ More verbose than alternatives
❌ Steeper learning curve
❌ Heavier bundle size
❌ Tool calling requires additional setup
❌ May be overkill for simple summarization

### Recommendation for mdcontext

**Rating: 7/10 - GOOD BUT HEAVYWEIGHT**

**Best for:** Complex applications requiring RAG, document processing, or multi-step agent workflows.

**Use when:** You need comprehensive LLM orchestration capabilities beyond simple API calls, or plan to build complex agent systems.

**Skip when:** You just need clean multi-provider LLM access without the framework overhead.

---

## 3. Instructor-js

**GitHub:** [instructor-ai/instructor-js](https://github.com/instructor-ai/instructor-js)
**NPM:** `@instructor-ai/instructor`
**License:** MIT
**Maintenance:** Active (2026)

### Overview

Instructor is a specialized library focused on structured data extraction from LLMs using TypeScript and Zod schemas. It's lightweight, fast, and purpose-built for reliable structured outputs.

### Provider Support

**Primary:** OpenAI API
**Extended via llm-polyglot:**
- Anthropic
- Azure OpenAI
- Cohere
- Any OpenAI-compatible API

**Focus:** Provider support via OpenAI-compatible interfaces rather than native integrations.

### Key Features

#### Type-Safe Structured Extraction
```typescript
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";

const client = Instructor({
  client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  mode: "TOOLS" // or "FUNCTIONS", "JSON", "MD_JSON"
});

const CodeSummarySchema = z.object({
  overview: z.string().describe("High-level summary of the code"),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    complexity: z.enum(["low", "medium", "high"])
  })),
  dependencies: z.array(z.string()),
  technicalDebt: z.array(z.string()).optional()
});

const summary = await client.chat.completions.create({
  messages: [
    {
      role: "user",
      content: `Analyze this code:\n\n${sourceCode}`
    }
  ],
  model: "gpt-4o",
  response_model: {
    schema: CodeSummarySchema,
    name: "CodeSummary"
  }
});

// summary is fully typed from the Zod schema
console.log(summary.overview);
console.log(summary.functions[0].complexity); // Type-safe!
```

#### Response Modes

**TOOLS Mode (Recommended):**
```typescript
const client = Instructor({
  client: new OpenAI(),
  mode: "TOOLS" // Uses OpenAI's tool specification
});
```

**JSON Mode:**
```typescript
const client = Instructor({
  client: new OpenAI(),
  mode: "JSON" // Sets response_format to json_object
});
```

**MD_JSON Mode:**
```typescript
const client = Instructor({
  client: new OpenAI(),
  mode: "MD_JSON" // JSON embedded in Markdown code blocks
});
```
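
To make MD_JSON mode concrete: the model replies with JSON inside a fenced Markdown block, and the library extracts and parses it before validation. A minimal sketch of that extraction step (illustrative only; Instructor handles this internally):

```typescript
// Pull JSON out of a fenced Markdown block, falling back to the raw text.
function extractJsonFromMarkdown(text: string): unknown {
  const match = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const payload = match ? match[1] : text;
  return JSON.parse(payload.trim());
}
```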

#### Partial Streaming
```typescript
import { z } from "zod";

const ProgressSchema = z.object({
  currentStep: z.string(),
  progress: z.number(),
  summary: z.string().optional()
});

const stream = await client.chat.completions.create({
  messages: [{ role: "user", content: "Analyze large codebase..." }],
  model: "gpt-4o",
  response_model: { schema: ProgressSchema, name: "Progress" },
  stream: true
});

for await (const partial of stream) {
  console.log(`Step: ${partial.currentStep} - ${partial.progress}%`);
}
```

#### Validation and Retries
```typescript
const EmailSchema = z.object({
  email: z.string().email(), // Built-in validation
  verified: z.boolean()
});

// Automatic retry on validation failure
const result = await client.chat.completions.create({
  messages: [{ role: "user", content: "Extract email..." }],
  model: "gpt-4o",
  response_model: { schema: EmailSchema, name: "Email" },
  max_retries: 3 // Retry if Zod validation fails
});
```

### TypeScript Support

**Quality:** Excellent - TypeScript-first design
**Type Inference:** Full type inference from Zod schemas
**Zod Integration:** First-class (24M+ monthly downloads ecosystem)

### API Design & Ergonomics

**Pros:**
- Extremely simple API - wraps OpenAI client
- Automatic type inference from schemas
- Minimal boilerplate
- Transparent - easy to debug
- Focused on one thing (structured extraction) and does it excellently

**Cons:**
- Provider switching less elegant than AI SDK
- Requires OpenAI-compatible APIs (not native Anthropic, etc.)
- No built-in agent/chain capabilities
- Limited to structured output use case

### Maintenance Status (2026)

- **GitHub Stars:** Growing (part of instructor ecosystem)
- **Python Version:** 3M+ monthly downloads, 11k+ stars, 100+ contributors
- **TypeScript Version:** Active development
- **Community:** Strong support across languages (Python, TypeScript, Go, Ruby, Elixir, Rust)

### Provider-Specific Quirks Handling

**Rate Limiting:**
- Inherits from underlying OpenAI client
- Custom retry logic for validation failures

**Authentication:**
- Pass through OpenAI client configuration

**Provider Switching:**
```typescript
// Using Anthropic via llm-polyglot
import { createAnthropic } from 'llm-polyglot';

const anthropicClient = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

const client = Instructor({
  client: anthropicClient,
  mode: "TOOLS"
});

// Same API as OpenAI
const result = await client.chat.completions.create({
  messages: [{ role: "user", content: "..." }],
  model: "claude-opus-4.5",
  response_model: { schema: MySchema, name: "Response" }
});
```

### Production Readiness

**Strengths:**
- Lightweight and fast
- Focused scope reduces bug surface
- Battle-tested (Python version widely used)
- Easy to debug due to transparency

**Considerations:**
- Less comprehensive than full frameworks
- Provider support requires compatibility layers
- Single-purpose tool (not full LLM orchestration)

### Code Example: Code Summarization

```typescript
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";

// Define comprehensive schema
const CodeAnalysisSchema = z.object({
  summary: z.string().describe("High-level overview in 2-3 sentences"),

  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    parameters: z.array(z.string()),
    complexity: z.enum(["low", "medium", "high"]),
    testCoverage: z.enum(["none", "partial", "complete"])
  })),

  architecture: z.object({
    pattern: z.enum(["mvc", "microservices", "monolith", "serverless", "other"]),
    description: z.string()
  }),

  dependencies: z.array(z.object({
    name: z.string(),
    version: z.string().optional(),
    purpose: z.string()
  })),

  codeQuality: z.object({
    maintainability: z.number().min(1).max(10),
    technicalDebt: z.array(z.string()),
    strengths: z.array(z.string()),
    improvements: z.array(z.string())
  }),

  security: z.object({
    concerns: z.array(z.string()),
    recommendations: z.array(z.string())
  }).optional()
});

// Multi-provider wrapper
class StructuredCodeAnalyzer {
  private clients: Map<string, any> = new Map();

  constructor() {
    // OpenAI
    this.clients.set('openai', Instructor({
      client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
      mode: "TOOLS"
    }));

    // Can add other OpenAI-compatible providers
  }

  async analyze(
    code: string,
    provider: string = 'openai',
    model: string = 'gpt-4o'
  ) {
    const client = this.clients.get(provider);
    if (!client) throw new Error(`Provider ${provider} not configured`);

    const analysis = await client.chat.completions.create({
      messages: [
        {
          role: "system",
          content: "You are an expert code analyst. Provide comprehensive, accurate analysis."
        },
        {
          role: "user",
          content: `Analyze this code:\n\n\`\`\`\n${code}\n\`\`\``
        }
      ],
      model,
      response_model: {
        schema: CodeAnalysisSchema,
        name: "CodeAnalysis"
      },
      max_retries: 2
    });

    return analysis; // Fully typed!
  }

  // Streaming version
  async analyzeStreaming(code: string) {
    const client = this.clients.get('openai')!;

    const stream = await client.chat.completions.create({
      messages: [
        { role: "user", content: `Analyze: ${code}` }
      ],
      model: "gpt-4o",
      response_model: {
        schema: CodeAnalysisSchema,
        name: "CodeAnalysis"
      },
      stream: true
    });

    for await (const partial of stream) {
      // Partial results as they're generated
      if (partial.summary) {
        console.log("Summary:", partial.summary);
      }
    }
  }
}

// Usage
const analyzer = new StructuredCodeAnalyzer();
const result = await analyzer.analyze(sourceCode);

// TypeScript knows all these fields exist and their types
console.log(result.summary);
console.log(result.codeQuality.maintainability); // number
console.log(result.functions[0].complexity); // "low" | "medium" | "high"
```

### Pros for mdcontext

✅ Best-in-class structured outputs
✅ Excellent TypeScript type inference
✅ Minimal boilerplate
✅ Lightweight and fast
✅ Easy to debug (transparent wrapper)
✅ Zod ecosystem (24M+ downloads)
✅ Perfect for code analysis schemas

### Cons for mdcontext

❌ Provider switching less elegant
❌ Requires OpenAI-compatible APIs
❌ Single-purpose (structured extraction only)
❌ No built-in RAG or agent features

### Recommendation for mdcontext

**Rating: 8.5/10 - EXCELLENT FOR STRUCTURED USE CASES**

**Best for:** Applications where structured, type-safe outputs are critical (like code analysis).

**Use when:** Your primary need is reliable structured data extraction with excellent TypeScript types.

**Skip when:** You need native support for many providers or complex agent orchestration.

---

## 4. LlamaIndex.TS

**GitHub:** [run-llama/LlamaIndexTS](https://github.com/run-llama/LlamaIndexTS)
**NPM:** `llamaindex`
**License:** MIT
**Maintenance:** Active (2026)

### Overview

LlamaIndex.TS is a TypeScript data framework for LLM applications, focusing on RAG (Retrieval-Augmented Generation) and document-centric workflows. It's the TypeScript port of the popular Python LlamaIndex library.

### Provider Support

**Modular Architecture:**
- OpenAI (`@llamaindex/openai`)
- Anthropic (via provider packages)
- Google (via provider packages)
- Azure OpenAI
- Ollama (local models)
- Provider packages installed separately

**Core Focus:** Document ingestion, vector stores, and RAG patterns.

### Key Features

#### RAG Workflows
```typescript
import { VectorStoreIndex, SimpleDirectoryReader } from "llamaindex";

// Load documents
const documents = await new SimpleDirectoryReader().loadData({
  directoryPath: "./codebase"
});

// Create index
const index = await VectorStoreIndex.fromDocuments(documents);

// Query with any LLM
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query(
  "Summarize the authentication implementation"
);
```

#### Multi-Provider LLM Support
```typescript
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

// Configure global LLM
Settings.llm = new OpenAI({
  model: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY
});

// Use in queries
const response = await queryEngine.query("...");
```

#### Document Agents
```typescript
import { OpenAIAgent } from "llamaindex/agent/openai";
import { QueryEngineTool } from "llamaindex";

const agent = new OpenAIAgent({
  tools: [
    new QueryEngineTool({
      queryEngine: codebaseQueryEngine,
      metadata: {
        name: "codebase_search",
        description: "Search the codebase for relevant code"
      }
    })
  ]
});

const response = await agent.chat({
  message: "Find all authentication-related code and summarize"
});
```

### TypeScript Support

**Quality:** Good
**Type Inference:** Improving
**Focus:** Server-side TypeScript solutions

### API Design & Ergonomics

**Pros:**
- Excellent for RAG use cases
- Document processing out of the box
- Pre-built agent templates
- MCP server integration (2026)

**Cons:**
- More complex than simple LLM clients
- Heavier dependency footprint
- Learning curve for LlamaIndex concepts
- Provider switching less straightforward

### Maintenance Status (2026)

- **GitHub Stars:** ~3,000
- **Forks:** 507
- **NPM Version:** 0.12.0 (updated Dec 2025)
- **Dependent Projects:** 51
- **Recent Updates:** Agent workflows with ACP integration, native MCP search

### Provider-Specific Quirks Handling

Provider-specific handling is delegated to the individual provider packages, so it is less unified than the AI SDK's approach.

### Production Readiness

**Strengths:**
- Strong RAG capabilities
- Document agent templates
- Active 2026 development

**Considerations:**
- Smaller community than LangChain
- More experimental features
- Less mature than Python version

### Code Example: Code Summarization

```typescript
import {
  VectorStoreIndex,
  Document,
  OpenAI,
  Settings
} from "llamaindex";

// Configure LLM
Settings.llm = new OpenAI({
  model: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY
});

// Load codebase files
const codeFiles = [
  new Document({ text: file1Content, id_: "file1.ts" }),
  new Document({ text: file2Content, id_: "file2.ts" })
];

// Create vector index
const index = await VectorStoreIndex.fromDocuments(codeFiles);

// Query
const queryEngine = index.asQueryEngine();
const summary = await queryEngine.query(
  "Provide a comprehensive summary of this codebase's architecture"
);

console.log(summary.response);
```

### Pros for mdcontext

✅ Excellent RAG capabilities
✅ Document processing built-in
✅ Agent templates
✅ MCP integration (2026)

### Cons for mdcontext

❌ Smaller community than alternatives
❌ More complex for simple use cases
❌ Heavier than lightweight clients
❌ Provider support less comprehensive

### Recommendation for mdcontext

**Rating: 6.5/10 - GOOD FOR RAG-FOCUSED USE CASES**

**Best for:** Document-heavy applications requiring semantic search and RAG.

**Use when:** You need to analyze large codebases with vector search capabilities.

**Skip when:** Simple LLM API calls without RAG suffice.

---

## 5. OpenRouter

**Website:** [openrouter.ai](https://openrouter.ai)
**GitHub SDK:** [OpenRouterTeam/typescript-sdk](https://github.com/OpenRouterTeam/typescript-sdk)
**Community Kit:** [openrouter-kit](https://github.com/mmeerrkkaa/openrouter-kit)
**NPM:** `@openrouter/ai-sdk-provider` (for AI SDK integration)
**License:** Varies by SDK

### Overview

OpenRouter is a unified API gateway providing access to 300+ AI models from 60+ providers through a single endpoint. It is a service rather than a library - essentially a hosted, LiteLLM-style proxy.

### Provider Support

**300+ Models from 60+ Providers:**
- OpenAI (GPT-4o, GPT-4, GPT-3.5)
- Anthropic (Claude Opus, Sonnet, Haiku)
- Google (Gemini Pro, Flash)
- Meta (Llama models)
- Mistral
- DeepSeek
- Cohere
- Together AI
- Fireworks
- And 50+ more

**Key Advantage:** Change models without changing code or managing multiple API keys.

### Key Features

#### Single API for All Providers
```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    "HTTP-Referer": "https://mdcontext.app", // Optional
    "X-Title": "mdcontext" // Optional
  }
});

// Use OpenAI
const response1 = await openrouter.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Summarize this code..." }]
});

// Switch to Claude - just change model string
const response2 = await openrouter.chat.completions.create({
  model: "anthropic/claude-opus-4.5",
  messages: [{ role: "user", content: "Summarize this code..." }]
});

// Try DeepSeek
const response3 = await openrouter.chat.completions.create({
  model: "deepseek/deepseek-chat",
  messages: [{ role: "user", content: "Summarize this code..." }]
});
```

#### Streaming
```typescript
const stream = await openrouter.chat.completions.create({
  model: "anthropic/claude-opus-4.5",
  messages: [{ role: "user", content: "Analyze..." }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

#### Integration with Vercel AI SDK
```typescript
import { streamText } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

// Use with AI SDK
const result = await streamText({
  model: openrouter('anthropic/claude-opus-4.5'),
  prompt: 'Summarize this code...'
});
```
|
|
1218
|
+
#### OpenRouter Kit (Community Library)
|
|
1219
|
+
```typescript
|
|
1220
|
+
import { OpenRouterKit } from 'openrouter-kit';
|
|
1221
|
+
|
|
1222
|
+
const client = new OpenRouterKit({
|
|
1223
|
+
apiKey: process.env.OPENROUTER_API_KEY,
|
|
1224
|
+
defaultModel: 'anthropic/claude-opus-4.5'
|
|
1225
|
+
});
|
|
1226
|
+
|
|
1227
|
+
// Easy chat
|
|
1228
|
+
const response = await client.chat({
|
|
1229
|
+
messages: [{ role: 'user', content: 'Analyze this code...' }]
|
|
1230
|
+
});
|
|
1231
|
+
|
|
1232
|
+
// Streaming
|
|
1233
|
+
const stream = await client.chatStream({
|
|
1234
|
+
messages: [{ role: 'user', content: 'Analyze...' }]
|
|
1235
|
+
});
|
|
1236
|
+
|
|
1237
|
+
for await (const chunk of stream) {
|
|
1238
|
+
console.log(chunk);
|
|
1239
|
+
}
|
|
1240
|
+
|
|
1241
|
+
// Cost tracking
|
|
1242
|
+
const cost = client.getTotalCost();
|
|
1243
|
+
console.log(`Total cost: $${cost}`);
|
|
1244
|
+
```

#### Automatic Routing & Fallbacks
```typescript
// OpenRouter can automatically route to the best provider
const response = await openrouter.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [...],
  route: "fallback" // Automatically try alternatives if primary fails
});
```

#### Response Healing (2026 Feature)
Automatically fixes malformed JSON responses from models.
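The healing itself happens server-side. As a rough client-side analogue (an illustrative sketch of the general technique, not OpenRouter's actual implementation), "healing" often amounts to stripping the Markdown fences and trailing commas models commonly emit before parsing:

```typescript
// Illustrative client-side analogue of a "healing" pass (an assumption
// about the general technique, NOT OpenRouter's implementation).
function healJson(raw: string): unknown {
  let text = raw.trim();
  // Strip Markdown code fences the model may have wrapped around the JSON
  text = text.replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/, "");
  // Drop trailing commas before a closing brace/bracket
  text = text.replace(/,\s*([}\]])/g, "$1");
  try {
    return JSON.parse(text);
  } catch {
    return null; // caller can re-prompt the model instead
  }
}
```

Run something like this against a raw model reply before falling back to a re-prompt.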

### TypeScript Support

**Quality:** Excellent - Uses standard OpenAI SDK types
**Type Inference:** Full TypeScript support
**Multiple Options:**
- Official TypeScript SDK
- Use with OpenAI SDK
- Community libraries (openrouter-kit)
- AI SDK provider integration

### API Design & Ergonomics

**Pros:**
- Familiar OpenAI-compatible API
- Single API key for all providers
- Unified billing
- No need to manage multiple provider accounts
- Automatic fallbacks
- Cost tracking built-in
- 300+ model choices

**Cons:**
- Adds latency (proxy layer)
- Requires internet (can't use local-only)
- Costs include small OpenRouter margin
- Service dependency (not self-hosted)

### Maintenance Status (2026)

- **Service Status:** Active production service
- **Recent Updates:** Response Healing (2026), expanding model catalog
- **Sponsorship:** Super Gold sponsor at SaaStr AI 2026
- **Community:** Active development of TypeScript tooling

### Provider-Specific Quirks Handling

**Rate Limiting:**
- OpenRouter handles provider rate limits
- Unified rate limiting across providers
- Automatic fallbacks on rate limit errors

**Authentication:**
- Single API key for all providers
- No need to manage provider-specific keys

**Retries:**
- Automatic routing and fallback support
- Provider-specific error handling abstracted

### Production Readiness

**Strengths:**
- Production service (no self-hosting needed)
- Unified billing simplifies accounting
- Access to latest models immediately
- Cost tracking included
- Automatic failover

**Considerations:**
- Service dependency (not self-hosted)
- Additional latency vs direct API calls
- Small markup on provider costs
- Requires internet connectivity

### Code Example: Code Summarization

```typescript
import OpenAI from "openai";

// OpenRouter as drop-in OpenAI replacement
class CodeSummarizer {
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI({
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: process.env.OPENROUTER_API_KEY
    });
  }

  async summarize(
    code: string,
    model: string = "anthropic/claude-opus-4.5"
  ) {
    const response = await this.client.chat.completions.create({
      model,
      messages: [
        {
          role: "system",
          content: "You are an expert code analyst."
        },
        {
          role: "user",
          content: `Analyze and summarize:\n\n${code}`
        }
      ],
      temperature: 0.3
    });

    return response.choices[0].message.content;
  }

  // Multi-model comparison
  async compareModels(code: string) {
    const models = [
      "openai/gpt-4o",
      "anthropic/claude-opus-4.5",
      "google/gemini-2.0-flash",
      "deepseek/deepseek-chat"
    ];

    const summaries = await Promise.all(
      models.map(async model => ({
        model,
        summary: await this.summarize(code, model)
      }))
    );

    return summaries;
  }

  // Streaming
  async streamSummary(code: string) {
    const stream = await this.client.chat.completions.create({
      model: "anthropic/claude-opus-4.5",
      messages: [
        { role: "user", content: `Summarize: ${code}` }
      ],
      stream: true
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) process.stdout.write(content);
    }
  }
}

// With AI SDK integration
import { generateText } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

async function smartSummary(code: string) {
  // Try expensive model first, fall back to cheaper if needed
  try {
    return await generateText({
      model: openrouter('anthropic/claude-opus-4.5'),
      prompt: `Detailed analysis: ${code}`,
      maxRetries: 0
    });
  } catch (error) {
    // Fallback to cheaper model
    return await generateText({
      model: openrouter('anthropic/claude-3-5-haiku'),
      prompt: `Quick summary: ${code}`
    });
  }
}
```

### Pros for mdcontext

✅ 300+ models from single API
✅ No provider account management
✅ Unified billing and cost tracking
✅ Automatic fallbacks
✅ OpenAI-compatible (familiar API)
✅ Works with AI SDK and other libraries
✅ Latest models available immediately
✅ Response healing (2026)

### Cons for mdcontext

❌ Service dependency (not self-hosted)
❌ Added latency (proxy layer)
❌ Small cost markup
❌ Requires internet connectivity

### Recommendation for mdcontext

**Rating: 9/10 - EXCELLENT FOR MAXIMUM FLEXIBILITY**

**Best for:** Applications wanting maximum model choice with minimal code complexity.

**Use when:** You want to experiment with many models, don't want to manage provider accounts, or need automatic failover.

**Skip when:** You need the lowest possible latency or want a self-hosted solution.

---

## 6. Agentica

**GitHub:** [wrtnlabs/agentica](https://github.com/wrtnlabs/agentica)
**NPM:** `@agentica/core`
**License:** MIT
**Maintainer:** Wrtn Technologies

### Overview

Agentica is a TypeScript AI function calling framework enhanced by compiler skills. It focuses on simplicity for agentic AI with automatic schema generation.

### Provider Support

**Core:** OpenAI SDK (npm i openai)
**Compatible Providers:**
- OpenAI
- Anthropic Claude
- DeepSeek
- Meta Llama
- Any provider following OpenAI API design

**Approach:** Uses the OpenAI SDK as its foundation, since most modern LLMs follow OpenAI's API design.

### Key Features

#### Compiler-Driven Schema Generation
```typescript
import { Agentica } from "@agentica/core";

// Automatic function calling schema from TypeScript
interface CodeAnalysis {
  summary: string;
  complexity: "low" | "medium" | "high";
  functions: Array<{
    name: string;
    purpose: string;
  }>;
}

// Schema automatically generated by compiler
const agent = new Agentica({
  model: "gpt-4o",
  functions: {
    analyzeCode: async (code: string): Promise<CodeAnalysis> => {
      // Implementation
    }
  }
});
```

#### JSON Schema Conversion
Automatically handles specification differences between vendors (OpenAI, Claude, DeepSeek, etc.).
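As an illustration of what those specification differences look like (a sketch of the general problem, not Agentica's internals): OpenAI's tool objects nest the JSON Schema under `function.parameters`, while Anthropic's Messages API expects the same schema as `input_schema`, so a converter has to re-wrap it per vendor:

```typescript
// The same JSON Schema, re-wrapped per vendor. Shapes follow the public
// OpenAI and Anthropic tool-definition formats; the converter itself is
// an illustrative sketch.
interface OpenAITool {
  type: "function";
  function: { name: string; description?: string; parameters: object };
}

function toAnthropicTool(tool: OpenAITool) {
  return {
    name: tool.function.name,
    description: tool.function.description,
    input_schema: tool.function.parameters,
  };
}
```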

### TypeScript Support

**Quality:** Excellent - Compiler-driven
**Type Inference:** Automatic from function signatures
**Dependencies:** Requires `typia` for compile-time type reflection

### API Design & Ergonomics

**Pros:**
- Minimal boilerplate (compiler generates schemas)
- Simple API for function calling
- TypeScript-first design

**Cons:**
- Requires `typia` compiler setup
- Focused on function calling (not general LLM use)
- Smaller community
- Less documentation than major frameworks

### Maintenance Status (2026)

- **Status:** Active development
- **Maintainer:** Wrtn Technologies
- **Community:** Growing but smaller than major alternatives
- **Updates:** Regular maintenance

### Recommendation for mdcontext

**Rating: 6/10 - INTERESTING BUT NICHE**

**Best for:** Function calling-heavy applications with TypeScript.

**Use when:** You need automatic schema generation from TypeScript types.

**Skip when:** You need comprehensive provider support or a mature ecosystem.

---

## 7. LiteLLM Proxy (Python Backend)

**GitHub:** [BerriAI/litellm](https://github.com/BerriAI/litellm)
**Approach:** Run a Python proxy, access it from TypeScript

### Overview

Use Python's LiteLLM as a proxy server and access it via an OpenAI-compatible TypeScript client. This is the closest option to the original LiteLLM but requires Python infrastructure.

### Architecture

```
TypeScript App → LiteLLM Proxy (Python) → 100+ LLM Providers
```

### Setup

**Install LiteLLM:**
```bash
pip install 'litellm[proxy]'
```

**Start Proxy:**
```bash
litellm --model gpt-4o
# or with config
litellm --config config.yaml
```

**config.yaml:**
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  - model_name: claude-opus
    litellm_params:
      model: anthropic/claude-opus-4.5
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-2.0-flash
      api_key: os.environ/GOOGLE_API_KEY
```

### TypeScript Client

```typescript
import OpenAI from "openai";

// Point to LiteLLM proxy
const client = new OpenAI({
  baseURL: "http://localhost:4000", // LiteLLM proxy
  apiKey: "any-string" // Not used by proxy
});

// Use any configured model
const response = await client.chat.completions.create({
  model: "gpt-4o", // From config.yaml
  messages: [{ role: "user", content: "Summarize..." }]
});

// Switch to Claude
const response2 = await client.chat.completions.create({
  model: "claude-opus", // From config.yaml
  messages: [{ role: "user", content: "Summarize..." }]
});
```

### Features

✅ True LiteLLM compatibility (100+ providers)
✅ Cost tracking built-in
✅ Load balancing
✅ Guardrails support
✅ Logging and monitoring
✅ OpenAI-compatible API

❌ Requires Python infrastructure
❌ Additional service to manage
❌ Network hop (latency)
❌ More complex deployment

### Recommendation for mdcontext

**Rating: 7.5/10 - EXCELLENT IF PYTHON IS ACCEPTABLE**

**Best for:** Teams already running Python services or needing true LiteLLM feature parity.

**Use when:** You need LiteLLM's advanced features (cost tracking, load balancing, guardrails) and don't mind the Python dependency.

**Skip when:** You want a pure TypeScript stack without additional services.

---

## 8. Native Provider SDKs (Direct Approach)

### Overview

Use each provider's official TypeScript SDK directly, without an abstraction layer.

### Provider SDKs

**OpenAI:**
```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize..." }]
});
```

**Anthropic:**
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const response = await anthropic.messages.create({
  model: "claude-opus-4.5",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Summarize..." }]
});
```

**Google (Gemini):**
```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });
const result = await model.generateContent("Summarize...");
```

**DeepSeek:**
```typescript
import { DeepSeek } from "node-deepseek";

const deepseek = new DeepSeek({ apiKey: process.env.DEEPSEEK_API_KEY });
const response = await deepseek.chat.completions.create({
  model: "deepseek-chat",
  messages: [{ role: "user", content: "Summarize..." }]
});
```

**Ollama (Local):**
```typescript
import { Ollama } from "ollama";

const ollama = new Ollama();
const response = await ollama.chat({
  model: "llama3.2",
  messages: [{ role: "user", content: "Summarize..." }]
});
```

### Abstraction Layer

Create your own unified interface:

```typescript
// types.ts
export interface LLMMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export interface LLMResponse {
  content: string;
  model: string;
  usage?: {
    promptTokens: number;
    completionTokens: number;
  };
}

export interface LLMProvider {
  generate(messages: LLMMessage[]): Promise<LLMResponse>;
  stream(messages: LLMMessage[]): AsyncGenerator<string>;
}

// providers/openai.ts
import OpenAI from "openai";
import type { LLMProvider, LLMMessage, LLMResponse } from "../types";

export class OpenAIProvider implements LLMProvider {
  private client: OpenAI;
  private model: string;

  constructor(apiKey: string, model: string = "gpt-4o") {
    this.client = new OpenAI({ apiKey });
    this.model = model;
  }

  async generate(messages: LLMMessage[]): Promise<LLMResponse> {
    const response = await this.client.chat.completions.create({
      model: this.model,
      messages
    });

    return {
      content: response.choices[0].message.content || "",
      model: this.model,
      usage: {
        promptTokens: response.usage?.prompt_tokens || 0,
        completionTokens: response.usage?.completion_tokens || 0
      }
    };
  }

  async *stream(messages: LLMMessage[]): AsyncGenerator<string> {
    const stream = await this.client.chat.completions.create({
      model: this.model,
      messages,
      stream: true
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) yield content;
    }
  }
}

// providers/anthropic.ts
import Anthropic from "@anthropic-ai/sdk";
import type { LLMProvider, LLMMessage, LLMResponse } from "../types";

export class AnthropicProvider implements LLMProvider {
  private client: Anthropic;
  private model: string;

  constructor(apiKey: string, model: string = "claude-opus-4.5") {
    this.client = new Anthropic({ apiKey });
    this.model = model;
  }

  async generate(messages: LLMMessage[]): Promise<LLMResponse> {
    // Convert messages format
    const anthropicMessages = messages.filter(m => m.role !== "system");
    const systemMessage = messages.find(m => m.role === "system")?.content;

    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: 4096,
      system: systemMessage,
      messages: anthropicMessages
    });

    return {
      content: response.content[0].type === "text"
        ? response.content[0].text
        : "",
      model: this.model,
      usage: {
        promptTokens: response.usage.input_tokens,
        completionTokens: response.usage.output_tokens
      }
    };
  }

  async *stream(messages: LLMMessage[]): AsyncGenerator<string> {
    const anthropicMessages = messages.filter(m => m.role !== "system");
    const systemMessage = messages.find(m => m.role === "system")?.content;

    const stream = await this.client.messages.create({
      model: this.model,
      max_tokens: 4096,
      system: systemMessage,
      messages: anthropicMessages,
      stream: true
    });

    for await (const event of stream) {
      if (
        event.type === "content_block_delta" &&
        event.delta.type === "text_delta"
      ) {
        yield event.delta.text;
      }
    }
  }
}

// llm-service.ts
export class LLMService {
  private providers: Map<string, LLMProvider> = new Map();

  constructor() {
    // Initialize providers
    this.providers.set(
      "openai",
      new OpenAIProvider(process.env.OPENAI_API_KEY!)
    );
    this.providers.set(
      "anthropic",
      new AnthropicProvider(process.env.ANTHROPIC_API_KEY!)
    );
  }

  async generate(
    messages: LLMMessage[],
    provider: string = "openai"
  ): Promise<LLMResponse> {
    const llm = this.providers.get(provider);
    if (!llm) throw new Error(`Provider ${provider} not found`);

    return llm.generate(messages);
  }

  async *stream(
    messages: LLMMessage[],
    provider: string = "openai"
  ): AsyncGenerator<string> {
    const llm = this.providers.get(provider);
    if (!llm) throw new Error(`Provider ${provider} not found`);

    yield* llm.stream(messages);
  }
}

// Usage
const llm = new LLMService();

// OpenAI
const result1 = await llm.generate(
  [{ role: "user", content: "Summarize..." }],
  "openai"
);

// Anthropic
const result2 = await llm.generate(
  [{ role: "user", content: "Summarize..." }],
  "anthropic"
);

// Streaming
for await (const chunk of llm.stream(
  [{ role: "user", content: "Analyze..." }],
  "anthropic"
)) {
  process.stdout.write(chunk);
}
```

### Pros

✅ Direct access to provider features
✅ Lowest latency (no abstraction layer)
✅ Official SDKs (best documentation)
✅ Full control over implementation
✅ No third-party dependencies

### Cons

❌ Manual abstraction layer maintenance
❌ Different APIs per provider
❌ More code to write
❌ Must handle provider quirks yourself
❌ No built-in retry/fallback logic

### Recommendation for mdcontext

**Rating: 7/10 - GOOD FOR SPECIFIC NEEDS**

**Best for:** Applications needing direct provider access or provider-specific features.

**Use when:** You want maximum control and minimal dependencies.

**Skip when:** You prefer battle-tested abstractions and faster development.

---

## Comparison Matrix

| Feature | AI SDK | LangChain.js | Instructor-js | LlamaIndex.TS | OpenRouter | Agentica | LiteLLM Proxy | Native SDKs |
|---------|--------|--------------|---------------|---------------|------------|----------|---------------|-------------|
| **Provider Support** | 25+ official, 30+ community | 50+ | OpenAI-compatible | Modular | 300+ | OpenAI-compatible | 100+ | Provider-specific |
| **TypeScript Quality** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| **Ease of Use** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| **Streaming** | ✅ Excellent | ✅ Good | ✅ Good | ✅ Good | ✅ Excellent | ❌ Limited | ✅ Good | ✅ Varies |
| **Structured Outputs** | ✅ Zod/Valibot/JSON | ✅ Zod | ⭐ Best (Zod-focused) | ✅ Basic | ✅ Via SDK | ✅ Function calling | ✅ Via providers | ✅ Via providers |
| **Rate Limiting** | Exponential backoff | Per provider | Via OpenAI SDK | Per provider | Handled by service | Basic | ✅ Advanced | Manual |
| **Retries** | Built-in (doesn't respect retry-after) | Configurable | Built-in | Configurable | Automatic fallback | Basic | ✅ Smart | Manual |
| **NPM Downloads** | 20M+/month | Active | Growing | 51 dependents | N/A (service) | Small | N/A (Python) | Varies |
| **GitHub Stars** | 21k+ | 16.8k+ | Growing | 3k+ | N/A | Small | 15k+ (Python) | Official |
| **Bundle Size** | Medium | Large | Small | Large | Tiny (client only) | Small | Tiny (client only) | Varies |
| **Learning Curve** | Low | Medium-High | Low | Medium | Very Low | Medium | Low | Low |
| **RAG Support** | ❌ | ✅ Excellent | ❌ | ⭐ Best | ❌ | ❌ | ❌ | ❌ |
| **Agent Support** | ✅ AI SDK 6 | ✅ LangGraph | ❌ | ✅ Templates | ❌ | ⭐ Focused | ❌ | ❌ |
| **Cost Tracking** | ❌ | Via LangSmith | ❌ | ❌ | ✅ Built-in | ❌ | ⭐ Best | Manual |
| **Maintenance** | ⭐ Very Active | ⭐ Active | Active | Active | ⭐ Active Service | Active | ⭐ Active | Official |
| **Production Ready** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| **Best For** | General use | Complex workflows | Structured data | RAG applications | Maximum flexibility | Function calling | LiteLLM parity | Full control |

---

## Recommendations for mdcontext

### 🥇 Primary Recommendation: Vercel AI SDK

**Why:**
1. **Industry Standard:** 20M+ monthly downloads, proven at scale
2. **TypeScript Excellence:** Best-in-class TypeScript support with full type inference
3. **Developer Experience:** Intuitive API, minimal boilerplate, excellent docs
4. **Provider Flexibility:** 25+ official providers, easy to switch with one line
5. **Structured Outputs:** First-class Zod integration for code analysis schemas
6. **Streaming:** Excellent streaming support via SSE
7. **Future-Proof:** Active development (AI SDK 6 just released in 2026)
8. **Production Ready:** Battle-tested by major companies

**Implementation:**
```typescript
import { generateObject } from 'ai';
import { z } from 'zod';

const CodeSummarySchema = z.object({
  summary: z.string(),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    complexity: z.enum(['low', 'medium', 'high'])
  })),
  recommendations: z.array(z.string())
});

async function summarizeCode(
  code: string,
  provider: 'openai' | 'anthropic' | 'google' = 'anthropic'
) {
  const modelMap = {
    openai: 'gpt-4o',
    anthropic: 'claude-opus-4.5',
    google: 'gemini-2.0-flash'
  };

  const result = await generateObject({
    model: `${provider}/${modelMap[provider]}`,
    schema: CodeSummarySchema,
    prompt: `Analyze this code:\n\n${code}`,
    maxRetries: 3
  });

  return result.object; // Fully typed!
}
```

**When to Reconsider:**
- If you need provider-specific features not exposed by AI SDK
- If you're already heavily invested in LangChain ecosystem
- If you need self-hosted LiteLLM-specific features

---

### 🥈 Alternative: OpenRouter

**Why:**
1. **Maximum Flexibility:** 300+ models from single API
2. **Simplicity:** OpenAI-compatible, familiar API
3. **No Account Management:** Single API key for all providers
4. **Cost Tracking:** Built-in cost monitoring
5. **Automatic Fallbacks:** Provider failover included
6. **AI SDK Compatible:** Can use with `@openrouter/ai-sdk-provider`

**Implementation:**
```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY
});

// Try multiple models easily
async function summarizeWithBestModel(code: string) {
  const models = [
    "anthropic/claude-opus-4.5",
    "openai/gpt-4o",
    "google/gemini-2.0-flash"
  ];

  for (const model of models) {
    try {
      const response = await openrouter.chat.completions.create({
        model,
        messages: [{ role: "user", content: `Summarize: ${code}` }]
      });
      return response.choices[0].message.content;
    } catch (error) {
      console.log(`${model} failed, trying next...`);
    }
  }
}
```

**When to Reconsider:**
- If latency is critical (adds proxy hop)
- If you need self-hosted solution
- If cost markup is a concern

---

### 🥉 Third Option: Instructor-js (For Structured Output Focus)

**Why:**
1. **Best Structured Outputs:** Purpose-built for reliable extraction
2. **Type Safety:** Excellent Zod integration with full inference
3. **Lightweight:** Minimal dependencies, fast
4. **Transparent:** Easy to debug, simple wrapper
5. **Perfect for Code Analysis:** Ideal for structured code summaries

**Implementation:**
```typescript
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";

const client = Instructor({
  client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  mode: "TOOLS"
});

const DetailedAnalysisSchema = z.object({
  overview: z.string(),
  architecture: z.object({
    pattern: z.enum(['mvc', 'microservices', 'monolith', 'other']),
    description: z.string()
  }),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    complexity: z.enum(['low', 'medium', 'high']),
    testCoverage: z.enum(['none', 'partial', 'complete'])
  })),
  codeQuality: z.object({
    maintainability: z.number().min(1).max(10),
    technicalDebt: z.array(z.string()),
    improvements: z.array(z.string())
  })
});

const analysis = await client.chat.completions.create({
  messages: [{ role: "user", content: `Analyze: ${code}` }],
  model: "gpt-4o",
  response_model: {
    schema: DetailedAnalysisSchema,
    name: "CodeAnalysis"
  },
  max_retries: 2
});

// analysis is fully typed from schema!
console.log(analysis.codeQuality.maintainability); // number
```
|
|
2102
|
+
|
|
2103
|
+
**When to Reconsider:**
|
|
2104
|
+
- If you need native multi-provider support
|
|
2105
|
+
- If you need RAG or complex agent workflows
|
|
2106
|
+
- If provider switching is more important than structured outputs
|
|
2107
|
+
|
|
2108
|
+
---
|
|
2109
|
+
|
|
2110
|
+
### 🔧 Hybrid Approach (Recommended for Maximum Flexibility)
|
|
2111
|
+
|
|
2112
|
+
**Combine AI SDK + OpenRouter:**
|
|
2113
|
+
|
|
2114
|
+
```typescript
|
|
2115
|
+
import { generateObject, streamText } from 'ai';
|
|
2116
|
+
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
|
|
2117
|
+
import { z } from 'zod';
|
|
2118
|
+
|
|
2119
|
+
const openrouter = createOpenRouter({
|
|
2120
|
+
apiKey: process.env.OPENROUTER_API_KEY
|
|
2121
|
+
});
|
|
2122
|
+
|
|
2123
|
+
// Use AI SDK's excellent API with OpenRouter's 300+ models
|
|
2124
|
+
const CodeSummarySchema = z.object({
|
|
2125
|
+
summary: z.string(),
|
|
2126
|
+
complexity: z.enum(['low', 'medium', 'high'])
|
|
2127
|
+
});
|
|
2128
|
+
|
|
2129
|
+
async function summarize(code: string, modelId: string) {
|
|
2130
|
+
const result = await generateObject({
|
|
2131
|
+
model: openrouter(modelId), // Any of 300+ models
|
|
2132
|
+
schema: CodeSummarySchema,
|
|
2133
|
+
prompt: `Summarize: ${code}`
|
|
2134
|
+
});
|
|
2135
|
+
|
|
2136
|
+
return result.object;
|
|
2137
|
+
}
|
|
2138
|
+
|
|
2139
|
+
// Try different models easily
|
|
2140
|
+
const claude = await summarize(code, 'anthropic/claude-opus-4.5');
|
|
2141
|
+
const gpt4 = await summarize(code, 'openai/gpt-4o');
|
|
2142
|
+
const gemini = await summarize(code, 'google/gemini-2.0-flash');
|
|
2143
|
+
const deepseek = await summarize(code, 'deepseek/deepseek-chat');
|
|
2144
|
+
```
|
|
2145
|
+
|
|
2146
|
+
**Benefits:**
|
|
2147
|
+
- AI SDK's excellent TypeScript and API design
|
|
2148
|
+
- OpenRouter's 300+ model catalog
|
|
2149
|
+
- Best of both worlds
|
|
2150
|
+

---

## Implementation Roadmap for mdcontext

### Phase 1: Start with AI SDK (Week 1)

**Install:**
```bash
npm install ai @ai-sdk/anthropic @ai-sdk/openai zod
```

**Basic Implementation:**
```typescript
// src/llm/summarizer.ts
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export const CodeSummarySchema = z.object({
  summary: z.string(),
  functions: z.array(z.object({
    name: z.string(),
    purpose: z.string(),
    complexity: z.enum(['low', 'medium', 'high'])
  })),
  dependencies: z.array(z.string()),
  recommendations: z.array(z.string())
});

export type CodeSummary = z.infer<typeof CodeSummarySchema>;

export async function summarizeCode(
  code: string,
  options: {
    provider?: 'openai' | 'anthropic';
    model?: string;
  } = {}
): Promise<CodeSummary> {
  const { provider = 'anthropic', model } = options;

  // Resolve a model instance from the imported provider helpers
  const llm = provider === 'openai'
    ? openai(model || 'gpt-4o')
    : anthropic(model || 'claude-opus-4.5');

  const result = await generateObject({
    model: llm,
    schema: CodeSummarySchema,
    prompt: `Analyze and summarize this code:\n\n${code}`,
    maxRetries: 3
  });

  return result.object;
}
```

**Test:**
```typescript
const summary = await summarizeCode(sourceCode, {
  provider: 'anthropic'
});

console.log(summary.summary);
console.log(summary.functions[0].complexity); // Type-safe!
```

### Phase 2: Add OpenRouter for Flexibility (Week 2)

**Install:**
```bash
npm install @openrouter/ai-sdk-provider
```

**Enhance:**
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

export async function summarizeWithAnyModel(
  code: string,
  modelId: string
): Promise<CodeSummary> {
  const result = await generateObject({
    model: openrouter(modelId),
    schema: CodeSummarySchema,
    prompt: `Analyze: ${code}`,
    maxRetries: 3
  });

  return result.object;
}

// Now support 300+ models
await summarizeWithAnyModel(code, 'deepseek/deepseek-chat');
await summarizeWithAnyModel(code, 'meta/llama-3.2-90b');
```

### Phase 3: Add Streaming for UX (Week 3)

```typescript
import { streamText } from 'ai';

export async function* streamCodeSummary(
  code: string,
  modelId: string = 'anthropic/claude-opus-4.5'
) {
  const result = streamText({
    model: modelId,
    prompt: `Summarize this code:\n\n${code}`
  });

  for await (const chunk of result.textStream) {
    yield chunk;
  }
}

// Usage
for await (const chunk of streamCodeSummary(code)) {
  process.stdout.write(chunk);
}
```
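Streaming chunks to stdout suits an interactive CLI, but some callers (file output, caching, tests) want the complete text. A small generic drain helper, not an AI SDK API, bridges the two:

```typescript
// Drain an async iterable of string chunks (such as the
// streamCodeSummary generator) into a single string.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk;
  }
  return out;
}
```

For example, `await collect(streamCodeSummary(code))` yields the same full text as a non-streaming call while still using the streaming path.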

### Phase 4: Production Hardening (Week 4)

**Error Handling:**
```typescript
import { APICallError } from 'ai';

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

export async function robustSummarize(
  code: string,
  options: SummarizeOptions
): Promise<CodeSummary> {
  try {
    return await summarizeCode(code, options);
  } catch (error) {
    if (error instanceof APICallError) {
      // Rate limited: honor the Retry-After header, then retry once
      if (error.statusCode === 429) {
        const retryAfter = error.responseHeaders?.['retry-after'];
        if (retryAfter) {
          await sleep(parseInt(retryAfter, 10) * 1000);
          return summarizeCode(code, options);
        }
      }

      // Fall back to a different provider
      if (options.provider === 'openai') {
        return summarizeCode(code, { ...options, provider: 'anthropic' });
      }
    }

    throw error;
  }
}
```
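When a 429 arrives without a Retry-After header, a common fallback is exponential backoff with jitter. A hedged sketch (names and defaults are illustrative, not from any SDK):

```typescript
// Exponential backoff with full jitter: the cap doubles per attempt,
// and the actual sleep is a random fraction of it to avoid thundering herds.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}

// Retry an async call up to maxAttempts times, sleeping between attempts.
async function retryWithBackoff<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastError;
}
```

Wrapping `summarizeCode` in `retryWithBackoff` covers transient failures generically, while the header-based path above remains the better choice when the provider tells you how long to wait.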

**Cost Tracking:**
```typescript
export async function summarizeWithCost(
  code: string,
  options: SummarizeOptions
): Promise<{ summary: CodeSummary; cost: number }> {
  const result = await generateObject({
    model: `${options.provider}/${options.model}`,
    schema: CodeSummarySchema,
    prompt: `Analyze: ${code}`
  });

  // Estimate cost from token usage (calculateCost is an app-defined helper)
  const cost = calculateCost(
    result.usage?.promptTokens || 0,
    result.usage?.completionTokens || 0,
    options.model
  );

  return {
    summary: result.object,
    cost
  };
}
```
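The `calculateCost` helper above is left undefined. A minimal sketch of what it could look like; the per-million-token prices below are placeholders that would need to be kept in sync with actual provider pricing:

```typescript
// Placeholder price table: USD per million input/output tokens.
// These numbers are illustrative assumptions, not authoritative pricing.
const PRICES_PER_MTOK: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "claude-opus-4.5": { input: 15, output: 75 }
};

// Estimate request cost from token counts; returns 0 for unknown models
// rather than guessing.
function calculateCost(promptTokens: number, completionTokens: number, model: string): number {
  const price = PRICES_PER_MTOK[model];
  if (!price) return 0;
  return (promptTokens * price.input + completionTokens * price.output) / 1_000_000;
}
```

Keying the table by bare model name matches how `options.model` is passed above; a production version would likely load prices from configuration instead of hardcoding them.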

---

## Key Takeaways

### For mdcontext Code Summarization API:

1. **Start with Vercel AI SDK**
   - Industry-standard TypeScript support
   - Clean API for provider switching
   - Excellent structured output with Zod
   - 20M+ downloads = proven reliability

2. **Add OpenRouter for Experimentation**
   - Test 300+ models with the same code
   - No per-provider account management
   - Easy cost comparison
   - Automatic fallbacks

3. **Consider Instructor-js for Complex Schemas**
   - If code analysis schemas become very complex
   - Best type inference for nested structures
   - Lightweight addition to the stack

4. **Avoid Over-Engineering**
   - Don't use LangChain unless you need RAG/complex agents
   - Don't use LlamaIndex unless document processing is core
   - Don't build a custom abstraction if the AI SDK suffices

5. **Production Checklist**
   - ✅ Structured outputs with Zod schemas
   - ✅ Provider fallbacks for reliability
   - ✅ Streaming for better UX
   - ✅ Error handling for rate limits
   - ✅ Cost tracking per request
   - ✅ TypeScript type safety everywhere

---

## Sources

### Vercel AI SDK
- [AI SDK by Vercel](https://ai-sdk.dev/docs/introduction)
- [AI SDK](https://vercel.com/docs/ai-sdk)
- [AI SDK 6 - Vercel](https://vercel.com/blog/ai-sdk-6)
- [GitHub - vercel/ai](https://github.com/vercel/ai)
- [Foundations: Providers and Models - AI SDK](https://ai-sdk.dev/docs/foundations/providers-and-models)
- [ai - npm](https://www.npmjs.com/package/ai)

### LangChain.js
- [LangChain overview - Docs by LangChain](https://docs.langchain.com/oss/javascript/langchain/overview)
- [GitHub - langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs)
- [langchain - npm](https://www.npmjs.com/package/langchain)
- [LangChain vs Vercel AI SDK: A Developer's Ultimate Guide](https://www.templatehub.dev/blog/langchain-vs-vercel-ai-sdk-a-developers-ultimate-guide-2561)

### Instructor-js
- [Instructor - Multi-Language Library for Structured LLM Outputs](https://python.useinstructor.com/)
- [GitHub - instructor-ai/instructor-js](https://github.com/instructor-ai/instructor-js)
- [Welcome To Instructor - Instructor (JS)](https://js.useinstructor.com/)
- [Why use Instructor? - Instructor (JS)](https://instructor-ai.github.io/instructor-js/why/)

### LlamaIndex.TS
- [GitHub - run-llama/LlamaIndexTS](https://github.com/run-llama/LlamaIndexTS)
- [Welcome to LlamaIndex.TS](https://developers.llamaindex.ai/typescript/framework/)
- [llamaindex - npm](https://www.npmjs.com/package/llamaindex)

### OpenRouter
- [OpenRouter: A Unified Interface for LLMs - KDnuggets](https://www.kdnuggets.com/openrouter-a-unified-interface-for-llms)
- [Call Model Overview (Typescript) | OpenRouter SDK](https://openrouter.ai/docs/sdks/call-model/overview)
- [GitHub - mmeerrkkaa/openrouter-kit](https://github.com/mmeerrkkaa/openrouter-kit)
- [OpenRouter](https://openrouter.ai)
- [Streaming | OpenRouter SDK](https://openrouter.ai/docs/sdks/typescript/call-model/streaming)

### Agentica
- [GitHub - wrtnlabs/agentica](https://github.com/wrtnlabs/agentica)
- [@agentica/core - npm](https://www.npmjs.com/package/@agentica/core)
- [Agentica > Guide Documents > Core Library > LLM Vendors](https://wrtnlabs.io/agentica/docs/core/vendor/)

### LiteLLM
- [LiteLLM Proxy (LLM Gateway)](https://docs.litellm.ai/docs/providers/litellm_proxy)
- [GitHub - BerriAI/litellm](https://github.com/BerriAI/litellm)
- [Cookbook - LiteLLM (Proxy) + Langfuse OpenAI Integration (JS/TS)](https://langfuse.com/guides/cookbook/js_integration_litellm_proxy)

### Provider-Specific
- [GitHub - m-alhoomaidi/node-deepseek](https://github.com/m-alhoomaidi/node-deepseek)
- [Simple Agent Function-Calling with DeepSeek-V3 in TypeScript](https://medium.com/@wickerwobber/simple-agent-function-calling-with-deepseek-v3-in-typescript-38a5914c3cf3)
- [GitHub - ollama/ollama-js](https://github.com/ollama/ollama-js)
- [Using Ollama with TypeScript: A Simple Guide](https://medium.com/@jonigl/using-ollama-with-typescript-a-simple-guide-20f5e8d3827c)

### Comparisons & Analysis
- [AI Framework Comparison: AI SDK, Genkit and Langchain](https://komelin.com/blog/ai-framework-comparison)
- [14 AI Agent Frameworks Compared](https://softcery.com/lab/top-14-ai-agent-frameworks-of-2025-a-founders-guide-to-building-smarter-systems)
- [The Top 15 LangChain Alternatives in 2026](https://www.vellum.ai/blog/top-langchain-alternatives)
- [Comparing AI SDKs for React: Vercel, LangChain, Hugging Face](https://dev.to/brayancodes/comparing-ai-sdks-for-react-vercel-langchain-hugging-face-5g66)

---

**Document Status:** Complete
**Last Updated:** January 26, 2026
**Maintainer:** mdcontext research team
**Next Review:** When considering new LLM integrations or major version updates