@techwavedev/agi-agent-kit 1.3.4 → 1.3.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +1 -1
- package/LICENSE +0 -0
- package/README.md +28 -79
- package/package.json +1 -1
- package/templates/.agent/agents/backend-specialist.md +0 -0
- package/templates/.agent/agents/code-archaeologist.md +0 -0
- package/templates/.agent/agents/database-architect.md +0 -0
- package/templates/.agent/agents/debugger.md +0 -0
- package/templates/.agent/agents/devops-engineer.md +0 -0
- package/templates/.agent/agents/documentation-writer.md +0 -0
- package/templates/.agent/agents/explorer-agent.md +0 -0
- package/templates/.agent/agents/frontend-specialist.md +0 -0
- package/templates/.agent/agents/game-developer.md +0 -0
- package/templates/.agent/agents/mobile-developer.md +0 -0
- package/templates/.agent/agents/orchestrator.md +0 -0
- package/templates/.agent/agents/penetration-tester.md +0 -0
- package/templates/.agent/agents/performance-optimizer.md +0 -0
- package/templates/.agent/agents/product-manager.md +0 -0
- package/templates/.agent/agents/project-planner.md +0 -0
- package/templates/.agent/agents/qa-automation-engineer.md +0 -0
- package/templates/.agent/agents/security-auditor.md +0 -0
- package/templates/.agent/agents/seo-specialist.md +0 -0
- package/templates/.agent/agents/test-engineer.md +0 -0
- package/templates/.agent/rules/GEMINI.md +0 -0
- package/templates/.agent/workflows/brainstorm.md +0 -0
- package/templates/.agent/workflows/create.md +0 -0
- package/templates/.agent/workflows/debug.md +0 -0
- package/templates/.agent/workflows/deploy.md +0 -0
- package/templates/.agent/workflows/enhance.md +0 -0
- package/templates/.agent/workflows/orchestrate.md +0 -0
- package/templates/.agent/workflows/plan.md +0 -0
- package/templates/.agent/workflows/preview.md +0 -0
- package/templates/.agent/workflows/status.md +0 -0
- package/templates/.agent/workflows/test.md +0 -0
- package/templates/.agent/workflows/ui-ux-pro-max.md +0 -0
- package/templates/base/.env.example +0 -0
- package/templates/base/README.md +28 -79
- package/templates/base/directives/memory_integration.md +0 -0
- package/templates/base/execution/memory_manager.py +0 -0
- package/templates/base/execution/session_boot.py +0 -0
- package/templates/base/execution/session_init.py +0 -0
- package/templates/base/requirements.txt +0 -0
- package/templates/skills/core/README.md +0 -0
- package/templates/skills/knowledge/doc.md +0 -0
- package/templates/skills/extended/frontend/ui-ux-pro-max/scripts/__pycache__/core.cpython-314.pyc +0 -0
- package/templates/skills/extended/frontend/ui-ux-pro-max/scripts/__pycache__/design_system.cpython-314.pyc +0 -0
package/CHANGELOG.md (CHANGED)

@@ -5,7 +5,7 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [1.3.
+## [1.3.5] - 2026-02-16
 
 ### Added
package/LICENSE

File without changes
package/README.md (CHANGED)

@@ -1,13 +1,37 @@
-# AGI Agent Kit
+# 🚀 AGI Agent Kit
 
-**
+> **Stop hallucinating. Start executing.**
 
 [](https://www.npmjs.com/package/@techwavedev/agi-agent-kit)
+[](https://www.npmjs.com/package/@techwavedev/agi-agent-kit)
 [](https://opensource.org/licenses/Apache-2.0)
+[](https://claude.ai)
+[](https://github.com/google-gemini/gemini-cli)
+[](https://github.com/openai/codex)
+[](https://cursor.sh)
+[](https://github.com/features/copilot)
+[](https://github.com/opencode-ai/opencode)
+[](https://github.com/techwavedev/agi-agent-kit)
+[](https://sylph.ai/)
+[](https://www.buymeacoffee.com/eltonmachado)
 
-
+**AGI Agent Kit** is the enterprise-grade scaffolding that turns any AI coding assistant into a **deterministic production machine**. While LLMs are probabilistic (90% accuracy per step = 59% over 5 steps), this framework forces them through a **3-Layer Architecture** (Intent → Orchestration → Execution) where business logic lives in tested scripts, not hallucinated code.
 
-
+### Why this exists
+
+Most AI coding setups give you a prompt and hope for the best. AGI Agent Kit gives you:
+
+- 🧠 **Semantic Memory**: Qdrant-powered caching that eliminates redundant LLM calls (90-100% token savings)
+- 🎯 **19 Specialist Agents**: Domain-bounded experts (Frontend, Backend, Security, Mobile, Game Dev...) with enforced file ownership
+- ⚡ **861 Curated Skills**: 4 core + 75 professional + 782 community skills across 16 domain categories
+- 🔒 **Verification Gates**: No task completes without evidence. TDD enforcement. Two-stage code review.
+- 🌐 **8 Platforms, One Config**: Write once, run on Claude Code, Gemini CLI, Codex CLI, Cursor, Copilot, OpenCode, AdaL CLI, Antigravity IDE
+
+```bash
+npx @techwavedev/agi-agent-kit init
+```
+
+If this project helps you, consider [supporting it here](https://www.buymeacoffee.com/eltonmachado) or simply ⭐ the repo.
 
 ---
 

@@ -154,81 +178,6 @@ python3 execution/memory_manager.py auto \
 # → {"source": "cache", "cache_hit": true, "tokens_saved_estimate": 12}
 ```
 
----
-
-## 🧪 Real Benchmark: Subagents vs Agent Teams
-
-The framework supports two orchestration modes. Here are **real test results** from `execution/benchmark_modes.py` running on local infrastructure (Qdrant + Ollama `nomic-embed-text`, zero cloud API calls):
-
-```
-MODE A: SUBAGENTS - Independent, fire-and-forget
-🤖 Explore Auth Patterns → ✅ stored in cache + memory (127ms)
-🤖 Query Performance → ❌ FAILED (timeout → fault tolerant)
-🤖 Scan CVEs → ✅ stored in cache + memory (14ms)
-Summary: 2/3 completed, 1 failed, 0 cross-references
-
-MODE B: AGENT TEAMS - Shared context, coordinated
-🤖 Backend Specialist → ✅ stored in shared memory (14ms)
-🤖 Database Specialist → ✅ stored in shared memory (13ms)
-🤖 Frontend Specialist → 📖 Read Backend + Database output first
-   ✅ Got context from team-backend: "API contract: POST /api/messages..."
-   ✅ Got context from team-database: "Schema: users(id UUID PK, name..."
-   → ✅ stored in shared memory (14ms)
-Summary: 3/3 completed, 0 failed, 2 cross-references
-```
-
-**2nd run (cache warm):** All queries hit cache at **score 1.000**, reducing total time from 314ms → 76ms (Subagents) and 292ms → 130ms (Agent Teams).
-
-| Metric | Subagents | Agent Teams |
-| -------------------- | ------------------------------------ | ------------------------------------ |
-| Execution model | Fire-and-forget (isolated) | Shared context (coordinated) |
-| Tasks completed | 2/3 (fault tolerant) | 3/3 |
-| Cross-references | 0 (not supported) | 2 (peers read each other's work) |
-| Context sharing | ❌ Each agent isolated | ✅ Peer-to-peer via Qdrant |
-| Two-stage review | ❌ | ✅ Spec + Quality |
-| Cache hits (2nd run) | 5/5 | 5/5 |
-| Embedding provider | Ollama local (nomic-embed-text 137M) | Ollama local (nomic-embed-text 137M) |
-
-**Try it yourself:**
-
-```bash
-# 1. Start infrastructure
-docker run -d -p 6333:6333 -v qdrant_storage:/qdrant/storage qdrant/qdrant
-ollama serve & ollama pull nomic-embed-text
-
-# 2. Boot memory system
-python3 execution/session_boot.py --auto-fix
-# ✅ Memory system ready: 5 memories, 1 cached responses
-
-# 3. Run the full benchmark (both modes)
-python3 execution/benchmark_modes.py --verbose
-
-# 4. Or test individual operations:
-
-# Store a decision (embedding generated locally via Ollama)
-python3 execution/memory_manager.py store \
-  --content "Chose PostgreSQL for relational data" \
-  --type decision --project myapp
-# → {"status": "stored", "point_id": "...", "token_count": 5}
-
-# Auto-query: checks cache first, then retrieves context
-python3 execution/memory_manager.py auto \
-  --query "what database did we choose?"
-# → {"source": "memory", "cache_hit": false, "context_chunks": [...]}
-
-# Cache an LLM response for future reuse
-python3 execution/memory_manager.py cache-store \
-  --query "how to set up auth?" \
-  --response "Use JWT with 24h expiry, refresh tokens in httpOnly cookies"
-
-# Re-query: instant cache hit (score 1.000, zero re-computation)
-python3 execution/memory_manager.py auto \
-  --query "how to set up auth?"
-# → {"source": "cache", "cache_hit": true, "tokens_saved_estimate": 12}
-```
-
----
-
 ## 🌐 Platform Support
 
 The framework automatically detects your AI coding environment and activates the best available features.
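The compounding-reliability figure quoted in the README pitch ("90% accuracy per step = 59% over 5 steps") is just independent per-step success probabilities multiplied together; a quick sanity check:

```python
# If each agent step succeeds independently with probability 0.9,
# the chance that all 5 steps of a chain succeed is 0.9 ** 5.
per_step = 0.9
steps = 5
end_to_end = per_step ** steps
print(f"{end_to_end:.0%}")  # 59%
```

The exact value is 0.59049, which is where the "59% over 5 steps" claim comes from.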
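The cache-first `auto` flow that the removed benchmark section exercises (check the semantic cache first, then fall back to memory retrieval) can be sketched in miniature. This mock uses exact-match dictionary lookup where the real `memory_manager.py` uses Qdrant vector search over Ollama embeddings; the class and method names here are illustrative, not the package's actual API:

```python
class MiniMemory:
    """Toy cache-first lookup mimicking the 'auto' command's control flow."""

    def __init__(self):
        self.cache = {}      # query -> previously cached LLM response
        self.memories = []   # stored decision/context strings

    def store(self, content):
        self.memories.append(content)

    def cache_store(self, query, response):
        self.cache[query] = response

    def auto(self, query):
        # 1) Cache hit: return the stored response, no LLM call needed.
        if query in self.cache:
            return {"source": "cache", "cache_hit": True,
                    "response": self.cache[query]}
        # 2) Cache miss: fall back to retrieving relevant memories
        #    (the real system ranks by embedding similarity in Qdrant;
        #    here a naive keyword filter stands in for that).
        chunks = [m for m in self.memories
                  if any(w in m.lower() for w in query.lower().split())]
        return {"source": "memory", "cache_hit": False,
                "context_chunks": chunks}

mm = MiniMemory()
mm.store("Chose PostgreSQL for relational data")
print(mm.auto("what database did we choose?")["cache_hit"])  # False
mm.cache_store("how to set up auth?", "Use JWT with 24h expiry")
print(mm.auto("how to set up auth?")["source"])  # cache
```

The point of the two-step order is the token saving the README advertises: a repeated query never reaches the LLM at all, which is why warm-cache runs report `"cache_hit": true` with a `tokens_saved_estimate`.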
package/package.json (CHANGED)

@@ -1,6 +1,6 @@
 {
   "name": "@techwavedev/agi-agent-kit",
-  "version": "1.3.
+  "version": "1.3.5",
   "description": "Enterprise-Grade Agentic Framework - Modular skill-based AI assistant toolkit with deterministic execution, semantic memory, and platform-adaptive orchestration.",
   "bin": {
     "agi-agent-kit": "./bin/init.js"
The files marked +0 -0 in the summary above (agents, workflows, rules) have no changes.
package/templates/base/README.md (CHANGED)

This file's diff is identical to package/README.md above: the same header/badges hunk is added (@@ -1,13 +1,37 @@) and the same benchmark section is removed (@@ -154,81 +178,6 @@). The remaining +0 -0 template files have no changes.