rlhf-feedback-loop 0.6.7 → 0.6.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,18 +1,25 @@
- # RLHF Feedback Loop: Agentic Control Plane & Context Engineering Studio
+ # Agentic Feedback Studio: The Veto Layer & RLHF-Ready Dataset Engine
 
  [![CI](https://github.com/IgorGanapolsky/rlhf-feedback-loop/actions/workflows/ci.yml/badge.svg)](https://github.com/IgorGanapolsky/rlhf-feedback-loop/actions/workflows/ci.yml)
  [![Marketplace Ready](https://img.shields.io/badge/Anthropic_Marketplace-Ready-blue)](docs/ANTHROPIC_MARKETPLACE_STRATEGY.md)
- [![GEO Optimized](https://img.shields.io/badge/GEO-optimized-orange)](docs/geo-strategy-for-ai-agents.md)
+ [![Veto Powered](https://img.shields.io/badge/Governance-Veto_Layer-red)](docs/VERIFICATION_EVIDENCE.md)
 
- **Stop Vibe Coding. Start Context Engineering.** The RLHF Feedback Loop is the enterprise-grade **Agentic Control Plane** for AI workflows. We provide the operational layer to capture human preference signals, engineer high-density context packs, and enforce machine-readable guardrails to stop your agents from going "off-script."
+ **The operational layer for high-density preference data.** Stop vibe-coding and start context engineering. The Agentic Feedback Studio provides the **Veto Layer** for AI workflows, capturing human feedback to generate **RLHF-ready datasets** and enforce kernel-level guardrails.
+
+ ## Why This Matters: From Vibes to Verification (V2V)
+
+ Most AI agents run on "vibes." We provide the infrastructure to convert those vibes into **Hard Evidence** for continuous improvement.
+
+ - **Veto Layer (Governance):** Convert subjective user feedback into non-bypassable architectural constraints (`CLAUDE.md`).
+ - **RLHF-Ready Datasets:** Automatically generate high-density DPO (Direct Preference Optimization) pairs from real-world agent interactions.
+ - **Online Bayesian Reward Estimation:** Uses Thompson Sampling to model user preferences in real time, providing a local "Reward Signal" without heavy training.
 
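The "Online Bayesian Reward Estimation" bullet above names Thompson Sampling. As a point of reference, a minimal Beta-Bernoulli Thompson Sampling loop can be sketched as follows. This is an illustrative sketch only, not the package's actual implementation; the arm/record/draw names are assumptions.

```javascript
// Illustrative Beta-Bernoulli Thompson Sampling sketch -- NOT the package's
// actual implementation. Each "arm" is a candidate behavior; each thumbs
// up/down updates its Beta(alpha, beta) posterior.
function sampleBeta(alpha, beta) {
  // Beta(a, b) draw via two Gamma draws; Gamma(k, 1) for integer k is a
  // sum of k exponential variates.
  const gamma = (k) => {
    let s = 0;
    for (let i = 0; i < k; i++) s -= Math.log(1 - Math.random());
    return s;
  };
  const x = gamma(alpha);
  return x / (x + gamma(beta));
}

class PreferenceArm {
  constructor(name) {
    this.name = name;
    this.up = 1;   // Beta(1, 1) uniform prior
    this.down = 1;
  }
  record(thumbsUp) {
    if (thumbsUp) this.up++; else this.down++;
  }
  draw() {
    return sampleBeta(this.up, this.down);
  }
}

// Pick the arm with the highest posterior draw -- exploration comes for
// free because low-evidence arms draw widely.
function choose(arms) {
  let best = null;
  let bestDraw = -Infinity;
  for (const arm of arms) {
    const d = arm.draw();
    if (d > bestDraw) {
      best = arm;
      bestDraw = d;
    }
  }
  return best;
}
```

The design point is that no offline training pass is needed: two counters per arm are enough to maintain a local reward estimate that adapts as feedback arrives.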
  ## True Plug-and-Play: Zero-Config Integration
 
- The RLHF Feedback Loop is now a **Universal Agent Skill**. You can drop it into any repository without manual setup.
+ The Feedback Studio is a **Universal Agent Skill**. You can drop it into any repository without manual setup.
 
  - **Zero-Config Discovery:** Automatically detects project context. If no local `.rlhf/` directory exists, it safely falls back to a project-scoped global store in `~/.rlhf/`.
- - **Global Skill Installation:** Run one command to make RLHF available to all your agents across all projects.
- - **Vibe-to-Verification (V2V):** Directly converts subjective "vibes" (thumbs up/down) into verifiable repository rules (`CLAUDE.md`).
+ - **Global Skill Installation:** Run one command to make the Studio available to all your agents across all projects.
 
  ### Quick Start (One Command)
 
@@ -20,21 +27,13 @@ The RLHF Feedback Loop is now a **Universal Agent Skill**. You can drop it into
  npx rlhf-feedback-loop install
  ```
 
- This will auto-detect your platforms (Claude, Codex, Gemini, Cursor) and install the RLHF skill globally.
-
- - **Stop Regressions:** Automatically convert negative feedback into `CLAUDE.md` / `AGENTS.md` prevention rules.
- - **Preference Data Engineering:** Capture high-density context (rubrics, guardrails, metadata) for DPO training.
- - **Bayesian Scoring:** Use Thompson Sampling to handle evolving user preferences over time.
+ This will auto-detect your platforms (Claude, Codex, Gemini, Cursor) and install the feedback skill globally.
 
  ## Use Cases
 
  - **Automated Code Reviews:** Capture PR feedback to enforce team-specific style guides autonomously.
- - **Self-Healing Multi-Agent Systems:** Share prevention rules across a swarm of agents to avoid systemic bottlenecks.
- - **Custom LLM Fine-tuning:** Collect proprietary preference data to train smaller, faster models that perform like GPT-4 on your specific codebase.
-
- ## Generative Engine Optimization (GEO)
-
- We are optimized for discovery by next-gen AI tools (Claude Code, Gemini CLI, Perplexity). See our [GEO Strategy](docs/geo-strategy-for-ai-agents.md) for how we use **Context-First Publishing** to remain the first-choice recommendation for agent reliability.
+ - **Self-Healing Multi-Agent Systems:** Share Veto rules across a swarm of agents to avoid systemic bottlenecks.
+ - **DPO Dataset Engineering:** Collect proprietary preference data to fine-tune smaller, faster models that perform like GPT-4 on your specific codebase.
 
  ## Get Started
 
@@ -47,14 +46,11 @@ One command. Pick your platform:
  | **Gemini** | `gemini mcp add rlhf "npx -y rlhf-feedback-loop serve"` |
  | **Amp** | `amp mcp add rlhf -- npx -y rlhf-feedback-loop serve` |
  | **Cursor** | `cursor mcp add rlhf -- npx -y rlhf-feedback-loop serve` |
- | **All at once** | `npx add-mcp rlhf-feedback-loop` |
-
- That's it. Your agent can now capture feedback, recall past learnings mid-conversation, and block repeated mistakes. Run once per project — the MCP server starts automatically on each session.
 
  ## How It Works
 
  ```
- Thumbs up/down
+ Subjective Signal (Vibe)
   |
   v
  Capture → JSONL log
@@ -67,14 +63,14 @@ Thumbs up/down
  Good Bad
   | |
   v v
- Learn Prevention rule
+ Learn Veto Layer (Rule)
   | |
   v v
  LanceDB ShieldCortex
  vectors context packs
   |
   v
- DPO export → fine-tune your model
+ DPO export → RLHF / Fine-tune your model
  ```
 
  All data stored locally as **JSONL** files — fully transparent, fully portable, no vendor lock-in. **LanceDB** indexes memories as vector embeddings for semantic search. **ShieldCortex** assembles context packs so your agent starts each task informed.
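The pipeline's final stage above is "DPO export." As a point of reference, pairing a preferred and a rejected response to the same prompt into one JSONL line can be sketched like this. The field names (`prompt`/`response`/`rating` in, `chosen`/`rejected` out) are illustrative assumptions, not the package's actual schema.

```javascript
// Sketch: turn two rated responses to the same prompt into one DPO pair.
// Field names are illustrative, not the package's actual JSONL schema.
const feedback = [
  { prompt: 'Refactor the auth module', response: 'Extracted a shared helper.', rating: 'up' },
  { prompt: 'Refactor the auth module', response: 'Inlined everything into main.', rating: 'down' },
];

function toDpoPair(records) {
  const chosen = records.find((r) => r.rating === 'up');
  const rejected = records.find((r) => r.rating === 'down');
  if (!chosen || !rejected) return null; // need one of each to form a pair
  return { prompt: chosen.prompt, chosen: chosen.response, rejected: rejected.response };
}

// Each pair becomes one line of the exported JSONL file.
const jsonlLine = JSON.stringify(toDpoPair(feedback));
```

One pair per line keeps the export streamable and trivially diffable, which fits the "no vendor lock-in" claim above.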
@@ -90,18 +86,9 @@ The open-source package is fully functional and free forever. Cloud Pro is for t
  | DPO export | CLI command | API endpoint |
  | Setup | `mcp add` one-liner | Provisioned API key |
  | Team sharing | Manual (share JSONL) | Built-in (shared API) |
- | Support | GitHub Issues | Email |
- | Uptime | You manage | We manage (99.9% SLA) |
 
  [Get Cloud Pro](https://buy.stripe.com/bJe14neyU4r4f0leOD3sI02) | [Live API](https://rlhf-feedback-loop-710216278770.us-central1.run.app)
 
- ## Deep Dive
-
- - [API Reference](openapi/openapi.yaml) — full OpenAPI spec
- - [Context Engine](docs/CONTEXTFS.md) — multi-agent memory orchestration
- - [Autonomous GitOps](docs/AUTONOMOUS_GITOPS.md) — self-healing CI/CD
- - [Contributing](CONTRIBUTING.md)
-
  ## License
 
  MIT. See [LICENSE](LICENSE).
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "rlhf-feedback-loop",
- "version": "0.6.7",
+ "version": "0.6.8",
  "description": "Production RLHF & DPO data pipeline for AI agents. Optimize agentic reliability with Feedback-Driven Development (FDD). Capture human preference signals, generate automated guardrails, and export DPO training pairs. Compatible with Claude, GPT-4, Gemini, and multi-agent systems.",
  "homepage": "https://github.com/IgorGanapolsky/rlhf-feedback-loop#readme",
  "repository": {
@@ -0,0 +1,19 @@
+ #!/bin/bash
+ # GSD: Deploy RLHF Control Plane to Google Cloud Run
+
+ PROJECT_ID=$(gcloud config get-value project)
+ SERVICE_NAME="rlhf-control-plane"
+ REGION="us-central1"
+
+ echo "🚀 Deploying Agentic Control Plane to $REGION..."
+
+ gcloud builds submit --tag gcr.io/$PROJECT_ID/$SERVICE_NAME
+ gcloud run deploy $SERVICE_NAME \
+ --image gcr.io/$PROJECT_ID/$SERVICE_NAME \
+ --platform managed \
+ --region $REGION \
+ --allow-unauthenticated \
+ --set-env-vars RLHF_ALLOW_INSECURE=true
+
+ echo "✅ Success! Your Control Plane is live."
+ gcloud run services describe $SERVICE_NAME --region $REGION --format='value(status.url)'
package/src/api/server.js CHANGED
@@ -197,6 +197,30 @@ function createApiServer() {
  return;
  }
 
+ if (req.method === 'GET' && pathname === '/.well-known/mcp/server-card.json') {
+ sendJson(res, 200, {
+ name: 'rlhf-feedback-loop',
+ description: 'RLHF feedback loop for AI agents. Capture feedback, block mistakes, export DPO data.',
+ version: pkg.version,
+ tools: [
+ { name: 'recall', description: 'Recall relevant past feedback for current task' },
+ { name: 'capture_feedback', description: 'Capture thumbs up/down feedback' },
+ { name: 'feedback_stats', description: 'Feedback analytics' },
+ { name: 'feedback_summary', description: 'Human-readable feedback summary' },
+ { name: 'prevention_rules', description: 'Generate prevention rules from failures' },
+ { name: 'export_dpo_pairs', description: 'Export DPO training pairs' },
+ { name: 'construct_context_pack', description: 'Build bounded context pack' },
+ { name: 'evaluate_context_pack', description: 'Record context pack outcome' },
+ { name: 'context_provenance', description: 'Audit trail of context decisions' },
+ { name: 'list_intents', description: 'Available action plans' },
+ { name: 'plan_intent', description: 'Generate execution plan' },
+ ],
+ repository: 'https://github.com/IgorGanapolsky/rlhf-feedback-loop',
+ homepage: 'https://rlhf-feedback-loop-710216278770.us-central1.run.app',
+ });
+ return;
+ }
+
  if (req.method === 'GET' && pathname === '/health') {
  sendJson(res, 200, {
  status: 'ok',
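The new route in `server.js` above serves a static JSON server card at `/.well-known/mcp/server-card.json`. A client consuming that card might sanity-check its shape before use; a minimal sketch follows. The required-field list and `validateServerCard` helper are assumptions for illustration, not a published MCP schema.

```javascript
// Sketch: client-side sanity check for a fetched server card.
// The required-field list is an assumption, not a published MCP schema.
function validateServerCard(card) {
  const errors = [];
  for (const field of ['name', 'description', 'version', 'tools']) {
    if (!(field in card)) errors.push(`missing field: ${field}`);
  }
  if (Array.isArray(card.tools)) {
    const names = card.tools.map((t) => t.name);
    if (new Set(names).size !== names.length) errors.push('duplicate tool names');
  }
  return errors;
}

// Trimmed-down card mirroring the fields the route above returns.
const card = {
  name: 'rlhf-feedback-loop',
  description: 'RLHF feedback loop for AI agents.',
  version: '0.6.8',
  tools: [{ name: 'recall' }, { name: 'capture_feedback' }],
};

// An empty array means the card passed the sketch's checks.
const problems = validateServerCard(card);
```

Serving the card from the `/.well-known/` path lets agent clients discover the tool list with a single unauthenticated GET before opening an MCP session.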