opencode-orchestrator 1.0.53 → 1.0.55

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +19 -13
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -23,20 +23,22 @@ In an OpenCode environment:
 
  ## Overview
 
- OpenCode Orchestrator is a testament to the operational paradox: **Complexity is easy; Simplicity is hard.**
+ OpenCode Orchestrator is a **Distributed Cognitive Architecture** designed for high-precision software engineering. It operates on a strict **"Verify, then Trust"** philosophy, distinguishing itself from simple stochastic chatbots by enforcing rigorous architectural standards.
+
+ The system is a testament to the operational paradox: **Complexity is easy; Simplicity is hard.**
 
  While the user interaction remains elegantly minimal, the internal architecture encapsulates a rigorous alignment of **microscopic state management** (`Rust atoms`) and **macroscopic strategic planning** (`Agent Topology`). Every component reflects a deep design philosophy aimed at abstracting chaos into order.
 
- Building this system reaffirmed a timeless engineering truth: **"Simple is Best" is the ultimate complexity to conquer.** This engine is our answer to that challenge—hiding the heavy machinery of autonomous intelligence behind a seamless veil of collaboration.
+ Building this system reaffirmed a timeless engineering truth: **"Simple is Best" is the ultimate complexity to conquer.** This engine is our answer to that challenge—hiding the **intricate dynamics of Autonomous Agentic Collaboration** behind a seamless, user-friendly veil.
 
- This philosophy extends to efficiency. We achieved **Zero-Configuration** usability while rigorously optimizing for **Token Efficiency** (saving ~40% vs major alternatives). By maximizing the potential of cost-effective models like **GLM-4.7**, we prove that superior engineering—not just raw model size—is the key to autonomous performance.
+ This philosophy extends to efficiency. We achieved **Zero-Configuration** usability while rigorously optimizing for performance—**delivering higher quality outcomes** than alternatives while **saving ~40% of tokens**. By maximizing the potential of cost-effective models like **GLM-4.7**, we prove that superior engineering—not just raw model size—is the key to autonomous performance.
 
  ---
 
  ## 📊 Workflow
 
  ```text
- [ 👑 User Task Input ]
+ [ User Task Input ]
 
  ┌───────────▼───────────┐ <────────────────────────────────────────┐
  │ 🫡 COMMANDER (Hub) │ (Orchestration) │
@@ -85,23 +87,27 @@ This philosophy extends to efficiency. We achieved **Zero-Configuration** usabil
 
  ## 🧠 Cognitive Architecture & Key Strengths
 
- ### 📉 Exponential Context Smoothing & Stable Memory
- We combat "Context Drift" using a mechanism akin to **Exponential Smoothing**. Irrelevant conversation noise decays rapidly, while critical architectural decisions are reinforced into a **Stable Core Memory**. This ensures agents retain only the "pure essence" of the mission state, allowing them to work indefinitely without **Catastrophic Forgetting** or polluting the context window.
+ ### 📉 Adaptive Context Gating (EMA-based)
+ We combat "Context Drift" using a mechanism derived from **Exponential Moving Average (EMA)** algorithms. Irrelevant conversation noise follows a rapid decay curve, while critical architectural decisions are reinforced into **Stable Core Memory**. This functions as an **Attention Sink**, allowing agents to work indefinitely without **Catastrophic Forgetting**.
 
- ### 🧬 Adaptive Anthropomorphic Collaboration
- Built on an **Adaptive AI Philosophy**, the agents function like a **Human Engineering Squad**, not a chatbot. They **do not estimate** or guess. If tools are missing, they install them; if requirements are vague, they clarify. The Commander, Planner, and Reviewer collaborate organically to solve problems deterministically, mirroring a senior team's workflow.
+ ### 🧬 BDI (Belief-Desire-Intention) Collaboration
+ The system implements a variant of the **BDI Software Agent Model**:
+ - **Belief (Context)**: Shared state & file system reality.
+ - **Desire (Mission)**: The user's goal (e.g., "Fix this bug").
+ - **Intention (Plan)**: The `TODO.md` roadmap execution.
+ Agents do not merely "chat"; they collaborate to align their Beliefs with Desires through strictly executed Intentions, mirroring human engineering squads.
 
- ### ⚙️ Neuro-Symbolic Hybrid Engine (Rust + LLM)
+ ### ⚙️ Neuro-Symbolic Hybrid Engine
  Pure LLM approaches are stochastic. We bind them with a **Neuro-Symbolic Architecture** that anchors probabilistic reasoning to the deterministic precision of **Rust-based AST/LSP Tools**. This ensures every generated token is grounded in rigorous syntax analysis, delivering high performance with minimal resource overhead.
 
- ### ⚡ Hybrid Sync/Async & Dynamic Parallelism
- The engine features an **Intelligent Load-Balancing System** that fluidly switches between synchronous synchronization barriers and asynchronous fork-join patterns. It monitors execution pressure to **Dynamically Adjust** concurrency slots in real-time. This **Resource-Aware Multi-Parallel System** maximizes throughput on high-end hardware while maintaining stability on constrained environments.
+ ### ⚡ Dynamic Fork-Join Parallelism with Backpressure
+ The engine features an **Intelligent Load-Balancing System** that fluidly switches between synchronous barriers and asynchronous **Fork-Join** patterns. It monitors **System Backpressure** to dynamically adjust concurrency slots in real-time (`Adaptive Throttling`), maximizing throughput on high-end hardware while maintaining stability on constrained environments.
 
  ### 🎯 Iterative Rejection Sampling (Zero-Shot Defense)
- We employ a **Rejection Sampling Loop** driven by the Reviewer Agent (Reward Model). Through the **Metric-based Strict Verification Protocol (MSVP)**, code paths that fail execution tests are pruned. The system iterates until the solution converges on a mathematically correct state (0% Error Rate), rejecting any solution that lacks evidence.
+ We employ a **Rejection Sampling Loop** driven by the Reviewer Agent (**Reward Model**). Through the **Metric-based Strict Verification Protocol (MSVP)**, code paths that fail execution tests are pruned. The system iterates until the solution converges on a mathematically correct state (0% Error Rate), rejecting any solution that lacks evidence.
 
  ### 🧩 Externalized Chain-of-Thought (CoT)
- The Planner's `TODO.md` serves as an **Externalized Working Memory**. This persistent **Symbolic Chain-of-Thought** decouples detailed planning from the LLM's immediate context window, enabling the orchestration of massive, multi-step engineering tasks without logical degradation.
+ The Planner's `TODO.md` serves as an **Externalized Working Memory** (Scratchpad). This persistent **Symbolic Chain-of-Thought** decouples detailed planning from the LLM's immediate context window, enabling the orchestration of massive, multi-step engineering tasks without logical degradation.
 
  ---
 
package/package.json CHANGED
@@ -2,7 +2,7 @@
  "name": "opencode-orchestrator",
  "displayName": "OpenCode Orchestrator",
  "description": "Distributed Cognitive Architecture for OpenCode. Turns simple prompts into specialized multi-agent workflows (Planner, Coder, Reviewer).",
- "version": "1.0.53",
+ "version": "1.0.55",
  "author": "agnusdei1207",
  "license": "MIT",
  "repository": {