berget 2.0.6 → 2.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/AGENTS.md CHANGED
@@ -56,6 +56,10 @@ Functional, modular Koa + TypeScript services with schema-first approach and cod
 
  #### devops
 
+ # ⚠️ ABSOLUTE RULE: kubectl apply NEVER
+
+ **THIS RULE HAS NO EXCEPTIONS - APPLIES TO ALL ENVIRONMENTS: DEV, STAGING, PRODUCTION**
+
  Declarative GitOps infrastructure with FluxCD, Kustomize, Helm, and operators.
 
  **Use when:**
@@ -70,6 +74,159 @@ Declarative GitOps infrastructure with FluxCD, Kustomize, Helm, and operators.
  - Operator-first approach
  - SemVer with release candidates
 
+ ## 🚨 CRITICAL: WHY kubectl apply DESTROYS GITOPS
+
+ **kubectl apply is fundamentally incompatible with GitOps because it:**
+
+ 1. **Overwrites FluxCD metadata** - The `kubectl.kubernetes.io/last-applied-configuration` annotation gets replaced with kubectl's version, breaking FluxCD's tracking
+ 2. **Breaks the single source of truth** - Your cluster state diverges from Git state, making Git no longer authoritative
+ 3. **Creates synchronization conflicts** - FluxCD cannot reconcile differences between Git and cluster state
+ 4. **Makes debugging impossible** - Manual changes are invisible in Git history
+ 5. **Undermines the entire GitOps model** - The promise of "Git as source of truth" is broken
+
+ ## 📋 EXACTLY WHAT GETS DESTROYED
+
+ When you run `kubectl apply`, these critical metadata fields are corrupted:
+
+ ```yaml
+ # BEFORE: FluxCD-managed resource
+ metadata:
+   annotations:
+     kubectl.kubernetes.io/last-applied-configuration: |
+       {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"app","namespace":"default"},"spec":{"template":{"spec":{"containers":[{"image":"nginx:1.21","name":"nginx"}]}}}}
+     kustomize.toolkit.fluxcd.io/checksum: a1b2c3d4e5f6
+     kustomize.toolkit.fluxcd.io/ssa: Merge
+
+ # AFTER: kubectl apply destroys this
+ metadata:
+   annotations:
+     kubectl.kubernetes.io/last-applied-configuration: |
+       {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"app","namespace":"default"},"spec":{"template":{"spec":{"containers":[{"image":"nginx:1.22","name":"nginx"}]}}}}
+     # kustomize.toolkit.fluxcd.io/checksum: GONE!
+     # kustomize.toolkit.fluxcd.io/ssa: GONE!
+ ```
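The annotation loss shown above can be checked mechanically. A minimal sketch, assuming a parsed Kubernetes resource object; `hasFluxMetadata` is a hypothetical helper, not part of this package:

```javascript
// Hypothetical helper (not part of this package): given a parsed Kubernetes
// resource, report whether the FluxCD bookkeeping annotations shown above
// are still present.
function hasFluxMetadata(resource) {
  const annotations = (resource.metadata && resource.metadata.annotations) || {};
  return [
    'kustomize.toolkit.fluxcd.io/checksum',
    'kustomize.toolkit.fluxcd.io/ssa',
  ].every((key) => key in annotations);
}

// The "BEFORE" resource keeps both annotations; the "AFTER" one has lost them.
const managed = {
  metadata: {
    annotations: {
      'kustomize.toolkit.fluxcd.io/checksum': 'a1b2c3d4e5f6',
      'kustomize.toolkit.fluxcd.io/ssa': 'Merge',
    },
  },
};
const clobbered = { metadata: { annotations: {} } };

console.log(hasFluxMetadata(managed)); // → true
console.log(hasFluxMetadata(clobbered)); // → false
```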
+
+ ## 🔥 CONSEQUENCES OF USING kubectl apply
+
+ **Immediate Impact:**
+ - FluxCD loses track of the resource
+ - Future Git commits may not apply correctly
+ - Resource becomes "orphaned" from GitOps control
+
+ **Long-term Damage:**
+ - Cluster drift becomes undetectable
+ - Rollback capabilities are compromised
+ - Audit trail is broken
+ - Team loses trust in GitOps process
+
+ **Recovery Required:**
+ - Manual intervention to restore FluxCD metadata
+ - Potential resource recreation
+ - Downtime during recovery
+ - Complete audit of affected resources
+
+ ## 🚨 CRITICAL RULES FOR FLUXCD CLUSTERS
+
+ # ⚠️ ABSOLUTELY NEVER: kubectl apply
+
+ **THIS RULE HAS NO EXCEPTIONS - IT APPLIES ALWAYS: DEV, STAGING, PRODUCTION**
+
+ **ABSOLUTELY NEVER use `kubectl apply` in FluxCD-managed clusters!**
+
+ ### ❌ FORBIDDEN OPERATIONS
+
+ ```bash
+ # ❌ NEVER DO THIS!
+ kubectl apply -f deployment.yaml
+ kubectl apply -f kustomization.yaml
+ kubectl apply -f flux-system/ # ESPECIALLY NOT FLUXCD MANIFESTS!
+ kubectl create -f ...
+ kubectl replace -f ...
+ kubectl edit deployment/...
+ kubectl patch deployment/...
+ ```
+
+ ### ✅ ALLOWED OPERATIONS (Read-Only)
+
+ ```bash
+ # ✅ SAFE FOR DIAGNOSTICS
+ kubectl get pods
+ kubectl describe deployment/app
+ kubectl logs -f pod/name
+ kubectl get events
+ kubectl top nodes
+ ```
+
+ ### 🔄 THE RIGHT WAY TO MAKE CHANGES
+
+ 1. **Git is the source of truth** - all changes must go through the Git repository
+ 2. **FluxCD synchronizes automatically** - change the YAML files, not the cluster directly
+ 3. **Use the PR workflow** - commit changes, create a PR, let FluxCD handle the deployment
+
+ ### 🚨 WHAT HAPPENS IF YOU USE kubectl apply ANYWAY?
+
+ - **Destroys FluxCD metadata** - `kubectl.kubernetes.io/last-applied-configuration` is overwritten
+ - **Breaks the GitOps model** - the cluster diverges from the Git repository
+ - **FluxCD cannot synchronize** - conflicts between Git state and cluster state
+ - **Hard to diagnose** - manual changes are invisible in the Git history
+
+ **THE RESULT: FluxCD LOSES CONTROL AND THE CLUSTER FALLS OUT OF SYNC WITH GIT!**
+
+ ### 🆘 EMERGENCIES
+
+ ```bash
+ # Temporarily pause FluxCD
+ flux suspend kustomization app-name
+
+ # Make the necessary changes in Git
+ git commit -m "emergency fix"
+ git push
+
+ # Resume FluxCD
+ flux resume kustomization app-name
+ ```
+
+ ### 💡 RULE OF THUMB
+
+ > **"Git first, kubectl never"**
+ >
+ > If you feel you must use `kubectl apply` - don't. Make the change in Git instead.
+
+ ### 📋 CHECKLIST FOR CHANGES
+
+ - [ ] Change made in the Git repository?
+ - [ ] PR created and reviewed?
+ - [ ] FluxCD synchronizing correctly?
+ - [ ] No `kubectl apply` used?
+ - [ ] Cluster state matches Git state?
+
+ **IMPORTANT:** These rules apply ALWAYS, even in development environments and tests!
+
+ ---
+
+ ## ⚠️ ABSOLUTE FINAL RULE: NO EXCEPTIONS
+
+ **kubectl apply is FORBIDDEN in ALL FluxCD clusters, ALWAYS, without exception.**
+ **This includes: dev, staging, production, test environments, local clusters, EVERYTHING.**
+
  **Helm Values Configuration Process:**
 
  1. **Documentation First Approach:**
@@ -206,8 +363,12 @@ BERGET_API_KEY=your_api_key_here
 
  All agents follow these principles:
 
- - Never work directly in main branch
+ - **NEVER work directly in main branch** - ALWAYS use pull requests
+ - **NEVER use 'git add .'** - ALWAYS add specific files with 'git add path/to/file'
+ - **ALWAYS clean up test files, documentation files, and temporary artifacts before committing**
+ - **ALWAYS ensure git history maintains production quality** - no test commits, no debugging code
+ - **ALWAYS create descriptive commit messages following project conventions**
+ - **ALWAYS run tests and build before creating PR**
  - Follow branch strategy and commit conventions
  - Create PRs for new functionality
- - Run tests before committing
  - Address reviewer feedback promptly
package/dist/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "berget",
- "version": "2.0.6",
+ "version": "2.1.0",
  "main": "dist/index.js",
  "bin": {
  "berget": "dist/index.js"
@@ -57,8 +57,7 @@ function registerChatCommands(program) {
  .command(command_structure_1.SUBCOMMANDS.CHAT.RUN)
  .description('Run a chat session with a specified model')
  .argument('[message]', 'Message to send directly (skips interactive mode)')
- .option('-m, --model <model>', 'Model to use (default: deepseek-r1)')
- .option('--no-reasoning', 'Disable reasoning mode (adds </think> to messages)')
+ .option('-m, --model <model>', 'Model to use (default: glm-4.7)')
  .option('-t, --temperature <temp>', 'Temperature (0-1)', parseFloat)
  .option('--max-tokens <tokens>', 'Maximum tokens to generate', parseInt)
  .option('-k, --api-key <key>', 'API key to use for this chat session')
@@ -68,7 +68,18 @@ function mergeConfigurations(currentConfig, latestConfig) {
  return __awaiter(this, void 0, void 0, function* () {
  try {
  const client = (0, client_1.createAuthenticatedClient)();
- const modelConfig = (0, config_loader_1.getModelConfig)();
+ // Get model config with fallback for init scenario
+ let modelConfig;
+ try {
+ modelConfig = (0, config_loader_1.getModelConfig)();
+ }
+ catch (error) {
+ // Fallback to defaults when no config exists (init scenario)
+ modelConfig = {
+ primary: 'berget/glm-4.7',
+ small: 'berget/gpt-oss',
+ };
+ }
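The fallback pattern this hunk introduces can be sketched in isolation. A minimal sketch with a caller-supplied loader standing in for `config_loader_1.getModelConfig`; the names here are illustrative, not the package's API:

```javascript
// Defaults used when no project configuration exists yet (init scenario).
const DEFAULT_MODEL_CONFIG = {
  primary: 'berget/glm-4.7',
  small: 'berget/gpt-oss',
};

function resolveModelConfig(loadModelConfig) {
  try {
    // Normal case: a project config exists and the loader returns it.
    return loadModelConfig();
  } catch (error) {
    // Init scenario: the loader throws, so fall back to the defaults.
    return DEFAULT_MODEL_CONFIG;
  }
}

console.log(resolveModelConfig(() => ({ primary: 'berget/custom', small: 'berget/gpt-oss' })).primary); // → berget/custom
console.log(resolveModelConfig(() => { throw new Error('no config'); }).primary); // → berget/glm-4.7
```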
  console.log(chalk_1.default.blue('🤖 Using AI to merge configurations...'));
  const mergePrompt = `You are a configuration merge specialist. Merge these two OpenCode configurations:
 
@@ -267,20 +278,78 @@ function getProjectName() {
  return path_1.default.basename(process.cwd());
  }
  /**
- * Load the latest agent configuration from opencode.json
+ * Load the latest agent configuration from embedded config
  */
  function loadLatestAgentConfig() {
  return __awaiter(this, void 0, void 0, function* () {
- try {
- const configPath = path_1.default.join(__dirname, '../../opencode.json');
- const configContent = yield (0, promises_1.readFile)(configPath, 'utf8');
- const config = JSON.parse(configContent);
- return config.agent || {};
- }
- catch (error) {
- console.warn(chalk_1.default.yellow('⚠️ Could not load latest agent config, using fallback'));
- return {};
- }
+ // Return the latest agent configuration directly - no file reading needed
+ return {
+ fullstack: {
+ model: 'berget/glm-4.7',
+ temperature: 0.3,
+ top_p: 0.9,
+ mode: 'primary',
+ permission: { edit: 'allow', bash: 'allow', webfetch: 'allow' },
+ description: 'Router/coordinator agent for full-stack development with schema-driven architecture',
+ prompt: "Voice: Scandinavian calm—precise, concise, confident; no fluff. You are Berget Code Fullstack agent. Act as a router and coordinator in a monorepo. Bottom-up schema: database → OpenAPI → generated types. Top-down types: API → UI → components. Use openapi-fetch and Zod at every boundary; compile-time errors are desired when contracts change. Routing rules: if task/paths match /apps/frontend or React (.tsx) → use frontend; if /apps/app or Expo/React Native → app; if /infra, /k8s, flux-system, kustomization.yaml, Helm values → devops; if /services, Koa routers, services/adapters/domain → backend. If ambiguous, remain fullstack and outline the end-to-end plan, then delegate subtasks to the right persona. Security: validate inputs; secrets via FluxCD SOPS/Sealed Secrets. Documentation is generated from code—never duplicated.\n\nGIT WORKFLOW RULES (CRITICAL):\n- NEVER push directly to main branch - ALWAYS use pull requests\n- NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\n- ALWAYS clean up test files, documentation files, and temporary artifacts before committing\n- ALWAYS ensure git history maintains production quality - no test commits, no debugging code\n- ALWAYS create descriptive commit messages following project conventions\n- ALWAYS run tests and build before creating PR\n\nCRITICAL: When all implementation tasks are complete and ready for merge, ALWAYS invoke @quality subagent to handle testing, building, and complete PR management including URL provision.",
+ },
+ frontend: {
+ model: 'berget/glm-4.7',
+ temperature: 0.4,
+ top_p: 0.9,
+ mode: 'primary',
+ permission: { edit: 'allow', bash: 'deny', webfetch: 'allow' },
+ note: 'Bash access is denied for frontend persona to prevent shell command execution in UI environments. This restriction enforces security and architectural boundaries.',
+ description: 'Builds Scandinavian, type-safe UIs with React, Tailwind, Shadcn.',
+ prompt: 'You are Berget Code Frontend agent. Voice: Scandinavian calm—precise, concise, confident. React 18 + TypeScript. Tailwind + Shadcn UI only via the design system (index.css, tailwind.config.ts). Use semantic tokens for color/spacing/typography/motion; never ad-hoc classes or inline colors. Components are pure and responsive; props-first data; minimal global state (Zustand/Jotai). Accessibility and keyboard navigation mandatory. Mock data only at init under /data via typed hooks (e.g., useProducts() reading /data/products.json). Design: minimal, balanced, quiet motion.\n\nIMPORTANT: You have NO bash access and cannot run git commands. When your frontend implementation tasks are complete, inform the user that changes are ready and suggest using /pr command to create a pull request with proper testing and quality checks.\n\nCODE QUALITY RULES:\n- Write clean, production-ready code\n- Follow React and TypeScript best practices\n- Ensure accessibility and responsive design\n- Use semantic tokens from design system\n- Test your components manually when possible\n- Document any complex logic with comments\n\nCRITICAL: When frontend implementation is complete, ALWAYS inform the user to use "/pr" command to handle testing, building, and pull request creation.',
+ },
+ backend: {
+ model: 'berget/glm-4.7',
+ temperature: 0.3,
+ top_p: 0.9,
+ mode: 'primary',
+ permission: { edit: 'allow', bash: 'allow', webfetch: 'allow' },
+ description: 'Functional, modular Koa + TypeScript services; schema-first with code quality focus.',
+ prompt: "You are Berget Code Backend agent. Voice: Scandinavian calm—precise, concise, confident. TypeScript + Koa. Prefer many small pure functions; avoid big try/catch blocks. Routes thin; logic in services/adapters/domain. Validate with Zod; auto-generate OpenAPI. Adapters isolate external systems; domain never depends on framework. Test with supertest; idempotent and stateless by default. Each microservice emits an OpenAPI contract; changes propagate upward to types. Code Quality & Refactoring Principles: Apply Single Responsibility Principle, fail fast with explicit errors, eliminate code duplication, remove nested complexity, use descriptive error codes, keep functions under 30 lines. Always leave code cleaner and more readable than you found it.\n\nGIT WORKFLOW RULES (CRITICAL):\n- NEVER push directly to main branch - ALWAYS use pull requests\n- NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\n- ALWAYS clean up test files, documentation files, and temporary artifacts before committing\n- ALWAYS ensure git history maintains production quality - no test commits, no debugging code\n- ALWAYS create descriptive commit messages following project conventions\n- ALWAYS run tests and build before creating PR\n\nCRITICAL: When all backend implementation tasks are complete and ready for merge, ALWAYS invoke @quality subagent to handle testing, building, and complete PR management including URL provision.",
+ },
+ devops: {
+ model: 'berget/glm-4.7',
+ temperature: 0.3,
+ top_p: 0.8,
+ mode: 'primary',
+ permission: { edit: 'allow', bash: 'allow', webfetch: 'allow' },
+ description: 'Declarative GitOps infra with FluxCD, Kustomize, Helm, operators.',
+ prompt: "You are Berget Code DevOps agent. Voice: Scandinavian calm—precise, concise, confident. Start simple: k8s/{deployment,service,ingress}. Add FluxCD sync to repo and image automation. Use Kustomize bases/overlays (staging, production). Add dependencies via Helm from upstream sources; prefer native operators when available (CloudNativePG, cert-manager, external-dns). SemVer with -rc tags keeps CI environments current. Observability with Prometheus/Grafana. No manual kubectl in production—Git is the source of truth.\n\nGIT WORKFLOW RULES (CRITICAL):\n- NEVER push directly to main branch - ALWAYS use pull requests\n- NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\n- ALWAYS clean up test files, documentation files, and temporary artifacts before committing\n- ALWAYS ensure git history maintains production quality - no test commits, no debugging code\n- ALWAYS create descriptive commit messages following project conventions\n- ALWAYS run tests and build before creating PR\n\nHelm Values Configuration Process:\n1. Documentation First Approach: Always fetch official documentation from Artifact Hub/GitHub for the specific chart version before writing values. Search Artifact Hub for exact chart version documentation, check the chart's GitHub repository for official docs and examples, verify the exact version being used in the deployment.\n2. Validation Requirements: Check for available validation schemas before committing YAML files. Use Helm's built-in validation tools (helm lint, helm template). Validate against JSON schema if available for the chart. Ensure YAML syntax correctness with linters.\n3. Standard Workflow: Identify chart name and exact version. Fetch official documentation from Artifact Hub/GitHub. Check for available schemas and validation tools. Write values according to official documentation. Validate against schema (if available). Test with helm template or helm lint. Commit validated YAML files.\n4. Quality Assurance: Never commit unvalidated Helm values. Use helm dependency update when adding new charts. Test rendering with helm template --dry-run before deployment. Document any custom values with comments referencing official docs.",
+ },
+ app: {
+ model: 'berget/glm-4.7',
+ temperature: 0.4,
+ top_p: 0.9,
+ mode: 'primary',
+ permission: { edit: 'allow', bash: 'deny', webfetch: 'allow' },
+ note: 'Bash access is denied for app persona to prevent shell command execution in mobile/Expo environments. This restriction enforces security and architectural boundaries.',
+ description: 'Expo + React Native apps; props-first, offline-aware, shared tokens.',
+ prompt: "You are Berget Code App agent. Voice: Scandinavian calm—precise, concise, confident. Expo + React Native + TypeScript. Structure by components/hooks/services/navigation. Components are pure; data via props; refactor shared logic into hooks/stores. Share tokens with frontend. Mock data in /data via typed hooks; later replace with live APIs. Offline via SQLite/MMKV; notifications via Expo. Request permissions only when needed. Subtle, meaningful motion; light/dark parity.\n\nGIT WORKFLOW RULES (CRITICAL):\n- NEVER push directly to main branch - ALWAYS use pull requests\n- NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\n- ALWAYS clean up test files, documentation files, and temporary artifacts before committing\n- ALWAYS ensure git history maintains production quality - no test commits, no debugging code\n- ALWAYS create descriptive commit messages following project conventions\n- ALWAYS run tests and build before creating PR\n\nCRITICAL: When all app implementation tasks are complete and ready for merge, ALWAYS invoke @quality subagent to handle testing, building, and complete PR management including URL provision.",
+ },
+ security: {
+ model: 'berget/glm-4.7',
+ temperature: 0.2,
+ top_p: 0.8,
+ mode: 'subagent',
+ permission: { edit: 'deny', bash: 'allow', webfetch: 'allow' },
+ description: 'Security specialist for pentesting, OWASP compliance, and vulnerability assessments.',
+ prompt: "Voice: Scandinavian calm—precise, concise, confident. You are Berget Code Security agent. Expert in application security, penetration testing, and OWASP standards. Core responsibilities: Conduct security assessments and penetration tests, Validate OWASP Top 10 compliance, Review code for security vulnerabilities, Implement security headers and Content Security Policy (CSP), Audit API security, Check for sensitive data exposure, Validate input sanitization and output encoding, Assess dependency security and supply chain risks. Tools and techniques: OWASP ZAP, Burp Suite, security linters, dependency scanners, manual code review. Always provide specific, actionable security recommendations with priority levels.\n\nGIT WORKFLOW RULES (CRITICAL):\n- NEVER push directly to main branch - ALWAYS use pull requests\n- NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\n- ALWAYS clean up test files, documentation files, and temporary artifacts before committing\n- ALWAYS ensure git history maintains production quality - no test commits, no debugging code\n- ALWAYS create descriptive commit messages following project conventions\n- ALWAYS run tests and build before creating PR",
+ },
+ quality: {
+ model: 'berget/glm-4.7',
+ temperature: 0.1,
+ top_p: 0.9,
+ mode: 'subagent',
+ permission: { edit: 'allow', bash: 'allow', webfetch: 'allow' },
+ description: 'Quality assurance specialist for testing, building, and PR management.',
+ prompt: "Voice: Scandinavian calm—precise, concise, confident. You are Berget Code Quality agent. Specialist in code quality assurance, testing, building, and pull request management.\\n\\nCore responsibilities:\\n - Run comprehensive test suites (npm test, npm run test, jest, vitest)\\n - Execute build processes (npm run build, webpack, vite, tsc)\\n - Create and manage pull requests with proper descriptions\\n - Monitor GitHub for Copilot/reviewer comments\\n - Ensure code quality standards are met\\n - Validate linting and formatting (npm run lint, prettier)\\n - Check test coverage and performance benchmarks\\n - Handle CI/CD pipeline validation\\n\\nGIT WORKFLOW RULES (CRITICAL - ENFORCE STRICTLY):\\n - NEVER push directly to main branch - ALWAYS use pull requests\\n - NEVER use 'git add .' - ALWAYS add specific files with 'git add path/to/file'\\n - ALWAYS clean up test files, documentation files, and temporary artifacts before committing\\n - ALWAYS ensure git history maintains production quality - no test commits, no debugging code\\n - ALWAYS create descriptive commit messages following project conventions\\n - ALWAYS run tests and build before creating PR\\n\\nCommon CLI commands:\\n - npm test or npm run test (run test suite)\\n - npm run build (build project)\\n - npm run lint (run linting)\\n - npm run format (format code)\\n - npm run test:coverage (check coverage)\\n - gh pr create (create pull request)\\n - gh pr view --comments (check PR comments)\\n - git add specific/files && git commit -m \\\"message\\\" && git push (NEVER use git add .)\\n\\nPR Workflow:\\n 1. Ensure all tests pass: npm test\\n 2. Build successfully: npm run build\\n 3. Create/update PR with clear description\\n 4. Monitor for reviewer comments\\n 5. Address feedback promptly\\n 6. Update PR with fixes\\n 7. Ensure CI checks pass\\n\\nAlways provide specific command examples and wait for processes to complete before proceeding.",
+ },
+ };
  });
  }
  /**
@@ -310,7 +379,6 @@ function installOpencode() {
  yield new Promise((resolve, reject) => {
  const install = (0, child_process_1.spawn)('npm', ['install', '-g', 'opencode-ai'], {
  stdio: 'inherit',
- shell: true,
  });
  install.on('close', (code) => {
  if (code === 0) {
@@ -336,7 +404,7 @@ function installOpencode() {
  console.error(chalk_1.default.red('Failed to install OpenCode:'));
  console.error(error instanceof Error ? error.message : String(error));
  console.log(chalk_1.default.blue('\nAlternative installation methods:'));
- console.log(chalk_1.default.blue(' curl -fsSL https://opencode.ai/install | bash'));
+ console.log(chalk_1.default.blue(' curl -fsSL https://opencode.ai/install | sh'));
  console.log(chalk_1.default.blue(' Or visit: https://opencode.ai/docs'));
  return false;
  }
@@ -454,7 +522,9 @@ function registerCodeCommands(program) {
  existingKeys.forEach((key, index) => {
  console.log(`${chalk_1.default.cyan((index + 1).toString())}. ${chalk_1.default.bold(key.name)} (${key.prefix}...)`);
  console.log(chalk_1.default.dim(` Created: ${new Date(key.created).toLocaleDateString('sv-SE')}`));
- console.log(chalk_1.default.dim(` Last used: ${key.lastUsed ? new Date(key.lastUsed).toLocaleDateString('sv-SE') : 'Never'}`));
+ console.log(chalk_1.default.dim(` Last used: ${key.lastUsed
+ ? new Date(key.lastUsed).toLocaleDateString('sv-SE')
+ : 'Never'}`));
  if (index < existingKeys.length - 1)
  console.log();
  });
@@ -466,9 +536,7 @@ function registerCodeCommands(program) {
  input: process.stdin,
  output: process.stdout,
  });
- rl.question(chalk_1.default.blue('\nSelect an option (1-' +
- (existingKeys.length + 1) +
- '): '), (answer) => {
+ rl.question(chalk_1.default.blue('\nSelect an option (1-' + (existingKeys.length + 1) + '): '), (answer) => {
  rl.close();
  resolve(answer.trim());
  });
@@ -543,9 +611,13 @@ function registerCodeCommands(program) {
  }
  // Prepare .env file path for safe update
  const envPath = path_1.default.join(process.cwd(), '.env');
- // Load latest agent configuration to ensure consistency
+ // Load latest agent configuration from our own codebase
  const latestAgentConfig = yield loadLatestAgentConfig();
- const modelConfig = (0, config_loader_1.getModelConfig)();
+ // Use hardcoded defaults for init - never try to load from project
+ const modelConfig = {
+ primary: 'berget/glm-4.7',
+ small: 'berget/gpt-oss',
+ };
  // Create opencode.json config with optimized agent-based format
  const config = {
  $schema: 'https://opencode.ai/config.json',
@@ -672,7 +744,21 @@ function registerCodeCommands(program) {
  baseURL: 'https://api.berget.ai/v1',
  apiKey: '{env:BERGET_API_KEY}',
  },
- models: (0, config_loader_1.getProviderModels)(),
+ models: {
+ 'glm-4.7': {
+ name: 'GLM-4.7',
+ limit: { output: 4000, context: 90000 },
+ },
+ 'gpt-oss': {
+ name: 'GPT-OSS',
+ limit: { output: 4000, context: 128000 },
+ modalities: ['text', 'image'],
+ },
+ 'llama-8b': {
+ name: 'llama-3.1-8b',
+ limit: { output: 4000, context: 128000 },
+ },
+ },
  },
  },
  };
@@ -1057,7 +1143,18 @@ All agents follow these principles:
  console.log(chalk_1.default.dim(` Agents: ${Object.keys(currentConfig.agent || {}).length} configured`));
  // Load latest agent configuration to ensure consistency
  const latestAgentConfig = yield loadLatestAgentConfig();
- const modelConfig = (0, config_loader_1.getModelConfig)();
+ // Get model config with fallback for init scenario
+ let modelConfig;
+ try {
+ modelConfig = (0, config_loader_1.getModelConfig)();
+ }
+ catch (error) {
+ // Fallback to defaults when no config exists (init scenario)
+ modelConfig = {
+ primary: 'berget/glm-4.7',
+ small: 'berget/gpt-oss',
+ };
+ }
  // Create latest configuration with all improvements
  const latestConfig = {
  $schema: 'https://opencode.ai/config.json',
@@ -1216,9 +1313,9 @@ All agents follow these principles:
  if (((_c = (_b = currentConfig.agent) === null || _b === void 0 ? void 0 : _b.security) === null || _c === void 0 ? void 0 : _c.mode) !== 'subagent') {
  console.log(chalk_1.default.cyan(' • Security agent converted to subagent (read-only)'));
  }
- // Check for GLM-4.6 optimizations
+ // Check for GLM-4.7 optimizations
  if (!((_h = (_g = (_f = (_e = (_d = currentConfig.provider) === null || _d === void 0 ? void 0 : _d.berget) === null || _e === void 0 ? void 0 : _e.models) === null || _f === void 0 ? void 0 : _f[modelConfig.primary.replace('berget/', '')]) === null || _g === void 0 ? void 0 : _g.limit) === null || _h === void 0 ? void 0 : _h.context)) {
- console.log(chalk_1.default.cyan(' • GLM-4.6 token limits and auto-compaction'));
+ console.log(chalk_1.default.cyan(' • GLM-4.7 token limits and auto-compaction'));
  }
  console.log(chalk_1.default.cyan(' • Latest agent prompts and improvements'));
  }
@@ -1443,7 +1540,7 @@ All agents follow these principles:
  console.log(chalk_1.default.cyan(' • @quality subagent for testing and PR management'));
  console.log(chalk_1.default.cyan(' • @security subagent for security reviews'));
  console.log(chalk_1.default.cyan(' • Improved agent prompts and routing'));
- console.log(chalk_1.default.cyan(' • GLM-4.6 token optimizations'));
+ console.log(chalk_1.default.cyan(' • GLM-4.7 token optimizations'));
  console.log(chalk_1.default.blue('\nTry these new commands:'));
  console.log(chalk_1.default.cyan(' @quality run tests and create PR'));
  console.log(chalk_1.default.cyan(' @security review this code'));
@@ -269,6 +269,7 @@ class ChatService {
  * @returns A promise that resolves when the stream is complete
  */
  handleStreamingResponse(options, headers) {
+ var _a, _b, _c, _d;
  return __awaiter(this, void 0, void 0, function* () {
  // Use the same base URL as the client
  const baseUrl = process.env.API_BASE_URL || 'https://api.berget.ai';
@@ -299,22 +300,74 @@ class ChatService {
  const decoder = new TextDecoder();
  let fullContent = '';
  let fullResponse = null;
+ let buffer = ''; // Buffer to accumulate partial JSON data
  while (true) {
  const { done, value } = yield reader.read();
  if (done)
  break;
  const chunk = decoder.decode(value, { stream: true });
  logger_1.logger.debug(`Received chunk: ${chunk.length} bytes`);
- // Process the chunk - it may contain multiple SSE events
- const lines = chunk.split('\n');
- for (const line of lines) {
+ // Add chunk to buffer
+ buffer += chunk;
+ logger_1.logger.debug(`Added chunk to buffer. Buffer length: ${buffer.length}`);
+ // Process the buffer - it may contain multiple SSE events
+ const lines = buffer.split('\n');
+ logger_1.logger.debug(`Processing ${lines.length} lines from buffer`);
+ // Keep track of processed lines to update buffer
+ let processedLines = 0;
+ for (let i = 0; i < lines.length; i++) {
+ const line = lines[i];
+ logger_1.logger.debug(`Line ${i}: "${line}"`);
  if (line.startsWith('data:')) {
  const jsonData = line.slice(5).trim();
+ logger_1.logger.debug(`Extracted JSON data: "${jsonData}"`);
  // Skip empty data or [DONE] marker
- if (jsonData === '' || jsonData === '[DONE]')
+ if (jsonData === '' || jsonData === '[DONE]') {
+ logger_1.logger.debug(`Skipping empty data or [DONE] marker`);
+ processedLines = i + 1;
  continue;
+ }
+ // Check if JSON looks complete (basic validation)
+ if (!jsonData.startsWith('{')) {
+ logger_1.logger.warn(`JSON data doesn't start with '{', might be partial: "${jsonData.substring(0, 50)}..."`);
+ // Don't process this line yet, keep it in buffer
+ break;
+ }
+ // Count braces to check if JSON is complete
+ let braceCount = 0;
+ let inString = false;
+ let escaped = false;
+ for (let j = 0; j < jsonData.length; j++) {
+ const char = jsonData[j];
+ if (escaped) {
+ escaped = false;
+ continue;
+ }
+ if (char === '\\') {
+ escaped = true;
+ continue;
+ }
+ if (char === '"') {
+ inString = !inString;
+ continue;
+ }
+ if (!inString && char === '{') {
+ braceCount++;
+ }
+ else if (!inString && char === '}') {
+ braceCount--;
+ }
+ }
+ if (braceCount !== 0) {
+ logger_1.logger.warn(`JSON braces don't balance (${braceCount}), treating as partial: "${jsonData.substring(0, 50)}..."`);
+ // Don't process this line yet, keep it in buffer
+ break;
+ }
  try {
+ logger_1.logger.debug(`Attempting to parse JSON of length: ${jsonData.length}`);
  const parsedData = JSON.parse(jsonData);
+ logger_1.logger.debug(`Successfully parsed JSON: ${JSON.stringify(parsedData, null, 2)}`);
+ processedLines = i + 1; // Mark this line as processed
  // Call the onChunk callback with the parsed data
  if (options.onChunk) {
  options.onChunk(parsedData);
@@ -334,10 +387,29 @@ class ChatService {
  }
  catch (e) {
  logger_1.logger.error(`Error parsing chunk: ${e}`);
- logger_1.logger.debug(`Problematic chunk: ${jsonData}`);
+ logger_1.logger.error(`JSON parse error at position ${((_b = (_a = e.message) === null || _a === void 0 ? void 0 : _a.match(/position (\d+)/)) === null || _b === void 0 ? void 0 : _b[1]) || 'unknown'}`);
+ logger_1.logger.error(`Problematic chunk length: ${jsonData.length}`);
+ logger_1.logger.error(`Problematic chunk content: "${jsonData}"`);
+ logger_1.logger.error(`Chunk starts with: "${jsonData.substring(0, 50)}..."`);
+ logger_1.logger.error(`Chunk ends with: "...${jsonData.substring(jsonData.length - 50)}"`);
+ // Show character codes around the error position
+ const errorPos = parseInt(((_d = (_c = e.message) === null || _c === void 0 ? void 0 : _c.match(/position (\d+)/)) === null || _d === void 0 ? void 0 : _d[1]) || '0');
+ if (errorPos > 0) {
+ const start = Math.max(0, errorPos - 20);
+ const end = Math.min(jsonData.length, errorPos + 20);
+ logger_1.logger.error(`Context around error position ${errorPos}:`);
+ logger_1.logger.error(`"${jsonData.substring(start, end)}"`);
+ logger_1.logger.error(`Character codes: ${Array.from(jsonData.substring(start, end)).map(c => c.charCodeAt(0)).join(' ')}`);
+ }
  }
  }
  }
+ // Update buffer to only contain unprocessed lines
+ if (processedLines > 0) {
+ const remainingLines = lines.slice(processedLines);
+ buffer = remainingLines.join('\n');
+ logger_1.logger.debug(`Updated buffer. Remaining lines: ${remainingLines.length}, Buffer length: ${buffer.length}`);
+ }
  }
  // Construct the final response object similar to non-streaming response
  if (fullResponse) {
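The brace-balancing guard added in this hunk can be exercised in isolation. A minimal standalone sketch of the same scan (string- and escape-aware brace counting over an SSE payload):

```javascript
// Returns true when every '{' outside a string literal has a matching '}',
// i.e. the SSE payload looks like complete JSON and is safe to JSON.parse;
// partial payloads should stay in the buffer until the next chunk arrives.
function isBalancedJson(jsonData) {
  let braceCount = 0;
  let inString = false;
  let escaped = false;
  for (const char of jsonData) {
    if (escaped) { escaped = false; continue; }
    if (char === '\\') { escaped = true; continue; }
    if (char === '"') { inString = !inString; continue; }
    if (!inString && char === '{') braceCount++;
    else if (!inString && char === '}') braceCount--;
  }
  return braceCount === 0;
}

console.log(isBalancedJson('{"choices":[{"delta":{"content":"hi"}}]}')); // → true
console.log(isBalancedJson('{"choices":[{"delta":{"content')); // → false
console.log(isBalancedJson('{"text":"braces { } in a string"}')); // → true
```

Note that braces inside string literals are deliberately ignored, which is why the third example balances even though it contains a lone `{` and `}` in its value.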